AMD EPYC™ Processors

Advancing Data Centre Performance & Efficiency

Introducing the AMD EPYC™ server processor family: nothing else stacks up quite like it.
Elevate your business productivity with AMD EPYC™:

AMD Solutions

4th Gen AMD EPYC™ Processors

Whatever your workload demands, the 4th Gen AMD EPYC™ processor portfolio offers a solution to advance your business. From general purpose to edge computing, there is a processor to meet every data centre need. AMD EPYC™ Processors power the highest-performing x86 servers for the modern data centre, with world-record performance across major industry benchmarks.

  • 70% increase in enterprise-critical operation throughput1
  • 2.1x faster time-to-market running HPC Computational Fluid Dynamics2
  • 2.7x the performance per system watt3

5th Gen AMD EPYC™ Processors

The 5th Generation AMD EPYC™ 9005 series processors are designed to accelerate AI workloads.

They provide end-to-end AI performance and can efficiently handle AI tasks such as:

  • language models with 13 billion parameters and below,
  • image and fraud analysis,
  • and recommendation systems on CPU-only servers (see the sketch below).
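
To make "CPU-only" concrete, here is a minimal sketch of running a quantised sub-13B language model entirely on the host CPU using the open-source llama-cpp-python bindings. This is an illustrative assumption rather than an AMD-supplied configuration: the model file, thread count and prompt are placeholders for your own environment.

    # Minimal sketch (assumption, not an AMD reference setup): CPU-only inference
    # of a quantised ~13B-parameter language model via llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path to a local GGUF file
        n_ctx=2048,    # context window
        n_threads=96,  # e.g. pin to the physical cores of one EPYC socket
    )

    result = llm(
        "List three data centre workloads that run well on CPUs.",
        max_tokens=64,
    )
    print(result["choices"][0]["text"])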


Servers with two 5th Gen AMD EPYC 9965 CPUs deliver up to 2x the inference throughput of previous generations7.

The processors are also optimised for GPU-enabled systems, enhancing performance on select AI workloads and improving the ROI of each GPU server8.

Don't forget to keep your eye on AMD for exciting new products!

Visit InTouch to discover Dell, HPE and Lenovo Rack Servers powered by AMD Processors:

AMD helps you identify key trends across different industries and verticals. Check which vertical your customers belong to, understand their infrastructure and budget requirements, and learn how to recommend the right server for their industry and workload needs.

Why AMD?

AMD EPYC™ brings you the highest-performing processors from the broadest portfolio, allowing optimisation for all of the different workloads in a data centre. The result is better efficiency and sustainability for critical industries such as health, energy and physics. AMD now powers 8 of the 10 most energy-efficient supercomputers in the world5, and every major cloud provider has deployed EPYC for internal workloads as well as customer-facing instances, to benefit from:
  • Exceptional performance for cloud, enterprise and High Performance Computing workloads
  • Cutting-edge security features with AMD Infinity Guard6
  • The most energy-efficient x86 servers available
  • Outstanding return on IT investment
  • Broad ecosystem support for worry-free migration and seamless integration

Watch now to discover how AMD can help your customers to reduce CAPEX and OPEX while advancing their sustainability goals:

AMD EPYC™ processors and AMD Instinct™ accelerators continue to be the solutions of choice for many of the most innovative, energy-efficient and fastest supercomputers in the world. AMD now powers 140 supercomputers on the latest Top500 list, a 39% year-over-year increase.

Notable deployments of AMD EPYC™ processors include the Frontier supercomputer at Oak Ridge National Laboratory – the fastest computer in the world – and El Capitan at Lawrence Livermore National Laboratory, which, by combining CPU and GPU cores, is expected to deliver dramatic increases in programmability, energy efficiency and performance.

AMD is also providing the software portfolio needed to meet the rapidly growing demand for applications of AI within the HPC industry, working together with AI and HPC communities to support new applications, frameworks, languages and more.

Discover Your Possibilities

Anything is possible with AMD EPYC™ Processors. Take a look at our resources to find out how your customers can improve performance, increase efficiency and reach sustainability goals with the help of AMD. Our comparison tools will show them how their business can benefit when they switch to AMD EPYC™ Processors from their existing CPUs.

Strengthen your knowledge of AMD EPYC™ CPUs with an array of infographics, videos and fact sheets that you can use to educate your team and explain the benefits of AMD EPYC™ to your customers.

Our comparison tools will provide valuable data that can help you close a deal by identifying potential improvements in CPU price, core count and performance, as well as calculating TCO and greenhouse gas (GHG) emissions savings for:

  • Server Virtualisation 
  • Bare Metal
  • Cooling/Refresh

Get in Touch

Need to know more? Simply fill in the form and one of our expert advisors will get back to you.

Notes:

  1. www.spec.org
  2. Results may vary. https://www.amd.com/system/files/documents/amd-epyc-9004x-pb-ansys-fluent.pdf
  3. Results may vary. https://www.amd.com/system/files/documents/amd-epyc-9754-pb-spec-power.pdf
  4. Example only: 1P AMD EPYC™ 7453 (28c) vs 2P Intel® Xeon® Gold 6334 (8c)
  5. Latest Top500 List Highlights Several World’s Fastest and Most Efficient Supercomputers Powered by AMD
  6. https://www.amd.com/en/technologies/infinity-guard
  7. 9xx5-040A: XGBoost (Runs/Hour) throughput results based on AMD internal testing as of 09/05/2024.
    XGBoost configurations: v2.2.1, Higgs data set, 32-core instances, FP32.
    2P AMD EPYC 9965 (384 total cores), 12 x 32-core instances, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.4 LTS, 6.8.0-45-generic (tuned-adm profile throughput-performance, ulimit -l 198078840, ulimit -n 1024, ulimit -s 8192), BIOS RVOT1000C (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=1.
    2P AMD EPYC 9755 (256 total cores), 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu 22.04.4 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198094956, ulimit -n 1024, ulimit -s 8192), BIOS RVOT0090F (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=1.
    2P AMD EPYC 9654 (192 total cores), 1.5TB 24x64GB DDR5-4800, 1DPC, 2 x 1.92 TB Samsung MZQL21T9HCJR-00A07 NVMe®, Ubuntu 22.04.4 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198120988, ulimit -n 1024, ulimit -s 8192), BIOS TTI100BA (SMT=off, Determinism=Power), NPS=1.
    Versus 2P Xeon Platinum 8592+ (128 total cores), AMX on, 1TB 16x64GB DDR5-5600, 1DPC, 1.0 Gbps NetXtreme BCM5719 Gigabit Ethernet PCIe, 3.84 TB KIOXIA KCMYXRUG3T84 NVMe®, Ubuntu 22.04.4 LTS, 6.5.0-35-generic (tuned-adm profile throughput-performance, ulimit -l 132065548, ulimit -n 1024, ulimit -s 8192), BIOS ESE122V (SMT=off, Determinism=Power, Turbo Boost=Enabled).

    Results (runs/hour):

    CPU                   Run 1      Run 2      Run 3      Median     Relative throughput   Generational
    2P Turin 192C, NPS1   1565.217   1537.367   1553.957   1553.957   3                     2.41
    2P Turin 128C, NPS1   1103.448   1138.34    1111.969   1111.969   2.147                 1.725
    2P Genoa 96C, NPS1    662.577    644.776    640.95     644.776    1.245                 1
    2P EMR 64C            517.986    421.053    553.846    517.986    1                     NA

    Results may vary due to factors including system configurations, software versions and BIOS settings.
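
    For readers who want to reproduce the style of measurement described above, a rough single-instance sketch of an XGBoost runs/hour test follows. This is an illustrative assumption, not AMD's test harness: it substitutes a synthetic data set for Higgs and uses an arbitrary 100 boosting rounds.

    import time

    import xgboost as xgb
    from sklearn.datasets import make_classification

    # Synthetic stand-in for the Higgs data set (28 features); illustration only.
    X, y = make_classification(n_samples=500_000, n_features=28, random_state=0)
    dtrain = xgb.DMatrix(X, label=y)

    params = {
        "objective": "binary:logistic",
        "tree_method": "hist",
        "nthread": 32,  # mirrors the 32-core instances described in the footnote
    }

    start = time.perf_counter()
    xgb.train(params, dtrain, num_boost_round=100)  # one "run"
    elapsed = time.perf_counter() - start
    print(f"Single-instance throughput: {3600 / elapsed:.1f} runs/hour")
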
  8. 9xx5-014: Llama3.1-70B inference throughput results based on AMD internal testing as of 09/01/2024.
    Llama3.1-70B configurations: TensorRT-LLM 0.9.0, nvidia/cuda 12.5.0-devel-ubuntu22.04, FP8, input/output token configurations (use cases): [BS=1024 I/O=128/128, BS=1024 I/O=128/2048, BS=96 I/O=2048/128, BS=64 I/O=2048/2048]. Results in tokens/second.

    2P AMD EPYC 9575F (128 Total Cores) with 8x NVIDIA H100 80GB HBM3, 1.5TB 24x64GB DDR5-6000, 1.0 Gbps, 3TB Micron_9300_MTFDHAL3T8TDP NVMe®, BIOS T20240805173113 (Determinism=Power, SR-IOV=On), Ubuntu 22.04.3 LTS, kernel=5.15.0-117-generic (mitigations=off, cpupower frequency-set -g performance, cpupower idle-set -d 2, echo 3 > /proc/sys/vm/drop_caches).

    Versus 2P Intel Xeon Platinum 8592+ (128 Total Cores) with 8x NVIDIA H100 80GB HBM3, 1TB 16x64GB DDR5-5600, 3.2TB Dell Ent NVMe® PM1735a MU, Ubuntu 22.04.3 LTS, kernel=5.15.0-118-generic (processor.max_cstate=1, intel_idle.max_cstate=0, mitigations=off, cpupower frequency-set -g performance), BIOS 2.1 (Maximum performance, SR-IOV=On).

    I/O tokens   Batch size   EMR (Xeon 8592+), tokens/s   Turin (EPYC 9575F), tokens/s   Relative
    128/128      1024         814.678                      1101.966                       1.353
    128/2048     1024         2120.664                     2331.776                       1.1
    2048/128     96           114.954                      146.187                        1.272
    2048/2048    64           333.325                      354.208                        1.063

    This corresponds to an average throughput increase of 1.197x.
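
    The 1.197x figure is the arithmetic mean of the four relative values in the table above; a one-line check:

        relative = [1.353, 1.1, 1.272, 1.063]
        print(round(sum(relative) / len(relative), 3))  # 1.197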

    Results may vary due to factors including system configurations, software versions and BIOS settings.