Best CPUs For Machine Learning of 2023

Looking to supercharge your deep learning projects? While GPUs have long been the go-to for parallel processing, CPUs are making a big comeback. They’re versatile, handling a mix of deep learning tasks with ease. Dive into our 2023 roundup of the best CPUs for every budget and need, from cost-effective choices to powerhouses for intense workloads. Want to compare? Check the comparison table below. And if you’re considering GPUs as well, we’ve got a guide for that too. Boost your setup, get results faster, and save both time and money! If you’re building a complete deep learning rig, make sure to check out our comprehensive deep learning workstation guide.


Our Team’s Machine Learning CPU Picks


Best Overall

1. Intel Core i9-12900K

Cores: 8 + 8 (P-Cores + E-Cores)
Threads: 24
Base Clock: 3.2 GHz / 2.4 GHz
Max. Boost: 5.2 GHz / 3.9 GHz
L3 Cache: 30 MB
TDP: 125 W
Architecture: Alder Lake
Process: 10nm
Socket: LGA 1700
What we like: Hybrid architecture improves power efficiency, Thread Director technology optimizes multi-threaded performance, Support for DDR5 memory and the PCIe 5.0 interface bolsters deep learning performance
What we don’t like: Full efficiency of hybrid design limited to Windows 11, Hyper-Threading only supported by P-cores, High investment needed for new motherboard and potentially expensive DDR5 memory

Intel’s 12th generation Core i9-12900K processor is notable for its hybrid architecture, featuring both performance-oriented P-cores and energy-efficient E-cores. This “Alder Lake” design harnesses the strengths of both core types, with the Thread Director technology optimizing workload distribution between cores depending on the task intensity. This translates to an improved power efficiency and performance in multi-threaded applications. Moreover, the processor supports DDR5 memory and the latest PCIe 5.0 interface, which could significantly bolster the performance of deep learning tasks with the right hardware.

However, the efficiency of the hybrid design heavily relies on the operating system’s ability to identify and effectively allocate tasks between P- and E-cores. As of now, this is only fully supported by Windows 11. Another potential drawback is that only the P-cores support Hyper-Threading, restricting the thread count to 24 rather than the expected 32 from a 16-core processor. Overclocking enthusiasts would also need to contend with the chip’s considerable power demands, as indicated by the 241-watt maximum turbo power.

Furthermore, the Core i9-12900K demands a new motherboard with the FCLGA1700 socket, implying a substantial investment for those upgrading from an older platform. While the processor supports DDR5 memory, the limited availability and higher cost of DDR5 compared to DDR4 could pose an additional challenge. Lastly, despite its PCIe 5.0 support, the lack of compatible storage drives as of now undermines this feature’s immediate value. Thus, while Intel’s Core i9-12900K offers robust deep learning capabilities, these come with certain system requirements and constraints to consider… Read in-depth review

Runner-up to Best Overall

2. Intel Core i9-9900K

Cores: 8
Threads: 16
Base Clock: 3.6 GHz
Max. Boost: 5.0 GHz
L3 Cache: 16 MB
TDP: 95 W
Architecture: Coffee Lake
Process: 14nm
Socket: LGA 1151
What we like: Significant boost in performance with 8 physical cores and 16 MB of L3 cache, Support for up to 128 GB of memory, Hardware-level fixes for speculative execution vulnerabilities.
What we don’t like: Lack of architectural improvements since the 6th generation, Stark capability differences between the i9-9900K and i7-9700K not justifying the cost difference, Adoption of the Z390 Express chipset necessitates high-end, expensive motherboards.

The Intel Core i9-9900K CPU, based on the 9th generation “Coffee Lake Refresh” architecture, significantly boosts performance by doubling the core counts from its earlier quad-core designs. Its most notable strength lies in its 8 physical cores and 16 MB of L3 cache, providing twice the processing power compared to its predecessor, the “Kaby Lake” quad-core CPU. Furthermore, its improved integrated memory controller supports up to 128 GB of memory, making it highly suitable for deep learning tasks requiring substantial memory. This processor also provides hardware-level fixes for speculative execution vulnerabilities, minimizing potential performance impacts compared to software fixes.

Despite these significant strengths, the Core i9-9900K also has a number of drawbacks. It reads more as Intel’s defensive response to AMD’s “Zen” architecture than as an innovative leap, suggesting a lack of vision and leadership. The core design has remained relatively stagnant since the 6th-generation “Skylake”, with Intel focusing on core-count increases and refinements to the 14 nm fabrication process rather than architectural improvements. Furthermore, the Core i9-9900K and the Core i7-9700K, despite being carved from the same 8-core die, show stark differences in capabilities such as Hyper-Threading and L3 cache amounts; the higher cost of the i9-9900K may not be justified for users who don’t need Hyper-Threading or the extra cache. Moreover, while the new Z390 Express chipset provides better overclocking headroom and an integrated USB 3.1 controller, it imposes higher CPU VRM requirements, necessitating higher-end, more expensive motherboards. Hence, adopting this processor may entail a significant financial commitment, particularly for larger-scale deep learning infrastructures… Read in-depth review

Best for Extreme Workloads

3. Intel Core i9-7980XE

Cores: 18
Threads: 36
Base Clock: 2.6 GHz
Max. Boost: 4.4 GHz
L3 Cache: 24.75 MB
TDP: 165 W
Architecture: Skylake-X
Process: 14nm
Socket: Socket 2066
What we like: Exceptional multi-core performance with 18 cores and 36 threads, Support for Intel’s Turbo Boost Max 3.0 technology providing strong single-thread performance, Provision of 44 PCI Express lanes catering to demanding users.
What we don’t like: Prohibitively high price tag limiting its accessibility, Fewer PCI Express lanes compared to AMD’s Threadripper platform, High power consumption could be an issue for energy-conscious users or those with thermal constraints.

The Intel Core i9-7980XE Extreme Edition is, without question, a beast of a CPU when it comes to high-performance computing tasks. Its 18-core, 36-thread design sets the bar for raw performance, especially in tasks that can take advantage of multiple cores and threads, like video editing or 3D rendering. Additionally, it supports Intel’s Turbo Boost Max 3.0 technology, which allows two of its cores to reach speeds as high as 4.4 GHz when needed, thus providing robust single-thread performance as well.

Despite its impressive prowess, the Core i9-7980XE is not without its flaws. Its whopping price tag of $1,999 places it out of reach for most consumers, targeting a narrow market of high-end users, professionals, or enthusiasts who are willing to invest heavily in their hardware. Moreover, while the chip offers 44 PCI Express lanes, enough for even the most demanding users, it is somewhat overshadowed by AMD’s Threadripper platform, which provides 64 lanes of PCI Express on all its processors. Lastly, while its power consumption is understandably high for such a high-performance chip, this could potentially be an issue for those who are energy-conscious or have to deal with thermal constraints.

While Intel provides a number of other CPUs in the Core X-series that are lower-priced and still offer solid performance, the Core i9-7980XE finds itself challenged by fierce competition from AMD, especially considering that AMD’s Threadripper 1950X offers nearly equivalent performance at half the price. This competition forces potential consumers to look beyond mere core counts, considering other factors like the price of compatible motherboards, the number of PCI Express lanes, and power consumption, before making a decision.

In conclusion, the Intel Core i9-7980XE Extreme Edition is a remarkable CPU in terms of raw power, but its high cost and stiff competition from AMD make it a choice that requires careful consideration. It is best suited for high-end users who need the absolute best performance possible and are willing to pay a premium for it… Read in-depth review

Runner-up to Best for Extreme Workloads

4. Intel Core i9-13900K

Cores: 24
Threads: 32
Base Clock: 3.0 GHz
Max. Boost: 5.8 GHz
L3 Cache: 36 MB
TDP: 125 W
Architecture: Raptor Lake-S
Process: 10nm
Socket: Socket 1700
What we like: Robust computational capacity with 24 cores and 32 threads, Strong support for parallel processing and overclocking, Compatibility with multiple GPUs and popular deep learning libraries.
What we don’t like: High price tag potentially limiting market reach, High power consumption necessitating expensive cooling solutions, Limited availability due to its status as a premium binned silicon chip.

The Intel Core i9-13900K stands out in the domain of deep learning due to its high performance and robust computational capacity, facilitated by its impressive 24 cores and 32 threads. Capable of handling the enormous computational loads typical of deep learning tasks, this CPU excels at parallel processing operations, which are vital for most AI and ML workloads. Overclockability adds another dimension to its utility, enabling users to potentially enhance its performance even further. With 20 CPU PCIe lanes, expandable via a Z690/Z790 motherboard’s chipset lanes, the i9-13900K readily supports multiple GPUs, a significant boon for deep learning operations that are predominantly GPU-reliant. Support for popular deep learning libraries like TensorFlow, PyTorch, and Keras augments its appeal. A further advantage is the CPU’s ability to reach boost speeds of up to 5.8 GHz, a considerable feat for a consumer-grade processor.
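As a concrete illustration of how a many-core CPU like this is put to work alongside a framework such as PyTorch, here is a minimal sketch (our own example, not vendor code) that inspects the available CPU threads and GPUs and caps intra-op CPU parallelism; the thread cap of 24 is an assumption chosen to match this chip’s spec, not a recommendation from Intel or PyTorch.

```python
# Minimal sketch: inspect the CPU/GPU resources available to a PyTorch run
# and cap intra-op CPU parallelism.
import os

import torch

# Logical CPUs visible to the process (P-core and E-core threads combined).
logical_cpus = os.cpu_count() or 1
print(f"Logical CPUs: {logical_cpus}")

# Cap the threads PyTorch uses for intra-op CPU work, e.g. to leave headroom
# for data-loading worker processes. The cap of 24 is our own assumption,
# chosen to match the i9-13900K's thread count listed above.
torch.set_num_threads(min(24, logical_cpus))
print(f"PyTorch intra-op threads: {torch.get_num_threads()}")

# List the GPUs the CPU's PCIe lanes would be feeding during training.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No CUDA GPU detected; training would fall back to the CPU.")
```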

Despite these numerous strengths, the Intel Core i9-13900K has certain weaknesses. The high price tag is perhaps its most glaring drawback, deterring budget-conscious buyers and limiting its market reach. The high power consumption, necessitating a robust and potentially expensive cooling solution, not only adds to the cost but might also result in increased operational noise. Furthermore, as a premium binned silicon chip, the i9-13900K might pose procurement difficulties due to limited availability, which could be a deterrent for potential users seeking this high-end solution for their deep learning needs… Read in-depth review

Best Budget

5. AMD Ryzen 5 5600X

Cores: 6
Threads: 12
Base Clock: 3.7 GHz
Max. Boost: 4.6 GHz
L3 Cache: 32 MB
TDP: 65 W
Architecture: Vermeer
Process: 7nm
Socket: AMD Socket AM4
What we like: Unprecedented power and performance, Significant improvement in instructions per cycle (IPC) throughput, Superior power efficiency and single-threaded performance.
What we don’t like: High pricing potentially alienating budget-focused customers, Absence of bundled coolers with Ryzen 9 and 7 processors, Large price gap in product stack between entry-level and next tier.

The AMD Ryzen 5000 series, with its flagship chips Ryzen 9 5950X and 5900X, brings unprecedented power and performance to the desktop PC. The Ryzen 5 5600X specifically takes the mid-range sector by storm, with its six cores and twelve threads powered by the Zen 3 architecture on the 7nm process. This results in a significant ~19% improvement in instructions per cycle (IPC) throughput. The improved boosting algorithm, better memory overclocking, and redesigned cache topology make this chip incredibly power efficient. Furthermore, the 5600X shines in terms of performance, exceeding even Intel’s flagship Core i9-10900K in most single-threaded workloads and 1080p gaming.

However, the Ryzen 5000 series is not without its drawbacks. With AMD’s rise in performance, there’s also been an increase in pricing. The Ryzen 5 5600X is priced at $300, a significant increase from previous models, potentially alienating budget-focused customers. The new pricing strategy and absence of bundled coolers with the Ryzen 9 and 7 processors might come across as a disadvantage against Intel’s potential price cuts, especially for enthusiasts who valued these extras. There’s also a sizable gap in AMD’s product stack, with a $150 leap to ascend above the entry-level six-core twelve-thread Ryzen 5 5600X, which could be a potential stumbling block in a price war scenario.

In conclusion, the AMD Ryzen 5000 series represents a leap forward in CPU technology, delivering remarkable power and performance across the board. However, the increased pricing and potential impact on consumer choice must be taken into account when considering these new chips… Read in-depth review

Comparison Table

CPU | Cores | Threads | Base Clock | Max. Boost | L3 Cache | TDP | Arch. | Process | Socket
Intel Core i9-12900K | 8 P-cores + 8 E-cores | 24 | 3.2 GHz (P) / 2.4 GHz (E) | 5.2 GHz (P) / 3.9 GHz (E) | 30 MB | 125 W | Alder Lake | 10nm | LGA 1700
Intel Core i9-9900K | 8 | 16 | 3.6 GHz | 5.0 GHz | 16 MB | 95 W | Coffee Lake | 14nm | LGA 1151
Intel Core i9-7980XE | 18 | 36 | 2.6 GHz | 4.4 GHz | 24.75 MB | 165 W | Skylake-X | 14nm | Socket 2066
Intel Core i9-13900K | 24 | 32 | 3.0 GHz | 5.8 GHz | 36 MB | 125 W | Raptor Lake-S | 10nm | Socket 1700
AMD Ryzen 5 5600X | 6 | 12 | 3.7 GHz | 4.6 GHz | 32 MB | 65 W | Vermeer | 7nm | AMD Socket AM4

CPU Buying Advice

In evaluating CPUs for deep learning applications, a few crucial factors should be prioritized. Notably, the number of cores and threads, base clock speed, thermal design power (TDP), and the CPU’s generation each play a vital role in determining a CPU’s suitability for machine learning tasks. AMD processors have been consistently praised for their strong multi-core architecture, making them ideal for handling the parallel computing tasks inherent in deep learning applications. Conversely, for those whose primary tasks are not concentrated on deep learning, the CPU’s clock speed becomes the critical deciding factor, outweighing the number of cores. Intel processors, such as the octa-core Intel Core i9-9900K, are well-regarded for their high-speed cores and provide commendable performance even with fewer cores; that CPU, for example, boasts a boost clock of up to 5.0 GHz. However, it’s worth noting that these high-performance options are not the most budget-friendly, as engaging in deep learning can be a substantial financial investment.

Cores and Threads

A CPU’s cores form the backbone of its processing ability: more cores allow work to be parallelized more efficiently. Threads are closely tied to cores; on chips with simultaneous multithreading (Intel’s Hyper-Threading, AMD’s SMT), each physical core hosts two threads. In effect, a thread splits a single physical core into two virtual cores, allowing more tasks to be processed at the same time.

Many modern CPUs, such as Intel’s 12th- and 13th-generation chips, comprise two types of cores: performance cores (P-cores) and efficient cores (E-cores). Performance cores handle the more intensive tasks thanks to their larger, more capable design, while efficient cores tackle lighter tasks and draw less power owing to their smaller size. The total core count is the sum of these two core types.

Threads are virtual cores, and on Intel’s current hybrid designs only the performance cores support two of them. A single performance core can accommodate two threads, letting it work on two instruction streams at once, though the real-world gain falls well short of doubling its processing power. If we equate a core to a brain, then a thread is comparable to a thought: a six-core CPU with simultaneous multithreading can juggle 12 “thoughts” concurrently.
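As a quick check on your own machine, the short sketch below reports the physical-core versus logical-thread split the operating system sees; it assumes the third-party psutil package is installed (pip install psutil).

```python
# Quick check of the physical-core vs. logical-thread split on your machine.
import os

import psutil  # third-party package: pip install psutil

physical_cores = psutil.cpu_count(logical=False)   # physical cores only
logical_threads = psutil.cpu_count(logical=True)   # includes SMT/Hyper-Threading
print(f"Physical cores:  {physical_cores}")
print(f"Logical threads: {logical_threads}")
print(f"os.cpu_count() also reports logical threads: {os.cpu_count()}")
```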

CPU Tasks in Machine Learning

There are two primary compute-intensive tasks that a CPU undertakes during machine learning applications: preprocessing and model training.

Preprocessing

The CPU handles all data preparation and initial reading. For bulk preprocessing, the speed of your processing cores is paramount. Libraries like NumPy can hand heavy math off to multi-threaded routines, and Pandas-style work can be spread across cores with tools such as Python’s multiprocessing module; in this scenario clock speed takes priority, followed by the number of cores.
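To make the bulk case concrete, here is a minimal sketch that splits a placeholder NumPy array across all available cores with the standard-library multiprocessing module; the normalize() helper and the random data are our own stand-ins for real preprocessing and a real dataset.

```python
# Minimal sketch of bulk preprocessing spread across CPU cores.
import numpy as np
from multiprocessing import Pool, cpu_count


def normalize(chunk: np.ndarray) -> np.ndarray:
    # Per-chunk standardization; stands in for heavier per-row preprocessing.
    return (chunk - chunk.mean(axis=0)) / (chunk.std(axis=0) + 1e-8)


if __name__ == "__main__":
    data = np.random.rand(1_000_000, 32)        # placeholder dataset
    chunks = np.array_split(data, cpu_count())  # one chunk per core
    with Pool(processes=cpu_count()) as pool:
        processed = np.vstack(pool.map(normalize, chunks))
    print(processed.shape)  # (1000000, 32)
```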

Mini-batch processing, by contrast, involves asynchronous data preprocessing that runs alongside training. This approach is common in deep learning pipelines, and here the number of cores takes precedence, followed by clock speed.

Model Training

In workflows that don’t involve training deep neural networks, you’ll likely encounter bulk preprocessing, followed by model training on your CPU. In such instances, clock speed trumps the number of cores. However, when training deep neural networks, a large number of cores become essential for efficiently feeding data to your GPU. Hence, the priority changes to the number of cores first, then clock speed.
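A minimal PyTorch sketch of this pattern is shown below: the DataLoader’s worker processes (num_workers) are what put the extra CPU cores to work keeping the GPU fed. The random tensors, batch size, and worker count are placeholder assumptions, not tuned values.

```python
# Minimal sketch of mini-batch loading with CPU worker processes feeding a GPU.
import torch
from torch.utils.data import DataLoader, TensorDataset


def main():
    dataset = TensorDataset(
        torch.randn(10_000, 128),          # placeholder features
        torch.randint(0, 10, (10_000,)),   # placeholder labels
    )
    loader = DataLoader(
        dataset,
        batch_size=256,
        shuffle=True,
        num_workers=8,     # scale this with your physical core count
        pin_memory=True,   # speeds up host-to-GPU copies
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    for features, labels in loader:
        features, labels = features.to(device), labels.to(device)
        # ... forward and backward passes would go here ...
        break


if __name__ == "__main__":
    main()  # the __main__ guard matters when num_workers > 0 on Windows/macOS
```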

Role of PCIe Lanes in Machine Learning

Although PCIe lanes play a significant role in machine learning, their importance tends to be less pronounced at smaller scales. For most personal workstation users, the number of PCIe lanes doesn’t drastically impact performance. However, for large-scale projects, like those seen at Tesla, PCIe lanes become critical due to the enormous amounts of data involved.

Base Clock Speed and Turbo Frequency

The base clock is the sustained speed, measured in gigahertz (GHz), that a CPU guarantees under normal load, while the turbo (boost) frequency is the maximum speed it can reach in short bursts when power and thermal headroom allow. Higher frequencies mean faster CPU performance, which in turn accelerates deep learning computations, so understanding both figures is essential when optimizing your setup.
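If you want to see the clock speeds on your own system, the short sketch below reads the current, minimum, and maximum frequencies via the third-party psutil package (an assumption on our part; the reported values depend on operating-system support).

```python
# Read the CPU's current, minimum, and maximum clock speeds.
import psutil  # third-party package: pip install psutil

freq = psutil.cpu_freq()
if freq is not None:
    print(f"Current: {freq.current:.0f} MHz, min: {freq.min:.0f} MHz, max: {freq.max:.0f} MHz")
else:
    print("CPU frequency information is not exposed on this platform.")
```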

Base Power (TDP)

TDP, or Thermal Design Power (listed as “Processor Base Power” on recent Intel chips), represents the average power, in watts, that a CPU dissipates as heat when running a sustained workload at its base clock speed. A higher TDP generally signals a higher-performance CPU, but it also means greater power consumption and heavier cooling requirements.

CPU Generation

With each new generation, CPU technology evolves. Generally, a newer generation CPU offers enhanced processing power, improved hardware compatibility, better power efficiency, and superior thermal management, all while maintaining a similar price point. Consequently, it’s advisable to opt for the latest generation CPUs for peak performance.

Cooling and Heat

Modern desktop systems employ both fan-based and liquid-based cooling systems. Liquid cooling systems outperform fan-based ones in terms of efficiency. 

Liquid cooling excels in keeping a CPU cool as water transfers heat far more effectively than air. Additionally, liquid cooling ensures your PC runs quieter due to the reduced need for constantly high-RPM fans. However, liquid cooling poses a risk to your PC in the event of a water leak onto hardware.

When choosing a cooling system, consider factors like the noise level of the fans or pump, ease of installation, and cost.

Concluding Remarks

While GPUs often steal the limelight for their superior parallel processing power, the importance of CPUs in certain aspects of deep learning operations cannot be overstated. Deep learning practitioners must develop a thorough understanding of key CPU features, such as the number of cores and threads, clock speed, TDP, and generational advancements, to make informed choices that fit their specific requirements. Budget constraints, computational needs, and future scalability should be integral components of the decision-making process. No single choice will fit all scenarios, so a degree of customization is needed in each case. Recognizing that the choice of hardware significantly influences model performance, we advocate for staying updated with the latest advancements and routinely reassessing one’s computational needs. The dynamic nature of technology necessitates this continuous learning and reevaluation to optimize deep learning performance over time. Looking ahead, we expect CPUs to continue evolving and adapting, fitting into the deep learning landscape with ever greater potency.
