Intel Xeon E5 processors can run in dual-processor configurations, improving server performance by 80-90% over a single processor. Such a server executes two threads of instructions simultaneously, while a single-processor server has to queue the second thread.
The performance improvement of dual-processor servers based on Intel Xeon E5 is especially noticeable in web workloads, such as web servers, where speed can double. Development environments and distributed databases also see a large increase in speed.
For large enterprise projects, we offer dual-processor servers based on the first and second generations of Xeon Scalable processors. One of the advantages of these servers is their power and scalability: the maximum configuration of a Scalable-based server can have up to 48 cores (96 threads with Hyper-Threading), up to 1536 GB of RAM, and up to 56 TB of disk storage.
Thanks to support for Hyper-Threading and Turbo Boost technologies, Xeon Scalable-based servers are a great choice for cloud technologies, storage systems, computing, and machine learning.
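With Hyper-Threading enabled, the operating system sees twice as many logical CPUs as there are physical cores (e.g. 96 threads on a 48-core dual-socket configuration). A minimal, cross-platform way to check this from Python:

```python
import os

# Number of logical CPUs (hardware threads) visible to the OS.
# With Hyper-Threading enabled, this is typically twice the physical core
# count, e.g. a maxed-out dual-socket Scalable server reports 96 for 48 cores.
logical_cpus = os.cpu_count()
print(f"Logical CPUs: {logical_cpus}")
```

Note that `os.cpu_count()` reports logical CPUs only; mapping them back to physical cores requires OS-specific tools such as `lscpu` on Linux.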
The first generation of Intel Xeon Scalable was built for high-performance systems and global data centers. Compared to processors of the previous architecture, the first generation gained performance (through more cores and higher clock frequencies), speed, and supported RAM capacity. Servers with first-generation processors (41xx, 61xx) have 16 to 36 cores with a maximum frequency of up to 3.7 GHz per core and up to 24.75 MB of L3 cache.
Scalable processors of the second generation (42xx, 62xx) are based on the updated Cascade Lake SP architecture. The new models have 20 to 48 cores with a maximum frequency of 3.2-4.0 GHz and up to 27.5 MB of L3 cache. The processors include hardware-based protection mechanisms against Spectre and Foreshadow attacks. This generation supports Intel Optane DC Persistent Memory (DCPM) technology, which allows connecting up to 512 GB of RAM to a server. Also worth highlighting are the optimized processor resource management, finer-grained load sharing between cores, and an additional set of instructions for AI workloads. According to internal tests, the second generation delivers a 10-40% performance increase over the first.
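The AI-oriented instructions mentioned above presumably refer to Intel DL Boost (AVX-512 VNNI), which Cascade Lake introduced. On Linux, both the CPU feature flags and the kernel-reported status of Spectre-class mitigations can be inspected through standard interfaces; a sketch, assuming a Linux system (the function falls back to an empty set elsewhere):

```python
from pathlib import Path

def cpu_flags() -> set:
    """Return the CPU feature flags from /proc/cpuinfo (empty if unavailable)."""
    try:
        text = Path("/proc/cpuinfo").read_text()
    except OSError:
        return set()
    for line in text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme de ..." -> set of individual flag names
            return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX-512 VNNI (DL Boost):", "avx512_vnni" in flags)

# Kernel-reported mitigation status for speculative-execution attacks
# (includes entries such as spectre_v2 and l1tf, i.e. Foreshadow).
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
```

The exact flags and vulnerability entries present will vary by CPU model and kernel version, so the output should be read as informational rather than a definitive security assessment.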