Microsoft’s Project Olympus, the tech titan’s open-source cloud server initiative, has turned its attention to artificial intelligence (AI).
Along with graphics card maker Nvidia and Ingrasys, a subsidiary of contract manufacturing giant Foxconn, Microsoft unveiled the HGX-1 at the Open Compute Project (OCP) Summit in Santa Clara, Calif. The multi-GPU (graphics processing unit) system specification is aimed not only at accelerating AI workloads, but also at helping to make AI-enabled servers a fixture in the modern data center.
“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest ‘Pascal’ generation NVIDIA GPUs and NVIDIA’s NVLink high speed multi-GPU interconnect technology, and provides high bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 together,” blogged Kushagra Vaid, general manager of Azure Hardware Infrastructure at Microsoft.
“The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world,” continued Vaid.
For more traditional server workloads, Microsoft is building on its collaboration with Intel. In addition to existing support for “Skylake” Xeon processors, the companies teased upcoming specifications that could potentially include room for accelerators based on Intel’s field-programmable gate array (FPGA) and Nervana deep learning technologies.
Intel acquired FPGA provider Altera in 2015 for $16.7 billion, forming the chipmaker’s Programmable Solutions Group. Last summer, the company snapped up Nervana Systems, a developer of machine-learning software and hardware.
Fresh off the launch of its new Ryzen processors, AMD is working with Microsoft on updated designs based on the upcoming "Naples" server processor, which features a system-on-chip (SoC) design built on AMD's "Zen" processing engine.
Previewed at the OCP Summit earlier this week, and scheduled to ship in the second quarter, the x86 chip will feature up to 32 cores and support for eight memory channels per processor. A server outfitted with two Naples processors can access up to four terabytes (TB) of memory, spread across 32 DIMMs on 16 memory channels, according to AMD's specifications.
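The quoted figures are internally consistent, as a quick back-of-the-envelope check shows. Note that the 128 GB-per-DIMM size below is an assumption chosen to reach the 4 TB total; AMD's specifications quote only the channel count, DIMM count, and total capacity.

```python
# Sanity check of the two-socket "Naples" memory configuration.
SOCKETS = 2
CHANNELS_PER_SOCKET = 8   # eight memory channels per processor
DIMMS_PER_CHANNEL = 2     # two DIMMs per channel
GB_PER_DIMM = 128         # assumed DIMM capacity (not quoted by AMD)

channels = SOCKETS * CHANNELS_PER_SOCKET       # 16 channels
dimms = channels * DIMMS_PER_CHANNEL           # 32 DIMMs
total_tb = dimms * GB_PER_DIMM / 1024          # total capacity in TB

print(f"{channels} channels, {dimms} DIMMs, {total_tb:.0f} TB")
# → 16 channels, 32 DIMMs, 4 TB
```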
“This 32-core, 64-thread CPU signals AMD’s re-entry into the high-performance server market and our intention to once again be a significant player in the datacenter,” wrote AMD’s Forrest Norrod, senior vice president and general manager of the company’s Enterprise, Embedded and Semi-Custom Business Group, in a March 7 announcement.
“The new AMD server processor exceeds today’s top competitive offering on critical parameters, with 45 percent more cores, 60 percent more input/output capacity (I/O), and 122 percent more memory bandwidth.”
Microsoft and its technology partners aren’t the only ones making waves at the OCP Summit.
On March 8, Facebook announced an ambitious server refresh that will see it replace practically all of its older equipment to make room for its latest OCP hardware designs. (Facebook, Intel and Rackspace are founding OCP members.) Those designs include Bryce Canyon, the first major storage chassis revamp from the social media giant.