This week at its annual GPU Technology Conference, Nvidia announced a new chip, but it's not a GPU (graphics processing unit). The new chip, NVSwitch, is a communication switch that allows multiple GPUs to work in concert at extremely high speeds. The first product to use NVSwitch will be Nvidia's new DGX-2 deep learning server, a beast of a system with 16 GPUs connected by 12 NVSwitches. With peak performance of two quadrillion operations per second, the DGX-2 should become the most powerful deep learning computer in the world.
Nvidia built the NVSwitch to address the insatiable computational demands of deep learning. During the past five years, based on ARK's research, deep learning networks and data sets have exploded in size and complexity, far outstripping the performance gains of individual processors. Nvidia's V100 GPU is already the largest chip that state-of-the-art semiconductor manufacturing equipment can accommodate. The NVSwitch sidesteps that limit by enabling many GPUs – currently 16, but potentially many more – to work together as one.
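To see why GPU-to-GPU bandwidth is the bottleneck when many chips train one network, consider the "all-reduce" step in data-parallel training: each GPU computes gradients on its own slice of the data, then all GPUs must exchange and average those gradients before the next step. The sketch below is a minimal, purely illustrative Python simulation of that averaging step – it uses no Nvidia API, and all names are hypothetical.

```python
# Illustrative sketch of data-parallel gradient averaging ("all-reduce").
# In a real system each list below would live on a separate GPU, and
# hardware like NVSwitch would carry the exchange at high speed.

def all_reduce_mean(per_gpu_grads):
    """Average a list of gradient vectors, as if across GPUs."""
    n = len(per_gpu_grads)
    dim = len(per_gpu_grads[0])
    return [sum(g[i] for g in per_gpu_grads) / n for i in range(dim)]

# 16 simulated GPUs, each holding a 4-element gradient vector.
grads = [[float(gpu + i) for i in range(4)] for gpu in range(16)]
avg = all_reduce_mean(grads)
print(avg)  # every GPU would then apply this same averaged gradient
```

The amount of data exchanged grows with both model size and GPU count, which is why a dedicated switch fabric matters as systems scale beyond a handful of chips.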
The NVSwitch also should distance Nvidia from the dozen or so companies developing competing AI (artificial intelligence) chips. While most of those companies are still focused on shipping their first chips, Nvidia is building out highly scalable AI systems that will be difficult to dislodge.
View original article and other research here.
ARK’s statements are not an endorsement of any company or a recommendation to buy, sell or hold any security. For a list of all purchases and sales made by ARK for client accounts during the past year that could be considered by the SEC as recommendations, click here. It should not be assumed that recommendations made in the future will be profitable or will equal the performance of the securities in this list. For full disclosures, click here.