Photo provided by NVIDIA
NVIDIA has unveiled the Blackwell architecture, which delivers AI inference up to 30 times faster than the previous generation. Aimed at accelerating the advent of artificial general intelligence (AGI), Blackwell performs AI training 5 times faster and improves performance per watt 25-fold compared with the previous-generation H100 (Hopper).
On the 18th (local time), Jensen Huang, CEO of NVIDIA, unveiled the new Blackwell series chipset at GTC 2024, the company's annual developer conference, held at the SAP Center in San Jose, USA. "NVIDIA has pursued accelerated computing for the past 30 years to realize innovations like deep learning and AI," said CEO Huang, emphasizing that "the Blackwell GPU is the engine driving the industrial revolution of generative AI that defines our era."
Blackwell combines two graphics processing unit (GPU) dies into a single package built on the base B100 chip and, with a total of 208 billion transistors, is the largest GPU ever made. The B100 can perform up to 40 petaFLOPS (quadrillions of floating-point operations per second), 5 times more than the existing H200, meaning AI training can run 5 times faster. NVIDIA also presented the GB200 platform, which pairs two higher-power, higher-throughput B200 GPUs with a Grace central processing unit (CPU), along with the NVL72 SuperPOD, equipped with 36 GB200s for a total of 72 GPUs. With the same number of GPUs, the NVL72 achieves 30 times the AI inference speed and 25 times the performance per watt of the existing H100. It aims to usher in an era of AI models with 10 trillion parameters.