Nvidia Unveils Game-Changing ‘Blackwell’ Chip with 30x Faster AI

By admin | Last updated: 19 March 2024

Touted as “the most powerful AI chip in the world,” Nvidia’s latest reveal, the Blackwell B200 GPU, is the newest example of how the company keeps pushing the boundaries of AI computing. Designed to make AI models with billions of parameters accessible to more companies, Nvidia's new GPU could change AI as we know it.

The Blackwell platform is remarkable, able to run large language models (LLMs) far more efficiently, at up to 25 times lower cost and energy consumption. Its new GPU architecture is built on six new technologies that accelerate computing across areas such as data processing, engineering simulation, and generative AI.

Via: Nvidia

Much of the credit for this remarkable performance goes to the 208 billion transistors on the chip. For comparison, its predecessor, the H100, had only 80 billion transistors. Compared to the H100, Nvidia's new GPU is also up to 25 times more cost- and energy-efficient.

ALSO READ: Apple roadmap for 2024-2027 revealed: includes foldable iPhone, OLED iPads, iPhone SE4 and more

B200 versus the H100
Via: Nvidia

The Blackwell chip is manufactured on a custom-built TSMC 4NP process and offers twice the compute and model size of previous models, thanks in part to improved 4-bit floating-point (FP4) AI inference capabilities.
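
As a rough illustration of why FP4 matters, the Python sketch below compares how many weights fit in a fixed memory budget at 8-bit versus 4-bit precision. The 192 GB budget and the assumption that all memory holds weights are simplifications for the arithmetic, not Nvidia specifications.

```python
# Minimal sketch (not Nvidia's implementation) of why a 4-bit weight format
# roughly doubles the model size that fits in a fixed memory budget compared
# to an 8-bit format. The 192 GB budget below is an illustrative assumption.

BITS_PER_FP8_WEIGHT = 8
BITS_PER_FP4_WEIGHT = 4
MEMORY_BUDGET_GB = 192  # hypothetical per-GPU memory budget

def max_weights_billions(bits_per_weight: int, budget_gb: float) -> float:
    """Approximate number of weights (in billions) that fit in budget_gb."""
    budget_bits = budget_gb * 8e9
    return budget_bits / bits_per_weight / 1e9

print(f"FP8: ~{max_weights_billions(BITS_PER_FP8_WEIGHT, MEMORY_BUDGET_GB):.0f}B weights")
print(f"FP4: ~{max_weights_billions(BITS_PER_FP4_WEIGHT, MEMORY_BUDGET_GB):.0f}B weights")
```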

The new Blackwell B200 GPU delivers up to 20 petaflops of FP4 compute. The GB200 “superchip”, which pairs two B200 GPUs with a Grace CPU, promises a substantial improvement in power efficiency and up to a 30x performance increase for LLM inference applications.
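
As a quick back-of-the-envelope check, the sketch below shows how the per-GPU figure relates to the superchip, assuming the peak number simply doubles with two GPUs; real workloads rarely scale that cleanly.

```python
# Naive scaling sketch: two B200 GPUs per GB200 superchip, each at the
# 20 petaflops FP4 peak quoted above. Assumes ideal linear scaling.
FP4_PETAFLOPS_PER_B200 = 20
B200_PER_GB200 = 2

print(f"GB200 peak FP4 (ideal): {FP4_PETAFLOPS_PER_B200 * B200_PER_GB200} petaflops")
```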

GB200 superchip
Via: Nvidia

ALSO READ: Apple acquires DarwinAI to defeat Google and Microsoft

With these upgrades and the next-generation NVLink switch, which enables up to 576 GPUs to communicate with one another, Nvidia has reached unprecedented levels of AI performance. The GB200 NVL72 and other, larger GB200-based configurations show how seriously Nvidia takes the goal of advancing AI capabilities.

GB200 NVL72
Via: Nvidia

A single rack with 72 GPUs and 36 CPUs can deliver 1,440 petaflops of AI inference or 720 petaflops of AI training performance. Nvidia's DGX SuperPOD with DGX GB200 systems packs an astonishing 11.5 exaflops (11,500 petaflops) of FP4 compute, along with 240 TB of memory, 288 CPUs, and 576 GPUs.
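
The rack- and pod-level numbers line up with the per-GPU figure under the same idealized scaling assumption; the sketch below reproduces the arithmetic (the small gap between 11,520 and the quoted 11.5 exaflops is just rounding).

```python
# Idealized scaling sketch for the quoted NVL72 and DGX SuperPOD figures.
FP4_PETAFLOPS_PER_B200 = 20   # peak FP4 per B200, quoted earlier
GPUS_PER_NVL72_RACK = 72      # 36 GB200 superchips = 72 B200 GPUs
RACKS_PER_SUPERPOD = 8        # eight DGX GB200 systems per SuperPOD

rack_pf = GPUS_PER_NVL72_RACK * FP4_PETAFLOPS_PER_B200
pod_pf = rack_pf * RACKS_PER_SUPERPOD

print(f"NVL72 rack, FP4 inference: {rack_pf} petaflops")          # 1,440
print(f"DGX SuperPOD (8 racks): {pod_pf} petaflops (~{pod_pf / 1000:.1f} exaflops)")
```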

FP4 and FP6
Via: Nvidia

With the eight-system DGX SuperPOD built from DGX GB200 units, Nvidia also offers a turnkey solution for high-performance computing tasks, likely the strongest option for a company that wants to fold AI into its regular operations.

When it comes to security, Blackwell doesn't compromise either: it keeps AI models and customer data encrypted without sacrificing performance.

Later this year, partners will be able to purchase Blackwell-based devices, although Nvidia hasn't said which racks will be available first. We can also expect Nvidia to bring the Blackwell architecture to its gaming GPUs, potentially in the upcoming RTX 50 series, which is expected to launch in late 2024 or early 2025.

ALSO READ: PS5 Pro Rumor Roundup: Specs, Features, Price, and Expected Release Date

You can follow Smartprix on Twitter, Facebook, Instagram and Google News. Visit smartprix.com for the latest news, reviews and technical guides.