Those who follow the fast-growing world of artificial intelligence chips may have noticed fanfare on Monday surrounding Nvidia’s new H200 AI chip, a successor to the vaunted H100 that everyone has been trying to get their hands on this year.
But Nvidia’s announcement may simply have been an attempt to steal the spotlight from its biggest competitor, AMD, rather than to show off a drastic technological improvement. After all, the H200’s architecture is very similar to that of the H100. The main upgrade is its increased memory capacity, which allows large language models powered by H200 chips to generate results nearly twice as fast as those running on H100s, Nvidia said.
That’s significant but not revolutionary. AMD, in fact, has touted higher memory capacity in its upcoming MI300X chips, which are slated to be released later this year and are expected to eat into Nvidia’s dominant market share. AMD’s chip boasts 192 gigabytes of memory versus 80 gigabytes for Nvidia’s H100. Now, Nvidia is closing that gap with 141 gigabytes of memory in its H200 chip, the company said Monday.