Alphabet Inc.'s Google said on Tuesday that its AI supercomputers are faster and more power-efficient than systems built around the Nvidia A100 chip.
The company released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems beat comparable Nvidia Corp hardware on both speed and energy efficiency.

Rather than rely on Nvidia's processors, Google, which has long aspired to lead the tech industry, developed its own custom chip, known as the Tensor Processing Unit, or TPU.

Google says it uses these chips for more than 90% of its artificial intelligence training work.

Training is the process of feeding data through a model so that it can carry out tasks such as generating images or answering questions with human-like text.

The TPU is now in its fourth generation. On Tuesday, Google published a scientific paper describing how it strung more than 4,000 of the chips together into a supercomputer.

It linked the individual machines together using optical switches that it developed in-house.
These links matter because the massive language models that power services like Google's Bard or OpenAI's ChatGPT have grown exponentially in size and are now far too large to fit on a single chip.

The development of these connections has become a key area of rivalry among companies that produce AI supercomputers.

Instead, a model must be split across thousands of chips, which then work together for weeks or more to train it.
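
To make the idea of splitting a model across many chips concrete, here is a minimal sketch using JAX's public sharding API. The mesh layout, array shapes, and function names are invented for illustration; this is not Google's training code, just the general technique of partitioning one large array across several accelerators.

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange whatever accelerators are available as a one-dimensional mesh.
mesh = Mesh(jax.devices(), axis_names=("model",))

# Split a weight matrix column-wise across every device in the mesh,
# so no single chip has to hold the whole array.
sharding = NamedSharding(mesh, P(None, "model"))
weights = jax.device_put(jnp.ones((4096, 4096)), sharding)

@jax.jit
def forward(x, w):
    # The compiler inserts any cross-chip communication automatically.
    return x @ w

out = forward(jnp.ones((8, 4096)), weights)
print(out.sharding)  # the result stays distributed across the mesh
```

Real training runs apply the same partitioning idea to every layer of the model and keep thousands of chips busy for weeks, which is why the connections between chips matter so much.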

PaLM, the largest language model Google has publicly disclosed to date, was trained over 50 days by splitting it across two of the 4,000-chip supercomputers.

Its supercomputers “make it easy to reconfigure connections between chips on the fly,” according to Google.

Circuit switching “makes it easy to route around failed components,” Norm Jouppi and David Patterson, Google Distinguished Engineers, wrote in a blog post about the system.

They added that "this flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of a machine learning (ML) model."
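
The benefit of circuit switching that the engineers describe can be pictured with a toy model. The sketch below is purely illustrative and has nothing to do with Google's actual switch hardware; it simply treats chips as nodes in a graph and recomputes a path when one of them fails.

```python
from collections import deque

def shortest_route(links, src, dst, failed=frozenset()):
    """Breadth-first search for a path from src to dst,
    skipping any chips marked as failed."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A toy four-chip ring: 0-1-2-3-0.
links = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(shortest_route(links, 0, 2))              # [0, 1, 2]
print(shortest_route(links, 0, 2, failed={1}))  # [0, 3, 2]: traffic routes around chip 1
```

At datacenter scale the same principle is applied in hardware: rather than software recomputing paths, Google's optical switches physically rewire which chips talk to each other.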

Although Google is only now revealing details about the supercomputer, it has been running inside the company since 2020, in a data center in Mayes County, Oklahoma.

According to the company, the startup Midjourney used the system to train its model, which generates new images from a few words of text.

Google also disclosed that its supercomputer is up to 1.7 times faster and up to 1.9 times more power-efficient than comparably sized systems built around Nvidia's A100 chip.

The A100 was on the market at the same time as the fourth-generation TPU. An Nvidia spokesperson declined to comment.

Google said it did not compare its chip with Nvidia's current flagship H100, because the H100 came to market later and is built with newer technology; for now, the comparison is limited to the A100.

Google hinted that it is working on a new TPU to compete with the Nvidia H100. Google "has a healthy pipeline of future chips," Jouppi said.
