On Wednesday, Google revealed that its Tensor Processing Unit (TPU), the company's custom machine-learning accelerator chip, is faster than contemporary CPUs and GPUs. The search giant has finally disclosed details and benchmarks for the server chip it uses to run artificial intelligence workloads efficiently.
It is no secret that the Mountain View tech giant has built its own custom chips to speed up its machine-learning algorithms. Google first unveiled the chips at its I/O developer conference last year. The company, however, never revealed many details about its AI chips, except to note that they were optimized for the firm's own machine-learning framework, TensorFlow.
CNET notes that although Google's custom-built artificial intelligence chips are not in today's laptops or smartphones, they matter to just about anybody online. The TPU processors are already hard at work in the Internet giant's data centers, generating search results, translating text, screening out email spam, identifying people in photos, and drafting Gmail messages.
The Alphabet subsidiary is not the only tech company working on artificial intelligence chips. Qualcomm Inc. is building AI capabilities into its mobile chips, which could speed up machine-learning tasks on smartphones when people do not want to wait for a Mountain View server on the other side of the network.
According to Tech Times, Google's TPU chips are at least 10 to 30 times faster than Nvidia's Tesla K80 GPUs and Intel's Haswell Xeon E5-2699 v3 processors. The chips, which have been powering Big G's data centers for the past few years, also proved to be more power-efficient than standard processors. This is notable because efficiency and power consumption are key considerations for data centers.
Mountain View said that most developers optimize their chips for convolutional neural networks. The company, however, explained that those networks account for only about five percent of its own data center workload, while most of its applications rely on multi-layer perceptrons.
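For readers unfamiliar with the distinction, the following is a minimal sketch in TensorFlow (the Google framework mentioned above) contrasting the two architectures. The layer sizes and input shapes are illustrative assumptions for this example, not details of Google's actual workloads.

```python
# Illustrative sketch only: a multi-layer perceptron vs. a convolutional
# network, built with the Keras API that ships with TensorFlow.
import tensorflow as tf

# Multi-layer perceptron: stacks of fully connected (dense) layers.
# Networks like this make up the bulk of Google's data center workload.
mlp = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),            # e.g. a flattened 28x28 image
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convolutional neural network: convolution and pooling layers that scan
# spatial structure, the type most chip designers optimize for.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

mlp.summary()  # prints the layer-by-layer structure of each model
cnn.summary()
```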
Google's machine-learning chip represents a much bigger shift in the world of computer processors. Wired notes that as the Alphabet subsidiary, Microsoft Corporation, Facebook, and other internet firms build new services on deep neural networks, they all need specialized chips for training and executing their artificial intelligence models.
This week, for the first time, Google shared more details about its TPU chips, noting that they are 15 to 30 times faster than conventional GPUs and CPUs. The Web giant has embraced artificial neural networks more strongly than many other tech firms. The company is unlikely to make the chips available outside its cloud platform, although it notes that other firms can learn from its results and develop their own AI chips.
By Anila Maring
Photo Courtesy Google