With the Pixel 6 and Pixel 6 Pro, search giant Google is taking a page from Apple by styling itself as a chip designer with more control over its flagship smartphones. For owners of the new phones, that could mean artificial intelligence you’ll actually want to talk to and battery life long enough to power through a busy day.
Google, which unveiled its $599 Pixel 6 and $899 Pixel 6 Pro phones during a virtual event last week, introduced changes like new colors, display improvements and a horizontal camera bump strip on the back instead of a square-shaped corner array. But the biggest difference is what’s inside: Tensor, the first chip designed by Google.
Phil Carmack, the vice president and general manager in charge of Google’s chip business, and Monika Gupta, senior director of Google Silicon, spoke in a videoconference the week before the Pixel event, offering an in-depth look at what makes Tensor special — and why Google even bothered to make the processor.
“The key element in making this decision was around AI and how we could bring AI at a much different, more personal level to the end user,” Carmack said. “We simply weren’t able to get there with the existing solutions that were out there.”
The goal is to create a Pixel phone loaded with more AI smarts and the power to make those calculations without sacrificing battery life. In the new Pixel 6 and 6 Pro, that manifests through real-time language translations, highly accurate voice transcription and high-end camera features like the ability to unblur the face of a person in motion. On top of that, Google promises 24 to 48 hours of battery life.
“It was kind of like we were being held back a little bit,” Gupta said. “We have access to state-of-the-art [machine learning] right within Google, yet we couldn’t bring it to our Pixel users. We have this vision for Pixel, and we couldn’t realize that vision.”
That changes with Tensor, which marks Google’s first foray into the expensive and difficult world of system-on-a-chip, or SoC, design. It started working on the processor about four years ago with a team of 76 people. (The team “is a lot bigger now,” Carmack said, though he declined to specify its size.) Most semiconductor companies have thousands of engineers developing their new chips. But Google has something else behind it: its team of researchers who specialize in artificial intelligence and machine learning.
“On Tensor, we are running the state-of-the-art [machine learning] models from Google Research, like the latest and greatest,” Gupta said. “And we’re doing it much more efficiently than ever possible before due to the Tensor architecture.”
While most handset makers use chips supplied by Qualcomm, Apple has stood out as an exception by creating processors for everything from its iPhones to its Macs. Designing its own processors lets Apple optimize for the features it cares about the most, like high performance and long battery life. Now, Google is looking to follow that strategy as it taps into its strength in search and AI. The move deals a blow to Qualcomm.
“Qualcomm Technologies and Google have been partners for more than 15 years, starting with bringing the first Android devices to market,” Qualcomm said in a statement. “We will continue to work closely with Google on existing and future products based on Snapdragon platforms to deliver the next-generation of user experiences for the 5G era.”
Google could use a fresh direction. While its Android software powers almost nine out of every 10 smartphones shipped globally, Pixels make up less than 1% of phones shipped around the world, according to Strategy Analytics.
But Anisha Bhatia, a senior analyst at researcher GlobalData, called Google’s efforts to make Tensor “significant.”
“Making a chip is a complex and expensive process, and Google, a company that isn’t strong in smartphones, is investing money and resources into this process,” Bhatia noted. “It would allow Google to better compete in the smartphone marketplace.”
Tensor is what’s called a system-on-a-chip, essentially a monster processor that combines the CPU — the brains powering the device — with other capabilities.
In the case of Google’s design, the eight-core CPU is integrated with a 20-core Arm Mali-G78 MP20 GPU, a Tensor Processing Unit machine learning engine, an advanced image signal processor for photography, a Tensor security core, a “context hub” for ultra-low power capabilities, and an 8MB system cache. Tensor also includes a 4MB CPU L3 cache and is built on the 5-nanometer manufacturing node.
The key is all of the parts of Tensor working together to carry out an action on the Pixel 6.
“It’s kind of rare that one of these blocks is the star of any important experience,” Carmack said. “They have to be carefully choreographed.”
The Tensor security core is a CPU-based subsystem that’s separate from the main apps processor. It allows sensitive tasks and controls to run in an isolated and secure environment. Alongside Tensor will sit a co-processor, Google’s next-generation, dedicated security chip called Titan M2. And because Google created Tensor, it’s extending security support to Pixel owners to five years, letting people hold onto their devices much longer.
The CPU itself is eight cores based on architecture from chip designer Arm — two high-performance Cortex-X1 cores with clock speeds of 2.8GHz, two midrange A76 cores at 2.25GHz and four small, high-efficiency A55 cores at 1.8GHz. Chip experts like XDA Developers called the configuration — which combines Arm’s most powerful new cores with older cores — unusual when the specs leaked before the event.
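To make that unusual 2+2+4 layout concrete, here is a minimal Python sketch that writes the cluster configuration out as data. The core names and clock speeds come from the article; the frequency-weighted "capacity" metric is our own simplification that ignores per-core IPC differences, not a figure Google uses.

```python
from dataclasses import dataclass

@dataclass
class CoreCluster:
    name: str
    count: int
    freq_ghz: float

# Tensor's 2+2+4 CPU layout as described in the article.
TENSOR_CPU = [
    CoreCluster("Cortex-X1 (high-performance)", 2, 2.8),
    CoreCluster("A76 (midrange)", 2, 2.25),
    CoreCluster("A55 (high-efficiency)", 4, 1.8),
]

def total_cores(clusters):
    return sum(c.count for c in clusters)

def peak_clock_capacity_ghz(clusters):
    """Naive sum of core counts x clock speeds: a rough stand-in
    for peak throughput that ignores per-core IPC differences."""
    return sum(c.count * c.freq_ghz for c in clusters)

print(total_cores(TENSOR_CPU))                       # 8
print(round(peak_clock_capacity_ghz(TENSOR_CPU), 2)) # 17.3
```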
Carmack said Google designed the CPU in a way that would deliver the best responsiveness and power efficiency for intensive use cases.
“We did something different than the rest of the Android ecosystem is doing because we didn’t build it in order to win a benchmark of single threaded performance for the minimum amount of dollars invested,” Carmack said. “We built it to deliver the experiences. … Having two really high-performance cores helps us get better overall responsiveness and better overall sustained performance.”
Ultimately, the Pixel 6’s Tensor CPU performance is 80% faster and its GPU is 370% faster than the chip in the Pixel 5, Qualcomm’s midrange Snapdragon 765G. Notably, that chip was slower than the top-of-the-line processor found in other premium Android phones.
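Those percentages are easy to misread: "N% faster" means 1 + N/100 times the throughput, so the 370% GPU figure works out to 4.7 times the Snapdragon 765G. A quick illustrative conversion (the helper name is ours, not Google's):

```python
def speedup_multiplier(percent_faster):
    """Convert an 'N% faster' claim into a throughput multiplier."""
    return 1 + percent_faster / 100

# The article's Tensor-vs-Snapdragon-765G figures.
cpu = speedup_multiplier(80)   # 1.8x the Pixel 5's CPU performance
gpu = speedup_multiplier(370)  # 4.7x the Pixel 5's GPU performance
```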
One of the areas that will see improvements on the Pixel 6 is photography. Already a strong suit for Pixels, the camera gains even smarter abilities thanks to Tensor.
A new feature called Face Unblur keeps the face of a moving subject in focus while preserving the motion blur across the rest of the body. For instance, you can capture a photo of a child jumping on a trampoline, with the face sharp and in focus and the limbs blurred in motion.
The technology works by tapping into a Google Research face-detection ML model called FaceSSD. Before even taking the photo, the Pixel will detect there’s a face in the scene. If it’s blurry, the Pixel will automatically spin up the second camera so it’s ready to go when you press the shutter button.
The camera takes two images simultaneously, one from the main camera and one from the ultrawide camera. The main camera uses normal exposure to give you a low-noise photograph, and then the ultrawide uses a faster exposure to provide a super sharp image.
“Machine learning will align these two images, it will merge them, and it will take the sharper face from the ultrawide and the low-noise shot from the main to get you sort of the best of both worlds,” Gupta said. Lastly, the Pixel will take one final look to see if there’s any blur remaining in the merged images. If so, the phone will estimate the levels and direction of the blur and remove it.
“All in all, it takes about four machine learning models to combine this data from two different cameras and then deliver you the photograph with a nice clear face,” Gupta said. “Not only is almost the entire chip lit up with different subsystems doing different things, even at the phone level, we have multiple camera sensors lit up. But this sort of gets to kind of the heart of why we approach Tensor differently and why our focus is really rooted in starting from the end users.”
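The merge step Gupta describes can be sketched in a few lines. This is a toy, single-channel illustration under big assumptions: the two frames are already aligned, the face mask is given, and the real alignment and deblurring ML models are omitted entirely.

```python
def merge_face_unblur(main_px, ultra_px, face_mask):
    """Blend per pixel: keep the low-noise main-camera value outside
    the face, take the sharper ultrawide value inside it.
    face_mask holds floats in [0, 1], with 1.0 inside the face."""
    h, w = len(main_px), len(main_px[0])
    return [[face_mask[y][x] * ultra_px[y][x]
             + (1 - face_mask[y][x]) * main_px[y][x]
             for x in range(w)] for y in range(h)]

# Toy 4x4 single-channel example: main camera uniformly dark (clean),
# ultrawide uniformly bright (sharp), with a "face" in the center 2x2.
main = [[0.0] * 4 for _ in range(4)]
ultra = [[1.0] * 4 for _ in range(4)]
mask = [[0.0] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        mask[y][x] = 1.0

merged = merge_face_unblur(main, ultra, mask)
```

The result keeps the main camera's pixels everywhere except the masked face region, which comes from the ultrawide frame.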
FaceSSD can also work with video, Gupta said. The company has never been able to apply face detection to video before because the older chips weren’t fast enough and consumed too much power, she said. The new FaceSSD runs twice as fast, at 30 frames per second, which makes it suitable for video. It’s also lower power.
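Running per frame is what makes video so demanding: at 30 frames per second, detection has to fit in a budget of roughly 33 milliseconds per frame. A tiny back-of-the-envelope helper:

```python
def per_frame_budget_ms(fps):
    """Time budget per frame for a model that must keep up with video."""
    return 1000 / fps

budget = per_frame_budget_ms(30)  # roughly 33.3 ms per frame
```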
“A lot of our computational photography techniques that we applied to photos historically, we are now able to apply [to] videos for the first time,” Gupta said. “All of these techniques are just going to make videos better.”
Another photography feature called Motion Mode adds blur into a still image. It “brings sort of a professional look and feel to urban photos, a night out or even nature scenes,” Gupta said. “Typically, you’d create these effects with like maybe a tripod or long exposures or fancy equipment and a lot of practice. But within the Google Camera app, we make this super easy with the Motion Mode.”
The Pixel 6’s camera takes several photos and combines them. Using the on-device machine learning and computational photography, it will identify the subject of the photo, figure out what’s moving and what’s not, and then add a sort of aesthetic blur to the background. For instance, if taking a photo of a cyclist, Motion Mode would make the cyclist in focus and blur the wheels and background slightly to indicate the action.
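The compositing idea (average the burst so moving regions smear into a blur, then paste the segmented subject back in sharp) can be sketched like this. It's a toy, single-channel illustration; the subject mask, which the real phone derives with on-device ML, is assumed given here.

```python
def motion_mode(frames, subject_mask):
    """Average a burst of frames so anything that moved smears into a
    blur, then restore the subject's pixels from the reference frame.
    subject_mask holds 1 where the subject is, 0 elsewhere."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    blurred = [[sum(f[y][x] for f in frames) / n for x in range(w)]
               for y in range(h)]
    ref = frames[-1]  # reference frame supplies the sharp subject
    return [[ref[y][x] if subject_mask[y][x] else blurred[y][x]
             for x in range(w)] for y in range(h)]

# Two 2x2 frames; the top-left pixel is the "subject".
frames = [[[0.0, 0.0], [0.0, 0.0]],
          [[4.0, 4.0], [4.0, 4.0]]]
mask = [[1, 0], [0, 0]]
out = motion_mode(frames, mask)
```

The masked pixel keeps its sharp reference value while every other pixel becomes the burst average, which is what produces the streaked-background look.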
Google Pixels have long won accolades for their photo quality, but video typically hasn’t measured up to the quality found in rival phones. The company aims to make its video just as good as still images in the new Pixel 6 using its computational photography experience, Gupta said.
Google developed an algorithm called HDRnet. It embedded parts of the algorithm directly into the Tensor image signal processor to “vastly” speed up the processing while “drastically” reducing the power consumption, she said. HDRnet allows the Pixel 6 to capture video at the quality of still images, and it can run on all video formats, even 4K video at 60 frames per second.
“Once you put ML in the interactive loop, it’s a real-time guidance system now,” Carmack said. “An intelligent photographer is there in real time, making sure you get the right thing.”
Speech and transcription
Another area where Tensor benefits Pixel users is in how you talk to the device. The Pixel 6 comes with on-device speech recognition that “can transcribe speech with incredible accuracy,” Gupta said. Because of Tensor, that transcription consumes half as much power as before.
That improved speech recognition will help with features like Voice Access, which lets users talk to the phone without having to type in commands, and Google Call Screen, which uses Google Assistant to answer incoming calls, talk to the caller and provide a transcript of what the caller’s saying.
Google Assistant should also see improvements, thanks to Tensor and the advanced ML enabled by the chip. The processor allows for more effective “hot word” detection — the “OK Google” that triggers the voice assistant to respond. In the Pixel 6, it can detect the hot word even when the background is noisy.
And the new Tensor security core works with the rest of the processor to secure sensitive information. For instance, you may need to call on the Google Assistant to unlock your phone or grant access to your contacts. The Pixel 6 will do all of that processing on the device instead of sending it to the cloud.
“On-device AI might help them placate some of the concerns people have around Google and better compete against iOS on the privacy and security side,” Creative Strategies analyst Carolina Milanesi said.
And because of Tensor, Google can apply more ML to problems to further refine and understand the nuances of the individual speaker, Gupta said.
“This could be the difference between knowing is it Monica with the ‘C’ or Monica with the ‘K,’ or where do you put the comma, where do you put the punctuation, all the nuances you get depending on who’s speaking, based on their accent or based on the nouns they often use,” Gupta said.
When it comes to translation, the Pixel is running a new, “state-of-the-art” model called Neural Machine Translation, Gupta said. It consumes half the power that was previously needed, she said.
Live Chat Translation quickly translates from one language to another. It works on any Android chat app, and translations happen directly in that app, rather than forcing a user to copy and paste text into Google Translate. The new Pixels also can translate media in real time, like providing live English interpretations on a French-speaking video.
And thanks to Tensor and NMT, the new Pixel 6 can provide on-device translation for speech.
“Tensor was essentially designed to be the best on-device [machine learning] SoC in the market,” Gupta said. “If I’m watching something, and it’s being live captioned for me or live translated for me, that’s private.”
These sorts of tasks would typically drain a phone’s battery, if not for Tensor.
“One of our product goals was to kind of eliminate this compromise with Tensor and really push [performance and energy efficiency] simultaneously,” Gupta said.
For Google, Tensor and the Pixel’s new AI features aren’t the end. They’re just the beginning — and they’re only limited by how powerful and energy efficient Google can make its chips.
“This tight collaboration between Research, Android platform, Pixel and the silicon team creates a nice virtuous cycle,” Carmack said. “We have a whole roadmap of products.”