| Tensor (system-on-chip) | |
|---|---|
| Name | Tensor (system-on-chip) |
| Developer | Google |
| Manufacturer | Samsung Electronics |
| Release | 2021 |
| SoC family | ARM architecture |
| CPU | Licensed Arm cores (big.LITTLE) |
| GPU | Arm Mali / Immortalis |
| NPU | Neural processing unit |
| Process | 5 nm / 4 nm |
| OS | Android |
| Predecessor | Qualcomm Snapdragon (in Pixel phones) |
| Variants | Tensor G1 / Tensor G2 / Tensor G3 |
Tensor is a family of custom system-on-chip (SoC) designs developed by Google for flagship Pixel smartphones, integrating CPU, GPU, neural accelerators, and image signal processing to optimize on-device machine learning, photography, and security. Tensor combines custom silicon design with manufacturing partners such as Samsung Electronics and leverages the Android ecosystem and services such as Google Play and Firebase to enable advanced user experiences. The SoC program positions Google among vertically integrated device makers alongside Apple Inc. and Huawei.
Tensor was introduced to shift key computational workloads from cloud services such as Google Cloud Platform to on-device processing for products including the Pixel 6, Pixel 6 Pro, Pixel 7, and Pixel 8. The initiative parallels hardware strategies at Apple Inc. with the A-series, at Huawei with Kirin, and at Samsung Electronics with Exynos, while building on architecture licenses from Arm and fabrication at foundries such as TSMC. Google announced Tensor at events including Google I/O and Made by Google, citing privacy, latency, and energy-efficiency goals.
Tensor's architecture combines licensed Arm CPU cores in big.LITTLE configurations, Arm Mali GPU designs, and a dedicated neural processing unit influenced by research from Google Brain and DeepMind. The chip incorporates heterogeneous compute elements, similar in concept to NVIDIA's tensor cores, and relies on software frameworks such as TensorFlow and TensorFlow Lite to map models to hardware. Design choices draw on on-device AI research from institutions such as MIT and Stanford University, with compiler stacks akin to LLVM and runtime integration with the Android Runtime.
Key hardware blocks include multi-core CPUs, an integrated GPU, a dedicated tensor processor (NPU), an image signal processor (ISP) optimized for computational photography workflows pioneered at Google Research, and a security module analogous to the Titan security chip. The memory subsystem interfaces with LPDDR RAM from suppliers such as SK Hynix and storage controllers compatible with the UFS standards used by Samsung Electronics. Radio and connectivity subsystems support modem technologies from partners such as Qualcomm and MediaTek for 5G, Wi‑Fi standards defined by IEEE 802.11, and Bluetooth profiles maintained by the Bluetooth Special Interest Group.
Tensor integrates with developer platforms and APIs maintained through Google Play Services, Android Studio, and Flutter, while machine learning models can be compiled with TensorFlow Lite, interchanged via formats such as ONNX, and profiled with the Android Profiler. System-level firmware builds on AOSP components and security frameworks such as Android Verified Boot and SafetyNet; debugging and driver stacks follow conventions set in Linux kernel development and enthusiast communities such as XDA Developers. Third-party tooling spans cloud services from Google Cloud Platform and source hosting on GitHub.
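The article notes that developers compile models with TensorFlow Lite for on-device execution. A minimal, hypothetical sketch of that conversion step might look like the following; the toy model, input shape, and output filename are illustrative assumptions, not part of Google's actual Pixel toolchain.

```python
# Hypothetical sketch: converting a model to the TensorFlow Lite format
# used for on-device inference. The toy function below stands in for a
# trained vision or speech network.
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[1, 4], dtype=tf.float32)])
def toy_model(x):
    # Placeholder computation in lieu of a real trained model.
    return tf.nn.softmax(x)

# Convert the concrete function to the FlatBuffer format that the
# TensorFlow Lite runtime executes on-device; DEFAULT optimizations
# enable size- and power-oriented transformations such as quantization.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()]
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting bytes can be bundled in an app and run with the TFLite
# Interpreter, optionally delegated to accelerator hardware on-device.
with open("toy_model.tflite", "wb") as f:
    f.write(tflite_model)
```

On a Tensor-equipped device, such a model could then be dispatched to the NPU through a hardware delegate rather than executed on the CPU cores.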
Benchmark analyses compare Tensor models against contemporaries such as the Qualcomm Snapdragon 8 Gen 1, Apple A15 Bionic, and Samsung Exynos 2200 using suites such as Geekbench and GFXBench and ML benchmarks such as MLPerf. Performance profiles emphasize accelerated inference for vision and speech tasks, with advantages in workload-specific metrics reported by outlets including AnandTech, The Verge, and Ars Technica. Thermal and power behavior is evaluated in reviews referencing benchmark vendors such as UL Solutions and optimized routines such as the Arm Performance Libraries.
Tensor targets computational photography features that grew out of the Pixel Visual Core, powering software such as Night Sight, Magic Eraser, and real-time transcription rooted in Live Caption. It supports voice recognition and assistant features connected to Google Assistant, on-device translation linked to Google Translate, and health-adjacent sensing integrated with platforms such as Fitbit. Deployment spans retail Pixel devices sold through channels such as the Google Store and carriers including Verizon and T-Mobile US.
The security posture relies on a dedicated security enclave inspired by designs such as Apple's Secure Enclave and Google's own Titan M, implementing hardware-backed key storage, verified boot chains aligned with Android Verified Boot, and protections against side-channel attacks studied at institutions such as the University of Cambridge and ETH Zurich. Privacy benefits derive from on-device inference for services formerly hosted on Google Cloud Platform, reducing the telemetry concerns discussed in regulatory contexts involving the Federal Trade Commission and legislation such as the General Data Protection Regulation.
Development milestones occurred alongside announcements at Google I/O and Made by Google events; initial collaboration with Samsung Electronics was reported in industry analyses by Bloomberg and The Wall Street Journal. The first-generation Tensor debuted with the Pixel 6 in 2021, followed by iterative revisions (Tensor G2, Tensor G3) announced with subsequent Pixel launches covered by outlets like CNET and Engadget. The roadmap reflects broader semiconductor trends involving fabs such as TSMC and design influences from academic labs including Carnegie Mellon University.