Google’s TPU is a coprocessor managed by a conventional host CPU over the TPU’s PCI Express interface. It is designed to accelerate the number-crunching required by deep neural network (DNN) workloads and is 15 to 30 times faster than contemporary CPUs and GPUs at DNN acceleration. The TPU board can perform 92 TeraOps/s (TOPS) and delivers a 30- to 80-fold improvement in TOPS/W. The chip is built on a 28-nm process with a die size of about 600 mm², while consuming only 40 W of power.
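As a quick sanity check on these figures, the quoted peak throughput and power draw imply an absolute efficiency of roughly 2.3 TOPS per watt; a minimal sketch of the arithmetic, using only the numbers stated above:

```python
# Back-of-envelope efficiency from the figures quoted in the text:
# 92 TOPS peak throughput at about 40 W of power.
peak_tops = 92      # tera-operations per second (from the text)
power_w = 40        # watts (from the text)

efficiency = peak_tops / power_w  # absolute TOPS per watt
print(f"{efficiency:.1f} TOPS/W")  # prints "2.3 TOPS/W"
```

Note that the 30- to 80-fold figure in the text is a *relative* improvement over contemporary CPUs and GPUs, not this absolute number.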