Imagine that a race car driver showed up to the Indianapolis 500 with a pickup truck. No matter how big the engine in that pickup, the truck’s design limitations would quickly become apparent: it’s simply not light enough, agile enough, or aerodynamic enough to compete. We see a similar problem with processors.
General-purpose processors have made amazing strides over the past few decades, exponentially increasing their capabilities. However, the prevalence of machine learning and AI keeps pushing the need for lower latency and higher throughput, even as CPUs and GPUs hit their performance ceilings.
This problem led Google to unveil the first Tensor Processing Unit (TPU) in 2016, and the company has released two new iterations since. What is a TPU, and why should machine learning experts care?
The TPU Is The Race Car of Computer Processing
A TPU is a specialized processor that gives up general-purpose flexibility to provide more power for one specific use case: running machine learning algorithms. Traditional processors constantly store values in registers; a program then tells the Arithmetic Logic Units (ALUs) which registers to read, which operation to perform, and where to put the result. This read-operate-write cycle is necessary for general-purpose processors but creates bottlenecks that slow down machine learning performance.
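To make that bottleneck concrete, here is a loose Python sketch (purely illustrative; no real chip is programmed this way) of the cycle a general-purpose processor repeats for every single multiply-accumulate:

```python
import numpy as np

a = np.random.rand(1_000)
b = np.random.rand(1_000)

# Each pass through this loop mirrors the general-purpose cycle described
# above: read two operands, have the ALU combine them, write the result
# back, and only then move on to the next pair.
total = 0.0
for i in range(len(a)):
    x = a[i]                 # read the first operand into a "register"
    y = b[i]                 # read the second operand
    total = total + x * y    # ALU multiplies and adds; result is written back
print(total)
```

Every iteration pays the full read-operate-write toll before the next one can begin, and that per-operation overhead is exactly what a TPU is designed to avoid.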
Like a race car designer who strips out any excess weight that slows the car down, the TPU eliminates the constant read, operate, and write cycle, speeding up performance. How does a TPU do this? It uses a systolic array to perform large, hardwired matrix calculations: the processor reads each register once, then chains together many operations, passing intermediate results directly from one arithmetic unit to the next. The system batches these calculations in large quantities, bypassing most memory accesses and accelerating the specialized processing.
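Here is a minimal NumPy sketch of the idea. Google hasn’t published every detail of its microarchitecture, so treat this as an illustration of the systolic-array technique rather than of the TPU’s actual silicon: each grid cell holds a running sum, operands flow through the array with a fixed skew, and no intermediate result ever travels back to memory.

```python
import numpy as np

def systolic_matmul(A, B):
    """Software simulation of an output-stationary systolic array.

    Each processing element (PE) at grid position (i, j) owns one output
    cell and keeps a running partial sum. A's rows stream in from the
    left and B's columns stream in from the top with a one-tick skew, so
    PE (i, j) sees the operand pair (A[i, t], B[t, j]) at clock tick
    i + j + t. It multiplies the pair, adds the product to its
    accumulator, and forwards the operands; nothing is written back to
    memory mid-computation.
    """
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))                  # one accumulator per PE

    for tick in range(m + n + k - 2):     # enough ticks to drain the array
        for i in range(m):
            for j in range(n):
                t = tick - i - j          # which operand pair reaches this PE now
                if 0 <= t < k:
                    C[i, j] += A[i, t] * B[t, j]   # one multiply-accumulate
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

In real hardware, every cell fires in parallel on each clock tick, so an N-by-N array retires N² multiply-accumulates per cycle from a single wave of register reads.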
These properties are part of what made the first-generation TPU, by Google’s own benchmarks, 15 to 30 times faster than the contemporary CPUs and GPUs it was tested against, and 30 to 80 times more power-efficient. However, these powerhouses aren’t suitable for every use case. Just as race cars aren’t practical in most other environments, the TPU shines only under specialized conditions. The following conditions may make using a TPU impractical:
- Your workload includes custom TensorFlow operations written in C++
- Your workload requires high-precision arithmetic (illustrated after this list)
- Your workload uses linear algebra programs that require frequent branching or are dominated by element-wise operations
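On the high-precision point: TPU matrix units work in reduced-precision formats (the first generation used 8-bit integer arithmetic; later generations multiply in bfloat16). The short TensorFlow snippet below shows what that rounding costs; the specific constant is arbitrary, but the behavior is real:

```python
import tensorflow as tf

x = tf.constant(1.0001, dtype=tf.float32)

# bfloat16 keeps float32's exponent range but has only a 7-bit stored
# mantissa, so representable values near 1.0 are spaced roughly 0.008
# apart. That is fine for most neural network training, but a
# deal-breaker for workloads that genuinely need high precision.
print(x.numpy())                         # ~1.0001
print(tf.cast(x, tf.bfloat16).numpy())   # rounds to 1.0
```

If your model depends on distinctions finer than that, a CPU or GPU running float32 or float64 arithmetic is the safer choice.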
Does Your Infrastructure Need a Race Car or a Utility Vehicle?
The power and efficiency of TPUs are undeniable. When clustered together, a TPU v3 pod can deliver more than 100 petaflops of computing power. But this power is limited to jobs unique to machine learning, so whether to use a TPU in your organization comes down to the use case, which the following questions can help you analyze:
- What job are you procuring compute infrastructure for?
- Will your computing needs stay consistent, or do you need flexibility?
- What scripts and languages will your software be running?
Race cars are fun, but they’re not practical for every task. If you have complex computing needs for AI and ML applications, we can help. Equus provides the robust compute and storage capabilities you need to power these advanced technologies. Our team can help you find the right balance of processing power, high-density storage, and networking to build a powerful yet cost-effective solution. Contact us to learn more.