Google has lifted the lid [googleblog.com] on an internal project to create custom application-specific integrated circuits [wikipedia.org] (ASICs) for machine learning tasks. The result is what they are calling a "TPU":
[We] started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications. The result is called a Tensor Processing Unit (TPU), a custom ASIC we built specifically for machine learning — and tailored for TensorFlow [tensorflow.org]. We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law). [...] TPU is an example of how fast we turn research into practice — from first tested silicon, the team had them up and running applications at speed in our data centers within 22 days.
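The "seven years" figure in the quote follows from back-of-the-envelope Moore's Law arithmetic: a 10x gain is a little over three doublings, and at one doubling roughly every two years (an assumed doubling period, not stated in the quote) that comes to about seven years. A quick Python sketch of that reasoning:

```python
import math

# Rough check of the quoted claim, assuming "order of magnitude" means 10x
# and a Moore's Law doubling period of about two years (our assumption).
speedup = 10.0
doubling_period_years = 2.0

doublings = math.log2(speedup)             # number of doublings ("generations")
years = doublings * doubling_period_years  # time those doublings would take

print(round(doublings, 2), round(years, 1))  # → 3.32 6.6
```

So a 10x performance-per-watt gain is about 3.3 doublings, or roughly 6.6 years at a two-year cadence, consistent with Google's "three generations / seven years" framing.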
The processors are already being used to improve search and Street View, and were used to power AlphaGo during its matches against Go champion Lee Sedol. More details can be found at Next Platform [nextplatform.com], Tom's Hardware [tomshardware.com], and AnandTech [anandtech.com].