posted by CoolHand on Friday May 20 2016, @03:06PM   Printer-friendly
from the skynet-development dept.

Google has lifted the lid off of an internal project to create custom application-specific integrated circuits (ASICs) for machine learning tasks. The result is what they are calling a "TPU":

[We] started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications. The result is called a Tensor Processing Unit (TPU), a custom ASIC we built specifically for machine learning — and tailored for TensorFlow. We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law). [...] TPU is an example of how fast we turn research into practice — from first tested silicon, the team had them up and running applications at speed in our data centers within 22 days.

The processors are already being used to improve search and Street View, and were used to power AlphaGo during its matches against Go champion Lee Sedol. More details can be found at Next Platform, Tom's Hardware, and AnandTech.


Original Submission

  • (Score: 0) by Anonymous Coward on Friday May 20 2016, @04:26PM (#348825)

    The name sounds wrong. A tensor is, roughly, a multidimensional array generalizing vectors and matrices. So when I hear "tensor processor" I think of a hardcore floating-point processor. But, afaik, AI is not primarily a floating-point workload. So, is this a case of someone picking a name that sounds cool, or is it really doing actual tensor computations?

  • (Score: 0) by Anonymous Coward on Friday May 20 2016, @05:27PM (#348849)

    I'm of course just speculating here, but they've probably got it performing the ASIC version of a classification algorithm in multiple dimensions (n-dimensional gradients, where n is some predetermined order). If that's the case then it could legitimately be a chip that performs tensor operations.

  • (Score: 2) by jcross (4009) on Friday May 20 2016, @09:33PM (#348902)

    Apparently the TPU is based on 8-bit integer math, but there's really no reason you can't have discrete tensors with 8-bit values, is there? I mean even floating point numbers are not continuous in the mathematical sense of the word, so all we're talking about is a massive reduction in range and resolution.
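    The idea in this comment can be made concrete with a small sketch. This is an illustration of generic linear (affine-free) int8 quantization as used in low-precision inference generally, not a description of Google's actual TPU scheme; the function names are hypothetical:

    ```python
    import numpy as np

    def quantize_int8(x):
        """Linearly map a float tensor onto int8, scaling the largest
        magnitude to +/-127. Range and resolution shrink; structure survives."""
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover an approximation of the original float values.
        return q.astype(np.float32) * scale

    np.random.seed(0)
    x = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(x)
    x_hat = dequantize(q, s)
    # The round-trip error is bounded by half a quantization step (s / 2).
    print(np.max(np.abs(x - x_hat)))
    ```

    The point matches the comment: an "8-bit tensor" is still a tensor; each element just takes one of at most 255 representable values instead of the float32 grid.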

  • (Score: 2) by Non Sequor (1005) on Friday May 20 2016, @10:35PM (#348920) Journal

    Well, there are eigenvector-based methods for machine learning. These aren't what you would call AI on their own, but they capture key elements of classification and inference problems.
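    As a minimal example of what this comment means, classical PCA reduces data by projecting it onto the leading eigenvectors of its covariance matrix. A NumPy sketch (illustrative only; the data here is synthetic):

    ```python
    import numpy as np

    # Generate correlated synthetic data: 200 samples, 5 features.
    np.random.seed(1)
    X = np.random.randn(200, 5) @ np.random.randn(5, 5)

    Xc = X - X.mean(axis=0)                 # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigvals in ascending order
    top2 = eigvecs[:, -2:]                  # two largest-variance directions
    Z = Xc @ top2                           # reduced 2-D representation

    print(Z.shape)  # (200, 2)
    ```

    Spectral clustering and kernel PCA follow the same pattern: the "learning" is an eigendecomposition, which is dense linear algebra of exactly the kind a matrix-oriented accelerator targets.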

    --
    Write your congressman. Tell him he sucks.