
SoylentNews is people

posted by CoolHand on Friday May 20 2016, @03:06PM   Printer-friendly
from the skynet-development dept.

Google has lifted the lid off of an internal project to create custom application-specific integrated circuits (ASICs) for machine learning tasks. The result is what they are calling a "TPU":

[We] started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications. The result is called a Tensor Processing Unit (TPU), a custom ASIC we built specifically for machine learning — and tailored for TensorFlow. We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law). [...] TPU is an example of how fast we turn research into practice — from first tested silicon, the team had them up and running applications at speed in our data centers within 22 days.
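The "seven years" equivalence in the quote is straightforward doubling arithmetic: three Moore's Law generations, each doubling performance, come out to roughly an order of magnitude. A quick sanity check (the one-doubling-per-generation assumption is the textbook reading, not something the article spells out):

```python
# Per the quote: an order-of-magnitude gain ~ "three generations of Moore's Law".
# Assuming each generation doubles performance (the conventional reading):
generations = 3
speedup = 2 ** generations
print(speedup)  # 8 — close to the claimed order of magnitude
```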

The processors are already being used to improve search and Street View, and were used to power AlphaGo during its matches against Go champion Lee Sedol. More details can be found at Next Platform, Tom's Hardware, and AnandTech.
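The TensorFlow workloads a TPU accelerates largely reduce to dense linear algebra. A minimal sketch of the core operation — a fully connected neural-network layer as one big matrix multiply — using NumPy purely for illustration (the shapes are made up, not from the article):

```python
import numpy as np

# A neural-network layer is essentially one large matrix multiply:
# (batch, inputs) @ (inputs, outputs) -> (batch, outputs).
# Dense matmuls like this are the workload a TPU is built to accelerate.
rng = np.random.default_rng(0)
activations = rng.random((256, 784), dtype=np.float32)  # hypothetical input batch
weights = rng.random((784, 512), dtype=np.float32)      # hypothetical layer weights
output = activations @ weights
print(output.shape)  # (256, 512)
```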


Original Submission

  • (Score: 2) by dyingtolive on Friday May 20 2016, @03:49PM

    by dyingtolive (952) on Friday May 20 2016, @03:49PM (#348810)

    Contrast that with the guy I was talking to the other day who, when I brought up the notion of using a local git repo rather than github, responded with "I make things... that aren't code repository services".

    And I mean, while setting up a git repo is braindead easy, he has a point to a certain degree. Vertical integration is great as long as you have the manpower for it. If you don't, then you never get any actual work done, because you're trying to keep your infra alive.

    Maybe my example is imperfect, but I see this as the natural extension of the constant "cloud vs. in-house" argument, taken past services and into hardware. It's the natural place to go for someone who has already streamlined enough bottlenecks. Sooner or later, the most worthwhile ones left to chase wind up in off-the-shelf hardware.

    --
    Don't blame me, I voted for moose wang!
  • (Score: 2) by opinionated_science on Friday May 20 2016, @04:43PM

    by opinionated_science (4031) on Friday May 20 2016, @04:43PM (#348830)

    Well, git(hub) allows it to be shared without their inertia. As a developer (sometimes), our code might go through many iterations, but dependence on external libraries/code is best kept static.

    I am interested in the details of the math/operations their TPUs specialise in. In my field (biophysics/informatics), we are always looking for ways to convert problems to exploit hardware advances!

  • (Score: 2) by WillR on Friday May 20 2016, @06:36PM

    by WillR (2012) on Friday May 20 2016, @06:36PM (#348859)
    If your business needs a widget, you buy one off the shelf.
    If it needs a million widgets a year, you build a dedicated widget factory to make them as efficiently as possible.