
posted by chromas on Thursday July 26 2018, @03:41PM
from the cloudscale-deeplearn dept.

Google unwraps its gateway drug: Edge TPU chips for IoT AI code; Custom ASICs make decisions on sensors as developers get hooked on ad giant's cloud

Google has designed a low-power version of its homegrown AI math accelerator, dubbed it the Edge TPU, and promised to ship it to developers by October. Announced at Google Next 2018 today, the ASIC is a cut-down edition of its Tensor Processing Unit (TPU) family of in-house-designed coprocessors. TPUs are used internally at Google to power its machine-learning-based services, or are rentable via its public cloud. These chips are specifically designed for, and used to, train neural networks and perform inference.

Now the web giant has developed a cut-down inference-only version suitable for running in Internet-of-Things gateways. The idea is you have a bunch of sensors and devices in your home, factory, office, hospital, etc, connected to one of these gateways, which then connects to Google's backend services in the cloud for additional processing.

Inside the gateway is the Edge TPU, plus potentially a graphics processor, and a general-purpose application processor running Linux or Android and Google's Cloud IoT Edge software stack. This stack contains lightweight TensorFlow-based libraries and models that access the Edge TPU to perform AI tasks at high speed in hardware. This work can also be performed on the application CPU and GPU cores, if necessary. You can use your own custom models if you wish.
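
The announcement doesn't name the gateway-side programming interface, but a minimal sketch of accelerated inference with a CPU fallback, assuming the TensorFlow Lite runtime and its hardware-delegate mechanism, might look like the following. The model path and delegate library name here are illustrative, not confirmed details of Cloud IoT Edge:

    # Hypothetical sketch: run a quantized TensorFlow Lite model on an
    # Edge TPU via a delegate, falling back to the application CPU if
    # the accelerator (or its runtime library) isn't present.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    MODEL = "model_edgetpu.tflite"  # placeholder path to a compiled model

    try:
        # The delegate library routes supported ops to the Edge TPU ASIC
        delegate = load_delegate("libedgetpu.so.1")
        interpreter = Interpreter(model_path=MODEL,
                                  experimental_delegates=[delegate])
    except (OSError, ValueError):
        # No Edge TPU available: run the same model on the CPU instead
        interpreter = Interpreter(model_path=MODEL)

    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Feed a dummy sensor frame matching the model's input shape/dtype
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))

The try/except is the point: the same model file serves the Edge TPU and the application cores, which matches the stack's claim that work can fall back to the CPU and GPU if necessary.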

The stack ensures connections between the gateway and the backend are secure. If you wanted, you could train a neural network model using Google's Cloud TPUs and have the Edge TPUs perform inference locally.
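
The article doesn't describe the toolchain for that train-in-cloud, infer-at-the-edge split, but a plausible sketch using TensorFlow's public TFLiteConverter API might look like this; the model, data, and the edgetpu_compiler step are placeholders standing in for whatever Google ships:

    # Hypothetical sketch: train a model in the cloud (e.g. on Cloud TPUs),
    # then convert it to a fully int8-quantized TensorFlow Lite flatbuffer
    # suitable for compilation to the Edge TPU.
    import numpy as np
    import tensorflow as tf

    # Placeholder model standing in for whatever was trained on Cloud TPUs
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(...)  # the energy-hungry training step stays in the cloud

    def representative_data():
        # Calibration samples for post-training quantization (synthetic here)
        for _ in range(100):
            yield [np.random.rand(1, 64).astype("float32")]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model.tflite", "wb") as f:
        f.write(converter.convert())
    # The flatbuffer would then be compiled offline for the ASIC, e.g.:
    #   edgetpu_compiler model.tflite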

Google announcement. Also at TechCrunch, CNBC, and CNET.

Related: Google's New TPUs are Now Much Faster -- will be Made Available to Researchers
Google Renting Access to Tensor Processing Units (TPUs)
Nvidia V100 GPUs and Google TPUv2 Chips Benchmarked; V100 GPUs Now on Google Cloud


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by c0lo (156) Subscriber Badge on Friday July 27 2018, @08:54AM (#713599) Journal

    Why are Google marketing the "cloud" when most serious NN models will be privately held and fiercely protected?

    Their business model: you pay them for the cloud to train your NN, download the result into your local (Google-made) TPU, and use it close to your sensors.
    Training is energy-intensive, while sensing/recognition needs to be very frugal with the energy available on an IoT device.

    Bottom line: at no point are you allowed independence from Google.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    Starting Score: 1 point | Karma-Bonus Modifier: +1 | Total Score: 2