
posted by Fnord666 on Saturday August 08 2020, @02:01AM   Printer-friendly
from the can-you-hear-me-now? dept.

Google resumes human review of Assistant audio 'recordings' - 9to5Google:

Last summer, Amazon, Apple, and Google were criticized for not properly disclosing how human reviewers analyze audio snippets from each of their assistants. Google in response paused the practice for Assistant and other products, but is now resuming and making audio recordings entirely opt-in.

As noted by The Verge, Google is sending out a somewhat confusing email about how it "recently updated settings for voice and audio recordings." The crux is how the company is having human reviewers analyze audio snippets again.

This process — which involves listening, transcribing, and annotating — improves Google's speech recognition technology, and helps expand support to more languages. As of last year, only 0.2 percent of all snippets are reviewed by humans.

These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday August 08 2020, @01:04PM (#1033422) Journal

    https://arstechnica.com/gadgets/2020/08/apple-explains-how-it-uses-machine-learning-across-ios-and-soon-macos/2/ [arstechnica.com]

    Borchers and Giannandrea both repeatedly made points about the privacy implications of doing this work in a data center, but Giannandrea said that local processing is also about performance.

    "One of the other big things is latency," he said. "If you're sending something to a data center, it's really hard to do something at frame rate. So, we have lots of apps in the app store that do stuff, like pose estimation, like figure out the person's moving around, and identifying where their legs and their arms are, for example. That's a high-level API that we offer. That's only useful if you can do it at frame rate, essentially."

    [...] Further, both Apple executives credited Apple's custom silicon—specifically the Apple Neural Engine (ANE) silicon included in iPhones since the iPhone 8 and iPhone X—as a prerequisite for this on-device processing. The Neural Engine is an octa-core neural processing unit (NPU) that Apple designed to handle certain kinds of machine learning tasks.
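
    For context, the frame-rate pose estimation Giannandrea describes is exposed through Apple's Vision framework. A minimal sketch of what an app would call per camera frame, assuming iOS 14+ and a CVPixelBuffer from an AVCaptureSession (names and the confidence threshold here are illustrative, not Apple's sample code):

        import Vision

        // Run body-pose detection on a single camera frame, entirely on-device.
        func detectPose(in pixelBuffer: CVPixelBuffer) throws {
            let request = VNDetectHumanBodyPoseRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                orientation: .up,
                                                options: [:])
            try handler.perform([request])                  // no round trip to a data center

            guard let body = request.results?.first else { return }
            let joints = try body.recognizedPoints(.all)    // normalized joint positions
            if let wrist = joints[.leftWrist], wrist.confidence > 0.3 {
                print("left wrist at \(wrist.location)")    // e.g. feed into an AR overlay
            }
        }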

    Best case scenario, CPU, GPU, dedicated ML/AI, neuromorphic, etc. performance increases dramatically, and most work can and will be moved to the local device because of latency and other concerns. And you can do all of the cool stuff on Linux x86/ARM/RISC-V and interact with the cloud only if you want to.
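
    As a rough illustration of how an app opts into that local path today, Core ML lets you declare which compute units a model may use; with .all, inference can be scheduled onto the Neural Engine when one is present. A minimal sketch, where MySpeechModel stands in for any Xcode-generated model class (hypothetical name):

        import CoreML

        // MySpeechModel is a placeholder for a class Xcode generates from a .mlmodel file.
        func loadLocalModel() throws -> MySpeechModel {
            let config = MLModelConfiguration()
            config.computeUnits = .all   // CPU, GPU, and the Apple Neural Engine are all allowed
            return try MySpeechModel(configuration: config)
        }
        // Predictions made with this model run on-device; nothing is sent to the cloud.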

    Worst case scenario, just think of your favorite dystopia and combine it with the Singularity.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]