
posted by Fnord666 on Thursday November 21 2019, @11:57AM
from the one-API-to-rule-them-all dept.

Write AI code once, run anywhere—it's not Java, it's Intel's oneAPI

Saturday afternoon (Nov. 16) at Supercomputing 2019, Intel launched a new programming model called oneAPI. Intel describes the necessity of tightly coupling middleware and frameworks directly to specific hardware as one of the largest pain points of AI/Machine Learning development. The oneAPI model is intended to abstract that tight coupling away, allowing developers to focus on their actual project and re-use the same code when the underlying hardware changes.

This sort of "write once, run anywhere" mantra is reminiscent of Sun's early pitches for the Java language. However, Bill Savage, general manager of compute performance for Intel, told Ars that's not an accurate characterization. Although each approach addresses the same basic problem—tight coupling to machine hardware making developers' lives more difficult and getting in the way of code re-use—the approaches are very different.

[...] When we questioned Savage about oneAPI's design and performance expectations, he distanced it firmly from Java, pointing out that there is no bytecode involved. Instead, oneAPI is a set of libraries that tie hardware-agnostic API calls directly to heavily optimized, low-level code that drives the actual hardware available in the local environment. So instead of "Java for Artificial Intelligence," the high-level takeaway is more along the lines of "OpenGL/DirectX for Artificial Intelligence."
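To make the "libraries, not bytecode" point concrete, here is a rough sketch of what such a hardware-agnostic call can look like through oneMKL's DPC++ interface. The header name, the oneapi::mkl namespace, and the buffer-based gemm signature follow Intel's published oneMKL spec, but exact spellings varied across the beta releases, and the matrix sizes and data below are purely illustrative.

```cpp
// Hypothetical example: a device-agnostic matrix multiply through oneMKL's
// DPC++ interface. The queue decides where the work runs; the gemm call is
// the same regardless of the underlying device.
#include <CL/sycl.hpp>
#include <oneapi/mkl.hpp>
#include <cstdint>
#include <vector>

int main() {
    namespace sycl = cl::sycl;
    using oneapi::mkl::transpose;

    const std::int64_t m = 64, n = 64, k = 64;
    std::vector<float> a(m * k, 1.0f), b(k * n, 1.0f), c(m * n, 0.0f);

    sycl::queue q{sycl::default_selector{}};   // pick whatever device is available

    sycl::buffer<float, 1> buf_a(a.data(), sycl::range<1>(a.size()));
    sycl::buffer<float, 1> buf_b(b.data(), sycl::range<1>(b.size()));
    sycl::buffer<float, 1> buf_c(c.data(), sycl::range<1>(c.size()));

    // C = 1.0 * A * B + 0.0 * C, dispatched to the library's optimized kernel
    // for the selected device.
    oneapi::mkl::blas::column_major::gemm(q, transpose::nontrans, transpose::nontrans,
                                          m, n, k, 1.0f, buf_a, m, buf_b, k,
                                          0.0f, buf_c, m);
    q.wait();
    return 0;
}
```

The same gemm call runs unchanged whether the queue was built for a CPU or a GPU; the library supplies the tuned kernel for whatever device it finds.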

For even higher-performance coding inside tight loops, oneAPI also introduces a new language variant called "Data Parallel C++," which allows even very low-level optimized code to target multiple architectures. Data Parallel C++ leverages and extends SYCL, a "single source" abstraction layer for OpenCL programming.
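Here is a minimal sketch of that single-source model in plain SYCL 1.2.1 terms (not necessarily Intel's exact DPC++ extensions); the kernel name vec_add and the data are made up for illustration. Host code and the device lambda live in the same C++ file, and the runtime compiles the kernel for whatever device backs the queue.

```cpp
// Minimal single-source SYCL kernel: a vector add expressed as a C++ lambda.
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    namespace sycl = cl::sycl;
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q{sycl::default_selector{}};

    {   // buffers hand the host data to the runtime for the kernel's lifetime
        sycl::buffer<float, 1> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> buf_c(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler &cgh) {
            auto ra = buf_a.get_access<sycl::access::mode::read>(cgh);
            auto rb = buf_b.get_access<sycl::access::mode::read>(cgh);
            auto wc = buf_c.get_access<sycl::access::mode::write>(cgh);
            cgh.parallel_for<class vec_add>(sycl::range<1>(N), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];
            });
        });
    }   // buffers go out of scope here, copying results back to the host vectors

    std::cout << "c[0] = " << c[0] << std::endl;   // expect 3
    return 0;
}
```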

In its current version, a oneAPI developer still needs to target the basic hardware type he or she is coding for (for example, CPUs, GPUs, or FPGAs). Beyond that basic targeting, oneAPI keeps the code optimized for any supported hardware variant. This would, for example, allow users of a oneAPI-developed project to run the same code on either Nvidia's Tesla V100 or Intel's own newly announced Ponte Vecchio GPU.
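That basic targeting largely comes down to which device selector the queue is constructed with. A hedged sketch, again using the selector names from the SYCL 1.2.1 spec rather than oneAPI's final spelling:

```cpp
// Picking the "basic hardware type": construct the queue with an explicit
// device selector, falling back to the CPU if no GPU is present.
#include <CL/sycl.hpp>
#include <iostream>

int main() {
    namespace sycl = cl::sycl;

    sycl::queue q;
    try {
        q = sycl::queue{sycl::gpu_selector{}};
    } catch (const sycl::exception &) {
        q = sycl::queue{sycl::cpu_selector{}};
    }

    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>()
              << std::endl;
    return 0;
}
```

Everything downstream of the queue construction stays the same; only the selector (and the toolchain's backend support) decides which hardware actually executes the kernels.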

Related: Intel Xe High Performance Computing GPUs will use Chiplets


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Thursday November 21 2019, @12:15PM (2 children)

    by Anonymous Coward on Thursday November 21 2019, @12:15PM (#922955)

    One of my colleagues is using one of Intel's AI frameworks at the moment. It appears to be a serious cluster-fsck with improper thread locking, outdated documentation and poor performance plastered over by shiny marketing claims*. Whenever he works around one hurdle something else pops up to bite him in the ass. I don't think Intel is the right company to write software in this area.

    * Not unlike their CPUs when you think of it

  • (Score: 4, Touché) by takyon on Thursday November 21 2019, @12:28PM (1 child)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday November 21 2019, @12:28PM (#922961) Journal

    Maybe oneAPI is what they need to fix the situation.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2, Funny) by Anonymous Coward on Thursday November 21 2019, @03:19PM

      by Anonymous Coward on Thursday November 21 2019, @03:19PM (#923000)

      oneAPI to bring them all and in the darkness bind them.