
posted by martyb on Friday September 04 2015, @06:15PM   Printer-friendly
from the MIAOW-sounds-like-one-cat-crying-out dept.

While open-source hardware is already available for CPUs, researchers from the Vertical Research Group at the University of Wisconsin-Madison have announced at the Hot Chips Event in Cupertino, Calif., that they have created the first open source general-purpose graphics processor (GPGPU).

Called MIAOW, which stands for Many-core Integrated Accelerator Of the Waterdeep, the processor is a resistor-transistor logic implementation of AMD's open source Southern Islands instruction set architecture. The researchers published a white paper on the device.

The creation of MIAOW is the latest in a series of steps meant to keep processor development in step with Moore's Law, explains computer scientist Karu Sankaralingam, who leads the Wisconsin research group.


Original Submission

  • (Score: -1, Troll) by Anonymous Coward on Friday September 04 2015, @06:21PM

    by Anonymous Coward on Friday September 04 2015, @06:21PM (#232365)

    I'm a fag! I sucks cocks! LOL!

    • (Score: -1, Troll) by Anonymous Coward on Friday September 04 2015, @06:29PM

      by Anonymous Coward on Friday September 04 2015, @06:29PM (#232369)

      How about you kill yourself instead

      • (Score: -1, Redundant) by Anonymous Coward on Friday September 04 2015, @06:44PM

        by Anonymous Coward on Friday September 04 2015, @06:44PM (#232372)

        -1, Don't-Feed-the-Trolls

  • (Score: 3, Funny) by maxwell demon on Friday September 04 2015, @06:38PM

    by maxwell demon (1608) on Friday September 04 2015, @06:38PM (#232371) Journal

    So your graphics processor may get killed if you use it with a quantum computer? :-)

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 0) by Anonymous Coward on Friday September 04 2015, @07:22PM

      by Anonymous Coward on Friday September 04 2015, @07:22PM (#232390)

      Hmm..depends on which universe you are in.

  • (Score: 3, Funny) by Gravis on Friday September 04 2015, @06:58PM

    by Gravis (4596) on Friday September 04 2015, @06:58PM (#232377)

    MIAOW guys, don't be too hard on this project MIAOW because it's open hardware MIAOW and forever. sure, you can't pick one up on newegg right MIAOW but maybe one day you will. MIAOW i don't know about you guys but that's pretty dang exciting to me!

    • (Score: 1, Insightful) by Anonymous Coward on Friday September 04 2015, @07:23PM

      by Anonymous Coward on Friday September 04 2015, @07:23PM (#232391)

      As someone mentioned on... slashdot(?) when this was mentioned the other day, nyozi is 'feature complete' for a subset of features, has a complete toolchain to program it, and fits in an affordable FPGA *TODAY*. Compared to that, MIAOW is just a student research project that will only ever amount to anything if a group of students run off with the design and try to create a proprietary product out of it (like so often happens in academic research projects.)

      That said: More projects are always better, but calling something that barely counts as a 2D accelerator a 'Graphic Processor' is really stretching it, especially when you add 'THE FIRST' to it, which it is decidedly not (how many failed open source GPU projects do we have now, at least 5-10?)

      • (Score: 0) by Anonymous Coward on Friday September 04 2015, @08:12PM

        by Anonymous Coward on Friday September 04 2015, @08:12PM (#232410)

        like so often happens in academic research projects

        Ah, no. That very rarely happens.

      • (Score: 2) by tibman on Friday September 04 2015, @10:07PM

        by tibman (134) Subscriber Badge on Friday September 04 2015, @10:07PM (#232444)

        My google-fu must be weak today. I can't find your "nyozi" GPU anywhere. Link please : )

        --
        SN won't survive on lurkers alone. Write comments.
    • (Score: 1) by doctorate on Saturday September 05 2015, @07:10PM

      by doctorate (5826) on Saturday September 05 2015, @07:10PM (#232704)

      Well done... Though you missed the record of 10.

  • (Score: 0) by Anonymous Coward on Friday September 04 2015, @07:01PM

    by Anonymous Coward on Friday September 04 2015, @07:01PM (#232379)

    will there be good graphics drivers?

    • (Score: 3, Funny) by Anonymous Coward on Friday September 04 2015, @07:03PM

      by Anonymous Coward on Friday September 04 2015, @07:03PM (#232382)

      They can't do much worse than AMD currently does.

    • (Score: 2, Funny) by Anonymous Coward on Saturday September 05 2015, @09:49AM

      by Anonymous Coward on Saturday September 05 2015, @09:49AM (#232562)

      Yes. They will even be optimized for playing cat videos.

  • (Score: 2) by Hawkwind on Friday September 04 2015, @07:04PM

    by Hawkwind (3531) on Friday September 04 2015, @07:04PM (#232383)

    Many-core Integrated Accelerator Of the Waterdeep

    Waterdeep, is this some kind of reference to Wisconsin among in-the-know D&D players? At least I found Wisconsin as an alternative on their Git page: Many-core Integrated Accelerator Of Waterdeep/Wisconsin [github.com].

  • (Score: 0) by Anonymous Coward on Friday September 04 2015, @07:31PM

    by Anonymous Coward on Friday September 04 2015, @07:31PM (#232393)

    Because games are all about graphics, and you need a more powerful GPU than CPU if you want to brag about your elite status as an elite gamer.

    • (Score: -1, Troll) by Anonymous Coward on Friday September 04 2015, @07:39PM

      by Anonymous Coward on Friday September 04 2015, @07:39PM (#232397)

      I'm going to glory hole in about an hour or so. Would you like to join me and let me fuck your ass?

      • (Score: -1, Troll) by Anonymous Coward on Friday September 04 2015, @07:46PM

        by Anonymous Coward on Friday September 04 2015, @07:46PM (#232398)

        No thanks. Your cock is too short to reach through the wall.

        • (Score: -1, Troll) by Anonymous Coward on Friday September 04 2015, @07:55PM

          by Anonymous Coward on Friday September 04 2015, @07:55PM (#232400)

          Your dad wasn't complaining last night.

          • (Score: -1, Flamebait) by Anonymous Coward on Friday September 04 2015, @10:08PM

            by Anonymous Coward on Friday September 04 2015, @10:08PM (#232445)

            He is a very considerate man.

    • (Score: 5, Interesting) by VortexCortex on Friday September 04 2015, @08:39PM

      by VortexCortex (4067) on Friday September 04 2015, @08:39PM (#232416)

      Because games are all about graphics, and you need a more powerful GPU than CPU if you want to brag about your elite status as an elite gamer.

      Well, first off, most Linux gamers are happy when they have graphics that WORK. I'll spend 10x$ on a GPU if it has open source drivers that work. Also, as a gamedev who cares less about graphics than gameplay, I'm glad for vastly more parallel compute power.

      Protip: Tired of dumb enemy AI? That's because AAA gamedevs aren't running more complex and embarrassingly parallel AI on GPUs (AI devs get budgeted 1%-2% max processing/storage/RAM). Tired of the same old gameplay? Hardware advances will bring new types of games.
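
      A minimal sketch of what "embarrassingly parallel AI on the GPU" can look like, written in CUDA with one thread per agent. Everything here (the Agent struct, score_action, the 8 candidate actions) is invented for illustration, not taken from any real engine:

        #include <cuda_runtime.h>

        struct Agent {
            float x, y;            // current position
            float goal_x, goal_y;  // where the agent wants to go
            int   action;          // chosen action index, written by the kernel
        };

        // Hypothetical per-agent utility: prefer headings that move toward the goal.
        __device__ float score_action(const Agent& a, int action)
        {
            float angle = action * 6.2831853f / 8.0f;   // 8 candidate headings
            float nx = a.x + cosf(angle);
            float ny = a.y + sinf(angle);
            float dx = a.goal_x - nx, dy = a.goal_y - ny;
            return -(dx * dx + dy * dy);                 // closer to goal means higher score
        }

        // One thread per agent with no inter-thread communication at all,
        // which is what makes this "embarrassingly parallel".
        __global__ void agent_ai_step(Agent* agents, int n_agents)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n_agents) return;

            Agent a = agents[i];
            float best = -1e30f;
            int best_action = 0;
            for (int act = 0; act < 8; ++act) {
                float s = score_action(a, act);
                if (s > best) { best = s; best_action = act; }
            }
            agents[i].action = best_action;
        }

        // Host side: launch enough blocks to cover every agent.
        void run_ai_step(Agent* d_agents, int n_agents)
        {
            int threads = 256;
            int blocks = (n_agents + threads - 1) / threads;
            agent_ai_step<<<blocks, threads>>>(d_agents, n_agents);
        }

      The point is that each agent's decision is independent, so tens of thousands of them map cleanly onto GPU threads every frame.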

      As more GPU power becomes available, graphical fidelity hits a point where 10% less isn't visibly much different, and so more parallel processing can be dedicated to physics (gameplay) and AI (gameplay).

      IMO, I like AMD's APU design. Shared memory architecture is where we need to go, because my physics code on the GPU can't talk to the NIC or input devices without going through the CPU/main memory. Currently I have to keep two copies of geometry in RAM: one CPU side doing collision detection and some physics, and one GPU side to render. The CPU side is typically lower res, but the GPU also contains the lower res data for reducing model complexity with increased view distance (LoD). Also, the GPU driver might have to "swap out" geometry or memory to the CPU side to render some complex frame. Devs try to avoid this at all costs, but "streaming assets" really is just "swapping out" to the CPU side.

      With shared memory architecture, not only does parallel / heterogeneous computing become easier, but you get a LOT more RAM to play with even without increasing the available memory. And since the "CPU" side code for networking and gameplay-relevant physics (which must be available for network sync, as opposed to visual-only physics, like particle effects) can communicate directly with "GPU" memory, MORE visible things can affect your gameplay. The reason particles go through walls and don't do gameplay-changing things (like any visible ember being able to catch your clothes on fire) is that it's so painful and costly to read back data from the GPU. With APUs and shared memory architectures, advances in "GPUs" increase the possibilities for GAMEPLAY, even leading to new types of gameplay not possible on current platforms.
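
      To make the "two copies vs. shared memory" contrast concrete, here's a rough CUDA sketch. The Vertex layout and the integrate kernel are invented for illustration, and cudaMallocManaged stands in for an APU-style shared address space (it's NVIDIA's unified memory, not literally an AMD APU), but the difference in data movement is the same:

        #include <cuda_runtime.h>
        #include <cstdlib>

        struct Vertex { float x, y, z; };

        // Toy GPU-side physics step: nudge every vertex downward ("gravity").
        __global__ void integrate(Vertex* v, int n, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) v[i].y -= 9.8f * dt;
        }

        int main()
        {
            const int n = 1 << 20;
            const float dt = 1.0f / 60.0f;

            // Discrete-GPU style: two copies of the geometry plus explicit transfers.
            Vertex* h_mesh = (Vertex*)calloc(n, sizeof(Vertex));  // CPU copy (collision, network sync)
            Vertex* d_mesh = NULL;
            cudaMalloc((void**)&d_mesh, n * sizeof(Vertex));      // GPU copy (render / GPU physics)
            cudaMemcpy(d_mesh, h_mesh, n * sizeof(Vertex), cudaMemcpyHostToDevice);
            integrate<<<(n + 255) / 256, 256>>>(d_mesh, n, dt);
            // Reading the results back so gameplay code can see them is the painful part:
            cudaMemcpy(h_mesh, d_mesh, n * sizeof(Vertex), cudaMemcpyDeviceToHost);

            // Shared-memory style: a single allocation that both sides can touch.
            Vertex* mesh = NULL;
            cudaMallocManaged((void**)&mesh, n * sizeof(Vertex));
            integrate<<<(n + 255) / 256, 256>>>(mesh, n, dt);
            cudaDeviceSynchronize();
            // CPU gameplay code can now read mesh[i] directly, with no explicit readback.

            cudaFree(d_mesh);
            cudaFree(mesh);
            free(h_mesh);
            return 0;
        }

      The second half is the behaviour being asked of the hardware: when the "CPU" and "GPU" views are the same memory, particles, embers and other GPU-side state become readable by gameplay code without a copy.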

      Back in the days before hardware acceleration we had more freedom to interface physics, networking and rendering via software rasterization. New and more advanced GPUs are bringing us back to that place where graphics and gameplay can co-mingle again. It's not just about bragging about my "elite status" as an "elite gamer", I want to implement some game mechanics that are not possible (in realtime) on today's hardware. In other words: Blame crummy console compatibility constraints for AAA's not doing more with GPUs, not gamers. Gamers just throw the best hardware they can at what game developers create. Don't dismiss new hardware as "muh graphics" simply because risk averse AAA studios don't use them to do more. The faster everyone gets high memory bandwidth parallel computing, the more feasible it is to recoup the cost of making new types of games.

      I'm not sure what to think of MIAOW. I want open hardware, but I also want established vendors to have open sourced hardware. If AMD can benefit from MIAOW's developments then maybe this will be good for everyone. They're a long way off, but if something like MIAOW becomes the de facto GPU for open source platforms it puts less pressure on proprietary vendors to open their sources. Or, to put it another way: Proprietary hardware vendors have a window of time to capitalize on open source platforms. As open hardware becomes more competitive this window slowly closes and the proprietary shops will find themselves competing with "open". The forward-thinking response from proprietary vendors would be to open more hardware designs, even manufacture and sell the MIAOW design. Focus more on the manufacturing process and less on protecting their bullshit driver code (of course, factoring in the MPAA and Microsoft tends to complicate such decisions).

  • (Score: 5, Informative) by VortexCortex on Friday September 04 2015, @08:57PM

    by VortexCortex (4067) on Friday September 04 2015, @08:57PM (#232422)

    the processor is a resistor-transistor logic implementation of AMD's open source Southern Islands instruction set architecture.

    Holy shit! Now THAT is something I'd like to see: somehow revitalizing the obsolete and heat-limited technology of Resistor-Transistor Logic as used in the early 60's. Wait, this isn't a monstrous retro-computing rig ridiculously expanded into the role of a GPGPU? Awwwww. Protip: RTL is more likely Register Transfer Logic, as is apparent in the Verilog sources, not the now-ancient Resistor-Transistor Logic.

    TFA's author is showing their age and/or ignorance.

    • (Score: 0) by Anonymous Coward on Friday September 04 2015, @09:42PM

      by Anonymous Coward on Friday September 04 2015, @09:42PM (#232430)

      Age XOR Ignorance

    • (Score: 2) by TheRaven on Saturday September 05 2015, @06:48PM

      by TheRaven (270) on Saturday September 05 2015, @06:48PM (#232698) Journal

      The IEEE page doesn't say resistor-transistor logic, and neither does the MIAOW site, though the latter says RTL (register-transfer level). You might expand RTL to resistor-transistor logic if you use a particularly incompetent search engine.

      It's a shame that they're doing this in Verilog rather than a higher-level HDL (e.g. Bluespec or Chisel), as that makes it harder to use for experimentation and harder to integrate with things like lowRISC.

      --
      sudo mod me up
  • (Score: 2) by jmorris on Friday September 04 2015, @10:49PM

    by jmorris (4844) on Friday September 04 2015, @10:49PM (#232455)

    So another Uni group did a student project. Good for them I guess, some actual education obviously happened.

    But we hear these sorts of open hardware stories on a regular basis and nothing happens. It is like the obligatory alt-energy story every week: a breakthrough is claimed and then disappears without a ripple into the mists of time.

    I want to see a fully open SoC. I'd settle for one with 100% open specs and reference Linux drivers. But more than all that, I don't just want to read an article about it; I want it in a product that I can actually buy, and I'd like it before I'm too old to care, drooling in a nursing home somewhere. But I'm betting on the nursing home winning the race.

  • (Score: 2, Funny) by dingus on Saturday September 05 2015, @06:24PM

    by dingus (5224) on Saturday September 05 2015, @06:24PM (#232686)

    All we need now is an open motherboard and RISC-V CPUs, and it will be possible to make a fully libre computer.

    At which point we will ascend into valhalla.