
posted by takyon on Wednesday June 22 2016, @05:38AM   Printer-friendly
from the more-core dept.

Motherboard reports on a University of California, Davis press release about a 1,000-core multiple instruction, multiple data (MIMD) microprocessor designed by researchers there. Unlike a GPU, each core can run distinct instructions on distinct data.

According to the researchers, the chip has a greater number of cores than any other "fabricated programmable many-core [chip]," exceeding the 336 cores of the Ambric Am2045, which was produced commercially.

IBM was commissioned to fabricate the processor in 32 nm partially depleted silicon-on-insulator (PD-SOI). It is claimed that the device can "process 115 billion instructions per second while dissipating only 1.3 watts" or, when operating at a greater supply voltage and clock rate, "execute 1 trillion instructions/sec while dissipating 13.1 W."
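
A quick back-of-the-envelope check of those two figures works out to roughly 88 billion instructions per joule at the low-power point versus about 76 billion at the higher clock. A throwaway sketch of that arithmetic, using only the numbers quoted above:

    #include <stdio.h>

    /* Efficiency check using only the figures quoted in the summary:
     * 115e9 instructions/s at 1.3 W, and 1e12 instructions/s at 13.1 W. */
    int main(void)
    {
        const double low_ips  = 115e9, low_w  = 1.3;   /* low-voltage operating point */
        const double high_ips = 1e12,  high_w = 13.1;  /* high-clock operating point  */

        printf("low power : %.1f billion instructions per joule\n", low_ips / low_w / 1e9);
        printf("high clock: %.1f billion instructions per joule\n", high_ips / high_w / 1e9);
        return 0;
    }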


Original Submission #1 | Original Submission #2

 
  • (Score: 1, Insightful) by Anonymous Coward on Wednesday June 22 2016, @07:48AM (#363756)

    I don't understand. Do you mean debug for hardware issues? Because otherwise you can just debug your code on 4 cores (or maybe 10 if you have the patience), and then you know it's correct on 1000 cores as well.
    And for 4 or 10 cores, you can simply use "printf" (or std::cout if you're so inclined).
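
    For what it's worth, a minimal sketch of that printf-on-a-few-threads style, using POSIX threads and a purely hypothetical worker (nothing here comes from the article):

        #include <pthread.h>
        #include <stdio.h>

        #define NTHREADS 4   /* debug on 4 threads first, as suggested above */

        /* Hypothetical worker: each thread handles its own slice of data and
         * reports progress with plain printf so the interleaving is visible. */
        static void *worker(void *arg)
        {
            int id = *(int *)arg;
            printf("[thread %d] starting\n", id);
            /* ... the real per-thread work would go here ... */
            printf("[thread %d] done\n", id);
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];
            int ids[NTHREADS];

            for (int i = 0; i < NTHREADS; i++) {
                ids[i] = i;
                pthread_create(&tid[i], NULL, worker, &ids[i]);
            }
            for (int i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
            return 0;
        }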

  • (Score: 2) by Immerman (3985) on Wednesday June 22 2016, @01:38PM (#363835)

    Only if your code is completely free of any synchronization-related bugs that would show up with increased parallelization.

    I suspect, though, being unfamiliar with massively parallel programming, that they're talking about debuggers designed to make visualizing the workflow across 1000 threads more effective, since a good debugger is also an excellent sidekick for performance analysis.

    • (Score: 0) by Anonymous Coward on Wednesday June 22 2016, @06:27PM (#363953)

      I do write code that runs on hundreds of CPUs (at least my code sees hundreds of CPUs; I don't know how many of those are cores, etc.). However, I use MPI (memory is divided among processes, no global addressing), which is the only thing that can handle what I do. For this particular CPU, I guess the shared-memory paradigm may work, so things would be slightly different.
      Note that what I do is pretty simple, since basically all the cores are doing the same thing but on different data. I believe there are real programmers out there who can handle algorithms where different processes are doing different things, and maybe they have a different idea about debugging...

      In any case, I did have bugs that only showed up when I used many processes. It sucks to debug them, but I did it the same way: good old printf to pinpoint what function failed and on which CPU, and then good old printf to output extra information for that particular CPU in that particular function.
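
      A minimal sketch of that rank-tagged printf approach, using the plain MPI C API (the compute_step name is just a placeholder):

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              int rank, size;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);

              /* Tag every diagnostic with the rank so a failure can be pinned
               * to one particular process out of hundreds. */
              printf("[rank %d/%d] entering compute_step()\n", rank, size);

              /* ... per-rank work on this process's share of the data ... */

              printf("[rank %d/%d] compute_step() finished\n", rank, size);

              MPI_Finalize();
              return 0;
          }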

      • (Score: 2) by HiThere (866) on Wednesday June 22 2016, @08:25PM (#363991)

        I don't know. Shared memory? In a parallel setup? Even if only one process is allowed to write to any particular section of memory, you can get all sorts of races unless caching is eliminated, which has its own problems. All shared memory needs to be immutable (write-protected from every process) if you want to avoid that, and if you do that you're pretty much doing message passing even if the implementation looks different. Lock-based programming just doesn't scale well at all.
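
        As an illustration of the kind of race being described: a minimal sketch with two POSIX threads incrementing one unsynchronized shared counter, where lost updates usually leave the final total short (a generic example, not anything specific to this chip):

            #include <pthread.h>
            #include <stdio.h>

            static long shared_counter = 0;   /* mutable shared memory: the problem case */

            /* Each thread increments the counter a million times; without any
             * synchronization the read-modify-write sequences interleave and
             * some updates are silently lost. */
            static void *bump(void *arg)
            {
                (void)arg;
                for (int i = 0; i < 1000000; i++)
                    shared_counter++;          /* not atomic: classic data race */
                return NULL;
            }

            int main(void)
            {
                pthread_t a, b;
                pthread_create(&a, NULL, bump, NULL);
                pthread_create(&b, NULL, bump, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                printf("expected 2000000, got %ld\n", shared_counter);
                return 0;
            }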

        OTOH, I'm still getting started in this area, but the stories you hear!!

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.