Motherboard reports on a press release from the University of California, Davis, where researchers designed a multiple instruction, multiple data (MIMD) microprocessor. Unlike a GPU, each core can run distinct instructions on distinct data.
According to the researchers, the chip has a greater number of cores than any other "fabricated programmable many-core [chip]," exceeding the 336 cores of the Ambric Am2045, which was produced commercially.
IBM was commissioned to fabricate the processor in 32 nm partially depleted silicon-on-insulator (PD-SOI). It is claimed that the device can "process 115 billion instructions per second while dissipating only 1.3 watts" or, when operating at a greater supply voltage and clock rate, "execute 1 trillion instructions/sec while dissipating 13.1 W."
(Score: 0) by Anonymous Coward on Wednesday June 22 2016, @05:41AM
Having 1000 cores will be great, because now when you download ransomware, it can encrypt 999 of your files simultaneously while you continue to use your browser normally.
(Score: 3, Insightful) by Bot on Wednesday June 22 2016, @07:02AM
There is still a normal-people market for these thingies.
For example, cameras that shoot better video (more frequent readout of more pixels, better compression, less overheating).
Or the obvious PC in a cigarette box, so you take it with you and use the TV, the tablet and the cellphone as mere I/O. No need for the cloud or sync, just back up to a couple of HDs at home and at work. Oh ok, the incumbents will never let you do this one.
Account abandoned.
(Score: 3, Insightful) by Bot on Wednesday June 22 2016, @07:03AM
Or give one to Geohot and have your self driving car in 6 months.
Account abandoned.
(Score: 1, Funny) by Anonymous Coward on Wednesday June 22 2016, @07:12AM
Cigarette box? Smoking is forbidden everywhere in modern times. What century do you come from?
(Score: 0) by Anonymous Coward on Wednesday June 22 2016, @02:13PM
Yeah we vape now fam get with the times gramps
(Score: 2) by Bot on Wednesday June 22 2016, @11:48PM
Puny humans.
Circuitry smokes no matter what the law says (but just once).
Account abandoned.
(Score: 0) by Anonymous Coward on Wednesday June 22 2016, @05:44AM
Does it do graphics???
(Score: 3, Insightful) by LoRdTAW on Wednesday June 22 2016, @12:51PM
On a more serious note, I am sure the GPU will one day disappear into the CPU and we'll come full circle back to software rendering.
(Score: 2) by Gravis on Wednesday June 22 2016, @06:37AM
Sure, they made the chip, but did they also make a visual debugger to debug the thousand cores? This is actually something Adapteva was working on with the Epiphany III processor for the Parallella board. It's good to have awesome chips, but it's equally important to have awesome tools for said chips.
(Score: 2, Funny) by Anonymous Coward on Wednesday June 22 2016, @06:47AM
No, all they have to make is the hardware, because the way software is developed these days is: you create an empty GitHub repo with a description of the software you want, and a bunch of naive kids will write it for you before they realize they should have gotten paid to do it.
Here let me explain the relationship between hardware and software in terms you can understand:
https://xkcd.com/644/ [xkcd.com]
(Score: 1, Insightful) by Anonymous Coward on Wednesday June 22 2016, @07:48AM
I don't understand. Do you mean debug for hardware issues?
Because otherwise you can just debug your code on 4 cores (or maybe 10 if you have the patience), and then you know it's correct on 1000 cores as well.
And for 4 or 10 cores, you can simply use "printf" (or std::cout if you're so inclined).
(Score: 2) by Immerman on Wednesday June 22 2016, @01:38PM
Only if your code is completely free of any synchronization-related bugs that would show up with increased parallelization.
I suspect, though, being unfamiliar with massively parallel programming, that they're talking about debuggers designed to make visualizing the workflow in 1000 threads more effective, since a good debugger is also an excellent sidekick for performance analysis.
(Score: 0) by Anonymous Coward on Wednesday June 22 2016, @06:27PM
I do write code that runs on hundreds of CPUs (at least my code sees hundreds of CPUs; I don't know how many of those are physical cores). However, I use MPI (memory is divided among processes, no global addressing), which is the only thing that can handle what I do. For this particular CPU, I guess the shared-memory paradigm may work, so things would be slightly different.
Note that what I do is pretty simple, since basically all the cores are doing the same thing but on different data. I believe there are real programmers out there that can handle algorithms where different processes are doing different things, and maybe they have a different idea about debugging...
In any case, I did have bugs that only showed up when I used many processes. It sucks to debug them, but I did it the same way: good old printf to pinpoint what function failed and on which CPU, and then good old printf to output extra information for that particular CPU in that particular function.
(Score: 2) by HiThere on Wednesday June 22 2016, @08:25PM
I don't know. Shared memory? In a parallel setup? Even if only one process is allowed to write to any particular section of memory, you can get all sorts of races unless caching is eliminated, which has its own problems. All shared memory needs to be immutable (write-protected from every process) if you want to avoid that, and if you do that you're pretty much doing message passing even if the implementation looks different. Lock-based programming just doesn't scale well at all.
OTOH, I'm still getting started in this area, but the stories you hear!!
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by tibman on Wednesday June 22 2016, @01:16PM
Here's a working 12 core example : D
http://store.steampowered.com/app/370360/ [steampowered.com]
SN won't survive on lurkers alone. Write comments.
(Score: 0) by Anonymous Coward on Thursday June 23 2016, @01:05AM
That game is a fucking dildo.
It was fun and novel for a bit, but I just gave up. Months later, none of my steam friends came even close to solving as many as I did.
One day I'll write both my own genetic algorithm that will solve this shit somehow and some arbitrary 'compiler' or high-level modeling language.
(Score: 2) by tibman on Thursday June 23 2016, @01:40AM
Something unexpected happens at the end : ) It's a fun trip into assembly if you never had the opportunity before (born too late).
SN won't survive on lurkers alone. Write comments.
(Score: 2) by DannyB on Wednesday June 22 2016, @09:25PM
Only old people care about debugging.
Is there a chemotherapy treatment for excessively low blood alcohol level?
(Score: 2) by Zinho on Wednesday June 22 2016, @02:49PM
So, per this article "MIPS" is well and truly obsolete as a baseline for chip performance. I guess we could go with "MegaMIPS" for 1 trillion instructions/sec; however, I have a more elegant proposal. Let's extend the nomenclature such that 1 billion instructions per second is "BIPS", and 1 trillion instructions per second is "TrIPS".
Anyone with me?
:D
"Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
(Score: 2) by turgid on Wednesday June 22 2016, @04:41PM
They don't call it Marketing's Idea of Processor Speed for nothing, you know.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by Azuma Hazuki on Wednesday June 22 2016, @04:23PM
Serious question here: is this going to be useful for general-purpose computation, or even code compilation, at any point in time? Compiling is usually well parallelizable, but some parts are single-threaded.
I am "that girl" your mother warned you about...
(Score: 2) by takyon on Wednesday June 22 2016, @04:44PM
China's New Supercomputer Uses a 260-Core Chip [soylentnews.org]
Maybe use it like a coprocessor, and keep a fast 2-4 core processor nearby.
Some applications can definitely adapt to 8-10 hyperthreaded cores [tomshardware.com]. 1,000? If the hardware is out there (not just in a UC Davis lab), someone will run with it.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by turgid on Wednesday June 22 2016, @04:45PM
If your code is taking too long to compile then perhaps C++ is not the best choice.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by HiThere on Wednesday June 22 2016, @08:29PM
That's not a C++ problem, that's a program organization problem. Just break the code up into a bunch of independent libraries and it will compile quickly. This also facilitates code reuse...though not as much as it logically ought to. Most of the libraries are likely to end up only being used in one project.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 0) by Anonymous Coward on Wednesday June 22 2016, @09:18PM
It usually is.
Yes, C++ makes organising a program harder than many other languages.
Interfaces are brittle in C++, which means library recompilation is often required, more so than in better-designed languages. Longer development cycle, more bugs...
Because C++ interfaces are complex and brittle. Templates? Exceptions? Compiler versions?
Use a language with proper support for modules, no pre-processor and a proper ABI.
(Score: 0) by Anonymous Coward on Wednesday June 22 2016, @06:31PM
It can definitely be used for gaming/virtual reality applications, since you can parallelize the physics and the graphics.
I would personally use it for numerical simulations, but I guess not everyone does that for a hobby...