posted by martyb on Monday December 17 2018, @09:14AM
from the Ancient-History dept.

I found an old memoir by someone who worked with Richard Feynman back in the '80s.

Those days presaged a lot of things that have since become commercial hot topics: highly parallel computers and neural nets.

One day in the spring of 1983, when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. (I was at the time a graduate student at the MIT Artificial Intelligence Lab). His reaction was unequivocal: "That is positively the dopiest idea I ever heard." For Richard a crazy idea was an opportunity to prove it wrong—or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company.

In his last years, Feynman helped build an innovative computer. He had great fun with computers. Half the fun was explaining things to anyone who would listen.

I was alive in those days; might I be as old as aristarchus?

-- hendrik


Original Submission

 
  • (Score: 2) by DannyB (5839) Subscriber Badge on Tuesday December 18 2018, @02:39PM (#775832) Journal

    Yes, that.

    On a modern computer with, say, 8 cores, or a dozen, or more, it is difficult to use the hardware efficiently. Create multiple threads so that you keep all your processors busy, but don't create too many, or they all contend for the cores and you pay small inefficiencies for the context switching. Create roughly one thread per CPU core, plus one or two extra in case a thread blocks for some reason. That way the cores aren't constantly context switching.
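    A minimal sketch of that idea, in Java (the language, the pool size, and the throwaway print task are my own choices here, not anything from the comment): size a fixed thread pool to the core count plus a couple of spares, and submit work to it.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class PoolSizing {
            public static void main(String[] args) throws InterruptedException {
                // One worker thread per core, plus two spares in case a thread blocks.
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores + 2);

                for (int i = 0; i < cores + 2; i++) {
                    final int id = i;
                    pool.submit(() ->
                        System.out.println("worker " + id + " ran on " + Thread.currentThread().getName()));
                }
                pool.shutdown();                              // stop accepting new tasks
                pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for the submitted ones to finish
            }
        }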

    Another issue is that you need enough work on each thread to keep it busy. So break the problem down into "work units", divvy the work units out to the threads, and then re-gather them as they emerge completed on the other end. That way you can effectively parallelize an operation over a large ordered sequence of data items.

    If you are using a GC language, as most modern languages are, then write your code the way a game or signal processing application would, if at all possible: allocate all your structures UP FRONT and do not do any allocations in your main processing loops. That way no GCs happen mid-run.
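    Again a hedged Java sketch of the work-unit approach (the square-root workload and the chunking by core count are made up for illustration): split preallocated arrays into fixed ranges, hand each range to the pool as one work unit, and wait on the futures to re-gather the results in order. Nothing is allocated inside the hot loop.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class WorkUnits {
            public static void main(String[] args) throws Exception {
                final int n = 1_000_000;
                final double[] input = new double[n];    // allocated up front
                final double[] output = new double[n];   // allocated up front
                for (int i = 0; i < n; i++) input[i] = i;

                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                int chunk = (n + cores - 1) / cores;     // size of one work unit

                List<Future<?>> pending = new ArrayList<>();
                for (int w = 0; w < cores; w++) {
                    final int start = w * chunk;
                    final int end = Math.min(n, start + chunk);
                    pending.add(pool.submit(() -> {
                        // Hot loop: no allocations; each work unit writes only
                        // into its own slice of the preallocated output array.
                        for (int i = start; i < end; i++) {
                            output[i] = Math.sqrt(input[i]);
                        }
                    }));
                }
                for (Future<?> f : pending) f.get();     // re-gather: block until every unit is done
                pool.shutdown();
                System.out.println("output[9] = " + output[9]);
            }
        }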

    All that, and that's just the easy option. Harder options: write in a lower level language like C and make it run on multiple cores. Or write it to run on a GPU.

    But . . . that easy option does buy you a huge performance improvement if done properly, stays completely platform neutral, and is easier to write and maintain in your primary programming language.

    --
    People today are educated enough to repeat what they are taught but not to question what they are taught.