posted by martyb on Monday December 17 2018, @09:14AM   Printer-friendly
from the Ancient-History dept.

I found an old memoir by someone who worked with Richard Feynman back in the '80s.

Those days presaged a lot of things that have become commercial hot topics today: highly parallel computers and neural nets.

One day in the spring of 1983, when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. (I was at the time a graduate student at the MIT Artificial Intelligence Lab). His reaction was unequivocal: "That is positively the dopiest idea I ever heard." For Richard a crazy idea was an opportunity to prove it wrong—or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company.

In his last years, Feynman helped build an innovative computer. He had great fun with computers. Half the fun was explaining things to anyone who would listen.

I was alive in those days; might I be as old as aristarchus?

-- hendrik


Original Submission

 
  • (Score: 1) by Veyrdite on Monday December 17 2018, @10:11AM (6 children)

    by Veyrdite (6386) on Monday December 17 2018, @10:11AM (#775339)

    > 20-dimensional hypercube so that each processor would only need to talk to 20 others directly

    I'm not sure if I'm glad or disappointed that the old TIS-100 only went as far as four interconnects per node ("4-dimensional" in the article's parlance?). Programming for and debugging that CPU was a pain, but perhaps resolving pathways in 20D might have actually been easier.

    You would under-utilise all the links at first, a bit like wasting BASIC line numbers (10 PRINT HELLO, 20 GOTO, 30, etc.). Of course, when the program gets too complex (too much "inserted" later), you start having to plan around used-up nodes and routes, leading to a 20-dimensional spaghetti mess. Oh god.

    On second thought my tis is going to stay in the garage.

  • (Score: 2) by coolgopher on Monday December 17 2018, @11:18AM

    by coolgopher (1157) on Monday December 17 2018, @11:18AM (#775348)

    You mean this TIS-100 [eviltrout.com] processor? You sure you haven't got the wrong acronym? If you meant the TRS-80 Model 100, there's apparently Virtual T [sourceforge.net] for all your leave-the-hardware-in-the-garage needs...

  • (Score: 1, Interesting) by Anonymous Coward on Monday December 17 2018, @04:32PM

    by Anonymous Coward on Monday December 17 2018, @04:32PM (#775442)

    The Connection Machine (IIRC) was SIMD: single instruction, multiple data. They wrote their own compiler(s) to handle the parallel operations. As others note below, it was very good for certain calculations.
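
    Loosely, the SIMD model in miniature, as a NumPy sketch (an analogy only: array elements stand in for per-processor data, and none of this is actual CM code):

        import numpy as np

        # One logical instruction ("add") applied across many data elements
        # at once. On a Connection Machine each element would live on its own
        # processor; NumPy merely vectorises the same idea on one CPU.
        velocities = np.array([1.0, 2.0, 3.0, 4.0])
        accel, dt = 0.5, 0.1
        velocities += accel * dt   # every "lane" executes the same update
        print(velocities)          # [1.05 2.05 3.05 4.05]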

    In particular I remember a demo of a 2D wind tunnel (early CFD) that ran nearly in real time... in 1980. I later heard it had been expanded to 3D flow simulation for turbine engines, including the chemistry and thermodynamics (but I didn't see that demo).

    When I visited that summer at their rural rented-mansion office, Feynman sadly wasn't in that day, so I didn't get to meet him. One of the young engineers I did meet was Brewster Kahle, now well known for the Internet Archive. The other thing I remember is that it was the early days of mountain bikes on the east coast; they had some available, and several of us went for a ride through the nearby woods, very pastoral, discussing future computers while dodging trees.

  • (Score: 4, Informative) by DannyB on Monday December 17 2018, @10:04PM (3 children)

    by DannyB (5839) Subscriber Badge on Monday December 17 2018, @10:04PM (#775587) Journal

    I'm not sure if I'm glad or disappointed that the old TIS-100 only went as far as four interconnects per node ("4-dimensional" in the article's parlance?).

    I don't think you get what they mean by a 20-dimensional cube. I was alive back then and remember this. Does that make me as old as Aristarchus?

    Imagine a 3-dimensional cube (i.e., a normal cube).

    Each corner (i.e., vertex) is a processor. Each processor is connected to exactly 3 other processors.

    Each "dimension" has a ZERO plane and a ONE plane. Each of the 3 dimensions has a 0 or 1 address. The X-axis of the cube has a 0 point and a 1 point. The Y axis has a 0 and 1 point. And the Z axis has 0 and 1. Thus each processor (corner) has a 3 bit address.

    If a processor wants to send a packet to a different processor, the routing machinery and connection "fabric" can route that packet many ways, but each hop only brings it closer (never farther) to the destination.

    Suppose processor 000 wants to send a message to processor 111. Since all three address bits differ, the fabric can route the packet out of 000 on any of its 3 connections: to 001, 010 or 100. Say it goes to 010. Now only two bits differ from the destination, so there are only two possible next hops. Say it next goes to 110. Now only one bit differs, so there is exactly one route left: to 111.

    Now extend this idea into a 4th dimension. Take that cube and create a second cube just like it; call the new axis W. The old cube sits at W=0 and the new cube at W=1, with each corner connected to its counterpart in the other cube. Each corner now has 4 connections, one along each of the X, Y, Z and W axes, and the total number of processors is 2^4.

    Now stretch this out to 20 dimensions: 2^20 processors, each with a 20-bit address on 20 axes (X, Y, Z, W, A, B, C, D, ...), and each connected to 20 other processors. When a packet is routed from one 20-bit address to another, it can move through the fabric along any axis whose bit differs between the current processor and the destination, so every hop brings it exactly 1 bit closer.
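
    A minimal sketch of that greedy routing rule in Python (my illustration, not actual CM routing firmware): at every hop, flip one address bit that still differs from the destination, so the Hamming distance shrinks by one per hop.

        def hypercube_route(src: int, dst: int, dims: int = 20) -> list[int]:
            """Return one shortest path of node addresses from src to dst."""
            assert 0 <= src < (1 << dims) and 0 <= dst < (1 << dims)
            path, cur = [src], src
            while cur != dst:
                diff = cur ^ dst                        # axes whose bits still differ
                axis = (diff & -diff).bit_length() - 1  # lowest differing axis
                cur ^= 1 << axis                        # hop across that axis
                path.append(cur)
            return path

        # The 3-bit example from above: 000 -> 111 in exactly 3 hops.
        print([format(n, "03b") for n in hypercube_route(0b000, 0b111, dims=3)])
        # -> ['000', '001', '011', '111']

    At each step any of the differing axes would do, which is where the wealth of alternative routes comes from.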

    With such a rich interconnection fabric, imagine how much communication could be going on between these 2^20 processors.

    It's not that each processor is a "neuron". It's that the process or processes on each processor have a highly efficient communication system with processes on other nodes. You could run more than 2^20 threads; it's just that when a thread talks to another thread, its message has an extremely rich set of connection possibilities for reaching the destination.

    --
    People today are educated enough to repeat what they are taught but not to question what they are taught.
    • (Score: 2) by FatPhil on Tuesday December 18 2018, @08:59AM (2 children)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday December 18 2018, @08:59AM (#775764) Homepage
      Your 3-dimensional cube has 3 interconnects per node. Your 20-dimensional hypercube has 20 interconnects per node. What makes you so unwaveringly sure that his architecture with 4 interconnects per node wasn't 4-dimensional? OP has been fairly ambiguous about which processor he's talking about; there's no way of telling what topology it had. OK, the first thing that came to mind was a T-100 transputer, having as it does 4 interconnects, from a family that was conventionally presented in marketing materials as if it had a 2D mesh topology, but I have no evidence that's the only way it could be connected.

      This is a tech that gets reinvented every decade or so, as the balance between computation and communication oscillates. It appears skewed tori, and things that look more like optimal sorting networks and FFT butterflies, are the current topological fashion rather than hypercubes.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by DannyB on Tuesday December 18 2018, @02:21PM (1 child)

        by DannyB (5839) Subscriber Badge on Tuesday December 18 2018, @02:21PM (#775828) Journal

        It was way back in the day, and my memory could be failing, but I seem to recall that the dimension of the cube determined the number of processors, and perhaps the number of interconnects as a result. I'd be happy to be corrected on that point if I misunderstood or misremember.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
        • (Score: 2) by FatPhil on Tuesday December 18 2018, @02:47PM

          by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday December 18 2018, @02:47PM (#775838) Homepage
          Back in the day we weren't that imaginative, and organised everything as cubes, cylinders (which includes token ring as a height-1 cylinder), or tori, and their higher-dimension equivalents. There was a very simple relation between number of nodes, diameter, and number of connections: put one number in, and you'd typically get all the other optimal numbers out.

          Topologies have got a bit more high-tech since then, and less geometrically simple topologies have been found to have better average-case or worst-case metrics. However, if you look at the interconnect architectures of recent multicore chips (Intel/AMD, etc.), you'll see that they have been patenting the same dumb old shit that MasPar/MPP/CM had 3 decades ago. Which is crazy, as yield is much more important now than it was years ago (die sizes have grown way more than wafer sizes, so wafer wastage is now more significant), and therefore they should be looking at shrinking dies and pushing more smarts into the interconnects.
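
          To make "put one number in" concrete for the simplest case, a throwaway sketch (standard hypercube identities; my assumption that this is the kind of relation meant):

            # For a d-dimensional binary hypercube, one parameter fixes the rest:
            #   nodes = 2^d, links per node = d, diameter (worst-case hops) = d.
            def hypercube_stats(d: int) -> dict[str, int]:
                return {"nodes": 2 ** d, "degree": d, "diameter": d}

            print(hypercube_stats(20))
            # {'nodes': 1048576, 'degree': 20, 'diameter': 20}
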
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves