
SoylentNews is people

posted by on Monday January 16 2017, @04:14PM
from the needs-a-nano-chimney-sweep dept.

Rice researchers change graphene to help channel heat away from electronics.

A few nanoscale adjustments may be all that is required to make graphene-nanotube junctions excel at transferring heat, according to Rice University scientists.

The Rice lab of theoretical physicist Boris Yakobson found that putting a cone-like "chimney" between the graphene and nanotube all but eliminates a barrier that blocks heat from escaping.

Heat is transferred through phonons, quasiparticle waves that also transmit sound. The Rice theory offers a strategy to channel damaging heat away from next-generation nano-electronics.

-- submitted from IRC


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by DannyB on Monday January 16 2017, @05:55PM


    While there's nothing wrong with making cores go faster, taking away heat, making them draw less power, etc., it seems to me that this approach has limits.

    Cores haven't been getting much faster for quite some time now.

    How about building some chips with more and more cores? And yes, I mean dozens or hundreds of cores, but not like a GPU. How about fully independently programmable CPU cores?

    Software people have already been thinking for some years about how to use more cores effectively. Various map/reduce-style approaches allow a problem to be attacked by however many cores are available. There are also approaches such as Go routines or Clojure's core.async.

    First there are the "embarrassingly parallel" problems, like calculating each pixel of an image of the Mandelbrot set, or each pixel of a 3D render. Almost any image processing where each pixel is calculated independently of all the others qualifies. If it is too inefficient to make each pixel a separate task for a core, then break the image into smaller blocks of pixels. For example, break a 10,000 x 10,000 pixel image down into 100x100 pixel blocks and process each block on a parallel system.
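The block-decomposition idea can be sketched in a few lines. This is a minimal illustration using Python's standard concurrent.futures, with made-up sizes (a 400x400 image cut into 100x100 blocks, for brevity) rather than the 10,000 x 10,000 example:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def mandelbrot_pixel(cx, cy, max_iter=50):
    # Escape-time count for one point; each pixel is independent of all others.
    z = 0j
    for i in range(max_iter):
        z = z * z + complex(cx, cy)
        if abs(z) > 2.0:
            return i
    return max_iter

def render_block(args):
    # Render one square block of pixels; blocks share no state, so any
    # number of cores can chew on them at once.
    x0, y0, size, scale = args
    return [[mandelbrot_pixel((x0 + x) * scale - 2.0, (y0 + y) * scale - 1.5)
             for x in range(size)]
            for y in range(size)]

if __name__ == "__main__":
    size = 100                        # 100x100 pixel blocks
    width = 400                       # small 400x400 image for brevity
    scale = 3.0 / width
    blocks = [(x, y, size, scale)
              for x, y in product(range(0, width, size), repeat=2)]
    with ProcessPoolExecutor() as pool:   # one worker per available core
        tiles = list(pool.map(render_block, blocks))
    print(len(tiles))                 # 16 blocks, rendered in parallel
```

ProcessPoolExecutor hands each block tuple to whichever worker process is free, so the same code scales from 4 cores to dozens without changes.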

    Even problems that are not embarrassingly parallel often can be divided into independently processed elements. A payroll calculation has to process each employee independently of every other. In fact, most business batch processes have this property.
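The same pattern fits batch jobs: map an independent per-record calculation across workers, then reduce the results. A sketch, where the pay rule (a flat 20% withholding) is invented purely for illustration:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def net_pay(employee):
    # Illustrative pay rule: gross pay minus a flat 20% withholding.
    hours, rate = employee
    gross = hours * rate
    return gross * 0.80

if __name__ == "__main__":
    employees = [(40, 25.0), (35, 30.0), (45, 22.0)]
    with ProcessPoolExecutor() as pool:
        pays = list(pool.map(net_pay, employees))   # map: one record per task
    total = reduce(lambda a, b: a + b, pays, 0.0)   # reduce: combine results
    print(round(total, 2))                          # 2432.0
```

Because no employee's result depends on any other's, the map step parallelizes across however many cores exist; only the cheap reduce step is sequential.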

    Having eight, sixteen, or more cores, even on higher-end desktop machines, would be helpful. Saying there is no software to take advantage of it is a chicken-and-egg problem. It's like FORTRAN developers saying nobody needs arrays with more than 3 dimensions, because nobody has written software using arrays with more than 3 dimensions. But if the language doesn't allow more than 3 dimensions, you're less likely to see a lot of software that uses them. :-)

    I also don't mean super duper cores that boost the price into the stratosphere. A Raspberry Pi 3 has 4 cores. How about sixteen or more of that type of core? Or how about a board that Raspberry Pi compute modules could plug into, so you could start with only one module and add a few more per month to keep it affordable for hobbyists?

    More cores. If you build it, they will come.

    --
    The lower I set my standards the more accomplishments I have.
  • (Score: 2) by takyon on Monday January 16 2017, @06:16PM


    If you really want more good cores, you eventually need to stack them.

    Stacking could be the most important research in classical computing. If they manage to make a CPU or GPU with hundreds or thousands of layers, that's orders of magnitude of speedup for the embarrassingly parallel problems.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by DannyB on Monday January 16 2017, @07:01PM


      I suppose that once that many cores, even small ones, are built into a chip, the issue of removing heat once again becomes significant.

      --
      The lower I set my standards the more accomplishments I have.
  • (Score: 2, Interesting) by Anonymous Coward on Monday January 16 2017, @07:29PM


    Stacking layers of complex processor cores on one chip seems unlikely in the near future. The start of this article https://en.wikipedia.org/wiki/Semiconductor_device_fabrication#List_of_steps [wikipedia.org] mentions that there are often 300 processing steps in current wafer production. The link goes to a list of the commonly used steps, which are mixed and matched depending on the types of devices being built. Even a small error rate (process variation, etc.) in each of these steps reduces the overall yield of good devices from a 300mm wafer.

    A good friend is an industry veteran and he says that every new generation starts out with low or zero yield. After intense development, simpler products can be raised to near 100% yield.

    Adding another whole layer of processors on top would roughly double the number of processing steps, doubling the current 6-8 week processing time for each wafer and greatly reducing yield unless every step were even closer to perfect.
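The compounding is easy to quantify: if each of N steps independently succeeds with probability p, overall yield goes as p^N, so doubling the step count squares the yield. A toy calculation (the 99.9% per-step figure is illustrative, not industry data):

```python
def wafer_yield(p_step, steps):
    # If every step independently succeeds with probability p_step,
    # overall per-die yield is p_step ** steps.
    return p_step ** steps

print(round(wafer_yield(0.999, 300), 3))   # ~0.741 at 300 steps
print(round(wafer_yield(0.999, 600), 3))   # ~0.549 after doubling the steps
```

Even a 0.1% per-step failure rate cuts yield by a quarter at 300 steps, and doubling the steps squares that loss.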

    • (Score: 2) by takyon on Tuesday January 17 2017, @01:19AM


      We already have a type of stacked/layered device: 3D/vertical NAND flash.

      When yields aren't good enough, you could still sell a product by disabling faulty cores. A 3D chip with more cores has more die area that could be disabled.

      There are alternate means of constructing the chip, such as molecular self-assembly, that might allow faster production than lithography (and be better suited to stacking). At the very least, there's active research from all angles, and a realization that EUV lithography has serious problems and delays associated with it.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Monday January 16 2017, @09:47PM


    How about dataflow architecture?