
posted by on Thursday April 20 2017, @12:22PM
from the it's-so-simple dept.

One of the biggest problems with computers, dating to the invention of the first one, has been finding ways to keep them cool so that they don't overheat or shut down.

Instead of combating the heat, two University of Nebraska-Lincoln engineers have embraced it as an alternative energy source that would allow computing at ultra-high temperatures.

Sidy Ndao, assistant professor of mechanical and materials engineering, said his research group's development of a nano-thermal-mechanical device, or thermal diode, came after flipping around the question of how to better cool computers.

"If you think about it, whatever you do with electricity you should (also) be able to do with heat, because they are similar in many ways," Ndao said. "In principle, they are both energy carriers. If you could control heat, you could use it to do computing and avoid the problem of overheating."

They documented their device working in temperatures up to 630 degrees Fahrenheit (332 Celsius).


Original Submission

 
  • (Score: 2) by ledow (5567) on Thursday April 20 2017, @01:10PM (#496827) Homepage (4 children)

    But the heat isn't really waste. That's the energy used to modify the voltages on the lines. It ALSO escapes as heat when you change the direction of enough current at enough speed (frequency). It's a result of the thing you want to do; it's not even really a byproduct as such, it's just literally the energy you don't want in that line any more going somewhere else.

    Unfortunately, eliminating it would require all kinds of changes to the way we work (even optical computing - technically, that requires something, somewhere to heat and cool at the frequency of the light pulses you want to make).

    I'm not sure you could move that heat any more efficiently, because moving it isn't the problem - heatsinks are called that for a reason: they're pretty good at pulling heat out of the underlying device and into themselves, and they're a damn sight cheaper than fancy tech (even Peltier coolers are quite expensive in comparison).

    The problem is that the heat is inevitable. We can move it where we like quite well and cheaply, but we still need to get rid of it. I don't think computers whose components operate at higher temperatures fix anything there. You're still using electrical power to heat things up (albeit incidentally), and the hotter they get, the more power you're sinking into them in the first place.

    The problem is still how to get rid of the heat, even if it's served its purpose or could be used for other things.
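    (The switching loss described above is the standard CMOS dynamic-power picture, P ≈ α·C·V²·f: every transition charges or discharges a capacitive load, and that energy ends up as heat. A minimal sketch follows; the activity factor, capacitance, and voltage are illustrative assumptions, not figures for any real chip.)

```python
# Back-of-envelope CMOS dynamic power: P = alpha * C * V^2 * f.
# All constants below are illustrative assumptions, not real chip data.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Heat dissipated by charging/discharging capacitive loads each cycle."""
    return alpha * c_farads * v_volts ** 2 * f_hz

alpha = 0.1    # fraction of gates switching per cycle (assumed)
cap = 10e-9    # total switched capacitance in farads (assumed)
volts = 1.0    # supply voltage in volts (assumed)

for f_ghz in (1, 4, 8):
    p = dynamic_power(alpha, cap, volts, f_ghz * 1e9)
    print(f"{f_ghz} GHz -> {p:.1f} W of heat")  # 1.0 W, 4.0 W, 8.0 W
```

    At fixed voltage the heat grows linearly with frequency; in practice, higher clocks also demand higher voltage, which is why power rises much faster than linearly.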

  • (Score: 0) by Anonymous Coward on Thursday April 20 2017, @01:40PM (#496845)

    Just sayin'.

  • (Score: 2) by kaszz (4211) on Friday April 21 2017, @03:42AM (#497211) Journal (2 children)

    If, however, the semiconductor device can tolerate more heat, less efficient heat dissipation can be allowed.
    That would allow 100 GHz processors to come out eventually, given what is in the laboratories (heat is what kills them now).

    • (Score: 2) by ledow (5567) on Friday April 21 2017, @06:57AM (#497276) Homepage (1 child)

      If heat were all that killed them, we could just put more cooling on. We can cool things down to liquid-nitrogen temperatures really easily, especially in things like specialist supercomputers where expense, power consumption, heat generation, size of equipment etc. play second fiddle to raw performance.

      Fact is that even huge supercomputers with liquid cooling can't go much beyond 8-9 GHz.

      The reason is much more to do with this: you can't make the processor larger, because signals can't propagate across it in time (literally a limitation of how far a signal can travel at a fraction of the speed of light) without special handling. If you put in that special handling, it slows everything down and turns the chip much more into a kind of multi-core system, where you have to ensure that one part is separate from another and waits for its signals. And power consumption scales roughly cubically with the frequency of the processor.
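      (To put rough numbers on that propagation limit: in one clock period a signal covers only a few millimetres at tens of GHz. A minimal sketch; the 0.5c on-chip signal speed is an assumed ballpark, not a measured figure.)

```python
# How far can a signal travel in one clock period?
# The 0.5c on-chip propagation speed is an assumed ballpark figure.
C_LIGHT = 3.0e8               # speed of light in vacuum, m/s
SIGNAL_SPEED = 0.5 * C_LIGHT  # assumed on-chip signal speed, m/s

for f_ghz in (1, 10, 100):
    period_s = 1.0 / (f_ghz * 1e9)           # one clock period, seconds
    reach_mm = SIGNAL_SPEED * period_s * 1e3  # distance covered, millimetres
    print(f"{f_ghz:>3} GHz: period {period_s * 1e12:.0f} ps, "
          f"reach ~{reach_mm:.1f} mm")
```

      At 100 GHz a signal reaches only about 1.5 mm per cycle under these assumptions, well under the size of a typical die, which is why cross-chip timing needs that special handling.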

      • (Score: 2) by kaszz (4211) on Friday April 21 2017, @07:18AM (#497289) Journal

        What prevents processors from going beyond 8-9 GHz, then? And what kind of processors work at that speed, btw?

        Seems the path forward is to have something that can switch signals using less power and with higher efficiency.

        The limits of physics are a b-tch ;-)