Journal by cafebabe

(This is the 57th of many promised articles which explain an idea in isolation. It is hoped that ideas may be adapted, linked together and implemented.)

The quest for faster computing and reduced energy consumption may lead to widespread use of optical computing. This has been predicted since at least 1971 but has never materialized, owing to numerous difficulties.

The most significant difficulty is the scale of integration. While it is possible to manufacture electronic transistors at 22nm (or various exaggerations of 15nm), the use of infra-red lasers limits optical computing to a theoretical minimum feature size of 600nm. Use of gallium in one or more guises may reduce this to 380nm. Significantly reducing this limit would require the development and safe handling of extremely small X-ray sources or similar. As a matter of practicality, I'm going to assume that optical computing is powered by a 410nm gallium blue laser diode.
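As a rough sanity check (the diffraction-limit model and the numerical aperture of 0.5 are my own illustrative assumptions, not figures from any particular device), the minimum resolvable feature scales with the wavelength of the light used:

    d ≈ λ / (2 × NA) ≈ 410nm / (2 × 0.5) ≈ 410nm

This is in the same ballpark as the 380nm-600nm figures above, which is why moving substantially below them calls for much shorter wavelengths such as X-rays.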

While the etching of electronic transistors has received considerable funding and development, optical computing competes at an increasing disadvantage. If optical circuits are manufactured using two-dimensional etching then we will not be able to have optical processors with 100 million transistors unless the optical substrate is very large, or is manufactured in very small pieces which are then stacked and connected in a manner which has not yet been developed. Alternatively, some form of holographic etching may have to be developed.
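To put that in perspective, here is some back-of-envelope arithmetic of my own, assuming that the area of a device scales with the square of the minimum feature size:

    (600nm / 22nm)² ≈ 744
    100,000,000 / 744 ≈ 134,000

So the die area which accommodates 100 million electronic transistors at 22nm would accommodate only on the order of 100,000 optical devices at 600nm, which is why the substrate must be very large, stacked, or holographically etched.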

I'm going to assume that an optical CPU uses no more than 10,000 gates. At this point, we're at the same scale of integration as 1970s CPU designs. An optical CPU may run at 20THz but it will have no more gates than a Z80, and scope to increase the gate count may be very limited; for example, electronic interconnections between optical components may be significantly slower than the optical connections within them. We may face the foreseeable situation of an optical 8086 (or similar) emulating a much more recent iteration of x86. This would include optical emulation of cache tiers and wide registers. Specifically, the old wisdom of processing data in 4-bit or 16-bit chunks may be re-established as a matter of necessity.
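As a minimal sketch of what chunked processing looks like in an emulator (the 16-bit chunk width and the function below are my own illustration, not part of any proposed optical instruction set), a 64-bit addition carried out on a 16-bit datapath:

#include <stdint.h>

/* Add two 64-bit values held as four 16-bit chunks (least significant
 * chunk first), propagating a carry between chunks -- the classic
 * multi-precision technique a narrow register file forces on you. */
static void add64_in_16bit_chunks(const uint16_t a[4], const uint16_t b[4],
                                  uint16_t sum[4])
{
    uint32_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint32_t partial = (uint32_t)a[i] + (uint32_t)b[i] + carry;
        sum[i] = (uint16_t)(partial & 0xFFFFu);
        carry = partial >> 16;
    }
}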

Emulation will allow optimizations which are infeasible in hardware. For example, if four-way SIMD is used to calculate three-dimensional co-ordinates, an emulator can peek ahead in the instruction stream and see which lanes are read before the register is over-written. It is then possible to calculate only the three lanes required and leave one quarter of the register unchanged. In hardware, this would require more circuitry and energy than it would save. In software, this could make a program run faster while saving energy.
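A minimal sketch of that idea, assuming a software emulator with an invented four-lane instruction format (none of the names below come from a real ISA):

#include <stddef.h>

/* Hypothetical 4-lane vector operation in a software emulator.  'live'
 * is a bitmask of lanes that the look-ahead pass found to be read before
 * the destination register is next over-written; dead lanes are skipped. */
typedef struct { float lane[4]; } vec4;

static void emulate_vec4_mul(vec4 *dst, const vec4 *a, const vec4 *b,
                             unsigned live)
{
    for (int i = 0; i < 4; i++) {
        if (live & (1u << i))
            dst->lane[i] = a->lane[i] * b->lane[i];
        /* else: leave dst->lane[i] unchanged, saving the work entirely */
    }
}

/* A trivial look-ahead: scan forward until the destination register is
 * written again, OR-ing together the lane masks of instructions that
 * read it.  The instruction format is invented for illustration only. */
struct insn { int reads_reg, writes_reg; unsigned lane_mask; };

static unsigned lanes_live_after(const struct insn *stream, size_t n,
                                 size_t pc, int reg)
{
    unsigned live = 0;
    for (size_t i = pc + 1; i < n; i++) {
        if (stream[i].reads_reg == reg)
            live |= stream[i].lane_mask;
        if (stream[i].writes_reg == reg)
            break;              /* register over-written: stop scanning */
    }
    return live;
}

The look-ahead costs a handful of comparisons per decoded instruction, which is cheap in software but, as the paragraph above notes, would be dead weight as dedicated circuitry.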

I presume that main memory will remain electronic. This would maintain expected storage density. However, DRAM may be superseded by persistent memory.

Generic, multi-core designs will become increasingly desirable. For neural simulation, decompression of dendrite weights and the subsequent floating point calculations will all be performed by small integer processors. The dabblers who make their own image format, filing system or init system will move to instruction set design. This will create a profusion of incompatible applications, compilers and execution environments which will make Android look quaint. Despite increasing I/O bottlenecks, theoreticians will continue to advocate pure message passing.
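As a sketch of the sort of work those small integer cores could do (the signed 8-bit weight format and the shared power-of-two scale are my own assumptions; real dendrite compression schemes may differ), a weighted sum computed entirely in integer arithmetic:

#include <stdint.h>
#include <stddef.h>

/* "Decompress" dendrite weights stored as signed 8-bit values with a
 * shared power-of-two scale and accumulate a weighted sum using only
 * integer operations; a later stage could convert the fixed-point
 * result to floating point if a host expects it. */
static int32_t weighted_sum_q8(const int8_t *weights, const uint8_t *inputs,
                               size_t n, unsigned shift)
{
    int64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int64_t)weights[i] * (int64_t)inputs[i];
    return (int32_t)(acc >> shift);   /* apply the shared 2^-shift scale */
}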
