
posted by cmn32480 on Sunday May 21 2017, @10:31AM
from the maybe-IBM-isn't-all-services dept.

https://www.hpcwire.com/2017/05/18/ibm-d-wave-report-quantum-computing-advances/

IBM said this week it has built and tested a pair of quantum computing processors, including a prototype of a commercial version. That progress follows an announcement earlier this week that commercial quantum computer developer D-Wave Systems has garnered venture funding that could total up to $50 million to build its next-generation machine with up to 2,000 qubits.

[...] Meanwhile, IBM researchers continue to push the boundaries of quantum computing as part of its IBM Q initiative launched in March to promote development of a "universal" quantum computer. Access to a 16-qubit processor via the IBM cloud would allow developers and researchers to run quantum algorithms. The new version replaces an earlier 5-qubit processor.

The company also rolled out the first prototype of a 17-qubit commercial processor on Wednesday (May 17), making it IBM's most powerful quantum device to date. The prototype will serve as the foundation of IBM Q's commercial access program. The goal is to eventually scale future prototypes to 50 or more qubits.
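
For a sense of what running a quantum algorithm on such hardware looks like in practice, here is a minimal two-qubit Bell-state circuit sketched with Qiskit, IBM's quantum SDK; note that the API shown here postdates this article, and a local statevector simulation stands in for the actual cloud backend:

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)      # put qubit 0 into an equal superposition
    qc.cx(0, 1)  # entangle qubit 1 with qubit 0, yielding a Bell state

    # Inspect the state locally; on IBM Q one would submit the circuit
    # to a cloud backend and sample measurement counts instead.
    print(Statevector.from_instruction(qc))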

The article also notes Hewlett Packard Enterprise's prototype of "The Machine", with 160 terabytes of RAM.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Monday May 22 2017, @02:57AM (#513289) (4 children)

    real world problems, particularly for power limited applications, are far below a "certain size"

    We have no idea what "real world problems" will look like in a decade or two. People today parse kilobytes of data literally hundreds of times a day. How much parsing is involved in a normal person's daily web browsing? Hundreds of parses of several kilobytes each is probably a significant underestimate, especially considering that many people run a few shoddily designed apps that poll frequently, and considering how much JS gets pulled into a mundane web page. This state of affairs would have seemed ridiculous a few decades back.
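
    A quick back-of-envelope sketch in Python, with every figure an assumption chosen for illustration rather than a measurement:

        # All counts are illustrative assumptions, not measurements.
        page_loads_per_day = 100      # assumed moderate browsing habit
        parses_per_load = 20          # assumed HTML/JS/JSON parses per page
        kb_per_parse = 50             # assumed typical payload size

        total_mb = page_loads_per_day * parses_per_load * kb_per_parse / 1000
        print(f"~{total_mb:.0f} MB parsed per day")   # ~100 MB

    Even with conservative inputs, that is tens of megabytes of parsing per person per day.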

    Kilobytes may seem tiny, but that's also the point: megabytes are starting to look tiny now, and with HD HDR 60 fps video on the horizon, gigabytes will start to look small pretty fast too. Literally billions of bytes will look small.

    Furthermore, this all assumes that classical computers will maintain their large efficiency advantage over quantum computers. If QC turns out to be significantly useful (and who knows yet what mundane stuff it will be useful for; the research probably doesn't focus on using QC to make video decoding faster, since that won't matter until QC is cheap and widely available), then unless there's a fundamental reason classical computers will keep that advantage, I just don't see them withstanding the potential power efficiency of an equally developed QC.

    Sure, QC memory at the moment is sub-tubes-of-mercury in terms of manufacturability, but modern machines don't use tubes of mercury as memory, so the limits on manufacturing those efficiently turned out not to matter.

    Unless some property of the physics being exploited requires hard-to-manufacture things, I just don't see QC perpetually lagging behind CC.

    we're nowhere near a level of technology development where this is relevant

    A decent QC implementation for scheduling or TSP has the potential to save airlines and shipping companies enough money that their industries could support IBM offering QCaaS. Granted, development is currently slow, but given the rate of technological progress, I'm not willing to accept that IBM-funded research couldn't advance fast enough to be commercially viable.
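
    To make the TSP point concrete, here is a hedged sketch, in plain Python/NumPy rather than D-Wave's or IBM's actual toolchain, of encoding a toy 3-city tour as a QUBO, the quadratic binary form that quantum annealers minimise; the brute-force check at the end only works at toy size:

        import itertools
        import numpy as np

        dist = np.array([[0, 2, 9],
                         [2, 0, 6],
                         [9, 6, 0]])   # toy symmetric 3-city distance matrix
        n = len(dist)
        P = 10 * dist.max()            # penalty weight dominating any tour cost

        def idx(i, t):                 # x[i, t] = 1 iff city i is visited at step t
            return i * n + t

        Q = np.zeros((n * n, n * n))

        # Objective: distance between the cities at consecutive steps (cyclic tour).
        for t in range(n):
            for i in range(n):
                for j in range(n):
                    if i != j:
                        Q[idx(i, t), idx(j, (t + 1) % n)] += dist[i, j]

        # Constraints as penalties: (sum - 1)^2 per city and per step, using x^2 = x.
        for i in range(n):
            for t in range(n):
                Q[idx(i, t), idx(i, t)] -= P
                for t2 in range(t + 1, n):
                    Q[idx(i, t), idx(i, t2)] += 2 * P
        for t in range(n):
            for i in range(n):
                for i2 in range(i + 1, n):
                    Q[idx(i, t), idx(i2, t)] += 2 * P

        # Brute-force minimisation; an annealer would instead sample
        # low-energy states of the same Q.
        best = min(itertools.product([0, 1], repeat=n * n),
                   key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
        print("tour:", [max(range(n), key=lambda i: best[idx(i, t)]) for t in range(n)])

    Real airline scheduling adds crews, gates, and maintenance windows, but it reduces to the same kind of quadratic form, which is exactly what makes it a candidate QCaaS workload.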

  • (Score: 1) by khallow (3766) on Monday May 22 2017, @03:55AM (#513305) (3 children)

    Kilobytes may seem tiny, but that's also the point: megabytes are starting to look tiny now, and with HD HDR 60 fps video on the horizon, gigabytes will start to look small pretty fast too. Literally billions of bytes will look small.

    QC won't help at all with the need for larger data pipes. And it's worth noting that classical computation is way ahead of bandwidth: existing computers have no trouble keeping up with data decoding and decompression, which, let us note, isn't that hard a problem for classical computers anyway.
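
    A hedged illustration of that gap (throughput varies by machine; the 100 Mbit figure is an assumed home connection):

        import time
        import zlib

        payload = b"soylentnews is people " * 500_000   # ~11 MB of compressible text
        blob = zlib.compress(payload)

        start = time.perf_counter()
        for _ in range(10):
            zlib.decompress(blob)
        elapsed = time.perf_counter() - start

        mb = 10 * len(payload) / 1e6
        print(f"{mb:.0f} MB decompressed in {elapsed:.2f} s "
              f"({mb / elapsed:.0f} MB/s vs ~12 MB/s on a 100 Mbit line)")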

    A decent QC implementation for scheduling or TSP has the potential to save airlines and shipping companies enough money that their industries could support IBM offering QCaaS. Granted, development is currently slow, but given the rate of technological progress, I'm not willing to accept that IBM-funded research couldn't advance fast enough to be commercially viable.

    The thing is, current computers are way more than fast enough to do that.

    Unless some property of the physics being exploited requires hard-to-manufacture things, I just don't see QC perpetually lagging behind CC.

    Such as the need for sufficient isolation from the rest of the universe?

    • (Score: 0) by Anonymous Coward on Monday May 22 2017, @04:57AM (#513319) (2 children)

      QC won't help at all with the need for larger data pipes

      I meant to imply that problems are quickly becoming large enough to benefit from QC despite its efficiency issues while it catches up with CC.

      The thing is, current computers are way more than fast enough to do that.

      They're certainly good enough, but even fraction-of-a-percentage-point improvements in these approximations can mean millions saved. I doubt current implementations are so close to optimal that a better one wouldn't have a market, but that is an ill-informed guess.
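
      Back-of-envelope, with the fuel budget an assumed round number rather than a sourced figure:

          annual_fuel_spend = 5_000_000_000  # assumed large-carrier fuel budget, USD
          improvement = 0.001                # a 0.1% better schedule/routing
          print(f"${annual_fuel_spend * improvement:,.0f} saved per year")  # $5,000,000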

      More importantly, knocking the average-case running time's exponent down a bit would enable optimizations which are currently infeasible for larger problems. I can't think of an example off the top of my head, but I'm sure there are a fair few algorithms people would prefer had their exponents slashed.
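
      For a sense of what shaving an exponent buys, a sketch with an assumed operations budget:

          BUDGET = 1e12  # assumed affordable operations per run

          for exponent in (3, 2):
              n = round(BUDGET ** (1 / exponent))
              print(f"n^{exponent}: largest feasible n is about {n:,}")
          # n^3: largest feasible n is about 10,000
          # n^2: largest feasible n is about 1,000,000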

      Such as the need for sufficient isolation from the rest of the universe?

      I'm ignorant of the requirements, but since the definition of "hard to manufacture" is entirely dependent on current tech levels, and robotic assembly is growing more prevalent, I wouldn't be at all surprised to see assembly automated. The materials and space required seem to be the limiting factors. Materials will probably become slowly cheaper as we gain a better understanding of engineering these systems, but maybe not. As for space, that may be a limiting factor, but particles are pretty small compared to humans, and I'd be somewhat surprised if qubits required roughly human-scale space to remain isolated.

      • (Score: 1) by khallow (3766) on Monday May 22 2017, @05:33AM (#513326) (1 child)

        The thing is, current computation is already vast overkill for a variety of computing needs, while quantum computers remain extremely primitive (a handful of qubits, for example). Gambling a huge company on the remote possibility that QC can be massively accelerated at some point in the near future seems like a bad idea to me.
        • (Score: 0) by Anonymous Coward on Monday May 22 2017, @05:53AM (#513335)

          You're right, it isn't worth the risk; if they do get in early, though, it may prove very valuable in the future.