posted by janrinok on Wednesday December 24 2014, @12:45AM   Printer-friendly
from the flipping-hell dept.

Spotted on Hacker News is a link to a paper on flipping memory bits without direct access.

In this paper, we expose the vulnerability of commodity DRAM chips to disturbance errors. By reading from the same address in DRAM, we show that it is possible to corrupt data in nearby addresses. More specifically, activating the same row in DRAM corrupts data in nearby rows. We demonstrate this phenomenon on Intel and AMD systems using a malicious program that generates many DRAM accesses. We induce errors in most DRAM modules (110 out of 129) from three major DRAM manufacturers.

The paper notes that this problem is increasingly prevalent in recent devices, suggesting that more advanced process technologies exacerbate it, and also highlights the physical mechanism underlying the corruption:

We identify the root cause of DRAM disturbance errors as voltage fluctuations on an internal wire called the wordline. DRAM comprises a two-dimensional array of cells, where each row of cells has its own wordline. To access a cell within a particular row, the row’s wordline must be enabled by raising its voltage — i.e., the row must be activated. When there are many activations to the same row, they force the wordline to toggle on and off repeatedly. According to our observations, such voltage fluctuations on a row’s wordline have a disturbance effect on nearby rows, inducing some of their cells to leak charge at an accelerated rate. If such a cell loses too much charge before it is restored to its original value (i.e., refreshed), it experiences a disturbance error

A direct link to the paper [PDF]
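
The attack pattern itself is short: read two addresses that sit in different rows of the same DRAM bank, flush both from the cache so every read actually reaches DRAM, and loop. Below is a minimal C sketch of that loop, assuming an x86 machine with SSE2 intrinsics; picking a pair of addresses that really land in different rows of the same bank is chipset-specific and is not shown.

    #include <emmintrin.h>   /* _mm_clflush, _mm_mfence (SSE2) */
    #include <stdint.h>

    /* Alternately read two addresses and flush them from the cache so that every
     * iteration re-activates their DRAM rows.  x and y must map to different rows
     * in the same bank for the disturbance effect to appear; finding such a pair
     * is chipset-specific and left out here. */
    void hammer(volatile uint64_t *x, volatile uint64_t *y, long iterations)
    {
        for (long i = 0; i < iterations; i++) {
            (void)*x;                      /* read activates x's row */
            (void)*y;                      /* read activates y's row */
            _mm_clflush((const void *)x);  /* evict so the next read goes to DRAM */
            _mm_clflush((const void *)y);
            _mm_mfence();                  /* keep the reads and flushes ordered */
        }
    }

With enough iterations inside a single 64 ms refresh window, some cells in the neighbouring rows can lose their charge before they are refreshed, which is the disturbance error described above.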

  • (Score: 1) by steveha on Wednesday December 24 2014, @12:58AM

    by steveha (4100) on Wednesday December 24 2014, @12:58AM (#128809)

    For non-gaming systems, I try to buy only ECC RAM. And according to one comment [ycombinator.com] in the discussion, ECC RAM prevents this issue. (Another comment [ycombinator.com] says this issue may be able to make a two-bit error that ECC could detect but not silently fix.)

    • (Score: 2) by Techwolf on Wednesday December 24 2014, @05:43AM

      by Techwolf (87) on Wednesday December 24 2014, @05:43AM (#128851)

      I was looking to upgrade my system and wanted to move to ECC RAM as I wanted a stable system. To my disappointment, all the top MBs out there are non-ECC. Is there a good ECC MB for desktop systems?

      • (Score: 2) by SlimmPickens on Wednesday December 24 2014, @06:46AM

        by SlimmPickens (1056) on Wednesday December 24 2014, @06:46AM (#128863)

        Check out Tyan. They're stability freaks and they make a few boards that support ECC with socket 1150.

        • (Score: 3, Informative) by Hairyfeet on Thursday December 25 2014, @08:39AM

          by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Thursday December 25 2014, @08:39AM (#129061) Journal

          Or you could just go AMD, as most of their chips support ECC and a good board supporting ECC in AM3+ is pretty cheap.

          --
          ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
          • (Score: 3, Informative) by SlimmPickens on Thursday December 25 2014, @07:27PM

            by SlimmPickens (1056) on Thursday December 25 2014, @07:27PM (#129138)

            I just said that because it cuts to the heart of the matter; parent can use whatever chips he likes. I didn't think he was asking about AMD though.

            Personally, I don't buy cheap boards anymore, too many issues. What's an extra $200 in the context of a box filled with ECC RAM and presumably a lot more besides? Why have a weak link when everything else is so reliable?

            I discovered Tyan way back when AMD had them make the first Opteron boards. They still make very good AMD boards.

            • (Score: 2) by Hairyfeet on Friday December 26 2014, @08:14AM

              by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Friday December 26 2014, @08:14AM (#129245) Journal

              Dude, after building more PCs than most have had hot meals? Honestly there really is ZERO difference between cheapo boards and expensive boards anymore; today it's all about the bells and whistles. The cheap board may have 4 USB ports while the expensive one has 10, the cheapo has 1 PCIe slot to the other's 3, shit like that. And if you are just building a basic workstation and it's already gonna have ECC? Not really any point in blowing a wad on bells and whistles. The Tyan are nice boards but again it's all about the bells.

              And before anybody chimes in with a horror story, this was NOT always the case: Abit and Foxconn were notorious for building some seriously shit boards, but that was then, this is now. Today even your $30 Biostar has solid-state caps and decent VRMs; honestly, if you look at the reviews of so-called "bad boards", a good 90% of the time it comes down to the builder expecting the defaults to always be correct when that isn't always the case. For example, I love to build Asrock boards for gaming PCs, but you read the reviews and you'd think they were just awful. The reason? Those that buy the boards do not realize that ALL Asrock boards are really gamer boards, and because of this they tend to be aggressive with RAM timings. If you just go in and set the timing manually? They purr like kittens.

              But with the FX8 going for so cheap, it's crazy how cheap you can build a really nice workstation, especially if you aren't looking to build some uber gamer rig.

              --
              ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
              • (Score: 2) by SlimmPickens on Friday December 26 2014, @09:33AM

                by SlimmPickens (1056) on Friday December 26 2014, @09:33AM (#129252)

                Abit made some brilliant boards, that's why DFI (making awesome Lanparty boards at the time) hired most of their engineers when they folded.

                I had to replace a Gigabyte UD5 of my own this year, and another UD5 at work last year (work having only two people but eight computers). I hate replacing boards. A company I used to work for (where I built maybe a thousand computers) sells a lot of whiteboxes, and they certainly think the cheaper boards have reliability problems (I still help them with their network).

                There's more to it than caps and VRMs. Trace material and thickness, for example. There's no tantalum in the traces these days. It's precisely because of the price competition on features that they're cutting it fine in other areas. Maybe we're just unlucky though; I'm all ears as to what people have to say about it.

                • (Score: 2) by Hairyfeet on Sunday December 28 2014, @06:50AM

                  by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Sunday December 28 2014, @06:50AM (#129644) Journal

                  Then you obviously didn't buy any Abit boards in the last 2 years of their existence, because those last two years? Absolute shite. Their caps were crap, traces so thin you could kill the boards merely by bumping a standoff trying to get the backplate lined up, and the drivers! Total garbage and NEVER updated. Then you add in the fact that the CPU support list was total fiction (which makes me wonder if some of their copy boys didn't go to Biostar, because they too have total bullshit CPU support lists!) and the fact that socket traces were completely hit or miss, so much so that you would have to plug into every RAM and PCI slot on a brand new board just to make sure there weren't dead slots from the factory? Yeah, really wasn't surprised when they went under.

                  Their engineers could have walked on water but when the build quality and QA is THAT poor it really doesn't matter how good the engineering is.

                  --
                  ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
              • (Score: 2) by cafebabe on Saturday December 27 2014, @08:39AM

                by cafebabe (894) on Saturday December 27 2014, @08:39AM (#129423) Journal

                The MTBF (and consequences) are fundamentally different when comparing one hot-rod desktop and a node of a large server farm. Although the latter may be outside of your experience, the quantity of servers purchased is very significant. For example, Google has more than 1.5 million servers. Microsoft and Yahoo have more than 1 million servers. All of these servers have to be replaced every 3-5 years to remain economically competitive. And there are thousands of organizations purchasing servers in smaller quantities.

                --
                1702845791×2
                • (Score: 2) by Hairyfeet on Saturday December 27 2014, @08:55AM

                  by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Saturday December 27 2014, @08:55AM (#129427) Journal

                  Uhhh, we were talking about desktops; servers? A different kettle of fish entirely. There the extra money spent gives you thicker boards, better traces, AND the better bells and whistles. But when you are talking about desktops? Frankly the entire market is so cutthroat and so much is sold on bells and whistles that actual build quality is pretty much the same, and I have to say even the cheapos have gotten pretty decent when it comes to build quality, BUT, and this is a big stinky BUT, you have to know the catches. Asrock with RAM timings for example, or the fact that Biostar are BIG FUCKING LIARS when it comes to what CPUs they support... hint: if it wasn't already out when the board came out? FORGET IT, as they NEVER test newer chips and just "assume" that a 95W chip will work in a 95W socket while ignoring things like the extra headroom required for chips with turbo.

                  But with desktops, I've built with $30 boards and $300 boards and as far as MTBF? Meh, it's about the same with all of 'em: about 5-7 years if well ventilated, 3-5 in shitty Dell chokeboxes.

                  --
                  ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 2, Informative) by steveha on Wednesday December 24 2014, @06:08PM

        by steveha (4100) on Wednesday December 24 2014, @06:08PM (#128959)

        Is there a good ECC MB for desktop systems?

        I haven't researched this lately, but here's how it was the last time I checked.

        Intel pretty much reserves ECC as an expensive feature. Server Xeon chips support ECC, but the i3/i5 class chips do not, and only certain models of i7 do.

        AMD, on the other hand, is #2 and trying harder, and all their desktop CPUs and chipsets support ECC. (Their APUs however do not.) And Asus makes quality motherboards that do support ECC.

        The last desktop I built had an AMD FX-8350, Asus motherboard, ECC RAM, an extra-quiet CPU cooler by Arctic Cooling, and an extra-quiet Seasonic power supply. All in a quiet Antec case. My wife is happy with it, and I really want to build one for myself.

        • (Score: 2) by cafebabe on Saturday December 27 2014, @08:42AM

          by cafebabe (894) on Saturday December 27 2014, @08:42AM (#129424) Journal

          This type of market segmentation may have been beneficial to Intel in the past. However, from the scientific paper, it means that any Intel processor paired with non-ECC RAM made on or after week 26 of 2010 should be regarded as unreliable.

          --
          1702845791×2
      • (Score: 4, Informative) by LoRdTAW on Wednesday December 24 2014, @06:23PM

        by LoRdTAW (3755) on Wednesday December 24 2014, @06:23PM (#128964) Journal

        You have to look into workstation and server boards; they almost always support ECC. Be sure your CPU supports ECC as well. Intel's ARK will tell you everything about their products. AMD is very short on info, though 3rd-party sites have such information.

        Here is an example of an ECC workstation/Server board:
        http://www.newegg.com/Product/Product.aspx?Item=N82E16813157561 [newegg.com]

        Just be sure you look at the specs carefully. Some boards use an integrated remote management chip (IPMI) that is used for the display. These are SoCs that have Ethernet console abilities as well as other management functions. They won't utilize the GPU on a CPU, so you are stuck with 2D only. You can spot them easily enough as they only have a VGA port. Though, you could always add a PCIe graphics card.

        Here is an example of a board which uses an Aspeed IPMI chip:
        http://www.newegg.com/Product/Product.aspx?Item=N82E16813157404 [newegg.com]

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday December 24 2014, @08:46PM

        by Anonymous Coward on Wednesday December 24 2014, @08:46PM (#128988)

        I got an Asus M4A-something a few years ago and it takes ECC RAM. Memory densities are getting to the point where there needs to be several layers of error correction between you and your bits...

        http://www.asus.com/Motherboards/M4A78_PLUS/specifications/ [asus.com]

    • (Score: 2) by cafebabe on Friday December 26 2014, @11:53PM

      by cafebabe (894) on Friday December 26 2014, @11:53PM (#129373) Journal

      After further reading, it appears that ECC RAM may exacerbate the problem in certain cases. In the most contrived example, consider an ECC RAM module where all nine chips substitute two or more particular rows of RAM. In this case, any access pattern which uses these rows of RAM will have a large number of rows which are physically adjacent by one or two rows even if they aren't logically adjacent. In a more realistic case where one chip is problematic for any given access, the presence of error correction significantly increases the probability of a different class of error.

      --
      1702845791×2
  • (Score: 2) by kaszz on Wednesday December 24 2014, @02:48AM

    by kaszz (4211) on Wednesday December 24 2014, @02:48AM (#128816) Journal

    Perhaps manufacturers should take a step back when the new technology shows flaws. When physical features become so small that they end up near the physical limit of the minimum charge needed for reliable operation, then perhaps it might be an idea to consider larger features. Of course this means less memory. But computers can manage just fine without tens of gigabytes of memory provided the software is efficient. Savor that sentence: efficient software.

    A quick fix is perhaps to shorten the time between refresh cycles? The penalty is probably something like 1% of memory speed. I think that's workable.
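
    As a rough sanity check on that estimate, using typical DDR3 datasheet timings (assumptions, not figures from the paper): a refresh command is issued about every 7.8 µs (tREFI) and keeps the device busy for roughly 260 ns (tRFC), so the baseline cost is already a few percent of memory bandwidth, and halving the refresh interval roughly doubles it.

        #include <stdio.h>

        /* Back-of-the-envelope cost of refreshing twice as often.  The timing
         * values are typical DDR3 datasheet numbers, assumed for illustration. */
        int main(void)
        {
            double tREFI_ns = 7800.0;  /* average interval between REF commands (64 ms retention) */
            double tRFC_ns  = 260.0;   /* time one REF command keeps the device busy */

            double overhead_64ms = tRFC_ns / tREFI_ns;          /* roughly 3% */
            double overhead_32ms = tRFC_ns / (tREFI_ns / 2.0);  /* roughly 7% */

            printf("64 ms refresh overhead: %.1f%%\n", overhead_64ms * 100.0);
            printf("32 ms refresh overhead: %.1f%%\n", overhead_32ms * 100.0);
            return 0;
        }

    So the extra cost is probably in the low single digits of percent rather than 1%, but arguably still workable.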

    • (Score: 2) by emg on Wednesday December 24 2014, @03:15AM

      by emg (3464) on Wednesday December 24 2014, @03:15AM (#128825)

      Sure, you'll have problems... if you run programs that repeatedly access RAM, then flush the cache, then access the same RAM again.

      Can you think of any possible reason to do that in the real world?

      • (Score: 0) by Anonymous Coward on Wednesday December 24 2014, @03:45AM

        by Anonymous Coward on Wednesday December 24 2014, @03:45AM (#128828)
        When you're trying to break/change stuff in a memory location you're not supposed to be able to access directly.
        • (Score: 1, Informative) by Anonymous Coward on Wednesday December 24 2014, @04:00AM

          by Anonymous Coward on Wednesday December 24 2014, @04:00AM (#128834)

          When you're trying to break/change stuff in a memory location you're not supposed to be able to access directly.

          Winner, winner, chicken dinner!

          This is a security issue, both on multi-user systems, including virtualized hosting like Amazon's cloud, and on locked-down DRM systems like the iPhone, Xbox, etc.

          • (Score: 0) by Anonymous Coward on Wednesday December 24 2014, @05:40AM

            by Anonymous Coward on Wednesday December 24 2014, @05:40AM (#128850)

            This can break compiler/runtime enforced security, like browser JS engines or the JVM, not just process-level stuff. This could be used in JS-based browser exploits, as well as privilege escalation against kernels and hypervisors. This seems very serious.

            • (Score: 2) by jmorris on Wednesday December 24 2014, @08:39AM

              by jmorris (4844) on Wednesday December 24 2014, @08:39AM (#128873)

              Doubt it can break interpreted environments, since they probably do not expose operations that would abuse the cache in the extreme patterns needed, but anywhere real native code can be executed should worry. Virtualization included, so long as it is modern hardware-assisted virt.

              This is BAD. If it isn't already being exploited it soon will be. And the only defenses are all bad. Who wants to double or quadruple the refresh overhead? But it probably should be done for any machine less than two or three years old. Certainly until a version of memtest86 that is known to be able to test for this appears. This makes the old CPU errata problems minor in comparison.

              • (Score: 3, Informative) by TheRaven on Wednesday December 24 2014, @10:22AM

                by TheRaven (270) on Wednesday December 24 2014, @10:22AM (#128880) Journal

                If it isn't already being exploited it soon will be.

                Note that this was published at ISCA, which was in June. I don't know which track it was in - it wasn't in the security track - but I've not seen anything about it since then, so it seems that DRAM makers and security experts haven't found it that reproducible.

                --
                sudo mod me up
              • (Score: 2) by cafebabe on Saturday December 27 2014, @04:53PM

                by cafebabe (894) on Saturday December 27 2014, @04:53PM (#129485) Journal

                This makes the old CPU errata problems minor in comparison.

                The Intel Core 2 errata include a write to a random address upon interrupt. Arguably, bit errors originating in memory are no less disruptive.

                --
                1702845791×2
          • (Score: 4, Informative) by cafebabe on Thursday December 25 2014, @08:19AM

            by cafebabe (894) on Thursday December 25 2014, @08:19AM (#129056) Journal

            So, if I have virtualized hosting, I can adversely affect services run by my co-hostees. For example, if I run this outside of my trading hours, it is possible to deliberately crash services outside of my hosting while minimizing impact on myself. Services which are not restored during my trading hours leave additional resources for me.

            --
            1702845791×2
        • (Score: 2) by emg on Wednesday December 24 2014, @02:50PM

          by emg (3464) on Wednesday December 24 2014, @02:50PM (#128917)

          That's unlikely to work with any data outside the current process, and code in the current process can change the RAM directly. I guess it might work for sandboxed code like a browser plugin, but if you're running malware in your browser, you're probably already screwed.

          • (Score: 2) by cafebabe on Saturday December 27 2014, @05:00PM

            by cafebabe (894) on Saturday December 27 2014, @05:00PM (#129486) Journal

            Modern DRAM may have a row length which exceeds virtual memory page size. Furthermore, it is pages located in adjacent rows which are affected. Given that RAM is made with spare rows to cover manufacturing deficiencies, any scheme to localize processes or virtual containers will fail to contain this problem. Furthermore, attempts to detect flawed cells (through exhaustive tests) may strengthen undesirable behavior within the memory.

            --
            1702845791×2
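
            To make the size mismatch above concrete, here is a deliberately simplified illustration in C. It assumes an 8 KB row across the rank and 4 KB OS pages; real controllers interleave channels, ranks and banks, so the mapping below is hypothetical.

                #include <stdint.h>
                #include <stdio.h>

                #define ROW_BYTES  (8 * 1024)   /* assumed DRAM row size across the rank */
                #define PAGE_BYTES (4 * 1024)   /* typical OS page size */

                int main(void)
                {
                    uint64_t phys = 0x2468000;          /* an arbitrary physical address */
                    uint64_t row  = phys / ROW_BYTES;   /* DRAM row it lands in (simplified mapping) */
                    uint64_t page = phys / PAGE_BYTES;  /* OS page it lands in */

                    /* Two 4 KB pages share each 8 KB row, and hammering any page in row N can
                     * disturb pages sitting in rows N-1 and N+1, whichever process owns them. */
                    printf("page %llu sits in row %llu; rows %llu and %llu hold the potential victims\n",
                           (unsigned long long)page, (unsigned long long)row,
                           (unsigned long long)(row - 1), (unsigned long long)(row + 1));
                    return 0;
                }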
      • (Score: 2) by kaszz on Wednesday December 24 2014, @03:50AM

        by kaszz (4211) on Wednesday December 24 2014, @03:50AM (#128830) Journal

        Currently I can't, but reality has a tendency to find those "oops" moments...

      • (Score: 2) by cafebabe on Saturday December 27 2014, @08:46AM

        by cafebabe (894) on Saturday December 27 2014, @08:46AM (#129426) Journal

        That's an exaggerated access pattern for the purpose of benchmarking the number of accesses required to cause an error. Real-world access patterns, such as an SQL JOIN, are sufficient to cause unintentional errors, and two mutexes are sufficient to maximize intentional errors.

        --
        1702845791×2
    • (Score: 2) by cafebabe on Saturday December 27 2014, @08:44AM

      by cafebabe (894) on Saturday December 27 2014, @08:44AM (#129425) Journal

      From the scientific paper, it seems to be the case that this problem did not occur before week 26 of 2010. I presume all memory made at a previous scale of integration met the DDR3 specification. Unfortunately, this does not apply to the current scale of integration. Most shockingly, one unnamed manufacturer produces memory which, in certain circumstances, may be more than 1,000 times less reliable than another unnamed manufacturer.

      --
      1702845791×2
  • (Score: 1) by haz_mat on Wednesday December 24 2014, @02:55AM

    by haz_mat (4951) on Wednesday December 24 2014, @02:55AM (#128819)

    Why is this issue just being discovered now and why haven't we seen problems from this yet? It doesn't sound very farfetched to repeatedly access adjacent rows in memory, but that does sound like the sort of activity that would likely take place in cache memory closer to the processor. I'm no hardware engineer, but from what I can tell their FPGA testing rig side-steps a lot of the memory architecture between the CPU and RAM in a modern system. Dare I say they just constructed a side-case in order to expose this underlying design issue with modern DRAM?

    • (Score: 2, Insightful) by dltaylor on Wednesday December 24 2014, @03:07AM

      by dltaylor (4693) on Wednesday December 24 2014, @03:07AM (#128822)

      The toughest "in the field" memory test I have encountered is a "World of Warcraft" update. That will generate errors on at least a few systems that are otherwise stable. Could be that their update code is similarly pathological w.r.t. memory with this kind of weakness.

      We have had to override BIOS SDRAM settings (slower) to get the updates to run, then restore them for game play.

      Personally, I have used server-class motherboards, ECC, and top-notch memory for a long time. I consider the access time penalty worth the couple of FPS.

      In specific response to your question, it does look like a test case that exposes a weakness, but the errors are not excusable just because most of the time the memory "happens to work".

    • (Score: 2) by kaszz on Wednesday December 24 2014, @03:55AM

      by kaszz (4211) on Wednesday December 24 2014, @03:55AM (#128832) Journal

      Seems FPGAs using DRAM have to watch out, because often you don't want to waste internal SRAM on cache.

      • (Score: 2) by TheRaven on Wednesday December 24 2014, @10:19AM

        by TheRaven (270) on Wednesday December 24 2014, @10:19AM (#128879) Journal
        BRAMs are usually cheap on FPGAs. The reason that you might not want to bother with a cache is that FPGAs generally run at a relatively low clock speed, which means that the memory access penalty is quite low - comparable to an L2 or L3 hit on a modern CPU in terms of number of clock cycles. You might still want a bit of buffering though, so that you can saturate the memory bus with burst requests.
        --
        sudo mod me up
        • (Score: 2) by kaszz on Wednesday December 24 2014, @01:13PM

          by kaszz (4211) on Wednesday December 24 2014, @01:13PM (#128897) Journal

          Buffer as in 16 bytes to deal with speed differences between A/D and DRAM, etc. Not 1 MByte of L-cache. That means any algorithm that runs on the FPGA might hammer some DRAM region quite frequently, because usually you design everything to go in lock-step.

          Guess these GByte DRAMs are not as good as the less dense ones. First penalty for these insane memory requirements.

          • (Score: 3, Informative) by TheRaven on Wednesday December 24 2014, @04:47PM

            by TheRaven (270) on Wednesday December 24 2014, @04:47PM (#128942) Journal

            You'll want to buffer more than 16 bytes. DDR channels are typically 32 bytes wide and, for best performance, you want bursts of some multiple of 16 bytes (even 64-128 byte bursts can give significantly more performance than individual 32 byte reads and writes). If your algorithm is likely to be hitting the same line repeatedly, then sticking it in a BRAM where you have 1-2 cycle latency for access makes a hell of a lot more sense than keeping it on the other side of the DDR controller where you've got 30+ cycle latencies. The most expensive thing on a modern FPGA is wires, so you might want to have narrow data paths internally (which can help with clocks, depending on what your other constraints are), but that just increases the size of your caching.

            For things that are worth doing on FPGA (other than prototyping CPUs), you generally have one of two options. Either you don't have much locality of reference, so you won't want conventional caches, but you might want fairly large load and store buffers - generally it's not worth making them smaller than a BRAM as it doesn't save any FPGA resources, so about 1KB is going to be a realistic minimum, but you also won't be hammering a single line much. Or you have very predictable locality of reference, in which case allocating a few BRAMs for caching with an application-specific policy makes sense, and then you still won't be hammering a single DRAM line much (unless, of course, the same flaw affects BRAMs on FPGA, but given the access patterns that we have we'd probably have noticed this by now if they did).

            --
            sudo mod me up
    • (Score: 2) by cafebabe on Thursday December 25 2014, @08:21AM

      by cafebabe (894) on Thursday December 25 2014, @08:21AM (#129057) Journal

      This seems to be a variation of the JVM/gooseneck lamp trick which has been known for more than a decade. However, the gooseneck lamp has been replaced with a cache flush.

      --
      1702845791×2
      • (Score: 2) by cafebabe on Saturday December 27 2014, @05:06PM

        by cafebabe (894) on Saturday December 27 2014, @05:06PM (#129488) Journal

        Upon further reading, it appears that a more sophisticated attempt to increase temperature was found to be superfluous and even flush/fence/mutex operations are superfluous to inadvertent corruption. Any access pattern which uses multiple rows of memory is sufficient to cause difficulty. This includes memcpy() or an SQL JOIN.

        --
        1702845791×2
    • (Score: 2) by cafebabe on Saturday December 27 2014, @05:04PM

      by cafebabe (894) on Saturday December 27 2014, @05:04PM (#129487) Journal

      It appears that memory made on or after week 26 of 2010 conforms to the DDR3 specification of 64ms DRAM refresh but that the current scale of integration makes this refresh period unviable under load. If you are concerned about this problem, use hardware and firmware which allows the refresh cycle to be more frequent. The authors of the scientific paper propose a stochastic scheme which significantly increases reliability while consuming approximately 0.5% of memory bandwidth. However, to be uniformly effective, it requires information about row substitution which is not available.

      --
      1702845791×2
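
      For reference, the stochastic scheme proposed in the paper (PARA) keeps no counters at all: every time a row is activated, the memory controller also refreshes one of its physical neighbours with a small probability. The C below is only a sketch of that decision logic; the probability constant and the refresh hook are illustrative, and a real implementation would live in the controller and, as noted above, would need row-substitution information to be uniformly effective.

          #include <stdint.h>
          #include <stdlib.h>

          #define PARA_PROBABILITY 0.001   /* per-activation chance of refreshing a neighbour (illustrative) */

          /* Stub standing in for the controller issuing an ACTIVATE to a row,
           * which implicitly refreshes it. */
          static void refresh_row(uint64_t row)
          {
              (void)row;
          }

          /* Conceptually invoked (in hardware) every time a row is activated. */
          static void on_row_activation(uint64_t row, uint64_t rows_per_bank)
          {
              if ((double)rand() / RAND_MAX < PARA_PROBABILITY) {
                  /* Pick one of the two physical neighbours at random. */
                  uint64_t victim = (rand() & 1) ? row + 1 : row - 1;
                  if (victim < rows_per_bank)   /* edge rows only have one neighbour */
                      refresh_row(victim);
              }
          }

          int main(void)
          {
              /* Simulate a burst of activations to one row, as a hammering loop would cause. */
              for (long i = 0; i < 1000000; i++)
                  on_row_activation(12345, 32768);
              return 0;
          }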
  • (Score: 1, Insightful) by Anonymous Coward on Wednesday December 24 2014, @05:00AM

    by Anonymous Coward on Wednesday December 24 2014, @05:00AM (#128840)

    According to our observations, such voltage fluctuations on a row’s wordline have a disturbance effect on nearby rows, inducing some of their cells to leak charge at an accelerated rate. If such a cell loses too much charge before it is restored to its original value (i.e., refreshed), it experiences a disturbance error

    That is called inductance.

    http://en.wikipedia.org/wiki/Inductance [wikipedia.org]

    It's not a new phenomenon. It's a basic law of physics. Ever since chips have run at higher than a few MHz, this has caused issues, be it radiating noise or inducing voltage on adjacent data lines. There are many ways to deal with it. For example, most of the cases of modern chips (including embedded stuff) are grounded.

    The faster the switching (i.e. data rates), the higher the inductance of these pulses will be. Since each of the wires is smaller and smaller, the only way to reduce inductance is to drop the voltage. But transistors need a certain voltage to operate, so it's a catch-22. You can lay ground wires between the two data lines, or try to sandwich them between two ground planes, but that raises resistance in the circuit. So another catch-22. Fully buffered ECC memory avoids most of these problems, but then the buffer introduces extra latency, and you are not addressing the fundamental issue of inductance.

    Anyway, faulty memory is memory that does not behave as required by the standard. Bit flipping in adjacent cells is something that should not happen. Faulty products should be recalled. Maybe memtest86 can add tests for this.

  • (Score: 2) by nitehawk214 on Wednesday December 24 2014, @06:31PM

    by nitehawk214 (1304) on Wednesday December 24 2014, @06:31PM (#128967)

    Literal Write Only Memory, nice.

    Now I need a Dark Emitting Diode.

    --
    "Don't you ever miss the days when you used to be nostalgic?" -Loiosh