
posted by chromas on Wednesday July 11 2018, @01:20PM   Printer-friendly
from the …shall-not-be-responsible-or-liable-for-any-damage-or-loss,-including-but-not-limited-to-any-damage… dept.

Submitted via IRC for Fnord666

Two insurance companies are suing a cyber-security firm to recover insurance fees paid to a customer after the security firm failed to detect malware on the client's network for months, an issue that led to one of the biggest security breaches of the 2000s. The security firm says the lawsuit is meritless.

The two insurance firms are Lexington Insurance Company and Beazley Insurance Company, and both insured Heartland Payment Systems, a leading payment processing company.

In January 2009, Heartland announced a major security breach of its network, following which an attacker stole details for over 100 million payment cards stored on its systems by over 650 of Heartland's customers.

Following this devastating hack, one of the biggest of the 2000s, Heartland paid over $148 million in settlement fees for various lawsuits, plus other remediation costs and expenses it owed its customers.

As part of their insurance agreements, the two firms paid $30 million to Heartland in the hack's aftermath, with the Lexington Insurance Company footing a $20 million bill, and the Beazley Insurance Company paying another $10 million.

But now, according to a civil lawsuit filed on June 28 in Illinois, and first reported by the Cook County Record, the two companies are trying to recover those costs, and are claiming that the security firm with which Heartland had a service contract had failed to honor its agreement.

Source: https://www.bleepingcomputer.com/news/security/security-firm-sued-for-failing-to-detect-malware-that-caused-a-2009-breach/


Original Submission

  • (Score: 2) by DannyB on Wednesday July 11 2018, @01:37PM (11 children)

    by DannyB (5839) Subscriber Badge on Wednesday July 11 2018, @01:37PM (#705709) Journal

    Wasn't the Insurance company insured? :-)

    The security firm cannot guarantee that there won't be a breach. They cannot guarantee that it will be detected. They can only make a best effort. It is a war between the security firms and the hackers, each trying to get ahead of the other.

    No matter how good your security, all it takes is one employee to get tricked through good social engineering.

    --
    The lower I set my standards the more accomplishments I have.
    • (Score: 2) by DannyB on Wednesday July 11 2018, @01:38PM

      by DannyB (5839) Subscriber Badge on Wednesday July 11 2018, @01:38PM (#705710) Journal

      It seems the payment processor did the right thing by being insured.

      --
      The lower I set my standards the more accomplishments I have.
    • (Score: 4, Interesting) by choose another one on Wednesday July 11 2018, @02:01PM (5 children)

      by choose another one (515) Subscriber Badge on Wednesday July 11 2018, @02:01PM (#705718)

      > Wasn't the Insurance company insured? :-)

      No, that would be reinsurance, so you mean "reinsured". Seriously :-)

      > They can only make a best effort.

      Danger Will Robinson... "best efforts" has a specific legal meaning in contracts (at least where I am) and you really really really don't want it in a contract clause if there is any remote possibility of activating it. I'm not going to forget that one because I came across it during a bollocking about "see this is why techies don't do commercial / legal stuff", I was lucky, it got caught before it went out.

      Regardless, the case will most likely turn on competence and/or negligence rather than efforts. Whether the security firm's work was up to reasonably expected standards for the security industry.

      The funny thing is that the security firm is probably insured too (should be), and might even be by the same company(s). I presume someone there has checked who actually ends up carrying the can at the end of the reinsurance chain and that they aren't going to win and then get the bill back again with legal fees on top, but you never know...

      • (Score: 0) by Anonymous Coward on Wednesday July 11 2018, @05:04PM (3 children)

        by Anonymous Coward on Wednesday July 11 2018, @05:04PM (#705800)

        Danger Will Robinson... "best efforts" has a specific legal meaning in contracts (at least where I am) and you really really really don't want it in a contract clause if there is any remote possibility of activating it. I'm not going to forget that one because I came across it during a bollocking about "see this is why techies don't do commercial / legal stuff", I was lucky, it got caught before it went out.

        Don't leave us hanging. The phrase seems straightforward enough. What's so bad about it?

        • (Score: 0) by Anonymous Coward on Wednesday July 11 2018, @06:59PM (1 child)

          by Anonymous Coward on Wednesday July 11 2018, @06:59PM (#705874)

          Not the GP, but I'll answer anyway. In many jurisdictions, it modifies the implied covenant of good faith and fair dealing (except in certain cases, such as 2-306). UCC 1-304 states:

          Every contract or duty within the Uniform Commercial Code imposes an obligation of good faith in its performance and enforcement.

          Furthermore, "Good Faith" is defined in 1-201(20) as:

          except as otherwise provided in Article 5, means honesty in fact and the observance of reasonable commercial standards of fair dealing.

          Best efforts, on the other hand, is a higher standard. It requires a contracting party to make every effort that will not reasonably harm said party in the performance of the contract.

          For example, I agree to generate donations to SN in exchange for 10%. Under a good-faith standard, I can passively generate them. Under a best-efforts standard, I'm required to make active solicitations for donations. The difference is why provisions, including 2-306(2), require best efforts, as opposed to good faith, because the consideration of one party is disproportionate, as a matter of law, without the higher standard. And it is also why some people love the standard and some people hate it, depending on what side of the contract they are on, of course.

        • (Score: 2) by choose another one on Thursday July 12 2018, @08:54PM

          by choose another one (515) Subscriber Badge on Thursday July 12 2018, @08:54PM (#706356)

          > The phrase seems straightforward enough. What's so bad about it?

          The way it was explained to me was that it means "drop everything [else], put everyone on it, **** all other projects and customers, whatever the cost, until you fix it or go bust"

          Followed by: "we're never going to ******* do that so don't ******* agree to it".

          The legal definitions I looked up later were less sweary and shouty but pretty much the same thing. As ever, your jurisdiction may vary.

          What you want as a supplier is "reasonable endeavours" or "commercially reasonable endeavours". In reality, "best endeavours" doesn't entail commercial suicide because you'll breach contract first. That means its only purpose is a stick to beat you with for breach, because the clause will either never be needed or you'll never honor it.

      • (Score: 0) by Anonymous Coward on Wednesday July 11 2018, @05:18PM

        by Anonymous Coward on Wednesday July 11 2018, @05:18PM (#705812)

        The insurance companies are probably compelled to sue based on the terms of the reinsurance. In order to collect the reinsurance, they have to make reasonable efforts to mitigate their losses. This would include finding another responsible party and attempting to collect from them. Even if the same insurance companies were involved in the initial claims, the reinsurance companies could force the issue. Ultimately, the best way to tell would be to actually read the complaint and answer. The allegations would include any subrogation chain and somewhere in the discovery disclosures there would be the reinsurance chain, if any.

    • (Score: 4, Interesting) by rigrig on Wednesday July 11 2018, @02:06PM (3 children)

      by rigrig (5129) <soylentnews@tubul.net> on Wednesday July 11 2018, @02:06PM (#705724) Homepage

      The security firm cannot guarantee that there won't be a breach.

      Yes, but premiums might've been a tiny bit higher if the insurance companies had known just how bad security was:

      The lawsuit claims that Visa discovered that Trustwave ignored the fact that Heartland didn't run a firewall, was using vendor-supplied passwords, didn't have sufficient protection for the storage system used for card data, failed to assign unique identification to each person accessing its system, and had failed to monitor servers and cardholder data at regular intervals.

      All of these are PCI DSS compliance rules, and Visa said that despite all the problems on Heartland's network, Trustwave provided PCI DSS attestation.

      But of course Trustwave says the lawsuit is meritless, and it's now up to the lawyers to burn a few piles of cash arguing whether that apparent mess resulted in this particular breach.

      --
      No one remembers the singer.
      • (Score: 4, Insightful) by sjames on Wednesday July 11 2018, @02:35PM (2 children)

        by sjames (2882) on Wednesday July 11 2018, @02:35PM (#705741) Journal

        That's an ongoing problem in the field of cyber security. Several industries are required to get annual assessments of their security. Far too often, an honest assessment means you don't get called to do the assessment next year. OTOH, go too easy on the client and this happens.

        • (Score: 2) by HiThere on Wednesday July 11 2018, @06:39PM (1 child)

          by HiThere (866) Subscriber Badge on Wednesday July 11 2018, @06:39PM (#705859) Journal

          That means that this is a very good thing to happen. If it encourages honest assessments, it's good.

          The problem appears to me to be that those judging can't usually tell an honest assessment from a whitewash job. And many of them aren't interested, they just want to avoid needing to do anything. That's not honest security.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 2) by sjames on Wednesday July 11 2018, @09:43PM

            by sjames (2882) on Wednesday July 11 2018, @09:43PM (#705952) Journal

            If the allegations are true, this was an extreme case. It's one thing to miss a persistent threat that is designed not to be seen, but no firewall and unchanged default passwords is pretty extreme.

  • (Score: 5, Insightful) by urza9814 on Wednesday July 11 2018, @01:56PM (11 children)

    by urza9814 (3954) on Wednesday July 11 2018, @01:56PM (#705714) Journal

    Every once in a while we get discussion in here about "software engineering" and how it's not proper engineering because there's no licensing or standards enforced, and people can deliver all manner of utter garbage and never be held responsible for the outcomes. Hell, we've even got laws and court cases on the books that criminalize discovering negligent security practices, yet we have virtually nothing to actually prevent those practices from occurring.

    So maybe we need more of these kinds of lawsuits. Scare companies into hiring skilled, competent developers instead of just whoever promises to do the job cheapest and work the most unpaid overtime...and allow consumers and businesses some recourse when they get screwed by a company that thinks some click-through EULA makes them immune from any wrongdoing.

    Of course there's no way you can write software that can detect and prevent 100% of all possible threats, particularly due to the poor security practices of the underlying system it must operate upon...but let's not assume that this means you can't ever be liable for selling security software that fails to keep the system secure. SOME threats and failure modes are virtually impossible to foresee, but that certainly does not mean that all of them are. How do you fail to notice someone exfiltrating HUNDREDS OF MILLIONS of payment records? How do you conduct detailed security audits of a system over several years and fail to notice SQL injection vulnerabilities? That's not exactly a new, unknown, or unusual method of attack. Also notice that Visa essentially claimed that the security firm was issuing fraudulent PCI DSS compliance certifications. Sounds to me like they were either extremely negligent or deliberately fraudulent, and we should not allow them to escape liability for that.
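The SQL injection point above is easy to make concrete. A minimal sqlite3 sketch (table, data, and attacker payload all invented for illustration) shows why string-built queries are the classic hole while bound parameters are not:

```python
import sqlite3

# Toy schema and data, invented for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan TEXT)")
conn.execute("INSERT INTO cards VALUES ('alice', '4111111111111111')")

payload = "x' OR '1'='1"  # classic injection string

# Vulnerable: concatenation lets the payload rewrite the WHERE clause
unsafe = conn.execute(
    "SELECT pan FROM cards WHERE holder = '" + payload + "'"
).fetchall()

# Safe: the driver binds the payload as a literal value, which matches nothing
safe = conn.execute(
    "SELECT pan FROM cards WHERE holder = ?", (payload,)
).fetchall()

print(len(unsafe), len(safe))  # 1 0
```

The driver binds the payload as an opaque value, so the attacker's quotes never reach the SQL parser.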

    • (Score: 2) by choose another one on Wednesday July 11 2018, @03:06PM (6 children)

      by choose another one (515) Subscriber Badge on Wednesday July 11 2018, @03:06PM (#705753)

      Every once in a while we get discussion in here about "software engineering" and how it's not proper engineering because there's no licensing or standards enforced, and people can deliver all manner of utter garbage and never be held responsible for the outcomes.

      Yeah and it usually involves talk of bridge building, because every bridge building engineer is educated with the Tacoma video on loop in the background and "we don't build like that anymore we have standards now". *cough*ABC*cough*Florida*cough*. Maybe we should compare with tower blocks instead, because we've been building those a similar length of time to writing software and we have standards and codes and standards enforcing agencies so that the tower blocks don't, say, incinerate their residents. Er, wait, no, that analogy doesn't work either...

      People forget in such discussions that we build more software than bridges, a lot more. They also forget that in other branches of engineering, standards and licensing come in where people are at risk of dying, or (more usually) after they have died. Safety critical software development is way different to "normal" software development, and the results fail far less often - but it is way more expensive too.

      Scare companies into hiring skilled, competent developers instead of just whoever promises to do the job cheapest and work the most unpaid overtime...and allow consumers and businesses some recourse when they get screwed by a company that thinks some click-through EULA makes them immune from any wrongdoing.

      So, bye bye FOSS and GPL clauses 15 & 16 then?

      Also notice that Visa essentially claimed that the security firm was issuing fraudulent PCI DSS compliance certifications.

      And better paid, skilled, competent developers would have fixed that? That is a management level problem at the security firm or the client or (more likely) both, depending on who knew what.

      See, we're right back to "licensing or standards enforced", and we've found that software has that too, just with people "working around" them or more likely driving a coach and horses through. Not coincidentally, that's exactly how we end up with building-sized people incinerators and bridge-sized people compactors. Maybe software is proper engineering after all.

      • (Score: 2) by urza9814 on Wednesday July 11 2018, @03:37PM (3 children)

        by urza9814 (3954) on Wednesday July 11 2018, @03:37PM (#705770) Journal

        When a bridge fails, we prosecute the people responsible for designing and building the thing. When software fails, we prosecute the operator instead. It's like going after a pedestrian who happened to be on the crosswalk when the bridge collapsed because they clearly weren't walking correctly.

        See we're right back to, "licensing or standards enforced", and we've found that software has that too, just with people "working around" them or more likely driving coach and horses through.

        In other words, software has standards, but they aren't enforced. People routinely ignore the standards and nobody cares. Which was exactly my point.

        Scare companies into hiring skilled, competent developers instead of just whoever promises to do the job cheapest and work the most unpaid overtime...and allow consumers and businesses some recourse when they get screwed by a company that thinks some click-through EULA makes them immune from any wrongdoing.

        So, bye bye FOSS and GPL clauses 15 & 16 then?

        Yep. Scrap those clauses and replace them with something more sensible. The real spirit of open source is that every user is also a participant in the development. So just make that part of the license. Then if you claim the devs are incompetent, you're also saying the same of yourself; if you sue the devs, you also sue yourself. Of course, that would also likely mean dropping the corporate veneer commonly applied to open source projects, but I see that as a bonus not a problem.

        Alternatively, you can ensure that the "no liability" clause in a contract creates an enforceable transfer of liability back to the end user. If Microsoft sells you Windows with a no liability clause, and your company gets hacked due to a Windows vulnerability, you don't get to say that you aren't liable because you followed industry best practices -- you accepted liability for any and all mistakes by Microsoft when you agreed to that Windows license. The same would apply to open source.

        If companies start removing the 'no liability' clauses, then that would hurt open source, but if they don't it could give open source an advantage since it allows companies to actually take some action to reduce the risks.

        • (Score: 2) by sjames on Wednesday July 11 2018, @09:57PM (2 children)

          by sjames (2882) on Wednesday July 11 2018, @09:57PM (#705958) Journal

          Are you prepared to hire professional engineers to do a complete analysis of your PC and professionally customize each application before installing it? Because if not, it simply isn't reasonable to hold the developer liable like you would for a bridge. Everyone likes to talk about bridges, so here's the rest of the story. Can you just imagine that a city wants a 6 lane bridge across a mile wide channel, so they go buy a universal fit bridge from Walmart and pay a couple teens to bolt it together for them?

          Sorry, it doesn't work that way. If you want someone to put his neck on the block, you'll start with a site survey including geological features, weather, predicted storms, etc. THEN they will start designing the bridge. A year and a couple million later they might finish the computer simulations on the bridge and begin selecting contractors.

          • (Score: 2) by fido_dogstoyevsky on Wednesday July 11 2018, @11:52PM (1 child)

            by fido_dogstoyevsky (131) <axehandleNO@SPAMgmail.com> on Wednesday July 11 2018, @11:52PM (#705993)

            Are you prepared to hire professional engineers to do a complete analysis of your PC and professionally customize each application before installing it?

            Isn't that the service for which the "security firm" took money? Substituting "remove security flaws" for "customize".

            --
            It's NOT a conspiracy... it's a plot.
            • (Score: 2) by sjames on Thursday July 12 2018, @12:23AM

              by sjames (2882) on Thursday July 12 2018, @12:23AM (#706009) Journal

              Actually, they were hired to identify security flaws. They're not being sued because someone got in anyway, they're in trouble because they didn't even do the bare minimum they were contracted for.

              In this case, they failed to flag the lack of a firewall and the unchanged default vendor passwords as security issues.

              The post I was replying to was the old suggestion that all software developers should carry the same liability as structural engineers who sign off on a bridge.

      • (Score: 2) by AthanasiusKircher on Wednesday July 11 2018, @06:34PM

        by AthanasiusKircher (5291) on Wednesday July 11 2018, @06:34PM (#705855) Journal

        People forget in such discussions that we build more software than bridges, a lot more.

        Well, if that's true, it's actually a reason for there to be MORE stringent oversight, not less.

        They also forget that in other branches of engineering standards and licensing come in where people are at risk of dying, or (more usually) after they have died.

        Not quite true. Yes, the most egregious instances involve people dying, but anywhere with serious risk of injury is often a concern.

        That is a management level problem at the security firm or the client or (more likely) both, depending on who knew what.

        Actually, in a reasonable world, it should be a problem at every level where a professional knowledgeable person signed off on standards they knowingly didn't enforce. Not only the heads in charge.

        See, we're right back to "licensing or standards enforced", and we've found that software has that too, just with people "working around" them or more likely driving a coach and horses through. Not coincidentally, that's exactly how we end up with building-sized people incinerators and bridge-sized people compactors. Maybe software is proper engineering after all.

        Nope. You missed a critical step. You do know that there are such things as "professional engineers," right? In many states you must be certified to practice engineering. If you sign off on something as a certified engineer, and it then turns out that either you knew the information was wrong or you were guilty of serious negligence, your license to practice can be suspended or revoked. It actually happens quite frequently in some states.

        It's the same in other professions. If you're a lawyer who knowingly creates documentation that's false, you can be disbarred. If you're a doctor who knowingly does something to a patient that they didn't sign off on, you can have your license revoked.

        And if you knowingly sign off on a certification that falsely claims clearly defined security measures are in place that aren't (or are negligent in not noticing severe breaches you were supposed to detect), you should be thrown out of the profession, or at least suspended for a while.

        THAT would be "proper engineering," in the professional sense.

      • (Score: 0) by Anonymous Coward on Wednesday July 11 2018, @07:17PM

        by Anonymous Coward on Wednesday July 11 2018, @07:17PM (#705888)

        Nice off-topic rant you constructed there. You do realize that TFA has nothing to do with software development, right? This is about administration and maintenance, you know, those things you do after having procured some engineering works. Those things that software "developers" (note, not engineers) like to ignore?

    • (Score: 4, Insightful) by Thexalon on Wednesday July 11 2018, @04:43PM (2 children)

      by Thexalon (636) on Wednesday July 11 2018, @04:43PM (#705793)

      So maybe we need more of these kinds of lawsuits.

      We absolutely do.

      Consider 2 hypothetical security firms:
      1. Schlock Security Inc, which sells tiger-repelling rock [youtube.com] security. As in, they do a few half-assed tests for show and invariably say "Yup, you're secure!"
      2. Integrity Security Inc, which sells a more serious and well-designed set of automated tests, and also has a real human closer to the Bruce Schneier end of security skills spend some time trying to bust into your system.

      As long as nobody really targets the system in question, the services these two firms offer appear identical. However, Schlock can charge less than Integrity because it costs them a lot less to provide their service, so the CTO (or whoever else is involved in making the purchasing decision) has a financial incentive to choose Schlock over Integrity. And it gets worse when you realize that if said CTO opted for Integrity over Schlock, Integrity would probably demand a bunch of expensive and time-consuming changes to what the company is doing before giving them the certification they need to keep doing business, which creates even more incentive for the CTO to pick Schlock over Integrity.

      In the event that the CTO's systems are breached, s/he can always avoid any professional consequences by saying "We got the industry-standard security checkup from Schlock, there's nothing we could have done differently." That means their rear is successfully covered, eliminating any disincentive to choose Schlock over Integrity.

      The value of lawsuits like this one is to increase the costs of doing business for Schlock to the point where their services are substantially more expensive than Integrity's. Until that happens, firms like Schlock will continue to exist and be substantially more profitable than firms like Integrity.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by DannyB on Wednesday July 11 2018, @05:16PM (1 child)

        by DannyB (5839) Subscriber Badge on Wednesday July 11 2018, @05:16PM (#705810) Journal

        In the event that the CTO's systems are breached, s/he can always avoid any professional consequences by saying "We got the industry-standard security checkup from Schlock, there's nothing we could have done differently."

        Also one I've heard indirectly about from the 1970's: I bought IBM, that is the best there is. I couldn't have bought anything better.

        You know the saying: Nobody ever got fired for buying IBM.

        Then it was: Nobody ever got fired for buying Microsoft.

        Now I think neither one is true anymore.

        --
        The lower I set my standards the more accomplishments I have.
        • (Score: 0) by Anonymous Coward on Wednesday July 11 2018, @07:42PM

          by Anonymous Coward on Wednesday July 11 2018, @07:42PM (#705902)

          You know the saying: Nobody ever got fired for buying IBM.

          Then it was: Nobody ever got fired for buying Microsoft.

          Now I think neither one is true anymore.

          Used to be nobody ever got fired for sexual harassment. The times they are a changin'.

    • (Score: 0, Disagree) by Anonymous Coward on Wednesday July 11 2018, @06:49PM

      by Anonymous Coward on Wednesday July 11 2018, @06:49PM (#705866)

      insightful? are you people retarded?

  • (Score: 4, Insightful) by AthanasiusKircher on Wednesday July 11 2018, @02:19PM (1 child)

    by AthanasiusKircher (5291) on Wednesday July 11 2018, @02:19PM (#705732) Journal

    To be clear, since the summary doesn't mention it -- the company being sued is apparently Trustwave [trustwave.com], the security firm that was tasked with auditing the security at Heartland (the company mentioned in the summary).

    And by the way, the most damning part of TFA is this:

    The lawsuit also mentions that in the aftermath of the hack, Visa conducted a review of Heartland's servers and found that Trustwave incorrectly certified Heartland as PCI DSS compliant. PCI DSS stands for Payment Card Industry Data Security Standard, an attestation every vendor must obtain before being allowed to handle credit card data.

    The lawsuit claims that Visa discovered that Trustwave ignored the fact that Heartland didn't run a firewall, was using vendor-supplied passwords, didn't have sufficient protection for the storage system used for card data, failed to assign unique identification to each person accessing its system, and had failed to monitor servers and cardholder data at regular intervals.

    All of these are PCI DSS compliance rules, and Visa said that despite all the problems on Heartland's network, Trustwave provided PCI DSS attestation. Visa later prohibited Heartland from employing Trustwave following the wrongful attestation.

    Citing the Visa report and other post-breach documents, the two insurance firms claim that Trustwave is guilty of gross negligence.

    If a company is hired to do security audits that maintain a specific standard, and they claim to do said audits and then certify the standards are met when they are not, that seems a serious allegation. I don't know much about the requirements of said standard [wikipedia.org], but if Visa's assessment is accurate, that's a major problem.
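Several of the quoted failures are mechanically checkable, which is what makes the attestation hard to excuse. A hypothetical sketch of what a pre-audit checklist could look like (the config dict, rule texts, and requirement numbers are approximations for illustration, not Trustwave's actual process):

```python
# Hypothetical pre-audit checklist inspired by the failures Visa cited;
# the host-config dict and requirement numbering are illustrative only.
def audit(host):
    findings = []
    if not host.get("firewall_enabled"):
        findings.append("Req 1: no firewall protecting cardholder data")
    if any(p in {"admin", "default", "password"}
           for p in host.get("passwords", [])):
        findings.append("Req 2: vendor-supplied default passwords in use")
    if not host.get("storage_encrypted"):
        findings.append("Req 3: card data storage not sufficiently protected")
    if len(set(host.get("user_ids", []))) < len(host.get("user_ids", [])):
        findings.append("Req 8: user IDs shared, not unique per person")
    if not host.get("log_monitoring"):
        findings.append("Req 10: servers and card data not monitored regularly")
    return findings

heartland_like = {  # roughly the state alleged in the lawsuit
    "firewall_enabled": False,
    "passwords": ["default"],
    "storage_encrypted": False,
    "user_ids": ["ops", "ops"],
    "log_monitoring": False,
}
print(len(audit(heartland_like)))  # 5
```

Every one of the five alleged failures trips a check, which is the insurers' point: these weren't subtle.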

    • (Score: 4, Interesting) by DannyB on Wednesday July 11 2018, @02:57PM

      by DannyB (5839) Subscriber Badge on Wednesday July 11 2018, @02:57PM (#705750) Journal

      Oh, yes. Trustwave. The company that knowingly and willfully issued certificates to other companies to ENABLE man in the middle attacks. Lovely.

      Border gateway routers on a corporate network want to control and monitor all incoming and outgoing traffic on the corporate network. Understandably. But sadly, more and more traffic was using SSL (and this was before Let's Encrypt). So Trustwave gave such an equipment manufacturer a certificate which could itself issue certificates for any domain name. The equipment, on seeing an SSL connection, let's say to amazon.com, would generate a new certificate for amazon.com on the fly. The border gateway could then impersonate amazon.com to the browser, and the browser would trust it, since Trustwave's certificates are trusted. The border gateway would connect to the real amazon.com on your behalf. You would see nothing amiss. The border gateway could conduct an effective MITM attack on your SSL traffic.

      Some would think this is a good thing. But it's not.

      Understandably organizations like Amazon, Google, etc don't like others impersonating them with bogus but trusted SSL certificates. Google was the first to make Chrome browser recognize Google's certificates and their thumbprint. If your browser sees a Google.com certificate issued by Honest Achmed's Certificate Authority and Shoe Shine of Tehran Iran, it is possible that this is not a legitimate Google.com certificate. Google probably doesn't buy its certificates from such a reputable firm.

      Google's solution is good (but not perfect) if you happen to make a widely used web browser. But what about the rest of us who don't? Thus the HPKP standard was developed.

      Trustwave was ahead of the game in understanding how to help firewall makers do MITM attacks against the rest of us. I wonder how many of these devices were sold to countries like China or other great shining examples of human rights protectors?

      --
      The lower I set my standards the more accomplishments I have.
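The HPKP-style pinning check described in the comment above can be sketched in a few lines. The SPKI byte strings here are toy placeholders rather than real certificates; the pin format, base64 of a SHA-256 over the SubjectPublicKeyInfo, follows HPKP's design:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # HPKP pins are base64(SHA-256(SubjectPublicKeyInfo))
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Pin remembered from a previous visit to the real site (toy key bytes)
pinned = {spki_pin(b"google-real-key")}

legit_chain = [b"google-real-key"]
mitm_chain = [b"trustwave-subordinate-ca-key"]  # gateway's forged chain

def chain_ok(chain):
    # Accept the connection only if some key in the chain matches a pin
    return any(spki_pin(key) in pinned for key in chain)

print(chain_ok(legit_chain), chain_ok(mitm_chain))  # True False
```

The forged certificate validates fine against the trust store, because Trustwave is in it; pinning catches it anyway, because the gateway can't forge the real site's key.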
  • (Score: 2) by jb on Thursday July 12 2018, @04:59AM (1 child)

    by jb (338) on Thursday July 12 2018, @04:59AM (#706106)

    an attacker stole details for over 100 million payment cards stored on its systems by over 650 of Heartland's customers.

    100,000,000 cards stored by over 650 customers.

    So, assuming "over 650" means "just a little over 650" (as why else pick such a specific number -- let's guess it was 651), that tells us that on average each Heartland customer had 153,609 payment cards.

    Let's be conservative and assume that each card had expired and been replaced 9 times and Heartland hadn't deleted the old card records, so each customer had "10 cards" recorded for each continuing card account.

    That's still 15,361 cards per customer.

    What I want to know is, where do you get a wallet big enough to fit that many cards in it?
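For what it's worth, the parent's arithmetic checks out (assuming, as the parent does, exactly 651 customers):

```python
# Redo the parent's back-of-envelope division, assuming 651 customers
cards = 100_000_000
customers = 651
per_customer = cards // customers      # card records per Heartland customer
live = round(per_customer / 10)        # assume 9 expired copies per live card
print(per_customer, live)              # 153609 15361
```

As the AC reply below notes, the resolution is that Heartland's "customers" are merchants, so each figure is really a merchant's cardholder count.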

    • (Score: 0) by Anonymous Coward on Thursday July 12 2018, @05:33AM

      by Anonymous Coward on Thursday July 12 2018, @05:33AM (#706113)

      Heartland is a payment processor. Their customers are businesses, not consumers. By your estimate each business had 153,609 customers placing orders.
