
posted by hubie on Thursday August 07, @02:22PM   Printer-friendly
from the the-cloud-is-just-someone-else's-computer dept.

A billing change caused AWS to delete developer Seuros' account in its entirety rather than roll back to the old billing arrangement on record. He has written an annotated timeline and analysis of how AWS came not only to delete a 10-year-old, paid-up account without warning but also to give him quite a runaround.

On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation.

[...] Lessons Learned

  1. Never trust a single provider—no matter how many regions you replicate across
  2. "Best practices" mean nothing when the provider goes rogue
  3. Document everything—screenshots, emails, correspondence timestamps
  4. The support theater is real—they literally cannot help you
  5. Have an exit strategy executable in hours, not days

AWS won't admit their mistake. They won't acknowledge the rogue proof of concept. They won't explain why MENA operates differently. They won't even answer whether your data exists.

But they will ask you to rate their support 5 stars.

The cloud isn't your friend. It's a business. And when their business needs conflict with your data's existence, guess which one wins?

Plan accordingly.

[...] At one point during this ordeal, I hit rock bottom. I was ready to delete everything—yank all my gems from RubyGems, delete the organizations, the websites, everything I'd created. Leave a single message: "AWS killed this."

It would have made headlines. Caused chaos for thousands of projects. Trended on HN, Reddit, YouTube. But it would have hurt the wrong people—developers who depend on my work, not AWS.

As he points out, having all your activities managed by a single provider leaves one at risk of such extinction events. But moving to another, similar cloud provider may just kick the can down the road, inviting a repeat under new circumstances.

Previously:
(2023) AWS to Charge Customers for Public IPv4 Addresses From 2024
(2019) Amazon Slams Media For Not Saying Nice Things About AWS
(2019) Amazon is Saying Nothing About the DDoS Attack That Took Down AWS, but Others Are
(2019) Azure Might be Woefully Inefficient and Unprofitable
(2018) The Cloud is a Six-Horse Race, and Three of Those Have Been Lapped


Original Submission

Related Stories

The Cloud is a Six-Horse Race, and Three of Those Have Been Lapped 13 comments

Analyst firm Gartner’s 2018 Magic Quadrant for infrastructure as a Service (IaaS) has again found that Amazon Web Services and Microsoft Azure are the most mature clouds, but has omitted more than half of the vendors it covered last year on grounds that customers now demand more than just rented servers and storage.

“Customers now have high expectations from their cloud IaaS providers. They demand market-leading technical capabilities — depth and breadth of features, along with high availability, performance and security,” wrote Gartner’s mages. “They expect not only ‘hardware’ infrastructure features, but also management features, developer services and cloud software infrastructure services, including fully integrated PaaS capabilities.”

Given those expectations, Gartner was happy to drop eight clouds from this year’s Quadrant, farewelling Virtustream, CenturyLink, Joyent, Rackspace, Interoute, Fujitsu, Skytap and NTT.

The analyst firm says AWS is the most mature cloud and has come to be seen as a safe choice, but cautions “Customers should be aware that while it's easy to get started, optimal use — especially keeping up with new service innovations and best practices, and managing costs — may challenge even highly agile, expert IT organizations, including AWS partners. As new, less-experienced MSPs are added to AWS's Audited MSP Partner program, this designation is becoming less of an assurance of MSP quality.”

Microsoft’s Azure has similar problems: Gartner says “Microsoft's sales, field solutions architects and professional service teams did not have an adequate technical understanding of Azure.”

[...] The firm also rates Azure as “optimized to deliver ease of use to novices with simple projects” which is great but “comes at the cost of sometimes making complex configurations difficult and frustrating to implement.”


Original Submission

Azure Might be Woefully Inefficient and Unprofitable 22 comments

https://medium.com/@wtfmitchel/azure-vs-moores-law-2020-65a6fe67e31b

By undershooting its projected capacity by such a large margin, Microsoft built only roughly 1/3 of the data center capacity for Azure that was actually necessary. Consequently, they had to over-provision their existing data centers to the point of tripping the breakers and rapidly fill the gaps with an excessive amount of leased space to meet demand. All of which effectively doubled the amount of leased space in their portfolio from 25% to 50%, extended their break-even to nearly a decade, and killed their hopes of profitability any time soon.

While an honest mistake and not being able to foresee the future is forgivable, knowingly omitting a mistake of this magnitude is criminal when considering how much Microsoft is hedging its future on Azure. On top of supplying misleading revenue metrics in their quarterly 10K filings to fortify a position of strength and being second only to AWS, Microsoft seems to be wary about reporting Azure's individual performance metrics or news of these failings that would enable investors to conclude this for themselves. Instead, Microsoft appears to be averaging out Azure's losses with their legacy mainstays that are profitable by reporting its revenue within their Intelligent Cloud container instead of itemizing it.


Original Submission

Amazon is Saying Nothing About the DDoS Attack That Took Down AWS, but Others Are 8 comments

From the following story:

Amazon has still not provided any useful information or insights into the DDoS attack that took down swathes of websites last week, so let's turn to others that were watching.

One such company is digital monitoring firm Catchpoint, which sent us its analysis of the attack in which it makes two broad conclusions: that Amazon was slow in reacting to the attack, and that tardiness was likely the result of its looking in the wrong places.

Even though cloud providers go to some lengths to protect themselves, the DDoS attack shows that even a company as big as Amazon is vulnerable. Not only that but, thanks to the way that companies use cloud services these days, the attack has a knock-on impact.

"A key takeaway is the ripple effect impact when an outage happens to a third-party cloud service like S3," Catchpoint noted.

The attack targeted Amazon's S3 - Simple Storage Service - which provides object storage through a web interface. It did not directly target the larger Amazon Web Services (AWS) but for many companies the end result was the same: their websites fell over.

[...] Amazon responded by rerouting packets through a DDoS mitigation service run by Neustar, but it took hours for the company to react. Catchpoint's first indications that something was up came five hours before Amazon seemingly noticed; it saw "anomalies" that it says should have served as early warning signs.

When it had resolved the issue, Amazon said the attack happened "between 1030 and 1830 PST," but Catchpoint's system shows highly unusual activity from 0530. We should point out that Catchpoint sells monitoring services for a living so it has plenty of reasons to highlight its system's efficacy, but that said, assuming the graphic we were given [PDF] is accurate - and we have double-checked with Catchpoint - it does appear that Amazon was slow to recognize the threat.

Catchpoint says the problem is that Amazon - and many other organizations - are using an "old" way of measuring what's going on. They monitor their own systems rather than the impact on users.

"It is critical to primarily focus on the end-user," Catchpoint argues. "In this case, if you were just monitoring S3, you would have missed the problem (perhaps, being alerted first by frustrated users)."

-- submitted from IRC


Original Submission

Amazon Slams Media For Not Saying Nice Things About AWS 5 comments

Arthur T Knackerbracket has found the following story:

Stung by an article mulling Amazon Web Services' market dominance on Monday, AWS VP Andi Gutmans fired back, complaining the reporter ignored flattering comments from AWS partners – and that "AWS is 'strip-mining' open source is silly and off-base."

"The journalist largely ignores the many positive comments he got from partners because it’s not as salacious copy for him," Gutmans said in a blog post, as if critical reporting carried with it an obligation to publish a specific quota of marketing copy.

And he insisted that Amazon "contributes mightily to open source projects," and "AWS has not copied anybody’s software or services."

In its recent lawsuit against AWS, open source biz Elastic, cited in the New York Times article and a business which is public in its disaffection with Amazon, did not accuse AWS of copying its open source search software – which anyone can copy by virtue of its open source license. Rather, the search biz objects to AWS' use of its trademark in its Amazon Elasticsearch Service.

But others have been more cutting. Following AWS' launch of DocumentDB, a cloud database compatible with the MongoDB API, CEO Dev Ittycheria suggested his company's product had been imitated and copied.

Indeed, among startups like Confluent, Elastic, MongoDB, Neo4J, and Redis Labs that have been trying to turn open source projects into revenue-generating businesses, concern about AWS - and to a lesser extent Microsoft Azure and Google Cloud - is quite common.


Original Submission

AWS to Charge Customers for Public IPv4 Addresses From 2024 19 comments

AWS to charge customers for public IPv4 addresses from 2024:

Cloud giant AWS will start charging customers for public IPv4 addresses from next year, claiming it is forced to do this because of the increasing scarcity of these and to encourage the use of IPv6 instead.

It is now four years since we officially ran out of IPv4 ranges to allocate, and since then, those wanting a new public IPv4 address have had to rely on address ranges being recovered, either from organizations that close down or from those that return addresses they no longer require as they migrate to IPv6.

If Amazon's cloud division is to be believed, the difficulty in obtaining public IPv4 addresses has seen the cost of acquiring a single address rise by more than 300 percent over the past five years, and as we all know, the business is a little short of cash at the moment, so is having to pass these costs on to users.

"This change reflects our own costs and is also intended to encourage you to be a bit more frugal with your use of public IPv4 addresses and to think about accelerating your adoption of IPv6 as a modernization and conservation measure," writes AWS Chief Evangelist Jeff Barr, on the company news blog.

The update will come into effect on February 1, 2024, when AWS customers will see a charge of $0.005 (half a cent) per IP address per hour for all public IPv4 addresses. These charges will apparently apply whether the address is attached to a service or not, and like many AWS charges, appear inconsequential at first glance but can mount up over time if a customer is using many of them.


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Informative) by Anonymous Coward on Thursday August 07, @02:39PM (10 children)

    by Anonymous Coward on Thursday August 07, @02:39PM (#1412733)

    About halfway down the top link is this speculation, which may explain how AWS went rogue:

    The developer running the test typed --dry to execute a dry run—standard practice across modern CLIs:

            ruby --version
            npm --version
            bun --version
            terraform --dry-run

    But the internal tool was written in Java. And Java uses single dashes:

            java -version (not --version)
            java -dry (not --dry)

    When you pass --dry to a Java application expecting -dry, it gets ignored. The script executed for real, deleting accounts in production.

    The developer did everything right. Java’s 1995-era parameter parsing turned a simulation into an extinction event.
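
    The silent-ignore failure mode described above is easy to reproduce in a few lines. A minimal sketch (a hypothetical parser for illustration, not AWS's actual tool):

    ```python
    def parse_lenient(argv):
        """Mimics a parser that matches known flags exactly and
        silently drops everything else -- the failure mode above."""
        opts = {"dry": False}
        for arg in argv:
            if arg == "-dry":          # single dash, Java style
                opts["dry"] = True
            # "--dry" matches nothing and falls through unnoticed
        return opts

    def parse_strict(argv):
        """Same parser, but any unknown flag is a hard error."""
        opts = {"dry": False}
        for arg in argv:
            if arg == "-dry":
                opts["dry"] = True
            else:
                raise ValueError(f"unknown option: {arg}")
        return opts

    print(parse_lenient(["--dry"]))   # {'dry': False}: the run is NOT a dry run
    print(parse_strict(["-dry"]))     # {'dry': True}
    ```

    With the lenient version, `--dry` is simply dropped and the script executes for real; the strict version would have aborted before touching anything.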

    • (Score: 5, Interesting) by khallow on Thursday August 07, @02:49PM

      by khallow (3766) Subscriber Badge on Thursday August 07, @02:49PM (#1412735) Journal
      And apparently that in turn may have been a response to a compromise of another Amazon service. From here [seuros.com] (incidentally a follow-up, good-news story about the restoration of Seuros's account):

      About That Discord Contact

      In my original article, I mentioned someone contacted me via Discord, claiming to be an AWS insider. They knew details only AWS and I should have known - my account number, configuration details, even personal information stored in my AWS data. They suggested AWS MENA was running a proof-of-concept on “dormant” accounts that went wrong.

      Now, with my data restored, I have to wonder: Was this person telling the truth? Consider the timing - the same week (July 17-23), Amazon Q, AWS’s AI coding assistant, was compromised with malicious prompts instructing it to “delete file-system and cloud resources.” If rogue actors could inject data-wiping commands into an official AWS tool, what else was happening inside AWS that week?

      The theory about a botched test makes more sense now. If a team screwed up and didn’t have privileges to restore “terminated” instances, that would explain why it took Tarus’s VP-level escalation to bring everything back. The regular support team might have genuinely believed the data was gone - they just didn’t have access to the deeper backup systems.

    • (Score: 5, Insightful) by krishnoid on Thursday August 07, @03:57PM (5 children)

      by krishnoid (1156) on Thursday August 07, @03:57PM (#1412744)

      When you pass --dry to a Java application expecting -dry, it gets ignored.

      I'm coming around to the perspective that command-line tools that mutate/destroy data on their own and don't back up files beforehand should operate in dry-run mode by default unless an explicit --go option is provided.

      As a bonus, running it with no options should just produce a usage message, by default. Even better would be language conventions/extensions that provide a place to keep that usage information, with an intelligent default [wikipedia.org].
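
      A dry-run-by-default tool of this kind takes only a few lines. In this sketch the `--go` flag and the `reap` name are illustrative, not from any real tool:

      ```python
      import argparse

      def run(argv):
          """Destructive CLI that is a no-op unless --go is given,
          and prints usage when called with no arguments at all."""
          parser = argparse.ArgumentParser(prog="reap")
          parser.add_argument("targets", nargs="*", help="accounts to delete")
          parser.add_argument("--go", action="store_true",
                              help="actually delete (default is dry-run)")
          if not argv:
              parser.print_usage()
              return []
          args = parser.parse_args(argv)
          actions = []
          for t in args.targets:
              prefix = "DELETE" if args.go else "[dry-run] would delete"
              actions.append(f"{prefix} {t}")
          return actions

      print(run(["acct-1", "acct-2"]))   # dry-run by default
      print(run(["acct-1", "--go"]))     # explicit opt-in to destruction
      ```

      As a side benefit, argparse also rejects any flag it wasn't told about, so a mistyped option aborts instead of silently running for real.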

      • (Score: 4, Insightful) by aafcac on Thursday August 07, @06:30PM

        by aafcac (17646) on Thursday August 07, @06:30PM (#1412784)

        Absolutely, and when they get arguments that don't exist, they should fail and say so. It's really frustrating how things like this have been known to be a problem for decades and yet keep happening. It's not that hard to reject as invalid any argument with the wrong number of dashes, and things of that nature.
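
        For what it's worth, stock argument-parsing libraries already behave this way; Python's argparse, for example, exits with an error on any flag it wasn't told about:

        ```python
        import argparse

        # argparse accepts only the flags it was given and
        # exits with status 2 on anything unrecognized.
        parser = argparse.ArgumentParser(prog="tool")
        parser.add_argument("-d", "--dry", action="store_true",
                            help="simulate; make no changes")

        print(parser.parse_args(["--dry"]).dry)   # True: both spellings accepted
        print(parser.parse_args(["-d"]).dry)      # True

        try:
            parser.parse_args(["--drg"])          # a typo'd flag
        except SystemExit as err:
            print("rejected, exit status", err.code)
        ```

        The lenient behavior in the AWS story is therefore a choice (or an oversight), not something a language forces on you.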

      • (Score: 2, Interesting) by Anonymous Coward on Thursday August 07, @08:25PM (1 child)

        by Anonymous Coward on Thursday August 07, @08:25PM (#1412810)

        I'm coming around to the perspective that command-line tools that mutate/destroy data on their own and don't back up files beforehand, should operate in dry-run mode by default unless an explicit --go option is provided.

        Even more useful would be if popular operating systems included a real versioning filesystem along the lines of what DEC provided with VAX/VMS back in the 1970s (nowadays known as OpenVMS).

        • (Score: 2) by krishnoid on Thursday August 07, @10:32PM

          by krishnoid (1156) on Thursday August 07, @10:32PM (#1412825)

          I think an automatically versioned filesystem was an operating system concept taught in introductory courses, and ClearCase decided to try to implement it for real as a revision-control system. It also made it possible to version-control directories of binaries like compiler toolchains and libraries, so you could reproduce a build pretty much exactly.

          Linux (and other OSes) have the stackable filesystem in userspace [github.com] mechanism, so you can view/modify Git [github.com] and other version control repositories directly. I don't think anything is mainstream and works as a general filesystem solution, but it would make sense to have one.

      • (Score: 3, Interesting) by driverless on Friday August 08, @01:54AM

        by driverless (4770) on Friday August 08, @01:54AM (#1412849)

        That's how the (admittedly very small amount of) stuff I've created works: the default is to process input and report errors, but to actually make changes you need to feed in a specific do-things-for-real option.

      • (Score: 2) by darkfeline on Saturday August 16, @02:02AM

        by darkfeline (1030) on Saturday August 16, @02:02AM (#1413797) Homepage

        My perspective is that you should always read the documentation for anything that you run. That's something you can control, compared to whinging when some tool doesn't comply with the ideal you want to assert on the world.

        --
        Join the SDF Public Access UNIX System today!
    • (Score: 5, Insightful) by aafcac on Thursday August 07, @06:27PM

      by aafcac (17646) on Thursday August 07, @06:27PM (#1412783)

      This isn't Java's fault; it's the fault of incompetent developers who didn't bother to validate their inputs. It's not Java's fault the developers never checked that everything passed to the program via the command line was what was intended. It's not that hard to verify that the command-line arguments are what you expect and that nothing nefarious is going on.

    • (Score: 4, Insightful) by skaplon on Thursday August 07, @06:37PM (1 child)

      by skaplon (48350) on Thursday August 07, @06:37PM (#1412787)

      It doesn't matter what language the tool is written in; you are responsible for your app's argument parsing. And if it doesn't choke on invalid params, that's also on you.

      • (Score: 2) by aafcac on Thursday August 07, @10:40PM

        by aafcac (17646) on Thursday August 07, @10:40PM (#1412827)

        Yes, if it were a 3rd party Java app, I could sort of understand, but that app still shouldn't be accepting things with malformed arguments.

  • (Score: 4, Insightful) by turgid on Thursday August 07, @04:20PM (3 children)

    by turgid (4318) Subscriber Badge on Thursday August 07, @04:20PM (#1412750) Journal

    Your data is safe, it's always backed up, it can't be hacked and there's a SLA with Terms and Conditions. What could possibly go wrong?

    • (Score: 5, Insightful) by JoeMerchant on Thursday August 07, @06:55PM

      by JoeMerchant (3937) on Thursday August 07, @06:55PM (#1412793)

      When he says:

      >"Best practices" mean nothing when the provider goes rogue

      he should expand his thinking about "Best practices".

      In my world, "Best practices" mean keeping regular backups in offline storage. Maybe dailies from the cloud to a local device, and monthlies or weeklies - depending on the value of the data - being put into devices that are air-gapped (and preferably powered down) when not in active use as one of multiple offline storage devices, preferably at geographically diverse sites - if the data is valuable enough to warrant that.

      I'd trust the provider with a day of changes; it'll take more than a day to bring the system back online when the provider screws up like Amazon did in this case. This won't be the last time.

      If you're the Class Treasurer keeping records for your high school's 5-, 10-, and 20-year reunions... just put that on paper and be done. If there's a fire, there was a fire; they'll understand or they won't, whatever.

      If you're keeping the operational data and development output of a team of 20 with an annual running cost of $4M, currently generating $10M per year in revenue - yeah, that data warrants regular multiple offline backups in diverse geographies, probably diverse national jurisdictions too - if your revenue is coming from multiple countries.

      --
      🌻🌻🌻 [google.com]
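
      The tiered schedule described above (dailies to local disk, weeklies and monthlies air-gapped) can be expressed as a simple rotation policy. A sketch; the tier names and calendar rules are illustrative assumptions, not a prescription:

      ```python
      from datetime import date

      def backup_tier(d: date) -> str:
          """Classify a calendar day into the rotation tier it feeds:
          monthly (air-gapped, offsite) on the 1st, weekly (air-gapped)
          on Sundays, daily (local disk) otherwise."""
          if d.day == 1:
              return "monthly-airgapped-offsite"
          if d.weekday() == 6:  # Sunday
              return "weekly-airgapped"
          return "daily-local"

      print(backup_tier(date(2025, 7, 1)))    # first of the month
      print(backup_tier(date(2025, 7, 6)))    # a Sunday
      print(backup_tier(date(2025, 7, 23)))   # an ordinary weekday
      ```

      The point is that the policy is mechanical: which copies exist, where, and how stale they are is decided in advance, not improvised after the provider deletes your account.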
    • (Score: 4, Insightful) by stormreaver on Friday August 08, @12:35AM

      by stormreaver (5101) on Friday August 08, @12:35AM (#1412839)

      ...it's always backed up...

      Which won't matter, because people will only use it as a secondary source, and will always keep their own backups. Nobody will be stupid enough to use [Someone Else's Computer] as their only datastore.

    • (Score: 5, Insightful) by driverless on Friday August 08, @01:36AM

      by driverless (4770) on Friday August 08, @01:36AM (#1412845)

      I feel a bit bad about being the guy who comes in and says "I toldja so", but:

      The arrangement had worked fine for almost a year—about $200/month for my testing infrastructure.

      When I looked into this, I realised I could pay $50 one-off for a recycled server from eBay and run everything from my basement. No fees, no fighting some cloud provider's we-don't-care-we-have-your-money bureaucracy, and no chance of all the data suddenly vanishing (it's backed up to a NAS). Why pay a cloud hosting provider when you can do it yourself on hardware you control, for a one-off cost less than the monthly rental for the cloud stuff?

  • (Score: 5, Interesting) by Mojibake Tengu on Thursday August 07, @04:50PM (6 children)

    by Mojibake Tengu (8598) on Thursday August 07, @04:50PM (#1412764) Journal

    Cloud is for fools.

    What will happen to your precious data when earthquakes, tsunamis, nukes or even non-nuclear Oreshniks begin to hit datacenters all over the globe next week?

    I tell you: your cloud provider will drop everything of yours immediately in an attempt to make enough space to retain the data of much more important customers...

    Catastrophic failure management always has its priorities. You are just a paying patron who makes the whole strategic infrastructure economically feasible.

    In China, 30% of cloud capacity lies barren, empty. And they still build more and more. They must have a reason for doing that.

    --
    Rust programming language offends both my Intelligence and my Spirit.
    • (Score: 0) by Anonymous Coward on Thursday August 07, @05:56PM

      by Anonymous Coward on Thursday August 07, @05:56PM (#1412777)

      They must have a reason for doing that.

      Or not.

    • (Score: 5, Insightful) by HiThere on Thursday August 07, @06:44PM (2 children)

      by HiThere (866) on Thursday August 07, @06:44PM (#1412790) Journal

      The cloud can be quite useful, but always keep a local backup.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 3, Informative) by aafcac on Thursday August 07, @07:17PM

        by aafcac (17646) on Thursday August 07, @07:17PM (#1412797)

        You can also use something like Tailscale to sshfs-mount the files you're interested in and be free of this sort of 3rd-party issue. That should leave you with a ton of options for how to handle the backups, and it should be relatively easy to switch backup providers if you've got a system in place to guard against bitrot.

      • (Score: 3, Interesting) by driverless on Friday August 08, @01:43AM

        by driverless (4770) on Friday August 08, @01:43AM (#1412847)

        That's exactly how I use it, in theory the cloud data is the primary source and there's a local copy you work on, in practice for me it's the reverse. Since the cloud hosting service is a major target for malicious attackers I'm quite happy to have it that way, if they get in they modify some ephemeral copy, not the master copy.

    • (Score: 3, Interesting) by khallow on Thursday August 07, @07:17PM

      by khallow (3766) Subscriber Badge on Thursday August 07, @07:17PM (#1412796) Journal

      In China, 30% of clouds lays barren, empty. And they still build more and more. They must have a reason for doing that.

      Sounds like they're parting a sucker from his money (which could be the Chinese government BTW). That's the usual reason worldwide for building out stuff that isn't used.

    • (Score: 2) by driverless on Friday August 08, @01:40AM

      by driverless (4770) on Friday August 08, @01:40AM (#1412846)

      or even non-nuclear Oreshniks

      In that specific case it's just another one of Rashka's arsenal of Wunderwaffen that we'll never have to worry about...

  • (Score: 5, Insightful) by Dr Spin on Thursday August 07, @05:04PM (4 children)

    by Dr Spin (5239) on Thursday August 07, @05:04PM (#1412769)

    Us Boomers have been telling you:

    "The cloud" means "someone else's computer"
    You have no idea where it is, what it is, or when it will collapse in a heap on the floor!

    --
    Warning: Opening your mouth may invalidate your brain!
    • (Score: 3, Insightful) by JoeMerchant on Thursday August 07, @06:58PM (2 children)

      by JoeMerchant (3937) on Thursday August 07, @06:58PM (#1412794)

      It doesn't collapse in a heap on your floor, it: goes up in flames / drowns under tsunami / gets blasted by typhoon / blown up by acts of war / wiped by an under-trained employee - in somebody else's facility. If you're outsourcing to the cloud, how many layers are they outsourcing to the lowest bidders?

      --
      🌻🌻🌻 [google.com]
      • (Score: 5, Touché) by DECbot on Thursday August 07, @07:32PM (1 child)

        by DECbot (832) on Thursday August 07, @07:32PM (#1412803) Journal

        You pay extra to never know. It's a feature!

        --
        cats~$ sudo chown -R us /home/base
        • (Score: 3, Interesting) by JoeMerchant on Thursday August 07, @10:40PM

          by JoeMerchant (3937) on Thursday August 07, @10:40PM (#1412826)

          >You pay extra to never know. It's a feature!

          Indeed.

          My first job was 12 years with a semi-retired M.D. Scientist / Researcher / Demigod. In that job, we took academia's rigor of methodology to the next level (when we weren't doing quick and dirty proofs of concept), but... when it was "for real" stuff, we'd take nothing for granted.

          My second job was with a bigger "mover and shaker" company where I learned: the people who really mattered in the company didn't read every e-mail, didn't attend every meeting, certainly didn't pay full attention to the training, etc. because there simply wasn't time in the day to do all that was asked. They moved on momentum, assumed things were O.K. as long as nobody was screaming at them about problems, etc. This was around the time that Enron was getting really famous, and post-mortem I watched "The Smartest Guys in the Room" Enron documentary - it had a lot of parallels with that company. DOGE & friends seem to be trying to bring that mindset to government by pushing it "to 11" and only backing off from the worst of the worst issues they're creating.

          My current (100K+ employees global) employer has been somewhat two-faced for the whole decade+ I have been here. On the one hand they preach "good science, always be sure, follow all procedures" and have a Corporate Mission Statement to match. But, at the same time, they have a "Mindset" guide which preaches things like "Compete to win", "Leverage opportunities" etc. The two aren't exactly in contradiction of each other, but they are very different views on what is supposed to be the company ethic.

          Couple this with AI tools which allow everybody to take a half-assed stab at whatever it is they're asked to do in 1/10th the time it would take to do a traditional "good job" and I think we're in for a lot more people screaming about unacceptable situations in the near future...

          --
          🌻🌻🌻 [google.com]
    • (Score: 1) by Runaway1956 on Friday August 08, @03:26AM

      by Runaway1956 (2926) Subscriber Badge on Friday August 08, @03:26AM (#1412858) Journal

      Yep. If your data is under your own control, this kind of thing can't happen. You can lose a server, of course, but you back stuff up, right? If you replicate your data and your backups to three or more separate locations, the worst disaster possible can't destroy your data.

      From the user's perspective, "The Cloud" is a single point of failure, from which you may never recover.

      --
      “Take me to the Brig. I want to see the “real Marines”. – Major General Chesty Puller, USMC
  • (Score: 5, Informative) by fab23 on Thursday August 07, @08:08PM

    by fab23 (6605) on Thursday August 07, @08:08PM (#1412805) Homepage Journal

    I have read both of his articles in full. As I see it, the issue is that he gave control of his account out of his own hands. But I guess he did not understand the consequences of this.

    He put his own AWS account as a member into another company's AWS Organizations [amazon.com] setup so they could pay his bills. As I understand it, if the so-called management account gets deleted (the reason may not matter), this also deletes all the member accounts, probably because they no longer have any active payment setup.
    A good overview is given in Terminology and concepts for AWS Organizations [amazon.com].

    I had set this up in the company I worked for. We "merged" two independent accounts (one from the company we acquired) as members into an AWS Organization with consolidated billing under a newly created management account. We let our AWS account manager take care of transferring the payment settings (we already paid by invoice and bank transfer) from our own (now merely a member) account into the new management account. With this we had only one invoice per month, and everything is legally owned by the parent company. It also lets discounts and savings plans be shared across all accounts. Over the years we also created several new member accounts out of the management account, which is easy and does not require setting up any payment information at all.

  • (Score: 3, Insightful) by Brymouse on Thursday August 07, @09:28PM

    by Brymouse (11315) on Thursday August 07, @09:28PM (#1412819)

    Remember Parler [wikipedia.org]? They used AWS for hosting, and AWS shut them down within 24 hours.

    insecure.org was deleted by GoDaddy (along with a slew of other sites [wikipedia.org]) with no notice.

    If you have business-critical needs, you need your own servers, IPs, ASN, and redundant service providers. Big companies know this. Facebook knows this; they are even their own domain registrar.
