The threat is potentially grave because it could be used in supply-chain attacks:
A maximum-severity vulnerability that allows hackers to hijack GitLab accounts with no user interaction is now under active exploitation, federal officials warned, as data showed that thousands of users had yet to install a patch released in January.
A change GitLab implemented in May 2023 made it possible for users to initiate password changes through links sent to secondary email addresses. The move was designed to permit resets when users didn't have access to the email address used to establish the account. In January, GitLab disclosed that the feature allowed attackers to have reset emails for a victim's account delivered to an address they controlled, click the embedded link, and take over the account.
While exploits require no user interaction, hijackings work only against accounts that aren't configured to use multifactor authentication. Accounts with MFA enabled could still have their passwords reset, but without the second factor the attackers couldn't log in, leaving the rightful owner able to reset the password again. The vulnerability, tracked as CVE-2023-7028, carries a severity rating of 10 out of 10.
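Public write-ups of CVE-2023-7028 describe the exploit as submitting an array of email addresses to the password-reset endpoint, so the reset link for the victim's account is also mailed to an address the attacker controls. A minimal sketch of that flawed pattern and its fix (hypothetical Python, not GitLab's actual code):

    import secrets
    from dataclasses import dataclass, field

    @dataclass
    class Account:
        username: str
        verified_emails: set = field(default_factory=set)

    def make_reset_token(account: Account) -> str:
        return secrets.token_urlsafe(32)  # opaque, single-use reset token

    def request_reset_flawed(account, submitted_emails, send_email):
        # Flaw: the link is mailed to every submitted address, verified or not,
        # so ["victim@example.org", "attacker@example.net"] leaks the token.
        token = make_reset_token(account)
        for addr in submitted_emails:
            send_email(addr, f"Reset link for {account.username}: {token}")

    def request_reset_fixed(account, submitted_emails, send_email):
        # Fix: only addresses already verified for this account get the link.
        token = make_reset_token(account)
        for addr in submitted_emails:
            if addr in account.verified_emails:
                send_email(addr, f"Reset link for {account.username}: {token}")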
On Wednesday, the US Cybersecurity and Infrastructure Security Agency said it is aware of "evidence of active exploitation" and added the vulnerability to its Known Exploited Vulnerabilities catalog. CISA provided no details about the in-the-wild attacks. A GitLab representative declined to provide specifics about the active exploitation of the vulnerability.
The vulnerability, classified as an improper access control flaw, could pose a grave threat. GitLab software typically has access to multiple development environments belonging to users. With the ability to access them and surreptitiously introduce changes, attackers could sabotage projects or plant backdoors that could infect anyone using software built in the compromised environment. An example of a similar supply chain attack is the one that hit SolarWinds in 2020 and pushed malware to more than 18,000 of its customers, 100 of whom received follow-on hacks.
[...] GitLab users should also remember that patching does nothing to secure systems that have already been breached through exploits. GitLab has published incident response guidance.
Related Stories
Surprise surprise, we've done it again. We've demonstrated an ability to compromise significantly sensitive networks, including governments, militaries, space agencies, cyber security companies, supply chains, software development systems and environments, and more:
Arguably still armed with a somewhat inhibited ability to perceive risk and seemingly no fear, in November 2024 we decided to prove out the scenario of a significant Internet-wide supply chain attack caused by abandoned infrastructure. This time, however, we dropped our obsession with expired domains and instead shifted our focus to Amazon's S3 buckets.
It's important to note that although we focused on Amazon's S3 for this endeavour, this research challenge, approach, and theme are cloud-provider agnostic and applicable to any managed storage solution. Amazon's S3 just happened to be the first storage solution we thought of, and we're certain the same challenge would apply to any customer or organization using any storage solution provided by any cloud provider.
The TL;DR is that this time, we ended up discovering ~150 Amazon S3 buckets that had previously been used across commercial and open source software products, governments, and infrastructure deployment/update pipelines - and then abandoned.
Naturally, we registered them, just to see what would happen - "how many people are really trying to request software updates from S3 buckets that appear to have been abandoned months or even years ago?", we naively thought to ourselves.
[...] These S3 buckets received more than 8 million HTTP requests over a two-month period for all sorts of things -
- Software updates,
- Pre-compiled (unsigned!) Windows, Linux and macOS binaries,
- Virtual machine images (?!),
- JavaScript files,
- CloudFormation templates,
- SSLVPN server configurations,
- and more.
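How does a takeover like this work in practice? Whether a bucket name baked into old software is still claimed can be probed with a plain, unauthenticated HTTPS request; a 404 "NoSuchBucket" response means anyone can register the name and start answering those requests. A rough sketch (the bucket name is made up, and this is not the researchers' actual tooling):

    import requests

    def bucket_status(bucket: str) -> str:
        # Probe the bucket's public endpoint; no AWS credentials needed.
        r = requests.get(f"https://{bucket}.s3.amazonaws.com/", timeout=10)
        if r.status_code == 404 and "NoSuchBucket" in r.text:
            return "unclaimed: anyone can re-register this name"
        if r.status_code in (200, 403):
            return "exists: currently owned by someone"
        return f"indeterminate (HTTP {r.status_code})"

    # A bucket name hard-coded into an old update URL:
    print(bucket_status("example-firmware-updates"))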
The article goes on to describe where the requests came from and provides some details on getting the word to the right companies and what actions they took. Originally spotted on Schneier on Security.
Related:
- China's Telco Attacks Mean 'Thousands' Of Boxes Compromised
- Maximum-severity GitLab Flaw Allowing Account Hijacking Under Active Exploitation
- Open Source Software Supply Chain Has Security Risks
(Score: 5, Interesting) by Mojibake Tengu on Wednesday May 08 2024, @12:49PM (17 children)
Distributed version control tools should be practiced as distributed tools, the way they were intended to serve the Bazaar. Integrating them into Cathedral portals is plain stupid.
This incident was inevitable. Repository concentrators are just like control-freak pyramids built of anthills: once a formicide is poured upon them, everything collapses catastrophically, from horizon-spanning monument to dust.
Lessons learned: nothing is technically solved forever. Marketing madness wins over commons wisdom by lures and trinkets every time.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 4, Interesting) by Freeman on Wednesday May 08 2024, @01:45PM (1 child)
Bittorrent is the only practically usable distributed content system I've ever seen. A repository like GitLab/GitHub is, in my mind, just an extension of the Debian/YourLinuxFlavor repository concept. That may not be the best way to do it, but there will always be usefulness in being able to go to one place to get most of the programs you need, as opposed to "just knowing" that VLC exists and needing to go to their website to get the bittorrent link (or whatever replacement) every time you want to install that program. Repeat 100x and that gets old fast.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 4, Interesting) by bloodnok on Wednesday May 08 2024, @04:39PM
Although it's not yet ready for primetime, http://radicle.xyz/ [radicle.xyz] is probably worth keeping an eye on. If nothing else, it's worth taking a look at 'cos it's cool.
The big missing part, last time I looked, is discoverability. The developers claim to be working on it. If it happens, I'll almost certainly try hosting some of my projects using it.
The major
(Score: 5, Insightful) by Ox0000 on Wednesday May 08 2024, @03:27PM (14 children)
I agree with your sentiment that distributed SCC should be distributed.
But this isn't about that.
This is about the ancillary services built _around_ that distributed SCC. This is about the tool that does your issue tracking, where you run your CI/CD, where you define milestones and manage your project, and that, yes, you also happen to have designated as the nexus for sharing code.
If/when you use GitLab, you can continue to code and collaborate and exchange code with others just like you can with 'raw' git. As long as you can reach their repo, you can talk to it (provided you have the requisite permissions as configured on the other end). It's just git. The GitLab instance is just there to act as the designated "this is the shared Truth, this is our anchor point", and in that capacity it is in no way, shape, or form different from treating Linus' repo as the Truth for the Linux kernel (which it effectively is, since he is the one integrating stuff).
So let's be clear: this isn't about distributed source code control, this is about the services _around_ it which GitLab offers.
Can you elaborate on how you would reliably produce builds for a team of distributed contributors, how you would track issues, how you would define releases, how you would roll out code to production in a totally, fully distributed fashion that meets your definition?
You still need a Source of Truth for these things, which means that has to be centralized in some way.
When you use a GitLab or GitHub, you're not going non-distributed (although, granted, some folks do use git in a non-distributed way); all you've done is the equivalent of adding a sticker to the repo up on your service that says "this is the nexus", nothing more, nothing less.
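To make that concrete, here's a minimal sketch (the URLs are hypothetical): even with a GitLab instance designated as the anchor point, any other reachable repository is an equal peer.

    import subprocess

    def git(*args):
        subprocess.run(["git", *args], check=True)

    # Clone from the designated "anchor point"...
    git("clone", "https://gitlab.example.com/team/project.git")
    # ...then treat any other reachable repository as a peer:
    git("-C", "project", "remote", "add", "alice",
        "ssh://alice.example.net/srv/git/project.git")
    git("-C", "project", "fetch", "alice")
    # Merging alice's work needs no central service at all:
    git("-C", "project", "merge", "alice/main")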
This particular issue is _not_ about "Distributed Source Code Control failing because of Centralization". If you think it is, then you did not read or comprehend what this is truly about.
(Score: 2) by darkfeline on Wednesday May 08 2024, @03:47PM (7 children)
Mailing lists, as was the old way.
Join the SDF Public Access UNIX System today!
(Score: 2) by DannyB on Wednesday May 08 2024, @04:02PM (1 child)
The only other time in my life I ever encountered the term Majordomo was in Lost In Space, season 1, episode 27 The Lost Civilization. "Will awakes a sleeping princess whom he now must marry. Her race has been awakened and now plans to conquer the universe starting with Earth."
Why is it so difficult to break a heroine addiction?
(Score: 2) by Rich on Wednesday May 08 2024, @07:19PM
Offtopic wrt the GitLab leak, sorry, but what about Higgins from Magnum PI?
(Score: 3, Touché) by gnuman on Wednesday May 08 2024, @05:25PM
It's still mailing lists all the way down, with a UI on top for better archiving and threading ;)
(Score: 3, Insightful) by Ox0000 on Wednesday May 08 2024, @05:55PM (3 children)
That's for sending patches around, which is something git itself handles already. That's part of its distributed nature.
How do you do a product build and binary publication through a mailing list? How do you effectively keep track of the work items, issues, and bugs with a mailing list, including their morphing state over time?
Where will you ensure your full CI has run and reported success? Your local box?
(Score: 3, Insightful) by JoeMerchant on Wednesday May 08 2024, @08:16PM (2 children)
>Where will you ensure your full CI has run and reported success? Your local box?
Your local box today has more power and capability than a $50K server farm from 20 years ago.
Your code monkeys, meanwhile, haven't gotten much more productive than they ever were.
🌻🌻🌻 [google.com]
(Score: 2) by Ox0000 on Wednesday May 08 2024, @11:31PM (1 child)
My glib reference to "your local box" was not intended as a discussion starter about "does it have the resources to do the job". "My box" would be classified as a supercomputer going only 20 years back in time, but that's not the point.
It was more related to the reliability of your local box in terms of validation, reproducible builds, security, etc...
But if you really want to talk about "your box": How long does it take your box to build the kernel and run the full, complete suite of tests on it? What about doing the same for multiple architectures?
And once built, how do I know that your local box isn't compromised and (intentionally or unintentionally) injected some backdoor into what it builds? How do I audit "your box"?
How do I guarantee that the tests you run on your "local box" don't happen to all pass because of some weird environmental configuration that exists only on "your box" but will cause failures in the real world, where the configuration is slightly different?
It's not about "powah" of the box. It's about provenance. I know nothing about "your box", but using a centralized CI/CD(*) I can create and recreate the exact same, standardized environment every single time I do a build, I run tests, I integrate things. And I can audit and inspect that both during the build and after the fact. Not only that, but everyone can use the exact same definitions, and everyone uses the exact same way of building, up to and including the environment in which they build(**).
"Your box" is meaningless to me. "Your box" means "sod _your_ failures, it works on _my_ machine".
(*) granted, that is a high value target, so you have to put defense-in-depth in place.
(**) don't tell me your CI/CD looks like a bunch of Vagrant scripts...
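One way to make "the exact same build every single time" checkable (a sketch, assuming you have artifacts from two independently provisioned builds): hash them and compare.

    import hashlib
    import sys

    def digest(path: str) -> str:
        # Stream the file so large artifacts don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Usage: python check_repro.py build_a/app.bin build_b/app.bin
    a, b = sys.argv[1], sys.argv[2]
    print("reproducible" if digest(a) == digest(b)
          else "builds differ -- audit the environment")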
(Score: 2) by JoeMerchant on Thursday May 09 2024, @01:28AM
>your local box in terms of validation, reproducible builds, security, etc...
My paying gig of the past 10+ years has been developing for a particular target piece of hardware with only two or three minor variants (so far), so yeah I've got a little skewed perspective, but.... for people who are into such things "your local box" can spin up all kinds of VMs emulating various target environments - each created by a script checked into the source repo.
In '06 I got a gig building for Apple, or maybe PC, and discovered Qt. I was working from a MacBook, targeting MacPros, but the real point was the ability to hop to Windows should the need arise, which it did in '07. For another couple of years, I wrote code on that Mac and targeted Windows machines, and yes: sometimes it worked on one and not the other, usually on my Mac but not the PC. But in virtually every case that came up, examination of the code showed I was playing fast and loose somewhere and just getting away with it in my dev environment; the other environment revealed my slap-dashery. That diversity is more helpful in developing reliable code than it is frustrating due to "well, it works on my desk" failures.
I have encountered more than one cross platform project where different groups take responsibility for testing various subsets of the target platforms, so no one group is overwhelmed with 6 week build and test times. Again, for the people writing the code, if they can meet that diversity challenge with a single set of instructions instead of bunch of #ifdef WIN #else ifdef UBUNTU #else ifdef MAC garbage, that core code base that works on all platforms is going to be stronger for having met that challenge - and this is also where you get into API/libraries like Qt which hide all the necessary specialization for the various platforms.
>How do I guarantee that the tests that you run on your "local box" don't happen to all pass because some weird environmental configuration that exists only on "your box" but will cause failures in the real world because they have a slightly different configuration?
You don't, which is why you put it "out there" for others to test. Although, I must admit: while dedicated test teams sound like an excellent idea, I have definitely experienced situations where relying on dedicated testers only ensures that the test protocols pass, and the test protocols tend not to be nearly as challenging as real-world applications.
>How do I audit "your box"?
You don't, which is why the source should show you how to build it yourself.
>using a centralized CI/CD(*) I can create and recreate the exact same, standardized environment every single time I do a build, I run tests, I integrate things.
So now you want me to trust "your box"? I want ALL the source, including the toolchain, to be open so I can build it myself - preferably with a single click, but definitely with less than a couple of pages of crystal-clear, explicit instructions. Now, there are still proprietary bottlenecks in that philosophy for many projects, but if you work toward using open tools as a goal, it's usually not that hard to achieve.
>everyone uses the exact same way of building up to and including the environment in which they build(**).
This is a big part of why I advocate "build on the target" systems - the project code shows how to create the target(s), so there's no cross-compilation craziness entering the equation. Yes: cross compiling works, but if you have a reasonably competent target, why would you ever?
🌻🌻🌻 [google.com]
(Score: 2) by JoeMerchant on Wednesday May 08 2024, @08:09PM
> does your issue tracking, where you run your CI/CD, where you define milestones and manage your project
That would be YOUR issue tracking, locally hosted one would hope. I set up a trac instance on a Debian box in 2006, used daily for two years without a reboot; we only took it down because we were physically moving the server. As I moved from company to company, setting up a trac server was one of the first things I generally did upon arrival; it takes less than a day when you don't know what you are doing, a couple of hours with practice. Even if your "local" host is on a cloud service somewhere, it shouldn't be a central point of failure for your projects and millions of others.
Ditto where you run YOUR CI/CD. If you've got an open project with any significant value one would hope that multiple sites around the globe would run simultaneous CI/CD for reliability - though I know this is still exceedingly rare. Still, it should be locally controlled servers.
Where you define milestones and manage YOUR project. Another central point, per project not per planet.
🌻🌻🌻 [google.com]
(Score: 3, Interesting) by JoeMerchant on Wednesday May 08 2024, @08:12PM (1 child)
>Can you elaborate on how you would reliably produce builds for a team of distributed contributors
The toolchain setup should be a part of the source repository. Start with a generic base OS (VM or bare metal) and run the toolchain setup script on it: et voilà, one (more) standard build server made to order.
Some contributors would run web-visible CI/CD systems, more would run the systems locally in their network without exposing them to the outside.
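A toy version of what such a repo-pinned toolchain check might look like (the manifest format and version pins are made up for illustration): compare the tools on the build host against versions recorded in the source tree before building.

    import shutil
    import subprocess

    # Hypothetical manifest, checked into the source repo next to the code;
    # each value is matched as a substring of the tool's --version banner.
    PINNED = {"gcc": " 12.", "cmake": " 3.27."}

    def version_banner(tool: str) -> str:
        if shutil.which(tool) is None:
            return ""
        out = subprocess.run([tool, "--version"], capture_output=True, text=True)
        return out.stdout.splitlines()[0] if out.stdout else ""

    for tool, pin in PINNED.items():
        banner = version_banner(tool)
        status = "ok" if pin in banner else "MISMATCH"
        print(f"{tool}: {status} ({banner or 'not installed'})")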
🌻🌻🌻 [google.com]
(Score: 4, Informative) by vux984 on Wednesday May 08 2024, @10:49PM
To be fair, that only works in some cases.
I've worked on a few projects with proprietary pieces in the build tooling: pieces that needed license/activation and couldn't be stood up by a toolchain setup script or deployed at will wherever.
I've worked on many projects that use code signing and signing keys, and even if you could distribute generic build toolchains for test builds, anything headed for release or production would need to go through centralized signing. The code signing we do now requires a hardware token and MFA; that CI/CD is not getting distributed.
(Score: 2) by JoeMerchant on Wednesday May 08 2024, @08:14PM (1 child)
> a Source of Truth for these things, which means that has to be centralized in some way.
Got git? Got a copy of the repo (from wherever)? Check the hash of what you've cloned / checked out; if it matches "the Source of Truth," you've got the truth.
Only trust Linus? Then only build from tags which he has cryptographically signed.
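A sketch of that check (assuming git and gpg are installed and the signer's public key is already in your keyring; the repo path and tag are examples):

    import subprocess

    def verify_tag(repo: str, tag: str) -> bool:
        # 'git verify-tag' exits nonzero unless the tag carries a valid
        # signature from a key present in your gpg keyring.
        result = subprocess.run(
            ["git", "-C", repo, "verify-tag", tag],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if verify_tag("linux", "v6.9"):
        print("signature checks out; safe to build from this tag")
    else:
        print("unsigned or untrusted tag; do not build")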
🌻🌻🌻 [google.com]
(Score: 2, Touché) by Anonymous Coward on Wednesday May 08 2024, @11:19PM
Your vibes send a very strong "everything I do is bespoke" signal.
I've worked with people that behave this way and, inevitably, these types of folk turn all their talk of distributed systems into what is effectively a highly concentrated, highly centralized one. Only the centralization is on their own, individual, single person.
I don't think I'd allow you anywhere close to my code bases, and similarly, I don't think you'd get along with folks like me either, so ... win-win?
(Score: 2) by therainingmonkey on Thursday May 09 2024, @04:12PM