
posted by janrinok on Sunday December 04, @12:34AM   Printer-friendly

Intruders Gain Access to User Data in LastPass Incident

The password manager says credentials safely encrypted, confirms link to August attack:

Intruders broke into a third-party cloud storage service LastPass shares with affiliate company GoTo and gained access to "certain elements" of customers' information, the pair have confirmed.

LastPass did not define what it meant by "certain elements," saying it was unsure what data was looked at: "We are working diligently to understand the scope of the incident and identify what specific information has been accessed this morning."

[...] It did maintain, however, that services were unaffected and that customers' passwords remained "safely encrypted" – without ruling out that some of the data was stolen. The company is known to use a one-way salted hash for master passwords, with a fuller description in this technical whitepaper. The master passwords are used to lock users' password vaults, where their logins for various websites etc. can be stored, with the passphrase only ever entered by the user on their browser or app and not sent to or stored by LastPass.

Users who lose their master passwords can lose access to their vaults, although there are some recovery options.
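The salted one-way hash scheme described above can be sketched with Python's standard library. This is an illustrative sketch, not LastPass's actual implementation: the PBKDF2-HMAC-SHA256 construction and the iteration count here are assumptions based on the whitepaper's general description, and the parameter values are placeholders.

```python
import hashlib
import hmac
import os

def hash_master_password(password, salt=None, iterations=100_100):
    """One-way salted hash: the server stores only (salt, iterations, digest),
    never the master password itself."""
    if salt is None:
        salt = os.urandom(16)  # random per-user salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_master_password(password, salt, iterations, digest):
    """Re-derive the hash from the candidate password and compare in
    constant time; the stored digest cannot be reversed to the password."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Because the hash is one-way, a stolen digest only allows offline guessing, which the high iteration count is meant to slow down.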

LastPass Security Breach Worse Than Initially Reported

[...] In a blog post dated November 30th, LastPass CEO Karim Toubba informed customers that “an unauthorized party ... was able to gain access to certain elements of our customer's information." The CEO didn't specify what type of information was compromised in the blog post. However, he assured customers that their passwords were safe as the company's Zero Knowledge architecture protects them.

The Zero Knowledge technology employed by LastPass means that no plain-text passwords are stored on company servers and that only customers can access their unencrypted passwords.
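The "Zero Knowledge" idea can be sketched as follows: the client derives the vault key from the master password locally, and the server only ever receives a separately derived authentication hash plus ciphertext. Everything here is an illustrative assumption about the general pattern, not LastPass's real code; in particular, the SHA-256 counter keystream is a stdlib-only stand-in for a real cipher such as AES-256 and must not be used for actual encryption.

```python
import hashlib

def derive_keys(master_password, salt, iterations=100_100):
    # Vault key: derived client-side and never sent to the server.
    vault_key = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations)
    # Auth hash: one further derivation over the vault key; this is all
    # the server sees, so it cannot recover the vault key from it.
    auth_hash = hashlib.pbkdf2_hmac(
        "sha256", vault_key, master_password.encode(), 1)
    return vault_key, auth_hash

def toy_encrypt(vault_key, data):
    """XOR the data with a SHA-256 counter keystream. Toy cipher for
    illustration only; applying it twice decrypts (XOR is symmetric)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        stream = hashlib.sha256(vault_key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ s for b, s in zip(data[offset:offset + 32], stream))
    return bytes(out)
```

Under this pattern, a server-side breach exposes only ciphertext and auth hashes, which is why a strong master password still matters: it is the only input an attacker would need to guess offline.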

[...] Toubba explained that while customer data was not accessed during the August attack, information that the hackers obtained was subsequently used to get customer info. The CEO went on to assure his client base that the company is working hard to understand the full scope of the breach and is deploying enhanced security measures and closely monitoring for any further attacks.

The admission is surely an embarrassment for LastPass, but it’s not the first time in recent memory the company has suffered a massive security breach. Less than a year ago, the company suffered a brute-force attack from hackers, causing a slew of unauthorized login attempt notifications to go out to many of its customers.

Original Submission #1 | Original Submission #2

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Touché) by Snospar on Sunday December 04, @02:56AM (1 child)

    by Snospar (5366) Subscriber Badge on Sunday December 04, @02:56AM (#1281087)

    People who trust a cloud based service to manage their passwords are playing with fire. When security starts to seem easy then you must assume that "safe" has been thrown away; not always on purpose.

    • (Score: 2) by helel on Sunday December 04, @03:58PM

      by helel (2949) on Sunday December 04, @03:58PM (#1281147)

      To play devil's advocate here: for most people the alternative is simply to use their email as the single login for every site, since every access necessitates a password reset. That's a far, far worse system.

      Republican Patriotism
  • (Score: 3, Touché) by stretch611 on Sunday December 04, @11:07AM (3 children)

    by stretch611 (6199) on Sunday December 04, @11:07AM (#1281120)

    I submitted this story on December 1st. No, I really don't care about mine not getting selected... it's not like I need the 3 Karma points.

    What I care about is that sometime after I submitted it, the site went down and apparently took the database with it. When the SN site did come back up, 10 days of articles, posts, etc. were missing. AND NOT A SINGLE SITE-META ARTICLE about it.

    Listen, the majority of people on this site are (or were) in the tech industry. I understand this stuff happens, and I suspect many of the others do as well.
    I understand that you are working on the site gratis... and that all the money you collect goes to operating expenses.
    And based on recent META articles, the site is in dire need of upgrading, which is going on now. (Also through unpaid donated hours.)

    But one thing I did learn while maintaining multiple web applications is that when you make a code change, you dump and back up the database first, because you never know which code change will screw up the data, and then you'll be praying for that backup.
    Now, I don't know whether the problem was a code change or something else... but not even a weekly automated backup? (Let alone daily.) I know it takes resources, but backups (depending on the database) can be zipped to save space, especially ones that are mostly text-based content. (After a few weeks you can safely delete the daily backups, and thin the monthlies down to yearlies to save space on really old backups.)
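    The daily/weekly/monthly retention scheme described above can be sketched in a few lines. This is a hypothetical policy for illustration, not SN's actual setup; the dump command in the comment is likewise an assumption about how a MySQL-backed site might do it.

```python
from datetime import date

# Hypothetical retention policy: keep dailies for two weeks, Sunday (weekly)
# backups for eight weeks, and first-of-month backups indefinitely.
# The dump step itself would be something along the lines of:
#   mysqldump --single-transaction sitedb | gzip > sitedb-2022-12-04.sql.gz

def backups_to_keep(backup_dates, today, daily_days=14, weekly_weeks=8):
    """Return the subset of backup dates the policy retains."""
    keep = set()
    for d in backup_dates:
        age = (today - d).days
        if age <= daily_days:
            keep.add(d)  # recent daily backup
        elif d.weekday() == 6 and age <= weekly_weeks * 7:
            keep.add(d)  # weekly (Sunday) backup
        elif d.day == 1:
            keep.add(d)  # monthly backup, kept long-term
    return keep
```

    A nightly cron job would dump, compress, then delete anything not returned by `backups_to_keep`, which keeps storage bounded while guaranteeing a recovery point no older than one day.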

    And you had to go back to a copy from 10 days prior???

    But what really gets me is that you (SN) said nothing. Did you expect the site users not to notice? It does not take much to keep us informed. All it takes is one article and a quick explanation... "Sorry all, the main server died; due to server transitions only a 10-day-old backup was available." That simple message saves a lot of face when it comes down to it.

    Again, sorry for hijacking this article... but I really do not know where else I can post this. I don't have a journal that anyone would read, and I think that the admins/editors really should be making that meta article... it's not like I should submit this quick rant as an article.

    (And before I'm told to help instead of complaining about the site... I was a developer for 25+ years; however, I never learned Perl, and the big thing is that for personal reasons that I won't discuss in a public forum, I can no longer do this work.)

    Now with 5 covid vaccine shots/boosters altering my DNA :P
    • (Score: 2) by fliptop on Sunday December 04, @02:34PM

      by fliptop (1666) on Sunday December 04, @02:34PM (#1281135) Journal

      what really gets me is that you (SN) said nothing

      I would guess the reason for this is the principals involved were up all night getting the MySQL cluster fixed. The IRC channel generally has up-to-date info about the site when it seems like it's down.

      To be oneself, and unafraid whether right or wrong, is more admirable than the easy cowardice of surrender to conformity
    • (Score: 2) by Gaaark on Sunday December 04, @03:28PM

      by Gaaark (41) Subscriber Badge on Sunday December 04, @03:28PM (#1281142) Journal

      I'm guessing when the sh*t stops hitting the cooling fans, there will be an update.

      (And before I'm told to help instead of complaining about the site... I was a developer for 25+ years; however, I never learned Perl, and the big thing is that for personal reasons that I won't discuss in a public forum, I can no longer do this work.)

      I don't help anymore, in any way (I haven't submitted an article in quite a while, due to lack of time and lack of sleep (a 'my son' issue)), except monetarily, the same as you. Please show patience: these people (editors, coders, etc.) have real-life jobs and real-life issues just like you and I, and are probably fighting burnout all the time, AND they come on here working with a lack of sleep and sometimes a lack of support.

      Please, please show patience

      Trying to be nice here, but... if you can't help, at least don't hurt.

      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 2) by Sjolfr on Sunday December 04, @07:54PM

      by Sjolfr (17977) on Sunday December 04, @07:54PM (#1281179)

      when you make a code change you dump and backup the database

      From my experience, failing to do so at some point in the change process becomes an inevitability, especially when you're making changes all alone, making a lot of changes, and lacking the money/resources for multiple environments in which to test changes before eventually pushing them into production. This challenge grows exponentially when you are also maintaining multiple levels of the stack: hardware, OS, DB, clustering, networking, application, etc. In complex environments even the firmware/BIOS levels are points of failure.

      A lot of folks take these things for granted because, in and of themselves, they aren't that tough to manage. Just remember that a system's complexity grows at a rate greater than the sum of its parts, even more so when updates and maintenance have fallen behind. Don't even get me started on that topic.

      Then add on the expectation that people have regarding documenting and communicating all the changes, all the challenges, and all the failures to keep things perfect, and you have what some would say is a perfect storm of "why the fuck am I doing this". Spending 30 minutes to write some update communications isn't that hard, until you've spent 2 days making changes and are tired as hell. It's the breeding ground for memes like "No I will not fix your computer" and "Leave me alone or I will turn you into a very small shell script". For me it was part of the cost of working in the basement and making sure core infrastructure just worked instead of sitting in useless meetings.

      Yup, frustrating and worth voicing ... but cut some slack too, no one is perfect.