posted by hubie on Thursday October 03, @09:18PM
from the penny-finally-drops dept.

Arthur T Knackerbracket has processed the following story:

Efficiency and scalability are key benefits of enterprise cloud computing, but they come at a cost. Security threats specific to cloud environments are the leading concern among top executives, and they're also the ones organizations are least prepared to address.

That's according to PwC's latest cybersecurity report, released today, which found that cloud-related threats are the top security concern among business leaders, cited by 42 percent.

Rounding out the top five threats, according to PwC's 4,020 respondents, are hack-and-leak operations (38 percent), third-party breaches (35 percent), attacks on connected products (33 percent), and ransomware (27 percent).

If you've just read that and questioned why ransomware is so low on the list, you might be a CISO. The level of concern about ransomware jumped to 42 percent when analyzing responses from CISOs alone.

[...] All the threats in execs' top five deemed "most concerning" are, perhaps unsurprisingly, also the ones organizations feel least prepared to address, although not quite in the same order.

[...] Of course, it wouldn't be a cybersecurity report in 2024 unless AI got its moment in the spotlight.

Despite generative AI being used for good in many cases, and the majority of organizations (78 percent) increasing their investment in the tech over the past year, it's also the primary contributor to the widening attack surface organizations face.

More than two-thirds of respondents (67 percent) said genAI increased their susceptibility to attacks "slightly" or "significantly" – the biggest increase attributed to any single factor in the past year, although cloud was only narrowly behind at 66 percent.

As a force for good, however, generative AI is being deployed widely across global organizations, supporting key cybersecurity functions such as threat detection and response, and threat intelligence.

"Cybersecurity is predominantly a data science problem," said Mike Elmore, global CISO at GSK. "It's becoming imperative for cyber defenders to leverage the power of generative AI and machine learning to get closer to the data to drive timely and actionable insights that matter the most."

Shockingly, PwC also found that business leaders who have regulatory and legal requirements to improve security do just that.

Indeed, 96 percent said regulations had prompted their organization to improve its security, while 78 percent said the same regs have challenged, improved, or increased their security posture.

[...] "Organizations that embrace regulatory requirements tend to benefit from stronger security frameworks and a more robust posture against emerging threats," read PwC's report. "Compliance shouldn't be viewed as a box-ticking exercise but as an opportunity to build long-term resilience and trust with stakeholders."

These new regulations have also ushered in fresh investment in cybersecurity. Roughly a third of organizations (32 percent) said cyber investment increased to a "large extent" in the past 12 months, another 37 percent said it increased to a "moderate extent," and 14 percent called the increase "significant."


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by drussell on Thursday October 03, @09:25PM (8 children)

    by drussell (2678) on Thursday October 03, @09:25PM (#1375614) Journal

    Despite generative AI being used for good in many cases

    Says WHO?!!

    What metric are they using for that statement?!

    (...presumably bottom-line-to-CxO pay-cheque, or some-such?!!)

  • (Score: 5, Interesting) by JoeMerchant on Thursday October 03, @10:54PM (3 children)

    by JoeMerchant (3937) on Thursday October 03, @10:54PM (#1375623)

    >generative AI being used for good in many cases

    Everyone who's using it to generate their daily (perhaps even hourly) reports instead of typing them out themselves?

    I would hope it's not only being used to generate the trashy "news" articles I have been seeing more and more of. It only seems logical that the next step will be "human readable" status reports generated from the available technical data that upper management refuses to learn how to read.

    Picture: Individual contributors working in issue tracking systems which already have all the data necessary to provide status... They have AI writing status reports to their direct managers so they don't have to bother with that annoyance.

    People managers collect these status reports and use AI to synthesize them into reports for their middle managers at whatever frequency the middle managers feel the current situation requires micromanagement of.

    Middle managers collect these managers' status reports and rinse, lather, repeat up the chain to the CEO.

    The CEO has a "dashboard" that AI focuses on the report information of interest, running real-time rank and yank on those slugs that refused RTO and haven't delivered on this week's stretch goals...

    Kafka would be... stimulated.

    --
    🌻🌻 [google.com]
    • (Score: 1, Touché) by Anonymous Coward on Thursday October 03, @11:58PM (1 child)

      by Anonymous Coward on Thursday October 03, @11:58PM (#1375626)

      "People managers collect these status reports and use AI to synthesize them"

      Every person that tries this should be fired since they don't realize feeding the maw of a technoparrot is a security "what the fuck are you doing"!

      • (Score: 3, Interesting) by JoeMerchant on Friday October 04, @01:10PM

        by JoeMerchant (3937) on Friday October 04, @01:10PM (#1375695)

        Internal information going to external systems should be strictly verboten.

        You can run LLMs completely internally without providing any external outlets for the techno-parrot. It's what all the cool kids are doing.

        With a little thought put into the training set segregation, you could even keep your divisions' information siloed, but that's more of a management style choice than an obvious single answer.

        --
        🌻🌻 [google.com]
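
        As a rough illustration of the fully internal setup described above, here's a minimal sketch in Python. It assumes a local Ollama server listening on localhost:11434 with a model already pulled; the model name, prompt, and helper function are illustrative placeholders, not anything from the comment.

        import requests

        # Loopback-only endpoint: prompts and status notes never leave the machine.
        OLLAMA_URL = "http://localhost:11434/api/generate"

        def summarize_internally(raw_notes: str) -> str:
            payload = {
                "model": "llama3",  # placeholder; any locally pulled model
                "prompt": "Summarize these status notes for management:\n" + raw_notes,
                "stream": False,    # ask for one complete JSON response
            }
            resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
            resp.raise_for_status()
            return resp.json()["response"]

        print(summarize_internally("Ticket 123 blocked on vendor patch; sprint goal at risk."))

        Because the endpoint is loopback-only, nothing gets fed to an external techno-parrot, which is the point.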
    • (Score: 4, Interesting) by Freeman on Friday October 04, @01:47PM

      by Freeman (732) on Friday October 04, @01:47PM (#1375699) Journal

      Using gpt4all and Python, I've been able to cobble together my own offline chat-bot. I also used essentially the same thing to automatically generate stories from a single user input. Something as simple as "Write a story about a goldfish" and it does a "passable" job. Certainly nothing good, but it does the task. That's using a model that runs on CPU only (it takes a good while to actually generate the story, probably at least 15 to 30 minutes for a few paragraphs) and is certainly outdated at this point, though the model, etc. are probably only a few months to a year old. Only a couple of years ago, doing that much would have been an extremely hard thing to do. Now, it's trivial.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
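
      A minimal sketch of the sort of offline generator Freeman describes, using the gpt4all Python bindings (pip install gpt4all). The model filename below is a placeholder; any CPU-capable GGUF model from the GPT4All catalog should work, and everything runs locally.

      from gpt4all import GPT4All

      # Downloads the model on first run, then works fully offline on the CPU.
      model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # placeholder model name

      def write_story(topic: str) -> str:
          # A chat session applies the prompt template the model expects.
          with model.chat_session():
              return model.generate(f"Write a story about {topic}.", max_tokens=400)

      print(write_story("a goldfish"))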
  • (Score: 5, Insightful) by stormwyrm on Friday October 04, @01:16AM (3 children)

    by stormwyrm (717) on Friday October 04, @01:16AM (#1375639) Journal
    Generative AI has its uses. The detection of unusual patterns of network activity that could be signs of an attempted or in-progress attack is using AI neural nets to do what they were designed to do best. The big problem is that generative AI can also be used to automate attacks, including the kinds of social engineering that Kevin Mitnick used to be famous for. Humans are the weakest link in most security systems, and generative AI is gaining the ability to exploit what amount to unpatchable zero-days in every human brainstem.
    --
    Numquam ponenda est pluralitas sine necessitate.
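
    For the defensive use stormwyrm mentions, spotting unusual network activity is often done with classical anomaly-detection models rather than generative ones. A toy sketch with scikit-learn's IsolationForest; the three per-host "flow" features (bytes sent, connection count, distinct ports) are invented for illustration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Train on "normal" traffic: bytes sent, connection count, distinct ports.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[500, 20, 5], scale=[100, 5, 2], size=(1000, 3))
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspect = np.array([[50_000, 300, 60]])  # e.g., a sudden exfiltration burst
    print(model.predict(suspect))            # -1 means flagged as anomalous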
    • (Score: 0) by Anonymous Coward on Friday October 04, @04:23AM

      by Anonymous Coward on Friday October 04, @04:23AM (#1375661)

      unpatchable zero-days in every human brainstem

      It's up to the cortex to build a better firewall. First it has to be turned on.

    • (Score: 4, Interesting) by Rosco P. Coltrane on Friday October 04, @08:24AM (1 child)

      by Rosco P. Coltrane (4757) on Friday October 04, @08:24AM (#1375674)

      Generative AI has its uses.

      You know, yesterday I was shopping for a cheaper source of foot prosthetics and I found this totally depressing image [jdmagicbox.com] at the bottom of this page [justdial.com].

      That's generative AI: an unstoppable tidal wave of approximate, mediocre machine-generated shit that permeates everything in life and makes everything worse. If AI has worthwhile uses, I've yet to see them concretely.

      • (Score: 0) by Anonymous Coward on Friday October 04, @03:46PM

        by Anonymous Coward on Friday October 04, @03:46PM (#1375708)
        The trouble is that few people these days seem to use generative AI with an awareness of its strengths and limitations. For example, can you make ChatGPT say that it doesn't know something? Most of the time, if the training data don't include what is asked, it will just hallucinate something that sounds like it might fit. I don't know why they are spending so much money making the models bigger and bigger while fundamental problems like this seem to get so little attention.