posted by hubie on Friday January 09, @05:36PM

https://www.theregister.com/2026/01/04/ai_agents_insider_threats_panw/

AI agents represent the new insider threat to companies in 2026, according to Palo Alto Networks Chief Security Intel Officer Wendi Whitmore, and this poses several challenges to executives tasked with securing the expected surge in autonomous agents.

"The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this massive amount of pressure - and massive workload - that the teams are under to quickly go through procurement processes, security checks, and understand if the new AI applications are secure enough for the use cases that these organizations have," Whitmore told The Register.

"And that's created this concept of the AI agent itself becoming the new insider threat," she added.

According to Gartner's estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. This surge presents a double-edged sword, Whitmore said in an interview and predictions report.

On one hand, AI agents can help fill the ongoing cyber-skills gap that has plagued security teams for years, doing things like correcting buggy code, automating log scans and alert triage, and rapidly blocking security threats.

"When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation," Whitmore said.

[...] One of the risks stems from the "superuser problem," Whitmore explained. This occurs when the autonomous agents are granted broad permissions, creating a "superuser" that can chain together access to sensitive applications and resources without security teams' knowledge or approval.

"It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans," Whitmore said.

"The second area is one we haven't seen in investigations yet," she continued. "But while we're on the predictions lens, I see this concept of a doppelganger."

This involves using task-specific AI agents to approve transactions or review and sign off on contracts that would otherwise require C-suite level manual approvals.

[...] By using a "single, well-crafted prompt injection or by exploiting a 'tool misuse' vulnerability," adversaries now "have an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database," according to Palo Alto Networks' 2026 predictions.

This also illustrates the ongoing threat of prompt-injection attacks. This year, researchers have repeatedly shown prompt-injection attacks to be a real problem, with no fix in sight.

"It's probably going to get a lot worse before it gets better," Whitmore said, referring to prompt-injection. "Meaning, I just don't think we have these systems locked down enough."

[...] "Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller," Whitmore said. "They want to dump Active Directory credentials, they want to elevate privileges. We don't see that as much now. What we're seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf."

Whitmore, along with just about every other cyber exec The Register has spoken with over the past couple of months, pointed to the "Anthropic attack" as an example.

She's referring to the September digital break-ins at multiple high-profile companies and government organizations later documented by Anthropic. Chinese cyberspies used the company's Claude Code AI tool to automate intel-gathering attacks, and in some cases they succeeded.

While Whitmore doesn't anticipate AI agents to carry out any fully autonomous attacks this year, she does expect AI to be a force multiplier for network intruders. "You're going to see these really small teams almost have the capability of big armies," she said. "They can now leverage AI capabilities to do so much more of the work that previously they would have had to have a much larger team to execute against."

Whitmore likens the current AI boom to the cloud migration that happened two decades ago. "The biggest breaches that happened in cloud environments weren't because they were using the cloud, but because they were targeting insecure deployments of cloud configurations," she said. "We're really seeing a lot of identical indicators when it comes to AI adoption."

For CISOs, this means establishing best practices when it comes to AI identities and provisioning agents and other AI-based systems with access controls that limit them to only data and applications that are needed to perform their specific tasks.

"We need to provision them with least-possible access and have controls set up so that we can quickly detect if an agent does go rogue," Whitmore said.


Original Submission

  • (Score: 3, Informative) by looorg on Friday January 09, @06:11PM

    by looorg (578) on Friday January 09, @06:11PM (#1429230)

    Hopefully it's not AI alone but an AI agent in concert with a clueless (or malevolent) employee, who just enters, or lets it scan, whatever corporate secrets and documents you have into whatever AI agent or chatbot there is to do their work for them.

  • (Score: 4, Insightful) by VLM on Friday January 09, @06:12PM (4 children)

    by VLM (445) Subscriber Badge on Friday January 09, @06:12PM (#1429231)

    A pretty good article. Could be better: it misses the competency crisis. Fire everyone capable of doing hard work and replace them with AI.

    What happens when it breaks or is broken into or creates a security incident and everyone competent enough to detect it and fix it was fired to save money? Crickets.

    But I'm sure nothing bad will ever happen, as we all know computers are inherently bug free.

    This also creates a totally new MITM attack: what happens to a company, organization, or government dept that gets MITMed and, being inherently in over their head from the use of AI, can't even tell?

    "Well I donno I'm just a helpdesk jockey who mostly hands out replacement mice and keyboards, and I'm the most technical employee still employed here, so I clicked 'ok' on the SSL certificate error to make the AI MCP connection thingie start working again, I have no idea what that means or what happened."

    • (Score: 3, Insightful) by Snotnose on Saturday January 10, @12:05AM (3 children)

      by Snotnose (1623) Subscriber Badge on Saturday January 10, @12:05AM (#1429289)

      What happens when it breaks or is broken into or creates a security incident and everyone competent enough to detect it and fix it was fired to save money? Crickets.

      This is what I'm waiting for. Every time some prompt engineer gets something that looks like it works and moves on, they've just added code that is non-optimal, probably buggy, and that nobody understands. A year or three of this and the house of cards will fall down. This is called technical debt, and it's gonna hit hard because companies have fired all the expensive engineers and hired more cheap prompt jockeys.

      --
      Recent research has shown that 1 out of 3 Trump supporters is as stupid as the other 2.
      • (Score: 2) by VLM on Saturday January 10, @05:10PM (2 children)

        by VLM (445) Subscriber Badge on Saturday January 10, @05:10PM (#1429390)

        nobody understands

        This part is very rough. I wrote it once; I can figure it out again, faster, the second time and fix my bug. My coworker went to a different school for CS, but it's accredited like mine and he knows "Codd Normal Form" or "SOLID" or whatever as well as I do, so I can figure out what he did and then fix his bug. No biggie, all in a day's work. We spend much more time doing rework, "fixing" and "improving" stuff, than making new undebugged code.

        Fire all those people who can figure out what they or their coworkers did, and replace them with prompt jockeys, then they get in over their head, and ... close the company? Give up? Contract me for $250/hr, 3hr minimum, plus overtime? I mean, I kinda like the latter a little, but it's not good for their company to keep calling me. I should charge more for working with AI code because it's usually so shitty and inconsistent compared to human code. Sure, for $400/hr I'll find your null pointer, but it's going to take a while.

        • (Score: 2) by VLM on Saturday January 10, @05:17PM (1 child)

          by VLM (445) Subscriber Badge on Saturday January 10, @05:17PM (#1429392)

          because its usually so shitty and inconsistent compared to human code

          To expand on that, I have seen some stuff and prompt jockeys seem to think project-wide architecture is just concatenating many prompts together.

          Then they get baffled when the result of prompt 7 emits a tuple and prompt 42 has a hidden(ish) assumption the tuple is immutable and the input of prompt 13 expects a mutable list and things turn into a sorcerer's apprentice circus.

          Software is slow because it doesn't work unless you understand it all, all at the same time, or at least have a common arch or style. There have been attempts at hacking around that for humans, mostly involving enormous amounts of documentation, but it never works IRL. Prompt jockeys seem to think the only large-scale project architecture is a very long linear assembly line, and they get really weirded out when they discover global state exists, or flowcharts that aren't a simple straight line, or parallelism/concurrency.

          I ran into a guy once who didn't understand concurrency at all and basically tried to free a malloc twice in two separate(ish) threads, and that's the type of dude who just enters prompts and cuts and pastes all day. Ooof. Well, it's profitable for me as an individual, I guess.
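The interface mismatch described above can be sketched in a few lines of Python (function names invented): one generated function returns a tuple while the next assumes it can mutate the value in place.

```python
# Sketch of the mismatch: "prompt 7" emits a tuple, "prompt 13" expects a
# mutable list, and neither prompt author ever saw the other's assumption.
# parse_record and normalize are hypothetical names.

def parse_record(line: str):
    """'Prompt 7': splits a CSV line and returns an immutable tuple."""
    return tuple(line.split(","))

def normalize(record):
    """'Prompt 13': strips fields in place, assuming a mutable list."""
    record[0] = record[0].strip()  # raises TypeError on a tuple
    return record

rec = parse_record(" alice ,admin")
try:
    normalize(rec)
except TypeError as e:
    print("interface mismatch:", e)  # tuples don't support item assignment
```

With a shared architecture (or even a shared type annotation) the mismatch would be caught at design time; stitched-together prompts only discover it at runtime.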

  • (Score: 1) by khallow on Friday January 09, @06:50PM (4 children)

    by khallow (3766) Subscriber Badge on Friday January 09, @06:50PM (#1429240) Journal
    In addition to the fine set of problems already discussed, there's the matter of knowledge flow back to the AI. What happens if that knowledge accumulates on the AI side and then someone with the right authority types in: "Summarize company X's activities with you. In particular, list all activities that are classified as secret, restricted, and confidential as well as any bank or financial information associated with these activities."
    • (Score: 3, Informative) by ikanreed on Friday January 09, @07:01PM (1 child)

      by ikanreed (3164) on Friday January 09, @07:01PM (#1429245) Journal

      Luckily for them, that's not how LLMs work.

      They have a state from previously entered text, but it quickly loses any meaningful information as the context window slides forward.

      They have training data, but that's separate from how they're used.

      • (Score: 0, Offtopic) by khallow on Friday January 09, @07:38PM

        by khallow (3766) Subscriber Badge on Friday January 09, @07:38PM (#1429251) Journal
        Where is the LLM located? If it's a local copy without access to the greater world, then you would be right. When it's an external product, then data flow to the outside world happens and thus someone can incorporate the customer-side data into some LLM's training data (not necessarily the same LLM that the customer is using), among other things.
    • (Score: 4, Interesting) by looorg on Friday January 09, @08:11PM (1 child)

      by looorg (578) on Friday January 09, @08:11PM (#1429257)

      I always wonder if that thing isn't already available in all the various online format converters -- the ones that turn your doc/xls/ppt/whatever into PDF and the reverse, or convert your MS Access db into some other db format or into CSV files. Sure, they claim they never save anything for longer than it takes to convert. Most of it will probably be crap and harmless data or documents. But for some clients, or if they come from the right IPs or organizations, they might be good to keep a "backup" of.

  • (Score: 2) by bmimatt on Friday January 09, @09:05PM

    by bmimatt (5050) on Friday January 09, @09:05PM (#1429268)

    Least privilege is hardly a new concept in computing. This is just noise. Why? Because if you are not applying basic security practices to your product, you are creating future problems for yourself. I do not know it for a fact, but I'd bet the products in question have some degree of privilege separation.

  • (Score: 5, Insightful) by Rosco P. Coltrane on Friday January 09, @09:40PM

    by Rosco P. Coltrane (4757) on Friday January 09, @09:40PM (#1429275)

    The IT and security crowd is concerned about letting in AI agents, but they have no trouble deploying Windows, Office 365, Teams and the likes?

    Those products are trojans putting employees under heavy surveillance and exfiltrating personal and business data on a massive scale, but somehow those are fine?

    All the computer security folks need to do is wait long enough to internalize, normalize and ultimately get lulled into a false sense of security with AI like they did with Microsoft products, and their concerns will disappear.
