posted by janrinok on Monday October 27, @04:41PM   Printer-friendly

TechCrunch

New AI-powered web browsers such as OpenAI's ChatGPT Atlas and Perplexity's Comet are trying to unseat Google Chrome as the front door to the internet for billions of users. A key selling point of these products is their web browsing AI agents, which promise to complete tasks on a user's behalf by clicking around on websites and filling out forms.

But consumers may not be aware of the major risks to user privacy that come along with agentic browsing, a problem that the entire tech industry is trying to grapple with.

Cybersecurity experts who spoke to TechCrunch say AI browser agents pose a greater risk to user privacy than traditional browsers do. They say consumers should consider how much access they give web browsing AI agents, and whether the purported benefits outweigh the risks.

[...] There are a few practical ways users can protect themselves while using AI browsers. Rachel Tobac, CEO of the security awareness training firm SocialProof Security, tells TechCrunch that user credentials for AI browsers are likely to become a new target for attackers. She says users should ensure they're using unique passwords and multi-factor authentication for these accounts to protect them.

Tobac also recommends that users consider limiting what these early versions of ChatGPT Atlas and Comet can access, and siloing them from sensitive accounts related to banking, health, and personal information. Security around these tools will likely improve as they mature, and Tobac recommends waiting before giving them broad control.

Based on these concerns, would you use such browsers?


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by krishnoid on Monday October 27, @05:07PM (3 children)

    by krishnoid (1156) on Monday October 27, @05:07PM (#1422491)

    Google has been working on Chrome from Chromium since ~2010, and it's seen continuous development through multiple web standards/revisions from at least one internal group through that entire period. Google's products have to run on a rock-solid browser, and Chrome and others have, if you will, tempered their browsers in the furnace of a wide range of hourly public use for over a decade. Oh yeah, and the DevTools part that could easily be a for-cost addon for developers.

    ChatGPT is taking some browser tech and shoehorning their product into it, rather than sandboxing it as a plugin, with no indication that they'll dedicate resources to supporting it in the future. Which part of that should I trust?

    • (Score: 1, Interesting) by Anonymous Coward on Tuesday October 28, @03:20AM (2 children)

      by Anonymous Coward on Tuesday October 28, @03:20AM (#1422569)

      You're not wrong about all that, but google's products have to run on whatever browser advertisers feel like targeting. "Rock-solid" is at the option of the sheep.

      • (Score: 3, Touché) by krishnoid on Tuesday October 28, @05:24AM (1 child)

        by krishnoid (1156) on Tuesday October 28, @05:24AM (#1422573)

        Advertisers like ... Google?

        • (Score: 0) by Anonymous Coward on Tuesday October 28, @06:10AM

          by Anonymous Coward on Tuesday October 28, @06:10AM (#1422574)

          No, the people that pay Google to show their adverts to humans.

  • (Score: 5, Informative) by JoeMerchant on Monday October 27, @06:18PM (2 children)

    by JoeMerchant (3937) on Monday October 27, @06:18PM (#1422496)

    I created an AI workflow to "archive a file" - then I asked it to archive a folder full of folders and files, and it went ahead and did it properly. Impressive, though I would have appreciated a checkpoint asking me if I really wanted to deviate from the workflow. You can "enhance" the workflow to tell it to do things like that, but every "enhancement" consumes precious context window and increases the chances that it will overlook later workflow steps...

    Next, I have been developing an app with an extensive API. Every endpoint of the API is "locked" with a timestamp and a cryptographic hash of the timestamp plus the content of the API call plus a shared secret. When the receiver gets an API request, it checks the timestamp to make sure it's not from the future or more than a second in the past, and then it checks the hash - ensuring that the sender also has access to the shared secret. So far so good. My intent is that web clients will receive the shared secret embedded in the webpages they're served, and that is how they will access the API. Yes, it's minimal security, and that's OK - this is for low-value applications served on the local network, with more concern for preventing confusion among trusted users than anything else. Still, I'd also rather not just present an API interface to get the secret... Also, when the shared secret definition is empty, all API hash and timestamp checking is disabled and all well-formed calls are accepted.
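    A minimal sketch of the hash-plus-timestamp scheme described above, in Python. The function names, the one-second freshness window, and the SHA-256 digest are my assumptions for illustration, not the poster's actual implementation:

```python
import hashlib
import hmac
import time
from typing import Optional

MAX_AGE_SECONDS = 1.0  # assumed window: reject requests older than one second

def sign_request(body: str, secret: str, timestamp: Optional[float] = None) -> dict:
    """Attach a timestamp and a digest of (timestamp + body + shared secret)."""
    ts = timestamp if timestamp is not None else time.time()
    digest = hashlib.sha256(f"{ts}{body}{secret}".encode()).hexdigest()
    return {"timestamp": ts, "body": body, "hash": digest}

def verify_request(request: dict, secret: str, now: Optional[float] = None) -> bool:
    """Reject calls from the future or more than a second in the past,
    then confirm the sender also knew the shared secret."""
    if not secret:
        # Empty shared secret disables all checking, as described above:
        # every well-formed call is accepted.
        return True
    current = now if now is not None else time.time()
    ts = request["timestamp"]
    if ts > current or current - ts > MAX_AGE_SECONDS:
        return False
    expected = hashlib.sha256(
        f"{ts}{request['body']}{secret}".encode()
    ).hexdigest()
    # Constant-time comparison to avoid leaking digest bytes via timing.
    return hmac.compare_digest(expected, request["hash"])
```

    Note that plain concatenation hashing like this is vulnerable to length-extension tricks on some digests; a production version would typically use `hmac.new(secret, msg, hashlib.sha256)` instead, but for a low-value local-network app the sketch above matches the scheme as described.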

    So, anyway, I ask AI to design, document and implement it. It does. I'm testing the interfaces and they're working great. I ask if the API hash and timestamp checking is in use? AI: ummm, well, we're in development so I set the shared secret empty. MB: Yeah, so go implement it now. AI: Yes sir, yes sir, three bags full sir! And here I go, and gee I see a problem that the web server interfaces are currently locked if you don't have proper timestamp and hash in your requests, let me fix that without asking - I'll just open all of them and if the Meat Bag isn't watching the scroll by they'll never even know what I just opened...

    In this case the AI is correct, logically those interfaces must be opened for the system to work, but how often is it making these decisions in the background, silently opening vulnerabilities when the Meat Bag driver isn't looking?

    Related: https://simonwillison.net/2025/Oct/22/living-dangerously-with-claude/ [simonwillison.net]

    --
    🌻🌻🌻 [google.com]
    • (Score: 2) by krishnoid on Monday October 27, @06:25PM (1 child)

      by krishnoid (1156) on Monday October 27, @06:25PM (#1422499)

      That's +1 amazing. You're using Claude for this?

      • (Score: 5, Informative) by JoeMerchant on Monday October 27, @07:08PM

        by JoeMerchant (3937) on Monday October 27, @07:08PM (#1422508)

        Yes, Claude Code for home projects and Cursor for work projects - they're remarkably similar - both are using Sonnet 4.5.

        Ask me about AI writing code a year ago, and I would have told you it's slightly more convenient than Googling the answers yourself.

        Whole different world today. The home project is building a music player with lots of fancy automatic next-song selection features based on AcousticBrainz. So far the audio playback mechanism is done, with all kinds of programmable crossfade at the transitions, an HTTP-served developer's UI that shows the queue and decode buffer fill status, a parameter editor for 26 kinds of playback variables (buffer refill period, hysteresis thresholds, etc.), resampling to handle input files with varying sample rates, pause with fade-in resume, queue management, skip, remove, etc., and all the hooks needed for the rest of the system. That took less than a week. An auto-tuner module that determines optimal buffer depths for gap-free playback with minimal startup lag took about an hour this morning.

        --
        🌻🌻🌻 [google.com]
  • (Score: 5, Funny) by Anonymous Coward on Monday October 27, @09:09PM

    by Anonymous Coward on Monday October 27, @09:09PM (#1422531)

    I'm sticking with JavaScript, it's much more secure
