
posted by Fnord666 on Monday August 12 2019, @03:21AM
from the not-so-fast dept.

Google Chrome Incognito Mode Can Still Be Detected by These Methods:

With the release of Chrome 76, Google fixed a loophole that allowed web sites to detect if a visitor was using Incognito mode. Unfortunately, their fix led to two other methods that can still be used to detect when a visitor is browsing privately.

Some web sites were using Incognito mode detection in order to prevent users from bypassing paywalls or to give private browsing users a different browsing experience.

This was being done by checking the availability of Chrome's FileSystem API, which was disabled in Incognito mode. If a site could access the FileSystem API, the visitor was in a normal browsing session; if it could not, the user was in Incognito mode.
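As a rough sketch, the old check looked something like the following (the legacy FileSystem API is non-standard, so TypeScript needs loose casts to reach it; the log messages are illustrative):

    // Pre-Chrome-76 detection sketch: in Incognito mode the FileSystem
    // API call failed, so the error callback firing implied private browsing.
    const requestFs = (window as any).webkitRequestFileSystem;
    if (!requestFs) {
      console.log("FileSystem API missing entirely (not Chrome)");
    } else {
      requestFs(
        (window as any).TEMPORARY, // temporary storage type
        1,                         // request a single byte
        () => console.log("API available: normal browsing session"),
        () => console.log("API blocked: likely Incognito (pre-Chrome 76)"),
      );
    }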

As Google wanted users to be able to browse the web privately, and for their browsing-mode choice to be private as well, they closed this loophole by making the API available in both browsing modes. As part of the fix, the FileSystem API in Incognito mode no longer uses disk storage; instead, it uses a transient in-memory filesystem that is cleared when the session closes.

The use of a memory filesystem, though, creates two new loopholes that can be used to detect Incognito mode.

[...] Security researcher Vikas Mishra found that when Chrome allocates storage for the temporary memory filesystem used by Incognito mode, it imposes a maximum quota of 120MB.

"Based on the above observations, key differences in TEMPORARY storage quota between incognito and non-incognito mode are that in case of incognito mode, there's a hard limit of 120MB while this is not the case for non-incognito window. And from the above table it's clear that for the temporary storage quota to be less than 120MB in case of non-incognito mode the device storage has to be less than 2.4GB. However for all practical purposes it is safe to assume that the majority of the devices currently in use have more than 2.4GB of storage."

The other method relies on the fact that it takes much longer to access data on disk than in memory. As of this writing, no PoC (proof of concept) has been released for the latter method, but one has been released for the filesystem size method.

According to Microsoft Edge developer Eric Lawrence, the New York Times is testing this method to detect when a visitor is in private mode.
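As for the unreleased timing method, the idea would presumably look something like this hypothetical sketch, which times repeated writes through the legacy FileSystem API (every number and threshold here is illustrative, not a published PoC):

    // Hypothetical timing probe (the article notes no public PoC exists;
    // this is purely illustrative). Writes through the FileSystem API
    // should complete faster against Incognito's in-memory filesystem
    // than against a disk-backed one.
    function averageWriteMs(samples: number): Promise<number> {
      return new Promise((resolve, reject) => {
        (window as any).webkitRequestFileSystem(
          (window as any).TEMPORARY,
          4 * 1024 * 1024, // request 4MB of temporary storage
          (fs: any) => {
            fs.root.getFile("probe.bin", { create: true }, (entry: any) => {
              entry.createWriter((writer: any) => {
                const chunk = new Blob([new Uint8Array(256 * 1024)]);
                let done = 0;
                const start = performance.now();
                writer.onwriteend = () => {
                  done += 1;
                  if (done === samples) {
                    resolve((performance.now() - start) / samples);
                  } else {
                    writer.write(chunk); // queue the next timed write
                  }
                };
                writer.onerror = reject;
                writer.write(chunk);
              });
            }, reject);
          },
          reject,
        );
      });
    }
    // A detector would compare the result to a device-calibrated
    // threshold; markedly fast writes would suggest the in-memory
    // (Incognito) filesystem.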

My first thought was to put a cache ahead of all filesystem writes to obviate the write-timing hack (albeit at the risk of a system crash losing cached but as-yet-unwritten data). For the quota method, allocate the temporary file storage quota as some significant fraction of free storage, but when a program tries to write more than, say, 120MB (or 256MB, or whatever), put up a dialog box noting same and asking the user if they want to continue; a toy sketch of that policy follows below. That was off the top of my head; what did I miss? How would you solve this problem?
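For concreteness, here is that toy quota policy (this mirrors the musing above, not Chrome's actual implementation; the 5% fraction and the cap are arbitrary):

    // Toy model of the proposed policy: derive the reported quota from
    // free disk space in both modes (so the quota no longer fingerprints
    // Incognito), and ask the user past a soft cap instead of failing.
    const SOFT_CAP_BYTES = 120 * 1024 * 1024; // arbitrary example cap

    function reportedQuota(freeDiskBytes: number): number {
      return Math.floor(freeDiskBytes * 0.05); // same formula in both modes
    }

    function allowWrite(totalWrittenBytes: number, askUser: () => boolean): boolean {
      // Below the cap, writes proceed silently; above it, prompt the user.
      return totalWrittenBytes <= SOFT_CAP_BYTES || askUser();
    }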


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Monday August 12 2019, @06:53AM (#879107) (2 children)

    The real internet only needs static/server generated documents. Everything else is the nu-web nonsense.

  • (Score: 1, Interesting) by Anonymous Coward on Monday August 12 2019, @12:19PM (#879146) (1 child)

    The theoretical web has a stateless browser. (No files, cookies, fingerprints, etc.)
    Sadly, the real web has evolved.

    Many sites are addicted to the web being stateful, so simply eliminating these features would not do.
    Instead, they need to be spoofed in exactly the same way for all page views.
    (Make the browser look like a freshly installed browser on a random computer.)

    For files, give the first load from a new site a sandboxed file system of its own.
    Any site that gets referenced from that load gets the same sandbox.
    At the end of the view, either delete or keep the sandbox depending on if incognito or not.

    Cookies go in the sandbox.
    A set of random fingerprint answers is chosen and kept with the sandbox.
    As new holes are found, they are added to the sandboxed list.

    This will only work if most browsers do it and so sites are forced to accept it to get eyeballs.
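    A rough sketch of that scheme (all names hypothetical):

        // Toy model of the per-load sandbox: keyed by the first-party
        // origin, shared with anything referenced from that load, and
        // discarded at the end of the view when incognito.
        interface Sandbox {
          cookies: Map<string, string>;
          files: Map<string, Uint8Array>;
          fingerprintSeed: number; // fixed random fingerprint answers
        }

        const sandboxes = new Map<string, Sandbox>();

        function sandboxFor(firstPartyOrigin: string): Sandbox {
          let sb = sandboxes.get(firstPartyOrigin);
          if (!sb) {
            sb = {
              cookies: new Map(),
              files: new Map(),
              fingerprintSeed: Math.floor(Math.random() * 2 ** 32),
            };
            sandboxes.set(firstPartyOrigin, sb);
          }
          return sb; // sub-resources from the same load reuse this sandbox
        }

        function endView(firstPartyOrigin: string, incognito: boolean): void {
          if (incognito) sandboxes.delete(firstPartyOrigin);
        }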

    A fresh VM for each web view seems simpler (less attack surface) than a browser incognito arms race. But even then, somebody like Google, whose ads are embedded in a lot of other sites, would see a nice sequence of IP address/request pairs from which to connect the dots.
    Being behind a NAT with a lot of other users might help, but if you think you are incognito on the web, you get what you deserve.

    Today's web browsing has devolved into letting random entities run stuff (web pages are complex programs) on your computer, in an environment you don't control (the browser, network, and OS). This is a legal and business-model problem. Trying to fix it technically, when the fixers themselves have a business interest, seems unlikely to help.

    • (Score: 4, Insightful) by Arik (4543) on Monday August 12 2019, @12:24PM (#879150) Journal
      "Sadly, the real web has evolved."

      That's not evolution, that's cancer.

      --
      If laughter is the best medicine, who are the best doctors?