posted by mrpg on Friday August 17 2018, @12:39AM
from the caught-with-their-hands-in-the-cookie-jar dept.

A popular Firefox add-on is secretly logging users' browsing history, according to reports from the author of the uBlock Origin ad blocker and Mike Kuketz, a German privacy and security blogger. The add-on in question is named Web Security and is currently installed by 222,746 Firefox users, according to the official Mozilla Add-ons Portal. The add-on's description claims Web Security "actively protects you from malware, tampered websites or phishing sites that aim to steal your personal data."

Its high install count and positive reviews earned the add-on a spot on a list of recommended security and privacy add-ons on the official Firefox blog last week.

But this boost of attention from the Mozilla team didn't go down as intended. Hours after Mozilla's blog post, Raymond Hill, the author of the uBlock Origin ad blocker, pointed out on Reddit that the add-on exhibited weird behavior.

"With this extension, I see that for every page you load in your browser, there is a POST to http://136.243.163.73 Hill said. "The posted data is garbled, maybe someone will have the time to investigate further."

Hill's warning went under the radar for a few days until yesterday, when Kuketz, a popular German blogger, posted an article about the same behavior. Hours later, a user on Kuketz's forum managed to decode the "garbled" data, revealing that the add-on was secretly sending the URL of visited pages to a German server. Under normal circumstances, a Firefox add-on that needs to scan for threats might legitimately check the URLs it scans against a remote server, but according to the format of the data the add-on was sending, Web Security appears to be logging more than the current URL.

The data shows the plugin tracking individual users by an ID, along with their browsing patterns, logging how users went from an "oldUrl" to a "newUrl." This goes well beyond what threat scanning requires, and it violates Mozilla's Add-ons Portal guidelines, which prohibit add-ons from logging users' browsing history.
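Based only on the field names reported above (a per-user ID, "oldUrl", "newUrl"), a decoded record might look something like the following. This is a hypothetical reconstruction for illustration; the actual wire format was not published, and the data on the wire was additionally obfuscated.

```
POST http://136.243.163.73/          (endpoint reported by Hill)
{
  "id":     "<per-user identifier -- hypothetical field layout>",
  "oldUrl": "https://example.com/previous-page",
  "newUrl": "https://example.com/current-page"
}
```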

Source: Firefox Add-On With 220,000+ Installs Caught Collecting Users' Browsing History


Original Submission

  • (Score: 2) by coolgopher on Friday August 17 2018, @07:56AM (3 children)

    by coolgopher (1157) on Friday August 17 2018, @07:56AM (#722718)

    /dev/random will quickly exhaust your entropy pool and block. Not so with /dev/urandom or /dev/zero.
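A quick sketch of the difference: a large read from /dev/urandom completes immediately, whereas the same read from /dev/random could stall on older kernels once the entropy pool drained.

```shell
# Read 1 MiB from /dev/urandom; this never blocks.
# (tr strips the padding some wc implementations add.)
bytes=$(head -c $((1024 * 1024)) /dev/urandom | wc -c | tr -d ' ')
echo "read $bytes bytes"
# → read 1048576 bytes
```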

    Note that most web servers are configured with upload limits, and when you hit one you'll get an error response like:

    <html>
    <head><title>413 Request Entity Too Large</title></head>
    <body bgcolor="white">
    <center><h1>413 Request Entity Too Large</h1></center>
    <hr><center>nginx/1.10.3 (Ubuntu)</center>
    </body>
    </html>

    so your enjoyment is rather brief. Unless you used the command in a loop, I mean.
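The looped version described above could be sketched roughly as follows. This is purely illustrative, not something to point at a third-party server: the URL is a placeholder, and the `upload_status` helper is a hypothetical name introduced here to interpret the response code.

```shell
# Interpret the HTTP status code returned by an upload attempt.
upload_status() {
  case "$1" in
    413) echo "upload limit hit" ;;   # the nginx error page quoted above
    2??) echo "accepted" ;;
    *)   echo "other response: $1" ;;
  esac
}

# Hypothetical use (commented out; example.invalid is a placeholder):
# code=$(head -c $((1024 * 1024)) /dev/urandom | \
#   curl -s -o /dev/null -w '%{http_code}' --data-binary @- http://example.invalid/)
# upload_status "$code"

upload_status 413
# → upload limit hit
```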

  • (Score: 2) by RS3 on Friday August 17 2018, @01:03PM (2 children)

    by RS3 (6367) on Friday August 17 2018, @01:03PM (#722780)

    Thank you for your attention to detail. It was supposed to be a somewhat sarcastic, humorous proof-of-concept, but I realize veiled humor often gets missed. I should have used pseudo-code and let you work out the details in your favorite programming language. We'll need you to work even more overtime hours for the next few weeks - we have several more "special" projects.

    Seriously, there is no way I would run that - for many reasons. I don't need trouble, nor an ISP blocking me. I don't stoop to evil just to fight evil. But thank you again, and your information tempts me to run it occasionally. And yes, it would be run in a pseudo-random loop. And no, there is no way I will do it.

    • (Score: 2) by MichaelDavidCrawford on Saturday August 18 2018, @09:13AM (1 child)

      by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Saturday August 18 2018, @09:13AM (#723072) Homepage Journal

      I never released the source because I could never get it to work well enough to rip an entire pr0nsite. wget would always die after a few hours. It's not smart enough to realize that previously-downloaded HTML files don't need to be re-downloaded, so it would always start again from the very beginning.

      You can configure wget to use a custom User-Agent. You can configure the number of retries after failed GETs, the timeout for receiving the entire document, the delays between successive GETs, and so on.

      The delays and timeouts can usually be configured to vary randomly but with a specified average time value.

      There are all manner of ways to prevent the website you're attacking mirroring from realizing you're actually a bot.

      For extra credit you can configure your .wgetrc to _ignore_ robots.txt!
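The options mentioned above can all live in a .wgetrc. A sample fragment, with arbitrary example values (option names per GNU Wget; the User-Agent string is just an example):

```
# Sample ~/.wgetrc illustrating the options discussed above.
user_agent = Mozilla/5.0 (X11; Linux x86_64) Firefox/61.0
tries = 5
timeout = 30
wait = 2
random_wait = on
# The "extra credit" option: ignore robots.txt.
robots = off
# These two address the restart-from-scratch problem, assuming the
# server sends Last-Modified headers:
continue = on
timestamping = on
```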

      --
      Yes I Have No Bananas. [gofundme.com]
      • (Score: 1, Funny) by Anonymous Coward on Saturday August 18 2018, @02:17PM

        by Anonymous Coward on Saturday August 18 2018, @02:17PM (#723116)

        and closing that strike...