
posted by CoolHand on Monday August 10 2015, @07:01PM
from the thinking-of-the-children dept.

The BBC reports that the UK-based Internet Watch Foundation is sharing hash lists with Google, Facebook, and Twitter to prevent the upload of child abuse imagery:

Web giants Google, Facebook and Twitter have joined forces with a British charity in a bid to remove millions of indecent child images from the net. In a UK first, anti-abuse organisation Internet Watch Foundation (IWF) has begun sharing lists of indecent images, identified by unique "hash" codes. Wider use of the photo-tagging system could be a "game changer" in the fight against paedophiles, the charity said. Internet security experts said images on the "darknet" would not be detected.

The IWF, which works to take down indecent images of children, allocates to each picture it finds a "hash" - a unique code, sometimes referred to as a digital finger-print. By sharing "hash lists" of indecent pictures of children, Google, Facebook and Twitter will be able to stop those images from being uploaded to their sites.
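
In outline, the mechanism is simple: hash each upload and look the digest up in the shared list. Below is a minimal, hypothetical sketch in Python. It assumes a plain SHA-256 hash list for simplicity, whereas real deployments use perceptual hashes such as Microsoft's PhotoDNA so that resized or re-encoded copies still match; the function and the BLOCKED_HASHES set are illustrative, not any vendor's actual API.

    import hashlib

    # Hypothetical service-side check of an upload against a shared hash list.
    BLOCKED_HASHES = set()  # populated from the shared IWF hash list

    def is_blocked(upload_bytes):
        """Hash the upload and test membership in the block list."""
        return hashlib.sha256(upload_bytes).hexdigest() in BLOCKED_HASHES

    # with open(upload_path, "rb") as fh:
    #     if is_blocked(fh.read()):
    #         reject_upload()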


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1) by termigator (4271) on Monday August 10 2015, @11:28PM (#220977)

    As already mentioned, pedophiles will learn to modify the images so the hash checks fail. This could even be automated, so that each transmitted copy of an image is auto-transformed to look the same while producing a different hash checksum.
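
    (For illustration, a minimal Python sketch of how trivially a file-level hash can be dodged: appending a single byte after a JPEG's end-of-image marker leaves the picture rendering identically in most viewers, yet it yields a completely different digest. Filenames here are hypothetical.)

        import hashlib
        import os

        def sha256_of(path):
            with open(path, "rb") as fh:
                return hashlib.sha256(fh.read()).hexdigest()

        # Copy the image and append one random byte past the end-of-image
        # marker; most viewers ignore trailing bytes, but the digest changes.
        with open("photo.jpg", "rb") as src, open("photo_copy.jpg", "wb") as dst:
            dst.write(src.read())
            dst.write(os.urandom(1))

        print(sha256_of("photo.jpg"))
        print(sha256_of("photo_copy.jpg"))  # entirely different digest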

    It seems you would need an image-differencing algorithm to check whether an upload is a copy of a known abuse image. Beyond the technical challenges that imposes, it would require a base image to be kept somewhere to diff against, so those trying to fight the spread of the images would have to distribute the images amongst themselves. Oh, the irony.
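
    (Perceptual hashing is one such family of algorithms. Here is a rough sketch of a difference hash, dHash, in Python, assuming the Pillow library is available: visually similar images land within a few bits of each other, so comparison can happen hash-to-hash via Hamming distance rather than by diffing full stored images.)

        from PIL import Image  # assumes Pillow is installed

        def dhash(path, size=8):
            # Shrink to (size+1) x size grayscale, then record whether each
            # pixel is brighter than its right-hand neighbour, packed into bits.
            img = Image.open(path).convert("L").resize((size + 1, size))
            pixels = list(img.getdata())
            bits = 0
            for row in range(size):
                for col in range(size):
                    left = pixels[row * (size + 1) + col]
                    right = pixels[row * (size + 1) + col + 1]
                    bits = (bits << 1) | (1 if left > right else 0)
            return bits

        def hamming(a, b):
            return bin(a ^ b).count("1")

        # A re-encoded or lightly edited copy should stay within a few bits:
        # hamming(dhash("known.jpg"), dhash("suspect.jpg")) <= 5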

  • (Score: 3, Interesting) by PinkyGigglebrain (4458) on Tuesday August 11 2015, @06:18AM (#221123)

    Another simple option to defeat something like this would be to encrypt the images you upload with a key known only to the people you want to view them. A simple little upload manager that encrypts every image before upload would not be hard to create, and there could even be a plug-in that decrypts the images as they are downloaded.
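
    (As a sketch of how little machinery that would take, here is client-side encryption in Python using the cryptography library's Fernet recipe. The filename and the key-sharing step are assumptions.)

        from cryptography.fernet import Fernet  # assumes `cryptography` is installed

        key = Fernet.generate_key()  # shared out-of-band with intended viewers
        box = Fernet(key)

        with open("photo.jpg", "rb") as fh:
            token = box.encrypt(fh.read())  # upload this ciphertext, not the image

        # The host only ever sees ciphertext, so no hash list entry can match.
        # Anyone holding the key recovers the original:
        original = Fernet(key).decrypt(token)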

    It wouldn't even matter if the key became widely known. Google et al. would have to try to decrypt every uploaded file with every known key before checking it against the hash list, just to tell whether the upload was child-abuse imagery or a password-protected zip of your old English essays. And what would the policy be if they can't decrypt it?

    How much extra electricity, CPU time, and RAM would that take? And who would end up paying for it?
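
    (A back-of-envelope sketch with entirely made-up numbers, just to show the scale of trial-decrypting every upload against every known key:)

        # All figures are hypothetical, for scale only.
        uploads_per_day = 2_000_000_000  # images across a large service
        known_keys = 10_000              # keys the scanner must try per file
        ms_per_attempt = 0.5             # one decrypt-and-hash attempt

        cpu_seconds = uploads_per_day * known_keys * ms_per_attempt / 1000
        cpu_years = cpu_seconds / (365 * 24 * 3600)
        print(f"{cpu_years:,.0f} CPU-years per day of uploads")  # ~317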

    --
    "Beware those who would deny you Knowledge, For in their hearts they dream themselves your Master."
    • (Score: 0) by Anonymous Coward on Tuesday August 11 2015, @12:42PM (#221235)

      And who would end up paying for it?

      That one is easy to answer. Google would pass the cost on to its advertising customers, who would pass it on to the product vendors they create advertisements for, who would finally pass it on to those who buy their products.