
posted by Fnord666 on Thursday April 05 2018, @08:27PM
from the digital-fingerprints dept.

Zero-width characters are invisible, ‘non-printing’ characters that are not displayed by the majority of applications. F​or exam​ple, I’ve ins​erted 10 ze​ro-width spa​ces in​to thi​s sentence; c​an you tel​​l? (Hint: paste the sentence into Diff Checker to see the locations of the characters!) These characters can be used to ‘fingerprint’ the text shown to a particular user.
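
A few lines of JavaScript are enough to surface them (a minimal sketch, not the article's interactive demo; the set of code points scanned for is an assumption):

    // List the code point and index of every zero-width character in a string.
    // The character class is an assumption; other invisible code points exist.
    const ZERO_WIDTH = /[\u200B\u200C\u200D\u2060\uFEFF]/g;

    const reveal = (text) =>
      [...text.matchAll(ZERO_WIDTH)].map(
        (m) => `U+${m[0].codePointAt(0).toString(16).toUpperCase()} at index ${m.index}`
      );

    console.log(reveal('Hel\u200Blo')); // [ 'U+200B at index 3' ]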

Well, the original reason isn’t too exciting. A few years ago I was a member of a team that participated in competitive tournaments across a variety of video games. This team had a private message board, used amongst other things to post important announcements. Eventually these announcements would appear elsewhere on the web, posted to mock the team and, more significantly, making the message board useless for sharing confidential information and tactics.

The security of the site seemed pretty tight, so the theory was that a logged-in user was simply copying each announcement and posting it elsewhere. I created a script that allowed the team to invisibly fingerprint every announcement with the username of the user it was displayed to.
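
The script itself isn't reproduced here, but the technique can be sketched in a few lines of JavaScript (a reconstruction of the idea, not the author's code; the bit-to-character mapping and the example username are assumptions):

    // Map each bit of the username to one of two zero-width characters:
    // U+200C (zero-width non-joiner) for 0, U+200B (zero-width space) for 1.
    const ZERO = '\u200C';
    const ONE  = '\u200B';

    // Encode an (assumed ASCII) username as an invisible run of characters.
    const fingerprint = (username) =>
      [...username]
        .map((c) => c.codePointAt(0).toString(2).padStart(8, '0'))
        .join('')
        .replace(/[01]/g, (bit) => (bit === '1' ? ONE : ZERO));

    // Recover the username from leaked text by reading the hidden bits back.
    const identify = (text) => {
      const bits = [...text]
        .filter((c) => c === ZERO || c === ONE)
        .map((c) => (c === ONE ? '1' : '0'))
        .join('');
      return (bits.match(/.{8}/g) || [])
        .map((byte) => String.fromCodePoint(parseInt(byte, 2)))
        .join('');
    };

    const tagged = 'Scrim at 8pm, new strat for map 3.' + fingerprint('j.smith');
    console.log(identify(tagged)); // 'j.smith'

Every member sees the same visible announcement, but any copy they paste elsewhere carries their own username along invisibly.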

I saw a lot of interest in zero-width characters from a recent post by Zach Aysan, so I thought I’d publish this method here along with an interactive demo to share with everyone. The code examples have been updated to use modern JavaScript, but the overall logic is the same.


  • (Score: 3, Interesting) by darkfeline (1030) on Saturday April 07 2018, @03:33AM (#663646) (2 children)

    Uh, not really?

    Disregarding emoticon bullshit, the use cases for ZWS and ZWNJ cited by GP add semantic information to the content that could not be added post hoc by the rendering layer without guessing or some additional encoding, markup or formatting language.

    I mean, technically all non-ASCII and non-printable characters could be considered *presentation*, as you say, rather than *content*, by encoding everything as UTF-8 and then into base64. But I don't think most cultures appreciate being told that their language must be encoded as ASCII gibberish at the *content* level and rendered at the *presentation* level to be decipherable, and in any case you're back to square one: you have to standardize an additional markup/formatting/encoding language on top of Unicode and UTF-N.
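
    In concrete terms (a sketch; the sample string and its round-trip are purely illustrative):

        // Encode arbitrary content as UTF-8 bytes, then as base64 "ASCII gibberish".
        const content = 'żółć 漢字';
        const bytes = new TextEncoder().encode(content);
        const ascii = btoa(String.fromCharCode(...bytes));
        console.log(ascii); // pure ASCII, unreadable without a decoding step

        // Only decipherable again after a "presentation-level" decode.
        const decoded = new TextDecoder().decode(
          Uint8Array.from(atob(ascii), (c) => c.charCodeAt(0))
        );
        console.log(decoded === content); // true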

    --
    Join the SDF Public Access UNIX System today!
  • (Score: 2) by coolgopher (1157) on Saturday April 07 2018, @10:41AM (#663718) (1 child)

    Sure, we've already seen how pleasant punycode is with the IDNs.
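
    For instance (a sketch using the WHATWG URL API, with a made-up hostname):

        // IDNA turns a non-ASCII hostname into punycode ASCII at the content level.
        const host = new URL('https://münchen.example').hostname;
        console.log(host); // 'xn--mnchen-3ya.example'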

    We're still trying to come to grips with how to deal with content and presentation sanely, though. On one end of the spectrum you have a fully-rendered image, conveying both exactly as the originator provided (subject to scale, colour profile, etc.), and on the other you have, well, what do you have? A bunch of standalone symbols in a well-defined interchange format which can be strung together to form meaning? A bunch of symbols together with how-to-string-them-together information? The image end of the spectrum is seriously painful to machine-process; the other direction is a lot less so, until you want to do it correctly, at which point you inevitably discover that you're dealing with a flawed model.

    Going even more meta, all of this is already a lossy encoding of the intended meaning of the originator (as would recorded speech be). How much lossiness in each encoding layer (idea -> speech/mental-speech -> text & presentation -> text encoding) is acceptable? How much can we compensate for with good design?

    I really don't have good answers - as I wrote above, the mix of content and presentation seems to be an innate property. It still *feels* like it should be possible to design a better model though.

    Do we really need to mix language/script-specific rules and mechanics into languages/scripts where they don't belong?

    • (Score: 2) by darkfeline (1030) on Saturday April 07 2018, @09:27PM (#663811)

      Yes, deciding what counts and doesn't count as a script character to be added to Unicode is difficult, and the Unicode Consortium haven't been doing the best job, but I don't think it's debatable that some number of non-printable or "control" characters will need to be included. Put simply, in the general case a language could contain all manner of idiosyncratic rules for its written script, and Unicode should be capable of representing them faithfully.

      --
      Join the SDF Public Access UNIX System today!