SoylentNews is people

posted by janrinok on Wednesday May 16 2018, @11:51PM   Printer-friendly
from the as-easy-as-that dept.

Most strings found on the Internet are encoded using a particular Unicode format called UTF-8. However, not all strings of bytes are valid UTF-8. The rules as to what constitutes a valid UTF-8 string are somewhat arcane. Yet it seems important to quickly validate these strings before you consume them.

In a previous post, I pointed out that it takes about 8 cycles per byte to validate them using a fast finite-state machine. After hacking code found online, I showed that using SIMD instructions, we could bring this down to about 3 cycles per input byte.
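For reference, a reasonable scalar validator can be sketched in plain C as follows (an illustrative sketch with my own function name, not the benchmarked code from the earlier post):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Illustrative scalar UTF-8 validator: decode each sequence, then reject
// overlong forms, surrogates, and code points beyond U+10FFFF.
bool validate_utf8(const uint8_t *s, size_t len) {
    size_t i = 0;
    while (i < len) {
        uint8_t b = s[i++];
        if (b < 0x80) continue;                   // ASCII byte
        int n;
        uint32_t cp;
        if ((b & 0xE0) == 0xC0)      { n = 2; cp = b & 0x1F; }
        else if ((b & 0xF0) == 0xE0) { n = 3; cp = b & 0x0F; }
        else if ((b & 0xF8) == 0xF0) { n = 4; cp = b & 0x07; }
        else return false;            // stray continuation byte or 0xF8..0xFF
        for (int k = 1; k < n; k++) { // consume continuation bytes 10xxxxxx
            if (i >= len || (s[i] & 0xC0) != 0x80) return false;
            cp = (cp << 6) | (s[i++] & 0x3F);
        }
        if (n == 2 && cp < 0x80) return false;                  // overlong
        if (n == 3 && (cp < 0x800 ||
                       (cp >= 0xD800 && cp <= 0xDFFF))) return false;
        if (n == 4 && (cp < 0x10000 || cp > 0x10FFFF)) return false;
    }
    return true;
}
```

Even written carefully, branchy code like this processes one byte (or one code point) at a time, which is why it costs several cycles per input byte.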

Is that the best one can do? Not even close.

Many strings are just ASCII, which is a subset of UTF-8. They are easily recognized because they use just 7 bits per byte: the remaining bit is set to zero. Yet if you check each and every byte with silly scalar code, it is going to take over a cycle per byte to verify that a string is ASCII. For much better speed, you can vectorize the problem in this manner:
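Here is a sketch of that vectorized ASCII check using SSE2 intrinsics (the function name and loop structure are illustrative, not the post's exact code):

```c
#include <emmintrin.h>   // SSE2 intrinsics
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// A string is ASCII iff no byte has its high bit set. OR all the bytes
// together and inspect the accumulated high bits only at the very end.
bool is_ascii_sse2(const uint8_t *src, size_t len) {
    __m128i acc = _mm_setzero_si128();
    size_t i = 0;
    for (; i + 16 <= len; i += 16) {
        __m128i chunk = _mm_loadu_si128((const __m128i *)(src + i));
        acc = _mm_or_si128(acc, chunk);       // accumulate any set bits
    }
    // _mm_movemask_epi8 gathers the high bit of each of the 16 bytes
    if (_mm_movemask_epi8(acc) != 0) return false;
    for (; i < len; i++)                      // scalar tail (< 16 bytes)
        if (src[i] & 0x80) return false;
    return true;
}
```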

Essentially, we are loading up a vector register, comparing each entry with zero and turning on a flag (using a logical OR) whenever a character outside the allowed range is found. We continue until the very end no matter what, and only then do we examine our flags.

We can use the same general idea to validate UTF-8 strings. My code is available.

If you are almost certain that most of your strings are ASCII, then it makes sense to first test whether the string is ASCII, and only then fall back on the more expensive UTF-8 test.

So we are ten times faster than a reasonable scalar implementation. I doubt this scalar implementation is as fast as it can be… but it is not naive. And my own code is not nearly optimal: it does not use AVX, to say nothing of AVX-512. Furthermore, it was written in a few hours. I would not be surprised if clever optimizations could double the speed.

The exact results will depend on your machine and its configuration. But you can try the code.

The counter-rolling can actually be done logarithmically by shifting by 1, 2, 4, etc., e.g.:

[4,0,0,0] + ([0,4,0,0]-[1,1,1,1]) = [4,3,0,0]

[4,3,0,0] + ([0,0,4,3]-[2,2,2,2]) = [4,3,2,1]

but in this case the distances didn’t seem big enough to beat the linear method.
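As a scalar model of that logarithmic counter-rolling (my own names; in the vectorized version the shift and the saturating subtraction operate on a whole register at once):

```c
#include <stddef.h>

// Logarithmic counter-rolling: a count c at position i should be followed
// by c-1, c-2, ... at the next positions. Each round shifts the values
// along by `shift`, saturating-subtracts `shift`, and adds the result back,
// doubling the shift each time (1, 2, 4, ...).
void roll_counters(unsigned v[], size_t n) {
    for (size_t shift = 1; shift < n; shift *= 2) {
        for (size_t i = n; i-- > shift; ) {   // iterate backwards so every
            unsigned moved = v[i - shift];    // read sees pre-round values
            v[i] += moved > shift ? moved - shift : 0; // saturating subtract
        }
    }
}
```

With `v = {4, 0, 0, 0}`, the first round (shift 1) produces `{4, 3, 0, 0}` and the second (shift 2) produces `{4, 3, 2, 1}`, matching the worked example above.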

I believe the distances can even be larger than the register size, if the last value in one register is carried over to the first element of the next. It's a good way to delineate inline variable-length encodings.

Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by ledow on Thursday May 17 2018, @07:55AM (4 children)

    by ledow (5567) on Thursday May 17 2018, @07:55AM (#680656) Homepage

    To be honest, would all these optimisations not be wiped out by compiler optimisations anyway?

    I'm fairly sure there isn't a modern compiler on the planet that would actually organise a tight for loop test on consecutive bytes in such a linear fashion.

    And "whether the string is ASCII or not" is fairly useless as a test if you're running it against enough strings for cycle counts to actually matter. Especially when your next action is a much more complicated "interpret the UTF-8 including control characters" and passing through to some other process (font rendering, file writing, etc.).

    I would imagine such an optimisation would only be useful for people routinely processing literally billions of lines of fresh text in unknown formats constantly, and more likely it would be more in their interest to just forcibly convert everything into Unicode anyway and then write all code on the assumption of Unicode strings.

    It seems to me to be a case of premature optimisation and/or creating a highly optimised yet non-portable and difficult-to-maintain version of "is_utf8_text". This is the kind of thing that, say, Freetype or a C++ string library might include in a platform-specific header that overrides a more generic implementation but otherwise doesn't really make any difference at all. And which is removed after about five years when everyone realises the compiler generates equivalent or better code anyway.

    Reminds me of the kind of thing I used to see in emulator code all the time - hand-crafted assembler to translate one platform's high-performance code to native code. There was always a point at which the guy who understood it left, leaving them with "optimised by known buggy asm core" and "unoptimised but nobody cares because it works and can be updated easily C core".

  • (Score: 2) by MichaelDavidCrawford on Thursday May 17 2018, @07:01PM (3 children)

    by MichaelDavidCrawford (2339) Subscriber Badge on Thursday May 17 2018, @07:01PM (#680836) Homepage Journal

    I've never found a compiler that I couldn't beat with hand-optimized C or C++

    I don't even need to use assembly

    Yes I Have No Bananas. []
    • (Score: 2) by wonkey_monkey on Friday May 18 2018, @08:02PM (2 children)

      by wonkey_monkey (279) on Friday May 18 2018, @08:02PM (#681350) Homepage

      I've never found a compiler that I couldn't beat with hand-optimized C or C++

      Uh... and how did you turn that hand optimised C/C++ into executable code...?

      systemd is Roko's Basilisk
      • (Score: 2) by MichaelDavidCrawford on Friday May 18 2018, @10:13PM (1 child)

        If you have stupid code, the best an optimizer can do is to make stupidity faster.

        Consider using the compiler to optimize bubble sort.

        Something I commonly do is defeat the cache. This is because when you write one byte into a cache line, the line is first filled by reading from the next level of the cache, possibly all the way from main memory. Now suppose you then completely overwrite the line with new values: that read was useless.

        x86_64 has an assembly instruction by which the programmer promises "I swear on a stack of bibles I will fill this cache line with entirely new values". That can speed things up quite a bit.

        PowerPC and POWER both have assembly opcodes - strangely different from each other - that set a whole cache line to zero in just one cycle.

        Because of patents, every ISA has a different way to defeat the cache, but most of them do offer that option.

        Yes I Have No Bananas. []
        • (Score: 2) by wonkey_monkey on Saturday May 19 2018, @05:56PM

          by wonkey_monkey (279) on Saturday May 19 2018, @05:56PM (#681620) Homepage

          That sounds less like "beating the compiler" and more like working with it by producing good code in the first place.

          x86_64 has an assembly instruction by which the programmer promises "I swear on a stack of bibles I will fill this cache line with entirely new values". That can speed things up quite a bit.

          Where can I read up on that instruction? It sounds like it might be very useful to me.

          systemd is Roko's Basilisk