
SoylentNews is people

posted by janrinok on Wednesday May 16 2018, @11:51PM   Printer-friendly
from the as-easy-as-that dept.

Most strings found on the Internet are encoded using a particular Unicode format called UTF-8. However, not all strings of bytes are valid UTF-8. The rules as to what constitutes a valid UTF-8 string are somewhat arcane. Yet it seems important to quickly validate these strings before you consume them.

In a previous post, I pointed out that it takes about 8 cycles per byte to validate them using a fast finite-state machine. After hacking code found online, I showed that using SIMD instructions, we could bring this down to about 3 cycles per input byte.

Is that the best one can do? Not even close.

Many strings are just ASCII, which is a subset of UTF-8. They are easily recognized because they use just 7 bits per byte, the remaining bit is set to zero. Yet if you check each and every byte with silly scalar code, it is going to take over a cycle per byte to verify that a string is ASCII. For much better speed, you can vectorize the problem in this manner:
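The original post includes the intrinsics here; a minimal sketch of the same idea, assuming SSE2 (the function name `is_ascii` is mine, not the post's):

```c
#include <emmintrin.h>  // SSE2 intrinsics
#include <stdbool.h>
#include <stddef.h>

// Sketch of the vectorized ASCII check: 16 bytes at a time, accumulate a
// flag for any byte with its high bit set, check the flag only at the end.
static bool is_ascii(const unsigned char *s, size_t len) {
    __m128i has_error = _mm_setzero_si128();
    size_t i = 0;
    for (; i + 16 <= len; i += 16) {
        __m128i chunk = _mm_loadu_si128((const __m128i *)(s + i));
        // bytes >= 0x80 are negative when viewed as signed bytes, so a
        // signed compare against zero marks every non-ASCII byte; the OR
        // accumulates those marks without branching
        has_error = _mm_or_si128(has_error,
                                 _mm_cmplt_epi8(chunk, _mm_setzero_si128()));
    }
    int error_mask = _mm_movemask_epi8(has_error);
    for (; i < len; i++) error_mask |= s[i] >> 7;  // scalar tail
    return error_mask == 0;
}
```

Note that the loop never exits early: it runs to the end of the data and inspects the accumulated flags once, which keeps the hot loop branch-free.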

Essentially, we are loading up a vector register, comparing each entry with zero and turning on a flag (using a logical OR) whenever a character outside the allowed range is found. We continue until the very end no matter what, and only then do we examine our flags.

We can use the same general idea to validate UTF-8 strings. My code is available.

If you are almost certain that most of your strings are ASCII, then it makes sense to first test whether the string is ASCII, and only then fall back on the more expensive UTF-8 test.
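A hedged sketch of that two-step strategy (function names are mine; the slow path here is a plain scalar validator, not the post's SIMD code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Fast path: OR every byte together; pure ASCII leaves the high bit clear.
static bool ascii_prescan(const uint8_t *s, size_t n) {
    uint8_t acc = 0;
    for (size_t i = 0; i < n; i++) acc |= s[i];
    return (acc & 0x80) == 0;
}

// Slow path: straightforward scalar UTF-8 validation, rejecting stray
// continuation bytes, truncated sequences, overlongs, surrogates, and
// code points beyond U+10FFFF.
static bool validate_utf8_scalar(const uint8_t *s, size_t n) {
    size_t i = 0;
    while (i < n) {
        uint8_t b = s[i];
        if (b < 0x80) { i++; continue; }            // ASCII byte
        size_t cont; uint32_t cp, min;
        if ((b & 0xE0) == 0xC0) { cont = 1; cp = b & 0x1F; min = 0x80; }
        else if ((b & 0xF0) == 0xE0) { cont = 2; cp = b & 0x0F; min = 0x800; }
        else if ((b & 0xF8) == 0xF0) { cont = 3; cp = b & 0x07; min = 0x10000; }
        else return false;                          // stray continuation, 0xF8+
        if (n - i <= cont) return false;            // truncated sequence
        for (size_t j = 1; j <= cont; j++) {
            if ((s[i + j] & 0xC0) != 0x80) return false;
            cp = (cp << 6) | (s[i + j] & 0x3F);
        }
        if (cp < min) return false;                 // overlong encoding
        if (cp > 0x10FFFF) return false;            // beyond Unicode
        if (cp >= 0xD800 && cp <= 0xDFFF) return false; // surrogate
        i += cont + 1;
    }
    return true;
}

static bool validate(const uint8_t *s, size_t n) {
    if (ascii_prescan(s, n)) return true;  // pure ASCII is valid UTF-8
    return validate_utf8_scalar(s, n);
}
```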

So we are ten times faster than a reasonable scalar implementation. I doubt this scalar implementation is as fast as it can be… but it is not naive… And my own code is not nearly optimal. It is not using AVX to say nothing of AVX-512. Furthermore, it was written in a few hours. I would not be surprised if one could double the speed using clever optimizations.

The exact results will depend on your machine and its configuration. But you can try the code.

The counter-rolling can actually be done logarithmically by shifting by 1, 2, 4, etc. bytes, subtracting the shift distance with saturating arithmetic (so results clamp at zero), and adding, e.g.:

[4,0,0,0] + ([0,4,0,0]-[1,1,1,1]) = [4,3,0,0]

[4,3,0,0] + ([0,0,4,3]-[2,2,2,2]) = [4,3,2,1]

but in this case the distances didn’t seem big enough to beat the linear method.
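For concreteness, a sketch of the logarithmic roll with SSE2 intrinsics (my own illustration of the arithmetic above, not code from the post). Each lead byte of a UTF-8 sequence holds its total length and continuation slots hold zero; since no sequence exceeds 4 bytes, two shift/saturating-subtract/add steps with distances 1 and 2 fill in the descending counts:

```c
#include <emmintrin.h>  // SSE2 intrinsics

// [4,0,0,0,...] -> [4,3,2,1,...]: each step shifts the counts along,
// subtracts the shift distance with unsigned saturation (so bytes out of
// a lead byte's reach contribute zero), and adds the result back in.
static __m128i counter_roll(__m128i counts) {
    counts = _mm_add_epi8(counts,
        _mm_subs_epu8(_mm_slli_si128(counts, 1), _mm_set1_epi8(1)));
    counts = _mm_add_epi8(counts,
        _mm_subs_epu8(_mm_slli_si128(counts, 2), _mm_set1_epi8(2)));
    return counts;
}
```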

I believe the distances can even be larger than the register size, if the last value in one register is carried over into the first element of the next. It's a good way to delineate inline variable-length encodings.


Original Submission

 
  • (Score: 2) by wonkey_monkey on Friday May 18 2018, @08:02PM (2 children)

    by wonkey_monkey (279) on Friday May 18 2018, @08:02PM (#681350) Homepage

    I've never found a compiler that I couldn't beat with hand optimized c or c++

    Uh... and how did you turn that hand optimised C/C++ into executable code...?

    --
    systemd is Roko's Basilisk
  • (Score: 2) by MichaelDavidCrawford on Friday May 18 2018, @10:13PM (1 child)

    If you have stupid code, the best an optimizer can do is to make stupidity faster.

    Consider using the compiler to optimize bubble sort.

Something I commonly do is defeat the cache. This is because if you write one byte into a cache line, before that byte is written the line is first filled by reading from the next layer of the memory hierarchy. Now suppose you completely overwrite that line with new values: the read, possibly from main memory, was useless.

x86_64 has an assembly instruction by which the programmer promises "I swear on a stack of bibles I will fill this cache line with entirely new values". That can speed things up quite a bit.

PowerPC and POWER both have - strangely, different from each other - assembly opcodes that set a whole cache line to zero in just one cycle.

    Because of patents, every ISA has a different way to defeat the cache but most of them do offer that option.
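The comment does not name the x86-64 instruction; it is most likely referring to non-temporal (streaming) stores such as MOVNTDQ, exposed in C as `_mm_stream_si128`, which write full cache lines to memory without first reading them in. A hedged sketch (function name mine, assuming SSE2, a 16-byte-aligned destination, and a length that is a multiple of 16):

```c
#include <emmintrin.h>  // SSE2 intrinsics
#include <stddef.h>
#include <stdint.h>

// Fill a buffer with streaming stores, bypassing the cache-line read that
// an ordinary partial write would trigger. dst must be 16-byte aligned and
// len a multiple of 16.
static void fill_bypassing_cache(uint8_t *dst, size_t len, uint8_t value) {
    __m128i v = _mm_set1_epi8((char)value);
    for (size_t i = 0; i < len; i += 16)
        _mm_stream_si128((__m128i *)(dst + i), v);
    _mm_sfence();  // order the streaming stores before later loads/stores
}
```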

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 2) by wonkey_monkey on Saturday May 19 2018, @05:56PM

      by wonkey_monkey (279) on Saturday May 19 2018, @05:56PM (#681620) Homepage

      That sounds less like "beating the compiler" and more like working with it by producing good code in the first place.

      x86_64 has an assembly instruction by which the programmer promises "I swear on a stack of bibles I will fill this cache line with entirely new values". That can speed things up quite a bit.

      Where can I read up on that instruction? It sounds like it might be very useful to me.

      --
      systemd is Roko's Basilisk