
posted by Fnord666 on Sunday August 02 2020, @04:41PM   Printer-friendly
from the seriously-cool-maths dept.

IBM completes successful field trials on Fully Homomorphic Encryption:

Yesterday, Ars spoke with IBM Senior Research Scientist Flavio Bergamaschi about the company's recent successful field trials of Fully Homomorphic Encryption. We suspect many of you will have the same questions that we did—beginning with "what is Fully Homomorphic Encryption?"

FHE is a type of encryption that allows direct mathematical operations on the encrypted data. Upon decryption, the results will be correct. For example, you might encrypt 2, 3, and 7 and send the three encrypted values to a third party. If you then ask the third party to add the first and second values, then multiply the result by the third value and return the result to you, you can then decrypt that result—and get 35.

You don't ever have to share a key with the third party doing the computation; the data remains encrypted with a key the third party never received. So, while the third party performed the operations you asked it to, it never knew the values of either the inputs or the output. You can also ask the third party to perform mathematical or logical operations on the encrypted data with non-encrypted data—for example, in pseudocode, FHE_decrypt(FHE_encrypt(2) * 5) equals 10.
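
To make this concrete, here is a minimal sketch in Python of a toy somewhat-homomorphic scheme over the integers, in the style of the DGHV construction. It is completely insecure and is not IBM's lattice-based scheme; the parameters and function names are ours, chosen only to show how addition and multiplication of ciphertexts carry through to the plaintexts:

    # Toy somewhat-homomorphic encryption over the integers (DGHV-style).
    # Illustration only: completely insecure, NOT IBM's lattice scheme.
    import random

    P = 10**30 + 57   # secret key: a large odd integer (fixed here for brevity)
    T = 1000          # plaintext modulus: messages are integers mod 1000

    def encrypt(m):
        q = random.randrange(10**60, 10**61)  # huge random multiple of the key
        r = random.randrange(1, 10**6)        # small noise, kept far below P
        return P * q + T * r + m

    def decrypt(ct):
        return (ct % P) % T  # mod P strips the key term, mod T strips the noise

    a, b, c = encrypt(2), encrypt(3), encrypt(7)
    print(decrypt((a + b) * c))     # -> 35, computed entirely on ciphertexts
    print(decrypt(encrypt(2) * 5))  # -> 10, ciphertext times plaintext 5

Note that every homomorphic operation grows the noise term, and once the noise exceeds the key this toy scheme decrypts garbage; what makes real schemes *fully* homomorphic is Gentry's bootstrapping trick, which periodically resets that noise.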

[...] Although Fully Homomorphic Encryption makes things possible that otherwise would not be, it comes at a steep cost. Charts in the original article indicate the additional compute power and memory resources required to operate on FHE-encrypted machine-learning models—roughly 40 to 50 times the compute and 10 to 20 times the RAM that would be required to do the same work on unencrypted models.

[...] Each operation performed on a floating-point value decreases its accuracy a little bit—a very small amount for additive operations, and a larger one for multiplicative. Since the FHE encryption and decryption themselves are mathematical operations, this adds a small amount of additional degradation to the accuracy of the floating-point values.

[...] As daunting as the performance penalties for FHE may be, they're well under the threshold for usefulness—Bergamaschi told us that IBM initially estimated that the minimum efficiency to make FHE useful in the real world would be on the order of 1,000:1. With penalties well under 100:1, IBM contracted with one large American bank and one large European bank to perform real-world field trials of FHE techniques, using live data.

[...] IBM's Homomorphic Encryption algorithms use lattice-based encryption, are significantly quantum-computing resistant, and are available as open source libraries for Linux, macOS, and iOS. Support for Android is on its way.


Original Submission

 
  • (Score: 3, Interesting) by Anonymous Coward on Sunday August 02 2020, @05:01PM (7 children)

    by Anonymous Coward on Sunday August 02 2020, @05:01PM (#1030347)

    [...] Each operation performed on a floating-point value decreases its accuracy a little bit—a very small amount for additive operations, and a larger one for multiplicative. Since the FHE encryption and decryption themselves are mathematical operations, this adds a small amount of additional degradation to the accuracy of the floating-point values.

    This statement doesn't make any sense. In floating-point arithmetic, multiplication (assuming no over/underflow) of two values introduces negligible additional (relative) error: the error factors multiply—(1+d1)(1+d2) ≈ 1+d1+d2—so the relative error of the result is essentially the sum of the relative errors of the inputs, plus one rounding. This is because the most significant digits of the product depend only on the most significant digits of the inputs, so any small error in the inputs has only a small effect on the result.

    However, floating-point addition has no such simple bound on the relative error, as addition of values with opposite signs can potentially cancel every single correct digit.

    This makes me think they must not actually be using floating-point arithmetic? For example, fixed-point multiplication does behave very badly in this regard...
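
    A quick Python illustration of the point (the values are mine, picked for illustration): give one input a small relative error and compare how multiplication and subtraction propagate it.

        # How a ~1e-9 relative error in one input propagates (IEEE 754 doubles).
        x, y = 3.0, 2.999999            # nearly equal values
        eps = 1e-9
        xe = x * (1 + eps)              # x carrying a small relative error

        # Multiplication: the relative error passes through essentially unchanged.
        print(abs(xe * y - x * y) / abs(x * y))      # ~1e-9

        # Subtraction of nearly equal values: the same absolute error is now
        # measured against a tiny difference, so the relative error explodes.
        print(abs((xe - y) - (x - y)) / abs(x - y))  # ~3e-3, a ~3-million-fold blowup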

  • (Score: 1, Interesting) by Anonymous Coward on Sunday August 02 2020, @05:35PM (2 children)

    by Anonymous Coward on Sunday August 02 2020, @05:35PM (#1030356)

    Many people in CS, particularly those coming from other branches of mathematics, have no understanding of how floating point actually works. The article is flat-out wrong about addition rounding errors always being smaller than multiplication errors.

    • (Score: 2) by FatPhil on Sunday August 02 2020, @08:39PM

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday August 02 2020, @08:39PM (#1030418) Homepage
      If anything, addition of similarly sized values of opposite sign can result in massively worse loss of precision (sometimes called "catastrophic cancellation") than multiplication. (Note, pedantically, it's not the addition itself that causes the loss of precision; it simply magnifies the loss of precision already introduced into the intermediate values it operates on, e.g. in x^2-y^2.)
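
      Sketched in Python (illustrative values): the factored form computes x - y exactly here (Sterbenz lemma), so nothing gets magnified, while the naive form subtracts two already-rounded squares.

          x, y = 1.0 + 1e-8, 1.0
          naive  = x * x - y * y      # squares are rounded first, then cancelled
          stable = (x + y) * (x - y)  # algebraically identical
          print(naive)   # drifts from the true value around the 8th digit
          print(stable)  # accurate to nearly full double precision
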
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by DannyB on Monday August 03 2020, @06:23PM

      by DannyB (5839) Subscriber Badge on Monday August 03 2020, @06:23PM (#1030839) Journal

      A 64-bit float can hold all possible real number values, in the same way that a 64-bit integer can hold all possible integer values.

      --
      Is there a chemotherapy treatment for excessively low blood alcohol level?
  • (Score: 2, Interesting) by Anonymous Coward on Sunday August 02 2020, @06:28PM (2 children)

    by Anonymous Coward on Sunday August 02 2020, @06:28PM (#1030375)

    Since the purpose of floating point arithmetic is to get the wrong answer quickly, it shouldn't come as any surprise that it doesn't work very well in such a scenario.

    While FHE is a new-ish development, I find it weird that a proof of concept is considered an important result. The mathematics have previously been proven, and there are already libraries that provide primitives for encrypted operations. The next important development is an application that makes meaningful use of the capability.

    • (Score: 2) by HiThere on Sunday August 02 2020, @08:14PM

      by HiThere (866) on Sunday August 02 2020, @08:14PM (#1030405) Journal

      Perhaps. But the additional costs required to use it were interesting.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by darkfeline on Sunday August 02 2020, @09:27PM

      by darkfeline (1030) on Sunday August 02 2020, @09:27PM (#1030442) Homepage

      It's a proof of concept with good enough tradeoffs. My understanding is that the state of the art was that it could be done, but no one did it well enough to be plausibly practical.

      --
      Join the SDF Public Access UNIX System today!
  • (Score: 2) by PiMuNu on Monday August 03 2020, @09:56AM

    by PiMuNu (3823) on Monday August 03 2020, @09:56AM (#1030665)

    Also weird, FTFA:

    > was FHE a little bit lossy? Not exactly, Bergamaschi explained. The model in use is based on floating-point data, not integer—and it's the floats themselves that are a little lossy, not the encryption.

    The claim is that this is encrypted floating-point arithmetic, but "more lossy" than regular floating-point arithmetic. Do they need to encode the floats into some other representation to do the FHE and then decode them back into floating-point representation? Do they reduce the precision to get to the claimed performance? And why is integer arithmetic not affected in the same way? I realise floats have a slightly different representation from integers, but it isn't all that different.

    Naively, encryption is just a mapping from one set of bits to an obfuscated set of bits and back—and is lossless. This is obviously more complicated, but I find the logic here a bit confusing.
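
    One plausible answer (an assumption on our part, not something the article states): approximate-arithmetic FHE schemes such as CKKS encode real numbers as scaled integers, and every multiplication is followed by a rescaling step that rounds away low-order bits, so the precision loss lives in the encoding and rescaling rather than in the encryption map itself. A minimal sketch of that encode/compute/decode round trip, with a made-up scale factor:

        # Fixed-point encoding as used conceptually by approximate FHE
        # schemes (e.g. CKKS): floats become scaled integers, and each
        # multiplication is followed by a lossy rescale.
        SCALE = 2**20

        def encode(x):        # float -> scaled-integer "plaintext"
            return round(x * SCALE)

        def decode(n):        # scaled integer -> float
            return n / SCALE

        a, b = encode(3.141592653589793), encode(2.718281828459045)
        prod = (a * b) // SCALE        # multiply, then rescale: // discards bits
        print(decode(prod))            # ~8.53973..., a few digits short
        print(3.141592653589793 * 2.718281828459045)  # full-precision product

    Integer arithmetic sidesteps this because integers can be encoded exactly and never need rescaling.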