
SoylentNews is people

posted by martyb on Wednesday September 25 2019, @11:42PM   Printer-friendly
from the "irrational"-conclusion dept.

Arthur T Knackerbracket has found the following story:

Professor Peter Coveney, Director of the UCL Centre[*] for Computational Science and study co-author, said: "Our work shows that the behaviour of the chaotic dynamical systems is richer than any digital computer can capture. Chaos is more commonplace than many people may realise and even for very simple chaotic systems, numbers used by digital computers can lead to errors that are not obvious but can have a big impact. Ultimately, computers can't simulate everything."

The team investigated the impact of using floating-point arithmetic -- a method standardised by the IEEE and used since the 1950s to approximate real numbers on digital computers.

Digital computers use only rational numbers, ones that can be expressed as fractions. Moreover, the denominator of these fractions must be a power of two, such as 2, 4, 8, 16, etc. There are infinitely more real numbers that cannot be expressed this way.

In the present work, the scientists used all four billion of these single-precision floating-point numbers that range from plus to minus infinity. The fact that the numbers are not distributed uniformly may also contribute to some of the inaccuracies.

First author, Professor Bruce Boghosian (Tufts University), said: "The four billion single-precision floating-point numbers that digital computers use are spread unevenly, so there are as many such numbers between 0.125 and 0.25, as there are between 0.25 and 0.5, as there are between 0.5 and 1.0. It is amazing that they are able to simulate real-world chaotic events as well as they do. But even so, we are now aware that this simplification does not accurately represent the complexity of chaotic dynamical systems, and this is a problem for such simulations on all current and future digital computers."
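Boghosian's claim about uneven spacing is easy to check directly: for positive IEEE-754 singles, consecutive representable values have consecutive bit patterns, so counting the floats in an interval is just a subtraction of bit patterns. A quick sketch using only Python's standard struct module:

```python
import struct

def f32_bits(x):
    # reinterpret a float as its 32-bit IEEE-754 single-precision bit pattern
    return struct.unpack('<I', struct.pack('<f', x))[0]

# for positive floats, consecutive representable singles have consecutive
# bit patterns, so the count of singles in [lo, hi) is a bit-pattern difference
for lo, hi in [(0.125, 0.25), (0.25, 0.5), (0.5, 1.0)]:
    print(lo, hi, f32_bits(hi) - f32_bits(lo))  # 8388608 (2^23) each time
```

Each interval holds exactly 2^23 representable singles even though each is twice as wide as the one before it, which is precisely the unevenness described in the quote.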

The study builds on the work of Edward Lorenz of MIT whose weather simulations using a simple computer model in the 1960s showed that tiny rounding errors in the numbers fed into his computer led to quite different forecasts, which is now known as the 'butterfly effect'.

[*] UCL: University College London

Journal Reference:
Bruce M. Boghosian, Peter V. Coveney, Hongyan Wang. A New Pathology in the Simulation of Chaotic Dynamical Systems on Digital Computers. Advanced Theory and Simulations, 2019; 1900125 DOI: 10.1002/adts.201900125


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Funny) by c0lo on Thursday September 26 2019, @12:18AM (24 children)

    by c0lo (156) Subscriber Badge on Thursday September 26 2019, @12:18AM (#898860) Journal

    Size of known Universe: 10^26 m
    Planck length: 10^-35 m

    Size of known Universe in Planck lengths: 10^61, which is around 2^207. So 256-bit fixed precision would be enough to represent any position in the known Universe in Planck lengths.
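The arithmetic above checks out; with Python's built-in bignums it is a one-liner to verify (a sketch, using the comment's rough figures):

```python
# rough figures from the comment above: universe ~10^26 m across,
# Planck length ~10^-35 m, so ~10^61 Planck lengths end to end
universe_in_planck_lengths = 10 ** 61

# how many bits does a fixed-point coordinate at that resolution need?
print(universe_in_planck_lengths.bit_length())  # 203, comfortably under 256
```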

    Or, you know, use ternary computers. I read a while back that base 3 offers the optimum trade-off between computational efficiency and reliability of results.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @12:53AM (20 children)

      by Anonymous Coward on Thursday September 26 2019, @12:53AM (#898870)

      That would work for representing any given location of any given thing, but I'm assuming there is a huge (probably exponential) cost to doing operations that way. Basically, there are a lot of smart people in computer science (not to be confused with "programming"), and in physics. I'm sure if it were this easy, it'd already be done this way.

      Then again, it could be like that clever graphical rendering trick where you render the insides of objects instead of outsides for collisions, and they really do need a neophyte to say "the emperor has no clothes." I doubt it, but it could be true.

      As for using base3... Uhh... am I misunderstanding? All our current computer technology, including decades of optimization, is based on binary. Are you suggesting that we create new trinary-computers for some reason?

      • (Score: 3, Informative) by c0lo on Thursday September 26 2019, @01:24AM (18 children)

        by c0lo (156) Subscriber Badge on Thursday September 26 2019, @01:24AM (#898880) Journal

        That would work for representing any given location of any given thing, but I'm assuming there is a huge (probably exponential) cost to doing operations that way.

        Nope, the difficulty of multiplication scales less than quadratically with the number of bits in the representation. More precisely, with an exponent of at most log2(3) [wikipedia.org] (Karatsuba's algorithm) - faster algos exist for sufficiently large numbers of bits in the representation.
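That log2(3) ≈ 1.585 exponent comes from Karatsuba's trick of trading four half-size multiplications for three. A toy Python sketch, illustrative only (real bignum libraries are far more careful and switch algorithms by operand size):

```python
def karatsuba(x, y):
    # multiply non-negative integers using three recursive half-size
    # products instead of four, giving the O(n^log2(3)) bound mentioned above
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    m = 10 ** half
    a, b = divmod(x, m)   # x = a*m + b
    c, d = divmod(y, m)   # y = c*m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # the saved multiply: (a+b)(c+d) - ac - bd == ad + bc
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * m * m + mid * m + bd

print(karatsuba(31415926, 27182818) == 31415926 * 27182818)  # True
```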

        Given that most functions are computed using convergent series, this means you can expect at worst something like a cubic degradation of performance with the number of bits in the representation.

        Basically, there a lot of smart people in computer science (not to be confused with "programming"), and in physics. I'm sure if it were this easy, it'd already be done this way.

        Nope again. TFA is just a warning to all "wet-behind-the-ears scientists" to consider the precision they need before blindly relying on already-written libraries or on whatever arithmetic the hardware they use provides (being tempted by fast, on-chip, but low-precision arithmetic - e.g. neural networks on your mobile [fritz.ai])

        Otherwise, bignum libs exist and have been in use for quite a long time. [wikipedia.org]

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 3, Funny) by krishnoid on Thursday September 26 2019, @02:40AM

          by krishnoid (1156) on Thursday September 26 2019, @02:40AM (#898910)

          I don't understand most of that but it sounds authoritative and falsifiable, so +1 Informative for you.

        • (Score: 2) by HiThere on Thursday September 26 2019, @03:15AM (2 children)

          by HiThere (866) Subscriber Badge on Thursday September 26 2019, @03:15AM (#898920) Journal

          The point about "BigNum"s is valid. As for the other, why do things the hard way for an infrequent edge case?

          Using BigNums and such when floating point is good enough adds extra complexity, extra cost, and more computing cycles, so there's no way that would be made the default when it's only rarely needed.

          FWIW, most of the time we don't even need floats, so lots of computers were made that didn't *have* hardware floating point units. Hardware got cheaper, so now we have built in floats, and even things like GPUs. Handling chaos "correctly" is not normally needed. Yes, it shows up all over the place, but usually your measurements aren't accurate enough that you could take advantage of the extra capability, and so it would only be useful in unusual circumstances. And then you can use an "infinite precision floating point" library...and expect things to take a very long time to run.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 2) by c0lo on Thursday September 26 2019, @03:38AM

            by c0lo (156) Subscriber Badge on Thursday September 26 2019, @03:38AM (#898926) Journal

            As for the other, why do things the hard way for an infrequent edge case?

            Because, now and then (i.e. infrequently), one does have things that can't be done without bignum?

            so there's no way that would be made the default when it's only rarely needed.

            Neither I nor TFA says "do it always".
            However, if the CPU providers find it cheap enough to go to higher-bit on-chip arithmetic**, my argument suggests that computations beyond 256-bit numbers for physics modelling (that includes weather) could be overkill.
            (The age of the Universe is somewhere around 14e+9 years => 4.42e+17 seconds => 8.1966042e+60 Planck time units => 256-bit numbers are precise enough for time scales over 16k times longer than the current age of the Universe.)

            ---

            ** From the first IC CPU [wikipedia.org] to date is less than 50 years, and the bits went from 4 to 64 - 16x.
            From 64 to 256 is only 4x, so who knows what the next 50 years will bring?

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 1, Insightful) by Anonymous Coward on Thursday September 26 2019, @05:57AM

            by Anonymous Coward on Thursday September 26 2019, @05:57AM (#898972)

            It's our god given right to waste CPU cycles. We'll use BigNums, and we'll do it in a virtual machine written in JavaScript.

        • (Score: 3, Interesting) by bzipitidoo on Thursday September 26 2019, @04:17AM (13 children)

          by bzipitidoo (4388) on Thursday September 26 2019, @04:17AM (#898934) Journal

          Skimming, I see that neither article nor paper explicitly mentions Numerical Stability, a primary concern in the subfield of Numerical Analysis. I find it hard to believe that the authors and reviewers did not know of this, or at least did not learn of it in doing this research. But maybe they missed it. Iterative methods in particular can be highly prone to instability. By choosing among methods, and carefully ordering the operations, a lot of unstable scenarios can be avoided.

          A simple example is the expression a*b*c. Multiplication of real numbers is associative, of course. Ideally, you would get the same results with either (a*b)*c or a*(b*c). But in practice, you might get widely divergent answers. If a*b is close to the maximum representable value and c is a small fraction, you would want to multiply by c first; otherwise the intermediate product can overflow and a lot of precision will be lost. You should also use the commutative property and consider orders such as a*c*b.
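The effect is easy to reproduce in Python doubles; powers of two keep every in-range intermediate exact, so only the ordering matters:

```python
import math

a, b, c = 2.0 ** 1000, 2.0 ** 100, 2.0 ** -1000

left = (a * b) * c    # a*b is 2^1100, which overflows a double to infinity
right = a * (c * b)   # reordering keeps every intermediate in range
print(math.isinf(left), right == 2.0 ** 100)  # True True

# associativity also fails at everyday magnitudes, just less dramatically
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```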

          You can't have the computer blindly plow ahead with whatever arbitrary order it was fed the numbers; it should be programmed to test for instability and to juggle the operations in the way that will be most stable. If they aren't taking such precautions, then their results are not that interesting or valuable. Yeah, yeah, chaotic systems are extremely sensitive to initial conditions; we know that already. There are plenty of chaotic systems that cannot be accurately calculated even with stable methods, but Numerical Stability is still very valuable.

          • (Score: 2) by c0lo on Thursday September 26 2019, @05:40AM (11 children)

            by c0lo (156) Subscriber Badge on Thursday September 26 2019, @05:40AM (#898959) Journal

            The path of a numerical computer is beset on all sides by the approximation errors and the propagation of these errors. Blessed is he who, in the name of charity and good, will know about these and shepherds the weak in this knowledge through the valley of darkness for he is truly his brother's keeper and the finder of lost children.

            But don't strike down with great vengeance and furious anger those that don't go shepherding all the way, 'cause good can be still good without being perfect and nobody is perfect anyway.

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
            • (Score: 2) by JoeMerchant on Thursday September 26 2019, @02:01PM (10 children)

              by JoeMerchant (3937) on Thursday September 26 2019, @02:01PM (#899092)

              don't strike down with great vengeance and furious anger those that don't go shepherding all the way, 'cause good can be still good without being perfect and nobody is perfect anyway.

              Unless they write control loops such as:

              float n = 0.0;
              while ( n != 10.0 ) {
                  /* blah blah */
                  n = n + 0.1;
              }

              for they deserve, at the very least, great ridicule in front of their peers.
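For the record, here is why that loop never exits: 0.1 has no finite binary representation, so the accumulated sum steps over 10.0 without ever landing on it (shown with Python doubles, but the same holds for C floats):

```python
n = 0.0
for _ in range(100):
    n = n + 0.1   # each addition rounds, and the errors do not cancel
print(n == 10.0)          # False: n lands near 10.0 but never exactly on it
print(abs(n - 10.0) > 0)  # True
```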

              --
              🌻🌻 [google.com]
              • (Score: 2) by DannyB on Thursday September 26 2019, @03:25PM (6 children)

                by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @03:25PM (#899157) Journal

                5 DEFDBL N
                10 n = 0.0
                20 GOTO 50
                30 blah blah
                40 n = n + 0.1
                50 IF n <> 10.0 THEN GOSUB 30
                60 END

                --
                When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                • (Score: 2) by JoeMerchant on Thursday September 26 2019, @03:37PM (5 children)

                  by JoeMerchant (3937) on Thursday September 26 2019, @03:37PM (#899161)

                  Did you copy this from my computer lab partner in 1984? - I think I've seen exactly that code sequence before.

                  --
                  🌻🌻 [google.com]
                  • (Score: 2) by DannyB on Thursday September 26 2019, @03:40PM (4 children)

                    by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @03:40PM (#899164) Journal

                    I proudly translated it myself, without any help, into the ideal programming language without any misteaks.

                    --
                    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                    • (Score: 2) by JoeMerchant on Thursday September 26 2019, @03:44PM (3 children)

                      by JoeMerchant (3937) on Thursday September 26 2019, @03:44PM (#899170)

                      I particularly like the stack overflow insert, so you don't get stuck in an infinite loop.

                      --
                      🌻🌻 [google.com]
                      • (Score: 2) by DannyB on Thursday September 26 2019, @03:52PM (2 children)

                        by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @03:52PM (#899174) Journal

                        That's why I have the title Senior Software Developer.

                        --
                        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                        • (Score: 3, Touché) by JoeMerchant on Thursday September 26 2019, @04:01PM (1 child)

                          by JoeMerchant (3937) on Thursday September 26 2019, @04:01PM (#899180)

                          They gave me the title "Principal", twice now - the first time they did that, I had them change it to "Principle" because, you know, I've got principles, not a bunch of unruly kids running around.

                          This time Principal fits, so I let it stay.

                          --
                          🌻🌻 [google.com]
                          • (Score: 2) by DannyB on Thursday September 26 2019, @05:30PM

                            by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @05:30PM (#899210) Journal

                            I think I have that title because I've been here so long (almost 40 years) that I know where all the bodies are buried.

                            --
                            When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
              • (Score: 2) by DannyB on Thursday September 26 2019, @03:39PM (2 children)

                by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @03:39PM (#899163) Journal

                for they deserve, at the very least, great ridicule in front of their peers.

                I was recently discussing right here on SN how, in the 1960s and certainly the '70s, everyone learned never to use floating point to hold money values.

                float payment = 0.00; // start at zero dollars and sense
                while( payment != 10.00 ) { // while not up to ten dollars
                    blah blah
                    payment = payment + 0.10; // add ten cents (but with no sense)
                }

                I would ordinarily write payment = 10.0, rather than not equal. But even if you're not paranoid, you should be able to assume dollars and cents are exact and make sense.
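The standard fix those courses taught: keep money in integer cents or a decimal type, never binary floats. A sketch with Python's decimal module:

```python
from decimal import Decimal

# ten dimes in binary floating point: not quite a dollar
dimes_float = sum([0.1] * 10)
print(dimes_float == 1.0)  # False

# ten dimes in exact decimal arithmetic: a dollar, to the cent
dimes_decimal = sum(Decimal("0.10") for _ in range(10))
print(dimes_decimal == Decimal("1.00"))  # True
```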

                --
                When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                • (Score: 2) by DannyB on Thursday September 26 2019, @03:55PM (1 child)

                  by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @03:55PM (#899178) Journal

                  I would ordinarily write payment = 10.0

                  Ugh, can't write a less than without using html entity.

                  Try this...

                  I would ordinarily write: payment <= 10.00

                  --
                  When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                  • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @07:19PM

                    by Anonymous Coward on Thursday September 26 2019, @07:19PM (#899267)

                    No, everyone knows it should be: 10.00 >= payment

          • (Score: 2) by JoeMerchant on Thursday September 26 2019, @01:58PM

            by JoeMerchant (3937) on Thursday September 26 2019, @01:58PM (#899090)

            Chaos is often modeled with feedback oscillators, so you've got an infinite chain of operations, and, yes, it is very important to keep in the "sweet spot" of your numerical representation - most chaotic oscillators I've ever worked with keep their numbers in the range of ~ +/- 10.0 to +/- 0.1, regardless of whether they are using float, double, or other precision to compute them.

            If you want to see it bigger (or smaller), scale it up (or down), but the chaotic feedback computation loop stays planted within a narrow well behaved range.

            --
            🌻🌻 [google.com]
      • (Score: 4, Interesting) by c0lo on Thursday September 26 2019, @02:19AM

        by c0lo (156) Subscriber Badge on Thursday September 26 2019, @02:19AM (#898901) Journal

        As for using base3... Uhh... am I misunderstanding? All our current computer technology, including decades of optimization, is based on binary. Are you suggesting that we create new trinary-computers for some reason?

        Clearly, on top of being ignorant, you are lazy too. Your opportunity to at least appear humble about it (instead of dismissive) could start here [google.com]

        From the list of results:
        * history of ternary computers [wikipedia.org] - yes, they even have a history, and the history shows they can be cheaper than binary.
        * Samsung backing research in ternary chip design [extremetech.com] - so... maybe not that crazy an idea after all?
        * optical computers are highly likely to use ternary [iop.org] - with two orthogonal polarisations and the absence of light representing the trit. Otherwise, balanced ternary [wikipedia.org] electronics would use 1, 0 and -1 polarities for the gates, which is not that hard to do.
        * some resources [uiowa.edu] - will tell you that a ternary computer could cut the necessary wires (read: connections) to 64%. Fast arithmetic is also possible.
        * some even propose an "open source/crowdfunding/maker-style" approach [ternary-computing.com]. Others invest their hobby time in it just for the hack of it [hackaday.com]
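For the curious, balanced ternary is simple to play with in software; here is a toy encoder (digits written '+', '0', '-' for 1, 0, -1; a hypothetical illustration, not how any real ternary machine encodes):

```python
def balanced_ternary(n):
    # encode an integer with digits {1, 0, -1}, printed as '+', '0', '-'
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 3)
        if r == 2:          # a digit of 2 becomes -1 with a carry into the next trit
            r = -1
            n += 1
        digits.append("+0-"[1 - r])
    return "".join(reversed(digits))

print(balanced_ternary(8))    # +0- : 9 + 0 - 1
print(balanced_ternary(-8))   # -0+ : negation just flips every digit
```

The last line shows the scheme's famous perk: negating a number is a digit-wise flip, so no separate sign bit or two's-complement machinery is needed.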

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by JoeMerchant on Thursday September 26 2019, @01:09PM

      by JoeMerchant (3937) on Thursday September 26 2019, @01:09PM (#899071)

      It all depends on the stability of your chaotic oscillator. For oscillators that diverge, or are very very close to diverging, any tiny difference may make a very significant change in the observed pattern - the difference between modelling with floats and doubles certainly can do that.
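That float-versus-double divergence is easy to reproduce with a chaotic map. The sketch below emulates a single-precision pipeline by rounding through an IEEE-754 single each step (struct does the rounding), then runs the same map in doubles from the same start:

```python
import struct

def to_f32(x):
    # round a Python double to the nearest IEEE-754 single-precision value
    return struct.unpack('<f', struct.pack('<f', x))[0]

# logistic map at r = 3.9 (chaotic regime), same start, two precisions
x32, x64 = to_f32(0.4), 0.4
gap = 0.0
for _ in range(60):
    x32 = to_f32(3.9 * x32 * (1.0 - x32))
    x64 = 3.9 * x64 * (1.0 - x64)
    gap = max(gap, abs(x32 - x64))
print(gap)  # order one: within a few dozen steps the trajectories decorrelate
```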

      On the other hand, there are very richly patterned chaotic oscillators (or, as a UF professor of Chaos once chided me: quasi-periodic oscillators, though I think he's splitting hairs that are too fuzzy to classify), which deliver the same large scale, and mid scale patterns whether you compute them with floats, or doubles, or arbitrarily large precision. Of course, when you zoom down into these oscillators to reveal the fine structure at the limits of the computational precision, they all will eventually "chunk up" as you approach the limits of precision, whatever those limits are - this is easily seen with the classic Mandelbrot set zooming programs.

      And, what is the difference between stability and instability in a chaotic oscillator? Usually just small changes in a single feedback coefficient can take you from an oscillator that converges to a single point, to bifurcation, to multiple bifurcations, sometimes to quasi-periodic oscillation, to chaos, to instability and divergence to infinity.
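That whole progression is visible in the simplest chaotic system there is, the logistic map x → r·x·(1−x). A rough sketch that sweeps the feedback coefficient r and counts distinct attractor points (rounding to pick off the cycle):

```python
def attractor_size(r, burn=1000, sample=400):
    # iterate past the transient, then count distinct visited points
    x = 0.5
    for _ in range(burn):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return len(seen)

for r in (2.9, 3.2, 3.5, 3.9):
    print(r, attractor_size(r))
# 2.9 -> 1 (converges to a point), 3.2 -> 2 (bifurcation),
# 3.5 -> 4 (period doubled again), 3.9 -> hundreds (chaos)
```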

      http://mangocats.com [mangocats.com]

      --
      🌻🌻 [google.com]
    • (Score: 2) by JoeMerchant on Thursday September 26 2019, @02:54PM

      by JoeMerchant (3937) on Thursday September 26 2019, @02:54PM (#899128)

      Or, you know, use ternary computers

      I attended a city council meeting a while back where a citizen in the audience, regular attendee of the meetings, dressed with knee pads and tattered wrappings generally suggesting that he sleeps without a regular roof over his head, taking periodic swigs from a hip flask, stood up for his 3 minutes at the public comment microphone. I think the majority of his time was spent lobbying for decriminalization of something or another, but... I clearly remember his closing comments wherein he requested that the council re-consider his suggestion that they re-code the property tax rolls using hexadecimal - since it would be more efficient than decimal, more numbers in a smaller space you see. Thank you for your consideration.

      --
      🌻🌻 [google.com]
    • (Score: 2) by DeathMonkey on Thursday September 26 2019, @05:26PM

      by DeathMonkey (1380) on Thursday September 26 2019, @05:26PM (#899207) Journal

      c0lo lives in a one-dimensional universe, apparently!

  • (Score: 2, Insightful) by Runaway1956 on Thursday September 26 2019, @12:22AM (13 children)

    by Runaway1956 (2926) Subscriber Badge on Thursday September 26 2019, @12:22AM (#898862) Journal

    This is a longwinded way of saying that math has rules that we understand. Nature, not so much. Yeah, you can easily argue that nature has rules, but you can't argue very well that we understand all the rules. When the rules we don't understand spoil our plans, we blame it on "chaos". Chaos theory is a poor attempt to explain all of those rules that we do not understand, IMHO.

    Or, we could just say that the human mind is fallible, thus all of its calculations will be fallible.

    • (Score: 3, Touché) by Anonymous Coward on Thursday September 26 2019, @12:45AM

      by Anonymous Coward on Thursday September 26 2019, @12:45AM (#898868)

      What is called a "chaotic process" is simply one with the kind of positive feedback that amplifies perturbations as time goes on. Given that we can never measure initial conditions with absolute precision, nor simulate random external influences (such as, say, cosmic rays), the timespan we can usefully model is limited by how long it takes for those tiny differences from reality to grow until they dominate the landscape.
      TFA is reminding us of an additional source of tiny differences: floating-point rounding errors. Those get amplified in exactly the same way.

    • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @01:00AM (2 children)

      by Anonymous Coward on Thursday September 26 2019, @01:00AM (#898873)

      Yeah, you can easily argue that nature has rules, but you can't argue very well that we understand all the rules.

      No.

      Chaos theory [wikipedia.org] doesn't represent "we don't understand the rules." It means small changes can have widely divergent results, which makes it hard to model because you need to be right about everything.

      As an example people here may understand better, try to find the prime factors of the number ((2^6593)-3). "OMG, it's hard." Yes, but that doesn't mean that we don't understand the basic fundamental rules of math. It just means it's really hard.

      Also, the fact that we can launch a rocket and have it land on the moon (do you know how far away that is, how fast it is moving, and how relatively small it is), or create millions of CPUs which have trillions of micron-sized circuits, suggests that we have a lot of knowledge about how nature works. Yes, we don't know EVERYTHING (and never will, although the part we don't know shrinks every day)... but it's a bit dismissive to be ignoring what we do know because our current knowledge of physics breaks down at extreme edge situations.

      • (Score: 2) by HiThere on Thursday September 26 2019, @03:19AM (1 child)

        by HiThere (866) Subscriber Badge on Thursday September 26 2019, @03:19AM (#898922) Journal

        To say "the part we don't know shrinks every day" is to make some unproven assumptions about the nature of "natural laws". It's *probably* correct, but whether it actually is or not is a part of what we don't know. (*Are* there a finite number of natural laws?)

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 2) by All Your Lawn Are Belong To Us on Thursday September 26 2019, @02:40PM

          by All Your Lawn Are Belong To Us (6553) on Thursday September 26 2019, @02:40PM (#899116) Journal

          Is there a finite number of definitive realities? If there's only one universe, then there would likely be a finite number of ways to describe it (z). If there are n set realities, then it is just z*n. If it is not finite, then no.

          I know! 42!

          Uhoh. Just replaced ourselves with something even more bizarrely inexplicable.

          --
          This sig for rent.
    • (Score: 2) by c0lo on Thursday September 26 2019, @01:01AM (7 children)

      by c0lo (156) Subscriber Badge on Thursday September 26 2019, @01:01AM (#898874) Journal

      This is a longwinded way of saying that math has rules that we understand.

      For some values of "we".

      Chaos theory is a poor attempt to explain all of those rules that we do not understand, IMHO.

      Honesty appreciated, but it shows the value of "we" above doesn't include you.

      Or, we could just say that the human mind is fallible, thus all of his calculations will be fallible.

      Rule zero of numerical modelling: there is no simpler computer that can simulate nature more precisely and faster than nature itself.
      Consequences:
      1. any computer simpler than nature will use an approximation model**.
      2. at best, using a model can (theoretically) give results that are faster but imprecise (no perfect prediction possible), or as precise as nature but slower (no prediction at all). Practically, neither of the two is attainable.

      The above subsumes "human mind" as a particular case of computer.

      ----

      ** Even when you perform an experiment (using a part of the nature to actually "compute" what you want to observe), you are implicitly injecting the assumption that your experiment will have results that are valid anywhere/anytime. Which may not necessarily happen.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @01:06AM (6 children)

        by Anonymous Coward on Thursday September 26 2019, @01:06AM (#898875)

        Rule zero of numerical modelling: there is no simpler computer that can simulate nature more precisely and faster than nature itself.
        Consequences:
        1. any computer simpler than nature will use an approximation model**.
        2. at best, using a model can (theoretically) give results that are faster but imprecise (no perfect prediction possible), or as precise as nature but slower (no prediction at all). Practically, neither of the two is attainable.

        The above subsumes "human mind" as a particular case of computer.

        Your ideas are intriguing and I'd like to subscribe to your newsletter...

        Actually, I'm being semi-serious here. Why is this axiomatic? I can imagine some simple state (say a box with 2 atoms in space), and then having a giant supercomputer with numerous optimizations running some simulation on it. Why can't it go faster-than-real-time (not unlike fast-forwarding a video)?

        I agree that for any practical situation with our current knowledge of physics and technology that it's not feasible. But theoretically... why is this so axiomatic as to be considered "rule zero?"

        • (Score: 3, Interesting) by c0lo on Thursday September 26 2019, @01:33AM (5 children)

          by c0lo (156) Subscriber Badge on Thursday September 26 2019, @01:33AM (#898882) Journal

          Why is this axiomatic?

          Because the speed of light being the absolute limit of interaction speed is axiomatic.

          I can imagine some simple state (say a box with 2 atoms in space), and then having a giant supercomputer with numerous optimizations running some simulation on it. Why can't it go faster-than-real-time (not unlike fast-forwarding a video)?

          You will make implicit assumptions about the enclosure (things like: perfect smoothness, homogeneous and unchanging properties affecting collisions, etc.).
          The reality: your box is made of atoms vibrating in ways you don't perfectly know, so you can't model them. And your box is far from isolated from the environment: it will let things pass, or may react in ways that perturb your experiment, whenever a high-energy cosmic ray strikes it or quantum fluctuations of the vacuum happen.
          Your "simple state" is already a model which has predictive power over average spans of space or time. It will inevitably start to diverge once you are past the limits within which your assumptions are valid (e.g. in 10^6 years, your box may be just dust).

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @02:59AM (3 children)

            by Anonymous Coward on Thursday September 26 2019, @02:59AM (#898915)

            Or consider a box with no particles in it. Boom! The universe suddenly appears. Shit happens that isn't in your model.

            • (Score: 2) by c0lo on Thursday September 26 2019, @03:44AM (2 children)

              by c0lo (156) Subscriber Badge on Thursday September 26 2019, @03:44AM (#898927) Journal

              Or consider a no box with no particles in it. Boom! The universe suddenly appears.

              FTFY.
              Very likely the Big Bang didn't even need a box.

              --
              https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
              • (Score: 2) by JoeMerchant on Thursday September 26 2019, @03:02PM (1 child)

                by JoeMerchant (3937) on Thursday September 26 2019, @03:02PM (#899134)

                Hard to know from inside this particular box/no box universe.

                --
                🌻🌻 [google.com]
                • (Score: 2) by c0lo on Thursday September 26 2019, @10:05PM

                  by c0lo (156) Subscriber Badge on Thursday September 26 2019, @10:05PM (#899322) Journal

                  Yeah, we might be dead or alive, until someone looks at us.

                  --
                  https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 3, Informative) by JoeMerchant on Thursday September 26 2019, @02:58PM

            by JoeMerchant (3937) on Thursday September 26 2019, @02:58PM (#899131)

            Applicable: https://dilbert.com/strip/2019-03-03 [dilbert.com]

            --
            🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @02:57AM

      by Anonymous Coward on Thursday September 26 2019, @02:57AM (#898914)

      > Or, we could just say that the human mind is fallible, thus all of his calculations will be fallible.

      And yet there you are, making metaphysical assertions about metaphysical fallibility. Best shut the uck up, eh?

  • (Score: 4, Insightful) by isj on Thursday September 26 2019, @12:23AM (1 child)

    by isj (5249) on Thursday September 26 2019, @12:23AM (#898863) Homepage

    I was about to write a rant about how this was obvious if you know Edward Lorenz's work from the 1960s, but then I RTFA and discovered that they do reference his work.

    TFA is about quantifying how bad the chaos gets for specific models on current hardware (IEEE double-precision floating point), and cautions against using naïvely simulated models when, e.g., training neural nets, because the input can be utterly wrong.
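
    The effect isj describes is easy to reproduce at home. Here is a small sketch I've added (not from TFA): the Lorenz '63 system stepped with forward Euler in single and double precision. The step size, step count, and initial state are my own illustrative choices; the only difference between the two runs is the float width, yet the end states diverge.

    ```python
    import numpy as np

    def lorenz_step(state, dt, dtype):
        """One forward-Euler step of the Lorenz '63 system."""
        x, y, z = state
        sigma, rho, beta = dtype(10.0), dtype(28.0), dtype(8.0) / dtype(3.0)
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return np.array([x + dt * dx, y + dt * dy, z + dt * dz], dtype=dtype)

    def trajectory(dtype, steps=4000, dt=0.005):
        """Integrate from a fixed start; only the float width varies."""
        state = np.array([1.0, 1.0, 1.0], dtype=dtype)
        dt = dtype(dt)
        for _ in range(steps):
            state = lorenz_step(state, dt, dtype)
        return state

    single = trajectory(np.float32)
    double = trajectory(np.float64)
    # Per-step rounding noise (~1e-7 relative in float32) is amplified
    # exponentially by the chaos; the two end states no longer agree.
    print(single, double)
    ```

    Both runs stay on the bounded attractor; it is only the *which point on the attractor* question that the rounding noise scrambles.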

    • (Score: 3, Interesting) by JoeMerchant on Thursday September 26 2019, @01:50PM

      by JoeMerchant (3937) on Thursday September 26 2019, @01:50PM (#899082)

      This just in from the department of redundancy department!

      Sub-optimal summary

      I guess they have started establishing "standard chaos models" for neural net training, and such? Ever since the late 1980s, I've been looking for an appropriate application of chaotic oscillators in neural net training, and, while you certainly can do it, I have yet to find a defensible logical argument for why any particular mathematically based chaotic oscillator would be preferable to a pseudo-random number generator (Mersenne Twister being my favorite, for irrational reasons, but all the "good ones" are basically the same).

      Now, data collected from the real world, that's certainly a worthwhile source, and I've seen some of the theoretical papers about how one can take "live" data and fuzz it up with mathematically random noise and get superior convergence with smaller "live" sets fuzzed than larger "live" sets without the fuzzing, particularly for image processing like cancer recognition, etc. And... one certainly can "color" the randomness with normal or Poisson or other appropriate distributions, but, the underlying randomness really wants to be "flat" so that it influences the theoretical distributions as little as possible - which takes me back to Mersenne fed through a transfer function.

      IMO, anyone using funky shaped chaotic oscillators to generate colored randomness is just playing, which is fine, but unlikely to get the kind of repeatable, tweakable, enhanceable results of a flat random source transformed through a well-described (and describable) transform function.

      All of which has little, or nothing, to do with the summary/article's focus on inappropriate use of low-resolution number representation - appropriately so in my opinion, because even IEEE floating point has far more resolution than any real-world sensors we're going to be applying "fuzzing" to, so just don't be a bonehead about it and make sure your fuzz is far finer grained than the LSB of your ADC.
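
      The "flat source pushed through a transfer function" recipe above can be sketched in a few lines of Python, whose random module happens to be exactly a Mersenne Twister (MT19937). The helper names, seed, jitter scale, and toy dataset below are my own illustration, not from the comment:

      ```python
      import math
      import random

      rng = random.Random(42)  # CPython's Random *is* a Mersenne Twister

      def gaussian_from_flat(rng):
          """Box-Muller transfer function: two flat uniforms -> one
          standard normal deviate."""
          u1 = rng.random() or 1e-12  # guard against log(0)
          u2 = rng.random()
          return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

      def fuzz(samples, scale=0.01, copies=10):
          """Expand a small 'live' dataset with jittered copies, the
          augmentation trick described above."""
          return [x + scale * gaussian_from_flat(rng)
                  for x in samples for _ in range(copies)]

      live = [0.2, 0.5, 0.9]   # stand-in for real sensor readings
      augmented = fuzz(live)
      print(len(augmented))    # 30 jittered samples
      ```

      Swapping the Gaussian for a Poisson or other "coloring" only means changing the transfer function; the underlying flat stream stays the same.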

      --
      🌻🌻 [google.com]
  • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @05:58AM (3 children)

    by Anonymous Coward on Thursday September 26 2019, @05:58AM (#898973)

    They should try to invent a better value representation for their projects. Floating point is a compromise between many competing factors. Sure, everything will be an approximation, but you may be able to find better approximations for specific needs. Maybe a logarithm based approach?

    For example, the "decimal" type in C# is a variation on traditional floating point that sacrifices absolute range for more precision in the range typically needed for monetary computations. It's also not perfect, but better than traditional floating point for money math.
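
    For readers without C# handy, Python's standard decimal module makes a similar trade (it is arbitrary-precision rather than 128-bit, so not an exact match for C#'s decimal, but it illustrates the same exact-decimal idea):

    ```python
    from decimal import Decimal

    # Binary floating point cannot represent 0.1 or 0.2 exactly,
    # so their sum carries a visible rounding artifact:
    print(0.1 + 0.2)                          # 0.30000000000000004

    # A decimal type represents these values exactly:
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30
    ```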

    • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @10:32AM (1 child)

      by Anonymous Coward on Thursday September 26 2019, @10:32AM (#899042)

      floating point IS a logarithm based approach.
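
      That claim is checkable in code: a float already stores a base-2 "logarithm" (the exponent) plus a linear mantissa, which math.frexp exposes directly. A quick sketch I've added for illustration:

      ```python
      import math

      for x in [0.1, 1.0, 6.0, 1000.0]:
          m, e = math.frexp(x)  # x == m * 2**e, with 0.5 <= m < 1
          # The stored exponent is (up to an offset) floor(log2(x)):
          print(x, m, e, math.floor(math.log2(x)) + 1 == e)
      ```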

      • (Score: 0) by Anonymous Coward on Thursday September 26 2019, @03:10PM

        by Anonymous Coward on Thursday September 26 2019, @03:10PM (#899146)

        One complaint is that accuracy is bunched toward the middle: toward zero. Using a different mapping, that accuracy can be spread out further in the "number spectrum", giving more toward the edges. That may take a 2-level logarithm instead of 1. I can't propose anything specific yet, but I'm confident the accuracy "spot" allocation can be changed. Whether made better or "good enough" for their specific needs, I cannot say, because there will be tradeoffs. They are stuck with approximations regardless, but there may be *better* approximations than what's available now.
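
        The bunching is easy to measure: Python's math.ulp reports the gap from a value to the next representable float, and that gap doubles every time the magnitude doubles. (A quick illustration I've added; it uses double precision rather than the single precision discussed in TFA, but the spacing pattern is the same.)

        ```python
        import math

        # Equal counts of floats per binade means the absolute gap
        # between neighbors doubles each time the magnitude doubles:
        for x in [0.125, 0.25, 0.5, 1.0, 1e6]:
            print(x, math.ulp(x))  # gap to the next representable float
        ```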

    • (Score: 2) by DannyB on Thursday September 26 2019, @06:03PM

      by DannyB (5839) Subscriber Badge on Thursday September 26 2019, @06:03PM (#899237) Journal

      For money, I don't want more precision. I just want the right amount of precision, and an exact, not approximate, representation. Something that can exactly represent ten dollars and ten cents. Not approximately, but exactly.

      It is amusing how money has been represented by integers for as long as recorded history, without computers. Even sometimes in weird bases, like base 60, etc. Before the computer age.

      With computers I don't care if it's BCD or 2's complement integers, as long as it is an integer. And BCD is really an integer, with an implied decimal point. 2's complement gives you fast arithmetic on all common processors. Converting to and from decimal base-10 representation is an exact process. Doing four-function arithmetic with integers is also exact. Even division has an exact quotient and remainder.

      Funny how money was represented prior to computers.
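
      The integer-cents approach above can be sketched in a few lines: store money as an integer count of the smallest unit, and convert to decimal display only at the edges. The helper names below are my own illustration (and they ignore negative amounts and currencies with other minor-unit sizes):

      ```python
      def to_cents(dollars_str):
          """Parse a decimal dollar string into an exact integer cent count."""
          dollars, _, cents = dollars_str.partition(".")
          return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

      def to_display(cents):
          """Format an integer cent count back to a dollar string."""
          return f"{cents // 100}.{cents % 100:02d}"

      # Ten dollars and ten cents, plus a nickel -- exact at every step:
      total = to_cents("10.10") + to_cents("0.05")
      print(to_display(total))  # 10.15
      ```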

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
(1)