
posted by janrinok on Tuesday May 08 2018, @06:13PM
from the Oh,-that's-what-it-means! dept.

What is edge computing?

The word edge in this context means literal geographic distribution. Edge computing is computing that's done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn't mean the cloud will disappear. It means the cloud is coming to you. [...] One great driver for edge computing is the speed of light. If Computer A needs to ask Computer B, half a globe away, before it can do anything, the user of Computer A perceives this delay as latency. The brief moment after you click a link, before your web browser starts to actually show anything, is in large part due to the speed of light. Multiplayer video games implement numerous elaborate techniques to mitigate true and perceived delay between you shooting at someone and you knowing, for certain, that you missed.
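
To put rough back-of-the-envelope numbers on that (these figures are approximate, not from the article): half of Earth's circumference is about 20,000 km, and light in optical fiber travels at roughly 200,000 km/s, so a single round trip to a server on the far side of the planet takes on the order of 40,000 km / 200,000 km/s = 200 ms before any routing, queueing, or actual processing is added on top. That floor cannot be optimized away on the server side; it can only be reduced by moving the computation closer.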

Voice assistants typically need to resolve your requests in the cloud, and the roundtrip time can be very noticeable. Your Echo has to process your speech, send a compressed representation of it to the cloud, the cloud has to uncompress that representation and process it — which might involve pinging another API somewhere, maybe to figure out the weather, and adding more speed of light-bound delay — and then the cloud sends your Echo the answer, and finally you can learn that today you should expect a high of 85 and a low of 42, so definitely give up on dressing appropriately for the weather.

So, a recent rumor that Amazon is working on its own AI chips for Alexa should come as no surprise. The more processing Amazon can do on your local Echo device, the less your Echo has to rely on the cloud. It means you get quicker replies, Amazon's server costs are less expensive, and conceivably, if enough of the work is done locally you could end up with more privacy — if Amazon is feeling magnanimous.

The phrase seems to be popping up more this week due to developments at Microsoft's Build 2018 conference:

Microsoft delivers new edge-computing tools that use its speech, camera, AI technologies

Wikipedia's article, complete with multiple issues.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by DannyB (5839) Subscriber Badge on Tuesday May 08 2018, @06:53PM (#677133) Journal

    I've never tried that one. What happens? Have they "fixed" it? ("fixed" in the sense that you take a pet to the vet to be "fixed" to prevent offspring.)

    I will have to think about how one would calculate PI in a way that you can continue producing a sequence of digits forever.

    The way I do it requires knowing up front how many digits you want.

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class CalcPi {

        public static void main(String[] args) {
            System.out.println( "Calculating pi..." );
            System.out.flush();
            System.out.println( calcPiBigDecimal( 1000 ) );
        }

        // Sums the Bailey-Borwein-Plouffe (BBP) series
        //   pi = sum over n >= 0 of (1/16)^n * ( 4/(8n+1) - 2/(8n+4) - 1/(8n+5) - 1/(8n+6) )
        // at a fixed precision, stopping once another term no longer changes the running sum.
        public static BigDecimal calcPiBigDecimal( int digits ) {
            final MathContext prc = new MathContext( digits + 1 );

            final BigDecimal one = BigDecimal.ONE;
            final BigDecimal two = one.add( one );
            final BigDecimal four = two.add( two );
            final BigDecimal five = four.add( one );
            final BigDecimal six = four.add( two );
            final BigDecimal eight = four.add( four );
            final BigDecimal sixteen = eight.add( eight );

            BigDecimal eightN = BigDecimal.ZERO;                                // 8n for the current term
            final BigDecimal oneSixteenth = BigDecimal.ONE.divide( sixteen, prc );
            BigDecimal oneSixteenthToNPower = BigDecimal.ONE;                   // (1/16)^n

            BigDecimal pi = BigDecimal.ZERO, oldPi = BigDecimal.ZERO;

            while( true ) {
                pi = pi.add(
                                   four.divide( eightN.add( one ), prc )
                        .subtract( two.divide( eightN.add( four ), prc ) )
                        .subtract( one.divide( eightN.add( five ), prc ) )
                        .subtract( one.divide( eightN.add( six ), prc ) )
                        .multiply( oneSixteenthToNPower )
                    , prc );

                // Converged: the new term was too small to change the sum at this precision.
                if( pi.equals( oldPi ) ) {
                    // Trim off one inaccurate digit.
                    pi = pi.round( new MathContext( digits ) );
                    break;
                }
                oldPi = pi;

                eightN = eightN.add( eight );
                oneSixteenthToNPower = oneSixteenthToNPower.multiply( oneSixteenth );
            }

            return pi;
        }
    }

    I suppose the algorithm could use unlimited precision, rather than limiting it. Then on each iteration, compare to the previous result. Once, say, two or more digits have "stabilized", produce them as new digits. Yet the iterations keep going forever, and the internal variable values keep getting longer and longer with more decimal digits.

    I will have to work on expressing that algorithm as both:
    * A Java Iterator (and Stream)
    * A Clojure lazy list

    Then I will have to generalize the approach so that different "iteration algorithms" can be plugged in to produce an infinite sequence of digits in decimal (or your preferred radix?).
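
    A rough sketch of what the Java Iterator version might look like, reusing the calcPiBigDecimal method above (here assumed to live in a class called CalcPi; PiDigitIterator, GUARD, and the precision step are placeholder choices, not a finished design): it recomputes pi at increasing precision and only hands out the digits on which two successive approximations agree, minus a couple of guard digits. The agreement test is a heuristic, not a proof that a digit is final.

    import java.math.BigDecimal;
    import java.util.Iterator;

    public class PiDigitIterator implements Iterator<Integer> {
        private static final int GUARD = 2;   // digits held back until reconfirmed
        private String confirmed = "";        // digits considered stable so far ("314159...")
        private int emitted = 0;              // how many of those have already been returned
        private int precision = 50;           // digits to request on the next recompute

        @Override
        public boolean hasNext() {
            return true;                      // pi never runs out of digits
        }

        @Override
        public Integer next() {
            while( emitted >= confirmed.length() ) {
                refill();
            }
            return Character.getNumericValue( confirmed.charAt( emitted++ ) );
        }

        // Recompute pi at two precisions and keep the prefix on which they agree,
        // minus a few guard digits, as the new stock of stable digits.
        private void refill() {
            String lo = digitsOf( CalcPi.calcPiBigDecimal( precision ) );
            String hi = digitsOf( CalcPi.calcPiBigDecimal( precision + 10 ) );
            int agree = 0;
            while( agree < lo.length() && lo.charAt( agree ) == hi.charAt( agree ) ) {
                agree++;
            }
            confirmed = hi.substring( 0, Math.max( 0, agree - GUARD ) );
            precision += 50;                  // ask for more digits next time around
        }

        // "3.14159..." -> "314159..." so every element of the sequence is a single digit.
        private static String digitsOf( BigDecimal x ) {
            return x.toPlainString().replace( ".", "" );
        }

        public static void main(String[] args) {
            Iterator<Integer> digits = new PiDigitIterator();
            for( int i = 0; i < 30; i++ ) {
                System.out.print( digits.next() );
            }
            System.out.println();
        }
    }

    A genuinely incremental version, like the one described above, would keep the BBP partial sum around and grow its working precision instead of recomputing from scratch each time, but the Iterator plumbing would stay the same, and a Stream view falls out of it via something like Stream.generate( digits::next ).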

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.