
posted by chromas on Tuesday November 13 2018, @03:12AM   Printer-friendly
from the with-blackjack-and-hookers dept.

The next version of HTTP won’t be using TCP

In its continued efforts to make Web networking faster, Google has been working on an experimental network protocol named QUIC: "Quick UDP Internet Connections." QUIC abandons TCP, instead using its sibling protocol UDP (User Datagram Protocol). UDP is the "opposite" of TCP; it's unreliable (data that is sent from one end may never be received by the other end, and the other end has no way of knowing that something has gone missing), and it is unordered (data sent later can overtake data sent earlier, arriving jumbled up). UDP is, however, very simple, and new protocols are often built on top of UDP.
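For readers who haven't worked with raw sockets, here is a minimal sketch of what "fire and forget" UDP looks like; the address is a placeholder from the TEST-NET-1 documentation range, not a real service.

```python
# Minimal sketch of UDP's "fire and forget" semantics; the address below is a
# placeholder from the TEST-NET-1 documentation range, not a real service.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP, not TCP

for i in range(5):
    payload = f"datagram {i}".encode()
    # sendto() returns as soon as the kernel queues the datagram: there is no
    # acknowledgement, no retransmission, and no guarantee of ordering.
    sock.sendto(payload, ("192.0.2.10", 9999))

sock.close()

# Contrast: socket.SOCK_STREAM (TCP) would first perform the 3-way handshake
# and then retransmit and reorder segments for us. With UDP, any reliability
# or ordering has to be rebuilt in the application, which is what QUIC does.
```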

QUIC reinstates the reliability and ordering that TCP has but without introducing the same number of round trips and latency. For example, if a client is reconnecting to a server, the client can send important encryption data with the very first packet, enabling the server to resurrect the old connection, using the same encryption as previously negotiated, without requiring any additional round trips.
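A rough way to see why that matters is to count round trips before the first byte of application data can flow. The round-trip counts below are the commonly cited ones for each handshake combination; the 100 ms RTT is an arbitrary illustrative figure, not a number from the article.

```python
# Back-of-the-envelope look at time-to-first-byte for different handshakes.
# The round-trip counts are the commonly cited ones; the 100 ms RTT is an
# arbitrary illustrative figure, not a number from the article.
def time_to_first_byte(rtt_ms: float, round_trips: int) -> float:
    """Delay before the client can send application data."""
    return rtt_ms * round_trips

RTT = 100.0  # ms, e.g. a mediocre mobile link

scenarios = {
    "TCP + TLS 1.2 (fresh)":   3,  # TCP handshake + 2-RTT TLS handshake
    "TCP + TLS 1.3 (fresh)":   2,  # TCP handshake + 1-RTT TLS handshake
    "QUIC (fresh connection)": 1,  # transport and crypto handshakes combined
    "QUIC 0-RTT (resumption)": 0,  # data rides along in the first packet
}

for name, rtts in scenarios.items():
    print(f"{name:26s} ~{time_to_first_byte(RTT, rtts):5.0f} ms before data flows")
```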

The Internet Engineering Task Force (IETF—the industry group that collaboratively designs network protocols) has been working to create a standardized version of QUIC, which currently deviates significantly from Google's original proposal. The IETF also wants to create a version of HTTP that uses QUIC, previously referred to as HTTP-over-QUIC or HTTP/QUIC. HTTP-over-QUIC isn't, however, HTTP/2 over QUIC; it's a new, updated version of HTTP built for QUIC.

Accordingly, Mark Nottingham, chair of both the HTTP working group and the QUIC working group for IETF, proposed to rename HTTP-over-QUIC to HTTP/3, and the proposal seems to have been broadly accepted. The next version of HTTP will have QUIC as an essential, integral feature, such that HTTP/3 will always use QUIC as its network protocol.


Original Submission

Related Stories

HTTP/3 Explained: A Work in Progress 11 comments

curl hacker Daniel Stenberg has announced that his online booklet, HTTP/3 Explained, is available for download from GitHub. The booklet will remain a work in progress, as neither the protocol specifications themselves nor any working implementation is even remotely ready at this moment.

The book describes what HTTP/3 and its underlying transport protocol QUIC are, why they exist, what features they have and how they work. The book is meant to be readable and understandable for most people with a rudimentary level of network knowledge or better.

These protocols are not done yet; there aren't even any implementations of these protocols in the main browsers yet! The book will be updated and extended along the way as things change, implementations mature and the protocols settle.

Earlier on SN:
The Next Version of HTTP Won't be Using TCP (2018)
Google Touts QUIC Protocol (2015)


Original Submission

Is Google Using an "Embrace, Extend..." Strategy? 48 comments

Google isn't the company that we should have handed the Web over to

Back in 2009, Google introduced SPDY, a proprietary replacement for HTTP that addressed what Google saw as certain performance issues with existing HTTP/1.1. Google wasn't exactly wrong in its assessments, but SPDY was something of a unilateral act, with Google responsible for the design and functionality. SPDY was adopted by other browsers and Web servers over the next few years, and Google's protocol became widespread.

[...] The same story is repeating with HTTP/3. In 2012, Google announced a new experimental protocol, QUIC, intended again to address performance issues with existing HTTP/1.1 and HTTP/2. Google deployed QUIC, and Chrome would use QUIC when communicating with Google properties. Again, QUIC became the basis for IETF's HTTP development, and HTTP/3 uses a derivative of QUIC that's modified from and incompatible with Google's initial work.

It's not just HTTP that Google has repeatedly worked to replace. Google AMP ("Accelerated Mobile Pages") is a cut-down HTML combined with Google-supplied JavaScript designed to make mobile Web content load faster. This year, Google said that it would try to build AMP with Web standards and introduced a new governance model that gave the project much wider industry oversight.

A person claiming to be a former Microsoft Edge developer has written about a tactic Google supposedly used to harm the competing browser's performance:

A person claiming to be a former Edge developer has today described one such action. For no obvious reason, Google changed YouTube to add a hidden, empty HTML element that overlaid each video. This element disabled Edge's fastest, most efficient hardware accelerated video decoding. It hurt Edge's battery-life performance and took it below Chrome's. The change didn't improve Chrome's performance and didn't appear to serve any real purpose; it just hurt Edge, allowing Google to claim that Chrome's battery life was actually superior to Edge's. Microsoft asked Google if the company could remove the element, to no avail.

The latest version of Edge addresses the YouTube issue and reinstates Edge's performance. But when the company talks of having to do extra work to ensure EdgeHTML is compatible with the Web, this is the kind of thing that Microsoft has been forced to do.

See also: Ex Edge developer blames Google tricks in part for move to Chromium

Related: HTTP/2 on its Way In, SPDY on its Way Out
Google Touts QUIC Protocol
Google Attempting to Standardize Features of Accelerated Mobile Pages (AMP)
Google AMP Can Go To Hell
The Next Version of HTTP Won't be Using TCP
HTTP/3 Explained: A Work in Progress
Microsoft Reportedly Building a Chromium-Based Web Browser to Replace Edge, and "Windows Lite" OS
Mozilla CEO Warns Microsoft's Switch to Chromium Will Give More Control of the Web to Google


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by c0lo on Tuesday November 13 2018, @03:30AM (26 children)

    by c0lo (156) Subscriber Badge on Tuesday November 13 2018, @03:30AM (#761140) Journal

    Actually, what's wrong with the IP layer? Including IPv6. Short answer: it was not designed with mobility in mind! Long answer (and I mean it, it's pretty long, but still worth reading): The world in which IPv6 was a good design [apenwarr.ca].

    However, over time, I developed a mistrust of all things Google. So, I'm (rhetorically and redundantly**) asking myself: why not
    - Stream Control Transmission Protocol [wikipedia.org]?
    - or why not Minimal Latency Tunneling (MinimaLT) [cr.yp.to]?
    After all, none of QUIC, SCTP or MinimaLT gets away from the underlying IP layer (thus the underlying problem is not actually solved).

    (** yes, of course, because Google has the gold)

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by Thexalon on Tuesday November 13 2018, @03:50AM (17 children)

      by Thexalon (636) on Tuesday November 13 2018, @03:50AM (#761144)

      The problems with TCP:
      1. The 3-way handshake, the SYN, SYN-ACK, ACK series of 3 packets that go back and forth before data can be sent. To close down a connection, you have 4 additional packets going back and forth. That means 7 packets that don't include any of the data being sent. If you use a persistent connection (which most browsers do), that means the server is maintaining state about all its clients. If you don't use a persistent connection, then you pay the 7-packet penalty each time you contact the server, which includes every little AJAX request.

      2. TCP headers are 2.5 to 7.5 times larger than UDP headers (20-60 bytes versus 8). Again, more non-data being passed around.

      If this works as promised, it will noticeably reduce the bandwidth needed to transmit information over HTTP. If it doesn't work as promised, then we'll all be stuck with an alternative protocol that is, for all practical purposes, a less robust version of TCP without the 40+ years of use in the wild behind it.
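To make the packet-count argument above concrete, here is a back-of-the-envelope tally following the same 3-packet setup / 4-packet teardown framing, assuming minimal 20-byte IPv4 and TCP headers. Real stacks piggyback ACKs and FINs on data segments, so treat this as an upper-bound illustration rather than a wire-accurate trace.

```python
# Rough overhead tally following the parent's framing: a non-persistent TCP
# connection spends 3 packets on setup and 4 on teardown around the data.
# Real stacks piggyback ACKs and FINs on data segments, so treat this as an
# upper-bound illustration rather than a wire-accurate trace.
IP_HEADER = 20   # bytes, IPv4 without options
TCP_HEADER = 20  # bytes, minimum (no options)
UDP_HEADER = 8   # bytes, fixed

def tcp_overhead(n_data_packets: int) -> int:
    handshake = 3 * (IP_HEADER + TCP_HEADER)   # SYN, SYN-ACK, ACK
    teardown = 4 * (IP_HEADER + TCP_HEADER)    # FIN, ACK, FIN, ACK
    per_data = n_data_packets * (IP_HEADER + TCP_HEADER)
    return handshake + teardown + per_data

def udp_overhead(n_data_packets: int) -> int:
    return n_data_packets * (IP_HEADER + UDP_HEADER)  # no setup or teardown

# A tiny AJAX-style exchange: one request packet, one response packet.
print("TCP header overhead:", tcp_overhead(2), "bytes")  # 360
print("UDP header overhead:", udp_overhead(2), "bytes")  # 56
```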

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 1, Insightful) by Anonymous Coward on Tuesday November 13 2018, @03:57AM (2 children)

        by Anonymous Coward on Tuesday November 13 2018, @03:57AM (#761146)

        5g speed and all that, and yet nickel and diming over tcp header length? bullshit.

        • (Score: 1, Informative) by Anonymous Coward on Tuesday November 13 2018, @04:04AM

          by Anonymous Coward on Tuesday November 13 2018, @04:04AM (#761149)

          Sure, for any given transaction it's only a tiny amount. Now multiply that by the billions of people on the planet, and all the internet traffic they generate every single damn day.

          It adds up!

        • (Score: 3, Informative) by Pino P on Tuesday November 13 2018, @01:49PM

          by Pino P (4721) on Tuesday November 13 2018, @01:49PM (#761271) Journal

          As long as satellite and cellular carriers continue to nickel-and-dime their subscribers with overage fees, nickel-and-diming to reduce how much subscribers owe the carrier will continue to be valuable.

      • (Score: 5, Insightful) by jb on Tuesday November 13 2018, @04:21AM (11 children)

        by jb (338) on Tuesday November 13 2018, @04:21AM (#761151)

        This is nonsense, up with which we ought not put ;)

        TCP packets have a size limit of 65,535 bytes, although it's unusual (at least in this part of the world) to find a WAN link with an MTU over 1,500 bytes. TCP headers are 20 to 60 bytes in length. UDP headers are 8 bytes in length.

        So by using shorter L4 headers we're talking a bandwidth efficiency dividend of somewhere between roughly 0.02% at worst (12 bytes saved on a 65,535-byte segment) and 3.5% at best (52 bytes saved on a 1,500-byte packet).

        IP round trip times across the 'net (assuming a reasonably decent local uplink) tend to run to tens of milliseconds. So a single packet for establishment instead of three is likely to reduce connection establishment latency by between 20ms & 100ms.
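A quick sketch reproducing the arithmetic above, under the same assumptions (20-60 byte TCP headers, 8-byte UDP headers, packets between the 1,500-byte MTU and the 65,535-byte maximum):

```python
# Reproducing the arithmetic above: 20-60 byte TCP headers, 8-byte UDP headers,
# packets between the 1,500-byte MTU and the 65,535-byte maximum segment.
TCP_HEADER_MIN, TCP_HEADER_MAX = 20, 60
UDP_HEADER = 8
MTU, MAX_SEGMENT = 1_500, 65_535

worst = (TCP_HEADER_MIN - UDP_HEADER) / MAX_SEGMENT  # small saving, huge packet
best = (TCP_HEADER_MAX - UDP_HEADER) / MTU           # big saving, small packet

print(f"worst case: {worst:.3%} of each packet saved")  # ~0.018%
print(f"best case:  {best:.3%} of each packet saved")   # ~3.467%

# Latency side: collapsing a 3-packet handshake into the first data packet
# saves roughly one round trip, i.e. tens of milliseconds, once per connection.
```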

        Those efficiency dividends sound rather paltry.

        If we want to speed up HTTP transactions, there are much simpler approaches which will yield far greater performance improvements.

        For human-readable web sites --- simply design them properly, without the mountains of javascript and ridiculously oversized images that have become the fashion of late.

        For machine-machine comms -- again, simply choose an API that isn't 90+% syntactic sugar or redundancy (just because storing data locally with XML can be handy, doesn't mean it's a good choice for communicating over WANs) ... [and for those extremely rare projects where 20ms extra latency really is a problem, HTTP is a poor L5 protocol choice anyway]

        • (Score: 5, Insightful) by choose another one on Tuesday November 13 2018, @09:11AM (1 child)

          by choose another one (515) Subscriber Badge on Tuesday November 13 2018, @09:11AM (#761200)

          > For human-readable web sites --- simply design them properly, without the mountains of javascript and ridiculously oversized images that have become the fashion of late.

          The javascript and large images are not the biggest problem, IMO. I just did a quick test (without ad blockers) for kicks:

          front page of this site: 17 requests, all from one hostname.
          front page of major mainstream news site (stopped after 30secs loading): 738 requests, 35 different hostnames for the top page plus dozens of iframes using other hostnames that I couldn't be bothered to expand to count

          The DNS request overhead alone must be a major part of the problem, especially the latency.
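One way to repeat that quick test is to export the page load as a HAR file from the browser's network tab and count requests and distinct hostnames; the filename below is hypothetical, and the sketch relies only on the standard HAR layout (log.entries[].request.url). Each distinct hostname is potentially another DNS lookup plus another TCP/TLS handshake, which is where the latency piles up.

```python
# One way to repeat that quick test: export the page load as a HAR file from
# the browser's network tab, then count requests and distinct hostnames.
# "frontpage.har" is a hypothetical filename; the structure used here
# (log.entries[].request.url) is the standard HAR layout browsers export.
import json
from collections import Counter
from urllib.parse import urlparse

with open("frontpage.har", encoding="utf-8") as fh:
    entries = json.load(fh)["log"]["entries"]

hosts = Counter(urlparse(entry["request"]["url"]).hostname for entry in entries)

print(f"{len(entries)} requests across {len(hosts)} hostnames")
for host, count in hosts.most_common(10):
    print(f"{count:4d}  {host}")
```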

          • (Score: 2) by Unixnut on Tuesday November 13 2018, @02:12PM

            by Unixnut (5779) on Tuesday November 13 2018, @02:12PM (#761291)

            God, that is awful; I would definitely class that as a poorly designed web site (and insecure to boot: only one of those 35 different hostnames needs to be compromised for your site to be compromised too).

            So the GPP's point is still valid, even if you remove the JS crap and images.

        • (Score: 3, Informative) by The Mighty Buzzard on Tuesday November 13 2018, @11:20AM (5 children)

          by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Tuesday November 13 2018, @11:20AM (#761230) Homepage Journal

          This is nonsense, up with which we ought not put ;)

          When you find yourself on the verge of ending a sentence with a preposition, simply add ", asshole" just before the terminal punctuation. Problem solved.

          --
          My rights don't end where your fear begins.
          • (Score: 1, Funny) by Anonymous Coward on Tuesday November 13 2018, @12:39PM (1 child)

            by Anonymous Coward on Tuesday November 13 2018, @12:39PM (#761251)

            When you find yourself on the verge of ending a sentence with a preposition, simply add ", asshole The Mighty Buzzard" just before the terminal punctuation. Problem solved.

            FTFY

          • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @04:51PM (2 children)

            by Anonymous Coward on Tuesday November 13 2018, @04:51PM (#761367)

            Sigh. There is nothing wrong with ending a sentence with a preposition. That nonsense rule was made up about 100 years ago by two idiots trying to make English more like Latin (where ending with a preposition isn't possible). It's the same as the split-infinitive rule. Before that, writers did it all the time.

            • (Score: 2) by Thexalon on Tuesday November 13 2018, @09:21PM (1 child)

              by Thexalon (636) on Tuesday November 13 2018, @09:21PM (#761456)

              We will now proceed to gleefully, playfully, nay even gloriously split infinitives with wild abandon!

              --
              The only thing that stops a bad guy with a compiler is a good guy with a compiler.
              • (Score: 2) by bob_super on Wednesday November 14 2018, @08:27PM

                by bob_super (1357) on Wednesday November 14 2018, @08:27PM (#761888)

                Is that what we're now going to enthusiastically be up to, asshole?

        • (Score: 0) by Anonymous Coward on Wednesday November 14 2018, @02:54AM (1 child)

          by Anonymous Coward on Wednesday November 14 2018, @02:54AM (#761573)

          "This is the type of errant pedantry up with which I will not put." - Winston Churchill

          • (Score: 2) by jb on Thursday November 15 2018, @05:03AM

            by jb (338) on Thursday November 15 2018, @05:03AM (#762054)

            Glad to see at least someone knew the reference.

        • (Score: 0) by Anonymous Coward on Thursday November 15 2018, @07:33PM

          by Anonymous Coward on Thursday November 15 2018, @07:33PM (#762317)

          I'll just add that the next version of HTTP should be renamed to something that has nothing to do with hypertext transfer. Maybe JSTP or WebAppTP?

      • (Score: 1, Informative) by Anonymous Coward on Tuesday November 13 2018, @04:31AM

        by Anonymous Coward on Tuesday November 13 2018, @04:31AM (#761155)

        The three-way handshake is annoying, as is the TLS handshake, if you are doing a number of short requests, sure. But removing it only reduces the cost of setting up the initial connection anyway, as you can pipeline and stream numerous responses over a single connection. It also doesn't remove that much, as you would have to implement some sort of handshake in your own protocol. Plus, session resumption on later connections has its own security and performance problems.

        As for the headers, UDP headers consist of a 16-bit source port, 16-bit destination port, 16-bit length and 16-bit checksum. TCP adds a 32-bit sequence number, 32-bit ACK number, 4-bit header size, 12 bits of flags, 16-bit window size, 16-bit urgent pointer, and then various options, followed by enough padding to make the header length a multiple of 32 bits. Of the added data, you could remove some of the flags and the urgent pointer, and condense a couple of standard options and the window size. The reason you can't really get rid of the others is that your application would have to re-implement them in order to provide the same reliability, connection-oriented, flow-control and congestion-avoidance guarantees as TCP.
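Packing just the fixed fields listed above makes the size difference tangible; the port numbers and field values below are arbitrary placeholders, and TCP options and padding are left out, which is why the TCP figure is the 20-byte minimum.

```python
# Packing the fixed fields listed above with struct makes the sizes tangible.
# Port numbers and field values are arbitrary placeholders; TCP options and
# padding are left out, which is why the TCP figure is the 20-byte minimum.
import struct

# UDP: source port, destination port, length, checksum (all 16-bit).
udp_header = struct.pack("!HHHH", 12345, 443, 20, 0)

# TCP: ports, 32-bit sequence and ACK numbers, 4-bit data offset plus flags
# packed into 16 bits, window size, checksum, urgent pointer.
tcp_header = struct.pack(
    "!HHIIHHHH",
    12345,      # source port
    443,        # destination port
    1,          # sequence number
    0,          # acknowledgement number
    5 << 12,    # data offset (5 x 32-bit words), reserved bits, flags
    65535,      # window size
    0,          # checksum
    0,          # urgent pointer
)

print("UDP header:", len(udp_header), "bytes")  # 8
print("TCP header:", len(tcp_header), "bytes")  # 20, before options
```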

        Really, as the other poster put it, SCTP seems a better option, with more history behind it, than hacking your own application protocol on top of UDP.

      • (Score: 2) by loonycyborg on Tuesday November 13 2018, @08:52AM

        by loonycyborg (6905) on Tuesday November 13 2018, @08:52AM (#761196)

        The actual length of the packets involved in TCP is insignificant, though latency can matter. But it amounts to nothing more than the original requester having to wait one full round-trip time between the endpoints before starting to send actual data. I understand you'd want to try to eliminate this somehow, but most applications can live with it, including the Web. And I'm not really sure it's possible to eliminate this delay without sacrificing other advantages of TCP.

    • (Score: 1) by fustakrakich on Tuesday November 13 2018, @04:26AM

      by fustakrakich (6150) on Tuesday November 13 2018, @04:26AM (#761152) Journal

      Actually, what's wrong with the IP layer?

      The connection is only reliable if a central server can't pull the plug.

      --
      La politica e i criminali sono la stessa cosa..
    • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @04:56AM (6 children)

      by Anonymous Coward on Tuesday November 13 2018, @04:56AM (#761159)

      My first thought is that they are trying to kill NAT. If several machines share an IP address, it can make it more difficult to sort out who is who, and this protocol as described is pretty much impossible for NAT firewalls to handle reliably, let alone securely. The lack of a disconnect packet is particularly troublesome.

      • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @05:25AM (5 children)

        by Anonymous Coward on Tuesday November 13 2018, @05:25AM (#761163)

        IPv6 makes NAT a non-issue without having to have a unique global address on every machine. You can operate in exactly the same way you do with IPv4 without stuff breaking due to NAT-incompatible protocols. It's still not a reason to intentionally break NAT though.

        • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @06:59AM (4 children)

          by Anonymous Coward on Tuesday November 13 2018, @06:59AM (#761173)

          IPv6 makes NAT a non-issue without having to have a unique global address on every machine.

          Wut? IPv6 makes NAT a non-issue by having a unique global address on every machine.

          It's still not a reason to intentionally break NAT though.

          Any reason (even no reason) is a good reason to intentionally break NAT. The faster we break NAT, the faster we get the piece of shit consumer ISPs to stop dragging their worthless feet and properly implement IPv6 like they were supposed to over a decade ago.

          • (Score: 2) by The Mighty Buzzard on Tuesday November 13 2018, @11:32AM

            by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Tuesday November 13 2018, @11:32AM (#761233) Homepage Journal

            Meh. When pretty much all problems with NAT are already easily solved, your hatred is irrational.

            --
            My rights don't end where your fear begins.
          • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @05:37PM (2 children)

            by Anonymous Coward on Tuesday November 13 2018, @05:37PM (#761382)

            No, the globally unique address isn't required. All that's required is a locally unique address.

            • (Score: 0) by Anonymous Coward on Wednesday November 14 2018, @03:32AM (1 child)

              by Anonymous Coward on Wednesday November 14 2018, @03:32AM (#761581)

              No, at least one globally unique address is required; otherwise, to get from locally unique to globally unique and back, your Network Address [is] Translated.

              • (Score: 0) by Anonymous Coward on Wednesday November 14 2018, @09:47AM

                by Anonymous Coward on Wednesday November 14 2018, @09:47AM (#761671)

                Yeah, one globally unique address... on the router, just like IPv4.

  • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @03:40AM (11 children)

    by Anonymous Coward on Tuesday November 13 2018, @03:40AM (#761143)

    Luckily not everything that Google dreams of becomes an inevitable standard before they kill it off.

    • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @04:17AM (10 children)

      by Anonymous Coward on Tuesday November 13 2018, @04:17AM (#761150)

      Maybe, maybe not. You can be guaranteed that the capitalist powers want this to happen for a reason, and the Five Eyes almost certainly have a hand in it.

      The post-capitalist internet will be on darknets until the revolution seizes power from the capitalist order anyhow.

      • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @05:01AM (9 children)

        by Anonymous Coward on Tuesday November 13 2018, @05:01AM (#761160)

        darknets? On your ISP?? Now that's funny!

        think i'll drop anchor right about, here

        • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @05:32AM (7 children)

          by Anonymous Coward on Tuesday November 13 2018, @05:32AM (#761164)

          How about meshdarknets? In big cities with overlapping redundant wifi coverage it could be pretty effective, HAM-mesh is a thing, and the wires are always there to piggyback on. HTTP-over-bittorrent would be interesting to see for a decentralized web.

          • (Score: 2) by c0lo on Tuesday November 13 2018, @07:40AM (6 children)

            by c0lo (156) Subscriber Badge on Tuesday November 13 2018, @07:40AM (#761181) Journal

            If you are happy to stay inside the darkmesh's coverage, good for you.
            As soon as you want to get out, there'll be some exit nodes bearing the responsibility for it - and if the powers that be hold those exit nodes responsible for all the traffic going through them, the situation is not much better than today.

            This is why https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol [wikipedia.org] or https://en.m.wikipedia.org/wiki/Multipath_TCP [wikipedia.org] are much better choices (than QUIC), as they allow multihomed devices to chunk/spread the same transmission over multiple channels. This is probably also why we'll never have them (they increase the cost incurred by Eve).
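As a toy illustration of the multihoming idea only (not of SCTP's or MPTCP's actual wire format), the sketch below round-robins chunks of one logical stream over two plain UDP sockets bound to two local addresses. All addresses are placeholders for a host's real wifi/cellular interfaces; the real protocols add sequencing, retransmission and failover inside the transport layer.

```python
# Toy illustration of the multihoming idea only, not of SCTP's or MPTCP's
# actual wire format: round-robin chunks of one logical stream over two UDP
# sockets bound to two local addresses. All addresses are placeholders for a
# host's real interfaces (e.g. wifi and cellular).
import socket
from itertools import cycle

LOCAL_ADDRS = [("192.0.2.10", 0), ("198.51.100.10", 0)]  # replace with real local IPs
REMOTE = ("203.0.113.5", 9999)                           # placeholder peer

socks = []
for addr in LOCAL_ADDRS:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(addr)          # pin each socket to one local interface
    socks.append(s)

data = b"x" * 4096
chunks = [data[i:i + 512] for i in range(0, len(data), 512)]

for seq, (chunk, sock) in enumerate(zip(chunks, cycle(socks))):
    # naive 4-byte sequence number so the receiver could reassemble in order
    sock.sendto(seq.to_bytes(4, "big") + chunk, REMOTE)
```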

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
            • (Score: 2) by takyon on Tuesday November 13 2018, @09:23AM (5 children)

              by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday November 13 2018, @09:23AM (#761205) Journal

              Plenty of people run Tor exit nodes, right? I think there would be some participation in hosting meshnet exit nodes if the scheme ever catches on. It could be difficult to suppress the "good" nodes with "bad" ones, or a low trust scheme could tolerate a large percentage of "bad" exit nodes by having more middle nodes.

              Ideally, organizations like libraries would participate in a meshnet scheme. They could act as exit nodes and/or give local access to a lot of useful knowledge and data. They have a bit more clout and money than your home meshnet enthusiast, and are ideologically inclined to support digital liberties.

              As far as I can tell, "darkmeshnets" have gone pretty much nowhere fast over the years despite a lot of talk [reddit.com], but they should only become more feasible as there are more internet-connected devices, IoT devices, etc. and bandwidth continues to get cheaper. And if they only have the speed and reliability to support text-based communications, that will be enough for some people.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 3, Insightful) by c0lo on Tuesday November 13 2018, @11:14AM (4 children)

                by c0lo (156) Subscriber Badge on Tuesday November 13 2018, @11:14AM (#761227) Journal

                Plenty of people run Tor exit nodes, right?

                Do you have any evidence to support your assertion?

                I think there would be some participation in hosting meshnet exit nodes

                I would not dare in the current legal context. Even less if I were to use my home connection as an exit node.
                For the young and unafraid, here's a list of fair warnings about what may happen to you if you run a Tor exit node:

                - be flooded by DMCA takedown notices - to the point that the Tor project has a template response letter for such cases [torproject.org]. Note that you will need to answer each and every takedown request, and the "safe harbour" provisions aren't available in all international jurisdictions.

                - have your home raided by FBI agents in the wee hours of the morning [vice.com], your computers seized and your pet puppy kicked in the guts... umm, this latter one I slipped in myself, no warranties that it will or will not happen

                - be sentenced for distributing child porn [theregister.co.uk] and spend your savings on legal fees and fines. Maybe go to jail too.

                - have your cohosted servers tampered with by law enforcement without being officially notified [torproject.org]

                --
                https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
                • (Score: 1) by fustakrakich on Tuesday November 13 2018, @03:51PM (2 children)

                  by fustakrakich (6150) on Tuesday November 13 2018, @03:51PM (#761336) Journal

                  This is why Tor, and even VPNs, can't work. They don't blend in. They are lighthouses, screaming for attention when exactly the opposite is needed.

                  --
                  La politica e i criminali sono la stessa cosa..
                  • (Score: 0) by Anonymous Coward on Tuesday November 13 2018, @05:40PM (1 child)

                    by Anonymous Coward on Tuesday November 13 2018, @05:40PM (#761384)

                    It's why we should use the IoT to host the exit nodes!

                    • (Score: 2) by bob_super on Wednesday November 14 2018, @08:34PM

                      by bob_super (1357) on Wednesday November 14 2018, @08:34PM (#761891)

                      That's been the plan all along. Why else do you think the security is so crappy?
                      The upcoming white-hat IoT virus will create a botnet of VPNs, mirrors, and TOR nodes inside every insecure IoT piece of junk.

                • (Score: 0) by Anonymous Coward on Saturday November 17 2018, @08:49AM

                  by Anonymous Coward on Saturday November 17 2018, @08:49AM (#763002)

                  There are definite concerns as to the number of unique endpoints in use, and moreover in the selection of them.

                  If you watch TBB or Nyx's site circuits for long enough you will notice how often the same endpoints and exits show up without doing mass geoblocking of nodes, and even then it usually changes to the same 5 or so nodes for the remaining accepted geoip nodes.

                  Another discussion involves identifying numbers which last for the whole tor daemon session, which can help dox you there. And this is all assuming 5 eyes doesn't have every major node in their regions compromised or under surveillance.

                  I have heard a few other accusations as well, but they are less well founded than these bits of empirical evidence, which you can test and validate yourself with a few dozen to a few hundred site views, alternating between different websites and new circuits on existing sites. The tunnel choices start to look very suspicious after a few days, especially how often all three nodes are from the same country code, or come from the same dozen or so 'high volume' Tor nodes, defeating the original purpose of spreading traffic across nodes randomly rather than by known performance metrics, the latter of which makes it easy for well-funded surveillance groups to game.

        • (Score: 1, Insightful) by Anonymous Coward on Tuesday November 13 2018, @03:56PM

          by Anonymous Coward on Tuesday November 13 2018, @03:56PM (#761338)

          There will always be a way around. The capitalist system has shown that it is incapable of creating quality software.

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday November 13 2018, @09:23AM

    by Anonymous Coward on Tuesday November 13 2018, @09:23AM (#761204)

    It may well go another way. Addendum: all connections go through G's servers. Another one: if someone blocks G's malware, add delays to punish bad goyim :). And of course transfer ads and crap like downscaled quad-HD forum avatars first.
    Source: All these things are present in AMP.
