
posted by janrinok on Tuesday May 09, @11:33AM   Printer-friendly

The co-creator of the Internet's protocols admits his crystal ball had a few cracks:

Vint Cerf, the recipient of the 2023 IEEE Medal of Honor for "co-creating the Internet architecture and providing sustained leadership in its phenomenal growth in becoming society's critical infrastructure," didn't have a perfect view of the Internet's future. In hindsight, there are a few things he admits he got wrong. Here are some of those mistakes, as recently told to IEEE Spectrum:

  • 1) "I thought 32 bits ought to be enough for Internet addresses."
  • 2) "I didn't pay enough attention to security."
  • 3) "I didn't really appreciate the implications of the World Wide Web."

These are only his top three - can you think of others that are missing from that group? What about mistakes that didn't make the top three but, in hindsight, should still have been done differently?


Original Submission

  • (Score: 5, Interesting) by VLM on Tuesday May 09, @12:12PM (4 children)

    by VLM (445) on Tuesday May 09, @12:12PM (#1305501)

    Is this a technical list or nontechnical list?

    I'd nominate the entire MTU situation.

    I'd also nominate how latency is/was handled.

    • (Score: 2) by DannyB on Tuesday May 09, @02:11PM (1 child)

      by DannyB (5839) Subscriber Badge on Tuesday May 09, @02:11PM (#1305517) Journal

      What would you change about MTU? Various intermediate routers are likely to have SOME kind of limit. Especially back in the daze.

      --
      How often should I have my memory checked? I used to know but...
      • (Score: 3, Interesting) by VLM on Tuesday May 09, @02:24PM

        by VLM (445) on Tuesday May 09, @02:24PM (#1305521)

        One strategy is to toss the whole idea. If you do ipv4, 2K MTU or don't play. If you do ipv6, 8K MTU or don't play.

        Looking at ipv4 vs v6, PMTUD is slightly less crazy in v6. Also, get rid of ipv4 routers doing packet fragmenting.

        I have not put a large amount of effort into the ideas above, but "pretty much" force a standard MTU and copy ipv6.
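
        Roughly what that "force a floor or don't play" rule could look like, as a toy sketch only: the 2K and 8K floors come from the comment above, while everything else (function names, segmentation logic) is invented for illustration.

```python
# Toy sketch of the "force a floor or don't play" rule: the 2K/8K floors
# come from the comment above; the function names and segmentation logic
# are invented purely for illustration.

MIN_MTU = {4: 2048, 6: 8192}  # bytes: proposed floor per IP version

def accept_link(ip_version: int, link_mtu: int) -> bool:
    """A host or router simply refuses to play over links smaller than
    the forced floor, instead of fragmenting or probing the path."""
    return link_mtu >= MIN_MTU[ip_version]

def segment(payload: bytes, ip_version: int) -> list[bytes]:
    """With a guaranteed floor, the sender can segment once, up front,
    and never needs in-network fragmentation or PMTUD probing."""
    mtu = MIN_MTU[ip_version]
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

if __name__ == "__main__":
    print(accept_link(4, 1500))           # False: a 1500-byte link "doesn't play"
    print(len(segment(b"x" * 20000, 6)))  # 3 chunks at the 8K floor
```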

    • (Score: 2) by Mojibake Tengu on Wednesday May 10, @05:15AM (1 child)

      by Mojibake Tengu (8598) on Wednesday May 10, @05:15AM (#1305656) Journal

      Actually, one part of my private LAN security is founded on the premise that jumbo packets will never pass to the outside world, because they can't get through the router, nor through wifi, nor through phone data, no matter what MTU or fragmentation settings are used. :D

      That really helps with NFS risks, especially with NFS on ipv6.

      Also, many IoT thingies cannot even observe jumbo packets because of hardware limitations of their network chips.

      A critical design factor of such LAN construction is buying industrial grade switches (yes, pure switches, not the usually vulnerable managed routers!) capable of jumbo packets, and dedicating gateways to doing their gatewaying job only.

      --
      The edge of 太玄 cannot be defined, for it is beyond every aspect of design
      • (Score: 2) by SomeRandomGeek on Wednesday May 10, @04:12PM

        by SomeRandomGeek (856) on Wednesday May 10, @04:12PM (#1305729)

        While I agree that is a nice hack to help improve your security, it is also a good example of what is so broken about MTU. Finding a way to put a broken feature to use does not make it less broken.

  • (Score: 5, Insightful) by looorg on Tuesday May 09, @12:26PM (2 children)

    by looorg (578) on Tuesday May 09, @12:26PM (#1305505)

    It would seem that more or less all the issues come down to the same one or two issues -- I/we couldn't predict and see into the future. What they did made sense back then and for a few years into the future. Beyond that, things get very hazy. Also, given the rapid change of systems back then, they probably didn't expect some of these things to be around for very long before they got replaced by something new.

    Sort of like how I don't think we can really make good predictions today about what the internet will be like 30-40 years from now. But if we just follow the curve, it will be hellish.

    The amusing one is that 32 bits ought to be enough for everyone and everything. Until they started to give an address to every machine under the sun: computers, cars, phones, fridges and god only knows what else. Not to mention all the billions of people that all of a sudden want to be online to facebook, look at naked people and shop for things they don't really need. Didn't take them into the equation. Oops. My bad!

    The security one is legit tho. A large chunk of naivete. But then the internet wasn't for the masses, or for sending monetary transactions as it is today, or a giant mechanism for gathering data on people so you can stalk them or show them ads. So it had other security concerns back then, if any. If you can just send passwords in plaintext, or store them all on the server in a plaintext file, sometimes not even encrypted or hashed, then you are living in a different world compared to today.

    These are only the top 3? Was there a more extensive list? I can't seem to find it. But overall they are probably all about not being able to predict the future, a lack of security features, and the fact that you more or less trusted everyone back then, so there was no need for much checking. If you got hooked up you were in the club and were ok. After all, there were not a lot of people on the internet back in the 70's and 80's. You could know all the machines connected and the people that took care of them. Unlike today.

    Also he is only the co-creator now? Did Al Gore demand that they share credits?

    • (Score: 5, Interesting) by zocalo on Tuesday May 09, @03:29PM

      by zocalo (302) on Tuesday May 09, @03:29PM (#1305536)
      That was my thought too. IP was designed in the early '70s and the WWW didn't come along until the early '90s (OK, there was Gopher, etc.), so while it's technically legit "with hindsight", I think it's also fair to say that there was no way you could have foreseen the WWW and the mass-consumerisation of the Internet that resulted, in order to have done things much differently.

      Security? 100%, that was a SNAFU. Not just Vint Cerf, of course, almost everyone involved was working from an ivory tower then; some of Jon Postel's application protocol RFCs would quite rightly be ridiculed as insanely naive if they were proposed today. I first got directly on the Internet proper without CompuServe or BBS gateways etc., in the late-80s (about 5 years before the WWW for the young 'uns), and we already had a lot of problems with some of the protocols being so fundamentally insecure, from application layer all the way down, and especially so if you could drop arbitrary packets onto the wire or manipulate them in transit (with the tools to do just that usually installed by default on many *NIX servers). It was trivially easy, and almost a rite of passage for some, to grab admin/root passwords in-clear from the wire with packet capture tools or extract them from system memory across a whole bunch of application protocols and go from there; hell, SSH wasn't even a thing until 1995, two years after the September that never ended, FFS!

      Address space, I think, falls between the two. Yes, with hindsight, it could have been much better to have had another two or four bytes, possibly bringing in some routing or geographical coding, but this was at a time when computer memory *and* storage capacity was often measured in KB, and the Intel 4004 was state of the art, so it would have been a big ask to use 6- or 8-octet addresses, unless some of those octets were only needed for the actual internetworking and could be omitted from local network segments. Realising at the time that it would possibly be an issue, though? Even with the wasted address space from classful allocations, especially those 126 Class As, it might have been a stretch to extrapolate that the number of IP-enabled devices was going to exceed the practical amount of available and usable space. Not impossible though; every college/university globally having at least one computer lab full of IP-enabled terminals, as well as a lot of larger business/industry users replacing typewriters with some kind of IP-enabled terminal, wasn't beyond the realms of possibility for the eventual usage of IP, and once "home computers" really started to become a thing, the writing was all over the wall.
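
      As a back-of-envelope check on that extrapolation: only the 2^32 total and the 126 Class A networks come from the comment above; the per-institution device counts below are invented purely for illustration.

```python
# Back-of-envelope numbers for the paragraph above. Only the 2^32 total and
# the 126 Class A networks come from the comment; the device estimates are
# invented for illustration.

total_v4 = 2 ** 32                       # ~4.29 billion addresses
class_a_space = 126 * 2 ** 24            # those 126 /8 allocations
print(f"Class A share of IPv4 space: {class_a_space / total_v4:.0%}")   # ~49%

# Hypothetical early-80s extrapolation: one lab per university, plus offices
# replacing typewriters with IP-enabled terminals.
universities = 25_000
terminals_per_lab = 30
office_terminals = 50_000_000
devices = universities * terminals_per_lab + office_terminals
print(f"Estimated devices: {devices:,} ({devices / total_v4:.1%} of 2^32)")
```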

      For me, I think the biggest "with hindsight" fix I'd like to see is probably to have had a larger address space, ideally bringing in some more geographical structure with it to assist with high-level routing, attribution/ownership, and traffic filtering decisions, even if it would have been fairly wasteful of the overall address space. Something like the first 4 bytes encoding the RIR (roughly "continent"), the LIR/country, and the end-customer ID, working akin to a BGP AS, and possibly even optional for local network traffic depending on an IP flag perhaps? IMHO, gutting WHOIS for privacy should never have been allowed, so much better if it had been essentially baked into the network stack from the get-go. While not great, a lot of the security issues can be pushed up the network stack for a solution (HTTPS rather than HTTP, SSH rather than Telnet) or retrofitted in a similar manner to how we actually did it (VPNs, and other end-to-end link encryption); not ideal, but it works. The IPv4 vs. CGNAT vs. IPv6 adoption issues, on the other hand, are going to be causing pain for a long time to come yet.
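
      A toy encoding of that "geography in the prefix" idea, just to make it concrete. The field widths (1-byte RIR, 1-byte LIR/country, 2-byte customer, 4-byte host) are guesses for illustration, not anything actually proposed here or in any RFC.

```python
# Toy encoding of a "geography in the prefix" address. Field widths are
# assumptions for illustration only.
import struct

def pack_addr(rir: int, lir: int, customer: int, host: int) -> bytes:
    # >BBHI = big-endian: 1-byte RIR, 1-byte LIR/country, 2-byte customer,
    # 4-byte host part; 8 bytes total.
    return struct.pack(">BBHI", rir, lir, customer, host)

def unpack_addr(addr: bytes) -> dict:
    rir, lir, customer, host = struct.unpack(">BBHI", addr)
    return {"rir": rir, "lir": lir, "customer": customer, "host": host}

addr = pack_addr(rir=2, lir=44, customer=1234, host=0x0A000001)
print(addr.hex())          # routable, attributable prefix sits in the top 4 bytes
print(unpack_addr(addr))   # a border router could filter on rir/lir alone
```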
      --
      UNIX? They're not even circumcised! Savages!
    • (Score: 1) by steveg on Wednesday May 10, @11:41PM

      by steveg (778) on Wednesday May 10, @11:41PM (#1305807)

      He and Robert Kahn designed the protocol. That makes him co-creator.

  • (Score: 4, Insightful) by DannyB on Tuesday May 09, @02:20PM (2 children)

    by DannyB (5839) Subscriber Badge on Tuesday May 09, @02:20PM (#1305519) Journal

    Why is the future more difficult to see than hindsight?

    I suppose that is worth filing a bug report to have that fixed.

    At the time, "32 bits ought to be enough for anybody" seemed reasonable. Back then, slow, low-capacity computers cost many thousands of dollars. Today, computers faster than the supercomputers of that era, with enormous capacity even in RAM, never mind storage, are cheap enough that everyone carries one in their pocket. Some of us wear one on our wrist (1 GB RAM, 8 GB storage, multiple cores). What was the going price of a Raspberry Pi before Covid?

    I've mentioned this before. When my college roommate and I graduated in '82, we considered ourselves informed, and we were, and we expected computers to get way bigger, faster, more compact and cheaper. We really did. But recently, more than 40 years later, we both recognized that we never imagined just HOW MUCH bigger, faster, etc. computers would get. Now a computer from back then is a microcontroller part that probably costs less than $5, or a fully assembled board for a few tens of dollars.

    Then this brings me to GUIs. It would have been difficult to imagine today's computers until I saw the Lisa, then Macintosh. At that point, I recognized this would be the future. Ironically, some computer pundits in popular trade rags and magazines of the day didn't believe GUIs would go anywhere.

    My point: due to whatever bug, the future is harder to see than the past.

    --
    How often should I have my memory checked? I used to know but...
    • (Score: 4, Insightful) by NotSanguine on Tuesday May 09, @04:42PM

      Why is the future more difficult to see than hindsight?

      The Second Law of Thermodynamics? [wikipedia.org]

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 10, @05:29AM

      by Anonymous Coward on Wednesday May 10, @05:29AM (#1305658)

      But a lot of stuff is still slow due to human issues. Plenty of "usability timings" and delays are on the order of seconds or hundreds of milliseconds. Also, when you optimize stuff, even though it's still not the fastest possible, it can often be considered good enough as long as it's fast enough on human time scales. You may even need to start adding delays...
      https://www.fastcompany.com/3061519/the-ux-secret-that-will-ruin-apps-for-you [fastcompany.com]

      Wells Fargo admitted to slowing down its app’s retinal scanners, because customers didn’t realize they worked otherwise.

      https://90percentofeverything.com/2010/12/16/adding-delays-to-increase-perceived-value-does-it-work/index.html [90percentofeverything.com]

      “Coinstar is a great example of this. The machine is able to calculate the total change deposited almost instantly. Yet, during testing the company learned that consumers did not trust the machines. Customers thought it was impossible for a machine to count change accurately at such a high rate. Faced with the issues of trust and preconceived expectations of necessary effort, the company began to rework the user experience. The solution was fairly simple. The machine still counted at the same pace but displayed the results at a significantly slower rate.”
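
      A minimal sketch of that "minimum perceived duration" trick; the 2-second figure and the function name are arbitrary choices for illustration, not taken from either article.

```python
# Minimal sketch of artificially slowing down the *display* of a result,
# as in the Coinstar/Wells Fargo anecdotes; timings are invented.
import time

def with_minimum_duration(work, min_seconds: float = 2.0):
    """Run `work()` as fast as possible, but don't reveal the result until
    at least `min_seconds` have elapsed, so the task *feels* substantial."""
    start = time.monotonic()
    result = work()                      # e.g. counting the coins, instantly
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        time.sleep(min_seconds - elapsed)
    return result

if __name__ == "__main__":
    total = with_minimum_duration(lambda: sum(range(1_000_000)))
    print(total)
```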

  • (Score: 5, Insightful) by SomeRandomGeek on Tuesday May 09, @04:04PM (2 children)

    by SomeRandomGeek (856) on Tuesday May 09, @04:04PM (#1305540)

    I don't see these things as mistakes. I see them as decisions that were exactly right in the moment that nevertheless had long term consequences.

    1) "I thought 32 bits ought to be enough for Internet addresses."

    Larger IP addresses would have been an anchor preventing adoption. People still refer to devices by their IPv4 address, but no one ever refers to anything by its IPv6 address. Why? Because an IPv6 address is too long for a human to work with. Sure, twenty years later they hit an address space crunch, but you don't succeed in tech by solving the problems of twenty years from now.

    2) "I didn't pay enough attention to security."

    It would be nice to have something better than domain registrars and certificate authorities for establishing identity. But that is a hard problem to solve. There were other problems that needed to be solved first.

    3) "I didn't really appreciate the implications of the World Wide Web."

    No one did, or even does now. When you are inventing something that changes the world, you don't get to fully appreciate the implications.

    • (Score: 2) by bradley13 on Tuesday May 09, @04:53PM

      by bradley13 (3053) Subscriber Badge on Tuesday May 09, @04:53PM (#1305551) Homepage Journal

      This. His original work was outstanding, and farsighted, for the context he was working with. We are still using much of his work, virtually unchanged.

      The biggest problem was, and still is, ipv6. Not making it backward compatible is *still* delaying adoption, 25 years after it was specified.

      --
      Everyone is somebody else's weirdo.
    • (Score: 0) by Anonymous Coward on Wednesday May 10, @05:34AM

      by Anonymous Coward on Wednesday May 10, @05:34AM (#1305659)
      Yeah, and if you designed the addressing stuff to be extensible, it might have been harder for the routers to route packets fast enough. They'd probably do stuff like use the ASIC stuff when it's a 32-bit address and use the slow CPU for the longer addresses, but I guess the router manufacturers would have been quite happy with that.

      There'd probably be even more bugs and security exploits too.
  • (Score: 2) by eravnrekaree on Tuesday May 09, @05:09PM (2 children)

    by eravnrekaree (555) on Tuesday May 09, @05:09PM (#1305557)

    The only valid point I see is the one about 32 bits. Security is best handled at the level above TCP, as it is with TLS. Security is too much of a moving target anyway and doesn't belong at the TCP level, where it's better to have more stability. I don't know what he is talking about regarding the World Wide Web, which has been a boon for standardization and open protocols, and which serves needs that, again, are evolving and that you wouldn't address with TCP/IP.
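
    For what it's worth, this is roughly what "security above TCP" looks like in practice: a plain TCP connection wrapped in TLS by the layer above, with the transport itself untouched. Standard library only; example.org is just a placeholder host.

```python
# Illustration of layering TLS above an ordinary TCP socket.
import socket
import ssl

context = ssl.create_default_context()           # certificate validation on

with socket.create_connection(("example.org", 443)) as tcp_sock:
    # TCP below is oblivious; TLS is negotiated entirely above it.
    with context.wrap_socket(tcp_sock, server_hostname="example.org") as tls_sock:
        print(tls_sock.version())                # e.g. 'TLSv1.3'
        tls_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```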

  • (Score: 2) by SomeRandomGeek on Tuesday May 09, @09:39PM

    by SomeRandomGeek (856) on Tuesday May 09, @09:39PM (#1305610)

    If I had it to do over, the original internet would include UDP, TCP, and SCTP. Or, if that is one too many, just UDP and SCTP. UDP is a message-oriented unreliable protocol, TCP is a stream-oriented reliable protocol, and SCTP is a message-oriented reliable protocol. It turns out that almost every internet application wants message-oriented reliable transport. And those applications all use TCP because it just has better support (especially firewall support) than SCTP.
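
    In practice that means every application reinvents a framing layer on top of TCP to get back the message boundaries SCTP would have provided natively. A sketch of the usual length-prefix approach (illustrative only, not any particular real protocol):

```python
# Length-prefixed message framing over a TCP stream: the boilerplate that
# message-oriented applications end up writing because they use TCP.
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length prefix, then the message body.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```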

  • (Score: 2, Interesting) by pTamok on Tuesday May 09, @10:31PM (1 child)

    by pTamok (3042) on Tuesday May 09, @10:31PM (#1305618)

    Not Vint Cerf's fault, but the address space is still wrong.

    There are good arguments for using a 128-bit address space rather than 32-bit. However, it brings a problem for small, low-powered edge devices (like sensors on '1-wire' networks) with restricted bandwidth. Spending 128 bits of every packet on addressing wastes a lot of bits, which is significant. A lot of things could get by with 8-bit local addressing, or even less.

    What should have been done is to divorce routing from addressing: instead of having just three scopes (global, 'organisation local', and link-local), the number of bits needed for the (local) network address should be variable, subject to negotiation.

    So a small sensor network might need only 5 bits. Each sensor can have a label that is the full 128 bits (and there are arguments that 128 bits is too small), but sending data to and from the local router and other devices on the same network requires only a 5-bit address field. The router then takes on the responsibility of advertising the availability of the device labels to the rest of the world, and NATs between the 5-bit address and however many bits are used on its upstream communication to other routers on each interface.

    It's a little more complicated than simply using 128 bits everywhere, but it allows very flexible routing with the minimum necessary address field overhead per packet, which is a big win for the myriads of small, low-power networks. Packet address headers need only be big enough to distinguish all the devices on the local network, while all devices have a 'full-fat' label that can give global 'availability' if necessary, mediated by the upstream routers. Yes, this is against the Internet philosophy of having all the network intelligence in the edge devices, but when you look at what enterprise routers do these days, that ideal has long since exploded.
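
    A toy version of the router bookkeeping that scheme implies: 5-bit addresses on the sensor segment, full 128-bit labels advertised upstream. Entirely hypothetical (class and method names included); nothing like this negotiation exists in IPv6 as deployed.

```python
# Hypothetical mapping between short negotiated local addresses and full
# 128-bit device labels, kept by the edge router.
import secrets

class EdgeRouter:
    LOCAL_BITS = 5                      # negotiated width for this tiny segment

    def __init__(self):
        self.label_by_local: dict[int, int] = {}   # 5-bit id -> 128-bit label

    def attach(self, label_128: int) -> int:
        """Assign the next free short address and remember the mapping."""
        for local in range(2 ** self.LOCAL_BITS):
            if local not in self.label_by_local:
                self.label_by_local[local] = label_128
                return local
        raise RuntimeError("segment full: renegotiate a wider local field")

    def to_global(self, local: int) -> int:
        """Rewrite a local source address into the full label for upstream."""
        return self.label_by_local[local]

router = EdgeRouter()
sensor_label = secrets.randbits(128)     # the device's 'full-fat' identity
short = router.attach(sensor_label)
print(short, hex(router.to_global(short)))
```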

    There was a similar issue with ATM cell overhead. When the standards were being set, there was a big argument over whether the cell payload size should be 32 bytes or 64 bytes, so the eventual compromise was 48 bytes. That means individual ATM cells can't carry useful IP packets - basically there is a multiplexer involved - but ATM streams needed to be directed to the correct sources and destinations, so you get local identifiers which depend on upper layers for the network addressing and routing. (The scheme uses Virtual Path Identifiers and Virtual Circuit Identifiers (VPI/VCI) - see the description of how this worked, with VPIs and VCIs being rewritten as cells traversed ATM switches [wikipedia.org].)
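
    The rough arithmetic behind that "multiplexer involved" remark, using the standard AAL5 numbers (48-byte cell payload, 53-byte cell, 8-byte trailer); a back-of-envelope sketch rather than a protocol model.

```python
# How AAL5 shreds an IP packet into 48-byte-payload / 53-byte ATM cells.
import math

CELL_PAYLOAD = 48   # bytes of payload per ATM cell
CELL_TOTAL = 53     # 5-byte cell header + 48-byte payload
AAL5_TRAILER = 8    # AAL5 CPCS trailer; the PDU is padded to a multiple of 48

def cells_for_packet(ip_bytes: int) -> tuple[int, float]:
    cells = math.ceil((ip_bytes + AAL5_TRAILER) / CELL_PAYLOAD)
    efficiency = ip_bytes / (cells * CELL_TOTAL)
    return cells, efficiency

for size in (40, 576, 1500):            # TCP ACK, classic minimum, Ethernet-ish
    cells, eff = cells_for_packet(size)
    print(f"{size:>5}-byte packet -> {cells:>3} cells, {eff:.0%} efficiency")
```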

    Devices on local networks don't need a 128-bit address space to talk to other local devices. Fundamentally, we (mostly) use MAC addresses, which are 'only' 48-bit, and the failure of imagination with IPv6 is forcing every packet to use 128-bit address labels. It's not necessary.

    OK. You can breathe. Rant over.

    • (Score: 3, Insightful) by Mojibake Tengu on Wednesday May 10, @06:14AM

      by Mojibake Tengu (8598) on Wednesday May 10, @06:14AM (#1305660) Journal

      Both sides of this conflict, the NAT Fanatics and the Total Addressability Fanatics, have one common intuition: IP network addressing does not scale very well.

      We had some better network models in the past, but they are all lost to time. Many of them have even been removed from the Linux kernel.
      Today's IoT contraptions could have made good use of some of those ancient network protocols, better than they can ever do with the Internet Protocol now.

      We live in IP Totality today. I do not think it is sustainable in the long term.

      --
      The edge of 太玄 cannot be defined, for it is beyond every aspect of design