Journal of cafebabe (894)

Wednesday July 05 2017, @02:52AM
Software

(This is the fifth of many promised articles which explain an idea in isolation. It is hoped that ideas may be adapted, linked together and implemented.)

What are the properties usually associated with Internet Protocol 6? Multi-cast? A huge address space? A cleaner implementation with more packet throughput? Whatever.

Multi-Cast

Internet Protocol 4 addresses from 224.0.0.0 to 239.255.255.255 are reserved for multi-cast, as defined in RFC1112. Unfortunately, multi-cast is incompatible with TCP, so that's 2^28 Internet Protocol 4 addresses and 2^120 Internet Protocol 6 addresses which don't work with YouTube or QuickTime.
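
As an aside, a minimal receiver sketch (Python) shows why this address space is tied to UDP: group membership is joined on a datagram socket and there is no TCP equivalent. The group 239.1.1.1 and port 5004 are arbitrary placeholders.

    # Minimal multi-cast receiver sketch. Group and port are placeholders.
    import socket
    import struct

    GROUP = "239.1.1.1"
    PORT = 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # multi-cast is UDP only
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # ip_mreq: the group address followed by the local interface address
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(2048)
        print(len(data), "bytes from", sender)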

Address Space

Internet Protocol 6 has 128 bit addresses. That's more addresses than there are stars in the observable universe. However, there are edge cases where that's insufficient. Internet Protocol 4 has 32 bit addresses (by default) and that was considered vast when it was devised. That was especially true when the total human population was less than 2^32 people. Superficially, it was possible to give every person a network address.

Several address extension schemes have been devised. The best is RFC1365, which uses option blocks to extend the source and destination fields in a backwards compatible manner. So, what size is an Internet Protocol 4 address? 32 or more bits, as defined by RFC1365.

Header Size

Internet Protocol 4 is often described as having a 20 byte (or larger) header while Internet Protocol 6 is often described as having a header which is exactly 40 bytes. This is misleading. IPv6 has extension headers which play much the same role as IPv4's option blocks, so both protocols have variable length headers. The difference is that IPv6 headers are usually 20 bytes larger.
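
A rough sketch (Python, assuming raw packet bytes from a capture are already in hand) of how header length is actually determined for each protocol:

    def ipv4_header_len(packet: bytes) -> int:
        # IHL is the low nibble of the first byte, in 32-bit words:
        # 20 bytes minimum plus any option blocks.
        return (packet[0] & 0x0F) * 4

    def ipv6_header_len(packet: bytes) -> int:
        # The base header is a fixed 40 bytes; extension headers are chained
        # via the Next Header field. Only the extension headers with the
        # generic layout (hop-by-hop, routing, destination options) are
        # walked here; others (e.g. fragment) are left out of this sketch.
        offset = 40
        next_header = packet[6]
        while next_header in (0, 43, 60):
            next_header = packet[offset]
            offset += (packet[offset + 1] + 1) * 8  # Hdr Ext Len is in 8-byte units
        return offset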

Packet Size

IPv4 typically supports a PMTU of 4KB or more. Admittedly, there are no guarantees, but Ethernet without packet fragmentation provides about 1500 bytes and, with PPPoA or PPPoE over AAL5 over ATM, 9KB payloads only fragment over the last hop. This is ideal for video delivery. IPv6 only guarantees 1280 bytes. How common is this? Numerous variants of micro-controller networking only support 1280 byte buffers. This is especially true for IPv6 over IEEE802.15.4 (6LoWPAN) implementations. This is especially bad for video.
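
Some back-of-the-envelope arithmetic (assuming UDP with no IP options or extension headers) shows what these MTUs mean for per-packet payload:

    # Per-packet payload under different MTUs, UDP with minimal IP headers.
    UDP_HEADER = 8
    IPV4_HEADER = 20   # minimum, no options
    IPV6_HEADER = 40   # base header only

    for label, mtu, ip_hdr in [
        ("IPv6 guaranteed minimum", 1280, IPV6_HEADER),
        ("Ethernet over IPv4",      1500, IPV4_HEADER),
        ("Jumbo frame over IPv4",   9000, IPV4_HEADER),
    ]:
        print(label, ":", mtu - ip_hdr - UDP_HEADER, "bytes of payload")
        # 1232, 1472 and 8972 bytes respectively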

Packet Fragmentation

IPv6 routers perform no packet fragmentation. IPv6 packets which exceed the next-hop MTU are simply dropped (at best, the sender gets an ICMPv6 "Packet Too Big" message back).

Packet Throughput

Compared to IPv4, IPv6 generally has longer headers, longer addresses and shorter payloads. On this basis, how would you expect packet throughput of IPv6 to match or exceed IPv4?

Summary

The introduction of IPv6 provides no particular benefit to end-users. IPv6 reduces usable payload size and this is particularly detrimental to video delivery.

  • (Score: 0) by Anonymous Coward on Wednesday July 05 2017, @05:35PM (7 children)

    by Anonymous Coward on Wednesday July 05 2017, @05:35PM (#535288)

    Your points about packet size and throughput are only half right when the standard is fully implemented. IPv4 allows packets up to 64KB-1B, and routers along the way may need to fragment them to fit smaller links. However, many routers won't fragment every packet size and will start dropping packets, or sending back ICMP, once they get arbitrarily large. Additionally, there are efficiency reasons why routers commonly limit the IP layer's MTU to the maximum frame size minus the applicable frame headers, so PMTUD is still somewhat necessary for IPv4.

    With IPv6, they set a reasonable minimum at 1280, which is higher than IPv4's BTW. They just made the expectation of PMTUD more explicit and simplified the stack for ASIC routers, as they don't need to worry about fragmenting. Plus, they increased the maximum size to 4GB-1B, so if your link supports it, you can transfer almost 4GB in a single packet, can't get much less overhead than that!

    • (Score: 2) by hendrikboom on Wednesday July 05 2017, @05:55PM (6 children)

      by hendrikboom (1125) on Wednesday July 05 2017, @05:55PM (#535304) Homepage Journal

      But is there an efficient way to determine the maximum packet size on a connection?

      • (Score: 0) by Anonymous Coward on Wednesday July 05 2017, @09:59PM (5 children)

        by Anonymous Coward on Wednesday July 05 2017, @09:59PM (#535429)

        It has been standardized in RFC 1981, and it is roughly the same method that IPv4 uses when a router on the way doesn't allow fragmentation. You send packets that are larger and larger until you receive an ICMPv6 "Packet Too Big" (type 2, code 0) message. The message will contain the MTU necessary to satisfy the next hop. This makes it possible to receive multiple such messages as you move farther down the path or if the path changes. However, to prevent abuse, the value can never be reduced below the 1280 minimum size at the IPv6 level. In response to too many lost packets, the stack may also choose to reduce the MTU, in case there are silent drops.

        It is worth noting that this is much more efficient than the algorithm used in IPv4. In that case, the discovery is done by sending larger and larger packets with the "Don't Fragment" bit set. If the packet is too large, the router responds with a "Fragmentation Needed" (type 3, code 4) message. This does not contain a suggested size, so the stack is left taking guesses until it stops receiving such messages (although there is a list of select sizes to try based on actual frame sizes used at the data link layer). Additionally, there technically isn't a method to signal such a limit on the path unless the end points set the DF bit, so it is not uncommon for ISPs to just drop packets they don't like, which leads some stacks to not even try values above 1500.
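
        For what it's worth, on Linux an application can read back whatever path MTU the kernel has discovered or cached for a destination. A minimal, Linux-specific sketch; the socket option constants are copied from the Linux headers and 192.0.2.1 is just a documentation placeholder address:

            import socket

            IP_MTU_DISCOVER = 10   # from <linux/in.h>
            IP_PMTUDISC_DO = 2     # always set DF, never fragment locally
            IP_MTU = 14            # read the cached path MTU for this socket's route

            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
            sock.connect(("192.0.2.1", 9))   # pin a destination so a route is chosen
            print("path MTU:", sock.getsockopt(socket.IPPROTO_IP, IP_MTU))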

        Of course, the whole reason why we care about getting the MTU correct is that the most efficient transmission happens at the MTU for various reasons. It decreases the cost of hardware, prevents many buffer problems and prevents silent drops, among other benefits. IPv6 decided to fix the real problem, rather than pretend that packet fragmentation fixes it.

        • (Score: 2) by cafebabe on Thursday July 06 2017, @12:37AM (4 children)

          by cafebabe (894) on Thursday July 06 2017, @12:37AM (#535474) Journal

          I encountered an ISP which, by default, allowed 8KB TCP and 4KB UDP packets and would vary these limits for each client if specifically asked. This is annoying for a variety of reasons, including that neither value is in the list of standard PMTU sizes given in RFC1191.

          Regarding ICMP, it is not typically available to POSIX applications and therefore ICMP is mostly of use within kernel implementations. In practice, this limits PMTU discovery to TCP rather than UDP. Even if PMTU discovery were widely available to UDP applications, it would not change the issue that application data may exceed a single packet's payload. This is particularly problematic for DNS over IPv6 where TCP fallback is not implemented.

          It would be preferable for the kernel to continue to provide UDP packet fragmentation. If an application requires complete control over packet format, packet fragmentation may have to be implemented in userspace. Otherwise, tortuous workarounds may be required to stay within the limits.
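
          A sketch of this kind of userspace fragmentation, with a purely illustrative chunk header (message id, fragment index, fragment count); it is not any existing stack, just an outline:

              import struct

              HEADER = struct.Struct("!IHH")   # message id, fragment index, fragment count

              def fragment(message_id, payload, mtu=1280, ip_udp_overhead=48):
                  # ip_udp_overhead assumes IPv6 (40) + UDP (8) headers
                  chunk = mtu - ip_udp_overhead - HEADER.size
                  pieces = [payload[i:i + chunk] for i in range(0, len(payload), chunk)] or [b""]
                  return [HEADER.pack(message_id, idx, len(pieces)) + piece
                          for idx, piece in enumerate(pieces)]

              def reassemble(fragments):
                  if not fragments:
                      raise ValueError("no fragments")
                  parts, count = {}, 0
                  for frag in fragments:
                      _message_id, idx, count = HEADER.unpack_from(frag)
                      parts[idx] = frag[HEADER.size:]
                  if len(parts) != count:
                      raise ValueError("missing fragments")
                  return b"".join(parts[i] for i in range(count))

              # e.g. fragment(1, bytes(8192)) yields seven datagrams at the IPv6 minimum MTU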

          • (Score: 0) by Anonymous Coward on Thursday July 06 2017, @04:50AM (3 children)

            by Anonymous Coward on Thursday July 06 2017, @04:50AM (#535554)

            I guess I don't understand your problem. You always run the risk of UDP packets getting lost, a risk which grows the bigger they are. That is why many implementations historically limited the size to the IPv4 minimum. Unless you are inventing your own stack from a raw socket, the kernel will handle the fragmentation for you and it does, indeed, cache the discovered MTU for recent destinations for any IP connection it handles for you (regardless of the transport layer used) if you ask it to do so (and most transports do, by default). And if you do want to cut the kernel out as much as possible and do your own thing, your app can still listen on an ICMP socket to get any messages pertinent to it, so it can implement its own PMTUD algorithm and cache.

            Really, the only way to avoid any problem when you are worried about non-compliance with standards and random unsignaled drops is to keep your packet limit at the minimum for whatever protocol; but then, IPv6 is better than v4 because its minimum is over twice as large.

            • (Score: 2) by cafebabe on Thursday July 06 2017, @11:22PM (2 children)

              by cafebabe (894) on Thursday July 06 2017, @11:22PM (#535927) Journal

              Nothing is special about a TCP packet, a UDP packet or any other packet. Any packet can be dropped. In practice, the vast majority of packet loss occurs at the last hop. This is due to wireless links or flaky consumer hardware. If an 8KB chunk is requested from a UDP server, the response is likely to be sent within a jumbo frame over Internet backbone links, over multiple ATM cells and then over Qualcomm's wireless burst mode extension to a client. At the last hop, this incurs fragmentation into six IPv4 UDP packets over wireless Ethernet. However, they are likely to be sent sequentially before invoking radio link turn-around. (Or re-sent locally from a stateful wireless bridge.)

              Packet loss after fragmentation would appear to be precarious but, in practice, fragment delivery is highly correlated. Unfortunately, delivery of 8KB payloads only works over IPv4. This is reduced to 1KB over IPv6 due to the lack of packet fragmentation. But, heh, IPv6 maintains a fairly optimal PMTU over highly stateful, mono-cast, multi-path TCP, so who cares?

              • (Score: 0) by Anonymous Coward on Friday July 07 2017, @04:26AM (1 child)

                by Anonymous Coward on Friday July 07 2017, @04:26AM (#536000)

                I didn't say there was anything special about TCP or different types of packet. I was saying that the nanosecond you have to fragment a packet, you get the problems associated with it on top of those caused by splitting data into smaller chunks. I guess what I'm really saying is that I don't see your reasons for seemingly wanting to reinvent the wheel in user space, given that the problems you see either don't seem to exist or apply to IPv4 as well.

                I'm really starting to believe that you know enough to be dangerous, but not enough to recognize the danger. Packet fragmentation is not the panacea you seem to think it is. Not the least of which is that you have to stop thinking of it as "I'm delivering 8KB payloads instead of 1KB payloads." You are delivering payloads the size of the MTU, period. You're just shifting the burden of doing that split from the more powerful hardware (in terms of what it can dedicate to your data) at the endpoints to somewhere in the middle and, on top of that, depending on your assumptions about the non-standard behavior of the last-mile router and about how the LLC/MAC handles frames on your imagined last mile (which may be a completely different device from the network layer).

                • (Score: 2) by cafebabe on Saturday July 08 2017, @01:06AM

                  by cafebabe (894) on Saturday July 08 2017, @01:06AM (#536343) Journal

                  I'm lamenting an era where an MTU of 9000 bytes was becoming common and now we're into an era where an MTU of 1280 bytes is becoming common. That greatly restricts the effectiveness of quadtree video entropy encoding and it is for this reason that it is worthwhile to look beyond hard limits of the MTU. So, rather than implementing directly addressable tiles of 64×64 pixels (with minimal effective compression), it may be preferable to implement 256×256 pixel tiles (with more effective compression) and then have clients set request priority for three or more supplemental payloads.
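
                  Illustrative arithmetic only, with hypothetical compressed tile sizes (not taken from any real codec), showing how the usable MTU drives the packet count per tile:

                      def packets_per_tile(compressed_bytes, mtu, overhead=48):
                          usable = mtu - overhead                 # IPv6 + UDP headers assumed
                          return -(-compressed_bytes // usable)   # ceiling division

                      for tile, size in [("64x64 tile", 3000), ("256x256 tile", 24000)]:
                          print(tile, packets_per_tile(size, 9000), "packets at MTU 9000,",
                                packets_per_tile(size, 1280), "packets at MTU 1280")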

                  It would appear that I'm a dangerous fool but I've worked through a packet-cell encapsulation layer which is more efficient than AAL5 [wikipedia.org], more robust and of more general use. For low throughput applications, this opens the possibility of bit-banging one or more 8 bit micro-controllers which may operate in one of a variety of modes including DAC, I/O expander or windowing client. An approximation would be something akin to VNC to Contiki running on a Commodore 64 but with less overhead. In particular, use of cells allows packets to be routed via nodes which have less RAM than the packet MTU.

  • (Score: 0) by Anonymous Coward on Wednesday July 05 2017, @09:57PM (3 children)

    by Anonymous Coward on Wednesday July 05 2017, @09:57PM (#535425)

    It feels to me as if you're reaching for the description of a new protocol, or protocol family, for membership, naming, routing and encapsulation.

    If that's your idea, say the word because this is something I've studied in detail over the years.

    • (Score: 2) by cafebabe on Thursday July 06 2017, @01:07AM (2 children)

      by cafebabe (894) on Thursday July 06 2017, @01:07AM (#535485) Journal

      I'm working towards a URL-space which can be used for multi-cast streaming audio and video, desktop window remoting, industrial sensing and automation, distributed databases (of which Project Xanadu could be one form) and other applications. Unfortunately, some address-space and packet fragmentation stuff has been brought into application userspace to ensure consistent operation over different network protocols.

      I hope that I don't over-simplify routing or suchlike. Your expertise in this matter is greatly appreciated.

      • (Score: 0) by Anonymous Coward on Thursday July 06 2017, @05:11AM (1 child)

        by Anonymous Coward on Thursday July 06 2017, @05:11AM (#535566)

        Short version: you can't have a relatively universal namespace without relatively universal naming. Unfortunately, the moment you have a useful guarantee of naming consistency and uniqueness, you have a venue for organised attack by authorities. You can have an uncoordinated, or haphazardly coordinated, naming system, but at the cost of having to contend with overlaps. Given that attack by authorities is a near-certainty, as you alluded to (somewhat obliquely), you're left with something either no better than the current structure, or contending with a new naming infrastructure.

        Let's stipulate, for the sake of argument, that you're interested in resiliency to attack by authorities who dislike your network activities.

        The corollary is that you need an underlying system that is similarly flexible, so that you can't have authorities attack the underlying layer, thereby similarly invalidating your flexibility.

        Cutting a long story short for the purpose of getting to the point: once you follow all the turtles down, you need an ad hoc, topology insensitive networking system, including routing, encapsulation, and naming, that allows for bypassing and disintermediating any and every transport and service that makes up your network.

        This is a large part of the reason that IPv6 was the wrong solution to the wrong problem. Something like automated UUCP would have been a hell of a lot closer to useful.

        That's the general outline, anyhow.

        • (Score: 2) by cafebabe on Saturday July 08 2017, @01:27AM

          by cafebabe (894) on Saturday July 08 2017, @01:27AM (#536352) Journal

          I considered a structure which began as a tree in one trust domain but the implementation encouraged a directed graph of shortcuts. That was intended as a compromise between universal names and the ability to route around oppression.

          I considered a hybrid URL, like TOR hidden services, where the major part is unique within a federated namespace and the minor part is a path within one trust domain. Advertising the names may lead to something akin to a block-chain. However, this requires two tiers of trap-door functions and is vulnerable to practical implementation flaws and hypothetical quantum attack. Skipping all of the trap-door functions leads to something akin to BGP. Specifically, an automated mechanism to broadcast the shortcuts I envisioned. This also relies on autonomous system administrators trusting each other's trust (and competence).

          If you have an advance on this scenario, it is greatly appreciated.

  • (Score: 0) by Anonymous Coward on Thursday July 06 2017, @05:40AM

    by Anonymous Coward on Thursday July 06 2017, @05:40AM (#535580)

    NAT is unnecessary with IPv6 since there are enough public addresses to address every device directly. It is nice to be able to transmit a packet from one machine to another across the public IPv6 internet and not have the packet header mangled along the way. I use wireshark and tcpdump frequently and I greatly appreciate seeing a packet leave a client and arrive at a server completely unmolested and still holding the identical checksum it had when it was transmitted.

    I am quite familiar with NAT; I have written code to punch holes in NAT, and it works very well, but I still hate needing to use it.
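
    For anyone who hasn't seen the trick, here is a bare-bones sketch of UDP hole punching. It is not the parent's code; it assumes both peers have already learned each other's public address and port via a rendezvous server, and both run the same loop with the roles swapped. The addresses and ports are placeholders.

        import socket
        import time

        LOCAL_PORT = 40000                    # hypothetical local port
        PEER = ("203.0.113.7", 40001)         # documentation address as a placeholder

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", LOCAL_PORT))
        sock.settimeout(1.0)

        # Outbound datagrams create the NAT mapping ("punch the hole"); keep
        # retrying until something arrives back from the peer through it.
        for _ in range(10):
            sock.sendto(b"punch", PEER)
            try:
                data, addr = sock.recvfrom(1500)
                print("hole punched, got", data, "from", addr)
                break
            except socket.timeout:
                time.sleep(0.5)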
