Named Data Networking (NDN) is a new approach to network traffic that focuses on the content, not the endpoints.
TCP/IP is focused on the endpoints: data comes from somewhere and goes to somewhere. It's built on a COMMUNICATIONS model, with multiple individual end-to-end connections even for a single web page.
NDN is all about the content, without much concern for where it comes from, where it is going, or how it gets there. It's built on a DISTRIBUTION model; think streaming (anything).
The Named Data Networking (NDN) project aims to develop a new Internet architecture that can capitalize on strengths — and address weaknesses — of the Internet’s current host-based, point-to-point communication architecture in order to naturally accommodate emerging patterns of communication. By naming data instead of their locations, NDN transforms data into a first-class entity. The current Internet secures the data container. NDN secures the contents, a design choice that decouples trust in data from trust in hosts, enabling several radically scalable communication mechanisms such as automatic caching to optimize bandwidth.
All this sounds daft, until you take the time to read through the Executive Summary and the FAQ, and the Motivation and Details. There are BIG names behind this project, and many of the new commercial members have deep pockets.
Communication in NDN is driven by the receiving end. You don't go to a website to find something; you just ask for it by name, without regard for where it comes from. To receive data, a consumer sends out an Interest packet, which carries the name of the desired data. (Think directory-structured names: /movies/historical/Apollo-13.) The router remembers the interface the request came from, then forwards the Interest packet, by looking up the name in its forwarding table (populated by a name-based routing protocol), toward the sources that handle movies. The router stores all the Interests waiting for returning Data packets in a Pending Interest Table (PIT). When multiple Interests for the same data are received from downstream, only the first one is sent upstream towards the data source (flood protection); subsequent requests are appended to the first entry.
Once the Interest Packet reaches a node that has the requested data, a Data packet is sent back, which carries both the name and the content of the data (or the first portion thereof). When a Data packet arrives, the router finds the matching PIT entry and forwards the data to all the interfaces listed in that PIT entry.
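A toy Python sketch of that PIT/forwarding logic (the class, method, and field names here are invented for illustration; real NDN forwarders are far more involved):

    class Router:
        def __init__(self, fib):
            self.fib = fib      # name prefix -> upstream faces, filled by a name-based routing protocol
            self.pit = {}       # Pending Interest Table: name -> downstream faces awaiting Data
            self.store = {}     # Content Store: name -> cached content

        def on_interest(self, name, from_face):
            if name in self.store:                 # cache hit: answer locally
                from_face.send_data(name, self.store[name])
            elif name in self.pit:                 # Interest already forwarded:
                self.pit[name].add(from_face)      # aggregate it (the flood protection)
            else:
                self.pit[name] = {from_face}
                for upstream in self.lookup(name): # send toward the sources for this prefix
                    upstream.send_interest(name)

        def on_data(self, name, content):
            self.store[name] = content             # opportunistic caching on the way back
            for face in self.pit.pop(name, ()):    # fan out to every waiting downstream
                face.send_data(name, content)

        def lookup(self, name):
            # longest-prefix match on names, e.g. /movies matches /movies/historical/Apollo-13
            matches = [p for p in self.fib if name.startswith(p)]
            return self.fib[max(matches, key=len)] if matches else ()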
So far that sounds a lot like TCP/IP, but it is fundamentally different: each router only knows where something came from and where it is going next. It has no idea of the endpoints. It might receive data from one or many upstreams, and it might forward it to one or many downstreams.
And each router is expected to cache. A live event might be sent exactly once from the source, massively cached throughout the network, and delivered to a million targets without ever being transmitted more than once over any given network segment. Depending on cache size and longevity, it might still be cached only a single hop away for the next several days for late requests. Popular songs might live in router caches for months. The medium has the message. (Marshall would be proud.)
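That cache behavior is the familiar recency story. A toy LRU Content Store in Python (illustrative only; a real router would also weigh freshness, size, and policy):

    from collections import OrderedDict

    class ContentStore:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()     # oldest entries first

        def get(self, name):
            if name not in self.items:
                return None
            self.items.move_to_end(name)   # every hit renews the entry, so
            return self.items[name]        # popular content effectively never ages out

        def put(self, name, content):
            self.items[name] = content
            self.items.move_to_end(name)
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # one-off requests fall out first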
Content is signed in every packet and encrypted as desired. The only places you can reliably monitor are near the origin or near the destination, and in each case you won't know anything about the opposite end. This architecture can coexist with TCP/IP on the current internet. The project started in 2010, and it has now reached the stage where large-scale testing will start.
(Score: 5, Insightful) by buswolley on Saturday September 06 2014, @07:46PM
Without knowing anything really, I'm suspicious that this is just DRMing the internet, and pushing out the little people for the big content producers.
subicular junctures
(Score: 0) by Anonymous Coward on Saturday September 06 2014, @07:48PM
Can Soylent give me two previews before submit, please? I need it. :)
(Score: 2, Insightful) by Anonymous Coward on Saturday September 06 2014, @08:25PM
Yeah, I am suspicious too. It seems like yet another attempt to rewire the internet by pushing the smarts from the edges of the network into the fabric of the network, like we used to do with the switched telephone system, and we all know which design won that fight. I think it might have a chance of catching on only because of the way big corps such as the mega-ISPs now control significant pieces of the internet. It is in their interest to slow growth and use the leverage they already have to assert more control.
Bandwidth just keeps getting cheaper, [telegeography.com] like 30% each year, every year, for well more than a decade. I don't see all that much value in making caching integral to the net when the cost of traffic has been consistently dropping so fast. Special cases can be handled with strategically placed CDNs, the way netflix tries to do it (when the ISPs cooperate). But it would make darknet stuff like Tor into second-class services, since they can't be cached.
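(Sanity check on that figure: 30% a year compounded over a decade is 0.7^10 ≈ 0.028, i.e. unit prices roughly 35x lower than ten years ago.)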
(Score: 2, Interesting) by nsa on Saturday September 06 2014, @11:10PM
You aren't the only one. Here is how I submitted this same story to SN yesterday-
"nsa writes:
Sometimes the story is about the story, other times it's about the reaction to the story. Today's highlight comments from the progenitor blog [slashdot.org]-
/.user Eravnrekaree writes about "Mass media takeover and destruction of 'net" [slashdot.org]
which provokes this +6 doubleplusgood insightful response [slashdot.org] from /.user uCallHimDrJONES
Disclaimer: I didn't bother reading the parent article [networkworld.com] or wikipedia link [wikipedia.org], but I think with a conspiracy theory this obviously crazy, those things are more or less beside the point.
"
(Score: 0) by Anonymous Coward on Saturday September 06 2014, @11:37PM
I saw that in the queue; it may still even be there. I thought it was a terrible submission, all about posts at slashdot, and while you write as if your point is self-evident, it is far from it.
(Score: 1) by nsa on Sunday September 07 2014, @01:52AM
You are of course entitled to your opinion. It was all about /. posts because I felt that was an interesting angle fleshed out by multiple good /. comments about the bigger picture. Also, I think some of the 'tongue-in-cheek' style of my sentiment went over your head. I was in fact highlighting the pair of comments because I thought it made my non-self-evident point visible to others who might otherwise write this story off as an esoteric technical issue unrelated to the topic of corporate attempts to dominate the art publishing industry.
(Score: 0) by Anonymous Coward on Sunday September 07 2014, @03:57AM
If the reader is left wondering whether Poe's Law applies to your writing but you think it's gone over his head it means you don't know your audience.
(Score: 2) by maxwell demon on Sunday September 07 2014, @09:25AM
Indeed, it's right in the very first question in the FAQ linked from the summary (emphasis mine):
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by Tork on Monday September 08 2014, @12:13AM
🏳️🌈 Proud Ally 🏳️🌈
(Score: 5, Insightful) by subs on Saturday September 06 2014, @08:28PM
Just wait for this horseshit to come around and try and convince network admins to get it deployed. And the oh-so-sweet attack vectors it creates in broken/buggy/unpatched router software. After all, who wouldn't want to merge the control and data planes!
In all honesty though, most of the problems they're trying to address here have already been addressed in IP and without requiring super-smart super-expensive routers with shitloads of memory and intellect:
- Want data naming? Fuck, that's what DNS is for. Stop reinventing the wheel.
- Want multicast delivery? Use IP multicasting. PIM-SM is supported in any decent modern edge router, and with MSDP on your BGP nodes you can already do cross-AS multicasting without issues; all it takes is getting the peering agreement in place. IPv6 with SSM means anybody can start multicasting and be sure that their traffic reaches their subscribers and doesn't pollute non-participating links. This pretty much takes care of all live event applications (see the sketch after this list).
- Want caching? Get a CDN deal or a dedicated set of caching servers. High-bandwidth content caching is complex and requires tons of policy and resource control, and for low-bandwidth stuff it's just too much effort to be worth dealing with. This is why general widespread use of HTTP proxies got dropped as soon as Internet connections got wide enough that ISPs could stop worrying about saving every bit of bandwidth. It's just a lot easier and cheaper today to get some fatter pipes than to worry about content caching control. And the big guys a la Google and Netflix just colo a few caching servers near/in your AS, and that pretty much takes care of that.
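And the host side of joining a multicast stream really is trivial already; a minimal any-source UDP receiver in Python (the group and port here are made up; SSM would additionally pin the accepted source address):

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5000    # made-up group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # join the group on all interfaces (the kernel sends the IGMP report for us)
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, addr = sock.recvfrom(1500)
        print(f"{len(data)} bytes of the live stream from {addr}")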
In short, this is just Cisco trying to sell you a bunch of new high-powered routing hardware that can do everything and scrambled eggs.
(Score: 1) by wantkitteh on Sunday September 07 2014, @11:32AM
I was thinking along those lines myself. Also, isn't this how big content already leverages CDNs to reduce network traffic and costs to the consumer, just minus the open caching? (Caching that will work on what rules, exactly?)
(Score: 2) by edIII on Sunday September 07 2014, @05:55PM
I think people are not being entirely fair or impartial about the idea itself. It's not as bad of an idea as you think.
Understandably, people's cynicism can only see this as a way to control and distribute content. Once you start doing that, the only thing you can imagine is Sony controlling the net. *SHIVER*
This isn't the same as caching or multicast either; it's a whole other ballpark. Everything revolves around the content.
The content is signed, but that doesn't necessarily imply an identity is attached, or required. Sounds an awful lot like Freenet all of a sudden, with the ability to add anything you want since anyone can sign anything. And Freenet is not a terrible idea where Big Content gets to DRM the net, either.
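To make the "signature without identity" point concrete, a sketch using the Python cryptography package: a throwaway keypair signs content, and verification proves only integrity and "same key as before", never who holds the key:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # A throwaway key: no certificate, no name, no identity attached.
    key = Ed25519PrivateKey.generate()
    packet = b"/some-freenet-like-name/segment0" + b"...payload..."
    signature = key.sign(packet)

    # Anyone holding the public key can check integrity and continuity
    # ("signed by the same key as the last segment") without learning who signed.
    public = key.public_key()
    try:
        public.verify(signature, packet)
        print("content intact, key consistent")
    except InvalidSignature:
        print("tampered, or a different signer")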
Another interesting property is the apparent anonymous nature of the network. If signed content is the only protected content (and anyone can create protected content), then attempting to determine the identities asking for content, or creating it, is precluded unless you were at the source or destination. I didn't see anywhere that said the PIT or other records contain the actual router where the content was first seen, and possibly created.
Did you notice one thing this network creates? NO MORE LAWSUITS. You would need to find, and prove, that a specific identity created the content. ANY device possessing the content has a strong property of deniability. Just because your device has the encrypted content doesn't mean you were an accessory to anything. Something to consider, too.
I think these would be the two biggest concerns:
1) Can they filter against it reliably with DPI and analysis of encrypted data?
2) Can anyone just add content?
If you slow down and think about it, most content is the same and repeated endlessly. Soylent's servers would merely respond to requests for content. Since my routers would know that content near that node in the naming structure is most likely desired, content would automatically move towards me via predictive algorithms. With named content it's possible for me to initiate a persistent request for anything under /Soylent/Articles/TFA. Part of this network can act as a *natural* RSS reader. The comments would only be requested once I accessed TFA.
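In toy Python form, that standing request is just a loop over a name prefix (everything below is hypothetical; express_interest() stands in for whatever a real NDN consumer library would provide):

    import time

    PREFIX = "/Soylent/Articles/TFA"
    seen = set()

    def express_interest(prefix):
        """Stand-in for a real NDN consumer call: ask the network for the
        newest Data under this prefix, from whoever has it cached nearest.
        Returns (name, content) or None."""
        return None   # placeholder only

    while True:
        result = express_interest(PREFIX)
        if result is not None:
            name, content = result
            if name not in seen:
                seen.add(name)
                print("new item:", name)   # comments fetched only when the article is opened
        time.sleep(60)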
How many times is the picture of the monkey flinging poo sent around? This acts to create a CDN-like property for the entire network, naturally. That picture would never be sent more than once over any given segment. A standard CDN doesn't even come close to such efficiency.
Yes, you're correct that IPv6 offers tools to accomplish this, but that doesn't mean that *this* is stupid or a bad idea. In fact, it's a great idea. Does IPv6 inherently solve DDoS attacks like this one does? There are no denial of service attacks possible on this network. Anywhere. Such attacks would instantly cause all the content to be "buffered" towards the edges, and those million devices would just be repeating themselves to their routers. The network is way too efficient to be brought low by just a few million devices. It would literally take JLAW's boobies.
Of course, let's just replace ALL the routers everywhere, add large amounts of storage to hold named content, and reinvent and refactor the code for web browsers, web servers, and damn near everything else that communicates over the net. Every single Internet-connected device we have would need to be replaced, down to the web-connected security cameras.
Considering just what it would take to actually implement this, I think it's humorous that we got off track with our cynicism before realizing it's an impossible idea once you tally up the costs to create it. Sure, some content executives are jizzing their pants over the level of control possible, but they will get angry when you tell them about the hundreds of billions it will take to create and implement :)
(Score: 2) by subs on Monday September 08 2014, @09:41AM
Did you notice one thing this network creates? NO MORE LAWSUITS.
How naive are you, to think content generators & end consumers won't be identifiable? If this really were hardware-assisted Freenet, I can guaran-fucking-tee you that they'll build intercept & snooping mechanisms into the core protocol.
If you slow down and think about it, most content is the same and repeated endlessly.
And as I already explained, when the value of the bit transferred is too low, caching it is just more hassle than it's worth. We've had caching services in the core network; they were called HTTP proxies, they cached AT THE CONTENT LEVEL, and they were buggy, expensive, and a nightmare to keep up for more than a few users. Frankly, we in the ISP space (and I worked at an ISP for about 5 years) dumped them as soon as we could.
Yes, you're correct that IPv6 offers tools to accomplish this, but that doesn't mean that *this* is stupid or a bad idea. In fact, it's a great idea.
It is a stupid idea. It's essentially trying to merge routers with caching proxies. If you've ever had to deal with large network routers, you'll know that the LAST thing a network admin wants is for the network to be more complicated, intelligent, and smartly self-controlling than is absolutely necessary to get the job done. Router software, including IOS, XOS, JunOS and everything else, contains bugs, bugs which can bring things down and break networks in non-trivial ways, and when that happens in the middle of the night, it's your phone that's going to be on fire.
Overall, it sounds like another bloated attempt at reinventing the wheel of data caching and multicast delivery, merging it with my control plane, and co-opting my routers' memory to do it. As a network admin, I am repulsed by that idea.
(Score: 2) by edIII on Monday September 08 2014, @07:26PM
I honestly think you are incapable of discussing this rationally as you clearly have emotional issues around being a network engineer.
While I want to discuss things intellectually, the only thing you wish to do is tear it apart. Tearing it apart not on the basis of fact, but on the basis of an opinion that people will screw up the implementation.
There are clearly some benefits of exploring a network like this, at least on paper. For you, it's just "stupid".
Like I said, this goes *WAY* beyond caching. If you calmed down for a second, you might be able to discuss it.
Have a nice day. I won't bother you anymore with discussions you clearly don't want to have on the Internet in a place where the purpose is to discuss things.
Technically, lunchtime is at any moment. It's just a wave function.
(Score: 3, Insightful) by maxwell demon on Saturday September 06 2014, @08:37PM
Wait, there are companies with big pockets involved? Expect whatever they develop to be heavily patented, and certainly not running on Open Source.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by Geotti on Saturday September 06 2014, @11:12PM
certainly not running on Open Source.
Oh, it will be running on (heavily modified, proprietary-patched) Open Source all right, just not Free Software.
But this thing does sound like an attempt at widening censorship capabilities.
(Score: 2) by gumby on Saturday September 06 2014, @09:14PM
It's URNs but now with added marketing gibberish! Just what I wanted!
It's not like the whole barely-implemented and certainly almost entirely misunderstood URI = { URL, URN } concept isn't already larded with jargon. But it takes the private sector to really exceed that!
(Score: 0) by Anonymous Coward on Saturday September 06 2014, @09:52PM
any century now ipv6 will be rolled out. honest.
yeah, i'm not too worried about the new normal networking happening anytime soon.
(Score: 0) by Anonymous Coward on Sunday September 07 2014, @12:35AM
sounds awesome ... for a database.
also sry to say that the internet already gave birth to this idea "in a garage" when academia was still expelling students for installing "rouge" http and irc servers on the uni-net, and before you ask, yes it involves magnets.
problem that has not yet been solved is that the "object" cannot be changed once it's pushed out. the URN [sic] is for this object, unlike with a URL.
Also need to fix this DNScrap ...
(Score: 2, Insightful) by Qzukk on Sunday September 07 2014, @02:15AM
Sounds like just about every P2P program after napster died, with a heaping helping of Freenet.
(Score: 3, Insightful) by nsa on Sunday September 07 2014, @03:46AM
Precisely. I wish I could more easily throw in a link to a sample of the dialogue between Hadden and Ellie in the movie Contact as he describes the 2nd machine: "except this one is completely controlled by the government/military and transnational corporate establishment". I.e., this is precisely every P2P program implemented as a layer on top of TCP/IP, except that unlike those, which should have been protected forms of free speech (until such content as death threats or pirated material shows up on them) and treated Neutrally by the Network, these actually will be, because they have the Cisco/NSA blessing, along no doubt with back doors built in for mass government tracking and surveillance.
Here is the progression I'm sure they envision: they deploy this, use their transnational corporate leverage to get the ISPs to accept it as blessed traffic, then they basically outlaw (or inconvenience into oblivion) bittorrent and the like. Today bittorrent (when used for legal content, like self-published music and open source software) is easily defended. But if tomorrow there is this new transnat/gov-blessed variant that does what bittorrent can do, then the voices that today prevent bittorrent from being squashed will lose out to those that only want a form of bittorrent with government control. I.e., the ability to deem that some new self-published movie disparaging to Mohammed or Jesus be easily and instantly cut off at the govt-controlled network level, rather than allowed to flourish untracked via a protocol like bittorrent. The Internet was the amazing Free Speech Machine. The government has caught on, and seeks to replace it with an equivalent that it gets easy veto power over when it likes. (As if in any emergency the govt couldn't already, today, if probable cause were easily demonstrable to a judge, shut down any part of the network it needed to.)
(Score: 3, Insightful) by deimios on Sunday September 07 2014, @06:46AM
Emphasis mine.
(Score: 2, Interesting) by CirclesInSand on Sunday September 07 2014, @09:54AM
Do we really want to design the Internet around a protocol that will break if P=NP?
(Score: 2) by caseih on Sunday September 07 2014, @07:17PM
You'll have to explain what you're getting at here. We've already based our entire digital economy on the idea that P != NP. Should P=NP, everything would crumble from e-commerce to infrastructure secured with encryption. So I really don't understand your point. The internet is already designed around the assumption that P != NP, so the whole thing would break. There are probably a lot of arguments to be made against this protocol scheme, but your argument is certainly not one of them.
(Score: 2) by FatPhil on Sunday September 07 2014, @07:50PM
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 1) by CirclesInSand on Monday September 08 2014, @12:14PM
P=NP is a big exception to the usual principle that tractability matters more than asymptotics. This is because if an affirmative solution were found to P=NP then every computing device in the world would be commandeered to use the solution to make itself tractable.
The comment was a bit tongue-in-cheek though, because if NP were actually made into P then the entire Internet would be rebuilt within a few years to reflect this result, probably by semi-sentient robots (yes, the result would be that earth-changing, since it puts all forms of formal reasoning into perspective).
(Score: 2) by FatPhil on Tuesday September 09 2014, @08:11PM
Nope. If the polynomial solution is O(x^(2^100)), then simply multiplying computing effort by a factor of a million won't help solve x ~ 2048 problems such as public-key crypto cracking.
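For scale, a quick back-of-the-envelope in Python:

    import math

    # a "polynomial" running time of x**(2**100) at key size x = 2048:
    digits = (2**100) * math.log10(2048)          # digits in the operation count
    print(f"~{digits:.2e} decimal digits in the step count")
    # a million-fold hardware speedup removes just 6 of those digits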
There are few better seeds than P=NP for starting a cryptographer on a rant, believe me.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 1) by CirclesInSand on Monday September 08 2014, @12:09PM
Application-layer security is based on P != NP assumptions. If it were found that P=NP, then transport-layer TCP and IP would still work fine, AFAIK. The application layers would have to switch to more old-fashioned and labor-intensive forms of security, such as passwords and one-time pads (communicated over some non-internet channel).
If P=NP (or, more realistically, if tractable factoring methods exist), AFAIK internet routing and such still works fine; just some end-user programs would have to be changed or scrapped.
(Score: 1, Informative) by Anonymous Coward on Sunday September 07 2014, @05:54PM
David Isenberg, who had years of experience working with the phone companies' SS7-based AIN framework [wikipedia.org], wrote a famous paper [hyperorg.com] detailing why TCP/IP was faster, better, and cheaper, and destined to win in the marketplace.
Sounds like some equipment vendors are anxious to promote the Intelligent Network, The Sequel.
(Score: 0) by Anonymous Coward on Monday September 08 2014, @09:56PM
So they've reinvented newsgroups. I hope there's hookers and blackjack this time.