I came across an article a few hours ago, http://www.networkworld.com/article/3121969/lan-wan/virtualizing-wan-capabilities.html
I was wondering how much of all that makes sense. It puts a lot of focus on the virtualization buzz that exists everywhere today, and which now seems to be pushed into networking as well. While I don't mind this being implemented by those who want it, I'm a bit of a fanboy of the saying "Hardware is King". All this "IT as a service" talk doesn't make much sense unless one defines what IT is: it may range from a shared printer, to a rack full of servers and switches, to an entire floor of them. Virtualised WANs and the notion of a "WAN as a service" might be a breeze to manage, but how robust would they be? Performance demands at the network level only ever go up, so what does it mean to virtualize the network itself, turning it into yet another layer down the stack? A layer which encapsulates all the other layers, and which may in turn contain such a layer itself. How deep would the nesting go?
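To put a rough number on the nesting question: every layer of encapsulation costs header bytes out of the same 1500-byte Ethernet MTU. A back-of-the-envelope sketch in Python; the header sizes are the standard ones, but the particular tunnel stack is only my own illustrative guess, nothing from TFA:

    # Rough sketch: payload left after stacking tunnel headers inside a 1500-byte MTU.
    # Header sizes are the usual ones; the particular stack is only an illustration.
    MTU = 1500

    overheads = [
        ("IPv4", 20),      # outer IP header
        ("UDP", 8),        # VXLAN rides on UDP
        ("VXLAN", 8),      # VXLAN header carrying the 24-bit VNI
        ("Ethernet", 14),  # the tunnelled inner frame's own L2 header
    ]

    remaining = MTU
    for name, size in overheads:
        remaining -= size
        print(f"after {name:8s} ({size:2d} bytes): {remaining} bytes left for payload")

    # Nest that whole packet inside another such tunnel and you lose roughly
    # another 50 bytes, so every extra level of "a layer containing a layer"
    # costs real payload space (and eventually forces fragmentation or a bigger MTU).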
From the article:
"In the network, NFV [Network Functions Virtualization] allows routers, switches, firewalls, load balancers, content delivery systems, end-user devices, IMS [IP Multimedia Subsystem] Nodes, and almost any other network function to be run as software on virtual machines—ultimately, on shared servers, using shared storage," Honnachari explained in an executive brief.
Basically it is the promise of being able to draw a network in CAD-like software and push a "Run" button.
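In code terms, a toy sketch of what that might look like (entirely made up by me, not AT&T's SDN code or any real orchestrator's API): you describe the boxes and links as data, and a pretend orchestrator walks the description and "launches" a VM per network function.

    # Toy sketch of the NFV "describe it, then press Run" idea.
    # Purely illustrative: the schema and the fake spin_up() are invented here,
    # not any real orchestrator's API.

    topology = {
        "functions": {
            "edge-router": {"image": "vrouter", "cpus": 2, "ram_gb": 4},
            "firewall":    {"image": "vfw",     "cpus": 2, "ram_gb": 2},
            "lb":          {"image": "vlb",     "cpus": 1, "ram_gb": 1},
        },
        "links": [
            ("edge-router", "firewall"),
            ("firewall", "lb"),
        ],
    }

    def spin_up(name, spec):
        # A real deployment would talk to a hypervisor or cloud API here;
        # this just reports what would be created.
        print(f"launching VM '{name}' ({spec['image']}, "
              f"{spec['cpus']} vCPU, {spec['ram_gb']} GB RAM)")

    def run(topo):
        for name, spec in topo["functions"].items():
            spin_up(name, spec)
        for a, b in topo["links"]:
            print(f"wiring virtual link {a} <-> {b}")

    run(topology)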
Then there is also:
In a world where every part of business is moving, ever faster, the new WAN era will be characterized by user-intuitive solutions that help businesses sense and adapt to shifting demands, allowing those businesses to achieve competitive advantage by helping them optimize their business in motion.
What could these shifting demands be that would make you change your mind so often about the WAN infrastructure on which so many other things depend? The virtual network for International Stock Exchange traffic, anyone?
Like someone else mentioned, would any Soylentils enjoy playing "The Sims: NOC Edition"?
Previously:
Software-Defined Networking is Dangerously Sniffable
AT&T Open Sources SDN 8.5 Million Lines of Code - to be Managed by Linux Foundation [updated]
(Score: 2) by NotSanguine on Thursday September 29 2016, @08:52AM
Oh, it would definitely be useful. A colleague of mine has a 40 Mbit internet connection; if we could set up a virtual WAN between our computers, I could use his fast connection instead of my own slow one, without the cost of having new cables laid to my apartment. But that pretty much falls into the realm of magic.
That was my thought at first too. You can virtualize a lot of things, but you can't virtualize Layer 1/Layer 2 [wikipedia.org]. Okay, I suppose you could virtualize layer 2, but that wouldn't be worthwhile, given that most layer 2 stuff is already done cheaply and efficiently in hardware.
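For reference, what a "virtual WAN between our computers" would amount to in practice is L2-over-L3 tunnelling (VXLAN, L2TP and friends): wrap each Ethernet frame in UDP/IP and ship it across the internet. A minimal Python sketch of the framing, using the 8-byte VXLAN header from RFC 7348; the inner frame and VNI below are made-up example values:

    import struct

    # Minimal sketch of VXLAN-style L2-over-L3 encapsulation (RFC 7348).
    # The inner frame and VNI are made-up example values.

    def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
        # VXLAN header: 8 bits of flags (0x08 = "VNI present"), 24 bits reserved,
        # then a 24-bit VNI and 8 more reserved bits.
        flags_and_reserved = 0x08 << 24
        vni_and_reserved = vni << 8
        header = struct.pack("!II", flags_and_reserved, vni_and_reserved)
        return header + inner_ethernet_frame  # this would go into a UDP datagram (port 4789)

    # A fake "Ethernet frame" just to show the call; a real one would come
    # off a tap interface or a raw socket.
    fake_frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
    packet = vxlan_encapsulate(fake_frame, vni=5000)
    print(len(packet), "bytes:", packet.hex())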
Or is this simply a marketing term for a VPN?
I think it's a little broader than that.
I suspect that AT&T (note the author works for AT&T and the word "Sponsored" is prominent at the top of TFA -- perhaps that should have been mentioned in TFS?) is looking to encourage knowledge about, and the use of, its SDN code (hence the move to open source it) among large network providers, big multinationals and large retailers.
Since the margins on commodity hardware are pretty low, that may well be attractive to those who are being gouged for hardware and support by the likes of Cisco and their ilk. It also gives AT&T an in to a market they haven't really been able to penetrate. The big money for AT&T will be that, once the deal is done, someone will need to come in, set up the SDN environment and make it work. Kaching!
No, no, you're not thinking; you're just being logical. --Niels Bohr