
posted by martyb on Thursday September 29 2016, @12:03AM
from the get-connected dept.

I came across an article a few hours ago, http://www.networkworld.com/article/3121969/lan-wan/virtualizing-wan-capabilities.html

I was wondering how much of all that makes sense. It puts a lot of focus on the virtualization buzz that is everywhere today and now seems to be pushed into networking as well. While I don't mind this being implemented by those who want it, I am a bit of a fanboy of the saying "Hardware is King". All this "IT as a service" doesn't make much sense unless one defines what IT is: it may range from a single shared printer, to an entire rack full of servers and switches, to an entire floor full of them. Virtualised WANs and the notion of a 'WAN as a service' might be a breeze to manage, but how robust would they be? Performance demands at the network level only ever go up, so what does it mean to virtualize the network itself, turning it into yet another layer in the stack, a layer that encapsulates all the other layers and which may in turn contain such a layer too? How deep would the nesting go?

From the article:

"In the network, NFV [Network Functions Virtualization] allows routers, switches, firewalls, load balancers, content delivery systems, end-user devices, IMS [IP Multimedia Subsystem] Nodes, and almost any other network function to be run as software on virtual machines—ultimately, on shared servers, using shared storage," Honnachari explained in an executive brief.

Basically, it is the promise of being able to draw a network in CAD-like software and push a "Run" button.
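For a feel of what "drawing a network and pushing Run" looks like in practice, here is a minimal sketch using Mininet, a common SDN emulator with a Python API. This is only an illustration of the idea, not anything from TFA; the topology, host names and link parameters are made up.

#!/usr/bin/env python
# A tiny "network as code" sketch using Mininet (requires Mininet and root).
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoSiteTopo(Topo):
    """Two hosts on two switches, joined by a rate-limited 'WAN' link."""
    def build(self):
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        h1 = self.addHost('h1', ip='10.0.0.1/24')
        h2 = self.addHost('h2', ip='10.0.0.2/24')
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        # The emulated long-haul link: 10 Mbit/s with 20 ms of delay.
        self.addLink(s1, s2, bw=10, delay='20ms')

if __name__ == '__main__':
    net = Mininet(topo=TwoSiteTopo(), link=TCLink)  # TCLink honours bw/delay
    net.start()
    net.pingAll()   # "push Run" and see whether the drawing actually forwards packets
    net.stop()

Swap the emulator for real switches plus an orchestrator and that is, roughly, the pitch NFV/SDN vendors are making.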

Then there is also:

In a world where every part of business is moving, ever faster, the new WAN era will be characterized by user-intuitive solutions that help businesses sense and adapt to shifting demands, allowing those businesses to achieve competitive advantage by helping them optimize their business in motion.

What could these shifting demands be that would have you changing your mind so often about the WAN infrastructure on which so many other things depend? The virtual network carrying International Stock Exchange traffic, anyone?

Like someone else mentioned, would any Soylentils enjoy playing "The Sims: NOC Edition"?

Previously:
Software-Defined Networking is Dangerously Sniffable
AT&T Open Sources SDN 8.5 Million Lines of Code - to be Managed by Linux Foundation [updated]


Original Submission

Related Stories

AT&T Open Sources SDN 8.5 Million Lines of Code - to be Managed by Linux Foundation [updated] 17 comments

AT&T (NYSE: T) announced on July 13 it will release its Enhanced Control, Orchestration, Management and Policy (ECOMP) platform to the wider telecom industry as an open source offering managed by the Linux Foundation. The goal, the company said, is to make ECOMP the telecom industry's standard automation platform for managing virtual network functions and other software-centric network capabilities.

SDN refers to Software Defined Networking.

[1] http://www.fiercetelecom.com/story/att-open-sources-ecomp-linux-foundation-hopes-make-it-industrys-standard-sd/2016-07-13

UPDATE: Some background might be helpful. On March 15, 2016 AT&T announced Our SDN Call to Action on their Innovation Space blog:

For almost two years now, we have been architecting and coding a large software project called ECOMP. That stands for Enhanced Control, Orchestration, Management and Policy. It’s a mouthful. But it’s also important. ECOMP is an infrastructure delivery platform and a scalable, comprehensive network cloud service. It provides automation of many service delivery, service assurance, performance management, fault management, and SDN tasks. It is designed to work with OpenStack but is extensible to other cloud and compute environments. ECOMP is the engine that powers our software-centric network.

Now, we’re opening the hood of our network and showing you the engine. ECOMP automates the network services and infrastructure that will run in the cloud. A system like ECOMP is very powerful as it allows us to build our next generation cloud-based network in a vendor agnostic way, giving us great flexibility for deploying NFV / SDN in our network. As a model-driven platform, this framework costs less than maintaining existing network systems. And it allows us to accelerate the implementation of new services quicker than ever before. ECOMP is one of the most challenging, complex and sophisticated software projects in AT&T’s history.

So, what’s next? We have written a whitepaper on ECOMP [pdf] that we’re making publicly available starting today. We did this to give the industry an idea of our thinking and direction. On a global scale, we know the needs we have are similar to the rest of the industry and other cloud services providers.

There are several more background docs on their blog as well.

Consider that a virtual machine (VM) presents the appearance of a real machine on which software may be run. It is now relatively easy and commonplace to spin up a new VM for development and production purposes. ECOMP appears to provide similar tools by which networking can be virtualized and manipulated at a higher level of abstraction and flexibility.
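As a rough analogy to spinning up a VM, here is what creating a small virtual network looks like with the OpenStack SDK; ECOMP itself sits a level above primitives like these. The cloud name and addressing are placeholders, not anything from AT&T's paper.

import openstack

# Credentials are assumed to live in clouds.yaml or the environment; 'mycloud' is a placeholder.
conn = openstack.connect(cloud='mycloud')

# An isolated tenant network, a subnet, and a router: the basic virtual-network primitives.
net = conn.network.create_network(name='branch-office-net')
subnet = conn.network.create_subnet(
    name='branch-office-subnet',
    network_id=net.id,
    ip_version=4,
    cidr='192.168.50.0/24',   # placeholder addressing
)
router = conn.network.create_router(name='branch-office-rtr')
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

print('network %s is up; attach VMs to it like any other NIC' % net.name)

ECOMP's pitch, per the blog post above, is to automate and police exactly this kind of operation at carrier scale, across clouds and physical network functions alike.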


Original Submission

Software-Defined Networking is Dangerously Sniffable 5 comments

Software-defined networking (SDN) controllers respond to network conditions by pushing new flow rules to switches. And that, say Italian researchers, creates an unexpected security problem.

The researchers were able to persuade their SDN environment to leak information that sysadmins probably don't want out in public, including network virtualisation setups, quality of service policies, and more importantly, security tool configuration information such as "attack detection thresholds for network scanning".

Even a single switch's flow table, they write, can provide this kind of information, as well as serving as a side-channel for an attacker to exploit.

The three network boffins – Mauro Conti of the University of Padova, and Sapienza University's Fabio De Gaspari and Luigi Mancini – are particularly concerned about SDN being exploited to help an attacker build a profile of the target network, in what they call a Know Your Enemy (KYE) attack.

For example, they write, an attacker could potentially:

  • Connect to the passive listening ports most SDN switches include for remote debugging, to retrieve the flow table (they offer HP Procurve's dpctl utility as an example);
  • Infer information about the flow table from jitter (that is, variance in round-trip time (RTT));
  • Sniff control traffic, because of inadequate protection (not using TLS, or not using certificates for authentication);
  • Exploit vulnerabilities that might exist in switch operating systems, such as backdoors; or
  • Copy the flow table or memory content of the switch to an external location.

The paper points out that none of this is specific to particular devices: "the KYE attack exploits a structural vulnerability of SDN, which derives from the on-demand management of network flows, that in turn is one of the main features and strengths" of SDN.
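To make the jitter item in the list above concrete, here is a back-of-the-envelope sketch of the kind of timing probe the researchers describe: the first packet of a new flow that misses a switch's flow table gets punted to the controller, so it tends to take measurably longer than packets matching an already-installed rule. The target address and port are placeholders, and a real measurement would have to control for ARP, caching and ordinary network noise.

import socket
import statistics
import time

def probe_rtts(host, port, count=20, timeout=2.0):
    """Time *count* TCP connection setups to host:port, in milliseconds."""
    rtts = []
    for _ in range(count):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            continue
        rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

rtts = probe_rtts('192.0.2.10', 80)   # placeholder target inside the SDN
if len(rtts) > 2:
    first, rest = rtts[0], rtts[1:]
    print('first connection : %6.2f ms' % first)
    print('later connections: %6.2f ms mean, %5.2f ms stdev'
          % (statistics.mean(rest), statistics.stdev(rest)))
    # A first probe that is much slower than the rest is consistent with a
    # flow-table miss being punted to the controller before a rule was installed,
    # which is the sort of signal a Know Your Enemy attacker would look for.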


Original Submission

  • (Score: 2, Interesting) by RamiK on Thursday September 29 2016, @12:56AM

    by RamiK (1813) on Thursday September 29 2016, @12:56AM (#407683)

    SDN is an inefficient waste of power: a market failure gift-wrapped by the Obama administration's push for fracking, which depends on power costs remaining artificially low while the government and insurers pick up the tab for clean-ups, new filtration and water lines (to replace polluted aquifers), and earthquake reconstruction.

    In 20 years, these designs will be remembered as notoriously as the 5L engines of yesteryear. The Asian and EU markets will continue to make them smaller, more dedicated and more efficient, while the US market will continue detaching itself from reality.

    And everyone will see it coming.

    --
    compiling...
  • (Score: 4, Interesting) by frojack on Thursday September 29 2016, @02:04AM

    by frojack (1554) on Thursday September 29 2016, @02:04AM (#407705) Journal

    I know this lady who used to provision the POS systems worldwide for a popular brand of yuppie fast food.
    They had it all computerized: a checklist a mile long, automated where possible, customized for every country, every state, every town.

    E months before opening: order equipment
    L weeks before: order the lines
    D days before: ship it
    D' days before: send the technicians.

    Over time it changed from dialup, to leased line, then to internet drops. The backend was always TCP/IP, so the physical layers didn't matter. A wide variety of carriers and providers were used initially, slowly consolidating to just a few.

    When she was about to retire, the whole thing got contracted to AT&T. It's all virtual now, especially as far as the restaurant is concerned.

    Packets is packets. If you want to manage the hardware, fight with every modem, plan every drop, be my guest.
    I'll take virtual any day. I'll use SSL/TLS, but beyond that I don't care how the contractors get it from Italy to the West Coast.
    If they want to configure the routing by dragging icons around a screen, that's fine. I still drop packets on a wire.

    It's pretty much all handled by the edge providers these days anyway.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 1) by FunkyLich on Thursday September 29 2016, @02:01PM

      by FunkyLich (4689) on Thursday September 29 2016, @02:01PM (#407902)

      What makes you so certain that packets will be delivered by a several-layers-virtualised OS-and-application system, potentially distributed across continents, with the same efficiency and speed as directly on hardware? And do you think maintaining a virtual WAN would be simpler than maintaining the hardware and its configurations as they are now? (I don't equate a virtual WAN with merely creating VMs in OpenStack and connecting them to some virtual router in the same OpenStack, ultimately delivered on the same physical port. However complicated an OpenStack farm of hundreds or even thousands of nodes may seem, it is still very simple compared to any serious WAN infrastructure.)

      You use SSL/TLS and that is as far as your concern goes. But beyond that very point is where my concern starts, while I don't really care what lies before it. I already fight with modems and plan every drop just so you are able to simply use SSL/TLS. What I am concerned about is that all this will add several layers where things can go wrong. Certainly we could solve it all with the somebody-else's-problem approach you seem to suggest, and instead of 10 people employ one hundred, with at best equal network performance as far as you would be concerned or could perceive. But I doubt you'd like an eight-fold increase in your bill for those extra 90 people dealing with a kind of distributed magic box.

  • (Score: 1, Insightful) by Anonymous Coward on Thursday September 29 2016, @04:30AM

    by Anonymous Coward on Thursday September 29 2016, @04:30AM (#407745)

    There is nothing magical about software. Software only ever allows you to use the hardware.

    Better software might allow you to use the hardware in question more efficiently, but does not create more ALUs. You only get the transistors you got in the first place.

    Software defined whatever (network, storage, whatever's cool this week) allows for different rearrangements of available resources, period. And there's the magic: software adds complexity to your core function, because something has to run the software as well as your core function (or you're splitting the load somehow) but it can save time when you're switching configurations because software is cheaper to reconfigure than hardware.

    However, when you've finally decided what you want to do (assuming you're not a weathervane style of businesscritter), the software gives you nothing but more burned electricity, another layer of failure, and another attack surface.

    But of course, admitting that is heresy to the move-fast-and-break-stuff more-agile-than-thou devops priesthood. I'll go drink stale coffee for my sins of heresy.

  • (Score: 1, Insightful) by Anonymous Coward on Thursday September 29 2016, @07:40AM

    by Anonymous Coward on Thursday September 29 2016, @07:40AM (#407777)

    So, a software-defined network is defined as a virtual network inside a virtual machine. Or, more likely, a virtual network inside the virtualization software, linking virtual machines together.

    A WAN, on the other hand, is a network connecting one physical location to another.

    Oh, it would definitely be useful. A colleague of mine has a 40 Mbit internet connection; if we could set up a virtual WAN between our computers, I could use his fast connection instead of my own slow one, without the cost of having new cables laid to my apartment. But that pretty much falls into the realm of magic.

    Or is this simply a marketing term for a VPN?
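    For what it's worth, the unglamorous answer to a "virtual WAN between two machines" is basically a tunnel. Here is a minimal sketch using Python to drive standard iproute2 commands; the addresses are documentation placeholders, and note that this only gives you a private link over the paths you already have, it does not borrow anyone's bandwidth.

    import subprocess

    # Bring up a point-to-point GRE tunnel to a peer (run as root on one end;
    # mirror the addresses on the other). All addresses below are placeholders.
    LOCAL_PUBLIC = '198.51.100.1'
    REMOTE_PUBLIC = '203.0.113.2'
    TUNNEL_ADDR = '10.10.10.1/30'

    def sh(cmd):
        print('+', cmd)
        subprocess.run(cmd.split(), check=True)

    sh('ip tunnel add wan0 mode gre local %s remote %s ttl 255'
       % (LOCAL_PUBLIC, REMOTE_PUBLIC))
    sh('ip addr add %s dev wan0' % TUNNEL_ADDR)
    sh('ip link set wan0 up')
    # Traffic to 10.10.10.2 now rides inside GRE over the ordinary internet path;
    # wrap it in IPsec or similar if you also want the "P" in VPN.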

    • (Score: 2) by NotSanguine on Thursday September 29 2016, @08:52AM

      by NotSanguine (285) <{NotSanguine} {at} {SoylentNews.Org}> on Thursday September 29 2016, @08:52AM (#407792) Homepage Journal

      Oh, it would definitely be useful. A colleague of mine has a 40 Mbit internet connection; if we could set up a virtual WAN between our computers, I could use his fast connection instead of my own slow one, without the cost of having new cables laid to my apartment. But that pretty much falls into the realm of magic.

      That was my thought at first too. You can virtualize a lot of things, but you can't virtualize Layer 1/Layer 2 [wikipedia.org]. Okay, I suppose you could virtualize layer 2, but that wouldn't be worthwhile, given that most layer 2 stuff is already done cheaply and efficiently in hardware.

      Or is this simply a marketing term for a VPN?

      I think it's a little broader than that.

      I suspect that AT&T (note the author works for AT&T and the word "Sponsored" is prominent at the top of TFA -- perhaps that should have been mentioned in TFS?) is looking to encourage knowledge about, and the use of, their SDN code (hence the move to open source it) among large network providers, big multinationals and large retailers.

      Since the margins on commodity hardware are pretty low, that may well be attractive to those who are being gouged for hardware and support from the likes of Cisco and their ilk. Also, it gives them an in to a market they haven't really been able to penetrate. The big money for AT&T will be that once the deal is done, someone will need to come in, set up the SDN environment and make it work. Kaching!

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 0) by Anonymous Coward on Friday September 30 2016, @04:02PM

    by Anonymous Coward on Friday September 30 2016, @04:02PM (#408419)

    When can I have Software Defined Software?

  • (Score: 0) by Anonymous Coward on Friday September 30 2016, @04:32PM

    by Anonymous Coward on Friday September 30 2016, @04:32PM (#408430)

    https://www.att.com/Common/about_us/pdf/AT&T%20Domain%202.0%20Vision%20White%20Paper.pdf [att.com]
    The white paper referred to in the article.

    TL;DR: Inline devices that can be virtualized, such as firewalls, proxies, NAT and routing, can run on blade servers attached to high-capacity switches at the ISP's co-lo, if you want to let your ISP manage everything.
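    As a toy illustration of running an "inline device" as plain software, here is a sketch that builds a NAT router out of nothing but a Linux network namespace and iptables. The names and addresses are made up, and a carrier-grade VNF obviously involves far more than this.

    import subprocess

    def sh(cmd):
        """Run one shell command, echoing it first (requires root)."""
        print('+', cmd)
        subprocess.run(cmd, shell=True, check=True)

    # A network namespace standing in for the "virtual firewall/NAT appliance",
    # with an 'inside' leg and an 'outside' leg.
    sh('ip netns add vnf0')
    sh('ip link add in-host type veth peer name in-vnf')
    sh('ip link add out-host type veth peer name out-vnf')
    sh('ip link set in-vnf netns vnf0')
    sh('ip link set out-vnf netns vnf0')

    sh('ip addr add 10.20.0.1/24 dev in-host && ip link set in-host up')       # "LAN" side
    sh('ip addr add 192.168.77.1/30 dev out-host && ip link set out-host up')  # "upstream" side
    sh('ip netns exec vnf0 ip addr add 10.20.0.254/24 dev in-vnf')
    sh('ip netns exec vnf0 ip addr add 192.168.77.2/30 dev out-vnf')
    sh('ip netns exec vnf0 ip link set in-vnf up')
    sh('ip netns exec vnf0 ip link set out-vnf up')

    # Make the namespace route and NAT: that is the whole "appliance".
    sh('ip netns exec vnf0 sysctl -w net.ipv4.ip_forward=1')
    sh('ip netns exec vnf0 iptables -t nat -A POSTROUTING -o out-vnf -j MASQUERADE')

    Scale that up to blade servers full of such namespaces or VMs, orchestrated centrally, and you have roughly the picture the white paper is painting.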