
SoylentNews is people

posted by janrinok on Sunday November 01 2015, @05:27PM
from the dream dept.

While the Net has certainly scored a point or two against the State, the State has scored a lot more points against the Net. If the State wants your domain name, it takes it. If that's independence, what does utter defeat and submission look like?

Worse: whatever state tyranny exists, it's obviously dwarfed by the private, free-market, corporate tyrannosaurs that stalk the cloud today. We can see this clearly by imagining all these thunder-lizards were actually part of the government. "Private" and "public" are just labels, after all.

Imagine a world in which LinkedIn, Facebook, Twitter, Apple and the NSA were all in one big org chart. Is there anyone, of any political stripe, who doesn't find this outcome creepy? It's probably going to happen, in fact if not in form. While formal nationalization is out of fashion, regulation easily achieves the same result, while keeping the sacred words "private enterprise."

How do today's technologists win freedom from State control?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by Anonymous Coward on Sunday November 01 2015, @05:58PM

    by Anonymous Coward on Sunday November 01 2015, @05:58PM (#257201)

    What you're looking for, technologically speaking, is a new protocol and environment which supports the interoperation of an indefinite number of mutually untrusting nodes. If you're sitting in your basement/livingroom/pornpalace/library, you have to start with the assumption that every single line is tapped, every single router is owned, and every single infrastructural service or component is suborned. Unfortunately, if you assume the same about your home device, then you're screwed at the basic level because the Big Evils already know it all, so we have to separately find some way of, at minimum, creating a temporary communications interface which retains no information after use. A bootable CD may be one way - but that's beyond the apparent scope of the original story, so I'll pass on to the requirements of such a network system.

    OK, so let's talk networking. Specifically, let's talk about what the networking system's attributes have to be as an absolute precondition for the desideratum of the story:

    There may be no central authority for anything. No central naming service, no central numbering service, no central service registry, no central key registry - none of that stuff, because the moment that you have that, someone big enough and bad enough can come along and order someone else off the net. Or co-opt the facilities. Or block listed facilities. IANA? ICANN? All those lovely guys? They're inherently security holes. You have to have a numbering system that permits mutually untrusting nodes with imperfect information about the state of the network to come up with their own identities in isolation, with negligible odds of namespace clashes, and commence communications regardless of any third party or combination of third parties.
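
    The "negligible odds of namespace clashes" claim is easy to check with the birthday bound. A quick sketch (illustrative numbers, not a spec):

```python
def collision_probability(n_ids, id_bits):
    """Approximate birthday-bound probability that any two of n_ids
    independently random id_bits-bit identifiers collide (valid for
    small probabilities)."""
    return n_ids ** 2 / 2 ** (id_bits + 1)

# Even a trillion self-assigned 256-bit identifiers leave collision
# odds far below any practical concern.
assert collision_probability(10 ** 12, 256) < 1e-50
```

    With 256-bit random identifiers, every node on Earth could mint IDs for centuries without a meaningful chance of a clash, which is exactly what lets mutually untrusting nodes self-assign identities in isolation.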

    Topology must be flexible, transparent, self-reconfiguring and opaque. Otherwise the Big Bad can just trace down the line through the cable company and determine that packets originated in a given location, and bomb that location from orbit. This means that not only can your cable company not meaningfully demand that you not also use ADSL on the same computer, but they can't know, and can't even state with any degree of certainty whether, if you are doing so, you might be using them as redundant links, bridging, or whatever. If they can work this out, then they can work out whether or not you are the true original source of certain messages.

    Transport must allow for encryption negotiation, but must permit arbitrary encryption (including, *sigh* rot13 and rot26) because we just plain don't know what the state of the encryption art will be 20 years from now. Because otherwise the Big Bad will simply read all your crap. Yes, this does require sufficient intelligence to also refuse bad cyphers, so just because a Big Bad asks everyone to communicate through it using rot26 doesn't mean that anyone has to agree to do so.
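
    A sketch of what that negotiation logic might look like, with hypothetical cipher names (the reject list is the point: a peer offering only rot26 gets refused, not accommodated):

```python
# Hypothetical cipher names; the reject list is illustrative.
REJECTED = {"rot13", "rot26", "null"}  # rot26 being no encryption at all

def negotiate(our_prefs, their_offer):
    """Pick our most-preferred cipher the peer also offers, refusing
    anything on the reject list even if it is all the peer will speak."""
    for cipher in our_prefs:
        if cipher in their_offer and cipher not in REJECTED:
            return cipher
    return None  # no acceptable cipher: refuse to communicate

assert negotiate(["aes256-gcm", "chacha20"], ["rot26", "chacha20"]) == "chacha20"
assert negotiate(["rot13", "rot26"], ["rot13", "rot26"]) is None
```

    The reject list would itself need to be updateable over the next 20 years, which is the hard part the comment alludes to.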

    Identities must be easy to create, easy to verify, hard (cryptographically) to fake and disposable. This allows you to be as public as you want, as anonymous as you want, and as pseudonymous as you want. Otherwise the Big Bad can maintain a separate register of identities and you're as screwed as ever.
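
    One minimal way to get identities that are cheap to create, cryptographically hard to fake, and disposable by design is a hash-based one-time signature. A sketch using Lamport signatures (stdlib only; a real system would use proper public-key signatures):

```python
import hashlib
import secrets

def H(b):
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg):
    # For each bit of the message digest, reveal one secret of the pair.
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, msg, sig):
    # Hash each revealed secret and compare against the public key.
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"tampered", sig)
```

    Publishing pk is publishing an identity; signing once and then discarding the key gives exactly the "disposable" property, since reuse would leak secret halves.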

    I've done a lot of research into this topic, and if anyone is actually serious about what the underpinnings would have to look like, let me know in a message. But I'm going to predict that nobody gives a sufficiently large shit to work on this, just as they haven't for the 15+ years that I've been singing this song.

    Go ahead, surprise and delight me.

    There are other benefits to getting this right. No more supercookies. No more ad insertion. Trivially easy anonymous browsing.

    Big Bads will stomp their feet and whine about terrorism, and kiddieporn, and kiddieporning terrorists. They'll use the exact same technology for their own agents, because they aren't officially terroristic kiddieporners. And if they are, it was just a few bad apples.

    Educating the populace at large is another topic, and would require iconically simple explanations of key facts, and I would be delighted to work with graphic artists on how to convey the relevant understandings.

  • (Score: 3, Insightful) by Runaway1956 on Sunday November 01 2015, @06:41PM

    by Runaway1956 (2926) Subscriber Badge on Sunday November 01 2015, @06:41PM (#257209) Journal

    Tor is a beginning, that satisfies part of those requirements.

    I2P is a better beginning, which satisfies more of those requirements.

    Both are subject to MITM (man-in-the-middle) attacks, but less so than any of the current common protocols.

    The people at I2P are pretty serious about development. The one major fault I find with them is that Java is the underlying technology on which everything runs. I really don't like Java, and I suspect that some exploit based on Java will ultimately be their downfall.

    If there is anything that even approaches the anonymity of Tor and I2P, I'm not aware of it. Other tools like VPN are somewhat helpful, but most of them can be defeated almost trivially. Pretty much everything in use today operates under some scheme of trust.

    • (Score: 2) by urza9814 on Tuesday November 03 2015, @07:02PM

      by urza9814 (3954) on Tuesday November 03 2015, @07:02PM (#258059) Journal

      If there is anything that even approaches the anonymity of Tor and I2P, I'm not aware of it. Other tools like VPN are somewhat helpful, but most of them can be defeated almost trivially. Pretty much everything in use today operates under some scheme of trust.

      Are you unaware of Freenet, or are you aware of some flaw in it? Unlike I2P and Tor, it's an entirely contained network and doesn't try to proxy stuff from the normal web, so that may help resist some different classes of attacks. Of course, I haven't paid much attention since the 0.5/0.7 network split a few years back. At the time there was some serious suspicion of the main dev's motives, so it could be totally compromised by now...but there was also a lot of movement towards web of trust and other such ideas that I still haven't seen in any other network.

      • (Score: 2) by Runaway1956 on Wednesday November 04 2015, @02:28AM

        by Runaway1956 (2926) Subscriber Badge on Wednesday November 04 2015, @02:28AM (#258240) Journal

        You point out one fault yourself, that being trust. Relying on a web of trust means that you cannot meet the parameters of GP's post. In a world where paranoia is appropriate, you must trust no one.

        • (Score: 2) by urza9814 on Wednesday November 04 2015, @01:22PM

          by urza9814 (3954) on Wednesday November 04 2015, @01:22PM (#258352) Journal

          Depends on what you're trusting that person to do. In the Freenet discussions, it was mostly just about trusting someone to not flood the message boards with spam.

  • (Score: 2) by NotSanguine on Sunday November 01 2015, @06:54PM

    by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday November 01 2015, @06:54PM (#257212) Homepage Journal

    What you're looking for, technologically speaking, is a new protocol and environment which supports the interoperation of an indefinite number of mutually untrusting nodes. If you're sitting in your basement/livingroom/pornpalace/library, you have to start with the assumption that every single line is tapped, every single router is owned, and every single infrastructural service or component is suborned. Unfortunately, if you assume the same about your home device, then you're screwed at the basic level because the Big Evils already know it all, so we have to separately find some way of, at minimum, creating a temporary communications interface which retains no information after use. A bootable CD may be one way - but that's beyond the apparent scope of the original story, so I'll pass on to the requirements of such a network system.

    An interesting point, AC. I'll get to that in a moment.

    I found TFA to be poorly written and, given the source, quite self-serving. Now I can't get that time back. :(
    At the same time, if you ignore the self-serving, pseudo-political bullshit, the ideals of decentralization and strong structural incentives for fair play and mutual trust are quite appealing.

    The question which came to mind while reading TFA was "Who is this person, and why are they writing this?"

    The answer came by looking further at the site [urbit.org] hosting it. From the main page of the site:

    Urbit is a decentralized computing platform built on a clean-slate OS.

    Your urbit is a personal server: a persistent virtual computer in the cloud that you own, trust, and control.

    Back to your point, AC. Apparently, the author(s) of this piece are involved in the creation of an open, decentralized (if you can have such a thing in a virtualized environment) PaaS [wikipedia.org] environment which purports to address "identity and security problems which the Internet can't easily address."

    A more interesting (at least to me) treatise [urbit.org] hits upon your point:

    Urbit is a clean-slate system software stack defined as a deterministic computer. An encrypted P2P network, %ames, runs on a functional operating system, Arvo, written in a strict, typed functional language, Hoon, which compiles itself to a combinator interpreter, Nock, whose spec gzips to 340 bytes.

    What is Urbit for? Most directly, Urbit is designed as a personal cloud server for self-hosted web apps. It also uses HTTP APIs to manage data stuck in traditional web applications.

    More broadly, Urbit's network tackles identity and security problems which the Internet can't easily address. Programming for a deterministic single-level store is also a different experience from Unix programming, regardless of language.

    Given that this project is licensed under the MIT License [wikipedia.org], should it flourish, it could create a very interesting ecosystem. Then again, I thought that could be the case with Diaspora [diasporafoundation.org] as well.

    Both projects attempt to move important network concepts (social networking in the case of Diaspora, and virtualized computing and data sharing in the case of Urbit) out of corporate hands and create open, decentralized platforms for them.

    While the goals are quite laudable, and the mechanisms novel and, in some cases, quite elegant, gaining widespread acceptance is unlikely. Which is quite sad, IMHO.

    The biggest problem is that change is painful. And for most folks, with projects like Urbit and Diaspora, the pain of making such a change is much greater than the pain of continuing to do what they're already doing.

    As such, it seems unlikely that this project, aside from a few niche players, will be able to survive and thrive. I hope I'm wrong, but I don't think so.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 2) by VLM on Sunday November 01 2015, @07:09PM

      by VLM (445) Subscriber Badge on Sunday November 01 2015, @07:09PM (#257219)

      I've been following Urbit for a while. A better comparison than Urbit versus Diaspora: Diaspora tried to use Ruby on Rails as its underlying framework, and in theory Urbit would make a better underlying system on which to write a Diaspora 2.0. At least as I understand it.

      Another problem is that Urbit as a new idea (a radically functional OS/language/bytecode stack all the way from UI to bare metal, or maybe bare metal in the future) is proposed to solve a lot of "enterprise-size" problems. So it's entirely possible that "Urbit as the new Diaspora" bombs, yet as a technology maybe Facebook in 2020 is just talking to a giant Urbit-based cluster at Facebook Inc., doing all the usual corruption in an exciting manner. Much as you can make an 80s/90s AOL-like walled-garden experience that happens to run over the TCP/IP internet, or a TV-like experience over the internet, you could make an NSA/FB-like experience that uses Urbit on its backend in a private cloud.

      • (Score: 3, Insightful) by NotSanguine on Sunday November 01 2015, @07:30PM

        by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday November 01 2015, @07:30PM (#257229) Homepage Journal

        I've been following Urbit for a while. A better comparison than Urbit versus Diaspora: Diaspora tried to use Ruby on Rails as its underlying framework, and in theory Urbit would make a better underlying system on which to write a Diaspora 2.0. At least as I understand it.

        An excellent point. I'd clarify that my comparison had less to do with the technical platform and more to do with the political idea of decentralization and breaking away from the strongly centralized corporate environments like Facebook, the (pretty much) dead Google+ and the "someone else's servers" providers like AWS and Azure.

        Another problem is that Urbit as a new idea (a radically functional OS/language/bytecode stack all the way from UI to bare metal, or maybe bare metal in the future) is proposed to solve a lot of "enterprise-size" problems. So it's entirely possible that "Urbit as the new Diaspora" bombs, yet as a technology maybe Facebook in 2020 is just talking to a giant Urbit-based cluster at Facebook Inc., doing all the usual corruption in an exciting manner. Much as you can make an 80s/90s AOL-like walled-garden experience that happens to run over the TCP/IP internet, or a TV-like experience over the internet, you could make an NSA/FB-like experience that uses Urbit on its backend in a private cloud.

        I alluded to that when referencing the MIT license. Yes, I suppose it could be (especially if you can eventually implement Urbit on bare metal) quite useful for both public and private virtualization environments, both commercial and non-commercial. However, I think the transformative potential in a platform such as Urbit is in creating truly decentralized interactions that support strong encryption and don't require a middleman such as Facebook for such interactions.

        As I pointed out in another post (I neglected to include it in my original one), the potential for creating a truly decentralized Internet will be largely dependent upon truly high-speed symmetrical ISP/last-mile connections. When I can securely serve up my creative/personal content to those I choose, without bottlenecks or the support of centralized information/money grubbers, our networked world will be a much richer and more egalitarian one, IMHO.

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 3, Interesting) by VLM on Sunday November 01 2015, @06:56PM

    by VLM (445) Subscriber Badge on Sunday November 01 2015, @06:56PM (#257213)

    Fairly good summary, AC. Something that annoys me about the linked article, kind of in parallel with what you're saying, is that its threat analysis seems to carefully implement NIH and top-down ivory-tower thinking... so all the work done on Freenet, I2P, and Tor? Just pretend nothing ever happened and it's blank-slate time. I dunno about that. It would seem there are a lot of useful development stories, bugs, and implementation issues buried in those older projects.

    Urbit is also annoying in having to give cutesy new names to all the old concepts. Renaming punctuation marks, ugh. So something like 6 >= 4 is both true and read aloud as "six gap tis four is a true expression" or some BS like that. Still, it's fun to lurk their mailing list and kinda follow along at a distance. The explanation of how ducts interface with the underlying Unix makes me want to start drinking and just replace the whole thing with about 5 lines of Perl. It also displays the general characteristic of Urbit where we'll start with like 500 bytes of functional assembly language that is almost too obvious to pay attention to, and the next step is suddenly hyper-complicated and makes my brain very sore. The learning curve is best expressed as fathoms of flat ocean bottom until suddenly you impact a rendering of sleeping Cthulhu at mach three and barely have time to say "whoops" before your head pounds the desk like a bug hitting a windshield. But other than that, not too bad.

    • (Score: 1, Interesting) by Anonymous Coward on Sunday November 01 2015, @07:13PM

      by Anonymous Coward on Sunday November 01 2015, @07:13PM (#257222)

      TL;DR: If your foundations are cracked, building on them is worse than pouring a fresh foundation.

      Fairly good summary, AC. Something that annoys me about the linked article, kind of in parallel with what you're saying, is that its threat analysis seems to carefully implement NIH and top-down ivory-tower thinking... so all the work done on Freenet, I2P, and Tor? Just pretend nothing ever happened and it's blank-slate time. I dunno about that. It would seem there are a lot of useful development stories, bugs, and implementation issues buried in those older projects.

      Thanks. I'm neutral as to how we get where we need to - however, I'm pretty adamant that the threat analysis is pretty darned important, because otherwise you end up with half-solutions, which in the world of security aren't solutions.

      Yes, there are lots of useful stories, lessons learned, and we should absolutely capture those as best we can. Experience is valuable, but trying to salvage systems which are broken by design is (as experience teaches us) a fool's errand.

      As for the urbit idea, I'm familiar with it, but I have major reservations with respect to the urbit architecture. For example, you don't necessarily want a persistent anything, and you don't want to be the host for anyone else's persistencies - because you can flood your system with iteratively created persistent data. But that's just one example, and this is not a good venue for diving down that rabbit hole.

  • (Score: 2) by NotSanguine on Sunday November 01 2015, @07:03PM

    by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday November 01 2015, @07:03PM (#257217) Homepage Journal

    I'd point out that real decentralization would require (I know, this is one of my hobby horses) ubiquitous symmetrical ISP/last mile network connections and strong network neutrality rules. If we have that, real decentralization is possible. Without it, individuals have a wheelbarrow for consumption and a teaspoon for production -- And that strongly favors centralization and continuation of the broadcast/consumption paradigm.

    Which means those with the fat pipes and wallets will control information flow and creativity.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 1, Insightful) by Anonymous Coward on Sunday November 01 2015, @07:20PM

      by Anonymous Coward on Sunday November 01 2015, @07:20PM (#257227)

      You're not perfectly correct. The right protocol running over whatever transport produces network neutrality because the provider doesn't have a clear (or hardly any) idea of what is really being transported.

      As for the problem of asymmetry, if you have a protocol which is topology-independent, that is a strong enabler for collective action on the ground, with people creating their own connections, pooling resources, and generally undermining attempts at control which hinge upon a false assumption of sole gateway ownership. In other words, the question of enablement is a two way street.

      Also, in the general sense, even upstream bandwidth is so hugely beyond what it was when the public internet was still in short pants, that it looks like breathtaking luxury by comparison. The problem is gatekeepers preventing people from offering services, not a fundamental inadequacy of the infrastructure.

      • (Score: 2) by NotSanguine on Sunday November 01 2015, @07:43PM

        by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday November 01 2015, @07:43PM (#257237) Homepage Journal

        Also, in the general sense, even upstream bandwidth is so hugely beyond what it was when the public internet was still in short pants, that it looks like breathtaking luxury by comparison. The problem is gatekeepers preventing people from offering services, not a fundamental inadequacy of the infrastructure.

        I think we're in violent agreement here. I take the concept of network neutrality to mean more than just not creating fast/slow lanes. I believe that real network neutrality is the provision of a "dumb" pipe that passes data regardless of its origin/destination. That includes unrestricted use of "servers" on residential connections.

        It's true that the 768 Kbps upload capacity I have now is more than an order of magnitude beyond the 56 Kbps I had twenty years ago. I do realize that (if I wanted to contract with an ISP which acts as a "gatekeeper," to use your word) I could probably have *slightly* more upload bandwidth. However, I prefer to contract with an ISP who allows me unfettered (i.e., I can run my own servers with my own static IP addresses if I choose, and I do) access to the larger Internet.

        However, the limited upload capacity seriously restricts my ability to be a producer rather than a consumer. Having symmetrical connections (along with the "dumb pipe" concept) would go a long way towards the goals of decentralization and digital liberty.

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 2) by fadrian on Sunday November 01 2015, @07:30PM

    by fadrian (3194) on Sunday November 01 2015, @07:30PM (#257230) Homepage

    Decentralization won't trump law. When it becomes illegal to run "terrorist" software... they will find you by other means. The solution is never technological because it can always be trumped by a large enough group of your neighbors. Understand that this is a social not a technological problem. Solve it using appropriate tools.

    --
    That is all.
    • (Score: 0) by Anonymous Coward on Sunday November 01 2015, @07:39PM

      by Anonymous Coward on Sunday November 01 2015, @07:39PM (#257235)

      The famous Second Amendment Solution, of course, how could we forget that.

      If everyone is always in lockstep on every front, then yes, it's all over.

      In most of the western world, there appears to be a gap through which one could create some of these technological solutions, at least in terms of social collaboration. Using that gap and seeing who bitches loudest is one way of starting to address the social issue, so it's still valid to work on this.

    • (Score: 2) by NotSanguine on Sunday November 01 2015, @08:02PM

      by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday November 01 2015, @08:02PM (#257241) Homepage Journal

      The solution is never technological because it can always be trumped by a large enough group of your neighbors. Understand that this is a social not a technological problem. Solve it using appropriate tools.

      Just because a solution isn't technological, that doesn't mean we can't use technology to support efforts to address a social problem. Especially when the problem relates to the control of information -- as this does.

      Creating new ways to share and disseminate information, while not the solution, can assist in publicizing the issues and creating support. I'd argue that control of information is more powerful than a government's monopoly on organized force.

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 0) by Anonymous Coward on Monday November 02 2015, @07:19AM

    by Anonymous Coward on Monday November 02 2015, @07:19AM (#257387)

    I've done a lot of research into this topic, and if anyone is actually serious about what the underpinnings would have to look like, let me know in a message.

    Right away, oh Anonymous Coward, who has left no way to message you. I am not the same AC, but you can message me back the same way. Nice to know that someone has been working on this, whatever it is, for so long with no results.

    • (Score: 0) by Anonymous Coward on Tuesday November 03 2015, @05:32AM

      by Anonymous Coward on Tuesday November 03 2015, @05:32AM (#257826)

      Sure. At least someone's showing an interest.

      What the hell, let's start.

      The first thing you need is a robust numbering system so that you can connect arbitrary networks of devices with reasonable confidence in avoiding numbering collisions. This is actually an old problem, with many approaches. This does not resolve maliciously created collisions, but there are other solutions for that. An approach I favour is 64 bit timestamp + independently salted random bits, totalling, oh, let's say 256 bits of number. Since it's not intended to be human-convenient, the fact that it's 64 hex digits is no big deal. Since with modern computing equipment it's trivial to create those ad lib, there'll be no shortage of numbers for the foreseeable future (64 bit timestamp) and it's a big boost to multiple identities.
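
      Rendered directly as code (assuming a millisecond timestamp, and treating the "independently salted" bits as plain CSPRNG output, which is a simplification):

```python
import secrets
import time

def make_node_id():
    """256-bit identifier: a 64-bit millisecond timestamp prefix plus
    192 random bits, mintable by any node in isolation."""
    ts = int(time.time() * 1000) & 0xFFFFFFFFFFFFFFFF  # 64-bit timestamp
    rand = secrets.randbits(192)                       # random remainder
    return "{:064x}".format((ts << 192) | rand)        # 64 hex digits

assert len(make_node_id()) == 64
```

      The timestamp prefix keeps the number supply effectively inexhaustible, while the 192 random bits make accidental collisions between independent nodes negligible.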

      To be clear, my problem hasn't been solving problems on the technical end (although that is an occasional challenge) but getting people to care. Most people just stare at me and loudly wonder why bother. I think Snowden answered that question pretty comprehensively.

      Anyway, if at any point you decide I'm not entirely replete with the crazy, and you want to hear more, you may feel free to say so. And send me a burner email address or something.

  • (Score: 0) by Anonymous Coward on Monday November 02 2015, @04:59PM

    by Anonymous Coward on Monday November 02 2015, @04:59PM (#257587)

    For somebody who claims to have researched the problem, you do not mention the best solution I am aware of:
    Cjdns [wikipedia.org].

    You also claim that no central naming system is possible. However, Namecoin exists: it is essentially a centralized database stored in a decentralized manner across mutually distrusting nodes. The biggest problems with Namecoin are that it cannot scale and that there is no remedy against cybersquatting.

    I think the biggest problem is that most new computers have been compromised since about 2006 (in the name of the "Protected Media Path"). Windows (still the most popular OS) now explicitly spies on you and pushes opaque updates that do not say what they do. There is an uneasy feeling that systemd may be a systematic compromise of GNU/Linux systems, but I am not aware of anybody who has been able to conclusively prove that yet.

    On the whole, your post reads almost like a shill post: "Encryption (and authentication) is hard! Don't even try!"
    Pseudo-edit: hmm, so does mine (but for a different reason).

    • (Score: 0) by Anonymous Coward on Tuesday November 03 2015, @05:20AM

      by Anonymous Coward on Tuesday November 03 2015, @05:20AM (#257824)

      You're kind of missing the point.

      CJDNS: great, it solves one problem. (Kinda.) What about the others? Given the very nature of IPvX, you'll need (at least) to update CJDNS to make it compatible with something which could even theoretically work.

      The good news is that yes, these things are feasible. That's not something I dispute.

      The problem with a centralised naming system is that it affords authorities of various stripes a place to strike: a place to send writs and warrants and thugs. It's a single point of failure. If it is decentralised, then... it's not centralised. So, yeah. Pick one.

      I'm actually saying: it is hard, sure, but it is possible to describe the terrain in a way which is useful, and can lead to an effective protocol design. I have no idea where you got this "don't even try!" stuff from.

  • (Score: 2) by JNCF on Tuesday November 03 2015, @06:48PM

    by JNCF (4317) on Tuesday November 03 2015, @06:48PM (#258050) Journal

    Some more-or-less unrelated questions:

    Do you feel that namecoin has satisfactorily solved the issue of creating a decentralised registry of human-readable addresses that can be associated with longer, machine-usable addresses? If not, why not?

    Do you feel that CJDNS has a reasonable system for using public keys as machine-usable addresses? If not, why not?

    I really like how CJDNS attempts to mask the flow of information by having each node store only a piece of the routing information for getting to the machine associated with a public key/address, but was unconvinced that the details would scale well. It seems like you were getting at something similar. How are you imagining that your network would route information from one node to another given a machine-usable address, if we want to minimize the amount of useful information being provided for tracing the physical location of each node?
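
    For what it's worth, the self-certifying part of CJDNS-style addressing is simple to sketch: derive the address by hashing the node's public key, so using an address implies knowledge of the matching key. (This is a simplification, not CJDNS's exact construction, and the key bytes below are stand-ins.)

```python
import hashlib

def address_from_pubkey(pubkey_bytes):
    """Derive a 128-bit address from a public key by double-hashing,
    so an address cannot be claimed without the corresponding key."""
    digest = hashlib.sha512(hashlib.sha512(pubkey_bytes).digest()).digest()
    return digest[:16].hex()  # 16 bytes, the size of an IPv6 address

assert len(address_from_pubkey(b"\x01" * 32)) == 32  # 32 hex chars
```

    Hashing only solves address ownership, though; routing to such addresses without leaking topology is the hard part raised above.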

    • (Score: 0) by Anonymous Coward on Wednesday November 04 2015, @03:29AM

      by Anonymous Coward on Wednesday November 04 2015, @03:29AM (#258256)

      Namecoin: It seems to work.

      CJDNS: I'm fine with that aspect of it.

      Your questions largely sidestep the larger question of the extent to which those actually provide an adequate transport, as opposed to a proof of concept of the relevant algorithms. Since the problem needs resolution from at least layer 2 of the OSI model up, the fact is that we do need a transport-level alteration.