
posted by martyb on Wednesday October 12 2016, @01:42PM   Printer-friendly
from the just-run-your-OWN-facebook-at-home dept.

The original purpose of the web and internet, if you recall, was to build a common neutral network which everyone can participate in equally for the betterment of humanity. Fortunately, there is an emerging movement to bring the web back to this vision and it even involves some of the key figures from the birth of the web. It's called the Decentralised Web or Web 3.0, and it describes an emerging trend to build services on the internet which do not depend on any single "central" organisation to function.

So what happened to the initial dream of the web? Much of the altruism faded during the first dot-com bubble, as people realised that an easy way to create value on top of this neutral fabric was to build centralised services which gather, trap and monetise information.

[...] There are three fundamental areas that the Decentralised Web necessarily champions: privacy, data portability and security.

Privacy: Decentralisation forces an increased focus on data privacy. Data is distributed across the network and end-to-end encryption technologies are critical for ensuring that only authorized users can read and write. Access to the data itself is entirely controlled algorithmically by the network as opposed to more centralized networks where typically the owner of that network has full access to data, facilitating customer profiling and ad targeting.
Data Portability: In a decentralized environment, users own their data and choose with whom they share this data. Moreover they retain control of it when they leave a given service provider (assuming the service even has the concept of service providers). This is important. If I want to move from General Motors to BMW today, why should I not be able to take my driving records with me? The same applies to chat platform history or health records.
Security: Finally, we live in a world of increased security threats. In a centralized environment, the bigger the silo, the bigger the honeypot is to attract bad actors. Decentralized environments are safer by their general nature against being hacked, infiltrated, acquired, bankrupted or otherwise compromised as they have been built to exist under public scrutiny from the outset.
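
To make the Privacy point concrete: end-to-end encryption means a message is sealed to the recipient's public key before it ever touches the network, so relaying nodes handle only ciphertext. A minimal sketch of that property, assuming Python and the PyNaCl library (the article prescribes no particular stack):

    # pip install pynacl -- illustrative only, not any specific Web 3.0 stack
    from nacl.public import PrivateKey, Box

    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    # Alice seals the message to Bob's public key; any node relaying `ct`
    # sees only ciphertext, so the network can store and forward without reading.
    ct = Box(alice, bob.public_key).encrypt(b"only bob can read this")

    # Bob opens it with his own private key plus Alice's public key.
    assert Box(bob, alice.public_key).decrypt(ct) == b"only bob can read this"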

In the Web 3.0 I want a markup tag that delivers a nasty shock to cyber-spies...


Original Submission

Related Stories

Mail Is Not Difficult (54 comments)

  • (Score: 4, Insightful) by barista on Wednesday October 12 2016, @02:13PM

    by barista (5219) on Wednesday October 12 2016, @02:13PM (#413439)

    In the Web 3.0 I want a markup tag that delivers a nasty shock to cyber-spies...

    Start an RFC.

    As an homage to the infamous Web 1.0 shocker, maybe it could be...

    <goatse></goatse>

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @02:45PM

      by Anonymous Coward on Wednesday October 12 2016, @02:45PM (#413457)

      How do you start an RFC?

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @03:08PM

        by Anonymous Coward on Wednesday October 12 2016, @03:08PM (#413474)

        Open a pub that caters exclusively to knuckle-dragging lorry drivers.

  • (Score: 5, Insightful) by Thexalon on Wednesday October 12 2016, @02:28PM

    by Thexalon (636) on Wednesday October 12 2016, @02:28PM (#413446)

    I'm still quite unclear about what "Web 2.0" was supposed to involve other than marketing nonsense, so I'm hereby declaring the creation of Web 4.0, which is a vague collection of ideas that sound really good but don't have anything remotely resembling a technical implementation yet. I'm sure TEDx and the press will go wild.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 5, Funny) by WizardFusion on Wednesday October 12 2016, @02:40PM

      by WizardFusion (498) on Wednesday October 12 2016, @02:40PM (#413453) Journal

      Other marketing names may include...

      - iWeb
      - Web X
      - Internet Of Web (IoW)
      - Web As A Service (WaaS)

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @02:48PM

        by Anonymous Coward on Wednesday October 12 2016, @02:48PM (#413460)

        Cyberweb.

        • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @03:10PM

          by Anonymous Coward on Wednesday October 12 2016, @03:10PM (#413477)

          WebCloud

          • (Score: 2, Funny) by Anonymous Coward on Wednesday October 12 2016, @04:02PM

            by Anonymous Coward on Wednesday October 12 2016, @04:02PM (#413498)

            I'll just leave this here [xkcd.com]...

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @03:42PM

        by Anonymous Coward on Wednesday October 12 2016, @03:42PM (#413488)

        Internet of Webs as a Service v(X+1).0

    • (Score: 2, Insightful) by Francis on Wednesday October 12 2016, @02:46PM

      by Francis (5544) on Wednesday October 12 2016, @02:46PM (#413458)

      Depends whom you ask. It was either about social networking or about massive amounts of ads. And because the social networking sites were mostly all free, I'm not sure that the distinction is particularly important. Either way, web2.0 was about handing your personal information to all sorts of strangers.

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @02:52PM

        by Anonymous Coward on Wednesday October 12 2016, @02:52PM (#413461)

        I thought Web 2.0 was about sites built with tons of JavaScript, XMLHttpRequest (the one thing that survived from ActiveX) and Ajax.

        • (Score: 4, Insightful) by lizardloop on Wednesday October 12 2016, @03:14PM

          by lizardloop (4716) on Wednesday October 12 2016, @03:14PM (#413479) Journal

          That was my understanding of it. Web 1.0 implied that pages were static and could not accept new content without the involvement of the original author.
          Web 2.0 seemed to imply the idea of "anyone" being able to upload content to websites.
          I guess Web 3.0 implies that the hosting is then distributed.

           I've theorised for a while now that our relatively open web will be rapidly narrowed until it resembles cable TV: ISPs will offer the internet as a series of whitelisted URLs, and anything outside of that will be prohibitively expensive. Facebook have already attempted to do this with their "free basics" service.

           This narrowing of content to pre-approved URLs would seem to suit the powers that be. I'm just amazed it hasn't happened yet. I'm guessing only because of their sheer incompetence. But they'll get around to it eventually.

          If you want a truly decentralised web you also need to get rid of ISPs. They are choke points that are easily targeted by corporate and political interests. But then you're looking at meshnets and there are a whole bunch of hurdles to getting everyone using those.

          • (Score: 3, Insightful) by bob_super on Wednesday October 12 2016, @05:26PM

            by bob_super (1357) on Wednesday October 12 2016, @05:26PM (#413548)

            Web 1.0 : read this page.
            Web 2.0 : interact with this page.
            Web 3.0 : surrender yourself to this page.

          • (Score: 3, Interesting) by Phoenix666 on Wednesday October 12 2016, @05:38PM

            by Phoenix666 (552) on Wednesday October 12 2016, @05:38PM (#413557) Journal

            But then you're looking at meshnets and there are a whole bunch of hurdles to getting everyone using those.

            Maybe, maybe not. Build it into an OS by default and that takes care of the number of nodes in a mesh network and getting people using them. That is, for the people using them the mesh will be invisible. All they'll perceive is that they can now surf anytime, anywhere without having to pay for a data plan, as long as point A can connect to point B somehow.

            For the other hurdles with mesh networks, perhaps we can fashion something akin to Folding@home or SETI@home for router firmware, to mitigate throughput and latency issues.
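
            For a sense of how simple the relay core of such a mesh can be, here is a toy flood-with-deduplication sketch in Python (names and structure are mine, not any particular firmware's):

                import hashlib

                class MeshNode:
                    """Toy mesh relay: re-broadcast each message once, dedup by content hash."""
                    def __init__(self, name: str):
                        self.name = name
                        self.neighbors: list["MeshNode"] = []
                        self.seen: set[str] = set()

                    def receive(self, payload: bytes) -> None:
                        msg_id = hashlib.sha256(payload).hexdigest()
                        if msg_id in self.seen:
                            return                  # already relayed; dropping breaks loops
                        self.seen.add(msg_id)
                        for peer in self.neighbors:
                            peer.receive(payload)   # naive flood; real firmware rate-limits

                # a -- b -- c: a message injected at `a` reaches `c` with no central router
                a, b, c = MeshNode("a"), MeshNode("b"), MeshNode("c")
                a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
                a.receive(b"hello mesh")
                assert hashlib.sha256(b"hello mesh").hexdigest() in c.seen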

            --
            Washington DC delenda est.
            • (Score: 2) by julian on Thursday October 13 2016, @05:14AM

              by julian (6003) Subscriber Badge on Thursday October 13 2016, @05:14AM (#413773)

              The things people are used to, and by people I mean the masses of normies, are media-rich and scripting-heavy pages that look engaging (the way a casino slot machine is engaging to some people) and provoke a positive emotional response. Those sorts of web pages, like Facebook, Twitter, and YouTube, require huge amounts of reliable, fast bandwidth. Mesh networks can't provide that without a huge number of users, and huge numbers of users won't leave the traditional ISPs unless greener pastures already exist.

              For people who appreciate simpler, information-heavy pages, as opposed to media or mere "content" heavy ones, a mesh network could work at a lower critical mass, but there would be many isolated clusters of nodes. You might have every large metro area as an island meshnet. Neighborhoods could go dark and come back as people move around. This gets better if the mesh network is an adjunct to the internet to bridge the gaps, but then someone is still paying for, or misappropriating, a traditional ISP connection.

              A meshnet would be the way to go if you were to redesign the Internet today as a "drop in" replacement. It's out of reach if that goal has to be obtained through contiguous and functional evolutionary steps.

              • (Score: 0) by Anonymous Coward on Thursday October 13 2016, @04:21PM

                by Anonymous Coward on Thursday October 13 2016, @04:21PM (#413949)

                I like the way you guys think regarding the issues related to mesh networks. Part of the issue is also the complexities of the tools available to start offering mesh networks and bridges from which mesh network can grow in a neighborhood; it's complicated, even for somebody not completely incompetent. For instance I have a Buffalo DDWRT router with a good cable connexion, and I've always thought that I would agree to provide a public access and / or bridge if:

                a) I can use a separate unencrypted SSID for users or other nodes to connect to my router, distinct from my own home SSID
                b) I can limit the speed and monthly bandwidth of that SSID, to avoid retaliation from my ISP if I use too much bandwidth
                c) I can use my proxy service to tunnel everything to somewhere else, again to avoid retaliation from my ISP (or other harassing bureaucrat types) if somebody uses my public access's IP address in ways those who wield power frown upon.

                I really would love to see a tutorial along these lines; I'm moderately competent with my equipment, but not enough to figure all of that out in any reasonable time. I'm sure under these conditions, many like me could agree to start offering public access, which could be extended with other nodes more easily.

                Is there anybody here who can confirm if this is technically feasible under ddwrt? Any hints?
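
                For what it's worth, requirement (b) is at heart a token-bucket rate limit. A toy Python sketch of just that mechanism (an actual dd-wrt setup would use a virtual SSID plus tc/HTB traffic shaping, not Python):

                    import time

                    class TokenBucket:
                        """Allow `rate` bytes/sec on average, with bursts up to `capacity` bytes."""
                        def __init__(self, rate: float, capacity: float):
                            self.rate, self.capacity = rate, capacity
                            self.tokens = capacity
                            self.last = time.monotonic()

                        def allow(self, nbytes: int) -> bool:
                            now = time.monotonic()
                            # Refill in proportion to elapsed time, never beyond capacity.
                            self.tokens = min(self.capacity,
                                              self.tokens + (now - self.last) * self.rate)
                            self.last = now
                            if nbytes <= self.tokens:
                                self.tokens -= nbytes
                                return True
                            return False    # over budget: drop or queue the guest's packet

                    guest_limit = TokenBucket(rate=128_000, capacity=512_000)  # ~1 Mbit/s guest SSID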

                -Very tin foil hat

      • (Score: 2) by julian on Wednesday October 12 2016, @10:01PM

        by julian (6003) Subscriber Badge on Wednesday October 12 2016, @10:01PM (#413665)

        It was either about social networking or about massive amounts of ads. And because the social networking sites were mostly all free, I'm not sure that the distinction is particularly important.

        This is why I believe blocking ads isn't just a good idea for sanity and security; it's a moral obligation to starve the beast.

        This business model, ad-supported content, is simply too perverse to be tolerated. It creates an adversarial relationship between website operators and website visitors. It incentivizes owners to actively attack their users' privacy and security. It fundamentally cannot be done ethically, even in principle.

        • (Score: 1) by Francis on Thursday October 13 2016, @02:04PM

          by Francis (5544) on Thursday October 13 2016, @02:04PM (#413899)

          I generally block everything I can find, but I've found that the few sites I value enough to support by disabling ads tend to run like molasses because of the myriad poorly designed scripts that take ridiculous amounts of resources to run.

          A simple text or image ad is enough. The only javascript involved should be writing the link for the image or writing the ad. That stupid intellitext thing that expects to run over the whole page and highlight words with links is a particularly egregious example as it doesn't just slow the browser to a crawl, it also interferes with the use of the page.

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:30PM

      by Anonymous Coward on Wednesday October 12 2016, @04:30PM (#413514)

      Agreed. Let's see a practical example with sample data structures and a typical work-flow from machine to machine during the sample transaction between two or more people.

      Note that the inventor of the wiki, Ward Cunningham, recently started a "federated wiki" project to allegedly replace centralized wikis with independent nodes. It makes zero sense to me on a practical level. It appears that all participants have to own their own server. A handful of server-fiddling geeks may enjoy it, but that will limit participation. http://fed.wiki.org [wiki.org]

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday October 12 2016, @04:46PM

        by Anonymous Coward on Wednesday October 12 2016, @04:46PM (#413523)

        One option is to use a torrent / blockchain style hosting platform. With increasingly cheap storage space individuals would allocate some amount of their local storage to help the cloud. Ha! It would ACTUALLY be "the cloud" instead of the bullshit "clouds" which are just server farms. Geeks could run their own servers to serve as a backbone for such systems, but they would not technically be required.
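
        The kernel of that idea is content addressing: every chunk is stored and fetched under the hash of its own bytes, so a stranger's node can host your data but cannot tamper with it undetected. A minimal sketch in Python (all names are illustrative):

            import hashlib

            class ChunkStore:
                """Toy content-addressed store: data is keyed by its own SHA-256 hash."""
                def __init__(self):
                    self._chunks: dict[str, bytes] = {}

                def put(self, data: bytes) -> str:
                    key = hashlib.sha256(data).hexdigest()
                    self._chunks[key] = data
                    return key                  # the key doubles as an integrity proof

                def get(self, key: str) -> bytes:
                    data = self._chunks[key]
                    # Anyone can verify the host didn't tamper: rehash and compare.
                    if hashlib.sha256(data).hexdigest() != key:
                        raise ValueError("chunk corrupted or forged")
                    return data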

    • (Score: 2) by Scruffy Beard 2 on Wednesday October 12 2016, @04:35PM

      by Scruffy Beard 2 (6030) on Wednesday October 12 2016, @04:35PM (#413515)

      IIRC, "Web 2.0" was an O'Reilly trademark.

      The best definition I have seen: "When the back button does not work."

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:48PM

      by Anonymous Coward on Wednesday October 12 2016, @04:48PM (#413524)

      That is simple, Web 2.0 was about increasing the interactions of people online. There were always chat rooms, forums, etc. but 2.0 was about sites where users generated content or interacted with content. Not a new concept, just an expansion. Mostly marketing bullshit to get investment in such platforms, but it was still a shift.

  • (Score: 5, Touché) by PizzaRollPlinkett on Wednesday October 12 2016, @02:47PM

    by PizzaRollPlinkett (4512) on Wednesday October 12 2016, @02:47PM (#413459)

    You wanted it. We had an open, noncommercial, decentralized Internet in the mid-90s. People voted for a closed, walled-garden dominated, commercial Internet. No one forced anyone to abandon the open web and open software development for closed platforms and walled gardens. They did it because that's what they wanted. If we built this whatever-it-is, either no one would use it (remember the decentralized social media alternative? me either) or a few years later it would be just like the Internet today. Like Lemmy said, "But don't forget you made the choice, / You made your mark, you raised your voice" and you have what you have today.

    --
    (E-mail me if you want a pizza roll!)
    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:50PM

      by Anonymous Coward on Wednesday October 12 2016, @04:50PM (#413525)

      All the open social platforms I saw had annoying barriers to entry. It should not be any more difficult than visiting a page, creating your account in a few simple steps, then logging on and participating. We didn't have blockchain software back then which is what a lot of the decentralized projects rely on now.

    • (Score: 2) by mcgrew on Wednesday October 12 2016, @04:58PM

      by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday October 12 2016, @04:58PM (#413529) Homepage Journal

      The open, noncommercial, decentralized Internet is still around; example [mcgrewbooks.com]. Er, S/N is an even better example. It's just that the open, noncommercial, decentralized Internet has been buried in commercial content, making it harder to find.

      I do miss the days when the only ads we had to bitch and moan about were small banner ads.

      --
      mcgrewbooks.com mcgrew.info nooze.org
      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @05:06PM

        by Anonymous Coward on Wednesday October 12 2016, @05:06PM (#413535)

        How is SN a decentralized site? Open and noncommercial, sure. Decentralized? I don't think so.

        • (Score: 2) by mechanicjay on Wednesday October 12 2016, @05:51PM

          by mechanicjay (7) <reversethis-{gro ... a} {yajcinahcem}> on Wednesday October 12 2016, @05:51PM (#413562) Homepage Journal

          Fair enough. All our nodes are running in the same datacenter. We could decentralize a bit by moving redundant nodes to different geographic areas (makes note to discuss this with people). I realize, though, that this type of decentralization isn't quite what the article is talking about, but it is important for resiliency and redundancy.

          --
          My VMS box beat up your Windows box.
          • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @06:05PM

            by Anonymous Coward on Wednesday October 12 2016, @06:05PM (#413567)

            I would love to see SN roll out a test of one of the truly distributed systems, where users can play host to the data. I'll have to do a little research to find the examples already being made (and sadly they're mostly JS based) but it seems like something this site would be perfect for. The only sticky problem I can see is user authentication and info storage... even with a good crypto scheme I don't think too many people would be happy having their account details sent to every other user.

            • (Score: 2) by fleg on Thursday October 13 2016, @05:52AM

              by fleg (128) Subscriber Badge on Thursday October 13 2016, @05:52AM (#413781)

              >user authentication

              to start with maybe you could just use it for AC's?

        • (Score: 3, Interesting) by NCommander on Wednesday October 12 2016, @06:03PM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Wednesday October 12 2016, @06:03PM (#413566) Homepage Journal

          We've looked at providing a bi-directional gateway to USENET in the past, but I never got it finished due to technical hangups and the fact that NNTP/Usenet is a miserable protocol for anything dynamic, since you have to manage cancels and such.

          If someone wants to take the existing code and make it work, I'll be glad to feed it into USENET.

          --
          Still always moving
        • (Score: 2) by janrinok on Wednesday October 12 2016, @06:09PM

          by janrinok (52) Subscriber Badge on Wednesday October 12 2016, @06:09PM (#413569) Journal

          It is not decentralized, that is true. But inasmuch as TFS points out the need for people to retain control of their own data, we do go a fair way towards achieving this. We allow comments from Anon Cowards, and we do not keep connection data in a way that can be linked back to the originator - it is hashed so that ACs can be deconflicted - so we cannot give IP data to anyone who demands it. We simply haven't got it, as far as I know. And, again as far as I know, we do keep connection data from anyone attacking the site (spam or DDoS, for example), but only so that we can block that IP for our own protection.

          I'm not the person to say how easy it would be to make SN decentralized - not an area in which I have any knowledge or expertise - but if anyone knows of any software that might be useful, it could be worth a submission and discussion. I'm fairly certain, though, that we don't have the capacity to go it alone with such a concept in our current situation.

          If you compare SN to the more famous social media sites we do protect personal data, we do not sell or otherwise use that data, and we don't cooperate with anyone trying to track our users. We meet our legal obligations but don't have much information to give to anyone. But, on the other hand, we are unlikely to be floated on the stock exchange in the foreseeable future either...
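
          For readers wondering what "hashed so that ACs can be deconflicted" can look like, here is a sketch of the general technique - a keyed hash of the IP. This is purely my illustration, not SoylentNews's actual code:

              import hashlib, hmac, os

              SITE_SECRET = os.urandom(32)   # hypothetical per-site secret, kept off disk

              def ac_token(ip: str) -> str:
                  """Equal tokens mean the same anonymous poster, but without the secret
                  the IP cannot be recovered, so there is nothing useful to hand over."""
                  return hmac.new(SITE_SECRET, ip.encode(), hashlib.sha256).hexdigest()[:12]

              assert ac_token("203.0.113.7") == ac_token("203.0.113.7")   # deconflictable
              assert ac_token("203.0.113.7") != ac_token("203.0.113.8")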

          • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @06:46PM

            by Anonymous Coward on Wednesday October 12 2016, @06:46PM (#413579)

            That is all fine and good, but users rely on the integrity of the admins. It would be nice if the features were baked into code. It is a huge project, but one I would be willing to invest time in. The biggest problem I see with decentralization is that it becomes much harder to filter spam bots, and user data is an issue. Strong crypto would be good, but doesn't quite replace the single protected location of a server.

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @08:11PM

      by Anonymous Coward on Wednesday October 12 2016, @08:11PM (#413619)

      Email (SMTP/POP3/IMAP) - Still around, but many hosts are filtered completely by the big email providers. Some even lag messages by hours or days.
      Usenet (NNTP) - The original discussion forums, and the original BitTorrent as well. Pushed to closure between the 90s and early '00s as part of both the crackdown on pirated (insert item here) and excessive bandwidth usage by people abusing usenet messages to send ascii-encoded binary files as 'attachments', taking up more bandwidth than most ISPs had allocated for their users. Resulted in first the dropping of usenet binary groups, then an ever-increasing number of non-binary groups (when those users migrated), and eventually culminated in ISPs dropping it as a pro bono service.
      Jabber (XMPP) - The 'future' of IMing. Google, Facebook, and others all hopped on the bandwagon. For a while it seemed poised to take over the IM industry and provide for it what SMTP and company provided for email. It had email-like addresses, virtual business cards, directory services for corporate presences, etc. Destroyed thanks to sheeple joining proprietary services instead (Kik, originally XMPP, moved to a proprietary protocol; GTalk and Facebook Messenger as well, which migrated away from XMPP access and federation). Blamable on user apathy and spam (which should have been simple enough to fix by requiring authorization before messages could be sent).
      Diaspora, Friendica, others - Distributed facebook/myspace/livejournal alternatives. These particular implementations were late to the party, but allowed moving your profile data between services. Problems? Issues with scraping, but moreover everyone was already tied to their current social network presence and unwilling to move. The same issue affected Google's facebook clone.

      Gnutella/eDonkey/Torrents/etc - Lack of anonymity, and new (mostly proprietary) alternatives for users.

      I am sure there are other examples, but the gist is that user apathy, spam/excessive traffic, and legal hurdles have all conspired to kill open technologies and let proprietary solutions continue to dominate. It is also, to a certain degree, proprietary tribalism, the same as all the people continuing to use Windows or OSX when free alternatives that COULD offer them choice are available.

    • (Score: 2) by dmc on Thursday October 13 2016, @03:21AM

      by dmc (188) on Thursday October 13 2016, @03:21AM (#413751)

      No one forced anyone to abandon the open web

      I don't think that's entirely true, though I am stretching things a bit (to the extent that so-called "business class" or "server hosting allowed" internet access subscriptions exist, usually at much higher prices than ordinary residential subscribers are used to paying). I do believe that such a hurdle presents the primary impediment to the submitter's "web 3.0". It surprises me that Snowden doesn't seem to notice the issue. In the absence of that acknowledgement, this "web 3.0" idea sounds to me like a centralized set of power players trying to deploy their crafted equivalent of the more general idea I advocate - "let everyone run their own mail/web/usenet/etfuckingetera server if they want to without having to pay for the lexus lane". Once you have that large a set of potential users, testers, and codevelopers for free and open source server software - oh, it will be a nice day. But the same playing field, with users, testers, and codevelopers having to pay double or more the price for their internet access if they want to use such decentralized applications, will result in exactly what we see today: massive continued centralization that spooks and advertisers are all for keeping just like it is.

      http://apps.fcc.gov/ecfs/document/view?id=7522219498 [fcc.gov]
      http://cloudsession.com/dawg/downloads/misc/kag-draft-k121024.pdf [cloudsession.com]
      https://www.wired.com/2013/07/google-neutrality/ [wired.com]
      http://arstechnica.com/information-technology/2013/10/google-fiber-now-explicitly-permits-home-servers [arstechnica.com]
      https://lwn.net/Articles/658006/ [lwn.net]

  • (Score: -1, Troll) by Anonymous Coward on Wednesday October 12 2016, @02:55PM

    by Anonymous Coward on Wednesday October 12 2016, @02:55PM (#413463)

    I WANT TO PULL IT OUT!

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @03:02PM

      by Anonymous Coward on Wednesday October 12 2016, @03:02PM (#413471)

      Yeah I know, sometimes they get real itchy and you can't do a thing about it.

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @03:17PM

        by Anonymous Coward on Wednesday October 12 2016, @03:17PM (#413481)

        Just think about sandpaper.

    • (Score: 2) by takyon on Wednesday October 12 2016, @03:10PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday October 12 2016, @03:10PM (#413476) Journal

      You're posting as anonymous now, but with the right prodding I'm sure you could be convinced to give up any semblance of privacy, jack yourself into the Internet of Things (IoT), and become part of a neural VR wonderland (uurgh, Keanu Reeves died for our sins!).

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Insightful) by Anonymous Coward on Wednesday October 12 2016, @03:10PM

    by Anonymous Coward on Wednesday October 12 2016, @03:10PM (#413478)

    Actually, I'm getting back to my roots by pretty much just living a Web 1.0 life again. As much as possible I avoid paywalls, Java, Flash..... I do my damndest not to shop online, nor will I ever expose my banking, etc. via the Internet, for safety. Maybe what we need is a search engine that basically says, "If it uses 'Web 2.0' technology... screw it, we won't index it."

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:01PM

      by Anonymous Coward on Wednesday October 12 2016, @04:01PM (#413497)

      The entire index of that search engine consists of a single entry, https://example.org/ [example.org]. It's top ranked because it's both http and https...

  • (Score: 2, Insightful) by shrewdsheep on Wednesday October 12 2016, @03:28PM

    by shrewdsheep (5215) on Wednesday October 12 2016, @03:28PM (#413485)

    TL;DR. As usual with catchy articles, fundamental confusion seems to be pervasive. IMO, the application side is only as centralized as you choose it to be, except maybe for search, where there are truly few offerings. Host your own data. The bigger decentralization effort needs to take place in the infrastructure, where there are currently a few hubs which, if taken out, would severely reduce throughput (cf. the New York outage during 9/11).

    • (Score: 2) by mcgrew on Wednesday October 12 2016, @05:04PM

      by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday October 12 2016, @05:04PM (#413533) Homepage Journal

      "TL;DR" means "I'm an aliterate and hate reading" (and no, that's not a misspelling). As it says "I'm aliterate" it also says "I'm ignorant", because outside the School of Hard Knox, almost all information is written. To paraphrase Twain, an aliterate person has no advantage over an illiterate person.

      fundamental confusion seems to be pervasive

      I see that you're NOT an ignorant aliterate, so it was nice of you to post that comment for those who are.

      --
      mcgrewbooks.com mcgrew.info nooze.org
      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @06:59PM

        by Anonymous Coward on Wednesday October 12 2016, @06:59PM (#413587)

        Not everything is worth reading. Just because you don't wish to read one thing doesn't mean you hate reading in general, so you've presented a false dichotomy.

      • (Score: 1) by shrewdsheep on Wednesday October 12 2016, @08:04PM

        by shrewdsheep (5215) on Wednesday October 12 2016, @08:04PM (#413616)

        You are correct in my particular case. However, in general TLDR means: this is just an opinion of mine that might not be entirely related to the article. Others may offer a more informed and/or precise assessment.

      • (Score: 2) by Marand on Thursday October 13 2016, @01:53AM

        by Marand (1081) on Thursday October 13 2016, @01:53AM (#413727) Journal

        "TL;DR" means "I'm an aliterate and hate reading"

        It could also mean "I started reading until I realised I don't find this article or topic to be interesting, so the effort compared to the potential payoff isn't worth the trouble." In fact, that usually seems to be the case when someone says it, from my experience.

        It's unfair to assume the problem always lies with the reader: a lot of writing fails to provide a compelling introduction, and dry prose that never hooks the reader can easily lose anyone who isn't already invested in the topic for some other reason.

        • (Score: 2) by mcgrew on Thursday October 13 2016, @06:19PM

          by mcgrew (701) <publish@mcgrewbooks.com> on Thursday October 13 2016, @06:19PM (#414013) Homepage Journal

          Hmmm... "Too long; didn't read" is a poor substitute for "I started reading until I realised I don't find this article or topic to be interesting, so the effort compared to the potential payoff isn't worth the trouble." Rather, in that position I'd just say "clickbait" or "poorly written". Or perhaps we need a new acronym, such as EWA for "Entirely worthless article".

          TL;DR is often followed by a snippet of what was "too long", ostensibly for those who hate to read.

          --
          mcgrewbooks.com mcgrew.info nooze.org
          • (Score: 2) by Marand on Friday October 14 2016, @12:03AM

            by Marand (1081) on Friday October 14 2016, @12:03AM (#414116) Journal

            Maybe, but it also makes a sort of sense. We have instant access to so many things we can read that it becomes necessary to filter out as much of it as possible to make the load manageable. You can't possibly read everything, so you look for that hook in longer pieces to see if it's worthwhile.

            To make matters worse, online content's lack of editorial oversight is sorely felt. There's often no one to say "hey, cut the fat", so people write bloated articles or painfully unreadable comments. Self-editing is hard, so people are often terrible at it, or just don't bother. It's even a problem in online "journalism" these days, I guess because the writers want to show off their vocabulary a bit, maybe add a bit of flowery prose to make things "interesting".

            So it's not always that it's clickbait, or even poorly written, just simply too long to be worth reading.

            TL;DR is often followed by a snippet of what was "too long", ostensibly for those who hate to read.

            Right, it's also become a shorthand by the writer for providing that "elevator pitch" style hook you need in longer writing. Rather than organise the writing to provide a good "introduction, content, conclusion" flow, you can throw a "TL;DR: [twitter-length explanation]" afterward to get the meat of it to the readers and let them decide whether to read the whole thing. Hell, I do it sometimes too, even though I also try not to ramble in my comments. It makes a nice elevator pitch to "sell" the person why they might want to read more. That can be enough to get your introduction read, which could encourage reading the whole thing.

  • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:06PM

    by Anonymous Coward on Wednesday October 12 2016, @04:06PM (#413501)

    The original purpose of the internet was dreamed up by the good folks in ARPA, which was to provide a resilient communications system for the military; especially nuclear sites in the event of nuclear war.

    Any attempt to build a decentralised system on IPv[46] is pretty well doomed, because IPv[46] is already centralised. You have central numbering, central naming and some other centralised services as well in things like encryption. Sure, there's some delegation, but it's all contingent on the moods of the people up top.

    Come back when they have a new transport and naming infrastructure. Until then it's all masturbation.

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:22PM

      by Anonymous Coward on Wednesday October 12 2016, @04:22PM (#413511)

      Until then it's all masturbation.

      ...something something out of my cold, dead hands...

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @04:59PM

      by Anonymous Coward on Wednesday October 12 2016, @04:59PM (#413530)

      Only a wireless system can work in this case, and you quickly run into trouble with interference if you want people to run their own mesh network. Taps can always be made into the flow of data, so your argument is kind of useless. The only real answer here is encryption and tech like Tor that makes every connection near impossible to pin down.

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @06:52PM

        by Anonymous Coward on Wednesday October 12 2016, @06:52PM (#413584)

        Enh, not really.

        You could have people putting together ad hoc wired connections as well. Down to two soup cans and some string across your back yard fence, if you like.

        Yes, encryption needs to be involved, as does a distributed PKI and topology-insensitive routing and so on ... it's a big topic.

        But you don't strictly need wireless.

    • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @05:27PM

      by Anonymous Coward on Wednesday October 12 2016, @05:27PM (#413549)

      Addressing is centralized because it would be an unmanageable clusterfuck otherwise. Who really cares about centralized naming? If you want alternate, decentralized naming, you can run your own alternate, decentralized DNS.

      • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @06:48PM

        by Anonymous Coward on Wednesday October 12 2016, @06:48PM (#413581)

        Centralised naming means a court can order the naming authority to do stuff.

        DNS is not tough enough.

        Alternative DNS is not tough enough, at most you're just shifting the jurisdiction of vulnerability. Because it has a centralised source, one can also mandate disconnections from that source at a national level.

        Decentralised naming is possible (it's basically just a decentralised database) - you just need to revamp the infrastructure for it. There are a couple of approaches, of which blockchains are just one.
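
        As a sketch of how small that decentralised database can be, here is a toy first-come-first-served name registry in the Namecoin mould, assuming Python and PyNaCl signatures (every name here is mine, and agreeing on the order of updates - e.g. via a blockchain - is the actual hard part):

            from nacl.signing import SigningKey, VerifyKey
            from nacl.exceptions import BadSignatureError

            # name -> (owner's public verify key, current record, e.g. an address)
            registry: dict[str, tuple[bytes, str]] = {}

            def register(name: str, owner: VerifyKey, record: str) -> bool:
                """First come, first served: no court-orderable authority assigns names."""
                if name in registry:
                    return False
                registry[name] = (owner.encode(), record)
                return True

            def update(name: str, signed_record: bytes) -> bool:
                """Only the key that registered a name may change its record."""
                owner_bytes, _ = registry[name]
                try:
                    new_record = VerifyKey(owner_bytes).verify(signed_record)
                except BadSignatureError:
                    return False
                registry[name] = (owner_bytes, new_record.decode())
                return True

            # Every node replays the same log of register/update operations,
            # so all nodes converge on the same mapping.
            alice = SigningKey.generate()
            register("example.mesh", alice.verify_key, "203.0.113.7")
            update("example.mesh", bytes(alice.sign(b"198.51.100.9")))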

  • (Score: 2) by Geezer on Wednesday October 12 2016, @05:00PM

    by Geezer (511) on Wednesday October 12 2016, @05:00PM (#413532)

    1. Build shiny new open, secure web.
    2. Shiny new open, secure web gets compromised, exploited, monetized, and commercialized.
    3. Goto (1)

    • (Score: 2) by JNCF on Wednesday October 12 2016, @05:29PM

      by JNCF (4317) on Wednesday October 12 2016, @05:29PM (#413552) Journal

      You mean it works like politics?

  • (Score: 2, Interesting) by Anonymous Coward on Wednesday October 12 2016, @05:29PM

    by Anonymous Coward on Wednesday October 12 2016, @05:29PM (#413551)

    What about an open standard for "messages"? People could send and republish others' messages as they see fit, kind of like Twitter, but with a service of their choosing and longer messages. One would still typically "rent" services for such broadcasting, but no one company would own or house the messages. Such a service would support "Standard Message Protocol X" (SMPX), as a reference name for now.

    A checksum and originating source (URL) would be included with each message so that re-broadcasters could verify the authenticity* (if desired). The verification service would be a standard feature of SMPX. One is not forced to re-broadcast only verified messages, but you'll have more cred if you do. Checking other sites' cred by back-tracing their re-published content would also be a standard feature of SMPX.

    Also, one doesn't have to re-publish the entire message; they can optionally republish only the title and perhaps synopsis with a hyperlink to the original or alternative source.

    Draft message data structure (a code sketch follows below):

    - Title (required)
    - Synopsis
    - Body text
    - Keywords (and/or categories)
    - Image URL (such as for thumbnails)
    - Author actual name
    - Author handle (pseudo-name)
    - Author contact (email, Twitter account, etc.)
    - Date/time created
    - Revision date/time (required, same as created if new)
    - Checksum (about 20 characters)
    - Source URL
    (A max size for some of these fields would be part of the standard.)

    * Checksums are not perfect for integrity checking, but can serve as a quick and dirty check supplemented with spot-checking. For example, if your SMPX service wanted to quickly verify a given site, it could check most or all checksums and also do a random spot-check of the full text (structure) of selected messages. One could optionally verify the full (public) data of any SMPX source, but that could overwhelm both the sending and receiving SMPX services. The combo described above is a decent compromise versus full scanning. A typical scan result info set would resemble:

    SMPX site examined: fredsmith-smpx-site.smpx
    Total checksums listed: 1301 (as shared between services)
    Total checksums inspected: 1301
    Total checksums passed: 1301
    Total messages full-data spot-checked: 50
    Total messages full-data spot-check passed: 50
    Total integrity problems found: 0
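
    Here is a minimal sketch of such a message and its checksum, assuming Python, JSON canonicalisation, and a truncated SHA-256 standing in for the draft's ~20-character checksum (these concrete choices are mine, not part of the proposal):

        import hashlib
        import json
        from dataclasses import asdict, dataclass, field

        @dataclass
        class SmpxMessage:
            # Field names follow the draft structure above.
            title: str                              # required
            created: str = ""
            revised: str = ""                       # required; equals `created` if new
            synopsis: str = ""
            body: str = ""
            keywords: list[str] = field(default_factory=list)
            image_url: str = ""
            author_name: str = ""
            author_handle: str = ""
            author_contact: str = ""
            source_url: str = ""

            def checksum(self) -> str:
                """~20-character digest over a canonical serialisation of the fields."""
                canonical = json.dumps(asdict(self), sort_keys=True)
                return hashlib.sha256(canonical.encode()).hexdigest()[:20]

        def spot_check(msg: SmpxMessage, published_checksum: str) -> bool:
            """What a re-broadcaster would run against the originating site's listing."""
            return msg.checksum() == published_checksum

        m = SmpxMessage(title="Hello SMPX", created="2016-10-12T17:29Z",
                        revised="2016-10-12T17:29Z",
                        source_url="https://fredsmith-smpx-site.smpx/1")
        assert spot_check(m, m.checksum())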

    • (Score: 2) by Phoenix666 on Wednesday October 12 2016, @11:22PM

      by Phoenix666 (552) on Wednesday October 12 2016, @11:22PM (#413691) Journal

      I think that's the right way to go, and in the spirit of what the article is proposing.

      --
      Washington DC delenda est.
  • (Score: 1, Informative) by Anonymous Coward on Wednesday October 12 2016, @07:37PM

    by Anonymous Coward on Wednesday October 12 2016, @07:37PM (#413603)

    You may want to have a look at Askemos/BALL [askemos.org].

    Maybe not exactly Web 4.0, more like Web 1.0. But otherwise it looks like a reasonable prototype. Byzantine fault tolerant replication among friends. Somewhat like a network of "caching" Web proxies. "Caching" in quotes, as they don't cache: they pretend to cache, but the server is not there.

    Disclaimer: we've been working on that idea for almost a decade. Until we stopped in 2012. Maybe it's time to unearth the project.

    If not, it may still be a prototype to glean ideas from.
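
    For the curious, the byzantine fault tolerant replication mentioned above reduces, on the read side, to quorum agreement; a crude sketch of the idea (not Askemos/BALL's actual algorithm):

        from collections import Counter

        def quorum_read(replies: list[bytes], f: int) -> bytes:
            """With n = 3f + 1 replicas queried and at most f byzantine, the
            correct value is reported at least 2f + 1 times, and no forged
            value can reach that count."""
            value, count = Counter(replies).most_common(1)[0]
            if count < 2 * f + 1:
                raise RuntimeError("no byzantine quorum")
            return value

        # Tolerates f = 1 faulty replica out of 4.
        assert quorum_read([b"v1", b"v1", b"v1", b"LIES"], f=1) == b"v1"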

    Best

  • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @07:52PM

    by Anonymous Coward on Wednesday October 12 2016, @07:52PM (#413610)

    I2P, Tor, CJDNS, Hyperboria.

    You know what they all have in common? None of them have enough (known secure) nodes or security auditing to protect against the passive surveillance on the clearnet. And in order to truly have a decentralized web, as discussed, we would need grassroots mesh networks with a rotating set of 'trusted' routers in between them, on trusted hardware/software without government-mandated 'legal tap points', in order to assure ourselves that the network was secure.

    The public wifi initiative failed for the same reason Tor is failing: not enough principled people willing to risk run-ins with the authorities in order to make a stand for freedom of speech, association, and anonymity, so that each of us can browse/chat/speak on questionable topics without any government having the capability to stick its nose in.

    People can say 'OMG terrorism', but the surveillance state has failed in its goal of 'stopping terror', and we are much better off trusting each other and taking a small risk of being attacked, rather than taking a big risk and letting governments inevitably take all power away from the people, returning us at best to a pseudo-feudalistic state, and at worst putting us into the kind of shackles the predecessors of the African slave trade received back in Roman or pre-Roman times. (Or are still a part of in certain Central African/Middle Eastern countries today, not including the fully sociopathic wealthy in all countries of the world.)

  • (Score: 4, Insightful) by meustrus on Wednesday October 12 2016, @08:02PM

    by meustrus (4961) on Wednesday October 12 2016, @08:02PM (#413615)

    Posted here, this article is basically preaching to the choir. Of course most Soylentils want back the wild west internet of the 90s. But that ship has sailed, because, to quote the summary, "people realised that an easy way to create value on top of this neutral fabric was to build centralised services which gather, trap and monetise information." Now how exactly are we supposed to fix that?

    Privacy: Decentralisation forces an increased focus on data privacy. Data is distributed across the network and end-to-end encryption technologies are critical for ensuring that only authorized users can read and write. Access to the data itself is entirely controlled algorithmically by the network as opposed to more centralized networks where typically the owner of that network has full access to data, facilitating customer profiling and ad targeting.

    This is a hard problem, but groups like Google are actually working to solve it. We are on about generation 4 of open authentication protocols, with the newest cool thing being OAuth2 over JWT Bearer tokens. They work great. They are also too complicated for an amateur to discover and configure correctly, so our only hope is making amateur-friendly frameworks that have it baked in. Unfortunately the specific protocols change too rapidly for one framework to rise up and become as ubiquitous as PHP. So yeah, we can have decentralized user data powered by decentralized authentication, but only the big guys with big resources will be able to supply them securely and effectively. This leaves the little guys with the same options as now: build your own cheap (and probably insecure) centralized data store, or farm that out to the big guys (basically Google since nobody else wants to share their walled garden).
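
    For a flavour of what "OAuth2 over JWT Bearer tokens" boils down to, here is a minimal sketch using the PyJWT library. The shared HS256 secret is a simplification of my own; real multi-party deployments typically use RS256 key pairs so verifiers never hold the signing key:

        # pip install PyJWT
        import time
        import jwt   # PyJWT

        SECRET = "shared-secret"   # hypothetical; stands in for a real key

        def issue_token(user_id: str) -> str:
            """Minted once the OAuth2 authorization step has succeeded."""
            now = int(time.time())
            return jwt.encode({"sub": user_id, "iat": now, "exp": now + 3600},
                              SECRET, algorithm="HS256")

        def verify_token(token: str) -> str:
            """Any cooperating service can validate the bearer token offline --
            no callback to a central session store. Raises on tampering/expiry."""
            return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]

        assert verify_token(issue_token("alice")) == "alice"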

    Remember that even in the old days, much of the user-generated web was on GeoCities. You go back farther than that and it was mostly on university infrastructure. We've never really been in a position to make this work without big guys to magnanimously provide the basics.

    Data Portability: In a decentralized environment, users own their data and choose with whom they share this data. Moreover they retain control of it when they leave a given service provider (assuming the service even has the concept of service providers). This is important. If I want to move from General Motors to BMW today, why should I not be able to take my driving records with me? The same applies to chat platform history or health records.

    Individual users generally do not have the skills to maintain their own health records securely while making them easily available to the people they want to see them. Even if they did, they mostly would just rather somebody else do it for them. That is the health industry, where records are under legal obligation to be portable, private, and secure. In the rest of the world it's a wild west, because the user still doesn't care. You're not going to build a network of responsible data sharers out of busy people who don't care.

    Security: Finally, we live in a world of increased security threats. In a centralized environment, the bigger the silo, the bigger the honeypot is to attract bad actors. Decentralized environments are safer by their general nature against being hacked, infiltrated, acquired, bankrupted or otherwise compromised as they have been built to exist under public scrutiny from the outset.

    Conversely, the bigger the silo, the more resources are available to lock it down. In a decentralized network each node is less secure, and therefore you are less likely to be able to trust the security of the nodes you interact with. The only ways that the decentralized network is practically more secure is that A) it will be more difficult for attackers to guess where your data is stored, and B) the payoff for compromising an individual node will be much less. Criminal hackers typically fall into two categories: target-seekers and opportunists. The target-seekers will not be deterred by the "security-by-obscurity" offered by advantage A and their individual payoff is always small. The opportunists don't care about where your data is stored, and they would love the lower stakes per node because that makes for much lower visibility and therefore lower chances that their already-decentralized-botnet will be noticed poking at all the things it can find.

    I'm not saying that the security advantages don't exist. But there's a reason people put their money in central banks instead of holding onto it themselves.

    ---

    What made the internet free before was not the infrastructure. The infrastructure is the same now as it was then. It was the low stakes. It was the fact that very few people knew how powerful the Internet could be, so those few of us could use it sporadically as we felt like it. It was the fact that the participants had little ambition. Well, the internet has met the same fate as the old American west. The cattle ranchers that moved in got established and built resources, and then they started building fences and killing outlaws to make their empires more stable.

    If you want the old internet back, drop the stakes. Slow things down. Our basic needs haven't changed; internet access does not feed and shelter us. Let it be a little game that we all play on the side. But that's not going to happen, because many of us do make our living off of the Internet. Big business pays us big money to make big money with it. When it comes down to it, every system we build is doomed to mimic the economics of our day-to-day lives. So go ahead and make the next toy network where we can pretend to be liberated. It will go the way of Bitcoin. I give any such network 2 years tops before the profit-mongers move in and twist it to serve their ambition.

  • (Score: 2) by Appalbarry on Wednesday October 12 2016, @11:12PM

    by Appalbarry (66) on Wednesday October 12 2016, @11:12PM (#413686) Journal

    There's no argument that the 'net as we know it is far, far from what we envisioned a couple of decades ago. Sadly the commercial interests have succeeded in turning it into a swamp of dreck and avarice.

    Then again, it can be argued that this was the first try at creating an "Internet," and it should have been expected that it would fall flat on its face.

    Who could have seriously expected that what was devised in the late part of the last century would be able to scale to the size we see now?

    Right now we're trying to solve 2016 problems using 1980s-based technology, which was itself based on 1960s assumptions.

    At this point the question really needs to be this: Is it possible or sensible to try and build something new and better on top of the existing Internet? Or is it better to find a way to start over from scratch and build it the way it should have been done?

    • (Score: 2) by janrinok on Thursday October 13 2016, @07:35AM

      by janrinok (52) Subscriber Badge on Thursday October 13 2016, @07:35AM (#413798) Journal

      I fear that any new attempt to build a more modern internet will be doomed from the start. There are too many commercial and intelligence-collection opportunities on the existing internet for big business or government to find the very suggestion of an alternative attractive, unless of course they can tailor it to their needs rather than ours. There are already existing secure distributed networks out there - but the public is unlikely to be given access to them.

      So we are left with trying to achieve a distributed or decentralized system with the internet that we have. While this is a big challenge, I'm sure that bittorrent, Freenet et al didn't have any significant government or business support when they were first being developed - it is only once they are up and running that others come along to try to exploit them.

      The main problem with the secure technologies that I have mentioned is that they are on the slow side, and for a site that depends on a multipath conversation I suspect that they would be of little use to us. Another problem would be that all users of our decentralized system would need to be able to see all comments on a particular story, so every comment would have to propagate through the system quite quickly to enable a sensible discourse to take place. A conversation between two people is already possible and easy to secure - any more than that begins to become increasingly problematic.

  • (Score: 0) by Anonymous Coward on Friday October 14 2016, @09:09AM

    by Anonymous Coward on Friday October 14 2016, @09:09AM (#414206)

    The original purpose of the web and internet, if you recall, was to build a common neutral network which everyone can participate in equally for the betterment of humanity.

    Oh! I thought it was a US military project whose purpose was to ensnare the whole world in a web of surveillance. What are webs used for? To catch prey. What are chains used for? To restrain slaves. What are chains made from? Links. What is the world wide web made from?

    Anyone seeing a pattern here?