

posted by martyb on Wednesday April 05 2017, @02:38PM
from the no-itsy-bitsy-spider dept.

Sir Tim Berners-Lee gave an interview with radio station WBUR about the state of the Web and its future:

Berners-Lee initially imagined the web as a beautiful platform that could help us overcome national and cultural boundaries. He envisioned it would break down silos, but many people today believe the web has created silos.

And he still largely sees the potential of the web, but it has not turned out to be the cyber-utopian dream he had hoped for. He's particularly worried about the dark side of social media, places where he says anonymity is being used by "misogynist bullies, by nasty people who just get a kick out of being nasty."

He also identified personal data privacy, the spread of misinformation, and a lack of transparency in online political advertising as major problems with the current Web in a letter marking the World Wide Web's 28th birthday last month.

Previously: World Wide Web Turns 25 Years Old
Tim Berners-Lee Proposes an Online Magna Carta
Berners-Lee on HTML 5: If It's Not on the Web, It Doesn't Exist
The First Website Went Online 25 Years Ago
Berners-Lee: World Wide Web is Spy Net
Tim Berners-Lee Just Gave us an Opening to Stop DRM in Web Standards


Original Submission

 
  • (Score: 2) by edIII on Wednesday April 05 2017, @08:38PM (2 children)

    by edIII (791) on Wednesday April 05 2017, @08:38PM (#489331)

    The most basic flaw, however, is that the whole system is built on the model that a page is loaded from a specific server and downloaded to a client. This lets bad governments target [telegraph.co.uk] any server containing information they don't happen to fancy and hire former Stasi [eurocanadian.ca] agents like Anetta Kahane to censor you right now, just like she did with DDR [berliner-zeitung.de] citizens in 1974-1982. The second issue is that this dissemination model allows servers to log who accesses the information.

    How can you mitigate the 2nd issue? I don't see how myself. The server is ALWAYS going to know the identity of whoever requested what, and not just over HTTP, but with other services as well. The real issue is that the owners of the servers collaborate with governments and use such information against the users on a routine basis. That is not a technology problem, though, but a social one. To truly surmount it, all requests would need to be anonymous, and I'm not sure how that could work unless we had truly anonymous payment options as well.

    The 1st issue can be mitigated with HTML5/WebSockets. AFAIK, single pages are then just entry points that load bootstrap code which upgrades the connection to a WebSocket. From there, all the content can be consumed across a single connection originating from a single page load. That's the direction I'm heading in my own designs: I hope to suspend a session and resume it if you reload the page, and to hijack the back and forward buttons for my own purposes. At that point the web browser is just acting like a thin client, which is my overall goal.
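    As an illustration of that bootstrap-then-WebSocket pattern, here is a minimal browser-side sketch. The endpoint path ("/app"), the message shape, the #app container, and the sessionStorage token are all illustrative assumptions, not part of any particular framework.

```typescript
// Minimal browser-side sketch: the initial page load only runs bootstrap code
// that upgrades to a WebSocket; all further content arrives over that single
// connection. Endpoint, message shape, and element IDs are assumptions.

type ServerMessage = { view: string; html: string };

function bootstrap(): void {
  // A token kept in sessionStorage lets a reload resume the previous session.
  let token = sessionStorage.getItem("sessionToken");
  if (!token) {
    token = crypto.randomUUID();
    sessionStorage.setItem("sessionToken", token);
  }

  const ws = new WebSocket(`wss://${location.host}/app?session=${token}`);

  // Every piece of content is pushed by the server over the socket.
  ws.onmessage = (event: MessageEvent<string>) => {
    const msg: ServerMessage = JSON.parse(event.data);
    document.getElementById("app")!.innerHTML = msg.html;
    // Record the view in history so back/forward can be intercepted below.
    history.pushState({ view: msg.view }, "", `#${msg.view}`);
  };

  // Hijack the back/forward buttons: instead of triggering a new page load,
  // ask the server for the corresponding view over the existing connection.
  window.addEventListener("popstate", (event: PopStateEvent) => {
    if (event.state?.view) {
      ws.send(JSON.stringify({ navigate: event.state.view }));
    }
  });
}

bootstrap();
```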

    --
    Technically, lunchtime is at any moment. It's just a wave function.
  • (Score: 0) by Anonymous Coward on Thursday April 06 2017, @09:42AM

    by Anonymous Coward on Thursday April 06 2017, @09:42AM (#489592)

    The second issue can be mitigated by network flooding (broadcast/multicast), but it is not a very practical solution.
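    As a rough sketch of that flooding idea, the Node.js snippet below sends a request to a UDP multicast group, and any node holding the content answers to the whole group, so no single server learns which peer actually consumed it. The group address, port, message format, and the local-store helpers are arbitrary choices for illustration.

```typescript
// Rough sketch of content retrieval by multicast flooding. Requests and
// replies both go to the group, so the requester is indistinguishable from
// passive listeners. Addresses, port, and message format are illustrative.
import dgram from "node:dgram";

const GROUP = "239.255.42.42"; // administratively scoped multicast group
const PORT = 5007;

const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (data) => {
  const msg = JSON.parse(data.toString());
  if (msg.want && haveContent(msg.want)) {
    // Reply to the whole group, not to the requester, so every node can
    // cache the content and nobody learns who asked for it.
    const reply = Buffer.from(
      JSON.stringify({ have: msg.want, body: loadContent(msg.want) })
    );
    socket.send(reply, PORT, GROUP);
  }
});

socket.bind(PORT, () => {
  socket.addMembership(GROUP);
  // Ask the group for a page by name; any holder will flood it back.
  const request = Buffer.from(JSON.stringify({ want: "example-page" }));
  socket.send(request, PORT, GROUP);
});

// Hypothetical local-store helpers, not part of any real library.
function haveContent(id: string): boolean { return id === "example-page"; }
function loadContent(id: string): string { return `<html>stub for ${id}</html>`; }
```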

  • (Score: 2) by kaszz on Friday April 07 2017, @11:10PM

    by kaszz (4211) on Friday April 07 2017, @11:10PM (#490594) Journal

    Distribute the pages using peer-to-peer. I suspect these applications are on that path, though neither is a full solution:
    GNUnet [wikipedia.org]: decentralized, peer-to-peer networking.
    InterPlanetary File System [wikipedia.org]: a permanent and decentralized method of storing and sharing files; it is a content-addressable, peer-to-peer hypermedia distribution protocol.

    The onion method, i.e. Tor, is also a partial solution.

    The idea is to store multiple copies of a web page on many network nodes, such that there's always a copy somewhere and the source never needs to be bothered or even up and running. Pages that are frequently used are stored in the cloud of nodes, while purged ones have to be fetched again from one or many repositories. Pages can of course also be objects that contain many pages or images, or one could chop a page into several pieces to lessen the burden on a specific node.
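    As a sketch of that chunked, content-addressed storage, the snippet below chops a page into pieces, keys each piece by the SHA-256 hash of its bytes, and rebuilds the page from a manifest of hashes, verifying each piece along the way. The chunk size and the in-memory Map standing in for the cloud of nodes are illustrative assumptions.

```typescript
// Sketch of content-addressed, chunked page storage: any node holding a piece
// can serve it, and the hash lets the fetcher verify it without trusting the
// node. The chunk size and in-memory "store" are illustrative assumptions.
import { createHash } from "node:crypto";

const CHUNK_SIZE = 256 * 1024; // 256 KiB pieces, an arbitrary choice
const store = new Map<string, Buffer>(); // stand-in for a DHT / node cache

function sha256(data: Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

// Split a page into pieces, store each under its hash, return the manifest.
function publish(page: Buffer): string[] {
  const manifest: string[] = [];
  for (let offset = 0; offset < page.length; offset += CHUNK_SIZE) {
    const piece = page.subarray(offset, offset + CHUNK_SIZE);
    const key = sha256(piece);
    store.set(key, Buffer.from(piece));
    manifest.push(key);
  }
  return manifest;
}

// Reassemble a page from its manifest, verifying every piece by its hash.
function fetchPage(manifest: string[]): Buffer {
  const pieces = manifest.map((key) => {
    const piece = store.get(key);
    if (!piece || sha256(piece) !== key) {
      throw new Error(`missing or corrupted piece ${key}`);
    }
    return piece;
  });
  return Buffer.concat(pieces);
}

const manifest = publish(Buffer.from("<html>example page</html>"));
console.log(fetchPage(manifest).toString()); // "<html>example page</html>"
```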