posted by martyb on Wednesday April 05 2017, @02:38PM
from the no-itsy-bitsy-spider dept.

Sir Tim Berners-Lee gave an interview with radio station WBUR about the state of the Web and its future:

Berners-Lee initially imagined the web as a beautiful platform that could help us overcome national and cultural boundaries. He envisioned it would break down silos, but many people today believe the web has created silos.

And he still largely sees the potential of the web, but the web has not turned out to be the complete cyber Utopian dream he had hoped. He's particularly worried about the dark side of social media — places where he says anonymity is being used by "misogynist bullies, by nasty people who just get a kick out of being nasty."

He also identified personal data privacy, the spread of misinformation, and a lack of transparency in online political advertising as major problems with the current Web in a letter marking the World Wide Web's 28th birthday last month.

Previously: World Wide Web Turns 25 years Old
Tim Berners-Lee Proposes an Online Magna Carta
Berners-Lee on HTML 5: If It's Not on the Web, It Doesn't Exist
The First Website Went Online 25 Years Ago
Berners-Lee: World Wide Web is Spy Net
Tim Berners-Lee Just Gave us an Opening to Stop DRM in Web Standards


Original Submission

 
  • (Score: 2, Insightful) by Anonymous Coward on Wednesday April 05 2017, @02:53PM (16 children)

    by Anonymous Coward on Wednesday April 05 2017, @02:53PM (#489165)

    And he still largely sees the potential of the web, but the web has not turned out to be the complete cyber Utopian dream he had hoped.

    Yeah, it has mostly turned into a rat's nest of proprietary JavaScript. Maybe the first thing he should do is stop actively making the problems worse by pushing for proprietary DRM in web standards, too.

  • (Score: 3, Insightful) by kaszz on Wednesday April 05 2017, @03:24PM (12 children)

    by kaszz (4211) on Wednesday April 05 2017, @03:24PM (#489171) Journal

    Perhaps the WWW should have HTML replaced with SGML, which it tries to emulate anyway. That would lessen the need for PDF, abused tables, and the other tricks used to do typesetting where there's no real support for it. Another approach is to go LaTeX over HTTP.

    A scripting language is useful to avoid a server round trip for everything. But JavaScript should be replaced with something like Perl or Python, or any other scripting language with clear definitions. Security provisions should be built in from the start and allow the user to deny everything.
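
    Something like the following purely hypothetical TypeScript sketch captures the "deny everything by default" idea; the Capability, PermissionStore, and guard names are invented for illustration and correspond to no real browser API:

        // Purely hypothetical sketch of a deny-by-default capability gate.
        // None of these names correspond to a real browser API.
        type Capability = "network" | "storage" | "camera" | "clipboard";

        class PermissionStore {
          private granted = new Set<Capability>();

          // Nothing is allowed until the user explicitly opts in.
          grant(cap: Capability): void {
            this.granted.add(cap);
          }

          allows(cap: Capability): boolean {
            return this.granted.has(cap);
          }
        }

        const permissions = new PermissionStore();

        // Every sensitive operation goes through the gate; denial is the default.
        function guard<T>(cap: Capability, action: () => T): T {
          if (!permissions.allows(cap)) {
            throw new Error(`Capability "${cap}" denied by user`);
          }
          return action();
        }

        // Throws unless the user has granted "network":
        // guard("network", () => fetch("https://example.org/data"));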

    Leave room for platform-agnostic extensions, be it an MPEG-4 decoder or DRM. But they shall in no way interfere with free and open implementations, nor be allowed as a bargain-you-can't-deny.

    The most basic flaw, however, is that the whole system is built on the model that a page is loaded from a specific server and downloaded to a client. This lets bad governments target [telegraph.co.uk] any server containing information they don't happen to fancy, and hire former Stasi [eurocanadian.ca] agents like Anetta Kahane to censor you right now, just as she did to DDR [berliner-zeitung.de] citizens in 1974-1982. The second issue is that this dissemination model allows servers to record who accesses the information.

    • (Score: 0) by Anonymous Coward on Wednesday April 05 2017, @03:30PM

      by Anonymous Coward on Wednesday April 05 2017, @03:30PM (#489175)

      Yes, but for online retail controlling the webserver is kinda important.

    • (Score: 0) by Anonymous Coward on Wednesday April 05 2017, @03:51PM (2 children)

      by Anonymous Coward on Wednesday April 05 2017, @03:51PM (#489183)

      Browsers are open source. It is possible to restrict what javascript can access. We just have to grow the balls to compile our own browsers again.
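
      For example, a rebuilt browser or an extension could strip scripts before a page is ever interpreted. A minimal sketch in TypeScript using the standard DOMParser (the URL is only a placeholder, and the "on*" check is a crude heuristic):

          // Sketch: fetch a page and strip scripts before anything can run.
          // Assumes a browser context; the URL is a placeholder.
          async function fetchWithoutScripts(url: string): Promise<Document> {
            const response = await fetch(url);
            const html = await response.text();

            // DOMParser builds a DOM tree without executing any scripts.
            const doc = new DOMParser().parseFromString(html, "text/html");

            // Remove script elements and (crudely) inline "on*" handlers.
            doc.querySelectorAll("script").forEach((s) => s.remove());
            doc.querySelectorAll("*").forEach((el) => {
              for (const attr of Array.from(el.attributes)) {
                if (attr.name.startsWith("on")) el.removeAttribute(attr.name);
              }
            });

            return doc;
          }

          // fetchWithoutScripts("https://example.org/").then((doc) => console.log(doc.title));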

      • (Score: 2) by kaszz on Wednesday April 05 2017, @04:13PM

        by kaszz (4211) on Wednesday April 05 2017, @04:13PM (#489190) Journal

        Yes, it's pointing in that direction.

        That privacy features seem to disappear from free and open web browsers seems to be more than a coincidence.

      • (Score: 0) by Anonymous Coward on Wednesday April 05 2017, @07:53PM

        by Anonymous Coward on Wednesday April 05 2017, @07:53PM (#489307)

        Sure, it is technically possible to disable this antifeature. Most of the web then ceases to work.

        First is the problem that almost all Javascript programs on the web are proprietary software. So running a free browser just so you can download and execute proprietary software kind of defeats the point...

        Second is the fact that Javascript programs are automatically downloaded and executed by the browser. Browsers have so far failed to really restrict the damage caused by this. It's no fluke that almost every remote code execution exploit these days starts with "First, the user's browser downloads and runs the malicious software supplied by an attacker. Then, ...".

    • (Score: 2) by Scruffy Beard 2 on Wednesday April 05 2017, @03:55PM (2 children)

      by Scruffy Beard 2 (6030) on Wednesday April 05 2017, @03:55PM (#489184)

      Apparently, from roughly HTML 2.0 through 4.01, HTML was SGML [wikipedia.org]:

      HTML was theoretically an example of an SGML-based language until HTML 5, which admits that browsers cannot parse it as SGML (for compatibility reasons) and codifies exactly what they must do instead.

      DocBook SGML and LinuxDoc are better examples, as they were used almost exclusively with actual SGML tools.
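
      The practical difference is in error handling: HTML 5 spells out exactly how to recover from malformed markup, while an SGML- or XML-style parser simply rejects it. A quick TypeScript illustration you can run in a browser console (XML stands in for SGML strictness here, since browsers ship no SGML parser):

          // HTML 5 parsing defines error recovery: unclosed tags still yield a DOM.
          const htmlDoc = new DOMParser().parseFromString(
            "<p>unclosed <b>tag soup",
            "text/html"
          );
          console.log(htmlDoc.body.innerHTML); // "<p>unclosed <b>tag soup</b></p>"

          // Strict XML parsing refuses the same input and reports a parse error.
          const xmlDoc = new DOMParser().parseFromString(
            "<p>unclosed <b>tag soup",
            "application/xml"
          );
          console.log(xmlDoc.getElementsByTagName("parsererror").length > 0); // true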

      • (Score: 0) by Anonymous Coward on Wednesday April 05 2017, @05:39PM (1 child)

        by Anonymous Coward on Wednesday April 05 2017, @05:39PM (#489236)

        Codify existing practice.

        The C++ committee has learned this the hard way, especially after the `export' keyword debacle. Most real-world things have to exist before they can be standardized.

        • (Score: 2) by kaszz on Friday April 07 2017, @11:01PM

          by kaszz (4211) on Friday April 07 2017, @11:01PM (#490591) Journal

          The alternative is to do things right from the start by thinking it through, and then write a really good standard from that.

    • (Score: 2) by edIII on Wednesday April 05 2017, @08:38PM (2 children)

      by edIII (791) on Wednesday April 05 2017, @08:38PM (#489331)

      The most basic flaw, however, is that the whole system is built on the model that a page is loaded from a specific server and downloaded to a client. This lets bad governments target [telegraph.co.uk] any server containing information they don't happen to fancy, and hire former Stasi [eurocanadian.ca] agents like Anetta Kahane to censor you right now, just as she did to DDR [berliner-zeitung.de] citizens in 1974-1982. The second issue is that this dissemination model allows servers to record who accesses the information.

      How can you mitigate the 2nd issue? I don't see how myself. The server is ALWAYS going to know the identity of who requested what. Not just in HTTP either, but with other services as well. The real issue is that the owners of the servers collaborate with government and use such information against the users on a routine basis. That is not a technology problem though, but a social one. To truly surmount it, all requests would need to be anonymous, and I'm not sure how that could work unless we had truly anonymous payment options as well.

      The 1st issue can be mitigated with HTML5/WebSockets. AFAIK, single pages are just target points for loading bootstrap code that upgrades the connection to a WebSocket. From there, all the content can be consumed across a single connection originating from a single page load. That's the direction I'm going in my own designs: I hope to suspend a session and resume it if you reload the page, and to hijack the back and forward buttons for my own purposes. At that point the web browser is just acting like a thin client, which is my overall goal.
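
      A rough sketch of that bootstrap pattern in TypeScript is below; the endpoint URL, message format, and the "app" render target are assumptions for illustration, not a reference design:

          // Sketch: one page load upgrades to a WebSocket, and all further content
          // arrives over that single connection. Endpoint and message format are
          // invented for illustration.
          interface ContentFrame {
            route: string; // logical "page" the frame belongs to
            html: string;  // fragment to render
          }

          const socket = new WebSocket("wss://example.org/session");

          socket.addEventListener("message", (event) => {
            const frame: ContentFrame = JSON.parse(event.data);
            const target = document.getElementById("app");
            if (target) target.innerHTML = frame.html;

            // Record each route so the back/forward buttons stay meaningful.
            history.pushState({ route: frame.route }, "", "#" + frame.route);
          });

          // Reuse history entries instead of reloading the page.
          window.addEventListener("popstate", (event) => {
            const route = (event.state && event.state.route) || "home";
            socket.send(JSON.stringify({ type: "navigate", route }));
          });

          // Ask the server for the initial view once the connection is up.
          socket.addEventListener("open", () => {
            socket.send(JSON.stringify({ type: "navigate", route: "home" }));
          });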

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 0) by Anonymous Coward on Thursday April 06 2017, @09:42AM

        by Anonymous Coward on Thursday April 06 2017, @09:42AM (#489592)

        The second issue can be mitigated by network flooding (broadcast/multicast), but it is not a very practical solution.

      • (Score: 2) by kaszz on Friday April 07 2017, @11:10PM

        by kaszz (4211) on Friday April 07 2017, @11:10PM (#490594) Journal

        Distribute the pages using peer-to-peer. I suspect these applications are on that path, but not a full solution:
        GNUnet [wikipedia.org] - decentralized, peer-to-peer networking
        InterPlanetary File System [wikipedia.org] - a permanent and decentralized method of storing and sharing files; a content-addressable, peer-to-peer hypermedia distribution protocol

        The onion method is also a partial solution, i.e. Tor.

        The thing is to store multiple copies of the web page across many network nodes, so that there's always a copy somewhere and the source doesn't need to be bothered or even up and running. Pages that are frequently used are kept in the cloud of nodes, while purged ones have to be fetched again from one or more repositories. Pages can of course also be objects that contain many pages or images, or one could chop a page into several pieces to lessen the burden on any specific node.
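
        Content addressing is what makes that kind of replication workable: a page is named by the hash of its bytes, so any node can serve a copy and the requester can verify it. A simplified TypeScript sketch using the Web Crypto API (the general idea only; real systems such as IPFS use multihash-based CIDs, not bare hex digests):

            // Sketch: name a page by the SHA-256 hash of its bytes, so any node can
            // serve a copy and the requester can verify it. Illustrative only.
            async function contentAddress(content: string): Promise<string> {
              const bytes = new TextEncoder().encode(content);
              const digest = await crypto.subtle.digest("SHA-256", bytes);
              return Array.from(new Uint8Array(digest))
                .map((b) => b.toString(16).padStart(2, "0"))
                .join("");
            }

            // A toy in-memory "node": store by hash, retrieve by hash, verify on read.
            const store = new Map<string, string>();

            async function publish(page: string): Promise<string> {
              const addr = await contentAddress(page);
              store.set(addr, page);
              return addr; // the same address is valid on any node holding a copy
            }

            async function retrieve(addr: string): Promise<string | undefined> {
              const page = store.get(addr);
              if (page !== undefined && (await contentAddress(page)) !== addr) {
                return undefined; // corrupted or tampered copy
              }
              return page;
            }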

    • (Score: 2) by c0lo on Thursday April 06 2017, @12:31PM (1 child)

      by c0lo (156) Subscriber Badge on Thursday April 06 2017, @12:31PM (#489622) Journal

      Perhaps the WWW should have HTML replaced with SGML, which it tries to emulate anyway.

      HTML is SGML [wikipedia.org] (but SGML isn't HTML)

      Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language (HTML)" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by kaszz on Friday April 07 2017, @11:24PM

        by kaszz (4211) on Friday April 07 2017, @11:24PM (#490601) Journal

        Is there any alternative to HTML/SGML that would be even better for a networked interactive hypertext system?

  • (Score: 2, Interesting) by Scruffy Beard 2 on Wednesday April 05 2017, @03:48PM (2 children)

    by Scruffy Beard 2 (6030) on Wednesday April 05 2017, @03:48PM (#489182)

    I think HTML 5 is even more insidious than that: they did away with versioning.

    That means there is no series of tests you can run on your browser to say it is fully HTML 5 compliant: the "standard" may change next week.

    In practice, Google Chrome seems to be defining the standard. That may explain why Mozilla is looking more and more like a Chrome clone.

    • (Score: 2) by Wootery on Wednesday April 05 2017, @04:09PM

      by Wootery (2341) on Wednesday April 05 2017, @04:09PM (#489189)

      That may explain why Mozilla is looking more and more like a Chrome clone.

      Given that Internet Explorer and Safari aren't as bad as Firefox when it comes to blindly following Chrome, I don't think this excuse works.

    • (Score: 0) by Anonymous Coward on Friday April 07 2017, @06:12PM

      by Anonymous Coward on Friday April 07 2017, @06:12PM (#490407)

      This is not the reason why Mozilla turns itself into a Chrome clone.

      They just want to win over Chrome users. Ask a Chrome user whether they would use a feature-rich program like Firefox 22 was. The answer is a clear no, because of "bloat".

      So what was the solution? Mozilla removed everything that would stop Chrome users, or simple users, from using Firefox.

      Rather simple.