posted by LaminatorX on Saturday August 23 2014, @06:06AM   Printer-friendly
from the more-meta-than-meta dept.

Mike Masnick over at TechDirt wonders: Can We Create A Public Internet Space Where The First Amendment, Not Private Terms Of Service, Rules?

Over a year ago, Tim Karr had an interesting and important post about openness on the internet. While much of it, quite reasonably, focuses on authoritarian governments trying to stomp out dissent online, he makes an important point towards the end: the fact that content online is governed by various "terms of service" from different private entities, rather than by things like the First Amendment, raises serious concerns:

And the threat isn't entirely at the hands of governments. In last week's New Republic, Jeffrey Rosen reported on a cadre of twentysomething "Deciders" employed by Facebook, Twitter and YouTube to determine what content is appropriate for those platforms -- and what content should get blocked.

While they seem earnest in their regard for free speech, they often make decisions on issues that are way beyond their depth, affecting people in parts of the world they've never been to.

And they're often just plain wrong, as Facebook demonstrated last week. They blocked a political ad from progressive group CREDO Action that criticized Facebook founder Mark Zuckerberg's support of the Keystone XL pipeline.

This case is just one of several instances where allegedly well-intentioned social media companies cross the line that separates Internet freedom from Internet repression.

In many ways, it may be even more complicated than Karr and the people he quotes describe. First off, even if you have a company that claims it will respect a right to free expression, it's not their decision alone to make. As we saw, for example, with Wikileaks, when there's strong pressure to silence a site, the downstream providers can get antsy and pull the plug. Upstream hosting firms, data centers and bandwidth providers can all be pressured or even threatened legally, and usually someone somewhere along the line will cave to such threats. In such cases, it doesn't matter how strongly the end service provider believes in free speech; if someone else along the chain can pull things down, then promises of supporting free speech are meaningless.

The other issue is that most sites are pretty much legally compelled to have such terms of use, which provide them greater flexibility in deciding to stifle forms of speech they don't appreciate. In many ways, you have to respect the way the First Amendment is structured so that, even if courts have conveniently chipped away at parts of it at times (while, at other times making it much stronger), there's a clear pillar that all of this is based around. Terms of service are nothing like the Constitution, and can be both inherently wishy-washy and ever-changeable as circumstances warrant.

With both service and hosting providers clearly uninterested in facing off against a government takedown, or even a computer-generated DMCA request, is there any hope for free speech?

  • (Score: 2) by cafebabe (894) on Saturday August 23 2014, @10:45AM (#84638) Journal

    There are a large number of websites which look like public forums, but they're only as public as a mall, and any mall cop can have you ejected for any spurious reason, without apology or appeal. In another analogy, we have a situation where a "public" forum is more like a speakeasy: if you don't say anything contentious (or you're friendly with the boss), the bouncers will let you in the next time you visit. In many circumstances, if you're discussing home improvement, parenting or gadgets, this is an acceptable arrangement. However, if you're discussing the fascist tendencies of Russia/China/US, it may not be. So it is not sufficient in all cases to have a trust paying for some virtual hosting and a domain name.

    What may be required is a distributed forum in which each post may be stored redundantly across different jurisdictions or nowhere particularly obvious. The messages may then be brought together without the use of domain names. This, and the recent discussion about Internet advertising as a bad default revenue stream, makes Ted Nelson's Project Xanadu seem increasingly pressing. Although it may seem preferable to have implemented Xanadu rather than a shallow imitation, we may find that implementing comprehensive text indexing and resilience to spam was more important.

    --
    1702845791×2
  • (Score: 2) by cafebabe (894) on Saturday August 23 2014, @04:09PM (#84692) Journal

    comprehensive text indexing and resilience to spam

    I've thought further about this matter and I believe the two are necessary and related. Essentially, a text index can be used to reduce but not eliminate spam. I will give three widely understood examples and then extrapolate for a large-scale system.

    The first example is a database uniqueness constraint. With SQL databases, indexes may be devised for different purposes. They may be used to locate records quickly. They may be used to provide a subset of table data. And they may be used to enforce uniqueness across one or more fields of data. In this case, one or more index lookups are performed and, if matches are found, the duplicate data is rejected. This is the fundamental building block. It shows that a larger implementation may be a tractable solution.
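A minimal sketch of that building block, using Python's bundled SQLite driver (table and index names here are illustrative, not from the comment): the UNIQUE index does the lookup-and-reject step itself, so duplicate data never enters the table.

```python
import sqlite3

# Assumed schema for illustration: one table of posts with a UNIQUE
# index on the message body, so the database rejects exact duplicates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_posts_body ON posts (body)")

def try_post(body):
    """Insert a message; return False if an identical one already exists."""
    try:
        with conn:
            conn.execute("INSERT INTO posts (body) VALUES (?)", (body,))
        return True
    except sqlite3.IntegrityError:
        return False

print(try_post("me too"))   # True  (first copy accepted)
print(try_post("me too"))   # False (duplicate rejected by the index)
```

The application never has to check for duplicates explicitly; the index enforces the constraint atomically, which is what makes this the tractable base case.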

    The second example, given semi-humorously, has been implemented with success. It is possible to discard lines of IRC conversation which have already been written. This is done for the purpose of increasing uniqueness and originality [xkcd.com]. So, someone can write "me too" but then all further attempts to write the same thing are ignored. Obviously, people can make variations, such as "Me too!!!!!" but these also become exhausted. This literal matching can be subverted with deliberate snowflaking but this leads us to the third example.
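The IRC filter described above can be sketched in a few lines (a hypothetical stand-alone version, not the actual bot): each line is remembered the first time it is seen, and every later repetition is dropped.

```python
# Set of lines already said; in a real bot this would persist per channel.
seen = set()

def accept(line):
    """Return True the first time a line appears, False on every repeat."""
    key = line.strip().lower()   # cheap normalisation only
    if key in seen:
        return False
    seen.add(key)
    return True

print(accept("me too"))        # True
print(accept("me too"))        # False
print(accept("Me too!!!!!"))   # True (variations still get through)
```

As the comment notes, literal matching is trivially subverted by small variations, which is exactly why the third example moves to fuzzier matching.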

    Plagiarism detection allows duplicate phrases and passages to be found or attributed. It isn't perfect, because there may be omissions from the corpus of known documents, but it allows uniqueness to be quantified. Essentially, it is possible to devise an algorithm which sets a level of originality. The corollary is that posting a message precludes all similar messages. This eliminates the dumbest forms of spam: rather than 50 people posting 50 similar messages about an unwanted product, the first message precludes all of the following ones. This doesn't need to be a globally consistent transaction. Indeed, there may be an advantage to deferring the process. For example, it allows duplicate content to be weeded in bulk using fewer resources, and the deferred process also means spammers think that their messages will persist.
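One common way to quantify originality, in the spirit of the comment (the comment names no specific algorithm; word n-gram "shingles" are a standard plagiarism-detection technique used here as an illustration), is to score a message by the fraction of its phrases not already present in the corpus:

```python
def shingles(text, n=3):
    """Word n-grams of a message, used as its phrase-level fingerprint."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def originality(msg, corpus, n=3):
    """Fraction of the message's shingles not seen in any prior message."""
    own = shingles(msg, n)
    if not own:
        return 1.0
    seen = set().union(*(shingles(m, n) for m in corpus)) if corpus else set()
    return len(own - seen) / len(own)

corpus = ["buy cheap pills online today at our store"]
spam = "buy cheap pills online today at our shop"
fresh = "distributed forums need jurisdiction-spanning storage"
print(originality(spam, corpus))   # small fraction: mostly recycled phrases
print(originality(fresh, corpus))  # 1.0: entirely new
```

A forum could then reject (or defer-and-weed) anything scoring below a chosen originality threshold, which is the "level of originality" the comment describes.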

    This process isn't a silver bullet against spammers but they do have to work significantly harder. If they meet this standard, they can have their say along with everyone else. Yes, this process will delete a few genuine messages. However, as we've already determined, the uniqueness of those messages will be low.

    But how does this relate to search engine indexing? Duplicate messages are found by running them against the index. The index which allows searches is also the index which encourages originality.
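The closing point above can be made concrete with a toy inverted index (an illustrative sketch; the data structures and the all-words-shared duplicate rule are assumptions, not the commenter's design): the same word-to-message-IDs mapping answers search queries and flags duplicates.

```python
from collections import defaultdict

index = defaultdict(set)   # word -> ids of messages containing it
messages = {}              # id -> message text

def post(msg_id, text):
    """Store a message unless an existing one already contains all its words."""
    words = set(text.lower().split())
    # Dedup via the search index: messages sharing every word are duplicates.
    dupes = set.intersection(*(index[w] for w in words)) if words else set()
    if dupes:
        return False
    messages[msg_id] = text
    for w in words:
        index[w].add(msg_id)
    return True

def search(word):
    """Search is the same lookup the dedup check performs."""
    return sorted(index[word.lower()])
```

For instance, `post(1, "free speech needs resilient forums")` succeeds, a reposted copy is refused, and `search("speech")` finds message 1: one index, both jobs.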

    --
    1702845791×2