posted by janrinok on Sunday October 25 2015, @02:07PM
from the prove-it! dept.

http://arstechnica.com/tech-policy/2015/10/judge-tosses-wikimedias-anti-nsa-lawsuit-because-wikipedia-isnt-big-enough/

On Friday, a federal judge dismissed an anti-surveillance lawsuit brought by Wikimedia, ruling in favor of the National Security Agency.

In his 30-page ruling, US District Judge T.S. Ellis III found that Wikimedia and the other plaintiffs had no standing and could not prove that they had been surveilled, largely echoing the Supreme Court's 2013 decision in Clapper v. Amnesty International.

Judge Ellis found that there is no way to definitively know if Wikimedia, which publishes Wikipedia, one of the largest sites on the Internet, is being watched.

As he wrote in his memorandum opinion:

Plaintiffs' argument is unpersuasive, as the statistical analysis on which the argument rests is incomplete and riddled with assumptions. For one thing, plaintiffs insist that Wikipedia's over one trillion annual Internet communications is significant in volume. But plaintiffs provide no context for assessing the significance of this figure. One trillion is plainly a large number, but size is always relative. For example, one trillion dollars are of enormous value, whereas one trillion grains of sand are but a small patch of beach.

...

As already discussed, although plaintiffs have alleged facts that plausibly establish that the NSA uses Upstream surveillance at some number of chokepoints, they have not alleged facts that plausibly establish that the NSA is using Upstream surveillance to copy all or substantially all communications passing through those chokepoints. In this regard, plaintiffs can only speculate, which Clapper forecloses as a basis for standing.

Since the June 2013 Snowden revelations, legal challenges to government surveillance have, by and large, struggled to advance in the courts.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Offtopic) by dmc on Monday October 26 2015, @05:59AM

    by dmc (188) on Monday October 26 2015, @05:59AM (#254544)

    Wikipedia, while being even more awesome a thing than the Encyclopedia Britannica was when I was a child, is also, in this day and age, a technological embarrassment. The way the information _should_ be flowing is through a completely decentralized system of caches. Each device that might serve as a free and open encyclopedia viewing device and has more than eight gigabytes of storage ought to dedicate at least 128MB to a cache. That cache should prepopulate/initialize with, say, 64MB of the top 1000 pages by some choice of arbiter (defaulting to Wikimedia Corp or whatever), and the rest gets filled in with some random splattering.

    Basically a kind of usenet backend, but with a significant percentage of users operating relay caches (let's start with 1% of mobile phones and home PCs). Throw a Tor layer in if it makes sense. But any way you slice it, that removes the inexpensive mass-surveillance option of just infiltrating or subverting a single organization or its chokepoints. And yes, you still need a reputation system, which can default to Wikimedia as authoritative. But that would evolve appropriately, and increasing options is obviously the right move.

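    For what it's worth, the cache scheme described in the comment above can be sketched in a few lines of Python. This is purely illustrative: the byte budgets, the title lists, and the fetch_from_origin/fetch_from_peers stubs are all placeholder assumptions, not anything Wikimedia actually ships.

    import random

    CACHE_BUDGET = 128 * 1024 * 1024   # each participating device dedicates 128MB
    SEED_BUDGET = 64 * 1024 * 1024     # ~64MB reserved for the arbiter's top-1000 pages

    def fetch_from_origin(title):
        # Placeholder: in reality this would pull the article from Wikimedia (or
        # whichever arbiter the node trusts by default).
        return "article text for %s" % title

    def fetch_from_peers(title):
        # Placeholder: ask other relay caches before falling back to the origin.
        return None

    class EncyclopediaCache:
        def __init__(self, top_titles, all_titles):
            self.store = {}   # title -> article text
            self.used = 0
            # Seed with the arbiter-chosen top pages first ...
            for title in top_titles:
                if self.used >= SEED_BUDGET:
                    break
                self._add(title, fetch_from_origin(title))
            # ... then fill the rest of the budget with a random splattering.
            for title in random.sample(all_titles, len(all_titles)):
                if self.used >= CACHE_BUDGET:
                    break
                if title not in self.store:
                    self._add(title, fetch_from_origin(title))

        def _add(self, title, text):
            self.store[title] = text
            self.used += len(text.encode("utf-8"))

        def lookup(self, title):
            # Serve locally if cached; otherwise try peer caches before the origin,
            # so no single organization sits on every read as a chokepoint.
            if title in self.store:
                return self.store[title]
            return fetch_from_peers(title) or fetch_from_origin(title)

    cache = EncyclopediaCache(top_titles=["Earth", "Physics"],
                              all_titles=["Earth", "Physics", "Tor", "Usenet"])
    print(cache.lookup("Tor"))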
  • (Score: 0) by Anonymous Coward on Monday October 26 2015, @07:05AM

    by Anonymous Coward on Monday October 26 2015, @07:05AM (#254557)

    You can just download a local copy of Wikipedia. Without images/videos, and compressed, it's quite small (12GB for the whole thing, much smaller for only the top few thousand pages). See Wikipedia:Database download [wikipedia.org] for more information. The hard part is organizing the edits.
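    In case it helps, here is a minimal Python sketch for streaming article titles out of such a compressed dump without unpacking it first. The dump filename is a placeholder (get the real one from the Database download page), and the tag matching ignores the XML namespace because the export schema version changes between dumps.

    import bz2
    import xml.etree.ElementTree as ET

    DUMP_PATH = "enwiki-latest-pages-articles.xml.bz2"   # placeholder filename

    def iter_titles(path):
        # Stream the compressed XML dump and yield page titles without loading it all.
        with bz2.open(path, "rb") as fh:
            for _event, elem in ET.iterparse(fh, events=("end",)):
                tag = elem.tag.rsplit("}", 1)[-1]   # drop the namespace prefix, if any
                if tag == "title":
                    yield elem.text
                elif tag == "page":
                    elem.clear()                     # free the finished page element

    if __name__ == "__main__":
        for i, title in enumerate(iter_titles(DUMP_PATH)):
            print(title)
            if i >= 9:   # just show the first ten titles
                break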

  • (Score: 0) by Anonymous Coward on Monday October 26 2015, @08:58AM

    by Anonymous Coward on Monday October 26 2015, @08:58AM (#254590)

    ++for trying.

    I think it's harder to correctly configure 1000 systems, each holding a tiny fragment of a large cache, than 2 or 3 webservers holding a full copy of the data.

    Freenet ( https://freenetproject.org/ [freenetproject.org] ) fulfills some of your goals. Distributed cache-only storage. Data might be lost, but sufficiently popular stuff probably won't be. Updating is still a bit problematic. I looked at it about ten years ago, but the process for new nodes joining, at the time, required a lot of repetitive calculations to build node reputation. I had a pathetically slow computer at the time, and lost interest.

  • (Score: 1) by Fishscene on Monday October 26 2015, @01:57PM

    by Fishscene (4361) on Monday October 26 2015, @01:57PM (#254661)

    You should check out IPFS. Fascinating decentralized protocol that works over current infrastructure.

    --
    I know I am not God, because every time I pray to Him, it's because I'm not perfect and thankful for what He's done.
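    For the curious, fetching content from IPFS by content ID rather than from any particular server can be sketched roughly like this. It assumes a local kubo daemon exposing its HTTP RPC API on the default port 5001 (recent versions require POST) and the 'requests' package; the CID below is just a placeholder.

    import requests

    EXAMPLE_CID = "Qm..."   # placeholder content ID; substitute a real CID

    def cat(cid):
        # Ask the local daemon to fetch and return the object with this CID.
        resp = requests.post(
            "http://127.0.0.1:5001/api/v0/cat",
            params={"arg": cid},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.content

    if __name__ == "__main__":
        print(cat(EXAMPLE_CID)[:200])   # show the first 200 bytes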