



posted by cmn32480 on Sunday February 26 2017, @11:31PM   Printer-friendly
from the broken-out-of-the-box dept.

A mid-2016 security incident led to Apple purging its data centers of servers built by Supermicro, including returning recently purchased systems, according to a report by The Information. Malware-infected firmware was reportedly detected in an internal development environment for Apple's App Store, as well as some production servers handling queries through Apple's Siri service.

An Apple spokesperson denied there was a security incident. However, Supermicro's senior vice-president of technology, Tau Leng, told The Information that Apple had ended its relationship with Supermicro because of the compromised systems in the App Store development environment. Leng also confirmed Apple returned equipment that it had recently purchased. The report cited an anonymous source for the information regarding infected Siri servers.

[...] A source familiar with the case at Apple told Ars that the compromised firmware affected servers in Apple's design lab, and not active Siri servers. The firmware, according to the source, was downloaded directly from Supermicro's support site—and that firmware is still hosted there.

Source: ArsTechnica


Original Submission

posted by cmn32480 on Sunday February 26 2017, @09:42PM   Printer-friendly
from the just-take-it-out-back-and-shoot-it-already dept.

Thursday's watershed attack on the widely used SHA1 hashing function has claimed its first casualty: the version control system used by the WebKit browser engine, which became completely corrupted after someone uploaded two proof-of-concept PDF files that have identical message digests.

The bug resides in Apache Subversion, an open source version control system that WebKit and other large software development organizations use to keep track of code submitted by individual members. Often abbreviated as SVN, Subversion uses SHA1 to track and merge duplicate files. SVN systems can experience a severe glitch when they encounter the two PDF files published Thursday, which prove that real-world collisions on SHA1 are now practical.

On Friday morning, the researchers updated their informational website to add the frequently asked question "Is SVN affected?" The answer:

"Yes - please exercise care, as SHA-1 colliding files are currently breaking SVN repositories. Subversion servers use SHA-1 for deduplication and repositories become corrupted when two colliding files are committed to the repository. This has been discovered in WebKit's Subversion repository and independently confirmed by us. Due to the corruption the Subversion server will not accept further commits."
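The failure mode is easy to see in miniature. Below is a toy Python sketch (not Subversion's actual code) of a content store that deduplicates on a digest; to make a collision reproducible without the multi-megabyte shattered.io PDFs, it deliberately truncates SHA-1 to a single byte. With the real, full-width SHA-1, the two published PDFs play the role of `a` and `b_` here:

```python
import hashlib

def weak_digest(data: bytes) -> str:
    # Truncate SHA-1 to one byte so collisions are trivial to find;
    # this stands in for the full SHA-1 collision in the shattered.io PDFs.
    return hashlib.sha1(data).hexdigest()[:2]

class DedupStore:
    """Toy content-addressed store: one stored blob per digest."""
    def __init__(self):
        self.blobs = {}

    def commit(self, data: bytes) -> str:
        key = weak_digest(data)
        # If the digest already exists, the new content is silently
        # "deduplicated" away -- the store keeps only the first blob.
        self.blobs.setdefault(key, data)
        return key

    def retrieve(self, key: str) -> bytes:
        return self.blobs[key]

store = DedupStore()
a = b"payload-0"
# Find a different payload whose weak digest collides with a's:
b_ = next(b"payload-%d" % i for i in range(1, 10000)
          if weak_digest(b"payload-%d" % i) == weak_digest(a))
k1 = store.commit(a)
k2 = store.commit(b_)
assert k1 == k2                   # colliding digests: same key
assert store.retrieve(k2) == a    # the second file is lost on retrieval
```

Once the second file's content is unrecoverable, any history that references it is broken, which matches the repository corruption described above.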

Source: ArsTechnica


Original Submission

posted by on Sunday February 26 2017, @07:54PM   Printer-friendly
from the spooks-needed dept.

The US Department of Defense wants you to contribute unclassified code to software projects developed in support of national security. Toward that end, it has launched Code.mil, which points to a Github repository intended to offer public access to code financed by public money. But at the moment, the DoD's repo lacks any actual code.

Open source and free software represent industry best practices, the DoD said in a statement, even as it acknowledged the agency has yet to widely adopt them. Code.mil represents an attempt to change that dynamic. On the project website, the DoD goes so far as to suggest that anything other than open source software puts lives at risk.

"US military members and their families make significant sacrifices to protect our country," the agency explains in its FAQs. "Their lives should not be negatively impacted by outdated tools and software development practices that lag far behind private sector standards." And in case that isn't clear enough, the agency states, "Modern software is open sourced software."

-- submitted from IRC


Original Submission

posted by on Sunday February 26 2017, @05:57PM   Printer-friendly
from the all-the-cool-kids-are-doing-it dept.

Surprisingly, the MXNet Machine Learning project was this month accepted by the Apache Software Foundation as an open-source project.

What's surprising about the announcement isn't so much that the ASF is accepting this face in the crowd to its ranks – it's hard to turn around in the software world these days without tripping over ML tools – but rather that MXNet developers, most of whom are from Amazon, believe ASF is relevant.

MXNet is an open-source "deep learning" framework that allows you to define, train, and deploy so-called neural networks on a wide array of devices. It also happens to be the machine learning (ML) tool of choice at Amazon Web Services (AWS) and is available today via ready-to-deploy EC2 instances.

Deep learning is the currently very popular subset of ML that focuses on hierarchical algorithms with non-linearities, which help find patterns and learn representations within data sets. That's a fancy way of saying it learns as it finds. Deep learning tools are currently popular thanks to their success in applications like speech recognition, natural language understanding and recommendation systems (think Siri, Alexa and so on). Every time you sit on your couch yelling at Alexa you're employing a deep learning system.
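As a concrete illustration of "hierarchical algorithms with non-linearities", here is a toy two-layer network in plain Python with hand-picked weights. This has nothing to do with MXNet's actual API; it just shows the structure being described, and why the non-linearity matters:

```python
def relu(x):
    # The non-linearity: without it, stacked layers would collapse
    # into a single linear transformation, however deep the stack.
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One fully connected layer: y = Wx + b
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def tiny_net(x):
    # Two stacked ("deep") layers; frameworks like MXNet learn these
    # weights from data rather than hard-coding them as done here.
    h = relu(dense(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]))
    return dense(h, [[1.0, 1.0]], [0.0])

print(tiny_net([2.0, 1.0]))  # hidden = relu([1.0, 1.5]) -> output [2.5]
```

What a framework adds on top of this skeleton is automatic differentiation, GPU execution, and the distributed training that Amazon's scalability claim is about.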

What makes MXNet interesting at this stage is Amazon claims it's the most scalable tool the company has, and Amazon is a company that knows a thing or two about what scales and what doesn't.

-- submitted from IRC


Original Submission

posted by on Sunday February 26 2017, @04:01PM   Printer-friendly
from the but-will-it-blend? dept.

We've had the Nintendo Switch here in Ars' orbiting HQ for a few days now, and while we're still working on a more thorough review ahead of launch, we're now able to share some initial impressions of the final retail system to add to our hands-on time from last month.

So far, testing out the Switch has exclusively meant playing The Legend of Zelda: Breath of the Wild, the only one of nine confirmed launch games we have our hands on as of yet. Any significant non-gaming or online functions are tied to a "Day One" system update that likely won't be available in time for pre-launch reviews. Further thoughts on the experience of motion controlled games (like 1-2-Switch), or games that support individual Joy-Cons held horizontally (like Super Bomberman R) will also have to wait.

[...] My favorite way to play Breath of the Wild so far is with the Joy-Cons detached from the system, one held in each hand. You can connect the individual controllers to a centralized Grip to make them feel more like a standard dual-stick controller, but I'm not sure why you would want to. Held separately, you can lounge around comfortably with your hands and arms resting literally anywhere, rather than having to scrunch them together directly in front of you.

[...] Despite its thin profile, the Switch feels relatively hefty in the hand and comes across as much denser than the likes of the 3DS or Vita (and especially the airy, toy-like tablet on the Wii U). The tablet itself is solidly built and doesn't feel in danger of snapping apart under stress.

[...] We'll be putting the Switch through as much testing as we can leading up to its March 3 launch next week. For now, though, my inner seven-year-old is still marveling at how far Nintendo handhelds have come since the original black-and-white Game Boy.

-- submitted from IRC


Original Submission

posted by cmn32480 on Sunday February 26 2017, @01:56PM   Printer-friendly
from the like-siri-for-kids dept.

Arthur T Knackerbracket has found the following story:

Woobo is a cuddly interactive toy that talks to kids. Also, it records their conversations.

It's a source of anxiety for any parent: getting rid of your child's beloved toy.

That's exactly what regulators in Germany told citizens to do with My Friend Cayla. And it wasn't enough to just throw Cayla away; parents actually had to destroy the blonde, peppy-looking doll.

The smart toy, which records conversations with kids, fell into the category of "hidden espionage devices," according to the regulators. My Friend Cayla was accused of asking children personal questions, like their favorite shows and toys, and saving the data to send to a third-party company that also makes voice identification products for police.

Just a day after the German ban was announced, Toy Fair kicked off in New York -- and smart toys were all over the place. Teddy Ruxpin, the storytelling bear beloved by '80s babies, returned with a high-tech makeover, as did Hologram Barbie, a voice-assistant animated sequel to the controversial Hello Barbie. Toy Fair also featured smart toy newcomers like Woobo, essentially a cuddly version of the Amazon Echo and Google Home speakers.

The contrasts illustrate the fine line between protecting one's privacy and the desire to create compelling and engaging products. It's the same broader debate that's raging throughout the technology and consumer electronics world, with companies like Google hoovering up personal data to better serve you ads. Only this time, the issue affects impressionable children.

Smart toys are a multibillion-dollar industry that's only getting larger as more kids are growing up connected and clamoring for the next high-tech distraction. Parents are flocking to connected toys for tots, with one research firm predicting that revenue for smart toys will reach $8.8 billion by 2020.

The booming market could be blowing up even faster if only children's online privacy concerns weren't in the way, members of the toy industry lamented at Toy Fair. While parents are looking out for their kids' safety and privacy, toymakers say data collection is necessary to make the next generation's iconic toy.

The Children's Online Privacy Protection Act, passed in 1998, requires companies targeting kids under 13 to get consent from parents before collecting personal information from children, as well as allowing parents to review any data a company collects on their kids. The data also must be deleted within 30 days of its use. COPPA's author, Sen. Edward Markey, a Massachusetts Democrat, questioned the makers of My Friend Cayla about potential violations of the act "given the sensitive nature of children's recorded speech."

The toy industry, unsurprisingly, takes a different view.

"To take smart toys to the next level of engagement and give kids what they want, you have to take data and create an engaging experience that's connected to their friends and based on their persona," said Krissa Watry, CEO of Dynepic, the company behind iOKids, a social media platform for children and their parents.

-- submitted from IRC


Original Submission

posted by cmn32480 on Sunday February 26 2017, @12:04PM   Printer-friendly
from the driving-me-to-drink dept.

Arthur T Knackerbracket has found the following story:

Waymo was launched by Google last year.

The 28-page lawsuit focuses on Otto, a self-driving trucking company that Uber acquired last year. The suit charges that Anthony Levandowski, a former Google employee, downloaded 14,000 "highly confidential" files describing self-driving technology research and brought them to Otto, which he co-founded.

Parts of the lawsuit read like a spy novel. Waymo alleges Levandowski, who now works at Uber, used special software to access the files and reformatted his computer to cover his tracks. It says Uber used the information after it acquired Otto.

The lawsuit complicates the already-difficult relationship between the two companies. GV, Alphabet's venture capital arm, invested in Uber in 2013. It was one of the firm's most high-profile deals.

"Our parent company Alphabet has long worked with Uber in many areas, and we didn't make this decision lightly," Waymo said in a blog post. "However, given the overwhelming facts that our technology has been stolen, we have no choice but to defend our investment and development of this unique technology."

"We take the allegations made against Otto and Uber employees seriously," an Uber spokeswoman said. "We will review this matter carefully."

Self-driving cars are a red-hot area of research in the automotive industry. Autonomous vehicles show the potential to greatly reduce or even eliminate the tens of thousands of deaths that occur on US roads every year. The technology may also reduce traffic jams, a major fuel and time waster in US cities. Equipment suppliers, start-ups and big tech companies, in addition to automakers, are all developing self-driving car technology.

-- submitted from IRC


Original Submission

posted by cmn32480 on Sunday February 26 2017, @10:17AM   Printer-friendly
from the the-key-that-bites-back dept.

Today, Google announced a new G Suite feature that allows admins to lock down accounts so they can only be accessed by users with a physical USB security key. The FIDO U2F Security Keys have been supported on G Suite and regular Google accounts since 2011, but now new security controls allow admins to make the keys mandatory for anyone who tries to log in.

Universal 2nd Factor (U2F)—initially developed by Google and Yubico—is a standard from the FIDO Alliance that allows a physical device to work as a second factor of authentication. After entering your username and password, you'll have to connect your physical authentication key to your device. The keys can support USB, NFC, and/or Bluetooth, allowing them to connect to desktops, laptops, and smartphones. Many services support U2F, like Dropbox, GitHub, Salesforce, Dashlane, and others. The Chrome and Opera browsers support U2F, along with Android and Windows smartphones. Modern iOS devices don't work with the standard, but Google appears to have some kind of workaround.
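The shape of the login flow can be sketched in a few lines. Note the hedging: real U2F keys sign the challenge (plus origin and a counter) with ECDSA over a registered public key; the toy below substitutes an HMAC shared at registration, which real U2F specifically avoids. It does show the two properties that matter — origin binding (phishing resistance) and a monotonic counter (clone detection):

```python
import hashlib, hmac, os

class SecurityKey:
    """Toy stand-in for a U2F token. Real tokens sign with ECDSA."""
    def __init__(self):
        self._secret = os.urandom(32)   # never leaves the device
        self.counter = 0

    def sign(self, origin: str, challenge: bytes):
        self.counter += 1               # anti-cloning counter
        msg = origin.encode() + challenge + self.counter.to_bytes(4, "big")
        return self.counter, hmac.new(self._secret, msg, hashlib.sha256).digest()

class Server:
    def __init__(self, key: SecurityKey, origin: str):
        # At registration a real server stores the key's *public* key;
        # this sketch cheats and shares the secret instead.
        self._verifier = key._secret
        self.origin = origin
        self.last_counter = 0

    def login(self, challenge: bytes, counter: int, sig: bytes) -> bool:
        msg = self.origin.encode() + challenge + counter.to_bytes(4, "big")
        expected = hmac.new(self._verifier, msg, hashlib.sha256).digest()
        ok = hmac.compare_digest(sig, expected) and counter > self.last_counter
        if ok:
            self.last_counter = counter
        return ok

key = SecurityKey()
server = Server(key, "https://accounts.example.com")
challenge = os.urandom(16)
counter, sig = key.sign(server.origin, challenge)
assert server.login(challenge, counter, sig)                   # genuine origin: accepted
phish_counter, phish_sig = key.sign("https://evil.example.net", challenge)
assert not server.login(challenge, phish_counter, phish_sig)   # wrong origin: rejected
```

Because the origin is baked into what gets signed, a phishing site cannot replay the key's response to the real server — which is the property a one-time code sent by SMS lacks.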

Are any Soylentils out there using U2F and if so, how's that working for you?

Source: ArsTechnica


Original Submission

posted by Fnord666 on Sunday February 26 2017, @08:22AM   Printer-friendly
from the it's-about-time dept.

Submitted via IRC for TheMightyBuzzard

The original, unaltered Star Wars trilogy is rumored to be re-released this year to mark the 40th anniversary of A New Hope. While fans have speculated for years that George Lucas will finally release the first edits of the iconic 1977, 1980 and 1983 films to the public in a variety of formats, a concrete announcement appears imminent. Numerous industry sources have informed fan site Making Star Wars that the original edits are on their way. This is a big deal for diehard fans, many of whom argue the versions they saw in cinemas during the late 70s and early 80s are far superior to the versions later released to the public.

[...] The 40th anniversary of A New Hope occurs on May 22.

Source: http://thathashtagshow.com/2017/02/unaltered-original-star-wars-trilogy-released-year/


Original Submission

posted by Fnord666 on Sunday February 26 2017, @06:49AM   Printer-friendly
from the Earth's-seed-bank dept.

Biologists have proposed a project that would aim to eventually sequence the genomes of all life on Earth, starting with a focus on around 9000 eukaryotic families. The project has been compared to the Human Genome Project, which completed just one "mosaic" genome at a cost of $2.7 billion in FY 1991 dollars:

When it comes to genome sequencing, visionaries like to throw around big numbers: There's the UK Biobank, for example, which promises to decipher the genomes of 500,000 individuals, or Iceland's effort to study the genomes of its entire human population. Yesterday, at a meeting here organized by the Smithsonian Initiative on Biodiversity Genomics and the Shenzhen, China–based sequencing powerhouse BGI, a small group of researchers upped the ante even more, announcing their intent to, eventually, sequence "all life on Earth."

Their plan, which does not yet have funding dedicated to it specifically but could cost at least several billions of dollars, has been dubbed the Earth BioGenome Project (EBP). Harris Lewin, an evolutionary genomicist at the University of California, Davis, who is part of the group that came up with this vision 2 years ago, says the EBP would take a first step toward its audacious goal by focusing on eukaryotes—the group of organisms that includes all plants, animals, and single-celled organisms such as amoebas.

[...] Many details about the EBP are still being worked out. But as currently proposed, the first step would be to sequence in great detail the DNA of a member of each eukaryotic family (about 9000 in all) to create reference genomes on par with or better than the reference human genome. Next would come sequencing to a lesser degree a species from each of the 150,000 to 200,000 genera. Finally, EBP participants would get rough genomes of the 1.5 million remaining known eukaryotic species. These lower resolution genomes could be improved as needed by comparing them with the family references or by doing more sequencing, says EBP co-organizer Gene Robinson, a behavioral genomics researcher and director of the Carl R. Woese Institute for Genomic Biology at the University of Illinois in Urbana.


Original Submission

posted by Fnord666 on Sunday February 26 2017, @05:23AM   Printer-friendly
from the expensive-paperweights dept.

China has outlined plans for an upcoming flight to the Moon, The Guardian reports:

The spacecraft will consist of four distinct parts: a lander and an ascender, an orbiter and a returner. The lander will descend to the surface of the Moon, collect the samples and place them in the ascender. This will launch and rendezvous with the orbiter and returner, all of which will then journey back towards Earth.

The samples will be transferred to the returner, which will detach from the orbiter and re-enter the Earth's atmosphere.

Chang'e-5 is expected to be launched in November 2017.



Original Submission

posted by Fnord666 on Sunday February 26 2017, @03:51AM   Printer-friendly
from the the-good-thing-about-standards dept.

The ITU has announced draft technical requirements for 5G mobile technology:

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU's draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that's underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

[...] Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).
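Some back-of-envelope arithmetic on those figures (ours, not the ITU's) shows how demanding they are — hitting 20Gbps in the available spectrum implies very high spectral efficiency, which in practice means aggregating many spatial streams:

```python
# Back-of-envelope division on the numbers quoted above.
peak_downlink_bps = 20e9     # 20 Gbps per 5G cell
max_spectrum_hz = 1e9        # up to 1 GHz of spectrum where feasible
min_spectrum_hz = 100e6      # carriers must have at least 100 MHz

# Aggregate spectral efficiency needed to hit 20 Gbps:
eff_best = peak_downlink_bps / max_spectrum_hz    # 20 bit/s/Hz with a full GHz
eff_worst = peak_downlink_bps / min_spectrum_hz   # 200 bit/s/Hz on 100 MHz alone

print(eff_best, eff_worst)

# The latency target is a 5x improvement over LTE's ~20 ms:
print(20 / 4)
```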


Original Submission

posted by cmn32480 on Sunday February 26 2017, @02:26AM   Printer-friendly
from the what-ever-happened-to-honest-peer-review dept.

47.2% of a group of 284 researchers sanctioned by the U.S. Office of Research Integrity for misconduct (such as plagiarism or falsifying data) between 1992 and 2016 continued to be involved in research. 8% went on to receive National Institutes of Health funding:

Many believe that once a scientist is found guilty of research misconduct, his or her scientific career is over. But a new study suggests that, for many U.S. researchers judged to have misbehaved, there is such a thing as a second chance. Nearly one-half of 284 researchers who were sanctioned for research misconduct in the last 25 years by the Department of Health and Human Services (HHS), the largest U.S. funder of biomedical research, ultimately continued to publish or work in research in some capacity, according to a new analysis. And a small number of those scientists—17, to be exact—went on to collectively win $101 million in new funding from the National Institutes of Health (NIH).

Those numbers "really surprised" Kyle Galbraith, research integrity officer at the University of Illinois in Urbana and author of the new study [DOI: 10.1177/1556264616682568] [DX], published earlier this month by the Journal of Empirical Research on Human Research Ethics. "I knew from my work and reading other studies that careers after misconduct were possible. But the volume kind of shocked me," he says.

Is it ethical to keep empirical research on human research ethics behind a paywall?


Original Submission

posted by on Sunday February 26 2017, @01:52AM   Printer-friendly
from the not-actually-an-NCommander-post dept.

Okay, I know it's been a long time since we did one of these but life does intrude on volunteer dev time. Hopefully this one will be worth the wait. Bear with me if I seem a bit off today, I'm writing this with a really fun head cold.

First, what didn't make it into this update but is directly upcoming. Bitpay is still down on account of them changing the API without notifying existing customers or versioning the new API and leaving the old one still up and functional. It's the first thing I'm going to work on after we get this update rolled out but it will basically require a complete rewrite. Don't expect it any earlier than two months from now because we like to test the complete hell out of any code that deals with your money.

Also, adding a Jobs nexus didn't quite make the cut because we're not entirely sure how/if we want to work it. One thing we are certain about: it would not be for headhunters or HR drones to spam us silly but for registered members who have a specific vacancy they need to fill and would like to throw it open to the community.

The API still has some broken bits but it's been low priority compared to what I've been busy with. I'm thinking I'll jump on it after Bitpay unless paulej72 cracks the whip and makes me fix bugs/implement features instead.

There were several other things that I had lined up for post-Bitpay but I can't remember them just now what with my head feeling like it's stuffed full of dirty gym socks.

Now let's throw the list of what did make it out there and go over it in more detail afterwards.

  • Tweaked the themes a bit where they were off.
  • Changed or fixed some adminy/editory stuff that most of you will never see or care about.
  • Fixed a mess of minor bugs not worth noting individually.
  • Improved Rehash installation. It should almost be possible to just follow directions and have a site working in an hour or two now.
  • Added a very restrictive Content Security Policy.
  • Added a link to the Hall of Fame. It was always there, just not linked to.
  • Return to where you just moderated after moderating. (yay!)
  • Return to where you just were after commenting. (yay some more!)
  • Added a field for department on submissions. Editors get final say but if you have a good one, go for it.
  • Added a Community Reviews nexus.
  • Added a Politics nexus.
  • Added <spoiler> tags for the Reviews nexus in case you want to talk about a novel without ruining it for everyone else. They function everywhere though.
  • Changed really freaking long comments to have a scrollbar now instead of being click-to-show.
  • Massively sped up comment rendering on heavily commented stories.
  • Dimming of comments you've already read. (You can turn this off with the controls on the "Comments" tab of your preferences page if it annoys you.)
  • Added a "*NEW*" badge to new comments in case you don't like dimming but still want to easily see new posts. (Disable it the same place as above.)
  • Removed Nested, Threaded, and Improved threaded comment rendering modes (Necessary due to the changes required for the massive speed-up)
  • Added Threaded-TOS and Threaded-TNG comment rendering modes. (TOS is the default)
  • All comment modes now feature collapsible/expandable comments. (Without javascript)

Morning Update: Really digging the constructive criticism. Some quality thoughts in there. Keep them coming and we'll see how fast we can get a few done. --TMB


Before the specifics, I know some of you are going to see the new Threaded modes and be like "that's pretty awesome" and some of you are going to call us dev types very bad names. Well, this ain't the other site. We're not saying "You Shall Use This Because It's New And Shiny". We're saying something had to be done about page load times approaching a full minute on heavily trafficked stories and the way we pulled and rendered comments made up nearly all of that time.

So, the first thing we did was we stopped pulling every single comment and then removing the ones we didn't want to display. Mostly that means that the comment counts in the dropdown menus for Threshold and Breakthrough are on a per-page basis now.

Next we did away with templates for comments. Wildcarded, case insensitive search and replace, even in perl, is horribly slow and that's a large part of how templates worked. The html and related logic is now hardcoded into the source. This did mean though that we had to entirely rewrite all the comment modes logic. Flat and Threaded-TOS are pretty much identical to the old Flat and Threaded, so there shouldn't be any surprises there except that we got rid of the javascript in Improved Threaded and gave every mode collapsible comments with nothing but CSS. Threaded-TNG is new-ish however. It's essentially Nested but without Threshold or every top-level comment being fully visible. If Nested users absolutely cannot live with that, we'll preempt working on the bitcoin rewrite and slap a Nested mode in as well. It shouldn't take but a week, testing included.

Third, we paginated every mode. I know it was nice being able to see every comment on one page but that meant pulling and rendering every comment and that simply doesn't work if a story has over a hundred comments.

The removal of sorting by score we can't roll back though. Its loss was a necessity due to the way we pull and sort only the comments that the user actually requests. Previously, we were pulling every single comment for a story and then removing the ones we didn't want. That was both bloody stupid and slow as hell, so it had to go. Unfortunately it means we have to do things slightly differently. It may make a triumphant return eventually but it would require some moderately tricky coding with the particular way our code is laid out.
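The before/after difference is essentially a change in query shape. Here is a minimal sketch in Python with SQLite — the table layout and function name are invented for illustration, and rehash itself is Perl on MySQL, but the idea is the same: filter and paginate in SQL instead of fetching every comment and discarding most of them afterwards:

```python
import sqlite3

# Hypothetical comments table standing in for rehash's real schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (cid INTEGER PRIMARY KEY, sid INTEGER, points INTEGER)")
db.executemany("INSERT INTO comments VALUES (?, ?, ?)",
               [(cid, 1, cid % 6 - 1) for cid in range(1, 301)])

def page_of_comments(sid, threshold, page, per_page=100):
    # Old way: SELECT everything for the story, then drop low-scored
    # rows in application code. New way: let the database do both the
    # filtering and the pagination, so at most per_page rows come back.
    return db.execute(
        "SELECT cid, points FROM comments "
        "WHERE sid = ? AND points >= ? "
        "ORDER BY cid LIMIT ? OFFSET ?",
        (sid, threshold, per_page, page * per_page)).fetchall()

rows = page_of_comments(sid=1, threshold=0, page=0)
print(len(rows))   # at most 100 rows pulled, never all 300
```

This is also why per-page dropdown counts and the loss of score sorting fall out of the change: the server never sees the full comment set for a story any more.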

Oh and if you have objections to the new Threaded modes, by all means bitch about specifics in comments here and we'll see what we can do to address them. After having spent so much time recently bashing on exactly these bits of code, we're quite familiar with them and changes/additions shouldn't take too terribly long to whip out.

Now to the specifics.

The buttons on the upper left of each comment don't work exactly like the Javascript version did but we do like how they work. The double chevron either shows or hides the comment tree beneath a comment but it does not change their collapsed/expanded state. The single chevron controls the expanded/collapsed state of each comment individually. Adding another button to expand/collapse every individual comment beneath a given comment may be doable but we haven't figured out how so far. It is high on the wish list but not high enough to delay the release any longer than it already has been.

Flat: Flat is still flat but now with a collapse/expand button that functions like the ones from Improved Threaded.

Threaded-TOS: If you can find significant differences between Improved Threaded and Threaded-TOS, let us know because it's probably a bug. The idea was to make it as much like Improved Threaded as technically possible with just CSS but paginated like Nested so we don't have to render more than 100 comments at a go. We defaulted everyone on Nested/Threaded/Improved threaded to Threaded-TOS to minimize the aggravation of unexpected change. Oh, and Breakthrough now takes precedence over Threshold, so high scoring comments will always be visible even if they're responding to blatant trolling.
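Our reading of that precedence rule, as a sketch — the function and its arguments are invented for illustration, not lifted from rehash:

```python
def visible(score, threshold, breakthrough, parent_hidden):
    # Assumed semantics: Threshold hides low-scored comments (and used
    # to hide their replies with them), while Breakthrough now overrides
    # that for high-scored replies anywhere in the tree.
    if score >= breakthrough:
        return True              # Breakthrough wins, even under a hidden parent
    if parent_hidden:
        return False
    return score >= threshold

# A +5 reply to a hidden troll stays visible with breakthrough=4:
assert visible(5, 0, 4, parent_hidden=True)
assert not visible(1, 0, 4, parent_hidden=True)
```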

Threaded-TNG: All comment trees start fully branched out but with the individual comments either expanded or collapsed. "Comment Below Threshold" functionality is gone. Breakthrough gets compared to a comment's score to decide if it gets expanded or collapsed. Play with it a couple minutes; it's not terribly hard to grok. Why do we need this mode if TOS covers most all of the best bits of the three old modes? Because I like it. You don't have to use it. Shut up.

What happened to Nested? What's old is new again. Threaded-TNG more or less is Nested but with the fun bits of Improved Threaded bolted on as well and without the annoyance of having to allow Javascript to run. Minus Threshold functionality. If you spot any serious differences between the two besides those, give us a heads up, because we didn't. It's a very easy mode to code on though, so if you absolutely cannot live without Threshold it's not at all difficult to clone it, add Threshold back in, and call it Nested.

Why not leave the old comment rendering modes in as well as the new ones? Because by rewriting them we got a rendering speed increase around a factor of two+, to go with the factor of two+ increase we got by pulling only the necessary comments instead of every last comment a story has with every page load. This has been becoming necessary as we increasingly go way above the 100 comment mark on busy stories. It's not cool for you lot to have to wait forty-five seconds to load a page of comments and it's even less cool to peg a cpu core for forty-five seconds to deliver it to you. If you ever again find a story that takes 10+s to load, something's going wrong and we'd appreciate a heads up. We think there's still some room in the code for improvement but this was the lowest-hanging fruit.

Now on to the rest of the details.

The Content Security Policy should cover what's required for operation of this site (plus allowing for Stripe payments) and nothing else. If your browser honors CSPs, it should not be possible to get smacked with XSS or inline script injection on this site any more; even if we write code buggy enough to allow it, which we have once or twice.
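The post doesn't quote the policy itself, but a restrictive policy in that spirit might look like the following. The directives and the Stripe host are illustrative assumptions, not the site's actual header:

```http
Content-Security-Policy: default-src 'self';
  script-src 'self' https://js.stripe.com;
  frame-src https://js.stripe.com;
  style-src 'self';
  img-src 'self' data:;
  object-src 'none';
  base-uri 'self'
```

The key point is the absence of 'unsafe-inline' in script-src: even if buggy code lets an attacker inject a script tag into a page, a conforming browser refuses to execute it.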

On dimmed comments... This only functions for logged in users currently as it would take some serious work to get it functioning for individual ACs, even using cookies. What it does is when you load a page of comments, it picks the highest comment ID from that story and marks that comment as read by you. Switching between pages of comments or changing your Threshold/sort order should not update which comments you have read, even if new ones have come in since your last read comment ID was set. Hitting the "Mark All as Read" button or hitting your browser's Refresh button on the main story page should take the stored comment ID and set the opacity to 60% on all the comments with a comment ID equal to or less than that. It's not entirely accurate but it's pretty damned close and it doesn't bloat the db much at all. Oh and read histories get wiped after two weeks of not being updated for a particular user/story combination to save on db space as well.
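A sketch of that bookkeeping — names and structures are invented here, the real code is Perl inside rehash — shows why it stays cheap: one small record per user/story pair rather than one per comment:

```python
import time

read_marks = {}    # (uid, sid) -> (highest cid marked read, last-touched time)
TTL = 14 * 86400   # histories expire after two weeks (expiry not enforced here)

def mark_read(uid, sid, cids):
    # Store only the highest comment ID from the page just loaded --
    # one tiny row per user/story instead of a row per comment read.
    prev, _ = read_marks.get((uid, sid), (0, 0))
    read_marks[(uid, sid)] = (max([prev] + list(cids)), time.time())

def is_dimmed(uid, sid, cid):
    # Dimmed to 60% opacity by the page CSS; anything newer than the
    # stored high-water mark stays at full opacity (or gets "*NEW*").
    last_read, _ = read_marks.get((uid, sid), (0, 0))
    return cid <= last_read

mark_read(uid=7, sid=42, cids=[101, 105, 103])
assert is_dimmed(7, 42, 105)       # at or below the high-water mark
assert not is_dimmed(7, 42, 106)   # newer comment: undimmed / badged
```

As the post says, this is a high-water mark rather than a per-comment log, so it's approximate — but close enough, and the two-week expiry keeps the table from growing without bound.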

The new comment badge works exactly opposite to comment dimming: it puts "* NEW *" in the title bar of comments you haven't read yet. It's there strictly for those who want the same functionality but dislike the aesthetics of comment dimming. You can technically use both if you really want new comments to stand out, but that would just be weird.

Returning to where you last moderated works like this. If you moderate one comment, you'll get sent back to that comment. If you moderate several in one go, you should get sent to the one farthest down the page. Moderating does not update the comment ID of what you've read for dimming purposes.
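Picking the return anchor after a batch of moderations amounts to taking the comment farthest down the page, i.e. the highest comment ID you just moderated. A trivial sketch (hypothetical function name):

```python
def moderation_anchor(moderated_cids: list):
    """Return the comment to jump back to after moderating: the one
    farthest down the page (highest ID), or None if nothing was moderated."""
    return max(moderated_cids) if moderated_cids else None
```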

Returning to where you just made a comment? That's pretty self-explanatory. It also should not update the comment ID of what you've read for dimming purposes.

The Politics nexus. This does not mean we're looking to have even more political stories. The balance of tech/science/etc... to political stories is not going to change nor will the quality of accepted political submissions. It's primarily a way to let people who are sick and bloody tired of seeing politics here set a preference and never see political stories again. It's also handy if you wish to see what political stories we've run recently as clicking on the nexus link on the left of the page will show you only those stories.

The Reviews nexus has been brought up on three separate occasions that I can remember, by different groups of people, so we decided to go ahead with it. It's going to be a place for book/film/software/hardware/etc... reviews and discussion. As I understand it, though I'm not really involved, it's getting its own space because some folks wanted to start what amounts to a site book club. Tech books will of course be welcome, but it's open to all genres of printed and bound words. Ditto non-book reviews. Just don't send in a review of something we wouldn't normally publish news about on the site. Not enough people are going to be interested in your review of the barber shop down the street from your house, so it won't get published.

Spoiler tags, <spoiler>text you don't want casually seen</spoiler>, work in both stories and comments. They're just a bit of CSS trickery that hides the text between them until the viewer hovers over the *SPOILER* text. There's a slight delay, so don't think it's broken because it isn't immediate; that's intentional, so you don't accidentally reveal the contained text by briefly mousing across it.
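The trickery is roughly this kind of rule set (illustrative class names, not the site's actual stylesheet; the `transition-delay` is what supplies the deliberate pause mentioned above):

```css
/* Hide spoiler text until hover; the transition-delay keeps a brief,
   accidental mouse-over from revealing it. Class names are illustrative. */
.spoiler .hidden-text {
  opacity: 0;
  transition: opacity 0.2s ease;
  transition-delay: 0.5s;
}
.spoiler:hover .hidden-text {
  opacity: 1;
}
```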

By popular demand, <del> tags were also added.

That's everything worth mentioning in this site update. Look for another one, hopefully in late April or May. If you find any bugs, please file them as issues on our GitHub repo or email them to dev@soylentnews.org.

posted by takyon on Sunday February 26 2017, @12:53AM   Printer-friendly
from the murder-she-heard dept.

Amazon is balking at a search warrant seeking cloud-stored data from its Alexa Voice Service. Arkansas authorities want to examine the recorded voice and transcription data as part of a murder investigation. Among other things, the Seattle company claims that the recorded data from an Amazon Echo near a murder scene is protected by the First Amendment, as are the responses from the voice assistant itself.

Amazon said that the Bentonville Police Department is essentially going on a fishing expedition with a warrant that could chill speech and even the market for Echo devices and competing products. In a motion to quash the subpoena, the company said that because of the constitutional concerns at issue, the authorities need to demonstrate a "compelling need" for the information and must exhaust other avenues to acquire that data.

[...] According to the warrant, Bentonville authorities are seeking "audio recordings, transcribed records, or other text records related to communications and transactions" between the Echo device and Amazon's servers during the 48-hour period covering November 21-22, 2015. Amazon said the authorities should, at a minimum, establish "a heightened showing of relevance and need for any recordings" before a judge allows the search.

[...] The warrant at issue concerns the 2015 death of former Georgia police officer Victor Collins. He was found dead in a hot tub at the Bentonville home of James Andrew Bates, who claimed the death was an accidental drowning. Arkansas police believe Collins died after a struggle. They suspect that the Amazon Echo they found streaming music near the hot tub may help solve the case.

Source: Ars Technica. Also at BBC and TechCrunch.

Previously: Police Seek Amazon Echo Data in Murder Case


Original Submission