SoylentNews is people



posted by janrinok on Monday June 01 2015, @09:24PM   Printer-friendly
from the some-will-be-happy dept.

Microsoft has confirmed that Windows 10 will be released globally—for PCs and tablets only—on July 29 as a free upgrade for anyone running Windows 7 or Windows 8.1.

You have one year from July 29 to get your free upgrade to Windows 10, after which Microsoft will "continue to keep it current for the supported lifetime of the device—at no cost." There's no word on what the "supported lifetime" actually is, but it's probably some loose wording to prevent Microsoft from falling into the same trap as Windows XP. The company likely doesn't want to support Windows 10 on devices that are 10 or 15 years old.

The announcement blog post, penned by operating system chief Terry Myerson, is light on any further details. There's no guidance on the release date for Windows 10 Mobile, and it isn't clear how many of the seven Windows 10 SKUs will be released on July 29.


Original Submission

posted by janrinok on Monday June 01 2015, @07:28PM   Printer-friendly
from the plot-this dept.

I wasn't aware of the GNU Octave project until I saw a post on Reddit that it had hit version 4.0.0. If you're not familiar with it either, here's a brief overview:

GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation.

So why is this exciting? Aside from a Windows installer for all you people too lazy to switch to GNU/Linux, it apparently finally got a GUI (kind of a must for "modern" software):

Octave 4.0 is a major new release with many new features, including a graphical user interface, support for classdef object-oriented programming, better compatibility with Matlab, and many new and improved functions.

You can also get the full list of user-visible changes here.
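
For those who haven't used it: Octave's bread and butter is numerical linear algebra, where solving A x = b is the one-liner `x = A \ b`. As a rough illustration of the same computation without Octave installed, here is the 2x2 case in plain Python via Cramer's rule:

```python
# In Octave, solving the linear system A x = b is simply:  x = A \ b
# Here is the equivalent 2x2 computation in plain Python, using
# Cramer's rule, just to show the kind of work Octave is built for.
A = [[3.0, 1.0],
     [1.0, 2.0]]
b = [9.0, 8.0]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # determinant of A
x = [(b[0] * A[1][1] - A[0][1] * b[1]) / det,      # Cramer's rule, column 1
     (A[0][0] * b[1] - b[0] * A[1][0]) / det]      # Cramer's rule, column 2
print(x)  # [2.0, 3.0]
```

In Octave itself, the same answer drops out of `[3 1; 1 2] \ [9; 8]`, with no explicit formula needed.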

Share and enjoy!

posted by janrinok on Monday June 01 2015, @05:35PM   Printer-friendly
from the driving-them-away dept.

While Uber Technologies Inc. and Carnegie Mellon University announced a partnership to develop autonomous car technology in February, Uber's actions earlier in the year have left Carnegie Mellon's robotics research in jeopardy:

Carnegie Mellon University is scrambling to recover after Uber Technologies Inc. poached at least 40 of its researchers and scientists earlier this year, a raid that has left one of the world's top robotics research institutions in a crisis.

Uber envisions autonomous cars that could someday replace its tens of thousands of contract drivers. With virtually no in-house capability, the San Francisco company went to the one place in the world with enough talent to build a team instantly: Carnegie Mellon's National Robotics Engineering Center.

Flush with cash after raising more than $5 billion from investors, Uber offered some scientists bonuses of hundreds of thousands of dollars and a doubling of salaries to staff the company's new tech center in Pittsburgh, according to one researcher at NREC.

The hiring spree in January and February set off alarm bells. Facing a massive drain of talent and cash, Herman Herman, the newly elevated director of the NREC, made a presentation May 6 to staff to explain the situation and seek ideas on how to stabilize the center, according to documents reviewed by The Wall Street Journal.

The short presentation at the school here laid out the issues. In all, Uber took six principal investigators and 34 engineers. The talent included NREC's director, Tony Stentz, and most of the key program directors. Before Uber's recruiting, NREC had more than 100 engineers and scientists developing technology for companies and the U.S. military.


Original Submission

posted by janrinok on Monday June 01 2015, @03:53PM   Printer-friendly
from the make-that-telephone-call-now! dept.

Key sections of the USA PATRIOT Act expired

According to the AP, reporting at exactly midnight June 1, the sunset clause of sections 215 et al. has gone into effect, causing those sections to expire.

This link has the rest:

http://www.usnews.com/news/politics/articles/2015/05/31/senate-meets-with-key-patriot-act-provisions-on-the-ropes

NSA Bulk Phone Records Collection Expires

Phoenix666 writes:

The Senate failed to pass legislation late Sunday to extend three Patriot Act surveillance measures ahead of their midnight expiration. The National Security Agency's bulk telephone metadata collection program—first exposed by Edward Snowden in 2013—is the most high-profile of the three spy tools whose legal authorization expired.

[...] "Are we willing to trade liberty for security?" asked Sen. Rand Paul (R-KY), perhaps the most vocal opponent of the legislation. Despite an apparent victory, Paul had no illusions that this fight for privacy would end after these specific extension talks. "The Patriot Act will expire tonight, but it will only be temporary," he added.

Sen. Dan Coats (R-IN) said it was time to stand up to terrorists and make "sure that we're doing everything we can to protect Americans from threats of people and a lot of organizations that want to kill us all, that would like to see us—see our heads on the chopping block."

After news of the imminent expiration broke, the American Civil Liberties Union quickly weighed in. "Congress should take advantage of this sunset to pass far-reaching surveillance reform, instead of the weak bill currently under consideration," the group said.

http://arstechnica.com/tech-policy/2015/05/senate-impasse-nsa-spy-tactics-including-phone-records-collection-expiring/


Original Submission-1   Original Submission-2

posted by cmn32480 on Monday June 01 2015, @01:58PM   Printer-friendly
from the paper-or-electronic dept.

The news remains mostly bleak for the American newspaper industry, struggling over the past decade to adapt to the new digital landscape. The sale of the San Diego Union-Tribune in early May for $85 million underscored the horrific slump in the value of "old media" companies in recent years. Although the sum paid by Tribune Publishing was only marginally below the $110 million in a 2011 sale of the San Diego group and excluded some valuable real estate, the newspaper was believed to be worth as much as $1 billion as late as 2004.

The story is the same at other once-proud US metropolitan dailies: according to the Pew Research Center, valuations are down by more than 90 percent from their peaks at the Boston Globe, Philadelphia Inquirer, Chicago Sun-Times and Minneapolis Star-Tribune.

While newspapers are trying to get readers with digital subscriptions and mobile apps, they are swimming against a powerful tide. For the US daily newspaper sector over the past decade, weekday circulation has fallen 17 percent and ad revenue more than 50 percent, according to Pew. And in 2014, three big media companies decided to spin off newspapers to focus on more profitable broadcast or digital properties.

http://phys.org/news/2015-05-newspapers-struggle-path-digital-age.html

Will there ever be a revival of the newspaper industry or is it gone forever? Or, will there be an uneasy equilibrium between digital news media and newspapers? What does SN think?


Original Submission

posted by mrcoolbp on Monday June 01 2015, @12:34PM   Printer-friendly
from the big-thank-you dept.

I am happy to announce we have reached our funding goal of $4500 for the first half of this year! From the bottom of our hearts, a big thank you to everyone for all your support. There were a few who chose to pay a lot more than we ever suspected, they know who they are, and I would literally like to buy them a beer.

Continuing with the good news, our goal for July-December is less than half of our first-half goal, and we are a month ahead of schedule — details to follow on that. Though we won't all be dining on champagne and caviar, we are hopeful we will be able to continue paying the bills for the foreseeable future. For now, I just want to say that you are all one awesome community and that you continue to surprise and inspire us; we'll keep doing what we can to make this place the best it can be.

posted by cmn32480 on Monday June 01 2015, @10:47AM   Printer-friendly
from the nuke-it-from-orbit dept.

Steve Cochi is a 63-year-old physician and epidemiologist who thinks it's time to totally wipe out measles:

[F]or the past 25 years, Cochi has been pushing one of the boldest—and some might venture foolhardy—ideas in public health. He wants the world to undertake a huge new effort to eradicate measles. Not just tame the virus or control the outbreaks re-surging across the globe, but to obliterate it, wipe it off the face of the earth, as has only been done once for a human pathogen, smallpox, in 1977, and as the world fervently hopes will happen soon with polio.

Measles is the most contagious virus on Earth, infecting virtually everyone who is not vaccinated.

It would cost a lot of money. And a large percentage of people, when presented with the idea, think measles is not worth the cost or the effort because, in their opinion, measles is only a nuisance. Indeed, the CDC has stated that measles was eliminated in the US in the year 2000. Subsequent outbreaks earlier this year served as a brief wake-up call, but nobody died, and people have largely written them off and attributed them to anti-vaxxers.

But more than half of the estimated 10 million infected with measles each year in the developing world fare far worse. The virus suppresses the body's defense system, especially in those already immune-compromised or with malnutrition or vitamin A deficiency, leaving them vulnerable to secondary bacterial infections. The problems are compounded by a lack of health care. Pneumonia is the most common cause of death; diarrhea and dehydration is a close second. Measles is one of the top five preventable causes of blindness. Deafness is common. Inflammation of the brain can cause seizures and sometimes permanent brain damage. In poor countries, the fatality rate is 2% to 15%, soaring to 25% in the worst outbreaks.

In 2013, there were 145,700 measles deaths globally – about 400 deaths every day or 16 deaths every hour.

The article appearing on Science Mag's site outlines the problems involved, and the heartbreak of having Polio almost beaten, only to see it linger. It has a full discussion on why it should be doable, and why there are pitfalls.


Original Submission

posted by martyb on Monday June 01 2015, @08:53AM   Printer-friendly
from the needed-to-open-all-hailing-frequencies dept.

An update at the Planetary Society homepage is reporting that the LightSail has reopened communications following a suspected software glitch.

"Based upon the on-board timers contained within the beacon (and comparing them to beacons following deployment), it appears that a reboot occurred within the past day," wrote Georgia Tech professor David Spencer, LightSail's mission manager.

[...] LightSail is not out of the woods yet. Its exact position remains fuzzy, complicating two-way communication.

This is an update to the previous article on the LightSail software problem.

posted by CoolHand on Monday June 01 2015, @07:32AM   Printer-friendly
from the diabetic-diseases dept.

New strategy to halt HIV growth: block its sugar and nutrient pipeline. HIV has a voracious sweet tooth, which turns out to be its Achilles' heel, reports a new study from Northwestern Medicine and Vanderbilt University.

After the virus invades an activated immune cell, it craves sugar and nutrients from the cell to replicate and fuel its wild growth throughout the body.

Scientists discovered the switch that turns on the immune cell's abundant sugar and nutrient pipeline. Then they blocked the switch with an experimental compound, shutting down the pipeline, and, thereby, starving HIV to death. The virus was unable to replicate in human cells in vitro.

The discovery may have applications in treating cancer, which also has an immense appetite for sugar and other nutrients in the cell, which it needs to grow and spread.

http://www.northwestern.edu/newscenter/stories/2015/05/hivs-sweet-tooth-is-its-downfall.html

[Abstract]: http://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1004864


Original Submission

posted by NCommander on Monday June 01 2015, @07:17AM   Printer-friendly
from the that-sucked dept.

This was by far one of the most painful upgrades we've ever done to this site, and it resulted in nearly three hours of downtime. Even as of this writing, we're still not back to 100% due to unexpected breakage that did not show up in dev. As I need a break from trying to debug rehash, let me write up what's known, what's new, and what went pear-shaped.

Rehash 15.05 - What's New

  • Rewrote large amounts of the site to migrate to Apache 2, mod_perl 2, and perl 5.20.
    • This was a massive undertaking. I did a large part of the initial work, but paulej72 and TheMightyBuzzard did lots to help fix the lingering issues. Major props to Bytram for catching many of the bugs pre-release.
  • Nexus Support (finally).
    • Currently we have the Meta and Breaking News nexii, with the possibility of adding more in the future, such as a Freshmeat replacement.
    • Nexii can be filtered in the user control panel under the Homepage tab. At the moment, this functionality is hosed due to unexpected breakage, but it should be functional within the next 24-48 hours.
  • IPv6 support - the AAAA record is live as we speak
  • Themes can be attached to a nexus independent of the "primary theme" setting; user choice overrides this
  • Squashed More UTF-8 Bugs
  • Migration to MySQL Cluster (more on this below)
  • Rewrote site search engine to use sphinx search and (in general) be more useful
  • Long comments now collapse properly
  • Support for SSL by default (not live yet)
  • Fault tolerance; the site no longer explodes into confetti if a database or web frontend goes down unexpectedly; this allows for much easier system maintenance, as we can offline things without manual migration of services
  • Improved editor functionality, including per-article note block
  • Lots of small fixes everywhere, due to the extended development cycle
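
The fault-tolerance item above boils down to "try the next backend instead of dying." Here's a hypothetical sketch of that idea in Python (rehash itself is Perl, and its real reconnect logic differs; all names below are invented for illustration):

```python
import time

# Hypothetical illustration of failover: instead of hanging when one
# database backend disappears, walk the list of backends and only give
# up after several full passes. None of this is rehash's actual code.
def query_with_failover(backends, run_query, retries=3, delay=0.1):
    last_error = None
    for attempt in range(retries):
        for backend in backends:
            try:
                return run_query(backend)       # success: return the result
            except ConnectionError as err:
                last_error = err                # backend is down; try the next
        time.sleep(delay * (2 ** attempt))      # back off before a fresh pass
    raise RuntimeError("all database backends unavailable") from last_error
```

With a scheme like this, taking one database offline for maintenance simply shifts queries to the surviving backends rather than wedging httpd.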

I want to re-state that this upgrade is by far the most invasive one we've ever done. Nearly every file and function in rehash had to be modified due to changes in the mod_perl infrastructure, and more than a few ugly hacks had to be written to emulate the original API in places. We knew going into this upgrade it was going to be painful, but we had a load of unexpected hiccups and headaches. Even as I write this, the site is still limping due to some of that breakage. Read more past the break for a full understanding of what has been going on.

Understanding The Rewrite (what makes rehash tick)

Way back at golive, we identified quite a few goals that we needed to reach if we wanted the site to be maintainable in the long run. One of these was getting to modern versions of Apache and perl; slashcode (and rehash) are tightly tied to the Apache API for performance reasons, and historically only ran against Apache 1.3 and mod_perl 1. This put us in the unfortunate position of having to run on a codebase that had long been EOLed when we launched in 2014. We took precautions to protect the site, such as running everything through AppArmor and trying to adhere to the smallest set of permissions possible, but no matter how you looked at it, we were stuck on a dead platform. As such, this was something that *had* to get done for the sake of maintainability, security, and support.

This was further complicated by a massive API break between mod_perl 1 and 2, with many (IMHO) unnecessary changes to data structures and such, which meant the upgrade was an all-or-nothing affair. There was no way we could piecemeal-upgrade the site to the new API. We had a few previous attempts at this port, all of them going nowhere, but over a long weekend in March, I sat down with rehash and our dev server, lithium, and got to the point where the main index could be loaded under mod_perl 2. From there, we tried to hammer down whatever bugs we could, but we were effectively maintaining both the legacy slashcode codebase and the newer rehash codebase. Due to limited development time, most of the bug fixes and such were placed on rehash once it reached a state of functionality, and these would be shoehorned in with the stack of bugs we were fixing. I took the opportunity to try to clear out as many of the long-standing wishlist bugs as possible, such as IPv6 support.

In our year and a half of dealing with slashcode, we had also identified several pain points; for example, if the database went down even for a second, the site would lock up, and httpd would hang to the point that it was necessary to kill -9 the process. Although slashcode has support for the native master-slave replication built into MySQL, it had no support for failover. Furthermore, MySQL's native replication is extremely lacking in the area of reliability. Until very recently, there was no support for dynamically changing the master database in case of failure, and the manual process is exceedingly slow and error-prone. While MySQL 5.6 has improved the situation with global transaction IDs (GTIDs), it still requires code support in the application to handle failover, and a specific monitoring daemon to manage the process, in effect creating a new single point of failure. It also continues to lack any functionality to heal or otherwise recover from replication failures. In my research, I found that there were simply bad and worse options with vanilla MySQL for handling replication and failover. As such, I started looking seriously into MySQL Cluster, which adds multi-master replication to MySQL at the cost of some backwards compatibility.

I was hesitant to make such a large change to the system, but short of rewriting rehash to use a different RDBMS, there weren't a lot of options. After another weekend of hacking, dev.soylentnews.org was running on a two-system cluster, which provided the basis for further development. This required removing all the FULLTEXT indexes in the database and rewriting the entire search engine to use Sphinx Search. Unfortunately, there's no trivial way to migrate from vanilla MySQL to cluster. To prevent a long story from getting even longer: to perform the migration, the site would have to be offlined, a modified schema would have to be loaded into the database, and then the data re-imported in two separate transactions. Furthermore, MySQL Cluster needs to know in advance how many attributes and such are being used in the cluster, adding another tuning step to the entire process. This quirk of cluster caused significant headache when it came time to import the production database.
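
For reference, the attribute ceiling mentioned above is one of several cluster-wide limits set in the NDB management server's config.ini. The fragment below is purely illustrative (the numbers are not our production values):

```ini
[ndbd default]
NoOfReplicas=2            # two data-node replicas per node group
DataMemory=2G             # in-memory table data
IndexMemory=512M          # hash index storage
MaxNoOfAttributes=10000   # raise this before importing a large schema
MaxNoOfTables=1024
```

Unlike vanilla MySQL, these limits are fixed at cluster start, which is why an import can fail partway through if they were sized too small.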

Understanding Our Upgrade Process

To understand why things went so pear-shaped on this cluster**** of an upgrade, a little information is needed on how we do upgrades. Normally, after the code has baked for a while on dev, our QA team (Bytram) gives us an ACK when he feels it's ready. If the devs feel we're also up to scratch to deploy, one person, usually me or Paul, will push the update out to production. Normally, this is a quick process: git tag/pull and then deploy. Unfortunately, due to the massive amount of infrastructure changes required by this upgrade, more work than normal would be required. In preparation, I readied our old web frontend, hydrogen, which had been down for an extended period following a system breakage, to take the new perl, Apache 2, etc., and loaded a copy of rehash. The upgrade would then just be a matter of moving the database over to cluster, changing the load balancer to point to hydrogen, and then upgrading the current web frontend, fluorine. At 20:00 EDT, I offlined the site to handle the database migration, dumping the schema and tables. Unfortunately, the MaxNoOfAttributes and other tuning variables were too low to handle two copies of the database, so the initial import failed. Due to difficulty with internal configuration changes and other headaches (such as forgetting to exclude CREATE TABLE statements from the original database dump), it took nearly two hours to simply begin importing the 700 MiB SQL file, and another 30 or so minutes for the import to finish. I admit I nearly gave up the upgrade at this point, but was encouraged to soldier on. In hindsight, I could have tested this procedure better and gotten all the snags out of the way prior to the upgrade; the blame for the extended downtime lies solely with me. Once the database was updated, I quickly got the mysqld frontend on hydrogen up and running, as well as Apache 2, only to learn I had more problems as the site returned to the internet nearly three hours later.

What I didn't realize at the time was that hydrogen's earlier failure had not been resolved as I thought, and it gave truly abysmal performance, with 10+ second page loads. As soon as this was realized, I quickly pressed fluorine, our 'normal' frontend server, into service, and site performance went from horrific to merely bad. A review of the logs showed that some of the internal caches used by rehash were throwing errors; this wasn't an issue we had seen on dev, and it was causing excessive amounts of traffic to go to the database and Apache to hang as the system tried to keep up with the load. Two hours of debugging have yet to reveal the root cause of the failure, so I've taken a break to write this up before digging into it again.
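
To see why a cache that never populates hurts so much, here is a hypothetical sketch in Python (rehash is Perl, and all names below are invented): with the cache working, a hundred reads cost one database query; with it broken, every read falls through to the database.

```python
import time

# Illustrative only -- not rehash's actual caching code. A simple
# TTL cache: hits skip the database entirely; misses cost one query.
class SimpleCache:
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self.store = {}                        # key -> (value, timestamp)

    def get(self, key, load_from_db):
        entry = self.store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                    # cache hit: DB untouched
        value = load_from_db(key)              # cache miss: one DB query
        self.store[key] = (value, time.monotonic())
        return value

db_calls = 0
def load_from_db(key):
    global db_calls
    db_calls += 1                              # count trips to the database
    return f"article-{key}"

cache = SimpleCache()
for _ in range(100):
    cache.get(42, load_from_db)
print(db_calls)  # 1
```

If `self.store` never populated (the failure mode we hit), that counter would read 100 instead of 1, which is roughly the extra load that was hammering the database.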

The End Result

As I write this, site performance remains fairly poor, as the server is excessively hammering the database. Several features that worked on dev went snap when the site was rolled out to production, and I find myself feeling that I'm responsible for hosing the site. I'm going to keep working for as long as I can stay awake to try to fix as many issues as I can, but it may be a day or two before we're back to business as usual. I truly apologize to the community; this entire site update has gone horribly pear-shaped, and I don't like looking incompetent. All I can do now is try to pick up the pieces and get us back to where we were. I'll keep this post updated.

~ NCommander

posted by martyb on Monday June 01 2015, @05:55AM   Printer-friendly
from the Pandora's-Box dept.

A precision digital weapon reportedly created by the US and Israel to sabotage Iran’s nuclear program had a fraternal twin that was designed to attack North Korea’s nuclear program as well, according to a new report.

The second weapon was crafted at the same time Stuxnet was created and was designed to activate once it encountered Korean-language settings on machines with the right configuration, according to Reuters. But the operation ultimately failed because the attackers were unable to get the weapon onto machines that were running Pyongyang’s nuclear weapons program.

WIRED reported back in 2010 that such an operation against North Korea would be possible in light of the fact that some of the equipment used by the North Koreans to control their centrifuges—the devices used to turn uranium hexafluoride gas into nuclear-bomb-ready fuel—appeared to have come from the same firms that outfitted the Iranian nuclear program.

http://www.wired.com/2015/05/us-tried-stuxnet-north-koreas-nuclear-program/

Related: North Korean Defector Warns that Hackers Could Kill.


Original Submission

posted by NCommander on Monday June 01 2015, @05:50AM   Printer-friendly
Site upgrades are still in progress: I apologize for the extreme downtime involved with this update, there were unexpected complications. Site performance may be wonky until we're done.

NC adds: Right now, one of the site's internal caches is failing to populate which appears to be the cause of most of the lag. This issue did not show up in dev during our testing, and we can't reproduce it there, so I'm debugging on production to try and run it down. Furthermore, when setting up nexii for this article, I accidentally deleted it, and didn't realize it until it was purged. Oops.

Quick note: we'll be upgrading the site to rehash starting at 20:00 EST, which requires large changes to both the frontend and backend setup. There will be sporadic downtime as we get through this migration. A full changelog will be posted after the upgrade. Sorry for any inconvenience.

~ NCommander.
[Update: Site upgrade under way; things are still not completely done but should be generally usable. If you find any issues, please join us on IRC in channel #dev (ideally) or post a comment to this story.]
posted by martyb on Monday June 01 2015, @03:23AM   Printer-friendly
from the reason-unknown dept.

Blogger and Linux advocate Robert Pogson reports that, according to StatCounter, pageviews from machines running Linux in Bahrain jumped from 2 percent to 16 percent in less than a week.
One wonders just what's going on there.

His other graph shows that for the last 3 years there has been an uptick in worldwide Linux usage each April; that increase sustains[1] for several months then drops to a level that is slightly higher than the numbers of the previous March and begins a gentle climb until April.

[1] He notes an uncharacteristic divot in the curve this May.

We previously discussed significant Linux usage in Finland and Uruguay (Finland: Torvalds' Homeland is using Linux to be Productive).


Original Submission

posted by martyb on Monday June 01 2015, @12:52AM   Printer-friendly
from the ἔρως-φιλία-ἀγάπη dept.

“Dan” seems at first to perfectly embody that popular object of scorn these days in San Francisco: the privileged tech worker. He’s a developer-turned-manager at a thriving startup, the type of guy you would expect to see dodging protesters at a Google bus stop or evicting low-income tenants in order to build his dream condo. But beyond that veneer of untouchable privilege, there is a soft underbelly. He’s a 40-year-old virgin, and his troubles with women are bad enough that he’s sought out a sex therapist for help.

This is in part a result of techies’ higher-than-average salaries, which allow them to pay for therapy, particularly when it comes to non-traditional counseling that isn’t covered by insurance. There’s something else at play here, though: In general, tech workers are more vulnerable to issues around love and intimacy, according to several local sex therapists I’ve interviewed. The reasons for this are wide-ranging, but in Dan’s particular case, it resulted from being tagged as a prodigy at a young age. He excelled in science and was encouraged to pursue it to the exclusion of all else.

The men, like Dan, who are coming to see her have been hindered by the very thing that allows them to excel in their field. “There is a very strong reinforcement [in tech] on using your brain,” says McGrath. “Your brain is what’s of value.” But when it comes to sex, she says, “our brains are bullshit.”


Original Submission

posted by martyb on Sunday May 31 2015, @10:22PM   Printer-friendly
from the what-a-wicked-web-we-weave? dept.

From Hackernews: Project Jacquard

https://www.google.com/atap/project-jacquard/

A Google project about using conductive yarn in standard industrial looms. Sounds really interesting, I don't know if this is state of the art or what, but bring on the reactive-video t-shirts, mu-mus and hoodies!

Google and Levi's are Weaving Computers into your Clothes

Google’s Advanced Technology and Projects (ATAP) group is one of the most exciting divisions of any major technology company: It’s where Project Ara, Google's modular phone experiment, and Project Tango, Google's 3D-mapping tool, were born and are continuing to be incubated. Now, Google is shooting for the moon with another big idea—Project Jacquard.

Project Jacquard is an effort to invisibly incorporate computers into objects, materials, and clothing. Everyday items such as sweaters, jackets, and furniture will be turned into interactive surfaces that can be used as trackpads, buttons and more. The objects will receive information directly from the surface of the material used to build them, eliminating the need for bulky plastic or metal parts. The objects will then transmit information to a nearby smartphone or computer using low-powered Wi-Fi.

http://www.popsci.com/googles-levi%27s-computers-clothing-project-jacquard


Original Submission #1 and Original Submission #2
