

posted by janrinok on Wednesday February 25 2015, @11:57PM
from the renewables-rock dept.

The Center for American Progress reports

As part of its first major retrofit in 30 years, two custom-designed wind turbines have started generating power for the Eiffel Tower. Located above the World Heritage Site's second level, about 400 feet off the ground, the sculptural wind turbines are now producing 10,000 kWh of electricity annually, equivalent to the power used by the commercial areas of the Eiffel Tower's first floor. The vertical axis turbines, which are capable of harnessing wind from any direction, were also given a custom paint job to further incorporate them into the iconic monument's 1,000-foot frame. At the same time they bring the image of the 1889 tower firmly into the 21st Century.

[...]In addition to the wind turbines, the renovation includes energy efficient LED lighting, high-performance heat pumps, a rainwater recovery system, and 10 square meters of rooftop solar panels on the visitor pavilion.

There was no required renewable energy target for the Eiffel Tower's facelift, but the project developers see it as a major landmark in Paris' climate plan. The city's plan (PDF) aims for a 25 percent reduction in greenhouse gas emissions, a 25 percent drop in energy consumption, and for 25 percent of energy to come from renewable energy sources by 2020.

posted by janrinok on Wednesday February 25 2015, @10:24PM
from the but-keep-stomping-them-bugs dept.

Thought other Soylents would be interested in this report I saw at IT World:

Google is scrapping Pwnium, its annual bug hunting event, and folding it into an existing year-round program in part to reduce security risks.

"If a security researcher was to discover a Pwnium-quality bug chain today, it's highly likely that they would wait until the contest to report it to get a cash reward," wrote Tim Willis of the Chrome security team. "This is a bad scenario for all parties. It's bad for us because the bug doesn't get fixed immediately and our users are left at risk."

Now, researchers who find bugs in Chrome products can submit them under the Chrome Reward Program, which has been around since 2010, Willis wrote.

Awards range from a minimum of US$500 up to $50,000, with an unlimited reward pool. But Willis cautioned that Google’s lawyers say the program is “experimental and discretionary” and could be cancelled or modified.

posted by janrinok on Wednesday February 25 2015, @09:01PM
from the is-a-cherub-also-child-pornography? dept.

Google has announced their new Adult Content Policy for Blogger...

Starting March 23, 2015, you won't be able to publicly share images and video that are sexually explicit or show graphic nudity on Blogger.

Note: We’ll still allow nudity if the content offers a substantial public benefit, for example in artistic, educational, documentary, or scientific contexts.

Changes you’ll see to your existing blogs:

If your existing blog doesn’t have any sexually explicit or graphic nude images or video on it, you won’t notice any changes.

If your existing blog does have sexually explicit or graphic nude images or video, your blog will be made private after March 23, 2015. No content will be deleted, but private content can only be seen by the owner or admins of the blog and the people who the owner has shared the blog with.

They also explain how a blog can be exported, presumably for use should you wish to change hosts.

https://support.google.com/blogger/answer/6170671?p=policy_update&hl=en&rd=1

Unfortunately, one man's art is another man's porn, so if you run a photography blog or just have images taken on the beach during your holidays, you might want to back up your data or recheck its contents.

posted by janrinok on Wednesday February 25 2015, @07:38PM
from the read-all-about-it! dept.

Michael Rosenwald writes in The Washington Post that textbook makers, bookstore owners, and college student surveys all say millennials still strongly prefer reading on paper for pleasure and learning, a bias that surprises reading experts given the same group's proclivity to consume most other content digitally. "These are people who aren't supposed to remember what it's like to even smell books," says Naomi S. Baron. "It's quite astounding." Earlier this month, Baron published "Words Onscreen: The Fate of Reading in a Digital World," a book that examines university students' preferences for print and explains the science of why dead-tree versions are often superior to digital. Her conclusion: readers tend to skim on screens, distraction is inevitable, and comprehension suffers. Researchers say readers remember the location of information simply by page and text layout — that, say, the key piece of dialogue was on that page early in the book with that one long paragraph and a smudge on the corner. Researchers think this plays a key role in comprehension, something that is more difficult on screens, primarily because the time we devote to reading online is usually spent scanning and skimming, with few places (or little time) for mental markers.

Another significant problem, especially for college students, is distraction. The lives of millennials are increasingly lived on screens. In her surveys, Baron writes that she found “jaw-dropping” results to the question of whether students were more likely to multitask in hard copy (1 percent) vs. reading on-screen (90 percent). "The explanation is hardly rocket science," says Baron. "When a digital device has an Internet connection, it’s hard to resist the temptation to jump ship: I’ll just respond to that text I heard come in, check the headlines, order those boots that are on sale." “You just get so distracted,” one student says. “It’s like if I finish a paragraph, I’ll go on Tumblr, and then three hours later you’re still not done with reading.”

posted by janrinok on Wednesday February 25 2015, @06:09PM
from the something-to-talk-about dept.

Stuttering — a speech disorder in which sounds, syllables, or words are repeated or prolonged — affects more than 70 million people, or about 1% of the population, worldwide. Once treated as a psychological or emotional condition, stuttering can now be traced to brain neuroanatomy and physiology. Two new studies from UC Santa Barbara researchers provide new insight into the treatment of the speech disorder as well as into its physiological basis.

The first paper, published in the American Journal of Speech-Language Pathology, finds that the MPI stuttering treatment program, a new treatment developed at UCSB, was twice as effective as the standard best practices protocol.

The second study, which appears in the Journal of Speech, Language, and Hearing Research, uses diffusion spectrum imaging (DSI) in an MRI scanner to identify abnormal areas of white matter in the brains of adult stutterers. According to Janis Ingham, a professor emerita of speech and hearing sciences at UCSB and a co-author of both papers, the two studies taken together demonstrate two critical points: a neuro-anatomic abnormality exists in the brains of people who stutter, yet they can learn to speak fluently in spite of it.

posted by janrinok on Wednesday February 25 2015, @04:56PM
from the it-could-be-used-to-make,-well,-anything! dept.

FedEx is refusing to ship Texas nonprofit Defense Distributed's computer controlled mill, the Ghost Gunner. The $1,500 tool can carve aluminum objects from digital designs, including AR-15 lower receivers from scratch or more quickly from legally obtainable "80 percent lowers".

When the machine was revealed last October, Defense Distributed's pre-orders sold out in 36 hours. But now FedEx tells WIRED it's too wary of the legal issues around homemade gunsmithing to ship the machine to customers. "This device is capable of manufacturing firearms, and potentially by private individuals," FedEx spokesperson Scott Fiedler wrote in a statement. "We are uncertain at this time whether this device is a regulated commodity by local, state or federal governments. As such, to ensure we comply with the applicable law and regulations, FedEx declined to ship this device until we know more about how it will be regulated."

But buying, selling, or using the Ghost Gunner isn't illegal, nor is owning an AR-15 without a serial number, says Adam Winkler, a law professor at UCLA and the author of Gunfight: The Battle over the Right to Bear Arms in America. "This is not that problematic," he says. "Federal law does not prohibit individuals from making their own firearms at home, and that includes AR-15s."

Defense Distributed's founder Cody Wilson argues that rather than a legal ambiguity, FedEx is instead facing up to the political gray area of enabling the sale of new, easily accessible tools that can make anything, including deadly weapons. "They're acting like this is legal when in fact it's the expression of a political preference," says Wilson. "The artifact that they're shipping is a CNC mill. There's nothing about it that is specifically related to firearms except the hocus pocus of the marketing." Wilson, whose radically libertarian group has pursued projects ranging from 3-D printed guns to untraceable cryptocurrency, says he chose to ship his Ghost Gunner machines with FedEx specifically because the company has a special NRA firearm industry membership. But when he told a local FedEx representative what he'd be shipping, he says the sales rep responded that he'd need to check with a superior. "This is no big deal, right? It's just a mill," Wilson says he told his FedEx contact. "You guys ship guns. You've shipped 3-D printers and mills, right? You'll ship a drill press, right? Same difference."

posted by NCommander on Wednesday February 25 2015, @03:20PM
from the another-overdue-discussion dept.
So as we make strides to upgrade the site, another long-standing issue is improving the karma system. Obviously, we've heard a lot of discussion and ideas on how to improve it, but we need a solid plan. Ideally, we need a system that allows a user to gain karma and show progress (so to speak), but that doesn't render a user immune to moderation. As such, I think I've come up with a rather solid idea, based on the concept of gamification, to keep users competitive in earning karma.

Read past the break for more information.
How Karma Works Today:

Before we get into how we're going to rework it, a quick recap is in order to explain how the system currently works. Right now, karma is a signed integer in the database, with a range of -10 to 50. When a user has negative karma, their default posting score becomes 0 or -1. At +40, the user gains the ability to post at +2. Although the backend logs all up/down votes, karma is capped at 50 and cannot be exceeded.

This obviously presents a problem: once a user hits 50, what incentive do they have to keep posting? A lot of users have stated that earning karma is fun, and have long wished for us to raise the karma cap or something similar. Various suggestions such as karma aging have been proposed, but these all end up penalizing users for doing nothing wrong, something I dislike in concept. Thus we need a better system to handle this, while preventing a user from becoming immune to moderation.

Reworking Karma: Karma Levels + Recent Karma

The easiest solution is thus to break karma into two parts: a lifetime total of karma earned, and a recent karma value. The recent value will be a range, similar to the current karma system, which can go up and down and is capped; if a user decides to suddenly spam the site to hell, their recent karma score can be destroyed via moderation. Abilities such as posting at +2 will be tied to this recent karma value. In short, it allows moderation to still impact a user in a meaningful way.

However, most people want to see the total sum of their contributions, hence a new value comprising all the karma a user has ever earned. This value only goes up, and is the total sum of positive contributions to the site. In line with the concept of gamification, this total karma is like XP in most role-playing games: earn enough, and you level up. While I haven't worked out an exact algorithm just yet, take the following example.

User john_doe is newly registered.

His recent karma and lifetime karma are 0. It takes 10 points of karma to reach Lv. 2. john_doe decides to contribute 3 insightful comments, all getting moderated up to +5 Insightful; since each comment starts at +1, that's 4 points of karma apiece, for a total of 12 karma points. His recent karma is at 12, and he levels up to Lv. 2, with 2 KP towards Lv. 3. Let's say John then decides to be a dick and posts spam, which rightfully gets hit with the spam moderation.

The spam moderation knocks the post down to +0, and inflicts a -10 karma ding. John's recent karma value will drop to 2, enough to still post at +1, but his total karma value will remain unchanged. If John continues misbehaving, his recent karma value will drop negative, locking him to posting at +0 or -1, but he will retain his levels, should he choose to change his behavior.
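To make the proposal concrete, here is a minimal Python sketch of how the two values might interact, with john_doe's walkthrough as a test case. The cap, floor, and level curve are placeholder numbers, not a final algorithm:

    # Sketch of the dual-karma idea. RECENT_CAP/RECENT_FLOOR mirror the current
    # system's 50/-10 bounds; the 10-KP-per-level curve is purely illustrative.
    RECENT_CAP, RECENT_FLOOR = 50, -10

    class Karma:
        def __init__(self):
            self.recent = 0    # capped range; moderation can push it down
            self.lifetime = 0  # monotonic; only positive mods add to it

        def moderate(self, delta):
            self.recent = max(RECENT_FLOOR, min(RECENT_CAP, self.recent + delta))
            if delta > 0:
                self.lifetime += delta  # lifetime never decreases

        @property
        def level(self):
            return 1 + self.lifetime // 10  # Lv. 2 at 10 KP, Lv. 3 at 20, ...

    john = Karma()
    john.moderate(+12)   # three comments upmodded to +5 Insightful
    assert (john.recent, john.level) == (12, 2)
    john.moderate(-10)   # spam moderation ding
    assert (john.recent, john.level) == (2, 2)  # recent drops, levels survive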

Because the recent karma value is capped, any user can still be affected by moderation, but there is still plenty of incentive to keep posting and try to build levels; perhaps various rewards could be tied to karma levels (though ATM, I don't have any great ideas on this; if the community has any, I'm all ears).

This is my proposal in a nutshell, feedback welcome.
posted by janrinok on Wednesday February 25 2015, @02:53PM
from the you-know-it-makes-sense dept.

Tails, The Amnesic Incognito Live System, version 1.3, is out. Among the new features are:

  • Electrum is an easy-to-use Bitcoin wallet. You can use the Bitcoin Client persistence feature to store your Electrum configuration and wallet.
  • The Tor Browser has additional operating-system and data security, which restricts its reads and writes to a limited number of folders. Learn how to manipulate files with the new Tor Browser.
  • The obfs4 pluggable transport is now available for connecting to Tor bridges. Pluggable transports transform the Tor traffic between the client and the bridge to help disguise it from censors (a sample configuration is sketched after this list).
  • Keyringer lets you manage and share secrets using OpenPGP and Git from the command line.
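Since pluggable transports may be unfamiliar, here is a minimal sketch of what an obfs4 bridge looks like in a torrc file. The address, fingerprint, and cert value below are placeholders, to be replaced with a real bridge line obtained from the Tor Project; Tails configures this for you when you choose the bridge option at startup, so the sketch only shows the plumbing underneath:

    # Hypothetical obfs4 configuration -- placeholder values, not a real bridge
    UseBridges 1
    ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
    Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=REPLACE_WITH_BRIDGE_CERT iat-mode=0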

One issue that the Tails team note specifically is that this release ships with NoScript version 2.6.9.14 instead of version 2.6.9.15, which is the version used in The Tor Project's own Tor Browser 4.0.4 release. Other issues can be found here.

The download page is here, and the submitter reminds us all that, if we choose the BitTorrent download option, we should seed afterwards to help other potential users obtain a copy.

The security holes which affect Version 1.2.3 have been fixed.

posted by martyb on Wednesday February 25 2015, @01:39PM
from the the-ultimate-offsite-backup dept.

A few hundred feet inside a permafrost-encrusted mountain above the Arctic Circle sits the seed bank that could be humanity's last hope during a global food crisis. This month, scientists suggested that this unassuming vault is the ideal space for preserving the world's data on DNA.

This is the Svalbard Global Seed Vault, a bunker on the Arctic island of Svalbard, which for the past seven years has amassed almost a half million seed samples from all over the world. The idea is to use the naturally freezing, isolated environment of the far north to preserve the world's plant life and agricultural diversity—which, of course, is under threat by climate change and disaster. If a food crisis occurs, the vault could provide the seeds that repopulate parts of the world.

But it could potentially preserve much more than seeds. A study in the German chemistry journal Angewandte Chemie this month details the quest to find out how long data stored on DNA could be preserved, and also suggests the vault as the ideal storage location.
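As background, the simplest way to store bits in DNA is to map two bits onto each of the four nucleotides. The toy Python codec below shows only that core idea; the scheme in the paper additionally layers on error-correcting codes and physical protection for the molecules:

    # Toy DNA storage codec: 2 bits per base. Real schemes add error
    # correction and avoid troublesome sequences such as long homopolymers.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    assert decode(encode(b"svalbard")) == b"svalbard"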

http://gizmodo.com/the-isolated-vault-that-could-store-our-data-on-dna-for-1687457772

[Abstract]: http://onlinelibrary.wiley.com/doi/10.1002/anie.201411378/abstract

posted by martyb on Wednesday February 25 2015, @12:05PM
from the 'valuable'-to-whom? dept.

So-called patent trolls may actually benefit inventors and the innovation economy, according to a Stanford intellectual property expert. Stephen Haber (https://politicalscience.stanford.edu/people/stephen-haber), a Stanford political science professor, suggests in new research that concerns about too much patent litigation are misguided.

A patent troll is a person or company that buys patents – without any intent to produce a product – and then enforces those patents against accused infringers in order to collect licensing fees. Some say the resulting litigation has driven up costs to innovators and consumers.

To the contrary, Haber said, his research with Stanford political science graduate student Seth Werfel shows that trolls – also known as patent assertion entities, or PAEs – play a useful intermediary role between individual inventors and large manufacturers.

http://scienceblog.com/77142/patent-trolls-serve-valuable-role-in-innovation-stanford-expert-says/

[Abstract]: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2552734

posted by martyb on Wednesday February 25 2015, @10:37AM
from the these-chips-are-not-so-good-for-tollhouse-cookies dept.

Several tech sites have reports up on the new AMD Carrizo processor, with details coming out in a talk at the International Solid-State Circuits Conference (ISSCC). In summary, Carrizo is AMD's latest Accelerated Processing Unit (APU); it is aimed at mobile devices and combines four Excavator CPU cores with eight Radeon GPU cores.

From AnandTech's initial reporting:

From a hardware standpoint, Carrizo will be combining a number of Excavator modules, AMD’s R-Series GCN GPUs, and the chipset/Fusion Controller Hub into a single package, bringing with it full HSA compatibility, TrueAudio, and ARM Trustzone compatibility. As with Kaveri before it, Carrizo will be built on Global Foundries’ 28nm Super High Performance (28SHP) node, making Carrizo a pure architecture upgrade without any manufacturing changes. Today’s ISSCC paper in turn builds on these revelations, showing some of the data from AMD’s internal silicon testing

Further reports at Tom's Hardware, which highlights the CPU/GPU mix:

For AMD, the future is increasingly about distributing operations between CPU and GPU and parallel processing, with the aid of the GPU part of the APU. The future of these ideas of course holds much more potential, but considering that the HSA support has already been completely implemented, a significant performance boost is to be expected, even in the present stages of development.

Also reported at The Register and Extreme Tech, as well as the (wonderfully named) SemiAccurate.

posted by martyb on Wednesday February 25 2015, @08:11AM
from the mobile-linux dept.

AndroidHeadlines reports

If you are not planning on buying the [Aquaris E4.5 Ubuntu Edition] but would like to see what the operating system is like, then it looks like the firmware is now available to download. Of course, getting this firmware to run on any device other than the E4.5 will be quite a chore, but that is unlikely to stop people trying. Not to mention, a forked version for other devices might become available in due course.

However, if you do own one of the original Android powered Aquaris E4.5 devices, then you might have more luck in getting this to work. If you are interested in giving the firmware a try, then you can download it by clicking here (automatic download). There are, unfortunately, no flashing instructions provided with the download and as such, this is a rather blind (and risky) install. Still, if you know what you are doing or know how to tinker with firmware, then maybe you will be able to do something with this one.

Softpedia has related news: Official Ubuntu Phone Porting Guide Published

Canonical has finally released the online Ubuntu Phone porting guide that should enable developers to get the Ubuntu Touch running on more devices out there that can easily support it.

The Ubuntu devs have been working on this porting guide for quite some time and now it's finally out. It outlines in great detail all the steps and procedures needed to make the OS run on devices other than the Nexus 4, Nexus 7 (2013), and Bq Aquaris. This tutorial should enable the community to come up with ports that even feature support for Over-The-Air (OTA) updates, which is actually a very important feature.

The online community has managed to get a few ports going without this tutorial. The best example is the port for the Nexus 5, but there are many other phones out there that could easily run Ubuntu. The interest in porting is there, and a lot of projects on XDA have been [put] on hold because complete information wasn't available.

In other related news, on February 11, ExtremeTech reported
The First Ubuntu Phone Sold Out Twice the First Day It Was For Sale

Apparently, the meh-specs/no-apps smartphone running Ubuntu OS, available only via a flash-sale gimmick, wasn't completely off-putting.

OMG! Ubuntu! notes

Bq, in a statement, thanked buyers for their patience, explaining:

"We experienced a huge demand this morning, receiving over 12,000 orders per minute and unfortunately our servers went down as a result.

We only had a limited number of units for today's flash sale. More will be made available as further flash sales are held throughout the month. Please bear in mind that all orders placed for Ubuntu smartphones will not be delivered in any case until March. (sic)"

To make up for the difficulties, ExtremeTech says that later that day Bq did another flash sale and finished selling all the units they had available.

On February 19, Softpedia reported that the flash sale which occurred that day (was that the second or the third?) was a big success.

posted by martyb on Wednesday February 25 2015, @05:41AM
from the much-more-than-50-shaders-of-gray dept.

Tom's Hardware claims that Microsoft's DirectX 12 will allow systems with multiple graphics cards to use a mix of AMD and NVIDIA GPUs simultaneously.

A source with knowledge of the matter gave us some early information about an "unspoken API," which we strongly infer is DirectX 12.

One of the big things that we will be seeing is DirectX 12's Explicit Asynchronous Multi-GPU capabilities. What this means is that the API combines all the different graphics resources in a system and puts them all into one "bucket." It is then left to the game developer to divide the workload up however they see fit, letting different hardware take care of different tasks. Part of this new feature set that aids multi-GPU configurations is that the frame buffers (GPU memory) won't necessarily need to be mirrored anymore. DirectX 12 [...] will work with a new frame rendering method called SFR, which stands for Split Frame Rendering. Developers will be able to manually, or automatically, divide the texture and geometry data between the GPUs, and all of the GPUs can then work together to work on each frame. Each GPU will then work on a specific portion of the screen, with the number of portions being equivalent to the number of GPUs installed.

The source said that with binding the multiple GPUs together, DirectX 12 treats the entire graphics subsystem as a single, more powerful graphics card. Thus, users get the robustness of running a single GPU, but with multiple graphics cards. We were also told that DirectX 12 will support all of this across multiple GPU architectures, simultaneously. What this means is that Nvidia GeForce GPUs will be able to work in tandem with AMD Radeon GPUs to render the same game — the same frame, even.

There is a catch, however. Lots of the optimization work for the spreading of workloads is left to the developers — the game studios. The same went for older APIs, though, and DirectX 12 is intended to be much friendlier. For advanced uses it may be a bit tricky, but according to the source, implementing the SFR should be a relatively simple and painless process for most developers.
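To make the split-frame concept concrete, here is a purely illustrative Python sketch of the partitioning arithmetic; it is not the DirectX 12 API (whose details were unannounced at the time), just the geometry an SFR scheme has to compute:

    # Conceptual SFR partitioning: one horizontal slice of the frame per GPU.
    def split_frame(width, height, gpu_count):
        """Return an (x, y, w, h) screen region for each GPU to render."""
        slice_height = height // gpu_count
        regions = []
        for i in range(gpu_count):
            top = i * slice_height
            # The last GPU absorbs any leftover rows from integer division.
            h = height - top if i == gpu_count - 1 else slice_height
            regions.append((0, top, width, h))
        return regions

    # A 1080p frame split between a GeForce and a Radeon working in tandem:
    print(split_frame(1920, 1080, 2))  # [(0, 0, 1920, 540), (0, 540, 1920, 540)]

In practice a developer (or driver) would size the slices by relative GPU performance rather than evenly, which is exactly the per-title tuning work the source says is left to the game studios.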

Here is some previous reporting about DirectX 12 from Anandtech and ExtremeTech. DirectX 12 will be exclusive to Windows 10. Khronos Group's OpenGL successor glNext will be revealed at the Game Developers Conference 2015 on March 5th, in an event "presented by Valve".

posted by martyb on Wednesday February 25 2015, @03:11AM
from the it's-a-killer-gerbil! dept.

The Washington Post covers a study that suggests the Black Plague may not have been spread by rats after all. Maybe it was the fault of gerbils.

History has long blamed the black rat, and the fleas that accompanied it, for the spread of the plague (Black Death). The Plague was actually several events, suspected of originating in China, and spread by trade. DNA studies reveal slight differences between each infestation, indicating a re-introduction every 15 years or so.

The new study says that rat population densities are heavily weather-dependent, and its authors can find no correspondence between the weather and the rat population in Europe.

[More after the break...]

According to a study published in the Proceedings of the National Academy of Sciences, climate data dating back to the 14th century contradicts the commonly held notion that European plague outbreaks were caused by a reservoir of disease-carrying fleas hosted by the continent’s rat population.

“For this, you would need warm summers, with not too much precipitation,” Nils Christian Stenseth, an author of the study, said. "And we have looked at the broad spectrum of climatic indices, and there is no relationship between the appearance of plague and the weather.”

It turns out that Europe always experienced plague outbreaks after central Asia had a wet spring followed by a warm summer. These are terrible conditions for black rats, but ideal for Asia’s gerbil population and the fleas that accompany them.

It wasn't that gerbils (wild ones) were actually imported to Europe. It's that the gerbil population fluctuated with wet/dry climate periods. And when the gerbil population crashed, due to dry weather reducing their food sources (not to mention the plague decimating their populations), the fleas needed a new food supply. And along came wagon trains, with animals and humans, camping along the way.

The full study is here and the authors claim:

The black rat likely played a role in maintaining plague outbreaks on ships, as well as importing plague into harbors, but its role as a potential plague reservoir in Europe is rather questionable.

[...] a plausible suggestion is that caravans passing through Asian plague foci were responsible for transporting plague between Asia and Europe. After a plague infection established itself in a caravan (in its animals, in the traders, or in fleas in its cargo), the disease could have spread to other caravans where goods and animals were rested, and transported across Eurasian trade routes.

posted by LaminatorX on Tuesday February 24 2015, @11:25PM
from the forgot-my-pants dept.

As reported here at SoylentNews some time ago, the Haskell programming language project's deb.haskell.org Debian build server was compromised. It is now over a week later, but the project's server status page still hasn't seen any additional updates posted.

At the time of submission, the latest update shown is the one from February 15, 2015:

February 15, 2015 1:07AM CST
February 15, 2015 7:07AM UTC
[Investigating] deb.haskell.org has been compromised; dating back to February 12th when suspicious anomalies were detected in outgoing traffic. `deb.haskell.org` was already offline and suspended shortly after these traffic changes were detected by the host monitoring system, meaning the window for package compromise was very very small.
We're continuing to investigate the breach and the extent to which it might have spread.

This lack of information will no doubt be concerning to any security-conscious individual, and will leave such an individual with many questions: Why is this investigation taking so long? Is the investigation still actually happening? Why are we not getting more frequent updates? Is there any risk that the other servers of the Haskell project have been compromised? Why is it taking so long to get a rebuilt Debian build system put together? Were any Debian packages compromised? Is there any risk to anyone who may have used these Debian packages recently?

Regardless of the answers to such questions, it is becoming clearer on a daily basis that these answers are needed, and needed quickly. Uncertainty is never a good thing when security is involved, and the reputation of Haskell will suffer if more information about this breach isn't presented to the community.