Researchers from the CNRS have discovered that mandrills use their sense of smell to avoid contamination by intestinal protozoans through contact with infected members of their group. Their work, published in Science Advances on 7 April 2017, shows that parasites shape the social behavior of these primates, leading them to develop a strategy of parasite avoidance through smell.
The Mandrillus Project was launched in 2012, in southern Gabon, to study the ecology of the world's sole population of wild mandrills habituated to human presence. Frequent grooming among these mandrills is undoubtedly a means of eliminating ectoparasites[1], but it also plays a major role in social cohesion—helping to soothe tensions after conflict, for example.
Mustering data from five years of field observation, the researchers demonstrated that mandrills harboring parasitic protozoans in their digestive tracts were less frequently groomed by their conspecifics than were healthy mandrills. Groomers especially avoided the perianal zone, which poses a high risk of contagion.
To pursue their investigations, the scientists conducted an experiment using antiparasitics. They captured infected mandrills, administered the antiparasitic drug, and returned the treated mandrills to their group. Now free of parasites, these primates once again enjoyed frequent grooming.
[1] An ectoparasite is an external parasite—that is, one that lives on the surface of a living being.
Journal Reference: Poirotte C, Massol F, Herbert A, Willaume E, Bomo PM, Kappeler PM, Charpentier MJE, "Mandrills use olfaction to socially avoid parasitized conspecifics", Science Advances, 7 April 2017. DOI: 10.1126/sciadv.1601721
Will Knight writes:
No one really knows how the most advanced algorithms do what they do. That could be a problem.
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
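To make "didn't follow a single instruction provided by an engineer" concrete, here is a minimal sketch of what such an end-to-end driving network can look like: pixels in, one steering value out, trained purely by regression against a human driver's recorded commands. The layer sizes and the EndToEndDriver name are illustrative assumptions, not Nvidia's actual architecture.

```python
import torch
from torch import nn

class EndToEndDriver(nn.Module):
    """Toy end-to-end network: camera frame -> steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(              # convolutional feature extractor
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(                  # regress a single control value
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 100), nn.ReLU(),
            nn.Linear(100, 1),                      # predicted steering angle
        )

    def forward(self, frames):
        return self.head(self.features(frames))

# Training is plain supervised regression against recorded human steering.
# Nothing in the millions of learned weights maps back to a legible rule,
# which is exactly the interpretability problem the article raises.
model = EndToEndDriver()
steering = model(torch.randn(1, 3, 66, 200))        # one dummy camera frame
print(steering.shape)                               # torch.Size([1, 1])
```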
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
[...] The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
[...] At some stage we may have to simply trust AI's judgement or do without using it. Likewise, that judgement will have to incorporate social intelligence. Just as society is built upon a contract of expected behaviour, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgements.
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
What do you think: would you trust such an AI even if you couldn't parse its methods? Is deep learning inherently unknowable?
This piece of news over at Ars Technica may have some startling implications.
The Digital Millennium Copyright Act's so-called "safe harbor" defense to infringement is under fire from a paparazzi photo agency. A new court ruling says the defense may not always be available to websites that host content submitted by third parties.
A LiveJournal community hosted posts containing celebrity photos, and a paparazzi agency that owns some of those photos took exception. Since the site moderated the posts that appeared, the appeals court ruled that just shouting "safe harbor" is insufficient: a court must examine the extent to which the moderators curated the submitted content.
As the MPAA wrote in an amicus brief:
If the record supports Mavrix’s allegations that LiveJournal solicited and actively curated posts for the purpose of adding, rather than removing, content that was owned by third parties in order to draw traffic to its site, LiveJournal would not be entitled to summary judgment on the basis of the safe harbor...
It's hard to argue with that: a site that actively solicits and then posts content owned by others seems to fall afoul of current copyright legislation in the USA.
But I can't help thinking of the impact this may have on SoylentNews... if left to stand, this ruling could make running a site such as SN a very tricky line to walk.
Stay tuned for a NASA press conference on Thursday:
NASA will discuss new results about ocean worlds in our solar system from the agency's Cassini spacecraft and the Hubble Space Telescope during a news briefing 11 a.m. PDT (2 p.m. EDT) on Thursday, April 13. The event, to be held at NASA Headquarters in Washington, will include remote participation from experts across the country.
The briefing will be broadcast live on NASA Television and the agency's website.
These new discoveries will help inform future ocean world exploration -- including NASA's upcoming Europa Clipper mission planned for launch in the 2020s -- and the broader search for life beyond Earth.
The results could be about Enceladus, but Titan, Rhea, and Dione are also suspected to have subsurface oceans.
Timeline of Cassini–Huygens. Many of the recent flybys targeted Titan.
Also at JPL.
Following Google's release of a paper detailing how its tensor processing units (TPUs) beat 2015 CPUs and GPUs at machine learning inference tasks, Nvidia has countered with results from its Tesla P40:
Google's TPU went online in 2015, which is why the company compared its performance against other chips that it was using at that time in its data centers, such as the Nvidia Tesla K80 GPU and the Intel Haswell CPU.
Google is only now releasing the results, possibly because it doesn't want other machine learning competitors (think Microsoft, rather than Nvidia or Intel) to learn about the secrets that make its AI so advanced, at least until it's too late to matter. Releasing the TPU results now could very well mean Google is already testing or even deploying its next-generation TPU.
Nevertheless, Nvidia took the opportunity to show that its latest inference GPUs, such as the Tesla P40, have evolved significantly since then, too. Some of the increase in inference performance seen in Nvidia GPUs comes from the company's jump from the 28nm process node to the 16nm FinFET node, which roughly doubled performance per watt.
Nvidia also improved its GPU architecture for deep learning in Maxwell, and then again in Pascal, and its deep-learning and inference-optimized software has improved significantly as well.
Finally, perhaps the main reason the Tesla P40 can be up to 26x faster than the old Tesla K80, according to Nvidia, is that the P40 supports INT8 computation, whereas the K80 supports only FP32. Inference doesn't demand high numerical precision, and 8-bit integers seem to be sufficient for most types of neural networks.
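For readers unfamiliar with INT8 inference, the core idea is post-training linear quantization: map floats to 8-bit integers, do the multiply-accumulates in integer arithmetic, and rescale at the end. Below is a minimal sketch in that spirit; the tensor shapes and the simple per-tensor symmetric scaling are illustrative assumptions, not Nvidia's implementation.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization: map float32 values onto [-127, 127]."""
    scale = np.abs(x).max() / 127.0                      # one scale per tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
activations = rng.standard_normal((1, 256)).astype(np.float32)
weights = rng.standard_normal((256, 64)).astype(np.float32)

# Full-precision reference (what an FP32-only part like the K80 computes).
ref = activations @ weights

# INT8 path: quantize inputs, multiply in int32, rescale back to float.
qa, sa = quantize_int8(activations)
qw, sw = quantize_int8(weights)
out = (qa.astype(np.int32) @ qw.astype(np.int32)) * (sa * sw)

# The error is small enough for most networks, and int8 math is far cheaper.
rel_err = np.abs(out - ref).max() / np.abs(ref).max()
print(f"max relative error vs FP32: {rel_err:.4f}")
```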
Google's TPUs use less power, their cost is unknown (the P40 can cost $5,700), and they may have advanced considerably since 2015.
Previously: Google Reveals Homegrown "TPU" For Machine Learning
AMD has announced the acquisition of Nitero, a company that made a "phased-array beamforming millimeter wave" wireless chip for VR/AR headsets:
Nitero has designed a phased-array beamforming millimeter wave chip to address the challenges facing wireless VR and AR. Using high-performance 60 GHz wireless, this technology has the potential to enable multi-gigabit transmit performance with low latency in room-scale VR environments. The beamforming characteristics solve the requirement for line-of-sight associated with traditional high-frequency mm-wave systems, potentially eliminating wired VR headsets and enabling users to become more easily immersed in virtual and augmented worlds.
I'll say no thanks to a headset with cables connected to it. Those are for the early adopters.
The U.S. Securities and Exchange Commission on Monday announced a crackdown on alleged stock promotion schemes in which writers were secretly paid to post hundreds of bullish articles about public companies on financial websites.
Twenty-seven individuals and entities, including a Hollywood actress, were charged with misleading investors into believing they were reading "independent, unbiased analyses" on websites such as Seeking Alpha, Benzinga and Wall Street Cheat Sheet.
The SEC said many writers used pseudonyms such as Equity Options Guru, The Swiss Trader, Trading Maven and Wonderful Wizard to hype stocks.
It said it found more than 450 problem articles, of which more than 250 falsely said the writers were not being paid.
No word on conflicts of interest and misleading information in regard to stock promotion on television news networks.
Submitted via IRC for Runaway1956
Right-wing computer scientist and hedge fund billionaire Robert Mercer was the top donor to Donald Trump's presidential campaign. He contributed $13.5 million and laid the groundwork for what is now called the Trump Revolution. Mercer also funded Cambridge Analytica (CA), a small data analytics company that specializes in "election management strategies." CA boasts on its website that it has psychological profiles, based on 5,000 separate pieces of data, on 220 million American voters. CA scoops up masses of data from people's Facebook profiles and uses artificial intelligence to influence their thinking and manipulate public opinion. They used these skills to exploit America's populist insurgency and tip the election toward Trump.
[...] We enter and participate in this digital world every day, on our laptops and our smartphones. We are living in a new era of propaganda, one we can't see, with the collection and use of our data played back in ways to covertly manipulate us. All this is enabled by technological platforms originally built to bring us together. Welcome to the age of platform capitalism—the new battleground for the future.
Previously on SoylentNews: Do Advertisers Know You Better Than You Know Yourself?
New software can track everyone's movements within a crowd with greater accuracy than previous methods:
Following the paths of many individuals at the same time is enormously difficult, even for humans. Previous computer-based efforts to analyze dense crowd movement have focused on tracking one individual at a time in recorded video. But there are problems with that method. First, you have to run the programs over and over again for each person you want to track. Second, the programs tend to identify people in each frame of a video based on appearance—but heads and faces can be hard to distinguish from above, especially in tight crowds and low-resolution video. The new research, which will be published in IEEE Transactions on Pattern Analysis and Machine Intelligence, finds a way to increase both the efficiency and accuracy of tracking a person, enabling a software program to finally follow many people at the same time [DOI: 10.1109/TPAMI.2017.2687462] [DX].
The trick involves predicting where an individual will go next. The researchers wrote a mathematical function that analyzes five factors, based on previous frames of a video, to anticipate where each person will be in the current frame. One is appearance: Which patches of pixels resemble the target from the previous frame? Another is target motion: Where could the target be based on speed and direction? A third is neighbor motion: If the target is obscured, the program guesses on location based on the motion of the person's neighbors. Fourth is spatial proximity: The program won't guess that two people are in the same place, standing on top of one another. And last is grouping: If the program identifies a few people walking in a group, it will assume that they'll retain the same formation.
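To make the combination of cues concrete, here is a hypothetical sketch of a per-candidate scoring function built from those five factors. The weights, distance penalties, and overlap threshold are invented for illustration; the paper's actual cost function and learned parameters will differ.

```python
import numpy as np

def candidate_score(appearance_sim, candidate_pos, predicted_pos,
                    neighbor_predicted_pos, occupied_positions, group_offset):
    """Score one candidate location for one target in the current frame."""
    score = 1.0 * appearance_sim                                           # 1. appearance
    score -= 0.5 * np.linalg.norm(candidate_pos - predicted_pos)          # 2. own motion
    score -= 0.3 * np.linalg.norm(candidate_pos - neighbor_predicted_pos) # 3. neighbor motion
    for other in occupied_positions:                                      # 4. spatial proximity
        if np.linalg.norm(candidate_pos - other) < 1.0:
            score -= 10.0           # two people can't share one spot
    score -= 0.2 * np.linalg.norm(group_offset)                           # 5. grouping
    return score

# Toy usage: pick the better of two candidate positions for one target.
cands = [np.array([10.0, 5.0]), np.array([12.0, 5.0])]
best = max(cands, key=lambda c: candidate_score(
    appearance_sim=0.8, candidate_pos=c,
    predicted_pos=np.array([10.5, 5.0]),
    neighbor_predicted_pos=np.array([10.0, 5.5]),
    occupied_positions=[np.array([12.1, 5.0])],
    group_offset=np.array([0.2, 0.0])))
print(best)   # the occupied spot at (12.1, 5.0) rules out the second candidate
```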
Some Soylentils were disappointed by the gaming performance of AMD's Ryzen CPUs when they were launched last month. By now, updates have eliminated some of the advantage that Intel CPUs had, but the potential gains differ depending on the game:
The first big Ryzen patch was for Ashes of the Singularity. Ryzen's performance in Ashes was arguably one of the more surprising findings in the initial benchmarking. The game has been widely used as a kind of showcase for the advantages of DirectX 12 and the multithreaded scaling that it shows. We spoke to the game's developers, and they told us that its engine splits up the work it has to do between multiple cores automatically.
In general, the Ryzen 1800X performed at about the same level as Intel's Broadwell-E 6900K. Both parts are 8-core, 16-thread chips, and while Broadwell-E has a modest instructions-per-cycle advantage in most workloads, Ryzen's higher clock speed is enough to make up for that deficit. But in Ashes of the Singularity under DirectX 12, the 6900K had average frame rates about 25 percent better than the AMD chip.
In late March, Oxide/Stardock released a Ryzen performance update for Ashes, and it has gone a long way toward closing that gap. PC Perspective tested the update, and depending on graphics settings and memory clock speeds, Ryzen's average frame rate went up by between 17 and 31 percent. The 1800X still trails the 6900K, but now the gap is about 9 percent, or even less with overclocked memory (but we'll talk more about memory later on).
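As a quick sanity check on those percentages, here is the gap arithmetic with normalized frame rates; the fps values are illustrative stand-ins, not measurements.

```python
# Normalized frame rates; only the ratios matter.
ryzen_before = 100.0                 # 1800X pre-patch
intel = ryzen_before * 1.25          # 6900K led by ~25% at launch
for uplift in (0.17, 0.31):          # PC Perspective's measured patch gains
    ryzen_after = ryzen_before * (1 + uplift)
    gap = intel / ryzen_after - 1
    print(f"{uplift:.0%} uplift leaves a {gap:+.1%} gap")
# A 17% uplift leaves roughly +6.8%; a 31% uplift would put Ryzen ahead.
# The ~9% gap quoted above reflects a different settings and baseline
# combination than the 25% launch figure, so the numbers don't pair exactly.
```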
It's a dirty job, but someone's gotta do it:
The African savanna elephant holds the prize for largest living terrestrial animal, and now it apparently just set another land record: longest-distance mover of seeds. The pachyderms can transport seeds up to 65 kilometers, according to a study of elephant dung in South Africa. That's 30 times farther than savanna birds take seeds, and it indicates that elephants play a significant role in maintaining the genetic diversity of trees on the savanna. (A toy sketch of how such a dispersal kernel can be estimated follows the references below.)
This research: Seed dispersal kernel of the largest surviving megaherbivore—the African savanna elephant (DOI: 10.1111/btp.12423) (DX)
Older research: Seed Dispersal by Elephants in Semiarid Woodland Habitats of Hwange National Park, Zimbabwe (DOI: 10.1111/j.1744-7429.2000.tb00503.x) (DX)
Seed protection through dispersal by African savannah elephants (Loxodonta africana africana) in northern Tanzania (DOI: 10.1111/aje.12239) (DX)
Overseas seed dispersal by migratory birds (open, DOI: 10.1098/rspb.2015.2406) (DX)
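For the curious, a dispersal kernel like the one in the first reference is, roughly, the distribution of distances between where a seed is eaten and where it is deposited. A toy way to build one is to combine a distribution of gut retention times with a rate of animal movement; all parameters below are invented for illustration and are not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
retention_h = rng.gamma(shape=4.0, scale=10.0, size=n)   # hours a seed spends in the gut
speed_km_h = rng.uniform(0.0, 1.0, size=n)               # net displacement rate, km/h
distance_km = retention_h * speed_km_h                   # dispersal distance per seed

# The long right tail of this kernel is what produces the rare, very long
# dispersal events; the study reports a maximum of about 65 km.
print(f"median dispersal: {np.median(distance_km):.1f} km")
print(f"99.9th percentile: {np.percentile(distance_km, 99.9):.1f} km")
```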
The hacking tools used by the Central Intelligence Agency may have been involved in at least 40 cyberattacks in 16 countries, according to security firm Symantec.
The company, which issued its report on Monday, based its conclusion on the disclosure of those tools by WikiLeaks last month. The documents showed how the spy agency was able to hack into phones, computers and even televisions to snoop on people. Reuters was the first to write about the report.
Symantec didn't directly blame the CIA for the hacks, which occurred at unspecified dates, according to Reuters. The company also told Reuters that the targets were all government entities or had legitimate national security value, and were based in Europe, Asia, Africa and the Middle East.
[...] A CIA spokesman declined to comment on the Symantec report. The agency had previously declined to comment on the leaks themselves, only noting that "the American public should be deeply troubled by any WikiLeaks disclosure designed to damage the Intelligence Community's ability to protect America against terrorists and other adversaries. Such disclosures not only jeopardize US personnel and operations, but also equip our adversaries with tools and information to do us harm."
CNET is unable to verify whether the WikiLeaks documents are real or have been altered.
-- submitted from IRC
Also covered at Ars Technica.
About an eighth of a University of Alberta collection of ice cores has melted due to a freezer malfunction:
A precious collection of ice cores from the Canadian Arctic has suffered a catastrophic meltdown. A freezer failure at a cold storage facility in Edmonton run by the University of Alberta (UA) caused 180 of the meter-long ice cylinders to melt, depriving scientists of some of the oldest records of climate change in Canada's far north.
The 2 April failure left "pools of water all over the floor and steam in the room," UA glaciologist Martin Sharp told ScienceInsider. "It was like a changing room in a swimming pool."
The melted cores represented 12.8% of the collection, which held 1408 samples taken from across the Canadian Arctic. The cores hold air bubbles, dust grains, pollen, and other evidence that can provide crucial information about past climates and environments, and inform predictions about the future.
After announcing his company was abandoning Unity for GNOME, Shuttleworth posted a thank-you note to the Unity community Friday on Google Plus, but added on Saturday:
"I used to think that it was a privilege to serve people who also loved the idea of service, but now I think many members of the free software community are just deeply anti-social types who love to hate on whatever is mainstream. When Windows was mainstream they hated on it. Rationally, Windows does many things well and deserves respect for those. And when Canonical went mainstream, it became the focus of irrational hatred too. The very same muppets would write about how terrible it was that IOS/Android had no competition and then how terrible it was that Canonical was investing in (free software!) compositing and convergence. Fuck that shit."
"The whole Mir hate-fest boggled my mind - it's free software that does something invisible really well. It became a political topic as irrational as climate change or gun control, where being on one side or the other was a sign of tribal allegiance. We have a problem in the community when people choose to hate free software instead of loving that someone cares enough to take their life's work and make it freely available."
Shuttleworth says he "came to be disgusted with the hate" directed at Canonical's display server Mir, and that it "changed my opinion of the free software community."
Full story here.
Researchers have uncovered a rash of ongoing attacks designed to damage routers and other Internet-connected appliances so badly that they become effectively inoperable.
PDoS (short for "permanent denial-of-service") attack bots scan the Internet for Linux-based routers, bridges, or similar Internet-connected devices that require only factory-default passwords to grant remote administrator access. Once the bots find a vulnerable target, they run a series of highly debilitating commands that wipe all the files stored on the device, corrupt the device's storage, and sever its Internet connection. Given the cost and time required to repair the damage, the device is effectively destroyed, or bricked, from the perspective of the typical consumer.
Over a four-day span last month, researchers from security firm Radware detected roughly 2,250 PDoS attempts on devices they made available in a specially constructed honeypot. The attacks came from two separate botnets—dubbed BrickerBot.1 and BrickerBot.2—with nodes for the first located all around the world. BrickerBot.1 eventually went silent, but even now the more destructive BrickerBot.2 attempts a log-on to one of the Radware-operated honeypot devices roughly once every two hours. The bots brick real-world devices that have the telnet protocol enabled and are protected by default passwords, with no clear sign to the owner of what happened or why.
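Since BrickerBot's only reported entry point is telnet guarded by factory-default passwords, a basic self-audit is straightforward. The sketch below simply checks whether a device exposes telnet (TCP port 23); the router address is a common default used here as an example, and you should point it only at hardware you own.

```python
import socket

def telnet_open(host, port=23, timeout=3):
    """Return True if the host accepts a TCP connection on the telnet port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # covers refused connections and timeouts
        return False

router = "192.168.1.1"       # typical home-router address; adjust for your LAN
if telnet_open(router):
    print(f"{router} accepts telnet connections: disable telnet and "
          "change any factory-default password.")
else:
    print(f"{router} does not expose telnet.")
```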
See also this related blog post inspired by this article.