Humans have given wild animals their diseases nearly 100 times, researchers find:
In a study published March 22 in Ecology Letters, the authors describe nearly one hundred different cases where diseases have undergone "spillback" from humans back into wild animals, much like how SARS-CoV-2 has been able to spread in mink farms, zoo lions and tigers, and wild white-tailed deer.
"There has understandably been an enormous amount of interest in human-to-wild animal pathogen transmission in light of the pandemic," says Gregory Albery, Ph.D., a postdoctoral fellow in the Department of Biology at Georgetown University and the study's senior author. "To help guide conversations and policy surrounding spillback of our pathogens in the future, we went digging through the literature to see how the process has manifested in the past."
In their new study, Albery and colleagues found that almost half of the incidents identified occurred in captive settings like zoos, where veterinarians keep a close eye on animals' health and are more likely to notice when a virus makes the jump. Additionally, more than half of cases they found were human-to-primate transmission, an unsurprising result both because pathogens find it easier to jump between closely-related hosts, and because wild populations of endangered great apes are so carefully monitored.
[...] Disease spillback has recently attracted substantial attention due to the spread of SARS-CoV-2, the virus that causes COVID-19, in wild white-tailed deer in the United States and Canada. Some data suggest that deer have given the virus back to humans in at least one case, and many scientists have expressed broader concerns that new animal reservoirs might give the virus extra chances to evolve new variants.
Journal Reference:
Gregory Albery, et al. Ecology Letters (DOI: 10.1111/ele.14003)
Tiny, cheap solution for quantum-secure encryption - The Source - Washington University in St. Louis:
It's fairly reasonable to assume that an encrypted email can't be seen by prying eyes. That's because in order to break through most of the encryption systems we use on a day-to-day basis, unless you are the intended recipient, you'd need the answer to a mathematical problem that's nearly impossible for a computer to solve in a reasonable amount of time.
Nearly impossible for modern-day computers, at least.
"If quantum computing becomes a reality, however, some of those problems are not hard anymore," said Shantanu Chakrabartty, the Clifford W. Murphy Professor and vice dean for research and graduate education in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering.
[...] Chakrabartty's lab at Washington University in St. Louis proposes a security system that is not only resistant to quantum attacks, but is also inexpensive, more convenient, and scalable without the need for fancy new equipment.
[...] The new protocol for Symmetric Key Distribution, which Chakrabartty and Mustafizur Rahman, a PhD student in Chakrabartty's lab and first author on the research paper, refer to as SPoTKD, doesn't require lasers or satellites or miles of new cable. It relies on tiny microchips embedded with even tinier clocks that run without batteries.
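The article does not spell out SPoTKD's internals, but the general idea of symmetric key agreement from synchronized timekeepers can be sketched loosely, TOTP-style. Everything below (the function name, the pre-shared secret, the tick counter) is an illustrative assumption, not the actual protocol:

```python
import hashlib
import hmac
import struct

def time_based_key(shared_secret: bytes, clock_tick: int) -> bytes:
    """Derive a session key from a pre-shared secret plus a synchronized
    clock reading, so two parties agree on a key without transmitting it."""
    msg = struct.pack(">Q", clock_tick)           # encode the tick
    return hmac.new(shared_secret, msg, hashlib.sha256).digest()

secret = b"provisioned-at-manufacture"   # hypothetical pre-shared value
alice = time_based_key(secret, 1042)
bob = time_based_key(secret, 1042)       # same tick -> same 256-bit key
assert alice == bob
```

The appeal of self-powered, battery-free clocks in this picture is that the timekeeping itself becomes cheap and tamper-evident hardware rather than networked infrastructure.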
Journal Reference:
Shantanu Chakrabartty, et al. SPoTKD: A Protocol for Symmetric Key Distribution Over Public Channels Using Self-Powered Timekeeping Devices (https://ieeexplore.ieee.org/document/9730879)
AI suggested 40,000 new possible chemical weapons in just six hours:
It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. To show how easily the technology could be abused, researchers presenting at a biological arms control conference put AI normally used to search for helpful drugs into a kind of "bad actor" mode.
All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence.
The paper had us at The Verge a little shook, too. So, to figure out how worried we should be, The Verge spoke with Fabio Urbina, lead author of the paper. He's also a senior scientist at Collaborations Pharmaceuticals, Inc., a company that focuses on finding drug treatments for rare diseases.
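The "tweak" can be thought of as flipping the sign of a scoring objective: the same generative search that penalizes predicted toxicity will hunt for it if the penalty becomes a reward. A deliberately simplified sketch (the toxicity model here is a hypothetical stub, not the authors' actual pipeline):

```python
import random

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for a learned toxicity model (hypothetical stub).
    Deterministic per molecule so results are reproducible."""
    return random.Random(molecule).random()

def rank_candidates(candidates, seek_toxicity=False):
    """Normal drug discovery ranks candidates with toxicity penalized;
    flipping one sign turns the same machinery into a search FOR it."""
    sign = 1.0 if seek_toxicity else -1.0
    return sorted(candidates,
                  key=lambda m: sign * predicted_toxicity(m),
                  reverse=True)

mols = ["mol-a", "mol-b", "mol-c", "mol-d"]
safe_first = rank_candidates(mols)                       # least toxic first
toxic_first = rank_candidates(mols, seek_toxicity=True)  # most toxic first
```

The point of the paper is precisely how little effort this inversion takes once a capable predictive model exists.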
Journal Reference:
Urbina, Fabio, Lentzos, Filippa, Invernizzi, Cédric, et al. Dual use of artificial-intelligence-powered drug discovery, Nature Machine Intelligence (DOI: 10.1038/s42256-022-00465-9)
Linux Mint Announces Latest Debian Based OS:
Linux Mint has announced the latest release of its Debian-based operating system, Linux Mint Debian Edition (LMDE). Codenamed "Elsie," it is based upon the latest Debian "Bullseye" release and is an alternative to the mainstream Ubuntu-based Linux Mint.
LMDE, or Linux Mint Debian Edition, is a backup version of Mint designed to preserve the work put into, and user experience of, Linux Mint should Ubuntu ever disappear to the great software graveyard in the sky. As Ubuntu is itself built on Debian, to the uninitiated the difference is hard to discern. But under the hood there are notable software changes in line with the source operating system's philosophy. Following a successful beta release at the end of February, the time has come for a full version.
LMDE 5, codenamed Elsie, is made using the same Debian 11 Bullseye that Raspberry Pi OS made such a difficult upgrade to last year (as more recently and successfully did Peppermint OS). This, however, seems to be a much more fortunate project than Raspberry Pi's, displaying the same Cinnamon desktop as the Ubuntu-based version of Mint, but with none of the Snap containerised software packages used by Canonical's operating system. Instead, it uses the Flatpak application manager, along with a native Firefox app straight from Mozilla. Being a Debian based OS, Linux Mint also comes with the APT (Advanced Packaging Tool) to manage software installation.
The system requirements are modest, with just 2GB of RAM (4GB for a 'comfortable experience') and 20GB of disk space required. A screen resolution of 1024 x 768 is recommended, but on coarser displays there's a workaround involving Alt+dragging windows to get them on the screen.
From The Verge
Stephen Wilhite, one of the lead inventors of the GIF, died last week from COVID at the age of 74, according to his wife, Kathaleen, who spoke to The Verge. He was surrounded by family when he passed. His obituary page notes that "even with all his accomplishments, he remained a very humble, kind, and good man."
Stephen Wilhite worked on GIF, or Graphics Interchange Format, which is now used for reactions, messages, and jokes, while employed at CompuServe in the 1980s. He retired around the early 2000s and spent his time traveling, camping, and building model trains in his basement.
[...] Although GIFs are synonymous with animated internet memes these days, that wasn't the reason Wilhite created the format. CompuServe introduced them in the late 1980s as a way to distribute "high-quality, high-resolution graphics" in color at a time when internet speeds were glacial compared to what they are today. "He invented GIF all by himself — he actually did that at home and brought it into work after he perfected it," Kathaleen said. "He would figure out everything privately in his head and then go to town programming it on the computer."
While apparently mostly known for short, silent animations and video clips these days, GIF is actually quite well-suited to logos, charts, diagrams, and maps with its very small size and (if done correctly) lack of compression artifacts.
Previously:
(2021) The Brave New World of Big Tech Antitrust Enforcement
(2020) Jif Peanut Butter Weighs in on GIF Pronunciation -- Runs Contrary to Historical Evidence
(2017) Epilepsy-Triggering Suspect Charged, More Details on the Arrest
Live updates: Russians destroy Chernobyl laboratory:
LVIV, Ukraine -- Russian military forces have destroyed a new laboratory at the Chernobyl nuclear power plant that among other things works to improve management of radioactive waste, the Ukrainian state agency responsible for the Chernobyl exclusion zone said Tuesday.
The Russian military seized the decommissioned plant at the beginning of the war. The exclusion zone is the contaminated area around the plant, site of the world's worst nuclear meltdown in 1986.
[...] The laboratory contained "highly active samples and samples of radionuclides that are now in the hands of the enemy, which we hope will harm itself and not the civilized world," the agency said in its statement.
Radionuclides are unstable atoms of chemical elements that release radiation.
[Editor's Note: The original source of this report is the Ukrainian government. I have not yet found an independently verifiable source.]
Reporting bias makes homeopathy trials look like homeopathy works:
One of the more productive ways that the methods of science can be used is to look at the scientific process itself. A "meta-science" study (like a recent one published on brain scans) can help tell us when research approaches aren't producing reliable data and can potentially show what we might need to change to get those approaches to work.
Now, someone has applied a bit of meta-science to an area of research where we shouldn't expect to see improvements: homeopathy. A group of Austrian researchers looked into why a reasonable fraction of the clinical trials on homeopathy produce positive results. The biggest factor, the researchers found, is that the trials that show homeopathy is ineffective are less likely to get published.
There are plenty of ways to test potential treatments, but over the years, problems have been identified in almost all of them. That's left the double-blind, randomized clinical trial as the most trusted method of getting rid of some of the biases that make other approaches less reliable. But even in double-blind trials, problems can creep in. There's always a bias toward publishing positive results—ones where the treatments have an effect.
As a result, we can't always be sure whether we are seeing positive results because a treatment works or because negative results simply aren't getting published. This has been a notable issue with some of the fad "cures" for COVID-19.
To deal with that issue, the field has settled on preregistering clinical trials. In these cases, the design of the trial, the outcomes being measured, and other details are placed in a public database before the trial even starts. Many research journals agreed that preregistration would be a requirement for later publication, meaning that anyone who hoped to publish results in the future would have a compelling reason to preregister. But unregistered trials can usually still get published in lower-profile journals.
This can help us identify when only positive results are being published. And that's one of the analyses that was done by the Austrian researchers.
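A small simulation makes the publication-bias mechanism concrete: even when the true treatment effect is exactly zero, selectively publishing "positive-looking" results skews the published average upward. (The parameters below are illustrative, not taken from the study.)

```python
import random

def simulate_publication_bias(n_trials=10000, publish_null_prob=0.2, seed=1):
    """Simulate trials of an ineffective treatment (true effect = 0).
    Positive-looking results always get published; null results only
    sometimes do. Returns the mean effect in the published record."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_trials):
        observed = rng.gauss(0.0, 1.0)     # noise around a zero true effect
        looks_positive = observed > 1.64   # roughly one-sided p < 0.05
        if looks_positive or rng.random() < publish_null_prob:
            published.append(observed)
    return sum(published) / len(published)

# True mean effect is 0, but the published average comes out clearly positive.
print(simulate_publication_bias())
```

Preregistration counters this by making the unpublished trials visible, which is what let the Austrian team estimate the size of the gap.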
Journal Reference:
Gerald Gartlehner, Robert Emprechtinger, Marlene Hackl, et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis [open], BMJ Evidence-Based Medicine (DOI: 10.1136/bmjebm-2021-111846)
First Microsoft, then Okta: New ransomware gang posts data from both:
A relatively new entrant to the ransomware scene has made two startling claims in recent days by posting images that appear to show proprietary data the group says it stole from Microsoft and Okta, a single sign-on provider with 15,000 customers.
The Lapsus$ group, which first appeared three months ago, said Monday evening on its Telegram channel that it gained privileged access to some of Okta's proprietary data. The claim, if true, could be serious because Okta allows employees to use a single account to log in to multiple services belonging to their employer.
In late January 2022, Okta detected an attempt to compromise the account of a third-party customer support engineer working for one of our subprocessors. The matter was investigated and contained by the subprocessor. We believe the screenshots shared online are connected to this January event. Based on our investigation to date, there is no evidence of ongoing malicious activity beyond the activity detected in January.
[...] Over the weekend, the same Telegram channel posted images to support a claim Lapsus$ made that it breached Microsoft systems. The Telegram post was later removed—but not before security researcher Dominic Alvieri documented the hack on Twitter.
[...] On Monday—a day after the group posted and then deleted the images—Lapsus$ posted a BitTorrent link to a file archive that purportedly contained proprietary source code for Bing, Bing Maps, and Cortana, all of which are Microsoft-owned services. Bleeping Computer, citing security researchers, reported that the contents of the download were 37GB in size and appeared to be genuine Microsoft source code.
Microsoft on Tuesday said only: "We are aware of the claims and investigating."
Lapsus$ is a threat actor that appears to operate out of South America or possibly Portugal, researchers at security firm Check Point said. Unlike most ransomware groups, the firm said, Lapsus$ doesn't encrypt the data of its victims. Instead, it threatens to release the data publicly unless the victim pays a hefty ransom. The group, which first appeared in December, has claimed to have successfully hacked Nvidia, Samsung, Ubisoft, and others.
Russian Government Mulls Chinese Foundries, State Aid to Evade US Sanctions:
The Russian Federation government is considering adding chip designers Baikal Electronics and MCST to the list of 'backbone enterprises.' The status will provide Baikal and MCST with numerous benefits, including subsidies. State aid might help these companies to transition the production of their chips from Taiwan to China. Meanwhile, it is unclear whether fabs like SMIC and Hua Hong are interested in making chips for Russian companies and risking additional sanctions.
"Such a move could also be aimed at transferring the production of Russian processors from the Taiwanese TSMC, which abandoned their production due to sanctions, to Chinese factories," a report by CNews reads.
Amid the global chip deficit, prominent Chinese foundries like SMIC and Hua Hong have landed large orders from existing and new clients. Officially, SMIC has been operating at over 100% capacity for several quarters now, so it is unclear whether it can even make chips for Baikal and MCST. Another question is whether those companies can legally produce those processors.
[...] While many media outlets highlight ASML, the world's largest supplier of lithography equipment, as the key maker of semiconductor production tools, there are a half-dozen U.S.-based companies (Applied Materials, KLA, Lam Research, etc.) that build fab equipment without which fabs cannot function. As a result, virtually all foundries in the world need to obtain an export license from the U.S. government if they want to make chips for companies like Huawei, Phytium, Sunway, or essentially all Russian chipmakers.
License applications to produce chips for these companies are undertaken with a presumption of denial. So given the current attitude towards Russia, it is unlikely that SMIC and Hua Hong can actually help Russia save its two major CPU developers. Furthermore, it is unclear where Baikal could obtain contemporary Arm licenses, as the U.K. has also imposed sanctions against the Russian high-tech industry.
Under CEO Tim Cook's watchful eye, Apple has become famous for its tightly managed supply chain. Yet even the most finely tuned machines run into problems from time to time. The case of Dhirendra Prasad appears to be one of those times.
[...]
The alleged scam worked something like this: Prasad would receive a list of parts and services that Apple needed. He would then request quotes from vendors, negotiate with them, and choose which ones would get the business. From this position of power, Prasad could put his thumb on the scale, and he apparently gave Hansen's and Baker's companies a leg up in exchange for something on the side.
[...]
Prasad's alleged scheme appears to have ramped up as it went on. In 2017, Prasad reported that his income was $1,215,000, the government alleges. "In fact, as defendant knew and believed, defendant had taxable income for 2017 that was greater than the amount reported on the tax return." US attorneys believe that Prasad attempted to launder that money by purchasing five properties, most of them in California's Central Valley, and stashing funds in various investment accounts, 529 college savings plans, and a retirement annuity. The seized assets are worth about $5 million, the government estimates.
https://spectrum.ieee.org/neural-network-multiplex
Researchers find neural networks can process a lot more data - achieving up to an 18-fold speedup - by multiplexing many inputs into one feed. They don't yet know why this doesn't confuse the network.
Just as multiplexing can help a single communication channel carry many signals at the same time, a new study reveals that multiplexing can help neural networks—the AI systems that now often power speech recognition, computer vision, and more—scan dozens of streams of data simultaneously, letting them greatly boost the rate at which they analyze information.
In artificial neural networks, components dubbed "neurons" are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and sees if the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results. It then adopts these as defaults, mimicking the process of learning in the human brain. The features of a neural net that change with learning, such as the nature of the connections between neurons, are known as its parameters.
Recent research suggests that modern neural networks often have vastly more parameters than they need—potentially, they could prune the numbers of their parameters by more than 90 percent to reduce their sizes without harming their accuracy. This raised a question that researchers at Princeton University aimed to address—if neural networks possessed more computing power than they needed, could they each analyze multiple streams of information simultaneously to help learn a task, just as a radio channel can share its bandwidth to carry multiple signals at the same time?
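The flavor of the idea can be sketched with a toy superposition scheme: tag each input with a fixed ±1 key vector, sum the tagged inputs into one vector, and approximately recover any one of them later. (This illustrates only the general principle; DataMUX itself uses learned multiplexing and demultiplexing layers around the network.)

```python
import random

def mux(inputs, keys):
    """Superpose several input vectors into one, each tagged by a fixed
    +/-1 'key' vector (elementwise product) so they stay separable."""
    dim = len(inputs[0])
    mixed = [0.0] * dim
    for x, k in zip(inputs, keys):
        for i in range(dim):
            mixed[i] += x[i] * k[i]
    return mixed

def demux(mixed, key):
    """Approximate recovery of one input: multiply by its key again
    (key * key = 1), leaving the other inputs behind as interference."""
    return [m * k for m, k in zip(mixed, key)]

rng = random.Random(0)
dim, n = 8, 2
keys = [[rng.choice([-1.0, 1.0]) for _ in range(dim)] for _ in range(n)]
inputs = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]

mixed = mux(inputs, keys)          # one vector now carries both inputs
recovered = demux(mixed, keys[0])  # ~ inputs[0] plus interference
```

In the real system, the over-parameterized network itself learns to tolerate that interference, which is why the spare capacity identified by pruning research matters.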
[...] The scientists conducted experiments with DataMUX using three different kinds of neural networks—transformers, multilayer perceptrons, and convolutional neural networks. The experiments involved several tasks—image recognition; sentence classification, in which a machine aims to identify whether text is spam, a business article, and so on; named entity recognition, which involves locating and classifying named entities such as people, groups, and places.
Experiments with transformers on text-classification tasks revealed they could multiplex up to 40 inputs, achieving up to an 18-fold speedup in the rate at which they could process these inputs with as little as a 2 percent drop in accuracy.
[...] In the future, the researchers aim to experiment with multiplexing state-of-the-art neural networks such as BERT and GPT-3. They would also like to investigate other multiplexing schemes with which they could scale up to hundreds or even thousands of inputs at once, "leading to even larger improvements in throughput," Murahari says. "We could really just be at the tip of the iceberg."
Journal Reference:
Murahari, Vishvak, Jimenez, Carlos E., Yang, Runzhe, et al. DataMUX: Data Multiplexing for Neural Networks, arXiv (DOI: 10.48550/arXiv.2202.09318)
Where to begin?
How about at the beginning? Would that be https://soylentnews.org/~martyb/journal/60? That was the first journal article I posted to SoylentNews. But I am talking about still earlier than that - the day I created my account on the site, a few days before we went live. I have been active ever since. Well, up until a couple of weeks ago.
That was when I experienced a medical condition that has precluded my continued participation here.
Since that day, janrinok (our former Editor-in-Chief) has ably filled my shoes - that is, until Fnord666 (our Alternate Editor-in-Chief) could take the reins.
I ask you to extend to them the same kindness and support you have shown me. I've grown creatively and professionally in ways I had never even imagined! Thank You!
janrinok writes:
It is hard to explain just how much of a contribution Marty has made to this site - from its very early days before it even went public Marty was there providing whatever help he could. If there was a job to be done he was there offering to help. There was nothing that he was not prepared to tackle. If he didn't know how to do something he would go and find out and then return to do whatever needed to be done.
Fnord and I have processed far more stories than either of us had ever expected to (6570 and 6166 respectively), but we are a long way behind Marty's contribution of 11076 stories at the time of writing. If you conservatively estimate each story at 15 minutes (and I can assure you that many stories take much longer than that!), the man-hours he has spent keeping the front page full add up to a huge amount of effort. That would be, and is, worthy of recognition in its own right - but he didn't stop there.
He has also served as our QA specialist, spending many more man-hours testing software and finding ways to bring it to its knees, and then finding solutions to each of those problems. He has run our donations and looks after the funding 'Beg-o-Meter'. And he has still found time to serve as the site's Editor-in-Chief since 2018. There are so many jobs that he does - many of them having gone almost unnoticed - that we now find ourselves trying to work out who will pick up which extra tasks in the future.
I am also fortunate to have Marty as a friend. My own life has had a few ups and downs over the last 5 years or so and Marty has been there to provide sound advice, wise counsel, or just a listening ear. I hope I will be able to repay him in kind in the future.
In addition, he has other ways of helping his local community which have nothing to do with this site. He would not wish me to go into details but he takes his caring and helping attitude with him throughout his life.
It is not all bad news - Marty is stepping down from the role of Editor-in-Chief but he is not leaving the community. How much he is able to contribute in the future is still very much unknown but you may still see his name appear from time-to-time alongside a comment or on IRC. He has specifically asked me to pass on his best wishes for the future to the community and the site that we all support.
Marty leaves behind a legacy he can justifiably be proud of, and a very large pair of boots to be filled. We will do our best to maintain the standards he has set. Good luck and best wishes, Marty, and I hope that your recovery is swift and complete.
After prematurely announcing that Steam on Chromebooks was ready for testing last week, Google is making the release official today. The alpha version of Steam on Chrome OS is currently available in the Chrome OS 14583.0.0 Dev channel, as announced via a post in Google's Chrome Developers Community.
Not all Chromebooks will be able to run Steam, however. [...] These requirements limit Steam on Chrome OS to the pricier tier of Chromebooks. You can currently find HP's G2 Chromebook for $849 and Acer's Chromebook 514 for $780 or its Chromebook 515 for $772.
[...] Google said it doesn't recommend trying Steam on Chrome OS on a "Chromebook that you rely on for work, school, or other daily activities."
Expect "crashes, performance regressions," and bugs, Google said. As this is an alpha, "anything can break," Google said, highlighting the Dev channel's "inherent instability" and the fact that Steam on Chrome OS is a work in progress.
The universe's background starlight is twice as bright as expected:
Even when you remove the bright stars, the glowing dust and other nearby points of light from the inky, dark sky, a background glow remains. That glow comes from the cosmic sea of distant galaxies, the first stars that burned, faraway coalescing gas — and, it seems, something else in the mix that's evading researchers.
Astronomers estimated the amount of visible light pervading the cosmos by training the New Horizons spacecraft, which flew past Pluto in 2015, on a spot on the sky mostly devoid of nearby stars and galaxies (SN: 12/15/15). That estimate should match measurements of the total amount of light coming from galaxies across the history of the universe. But it doesn't, researchers report in the March 1 Astrophysical Journal Letters.
"It turns out that the galaxies that we know about can account for about half of the level we see," says Tod Lauer, an astronomer at the National Science Foundation's NOIRLab in Tucson, Ariz.
[...] While Lauer's group previously noted a discrepancy, this new measurement reveals a wider difference, and with smaller uncertainty. "There's clearly an anomaly. Now we need to try to understand it and explain it," says coauthor Marc Postman, an astronomer at the Space Telescope Science Institute in Baltimore, Md.
There are several astronomical reasons that could explain the discrepancy. Perhaps, says Postman, rogue stars stripped from galaxies linger in intergalactic space. Or maybe, he says, there is "a very faint population of very compact galaxies that are just below the detection limits of Hubble." If it's the latter case, astronomers should know in the next couple years because NASA's recently launched James Webb Space Telescope will see these even-fainter galaxies (SN: 10/6/21).
Another possibility is that the researchers missed something in their analysis. "I'm glad it got done; it's absolutely a necessary measurement," says astrophysicist Michael Zemcov of the Rochester Institute of Technology in New York, who was not involved in the study. Perhaps they're missing some additional glow from the New Horizons spacecraft and its LORRI instrument, or they didn't factor in some additional foreground light. "I think there's a conversation there about details."
Hope fading for recovery of European radar imaging satellite - SpaceNews:
European Space Agency officials said prospects are dimming for the recovery of a radar imaging satellite that malfunctioned nearly three months ago, but that efforts to save the spacecraft continue.
The Sentinel-1B spacecraft malfunctioned in December, keeping it from collecting C-band synthetic aperture radar (SAR) imagery. ESA said in January that it was investigating a problem with the power system for the SAR payload on the satellite, which launched in April 2016.
In a Feb. 25 update, ESA said work was continuing to investigate problems with both the main and backup power system for the payload but that effort had yet to identify a root cause of the anomaly. The problem doesn't affect operations of the spacecraft itself, which has remained under control.
ESA leaders were not optimistic about the prospects of recovering Sentinel-1B. "The situation doesn't look very good, but we have not given up hope yet," Josef Aschbacher, director general of ESA, said in response to a question about the status of the satellite. "We are still looking into technical options of what the root cause could be."