SoylentNews is people

posted by janrinok on Saturday July 30 2022, @07:21PM
from the you-gotta-know-when-to-hold-'em-and-when-to-fold-'em dept.

Artificial intelligence firm DeepMind has transformed biology by predicting the structure of nearly all proteins known to science in just 18 months:

DeepMind has predicted the structure of almost every protein so far catalogued by science, cracking one of the grand challenges of biology in just 18 months thanks to an artificial intelligence called AlphaFold. Researchers say that the work has already led to advances in combating malaria, antibiotic resistance and plastic waste, and could speed up the discovery of new drugs.

Determining the crumpled shapes of proteins based on their sequences of constituent amino acids has been a persistent problem for decades in biology. Some of these amino acids are attracted to others, some are repelled by water, and the chains form intricate shapes that are hard to accurately determine.

UK-based AI company DeepMind first announced it had developed a method to accurately predict the structure of folded proteins in late 2020, and by the middle of 2021 it had revealed that it had mapped 98.5 per cent of the proteins used within the human body.

Today, the company announced that it is publishing the structures of more than 200 million proteins – nearly all of those catalogued on the globally recognised repository of protein research, UniProt.

[...] Demis Hassabis, CEO of DeepMind, says that the database makes finding a protein structure – which previously often took years – "almost as easy as doing a Google search". DeepMind is owned by Alphabet, Google's parent company.
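Hassabis's "Google search" comparison refers to the public AlphaFold Protein Structure Database, which serves one predicted model per UniProt accession. A minimal sketch of building a download URL follows; the file-name pattern and version suffix are assumptions based on the database's published conventions and may change between releases, so check the site before relying on them.

```python
# Sketch of looking up a predicted structure in the AlphaFold Protein
# Structure Database. The URL pattern and "_v4" suffix are assumptions
# (they have changed between database releases).

BASE = "https://alphafold.ebi.ac.uk/files"

def model_url(uniprot_accession: str, version: int = 4) -> str:
    """Build the download URL for a predicted structure in PDB format."""
    return f"{BASE}/AF-{uniprot_accession}-F1-model_v{version}.pdb"

# Human hemoglobin subunit alpha (UniProt P69905) as an example accession.
print(model_url("P69905"))
```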

[...] While the tool is often, even usually, extremely accurate, its structures are always predictions rather than explicitly calculated results. Nor has AlphaFold yet solved the complex interactions between proteins, or even made a dent in a small subset of structures, known as intrinsically disordered proteins, that seem to have unstable and unpredictable folding patterns.

"Once you discover one thing, then there are more problems thrown up," says Willison. "It's quite terrifying actually, how complicated biology is."

[...] Pushmeet Kohli, who leads DeepMind's scientific team, says that the company isn't done with proteins yet and is working to improve the accuracy and capabilities of AlphaFold.

"We know the static structure of proteins, but that's not where the game ends," he says. "We want to understand how these proteins behave, what their dynamics are, how they interact with other proteins. Then there's the other area of genomics where we want to understand how the recipe of life translates into which proteins are created, when are they created and the working of a cell."


Original Submission

posted by hubie on Saturday July 30 2022, @02:34PM
from the snooze-time dept.

Office workers in Japan can now take a cat nap at work in the 'Nap Box'. What is it? Basically, a windowless box that shuts out the light and the world for a while, allowing a person to relax, de-stress, and get back to work recharged.

"In Japan, there are a lot of people who will lock themselves up in the bathroom for a while [to nap], which I don't think is healthy," Saeko Kawashima, communications director of furniture maker Itoki, told Bloomberg News.

"It's better to sleep in a comfortable location."

The device, which resembles a sleek water heater, will support occupants' heads, knees and rears so that they will not fall over, according to the outlet.

[...] "I think a lot of Japanese people tend to work continuously with no breaks," Kawashima said. "We are hoping that companies can use this as a more flexible approach to resting."

People took to Twitter on Friday to poke fun at the nap boxes, with one user joking, "This is how we get people back to the office."

"Capitalism always wins," another Twitter user said.

How are you supposed to sleep standing up?


Original Submission

posted by hubie on Saturday July 30 2022, @09:49AM
from the what's-the-affiliate-program-going-to-look-like-for-this? dept.

Amazon to buy One Medical, which runs 180+ medical offices throughout the US:

When Amazon launched Amazon Care to its employees in 2019, the goal was to test the product before rolling it out nationwide. After that rollout happened earlier this year, Amazon CEO Andy Jassy told Insider that the expansion would "fundamentally" change the health care game by dramatically enhancing the medical-care process. He predicted that patients in the future would be so used to telehealth and other new conveniences that they'll think that things like long wait times and delays between in-person visits commonly experienced today are actually "insane."

Now, The Wall Street Journal reports, Amazon has moved one step closer to that future by agreeing to a $3.9 billion deal to purchase One Medical, a company that operates a network of health clinics. With this move, Amazon will expand the number of patients it serves by gaining access to "a practice that operates more than 180 medical offices in 25 US markets and works with more than 8,000 companies to provide health benefits to employees, including with in-person and virtual care."

Echoing Jassy's enthusiasm, Neil Lindsay, Amazon Health Services' senior vice president, told WSJ that the company thinks "health care is high on the list of experiences that need reinvention." Purchasing One Medical is a way for Amazon to break further into the $4 trillion health care industry at a time when Amazon's revenue is down and costs are up.

[...] After the deal is done, One Medical chief executive Amir Dan Rubin "will remain CEO." In a news release to One Medical investors, Rubin expressed a lot of enthusiasm for the deal.

"The opportunity to transform health care and improve outcomes by combining One Medical's human-centered and technology-powered model and exceptional team with Amazon's customer obsession, history of invention, and willingness to invest in the long-term is so exciting," Rubin says. "There is an immense opportunity to make the health care experience more accessible, affordable, and even enjoyable for patients, providers, and payers."

[...] It has not been a total success story, though. Almost half a billion dollars of Amazon's investment in One Medical is paying off the company's debt.

One Medical also came under fire in 2021 during a Congressional investigation into how it administered COVID-19 vaccines when they first became available in December 2020.

[...] Rubin was leading the company at the time. He has been CEO of One Medical since 2017 and has previously held executive roles for decades at health care companies, including a recent stint at UnitedHealth Group.

Congress' recent investigation into One Medical's untimely delivery of vaccines to vulnerable communities is not mentioned in the joint press release from Amazon and One Medical. Instead, Lindsay expresses full confidence in how One Medical handles delivery of patient care, saying Amazon will benefit from One Medical's "human-centered and technology-powered approach to health care," which Amazon believes "can and will help more people get better care, when and how they need it."


Original Submission

posted by hubie on Saturday July 30 2022, @05:05AM
from the what-goes-around-comes-around dept.

Switch to a circular economy could protect the environment while generating more value:

In 1924, a cartel of lightbulb manufacturers including General Electric and Philips agreed to artificially limit the lifespan of their products to about 1,000 hours, down from 2,500. The scandal, revealed decades later, came to epitomize the linear consumption model of making, consuming, and then discarding products that took hold during the Industrial Revolution and has been dominant ever since.

It may have enriched individual firms, but this system is reaching a dead end. It's economically inefficient and environmentally damaging. Its costs range from the pollution of air, land, and water to sharp fluctuations in the prices of raw materials and potential disruptions to supply chains.

"The linear model depletes the planet of its natural resources, it damages ecosystems, and creates lots of waste and pollution in the process. It's an unsustainable model. It cannot continue," says Barchi Gillai, the associate director of the Value Chain Innovation Initiative at Stanford Graduate School of Business.

In a new white paper, Gillai and her colleagues find that a growing number of companies are realizing the urgency of shifting their operations toward circularity. This means designing products for durability and recyclability, reducing material requirements, consuming fewer resources in manufacturing and shipping, and keeping items in circulation to boost their lifespan.

And the transition to a circular economy need not come at an economic cost; it can help companies generate more value from the resources they consume. With fewer mines, landfills, and incinerators, and more trees, the circular economy reduces waste and environmental harm. But there are several business benefits, too—lower operating costs, reduced supply chain risks, additional revenue streams, and access to new markets.


Original Submission

posted by hubie on Saturday July 30 2022, @12:26AM
from the I'm-a-Copenhagen-junkie dept.

Two players leverage quantum rules to achieve a seemingly telepathic connection:

A quantum particle can exist in two mutually exclusive conditions at once. For example, a photon can be polarized so that the electric field in it wriggles vertically, horizontally, or both ways at the same time—at least until it's measured. [...] The polarization emerges only with the measurement.

That last bit rankled Albert Einstein, who thought something like a photon's polarization should have a value independent of whether it is measured. He suggested particles might carry "hidden variables" that determine how a two-way state will collapse. However, in 1964, British theorist John Bell found a way to prove experimentally that such hidden variables cannot exist by exploiting a phenomenon known as entanglement.

Two photons can be entangled so that each is in an uncertain both-ways state, but their polarizations are correlated so that if one is horizontal the other must be vertical and vice versa. Probing entanglement is tricky. To do so, Alice and Bob must each have a measuring apparatus. Those devices can be oriented independently, so Alice can test whether her photon is polarized horizontally or vertically, while Bob can cant his detector by an angle. The relative orientation of the detectors affects how much their measurements are correlated.

Bell envisioned Alice and Bob orienting their detectors randomly over many measurements and then comparing the results. If hidden variables determine a photon's polarization, the correlations between Alice's and Bob's measurements can be only so strong. But, he argued, quantum theory allows them to be stronger. Many experiments have seen those stronger correlations and ruled out hidden variables, albeit only statistically over many trials.
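Bell's bound can be made concrete with the CHSH form of the argument. This is a sketch, not the specific inequality tested in the experiment described below: for anticorrelated polarization-entangled photons, quantum mechanics predicts a correlation E(a, b) = -cos 2(a - b) between detectors at angles a and b, while any hidden-variable model keeps the combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b') within |S| ≤ 2.

```python
import math

# Quantum correlation between polarization measurements at angles a and b
# for a pair anticorrelated at equal angles: E(a, a) = -1.
def E(a, b):
    return -math.cos(2 * (a - b))

# Standard angle choices that maximize the quantum value of S.
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S), 3))  # 2.828 ≈ 2*sqrt(2), exceeding the hidden-variable bound of 2
```

Quantum mechanics saturates |S| = 2√2 (the Tsirelson bound), which is the "stronger correlations" the many experiments cited above have observed.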

[...] Now, Xi-Lin Wang and Hui-Tian Wang, physicists at Nanjing University, and colleagues have made the point more clearly through the Mermin-Peres game. In each round of the game, Alice and Bob share not one, but two pairs of entangled photons on which to make any measurements they like. [...]

If hidden variables predetermine the results of the measurements, Alice and Bob can't win every round [...] and on average, they can win at most eight out of nine rounds.
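The eight-out-of-nine bound is small enough to verify by brute force. In the standard formulation of the Mermin-Peres game, Alice fills her assigned row of a 3×3 grid with ±1 entries whose product is +1, Bob fills his assigned column with entries whose product is -1, and they win the round if they agree on the shared cell. The sketch below enumerates every pair of deterministic strategies:

```python
from itertools import product

# Triples of ±1 with product +1: Alice's allowed row fillings.
row_opts = [t for t in product((1, -1), repeat=3) if t[0] * t[1] * t[2] == 1]
# Triples of ±1 with product -1: Bob's allowed column fillings.
col_opts = [t for t in product((1, -1), repeat=3) if t[0] * t[1] * t[2] == -1]

best = 0
# A deterministic strategy fixes one filling per row (Alice) and per column (Bob).
for alice in product(row_opts, repeat=3):      # alice[r] = Alice's entries for row r
    for bob in product(col_opts, repeat=3):    # bob[c]  = Bob's entries for column c
        # Round (r, c) is won iff the two players agree on the shared cell.
        wins = sum(alice[r][c] == bob[c][r] for r in range(3) for c in range(3))
        best = max(best, wins)

print(best, "of 9")  # no deterministic strategy wins more than 8 of the 9 rounds
```

All nine rounds are impossible classically by a parity argument: a single grid with all row products +1 has total product +1, while all column products -1 would force total product -1.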

[...] Generating two pairs of entangled photons simultaneously is impractical, Xi-Lin Wang says. So instead, the experimenters used a single pair of photons that are entangled two ways—through polarization and so-called orbital angular momentum, which determines whether a wavelike photon corkscrews to the right or to the left. The experiment isn't perfect, but Alice and Bob won 93.84% of 1,075,930 rounds, exceeding the 88.89% maximum with hidden variables, the team reports in a study in press at Physical Review Letters.

[...] Xi-Lin Wang says the experiment was meant mainly to show the potential of the team's own favorite technology—photons entangled in both polarization and angular momentum. "We wish to improve the quality of these hyperentangled photons."

arXiv paper: Jia-Min Xu, Yi-Zheng Zhen, Yu-Xiang Yang, et al., Experimental Demonstration of Quantum Pseudotelepathy, arXiv:2206.12042v1 [quant-ph] 24 Jun 2022


Original Submission

posted by janrinok on Friday July 29 2022, @09:40PM

Arthur T Knackerbracket has processed the following story:

Results from the largest prospective study of its kind indicate that in the initial days and weeks after experiencing trauma, individuals facing potentially threatening situations who had less activity in their hippocampus -- a brain structure critical for forming memories of situations that are dangerous and that are safe -- developed more severe posttraumatic stress disorder (PTSD) symptoms.

This association between reduced hippocampal activity and risk of PTSD was particularly strong in individuals who had greater involuntary defensive reactions to being startled.

This research, published in JNeurosci, suggests that individuals with greater defensive reactions to potentially threatening events might have a harder time learning whether an event is dangerous or safe. They also are more likely to experience severe forms of PTSD, which include symptoms such as always being on guard for danger, self-destructive behavior like drinking too much or driving too fast, trouble sleeping and concentrating, irritability, angry outbursts, and nightmares.

"These findings are important both to identify specific brain responses associated with vulnerability to develop PTSD, and to identify potential treatments focused on memory processes for these individuals to prevent or treat PTSD," said senior author Vishnu Murty, PhD, assistant professor of psychology and neuroscience at Temple University.

This research is part of the national Advancing Understanding of RecOvery afteR traumA (AURORA) Study, a multi-institution project funded by the National Institutes of Health, non-profit funding organizations such as One Mind, and partnerships with leading tech companies. The organizing principal investigator is Samuel McLean, MD, MPH, professor of psychiatry and emergency medicine at the University of North Carolina School of Medicine and director of the UNC Institute for Trauma Recovery.

AURORA allows researchers to leverage data from patient participants who enter emergency departments at hospitals across the country after experiencing trauma, such as car accidents or other serious incidents. The ultimate goal of AURORA is to spur on the development and testing of preventive and treatment interventions for individuals who have experienced traumatic events.

Journal Reference:
Büşra Tanriverdi, David F. Gregory, Thomas M. Olino, et al. Hippocampal Threat Reactivity Interacts with Physiological Arousal to Predict PTSD Symptoms [$], Journal of Neuroscience (DOI: 10.1523/JNEUROSCI.0911-21.2022)


Original Submission

posted by janrinok on Friday July 29 2022, @06:53PM
from the more-money-than-sense dept.

Saudi Planning Skyscraper That's 75 Miles Wide:

[...] the Wall Street Journal reported yesterday that Saudi Crown Prince Mohammed bin Salman told engineers and designers he wanted his next architectural project to be as grand as the Egyptian pyramids.

According to the WSJ, the plans would make it the world's largest structure. The skyscraper would be a set of two parallel buildings, each 1,600 feet tall, spanning 75 miles of terrain. Prince Salman is calling it the "Mirror Line" and wants it to house about five million people. It could cost as much as a trillion dollars and, in the concept renderings, looks like a long, golden paradise.

[...] The WSJ said Salman is, essentially, hoping to create an architectural feat designers have long dreamed of — a linear city. In concept, the Mirror Line is set to include nearly everything its residents could ever dream of needing, like a stadium, yacht club and renewable sources of energy and food.

In reality, though, it kind of sounds like a nightmare waiting to happen. What happens when Salman's weird isolated city runs out of food during internal supply chain shortages, or when another pandemic rips through millions of people trapped in tight, close quarters between two buildings?


Original Submission

posted by janrinok on Friday July 29 2022, @04:12PM
from the sieve-of-gLinux dept.

Arthur T Knackerbracket has processed the following story:

In 2018, Google moved its in-house Linux desktop from Goobuntu to a new Linux distro, the Debian-based gLinux. Why? Because, as Google explained, Ubuntu's two-year Long Term Support (LTS) release cycle "meant that we had to upgrade every machine in our fleet of over 100,000 devices before the end-of-life date of the OS."

That was a pain. Add in the time-consuming need to fully customize engineers' PCs, and Google decided that it cost too much. Besides, the "effort to upgrade our Goobuntu fleet usually took the better part of a year. With a two-year support window, there was only one year left until we had to go through the same process all over again for the next LTS. This entire process was a huge stress factor for our team, as we got hundreds of bugs with requests for help for corner cases."

So, when Google had enough of that, it moved to Debian Linux (though not just vanilla Debian). The company created a rolling Debian distribution: GLinux Rolling Debian Testing (Rodete).  The idea is that users and developers are best served by giving them the latest updates and patches as they're created and deemed ready for production. Such distros include Arch Linux, Debian Testing, and openSUSE Tumbleweed.

For Google, the immediate goal was to get off the two-year upgrade cycle. As the move to Continuous Integration/Continuous Deployment (CI/CD) has shown, these incremental changes work well. They're also easier to control and roll back if something goes wrong.

To make all this work without a lot of blood, sweat, and tears, Google created a new workflow system, Sieve.  Whenever Sieve spots a new version of a Debian package, it starts a new build. These packages are built in package groups since separate packages often must be upgraded together. Once the whole group has been built, Google runs a virtualized test suite to ensure no core components and developer workflows are broken. Next, each group is tested separately with a full system installation, boot, and local test suite run. The package builds complete within minutes, but testing can take up to an hour.
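Sieve itself is not public, so the following is a hypothetical sketch of just the grouping step described above, with illustrative package names: packages that depend on one another must be rebuilt together, so updated packages are batched into connected components of the dependency graph before building and testing.

```python
# Hypothetical sketch of batching updated packages into build groups
# (connected components of an undirected dependency graph). All names
# are illustrative; Sieve's real implementation is not public.
from collections import defaultdict

def build_groups(updated, depends):
    """Batch updated packages into groups via undirected dependency links."""
    adj = defaultdict(set)
    for pkg, deps in depends.items():
        for d in deps:
            adj[pkg].add(d)
            adj[d].add(pkg)

    groups, seen = [], set()
    for pkg in updated:
        if pkg in seen:
            continue
        # Flood-fill the component of packages linked to `pkg`, then keep
        # only those that actually have pending updates.
        stack, component = [pkg], set()
        while stack:
            p = stack.pop()
            if p in component:
                continue
            component.add(p)
            stack.extend(adj[p])
        group = sorted(component & set(updated))
        seen.update(group)
        groups.append(group)
    return groups

# Toy archive state: glibc and locales must move together; vim stands alone.
updated = ["glibc", "locales", "vim"]
depends = {"locales": ["glibc"], "vim": []}
print(build_groups(updated, depends))  # [['glibc', 'locales'], ['vim']]
```

Each resulting group would then go through the gates the article describes: a shared virtualized test suite first, then a full installation, boot, and local test run per group.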

[...] release Sieve's code so we can all start producing rolling Linux desktop releases. How about it, Google? What do you say?


Original Submission

posted by janrinok on Friday July 29 2022, @01:28PM
from the ketchup-with-china dept.

Senate passes massive package to boost U.S. computer chip production

[....] The 64-33 vote represents a rare bipartisan victory a little more than three months before the crucial November midterms; 17 Republicans joined all Democrats in voting yes. The package, known as "CHIPS-plus," now heads to the House, which is expected to pass it by the end of the week and send it to President Joe Biden for his signature.

[....] The centerpiece of the package is more than $50 billion in subsidies for domestic semiconductor manufacturing and research.

Supporters on Capitol Hill, as well as key members of Biden's Cabinet, have argued that making microchips at home — rather than relying on chipmakers in China, Taiwan and elsewhere — is critical to U.S. national security, especially when it comes to chips used for weapons and military equipment.

[...] The final chips bill is a slimmed-down version of a much broader China competitiveness package that House and Senate lawmakers had been negotiating. Earlier, the Senate passed its bill, known as USICA, while the House passed its own version, the America COMPETES Act. But lawmakers couldn't resolve their differences, and leading Democrats decided to switch their strategy and scale back the legislation.

The package also includes tens of billions more in authorizations for science and research programs, as well as for regional technology hubs around the country.

If passed, will this be well spent? Will the US actually be globally competitive in chip manufacture?


Original Submission

posted by janrinok on Friday July 29 2022, @10:41AM

Arthur T Knackerbracket has processed the following story:

Cities have an important role in making progress on sustainability and climate change issues. And for them to achieve this, urban residents need to be involved in achieving set goals. This means that cities need to provide opportunities and guidance to their residents to help them make progress.

While national targets—like Canada's goal to reduce its annual greenhouse gas emissions to 110 megatons in 2030 from 191 megatons in 2019—are important, they do not mean much to a city resident or an organization.

It can be difficult to determine how to address large and complex national issues. These need to be translated from theoretical commitments into measurable goals to create a sense of commitment and urgency. For example, Canadian emission targets need to be broken down into actionable objectives at the city level, which would make it more meaningful to its residents, who can then make small contributions that amount to significant outcomes for the city and beyond.

The UN Sustainable Development Goals (SDGs) are recognized as strategically important for sustainability. They cannot be achieved without commitment at every scale, from individuals to different levels of government.

Public and private organizations in cities can set the stage to engage everyone to contribute to shared goals. The SDGs may seem large and difficult to achieve, but they can be localized and broken down into achievable pieces.

This is being done by dozens of cities internationally who are reporting their progress in voluntary local reviews. The European Aalborg Charter is evidence of a can-do attitude among cities.

Urban leadership needs to develop a shared vision that guides residents on their individual and collective contributions. The combined achievements at the urban level contribute to global improvements. Setting measurable indicators and targets—such as monitoring energy consumption—reflects a commitment to those goals.

Taking collaborative action on larger goals can address concerns with leadership that have been recently reported in the media. The response of world leaders to the ongoing climate challenges and the global COVID-19 pandemic have produced a global crisis of trust. People need to see action and be part of the solutions that are being proposed.

To build trust, city leadership needs partners, collaborators and residents to work with them on setting goals, developing a measurement system and collecting data. There are a number of available platforms and technologies to assist with developing a measurement system and engaging residents in reporting.

[...] Establishing measurable goals at the city level requires, and will result in, the engagement of residents. Everybody wins in the long run—quality of life improves, urban governance is more effective, and businesses develop more efficient models. Canada has lagged behind other countries in localizing sustainability targets identified in the Canadian 2030 Agenda—for Canadian cities, there is a lot more to be done.


Original Submission

posted by hubie on Friday July 29 2022, @07:53AM
from the WE-NEED-A-SIMPLE-GUIDE-TO-QUANTUM-THEORY! dept.

Arthur T Knackerbracket has processed the following story:

A new study shows that nickel oxide superconductors, which conduct electricity with no loss at higher temperatures than conventional superconductors do, contain a type of quantum matter called charge density waves, or CDWs, that can accompany superconductivity.

The presence of CDWs shows that these recently discovered materials, also known as nickelates, are capable of forming correlated states -- "electron soups" that can host a variety of quantum phases, including superconductivity, researchers from the Department of Energy's SLAC National Accelerator Laboratory and Stanford University reported in Nature Physics today.

"Unlike in any other superconductor we know about, CDWs appear even before we dope the material by replacing some atoms with others to change the number of electrons that are free to move around," said Wei-Sheng Lee, a SLAC lead scientist and investigator with the Stanford Institute for Materials and Energy Science (SIMES) who led the study.

"This makes the nickelates a very interesting new system -- a new playground for studying unconventional superconductors."

[...] CDWs are just one of the weird states of matter that jostle for prominence in superconducting materials. You can think of them as a pattern of frozen electron ripples superimposed on the material's atomic structure, with a higher density of electrons in the peaks of the ripples and a lower density of electrons in the troughs.

As researchers adjust the material's temperature and level of doping, various states emerge and fade away. When conditions are just right, the material's electrons lose their individual identities and form an electron soup, and quantum states such as superconductivity and CDWs can emerge.

[...] "This makes nickelates a very interesting new system for studying how these quantum phases compete or intertwine with each other," he said. "And it means a lot of tools that are used to study other unconventional superconductors may be relevant to this one, too."

Source material.

Journal Reference:
Rossi, M., Osada, M., Choi, J. et al. A broken translational symmetry state in an infinite-layer nickelate. Nat. Phys. (2022). DOI: 10.1038/s41567-022-01660-6


Original Submission

posted by hubie on Friday July 29 2022, @05:05AM
from the paywall-or-not-paywall-that-is-the-question dept.

Findings are from a new global study 'OA in physics: researcher perspectives' commissioned by leading learned society physics publishers:

A new global study from AIP Publishing, the American Physical Society (APS), IOP Publishing (IOPP) and Optica Publishing Group (formerly OSA) indicates that the majority of early career researchers (ECRs) [researchers with 1–5 years of experience] want to publish open access (OA) but they need grants from funding agencies to do so.

[...] 67% of ECRs say that making their work openly available is important to them. Yet, 70% have been prevented from publishing OA because they have not been able to access the necessary monies from funding agencies to cover the cost. When asked why ECRs favour OA publishing, agreeing with its principles and benefitting from a wider readership were cited as the top two reasons.

Daniel Keirs, head of journal strategy at IOP Publishing said: "The OA views of the next generation of physicists are important as they are the harbingers of change when it comes to scholarly communications. What we see from this study is that ECRs believe that OA is the future, and they want to be able to reap the benefits of unrestricted access to research. Good progress has been made, but the transition to full OA must neither put researchers at a disadvantage nor disregard the costs necessary to produce, protect and preserve the quality and integrity of scholarly articles and the scientific record."


Original Submission

posted by hubie on Friday July 29 2022, @02:20AM
from the Johnny-Appleseed dept.

Arthur T Knackerbracket has processed the following story:

The Biden administration on Monday said the government will plant more than one billion trees across millions of acres of burned and dead woodlands in the U.S. West, as officials struggle to counter the increasing toll on the nation's forests from wildfires, insects and other manifestations of climate change.

Destructive fires in recent years that burned too hot for forests to regrow naturally have far outpaced the government's capacity to plant new trees. That has created a backlog of 4.1 million acres (1.7 million hectares) in need of replanting, officials said.

The U.S. Agriculture Department said it will have to quadruple the number of tree seedlings produced by nurseries to get through the backlog and meet future needs. That comes after Congress last year passed bipartisan legislation directing the Forest Service to plant 1.2 billion trees over the next decade and after President Joe Biden in April ordered the agency to make the nation's forests more resilient as the globe gets hotter.

[...] To erase the backlog of decimated forest acreage, the Forest Service plans over the next couple years to scale up work from about 60,000 acres (24,000 hectares) replanted last year to about 400,000 acres (162,000 hectares) annually, officials said. Most of the work will be in western states where wildfires now occur year round and the need is most pressing, said David Lytle, the agency's director of forest management.

[...] But challenges to the Forest Service's goal remain, from finding enough seeds to hiring enough workers to plant them, Fargione said.


Original Submission

posted by hubie on Thursday July 28 2022, @11:33PM
from the got-milk?-digesting-enzymes dept.

The New York Times is reporting [archive link] on a new study charting historical human milk use and the mutations that allow (some) adult humans to digest lactose.

The study [abstract], published on 27 July 2022 in the journal Nature, utilizes archaeological and genetic evidence to characterize milk use among (pre-)historic humans. From the NYT article:

In many ways, humans are weird mammals. And our relationship with milk is especially weird.

In every mammalian species, females produce milk to feed their young. The nursing babies digest the milk with the help of an enzyme called lactase, which cuts milk sugar into easily absorbed fragments. When the young mammals are weaned, they stop making lactase. After all, why waste energy making an enzyme you no longer need?

But it is common for our species to keep consuming milk into adulthood. What's more, about one-third of people carry genetic mutations that allow them to produce lactase throughout their lives, making it easier to digest milk.[...]

But a new study of ancient human DNA and milk-drenched pottery shards suggests that the traditional story does not hold up. "Something was not quite right with the received wisdom," said Richard Evershed, a biogeochemist at the University of Bristol in England, and an author of the study.

Dr. Evershed and his colleagues found that Europeans were consuming milk without lactase for thousands of years, despite the misery from gas and cramping it might have caused. The scientists argue that the lactase mutation only became important to survival when Europeans began enduring epidemics and famines: During those periods, their poor health would have exacerbated gastric distress, leading to life-threatening diarrhea.

I, for one, welcome our (not so) new dairy overlords. MMMM...dairy!

Journal Reference:
Evershed, R.P., Davey Smith, G., Roffet-Salque, M. et al. Dairying, diseases and the evolution of lactase persistence in Europe. Nature (2022). DOI: 10.1038/s41586-022-05010-7


Original Submission

posted by hubie on Thursday July 28 2022, @08:52PM   Printer-friendly
from the life-finds-a-clay dept.

A new origins-based system for classifying minerals reveals the huge geochemical imprint that life has left on Earth:

The impact of Earth's geology on life is easy to see, with organisms adapting to environments as different as deserts, mountains, forests, and oceans. The full impact of life on geology, however, can be easy to miss.

A comprehensive new survey of our planet's minerals now corrects that omission. Among its findings is evidence that about half of all mineral diversity is the direct or indirect result of living things and their byproducts. It's a discovery that could provide valuable insights to scientists piecing together Earth's complex geological history—and also to those searching for evidence of life beyond this world.

[...] Their new taxonomy, based on an algorithmic analysis of thousands of scientific papers, recognizes more than 10,500 different types of minerals. That's almost twice as many as the roughly 5,800 mineral "species" in the classic taxonomy of the International Mineralogical Association, which focuses strictly on a mineral's crystalline structure and chemical makeup.

[...] Take, for example, pyrite crystals (commonly known as fool's gold). "Pyrite forms in 21 fundamentally different ways," Hazen said. Some pyrite crystals form when chloride-rich iron deposits heat up deep underground over millions of years. Others form in cold ocean sediments as a byproduct of bacteria that break down organic matter on the seafloor. Still others are associated with volcanic activity, groundwater seepage, or coal mines.

"Each one of those kinds of pyrite is telling us something different about our planet, its origin, about life, and how it's changed through time," said Hazen.

For that reason, the new papers classify minerals by "kind," a term that Hazen and Morrison define as a combination of the mineral species with its mechanism of origin (think volcanic pyrite versus microbial pyrite). Using machine learning analysis, they scoured data from thousands of scientific papers and identified 10,556 distinct mineral kinds.
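The species-plus-origin definition of a mineral "kind" can be sketched as a simple value type. This is only an illustration of the idea described above; the class and field names are hypothetical and are not the authors' actual data model:

```python
from dataclasses import dataclass


# Hypothetical sketch: a "kind" pairs a mineral species with its
# mechanism of origin, so the same species can yield many kinds.
@dataclass(frozen=True)
class MineralKind:
    species: str      # e.g. an IMA species name
    origin_mode: str  # formation mechanism, e.g. "volcanic", "microbial"


# Three of pyrite's formation pathways, each a distinct "kind"
# even though all are the same species (origin labels illustrative):
pyrite_kinds = {
    MineralKind("pyrite", "deep hydrothermal iron deposits"),
    MineralKind("pyrite", "microbial seafloor sediments"),
    MineralKind("pyrite", "volcanic activity"),
}

print(len(pyrite_kinds))                            # 3 distinct kinds
print(len({k.species for k in pyrite_kinds}))       # from 1 species
```

Counting kinds rather than species is what takes the tally from roughly 5,800 entries in the IMA taxonomy to the 10,556 kinds identified in the new survey.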

Morrison and Hazen also identified 57 processes that individually or in combination created all known minerals. These processes included various types of weathering, chemical precipitations, metamorphic transformation inside the mantle, lightning strikes, radiation, oxidation, massive impacts during Earth's formation, and even condensations in interstellar space before the planet formed. They confirmed that the biggest single factor in mineral diversity on Earth is water, which through a variety of chemical and physical processes helps to generate more than 80 percent of minerals.

[...] How deeply the mineralogical is interwoven with the biological might not come as a huge surprise to earth scientists, Sahai said, but Morrison and Hazen's new taxonomy "put a nice systematization on it and made it more accessible to a broader community."

[...] Still, Hazen and Morrison hope that their taxonomy might one day be used to decode the geologic history of other planets or moons and to search for hints of life there, past or present. When examining a Martian crystal, for example, researchers could use the new mineralogical framework to look at features like grain size and structure defects to determine whether it could have been produced by an ancient microbe rather than by a dying sea or a meteor strike.

Journal References:
  • Robert M. Hazen, Shaunna M. Morrison, Sergey V. Krivovichev, et al. Lumping and splitting: Toward a classification of mineral natural kinds, American Mineralogist (DOI: 10.2138/am-2022-8105)
  • Robert M. Hazen, Shaunna M. Morrison. On the paragenetic modes of minerals: A mineral evolution perspective, American Mineralogist (DOI: 10.2138/am-2022-8099)


Original Submission