

posted by janrinok on Wednesday August 21, @11:53PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The FDA’s current regulations allow food companies to independently determine the safety of thousands of ingredients considered “generally recognized as safe” (GRAS), often without notifying the FDA or disclosing safety data. This practice has led to the addition of many unreviewed substances to the U.S. food supply, raising concerns about the adequacy of post-market oversight and the potential risks of such ingredients.

The Food and Drug Administration (FDA) is responsible for ensuring the safety of the U.S. food supply, including setting nutrition labeling standards, collaborating with companies on food recalls, and addressing foodborne illness outbreaks. However, a recent article in the American Journal of Public Health suggests that the FDA has adopted a more hands-off stance regarding the safety of food additives and certain ingredients already in use.

The current FDA process allows the food industry to regulate itself when it comes to thousands of added ingredients—by determining for itself which ingredients should be considered “generally recognized as safe,” or GRAS—and deciding on their own whether or not to disclose the ingredients’ use and the underlying safety data to the FDA. As a result, many new substances have been added to our food supply without any government oversight.

“Both the FDA and the public are unaware of how many of these ingredients—which are most commonly found in ultra-processed foods—are in our food supply,” said Jennifer Pomeranz, associate professor of public health policy and management at the NYU School of Global Public Health and the study’s first author.

Since 1958, the FDA has been responsible for evaluating the safety of new chemicals and substances added to foods before they go to market. However, food safety laws distinguish between “food additives” and “GRAS” ingredients. While compounds considered “food additives” must be reviewed and approved by the FDA before they are used in foods, ingredients considered GRAS are exempt from these regulations.

The GRAS designation was initially established for ingredients already found in foods—for instance, vinegar and spices. But under a rule used since 1997, the FDA has allowed the food industry to independently determine which substances fall into this category, including many new substances added to foods. Rather than disclose the new use of these ingredients and the accompanying safety data for FDA review, companies can do their own research to evaluate an ingredient’s safety before going to market, without any notification or sharing of the findings. The FDA suggests—but does not require—that companies voluntarily notify the agency about the use of such substances and their findings, but in practice, many such substances have been added without notification.

In their analysis, the researchers review the history of the FDA’s and industry’s approach around adding these new compounds to foods and identify the lack of any real oversight. This includes a federal court case in 2021 upholding the FDA’s hands-off approach.

“Notably, the court did not find that the FDA’s practices on GRAS ingredients support the safety of our food supply,” said Pomeranz. “The court only ruled that the FDA’s practice was not unlawful.”

“As a result of the FDA’s policy, the food industry has been free to ‘self-GRAS’ new substances they wish to add to foods, without notifying FDA or the public,” said study senior author Dariush Mozaffarian, director of the Food is Medicine Institute and distinguished professor at the Friedman School of Nutrition Science and Policy at Tufts University. “There are now hundreds, if not thousands, of substances added to our foods for which the true safety data are unknown to independent scientists, the government, and the public.”

According to the researchers, the FDA also lacks a formal approach and adequate resources to review those food additives and GRAS substances already on the market. After an ingredient is added to foods, if research later suggests harms, the FDA can review the new data and, if needed, take action to reduce or remove it from foods. In a rare exception, the FDA announced in March that it would be reviewing 21 chemicals found in foods, including several food ingredients—a tiny fraction of the thousands of food additives and GRAS substances used today.

An example of the 21 food additives to be reviewed is potassium bromate, a chemical added to baked goods and drinks with evidence that it may cause cancer. Potassium bromate is banned in Europe, Canada, China, and Japan; California recently passed a law to ban its use, along with three other chemicals, and similar bills have been introduced in Illinois, New York, and Pennsylvania.

“This is a stark example of the FDA’s regulatory gap,” said Pomeranz. “We’re seeing states starting to act to fill the regulatory void left by the FDA’s inaction over substances increasingly associated with harm.”

The FDA’s oversight of GRAS ingredients on the market is also limited. The agency rarely revokes GRAS designation (an FDA inventory only shows 15 substances that were considered GRAS and then later determined to not be), nor does the FDA review foods on an ongoing basis with GRAS ingredients that can be safe when added at low levels but not in large quantities—for instance, caffeine, salt, and sugar.

“In 1977, the FDA approved caffeine as a GRAS substance for use in sodas at a low level: 0.02 percent,” said Pomeranz. “But today, caffeine is added to energy drinks at levels far exceeding this, which is causing caffeine-related hospitalizations and even deaths. Given that the FDA regulates the use of GRAS substances, the agency could set limits on the amount of caffeine in energy drinks.”
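
(A rough back-of-envelope sketch of what those levels mean per container; the 0.02 percent figure is from the article, while the 12-ounce soda size and the 200 mg, 16-ounce energy drink are illustrative assumptions rather than data from the study.)

    # Back-of-envelope: what 0.02% (w/v) caffeine works out to per container.
    # Serving sizes and the 200 mg energy-drink figure are assumptions.
    GRAS_FRACTION = 0.0002                      # 0.02 percent, as a fraction
    soda_ml = 355                               # a 12 oz can
    cap_mg = soda_ml * GRAS_FRACTION * 1000     # grams -> milligrams
    print(f"0.02% of a 12 oz soda is about {cap_mg:.0f} mg of caffeine")

    drink_ml, drink_mg = 473, 200               # assumed 16 oz energy drink
    print(f"Assumed energy drink: about {drink_mg / 1000 / drink_ml:.2%} w/v")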


Original Submission

posted by janrinok on Wednesday August 21, @07:06PM   Printer-friendly
from the EVs-are-the-future! dept.

A major American auto manufacturer reportedly laid off about 1,000 of its employees on Monday, including about 600 workers based in the U.S. in a bid to streamline current operations:

General Motors (GM) is making cuts in its software and services business, which was recently put under the command of two former Apple executives in a partial retreat from a hiring spree over the last several years, according to The Wall Street Journal. Monday's layoffs stand as the most recent job cuts at GM, which reached buyout agreements with approximately 5,000 salaried employees in 2023 as part of a cost-cutting effort and got rid of several hundred executive positions in February of that year, according to Reuters.

[...] The layoffs are not related to a specific cost-reduction initiative but are instead a result of the company leadership's review of the business and an effort to find more opportunities for efficiency, a GM spokesperson told the DCNF [Daily Caller News Foundation]. Monday's job cuts followed a decision by the two new GM executives from Apple, Baris Cetinok and Dave Richardson, to streamline the service and software business, sources familiar with the matter told the WSJ.

The spokesperson could not say exactly how many jobs were affected by Monday's actions, but said that around 600 of the cuts were at the company's global technical center in Warren, Michigan.

Previously: GM to Slash 1500 Jobs at Lordstown, Ohio Plant

Related: Tesla Lays Off 'More Than 10%' of its Global Workforce


Original Submission

posted by janrinok on Wednesday August 21, @02:22PM   Printer-friendly
from the happy.little.correlations dept.

The scraping defence: they are not scraping content for their AI models, they are just extracting statistical correlations for them.

https://torrentfreak.com/nvidia-copyrighted-books-are-just-statistical-correlations-to-our-ai-models-240617/

Earlier this year, several authors sued NVIDIA over alleged copyright infringement. The class action lawsuit alleged that the company's AI models were trained on copyrighted works and specifically mentioned Books3 data. Since this happened without permission, the rightsholders demand compensation.

The lawsuit was followed up by a near-identical case a few weeks later, and NVIDIA plans to challenge both in court by denying the copyright infringement allegations.

In its initial response, filed a few weeks ago, NVIDIA did not deny that it used the Books3 dataset. Like many other AI companies, it believes that the use of copyrighted data for AI training is a prime example of fair use; especially when the output of the model doesn't reproduce copyrighted works.

The authors clearly have a different take. They allege that NVIDIA willingly copied an archive of pirated books to train its commercial AI model, and are demanding damages for direct copyright infringement.

[...] NVIDIA also shared its early outlook on the case. The company believes that AI companies should be allowed to use copyrighted books to train their AI models, as these books are made up of "uncopyrightable facts and ideas" that are already in the public domain.

The argument may seem surprising at first; the authors own the copyrights and, as far as they're concerned, use of pirated copies leads to liability as a direct infringer. However, NVIDIA goes on to explain that its AI models don't see these works that way.

AI training doesn't involve any book reading skills, or even a basic understanding of a storyline. Instead, it simply measures statistical correlations and adds these to the model.
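
As a loose illustration of what "statistical correlations" can mean here (a toy word-pair counter, not a description of NVIDIA's actual training pipeline), a model of this kind only records how often one token follows another:

    # Toy sketch: "training" just tallies which word follows which --
    # statistical co-occurrence, with no notion of plot, style, or authorship.
    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept"
    words = text.split()

    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1           # count how often nxt follows current

    print(model["the"].most_common())      # [('cat', 2), ('mat', 1)]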


Original Submission

posted by janrinok on Wednesday August 21, @09:31AM   Printer-friendly
from the arsenic-and-old-lace dept.

https://arstechnica.com/science/2024/08/that-book-is-poison-even-more-victorian-covers-found-to-contain-toxic-dyes/

In April, the National Library of France removed four 19th century books, all published in Great Britain, from its shelves because the covers were likely laced with arsenic. The books have been placed in quarantine for further analysis to determine exactly how much arsenic is present.

[...] Chemists from Lipscomb University in Nashville, Tennessee, have also been studying Victorian books from that university's library collection in order to identify and quantify levels of poisonous substances in the covers. They reported their initial findings this week at a meeting of the American Chemical Society in Denver.

[...] The Lipscomb effort was inspired by the University of Delaware's Poison Book Project, established in 2019 as an interdisciplinary crowdsourced collaboration between university scientists and the Winterthur Museum, Garden, and Library. The initial objective was to analyze all the Victorian-era books in the Winterthur circulating and rare books collection for the presence of an arsenic compound called copper acetoarsenite, an emerald green pigment that was very popular at the time for dyeing wallpaper, clothing, and cloth book covers. Book covers dyed with chrome yellow (lead chromate), a pigment favored by Vincent van Gogh, were also examined, and the project's scope has since expanded worldwide.

The Poison Book Project is ongoing, but 50 percent of the 19th century cloth-case bindings tested so far contain lead in the cloth across a range of colors, as well as other highly toxic heavy metals: arsenic, chromium, and mercury.

[...] The project lists several recommendations for the safe handling and storage of such books, such as wearing nitrile gloves—prolonged direct contact with arsenical green pigment, for instance, can lead to skin lesions and skin cancer—and not eating, drinking, biting one's fingernails or touching one's face during handling, as well as washing hands thoroughly and wiping down surfaces. Arsenical green books should be isolated for storage and removed from circulating collections, if possible. And professional conservators should work under a chemical fume hood to limit their exposure to arsenical pigment dust.

[...] "These old books with toxic dyes may be in universities, public libraries, and private collections," said Abigail Hoermann, an undergraduate studying chemistry at Lipscomb University who is among those involved in the effort, led by chemistry professor Joseph Weinstein-Webb. "So, we want to find a way to make it easy for everyone to be able to find what their exposure is to these books, and how to safely store them."

Related stories on SoylentNews:
How a Library Handles a Rare and Deadly Book of Wallpaper Samples - 20190630


Original Submission

posted by martyb on Wednesday August 21, @04:46AM   Printer-friendly

An interesting article about why legalese is written the way it is:

A new study shows lawyers find simplified legal documents easier to understand, more appealing, and just as enforceable as traditional contracts.

It's no secret that legal documents are notoriously difficult to understand, causing headaches for anyone who has had to apply for a mortgage or review any other kind of contract. A new MIT study reveals that the lawyers who produce these documents don't like them very much either.

The researchers found that while lawyers can interpret and recall information from legal documents better than nonlawyers, it's still easier for them to understand the same documents when translated into "plain English." Lawyers also rated plain English contracts as higher-quality overall, more likely to be signed by a client, and equally enforceable as those written in "legalese."

The findings suggest that while impenetrable styles of legal writing are well-entrenched, lawyers may be amenable to changing the way such documents are written.

"No matter how we asked the questions, the lawyers overwhelmingly always wanted plain English," says Edward Gibson, an MIT professor of brain and cognitive sciences and the senior author of the study. "People blame lawyers, but I don't think it's their fault. They would like to change it, too."

Eric Martínez, an MIT graduate student and licensed attorney, is the lead author of the new study, which appears this week in the Proceedings of the National Academy of Sciences. Frank Mollica, a former visiting researcher at MIT who is now a lecturer in computational cognitive science at the University of Edinburgh, is also an author of the paper.

(Editor's note: I reviewed all the legal documents involved in creating SoylentNews. In that process, I discovered a single sentence which contained over 500 words. Twice! --MartyB)

[Source]: Massachusetts Institute of Technology

[Also Covered By]: PHYS.ORG


Original Submission

posted by martyb on Wednesday August 21, @12:01AM   Printer-friendly

The best removal rate was less than 70%, and that didn't beat manual opt-outs:

If you've searched your name online in the last few years, you know what's out there, and it's bad. Alternately, you've seen the lowest-common-denominator ads begging you to search out people from your past to see what crimes are on their record. People-search sites are a gross loophole in the public records system, and it doesn't feel like there's much you can do about it.

Not that some firms haven't promised to try. Do they work? Not really, Consumer Reports (CR) suggests in a recent study.

"[O]ur study shows that many of these services fall short of providing the kind of help and performance you'd expect, especially at the price levels some of them are charging," said Yael Grauer, program manager for CR, in a statement.

Consumer Reports' study asked 32 volunteers for permission to try to delete their personal data from 13 people-search sites, using seven services over four months. The services, including DeleteMe, Reputation Defender from Norton, and Confidently, were also compared to "Manual opt-outs," i.e. following the tucked-away links to pull down that data on each people-search site. To compare results, CR took volunteers from California, where the California Consumer Privacy Act should theoretically make it mandatory for brokers to respond to opt-out requests, and from New York, which has no such law.

The volunteers' information appeared in a total of 332 profiles across those sites; using all the services, Consumer Reports found that only 117 profiles, or 35 percent, were removed within four months. The services varied in efficacy, with EasyOptOuts notably performing second-best at a 65 percent removal rate after four months. But if your goal is to remove entirely others' ability to find out about you, no service Consumer Reports tested truly gets you there.

Manual opt-outs were the most effective removal method, at 70 percent removed within one week, which is both a higher elimination rate and quicker turn-around than all the automated services.
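
(The percentages follow directly from Consumer Reports' counts; a quick sanity check using only the figures quoted above.)

    # Sanity check of the removal rates reported by Consumer Reports.
    total_profiles = 332
    removed_by_services = 117        # all seven services, after four months
    print(f"{removed_by_services / total_profiles:.0%}")   # ~35%
    # EasyOptOuts (65%) and manual opt-outs (70%) are the article's own figures.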

The study noted close ties between the people-search sites and the services that purport to clean them. Removing one volunteer's data from ClustrMaps resulted in a page with a suggested "Next step": signing up for privacy protection service OneRep. Firefox-maker Mozilla dropped OneRep as a service provider for its Mozilla Monitor Plus privacy bundle after reporting by Brian Krebs found that OneRep's CEO had notable ties to the people-search industry.


Original Submission

posted by hubie on Tuesday August 20, @07:17PM   Printer-friendly

Customers uncertain as app remains downloadable after company's Chapter 7 filing:

Roku has finally axed the Redbox app from its platform. Redbox parent company Chicken Soup for the Soul Entertainment filed for Chapter 11 bankruptcy in June and moved to Chapter 7 in July, signaling the liquidation of its assets. However, the app has remained available but not fully functional in various places, leaving customers wondering if they will still be able to access content they bought. This development, however, mostly squashes any remaining hope of salvaging those purchases.

Redbox is best known for its iconic red kiosks where people could rent movie and TV (and, until 2019, video game) discs. But in an effort to keep up with the digital age, Redbox launched a streaming service in December 2017. At the time, Redbox promised "many" of the same new releases available at its kiosks but also "a growing collection" of other movies and shows. The company claimed that its on-demand streaming service was competitive because it had "newest-release movies" that subscription streaming services didn't have. The service offered streaming rentals as well as purchases.

[...] Roku's move suggests that Redbox customers will not be able to watch items they purchased. Barring an unlikely change—like someone swooping in to buy and resurrect Redbox—it's likely that other avenues for accessing the Redbox app will also go away soon.

[...] Since Redbox filed for bankruptcy, though, there has been some confusion and minimal communication about what will happen to Redbox's services. People online have asked if there's any way to watch content they purchased to own and/or get reimbursed. Some have even reported being surprised after learning that Redbox, owned by Chicken Soup since 2022, was undergoing bankruptcy procedures, pointing to limited updates from Redbox, Chicken Soup, and/or the media.

[...] As Chicken Soup sorts through its debts and liquidation, customers are left without guidance about what to do with their rental DVDs or how they can access movies/shows they purchased. But when it comes to purchases made via streaming services, it's more accurate to consider them rentals, despite them not being labeled as such and costing more than rentals with set time limits. As we've seen before, streaming companies can quickly yank away content that people feel they paid to own, be it due to licensing disputes, mergers and acquisitions, or other business purposes. In this case, a company's failure has resulted in people no longer being able to access content they already paid for and presumed they'd be able to access for the long haul.

For some, the reality of what it means to "own" a streaming purchase, combined with the unreliability and turbulent nature of today's streaming industry, has strengthened the appeal of physical media. Somewhat ironically, though, Redbox shuttering meant the end of one of the last mainstream places to access DVDs.


Original Submission

posted by hubie on Tuesday August 20, @02:25PM   Printer-friendly
from the he's-more-machine-now-than-man dept.

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death.

His idea? Replace your body parts. All of them. Even your brain.

Jean Hébert, a new hire with the US Advanced Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around 'functional brain tissue replacement,' the idea of adding youthful tissue to people's brains.

https://www.technologyreview.com/2024/08/16/1096808/arpa-h-jean-hebert-wants-to-replace-your-brain/

See also: Ship of Theseus


Original Submission

posted by hubie on Tuesday August 20, @09:40AM   Printer-friendly
from the Gee,-Wilbur- dept.

The researchers set 20 horses a task consisting of three stages:

A new study showed the animals performed better than expected in a complex reward-based game.

Researchers found that when denied treats for not following the rules of the game, the horses were able to instantly switch strategies to get more rewards.

It shows the animals have the ability to think and plan ahead – something previously considered to be beyond their capacity, scientists from Nottingham Trent University (NTU) said.

[...] Dr Carrie Ijichi, a senior lecturer in equine science at NTU, said: "Horses are not natural geniuses, they are thought of as mediocre, but this study shows they're not average and are, in fact, more cognitively advanced than we give them credit for."

To understand more, the researchers set 20 horses a task consisting of three stages.

In the first stage, the animals touched a piece of card with their nose in order to get a treat.

But things became more complicated when a light was introduced and horses were only allowed a snack if they touched the card while the light was switched off.

The team found that the horses kept blindly touching the card, regardless of whether the light was on or off, and were rewarded for correct responses.

In the final stage of the game, a penalty was put in place where touching the card when the "stop" light was on resulted in a 10-second time-out.

But instead of indiscriminately touching the card, the team found that the horses were engaging with the rules – only making a move at the right time in order to receive their treat.

The researchers said this suggests that rather than failing to grasp the rules of the game, the horses had understood it the whole time but had found a way to play in the second stage that did not require much attention.

[...] The researchers said the findings, published in the journal Applied Animal Behaviour Science, suggest horses have the ability to form an internal model of the world around them to make decisions and predictions, a technique known as model-based learning.

It was previously thought that model-based learning was too complex for horses because they have an underdeveloped pre-frontal cortex, a part of the brain associated with strategic thinking.
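
To make the distinction concrete, here is a minimal, hypothetical sketch of the two strategies as they map onto this task (illustrative only, not the researchers' analysis): a purely habit-driven agent keeps doing what was rewarded before, while a model-based agent consults an internal rule linking the light to the outcome.

    # Hypothetical illustration of habit-driven vs. model-based choices
    # in the card-touching task; the reward and penalty rules come from the article.
    def habit_touch(light_on: bool) -> bool:
        # Touching was rewarded in earlier stages, so keep touching regardless.
        return True

    def model_based_touch(light_on: bool) -> bool:
        # Internal model: touching while the "stop" light is on now costs a
        # 10-second time-out, so only touch when the light is off.
        return not light_on

    for light_on in (False, True):
        print(f"light on={light_on}: habit touches={habit_touch(light_on)}, "
              f"model-based touches={model_based_touch(light_on)}")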

Dr Ijichi said this suggests that the horses "must be using another area of the brain to achieve a similar result".

She said: "This teaches us that we shouldn't make assumptions about animal intelligence or sentience based on whether they are 'built' just like us."

Journal: https://doi.org/10.1016/j.applanim.2024.106339


Original Submission

posted by hubie on Tuesday August 20, @04:52AM   Printer-friendly
from the "best-practices"-means-don't-block-Google dept.

Arthur T Knackerbracket has processed the following story:

An update in Google Chrome's browser extension support is bad news for uBlock Origin.

According to PCWorld, Chrome's shift from Manifest V2 to V3 is deprecating certain features that the popular ad-blocker relies on. The Chrome update "aims to... improve the privacy, security, and performance of extensions," by changing the way it manages API requests. That means with the upcoming Chrome update, uBlock Origin will be automatically disabled.
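
For context, Manifest V3 roughly replaces the blocking form of the webRequest API, which let an extension inspect and veto each request in script, with declarativeNetRequest, where the extension ships static rule lists. A minimal sketch of such a rule follows; the hostname is a placeholder and this is not one of uBlock Origin's actual filters.

    import json

    # Minimal declarativeNetRequest-style rule (Manifest V3): the extension
    # bundles a fixed rule list instead of filtering requests at run time.
    rule = {
        "id": 1,
        "priority": 1,
        "action": {"type": "block"},
        "condition": {
            "urlFilter": "||ads.example.invalid^",   # placeholder hostname
            "resourceTypes": ["script", "image"],
        },
    }
    print(json.dumps([rule], indent=2))   # would ship as the extension's rules.json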

[...] The popular ad-blocker, which has over 30 million users, reportedly still works. But a disclaimer at the top of its extension page says, "This extension may soon no longer be supported because it doesn't follow best practices for Chrome extensions."

Developer Raymond Hill, who makes uBlock Origin, has scrambled to deploy a fix and now offers uBlock Origin Lite, which is compliant with Manifest V3. It already has 200,000 users and still has standard ad-blocking capabilities, but is less dynamic in the sense that it requires the user to allow or block permissions on a "per-site basis." In a GitHub post about the new extension, Hill explained that it isn't intended to be a replacement for the original.

"I consider uBO Lite to be too different from uBO to be an automatic replacement," said the developer. "You will have to explicitly find a replacement to uBO according to what you expect from a content blocker. uBO Lite may or may not fulfill your expectations."

uBlock Origin still works on other browsers, so you could always switch to a Chrome alternative like Firefox or Edge. But if you want to stick with Chrome, you have to play by Chrome's rules, and that means getting a different ad-blocker.


Original Submission

posted by janrinok on Tuesday August 20, @12:07AM   Printer-friendly
from the looks-like-no-hefeweizens-in-space dept.

Scientists are exploring how fermentation in microgravity affects various brewing properties:

Virtually every civilization throughout history has relied on fermentation not just for their booze, but for making everything from bread, to pickles, to yogurt. As humanity's technological knowledge expanded, we have adapted those same chemistry principles to pharmaceuticals and biofuels, among many other uses. And while it may not be the first necessity that comes to mind when planning for long-term living in a lunar base, or even on Mars, the process will be crucial to long-term mission success.

To explore how these concepts may change offworld, a team at the University of Florida's Institute of Food and Agricultural Sciences (UF/IFAS) first experimented with making beer in microgravity. Their results, published in the journal Beverages, indicate microgravity may not only speed up fermentation processes—it may also produce higher quality products.

[...] Getting a beer brewer's starter kit up to the International Space Station, however, isn't quite in the cards yet. Instead, the UF team, led by undergraduate researcher Pedro Fernandez Mendoza, created a tiny microgravity simulator here on Earth. After gathering locally grown barley and mashing it into wort (grain-derived sugary liquid necessary for beers and whiskey), Mendoza and colleagues portioned it out into six samples. They then added the yeast used in lagers, Saccharomyces pastorianus, to each tube before leaving three of them to act as controls. The other trio were placed in a clinostat—a tool capable of simulating microgravity conditions by constantly rotating its contents around a horizontal axis. Over the course of three days, the team then assessed their fermenting baby-beers at regular intervals on the basis of density, yeast counts, and yeast viability.

After three days, researchers were able to confirm one of their initial hypotheses that microgravity doesn't appear to harmfully affect fermentation. What's more, the fermentation process actually sped up in the clinostat samples as compared to their controls. But there was one additional, unexpected result—microgravity yeast may allow for even higher quality products than simply fermenting here on Earth. Although further investigation is needed, researchers think this might relate to a particular gene in yeast that oversees the levels of ester—fermentation byproducts responsible for both good and bad beer flavors.

Typically, the ratio of higher alcohols to esters in lagers ranges from 3:1 to 4:1, with higher ratios giving a drier, less aromatic beer. The team recorded their control samples as having a ratio of 1.4:1, while their microgravity beer measured 4.6:1, implying the latter was "less aromatic by this measure." Meanwhile, two esters in particular, isoamyl acetate and 2-phenethyl acetate, showed "significant differences" between microgravity and controls. Higher concentrations of these esters produce a fruity, banana-like flavor in beers that many drinkers often consider undesirable. The microgravity brews showed a "multiple-fold decrease" in ester concentration compared to the standard examples.
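
(A small sketch of how that ratio reads in practice; only the 3:1 to 4:1 guideline and the reported 1.4:1 and 4.6:1 figures come from the article.)

    # Illustrative only: classify a beer by its higher-alcohol-to-ester ratio.
    def describe(ratio: float) -> str:
        if ratio < 3:
            return "more estery/aromatic than the typical lager range"
        if ratio <= 4:
            return "within the typical 3:1-4:1 lager range"
        return "drier and less aromatic than the typical range"

    for label, ratio in [("control (reported)", 1.4), ("microgravity (reported)", 4.6)]:
        print(f"{label}: {ratio}:1 -> {describe(ratio)}")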

"Depending upon the brewery, these compounds may be desirable; however, the presence of these compounds above a detection threshold would usually be considered a defect," the team writes. Given this, their microgravity results offered a final product "that would be considered higher quality due to the reduced esters.

Journal Reference: Pedro Fernandez Mendoza et al, Brewing Beer in Microgravity: The Effect on Rate, Yeast, and Volatile Compounds, Beverages (2024). DOI: 10.3390/beverages10020047


Original Submission

posted by janrinok on Monday August 19, @07:27PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In a major step for the international Deep Underground Neutrino Experiment (DUNE), scientists have detected the first neutrinos using a DUNE prototype particle detector at the U.S. Department of Energy's Fermi National Accelerator Laboratory (Fermilab).

The revolutionary new technology at the heart of DUNE's new prototype detector is LArPix, an innovative end-to-end pixelated sensor and electronics system capable of imaging neutrino events in true-3D that was conceived, designed, and built by a team of Lawrence Berkeley National Laboratory (Berkeley Lab) physicists and engineers and installed at Fermilab earlier this year.

DUNE, currently under construction, will be the most comprehensive neutrino experiment in the world. It will enable scientists to explore new areas of neutrino research and possibly address some of the biggest physics mysteries in the universe, including searching for the origin of matter and learning more about supernovae and black hole formation.

Since DUNE will feature new designs and technology, scientists are testing prototype equipment and components in preparation for the final detector installation. In February, the DUNE team finished the installation of their latest prototype detector in the path of an existing neutrino beamline at Fermilab. On July 10, the team announced that they successfully recorded their first accelerator-produced neutrinos in the prototype detector, a step toward validating the design.

"This is a truly momentous milestone demonstrating the potential of this technology," said Louise Suter, a Fermilab scientist who coordinated the module installation. "It is fantastic to see this validation of the hard work put into designing, building, and installing the detector."

Berkeley Lab leads the engineering integration of the new neutrino detection system, part of DUNE's near detector complex that will be built on the Fermilab site. Its prototype—known as the 2×2 prototype because it has four modules arranged in a square—records particle tracks with liquid-argon time projection chambers.

"DUNE needed a liquid-argon TPC (LArTPC) detector that could tolerate a high-intensity environment, but this was thought to be impossible," said Dan Dwyer, the head of the Berkeley Lab's Neutrino Physics Group and the project's technical lead for the ND-LAr Consortium, which contributed key elements to the new system's design and fabrication. "With the invention of LArPix, our team at LBNL has made this dream a reality. The 2×2 Demonstrator now installed at DUNE combines our true-3D readout with high-coverage light detectors, producing a truly innovative particle detector."

Brooke Russell, formerly a Chamberlain Postdoctoral Fellow at Berkeley Lab and now the Neil and Jane Pappalardo Special Fellow in Physics at MIT, played a crucial role in the development of the 2×2 prototype, which she describes as "a first-of-its-kind detector, with more than 337,000 individual charge-sensitive pixels at roughly 4-millimeter granularity." Berkeley Lab led the design, construction, and testing of the end-to-end pixelated charge readout system during the COVID-19 pandemic.

"Operation of the 2×2 prototype in a neutrino beam will usher in a new era of high-fidelity, inherently 3D LArTPC images for neutrino interaction measurements," Russell said.

The final version of the DUNE near detector will feature 35 liquid argon modules, each larger than those in the prototype. The modules will help navigate the enormous flux of neutrinos expected at the near site.

The 2×2 prototype implements novel technologies that enable a new regime of detailed, cutting-edge neutrino imaging to handle the unique conditions in DUNE. It has a millimeter-sized pixel readout system, developed by a team at Berkeley Lab, that allows for high-precision 3D imaging on a large scale. This, coupled with its modular design, sets the prototype apart from previous neutrino detectors like ICARUS and MicroBooNE.

Now, the 2×2 prototype provides the first accelerator-neutrino data to be analyzed and published by the DUNE collaboration.

DUNE is split between two locations hundreds of miles apart: A beam of neutrinos originating at Fermilab, close to Chicago, will pass through a particle detector located on the Fermilab site, then travel 800 miles through the ground to several huge detectors at the Sanford Underground Research Facility (SURF) in South Dakota.

The DUNE detector at Fermilab will analyze the neutrino beam close to its origin, where the beam is extremely intense. Collaborators expect this near detector to record about 50 interactions per pulse, which will come every second, amounting to hundreds of millions of neutrino detections over DUNE's many expected years of operation. Scientists will also use DUNE to study neutrinos' antimatter counterpart, antineutrinos.

This unprecedented flux of accelerator-made neutrinos and antineutrinos will enable DUNE's ambitious science goals. Physicists will study the particles with DUNE's near and far detectors to learn more about how they change type as they travel, a phenomenon known as neutrino oscillation. By looking for differences between neutrino oscillations and antineutrino oscillations, physicists will seek evidence for a broken symmetry known as CP violation to determine whether neutrinos might be responsible for the prevalence of matter in our universe.
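
For background, flavor change in flight is usually introduced with the standard two-flavor oscillation probability (textbook physics quoted for orientation, not DUNE's full three-flavor analysis), where θ is the mixing angle, Δm² the mass-squared splitting, L the baseline, and E the neutrino energy:

    P(\nu_\mu \to \nu_e) \approx \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)

With the far detectors roughly 1,300 km (800 miles) downstream, the beam energy is chosen so that this oscillating term is near its maximum.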

The DUNE collaboration is made up of more than 1,400 scientists and engineers from over 200 research institutions. Nearly 40 of these institutions work on the near detector. Specifically, the hardware development of the 2×2 prototype was led by the University of Bern in Switzerland, DOE's Fermilab, Berkeley Lab, and SLAC National Accelerator Laboratory, with significant contributions from many universities.


Original Submission

posted by janrinok on Monday August 19, @02:42PM   Printer-friendly

https://www.userlandia.com/home/iigs-mhz-myth

There are many legends in computer history. But a legend is nothing but a story. Someone tells it, someone else remembers it, and everybody passes it on. And the Apple IIGS has a legend all its own. Here, in Userlandia, we're going to bust some megahertz myths.

I love the Apple IIGS. It's the fabulous home computer you'd have to be crazy to hate. One look at its spec sheet will tell you why. The Ensoniq synthesizer chip brings 32 voices of polyphonic power to the desktop. Apple's Video Graphics Controller paints beautiful on-screen pictures from a palette of thousands of colors. Seven slots and seven ports provide plenty of potential for powerful peripherals. These ingredients make a great recipe for a succulent home computer. But you can't forget the most central ingredient: the central processing unit. It's a GTE 65SC816 clocked at 2.8 MHz—about 2.72 times faster than an Apple IIe. When the IIGS launched in September 1986 its contemporaries were systems like the Atari 1040ST, the Commodore Amiga 1000, and of course Apple's own Macintosh Plus. These machines all sported a Motorola 68000 clocked between 7 and 8 MHz. If I know anything about which number is bigger than the other number, I'd say that Motorola's CPU is faster.

"Now hold on there," you say! "Megahertz is just the clock speed of the chip—it says nothing about how many instructions are actually executed during those cycles, let alone the time spent reading and writing to RAM!" And you know what, that's true! The Apple II and Commodore 64 with their 6502 and 6510 CPUs clocked at 1 MHz could trade blows with Z80 powered computers running at three times the clock speed. And the IIGS had the 6502's 16-bit descendant: the 65C816. Steve Wozniak thought Western Design Center had something special with that chip.

And so the story begins...


Original Submission

posted by janrinok on Monday August 19, @10:01AM   Printer-friendly
from the pray-I-don't-alter-it-any-further dept.

Blocking the company's AI overviews also blocks its web crawler:

As the US government weighs its options following a landmark "monopolist" ruling against Google last week, online publications increasingly face a bleak future. (And this time, it's not just because of severely diminished ad revenue.) Bloomberg reports that their choice now boils down to allowing Google to use their published content to produce inline AI-generated search "answers" or losing visibility in the company's search engine.

The crux of the problem lies in the Googlebot, the crawler that scours and indexes the live web to produce the results you see when you enter search terms. If publishers block Google from using their content for the AI-produced answers you now see littered at the top of many search results, they also lose the privilege of including their web pages in the standard web results.

The catch-22 has led publications, rival search engines and AI startups to pin their hopes on the Justice Department. On Tuesday, The New York Times reported that the DOJ is considering asking a federal judge to break up parts of the company (spinning off sections like Chrome or Android). Other options it's reportedly weighing include forcing Google to share search data with competitors or relinquishing its default search-engine deals, like the $18 billion one it inked with Apple.

Google uses a separate crawler for its Gemini (formerly Bard) chatbot. But its main crawler covers both AI Overviews and standard searches, leaving web publishers with little (if any) leverage. If you let Google scrape your content for AI Overview answers, readers may consider that the end of the matter without bothering to visit your site (meaning zero revenue from those potential readers). But if you block the Googlebot, you lose search visibility, which likely means significantly less short-term income and a colossal loss of long-term competitive standing.

iFixit CEO Kyle Wiens told Bloomberg, "I can block ClaudeBot [Anthropic's crawler for its Claude chatbot] from indexing us without harming our business. But if I block Googlebot, we lose traffic and customers."
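
That asymmetry shows up directly in robots.txt: a publisher can disallow Anthropic's ClaudeBot, or the Google-Extended token that controls Gemini's use of content, without touching search results, but the Googlebot that feeds AI Overviews is the same crawler that feeds the web index. A small sketch using Python's standard robots.txt parser (the policy text is hypothetical, not any real site's file):

    from urllib import robotparser

    # Hypothetical robots.txt: block dedicated AI crawlers, keep normal search.
    policy = """
    User-agent: ClaudeBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: *
    Allow: /
    """

    rp = robotparser.RobotFileParser()
    rp.parse(policy.splitlines())
    for bot in ("ClaudeBot", "Google-Extended", "Googlebot"):
        print(bot, rp.can_fetch(bot, "https://example.com/article"))
    # Googlebot stays allowed, but it serves both AI Overviews and web search,
    # so there is no directive here that splits those two uses.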

[...] The ball is now in the Justice Department's court to figure out where Google — and, to an extent, the entire web — goes from here. Bloomberg's full story is worth a read.


Original Submission

posted by hubie on Monday August 19, @05:13AM   Printer-friendly

Experts studying material from event 66m years ago find signs to show how Chicxulub impact crater was formed:

When a massive space rock slammed into Earth 66m years ago, it wiped out huge swathes of life and ended the reign of the dinosaurs. Now scientists say they have new insights into what it was made from.

Experts studying material laid down at the time of the event say they have found tell-tale signs to support the idea the Chicxulub impact crater was produced by a carbon-rich, "C-type", asteroid that originally formed beyond the orbit of Jupiter.

Mario Fischer-Gödde, co-author of the research from the University of Cologne, said the team are now keen to look at deposits associated with an impact some suggest was behind a large extinction about 215m years ago.

"Maybe this way we could find out if C-type asteroid impacts would have a higher probability for causing mass extinction events on Earth," he said.

Writing in the journal Science, the researchers report how they studied different types, or isotopes, of ruthenium within a layer of material that settled over the globe after the impact 66m years ago.

"This layer contains traces of the remnants of the asteroid" said Fischer-Gödde.

The team chose to look at ruthenium because the metal is very rare in the Earth's crust.

"The ruthenium that we find in this layer, therefore, is almost 100% derived from the asteroid," said Fischer-Gödde, adding that offers scientists a way to determine the makeup, and hence type, of the impactor itself.

The team found samples of the layer from Denmark, Italy and Spain all showed the same ruthenium isotope composition.

Crucially, said Fischer-Gödde, the result is different to the composition generally found on Earth, ruling out a theory that the presence of ruthenium and other metals, such as osmium and platinum, is down to past eruptions of the Deccan Traps volcanoes.

The team also cast doubt on the possibility that the impactor was a comet, saying the ruthenium isotope composition of the samples is different to that of meteorites thought to be fragments of comets that have lost their ice.

[...] Fischer-Gödde said C-type asteroids can today be found in the asteroid belt that sits between Mars and Jupiter because, not long after the formation of the solar system, Jupiter migrated, scattering asteroids in the process.

As a result, he suggests the ill-fated space rock probably came from there.

"Maybe there was a collision of two asteroid bodies in the belt, and then this chunk kind of went on an Earth-crossing orbit. That could be one scenario," he said, although he noted there are other possibilities, including that it came from the Oort cloud that is thought to surround the solar system.

Journal Reference:
    Mario Fischer-Gödde, Jonas Tusch, Steven Goderis et al., Ruthenium isotopes show the Chicxulub impactor was a carbonaceous-type asteroid, Science (DOI: 10.1126/science.adk4868)


Original Submission