posted by janrinok on Tuesday January 14, @11:23PM   Printer-friendly
from the my-head-hurts... dept.

Rational or Not? This Basic Math Question Took Decades to Answer:

In June 1978, the organizers of a large mathematics conference in Marseille, France, announced a last-minute addition to the program. During the lunch hour, the mathematician Roger Apéry would present a proof that one of the most famous numbers in mathematics — "zeta of 3," or ζ(3), as mathematicians write it — could not be expressed as a fraction of two whole numbers. It was what mathematicians call "irrational."

Conference attendees were skeptical. The Riemann zeta function is one of the most central functions in number theory, and mathematicians had been trying for centuries to prove the irrationality of ζ(3) — the number that the zeta function outputs when its input is 3. Apéry, who was 61, was not widely viewed as a top mathematician. He had the French equivalent of a hillbilly accent and a reputation as a provocateur. Many attendees, assuming Apéry was pulling an elaborate hoax, arrived ready to pay the prankster back in his own coin. As one mathematician later recounted, they "came to cause a ruckus."

The lecture quickly descended into pandemonium. With little explanation, Apéry presented equation after equation, some involving impossible operations like dividing by zero. When asked where his formulas came from, he claimed, "They grow in my garden." Mathematicians greeted his assertions with hoots of laughter, called out to friends across the room, and threw paper airplanes.

But at least one person — Henri Cohen, now at the University of Bordeaux — emerged from the talk convinced that Apéry was correct. Cohen immediately began to flesh out the details of Apéry's argument; within a couple of months, together with a handful of other mathematicians, he had completed the proof. When he presented their conclusions at a later conference, a listener grumbled, "A victory for the French peasant."

Once mathematicians had, however reluctantly, accepted Apéry's proof, many anticipated a flood of further irrationality results. Irrational numbers vastly outnumber rational ones: If you pick a point along the number line at random, it's almost guaranteed to be irrational. Even though the numbers that feature in mathematics research are, by definition, not random, mathematicians believe most of them should be irrational too. But while mathematicians have succeeded in showing this basic fact for some numbers, such as π and e, for most other numbers it remains frustratingly hard to prove. Apéry's technique, mathematicians hoped, might finally let them make headway, starting with values of the zeta function other than ζ(3).

"Everyone believed that it [was] just a question of one or two years to prove that every zeta value is irrational," said Wadim Zudilin of Radboud University in the Netherlands.

But the predicted flood failed to materialize. No one really understood where Apéry's formulas had come from, and when "you have a proof that's so alien, it's not always so easy to generalize, to repeat the magic," said Frank Calegari of the University of Chicago. Mathematicians came to regard Apéry's proof as an isolated miracle.

But now, Calegari and two other mathematicians — Vesselin Dimitrov of the California Institute of Technology and Yunqing Tang of the University of California, Berkeley — have shown how to broaden Apéry's approach into a much more powerful method for proving that numbers are irrational. In doing so, they have established the irrationality of an infinite collection of zeta-like values.

Jean-Benoît Bost of Paris-Saclay University called their finding "a clear breakthrough in number theory."

Mathematicians are enthused not just by the result but also by the researchers' approach, which they used in 2021 to settle a 50-year-old conjecture about important equations in number theory called modular forms. "Maybe now we have enough tools to push this kind of subject way further than was thought possible," said François Charles of the École Normale Supérieure in Paris. "It's a very exciting time."

Whereas Apéry's proof seemed to come out of nowhere — one mathematician described it as "a mixture of miracles and mysteries" — the new paper fits his method into an expansive framework. This added clarity raises the hope that Calegari, Dimitrov and Tang's advances will be easier to build on than Apéry's were.

"Hopefully," said Daniel Litt of the University of Toronto, "we'll see a gold rush of related irrationality proofs soon."

A Proof That Euler Missed

Since the earliest eras of mathematical discovery, people have been asking which numbers are rational. Two and a half millennia ago, the Pythagoreans held as a core belief that every number is the ratio of two whole numbers. They were shocked when a member of their school proved that the square root of 2 is not. Legend has it that as punishment, the offender was drowned.

The square root of 2 was just the start. Special numbers come pouring out of all areas of mathematical inquiry. Some, such as π, crop up when you calculate areas and volumes. Others are connected to particular functions — e, for instance, is the base of the natural logarithm. "It's a challenge: You give yourself a number which occurs naturally in math, [and] you wonder whether it's rational," Cohen said. "If it's rational, then it's not a very interesting number."

Many mathematicians take an Occam's-razor point of view: Unless there's a compelling reason why a number should be rational, it probably is not. After all, mathematicians have long known that most numbers are irrational.

Yet over the centuries, proofs of the irrationality of specific numbers have been rare. In the 1700s, the mathematical giant Leonhard Euler proved that e is irrational, and another mathematician, Johann Lambert, proved the same for π. Euler also showed that all even zeta values — the numbers ζ(2), ζ(4), ζ(6) and so on — equal some rational number times a power of π, the first step toward proving their irrationality. The proof was finally completed in the late 1800s.

But the status of many other simple numbers, such as π + e or ζ(5), remains a mystery, even now.

It might seem surprising that mathematicians are still grappling with such a basic question about numbers. But even though rationality is an elementary concept, researchers have few tools for proving that a given number is irrational. And frequently, those tools fail.

When mathematicians do succeed in proving a number's irrationality, the core of their proof usually relies on one basic property of rational numbers: They don't like to come near each other. For example, say you choose two fractions, one with a denominator of 7, the other with a denominator of 100. To measure the distance between them (by subtracting the smaller fraction from the larger one), you have to rewrite your fractions so that they have the same denominator. In this case, the common denominator is 700. So no matter which two fractions you start with, the distance between them is some whole number divided by 700 — meaning that at the very least, the fractions must be 1/700 apart. If you want fractions that are even closer together than 1/700, you'll have to increase one of the two original denominators.
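
As a quick sanity check on that arithmetic (an illustration added here, not part of the original article), Python's fractions module confirms that no two distinct fractions with denominators 7 and 100 can sit closer together than 1/700:

    # Two fractions with denominators 7 and 100 can never be closer than 1/700
    # without changing one of the denominators.
    from fractions import Fraction
    from itertools import product

    pairs = [(Fraction(a, 7), Fraction(b, 100))
             for a, b in product(range(1, 7), range(1, 100))]
    gaps = [abs(x - y) for x, y in pairs if x != y]
    print(min(gaps))   # 1/700, achieved by 4/7 and 57/100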

Flip this reasoning around, and it turns into a criterion for proving irrationality. Suppose you have a number k, and you want to figure out whether it's rational. Maybe you notice that the distance between k and 4/7 is less than 1/700. That means k cannot have a denominator of 100 or less. Next, maybe you find a new fraction that allows you to rule out the possibility that k has a denominator of 1,000 or less — and then another fraction that rules out a denominator of 10,000 or less, and so on. If you can construct an infinite sequence of fractions that gradually rules out every possible denominator for k, then k cannot be rational.
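
That exclusion step can be written down mechanically. The sketch below (the helper name and the value of k are invented for illustration) computes the largest denominator that a single close approximation rules out:

    # If |k - p/q| < 1/(q*D) and k != p/q, then k cannot be written with any
    # denominator <= D. The function returns the largest such D for one fraction.
    from fractions import Fraction

    def denominators_ruled_out(k, p, q):
        gap = abs(k - Fraction(p, q))
        if gap == 0:
            return 0                       # p/q is k itself: nothing is excluded
        bound = Fraction(1, q) / gap       # the criterion needs D < 1/(q*gap)
        D = int(bound)
        return D - 1 if D == bound else D

    k = Fraction(4, 7) + Fraction(1, 800)   # a made-up k within 1/700 of 4/7
    print(denominators_ruled_out(k, 4, 7))  # 114: every denominator up to 114 is excluded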

Nearly every irrationality proof follows these lines. But you can't just take any sequence of fractions that approaches k — you need fractions that approach k quickly compared to their denominators. This guarantees that the denominators they rule out keep growing larger. If your sequence doesn't approach k quickly enough, you'll only be able to rule out denominators up to a certain point, rather than all possible denominators.

There's no general recipe for constructing a suitable sequence of fractions. Sometimes, a good sequence will fall into your lap. For example, the number e (approximately 2.71828) is equivalent to the following infinite sum:

$\frac{1}{1} + \frac{1}{1} + \frac{1}{2 \times 1} + \frac{1}{3 \times 2 \times 1} + \frac{1}{4 \times 3 \times 2 \times 1} + \cdots$.

If you halt this sum at any finite point and add up the terms, you get a fraction. And it takes little more than high school math to show that this sequence of fractions approaches e quickly enough to rule out all possible denominators.
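
A short numerical sketch (added here, not from the article) makes the point concrete: each partial sum is a fraction whose denominator divides a factorial, and its distance to e shrinks so fast that the gap times the denominator heads to zero, which is the quantitative meaning of "quickly enough":

    # Partial sums of the series above (1/k! summed over k = 0, 1, 2, ...) are
    # fractions whose denominators divide k!. Their distance to e shrinks much
    # faster than 1/denominator, so gap * denominator tends to zero.
    from fractions import Fraction
    import math

    partial, kfact = Fraction(0), 1
    for k in range(15):
        kfact *= max(k, 1)                 # kfact = k!
        partial += Fraction(1, kfact)
        gap = abs(math.e - partial)        # mixing Fraction and float gives a float
        print(k, partial.denominator, gap * partial.denominator)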

But this trick doesn't always work. For instance, Apéry's irrational number, ζ(3), is defined as this infinite sum:

$\frac{1}{1^3} + \frac{1}{2^3} + \frac{1}{3^3} + \frac{1}{4^3} + \cdots$.

If you halt this sum at each finite step and add the terms, the resulting fractions don't approach ζ(3) quickly enough to rule out every possible denominator for ζ(3). There's a chance that ζ(3) might be a rational number with a larger denominator than the ones you've ruled out.
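
Running the same added check on the naive partial sums of ζ(3) shows the failure: the gap only shrinks like roughly 1/(2n²) while the denominators explode, so gap times denominator blows up instead of heading to zero:

    # Naive partial sums of 1/1^3 + 1/2^3 + ...: the error shrinks only like
    # ~1/(2n^2) while the reduced denominators grow enormously, so
    # gap * denominator explodes -- far too slow to prove irrationality this way.
    from fractions import Fraction

    ZETA3 = 1.2020569031595943          # Apery's constant to double precision
    partial = Fraction(0)
    for n in range(1, 16):
        partial += Fraction(1, n**3)
        gap = ZETA3 - partial            # float
        print(n, partial.denominator, gap * partial.denominator)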

Apéry's stroke of genius was to construct a different sequence of fractions that do approach ζ(3) quickly enough to rule out every denominator. His construction used mathematics that dated back centuries — one article called it "a proof that Euler missed." But even after mathematicians came to understand his method, they were unable to extend his success to other numbers of interest.
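
For the curious, Apéry's sequence can be reproduced from a three-term recurrence that later expositions of the proof (notably van der Poorten's "A proof that Euler missed") made standard. The sketch below is a reconstruction based on that recurrence, not code from the article: consecutive convergents agree to many more digits at each step, while the denominators grow only geometrically, which is exactly the "fast enough" behavior the naive partial sums lack.

    # Apery's convergents a_n / b_n for zeta(3), generated from the recurrence
    #   n^3 * u_n = (34n^3 - 51n^2 + 27n - 5) * u_{n-1} - (n - 1)^3 * u_{n-2}
    # with a_0 = 0, a_1 = 6 and b_0 = 1, b_1 = 5 (a reconstruction, see above).
    from fractions import Fraction

    a = [Fraction(0), Fraction(6)]
    b = [Fraction(1), Fraction(5)]
    for n in range(2, 9):
        c = 34 * n**3 - 51 * n**2 + 27 * n - 5
        a.append((c * a[n - 1] - (n - 1) ** 3 * a[n - 2]) / n**3)
        b.append((c * b[n - 1] - (n - 1) ** 3 * b[n - 2]) / n**3)

    for n in range(1, 8):
        step = abs(a[n + 1] / b[n + 1] - a[n] / b[n])
        # b_n grows only geometrically while the steps between convergents collapse
        print(n, b[n], float(a[n] / b[n]), float(step))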

Like every irrationality proof, Apéry's result instantly implied that a bunch of other numbers were also irrational — for example, ζ(3) + 3, or 4 × ζ(3). But mathematicians can't get too excited about such freebies. What they really want is to prove that "important" numbers are irrational — numbers that "show up in one formula, [then] another one, also in different parts of mathematics," Zudilin said.

Few numbers meet this standard more thoroughly than the values of the Riemann zeta function and the allied functions known as L-functions. The Riemann zeta function, ζ(x), transforms a number x into this infinite sum:

$\frac{1}{1^x} + \frac{1}{2^x} + \frac{1}{3^x} + \frac{1}{4^x} + \cdots$.

ζ(3), for instance, is the infinite sum you get when you plug in x = 3. The zeta function has long been known to govern the distribution of prime numbers. Meanwhile, L-functions — which are like the zeta function but have varying numerators — govern the distribution of primes in more complicated number systems. Over the past 50 years, L-functions have risen to special prominence in number theory because of their key role in the Langlands program, an ambitious effort to construct a "grand unified theory" of mathematics. But they also crop up in completely different areas of mathematics. For example, take the L-function whose numerators follow the pattern 1, −1, 0, 1, −1, 0, repeating. You get:

$\frac{1}{1^x} + \frac{-1}{2^x} + \frac{0}{3^x} + \frac{1}{4^x} + \frac{-1}{5^x} + \frac{0}{6^x} + \cdots$.

In addition to its role in number theory, this function, which we'll call L(x), makes unexpected cameos in geometry. For example, if you multiply L(2) by a simple factor, you get the volume of the largest regular tetrahedron with "hyperbolic" geometry, the curved geometry of saddle shapes.
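
Both families of numbers are easy to evaluate numerically: they are sums of the form (repeating numerator pattern)/n^x. A small sketch, with a helper name invented here, is enough to see the values being discussed:

    # Sums of the form sum_{n>=1} c_n / n^x where the numerators c_n cycle
    # through a repeating pattern. Pattern (1,) gives the zeta values;
    # pattern (1, -1, 0) gives the L(x) defined above.
    def dirichlet_sum(pattern, x, terms=200_000):
        total = 0.0
        for n in range(1, terms + 1):
            total += pattern[(n - 1) % len(pattern)] / n**x
        return total

    print(dirichlet_sum((1,), 3))        # ~1.2020569...  zeta(3)
    print(dirichlet_sum((1, -1, 0), 2))  # ~0.78          the article's L(2)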

Mathematicians have been mulling over L(2) for at least two centuries. Over the years, they have come up with seven or eight different ways to approximate it with sequences of rational numbers. But none of these sequences approach it quickly enough to prove it irrational.

Researchers seemed to be at an impasse — until Calegari, Dimitrov and Tang decided to make it the centerpiece of their new approach to irrationality.

A Proof That Riemann Missed

In an irrationality proof, you want your sequence of fractions to rule out ever-larger denominators. Mathematicians have a well-loved strategy for understanding such a sequence: They'll package it into a function. By studying the function, they gain access to an arsenal of tools, including all the techniques of calculus.

In this case, mathematicians construct a "power series" — a mathematical expression with infinitely many terms, such as 3 + 2x + 7x² + 4x³ + ... — where you determine each coefficient by combining the number you're studying with one fraction in the sequence, according to a particular formula. The first coefficient ends up capturing the size of the denominators ruled out by the first fraction; the second coefficient captures the size of the denominators ruled out by the second fraction; and so on.

Roughly speaking, the coefficients and the ruled-out denominators have an inverse relationship, meaning that your goal — proving that the ruled-out denominators approach infinity — is equivalent to showing that the coefficients approach zero.

The advantage of this repackaging is that you can then try to control the coefficients using properties of the power series as a whole. In this case, you want to study which x-values make the power series "blow up" to infinity. The terms in the power series involve increasingly high powers of x, so unless they are paired with extremely small coefficients, large x-values will make the power series blow up. As a result, if you can show that the power series does not blow up, even for large values of x, that tells you that the coefficients do indeed shrink to zero, just as you want.
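
A toy example (added here) of that inverse relationship: the function 1/(1 - x/r) has Taylor coefficients r^(-n) and blows up exactly at x = r, so pushing the first blow-up point farther out forces the coefficients to shrink faster.

    # The Taylor coefficients of 1/(1 - x/r) are r**(-n), and the series blows
    # up exactly at x = r: a more distant singularity means faster-shrinking
    # coefficients, which is the leverage irrationality proofs are after.
    for r in (1.5, 3.0, 6.0):
        coeffs = [r ** (-n) for n in range(8)]
        print(r, [round(c, 5) for c in coeffs])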

To bring an especially rich set of tools to bear on this question, mathematicians consider "complex" values for x. Complex numbers combine a real part and an imaginary part, and can be represented as points in a two-dimensional plane.

Imagine starting at the number zero in the complex number plane and inflating a disk until you bump into the first complex number that makes your power series explode to infinity — what mathematicians call a singularity. If the radius of this disk is large enough, you can deduce that the coefficients of the power series shrink to zero fast enough to imply that your number is irrational.

Apéry's proof and many other irrationality results can be rephrased in these terms, even though that's not how they were originally written. But when it comes to L(2), the disk is too small. For this number, mathematicians viewed the power series approach as a dead end.

But Calegari, Dimitrov and Tang saw a potential way through. A singularity doesn't always represent a final stopping point — that depends on what things look like when you hit the singularity. Sometimes the boundary of the disk hits a mass of singularities. If this happens, you're out of luck. But other times, there might be just a few isolated singularities on the boundary. In those cases, you might be able to inflate your disk into a bigger region in the complex plane, steering clear of the singularities.

That's what Calegari, Dimitrov and Tang hoped to do. Perhaps, they thought, the extra information contained in this larger region might enable them to get the control they needed over the power series' coefficients. Some power series, Calegari said, can have a "wonderful life outside the disk."

Over the course of four years, Calegari, Dimitrov and Tang figured out how to use this approach to prove that L(2) is irrational. "They developed a completely new criterion for deciding whether a given number is irrational," Zudilin said. "It's truly amazing."

As with Apéry's proof, the new method is a throwback to an earlier era, relying heavily on generalizations of calculus from the 1800s. Bost even called the new work "a proof that Riemann missed," referring to Bernhard Riemann, one of the towering figures of 19th-century mathematics, after whom the Riemann zeta function is named.

The new proof doesn't stop with L(2). We construct that number by replacing the 1s in the numerators of ζ(2) with a pattern of three repeating numbers: 1, −1, 0, 1, −1, 0 and so on. You can make an infinite collection of other ζ(2) variants with three repeating numerators — for instance, the repeating pattern 1, 4, 10, 1, 4, 10 ..., which produces the infinite sum

$\frac{1}{1^2} + \frac{4}{2^2} + \frac{10}{3^2} + \frac{1}{4^2} + \frac{4}{5^2} + \frac{10}{6^2} + \cdots$.

Every such sum, the researchers proved, is also irrational (provided it doesn't add up to zero). They also used their method to prove the irrationality of a completely different set of numbers made from products of logarithms. Such numbers were previously "completely out of reach," Bost said.


Original Submission

posted by janrinok on Tuesday January 14, @06:42PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

At an extremely remote Antarctic outpost, scientists have unearthed a pristine sample of our planet's history.

It's an ice core 2,800 meters, or some 1.7 miles, long. But it's not just the length that's so significant. The ice contains preserved pockets of Earth's air from some 1.2 million years ago, if not more. Previous ice cores provided direct evidence of our planet's climate and environment from up to 800,000 years ago.

So, this is a giant leap. The team drilled so deep they reached the continent's bedrock.

"We have marked a historic moment for climate and environmental science," Carlo Barbante, a polar scientist and coordinator of the ice core campaign called "Beyond EPICA - Oldest Ice," said in a statement.

An international group of researchers excavated the ice at Little Dome C Field Camp in Antarctica, located 10,607 feet (3,233 meters) above sea level. They beamed radar down into the subsurface and used computer modeling of the ice flow to determine where this ancient ice was likely to be. And they were right.

This was no easy feat. Atop the Antarctic plateau, summers average minus-35 degrees Celsius, or minus-31 degrees Fahrenheit.

Although paleoclimatologists, who research Earth's past climate, have reliable methods of indirectly gauging our planet's deep past — with proxies such as fossilized shells and compounds produced by algae — direct evidence, via direct air, is scientifically invaluable. For example, past ice cores have revealed that the heat-trapping carbon dioxide levels in Earth's atmosphere today have skyrocketed — they're the highest they've been in some 800,000 years. It's incontrovertible evidence of Earth's past.

Scientists expect this even older ice core, however, will reveal secrets about a period called the Mid-Pleistocene Transition, spanning roughly 1.2 million to 900,000 years ago. Mysteriously, during this period the glacial cycles — wherein ice sheets expanded over much of the continents and then retreated — lengthened markedly, from about 41,000 years to 100,000 years.

"The reasons behind this shift remain one of climate science's enduring mysteries, which this project aims to unravel," the drilling campaign, which was coordinated by the Institute of Polar Sciences of the National Research Council of Italy, said in a statement.

Now, the drilling is over. But the campaign to safely transport the ice back to laboratories, and then scrutinize this over-million-year-old atmosphere, has begun.

"The precious ice cores extracted during this campaign will be transported back to Europe on board the icebreaker Laura Bassi, maintaining the minus-50 degrees Celsius cold chain, a significant challenge for the logistics of the project," explained Gianluca Bianchi Fasani, the head of ENEA (National Agency for New Technologies, Energy, and Sustainable Economic Development) logistics for the Beyond EPICA expedition.

These historic ice cores will travel in "specialized cold containers" as they ship across the globe, far from the depths of their Antarctic home.


Original Submission

posted by janrinok on Tuesday January 14, @01:57PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

TSMC has started producing chips at its Fab 21 near Phoenix, Arizona, using its 4nm-class process technology, Commerce Secretary Gina Raimondo told Reuters. This marks the first time such a cutting-edge production node has been manufactured in the United States. The confirmation from a high-ranking official comes months after the first unofficial information emerged that the fab was mass-producing chips for Apple.

"For the first time ever in our country's history, we are making leading-edge 4nm chips on American soil, American workers — on par in yield and quality with Taiwan," Raimondo told Reuters.

According to unofficial information, TSMC's Fab 21 in Arizona is manufacturing at least three processor models: the A16 Bionic system-on-chip used in Apple's iPhone 15 and iPhone 15 Plus; the main processor of Apple's S9 system-in-package for smartwatches, which has two 64-bit cores and a quad-core neural engine; and an AMD Ryzen 9000-series CPU. These chips are produced on TSMC's 4nm-class—N4 and N4P—process technologies.

The TSMC Arizona project is instrumental to the U.S. goal of producing 20% of the world's most advanced logic chips by 2030, which the Biden administration set a few years ago alongside the CHIPS and Science Act. TSMC's Fab 21 in Arizona produces chips for American companies in volume (the facility's current production capacity is rumored to be around 10,000 wafer starts per month), clear evidence that the initiative works.

Under the CHIPS and Science Act, the U.S. Commerce Department provided TSMC with $6.6 billion in grants and up to $5 billion in loan guarantees. The Fab 21 site will require funding of about $65 billion to include three fab modules that are set to be constructed and launched online by the end of the decade.

The first Fab 21 phase 1 module will officially start mass production using 4nm and 5nm-class process technologies. The next Fab 21 phase 2 is expected to follow in 2028 with 3nm-class process technologies. By the decade's end, TSMC expects to build its Fab 21 phase 3, which will produce chips on 2nm-class and 1.6nm-class nodes and their variations with backside power delivery.


Original Submission

posted by hubie on Tuesday January 14, @09:09AM   Printer-friendly

Forever Chemicals: Wastewater Treatment Plants Funnel PFAS Into Drinking Water

Arthur T Knackerbracket has processed the following story:

Wastewater treatment facilities are a major source of PFAS contamination in drinking water in the US – they discharge enough of the “forever chemicals” to raise concentrations above safe levels for an estimated 15 million people or more. They can also release long-lasting prescription drugs into the water supply.

Even though these plants clean wastewater, they do not destroy all the contaminants added upstream – and the chemicals that remain behind are released back into the same waterways that supply drinking water. “It’s a funnel into the environment,” says Bridger Ruyle at New York University. “You capture a bunch of things from a bunch of different places, and it’s all released in one place.”

Perfluoroalkyl and polyfluoroalkyl substances (PFAS) are of particular concern because they contain carbon-fluorine bonds, which make them extremely persistent in the environment. Regular exposure to several types of PFAS has been associated with increased risk for many health problems, from liver damage to various forms of cancer. The US Environmental Protection Agency (EPA) recently set strict limits in drinking water for six of the best-studied PFAS.

Wastewater treatment facilities are a known source of PFAS contamination in the sewage sludge they produce as a by-product, which is sometimes used for fertiliser. To find out whether similar contamination remains in the treated water, Ruyle and his colleagues measured the concentration of PFAS and other molecules that contain carbon-fluorine bonds in wastewater at eight large treatment facilities around the US.

Their findings suggest wastewater treatment plants across the US discharge tens of thousands of kilograms of fluorine-containing compounds into the environment each year, including a substantial amount of PFAS. Once treated wastewater is discharged from a facility, it mixes with natural waters in rivers and lakes. “That’s going to create a downstream drinking water problem,” says Ruyle.

[...] “It demonstrates that wastewater treatment plants are really important sources for these compounds,” says Carsten Prasse at Johns Hopkins University in Maryland, who was not involved with the study. There are ways to remove or destroy PFAS in water, and more drinking water facilities are installing such systems, but currently, “our wastewater treatment plants are not set up to deal with this”, he says.

Forever chemicals alone would be a problem, but the researchers also found PFAS made up only a small fraction of the total volume of fluorinated chemicals discharged from the facilities. Most were not PFAS at all, but other compounds used in common pharmaceuticals, such as statins and SSRIs. These pharmaceuticals are also of concern for ecosystems and people.

“Another person could be drinking a cocktail of fluorinated prescription medication,” says Ruyle. However, he says the consequences of long-term exposure to low doses of such compounds aren’t well understood.

“We need to start conversations about whether or not we should be using a lot of fluorine in pharmaceuticals,” says Ruyle. Fluorination is widely used in drugs to enhance their effect in the body, but “preventing widespread chemical contamination should also be important”, he says.

'Forever Chemicals' Are Causing Health Problems In Some Wildlife

Arthur T Knackerbracket has processed the following story:

“Forever chemicals” are pervasive, and researchers have in recent years been ringing the alarms about the negative impacts on human health. But humans aren’t the only animals to be concerned about.

Freshwater turtles in Australia exposed to per- and polyfluoroalkyl substances, or PFAS, experienced changes to their metabolic functions, environmental biochemist David Beale and colleagues report in the Dec. 15 Science of the Total Environment. “We found a whole range of biomarkers that are indicative of cancer and other health problems within reptiles,” says Beale, of the Commonwealth Scientific and Industrial Research Organisation in Dutton Park, Australia.

Much of the research on PFAS and health is focused on humans. It’s less clear what the ubiquitous chemicals are doing to other animals. Most of that research has been lab-based, and those data are then used to set acceptable levels of contaminants.

But labs can’t replicate all the complexities of a natural environment, Beale says. “There’s a massive gap in our understanding of what these chemicals do to wildlife, and they’re being equally exposed — if not more exposed — because they can’t get respite.”

Beale and colleagues captured freshwater turtles (Emydura macquarii) from three sites around Queensland: one site with a high level of PFAS, one with a moderate amount and one with barely discernable levels, all with no other contaminants. In a lab, some of the female turtles were hormonally induced to lay eggs. Then the wild-caught adults and their lab-incubated hatchlings were given physical and chemical exams, and their eggshells were tested to see if there was a link between shell strength and PFAS exposure.

“What makes this study really unique is we’re not only measuring the contaminant concentration, but we’re really diving deep into that health aspect as well,” Beale says.

[...] The findings are “a little scary,” says Jean-Luc Cartron, a biologist at the University of New Mexico in Albuquerque who was not involved with the research.

“We really need to jump on this issue of ecological toxicity,” Cartron says. “If the [study] authors are right, and the lack of juveniles that they see out in the environment is caused by PFAS, we don’t want to wait until we’re missing one whole full generation of animals.”

As aquatic animals with long lives and few predators, freshwater turtles are living environmental monitors for PFAS bioaccumulation, Beale says. Surprisingly, he says, even the animals from the site with the lowest level of contamination had PFAS-related health problems. “We still saw evidence of harm.”

While continuing this work with freshwater turtles, the team is also looking at PFAS impacts on more sites and more animals, including freshwater crocodiles, cane toads and frogs in Queensland, New South Wales and Victoria.

“All these animals that we love in the wild are being exposed to these chemicals, and we’re just not seeing the obvious impacts of those exposures,” Beale says. “My greatest fear is in 10, 15 years’ time, we might see those impacts and it might be too late.”

D.J. Beale et al. Forever chemicals don't make hero mutant ninja turtles: Elevated PFAS levels linked to unusual scute development in newly emerged freshwater turtle hatchlings (Emydura macquarii macquarii) and a reduction in turtle populations. Science of the Total Environment. Vol. 956, December 15, 2024, 176313. doi: 10.1016/j.scitotenv.2024.176313.


Original Submission #1 | Original Submission #2

posted by hubie on Tuesday January 14, @04:22AM   Printer-friendly

After an average of 6,000 words, Stanford and Google researchers can spin up a generative agent that will act a lot like you do:

Stanford University researchers paid 1,052 people $60 to read the first two lines of The Great Gatsby to an app. That done, an AI that looked like a 2D sprite from an SNES-era Final Fantasy game asked the participants to tell the story of their lives. The scientists took those interviews and crafted them into an AI they say replicates the participants' behavior with 85% accuracy.

The study, titled Generative Agent Simulations of 1,000 People, is a joint venture between Stanford and scientists working for Google's DeepMind AI research lab. The pitch is that creating AI agents based on random people could help policymakers and business people better understand the public. Why use focus groups or poll the public when you can talk to them once, spin up an LLM based on that conversation, and then have their thoughts and opinions forever? Or, at least, as close an approximation of those thoughts and feelings as an LLM is able to recreate.

"This work provides a foundation for new tools that can help investigate individual and collective behavior," the paper's abstract said.

"How might, for instance, a diverse set of individuals respond to new public health policies and messages, react to product launches, or respond to major shocks?" The paper continued. "When simulated individuals are combined into collectives, these simulations could help pilot interventions, develop complex theories capturing nuanced causal and contextual interactions, and expand our understanding of structures like institutions and networks across domains such as economics, sociology, organizations, and political science."

All those possibilities based on a two-hour interview fed into an LLM that answered questions mostly like their real-life counterparts.

[...] The entire document is worth reading if you're interested in how academics are thinking about AI agents and the public. It did not take long for researchers to boil down a human being's personality into an LLM that behaved similarly. Given time and energy, they can probably bring the two closer together.

This is worrying to me. Not because I don't want to see the ineffable human spirit reduced to a spreadsheet, but because I know this kind of tech will be used for ill. We've already seen stupider LLMs trained on public recordings tricking grandmothers into giving away bank information to an AI relative after a quick phone call. What happens when those machines have a script? What happens when they have access to purpose-built personalities based on social media activity and other publicly available information?

What happens when a corporation or a politician decides the public wants and needs something based not on their spoken will, but on an approximation of it?

Can it join my zoom calls please?


Original Submission

posted by hubie on Monday January 13, @11:34PM   Printer-friendly

https://tomscii.sig7.se/2025/01/De-smarting-the-Marshall-Uxbridge

This is the story of a commercially unavailable stereo pair of the bi-amped Marshall Uxbridge, with custom-built replacement electronics: active filters feeding two linear power amps. Listening to this high-fidelity set has brought me immense enjoyment. Play a great album on these near-fields, and the result is close to pure magic! Over and above the accurate reproduction of a wide audio range, the precision and depth of its stereo imaging is stunning.

Dumpster diving electronics is a way of life, which sometimes brings great moments of joy. One of these moments happened when I stumbled upon... the Marshall Uxbridge Voice, a smart speaker, in seemingly pristine condition. And not just one, but two of them! One was black, the other white. What a find!

What to do with these babies? Intrigued by the question "what could be wrong with them, why would someone throw them out like that?" – I set out to investigate. Plugging in one of them, after a few seconds of waiting, a female voice was heard: «NOW IN SETUP MODE. FOLLOW THE INSTRUCTIONS IN YOUR DEVICE'S COMPANION APP.»

[...] At that moment I knew I was not into smart speakers. Or at least not into the smartness. The speakers were good. Oh, they were excellent! But they had to be de-smarted. Preferably with a single, dumb, analog RCA line input on their backs, so nobody but me gets to decide over the program material. That way I could also drive them as a stereo pair. No Bluetooth, no latency, no female robot overlord, just a good old-fashioned line input!

Seems like a modest ask. Can we have it? Well, time to look inside!


Original Submission

posted by hubie on Monday January 13, @06:49PM   Printer-friendly
from the something-to-shout-about dept.

Arthur T Knackerbracket has processed the following story:

Few creatures can tangle with a velvet ant and walk away unscathed. These ground-dwelling insects are not ants, but parasitic wasps known for their excruciating stings.

Now researchers have discovered that the wasps don’t dole out pain the same way to all species. Different ingredients in their venom cocktail do the dirty work depending on who’s at the business end of a wasp’s stinger, researchers report online January 6 in Current Biology.

Velvet ants are among the most well-defended insects, wielding not just venom, but warning coloration and odor, an extremely tough exoskeleton and long stinger, and the ability to “scream” when provoked. In 2016, the entomologist Justin Schmidt wrote that getting stung by a velvet ant felt akin to “hot oil from the deep fryer spilling over your entire hand.” Scientists have found that other vertebrates react to the wasp’s sting too, including mammals, reptiles, amphibians and birds.

Other species are known to possess this type of “broad-spectrum” venom — a recent study identified a centipede with a venom cocktail that changes depending on whether the animal is acting as predator or prey. But it remains rare for one organism to be able to deter animals from so many different groups, says Lydia Borjon, a sensory neurobiologist at Indiana University Bloomington. In some cases, researchers have identified generalized venoms that zero in on molecular targets shared by different groups of creatures, passed down from when they last had a common ancestor in the distant past.

When Borjon and her colleagues first began experimenting with velvet ants, they suspected that might be the case for their venom too.

[...] This study is among the first to demonstrate multiple modes of action within a single venom and is “an important ‘first pass,’ using some innovative techniques to explore an interesting question,” Sam Robinson, a toxinologist at the University of Queensland in Australia, says.

But the findings may be more common than they seem, he says. There’s little scientific incentive to test most venoms’ effects in different creatures, particularly if a species is a prey specialist, “and so while it seems like this is something unique, it’s hard to say with certainty,” Robinson says.

The research also adds to another enduring mystery about the velvet ant: Why it seems to have so many weapons at its disposal. Despite their extensive defensive arsenal, nothing seems to consistently eat them, nor are velvet ants aggressive predators themselves, says Joseph Wilson, an evolutionary ecologist at Utah State University in Tooele.

The fact that the ant’s venom seems to “pack a real punch” against other insects suggests that interactions with some unknown insect predator — either in the past or the present — may be driving the evolution of these features, Wilson says. Or it could just be a happy accident of evolution. “As evolutionary biologists, we try to ascribe some purpose behind these adaptations, but sometimes evolution works in mysterious ways.”

Journal Reference: L.J. Borjon et al. Multiple mechanisms of action of an extremely painful venom. Current Biology. Published online January 6, 2025. doi: 10.1016/j.cub.2024.11.070


Original Submission

posted by janrinok on Monday January 13, @02:04PM   Printer-friendly

Privacy advocate draws attention to the fact that hundreds of police surveillance cameras are streaming directly to the open internet:

Some Motorola automated license plate reader surveillance cameras are live-streaming video and car data to the unsecured internet where anyone can watch and scrape them, a security researcher has found. In a proof-of-concept, a privacy advocate then developed a tool that automatically scans the exposed footage for license plates, and dumps that information into a spreadsheet, allowing someone to track the movements of others in real time.

Matt Brown of Brown Fine Security made a series of YouTube videos showing vulnerabilities in a Motorola Reaper HD ALPR that he bought on eBay. As we have reported previously, these ALPRs are deployed all over the United States by cities and police departments. Brown initially found that it is possible to view the video and data that these cameras are collecting if you join the private networks that they are operating on. But then he found that many of them are misconfigured to stream to the open internet rather than a private network.

"My initial videos were showing that if you're on the same network, you can access the video stream without authentication," Brown told 404 Media in a video chat. "But then I asked the question: What if somebody misconfigured this and instead of it being on a private network, some of these found their way onto the public internet?" 

In his most recent video, Brown shows that many of these cameras are indeed misconfigured to stream both video and the data they are collecting to the open internet, where their IP addresses can be found using the Internet of Things search engine Censys. The streams can be watched without any sort of login.

In many cases, they are streaming color video as well as infrared black-and-white video of the streets they are surveilling, and are broadcasting that data, including license plate information, onto the internet in real time.


Original Submission

posted by hubie on Monday January 13, @09:17AM   Printer-friendly
from the shining-a-light-on-a-problem-in-the-auto-industry dept.

https://theringer.com/2024/12/03/tech/headlight-brightness-cars-accidents

The sun had already set in Newfoundland, Canada, and Paul Gatto was working late to give me a presentation on headlights. This, it should be said, is not his job. Not even close, really. Gatto, 28, is a front-end developer by day, working for a weather application that's used by the majority of Canadian meteorologists, he told me on a video call, occasionally hitting his e-cig or sipping on a Miller Lite. As to how he ended up as one of the primary forces in the movement to make car headlights less bright—a movement that's become surprisingly robust in recent years—even Gatto can't really explain.

"It is fucking weird," he said. "I need something else to do with my spare time. This takes a lot of it."

Gatto is the founder of the subreddit r/FuckYourHeadlights, the internet's central hub for those at their wits' end with the current state of headlights. The posts consist of a mishmash of venting, meme-ing, and community organizing. A common entry is a photo taken from inside the car of someone being blasted with headlights as bright as an atomic bomb, and a caption along the lines of "How is this fucking legal?!" Or users will joke about going back in time and Skynet-style killing the Audi lighting engineer who first rolled out LED headlights. Or they'll discuss ways to write to their congresspeople, like Mike Thompson, House Democrat of California, who recently expressed support for the cause.


Original Submission

posted by hubie on Monday January 13, @04:34AM   Printer-friendly
from the Show-me-your-sources dept.

According to Ars Technica:

The GNU General Public License (GPL) and its "Lesser" version (LGPL) are widely known and used. Still, every so often, a networking hardware maker has to get sued to make sure everyone knows how it works.

The latest such router company to face legal repercussions is AVM, the Berlin-based maker of the most popular home networking products in Germany. Sebastian Steck, a German software developer, bought an AVM Fritz!Box 4020 (PDF) and, being a certain type, requested the source code that had been used to generate certain versions of the firmware on it.

According to Steck's complaint (translated to English and provided in PDF by the Software Freedom Conservancy, or SFC), he needed this code to recompile a networking library and add some logging to "determine which programs on the Fritz!Box establish connections to servers on the Internet and which data they send." But Steck was also concerned about AVM's adherence to GPL 2.0 and LGPL 2.1 licenses, under which its FRITZ!OS and various libraries were licensed. The SFC states that it provided a grant to Steck to pursue the matter.

AVM provided source code, but it was incomplete, as "the scripts for compilation and installation were missing," according to Steck's complaint. This included makefiles and details on environment variables, like "KERNEL_LAYOUT," necessary for compilation. Steck notified AVM, AVM did not respond, and Steck sought legal assistance, ultimately including the SFC.

Months later, according to the SFC, AVM provided all the relevant source code and scripts, but the suit continued. AVM ultimately paid Steck's attorney fee. The case proved, once again, that not only are source code requirements real, but the LGPL also demands freedom, despite its "Lesser" name, and that source code needs to be useful in making real changes to firmware—in German courts, at least.
[...]
Lawsuits as necessary lockpicks

Are "copyleft" lawsuits against router and other networking hardware makers common? Just check the Free Software Foundation (FSF) Europe's wiki list of GPL lawsuits and negotiations. Many or most of them involve networking gear that made ample use of free source code and then failed to pay it back in offering the same to others.

At the top is perhaps the best-known case in tech circles, the Linksys WRT54G conflict from 2003. While the matter was settled before a lawsuit was filed, negotiations between Linksys owner Cisco and a coalition led by the Free Software Foundation, publisher of the GPL and LGPL, made history. It resulted in the release of all the modified and relevant GPL source code used in its hugely popular blue-and-black router.


Original Submission

posted by hubie on Sunday January 12, @11:48PM   Printer-friendly

Changing just 0.001% of inputs to misinformation makes the AI less accurate:

It's pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical information can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional "poisoning" of an LLM during training, it also has implications for the body of misinformation that's already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned.
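
Mechanically, the attack described here is nothing more exotic than mixing a chosen fraction of crafted documents into the corpus before training. A schematic sketch (not the researchers' actual pipeline; the function name and parameters are invented for illustration):

    # Schematic only: data poisoning amounts to swapping a chosen fraction of
    # the training corpus for crafted documents before the model ever sees it.
    import random

    def poison_corpus(corpus, poison_docs, fraction, seed=0):
        rng = random.Random(seed)
        corpus = list(corpus)                        # work on a copy
        n_swap = max(1, int(len(corpus) * fraction))
        for idx in rng.sample(range(len(corpus)), n_swap):
            corpus[idx] = rng.choice(poison_docs)
        return corpus

    # 0.001 percent of a 10-million-document corpus is only about 100 documents.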

This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web."

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team was focused on a rather important one: medical information. Such information shows up in general-purpose LLMs, such as the ones used to search the Internet, which end up being consulted for medical questions. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural language queries and respond in a similar manner.

[...] The researchers used GPT 3.5 to generate "high quality" medical misinformation. While this model has safeguards that should prevent it from producing medical misinformation, the research found it would happily do so if given the correct prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated where either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. "At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack," the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

[...] The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. "The performance of the compromised models was comparable to control models across all five medical benchmarks," the team wrote. So there's no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

[...] In any case, it's clear that relying on even the best medical databases out there won't necessarily produce an LLM that's free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Journal Reference:
Alber, Daniel Alexander, Yang, Zihao, Alyakin, Anton, et al. Medical large language models are vulnerable to data-poisoning attacks [open], Nature Medicine (DOI: 10.1038/s41591-024-03445-1)


Original Submission

posted by hubie on Sunday January 12, @07:06PM   Printer-friendly
from the "up-to"-includes-zero dept.

Ted Farnsworth, former CEO of Helios and Matheson Analytics, lied about the success of MoviePass to attract investors:

Ted Farnsworth, the former CEO of MoviePass and guy who had the bright idea to charge $9.95 per month for unlimited film screenings, has admitted to defrauding investors in the subscription company. According to the Department of Justice, Farnsworth pleaded guilty to one count of securities fraud and one count of conspiracy to commit securities fraud and will face up to 25 years in prison.

If you're unfamiliar with the MoviePass story, Farnsworth is not the founder of the company, which was started by Urbanworld Film Festival founder Stacy Spikes as a relatively modest subscription service designed to entice people to go to the cinema a little more often. Farnsworth was the head of analytics firm Helios and Matheson, which bought a majority stake in MoviePass in 2017 and eventually pushed the company to offer filmgoers the ability to see one film per day for just $9.95 per month.

Farnsworth's plan successfully pulled in lots of subscribers—more than three million people signed up for the service. And that's where the trouble started. While Farnsworth hit the press trail to tout the boom in business and claim that the company would turn a profit by selling customer data, behind the scenes, MoviePass was hemorrhaging cash. It wouldn't take long before MoviePass started backtracking on its promise of unlimited filmgoing, as it instituted blackouts on popular films, experienced outages in its services, and changed prices and plans with little warning.

It was pretty obvious that MoviePass was doomed to fail the moment the unlimited plan was introduced, but Farnsworth claimed to investors that the price was sustainable and would be profitable on subscription fees alone. Turns out no, as the DOJ found MoviePass lost money from the plan. As for Farnsworth's customer data play, that was smoke and mirrors, too. The Justice Department said that his analytics company "did not possess these capabilities to monetize MoviePass' subscriber data." In the end, MoviePass never had a stream of revenue beyond its subscriptions—and that was costing the company so much money that Farnsworth instructed employees to throttle users to prevent them from using the plan they paid for.

After Farnsworth drove MoviePass into bankruptcy, he apparently ran the playbook again with another company called Vinco Ventures. Per the DOJ, Farnsworth and his co-conspirators pulled in cash from investors by lying about the standing of the business, all while diverting cash directly to their own pockets.

Previously:
    • MoviePass is Deader than Ever as Parent Company Officially Files for Bankruptcy
    • MoviePass Apparently Left 58,000 Customer Records Exposed on a Public Server
    • MoviePass Forces Annual Subscribers to its New Three-Movie Plan Early
    • MoviePass Peak Pricing Will Charge You Whatever It Wants


Original Submission

posted by hubie on Sunday January 12, @02:23PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The explosive growth of datacenters that followed ChatGPT's debut in 2022 has shone a spotlight on the environmental impact of these power-hungry facilities.

But it's not just power we have to worry about. These facilities are capable of sucking down prodigious quantities of water.

In the US, datacenters can consume anywhere between 300,000 and four million gallons of water a day to keep the compute housed within them cool, Austin Shelnutt of Texas-based Strategic Thermal Labs explained in a presentation at SC24 in Atlanta this fall.

We'll get to why some datacenters use more water than others in a bit, but in some regions rates of consumption are as high as 25 percent of the municipality's water supply.

This level of water consumption, understandably, has led to concerns over water scarcity and desertification, problems that were already acute due to climate change and have only been exacerbated by the proliferation of generative AI. Today, the AI datacenters built to train these models often require tens of thousands of GPUs, each capable of drawing 1,200 watts of power and giving nearly all of it off as heat.
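
A back-of-the-envelope estimate (added here, not from the article) shows why figures like those above are plausible: evaporating a litre of water absorbs roughly 2.26 megajoules, so a cluster of the size just described sheds hundreds of thousands of litres a day if all of its heat is rejected evaporatively.

    # Rough estimate, assuming essentially all electrical power becomes heat and
    # all of that heat is rejected by evaporation (real facilities differ).
    LATENT_HEAT_J_PER_LITRE = 2.26e6    # ~2.26 MJ evaporates one litre of water

    gpus = 10_000
    watts_per_gpu = 1_200               # figure quoted in the article
    heat_watts = gpus * watts_per_gpu   # 12 MW of heat to remove

    litres_per_day = heat_watts / LATENT_HEAT_J_PER_LITRE * 86_400
    print(round(litres_per_day), "litres/day")   # ~459,000 L, roughly 121,000 US gallons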

However, over the next few years, hyperscalers, cloud providers, and model builders plan to deploy millions of GPUs and other AI accelerators requiring gigawatts of energy, and that means even higher rates of water consumption.

[...] One of the reasons that datacenter operators have gravitated toward evaporative coolers is because they're so cheap to operate compared to alternative technologies.

[...] In terms of energy consumption, this makes an evaporatively cooled datacenter far more energy efficient than one that doesn't consume water, and that translates to a lower operating cost.

[...] "You have to understand water is a scarce resource. Everybody has to start at that base point," he explained. "You have to be good stewards of that resource just to ensure that you're utilizing it effectively."

[...] While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption.

According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process.

[...] Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption.

[...] In locations where free cooling and heat reuse aren't practical, shifting to direct-to-chip and immersion liquid cooling (DLC) for AI clusters, which, by the way, is a closed loop that doesn't really consume water, can facilitate the use of dry coolers. While dry coolers are still more energy-intensive than evaporative coolers, the substantially lower and therefore better power use effectiveness (PUE) of liquid cooling could make up the difference.

[...] While datacenter water consumption remains a topic of concern, particularly in drought-prone areas, Shelnutt argues the bigger issue is where the water used by these facilities is coming from.

"Planet Earth has no shortage of water. What planet Earth has a shortage of, in some cases, is regional drinkable water, and there is a water distribution scarcity issue in certain parts of the world," he said.

To address these concerns, Shelnutt suggests datacenter operators should be investing in desalination plants, water distribution networks, on-premises wastewater treatment facilities, and non-potable storage to support broader adoption of evaporative coolers.

While the idea of first desalinating and then shipping water by pipeline or train might sound cost-prohibitive, many hyperscalers have already committed hundreds of millions of dollars to securing onsite nuclear power over the next few years. As such, investing in water desalination and transportation may not be so far fetched.

More importantly, Shelnutt claims that desalinating and shipping water from the coasts is still more efficient than using dry coolers or refrigerant-based cooling tech.


Original Submission

posted by hubie on Sunday January 12, @09:38AM   Printer-friendly

http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/

The phone rang through to my workroom. It was one of the school receptionists explaining that there was a visitor downstairs that needed to get on the school's WiFi network. iPad in hand I trotted on down to the reception to see a young twenty-something sitting on a chair with a MacBook on her knee.

I smiled and introduced myself as I sat down beside her. She handed me her MacBook silently and the look on her face said it all. Fix my computer, geek, and hurry up about it. I've been mistaken for a technician enough times to recognise the expression.

'I'll need to be quick. I've got a lesson to teach in 5 minutes,' I said. 'You teach?'

'That's my job, I just happen to manage the network team as well.'

She reevaluated her categorisation of me. Rather than being some faceless, keyboard tapping, socially inept, sexually inexperienced network monkey, she now saw me as a colleague. To people like her, technicians are a necessary annoyance. She'd be quite happy to ignore them all, joke about them behind their backs and snigger at them to their faces, but she knows that when she can't display her PowerPoint on the IWB she'll need a technician, and so she maintains a facade of politeness around them, while inwardly dismissing them as too geeky to interact with.

[Ed. note: Now that we're 10+ years on from this story where the "kids" in the article are now working professionals, how do you think this has stood up? I have a friend that teaches 101-level programming and he says even the concept of files and directories are foreign and confusing to students because apps just save files somewhere and pulls them when they need them. --hubie]


Original Submission

posted by janrinok on Sunday January 12, @04:52AM   Printer-friendly

[Source]: FUTURISM

Engineer Creates OpenAI-Powered Robotic Sentry Rifle - "This is Skynet build version 0.0.420.69."

An engineer who goes by the online handle STS 3D has invented an AI-powered robot that can aim a rifle and shoot targets at terrifying speeds.

As demonstrated in a video that's been making its rounds on social media, he even hooked the automated rifle up to OpenAI's ChatGPT, allowing it to respond to voice queries — a striking demonstration of how even consumer-grade AI technology can easily be leveraged for violent purposes.

"ChatGPT, we're under attack from the front left and front right," the inventor says nonchalantly in a clip, while standing next to the washing machine-sized assembly hooked up to a rifle. "Respond accordingly."

The robot jumps into action almost immediately, shooting what appear to be blanks to its left and right.

"If you need any further assistance, just let me know," an unnervingly cheerful robotic voice told the inventor.


Original Submission