The Federal Trade Commission today, along with the Illinois and Minnesota Attorneys General, sued agricultural equipment manufacturer Deere & Company (Deere) over its use of unfair practices that have driven up equipment repair costs for farmers while also depriving farmers of the ability to make timely repairs on critical farming equipment, including tractors.
The FTC's complaint alleges that, for decades, Deere's unlawful practices have limited the ability of farmers and independent repair providers to repair Deere equipment, forcing farmers to instead rely on Deere's network of authorized dealers for necessary repairs. This unfair steering practice has boosted Deere's multi-billion-dollar profits on agricultural equipment and parts, growing its repair parts business while burdening farmers with higher repair costs, the FTC's complaint alleges.
Note: Not directly computer related, but a win, any win, against this kind of "no repairs for you" mentality may well have a trickle-down effect on other businesses with the same attitude.
The European Parliament's petition service is hosting Petition No 0729/2024, which concerns the implementation of an EU-Linux operating system in public administrations across all EU countries.
[Editor's Note: The link works in some browsers but not in others.]
The petitioner calls for the European Union to actively develop and implement a Linux-based operating system, termed 'EU-Linux', across public administrations in all EU Member States. This initiative aims to reduce dependency on Microsoft products, ensure compliance with the General Data Protection Regulation (GDPR), and promote transparency, sustainability, and digital sovereignty within the EU. The petitioner emphasizes the importance of using open-source alternatives to Microsoft 365, such as LibreOffice and Nextcloud, and suggests the adoption of the /e/OS mobile operating system for government devices. The petitioner also highlights the potential for job creation in the IT sector through this initiative.
What do soylentils see as the advantages or disadvantages of Yet Another Distro? Would the EU be better off throwing its weight behind further development of an existing independent distro or two? Which national or regional initiatives already exist?
Previously:
(2023) Open Source Bodies Say to EU that Cyber Resilience Act Could Have 'Chilling Effect' on Software
https://www.ganssle.com/debouncing.htm
The beer warms a bit as you pound the remote control. Again and again, temper fraying, you click the "channel up" key until the TV finally rewards your efforts. But it turns out channel 345 is playing Jeopardy so you again wave the remote in the general direction of the set and continue fiddling with the buttons.
Some remotes work astonishingly well, even when you bounce the beam off three walls before it impinges on the TV's IR detector. Others don't. One vendor told me reliability simply isn't important as users will subconsciously hit the button again and again till the channel changes.
When a single remote press causes the tube to jump two channels, we developers know lousy debounce code is at fault. The FM radio on my sailboat has a tuning button that advances too far when I hit it hard. The usual suspect: bounce.
When the contacts of any mechanical switch bang together they rebound a bit before settling, causing bounce. Debouncing, of course, is the process of removing the bounces, of converting the brutish realities of the analog world into pristine ones and zeros. Both hardware and software solutions exist, though by far the most common are those done in a snippet of code.
Surf the net to sample various approaches to debouncing. Most are pretty lame. Few are based on experimental bounce parameters. A medley of anecdotal tales passed around the newsgroups substitutes for empirical evidence.
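For the curious, here is a minimal counter-based debouncer in C of the general sort the article discusses. This sketch is our illustration, not Ganssle's code: the 1 ms sampling interval, the 20 ms settle time, and the read_raw_button() helper are all assumptions, and the settle time should really come from measuring your switch.

    #include <stdbool.h>
    #include <stdint.h>

    #define STABLE_MS 20u  /* assumed settle time; measure your own switch */

    bool read_raw_button(void);  /* hypothetical: returns one raw, bouncy sample */

    /* Call every 1 ms from a timer tick. The debounced state changes only
       after the raw input has held a new level for STABLE_MS samples. */
    bool debounced_button(void)
    {
        static bool stable_state = false;
        static uint8_t count = 0;

        bool raw = read_raw_button();
        if (raw == stable_state) {
            count = 0;                  /* no change pending */
        } else if (++count >= STABLE_MS) {
            stable_state = raw;         /* new level held long enough */
            count = 0;
        }
        return stable_state;
    }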
Developer and reverse engineer Scott Percival took a long look at a bug in the Oregon Trail game's river crossings.
If you're into retro computing, you probably know about Oregon Trail: a simulation of the hardships faced by a group of colonists in 1848 as they travel by covered wagon from Independence, Missouri, to the Willamette Valley in Oregon. The game was wildly successful in the US education market, with the various editions selling 65 million copies. What you probably don't know is the game's great untold secret.
Two years ago, Twitch streamer albrot discovered a bug in the code for crossing rivers. One of the options is to "wait to see if conditions improve"; waiting a day will consume food but not recalculate any health conditions, granting your party immortality.
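To make the reported behavior concrete, here is a C sketch of the logic as described. This is purely our reconstruction, not MECC's actual code; every function name here is hypothetical.

    void consume_food(void);   /* hypothetical helper */
    void update_health(void);  /* hypothetical helper */

    /* One simulated day. The bug as reported: the "wait to see if
       conditions improve" option advances the day and consumes food,
       but skips the health recalculation. */
    void advance_day(int waiting_at_river)
    {
        consume_food();
        if (!waiting_at_river) {
            update_health();   /* disease, starvation, recovery checks */
        }
        /* While waiting, party health is frozen, so a dying party can
           wait out the river forever: effective immortality. */
    }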
Whether the game depicts an adventure or an invasion depends on perspective. The original Oregon Trail video game from the Minnesota Educational Computing Consortium (MECC) for the Apple II series took on a life of its own and grew and changed over several decades.
Previously:
(2024) Apple is Turning The Oregon Trail into a Movie
(2016) "You have died of dysentery" -- The Oregon Trail in Computer Class
Kicking the year 2025 off with some predictions. I guess we can return to this in December to see how far they have progressed into fantasy land.
https://www.technologyreview.com/2025/01/03/1109178/10-breakthrough-technologies-2025/
01. Vera C. Rubin Observatory in Chile
02. Generative AI search
03. Small Language Models
04. Cattle burping remedies
05. Robotaxis
06. Cleaner jet fuel
07. Fast-learning robots
08. Long-acting HIV prevention meds
09. Green steel
10. Stem-cell therapies that work
Then they add some potential runners-up such as brain-computer interfaces, methane-detecting satellites, hyperrealistic deepfakes and continuous glucose monitors.
https://technologymagazine.com/articles/top-10-trends-of-2025
01. Agentic AI
02. AI governance platforms
03. Disinformation security
04. Postquantum cryptography
05. Ambient invisible intelligence
06. Energy-efficient computing
07. Hybrid computing
08. Spatial computing
09. Polyfunctional robots
10. Neurological enhancement
We are already post-quantum? I wasn't aware that we even had any meaningful utilization of actual working quantum cryptography. Is this the Quantum Leap?
Also, I can't help but notice that there seem to be a lot of AI fantasies involved in the predictions for the coming months.
Do you care to make any 2025 predictions of the next big thing?
Arthur T Knackerbracket has processed the following story:
Our findings were based on a survey of 779 U.S. teachers conducted in May 2022, along with subsequent focus groups that took place in the fall of that year. Our study was peer-reviewed and published in April 2024.
During the COVID-19 pandemic, when schools across the country were under lockdown orders, schools adopted new technologies to facilitate remote learning during the crisis. These technologies included learning management systems, which are online platforms that help educators organize and keep track of their coursework.
We were puzzled to find that teachers who used a learning management system such as Canvas or Schoology reported higher levels of burnout. Ideally, these tools should have simplified their jobs. We also thought these systems would improve teachers’ ability to organize documents and assignments, mainly because they would house everything digitally, and thus, reduce the need to print documents or bring piles of student work home to grade.
But in the follow-up focus groups we conducted, the data told a different story. Instead of being used to replace old ways of completing tasks, the learning management systems were simply another thing on teachers’ plates.
A telling example was seen in lesson planning. Before the pandemic, teachers typically submitted hard copies of lesson plans to administrators. However, once school systems introduced learning management systems, some teachers were expected to not only continue submitting paper plans but to also upload digital versions to the learning management system using a completely different format.
Asking teachers to adopt new tools without removing old requirements is a recipe for burnout.
[...] If new technology is being adopted to help teachers do their jobs, then school leaders need to make sure it will not add extra work for them. If it adds to or increases teachers’ workloads, then adding technology increases the likelihood that a teacher will burn out. This likely compels more teachers to leave the field.
Schools that implement new technologies should make sure that they are streamlining the job of being a teacher by offsetting other tasks, and not simply adding more work to their load.
The broader lesson from this study is that teacher well-being should be a primary focus with the implementation of schoolwide changes.
Arthur T Knackerbracket has processed the following story:
Former crypto miner James Howells admits he is 'very upset' at the ruling.
The legal arguments over $750M worth of Bitcoin buried in a Welsh dump have ended unhappily for a man who lost his crypto HDD in the trash 12 years ago. On Thursday, Judge Keyser KC of the British High Court ruled James Howells's case had no reasonable chance of success at a trial. Therefore, the court sided with the council and struck out Mr Howells's legal action, in which he had hoped to gain legal access to the dump for excavation or get £495M ($604M) in compensation from the council.
We last wrote about Mr Howells's trials and tribulations in October last year, when he, backed by a consortium, decided to sue the local council "because they won't give me back my bin (trash) bag." At that time, the lost 8,000 Bitcoins were valued at $538M; today, they would be worth over $750M.
Howells' unfortunate predicament began in August 2013, when he discovered his girlfriend had taken his old laptop hard drive, which contained a wallet with Bitcoins he had mined back in 2009, to the council dump. However, Howells admits he put the device in the trash after clearing some old office bits and pieces; an excerpt from the ruling sets out precisely what happened.
There are two major legal problems concerning this treasure in the trash. First, under UK law, anything you throw in the garbage to be collected by the council becomes the council's legal property. Second, Howells' case falls foul of the UK's six-year statute of limitations. Although the lost Bitcoins were known about in 2013, Howells only decided to sue the council in 2024.
The BBC shared some post-judgment comments from Howells in a report yesterday. In them, he admitted he was "very upset" about the decision. His statements didn't address that the council now owns the HDD/data. However, he had some interesting arguments to counter the six-year statute of limitations mentioned by the judge.
Howells told the BBC that he had been "trying to engage with Newport City Council in every way which is humanly possible for the past 12 years." This could reasonably explain the delay in legal action. He also suggested that if he had made it to trial, "there was so much more that could have been explained" and that it would have made a difference in the legal decision.
A distraught Howells repeated his offer to share the $750M crypto treasure with the council and donate 10% to the local community.
Previous: UK Man Sues City Over Discarded Bitcoin-filled Hard Drive
https://phys.org/news/2025-01-paleolithic-ingenuity-year-3d-france.html
Researchers have discovered what may be the world's oldest three-dimensional map, located within a quartzitic sandstone megaclast in the Paris Basin. The research is published in the Oxford Journal of Archaeology.
The Ségognole 3 rock shelter, known since the 1980s for its artistic engravings of two horses in a Late Paleolithic style on either side of a female pubic figuration, has now been revealed to contain a miniature representation of the surrounding landscape.
Dr. Anthony Milnes from the University of Adelaide's School of Physics, Chemistry and Earth Sciences, participated in the research led by Dr. Médard Thiry from the Mines Paris—PSL Center of Geosciences.
Dr. Thiry's earlier research, following his first visit to the site in 2017, established that Paleolithic people had "worked" the sandstone in a way that mirrored the female form, and opened fractures so that infiltrating water could pass through the sandstone and nourish an outflow at the base of the pelvic triangle.
New research suggests that part of the floor of the sandstone shelter which was shaped and adapted by Paleolithic people around 13,000 years ago was modeled to reflect the region's natural water flows and geomorphological features.
"What we've described is not a map as we understand it today—with distances, directions, and travel times—but rather a three-dimensional miniature depicting the functioning of a landscape, with runoff from highlands into streams and rivers, the convergence of valleys, and the downstream formation of lakes and swamps," Dr. Milnes explains.
"For Paleolithic peoples, the direction of water flows and the recognition of landscape features were likely more important than modern concepts like distance and time.
"Our study demonstrates that human modifications to the hydraulic behavior in and around the shelter extended to modeling natural water flows in the landscape in the region around the rock shelter. These are exceptional findings and clearly show the mental capacity, imagination and engineering capability of our distant ancestors."
You may have heard about Teslas equipped with what is styled "Full Self-Driving" capability bricking – that is, going inert – as a result of a computer failure. "Tesla drivers are reporting computer failures after driving off with their brand-new cars over just the first few tens to hundreds of miles," says the web site Electrek, which covers EVs and EV-related issues. "Wide-ranging features powered by the computer, like active safety features, cameras, and even GPS, navigation, and range estimations, fail to work":
Are these Teslas safe to drive if their safety features aren't working? They are certainly risky to drive, if their range estimation systems aren't working – because you might not make it where you were headed. You might end up bricked – by the side of the road – and it's no easy thing to walk down the road to the closest "fast" charger for a jerry can of kilowatt-hours.
[...] But that's not the really interesting thing – about bricking Teslas. More finely, about Teslas that brick because they're working properly. More finely than that, Tesla can brick its cars anytime it likes.
[...] Legally, the person whose name is on the title is the "owner" of the device. But is he, really, given that what he considers to be "his" device can be controlled remotely at any time by Tesla? The fact that Tesla doesn't generally exert this control is immaterial.
What is material is the fact that Tesla could.
An example of this was made public a couple of years ago, when Tesla transmitted an update to its devices that were "owned" – so to speak – by people living in the path of a hurricane that was coming. Tesla very nicely increased the range of these devices, so as to allow the "owners" to have a better chance of driving far enough away to escape the hurricane. But Tesla could just as easily decide to be not-so-nice and send an update to reduce the range or not allow the device to be driven, at all. This is a fact, in terms of what's possible. That it is not yet actual is merely a kind of privilege or sufferance that can be revoked at will.
[...] That thing being you are not really in control of the device, except to the extent that Tesla allows. Tesla also knows exactly how you use its device, too. And where and when. It's not just Teslas, either. It's all new vehicles – which might as well be devices.
Rational or Not? This Basic Math Question Took Decades to Answer:
In June 1978, the organizers of a large mathematics conference in Marseille, France, announced a last-minute addition to the program. During the lunch hour, the mathematician Roger Apéry would present a proof that one of the most famous numbers in mathematics — "zeta of 3," or ζ(3), as mathematicians write it — could not be expressed as a fraction of two whole numbers. It was what mathematicians call "irrational."
Conference attendees were skeptical. The Riemann zeta function is one of the most central functions in number theory, and mathematicians had been trying for centuries to prove the irrationality of ζ(3) — the number that the zeta function outputs when its input is 3. Apéry, who was 61, was not widely viewed as a top mathematician. He had the French equivalent of a hillbilly accent and a reputation as a provocateur. Many attendees, assuming Apéry was pulling an elaborate hoax, arrived ready to pay the prankster back in his own coin. As one mathematician later recounted, they "came to cause a ruckus."
The lecture quickly descended into pandemonium. With little explanation, Apéry presented equation after equation, some involving impossible operations like dividing by zero. When asked where his formulas came from, he claimed, "They grow in my garden." Mathematicians greeted his assertions with hoots of laughter, called out to friends across the room, and threw paper airplanes.
But at least one person — Henri Cohen, now at the University of Bordeaux — emerged from the talk convinced that Apéry was correct. Cohen immediately began to flesh out the details of Apéry's argument; within a couple of months, together with a handful of other mathematicians, he had completed the proof. When he presented their conclusions at a later conference, a listener grumbled, "A victory for the French peasant."
Once mathematicians had, however reluctantly, accepted Apéry's proof, many anticipated a flood of further irrationality results. Irrational numbers vastly outnumber rational ones: If you pick a point along the number line at random, it's almost guaranteed to be irrational. Even though the numbers that feature in mathematics research are, by definition, not random, mathematicians believe most of them should be irrational too. But while mathematicians have succeeded in showing this basic fact for some numbers, such as π and e, for most other numbers it remains frustratingly hard to prove. Apéry's technique, mathematicians hoped, might finally let them make headway, starting with values of the zeta function other than ζ(3).
"Everyone believed that it [was] just a question of one or two years to prove that every zeta value is irrational," said Wadim Zudilin of Radboud University in the Netherlands.
But the predicted flood failed to materialize. No one really understood where Apéry's formulas had come from, and when "you have a proof that's so alien, it's not always so easy to generalize, to repeat the magic," said Frank Calegari of the University of Chicago. Mathematicians came to regard Apéry's proof as an isolated miracle.
But now, Calegari and two other mathematicians — Vesselin Dimitrov of the California Institute of Technology and Yunqing Tang of the University of California, Berkeley — have shown how to broaden Apéry's approach into a much more powerful method for proving that numbers are irrational. In doing so, they have established the irrationality of an infinite collection of zeta-like values.
Jean-Benoît Bost of Paris-Saclay University called their finding "a clear breakthrough in number theory."
Mathematicians are enthused not just by the result but also by the researchers' approach, which they used in 2021 to settle a 50-year-old conjecture about important equations in number theory called modular forms. "Maybe now we have enough tools to push this kind of subject way further than was thought possible," said François Charles of the École Normale Supérieure in Paris. "It's a very exciting time."
Whereas Apéry's proof seemed to come out of nowhere — one mathematician described it as "a mixture of miracles and mysteries" — the new paper fits his method into an expansive framework. This added clarity raises the hope that Calegari, Dimitrov and Tang's advances will be easier to build on than Apéry's were.
"Hopefully," said Daniel Litt of the University of Toronto, "we'll see a gold rush of related irrationality proofs soon."
A Proof That Euler Missed
Since the earliest eras of mathematical discovery, people have been asking which numbers are rational. Two and a half millennia ago, the Pythagoreans held as a core belief that every number is the ratio of two whole numbers. They were shocked when a member of their school proved that the square root of 2 is not. Legend has it that as punishment, the offender was drowned.
The square root of 2 was just the start. Special numbers come pouring out of all areas of mathematical inquiry. Some, such as π, crop up when you calculate areas and volumes. Others are connected to particular functions — e, for instance, is the base of the natural logarithm. "It's a challenge: You give yourself a number which occurs naturally in math, [and] you wonder whether it's rational," Cohen said. "If it's rational, then it's not a very interesting number."
Many mathematicians take an Occam's-razor point of view: Unless there's a compelling reason why a number should be rational, it probably is not. After all, mathematicians have long known that most numbers are irrational.
Yet over the centuries, proofs of the irrationality of specific numbers have been rare. In the 1700s, the mathematical giant Leonhard Euler proved that e is irrational, and another mathematician, Johann Lambert, proved the same for π. Euler also showed that all even zeta values — the numbers ζ(2), ζ(4), ζ(6) and so on — equal some rational number times a power of π, the first step toward proving their irrationality. The proof was finally completed in the late 1800s.
But the status of many other simple numbers, such as π + e or ζ(5), remains a mystery, even now.
It might seem surprising that mathematicians are still grappling with such a basic question about numbers. But even though rationality is an elementary concept, researchers have few tools for proving that a given number is irrational. And frequently, those tools fail.
When mathematicians do succeed in proving a number's irrationality, the core of their proof usually relies on one basic property of rational numbers: They don't like to come near each other. For example, say you choose two fractions, one with a denominator of 7, the other with a denominator of 100. To measure the distance between them (by subtracting the smaller fraction from the larger one), you have to rewrite your fractions so that they have the same denominator. In this case, the common denominator is 700. So no matter which two fractions you start with, the distance between them is some whole number divided by 700 — meaning that at the very least, the fractions must be 1/700 apart. If you want fractions that are even closer together than 1/700, you'll have to increase one of the two original denominators.
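In symbols (our arithmetic, spelling out the article's point): for whole numbers a and b, if the two fractions differ at all, then

$latex \left| \frac{a}{7} - \frac{b}{100} \right| = \frac{|100a - 7b|}{700} \geq \frac{1}{700}$,

since the numerator |100a − 7b| is a nonzero whole number and thus at least 1.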
Flip this reasoning around, and it turns into a criterion for proving irrationality. Suppose you have a number k, and you want to figure out whether it's rational. Maybe you notice that the distance between k and 4/7 is less than 1/700. That means k cannot have a denominator of 100 or less. Next, maybe you find a new fraction that allows you to rule out the possibility that k has a denominator of 1,000 or less — and then another fraction that rules out a denominator of 10,000 or less, and so on. If you can construct an infinite sequence of fractions that gradually rules out every possible denominator for k, then k cannot be rational.
Nearly every irrationality proof follows these lines. But you can't just take any sequence of fractions that approaches k — you need fractions that approach k quickly compared to their denominators. This guarantees that the denominators they rule out keep growing larger. If your sequence doesn't approach k quickly enough, you'll only be able to rule out denominators up to a certain point, rather than all possible denominators.
There's no general recipe for constructing a suitable sequence of fractions. Sometimes, a good sequence will fall into your lap. For example, the number e (approximately 2.71828) is equivalent to the following infinite sum:
$latex \frac{1}{1} + \frac{1}{1} + \frac{1}{2 \times 1} + \frac{1}{3 \times 2 \times 1} + \frac{1}{4 \times 3 \times 2 \times 1} + \cdots$.
If you halt this sum at any finite point and add up the terms, you get a fraction. And it takes little more than high school math to show that this sequence of fractions approaches e quickly enough to rule out all possible denominators.
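Quantitatively, a standard bound (ours, not quoted from the article): cutting the sum after the term with denominator n! gives a fraction s with denominator n!, and

$latex 0 < e - s < \frac{1}{n \cdot n!}$,

so the error shrinks far faster than the reciprocal of the denominator, which is exactly the speed the criterion above demands.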
But this trick doesn't always work. For instance, Apéry's irrational number, ζ(3), is defined as this infinite sum:
$latex \frac{1}{1^3} + \frac{1}{2^3} + \frac{1}{3^3} + \frac{1}{4^3} + \cdots$.
If you halt this sum at each finite step and add the terms, the resulting fractions don't approach ζ(3) quickly enough to rule out every possible denominator for ζ(3). There's a chance that ζ(3) might be a rational number with a larger denominator than the ones you've ruled out.
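Roughly speaking (our estimate, to show why): the tail of the series past the n-th term is about

$latex \sum_{k > n} \frac{1}{k^3} \approx \frac{1}{2n^2}$,

while the common denominator of the first n terms can grow as fast as the cube of the least common multiple of 1 through n, which is exponential in n. The error shrinks only polynomially as the denominators explode, far short of the precision per denominator that an irrationality proof requires.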
Apéry's stroke of genius was to construct a different sequence of fractions that do approach ζ(3) quickly enough to rule out every denominator. His construction used mathematics that dated back centuries — one article called it "a proof that Euler missed." But even after mathematicians came to understand his method, they were unable to extend his success to other numbers of interest.
Like every irrationality proof, Apéry's result instantly implied that a bunch of other numbers were also irrational — for example, ζ(3) + 3, or 4 × ζ(3). But mathematicians can't get too excited about such freebies. What they really want is to prove that "important" numbers are irrational — numbers that "show up in one formula, [then] another one, also in different parts of mathematics," Zudilin said.
Few numbers meet this standard more thoroughly than the values of the Riemann zeta function and the allied functions known as L-functions. The Riemann zeta function, ζ(x), transforms a number x into this infinite sum:
$latex \frac{1}{1^x} + \frac{1}{2^x} + \frac{1}{3^x} + \frac{1}{4^x} + \cdots$.
ζ(3), for instance, is the infinite sum you get when you plug in x = 3. The zeta function has long been known to govern the distribution of prime numbers. Meanwhile, L-functions — which are like the zeta function but have varying numerators — govern the distribution of primes in more complicated number systems. Over the past 50 years, L-functions have risen to special prominence in number theory because of their key role in the Langlands program, an ambitious effort to construct a "grand unified theory" of mathematics. But they also crop up in completely different areas of mathematics. For example, take the L-function whose numerators follow the pattern 1, −1, 0, 1, −1, 0, repeating. You get:
$latex \frac{1}{1^x} + \frac{-1}{2^x} + \frac{0}{3^x} + \frac{1}{4^x} + \frac{-1}{5^x} + \frac{0}{6^x} + \cdots$.
In addition to its role in number theory, this function, which we'll call L(x), makes unexpected cameos in geometry. For example, if you multiply L(2) by a simple factor, you get the volume of the largest regular tetrahedron with "hyperbolic" geometry, the curved geometry of saddle shapes.
Mathematicians have been mulling over L(2) for at least two centuries. Over the years, they have come up with seven or eight different ways to approximate it with sequences of rational numbers. But none of these sequences approach it quickly enough to prove it irrational.
Researchers seemed to be at an impasse — until Calegari, Dimitrov and Tang decided to make it the centerpiece of their new approach to irrationality.
A Proof That Riemann Missed
In an irrationality proof, you want your sequence of fractions to rule out ever-larger denominators. Mathematicians have a well-loved strategy for understanding such a sequence: They'll package it into a function. By studying the function, they gain access to an arsenal of tools, including all the techniques of calculus.
In this case, mathematicians construct a "power series" — a mathematical expression with infinitely many terms, such as 3 + 2x + 7x² + 4x³ + ... — where you determine each coefficient by combining the number you're studying with one fraction in the sequence, according to a particular formula. The first coefficient ends up capturing the size of the denominators ruled out by the first fraction; the second coefficient captures the size of the denominators ruled out by the second fraction; and so on.
Roughly speaking, the coefficients and the ruled-out denominators have an inverse relationship, meaning that your goal — proving that the ruled-out denominators approach infinity — is equivalent to showing that the coefficients approach zero.
The advantage of this repackaging is that you can then try to control the coefficients using properties of the power series as a whole. In this case, you want to study which x-values make the power series "blow up" to infinity. The terms in the power series involve increasingly high powers of x, so unless they are paired with extremely small coefficients, large x-values will make the power series blow up. As a result, if you can show that the power series does not blow up, even for large values of x, that tells you that the coefficients do indeed shrink to zero, just as you want.
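In standard terms (our gloss): if a power series with coefficients a₀, a₁, a₂, ... converges everywhere inside a disk of radius R, then

$latex \limsup_{n \to \infty} |a_n|^{1/n} \leq \frac{1}{R}$,

so whenever R is larger than 1, the coefficients are eventually smaller than a geometric sequence with ratio less than 1 and must shrink to zero.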
To bring an especially rich set of tools to bear on this question, mathematicians consider "complex" values for x. Complex numbers combine a real part and an imaginary part, and can be represented as points in a two-dimensional plane.
Imagine starting at the number zero in the complex number plane and inflating a disk until you bump into the first complex number that makes your power series explode to infinity — what mathematicians call a singularity. If the radius of this disk is large enough, you can deduce that the coefficients of the power series shrink to zero fast enough to imply that your number is irrational.
Apéry's proof and many other irrationality results can be rephrased in these terms, even though that's not how they were originally written. But when it comes to L(2), the disk is too small. For this number, mathematicians viewed the power series approach as a dead end.
But Calegari, Dimitrov and Tang saw a potential way through. A singularity doesn't always represent a final stopping point — that depends on what things look like when you hit the singularity. Sometimes the boundary of the disk hits a mass of singularities. If this happens, you're out of luck. But other times, there might be just a few isolated singularities on the boundary. In those cases, you might be able to inflate your disk into a bigger region in the complex plane, steering clear of the singularities.
That's what Calegari, Dimitrov and Tang hoped to do. Perhaps, they thought, the extra information contained in this larger region might enable them to get the control they needed over the power series' coefficients. Some power series, Calegari said, can have a "wonderful life outside the disk."
Over the course of four years, Calegari, Dimitrov and Tang figured out how to use this approach to prove that L(2) is irrational. "They developed a completely new criterion for deciding whether a given number is irrational," Zudilin said. "It's truly amazing."
As with Apéry's proof, the new method is a throwback to an earlier era, relying heavily on generalizations of calculus from the 1800s. Bost even called the new work "a proof that Riemann missed," referring to Bernhard Riemann, one of the towering figures of 19th-century mathematics, after whom the Riemann zeta function is named.
The new proof doesn't stop with L(2). We construct that number by replacing the 1s in the numerators of ζ(2) with a pattern of three repeating numbers: 1, −1, 0, 1, −1, 0 and so on. You can make an infinite collection of other ζ(2) variants with three repeating numerators — for instance, the repeating pattern 1, 4, 10, 1, 4, 10 ..., which produces the infinite sum
$latex \frac{1}{1^2} + \frac{4}{2^2} + \frac{10}{3^2} + \frac{1}{4^2} + \frac{4}{5^2} + \frac{10}{6^2} + \cdots$.
Every such sum, the researchers proved, is also irrational (provided it doesn't add up to zero). They also used their method to prove the irrationality of a completely different set of numbers made from products of logarithms. Such numbers were previously "completely out of reach," Bost said.
Arthur T Knackerbracket has processed the following story:
At an extremely remote Antarctic outpost, scientists have unearthed a pristine sample of our planet's history.
It's an ice core 2,800 meters, or some 1.7 miles, long. But it's not just the length that's so significant. The ice contains preserved pockets of Earth's air from some 1.2 million years ago, if not more. Previous ice cores provided direct evidence of our planet's climate and environment from up to 800,000 years ago.
So, this is a giant leap. The team drilled so deep they reached the continent's bedrock.
"We have marked a historic moment for climate and environmental science," Carlo Barbante, a polar scientist and coordinator of the ice core campaign called "Beyond EPICA - Oldest Ice," said in a statement.
An international group of researchers excavated the ice at Little Dome C Field Camp in Antarctica, located 10,607 feet (3,233 meters) above sea level. They beamed radar down into the subsurface and used computer modeling of the ice flow to determine where this ancient ice was likely to be. And they were right.
This was no easy feat. Atop the Antarctic plateau, summers average minus-35 degrees Celsius, or minus-31 degrees Fahrenheit.
Although paleoclimatologists, who research Earth's past climate, have reliable methods of indirectly gauging our planet's deep past — with proxies such as fossilized shells and compounds produced by algae — direct evidence, via direct air, is scientifically invaluable. For example, past ice cores have revealed that the heat-trapping carbon dioxide levels in Earth's atmosphere today have skyrocketed — they're the highest they've been in some 800,000 years. It's incontrovertible evidence of Earth's past.
Scientists expect this even older ice core, however, will reveal secrets about a period called the Mid-Pleistocene Transition, spanning roughly 1.2 million to 900,000 years ago. Mysteriously, the glacial cycles — wherein ice sheets expanded over much of the continents and then retreated — lengthened markedly, from 41,000 years to 100,000 years.
"The reasons behind this shift remain one of climate science's enduring mysteries, which this project aims to unravel," the drilling campaign, which was coordinated by the Institute of Polar Sciences of the National Research Council of Italy, said in a statement.
Now, the drilling is over. But the campaign to safely transport the ice back to laboratories, and then scrutinize this over-million-year-old atmosphere, has begun.
"The precious ice cores extracted during this campaign will be transported back to Europe on board the icebreaker Laura Bassi, maintaining the minus-50 degrees Celsius cold chain, a significant challenge for the logistics of the project," explained Gianluca Bianchi Fasani, the head of ENEA (National Agency for New Technologies, Energy, and Sustainable Economic Development) logistics for the Beyond EPICA expedition.
These historic ice cores will travel in "specialized cold containers" as they ship across the globe, far from the depths of their Antarctic home.
Arthur T Knackerbracket has processed the following story:
TSMC has started producing chips at its Fab 21 near Phoenix, Arizona, using its 4nm-class process technology, Commerce Secretary Gina Raimondo told Reuters. This marks the first time chips have been manufactured on such a cutting-edge production node in the United States. The confirmation from a high-ranking official comes months after the first unofficial reports emerged that the fab was mass-producing chips for Apple.
"For the first time ever in our country's history, we are making leading-edge 4nm chips on American soil, American workers — on par in yield and quality with Taiwan," Raimondo told Reuters.
According to unofficial information, TSMC's Fab 21 in Arizona is manufacturing at least three processor models: the A16 Bionic system-on-chip used in Apple's iPhone 15 and iPhone 15 Plus; the main processor of Apple's S9 system-in-package for smartwatches, which has two 64-bit cores and a quad-core neural engine; and an AMD Ryzen 9000-series CPU. These chips are produced on TSMC's 4nm-class—N4 and N4P—process technologies.
The TSMC Arizona project is instrumental to the U.S. goal of producing 20% of the world's most advanced logic chips by 2030, which the Biden administration set a few years ago before enacting the CHIPS and Science Act. TSMC's Fab 21 in Arizona produces chips for American companies in volumes (it is rumored that currently, the facility's production capacity is around 10,000 wafer starts per month), clear evidence that the initiative works.
Under the CHIPS and Science Act, the U.S. Commerce Department provided TSMC with $6.6 billion in grants and up to $5 billion in loan guarantees. The Fab 21 site will require about $65 billion in funding for three fab modules, which are set to be constructed and brought online by the end of the decade.
The first Fab 21 phase 1 module will officially start mass production using 4nm and 5nm-class process technologies. The next Fab 21 phase 2 is expected to follow in 2028 with 3nm-class process technologies. By the decade's end, TSMC expects to build its Fab 21 phase 3, which will produce chips on 2nm-class and 1.6nm-class nodes and their variations with backside power delivery.
Arthur T Knackerbracket has processed the following story:
Wastewater treatment facilities are a major source of PFAS contamination in drinking water in the US – they discharge enough of the “forever chemicals” to raise concentrations above safe levels for an estimated 15 million people or more. They can also release long-lasting prescription drugs into the water supply.
Even though these plants clean wastewater, they do not destroy all the contaminants added upstream – and the chemicals that remain behind are released back into the same waterways that supply drinking water. “It’s a funnel into the environment,” says Bridger Ruyle at New York University. “You capture a bunch of things from a bunch of different places, and it’s all released in one place.”
Perfluoroalkyl and polyfluoroalkyl substances (PFAS) are of particular concern because they contain carbon-fluorine bonds, which make them extremely persistent in the environment. Regular exposure to several types of PFAS has been associated with increased risk for many health problems, from liver damage to various forms of cancer. The US Environmental Protection Agency (EPA) recently set strict limits in drinking water for six of the best-studied PFAS.
Wastewater treatment facilities are a known source of PFAS contamination in the sewage sludge they produce as a by-product, which is sometimes used for fertiliser. To find out whether similar contamination remains in the treated water, Ruyle and his colleagues measured the concentration of PFAS and other molecules that contain carbon-fluorine bonds in wastewater at eight large treatment facilities around the US.
Their findings suggest wastewater treatment plants across the US discharge tens of thousands of kilograms of fluorine-containing compounds into the environment each year, including a substantial amount of PFAS. Once treated wastewater is discharged from a facility, it mixes with natural waters in rivers and lakes. “That’s going to create a downstream drinking water problem,” says Ruyle.
[...] “It demonstrates that wastewater treatment plants are really important sources for these compounds,” says Carsten Prasse at Johns Hopkins University in Maryland, who was not involved with the study. There are ways to remove or destroy PFAS in water, and more drinking water facilities are installing such systems, but currently, “our wastewater treatment plants are not set up to deal with this”, he says.
Forever chemicals alone would be a problem, but the researchers also found PFAS made up only a small fraction of the total volume of fluorinated chemicals discharged from the facilities. Most were not PFAS at all, but other compounds used in common pharmaceuticals, such as statins and SSRIs. These pharmaceuticals are also of concern for ecosystems and people.
“Another person could be drinking a cocktail of fluorinated prescription medication,” says Ruyle. However, he says the consequences of long-term exposure to low doses of such compounds aren’t well understood.
“We need to start conversations about whether or not we should be using a lot of fluorine in pharmaceuticals,” says Ruyle. Fluorination is widely used in drugs to enhance their effect in the body, but “preventing widespread chemical contamination should also be important”, he says.
Arthur T Knackerbracket has processed the following story:
“Forever chemicals” are pervasive, and researchers have in recent years been ringing the alarms about the negative impacts on human health. But humans aren’t the only animals to be concerned about.
Freshwater turtles in Australia exposed to per- and polyfluoroalkyl substances, or PFAS, experienced changes to their metabolic functions, environmental biochemist David Beale and colleagues report in the Dec. 15 Science of the Total Environment. “We found a whole range of biomarkers that are indicative of cancer and other health problems within reptiles,” says Beale, of the Commonwealth Scientific and Industrial Research Organisation in Dutton Park, Australia.
Much of the research on PFAS and health is focused on humans. It’s less clear what the ubiquitous chemicals are doing to other animals. Most of that research has been lab-based, and those data are then used to set acceptable levels of contaminants.
But labs can’t replicate all the complexities of a natural environment, Beale says. “There’s a massive gap in our understanding of what these chemicals do to wildlife, and they’re being equally exposed — if not more exposed — because they can’t get respite.”
Beale and colleagues captured freshwater turtles (Emydura macquarii) from three sites around Queensland: one site with a high level of PFAS, one with a moderate amount and one with barely discernible levels, all with no other contaminants. In a lab, some of the female turtles were hormonally induced to lay eggs. Then the wild-caught adults and their lab-incubated hatchlings were given physical and chemical exams, and their eggshells were tested to see if there was a link between shell strength and PFAS exposure.
“What makes this study really unique is we’re not only measuring the contaminant concentration, but we’re really diving deep into that health aspect as well,” Beale says.
[...] The findings are “a little scary,” says Jean-Luc Cartron, a biologist at the University of New Mexico in Albuquerque who was not involved with the research.
“We really need to jump on this issue of ecological toxicity,” Cartron says. “If the [study] authors are right, and the lack of juveniles that they see out in the environment is caused by PFAS, we don’t want to wait until we’re missing one whole full generation of animals.”
As aquatic animals with long lives and few predators, freshwater turtles are living environmental monitors for PFAS bioaccumulation, Beale says. Surprisingly, he says, even the animals from the site with the lowest level of contamination had PFAS-related health problems. “We still saw evidence of harm.”
While continuing this work with freshwater turtles, the team is also looking at PFAS impacts on more sites and more animals, including freshwater crocodiles, cane toads and frogs in Queensland, New South Wales and Victoria.
“All these animals that we love in the wild are being exposed to these chemicals, and we’re just not seeing the obvious impacts of those exposures,” Beale says. “My greatest fear is in 10, 15 years’ time, we might see those impacts and it might be too late.”
D.J. Beale et al. Forever chemicals don't make hero mutant ninja turtles: Elevated PFAS levels linked to unusual scute development in newly emerged freshwater turtle hatchlings (Emydura macquarii macquarii) and a reduction in turtle populations. Science of the Total Environment. Vol. 956, December 15, 2024, 176313. doi: 10.1016/j.scitotenv.2024.176313.
Stanford University researchers paid 1,052 people $60 to read the first two lines of The Great Gatsby to an app. That done, an AI that looked like a 2D sprite from an SNES-era Final Fantasy game asked the participants to tell the story of their lives. The scientists took those interviews and crafted them into an AI they say replicates the participants' behavior with 85% accuracy.
The study, titled Generative Agent Simulations of 1,000 People, is a joint venture between Stanford and scientists working for Google's DeepMind AI research lab. The pitch is that creating AI agents based on random people could help policymakers and business people better understand the public. Why use focus groups or poll the public when you can talk to them once, spin up an LLM based on that conversation, and then have their thoughts and opinions forever? Or, at least, as close an approximation of those thoughts and feelings as an LLM is able to recreate.
"This work provides a foundation for new tools that can help investigate individual and collective behavior," the paper's abstract said.
"How might, for instance, a diverse set of individuals respond to new public health policies and messages, react to product launches, or respond to major shocks?" The paper continued. "When simulated individuals are combined into collectives, these simulations could help pilot interventions, develop complex theories capturing nuanced causal and contextual interactions, and expand our understanding of structures like institutions and networks across domains such as economics, sociology, organizations, and political science."
All those possibilities based on a two-hour interview fed into an LLM that answered questions mostly like their real-life counterparts.
[...] The entire document is worth reading if you're interested in how academics are thinking about AI agents and the public. It did not take long for researchers to boil down a human being's personality into an LLM that behaved similarly. Given time and energy, they can probably bring the two closer together.
This is worrying to me. Not because I don't want to see the ineffable human spirit reduced to a spreadsheet, but because I know this kind of tech will be used for ill. We've already seen stupider LLMs trained on public recordings tricking grandmothers into giving away bank information to an AI relative after a quick phone call. What happens when those machines have a script? What happens when they have access to purpose-built personalities based on social media activity and other publicly available information?
What happens when a corporation or a politician decides the public wants and needs something based not on their spoken will, but on an approximation of it?
Can it join my Zoom calls, please?
https://tomscii.sig7.se/2025/01/De-smarting-the-Marshall-Uxbridge
This is the story of a commercially unavailable stereo pair of the bi-amped Marshall Uxbridge, with custom-built replacement electronics: active filters feeding two linear power amps. Listening to this high-fidelity set has brought me immense enjoyment. Play a great album on these near-fields, and the result is close to pure magic! Over and above the accurate reproduction of a wide audio range, the precision and depth of its stereo imaging is stunning.
Dumpster diving electronics is a way of life, which sometimes brings great moments of joy. One of these moments happened when I stumbled upon... the Marshall Uxbridge Voice, a smart speaker, in seemingly pristine condition. And not just one, but two of them! One was black, the other white. What a find!
What to do with these babies? Intrigued by the question "what could be wrong with them, why would someone throw them out like that?" – I set out to investigate. Plugging in one of them, after a few seconds of waiting, a female voice was heard: «NOW IN SETUP MODE. FOLLOW THE INSTRUCTIONS IN YOUR DEVICE'S COMPANION APP.»
[...] At that moment I knew I was not into smart speakers. Or at least not into the smartness. The speakers were good. Oh, they were excellent! But they had to be de-smarted. Preferably with a single, dumb, analog RCA line input on their backs, so nobody but me gets to decide over the program material. That way I could also drive them as a stereo pair. No Bluetooth, no latency, no female robot overlord, just a good old-fashioned line input!
Seems like a modest ask. Can we have it? Well, time to look inside!
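As a back-of-the-envelope illustration of the bi-amping idea (our sketch; the post's actual filter design isn't reproduced here): a unity-gain Sallen-Key stage built with equal resistors and equal capacitors has Q = 0.5, i.e. it is a second-order Linkwitz-Riley section, and its corner frequency is f = 1/(2πRC). The crossover frequency and capacitor value below are assumptions for the example.

    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.14159265358979;
        double fc = 2000.0;  /* assumed woofer/tweeter crossover, Hz */
        double C  = 10e-9;   /* assumed capacitor choice: 10 nF */
        double R  = 1.0 / (2.0 * PI * fc * C);  /* from f = 1/(2*pi*R*C) */
        printf("fc = %.0f Hz, C = 10 nF  ->  R = %.0f ohms per stage\n", fc, R);
        return 0;
    }

With these values each stage wants R of about 7958 ohms; in practice you would pick the nearest standard value, then bi-amp with a low-pass section feeding the woofer amp and the complementary high-pass feeding the tweeter amp.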