Dell reveals people don't care about AI in PCs – and a new truly embarrassing Windows 11 fail shows why
Dell is telling it straight as far as the contemporary world of PCs goes, with the computer maker bluntly explaining that consumers aren't buying laptops based on AI abilities.
PC Gamer reports (as flagged by The Verge) that Dell's execs were refreshingly frank on the topic of AI and the PC in a Q&A session that was part of the company's pre-briefing for CES 2026 this week.
First up, Dell's COO, Jeff Clarke, observed that there was an "expectation of AI driving end user demand" but also an "un-met promise of AI", hinting at some of the disappointment – or confusion – around AI PCs for the average consumer.
Then Dell's head of product, Kevin Terwilliger, went further and noted of the company's fresh product launches (which included the new XPS 14 and 16 laptops): "One thing you'll notice is the message we delivered around our products was not AI-first. So, a bit of a shift from a year ago where we were all about the AI PC."
Terwilliger continued: "We're very focused on delivering upon the AI capabilities of a device – in fact everything that we're announcing has an NPU in it – but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI. In fact I think AI probably confuses them more than it helps them understand a specific outcome."
In short, Dell is taking its foot off the pedal when it comes to pushing AI in its marketing, simply because it doesn't believe that consumers are that interested – and that it might even be a point of confusion for some.
While you could argue that the latter viewpoint is somewhat patronizing, I think it's a fair enough observation overall. I believe some consumers really don't care about AI, and do not see the benefit of the various abilities for Copilot+ PCs – those exclusive Windows 11 AI features – or how they might use them.
And in truth, there isn't that much to get excited about with these AI features to date, anyway – not beyond image-editing tricks (and let's face it, many folks don't do anything much with their photos) and additional search powers (some of which people may be very suspicious of on the privacy front, particularly the key AI piece of the puzzle here, which is Recall).
Many people probably don't use AI beyond queries posed to ChatGPT, Copilot, Gemini or whatever their favorite flavor of AI portal happens to be, using them as a kind of beefed-up Google search (other engines are available, etcetera).
Furthermore, Microsoft is taking plenty of heat for trying to crowbar more AI into Windows 11 – despite consistent cries from detractors who'd rather the software giant fixed what's wrong with its desktop OS than keep adding features that 'no one asked for' at a rate of knots – and that backlash is considerably tarnishing the reputation of AI features, with many questioning Microsoft's motives here.
Is all this for show, riding the AI hype train and pushing as hard as possible with such features in Windows 11 in a bid to further impress shareholders and drive market capitalization?
Onlookers to the kinds of online bunfights that have been going on between anti-AI rebels and Microsoft's execs are no doubt absorbing messaging which, let's say, isn't leaving these AI features in the best light.
Especially not when you get videos like the one below on X, recently posted by Ryan Fleury (hat tip to Futurism for spotting this), which highlights an embarrassing fail by the AI agent in Windows 11's Settings app.
That clip has currently amassed well over four million views (at the time of writing), and as you can see, it shows the AI freezing up and failing to offer any reply to a basic query. Not just any basic query, mind, but the very one that Windows 11 suggested the user should try in order to show off the capabilities of the agent – so, you'd expect that it'd work well given that fact.
Okay, so this is a one-off example, but we've seen others. I can't help but recall (pun fully intended) the video from Microsoft's marketing department where the Copilot AI assistant makes rather a mess of trying to help a user change the text size in Windows 11. (That clip was eventually pulled, and I'm not sure how it was published in the first place). These are eye-opening cases of AD – artificial dumbness – a term I coined two minutes ago (one that, unsurprisingly, already exists, so Google – or should I say Gemini which provides the 'AI overview' – tells me).
With sentiment souring around AI in Windows 11 to a greater extent of late, is it any wonder that Dell wants to distance itself from the concept of AI PCs? At least for now, especially as we're moving into a tough sales environment for laptops and desktops (with the spiking costs of RAM, storage, and also GPUs in some cases).
And yes, Dell may remind us that despite its comments here, it's still pushing with AI in a way, as "everything that we're announcing has an NPU in it" – but it's not like there's a choice in that regard, is there? Away from budget laptops, all cutting-edge PC chips that are going to power modern laptops now have beefy NPUs, whether they are AMD, Intel or Qualcomm.
In fairness, the agentic AI functionality that Microsoft is now implementing with Windows 11 may be the piece of the puzzle that finally moves the needle with AI and grabs the attention of consumers more widely – but that remains to be seen. As do the potential security pitfalls or other nastiness that AI agents might bring in tow.
And with one of the major problems with AI being a lack of trust in these features, whether from a security or privacy perspective – or just 'hallucinations' (AI getting stuff plain wrong) – AI agents could possibly be the 'breaking', rather than the 'making', of Copilot and all its associated trappings in Windows 11.
2026 will be a very telling year for AI, I think, but for now, Dell gets credit for being frank about the current state of play with the AI features in Windows 11 PCs. Although arguably, this is the only sensible route to take with marketing PCs right now, given the circumstances as discussed above.
https://scitechdaily.com/your-daily-cup-of-tea-could-help-fight-heart-disease-cancer-aging-and-more/
Tea has a long history as both a traditional remedy and an everyday drink. Now a new review suggests that reputation may have real support behind it.
Across human cohort studies and clinical trials, tea drinking shows its most consistent links to better heart and metabolic health, including lower risks of cardiovascular disease (CVD) and related problems like obesity and type 2 diabetes — with hints of protection against some cancers as well.
The authors also point to early signs that tea may be tied to slower cognitive decline, less age-related muscle loss, and anti-inflammatory and antimicrobial effects. Those areas are promising, they note, but still need stronger long-term human trials.
How much you drink seems to matter, too. In a meta-analysis of 38 prospective cohort data sets, "moderate" intake tracked with lower all-cause, CVD, and cancer mortality. For CVD mortality, the benefit signal appeared to level off around ~1.5–3 cups per day, while all-cause mortality showed its strongest association at ~2 cups per day.
At the same time, the review notes that not all tea products are created equal. Bottled teas and bubble teas can include additives such as artificial sweeteners and preservatives, which may introduce health concerns that do not apply in the same way to brewed tea.
Tea is made from the leaves of Camellia sinensis and has been consumed worldwide for centuries. It was first valued largely for medicinal purposes before becoming a widely enjoyed beverage. Scientists have long been interested in tea because it contains high levels of polyphenols, particularly catechins, which are thought to play a major role in many of its reported benefits.
This review, published in Beverage Plant Research, brings together evidence from laboratory research and human studies to examine how tea relates to a wide range of health outcomes. While green tea has been studied extensively, the authors emphasize that far less is known about black, oolong, and white tea, especially when it comes to comparing their health effects. The review also considers concerns raised by additives and possible contaminants found in some commercial tea drinks.
In the review, green tea stands out for cardiovascular protection. Human studies summarized by the authors link tea intake to modest reductions in blood pressure and improvements in blood lipids, including lower LDL cholesterol.
Large cohort studies also associate regular tea drinking with reduced all-cause mortality and lower deaths from CVD, with the most consistent signal appearing in populations where green tea is the dominant type.
For weight control and cardiometabolic markers, the review emphasizes that results are strongest in overweight/obese groups and depend on dose and study design. As examples:
- In people with obesity and metabolic syndrome, drinking ~4 cups/day of green tea for 8 weeks was reported to decrease body weight, lower LDL cholesterol, and reduce oxidative stress markers in at least one randomized trial highlighted in the review.
- In another trial in overweight adults, ~600–900 mg/day of tea catechins (with
On diabetes specifically, the review notes that many cohort studies link higher tea intake (often ~3–4+ cups/day) to lower type 2 diabetes risk, but results are not uniform.
Some large population data sets have shown the opposite pattern, and in several trials of people already diagnosed with type 2 diabetes, green tea extracts did not consistently improve HbA1c, glucose, or insulin.
The authors describe cancer findings as strong in animal research but mixed in human studies, likely because cancer risk varies by site, genetics, and environment. Still, meta-analyses cited in the review report lower risk signals for certain cancers, including:
- Oral cancer (reported relative risk around 0.798 for frequent green tea consumption)
- Lung cancer in women (reported RR around 0.78)
- Colon cancer (reported OR around 0.82)
The review highlights observational evidence that frequent tea consumption is associated with lower prevalence of cognitive impairment.
One meta-analysis summarized in the paper combined 18 studies (totaling ~58,929 participants) and found green tea intake was linked to lower odds of cognitive impairment, with the strongest association seen in adults aged ~50–69.
The authors also note that tea contains theanine, an amino acid that can cross the blood–brain barrier and has been linked in studies to stress-reducing and anti-anxiety effects, which could indirectly support cognitive health.
Muscle preservation in older adults
The review also points to early clinical evidence that tea polyphenols may help counter sarcopenia (age-related muscle loss).
One randomized controlled trial cited reported that ~600 mg/day of an epicatechin-enriched green tea extract for 12 weeks improved measures such as handgrip strength and attenuated muscle loss. Other studies discussed suggest tea catechins may work best when paired with resistance exercise and adequate protein/amino acid intake.
On inflammation, the review includes trials where catechins were associated with reduced inflammatory biomarkers. For example, in an RCT involving obese hypertensive participants, ~379 mg/day green tea extract for 3 months was associated with reductions in TNF-α (~14.5%) and C-reactive protein (~26.4%), alongside improved insulin-resistance–related measures.
Tea's antimicrobial effects are described as particularly plausible in the mouth and upper airway because tea compounds directly contact oral microbes. The authors cite evidence that catechins can inhibit cavity-causing bacteria (such as Streptococcus mutans), supporting interest in tea-based rinses for oral health. They also describe mostly lab-based antiviral findings (including work on influenza and coronaviruses) and note that human evidence remains limited, though small studies (such as catechin gargling in older adults) have reported lower infection rates and warrant larger replication.
However, while tea has numerous benefits, commercial tea products, such as bottled or bubble tea, often contain sugar, artificial sweeteners, and preservatives, which may reduce or negate the health benefits. Additionally, concerns regarding pesticide residues, heavy metals, and microplastics in tea have been raised.
These contaminants, though not posing significant health risks in typical consumption, remain a concern for long-term heavy tea drinkers. Moreover, the review addresses the issue of nutrient absorption interference, specifically with non-heme iron and calcium, potentially affecting people on vegetarian diets or those with specific nutritional needs.
The health benefits of tea are clear, but its consumption in processed forms like bottled tea and bubble tea should be moderated due to added sugars and preservatives. The findings from this review suggest that moderate consumption of traditional, freshly brewed tea can be beneficial, especially for preventing cardiovascular diseases, diabetes, and cancer. Future studies focusing on the long-term health effects of different tea types and the impact of contaminants will help refine our understanding of tea's health benefits and risks.
Reference: “Beneficial health effects and possible health concerns of tea consumption: a review” by Mingchuan Yang, Li Zhou, Zhipeng Kan, Zhoupin Fu, Xiangchun Zhang and Chung S. Yang, 13 November 2025, Beverage Plant Research.
https://phys.org/news/2025-12-runaway-stars-dark-milky.html
Hypervelocity stars have, since the 1920s, been an important tool that allows astronomers to study the properties of the Milky Way galaxy, such as its gravitational potential and the distribution of matter. Now astronomers from China have made a large-volume search for hypervelocity stars by utilizing a special class of stars known for their distinct, regular, predictable pulsation behavior that makes them useful as distance indicators.
The escape velocity of a planet, star, or galaxy is the speed a mass leaving the object's surface needs in order to coast, without further propulsion, completely out of that object's gravitational well, just reaching infinity. Earth's escape velocity is 11.2 kilometers per second (km/s).
Any mass that leaves the surface having that immediate initial speed will, without further energy, leave Earth's gravitational grasp. Examples are rocks ejected from Earth by a colliding incoming asteroid (as happened with rocks exchanged between Earth and Mars) or the possible escape of a steel lid covering a blast hole from a 1957 underground nuclear explosion in Nevada (unless the lid vaporized as it ascended towards space at an estimated six times Earth's escape velocity).
The escape velocity from the sun is 618 km/s (but only 42 km/s from Earth's position), and about 550 km/s from the sun's position in the Milky Way. Hypervelocity stars (HVSs) have tangential speeds of 1,000 km/s or more, making them gravitationally unbound from the Milky Way.
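For readers who want to check the arithmetic, below is a minimal sketch (plain Python, using textbook masses and radii that are assumptions rather than figures from the article) of the standard escape-velocity formula v = sqrt(2GM/r). It reproduces the numbers quoted above for Earth's surface, the Sun's surface, and the Sun as felt from Earth's orbit.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Escape velocity v = sqrt(2GM/r), returned in km/s."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1000.0

# Textbook values (assumed for illustration, not taken from the article)
earth_mass, earth_radius = 5.972e24, 6.371e6
sun_mass, sun_radius = 1.989e30, 6.957e8
earth_orbit = 1.496e11  # 1 AU in meters

print(f"Earth surface:  {escape_velocity(earth_mass, earth_radius):.1f} km/s")  # ~11.2
print(f"Sun surface:    {escape_velocity(sun_mass, sun_radius):.0f} km/s")      # ~618
print(f"Sun, from 1 AU: {escape_velocity(sun_mass, earth_orbit):.0f} km/s")     # ~42
```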
A prominent way HVSs come about is from a gravitational slingshot interaction with the supermassive black hole, Sagittarius A*, at the Milky Way's center.
The Hills mechanism, first proposed by astronomer Jack Hills in 1988, has one star of a binary pair captured by a black hole while the other is flung away from the black hole at a high speed.
Such a jettisoned star was first observed in 2019, traveling away from the core of the Milky Way at 1,755 km/s—0.6% the speed of light—which is greater than the escape velocity of the galactic center. Such stars also provide direct evidence for the supermassive black holes in galactic centers and their properties.
Moreover, by tracing back the trajectories of the runaway stars, scientists can map the gravitational potential of the Milky Way—how masses interact within the galaxy—including the distribution of dark matter in the halo, the huge spherical volume that surrounds a galaxy's disk.
With these motivations, three astronomers from Beijing scientific institutions, with lead author Haozhu Fu of Peking University, looked for HVSs by starting with RR Lyrae stars (RRLs). These are old, giant stars that pulse with periods of 0.2 to one day, found in the thick disk and halo of the Milky Way galaxy and often in globular clusters. (The Milky Way contains more than 150 globular clusters, with about a third of them arranged in a nearly spherical halo around the Milky Way's center.)
The intrinsic luminosity of these RRLs—their total energy output—is relatively well-determined from a relationship that connects their pulsing period, their absolute magnitude and their metallicity (the abundance of elements heavier than hydrogen and helium, which to astronomers are "metals"). Knowing their absolute energy output and their energy received at Earth enables their distance to be calculated from the inverse-square distance relationship.
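As a rough illustration of that last step, here is a minimal sketch of the inverse-square relationship in its usual distance-modulus form. The magnitudes in the example are hypothetical, and the paper's actual period-metallicity calibration for RRLs is not reproduced here.

```python
def rrl_distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    """Distance in parsecs from the standard distance modulus:
    m - M = 5*log10(d) - 5  =>  d = 10**((m - M + 5) / 5)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical example: an RR Lyrae star with absolute magnitude ~ +0.6
# observed at apparent magnitude 15.6 sits at roughly 10,000 parsecs (10 kpc).
print(rrl_distance_pc(15.6, 0.6))
```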
One published star catalog contained 8,172 RRLs from the Sloan Digital Sky Survey and an extended catalog held 135,873 RRLs with metallicity and distance estimated from Gaia photometry, which are measurements of the brightness of stars as observed by the Gaia satellite launched by the European Space Agency in 2013.
Looking for hypervelocity RRLs, they eliminated almost all that did not have properties needed for their search, especially spectroscopic measurements that gave radial velocities (along the line of sight) with sufficiently low uncertainties. This reduced the relevant dataset drastically, to 165 candidate hypervelocity RRLs.
The group then looked at each star's light curve and Doppler shifts, selecting the 87 stars that were the most reliable hypervelocity candidates. (Of these, seven had a tangential velocity above 800 km/s.) These divided into two groups: one concentrated towards the Milky Way's galactic center, and the other localized around the Magellanic Clouds, Large and Small, two irregular dwarf galaxies located near the Milky Way.
Their locations and concentrations suggested they had reached hypervelocity status through the Hills (or similar) mechanism. Many had velocities exceeding the Milky Way's escape velocity and were probably ejected from their host systems.
The team suspects that future Gaia satellite observations and spectroscopic analysis will shed light on the origins of these ejections. Identifying runaway stars in this way allows the properties of the Milky Way halo to be studied further, hopefully shedding light on its dark matter, still one of the deepest mysteries in all of modern physics.
More information: Haozhu Fu et al, Search for Distant Hypervelocity Star Candidates Using RR Lyrae Stars, The Astrophysical Journal (2025). DOI: 10.3847/1538-4357/ae0c09
https://go.theregister.com/feed/www.theregister.com/2026/01/07/datacenter_tax_breaks_virginia/
Trillion-dollar internet giants don't need freebies, watchdog warns, as giveaways double in a year
The US state of Virginia forfeited $1.6 billion in tax revenue through datacenter exemptions in fiscal 2025 – up 118 percent on the prior year – as the AI-driven construction boom accelerates.
Good Jobs First, a nonprofit promoting corporate and government accountability, warns these incentives have become essentially automatic.
Virginia's qualification threshold requires just $150 million in capital investment and 50 new jobs, which is modest compared to the billions spent on today's hyperscale facilities. The exemptions cover retail sales and use taxes on computer equipment, software, and hardware purchases.
The disclosure is detailed in Virginia's Annual Comprehensive Financial Report [PDF], which covers the fiscal year ended June 30, 2025.
Good Jobs First claims the tax breaks are getting out of hand as the criteria to become a qualifying recipient are set at a low bar.
Greg LeRoy, executive director at the nonprofit, said: "Like 35 other states, Virginia is losing control of its spending by enacting virtually automatic sales and use tax exemptions, and sometimes other subsidies, for datacenter building materials and equipment."
Recipients may qualify for local property tax reductions on top, he added.
Virginia has the highest datacenter count globally – more than 600 according to some estimates – equating to more than 10 percent of estimated global hyperscale capacity.
In a statement to The Register, Virginia state deputy chief of staff Ali Ahmad defended the tax breaks. He said:
"According to analysis published by Virginia's nonpartisan Joint Legislative Audit and Review Commission, data centers support 74,000 jobs, bring in $9.1 billion in Virginia GDP, and generate billions of dollars in local revenue that fund education, public safety, and critical local services. Datacenters offer the Commonwealth of Virginia a significant return on investment not covered in the intentionally narrow and inaccurate view this organization pushes."
Good Jobs First highlighted last year that more than half of US states offer economic development subsidies for datacenters exempting them from sales and use taxes.
LeRoy argues these breaks should end: "Given that every state that has studied its return on investment for datacenter subsidies has found a sharply negative result, we recommend that states now eliminate all datacenter tax exemptions.
"Trillion-dollar internet giants don't need the tax breaks, and taxpayers cannot afford them with massive federal budget cuts coming."
Meanwhile, grassroots opposition to the datacenter construction frenzy is mounting. The Washington Post reported this week on public campaigns in Arizona, Indiana, and Maryland challenging new facilities over water and electricity consumption and their visual impact on rural landscapes.
In December, Democratic senators questioned datacenter companies about rising energy bills driven by grid infrastructure costs often passed to consumers, while Senator Bernie Sanders called for a nationwide construction moratorium.
https://scitechdaily.com/some-people-get-drunk-without-drinking-and-scientists-finally-know-why/
Scientists have identified specific gut bacteria and biological pathways that cause alcohol to be produced inside the body in people with auto-brewery syndrome (ABS). This rare and frequently misunderstood condition causes individuals to become intoxicated even though they have not consumed alcohol. The study was led by researchers at Mass General Brigham, working with collaborators from the University of California, San Diego, and was published today (January 7) in Nature Microbiology.
How Auto-Brewery Syndrome Works
Auto-brewery syndrome develops when certain gut microbes break down carbohydrates and convert them into ethanol (alcohol), which then enters the bloodstream. Small amounts of alcohol can be produced during normal digestion in anyone, but in people with ABS, these levels can rise high enough to cause noticeable intoxication. Although the condition is extremely rare, researchers believe it is often overlooked because many clinicians are unfamiliar with it, testing is difficult, and social stigma may discourage proper evaluation.
Years of Misdiagnosis and Serious Consequences
People with ABS frequently go years without an accurate diagnosis. During that time, they may face social isolation, medical complications, and even legal trouble due to unexplained intoxication. Confirming the condition is challenging because the gold standard diagnostic method requires carefully supervised blood alcohol testing over time, which is not widely available.
Comparing Patients, Partners, and Healthy Controls
To better understand what drives the disorder, researchers studied 22 people diagnosed with ABS, 21 of their unaffected household partners, and 22 healthy control participants. The team analyzed and compared the makeup and activity of gut microbes across all three groups.
When stool samples collected during active ABS flare-ups were tested in the laboratory, samples from patients produced much higher levels of ethanol than samples from partners or healthy controls. This finding suggests that stool-based testing could one day help doctors diagnose the condition more easily and accurately.
Identifying the Microbes Behind ABS
Until now, scientists had limited insight into which specific microbes were responsible for auto-brewery syndrome. Detailed stool analysis revealed that several bacterial species appear to play a key role, including Escherichia coli and Klebsiella pneumoniae. During symptom flare-ups, some patients also showed sharply elevated levels of enzymes involved in fermentation pathways compared to control participants. The researchers note that while these organisms were identified in some patients, pinpointing the exact causative microbes in each individual remains a complex and time-consuming process.
Fecal Transplant Offers Clues to Treatment
The research team also closely monitored one patient whose symptoms improved after receiving a fecal microbiota transplantation when other treatments had failed. Periods of relapse and remission closely matched changes in specific bacterial strains and metabolic activity in the gut, strengthening the biological evidence behind the diagnosis. After a second fecal transplant, which included a different antibiotic pretreatment, the patient remained symptom-free for more than 16 months.
Hope for Better Diagnosis and Care
"Auto-brewery syndrome is a misunderstood condition with few tests and treatments. Our study demonstrates the potential for fecal transplantation," said co-senior author Elizabeth Hohmann, MD, of the Infectious Disease Division in the Mass General Brigham Department of Medicine. "More broadly, by determining the specific bacteria and microbial pathways responsible, our findings may lead the way toward easier diagnosis, better treatments, and an improved quality of life for individuals living with this rare condition."
Hohmann is currently working with colleagues at UC San Diego on a study evaluating fecal transplantation in eight patients with ABS.
Reference: “Gut microbial ethanol metabolism contributes to auto-brewery syndrome in an observational cohort” 8 January 2026, Nature Microbiology.
DOI: 10.1038/s41564-025-02225-y
National Geographic published an interesting article about renewable energy myths.
Still, myths about renewable energy are commonplace, says Andy Fitch, an attorney at Columbia Law School's Sabin Center for Climate Change Law who coauthored a report rebutting dozens of misconceptions. This misinformation, and in some cases, purposeful disinformation, may lead people to oppose renewable projects in their communities. Support for wind farms off New Jersey, for example, dropped more than 20 percent in less than five years after misleading and false claims began circulating.
"It's easy to prick holes into the idea of an energy transition," because it is a new concept to many people, Fitch says.
Myth #1 Renewable energy is unreliable.
There will always be days when clouds cover the sun or the wind is still. But those conditions are unlikely to occur at the same time in all geographic areas. "There's always a way to coordinate the energy mix" to keep the lights on, Fitch says. Today that coordination generally includes electricity from fossil fuels such as natural gas or coal. In California, where more than half the state's power now comes from solar, wind, and other renewables, natural gas and other non-renewables generate the rest.
Improvements in storage technology will also increasingly allow renewable energy to be captured during sunny or windy days. Already, some 10 percent of California's solar-powered energy is saved for evening use.
Myth #2 Rooftop solar is super pricey.
Back in 1980, solar panels cost a whopping $35 (in today's dollars) per watt of generated energy. In 2024 that figure fell to 26 cents. Solar has become so cost-efficient that building and operating the technology is now cheaper over its lifespan than conventional forms of energy like gas, coal, and nuclear power. Homeowners also save a significant amount of money after rooftop solar is installed, according to the U.S. Department of Energy. (The method remains cost effective, even after federal subsidies to purchase the panels ceased late last year.) A family who finances panels might save close to a thousand dollars a year in their electric bills, even taking into account payments on the loan.
Myth #3 Wind power inevitably kills wildlife.
With hundreds of thousands of turbines in operation, wind power now makes up eight percent of the world's electricity. But alongside these sprouting modern windmills have come stories of birds, whales, and even insects and bats killed or injured in their presence. In some cases, wind energy can cause a small fraction of wildlife deaths, but they "pale in comparison to what climate change is doing to [the animals'] habitat," says Douglas Nowacek, a conservation technology expert at Duke University. "If we're going to slow down these negative changes, we have to go to renewable energy."
When it comes to whales or other marine mammals, "we have no evidence—zero" that any offshore wind development has killed them, says Nowacek, who studies this as lead researcher in the school's Wildlife and Offshore Wind program. (Most die instead from ship strikes and deadly entanglements in commercial fishing gear.)
Myth #4 Electric cars can't go far without recharging.
Electric vehicles are an important element of the transition to renewable energy because, unlike gas-powered cars, they can be charged by solar and wind energy. EVs are also more energy efficient, since they use nearly all of their power for driving, compared with traditional cars' use of just 25 percent. (Most of the rest is lost as heat.) Concerns that EVs can't make it to their destination likely spring from early prototypes, when cars developed in the 1970s got less than 40 miles per charge. Today, some 50 models can go more than 300 miles, with some topping 500.
Worries about the longevity of EV batteries are also unfounded. Only one percent of batteries manufactured since 2015 have had to be replaced (outside of manufacturing recalls, which have been negligible in recent years). Studies done by Tesla found the charging capacity in its sedans dropped just 15 percent [PDF] after 200,000 miles.
Myth #5 Renewables are on track to solve the climate crisis.
The world is in a better place than it would be without renewables. Before the 2015 Paris Agreement called for this energy transition, experts had forecast 4°C planetary warming by 2100; now they expect it to stay under 3°C, according to a recent report by World Weather Attribution, a climate research group. But even this target "would still lead to a dangerously hot planet," the report states. Last summer Hawaiian observatories documented carbon dioxide concentrations above 430 parts per million—a record-breaking high far above the 350 PPM Paris target. To sufficiently slow climate warming, experts say wind generation must more than quadruple its current pace by 2030, and solar and other renewables must also be more widely adopted. Yet while global investment in renewable energy rose 10 percent in the first half of last year, it fell by more than a third in the U.S.
Bali is preparing to introduce a law which will require tourists to declare personal bank account information covering a three-month period in order to visit the island. This law is intended to filter out less desirable travellers and promote "high quality tourism", in a move to counter the bad behaviour of boorish visitors over the last several decades. This change will come on top of the recently applied tourist levy and the tightened management of incidents involving tourists.
Would you give your latest three bank statements to the Bali government in order to visit?
Microsoft has been gradually making it harder to activate Windows (and other products) without an internet connection. Most recently, it started clamping down on local accounts that could bypass OOBE sign-in, and now we're seeing reports that another legacy method has been retired. Phone activation, where you could call Microsoft to activate Windows & Office, no longer works, as Ben Kleinberg demonstrates in a new YouTube video.
Now, it'd be reasonable to assume that something as archaic as calling to activate your license had probably been sunset long ago. However, you'll be surprised to learn that Microsoft still lists it as a viable method in its support docs. This is particularly important for people on older operating systems like Windows 7, who expect an offline alternative to Microsoft's now-online-only activation systems.
Moreover, this ordeal was necessary for Ben because he was using an OEM key that could not be activated directly within Windows 7, as the activation servers for that version are effectively dead. The video shows that calling the listed number plays an automated message saying "support for product activation has moved online."
After the call, he also received a text message containing a link to the modern Microsoft Product Activation Portal we know today. Upon visiting the site, Ben was required to sign in with a Microsoft account, which immediately defeated the purpose of activating by phone.
Initially, he couldn't get the confirmation ID on his iPhone using Firefox, but switching to Safari on his laptop resolved the issue. This wasn't a device-specific problem, just a browser-related hiccup. Eventually, Ben acquired the numbers he needed, and both his copy of Windows 7 and Office 2010 were successfully activated.
The video concludes on a bittersweet note, highlighting that call activation is effectively dead. However, users can still access the portal on a computer or phone to complete the process. Ironically, the entire reason for calling Microsoft in the first place was that Ben couldn't activate Windows 7 from within the OS, but now that a website exists, there's no need to call anyway.
Unfortunately, a Microsoft account is required, which Ben complained about, and which mirrors the concern many users have, even in the latest Windows 11 builds today.
The Economic Times published a hilarious article about a mathematician's opinion of AI for solving math problems:
Renowned mathematician Joel David Hamkins has expressed strong doubts about large language models' utility in mathematical research, calling their outputs "garbage" and "mathematically incorrect". Joel Hamkins, a prominent mathematician and professor of logic at the University of Notre Dame, recently shared his unvarnished assessment of large language models in mathematical research during an appearance on the Lex Fridman podcast. Calling large language models fundamentally useless, he said they give "garbage answers that are not mathematically correct", reports TOI.
Joel David Hamkins is a mathematician and philosopher who undertakes research on the mathematics and philosophy of the infinite. He earned his PhD in mathematics from the University of California at Berkeley and comes to Notre Dame from the University of Oxford, where he was Professor of Logic in the Faculty of Philosophy and the Sir Peter Strawson Fellow of Philosophy at University College, Oxford. Prior to that, he held longstanding positions in mathematics, philosophy, and computer science at the City University of New York.
"I guess I would draw a distinction between what we have currently and what might come in future years," Hamkins began, acknowledging the possibility of future progress. "I've played around with it and I've tried experimenting, but I haven't found it helpful at all. Basically zero. It's not helpful to me. And I've used various systems and so on, the paid models and so on."
According to Hamkins, AI's tendency to be confidently wrong mirrors some of the most frustrating human interactions. What concerns him even more than the occasional mathematical error is how AI systems respond when those errors are highlighted. When he points out clear flaws in their reasoning, the models often reply with breezy reassurances such as, "Oh, it's totally fine." That combination of confidence, incorrectness, and resistance to correction threatens the collaborative trust that meaningful mathematical dialogue depends on.
"If I were having such an experience with a person, I would simply refuse to talk to that person again," Hamkins said, noting that the AI's behaviour resembles unproductive human interactions he would actively avoid. He believes when it comes to genuine mathematical reasoning, today's AI systems remain unreliable.
Despite these issues, Hamkins recognizes that current limitations may not be permanent. "One has to overlook these kind of flaws and so I tend to be a kind of skeptic about the value of the current AI systems. As far as mathematical reasoning is concerned, it seems not reliable."
His criticism comes amid mixed reactions within the mathematical community about AI's growing role in research. While some scholars report progress using AI to explore problems from the Erdős collection, others have urged caution. Mathematician Terence Tao, for example, has warned that AI can generate proofs that appear flawless but contain subtle errors no human referee would accept. At the heart of the debate is a persistent gap: strong performance on benchmarks and standardized tests does not necessarily translate into real-world usefulness for domain experts.
Californians can now submit demands requiring 500 brokers to delete their data:
Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that's among the strictest in the nation took effect at the beginning of the year.
According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others.
The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercising, travel, entertainment habits, and just about any other imaginable information belonging to millions of people.
Two years ago, California's Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.
On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy then forwards it to all brokers.
Starting in August, brokers will have 45 days after receiving the notice to report the status of each deletion request. If any of the brokers' records match the information in the demand, all associated data—including inferences—must be deleted unless legal exemptions such as information provided during one-to-one interactions between the individual and the broker apply. To use DROP, individuals must first prove they're a California resident.
I used the DROP website and found the flow flawless and the interface intuitive. After I provided proof of residency, the site prompted me to enter personal information such as any names and email addresses I use, and specific information such as VIN (vehicle identification numbers) and advertising IDs from phones, TVs, and other devices. It required about 15 minutes to complete the form, but most of that time was spent pulling that data from disparate locations, many buried in system settings.
It initially felt counterintuitive to provide such a wealth of personal information to ensure that data is no longer tracked. As I thought about it more, I realized that all that data is already compromised as it sits in online databases, which are often easily hacked and, of course, readily available for sale. What's more, CalPrivacy promises to use the data solely for data deletion. Under the circumstances, enrolling was a no-brainer.
It's unfortunate that the law is binding only in California. As the scourge of data-broker information hoarding and hacks on their databases continues, it would not be surprising to see other states follow California's lead.
Now if we could just make this a model for laws in the other US fiefdoms (i.e. states).
https://scitechdaily.com/scientists-found-a-way-to-help-the-brain-bounce-back-from-alzheimers/
For more than a hundred years, Alzheimer's disease (AD) has been regarded as a condition that cannot be undone. Because of this assumption, most scientific efforts have focused on stopping the disease before it starts or slowing its progression, rather than attempting to restore lost brain function. Despite decades of research and billions of dollars invested, no drug trial for Alzheimer's has ever been designed with the explicit goal of reversing the disease and restoring normal brain performance.
That long-standing belief is now being directly tested by researchers from University Hospitals, Case Western Reserve University, and the Louis Stokes Cleveland VA Medical Center. Their work asked a fundamental question that had rarely been explored: Can brains already damaged by advanced Alzheimer's recover?
The study was led by Kalyani Chaubey, PhD, of the Pieper Laboratory and was published on December 22 in Cell Reports Medicine. By analyzing multiple preclinical mouse models alongside brain tissue from people with Alzheimer's, the researchers identified a critical biological problem underlying the disease. They found that Alzheimer's is strongly driven by the brain's failure to maintain normal levels of a key cellular energy molecule called NAD+. Just as important, they showed that keeping NAD+ levels in balance can both prevent the disease and, under certain conditions, reverse it.
NAD+ naturally declines throughout the body as people age, including in the brain. When this balance is disrupted, cells gradually lose the ability to carry out essential processes needed for normal function and survival. The team found that this loss of NAD+ is far more pronounced in the brains of people with Alzheimer's. The same severe decline was also observed in mouse models of the disease.
[...] Amyloid buildup and tau abnormalities are among the earliest and most important features of Alzheimer's. In both mouse models, these mutations led to extensive brain damage that closely resembles the human condition. This included breakdown of the blood-brain barrier, damage to nerve fibers, chronic inflammation, reduced formation of new neurons in the hippocampus, weakened communication between brain cells, and widespread oxidative damage. The mice also developed severe memory and thinking problems similar to those experienced by people with Alzheimer's.
After confirming that NAD+ levels drop sharply in both human and mouse Alzheimer's brains, the researchers explored two different strategies. They tested whether preserving NAD+ balance before symptoms appear could prevent Alzheimer's, and whether restoring NAD+ balance after the disease was already well established could reverse it.
[...] The results exceeded expectations. Maintaining healthy NAD+ levels prevented mice from developing Alzheimer's, but even more striking outcomes were seen when treatment began later. In mice with advanced disease, restoring NAD+ balance allowed the brain to repair major pathological damage caused by the genetic mutations.
Both mouse models showed complete recovery of cognitive function. This recovery was supported by blood tests showing normalized levels of phosphorylated tau 217, a recently approved clinical biomarker of Alzheimer's in people. These findings provided strong evidence that the disease process had been reversed and highlighted a potential biomarker for future clinical trials.
"We were very excited and encouraged by our results," said Andrew A. Pieper, MD, PhD, senior author of the study and Director of the Brain Health Medicines Center, Harrington Discovery Institute at UH. "Restoring the brain's energy balance achieved pathological and functional recovery in both lines of mice with advanced Alzheimer's. Seeing this effect in two very different animal models, each driven by different genetic causes, strengthens the idea that restoring the brain's NAD+ balance might help patients recover from Alzheimer's."
The findings suggest a major change in how Alzheimer's could be approached in the future. "The key takeaway is a message of hope – the effects of Alzheimer's disease may not be inevitably permanent," said Dr. Pieper. "The damaged brain can, under some conditions, repair itself and regain function."
[...] Dr. Pieper cautioned that this strategy should not be confused with over-the-counter NAD+-precursors. Studies in animals have shown that such supplements can raise NAD+ to dangerously high levels that promote cancer. The approach used in this research relies instead on P7C3-A20, which helps cells maintain a healthy NAD+ balance during extreme stress without pushing levels beyond their normal range.
Journal Reference: “Pharmacologic reversal of advanced Alzheimer’s disease in mice and identification of potential therapeutic nodes in human brain” by Kalyani Chaubey, Edwin Vázquez-Rosa, Sunil Jamuna Tripathi, et al., 22 December 2025, Cell Reports Medicine. DOI: 10.1016/j.xcrm.2025.102535
https://www.theregister.com/2026/01/04/ai_agents_insider_threats_panw/
AI agents represent the new insider threat to companies in 2026, according to Palo Alto Networks Chief Security Intel Officer Wendi Whitmore, and this poses several challenges to executives tasked with securing the expected surge in autonomous agents.
"The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this massive amount of pressure - and massive workload - that the teams are under to quickly go through procurement processes, security checks, and understand if the new AI applications are secure enough for the use cases that these organizations have," Whitmore told The Register.
"And that's created this concept of the AI agent itself becoming the new insider threat," she added.
According to Gartner's estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. This surge presents a double-edged sword, Whitmore said in an interview and predictions report.
On one hand, AI agents can help fill the ongoing cyber-skills gap that has plagued security teams for years, doing things like correcting buggy code, automating log scans and alert triage, and rapidly blocking security threats.
"When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation," Whitmore said.
[...] One of the risks stems from the "superuser problem," Whitmore explained. This occurs when the autonomous agents are granted broad permissions, creating a "superuser" that can chain together access to sensitive applications and resources without security teams' knowledge or approval.
"It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans," Whitmore said.
"The second area is one we haven't seen in investigations yet," she continued. "But while we're on the predictions lens, I see this concept of a doppelganger."
This involves using task-specific AI agents to approve transactions or review and sign off on contracts that would otherwise require C-suite level manual approvals.
[...] By using a "single, well-crafted prompt injection or by exploiting a 'tool misuse' vulnerability," adversaries now "have an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database," according to Palo Alto Networks' 2026 predictions.
This also illustrates the ongoing threat of prompt-injection attacks. This year, researchers have repeatedly shown prompt injection attacks to be a real problem, with no fix in sight.
"It's probably going to get a lot worse before it gets better," Whitmore said, referring to prompt-injection. "Meaning, I just don't think we have these systems locked down enough."
[...] "Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller," Whitmore said. "They want to dump Active Directory credentials, they want to elevate privileges. We don't see that as much now. What we're seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf."
Whitmore, along with just about every other cyber exec The Register has spoken with over the past couple of months, pointed to the "Anthropic attack" as an example.
She's referring to the September digital break-ins at multiple high-profile companies and government organizations later documented by Anthropic. Chinese cyberspies used the company's Claude Code AI tool to automate intel-gathering attacks, and in some cases they succeeded.
While Whitmore doesn't anticipate AI agents to carry out any fully autonomous attacks this year, she does expect AI to be a force multiplier for network intruders. "You're going to see these really small teams almost have the capability of big armies," she said. "They can now leverage AI capabilities to do so much more of the work that previously they would have had to have a much larger team to execute against."
Whitmore likens the current AI boom to the cloud migration that happened two decades ago. "The biggest breaches that happened in cloud environments weren't because they were using the cloud, but because they were targeting insecure deployments of cloud configurations," she said. "We're really seeing a lot of identical indicators when it comes to AI adoption."
For CISOs, this means establishing best practices when it comes to AI identities and provisioning agents and other AI-based systems with access controls that limit them to only data and applications that are needed to perform their specific tasks.
"We need to provision them with least-possible access and have controls set up so that we can quickly detect if an agent does go rogue," Whitmore said.
OpenAI is betting big on audio AI, and it's not just about making ChatGPT sound better. According to new reporting from The Information, the company has unified several engineering, product, and research teams over the past two months to overhaul its audio models, all in preparation for an audio-first personal device expected to launch in about a year.
The move reflects where the entire tech industry is headed — toward a future where screens become background noise and audio takes center stage. Smart speakers have already made voice assistants a fixture in more than a third of U.S. homes. Meta just rolled out a feature for its Ray-Ban smart glasses that uses a five-microphone array to help you hear conversations in noisy rooms — essentially turning your face into a directional listening device. Google, meanwhile, began experimenting in June with "Audio Overviews" that transform search results into conversational summaries. And Tesla is integrating Grok and other LLMs into its vehicles to create conversational voice assistants that can handle everything from navigation to climate control through natural dialogue.
It's not just the tech giants placing this bet. A motley crew of startups has emerged with the same conviction, albeit with varying degrees of success. The makers of the Humane AI Pin burned through hundreds of millions before their screenless wearable became a cautionary tale. The Friend AI pendant, a necklace that records your life and offers companionship, has sparked privacy concerns and existential dread in equal measure. And now at least two companies, including Sandbar and one helmed by Pebble founder Eric Migicovsky, are building AI rings expected to debut in 2026, allowing wearers to literally talk to the hand.
The form factors may differ, but the thesis is the same: audio is the interface of the future. Every space — your home, your car, even your face — is becoming an interface.
OpenAI's new audio model, slated for early 2026, will reportedly sound more natural, handle interruptions like an actual conversation partner, and even speak while you're talking, which is something today's models can't manage. The company is also said to envision a family of devices, possibly including glasses or screenless smart speakers, that act less like tools and more like companions.
As The Information notes, former Apple design chief Jony Ive, who joined OpenAI's hardware efforts through the company's $6.5 billion acquisition in May of his firm io, has made reducing device addiction a priority, seeing audio-first design as a chance to "right the wrongs" of past consumer gadgets.
Strong subsidies keep Tesla on top in Norway:
Last year, 95.5 percent of all newly registered vehicles in Norway were electric. While consumers in Europe and other markets are pivoting away from Tesla and toward hybrid vehicles, the Scandinavian country is staying firmly on course toward full EV adoption.
The Norwegian Road Federation reported that 95.9 percent of new cars registered in November were electric, a figure that climbed to 98 percent in December. These numbers represent a sharp increase from late 2024, when Norway became the first country where electric vehicles outnumbered petrol-powered cars on the road. In 2025, most newly registered gasoline-powered vehicles were hybrids, sports cars, or models used by first responders.
Tesla remains Norway's most popular automotive brand by a wide margin, increasing its market share slightly last year to 19.1 percent. This stands in stark contrast to trends in the US, China, and much of Europe, where Tesla sales have declined amid the rollback of EV incentives and growing public backlash against CEO Elon Musk's political views. The company was also named America's least reliable car brand last year, coinciding with a nine percent drop in global sales.
Signs of weakening confidence in EVs are particularly visible in the United States. Ford discontinued the all-electric F-150 Lightning last year in favor of hybrid models. In Europe, policymakers recently abandoned plans to ban new gasoline car sales by 2035.
Despite gradually increasing taxes on EVs, Norway continues to offer comparatively strong incentives, while duties on petrol-powered cars are also rising. Electric vehicles priced below roughly $30,000 remain exempt from value-added tax, and buyers rushed to make purchases ahead of January 1, when an additional $5,000 in VAT took effect on more expensive EVs.
Chinese automaker BYD also made notable gains in Norway last year, though it remains far behind Tesla. Its market share increased from 2.1 to 3.3 percent, with sales more than doubling over the period.
Globally, BYD has overtaken Tesla as the world's leading EV seller, posting a sales increase of over 28 percent in 2025. The rapid pace at which BYD and other Chinese automakers have brought vehicles from concept to assembly has forced Western manufacturers to rethink their production workflows and accelerate development timelines.
https://scitechdaily.com/scientists-create-a-periodic-table-for-artificial-intelligence/
Artificial intelligence is increasingly relied on to combine and interpret different kinds of data, including text, images, audio, and video. One obstacle that continues to slow progress in multimodal AI is deciding which algorithmic approach best fits the specific task an AI system is meant to solve.
Researchers have now introduced a unified way to organize and guide that decision process. Physicists at Emory University developed a new framework that brings structure to how algorithms for multimodal AI are derived, and their work was published in The Journal of Machine Learning Research.
"We found that many of today's most successful AI methods boil down to a single, simple idea — compress multiple kinds of data just enough to keep the pieces that truly predict what you need," says Ilya Nemenman, Emory professor of physics and senior author of the paper. "This gives us a kind of 'periodic table' of AI methods. Different methods fall into different cells, based on which information a method's loss function retains or discards."
A loss function is the mathematical rule an AI system uses to evaluate how wrong its predictions are. During training, the model continually adjusts its internal parameters in order to reduce this error, using the loss function as a guide.
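For readers unfamiliar with the term, here is a minimal, generic illustration of a loss function guiding training: ordinary mean squared error with plain gradient descent on a toy linear model. The model, data, and learning rate are illustrative assumptions; this is not the loss functions or framework from the Emory paper.

```python
import numpy as np

def mse_loss(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Mean squared error: the average of squared prediction errors."""
    return float(np.mean((y_pred - y_true) ** 2))

# Fit y = w*x + b by repeatedly nudging parameters opposite to the loss gradient.
x = np.array([0.0, 1.0, 2.0, 3.0])
y_true = np.array([1.0, 3.0, 5.0, 7.0])   # underlying rule: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for _ in range(500):
    y_pred = w * x + b
    grad_w = np.mean(2 * (y_pred - y_true) * x)   # d(loss)/dw
    grad_b = np.mean(2 * (y_pred - y_true))       # d(loss)/db
    w, b = w - lr * grad_w, b - lr * grad_b

print(round(w, 2), round(b, 2), round(mse_loss(w * x + b, y_true), 4))  # ~2.0, ~1.0, ~0
```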
"People have devised hundreds of different loss functions for multimodal AI systems and some may be better than others, depending on context," Nemenman says. "We wondered if there was a simpler way than starting from scratch each time you confront a problem in multimodal AI."
To address this, the team developed a mathematical framework that links the design of loss functions directly to decisions about which information should be preserved and which can be ignored. They call this approach the Variational Multivariate Information Bottleneck Framework.
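For context, the classic two-variable information bottleneck objective, which frameworks of this kind generalize, trades off compressing the input X into a representation Z against keeping the information Z carries about the target Y. The paper's multivariate, variational version is more elaborate than this sketch.

```latex
% Classic information bottleneck objective (a sketch, not the paper's
% multivariate variational form): compress X into Z while retaining what
% Z says about the target Y; \beta sets the trade-off.
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;Z) \;-\; \beta \, I(Z;Y)
```

Larger β preserves more of the predictive information, while smaller β compresses harder; this is essentially the "control knob" Martini describes below.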
"Our framework is essentially like a control knob," says co-author Michael Martini, who worked on the project as an Emory postdoctoral fellow and research scientist in Nemenman's group. "You can 'dial the knob' to determine the information to retain to solve a particular problem."
"Our approach is a generalized, principled one," adds Eslam Abdelaleem, first author of the paper. Abdelaleem took on the project as an Emory PhD candidate in physics before graduating in May and joining Georgia Tech as a postdoctoral fellow.
"Our goal is to help people to design AI models that are tailored to the problem that they are trying to solve," he says, "while also allowing them to understand how and why each part of the model is working."
AI-system developers can use the framework to propose new algorithms, to predict which ones might work, to estimate the needed data for a particular multimodal algorithm, and to anticipate when it might fail.
"Just as important," Nemenman says, "it may let us design new AI methods that are more accurate, efficient and trustworthy."
The researchers brought a unique perspective to the problem of optimizing the design process for multimodal AI systems.
"The machine-learning community is focused on achieving accuracy in a system without necessarily understanding why a system is working," Abdelaleem explains. "As physicists, however, we want to understand how and why something works. So, we focused on finding fundamental, unifying principals to connect different AI methods together."
Abdelaleem and Martini began this quest — to distill the complexity of various AI methods to their essence — by doing math by hand.
"We spent a lot of time sitting in my office, writing on a whiteboard," Martini says. "Sometimes I'd be writing on a sheet of paper with Eslam looking over my shoulder."
The process took years, first working on mathematical foundations, discussing them with Nemenman, trying out equations on a computer, then repeating these steps after running down false trails.
"It was a lot of trial and error and going back to the whiteboard," Martini says.
They vividly recall the day of their eureka moment.
They had come up with a unifying principle that described a tradeoff between compression of data and reconstruction of data. "We tried our model on two test datasets and showed that it was automatically discovering shared, important features between them," Martini says. "That felt good."
As Abdelaleem was leaving campus after the exhausting, yet exhilarating, final push leading to the breakthrough, he happened to look at his Samsung Galaxy smart watch. It uses an AI system to track and interpret health data, such as his heart rate. The AI, however, had misunderstood the meaning of his racing heart throughout that day.
"My watch said that I had been cycling for three hours," Abdelaleem says. "That's how it interpreted the level of excitement I was feeling. I thought, 'Wow, that's really something! Apparently, science can have that effect."
The researchers applied their framework to dozens of AI methods to test its efficacy.
"We performed computer demonstrations that show that our general framework works well with test problems on benchmark datasets," Nemenman says. "We can more easily derive loss functions, which may solve the problems one cares about with smaller amounts of training data."
The framework also holds the potential to reduce the amount of computational power needed to run an AI system.
"By helping guide the best AI approach, the framework helps avoid encoding features that are not important," Nemenman says. "The less data required for a system, the less computational power required to run it, making it less environmentally harmful. That may also open the door to frontier experiments for problems that we cannot solve now because there is not enough existing data."
The researchers hope others will use the generalized framework to tailor new algorithms specific to scientific questions they want to explore.
Meanwhile, they are building on their work to explore the potential of the new framework. They are particularly interested in how the tool may help to detect patterns of biology, leading to insights into processes such as cognitive function.
"I want to understand how your brain simultaneously compresses and processes multiple sources of information," Abdelaleem says. "Can we develop a method that allows us to see the similarities between a machine-learning model and the human brain? That may help us to better understand both systems."
Reference: “Deep Variational Multivariate Information Bottleneck – A Framework for Variational Losses” by Eslam Abdelaleem, Ilya Nemenman and K. Michael Martini Jr., 2 September 2025, arXiv.
DOI: 10.48550/arXiv.2310.03311