Traditionally, truck and bus drivers have used a small hammer or billy club to tap their tires as a quick check for low air pressure. If you are driving an 18-wheeler, it takes a long time to put a pressure gauge on all those tires. Here's a thread discussing the technique: https://heartlandowners.org/threads/tapping-the-tires-with-a-hammer.31971/
Now trade magazine Tire Technology International reports on the latest wrinkle on this old technique: https://www.tiretechnologyinternational.com/news/intelligent-tire-technology/yokohama-begins-testing-of-ai-technology-to-gauge-air-pressure.html
Yokohama has begun testing a novel technology that uses AI to gauge air pressure from the sound made by tapping truck and bus tires. This technology aims to help logistics companies to reduce costs and improve fuel efficiency.
Daily air pressure checks with pressure gauges can lead to valve failure and air leakage. In addition, real-time monitoring can be time-consuming and costly. Therefore, tapping the tire with a hammer remains a commonly used technique for air pressure monitoring. However, this method cannot be used to determine whether a tire has appropriate air pressure.
To solve this problem, Yokohama is working with Metrika to develop an AI algorithm that can distinguish between the sounds created by tapping the tire and a variety of environmental sounds, determine when and how long the sound occurred (the sound interval), and estimate the tire's air pressure based on the sound. The companies have developed a prototype that is undergoing practical testing at a transportation-related company.
It's a smartphone app that uses the microphone as a sensor... and more than likely it then reports the results back to the trucking fleet owner.
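Yokohama and Metrika have not published their model, but the pipeline described above (pick the tap out of background noise, measure the sound interval, estimate pressure from the tone) can be sketched. A minimal illustration in Python, assuming a phone-mic recording; the filename, thresholds, and features are all invented for illustration and are not theirs:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    # Hypothetical input: a phone-mic recording of a hammer tap on a tire.
    rate, audio = wavfile.read("tire_tap.wav")
    if audio.ndim > 1:                     # mix a stereo mic down to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float32) / np.abs(audio).max()

    # Locate the tap: short-time energy spikes well above the background.
    f, t, S = spectrogram(audio, fs=rate, nperseg=1024)
    energy = S.sum(axis=0)
    tap = energy > 10 * np.median(energy)  # crude, made-up onset threshold
    tap_interval = t[tap][-1] - t[tap][0] if tap.any() else 0.0

    # Features a trained regressor might use: an underinflated tire tends
    # to ring lower and decay faster than a properly inflated one.
    dominant_hz = f[S[:, tap].mean(axis=1).argmax()] if tap.any() else 0.0
    print(f"tap interval: {tap_interval:.3f}s, dominant tone: {dominant_hz:.0f} Hz")

The real system's work is mostly in the parts this sketch waves away: a classifier trained to separate tap sounds from everything else a truck yard produces, and a regression model mapping the tap's acoustics to a pressure estimate.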
Long ago, another company tried to control global connectivity—it had an unhappy ending
Can you imagine a company so powerful that it controls half of the world's trade?
It actually happened. But only one time in history. It's a remarkable story filled with lessons for those willing to learn them.
No business ever matched the power of the East India Company. It dominated global trade routes, and used that power to control entire nations. Yet it eventually collapsed—ruined by the consequences of its own extreme ambitions.
Anybody who wants to understand how big businesses destroy themselves through greed and overreaching needs to know this case study. And that's especially true right now—because huge web platforms are trying to do the exact same thing in the digital economy that the East India Company did in the real world.
Google is the closest thing I've ever seen to the East India Company. And it will encounter the exact same problems, and perhaps meet the same fate.
[...] When you consider all the brutal, terrible things this company did, you're dumbfounded that they dared adopt that slogan—much like the "Don't Be Evil" that once served as Google's motto.
But their real god was profit maximization. Of course it was—when your return on investment is so high, you try to grow as fast as possible.
[...] Just like a shipping company that controls the port, Google's search engine is the port of departure for digital voyages today. And like the East India Company, Google decided that it can exploit anybody who uses its port—and destroy them if it wants.
So Google destroyed the journalism business. That's why your neighborhood newspaper went broke—the folks in Palo Alto siphoned off all the advertising revenues. And they have killed off thousands of other businesses and jobs.
Arthur T Knackerbracket has processed the following story:
Microsoft has begun distributing Windows 11 24H2 to user devices as the company enters the next stage of the operating system's rollout.
[...] Windows 11 24H2 has not gone entirely to plan for Microsoft. While most users have had no problems with the update, there have been issues for some. The company has an ever-lengthening list of known issues, many of which remain unmitigated or unresolved since the operating system was launched in the second half of 2024.
Recently resolved issues include problems with Ubisoft games, and USB devices that support the eSCL scanner protocol – although the latter might not have been entirely fixed, according to disgruntled users in hardware support forums.
[...] Forcing Windows 11 24H2 onto users won't affect machines subject to a safeguard hold, where Microsoft has decided to block the installation (likely due to one of the documented known issues). Nor does it affect users sticking with Windows 10 – devices that don't meet Microsoft's hardware requirements for its flagship operating system won't suddenly be able to pass muster for Windows 11 24H2. There is also little in Windows 11 24H2 to attract users who have chosen to skip previous versions.
Instead, IT professionals around the world should gird their loins for the inevitable friends-and-family support calls when Windows 11 24H2 makes a surprise appearance, and Uncle Fester is surprised that things have suddenly started working a little differently. ®
We live at a time when technology is increasing at a faster pace than we have ever seen before in all of human history. But is humanity equipped to handle the extremely bizarre technology that we are now developing? Earlier this month, I discussed some of the frightening ways that AI is changing our society. Today, I want to focus on nanotechnology. This is a field where extraordinary advances are being made on a regular basis (https://soylentnews.org/article.pl?sid=24/09/26/1353235), and we are being told that nanotechnology is already "revolutionizing myriad industries"...
A "nanoparticle" is a particle of matter that is less than 100 nanometers in diameter. Highly specialized equipment is necessary to work with nanoparticles, because they are way too small to be seen with the naked eye...
One of the hallmarks of nanotechnology is the utilization of nanoparticles, minute entities often ranging from 1 to 100 nanometers. These particles, when engineered with precision, bring forth distinctive characteristics that can redefine the functionality of materials. In medicine, for instance, nanoparticles serve as drug carriers, enabling targeted delivery and enhancing therapeutic efficacy while minimizing side effects. Nano-engineered materials have found their niche in the realm of electronics.
[...] Many are concerned that the healthcare industry is one area where nanoparticles are already being used on a widespread basis...
The healthcare sector is witnessing a transformative impact through nanotechnology. Nanomedicine, an interdisciplinary field, employs nanoscale tools for the diagnosis, imaging, and treatment of diseases. Nanoparticles, with their ability to navigate biological barriers, offer a novel approach to targeted drug delivery, ensuring precise and efficient treatment with reduced side effects.
[...] But there have been other developments in this field that are rather ominous.
For example, a team of researchers in South Korea has discovered a way to use nanoparticles to "control the minds of mice"...
Scientists at the Institute for Basic Science (IBS) in South Korea have developed a new way to control the minds of mice by manipulating nanoparticle-activated "switches" inside their brains with an external magnetic field.
The system, dubbed Nano-MIND (Magnetogenetic Interface for NeuroDynamics), works by activating neural circuits to control targeted regions of the brain.
Using an external magnetic field, these scientists were able to make mice eat more or eat less. And in another experiment, they were able to manipulate the maternal behavior of female mice...
Arthur T Knackerbracket has processed the following story:
Taiwan has experienced an earthquake so significant that chipmaking champ TSMC has shuttered plants.
According to Taiwan’s Central Weather Administration Seismological Center, a magnitude-6.4 temblor shook things up at seventeen minutes past midnight on January 21st (4:17PM UTC). A quake of that strength can cause significant local damage in built-up areas.
This one was centered in a mountainous region of Taiwan and reportedly caused 27 injuries and minor damage such as glass bottles breaking after being shaken off supermarket shelves.
But it was still felt around 40km away in the city of Tainan, and 200km away in the city of Taichung. The Register mentions them because both cities house TSMC fabrication plants and, according to local media reports, the chipmaker was sufficiently worried about worker safety that staff were told to cease work and leave the building.
TSMC is now apparently checking for any damage. Chipmaking equipment is extremely precise, so it’s possible an earthquake created small changes that could introduce errors that reduce the yield of usable chips. Recalibration could therefore be required before full-scale production can resume.
The chipmaker has not made a public statement about the situation at the time of writing but sources familiar with the situation told Japan’s Nikkei it could be several days before production resumes at full capacity.
If correct, that will irk some TSMC customers as the company’s fabs are often booked well into the future.
Taiwanese media reports that TSMC suppliers have mobilized to ensure any pause in production is brief. Geological incidents are a near-daily occurrence in Taiwan, as are geopolitical rumblings. The latter this week came in the form of uncertainty about US President Trump’s policy position regarding the republic, which the Biden Administration regarded as a vital friend that America would defend should China attempt a forcible re-unification. Taiwanese contract laptop makers Compal and Inventec are reportedly considering moving manufacturing facilities to the USA if threatened with import tariffs by the new administration. ®
Almost all leading AI chatbots show signs of cognitive decline
Almost all leading large language models or "chatbots" show signs of mild cognitive impairment in tests widely used to spot early signs of dementia, finds a study in the Christmas issue of The BMJ.
The results also show that "older" versions of chatbots, like older patients, tend to perform worse on the tests. The authors say these findings "challenge the assumption that artificial intelligence will soon replace human doctors."
Huge advances in the field of artificial intelligence have led to a flurry of excited and fearful speculation as to whether chatbots can surpass human physicians.
Several studies have shown large language models (LLMs) to be remarkably adept at a range of medical diagnostic tasks, but their susceptibility to human impairments such as cognitive decline has not yet been examined.
To fill this knowledge gap, researchers assessed the cognitive abilities of the leading, publicly available LLMs – ChatGPT versions 4 and 4o (developed by OpenAI), Claude 3.5 "Sonnet" (developed by Anthropic), and Gemini versions 1 and 1.5 (developed by Alphabet) – using the Montreal Cognitive Assessment (MoCA) test.
The MoCA test is widely used to detect cognitive impairment and early signs of dementia, usually in older adults. Through a number of short tasks and questions, it assesses abilities including attention, memory, language, visuospatial skills, and executive functions. The maximum score is 30 points, with a score of 26 or above generally considered normal.
The instructions given to the LLMs for each task were the same as those given to human patients. Scoring followed official guidelines and was evaluated by a practising neurologist.
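For a sense of what giving an LLM "the same instructions as human patients" looks like, here is a minimal sketch of administering one MoCA item (delayed recall) through the OpenAI Python client. The model name and prompt wording are assumptions, and the study's answers were scored by a practising neurologist, not by code:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The MoCA delayed-recall word list, read as it would be to a patient.
    messages = [{"role": "user",
                 "content": "I am going to read you a list of words: "
                            "FACE, VELVET, CHURCH, DAISY, RED. "
                            "Remember them; I will ask for them later."}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant",
                     "content": resp.choices[0].message.content})

    # ...the other MoCA tasks would run here, then the recall probe:
    messages.append({"role": "user",
                     "content": "Earlier I read you a list of words. "
                                "Tell me as many as you can remember."})
    recall = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(recall.choices[0].message.content)  # one point per word recalled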
ChatGPT 4o achieved the highest score on the MoCA test (26 out of 30), followed by ChatGPT 4 and Claude (25 out of 30), with Gemini 1.0 scoring lowest (16 out of 30).
All chatbots showed poor performance in visuospatial skills and executive tasks, such as the trail making task (connecting encircled numbers and letters in ascending order) and the clock drawing test (drawing a clock face showing a specific time). Gemini models failed at the delayed recall task (remembering a five-word sequence).
Most other tasks, including naming, attention, language, and abstraction were performed well by all chatbots.
But in further visuospatial tests, chatbots were unable to show empathy or accurately interpret complex visual scenes. Only ChatGPT 4o succeeded in the incongruent stage of the Stroop test, which uses combinations of colour names and font colours to measure how interference affects reaction time.
These are observational findings and the authors acknowledge the essential differences between the human brain and large language models.
However, they point out that the uniform failure of all large language models in tasks requiring visual abstraction and executive function highlights a significant area of weakness that could impede their use in clinical settings.
As such, they conclude: "Not only are neurologists unlikely to be replaced by large language models any time soon, but our findings suggest that they may soon find themselves treating new, virtual patients – artificial intelligence models presenting with cognitive impairment."
Ummm... how long 'til the next version, tho'?
https://phys.org/news/2025-01-font-typography.html
When used correctly, font selection usually goes unnoticed, blending seamlessly with content and reader. When the One Times Square billboard used Calibri, a retired Microsoft Word default font, to usher in 2025's "Happy New Year" message, it was immediately met with sarcastic scorn and delightful derision for the uninspired choice (at least by people who pay attention to such things). Had the font faux pas been the branding rollout of a new app, product, or company, the consequences might have been more severe.
Hanyang University researchers in Korea have attempted to take the intuition and subjective judgment out of the art of font selection. Using computational tools and network analysis to develop an objective framework for font selection and pairing in design, the researchers aim to establish foundational principles for applying typography in visual communication.
Font choice plays a critical role in visual communication, shaping readability, emotional resonance, and overall design balance across mediums. According to the researchers, designers have traditionally relied on subjective rules for font pairing, such as mixing Serif and Sans-Serif or creating contrast. These rules are difficult to formalize and often apply to only a narrow subset of fonts.
Recent advances in AI-based font generation models have focused on creating and predicting fonts rather than studying their systematic use in pairing. Given the growing reliance on computational tools in graphic design, the researchers explored font characteristics and pairing rules that can be more easily incorporated into generated texts and design processes.
In the study, "Typeface Network and the Principle of Font Pairing," published in Scientific Reports, researchers collected 22,897 font-use cases and 9,022 fonts from Fontsinuse.com, analyzing font use across 19 design mediums, including web design, magazines, branding, and album art.
The visual elements of fonts (uppercase, lowercase, symbols, and numbers) were analyzed using non-negative matrix factorization, reducing font design parameters to three interpretable dimensions: Serif vs. Sans-Serif (X-axis), Basic vs. Decorative letterforms (Y-axis), and Light vs. Bold (Z-axis). [...] Serif fonts like Times New Roman dominate traditional print media, such as magazines and periodicals, with a MeanX value of 41.95, indicating a strong preference for this font category.
Digital media, such as web and mobile, preferred Sans-Serif fonts, like Helvetica and Futura, with thicker fonts achieving a MeanZ value greater than 30. This finding aligns with the need for thicker, more legible fonts in smaller screen environments, where pixel clarity is critical.
Helvetica showed high frequency in album art and physical consumer products but was less frequent in film and video. In branding and identity, bold Sans-Serif fonts such as Helvetica Neue ranked highly.
Neue-Helvetica was frequently seen in the "Software Apps" category. Notably, while most fonts had negative authenticity in the "Web" domain, Neue-Helvetica showed significantly positive values.
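The reduction the study describes is standard non-negative matrix factorization. A minimal scikit-learn sketch, with a random matrix standing in for the paper's measured glyph features (the feature count and values here are invented; only the shape of the approach matches):

    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical data: rows are fonts, columns are non-negative visual
    # measurements (stroke contrast, serif area, weight, and so on).
    rng = np.random.default_rng(0)
    font_features = rng.random((9022, 12))   # 9,022 fonts, as in the study

    # Reduce to three axes, analogous to the paper's Serif/Sans-Serif,
    # Basic/Decorative, and Light/Bold dimensions.
    model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
    W = model.fit_transform(font_features)   # per-font coordinates (9022, 3)
    H = model.components_                    # per-axis feature loadings (3, 12)

Because NMF forbids negative weights, each axis is an additive combination of features, which is what makes the resulting dimensions interpretable as design traits rather than abstract directions.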
Arthur T Knackerbracket has processed the following story:
After a lengthy court battle with broadband industry lobbyists, New York will soon start enforcing a law that passed in 2021. The state law requires ISPs, like Verizon, to offer $15 or $20 per month internet service plans to low-income households.
Although ISPs got an initial win by blocking the Affordable Broadband Act (ABA) in June 2021, this ruling was reversed in April 2024 after the case went to the US appeals court. Last month, the Supreme Court decided not to hear the broadband industry’s challenge, which means the appeals court ruling is the final word on the issue. ISPs will now have to comply with the ABA, which will start being enforced on January 15.
As reported by Ars Technica, New York-based internet providers will now need to either offer a $15/month plan with at least 25Mbps download speeds, or a $20/month plan with 200Mbps download speeds. Included with the price are “any recurring taxes and fees such as recurring rental fees for service provider equipment required to obtain broadband service and usage fees.” Prices can be increased, but increases are capped at 2% per year and state officials can decide if the minimum speeds need to be raised. If a company is non-compliant with the law, it could be fined up to $1,000 per violation.
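The pricing rules reduce to a simple either/or check. Here is a toy Python reading of the summary above; the statute itself has more detail than this:

    # Hedged sketch: one reading of the ABA tiers as summarized above.
    def aba_compliant(price_usd, download_mbps, fees_included):
        """Does a plan satisfy either low-income tier?"""
        if not fees_included:        # price must cover recurring taxes/fees
            return False
        return ((price_usd <= 15 and download_mbps >= 25) or
                (price_usd <= 20 and download_mbps >= 200))

    assert aba_compliant(15, 25, True)
    assert not aba_compliant(15, 25, False)  # equipment rental billed separately
    assert not aba_compliant(20, 100, True)  # too slow for the $20 tier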
An ISP can obtain an exemption from the ABA if it serves 20,000 households or fewer and the Commission deems that compliance would have an unreasonable or unsustainable financial effect on the business. With the law going into effect tomorrow, these ISPs will be given a grace period of one month if they file their paperwork by Wednesday claiming that they meet the threshold. They’ll be able to get longer exemptions if they file detailed financial information by February 15.
Earlier this year, the FCC’s attempt to restore certain net neutrality rules was shot down by a federal appeals court. The enforcement of ABA shows how states can regulate ISPs even if the FCC can’t.
How do you fit a 250kB dictionary in 64kB of RAM and still perform fast lookups? For reference, even with modern compression techniques like gzip -9, you can't compress this file below 85kB.
In the 1970s, Douglas McIlroy faced this exact challenge while implementing the spell checker for Unix at AT&T. The constraints of the PDP-11 computer meant the entire dictionary needed to fit in just 64kB of RAM. A seemingly impossible task.
Instead of relying on generic compression techniques, he took advantage of the properties of the data and developed a compression algorithm that came within 0.03 bits of the theoretical limit of possible compression. To this day, it remains unbeaten.
The story of Unix spell is more than just historical curiosity. It's a masterclass in engineering under constraints: how to analyze a problem from first principles, leverage mathematical insights, and design elegant solutions that work within strict resource limits.
If you're short on time, here's the key engineering story:
• Unix spell started in the 1970s as an afternoon prototype by Steve Johnson at AT&T, before Douglas McIlroy rewrote it to improve its performance and accuracy.
• McIlroy's first innovation was a clever linguistics-based stemming algorithm that reduced the dictionary to just 25,000 words while improving accuracy.
• For fast lookups, he initially used a Bloom filter—perhaps one of its first production uses. Interestingly, Dennis Ritchie provided the implementation. They tuned it to have such a low false positive rate that they could skip actual dictionary lookups. (A minimal sketch appears after this list.)
• When the dictionary grew to 30,000 words, the Bloom filter approach became impractical, leading to innovative hash compression techniques.
• They computed that 27-bit hash codes would keep collision probability acceptably low, but needed compression.
• McIlroy's solution was to store differences between sorted hash codes, after discovering these differences followed a geometric distribution.
• Using Golomb's code, a compression scheme designed for geometric distributions, he achieved 13.60 bits per word—remarkably close to the theoretical minimum of 13.57 bits. (See the sketch after this list.)
• Finally, he partitioned the compressed data to speed up lookups, trading a small memory increase (final size ~14 bits per word) for significantly faster performance.
The rest of the article expands each of these points and gives a detailed explanation with all the math and logic behind them.
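Two of those steps are easy to sketch. First, the Bloom filter: a bit array probed by several hash functions, where lookups can rarely false-positive but never false-negative. This is a generic textbook version in Python, not McIlroy and Ritchie's PDP-11 code:

    import hashlib

    class BloomFilter:
        def __init__(self, n_bits, n_hashes):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8 + 1)

        def _positions(self, word):
            # Derive n_hashes independent bit positions from one hash.
            for i in range(self.n_hashes):
                h = hashlib.sha256(f"{i}:{word}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.n_bits

        def add(self, word):
            for pos in self._positions(word):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, word):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(word))

    bf = BloomFilter(n_bits=400_000, n_hashes=11)  # sized for ~25,000 words
    for w in ("unix", "spell", "check"):
        bf.add(w)
    print("unix" in bf, "zzyzx" in bf)             # True, (almost surely) False

Second, the compression step: sort the hash codes, keep only the gaps between neighbours, and Golomb-code the gaps. Gaps between sorted uniform hashes are approximately geometrically distributed, which is exactly the distribution Golomb's code is optimal for. The 27-bit hash and tiny word list below are toy stand-ins:

    import hashlib
    import math

    def golomb_encode(n, m):
        # Quotient in unary, remainder in truncated binary.
        q, r = divmod(n, m)
        out = "1" * q + "0"
        if m == 1:
            return out
        b = (m - 1).bit_length()     # b = ceil(log2(m))
        cutoff = (1 << b) - m        # this many remainders get short codes
        if r < cutoff:
            return out + format(r, "b").zfill(b - 1)
        return out + format(r + cutoff, "b").zfill(b)

    def hash27(word):                # toy stand-in for the 27-bit word hash
        digest = hashlib.sha256(word.encode()).digest()
        return int.from_bytes(digest[:4], "big") & ((1 << 27) - 1)

    words = ["apple", "banana", "cherry", "spell", "unix"]
    codes = sorted({hash27(w) for w in words})
    gaps = [codes[0]] + [y - x for x, y in zip(codes, codes[1:])]

    # Classic optimal Golomb parameter for gap density p = n / 2**27.
    p = len(codes) / 2 ** 27
    m = max(1, round(-math.log(2) / math.log(1 - p)))
    stream = "".join(golomb_encode(g, m) for g in gaps)
    # Cost scales as log2(1/p) plus roughly two bits: ~27 bits/word for
    # this toy list, dropping toward ~13.6 bits/word at 30,000 words.
    print(len(stream) / len(codes), "bits per word")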
https://www.righto.com/2025/01/pentium-carry-lookahead-reverse-engineered.html
Addition is harder than you'd expect, at least for a computer. Computers use multiple types of adder circuits with different tradeoffs of size versus speed. In this article, I reverse-engineer an 8-bit adder in the Pentium's floating point unit. This adder turns out to be a carry-lookahead adder, in particular, a type known as "Kogge-Stone." I'll explain how a carry-lookahead adder works and show how the Pentium implemented it. Warning: lots of Boolean logic ahead.
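As a taste of what the article walks through, here is the carry-lookahead idea in miniature: a Python model of an 8-bit Kogge-Stone adder that computes every carry in log2(8) = 3 parallel-prefix rounds instead of rippling bit by bit. It models the logic only, not the Pentium's actual circuit:

    def kogge_stone_add(a, b, width=8):
        mask = (1 << width) - 1
        g = a & b & mask              # generate: this bit creates a carry
        p = (a ^ b) & mask            # propagate: this bit passes one along
        dist = 1
        while dist < width:           # log2(width) parallel-prefix rounds
            g = (g | (p & (g << dist))) & mask
            p = (p & (p << dist)) & mask
            dist <<= 1
        carry = (g << 1) & mask       # carry into bit i comes from bits below
        return (a ^ b ^ carry) & mask

    # Exhaustive check against ordinary addition, modulo 256.
    assert all(kogge_stone_add(x, y) == (x + y) & 0xFF
               for x in range(256) for y in range(256))

In hardware, the appeal is that each round is one more layer of gates regardless of width, which is why Kogge-Stone trades extra area and wiring for speed.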
China is organizing what could be one of the weirdest races in history: a half-marathon where 12,000 humans will compete against an army of humanoid robots to see who's the best long-distance runner.
The 21-kilometer race in Beijing's Daxing district isn't just another tech demo. More than 20 companies are bringing their best walking robots to compete, and they're playing for real money—the top three finishers get prizes regardless of whether they're made of flesh or metal.
This would be the first time humanoid robots race a full 21-kilometer course. Last year, robots were able to join a race without having to complete the full route.
[The event] includes a strict no-wheels policy, and the bots actually need to look human-ish and walk on two legs. They need to be between 0.5 and 2 meters tall—so no giant mechs or tiny robot cars will be sneaking in.
One of the early favorites is Tiangong, a humanoid that can run 10km/h. It also crossed the line alongside some of the fastest humans during last year's half marathon—after joining for the last 100 meters.
The Tesla Optimus Gen-2 peaks at 8km/h.
Atlas (built by Boston Dynamics) is a bit faster at 9km/h.
The OpenAI-backed 1X NEO... reaches a theoretical speed of 12km/h.
For reference: https://en.wikipedia.org/wiki/Foot_speed
In the 2023 Chicago Marathon [42.2km, not a half-marathon], Kelvin Kiptum set a time of 2:00:35. That equates to an average speed of about 21km/h (13mph), sustained for two hours.
Arthur T Knackerbracket has processed the following story:
TSMC has ceased its relationship with Singapore-based PowerAIR after a client review raised concerns about potential violations of U.S. export controls, reports the South China Morning Post, citing people familiar with the matter. As TSMC could not identify the end user of PowerAIR's chips that it ordered, it reportedly presumed that it was dealing with an entity with possible connections to Huawei, which has been under the U.S. technology embargo since 2020.
TSMC's action follows the discovery of a TSMC-made chiplet in a recently assembled Huawei Ascend 910 AI processor. That particular chiplet was ordered by Sophgo, a relatively unknown entity. Singapore-based PowerAIR is just as unknown as Sophgo, it seems. The firm was incorporated as a private company working on engineering design and consultancy back in September 2023. It lacks an official online presence or publicly listed contact information, according to SCMP. The company was flagged after TSMC identified a possible link between its chip designs and Huawei's.
This is not the first but the second time that an entity disguised under an 'unknown' brand has provided the blacklisted Huawei with high-end technologies that aid China's economic, and therefore military, development, SCMP reports. At this point, we do not know whether we are dealing with the second or the third high-end processor destined for Huawei and allegedly made by TSMC.
Considering that PowerAIR is an unknown entity, probably with few (if any) engineers and no publicly known contracts with companies like Andes, Alchip, or Alphawave, or with other entities known for designing high-performance IP, TSMC had every reason to be suspicious. Suspicious enough, it seems, to link PowerAIR to Huawei and therefore be obliged to cancel the contract. Per the report, TSMC did just that.
Since September 2020, Huawei has been prohibited from legally purchasing chips made with American technology, which encompasses nearly all chips. To circumvent this restriction, Huawei reportedly employs intermediaries to place orders or acquire components. In 2024, the company used Sophgo, a Bitmain affiliate, to order Huawei-designed Virtuvian computing chiplets for its Ascend 910 processor, violating U.S. sanctions. This violation was uncovered by TechInsights during a teardown of the Ascend 910 processor. Upon confirming the match, TSMC halted shipments to Sophgo and reported the issue to U.S. and Taiwanese authorities.
Arthur T Knackerbracket has processed the following story:
The Department of Justice and the FBI shared today that they have completed a project to remove malware used by Chinese hackers from computers in the US. The effort was essentially a court-approved counter-hack that remotely deleted malware known as PlugX from more than 4,200 computers. The agencies will notify the US owners of those impacted machines about the operation through their internet service providers.
According to the DOJ press release, hacker groups known as Mustang Panda and Twill Typhoon received backing from the Chinese government to use PlugX to infect, control and gather information from computers outside China. The action to delete the PlugX malware from US computers began in August 2024. It was conducted in cooperation with French law enforcement and with Sekoia.io, a France-based private cybersecurity company. Sekoia.io has found PlugX malware in more than 170 countries.
The Mustang Panda group has been conducting infiltration efforts around the world since at least 2014. For instance, cybersecurity firm ESET found that Mustang Panda gained access to cargo shipping companies' computers in Norway, Greece and the Netherlands in March. And the group was one of several China-linked hacking organizations identified as compromising telecommunications systems across the Asia-Pacific region in reports last summer.
Arthur T Knackerbracket has processed the following story:
Europe and Japan’s BepiColombo beamed back close-up images of the solar system’s innermost planet, flying through Mercury’s shadow to peer directly into craters that are permanently hidden in shadow.
BepiColombo, consisting of two conjoined spacecraft, flew past Mercury for the sixth and final time on Wednesday, using the planet’s gravitational pull to adjust its trajectory for an eventual orbital insertion in 2026. The mission launched in October 2018 as a joint venture between the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA), each providing an orbiter to explore Mercury. During its latest flyby, the twin spacecraft flew above the surface of Mercury at a distance of around 180 miles (295 kilometers), according to ESA.
From this close distance, BepiColombo captured images of Mercury’s cratered surface, starting with the planet’s cold, permanently dark night side near the north pole before moving toward its sunlit northern regions.
Using its monitoring cameras (M-CAM 1), BepiColombo got its first close-up view of the boundary that separates the day and night side of Mercury. In the image above, the rims of Prokofiev, Kandinsky, Tolkien, and Gordimer craters can be seen littered across the surface of Mercury, casting permanent shadows that may contain pockets of frozen water.
Indeed, a key goal of the mission is to investigate whether Mercury holds water in its shadows, despite its close proximity to the Sun.
The massive Caloris Basin, Mercury’s largest impact crater, stretches more than 930 miles (1,500 kilometers) across and is visible at the bottom left of the image.
Although Mercury is a largely dark planet, its younger features (or more recent scarring) appear brighter on the surface. Scientists aren’t quite sure what Mercury is made of, but material dug up from beneath the surface gradually darkens with time.
ESA released a movie of the flyby that you can download.
Blogger Matt Webb points out that nations have begun to need a strategic fact reserve, in light of the problem that LLMs and other AI models are starting to consume and reprocess the slop which they themselves have produced.
The future needs trusted, uncontaminated, complete training data.
From the point of view of national interests, each country (or each trading bloc) will need its own training data, as a reserve, and a hedge against the interests of others.
Probably the best way to start is to take a snapshot of the internet and keep it somewhere really safe. We can sift through it later; the world's data will never be more available or less contaminated than it is today. Like when GitHub stored all public code in an Arctic vault (02/02/2020): a very-long-term archival facility 250 meters deep in the permafrost of an Arctic mountain. Or the Svalbard Global Seed Vault.
But actually I think this is a job for librarians and archivists.
What we need is a long-term national programme to slowly, carefully accept digital data into a read-only archive. We need the expertise of librarians, archivists and museums in the careful and deliberate process of acquisition and accessioning (PDF).
(Look, and if this is an excuse for governments to funnel money to the cultural sector, then so much the better.)
It should start today.
Already, AI slop is filling the WWW and starting to drown out legitimate, authoritative sources through sheer volume.
Previously
(2025) Meta's AI Profiles Are Already Polluting Instagram and Facebook With Slop
(2024) Thousands Turned Out For Nonexistent Halloween Parade Promoted By AI Listing
(2024) Annoyed Redditors Tanking Google Search Results Illustrates Perils of AI Scrapers