
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 5 submissions in the queue.



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period: 2022-07-01 to 2022-12-31 (All amounts are estimated)
Base Goal: $3500.00
Currently: $438.92 (12.5%)
Covers transactions: 2022-07-02 10:17:28 UTC .. 2022-10-05 12:33:58 UTC (SPIDs: [1838..1866])
Last Update: 2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, Cloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)

[ Results | Polls ]
Comments:166 | Votes:302

posted by jelizondo on Tuesday October 14, @08:20PM   Printer-friendly

A unique case of a woman with male chromosomes in her blood:

This is believed to be the first recorded case of its kind, and doctors believe her blood came from her twin brother while they were in the womb.

In every other cell in her body, Ana Paula Martins has XX chromosomes associated with female sex characteristics.

But in her blood cells she carries XY chromosomes that typically determine biological male sex.

The phenomenon was discovered in 2022, after Ana Paula suffered a miscarriage.

During a medical examination, the gynecologist ordered a karyotype analysis, which allows for a detailed examination of an individual's chromosomes, usually from a blood sample.

"They called me from the lab and said the analysis needed to be repeated," Ana Paula recalls.

The results showed the presence of XY chromosomes in her blood, which confused both Ana and the doctors.

"I went to examine the patient and she had, so to speak, absolutely all the normal female characteristics," explains Gustavo Maciel, a gynecologist at a Brazilian health organization. Fleury Medicine and Health.

"She had a uterus, ovaries... the ovaries were functioning normally," adds this professor at the School of Medicine of the University of São Paulo.

Ana Paula was then referred to geneticist Kai Kwai at the Hospital Israelita Albert Einstein in São Paulo.

He began a detailed medical investigation with Professor Maciel and other experts.

During the research, Ana Paula told doctors that she had a twin brother, which was crucial to understanding her case.

Comparing their DNA showed that Ana Paula's blood cells, but only blood cells, were identical to those of her twin brother.

She had the same characteristic genetic markers.

"In the DNA of her mouth, in the DNA of her skin - she is her own, unique," says Professor Maciel.

"But in her blood, she is, in fact, her brother."

Ana Paula's case is an example of chimerism, in which an organism carries different genetic sets in different tissues or organs.

Certain medical therapies can lead to chimerism, such as bone marrow transplantation, in which leukemia patients receive donor cells that then populate their bone marrow.

Spontaneous chimerism is "a very rare occurrence," emphasizes Dr. Kwai.

By reviewing scientific papers, researchers identified cases of twin pregnancies in other mammals in which blood exchange occurred between twins of different sexes.

Scientists assume that in the womb, the placentas of Ana Paula and her twin brother made some kind of contact, forming a connection between the blood vessels that carried the boy's blood to the girl's.

"There was a blood transfusion that we call intertwin transfusion syndrome."

"At one point, the twins' veins and arteries intertwined in the umbilical cord and he transferred all of his blood material to Anne Paule," explains Professor Maciel.

"The most astonishing thing is that this material remained in her body her entire life," he adds.

She then began producing blood with XY chromosomes, while the XX chromosomes remained in the rest of her body.

"She has a little piece of her brother running through her veins," says Dr. Quayo.

The team believes that this unusual case could contribute to further research into human immunity and reproduction.

Ana Paula's body tolerates her brother's cells and they are not attacked by her immune system.

"Her case could open up new fields of research and help us better understand some issues, for example, those related to transplantation," says Professor Maciel.

There are reports of rare cases of the presence of XY chromosomes in women, but these are mostly associated with fertility problems.

However, this is not the case with Ana Paula, who became pregnant during the research and gave birth to a healthy son.

Genetic analysis showed that the child has the expected DNA - half of the chromosomes come from the mother and half from the father, and there is nothing from the uncle.

"(Anne Paula's) egg cell contains her genetic material."

"Her blood was not involved," explains Professor Maciel.

It was important for Ana Paula to discover the cause of her genetic anomaly, but most importantly, she learned that it would not affect her pregnancy.

"It wasn't something that could get in the way of achieving my goal of having my baby," she says.


Original Submission

posted by jelizondo on Tuesday October 14, @03:34PM   Printer-friendly

Tom's Hardware published a report about the new deal between AMD and OpenAI:

OpenAI and AMD have announced a multibillion-dollar partnership that will see the companies collaborate on AI data centers powered by AMD processors. OpenAI has committed to purchasing 6 gigawatts of AMD chips, starting with the MI450 next year. That will be done either by purchasing the chips directly from AMD or through cloud computing partners. AMD CEO Lisa Su told the WSJ that the deal would generate tens of billions of dollars in revenue for AMD over the next five years.

The two companies are not disclosing the exact financial details of the deals. However, AMD emphasized that each gigawatt of capacity is worth tens of billions of dollars, so it's possible the deal is worth upwards of $60 billion.

In return, OpenAI will receive warrants for up to 160 million AMD shares, approximately 10% of AMD, at a price of $0.01 per share, to be awarded in phases, provided that OpenAI meets deployment milestones. The warrants will only be exercised if AMD's share price increases, although again, the specifics are unclear.
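For a sense of scale, here is a quick back-of-the-envelope check using only the figures quoted above; the arithmetic is ours, not a disclosed deal term:

    # Back-of-the-envelope check on the warrant terms quoted above.
    # The share count and strike price come from the article; the rest
    # is simple arithmetic, not disclosed deal terms.
    shares = 160_000_000   # warrant shares, "approximately 10% of AMD"
    strike = 0.01          # dollars per share

    print(f"Total exercise cost: ${shares * strike:,.0f}")      # $1,600,000
    print(f"Implied shares outstanding: {shares / 0.10:,.0f}")  # ~1.6 billion

At $0.01 a share, exercising the entire stake would cost OpenAI only $1.6 million.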

The deal is an enormous win for AMD and stands juxtaposed with Nvidia's groundbreaking Intel partnership announced last month. Under the terms of that deal, Nvidia and Intel are jointly developing Intel x86 RTX SOCs for PCs featuring Nvidia graphics, as well as custom Nvidia data center x86 processors. Nvidia also received $5 billion in Intel stock as part of the deal.

OpenAI will use AMD's chips for inference in order to cope with skyrocketing demand. "It's hard to overstate how difficult it's become... We want it super fast, but it takes some time," OpenAI's Sam Altman told the WSJ.

"We are thrilled to partner with OpenAI to deliver AI compute at massive scale," said Dr. Lisa Su, chair and CEO, AMD. "This partnership brings the best of AMD and OpenAI together to create a true win-win enabling the world's most ambitious AI buildout and advancing the entire AI ecosystem."

The first deployment will be 1 gigawatt worth of MI450 chips, scheduled for the second half of 2026. Altman said the AI buildout has reached a phase "where the entire industry's got to come together and everybody's going to do super well," not only on chips and data centers but also further down the supply chain.

OpenAI has also inked a $100 billion deal with Nvidia, and will use Nvidia's investment to secure and deploy 10 gigawatts worth of AI data centers. While that deal isn't finalized, AMD and OpenAI reportedly say this deal is "definitive" and plan to immediately file the requisite details with regulators, a step that has yet to happen in the Nvidia deal.


Original Submission

posted by jelizondo on Tuesday October 14, @10:51AM   Printer-friendly
from the next-up-fighting-proprietary-data-formats dept.

Tom's Hardware is reporting on a project by Cambridge University to rescue data trapped on old floppy disks. Magnetic media only lasts a decade or so even under optimal, climate-controlled storage conditions, so the task involves much more than just pushing the old disks into off-the-shelf drives.

Led by the library's digital preservation team, the project aims to document and formalize best practices for floppy disk recovery, encompassing cleaning and handling methods, as well as imaging workflows. It's also pulling in expertise from the retro-computing community, whose trial-and-error techniques are often the only reason legacy formats still survive.

You can forget those cheap USB floppy drives you can buy online. Cambridge's preservationists don't just mount disks and hope for the best; they sample the raw magnetic signal itself. Specialized hardware, such as the KryoFlux and open-hardware Greaseweazle interfaces, captures the flux transitions — the tiny changes in polarity that encode data — and reconstructs the file structure later in software. This flux-level imaging process enables archivists to recover non-PC formats and identify weak or damaged sectors that would otherwise remain unread.
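To make the flux-level idea concrete, here is a minimal Python sketch of the first decoding step for MFM-coded disks: classifying the intervals between flux transitions into the two-, three- or four-cell spacings that MFM allows, which recovers the raw channel bits. This illustrates the general technique only; it is not the Cambridge team's or Greaseweazle's actual code, and the timing values are assumed nominal figures.

    # Minimal sketch: turn flux transition timestamps into raw MFM channel
    # bits by bucketing each interval into 2, 3, or 4 bit cells (the only
    # spacings MFM permits). Separating clock bits from data bits and
    # locating sector headers are later steps not shown here.

    def flux_to_channel_bits(flux_times_ns, bitcell_ns=2000):
        """flux_times_ns: transition timestamps in nanoseconds (hypothetical
        input; flux imagers emit a comparable stream). A 2000 ns bit cell
        suits 250 kbit/s double-density media."""
        bits = []
        for prev, cur in zip(flux_times_ns, flux_times_ns[1:]):
            cells = round((cur - prev) / bitcell_ns)
            cells = max(2, min(4, cells))  # clamp out-of-spec intervals
            # A transition after n cells decodes as (n-1) zeros then a one.
            bits.extend([0] * (cells - 1) + [1])
        return bits

    # Two intervals of 2 and 3 bit cells -> channel bits 01 001
    print(flux_to_channel_bits([0, 4000, 10000]))  # [0, 1, 0, 0, 1]

Weak or damaged sectors show up at this level as intervals that refuse to round cleanly, which is exactly what flux-level imaging lets archivists detect and retry.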

So far, the project only addresses hardware. Although that matters on its own for preservation, much of the data will turn out to be trapped in proprietary or DRM'd formats, so draconian copyright laws can impose an unnecessary non-technical barrier to the final steps of legally retrieving the data and bringing it to a usable form.

Previously:
(2025) A Story About USB Floppy Drives
(2024) PC Floppy Copy Protection: Softguard Superlok
(2024) PC Floppy Copy Protection: Formaster Copy-Lock
(2024) Japan's Digital Minister Claims Victory Against Floppy Disks
(2024) Where Are Floppy Disks Today? Planes, Trains, And All These Other Places
(2022) The Last Man Selling Floppy Disks Says He Still Receives Orders From Airlines


Original Submission

posted by hubie on Tuesday October 14, @06:11AM   Printer-friendly
from the pay-up-or-else dept.

CRM giant Salesforce has been hacked, affecting Qantas and other large corporations. While Salesforce claims to be number one in the world, a big claim in the presence of SAP and Microsoft, this recent hack shows that no system is completely secure. More than a billion records have been stolen from 39 companies, including the Qantas Frequent Flyer program, Toyota, Disney, McDonald's, and HBO Max. Hackers have threatened to release this personal data unless Salesforce pays a ransom.

The problem is that once you start paying ransoms, you don't stop paying.

Updates:
    • Salesforce refuses to submit to extortion demands linked to hacking campaigns
    • Hackers leak Qantas data containing 5 million customer records after ransom deadline passes


Original Submission

posted by hubie on Tuesday October 14, @01:24AM   Printer-friendly
from the sometimes-gov't-developments-do-work dept.

It's so common to hear that the gov't can't possibly do anything right, or for a good price, that many people believe it was always true. Here is a counterexample to discuss:

https://theconversation.com/believe-it-or-not-there-was-a-time-when-the-us-government-built-beautiful-homes-for-working-class-americans-to-deal-with-a-housing-shortage-253512

In 1918, as World War I intensified overseas, the U.S. government embarked on a radical experiment: It quietly became the nation's largest housing developer, designing and constructing more than 80 new communities across 26 states in just two years.

These weren't hastily erected barracks or rows of identical homes. They were thoughtfully designed neighborhoods, complete with parks, schools, shops and sewer systems. In just two years, this federal initiative provided housing for almost 100,000 people.

Few Americans are aware that such an ambitious and comprehensive public housing effort ever took place. Many of the homes are still standing today.
[...]
Alongside housing construction, the Housing Corporation invested in critical infrastructure. Engineers installed over 649,000 feet of modern sewer and water systems, ensuring that these new communities set a high standard for sanitation and public health.

Attention to detail extended inside the homes. Architects experimented with efficient interior layouts and space-saving furnishings, including foldaway beds and built-in kitchenettes. Some of these innovations came from private companies that saw the program as a platform to demonstrate new housing technologies.

One company, for example, designed fully furnished studio apartments with furniture that could be rotated or hidden, transforming a space from living room to bedroom to dining room throughout the day.

To manage the large scale of this effort, the agency developed and published a set of planning and design standards, the first of their kind in the United States. These manuals covered everything from block configurations and road widths to lighting fixtures and tree-planting guidelines. The standards emphasized functionality, aesthetics and long-term livability.

Architects and planners who worked for the Housing Corporation carried these ideas into private practice, academia and housing initiatives. Many of the planning norms still used today, such as street hierarchies, lot setbacks and mixed-use zoning, were first tested in these wartime communities.

And many of the planners involved in experimental New Deal community projects, such as Greenbelt, Maryland, had worked for or alongside Housing Corporation designers and planners. Their influence is apparent in the layout and design of these communities.

The USA has another housing crunch now, partly (I've read) due to private capital bidding up the prices of many houses and apartments and turning them into rental units; that makes a nice rate of return for the investors, but sucks for everyone else. While I have no expectation that the current administration (given their history with high-end developments) would consider anything like this, it might be something to work toward in a few years, after the next election?


Original Submission

posted by hubie on Monday October 13, @08:41PM   Printer-friendly

Teachers need to be scientists themselves, experimenting and measuring the impact of powerful AI products on education:

American technologists have been telling educators to rapidly adopt their new inventions for over a century. In 1922, Thomas Edison declared that in the near future, all school textbooks would be replaced by film strips, because text was 2% efficient, but film was 100% efficient. Those bogus statistics are a good reminder that people can be brilliant technologists, while also being inept education reformers.

I think of Edison whenever I hear technologists insisting that educators have to adopt artificial intelligence as rapidly as possible to get ahead of the transformation that's about to wash over schools and society.

At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students. The first districts to encourage students to bring mobile phones to class did not better prepare youth for the future than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.

New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.

It takes years for educators to develop new practices and norms, for students to adopt new routines, and for families to identify new support mechanisms in order for a novel invention to reliably improve learning. But as AI spreads through schools, both historical analysis and new research conducted with K-12 teachers and students offer some guidance on navigating uncertainties and minimizing harm.

[...] Today, there is a cottage industry of consultants, keynoters and "thought leaders" traveling the country purporting to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what skills students need for their future. Technologists invent apps that encourage teachers and students to use generative AI as tutors, as lesson planners, as writing editors, or as conversation partners. These approaches have about as much evidential support today as the CRAAP test did when it was invented.

There is a better approach than making overconfident guesses: rigorously testing new practices and strategies and only widely advocating for the ones that have robust evidence of effectiveness. As with web literacy, that evidence will take a decade or more to emerge.

But there's a difference this time. AI is what I have called an "arrival technology." AI is not invited into schools through a process of adoption, like buying a desktop computer or smartboard – it crashes the party and then starts rearranging the furniture. That means schools have to do something. Teachers feel this urgently. Yet they also need support: Over the past two years, my team has interviewed nearly 100 educators from across the U.S., and one widespread refrain is "don't make us go it alone."

[...] First, regularly remind students and teachers that anything schools try – literacy frameworks, teaching practices, new assessments – is a best guess. In four years, students might hear that what they were first taught about using AI has since proved to be quite wrong. We all need to be ready to revise our thinking.

Second, schools need to examine their students and curriculum, and decide what kinds of experiments they'd like to conduct with AI. Some parts of your curriculum might invite playfulness and bold new efforts, while others deserve more caution.

[...] Third, when teachers do launch new experiments, they should recognize that local assessment will happen much faster than rigorous science. Every time schools launch a new AI policy or teaching practice, educators should collect a pile of related student work that was developed before AI was used during teaching. If you let students use AI tools for formative feedback on science labs, grab a pile of circa-2022 lab reports. Then, collect the new lab reports. Review whether the post-AI lab reports show an improvement on the outcomes you care about, and revise practices accordingly.

Between local educators and the international community of education scientists, people will learn a lot by 2035 about AI in schools. We might find that AI is like the web, a place with some risks but ultimately so full of important, useful resources that we continue to invite it into schools. Or we might find that AI is like cellphones: the negative effects on well-being and learning ultimately outweigh the potential gains, and it is best treated with more aggressive restrictions.

Everyone in education feels an urgency to resolve the uncertainty around generative AI. But we don't need a race to generate answers first – we need a race to be right.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Original Submission

posted by hubie on Monday October 13, @03:58PM   Printer-friendly

While drones flying over different parts of Europe have raised concerns in many countries, some are worried about a more dystopian future with the technology:

Russia's full-scale invasion of Ukraine could lead to a new arms race — one not defined by big submarines or loud missiles, but by small, silent drones.

Ukrainian President Volodymyr Zelenskyy addressed the prospect during his speech at the United Nations General Assembly, where he warned that it is cheaper to stop Russia now "than wondering who will be the first to create a simple drone carrying a nuclear warhead".

"We must use everything we have, together, to force the aggressor to stop. And only then do we have a real chance that this arms race won't end in catastrophe for all of us," he said. "Otherwise, [Russian President Vladimir] Putin will keep driving the war forward — wider and deeper."

Experts warn drones carrying nuclear weapons might already exist.

TASS, the Russian state-owned news agency, reported in 2023 on the manufacture of a nuclear-armed underwater drone called Poseidon.

Previously, in 2018, the United States defence ministry also publicly acknowledged Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo" or underwater drone.

Mick Ryan, a retired Australian Army major general and senior fellow for military studies at the Lowy Institute, said drones with nuclear warheads "may already be a reality".

"It's something that we should be concerned about," Ryan, who is also a strategic adviser at a US drone company, Skydio, told SBS News.

"Particularly since detecting a drone underwater that's capable of very long ranges would be a significant threat to Western countries, including Australia," Ryan said.

[...] Nuclear warheads are not the only possible future predicted for drones, as politicians are warning about the use of artificial intelligence (AI) to control drones.

During his speech at the UN, Zelenskyy said "it's only a matter of time" before drones operate "all by themselves, fully autonomous, and no human involved, except the few who control AI systems".

Earlier in September, The Wall Street Journal reported that AI-powered drones were introduced on the battlefield, with Ukraine utilising technology that allows groups of drones to make decisions independently.

Ryan said the use of AI might actually help reduce civilian casualties in future warfare.

"AI might actually make them more deadly for the military and less deadly for civilians. Now, that's a perfect scenario, of course, and it's theoretical," he said.

On the other hand, there are concerns about AI gaining access to nuclear weapons.

Foreign Minister Penny Wong told the UN on Thursday: "AI's potential use in nuclear weapons and unmanned systems challenges the future of humanity."

"Decisions of life and death must never be delegated to machines", she said and offered to other leaders to set rules and standards on the use of AI.

Some others have also expressed concerns about ethical and regulatory challenges related to autonomous drones.

Ryan said: "If you have AI controlling a drone that has a nuclear weapon, we should be very concerned about that."

"I think AI for conventional weapons and AI for nuclear weapons are two very different conversations with two very different forms of risk."

[...] The risk of drones, however, has not been limited only to the war zones, with a series of drone incursions being seen in Europe recently.

On Saturday, drones were spotted near military facilities in Denmark, following reports of drones being seen over Danish airports. There were also reports of drone sightings in Germany, Norway and Lithuania.

Danish defence minister Troels Lund Poulsen described the incident as "systematic" and a "hybrid attack".

The Russian government has dismissed any claims of involvement in the drone incidents.

"The European Union formally announced on the weekend it will focus on developing a drone wall system in its eastern defences to defend against incursions.

[...] "The drones are the threat of today and will remain the threat of tomorrow. Definitely, no country can afford to ignore this threat and has to take action at different levels ... [and] learn from the partners, including Ukraine, who are at the forefront of developing these systems in modern warfare."


Original Submission

posted by hubie on Monday October 13, @11:12AM   Printer-friendly

Comets Lemmon and SWAN may be visible around the same time as they race across the solar system:

Skywatchers, rejoice. This month, not one but two comets are set to soar into our night skies for your viewing pleasure.

The two comets, C/2025 R2 (SWAN) and C/2025 A6 (Lemmon), were both discovered in 2025. The celestial visitors are gearing up for a close flyby of Earth in October, becoming more visible as they approach our planet. SWAN will be closest to Earth on October 19, while Lemmon is set for its own close approach on October 21. Both icy comets may even be visible to the naked eye around that time.

Astronomers spotted Lemmon in January using the Mt. Lemmon SkyCenter observatory in Arizona's Santa Catalina Mountains. The comet was speeding toward the inner solar system at speeds up to 130,000 miles per hour (209,000 kilometers per hour).

Later in September, amateur astronomer Vladimir Bezugly discovered comet SWAN in images from the SWAN instrument on NASA's SOHO satellite. The comet became significantly brighter as it emerged from the Sun's direction.

At its closest approach, SWAN will be at a distance of approximately 24 million miles (39 million kilometers) from our planet, or about a quarter of the distance between the Sun and Earth. SWAN is now at a brightness magnitude of around 5.9, according to EarthSky. The unexpectedly bright comet is currently in the southern skies, but it is slowly moving north, according to NASA.

Following SWAN's closest approach, comet Lemmon will be right behind. The comet will be about half the distance between the Sun and Earth before rounding the Sun on November 8. From there, it will begin its next journey around the star. Lemmon will continue to brighten as it approaches the Sun, but it will likely stay visible, and possibly become even brighter, around October 31 to November 1, according to EarthSky.

SWAN is best viewed in the Southern Hemisphere. The comet crossed into the Libra constellation on September 28, and will make its way across Scorpius on October 10. Around October 9-10, it will appear near Beta Librae, the brightest star in the Libra constellation, EarthSky reports.

It may, however, be a bit tricky to spot because its position in the skies will be close to the setting Sun. Sky watchers hoping to catch a glimpse of SWAN need to look up toward the west after sunset.

Conditions are more favorable for Lemmon. The comet is best viewed in the Northern Hemisphere, where it will be positioned near the Big Dipper for most of October. Sky watchers should look to the eastern skies just before sunrise to spot the comet.

By mid-October, the comet may be easier to view. On October 16, Lemmon will pass near Cor Caroli, a binary star system in the northern constellation of Canes Venatici, according to EarthSky. Around that time, the comet could be visible to the naked eye.


Original Submission

posted by hubie on Monday October 13, @06:24AM   Printer-friendly

Some early experiments with AI are revealing the technology's shortcomings - and, by extension, the value of human workers:

This time three years ago, most people had never heard of generative AI. Today, the technology is a cultural behemoth, and businesses across virtually every industry are facing huge pressure to embrace it.

At least at first glance, customer service would seem to be a field that's particularly ripe for AI-powered automation. Chatbots specialize in fielding simple queries, while newer and more powerful agents can access a business's internal files to provide up-to-date information, send follow-up emails, and perform other complex tasks. Little wonder that a fleet of companies like Salesforce and Microsoft have been replacing human customer service reps with AI.

New research, however, suggests this could turn out to be a mistake -- that despite the huge amount of marketing gusto that's been poured into selling generative AI-powered customer service tools to businesses, the technology could in fact be doing more harm than good.

You know that relief you feel when you finally get past a customer service bot and an actual person picks up the phone? Turns out most other people seem to feel that way too, even in the age of AI.

[...] "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing."

He added that the backlash has spread to social media.

Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media.

More recently, fintech company Klarna started hiring human customer service employees again after realizing that AI was delivering a "lower quality," as company CEO Sebastian Siemiatkowski told Bloomberg. (Siemiatkowski told CNBC in May that his company's investments in AI had contributed to an employee headcount reduction of about 40%.)

A global survey of 2,000 CEOs conducted by IBM early this year found that only about one in four internal AI business initiatives has delivered the expected ROI. Even more jarringly, an MIT study published in August showed that 95% of businesses' experiments with AI have not delivered any real returns.

[...] In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation...was a mistake."

Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch. At least for the time being, its shortcomings very well may overshadow its benefits.


Original Submission

posted by hubie on Monday October 13, @01:39AM   Printer-friendly

To cat lovers, a litter box is a necessity. To scientists, it's a record of behavior, from the pre-squat scratch to the whirl to the precise geometry of the bury:

To cat owners, a litter box is a nuisance. But to scientists, it's a trove of information. A team of researchers at Nestlé Purina PetCare decided to investigate litter boxes as records of behavior: the pre-squat scratch, the whirl, the precise geometry of the bury.

The scientists built a painstaking dictionary of these gestures—a full "ethogram," or catalog, of species-specific behaviors—and then identified the distinct moves in feline bathroom habits: grooming, digging, sniffing litter. "We landed on 39 different behaviors that cats do in a litter box, with the understanding that depending on their satisfaction with the litter box, the environment and the dynamics around them, those behaviors will shift," says Ragen McGowan, director of digital and AI product development at Purina and one of the authors of a paper published recently in Applied Animal Behaviour Science on the development of Purina's AI-powered litter box monitor. "We realized this ethogram could be a window into their health."

And Imma gonna leave a link to where I found it, on Fark


Original Submission

posted by jelizondo on Sunday October 12, @08:50PM   Printer-friendly
from the resistance-is-futile-you-will-be-assimilated dept.

As a very long-time user of MythTV and free OTA ATSC 1.0 TV, reading this one did not make my day:

CordCutters published news of a recent FCC decision to allow broadcasters flexibility on switching to ATSC 3.0 technology:

In a major shift for American television viewers, the Federal Communications Commission (FCC) has decided against setting a hard deadline to end the old digital TV system that powers most broadcasts and cable services today. [...] The agency, now headed by Brendan Carr, had initially pushed for a quicker switch to the advanced ATSC 3.0 technology, known as NextGen TV. But after hearing concerns from consumer groups, cable companies, and satellite providers, the FCC is choosing a more flexible, voluntary approach to make the change easier for everyone involved.

According to the new proposal, the FCC would "tentatively conclude that television stations should be allowed to choose when to stop broadcasting in 1.0 and start broadcasting exclusively in 3.0."

To understand this, it's helpful to step back and explain the basics. For over 15 years, U.S. TV stations and multichannel video programming distributors (MVPDs)—think cable giants like Comcast or satellite services like DirecTV and DISH—have relied on ATSC 1.0. This is the standard digital TV technology that replaced fuzzy analog signals in 2009, delivering clearer pictures and more channels. It's the "original" digital TV, or what some call the "OG" of modern broadcasting. ATSC 1.0 works universally across free over-the-air antennas, cable boxes, and satellite dishes, reaching nearly every household without special upgrades.

NextGen TV, built on ATSC 3.0, promises even better features: sharper 4K video, interactive apps, and stronger signals that can cut through buildings or bad weather. It's like upgrading from a reliable old smartphone to one with a bigger screen and faster apps. The transition started voluntarily during the Biden administration, with a handful of cities testing it out. But since Trump's return in January 2025—about nine and a half months ago—the push intensified. FCC leaders wanted a nationwide shutdown of ATSC 1.0 by a set date to speed things up, arguing it would modernize broadcasting and free up airwaves for new uses.

This aggressive stance hit a wall of opposition. Consumer advocates, led by the Consumer Technology Association (CTA) and its president Gary Shapiro, warned that forcing the change too fast could leave millions of viewers in the dark. Older TVs and set-top boxes might stop working, forcing families to buy new equipment they can't afford. Cable and satellite lobbies echoed these fears, pointing out the massive costs of rewiring their networks to carry the new signals. For context, imagine every home suddenly needing a software update or new hardware just to watch local news—disruptive and expensive, especially for low-income or rural households.

The FCC's latest move, outlined in a document called the Fifth Further Notice of Proposed Rulemaking (FNPRM), listens to these voices. Instead of a mandatory cutoff, the agency proposes keeping the transition market-driven and optional. Broadcasters—the TV stations that send out signals—would get to decide when, or even if, they fully drop ATSC 1.0. Many are already "simulcasting," meaning they beam both the old and new signals at the same time, like offering two radio stations on one frequency. The FCC wants to ease rules around this, removing red tape that currently limits how long stations can keep the old signal running. This builds on policies from the Democratic-led FCC, extending the grace period without a strict timeline.

The plan also calls for ways to cut costs and smooth the ride for all players. For consumers, that could mean subsidies or incentives to upgrade TVs or antennas without breaking the bank. Manufacturers might get breaks on producing hybrid devices that handle both standards. Smaller broadcasters in rural areas, who often operate on tight budgets, would benefit from fewer mandates. And MVPDs could phase in NextGen support at their own pace, avoiding a sudden overhaul that might raise monthly bills.

But the FCC isn't stopping at flexibility—it's opening the floor for public input on trickier issues. One big question: Should new TVs sold in stores be required to receive ATSC 3.0 signals right out of the box? This echoes a famous FCC rule from the 1960s, when regulators under Chairman Newton Minow mandated UHF tuners in TVs. That move helped spark the growth of companies like Sinclair Inc., now a leading cheerleader for NextGen TV. Yet today, the CTA and others are pushing back hard, saying it could hike prices for basic sets and slow sales.

This compromise feels like a win for balance. Proponents of NextGen, like Sinclair, get regulatory green lights to experiment and expand. Critics, including the cable industry, avoid the chaos of a rushed shutdown. For everyday viewers, it means no panic-buying of new gear tomorrow. The transition, which began quietly years ago at events like a 2019 FCC symposium, can now evolve naturally. Back then, questions about integrating NextGen into cable systems lingered unanswered by groups like Pearl TV or the ATSC standards body. Today's proposal nods to those gaps, seeking fresh input.

Reflecting on history adds irony. A quarter-century ago, ATSC 1.0 was hailed as revolutionary, even as early tech from firms like Sinclair hinted at what 3.0 could become. Now, with costs in mind, the FCC is ensuring the next leap doesn't repeat past disruptions. As comments roll in over the coming months, this could shape TV for the next generation—literally. For now, Americans can keep flipping channels without fear of a digital cliff.

Hardware requirements aside, ATSC 3.0 will have DRM, which, as I understand it, will make recording impossible. I know there are far worse things going on in Washington now, but wow, this sucks.


Original Submission

posted by hubie on Sunday October 12, @04:05PM   Printer-friendly
from the AI-earthquake-overlords dept.

https://arstechnica.com/science/2025/10/like-putting-on-glasses-for-the-first-time-how-ai-improves-earthquake-detection/

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven't heard of this earthquake; even if you had been living in Calipatria, you wouldn't have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes.
[...]
"In the best-case scenario, when you adopt these new techniques, even on the same old data, it's kind of like putting on glasses for the first time, and you can see the leaves on the trees," said Kyle Bradley, co-author of the Earthquake Insights newsletter.
[...]
Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven't materialized yet.

"It really was a revolution," said Joe Byrnes, a professor at the University of Texas at Dallas. "But the revolution is ongoing."
[...]
The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.
[...]
Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that "traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms."
[...]
"The field of seismology historically has always advanced as computing has advanced," Bradley told me.

There's a big challenge with traditional algorithms, though: They can't easily find smaller quakes, especially in noisy environments.
[...]
Earthquakes have a characteristic "shape." The magnitude 7.7 earthquake shown in the original article looks quite different from a helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it's almost certainly an earthquake.
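As a rough sketch of the technique (illustrative code, not the Caltech pipeline), template matching slides a labeled earthquake waveform along the continuous record and flags every window whose normalized cross-correlation clears a threshold; the 0.8 used here is an arbitrary illustrative value:

    import numpy as np

    def match_template(record, template, threshold=0.8):
        """Return sample offsets where the template matches the record."""
        n = len(template)
        t = (template - template.mean()) / (template.std() + 1e-12)
        hits = []
        for i in range(len(record) - n + 1):
            w = record[i:i + n]
            w = (w - w.mean()) / (w.std() + 1e-12)
            cc = float(np.dot(t, w)) / n  # normalized correlation in [-1, 1]
            if cc > threshold:
                hits.append(i)
        return hits

Production catalogs run the same idea as batched FFT-based correlations over thousands of templates and many stations at once, which is where the GPU cost mentioned below comes from.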

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross' lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known.
[...]
Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.
[...]
AI detection models solve all of these problems:

  • They are faster than template matching.
  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.
  • AI models generalize well to regions not represented in the original dataset.

[...]
To train an AI model, scientists take large amounts of human-labeled data and do supervised training.
[...]
Earthquake Transformer was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.
[...]
Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection.
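The published model is considerably more elaborate, but a toy PyTorch sketch (with invented layer sizes, for illustration only) shows the overall shape: convolutions compress the three-component waveform, a self-attention layer mixes information across time, and transposed convolutions expand back out to per-sample probabilities for detection, P arrival, and S arrival.

    import torch
    import torch.nn as nn

    class TinyQuakePicker(nn.Module):
        """Toy encoder/attention/decoder in the spirit of phase pickers;
        not the published Earthquake Transformer architecture."""
        def __init__(self, ch=16):
            super().__init__()
            self.encode = nn.Sequential(  # downsample time by 4x
                nn.Conv1d(3, ch, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(ch, ch, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            )
            self.attn = nn.MultiheadAttention(ch, num_heads=4, batch_first=True)
            self.decode = nn.Sequential(  # upsample back to full length
                nn.ConvTranspose1d(ch, ch, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose1d(ch, 3, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x):          # x: (batch, 3 components, time)
            h = self.encode(x)
            q = h.transpose(1, 2)      # attention expects (batch, time, features)
            a, _ = self.attn(q, q, q)  # self-attention across the time axis
            return torch.sigmoid(self.decode(a.transpose(1, 2)))  # detection, P, S

    model = TinyQuakePicker()
    probs = model(torch.randn(1, 3, 3000))  # 30 s of 100 Hz, 3-component data
    print(probs.shape)                      # torch.Size([1, 3, 3000])

Even padded out to realistic depth, detection models of this family stay in the hundreds of thousands of parameters, which is what makes the consumer-CPU deployment mentioned above possible.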
[...]
Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration). Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.
[...]
The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn't seem to have happened yet.
[...]
As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

"The schools want you to put the word AI in front of everything," Byrnes said. "It's a little out of control."

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they've seen a lot of papers based on AI techniques that "reveal a fundamental misunderstanding of how earthquakes work."
[...]
While these are real issues, and ones Understanding AI has reported on before, I don't think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That's pretty cool.

Earthquake in SoylentNews stories:
Earthquake search on SoylentNews


Original Submission

posted by hubie on Sunday October 12, @11:20AM   Printer-friendly

Kessler syndrome is bad; atmospheric incineration may be worse:

If you had to guess how many Starlink satellites burn up in Earth's atmosphere on an average day, how many would you pick? This isn't a trick question - SpaceX is deorbiting about one or two satellites daily, and that number is only going to grow.

What that means for our planet isn't entirely clear, says Harvard astrophysicist and space tracker Jonathan McDowell. Even so, Starlink isn't the space junk risk that some other satellite operations are.

McDowell commented on the massive volume of reentering Starlink satellites to science news site EarthSky last week. He explained that once Starlink and other planned low Earth orbit constellations together total about 30,000 satellites, roughly five could reenter the atmosphere each day, given an average replacement cycle of around five years.

[...] Starlink isn't the biggest concern when it comes to passing the Kessler tipping point, McDowell told us – but it is still a source of worry.

"Active satellite maneuvers to avoid collisions will help avoid Kessler," McDowell said in a phone conversation. "If they're successful. And that's a big if."

The current strategy to de-orbit Starlink satellites, which operate in a low orbit below 600 kilometers, is to use the satellites' thrusters to move them to such a low orbit that they eventually catch drag in the atmosphere and burn up in what McDowell calls an "uncontrolled but assisted" reentry.

Purposeful de-orbiting, plus successful dodging, mean we can avoid Kessler syndrome, McDowell told us.

[...] Excepting the possibility of unplanned disaster, Starlink's operations aren't the biggest concern, McDowell added. China's satellite plans are far more worrying.

"The region of space closest to Kessler is the 600 to 1,000 kilometer range," McDowell said. "It's full of old Soviet rocket stages and other stuff, and the more we add there, the more likely it is for Kessler syndrome to occur."

While many of China's proposed satellite constellations are going to be in low Earth orbit at the same altitude as Starlink, McDowell noted that a number are planning to fly above 1,000 kilometers. Were something to go wrong up there, McDowell noted, "we're probably screwed."

"That higher altitude means the atmosphere won't drag them down for centuries," McDowell added. "And I haven't seen [China] demonstrate any retirement plans for those satellites."

Kessler's bad, but destroying the atmosphere is worse

It would be a tragedy if humanity polluted Earth's orbit so much that we were effectively cut off from space, but were we to poison ourselves by filling the atmosphere with the remnants of burned-up satellites and die before we reached Kessler syndrome, that would arguably be worse.

McDowell is definitely worried about both, explaining that the effects on our planet of "using the upper atmosphere as an incinerator" are largely unknown, and a massive, dangerous blind spot. Not a lot of research has been done on what the growing number of atmospheric reentries could do to Earth and the life it harbors, but it's already shocking how much stuff is floating around above our heads.

According to the US National Oceanic and Atmospheric Administration, around 10 percent of the aerosol particles in the stratosphere (the second layer of Earth's atmosphere where the ozone layer lives) contain aluminum and exotic metals believed to be from rockets and satellites that have burned up on reentry. NOAA believes that number could grow to as much as 50 percent as space launches and reentries increase.

What little research has been done into the effects of so much foreign material burning up in Earth's atmosphere has been inconclusive, McDowell explained.

"So far answers have ranged from 'this is too small to be a problem' to 'we're already screwed,'" McDowell told us. "But the uncertainty is large enough that there's already a possibility we're damaging the upper atmosphere."


Original Submission

posted by hubie on Sunday October 12, @06:37AM   Printer-friendly
from the laughing-in-IRC dept.

Discord has revealed that one of its customer service providers has suffered a data breach. The attackers gained access to government ID images and user details.

Discord doesn't actually mention when the breach took place; it only says it "recently discovered an incident". The fact that government ID images were stolen is important: the U.K.'s Online Safety Act came into effect on July 25, 2025. So the data breach happened sometime between then and October 3rd, when the news about the incident was revealed. It's also worth noting that the victim of the hack was a third-party customer service provider that has not been named.

As for the attack, the incident involved an unauthorized party compromising one of the messaging service's customer service providers, which in turn allowed the hackers access to limited customer data pertaining to those who had contacted the Customer Support and/or Trust & Safety teams. Discord says it revoked the breached service provider's access to its ticketing system. It is investigating the matter with the help of a computer forensics firm and is working with law enforcement. Users who were impacted by the incident are being notified via an email sent from [email protected]

Here's what Discord says the hackers managed to access:

  • Name, Discord username, email, and other contact details provided to customer support
  • Billing information, such as payment type, the last four digits of credit cards, and purchase history
  • IP addresses
  • Messages with customer service agents
  • Limited corporate data (training materials, internal presentations)

There was something else.

"The unauthorized party also gained access to a small number of government?ID images (e.g., driver's license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive."

The story continues:

https://www.ghacks.net/2025/10/06/discord-customer-service-data-breached-government-id-images-and-user-details-stolen/


Original Submission

posted by hubie on Sunday October 12, @01:47AM   Printer-friendly

Covert Eavesdropping through Computer Mice

The abstract from the arXiv paper states:

High-performance optical sensors in mice expose a critical vulnerability — one where confidential user speech can be leaked. Attackers can exploit these sensors' ever-increasing polling rate and sensitivity to emulate a makeshift microphone and covertly eavesdrop on unsuspecting users. We present an attack vector that capitalizes on acoustic vibrations propagated through the user's work surface, and we show that existing consumer-grade mice can detect these vibrations. However, the collected signal is low-quality and suffers from non-uniform sampling, a non-linear frequency response, and extreme quantization. We introduce Mic-E-Mouse, a pipeline consisting of successive signal processing and machine learning techniques to overcome these challenges and achieve intelligible reconstruction of user speech. We measure Mic-E-Mouse against consumer-grade sensors on the VCTK and AudioMNIST speech datasets, and we achieve an SI-SNR increase of +19 dB, a speaker-recognition accuracy of 80% on the automated tests, and a WER of 16.79% on the human study.

Additional details: Computer mice can eavesdrop on private conversations, researchers discover

High-end computer mice can be used to eavesdrop on the voice conversations of nearby PC users, researchers from the University of California, Irvine, have shown in a new proof-of-concept demonstration.

Given the catchy name 'Mic-E-Mouse' (Microphone-Emulating Mouse), the ingenious technique outlined in Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors is based on the discovery that some optical mice pick up incredibly small sound vibrations reaching them through the desk surfaces on which they are being used.

These vibrations could then be captured by different types of software on PC, Mac or Linux computers, including non-privileged 'user space' programs such as web browsers or game engines or, failing that, privileged components at OS kernel level.

Although the captured signals were inaudible at first, the team were able to enhance them using Wiener and neural network statistical filtering to boost signal strength relative to noise.
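For a flavor of the classical half of that enhancement (setting the neural network aside), a Wiener filter can be applied to a resampled displacement trace with SciPy. The signal below is synthetic and every parameter is a guess; this demonstrates the filtering step, not the paper's tuned pipeline:

    import numpy as np
    from scipy.signal import wiener

    fs = 8000                                  # assumed post-resampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    clean = 0.1 * np.sin(2 * np.pi * 220 * t)  # stand-in for desk-borne speech
    noisy = clean + 0.05 * np.random.randn(t.size)

    denoised = wiener(noisy, mysize=33)        # window length is a tunable guess

    # Compare residual error before and after filtering
    print(f"noise std before: {np.std(noisy - clean):.4f}")
    print(f"noise std after:  {np.std(denoised - clean):.4f}")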

As the video demonstration of this process shows, this made it possible to extract spoken words from an eavesdropped data stream that at first sounded impossibly muffled.

"Through our Mic-E-Mouse pipeline, vibrations detected by the mouse on the victim user's desk are transformed into comprehensive audio, allowing an attacker to eavesdrop on confidential conversations," the researchers wrote.

Moreover, they said, this type of attack would be undetectable by defenders: "This process is stealthy since the vibrations signals collection is invisible to the victim user and does not require high privileges on the attacker's side."

[...] However, there are important caveats that limit the scope of Mic-E-Mouse. The noise level of the environment being eavesdropped upon must be low, with desks no more than 3cm thick, and with the mouse mostly stationary to isolate voice vibrations.

The researchers also used mice with a DPI of at least 20,000, significantly above that of the average mouse in use today.

Under real-world conditions, extracting voice data would be possible but challenging. Attackers would likely only be able to capture some conversation, rather than everything being said.

Another weakness is that defending against it wouldn't be difficult: using a rubber pad or mouse mat under a mouse would stop vibrations from being picked up.


Original Submission