

Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

How long have you had your current job?

  • less than 1 year
  • 1 year up to 2 years
  • 2 years up to 3 years
  • 3 years up to 5 years
  • 5 years up to 10 years
  • 10 or more years
  • work is for suckers
  • I haven't got a job you insensitive clod!

[ Results | Polls ]
Comments:70 | Votes:154

posted by hubie on Friday July 18, @08:55PM   Printer-friendly
from the graygarden dept.

The Wall Street Journal published a look at new automation for farms, as reported by Mint:

In the verdant hills of Washington state's Palouse region, Andrew Nelson's tractor hums through the wheat fields on his 7,500-acre farm. Inside the cab, he's not gripping the steering wheel—he's on a Zoom call or checking messages.

A software engineer and fifth-generation farmer, Nelson, 41, is at the vanguard of a transformation that is changing the way we grow and harvest our food. The tractor isn't only driving itself; its array of sensors, cameras, and analytic software is also constantly deciding where and when to spray fertilizer or whack weeds.

Many modern farms already use GPS-guided tractors and digital technology such as farm-management software systems. Now, advances in artificial intelligence mean that the next step—the autonomous farm, with only minimal human tending—is finally coming into focus.

Imagine a farm where fleets of autonomous tractors, drones and harvesters are guided by AI that tweaks operations minute by minute based on soil and weather data. Sensors would track plant health across thousands of acres, triggering precise sprays or irrigation exactly where needed. Farmers could swap long hours in the cab for monitoring dashboards and making high-level decisions. Every seed, drop of water and ounce of fertilizer would be optimized to boost yields and protect the land—driven by a connected system that gets smarter with each season.

[...] "We're just getting to a turning point in the commercial viability of a lot of these technologies," says David Fiocco, a senior partner at McKinsey & Co. who leads research on agricultural innovation.

[...] Automation, now most often used on large farms with wheat or corn laid out in neat rows, is a bigger challenge for crops like fruits and berries, which ripen at different times and grow on trees or bushes. Maintaining and harvesting these so-called specialty crops is labor-intensive. "In specialty crops, the small army of weeders and pickers could soon be replaced by just one or two people overseeing the technology. That may be a decade out, but that's where we're going," says Fiocco of McKinsey.

Fragile fruits like strawberries and grapes pose a huge challenge. Tortuga, an agriculture tech startup in Denver, developed a robot to do the job. Tortuga was acquired in March by vertical farming company Oishii. The robot resembles NASA's Mars Rover with fat tires and extended arms. It rolls along a bed of strawberries or grapes and uses a long pincher arm to reach into the vine and snip off a single berry or a bunch of grapes, placing them gingerly into a basket.

[...] A crop is only as healthy as its soil. Traditionally, farmers send topsoil samples to a lab to have them analyzed. New technology that uses sensors to scan the soil on-site is enabling a precise diagnosis covering large areas of farms rather than spot checks.

The diagnosis includes microbial analysis as well as identifying areas of soil compaction, when the soil becomes dense, hindering water infiltration, root penetration and gas exchange. Knowing this can help a farmer plan where to till and make other decisions about the new season.

New technology is also changing livestock management. The creation of virtual fences, which are beginning to be adopted in the U.S., Europe and Australia, has the potential to help ranchers save money on expensive fencing and help them better manage their herds.

Livestock are given GPS-enabled collars, and virtual boundaries are drawn on a digital map. If an animal approaches the virtual boundary, it first gets an auditory warning. If it continues, it gets zapped with a mild but firm electric shock.
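
As a rough illustration of the control loop such a collar might run, here is a minimal sketch; the circular boundary, thresholds, and collar actions are hypothetical stand-ins, not any vendor's actual API:

```python
import math

# Hypothetical virtual-fence loop: warn on approach, pulse on crossing.
FENCE_CENTER = (46.73, -117.18)   # center of the virtual paddock (lat, lon)
RADIUS_M = 500.0                  # fence radius in meters
WARN_BAND_M = 25.0                # start the audio cue this far inside the line

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation; fine at paddock scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6_371_000 * math.hypot(x, y)

def collar_action(lat, lon):
    d = distance_m(*FENCE_CENTER, lat, lon)
    if d >= RADIUS_M:
        return "pulse"   # mild electric stimulus: the animal crossed the line
    if d >= RADIUS_M - WARN_BAND_M:
        return "audio"   # auditory warning: the animal is approaching the line
    return "ok"

print(collar_action(46.7345, -117.18))  # ~500 m north of center -> "pulse"
```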

Is this what Bill Gates is doing with all the farmland he owns?


Original Submission

posted by hubie on Friday July 18, @04:10PM   Printer-friendly

DOGE staffer with access to Americans' personal data leaked private xAI API key:

A DOGE staffer with access to the private information on millions of Americans held by the U.S. government reportedly exposed a private API key used for interacting with Elon Musk's xAI chatbot.

Independent security journalist Brian Krebs reports that Marko Elez, a special government employee who in recent months has worked on sensitive systems at the U.S. Treasury, the Social Security Administration, and Homeland Security, recently published code to his GitHub containing the private key. The key allowed access to dozens of models developed by xAI, including Grok.

Philippe Caturegli, founder of consultancy firm Seralys, alerted Elez to the leak earlier this week. Elez removed the key from his GitHub but the key itself was not revoked, allowing continued access to the AI models.

"If a developer can't keep an API key private, it raises questions about how they're handling far more sensitive government information behind closed doors," Caturegli told KrebsOnSecurity.


Original Submission

posted by hubie on Friday July 18, @11:26AM   Printer-friendly
from the Intel-engineers-not-inside dept.

Arthur T Knackerbracket has processed the following story:

Despite claims that layoffs target mostly mid-level managers.

Intel this month officially began to cut down its workforce in the U.S. and other countries, thus revealing actual numbers of positions to be cut. The Oregonian reports that the company will cut as many as 2,392 positions in Oregon and around 4,000 positions across its American operations, including Arizona, California, and Texas.

To put the 2,392 number into context, Intel is the largest employer in Oregon, with around 20,000 workers there. 2,392 is around 12% of that workforce, which is at the lower end of layoff expectations, yet 2,400 is still a lot of people. The Oregon reduction rose sharply from an initial count of around 500 to a revised figure of 2,392, making it one of the largest layoffs in the state’s history. Intel began reducing staff earlier in the week but confirmed the larger number by Friday evening through a filing with Oregon state authorities.

Intel's Oregon operations have already seen 3,000 jobs lost over the past year through earlier buyouts and dismissals. This time around, Intel is not offering voluntary retirement or buyouts; it is laying off personnel outright in Aloha (192) and Hillsboro (2,200).

Although Intel officially says that it is trying to get rid of mid-level managers to flatten the organization and focus on engineers, the list of positions that Intel is cutting is led by module equipment technicians (325), module development engineers (302), module engineers (126), and process integration development engineers (88). In fact, based on the Oregon WARN filing, a total of 190 employees with 'Manager' in their job titles (8% of personnel being laid off) were included among those laid off by Intel. These comprised various software, hardware, and operational management roles across the affected sites.

[...] Interestingly, Intel is implementing a new approach to workforce reductions, allowing individual departments to decide how to meet financial goals rather than announcing large, centralized cuts. This decentralized process has led to ongoing job losses across the company, with marketing functions being outsourced to Accenture and the automotive division completely shut down.


Original Submission

posted by hubie on Friday July 18, @06:49AM   Printer-friendly

Engineering the Origin of the Wheel:

Some historians believe the wheel is the most significant invention ever created. Historians and archeologists have artifacts from the wheel's history that go back thousands of years, but knowing that the wheel first originated back in 3900 B.C. doesn't tell the entire story of this essential technology's development.

A recent study [2024] by Daniel Guggenheim School of Aerospace Engineering Associate Professor Kai James, Lee Alacoque, and Richard Bulliet analyzes the wheel's invention and its evolution. Their analysis supports a new theory that copper miners from the Carpathian Mountains in southeastern Europe may have invented the wheel. However, the study also recognizes that the wheel's evolution occurred incrementally over time — and likely through considerable trial and error. The findings suggest that the original developers of the wheel benefited from uniquely favorable environmental conditions that augmented their human ingenuity. The study, published in the journal Royal Society Open Science, has gained the worldwide attention of experts and more than 58 media outlets, including Popular Mechanics, Interesting Engineering, and National Geographic en Español.

"The way technology evolves is very complex. It's never as simple as somebody having an epiphany, going to their lab, drawing up a perfect prototype, and manufacturing it — and then end of story," said James. "The evidence, even before our theory, suggests that the wheel evolved over centuries, across a very broad geographical range, with contributions from many different people, and that's true of all engineering systems. Understanding this complexity and seeing the process as a journey, rather than a moment in time, is one of the main outcomes of our study."

[...] James and his team use computational analysis and design as a forensic tool to learn about the past, studying engineered systems designed by prehistoric people. Computational analysis offers a deeper understanding of how these systems were created.

"We have to interpret clues from ancient societies without a writing system — artifacts like bows and arrows, flutes, or boats — but we need to use additional tools to do this," James explained. "Carbon dating tells us when, but it doesn't tell us how or why. Using solid mechanics and computational modeling to recreate these environments and scenarios that gave rise to these technologies is a potential game-changer."

Their theory suggests that the wheel evolved from simple rollers, which took the form of a series of untethered cylinders, poles, or tree trunks. These rollers were arranged side-by-side in a row on the ground, and the workers would transport their cargo on top of the rollers to avoid the friction caused by dragging. "Over time, the shape of these rollers evolved such that the central portion of the cylinder grew progressively narrower, eventually leaving only a slender axle capped on either end by round discs, which we now refer to as wheels," James explained.

The researchers derived a series of mathematical equations that describe the physics of the rollers. They then created a computer algorithm that simulates the progression from roller to wheel-and-axle by repeatedly solving these equations.
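
Their code is not reproduced in the article; purely as a flavor of the kind of simulation involved, here is a toy model (my own simplification, not the study's structural-optimization method) in which narrowing a roller's waist reduces the force needed to push the load, until bending stress at the waist approaches the material's limit:

```python
import math

# Toy model: a roller with end discs of radius DISC_R and a narrowed waist.
# Friction acts at the waist, so a thinner waist means less force to push
# the load, but bending stress at the waist grows as 1/r^3 and caps the
# narrowing. All numbers are illustrative.
LOAD_N = 5000.0      # weight of the cargo resting on the waist
MU = 0.3             # sliding friction coefficient at the waist contact
SPAN_M = 1.0         # distance between the end discs
DISC_R = 0.25        # radius of the end discs (the eventual "wheels")
YIELD_PA = 4.0e7     # rough strength of seasoned hardwood

def push_force(waist_r):
    # friction at the waist radius is reacted at the larger disc radius
    return MU * LOAD_N * waist_r / DISC_R

def waist_stress(waist_r):
    # midspan bending stress of a simply supported round beam: sigma = M*c/I
    moment = LOAD_N * SPAN_M / 4
    return moment * waist_r / (math.pi * waist_r**4 / 4)

r = DISC_R
while waist_stress(r * 0.99) < YIELD_PA:   # shave the waist while it survives
    r *= 0.99
print(f"waist radius ~{r:.3f} m, push force ~{push_force(r):.0f} N "
      f"(vs ~{push_force(DISC_R):.0f} N for a plain roller)")
```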

"Our investigation also indicates that environmental conditions played a key role in this evolutionary process," he said. "Previous studies have shown that rollers are only effective under very specific circumstances. They require flat, firm, and level terrain, as well as a straight path. Neolithic mines, with their human-made tunnels and covered terrain would have offered an environment highly conducive to roller-based transport."

Journal Reference: Alacoque, L. R., Bulliet, R. W., & James, K. A. (2024). Reconstructing the invention of the wheel using computational structural analysis and design. Royal Society Open Science, 11(10). https://doi.org/10.1098/rsos.240373


Original Submission

posted by hubie on Friday July 18, @02:05AM   Printer-friendly

GPUHammer is the first to flip bits in onboard GPU memory. It likely won't be the last:

Nvidia is recommending a mitigation for customers of one of its GPU product lines that will degrade performance by up to 10 percent in a bid to protect users from exploits that could let hackers sabotage work projects and possibly cause other compromises.

The move comes in response to an attack a team of academic researchers demonstrated against Nvidia's RTX A6000, a widely used GPU for high-performance computing that's available from many cloud services. A vulnerability the researchers discovered opens the GPU to Rowhammer, a class of attack that exploits physical weakness in DRAM chip modules that store data.

Rowhammer allows hackers to change or corrupt data stored in memory by rapidly and repeatedly accessing—or hammering—a physical row of memory cells. By repeatedly hammering carefully chosen rows, the attack induces bit flips in nearby rows, meaning a digital zero is converted to a one or vice versa. Until now, Rowhammer attacks have been demonstrated only against memory chips for CPUs, used for general computing tasks.

[...] The researchers' proof-of-concept exploit was able to tamper with deep neural network models used in machine learning for things like autonomous driving, healthcare applications, and medical imaging for analyzing MRI scans. GPUHammer flips a single bit in the exponent of a model weight—for example in y, where a floating-point value is represented as x × 2^y. A single bit flip can increase the exponent value by 16. The result is an alteration of the model weight by a whopping factor of 2^16, degrading model accuracy from 80 percent to 0.1 percent, said Gururaj Saileshwar, an assistant professor at the University of Toronto and co-author of an academic paper demonstrating the attack.
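
To get a feel for the scale of that corruption, this minimal sketch (ordinary Python, not the researchers' DRAM-hammering exploit) flips one exponent bit of an IEEE 754 float32 weight and shows the value changing by a factor of 2^16:

```python
import struct

def flip_exponent_bit(value: float, k: int) -> float:
    """Flip bit k (0-7) of the 8-bit exponent field of an IEEE 754 float32.

    Exponent bits occupy bit positions 23-30, so flipping exponent bit k
    adds or subtracts 2**k from the stored exponent, scaling the value by
    a factor of 2**(2**k).
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << (23 + k)
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

w = 0.0123                               # a typical small model weight
print(w, "->", flip_exponent_bit(w, 4))  # exponent shifts by 16: ~2**16 change
```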

"This is like inducing catastrophic brain damage in the model: with just one bit flip, accuracy can crash from 80% to 0.1%, rendering it useless," Saileshwar wrote in an email. "With such accuracy degradation, a self-driving car may misclassify stop signs (reading a stop sign as a speed limit 50 mph sign), or stop recognizing pedestrians. A healthcare model might misdiagnose patients. A security classifier may fail to detect malware."

In response, Nvidia is recommending users implement a defense that could degrade overall performance by as much as 10 percent. Among machine learning inference workloads the researchers studied, the slowdown affects the "3D U-Net ML Model" the most. This model is used for an array of HPC tasks, such as medical imaging.

The performance hit is caused by the resulting reduction in bandwidth between the GPU and the memory module, which the researchers estimated as 12 percent. There's also a 6.25 percent loss in memory capacity across the board, regardless of the workload. Performance degradation will be the highest for applications that access large amounts of memory.

A figure in the researchers' academic paper provides the overhead breakdowns for the workloads tested.


Original Submission

posted by janrinok on Thursday July 17, @09:14PM   Printer-friendly

Belkin shows tech firms getting too comfortable with bricking customers' stuff:

In a somewhat anticipated move, Belkin is killing most of its smart home products. On January 31, the company will stop supporting the majority of its Wemo devices, leaving users without core functionality and future updates.

In an announcement emailed to customers and posted on Belkin's website, Belkin said:

After careful consideration, we have made the difficult decision to end technical support for older Wemo products, effective January 31, 2026. After this date, several Wemo products will no longer be controllable through the Wemo app. Any features that rely on cloud connectivity, including remote access and voice assistant integrations, will no longer work.

The company said that people with affected devices that are under warranty on or after January 31 "may be eligible for a partial refund" starting in February.

The 27 affected devices have last sold dates that go back to August 2015 and are as recent as November 2023.

The announcement means that soon, features like the ability to work with Amazon Alexa will suddenly stop working on some already-purchased Wemo devices. The Wemo app will also stop working and being updated, removing the simplest way to control Wemo products, including connecting to Wi-Fi, monitoring usage, using timers, and activating Away Mode, which is supposed to make it look like people are in an empty home by turning the lights on and off randomly. Of course, the end of updates and technical support has security implications for the affected devices, too.
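
As an aside on what is being lost, the Away Mode behavior described above amounts to little more than randomized toggling; a toy sketch, with placeholder callables standing in for a real device's relay API:

```python
import random
import time

def away_mode(turn_on, turn_off, cycles=10):
    """Toy "Away Mode": toggle a light at random intervals to fake occupancy."""
    for _ in range(cycles):
        turn_on()
        time.sleep(random.randint(600, 2700))   # on for 10-45 minutes
        turn_off()
        time.sleep(random.randint(300, 1800))   # off for 5-30 minutes

# Placeholder callables; a real smart plug would expose a relay API instead.
away_mode(lambda: print("light on"), lambda: print("light off"))
```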

[...] Belkin acknowledged that some people who invested in Wemo devices will see their gadgets rendered useless soon: "For any Wemo devices you have that are out of warranty, will not work with HomeKit, or if you are unable to use HomeKit, we recommend disposing of these devices at an authorized e-waste recycling center."

Belkin started selling Wemo products in 2011, but said that "as technology evolves, we must focus our resources on different parts of the Belkin business."

Belkin currently sells a variety of consumer gadgets, including power adapters, charging cables, computer docks, and Nintendo Switch 2 charging cases.

For those who follow smart home news, Belkin's discontinuation of Wemo was somewhat expected. Belkin hasn't released a new Wemo product since 2023, when it announced that it was taking "a big step back" to "regroup" and "rethink" whether or not it would support Matter in Wemo products.

Even with that inkling that Belkin's smart home commitment may waver, that's little comfort for people who have to reconfigure their smart home system.

Belkin's abandonment of most of its Wemo products is the latest example of an Internet of Things (IoT) company ending product support and turning customer devices into e-waste. The US Public Interest Research Group (PIRG) nonprofit estimates that "a minimum of 130 million pounds of electronic waste has been created by expired software and canceled cloud services since 2014," Lucas Gutterman, director of the US PIRG Education Fund's Designed to Last Campaign, said in April.

What Belkin is doing has become a way of life for connected device makers, suggesting that these companies are getting too comfortable with selling people products and then reducing those products' functionality later.

Belkin itself pulled something similar in April 2020, when it said it would end-of-life its Wemo NestCam home security cameras the following month (Belkin eventually extended support until the end of June 2020). At the time, Forbes writer Charles Radclyffe mused that "Belkin May Never Be Trusted Again After This Story." But five years later, Belkin is telling customers a similar story—at least this time, its customers have more advance notice.

IoT companies face fierce challenges around selling relatively new types of products, keeping old and new products secure and competitive, and making money. Sometimes companies fail in those endeavors, and sometimes they choose to prioritize the money part.

[...] With people constantly buying products that stop working as expected a few years later, activists are pushing for legislation [PDF] that would require tech manufacturers to tell shoppers how long they will support the smart products they sell. In November, the FTC warned that companies that don't disclose how long they will support their connected devices could be violating the Magnuson Moss Warranty Act.

I don't envy the obstacles facing IoT firms like Belkin. Connected devices are central to many people's lives, and without companies like Belkin figuring out how to keep their (and customers') lights on, modern tech would look very different today.

But it's alarming how easy it is for smart device makers to decide that your property won't work. There's no easy solution to this problem. However, the lack of accountability carried by companies that brick customer devices neglects the people who support smart tech companies. If tech firms can't support the products they make, then people—and perhaps the law one day—may be less supportive of their business.


Original Submission

posted by janrinok on Thursday July 17, @04:32PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Details behind HoloMem’s holographic tape innovations are beginning to come into clearer view. The UK-based startup recently chatted with Blocks & Files about its potentially disruptive technology for long-term cold storage. HoloMem is another emerging storage idea that relies on optical technology to enable holographic storage. However, it cleverly melds the durability and density advantages of optical formats with a flexible polymer ribbon-loaded cartridge, so it can usurp entrenched LTO magnetic tape storage systems with minimal friction.

According to the inventors of HoloMem, their new cold storage technology offers far greater capacity than magnetic tape, with a much longer shelf life and “zero energy storage” costs. HoloMem carts can fit up to 200TB, which is more than 11x the capacity of LTO-10 magnetic tape. Also, the new optical technology’s touted 50-year life is 10x that of magnetic tape.

Magnetic tape has been around for 70 years or more, so it isn’t surprising that a new technology has at last been designed as a serious replacement, beating it on all key metrics. However, the HoloMem makers have revealed quite a few more attractive features of their new storage solution, which could or should lead to success.

Probably one of the biggest attractions of HoloMem is that it minimizes friction for users who may be interested in replacing existing tape storage. The firm claims that a HoloDrive can be integrated into a legacy cold storage system “with minimal hardware and software disruption.” This allows potential customers to phase-in HoloMem use, reducing the chance of abrupt transition issues. Moreover, its LTO-sized cartridges can be transported by a storage library’s robot transporters with no change.

Another feather in HoloMem’s cap is the technology’s reliance on cheap and off-the-shelf component products. Blocks & Files says that the holographic read/write head is just a $5 laser diode, for example. As for media, it makes use of mass-produced polymer sheets which sandwich a 16 micron thick light-sensitive polymer that “costs buttons.” The optical ribbon tapes produced, claimed to be robust and around 120 microns thick in total, work in a WORM (write-once, read-many) format.

Thanks to the storage density that the multiple layers of holograms written on these ribbons enable, HoloMem tapes need only be around 100m long for 200TB of storage. Contrast that with the 1,000m length of fragile magnetic tape that enables LTO-10’s up to 18TB capacity.
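
Using the article's own figures, a quick back-of-the-envelope comparison of linear data density (capacities and lengths are the article's round numbers, not official specifications):

```python
holomem_tb_per_m = 200 / 100    # 200 TB on a ~100 m ribbon
lto10_tb_per_m = 18 / 1000      # 18 TB on a ~1,000 m tape
print(f"{holomem_tb_per_m / lto10_tb_per_m:.0f}x the linear data density")
# -> ~111x per meter of media, before accounting for durability differences
```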

Blocks & Files shares some insight gained from talking to HoloMem founder Charlie Gale, who earned his stripes at Dyson, working on products like robot vacuum cleaners and hair dryers. During his time at Dyson, Gale helped devise the firm’s multi-hologram security sticker labels. This work appears to have planted the seed from which HoloMem has blossomed.

Rival would-be optical storage revolutionaries like Cerabyte or Microsoft’s Project Silica may face far greater friction for widespread adoption, we feel. Their systems require more expensive read/write hardware to work with their inflexible slivers of silica glass, and will find it harder to deliver such easy swap-out upgrades versus companies buying into HoloDrives.

HoloMem has a working prototype now and is backed by notable investors such as Intel Ignite and Innovate UK. However, there is no official ‘launch date’ set. Blocks & Files says the first HoloDrives will be used by consultancy TechRe in its UK data centers to verify product performance, reliability, and robustness.


Original Submission

posted by janrinok on Thursday July 17, @11:47AM   Printer-friendly
from the Artificial-Software dept.

Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.

The study's lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected "a 2x speed up, somewhat obviously."

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.

This is now a loaded question: "Do artificial-intelligence tools speed up your work?"

-- Hendrik Boom


Original Submission

posted by jelizondo on Thursday July 17, @07:07AM   Printer-friendly

Industrial Waste Is Turning Into Rock in Just Decades, Study Suggests:

The geological processes that create rocks usually take place over thousands if not millions of years. With the help of a coin and a soda can tab, researchers have identified rocks in England that formed in less than four decades. Perhaps unsurprisingly, the cause is human activity.

Researchers from the University of Glasgow's School of Geographical and Earth Sciences discovered that slag (a waste product of the steel industry) formed a new type of rock in West Cumbria in 35 years—at most. As detailed in a study published April 10 in the journal Geology, the researchers claim to be the first to fully document and date a complete "rapid anthropoclastic rock cycle" on land: a significantly accelerated rock cycle that incorporates human-made materials. They suggest that this phenomenon is likely harming ecosystems and biodiversity at similar industrial waste locations around the world.

"When waste material is first deposited, it's loose and can be moved around as required. What our finding shows is that we don't have as much time as we thought to find somewhere to put it where it will have minimal impact on the environment–instead, we may have a matter of just decades before it turns into rock, which is much more difficult to manage," co-author Amanda Owen said in a university statement.

During the 19th and 20th centuries, Derwent Howe in West Cumbria hosted heavy iron and steel industries. The 953 million cubic feet (27 million cubic meters) of slag generated by the factories turned into cliffs along the coastline, where strange formations along the human-made cliffs caught Owen and her colleagues' attention, according to the statement.

By analyzing 13 sites along the coast, the researchers concluded that Derwent Howe's slag contains deposits of calcium, magnesium, iron, and manganese. When exposed to seawater and air through coastal erosion, these reactive elements create natural cements such as brucite, calcite, and goethite—the same ones that bind natural sedimentary rocks together over thousands to millions of years.

"What's remarkable here is that we've found these human-made materials being incorporated into natural systems and becoming lithified–essentially turning into rock–over the course of decades instead," Owen explained. "It challenges our understanding of how a rock is formed, and suggests that the waste material we've produced in creating the modern world is going to have an irreversible impact on our future."

Modern objects stuck in the lithified slag, such as a King George V coin from 1934 and an aluminum can tab from no earlier than 1989, substantiated the team's dating of the material. Because slag clearly has all the necessary ingredients to create rocks in the presence of seawater and air, co-author David Brown suggested that the same process is likely happening at similar coastal slag deposits around the world.

Whether it's in England or elsewhere, "that rapid appearance of rock could fundamentally affect the ecosystems above and below the water, as well as change the way that coastlines respond to the challenges of rising sea levels and more extreme weather as our planet warms," Owen warned. "Currently, none of this is accounted for in our models of erosion or land management, which are key to helping us try to adapt to climate change."

Moving forward, the team hopes to continue investigating this new Earth system cycle by analyzing other slag deposits. Ultimately, the study suggests that humans aren't just driving global warming—we're also accelerating the ancient geological processes unfolding beneath our very feet.

Also at The Register, Plastic is the new rock, say geologists:

Geologists have identified what they say is a new class of rock.

'Plastiglomerates', as the new rocks are called, form when plastic debris washes up on beaches, breaks down into small pieces, becomes mixed in sand or sticks to other rocks and solidifies into an agglomerate mixing all of the above. Such rocks, say US and Canadian boffins in a paper titled An anthropogenic marker horizon in the future rock record, have "great potential to form a marker horizon of human pollution, signalling the occurrence of the informal Anthropocene epoch."

The paper identifies four types of plastiglomerate, namely:

  • A: In situ plastiglomerate, wherein molten plastic is adhered to the surface of a basalt flow
  • B: Clastic plastiglomerate containing molten plastic and basalt and coral fragments
  • C: Plastic amygdales in a basalt flow

About a fifth of plastiglomerates consist of "fishing-related debris" such as "netting, ropes, nylon fishing line, as well as remnants of oyster spacer tubes". "Confetti", the "embrittled remains of intact products, such as containers" is also very prevalent, but whole containers and lids are also found in plastiglomerates.

The paper explains that the plastiglomerates studied come mainly from a single Hawaiian beach that, thanks to local currents, collects an unusual amount of plastic. But the authors also note that as some samples were formed when trapped within organic material, while others were the result of plastic being melted onto rock, plastiglomerates can pop up anywhere.

Journal Reference:
Amanda Owen, John Murdoch MacDonald, David James Brown. Evidence for a rapid anthropoclastic rock cycle, Geology (DOI: 10.1130/G52895.1)



Original Submission

posted by janrinok on Thursday July 17, @02:22AM   Printer-friendly

Merger of two massive black holes is one for the record books:

Physicists with the LIGO/Virgo/KAGRA collaboration have detected the gravitational wave signal (dubbed GW231123) of the most massive merger between two black holes yet observed, resulting in a new black hole that is 225 times more massive than our Sun. The results were presented at the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland.

The LIGO/Virgo/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington, and in Livingston, Louisiana. A third detector in Italy, Advanced Virgo, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars.  In 2021, LIGO/Virgo/KAGRA confirmed the detection of two separate "mixed" mergers between black holes and neutron stars.

LIGO/Virgo/KAGRA started its fourth observing run in 2023, and by the following year had announced the detection of a signal indicating a merger between two compact objects, one of which was most likely a neutron star. The other had an intermediate mass—heavier than a neutron star and lighter than a black hole. It was the first gravitational-wave detection of a mass-gap object paired with a neutron star and hinted that the mass gap might be less empty than astronomers previously thought.

Until now, the most massive black hole merger was GW190521, announced in 2020. It produced a new black hole with an intermediate mass—about 140 times as heavy as our Sun. Also found in the fourth run, GW231123 dwarfs the prior merger. According to the collaboration, the two black holes that merged were about 100 and 140 solar masses, respectively. It took some time to announce the discovery because the objects were spinning rapidly, near the limits imposed by the general theory of relativity, making the signal much more difficult to interpret.

The discovery is also noteworthy because it conflicts with current theories about stellar evolution. The progenitor black holes are too big to have formed from a supernova. Like its predecessor, GW190521, GW231123 may be an example of a so-called "hierarchical merger," meaning the two progenitor black holes were themselves each the result of a previous merger before they found each other and merged.

"The discovery of such a massive and highly spinning system presents a challenge not only to our data analysis techniques but will have a major effect on the theoretical studies of black hole formation channels and waveform modeling for many years to come," said Ed Porter of CNRS in Paris.

The Biggest Black Hole Smashup Ever Detected Challenges Physics Theories

Arthur T Knackerbracket has processed the following story:

The two black holes had masses bigger than any before confirmed in such a collision. One had about 140 times the mass of the sun, and the other about 100 solar masses. And both were spinning at nearly the top speed allowed by physics.

“We don’t think it’s possible to form black holes with those masses by the usual mechanism of a star collapsing after it has died,” says Mark Hannam of Cardiff University in Wales, a physicist working on the Laser Interferometer Gravitational-Wave Observatory, or LIGO, which detected the crash. That has researchers considering other black hole backstories.

Scientists deduced the properties of the black holes from shudders of the fabric of spacetime called gravitational waves. Those waves were detected on November 23, 2023, by LIGO’s two detectors in Hanford, Wash., and Livingston, La.

The two black holes spiraled around one another, drawing closer and closer before coalescing into one, blasting out gravitational waves in the process. The merger produced a black hole with a mass about 225 times that of the sun, researchers report in a paper posted July 13 at arXiv.org and to be presented at the International Conference on General Relativity and Gravitation and the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland, on July 14. The biggest bang-up previously confirmed produced a black hole of about 140 solar masses, researchers announced in 2020. In the new event, one of the two black holes alone had a similar mass.

Black holes with masses below about 60 times that of the sun are formed when a star collapses at the end of its life. But there’s a window of masses for black holes — between about 60 and 130 solar masses — where this mechanism is thought not to work. The stars that would form the black holes in that mass range are expected to fully explode when they die, leaving behind no remnant black hole.

For the newly reported black holes, uncertainties on the mass estimates mean it’s likely that at least one of them — and possibly both — fell in that forbidden mass gap.

The prediction of this mass gap is “a hill at least some people were willing to get wounded on, if not necessarily die on,” says Cole Miller of the University of Maryland in College Park, who was not involved with the research. So, to preserve the mass gap idea, scientists are looking for other explanations for the two black holes’ birth.

One possibility is that they were part of a family tree, with each black hole forming from an earlier collision of smaller black holes. Such repeated mergers might happen in dense clusters of stars and black holes. And they would result in rapidly spinning black holes, like the ones seen.

Every black hole has a maximum possible spinning speed, depending on its mass. One of the black holes in the collision was spinning at around 90 percent of its speed limit, and the other close to 80 percent. These are among the highest black hole spins that LIGO has confidently measured, Hannam says. Those high spins strengthen the case for the repeated-merger scenario, Hannam says. “We’ve seen signs of this sort of thing before but nothing as extreme as this.”

But there’s an issue with that potential explanation, Miller says. The black holes’ masses are so large that, if they came from a family tree, that tree might have required multiple generations of ancestors. That would suggest black holes that are spinning fast, but not quite as fast as these black holes are, Miller says. That’s because the black holes that merged in previous generations could have been spinning in a variety of different directions.

An alternative explanation is that the black holes bulked up in the shadow of a much bigger black hole, in what’s called an active galactic nucleus. This is a region of a galaxy surrounding a centerpiece supermassive black hole that is feeding on a disk of gas. If the black holes were born or fell into that disk, they could gobble up gas, ballooning in mass before merging.

Here, the spin also raises questions, Miller says. There’s a hint that the two black holes that merged in the collision weren’t perfectly aligned: They weren’t spinning in the same direction. That conflicts with expectations for black holes all steeping in the same disk.

“This event doesn’t have a clear and obvious match with any of the major formation mechanisms,” Miller says. None fit perfectly, but none are entirely ruled out. Even the simplest explanation, with black holes formed directly from collapsing stars, could still be on the table if one is above the mass gap and the other is below it.

Because the black holes are so massive, the scientists were able to capture only the last few flutters of gravitational waves, about 0.1 second from the tail end of the collision. That makes the event particularly difficult to interpret. What’s more, these black holes were so extreme that the models the scientists use to interpret their properties didn’t fully agree with one another. That led to less certainty about the characteristics of the black holes. Further work could improve the understanding of the black holes’ properties and how they formed.

Some physicists have reported hints that there are even more huge black holes out there. In a reanalysis of earlier public LIGO data, a team of physicists found evidence for five smashups that created black holes with masses around 100 to 300 times that of the sun, astrophysicist Karan Jani and colleagues reported May 28 in Astrophysical Journal Letters. This new discovery further confirms the existence of a new population of massive black holes.

Before LIGO’s discoveries, such massive black holes were thought not to exist, says Jani, of Vanderbilt University in Nashville, who is also a member of the LIGO collaboration. “It’s very exciting that there is now a new population of black holes of this mass.”

The LIGO Scientific Collaboration, the Virgo Collaboration and the KAGRA Collaboration. GW231123: a Binary Black Hole Merger with Total Mass 190-265 M⊙.  Published online July 13, 2025.

S. Bini. New results from the LIGO, Virgo and KAGRA Observatory Network. The International Conference on General Relativity and Gravitation and the Edoardo Amaldi Conference on Gravitational Waves. Glasgow, July 14, 2025.

K. Ruiz-Rocha et al. Properties of “lite” intermediate-mass black hole candidates in LIGO-Virgo’s third observing run. Astrophysical Journal Letters. Vol. 985, May 28, 2025, doi: 10.3847/2041-8213/adc5f8


Original Submission #1 | Original Submission #2

posted by jelizondo on Wednesday July 16, @09:33PM   Printer-friendly
from the when-the-chips-are-down dept.

Just as the U.S. military avoids using Chinese tech, Huang expects that the PLA won't use American technologies:

Nvidia CEO Jensen Huang has downplayed Washington's concerns that the Chinese military will use advanced U.S. AI tech to improve its capabilities. Mr. Huang said in an interview with CNN that China's People's Liberation Army (PLA) will avoid American tech the same way that the U.S.'s armed forces avoid Chinese products.

This announcement comes on the heels of the United States Senate's open letter [PDF] to the CEO, asking him to "refrain from meeting with representatives of any companies that are working with the PRC's military or intelligence establishment...or are suspected to have engaged in activities that undermine U.S. export controls."

Washington is concerned that the PLA might use American AI technology to develop advanced weapons systems, intelligence systems, and more, prompting a bipartisan effort to deny China access to the most powerful hardware over the past three administrations. However, Huang has often publicly said that the U.S. strategy of limiting China's access to advanced technologies has been a failure, and that the U.S. should instead lead the global development and deployment of AI.

[As reported by Zero Hedge], CNN's Fareed Zakaria asked Huang:

"But what if, in doing that, you are also providing the Chinese military and Chinese intelligence with the capacity to supercharge, turbocharge their weapons with the very best American chips?"

Huang replied, "We don't have to worry about that, because the Chinese military, no different than the US military, won't seek each other's technology out to build critical systems."

Previously: Nvidia Has Become the World's First Company Worth $4 Trillion


Original Submission

posted by jelizondo on Wednesday July 16, @04:53PM   Printer-friendly

Texas governor says his emails with Elon Musk are too 'intimate or embarrassing' to release:


Gov. Greg Abbott's office argues that the emails are covered by an exemption to public disclosure requests.

The Texas Newsroom, which is investigating Musk's influence over the Texas government, asked the governor's office in April to share emails with the billionaire dating back to last fall. Though the governor's office accepted a fee of $244 to gather the records, The Texas Newsroom reports that it later refused to follow through on the request.

In a letter to the Texas attorney general shared by The Texas Newsroom, one of Abbott's public information coordinators said the emails consist "of information that is intimate and embarrassing and not of legitimate concern to the public," such as "financial decisions that do not relate to transactions between an individual and a governmental body."

As noted by The Texas Newsroom, this language is "fairly boilerplate," drawn from a common-law privacy exemption to public disclosure requests on Attorney General Ken Paxton's website. SpaceX, which is based in Texas, similarly objected to the disclosure of its emails, claiming they contain information that would cause the company "substantial competitive harm."

Musk has expanded his footprint in Texas in recent years as he shifted further to the political right. Tesla, X, and SpaceX are now all headquartered in Texas, while xAI still remains in San Francisco. In May, voters in South Texas approved a plan to make Starbase, Texas, where SpaceX performs rocket launches, a town. Public records requests have helped illuminate this process.

Earlier this year, for instance, The Texas Newsroom published emails and calendar information revealing that a Texas lawmaker had planned several meetings with representatives from SpaceX. It also showed that Texas Lt. Gov. Dan Patrick wrote a letter to the Federal Aviation Administration to help convince the agency to let SpaceX increase the number of its rocket launches.

Why are intimate or embarrassing emails being sent or received from government accounts?


Original Submission

posted by jelizondo on Wednesday July 16, @12:15PM   Printer-friendly
from the Surprise-is-what-you-were-not-expecting dept.

Gizmodo reports that a secretive Chinese satellite was found in a surprising orbit six days after launch.

Shiyan-28B finally appeared in an unexpectedly low orbit, but its mission remains unclear.

"Nearly a week after launch, space tracking systems were able to locate a mysterious satellite parked in an unusually low orbit. China launched the experimental satellite to test new technologies, but it's still unclear exactly what it's doing in its unique inclination.

Shiyan-28B 01 launched on July 3 from the Xichang Satellite Launch Center, riding on board a Long March 4C rocket. The satellite is part of China's experimental Shiyan series, reportedly designed for exploration of the space environment and to test new technologies. It typically takes a day or two for space tracking systems to locate an object in orbit, but the recently launched Chinese satellite was hard to find.

The U.S. Space Force's Space Domain Awareness unit was finally able to catalogue Shiyan-28B 01 on July 9, six days after its launch. The U.S. space monitoring system located the Chinese satellite in a 492 by 494 mile orbit (794 by 796 kilometer orbit) with an 11-degree inclination, astrophysicist Jonathan McDowell wrote on X. At the time of launch, it was estimated that the satellite would be tilted at a 35-degree inclination relative to Earth's equator. Its unusually low inclination, however, suggests that the rocket performed a dogleg maneuver, meaning that it changed direction midway through ascent, and its second stage performed three burns to reduce inclination, according to McDowell.

It's unclear why China performed the change in the rocket's path after launch or what the purpose of the satellite's low inclination is. China has never used such a low-inclination orbit before, according to SpaceNews. Based on its orbital inclination, the satellite will pass over parts of the South China Sea and the Indian Ocean, and it may be used for regional monitoring or communication tests.

China has been experimenting with new satellite technology. Two Chinese satellites recently performed a docking maneuver for an orbital refueling experiment, which has the potential to extend the lifespan of spacecraft in orbit. The country generally keeps the specifics of its experimental missions under wraps, carrying out secretive maneuvers in orbit as U.S. tracking systems do their best to keep watch."
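
For a sense of why the inclination change was performed during ascent rather than in orbit: rotating a circular orbit's plane by Δi costs Δv = 2·v·sin(Δi/2), which is enormous at low-Earth-orbit speeds. A quick back-of-the-envelope check using the standard formula and the article's approximate numbers:

```python
import math

MU_EARTH = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0     # mean Earth radius, m

alt = 795_000.0                               # ~794 x 796 km orbit
v = math.sqrt(MU_EARTH / (R_EARTH + alt))     # circular orbital speed
di = math.radians(35 - 11)                    # inclination change avoided
dv = 2 * v * math.sin(di / 2)                 # in-orbit plane-change cost
print(f"orbital speed ~{v:.0f} m/s, in-orbit plane change ~{dv:.0f} m/s")
# -> roughly 3 km/s, a huge fraction of any propellant budget, hence the
#    dogleg and the extra second-stage burns during ascent instead
```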


Original Submission

posted by janrinok on Wednesday July 16, @07:34AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Take the painter's palette. A simple, essential, and harmless tool [...] affording complete control of the visual spectrum while being an entirely benign piece of wood. Put the same idea into software, and it becomes a thief, a deceiver, and a spy. Not a paranoid fever dream of an absinthe-soaked dauber, but the observed behavior of a Chrome extension color picker. Not a skanky chunk of code picked up from the back streets of cyberland, but a Verified by Google extension from Google's own store.

This seems to have happened because when the extension was first uploaded to the store, it was as simple, useful, and harmless as its physical antecedents. Somewhere in its life since then, an update slipped through with malicious code that delivered activity data to the privacy pirates. It's not alone in taking this path to evil.

Short of running a full verification process on each update, this attack vector seems unstoppable. Verifying every update would be problematic in practice, as to be any good the process takes time and resources for both producers and store operators. You need swift updates for security fixes and bug patches, and a lot of the small utilities and specialized tools that make life better for so many groups of users may not have the means to cope with more onerous update processes.

You can't stop the problem at the source either. Good software goes bad for lots of reasons: classic supply chain attack, developers sell out to a dodgy outfit or become dodgy themselves, or even the result of a long-term strategy like deep cover agents waiting years to be actuated.

What's needed is more paranoia across the board, some of which is already there as best practice, where care should be taken to adopt it, and some of which needs to be created and mixed well into the way we do things now. Known good paranoia includes the principle of parsimony, which says to keep the number of things that touch your data as small as possible to shrink the attack space. The safest extension is the one that isn't there. Then there's partition, like not doing confidential client work on a browser that has extensions at all. And there's due diligence, checking out developer websites, hunting for user reports, and actually checking permissions. This is boring, disciplined stuff that humans aren't good at, especially when tempted by the new shiny, and only partially protective against software rot.

So there needs to be more paranoia baked into the systems themselves, both the verification procedure and the environment in which extensions run. Paranoia that could be valuable elsewhere. Assume that anything could go bad at any point in its product lifetime, and you need to catch that moment – something many operators of large systems attempt with various levels of success. It boils down to how can you tell when a system becomes possessed. How to spot bad behavior after good.

In the case of demonic design tools, the sudden onset of encrypted data transfers to new destinations is a bit of a giveaway, as it would be in any extension that didn't have the ability to do that when initially verified. That sounds a lot like a permission-based ruleset, one that could be established during verification and communicated to the environment that will be running the extension on installation. The environment itself, be it browser or operating system, can watch for trigger activity and silently roll back to a previous version while kicking a "please verify again" message back to the store.
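
As a sketch of what such a permission baseline might look like on the store side, here is a minimal comparison of a new upload's manifest against the last verified version, flagging any newly requested permissions or hosts (the file names are hypothetical; `permissions` and `host_permissions` are standard Manifest V3 fields):

```python
import json

def new_capabilities(baseline_path: str, update_path: str) -> dict:
    """Report permissions/hosts requested by an update but absent from the
    verified baseline manifest; any hit should trigger re-verification."""
    with open(baseline_path) as f:
        base = json.load(f)
    with open(update_path) as f:
        upd = json.load(f)

    def added(key):
        return sorted(set(upd.get(key, [])) - set(base.get(key, [])))

    return {
        "permissions": added("permissions"),
        "host_permissions": added("host_permissions"),
    }

diff = new_capabilities("manifest_verified.json", "manifest_update.json")
if any(diff.values()):
    print("flag for re-verification:", diff)  # e.g. a color picker adding hosts
```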

The dividing line between useful and harmful behaviors is always contextual and no automation will catch everything. That doesn't mean a pinch more paranoia in the right places can't do good, especially where limits can be set early on and patrolled by automation.

If you're riding herd on corporate infrastructure, you'll know how far the bad actor will go to disguise themselves, obfuscating egressing traffic and making internal changes look routine when they're really rancid. The bad guys learn about the tools and skills that can defeat them as soon as you do, and there's no automation that can change that. Elsewhere in the stack, though, there's still room to provide more robust methods of setting and policing behavioral rules.

After all, a demon-possessed color picker dropping a rootkit that opens the door to ransomware injection will make your life just as unpleasant as anything grander. Paranoia wasn't invented in the 21st century, but it's never been more valid as the default way to think.

posted by janrinok on Wednesday July 16, @02:51AM   Printer-friendly
from the old-shall-become-new-again dept.

Arthur T Knackerbracket has processed the following story:

Britain and France are to work more closely on technology to back up the familiar Global Positioning System (GPS), which is increasingly subject to interference in many regions around the world.

The Department for Science, Innovation & Technology (DSIT) announced the move along with a number of other joint UK-France science and technology efforts to coincide with the state visit by French President Macron.

It said that experts from both countries will work to increase the resilience of critical infrastructure to the kind of signal-jamming that has been seen in the war in Ukraine, which has rendered GPS largely useless anywhere near the front line.

While GPS was created for the American military as a way of pinpointing the position of a receiving device anywhere on Earth to within a few meters, it has also been widely adopted for a variety of civilian purposes.

These include the familiar car satnav, but the highly accurate timing information provided by GPS satellites also makes it useful for applications such as time-stamping business transactions.

It is these kinds of domestic infrastructure applications the British and French efforts will primarily seek to safeguard, providing a standby in case the satellite service should be unavailable or degraded for some reason.

DSIT says the researchers will focus on so-called positioning, navigation and timing (PNT) technologies which are complementary to GPS, but more resistant to jamming.

One of the systems being considered is eLoran (enhanced long-range navigation), a terrestrial-based system that uses ground-based radio towers operating within the 90-110 kHz low frequency band, which is said to be much more challenging to block.

The use of low frequency bands enables signals to travel long distances into areas that satellite-based PNT systems cannot reach, such as inside buildings.
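
Loran-family systems compute a fix from the measured differences in arrival times of signals from synchronized ground stations, each difference constraining the receiver to a hyperbola. A minimal two-dimensional sketch of the idea, with flat geometry and made-up station coordinates, purely for illustration:

```python
import numpy as np

C = 299_792_458.0  # propagation speed, m/s (ground waves travel slightly slower)

# Three synchronized transmitters at known positions (made-up coordinates).
stations = np.array([[0.0, 0.0], [200_000.0, 0.0], [0.0, 250_000.0]])

def tdoa(receiver):
    """Arrival-time differences relative to station 0."""
    d = np.linalg.norm(stations - receiver, axis=1)
    return (d[1:] - d[0]) / C

def locate(measured, guess):
    """Gauss-Newton iteration on the hyperbolic TDOA equations."""
    x = np.array(guess, dtype=float)
    for _ in range(20):
        d = np.linalg.norm(stations - x, axis=1)
        residual = (d[1:] - d[0]) / C - measured
        units = (x - stations) / d[:, None]       # unit vectors toward receiver
        J = (units[1:] - units[0]) / C            # Jacobian of the differences
        x -= np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

truth = np.array([60_000.0, 80_000.0])
print(locate(tdoa(truth), guess=[50_000.0, 50_000.0]))  # recovers ~truth
```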

It's no coincidence that eLoran is a prime candidate, as it is a development of technology used by the military in the past. The UK Ministry of Defence (MoD) also issued a Request for Information (RFI) last year for a portable eLoran network comprising a minimum of three transmitters that can be transported in a shipping container for deployment in the field.

The British government also issued a tender in May for a contractor to build and operate a nationally owned eLoran PNT system within the UK, suggesting a decision on the technology may already have been made.

Perhaps minds in the UK and France have been focused by the growing interference with GPS signals in various regions. Most recently, the Swedish Maritime Administration warned of interference in the Baltic Sea, stating: "For some time now, the signals have been affected by interference, which means that the system's position cannot be trusted."

Russia has been implicated in some of these incidents, such as the jamming of GPS signals reported by Bulgarian pilots in the Black Sea and similar events reported by Romania.

Last year, the European Union Aviation Safety Agency (EASA) claimed that GPS interference is now a major flight safety concern, and stated that jamming and spoofing (in which fake signals produce a misleading location) incidents were recorded across Eastern Europe and the Middle East in recent years.
