
posted by janrinok on Thursday November 06, @07:17PM

Supercar Blondie

Tiny electric motor is as powerful as four Tesla motors put together and outperforms record holder by 40%

UK-based YASA has just built a tiny electric motor that makes Tesla motors look like slackers, and this invention could potentially reshape the future of EVs. The company has unveiled a new prototype that's breaking records for power and performance density. It's smaller and lighter than traditional motors, yet it's somehow more powerful. Perhaps the best part is that it's a fully functional motor, rather than some lab-only concept.

This tiny electric motor can produce more than 1,000 horsepower. The new YASA axial flux motor weighs just 28 pounds, or about the same as a small dog. However, it delivers a jaw-dropping 750 kilowatts of power, which is the equivalent of 1,005 horsepower. That's about the same as two Tesla Model 3 Performance cars combined, or four individual Tesla motors. In comparison, the previous record holder, also produced by YASA, weighed 28.8 pounds and achieved a peak power of 550 kilowatts (737 horsepower). That makes the new motor's power density roughly 40 percent higher than its predecessor's. It can also sustain between 350 and 400 kilowatts (469–536 horsepower) continuously, meaning it's not just built for short bursts; it can deliver massive power all day long.
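
A quick back-of-the-envelope check of those figures (a sketch only; the masses and peak powers are the ones quoted above, and the conversion factors are standard):

    # Back-of-the-envelope check of the quoted YASA figures (values from the article).
    LB_TO_KG = 0.45359237
    KW_TO_HP = 1.34102209   # mechanical horsepower per kilowatt

    new_mass_kg = 28.0 * LB_TO_KG    # ~12.7 kg
    old_mass_kg = 28.8 * LB_TO_KG    # ~13.1 kg
    new_peak_kw, old_peak_kw = 750.0, 550.0

    print(f"New peak power: {new_peak_kw * KW_TO_HP:.0f} hp")   # ~1,006 hp (the article rounds to 1,005)
    print(f"Old peak power: {old_peak_kw * KW_TO_HP:.0f} hp")   # ~738 hp

    new_density = new_peak_kw / new_mass_kg    # ~59 kW/kg
    old_density = old_peak_kw / old_mass_kg    # ~42 kW/kg
    print(f"Power density gain: {100 * (new_density / old_density - 1):.0f}%")   # ~40%

Note that the roughly 40 percent figure only holds for power per unit of weight; in raw peak power, 750 kW is about 36 percent more than 550 kW.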

[...] A lighter motor means a lighter car, which means better efficiency, faster acceleration, and longer range from the same battery.

For EVs, every pound matters, so saving weight without compromising performance could be a gamechanger.

YASA, which is a wholly owned subsidiary of Mercedes-Benz, already produces motors that power some of the world's fastest and most expensive cars.

Like the year of Linux, will this be the year of Electric Vehicles?


Original Submission

posted by hubie on Thursday November 06, @02:31PM

Canada says hacktivists breached water and energy facilities:

The Canadian Centre for Cyber Security warned today that hacktivists have breached critical infrastructure systems multiple times across the country, allowing them to modify industrial controls that could have led to dangerous conditions.

The authorities issued the warning to raise awareness of the elevated malicious activity targeting internet-exposed Industrial Control Systems (ICS) and the need to adopt stronger security measures to block the attacks.

The alert shares three recent incidents in which so-called hacktivists tampered with critical systems at a water treatment facility, an oil & gas firm, and an agricultural facility, causing disruptions, false alarms, and a risk of dangerous conditions.

"One incident affected a water facility, tampering with water pressure values and resulting in degraded service for its community," describes the bulletin.

"Another involved a Canadian oil and gas company, where an Automated Tank Gauge (ATG) was manipulated, triggering false alarms."

"A third one involved a grain drying silo on a Canadian farm, where temperature and humidity levels were manipulated, resulting in potentially unsafe conditions if not caught on time."

The Canadian authorities believe that these attacks weren't planned and sophisticated, but rather opportunistic, aimed at causing a media stir, undermining trust in the country's authorities, and harming its reputation.

Sowing fear in societies and creating a sense of threat are primary goals for hacktivists, who are often joined by sophisticated APTs in this effort.

The U.S. government has repeatedly confirmed that foreign hacktivists have attempted to manipulate industrial system settings. Earlier this month, a Russian group called TwoNet was caught in the act against a decoy plant.

Although none of the recently targeted entities in Canada suffered catastrophic consequences, the attacks highlight the risk of poorly protected ICS components such as PLCs, SCADA systems, HMIs, and industrial IoTs.


Original Submission

posted by hubie on Thursday November 06, @09:42AM
from the software-walls-do-a-prison-make dept.

https://hackaday.com/2025/10/22/what-happened-to-running-what-you-wanted-on-your-own-machine/
https://archive.ph/6i4vr

When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions.

Today, that freedom is dying. What's worse, it's happening so gradually that most people haven't noticed we're already halfway into the coffin.

The latest broadside in the war against platform freedom has been fired. Google recently announced new upcoming restrictions on APK installations. Starting in 2026, Google will tighten the screws on sideloading, making it increasingly difficult to install applications that haven't been blessed by the Play Store's approval process. It's being sold as a security measure, but it will make it far more difficult for users to run apps outside the official ecosystem. There is a security argument to be made, of course, because suspect code can cause all kinds of havoc on a device loaded with a user's personal data. At the same time, security concerns have a funny way of aligning perfectly with ulterior corporate motives.

[...] The walled garden concept didn't start with smartphones. Indeed, video game consoles were a bit of a trailblazer in this space, with manufacturers taking this approach decades ago. The moment gaming became genuinely profitable, console manufacturers realized they could control their entire ecosystem. Proprietary formats, region systems, and lockout chips were all valid ways to ensure companies could levy hefty licensing fees from developers. They locked down their hardware tighter than a bank vault, and they did it for one simple reason—money. As long as the manufacturer could ensure the console wouldn't run unapproved games, developers would have to give them a kickback for every unit sold.

[...] Then came the iPhone, and with it, the App Store. Apple took the locked-down model and applied it to a computer you carry in your pocket. The promise was that you'd only get apps that were approved by Apple, with the implicit guarantee of a certain level of quality and functionality.

[...] Apple sold the walled garden as a feature. It wasn't ashamed or hiding the fact—it was proud of it. It promised apps with no viruses and no risks; a place where everything was curated and safe. The iPhone's locked-down nature wasn't a restriction; it was a selling point.

But it also meant Apple controlled everything. Every app paid Apple's tax, and every update needed Apple's permission. You couldn't run software Apple didn't approve, full stop. You might have paid for the device in your pocket, but you had no right to run what you wanted on it. Someone in Cupertino had the final say over that, not you.

When Android arrived on the scene, it offered the complete opposite concept to Apple's control. It was open source, and based on Linux. You could load your own apps, install your own ROMs and even get root access to your device if you wanted. For a certain kind of user, that was appealing. Android would still offer an application catalogue of its own, curated by Google, but there was nothing stopping you just downloading other apps off the web, or running your own code.

Sadly, over the years, Android has been steadily walking back that openness. The justifications are always reasonable on their face. Security updates need to be mandatory because users are terrible at remembering to update. Sideloaded apps need to come with warnings because users will absolutely install malware if you let them just click a button. Root access is too dangerous because it puts the security of the whole system and other apps at risk. But inch by inch, it gets harder to run what you want on the device you paid for.

[...] Microsoft hasn't pulled the trigger on fully locking down Windows. It's flirted with the idea, but has seen little success. Windows RT and Windows 10 S were both locked to only run software signed by Microsoft—each found few takers. Desktop Windows remains stubbornly open, capable of running whatever executable you throw at it, even if it throws up a few more dialog boxes and question marks with every installer you run these days.

[...] Here's what bothers me most: we're losing the idea that you can just try things with computers. That you can experiment. That you can learn by doing. That you can take a risk on some weird little program someone made in their spare time. All that goes away with the walled garden. Your neighbour can't just whip up some fun gadget and share it with you without signing up for an SDK and paying developer fees. Your obscure game community can't just write mods and share content because everything's locked down. So much creativity gets squashed before it even hits the drawing board because it's just not feasible to do it.

It's hard to know how to fight this battle. So much ground has been lost already, and big companies are reluctant to listen to the esoteric wishes of the hackers and makers that actually care about the freedom to squirt whatever through their own CPUs. Ultimately, though, you can still vote with your wallet. Don't let Personal Computing become Consumer Computing, where you're only allowed to run code that paid the corporate toll. Make sure the computers you're paying for are doing what you want, not just what the executives approved of for their own gain. It's your computer; it should run what you want it to!


Original Submission

posted by hubie on Thursday November 06, @05:00AM

Inside the World's Largest Wind-Powered Cargo Ship

The Neoliner Origin is the largest cargo ship powered primarily by wind:

The Neoliner Origin, the world's largest cargo ship to use wind as its primary propulsion, has officially touched the water for the first time.

Launched from the RMK Marine shipyard in Tuzla, Turkey, it marks a major milestone in the journey towards decarbonising global maritime transport.

[...] Designed to slash carbon emissions by up to 80% compared to conventional cargo vessels, the Neoliner Origin is part of a broader effort to provide sustainable, low-carbon shipping options for major global brands.

Companies including Renault, Hennessy and Clarins are already on board, integrating this eco-friendly vessel into their supply chains as part of their sustainability commitments.

Measuring 136 metres (446 feet) in length, the Neoliner Origin is primarily a roll-on/roll-off (ro-ro) cargo ship, specifically designed to carry outsize cargo that can be wheeled on and off the vessel.

Its cargo capacity includes space for 5,300 tonnes or up to 265 containers.

[...] The vessel is equipped to carry refrigerated (reefer) cargo, ensuring perishable goods stay fresh throughout its 13-day transatlantic crossings. While its primary role is cargo transport, the Neoliner Origin also has space to accommodate up to 12 passengers comfortably, offering a unique maritime experience.

Powering this impressive ship are two 90-metre (295-foot) masts and an expansive 3,000 square metres (32,300 square feet) of sails. Wind will provide 60–70% of the vessel's propulsion, supported by hybrid diesel-electric engines when needed.

To further boost efficiency, the ship employs slow steaming—sailing at a reduced speed of 11 knots—to conserve fuel and reduce emissions. It even generates energy from its own wake, maximising sustainability at every turn.
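
As a rough sanity check of those numbers (the speed and crossing time are from the article; the implied route length is just what they multiply out to):

    # Sanity check of the slow-steaming figures quoted above.
    speed_knots = 11.0      # quoted cruising speed (nautical miles per hour)
    crossing_days = 13.0    # quoted transatlantic crossing time

    distance_nm = speed_knots * 24 * crossing_days
    print(f"Distance covered: {distance_nm:,.0f} nautical miles")   # ~3,432 nm
    print(f"              or: {distance_nm * 1.852:,.0f} km")       # ~6,356 km

That is in the right ballpark for the France-to-Baltimore routing described in the second article below.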

Technicians Repairing Cargo Ship's Sails After Trans-Atlantic Voyage

The Neoliner Origin is slated to return to Baltimore in December:

The world's largest sailing cargo ship crept up the Chesapeake Bay in the quiet, rainy hours of Thursday morning, squeezed under the Bay Bridge and berthed at the Port of Baltimore.

It was there, chiefly, to unload goods — but also to get some repairs.

One of the sails on the wind-powered ship, a rare but growing breed in the world of maritime commerce, was damaged during a spate of bad weather while crossing the notoriously rough North Atlantic Ocean. While in port at the Dundalk Marine Terminal, workers were scheduled to patch up the Neoliner Origin vessel for its two-week return to France.

"The panels will be reinstalled during the Baltimore stopover so that the ship can make full use of its sails on the return trip," Gabriella Paulet, a spokesperson for the ship owner, Neoline, said in an email Thursday.

The 450-foot-long vessel, the first of its kind ever to call on Baltimore, is on its maiden voyage. It was built in Turkey and departed France in mid-October, first stopping in the French territory of Saint Pierre and Miquelon, off the coast of Canada.

There, technicians boarded and repaired the sail panels as the ship continued to Baltimore.

[...] It's the largest wind-powered cargo ship in the world, but it is just a "pilot" for Neoline, which expects to see larger ships in the coming years — both from itself and other companies.

"Hopefully, it will be surpassed soon," co-founder Jean Zanuttini said in an interview last week.


Original Submission #1 | Original Submission #2

posted by jelizondo on Thursday November 06, @12:14AM
from the I-for-one-welcome-our-new-fungal-overlords dept.

Neural organics lead to lower energy costs, faster calculation speeds:

Fungal networks may be a promising alternative to tiny metal devices used in processing and storing digital memories and other computer data, according to a new study.

Mushrooms have long been recognized for their extreme resilience [5:29 --JE] and unique properties. Their innate abilities make them perfect specimens for bioelectronics, an emerging field that, for next-gen computing, could help develop exciting new materials.

As one example, researchers from The Ohio State University recently discovered that common edible fungi, such as shiitake mushrooms, can be grown and trained to act as organic memristors, a type of data processor that can remember past electrical states.

Their findings showed that these shiitake-based devices not only demonstrated similar reproducible memory effects to semiconductor-based chips but could also be used to create other types of low-cost, environmentally friendly, brain-inspired computing components.

"Being able to develop microchips that mimic actual neural activity means you don't need a lot of power for standby or when the machine isn't being used," said John LaRocco, lead author of the study and a research scientist in psychiatry at Ohio State's College of Medicine. "That's something that can be a huge potential computational and economic advantage."

Fungal electronics aren't a new concept, but they have become ideal candidates for developing sustainable computing systems, said LaRocco. This is because they minimize electrical waste by being biodegradable and cheaper to fabricate than conventional memristors and semiconductors, which often require costly rare-earth minerals and high amounts of energy from data centers.

"Mycelium as a computing substrate has been explored before in less intuitive setups, but our work tries to push one of these memristive systems to its limits," he said.

To explore the new memristors' capabilities, researchers cultured samples of shiitake and button mushrooms. Once mature, they were dehydrated to ensure long-term viability, connected to special electronic circuits, and then electrocuted at various voltages and frequencies.

"We would connect electrical wires and probes at different points on the mushrooms because distinct parts of it have different electrical properties," said LaRocco. "Depending on the voltage and connectivity, we were seeing different performances."

After two months, the team discovered that when used as RAM – the computer memory that stores data – their mushroom memristor was able to switch between electrical states at up to 5,850 signals per second, with about 90% accuracy. Performance dropped as the frequency of the applied voltages increased, but, much like an actual brain, this could be compensated for by connecting more mushrooms to the circuit.
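
The study itself doesn't publish a circuit model, but the defining memristor behaviour – a resistance that depends on the history of what has been applied to the device – is easy to sketch. The toy model below is a generic, flux-controlled illustration with made-up component values, not the authors' fungal device:

    # Toy memristor: the internal state integrates the applied voltage, and the
    # resistance read back depends on that state. This is a generic illustration
    # of "remembering past electrical states", not the shiitake device model.
    R_ON, R_OFF = 100.0, 16_000.0   # ohms, illustrative values
    state = 0.0                     # internal state in [0, 1]; the device's "memory"
    RATE = 0.001                    # how fast the state follows the applied voltage

    def apply(voltage):
        """Apply one voltage sample; return the resulting current."""
        global state
        # The state accumulates the applied voltage (clamped to [0, 1]).
        state = min(1.0, max(0.0, state + RATE * voltage))
        resistance = R_ON * state + R_OFF * (1.0 - state)
        return voltage / resistance

    # 500 positive pulses "set" the device into its low-resistance state...
    for _ in range(500):
        apply(+5.0)
    print(f"state after SET pulses:   {state:.2f}")   # ~1.00 -> reads as low resistance

    # ...and negative pulses "reset" it, erasing the stored state.
    for _ in range(500):
        apply(-5.0)
    print(f"state after RESET pulses: {state:.2f}")   # ~0.00 -> reads as high resistance

Reading the device back as "low resistance" or "high resistance" is what lets a memristor stand in for a memory cell.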

[...] The flexibility mushrooms offer also suggests there are possibilities for scaling up fungal computing, said study co-author Tahmina. For instance, larger mushroom systems may be useful in edge computing and aerospace exploration; smaller ones in enhancing the performance of autonomous systems and wearable devices.

Organic memristors are still in early development, but future work could optimize the production process by improving cultivation techniques and miniaturizing the devices, as viable fungal memristors would need to be far smaller than what researchers achieved in this work.

"Everything you'd need to start exploring fungi and computing could be as small as a compost heap and some homemade electronics, or as big as a culturing factory with pre-made templates," said LaRocco. "All of them are viable with the resources we have in front of us now."

Journal Reference: LaRocco J, Tahmina Q, Petreaca R, Simonis J, Hill J (2025) Sustainable memristors from shiitake mycelium for high-frequency bioelectronics [OPEN]. PLoS One 20(10): e0328965. https://doi.org/10.1371/journal.pone.0328965


Original Submission

posted by jelizondo on Wednesday November 05, @07:28PM
from the modern-Luddites-embrace-the-crab-life dept.

Phoronix reports that APT, the package management tool at the core of Debian and its derivative distributions, will be adding Rust dependencies.

Debian developer Julian Andres Klode sent out a message on Halloween that may give some Debian Linux users and developers a spook: the APT packaging tool next year will begin requiring a Rust compiler. This places a hard requirement on Rust support for all Debian architectures. Debian CPU architectures that currently have ports but lack Rust support will either need to see support worked on or be sunset.

Julian sent out the message on Friday that he plans to introduce hard Rust dependencies into APT no earlier than May 2026.

In some areas of the APT codebase there are benefits to using the memory-safe Rust programming language, thus warranting a hard requirement for Rust in the Debian world:
"I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.

In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing."

This puts some of the more obscure Debian ports, such as m68k, Hewlett Packard Precision Architecture (HPPA), SuperH/SH4, and Alpha, in a tough position, as they currently lack proper Rust support. They will need to work on Rust support or face having their ports sunset:
"If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.

It's important for the project as a whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices."

Julian's announcement in full can be read on the mailing list.
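
For port maintainers wondering where they stand, a first-order check is simply whether the Rust compiler ships any target for their architecture at all (a listed target is only codegen-level support, not proof of a working Debian toolchain). A small sketch, assuming rustc is installed; the architecture names are the ones mentioned above:

    # List which of the affected architectures have any built-in rustc target.
    # "rustc --print target-list" is a standard rustc flag; having a target listed
    # does not by itself mean the port has a usable Rust toolchain.
    import subprocess

    targets = subprocess.run(
        ["rustc", "--print", "target-list"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for arch in ["m68k", "hppa", "sh4", "alpha"]:
        hits = [t for t in targets if t.startswith(arch)]
        status = ", ".join(hits) if hits else "no built-in rustc target"
        print(f"{arch:6s}: {status}")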


Original Submission

posted by jelizondo on Wednesday November 05, @02:41PM
from the hello-I-must-be-going dept.

"You don't have to claim that they're aliens to make these exciting":

[...] Anyone who studies planetary formation would relish the opportunity to get a close-up look at an interstellar object. Sending a mission to one would undoubtedly yield a scientific payoff. There's a good chance that many of these interlopers have been around longer than our own 4.5 billion-year-old Solar System.

One study from the University of Oxford suggests that 3I/ATLAS came from the "thick disk" of the Milky Way, which is home to a dense population of ancient stars. This origin story would mean the comet is probably more than 7 billion years old, holding clues about cosmic history that are simply inaccessible among the planets, comets, and asteroids that formed with the birth of the Sun.

This is enough reason to mount a mission to explore one of these objects, scientists said. It doesn't need justification from unfounded theories that 3I/ATLAS might be an artifact of alien technology, as proposed by Harvard University astrophysicist Avi Loeb. The scientific consensus is that the object is of natural origin.

Loeb shared a similar theory about the first interstellar object found wandering through our Solar System. His statements have sparked questions in popular media about why the world's space agencies don't send a probe to actually visit one. Loeb himself proposed redirecting NASA's Juno spacecraft in orbit around Jupiter on a mission to fly by 3I/ATLAS, and his writings prompted at least one member of Congress to write a letter to NASA to "rejuvenate" the Juno mission by breaking out of Jupiter's orbit and taking aim at 3I/ATLAS for a close-up inspection.

The problem is that Juno simply doesn't have enough fuel to reach the comet, and its main engine is broken. In fact, the total boost required to send Juno from Jupiter to 3I/ATLAS (roughly 5,800 mph or 2.6 kilometers per second) would surpass the fuel capacity of most interplanetary probes.
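
To see why 2.6 km/s is so far out of reach, it helps to run the number through the Tsiolkovsky rocket equation. The sketch below uses an assumed dry mass and a typical storable-bipropellant specific impulse; neither figure is Juno's actual budget:

    # How much propellant would a 2.6 km/s burn take? A back-of-the-envelope
    # application of the Tsiolkovsky rocket equation. The dry mass and specific
    # impulse are assumptions for illustration only.
    import math

    delta_v = 2600.0      # m/s, the boost quoted in the article
    isp = 320.0           # s, typical for a hypergolic bipropellant engine (assumed)
    g0 = 9.80665          # m/s^2
    dry_mass = 1600.0     # kg, assumed spacecraft mass after the burn

    mass_ratio = math.exp(delta_v / (isp * g0))        # m0 / mf
    propellant = dry_mass * (mass_ratio - 1.0)
    print(f"Mass ratio required: {mass_ratio:.2f}")    # ~2.29
    print(f"Propellant needed:   {propellant:.0f} kg") # ~2,060 kg for a 1,600 kg spacecraft

A burn that needs on the order of a spacecraft's own dry mass in propellant is exactly the kind of manoeuvre interplanetary probes are never equipped for late in a mission.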

Ars asked Scott Bolton, lead scientist on the Juno mission, and he confirmed that the spacecraft lacks the oomph required for the kind of maneuvers proposed by Loeb. "We had no role in that paper," Bolton told Ars. "He assumed propellant that we don't really have."

[...] Loeb's calculations also help illustrate the difficulty of pulling off a mission to an interstellar object. So far, we've only known about an incoming interstellar intruder a few months before it comes closest to Earth. That's not to mention the enormous speeds at which these objects move through the Solar System. It's just not feasible to build a spacecraft and launch it on such short notice.

Now, some scientists are working on ways to overcome these limitations.

One of these people is Colin Snodgrass, an astronomer and planetary scientist at the University of Edinburgh. A few years ago, he helped propose to the European Space Agency a mission concept that would have very likely been laughed out of the room a generation ago. Snodgrass and his team wanted a commitment from ESA of up to $175 million (150 million euros) to launch a mission with no idea of where it would go.

ESA officials called Snodgrass in 2019 to say the agency would fund his mission, named Comet Interceptor, for launch in the late 2020s. The goal of the mission is to perform the first detailed observations of a long-period comet. So far, spacecraft have only visited short-period comets that routinely dip into the inner part of the Solar System.

[...] Long-period comets are typically discovered a year or two before coming near the Sun, still not enough time to develop a mission from scratch. With Comet Interceptor, ESA will launch a probe to loiter in space a million miles from Earth, wait for the right comet to come along, then fire its engines to pursue it.

Odds are good that the right comet will come from within the Solar System. "That is the point of the mission," Snodgrass told Ars.

[...] "You don't have to claim that they're aliens to make these exciting," Snodgrass said. "They're interesting because they are a bit of another solar system that you can actually feasibly get an up-close view of, even the sort of telescopic views we're getting now."

[...] Snodgrass sees Comet Interceptor as a proof of concept for scientists to propose a future mission specially designed to travel to an interstellar object. "You need to figure out how do you build the souped-up version that could really get to an interstellar object? I think that's five or 10 years away, but [it's] entirely realistic."

Scientists in the United States are working on just such a proposal. A team from the Southwest Research Institute completed a concept study showing how a mission could fly by one of these interstellar visitors. What's more, the US scientists say their proposed mission could have actually reached 3I/ATLAS had it already been in space.

The American concept is similar to Europe's Comet Interceptor in that it will park a spacecraft somewhere in deep space and wait for the right target to come along. The study was led by Alan Stern, the chief scientist on NASA's New Horizons mission that flew by Pluto a decade ago. "These new kinds of objects offer humankind the first feasible opportunity to closely explore bodies formed in other star systems," he said.

It's impossible with current technology to send a spacecraft to match orbits and rendezvous with a high-speed interstellar comet. "We don't have to catch it," Stern recently told Ars. "We just have to cross its orbit. So it does carry a fair amount of fuel in order to get out of Earth's orbit and onto the comet's path to cross that path."

[...] A mission to encounter an interstellar comet requires no new technologies, Stern said. Hopes for such a mission are bolstered by the activation of the US-funded Vera Rubin Observatory, a state-of-the-art facility high in the mountains of Chile set to begin deep surveys of the entire southern sky later this year. Stern predicts Rubin will discover "one or two" interstellar objects per year. The new observatory should be able to detect the faint light from incoming interstellar bodies sooner, providing missions with more advance warning.

"If we put a spacecraft like this in space for a few years, while it's waiting, there should be five or 10 to choose from," he said.

[...] "Each time that ESA has done a comet mission, it's done something very ambitious and very new," Snodgrass said. "The Giotto mission was the first time ESA really tried to do anything interplanetary... And then, Rosetta, putting this thing in orbit and landing on a comet was a crazy difficult thing to attempt to do."

"They really do push the envelope a bit, which is good because ESA can be quite risk averse, I think it's fair to say, with what they do with missions," he said. "But the comet missions, they are things where they've really gone for that next step, and Comet Interceptor is the same. The whole idea of trying to design a space mission before you know where you're going is a slightly crazy way of doing things. But it's the only way to do this mission. And it's great that we're trying it."


Original Submission

posted by hubie on Wednesday November 05, @09:55AM

A handful of bat species hunt birds, and new sensor data tells us how:

There are three species of bats that eat birds. We know that because we have found feathers and other avian remains in their feces. What we didn't know was how exactly they hunt birds, which are quite a bit heavier, faster, and stronger than the insects bats usually dine on.

To find out, Elena Tena, a biologist at Doñana Biological Station in Seville, Spain, and her colleagues attached ultra-light sensors to Nyctalus lasiopterus, the largest bats in Europe. What they found was jaw-droppingly brutal.

Nyctalus lasiopterus, otherwise known as greater noctule bats, have a wingspan of about 45 centimeters. They have reddish-brown or chestnut fur with a slightly paler underside, and usually weigh around 40 to 60 grams. Despite that minimal weight, they are the largest of the three bat species known to eat birds, so the key challenge in getting a glimpse into the way they hunt was finding sensors light enough to not impede the bats' flight.

[...] In recent years, the technology and miniaturization finally caught up with Tena's needs, and the team found the right sensors for the job and attached them to 14 greater noctule bats over the course of two years. The tags used in the study weighed around four grams, could run for several hours, and registered sound, altitude, and acceleration. This gave Tena and her colleagues a detailed picture of the bats' behavior in the night sky. The recordings included both ambient environmental sounds and the ultrasonic bursts bats use for echolocation. Combining altitude with accelerometer readouts enabled scientists to trace the bats' movements through all their fast-paced turns, dives, and maneuvers.
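
To give a flavour of what that post-processing looks like, here is a minimal sketch that flags dives and hard manoeuvres from altitude and accelerometer channels. The synthetic samples and thresholds are illustrative assumptions, not values from the study:

    # Flag steep dives (from the altitude channel) and hard manoeuvres (from the
    # accelerometer) in a short synthetic tag recording. All numbers are made up.
    altitude_m = [1200, 1195, 1150, 1080, 990, 900, 830, 790, 780, 778]   # 1 Hz samples
    accel_g    = [1.0, 1.1, 2.4, 3.0, 3.2, 2.8, 2.1, 1.4, 1.0, 1.0]       # total acceleration

    DIVE_RATE = 30.0   # metres of descent per second that we call a "dive" (assumed)
    HIGH_G = 2.0       # acceleration threshold for a hard manoeuvre (assumed)

    for t in range(1, len(altitude_m)):
        descent_rate = altitude_m[t - 1] - altitude_m[t]   # metres lost in one second
        diving = descent_rate >= DIVE_RATE
        manoeuvring = accel_g[t] >= HIGH_G
        if diving or manoeuvring:
            print(f"t={t:2d}s  descent={descent_rate:5.1f} m/s  accel={accel_g[t]:.1f} g  "
                  f"{'DIVE' if diving else ''} {'HIGH-G' if manoeuvring else ''}")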

A study from 2000 hypothesized that greater noctule bats most likely attack birds at their roosts, where they're most vulnerable. When Tena recovered the sensors and downloaded the data, she learned the bats did no such thing. Instead, they engaged the birds at high altitudes, like World War II interceptors attacking formations of bombers—think Steven Spielberg's Masters of the Air drama. And it wasn't pretty.

The Masters of the Air comparisons are justified in that the bats used similar tactics. But because they're solitary hunters, they performed them individually, not in larger groups. Their attacks on birds, which the team later identified as European robins, began with the bats climbing very high—up to 1.2 kilometers into the night sky over Spain, where the study took place.

The bats then started diving down, issuing bursts of echolocation buzzes to find their prey and lock onto a single target. The pursuit was significantly longer than the roughly 10 seconds that other bat species need to catch far weaker and lighter insects. It took from half a minute to nearly three minutes from the beginning of the dive to the last registered distress calls of an unlucky bird. "They most likely kill the birds with a bite," Tena said.

But even more surprising than the hunt itself was the way bats handled their prey after a successful attack. Tena's team found severed avian wings on the ground beneath the location of the aerial battles between birds and bats. "That was another thing we learned," Tena said. "Bats that managed to catch a bird did not land—their altitude did not change. They were consuming those birds mid-air." Tena thinks the bats bite off the wings to reduce drag and the weight of the bird.

The bats were eating their catches in the sky, as evidenced by registered chewing sounds, which lasted for 23 minutes. "At this point, it is unclear why they don't land to eat. They hunt at high altitudes, so perhaps the energy expenditure to land, eat, and climb back up again would be too high to go through it," Tena suggested. "Overall, the way they handle birds is quite similar to the way they handle insects."

Tena thinks passerine birds flying at high altitudes at night are a food source that very few predators have managed to tap into. Falcons, which can also hunt migrating birds in flight, usually do so during the day. Nocturnal avian predators like owls, on the other hand, typically do not fly that high and hunt closer to the ground. Greater noctule bats can likely feed on night-flying passerine birds without any formidable competition.

Journal Reference: https://doi.org/10.1126/science.adr2475


Original Submission

posted by hubie on Wednesday November 05, @05:06AM
from the grok-is-this-real? dept.

https://distrowatch.com/dwres.php?resource=showheadline&story=20085

SUSE has announced that SUSE Linux Enterprise 16, which is scheduled for release on November 4th, will be the first enterprise-focused Linux distribution to include agentic AI.

"SLES 16 introduces agentic AI, with an implementation of the Model Context Protocol (MCP) standard. The SUSE Linux agentic AI implementation gives enterprises a secure, extensible way to connect AI models with external tools and data sources, while preserving freedom to choose and extend their preferred AI providers without lock-in. It provides a resilient and secure foundation, combining long-term lifecycle guarantees and enterprise-grade automation."

SUSE has also stated SLE 16 will receive up to 16 years of support. Further details are provided in the company's announcement.
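
For readers unfamiliar with MCP: it is a JSON-RPC 2.0-based protocol in which an AI client discovers and invokes "tools" exposed by servers. The snippet below only sketches the shape of a tool-call request; the tool name and arguments are invented for illustration, and a real SLES integration would go through an MCP client/server library rather than hand-rolled JSON:

    # The shape of an MCP tool invocation: a JSON-RPC 2.0 request using the
    # "tools/call" method. The tool name and arguments are hypothetical.
    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "list_pending_patches",         # hypothetical tool exposed by a server
            "arguments": {"severity": "critical"},  # hypothetical tool arguments
        },
    }
    print(json.dumps(request, indent=2))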


Original Submission

posted by jelizondo on Wednesday November 05, @12:22AM
from the no-snoop-no-service dept.

Tom's Hardware published an interesting story about a company using a remote kill command to disable a robo vacuum:

Manufacturer issues remote kill command to disable smart vacuum after engineer blocks it from collecting data — user revives it with custom hardware and Python scripts to run offline

An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device. That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to. The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. While his smart gadget worked for a while, it just refused to turn on soon after. After a lengthy investigation, he discovered that a remote kill command had been issued to his device.

He sent it to the service center multiple times, wherein the technicians would turn it on and see nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again. After several rounds of back-and-forth, the service center probably got tired and just stopped accepting it, saying it was out of warranty.

Since the A11 is a smart device, it had an AllWinner A33 SoC with a TinaLinux operating system, plus a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware.
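
As a flavour of what "Python scripts to control them with a computer" can look like, here is a minimal pyserial sketch for poking a motor controller over a UART header. The port, baud rate, and command strings are all hypothetical; the A11's GD32F103 firmware speaks its own undocumented protocol:

    # Minimal sketch: send simple commands to a microcontroller over a serial
    # link and print its replies. Port, baud rate, and the command syntax are
    # assumptions for illustration, not the vacuum's real protocol.
    import time
    import serial  # pip install pyserial

    PORT = "/dev/ttyUSB0"   # assumed USB-to-UART adapter wired to the MCU header
    BAUD = 115200           # assumed

    def send(ser, command: bytes):
        """Send one command line and print whatever the controller answers."""
        ser.write(command + b"\n")
        reply = ser.readline()
        print(f"> {command.decode()}  < {reply.decode(errors='replace').strip()}")

    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        send(ser, b"DRIVE 100 100")   # hypothetical: both wheels forward
        time.sleep(2)
        send(ser, b"DRIVE 0 0")       # hypothetical: stop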

[...] In the end, the owner was able to run his vacuum fully locally without manufacturer control after all the tweaks he made. This helped him retake control of his data and make use of his $300 software-bricked smart device on his own terms. As for the rest of us who don't have the technical knowledge and time to follow his accomplishments, his advice is to "Never use your primary WiFi network for IoT devices" and to "Treat them as strangers in your home."


Original Submission

posted by jelizondo on Tuesday November 04, @07:36PM

Interesting Engineering published an article about a new mathematical study that dismantles the simulation hypothesis once and for all.

The idea that we might be living inside a vast computer simulation, much like in The Matrix, has fascinated philosophers and scientists for years. But a new study from researchers at the University of British Columbia's Okanagan campus has delivered a decisive blow to that theory.

According to Dr. Mir Faizal, Adjunct Professor at UBC Okanagan's Irving K. Barber Faculty of Science, and his international collaborators, the structure of reality itself makes simulation impossible.

Their work shows that no computer, no matter how advanced, could ever reproduce the fundamental workings of the universe.

Their research goes further than rejecting the simulation theory. It suggests that reality is built on a kind of understanding that cannot be reduced to computational rules or algorithms.

The researchers approached the simulation question through mathematics and physics rather than philosophy. They explored whether the laws governing the universe could, in theory, be recreated by a computer system.

"It has been suggested that the universe could be simulated," says Dr. Faizal. "If such a simulation were possible, the simulated universe could itself give rise to life, which in turn might create its own simulation.

This recursive possibility makes it seem highly unlikely that our universe is the original one, rather than a simulation nested within another simulation."

[Journal Reference]: https://jhap.du.ac.ir/article_488.html


Original Submission

posted by jelizondo on Tuesday November 04, @02:52PM
from the your-taxes-at-work dept.

The Las Vegas Metropolitan Police Department just launched the world's first public-safety fleet built from Tesla Cybertrucks:

The Las Vegas Metropolitan Police Department (LVMPD) has officially unveiled the world's first public-safety fleet built entirely from Tesla Cybertrucks, marking a new chapter in electric law enforcement. Ten Cybertrucks, modified by Unplugged Performance's UP.FIT division, were revealed at a ceremony in Las Vegas. Each truck has been outfitted for full patrol capability, complete with emergency lighting, integrated communication systems, police-grade tires, and dedicated storage for gear and equipment.

The department said the Cybertruck program was funded through private partnerships, not taxpayer dollars, and will serve as both a pilot and proof-of-concept for future electrified patrol fleets. According to LVMPD, the goal is to test how electric vehicles perform in 24-hour duty cycles and under Nevada's extreme climate conditions.

Choosing the Cybertruck wasn't just about style. The vehicle's stainless-steel body, torque-heavy dual-motor setup, and adaptive air suspension make it well-suited to the demands of police work, from pursuit operations to disaster-response deployments. It also delivers the quiet operation and low running costs that have made EVs increasingly attractive to municipal fleets.

For Tesla, this debut is a welcome shift in attention following a series of controversies. International regulators have challenged the truck's design and pedestrian-safety credentials, preventing sales in the EU. The Las Vegas fleet, by contrast, provides the company with an opportunity to highlight the truck's practical capabilities in a legitimate, mission-critical setting.

Meanwhile, Tesla's production and allocation strategy for the Cybertruck remains in flux. Recently, unsold units have been redirected to Elon Musk's other ventures, SpaceX and xAI, for internal fleet use. That underscores the challenge of managing public demand while meeting niche commercial orders like this one.

[...] The Las Vegas deployment may set a precedent for police agencies across the country considering EVs for frontline duty. While the Cybertruck's unconventional design has polarized public opinion, its blend of durability and zero-emission performance could prove ideal for roles that demand high torque, instant power, and long idle times.

This rollout also offers Tesla a chance to reframe its narrative, shifting from social-media spectacle to public-sector innovation. If the LVMPD's results show measurable efficiency and reliability gains, Cybertrucks could become a common sight in law-enforcement fleets within a few years.


Original Submission

posted by Fnord666 on Tuesday November 04, @10:08AM

Tesla's 'Robotaxis' Keep Crashing, Even With Human 'Safety Monitors' Onboard:

Tesla's pilot "robotaxi" program is facing mounting scrutiny after multiple incidents in Austin, Texas, where the company's driverless cars have reportedly been involved in several low-speed crashes despite having human safety monitors on board. Spotted by Electrek and found here on the NHTSA website, both cite federal reports confirming at least four accidents since the fleet quietly began operations this summer.

The NHTSA is already investigating Tesla's Full Self-Driving (FSD) software over erratic traffic behavior, and the robotaxi crashes appear to extend those concerns into Tesla's dedicated autonomous service. The agency said it is reviewing new reports related to these test vehicles as it evaluates whether Tesla's systems meet federal safety standards.

Each Tesla robotaxi currently operates with a safety monitor in the driver's seat, ready to take control if the system fails. But several of the Austin crashes occurred while the vehicles were moving slowly or stationary; one incident involved contact with a fixed object in a parking area. Analysts say this suggests the system's perception and decision-making may not be giving monitors enough time to react, a key issue NHTSA has previously flagged in other FSD-related investigations.

[...] While Tesla's technology ambitions remain unmatched in scale, its safety record continues to trail several competitors in key metrics. A new industry report found that long-term battery reliability may be stronger elsewhere: Tesla ranks behind Kia in overall battery longevity for used EVs and plug-in hybrids, signaling that rivals are quietly catching up in key technical areas.

[...] For Tesla, the robotaxi initiative represents both its boldest gamble and its biggest regulatory risk. Despite years of promises about driverless capability, the company still faces federal oversight, unresolved safety probes, and a string of real-world mishaps that threaten public confidence. Each new incident underscores how complex full autonomy remains, even for a company that dominates global EV sales.

Until Tesla provides transparent data on crash frequency and performance, or demonstrates consistent reliability in live service, its robotaxi fleet will likely remain in testing limbo. For now, the only certainty is that the road to driverless mobility is proving bumpier than Tesla expected.


Original Submission

posted by Fnord666 on Tuesday November 04, @05:23AM

Once Again, Chat Control Flails After Strong Public Pressure:

The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.

EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it's been shot down because there's no public support. The fight is delayed, but not over.

It's time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don't violate the human rights of people around the world.

As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it's an attack on fundamental human rights.

The coming EU presidencies should abandon these attempts and work on finding a solution that protects people's privacy and security.

Previously:
    • Scientists Urge EU Governments to Reject Chat Control Rules
    • EU Chat Control Law Proposes Scanning Your Messages — Even Encrypted Ones
    • EU Parliament's Research Service Confirms: Chat Control Violates Fundamental Rights
    • Client Side Scanning May Cost More Than it Delivers


Original Submission

posted by mrpg on Tuesday November 04, @12:39AM
from the was-it-worth-it dept.

https://www.theregister.com/2025/10/24/former_l3harris_cyber_director_charged/

Federal prosecutors have charged a former general manager of US government defense contractor L3Harris's cyber arm Trenchant with selling secrets to an unidentified Russian buyer for $1.3 million.

According to the Justice Department, Peter Williams stole seven trade secrets belonging to two unnamed companies between April 2022 and June 2025 "knowing and intending those secrets to be sold outside of the United States, and specifically to a buyer based in the Russian Federation."

The court documents [PDF*] don't specify what the trade secrets involved, but Williams worked as a director and general manager at L3Harris' Trenchant division, which develops cyber weapons.

According to the company's website, it supports "national security operations with end-point intelligence solutions," and is "a world authority on cyber capabilities, operating in the fields of computer network operations and vulnerability research."

This is corporate speak for offensive cyber tech, such as zero-day exploits and surveillance tools. But Trenchant claims it uses its cyber powers for good, not evil.

Links in article:
* https://regmedia.co.uk/2025/10/23/peter_williams_charges.pdf
https://www.l3harris.com/all-capabilities/trenchant
https://www.l3harris.com/all-capabilities/offensive-cyber


Original Submission