Stories
Slash Boxes
Comments

SoylentNews is people

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

On my linux machines, I run a virus scanner . . .

  • regularly
  • when I remember to enable it
  • only when I want to manually check files
  • only on my work computers
  • never
  • I don't have any linux machines, you insensitive clod!

[ Results | Polls ]
Comments:42 | Votes:371

posted by janrinok on Saturday November 22, @11:41PM   Printer-friendly

https://www.theguardian.com/world/2025/nov/15/icelandic-is-in-danger-of-dying-out-because-of-ai-and-english-language-media-says-former-pm

Iceland's former prime minister, Katrín Jakobsdóttir, has said that the Icelandic language could be wiped out in as little as a generation due to the sweeping rise of AI and encroaching English language dominance.

Katrín, who stood down as prime minister last year to run for president after seven years in office, said Iceland was undergoing "radical" change when it came to language use. More people are reading and speaking English, and fewer are reading in Icelandic, a trend she says is being exacerbated by the way language models are trained.

She made the comments before her appearance at the Iceland Noir crime fiction festival in Reykjavík after the surprise release of her second novel of the genre, which she co-wrote with Ragnar Jónasson.

"A lot of languages disappear, and with them dies a lot of value[and] a lot of human thought," she said. Icelandic has only about 350,000 speakers and is among the world's least-altered languages.

"Having this language that is spoken by so very few, I feel that we carry a huge responsibility to actually preserve that. I do not personally think we are doing enough to do that," she said, not least because young people in Iceland "are absolutely surrounded by material in English, on social media and other media".

Katrín has said that Iceland has been "quite proactive" in pushing for AI to be usable in Icelandic. Earlier this month, Anthropic announced a partnership with Iceland's ministry of education, one of the world's first national AI education pilots. The partnership is a nationwide pilot across Iceland – giving hundreds of teachers across Iceland access to AI tools.

During her time in government, Katrín said they could see the "threats and dangers of AI" and the importance of ensuring that Icelandic texts and books were used to train it.

Ragnar Jónasson, her co-author, agreed that the language was in grave danger. "We are just a generation away from losing this language because all of these huge changes," he said.

"They are reading more in English, they are getting their information from the internet, from their phones, and kids in Iceland are even conversing in English sometimes between themselves."

Citing what happened when Iceland was under Danish rule until 1918, when the Icelandic language was subjected to Danish influence, Katrín said changes could happen "very quickly".

"We have seen that before here in Iceland because we of course were under the Danes for quite a long time and the Danish language had a lot of influence on the Icelandic language."

Thatchange, however, was turned around rapidly by a strong movement by Icelanders, she added.

"Maybe we need a stronger movement right now to talk about why do we want to preserve the language? That is really the big thing that we should be talking about here in Iceland," she said, adding that the "fate of a nation" could be decided on how it treated its language, as language shaped the way people thought.

While there are "amazing opportunities" that AI could present, she said it posed enormous challenges to authors and the creative industry as a whole.

Previously, she thought that the existence of human authors was important to readers, but after discovering that people had forged relationships with AI she was now not so sure.

"We are in a very challenging time and my personal opinion is that governments should stay very focused on the development of AI."

Amid all the change and talk of AI domination, Katrín hopes her new book, which soared to the top of the charts in Iceland and is set in 1989 in Fáskrúðsfjörður, a remote village in eastern Iceland, connects with readers on a human level.

On research trips the writers spoke to villagers who were working in Icelandic media in the 1980s for background on their lead character, who is a journalist.

"I hope this is something people experience as something authentic and coming from the heart," she said.

For Katrín, reading and writing have always been therapeutic. "You learn more empathy when you read about others, you understand yourself better," she said.


Original Submission

posted by janrinok on Saturday November 22, @06:58PM   Printer-friendly

https://phys.org/news/2025-11-large-scale-vr-classroom-boundaries.html

The use of virtual reality (VR) is expanding across industries, but its large-scale application in educational settings has remained largely unexplored. As the technical capabilities and affordability of VR tools continue to improve, Waterloo researcher Dr. Ville Mäkelä is turning his classroom into a living lab to better understand how VR can enrich the student experience.

Mäkelä and colleagues Dr. Daniel Harley and Dr. Cayley MacArthur piloted the first class in Canada to offer large-scale, VR-centered 3D design at the Stratford School of Interaction Design and Business. Throughout the term, students used VR headsets and the design software Gravity Sketch, already used by companies including New Balance for product design, to create characters and objects in an immersive environment.

From its initial offering in 2024, Mäkelä has taught 200 students over four sections and co-authored a research paper about integrating VR into the classroom. The study is published in the Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.

"Our prediction is that VR will be increasingly relevant to many careers," he says, "so future graduates need to know how to navigate VR technology and understand its opportunities and limitations."

The work positions Waterloo as a leader in expanding our understanding of how technology adoption impacts classroom learning. "There aren't many examples out there of mass adoption of VR in university classes," Mäkelä says. "A lot had to happen before something like this was possible."

Between budgeting for equipment, deciding on headset models, developing protocols for equipment use and finding a space large enough to accommodate multi-user VR interaction, there was a lot to prepare on top of regular course planning.

After the first class launched, cyber sickness, a kind of motion sickness triggered by exposure to a virtual environment, presented a challenge. "It became very clear during these courses that the symptoms and how they develop can vary quite significantly," Mäkelä says, adding that moderating use of the headsets and offering non-VR alternatives for assignments became key strategies to support students. "It's interesting that despite all these issues, students were very positive about the VR experience and using the technology."

Another challenge was effectively communicating with students. Although everyone was physically together in a classroom, demonstrating a virtual application to someone outside of it presented a unique problem. Mäkelä turned to screencasting as an innovative way of lecturing that allows the VR user to stream their view from inside the headset onto an external screen.

"It's such a different technology, not just for students but for instructors," he says. Although it required practice, screencasting became an effective tool to offer mass tutorials and support peer learning and group activities among students.

The class pushed the boundaries of traditional education not only through its content and delivery but also through its relationship with students, who were at once research participants and co-learners in navigating this new technology.

"For a lot of people, including myself, it was the first time using VR and for almost everyone the first time designing in VR," says Brooke Eyram (BGBDA '24), who took the first iteration of the course in her fourth year.

Being part of this cohort meant that her input, and every student's since, has been invaluable to further developing the course. "Professor Mäkelä was very open to feedback at all stages," she says, emphasizing how impactful it was to influence and shape her own and others' experiences as a student.

On top of the opportunity to engage with VR in the class, she adds that the experience has helped empower and equip her for life after school by giving her tools to navigate an up-and-coming technology. "Just like how AI is growing, it's really important to be aware of and develop skills that relate to VR, because that can be the future of the market."

In their paper presented in Japan earlier this year, Mäkelä and colleagues shared key findings from their ongoing research on large-scale VR in the classroom, including the need for careful planning, flexibility, collaboration and student-driven learning.

The first of its kind, this study plays an important role in sharing best practices and opportunities with fellow educators, shaping the future of technology in the classroom.

"Thanks to the embodied way of seeing and doing things in VR, design becomes a more experiential practice," Mäkelä says. "These immersive, embodied and interactive aspects of VR enable ways of learning that no other technology or approach can deliver."

More information: Ville Mäkelä et al, Integrating Virtual Reality Head-Mounted Displays into Higher Education Classrooms on a Large Scale, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (2025). DOI: 10.1145/3706598.3713690
       


Original Submission

posted by hubie on Saturday November 22, @02:09PM   Printer-friendly
from the all-powerful-Clippy dept.

https://www.osnews.com/story/143868/microsoft-warns-its-new-ai-agents-in-windows-can-install-malware/

Microsoft has just announced a whole slew of new "AI" features for Windows, and this time, they'll be living in your taskbar.


Microsoft is trying to transform Windows into a "canvas for AI," with new AI agents integrated into the Windows 11 taskbar. These new taskbar capabilities are designed to make AI agents feel like an assistant in Windows that can go off and control your PC and do tasks for you at the click of a button. It's part of a broader overhaul of Windows to turn the operating system into an "agentic OS."
[...]

Microsoft is integrating a variety of AI agents directly into the Windows 11 taskbar, including its own Microsoft 365 Copilot and third-party options. "This integration isn't just about adding agents; it's about making them part of the OS experience," says Windows chief Pavan Davuluri.

↫ Tom Warren at The Verge

These "AI" agents will control your computer, applications, and files for you, which may make some of you a little apprehensive, and for good reason. "AI" tools don't have a great track record when it comes to privacy – Windows Recall comes to mind – and as such, Microsoft claims this time, it'll be different. These new "AI" agents will run in what are essentially dedicated Windows accounts acting as sandboxes, to ensure they can only access certain resources.

While I find the addition of these "AI" tools to Windows insufferable and dumb, I'm at least glad Microsoft is taking privacy and security seriously this time, and I doubt Microsoft would repeat the same mistakes they made with the entirely botched rollout of Windows Recall. in addition, after the Cloudstrike fiasco, Microsoft made clear commitments to improve its security practices, which further adds to the confidence we should all have these new "AI" tools are safe, secure, and private.

But wait, what's this?

Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.
        ↫ Microsoft support document about the new "AI" features

Microsoft's new "AI" features can go out and install malware without your consent, because these features possess the access and privileges to do so. The mere idea that some application – which is essentially what these "AI" features really are – can go out onto the web and download and install whatever it wants, including malware, "on your behalf", in the background, is so utterly dystopian to me I just can't imagine any serious developer looking at this and thinking "yeah, ship it".

I'm living in an insane asylum.

More details from the Microsoft link:

We recommend that you only enable this feature if you understand the security implications outlined on this page. This setting can only be enabled by an administrator user of the device and once enabled, it's enabled for all users on the device including other administrators and standard users.

[...] Agentic AI has powerful capabilities today—for example, it can complete many complex tasks in response to user prompts, transforming how users interact with their PCs. As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.

Related: SUSE to Include Agentic AI in SLE 16


Original Submission

posted by hubie on Saturday November 22, @09:24AM   Printer-friendly

One of my SF nightmares from Journey to Madness has already come true. I found out about this last night (11/12) on the Colbert show's "Cyborgasm" segment after a demo of a walking android not quite walking. It's an AI generated country singer named "Breaking Rust" singing a song it wrote, produced, and recorded named "Walk My Walk" and has hit number one on the country charts.

It's covered in ABC News, USA Today, and quite a few outlets including, of course, the entertainment rags.

I thought the robot song was even more lame, uninspired, and formulaic than most pop music, but I'm not a country or pop fan. Aparently it created some controversy.


Original Submission

posted by hubie on Saturday November 22, @04:42AM   Printer-friendly

https://hackaday.com/2025/11/12/join-the-the-newest-social-network-and-party-like-its-1987/

Algorithms? Datamining? Brainrot? You don't need those things to have a social network. As we knew back in the BBS days, long before anyone coined the phrase "social network", all you need is a place for people to make text posts. [euklides] is providing just such a place, at cyberspace.online.

It's a great mix of old and new — the IRC inspired chatrooms, e-mail inspired DMs ("cybermail") make it feel like the good old days, while a sprinkling of more modern concepts such as friends lists, a real-time feed, and even the late-lamented "poke" feature (from before Facebook took over the world) provide some welcome conveniences.

The pursuit of retro goes further through the themed web interface, as well. Sure, there's light mode and dark mode, but that's de rigueur. Threads might not offer a blue-and-white Commodore 64 theme, and you'd have little luck getting Bluesky to mimic the soothing amber glow of a VT-230, but Cyberspace offers that and more.

It's also niche enough that there's nobody here but us chickens. That is, it looks like a site for geeks, nerds, tech enthusiasts — whatever you want to call us — it might just be via "security by obscurity", but Cyberspace doesn't seem likely to attract quite the same Eternal September the rest of the internet is drowning under.

In the Reddit thread where the project was announced, there's talk of a CLI tool under development. In Rust, because that's just what all the cool kids are using these days it seems. A text-based interface, be it under DOS or something POSIX-compliant, seems like it would be the perfect fit for this delightful throwback site.

If nobody will join your homebuilt BBS, this might be the next best thing. For those of you who wonder where the hack is: this is a one-man show. If making your own social network in a cave with a box of scraps doesn't count as a hack, what does?


Original Submission

posted by hubie on Friday November 21, @11:54PM   Printer-friendly

https://itsfoss.com/news/kaspersky-for-linux/

Is Kaspersky for Linux the security solution we've been waiting for? Or is it just security theater for paranoid penguins?

The Linux ecosystem is facing increasing pressure from threat actors, who are getting more clever day-by-day, threatening critical infrastructure worldwide. Servers powering essential services, industrial control systems, and enterprise networks all rely on Linux, and these attackers know it.

What was once considered a relatively safe ecosystem is now a lucrative target. 🥲

This brings us to Kaspersky, the Russian cybersecurity firm with a reputation. The company was banned from selling its antivirus software and cybersecurity products in the U.S. back in July 2024.

But for users outside the U.S., Kaspersky just announced something interesting. They are bringing antivirus protection to home Linux users. Though, it remains to be seen, whether this addresses genuine security needs or if it's just security theater for worried penguins.

Kaspersky for Linux: What Does it Offer?

Kaspersky has expanded its consumer security lineup to include Linux. This marks the first time their home user products officially support the platform. The company adapted their existing business security solution for home users. Support covers major 64-bit distributions, including Debian, Ubuntu, Fedora, and RED OS.

Depending on the plan you opt for, the feature set includes real-time monitoring of files, folders, and applications to detect and eliminate malware. Behavioral analysis detects malware on the device for proactive defense.

Removable media like USB drives and external hard drives get scanned automatically upon connection. This prevents the spread of viruses across devices and networks.

Anti-phishing alerts users when attempting to follow phishing links in emails and on websites. Online payment protection verifies the security of bank websites and online stores before financial transactions.

Anti-cryptojacking prevents unauthorized crypto mining on devices to protect system performance, and AI-powered scanning blocks infected files, folders, and applications upon detecting viruses, ransomware trojans, password stealers, and other malware.

Though, there is one important thing to consider: Kaspersky for Linux isn't GDPR-ready, so keep this in mind if you are an EU-based user concerned about data protection compliance.

Get Kaspersky for Linux

An active paid subscription is required to download and use Kaspersky for Linux. A 30-day free trial is available for users who want to test before committing to a paid plan. Both DEB and RPM packages are provided for easy installation.

The official installation guide contains detailed setup instructions.

Via: Phoronix


Original Submission

posted by hubie on Friday November 21, @07:10PM   Printer-friendly

https://www.theregister.com/2025/11/18/google_chrome_seventh_0_day/

Seventh Chrome 0-day this year

Google pushed an emergency patch on Monday for a high-severity Chrome bug that attackers have already found and exploited in the wild.

The vulnerability, tracked as CVE-2025-13223, is a type confusion flaw in the V8 JavaScript engine, and it's the seventh Chrome zero-day this year. All have since been patched. But if you use Chrome as your web browser, make sure you are running the most recent version - or risk full system compromise.

This type of vulnerability happens when the engine misinterprets a block of memory as one type of object and treats it as something it's not. This can lead to system crashes and arbitrary code execution, and if it's chained with other bugs can potentially lead to a full system compromise via a crafted HTML page.

"Google is aware that an exploit for CVE-2025-13223 exists in the wild," the Monday security alert warned.

Also on Monday, Google issued a second emergency patch for another high-severity type confusion bug in Chrome's V8 engine. This one is tracked as CVE-2025-13224. As of now, there's no reports of exploitation - so that's another reason to update sooner than later.

Google's LLM-based bug hunting tool Big Sleep found CVE-2025-13224 in October, and a human - the Chocolate Factory's own Clément Lecigne - discovered CVE-2025-13223 on November 12.

Lecigne is a spyware hunter with Google's Threat Analysis Group (TAG) credited with finding and disclosing several of these types of Chrome zero-days. While we don't have any details about who is exploiting CVE-2025-13223 and what they are doing with the access, TAG tracks spyware and nation-state attackers abusing zero days for espionage expeditions.

TAG also spotted the sixth Chrome bug exploited as a zero-day and patched in September. That flaw, CVE-2025-10585, was also a type confusion flaw in the V8 JavaScript and WebAssembly engine.


Original Submission

posted by hubie on Friday November 21, @02:23PM   Printer-friendly
from the still-might-be-three-raccoons-in-a-trenchcoat dept.

A Chinese company cut open their invention on stage to prove that it was not a human in a robot suit after comments that it looked too real. Unless it was a human with a missing leg, the robot was indeed proven to be a mechanical invention.

Technology company, Xpeng, unveiled its second-generation humanoid robot, IRON, at its AI Day in Guangzhou, China last week, rivalling Tesla's Optimus robots.
Powered by a solid-state battery and three custom AI chips, IRON features a "humanoid spine, bionic muscles, and fully covered flexible skin, and supports customisation for different body shapes."

The robot has the power to make 2,250 trillion operations per second (TOPS) and features 82 degrees of freedom, including 22 in each hand.

"Its movements are natural, smooth, and flexible, capable of achieving, catwalk walking and other high-difficulty human-like actions," Xpeng said


Original Submission

posted by jelizondo on Friday November 21, @09:34AM   Printer-friendly
from the software-freedom dept.

Software Engineer Nikita Prokopov delves into how programs have changed over recent years from doing our bidding to working against us, controlling us. This adverse change has been ushered in through requiring accounts, update processes, notifications, and on-boarding procedures.

This got so bad that when a program doesn't ask you to create an account, it feels refreshing.

"Okay, but accounts are still needed to sync stuff between machines."

Wrong. Syncthing is a secure, multi-machine distributed app and yet doesn't need an account.

"Okay, but you still need an account if you pay for a subscription?"

Mullvad VPN accepts payments and yet didn't ask me for my email.

These new, malevolent programs fight for attention rather than getting the job done while otherwise staying out of the way. Not only do they prioritize "engagement" over its opposite, "usability", they also tend to push (hostile) agendas along the way.

Previously:
(2025) What Happened to Running What You Wanted on Your Own Machine?
(2025) Passkeys Are Incompatible With Open-Source Software
(2024) Achieving Software Freedom in the Age of Platform Decay
(2024) Bruce Perens Solicits Comments on First Draft of a Post-Open License


Original Submission

posted by jelizondo on Friday November 21, @04:45AM   Printer-friendly

Developers tend to scrutinize AI-generated code less critically and they learn less from it:

When two software developers collaborate on a programming project—known in technical circles as 'pair programming'—it tends to yield a significant improvement in the quality of the resulting software. 'Developers can often inspire one another and help avoid problematic solutions. They can also share their expertise, thus ensuring that more people in their organization are familiar with the codebase,' explains Sven Apel, professor of computer science at Saarland University. Together with his team, Apel has examined whether this collaborative approach works equally well when one of the partners is an AI assistant. [...]

For the study, the researchers used GitHub Copilot, an AI-powered coding assistant introduced by Microsoft in 2021, which, like similar products from other companies, has now been widely adopted by software developers. These tools have significantly changed how software is written. 'It enables faster development and the generation of large volumes of code in a short time. But this also makes it easier for mistakes to creep in unnoticed, with consequences that may only surface later on,' says Sven Apel. The team wanted to understand which aspects of human collaboration enhance programming and whether these can be replicated in human-AI pairings. Participants were tasked with developing algorithms and integrating them into a shared project environment.

'Knowledge transfer is a key part of pair programming,' Apel explains. 'Developers will continuously discuss current problems and work together to find solutions. This does not involve simply asking and answering questions, it also means that the developers share effective programming strategies and volunteer their own insights.' According to the study, such exchanges also occurred in the AI-assisted teams—but the interactions were less intense and covered a narrower range of topics. 'In many cases, the focus was solely on the code,' says Apel. 'By contrast, human programmers working together were more likely to digress and engage in broader discussions and were less focused on the immediate task.

One finding particularly surprised the research team: 'The programmers who were working with an AI assistant were more likely to accept AI-generated suggestions without critical evaluation. They assumed the code would work as intended,' says Apel. 'The human pairs, in contrast, were much more likely to ask critical questions and were more inclined to carefully examine each other's contributions,' explains Apel. He believes this tendency to trust AI more readily than human colleagues may extend to other domains as well. 'I think it has to do with a certain degree of complacency—a tendency to assume the AI's output is probably good enough, even though we know AI assistants can also make mistakes.' Apel warns that this uncritical reliance on AI could lead to the accumulation of 'technical debt', which can be thought of as the hidden costs of the future work needed to correct these mistakes, thereby complicating the future development of the software.


Original Submission

posted by jelizondo on Friday November 21, @12:07AM   Printer-friendly

At a recent AI conference in San Francisco, over 300 founders and investors were asked a provocative question: which billion-dollar AI startup would you bet against? The answer was surprising. Perplexity AI topped the list, with OpenAI coming in second. While the OpenAI vote raised eyebrows given its market dominance, the Perplexity verdict reveals something deeper about the AI search landscape in 2025.

Perplexity Founded in 2022, the company hit a $20 billion valuation by September 2025, processing 780 million queries monthly with over 30 million active users. Impressive on paper, but the company has raised nearly $1.5 billion in funding, with valuations jumping from $500 million to $20 billion in just 18 months. Fundraising rounds roughly every two months suggests either extraordinary growth or growing desperation to prove the business model works.

Here's the uncomfortable truth: Perplexity is increasingly looking like what Silicon Valley dreads most a wrapper. The company initially had a competitive edge when it pioneered AI powered web search with real time information. But that advantage has evaporated faster than anyone expected.

[...] The AI bubble will eventually deflate. When it does, wrappers built on vanity metrics and unsustainable unit economics will be the first to go. Perplexity's 360 million "free users" in India won't save them when those users discover that ChatGPT and Google do the same thing for free and they don't need to pay ₹17,000 for the privilege.

MEDIUM.COM


Original Submission

posted by janrinok on Thursday November 20, @07:15PM   Printer-friendly

Turris, the hardware division of cz.nic CZ domain registry, has released their latest [open source] router device Omnia NG.

Coverage from cnx-software:

The Turris Omnia NG is a high-performance Wi-Fi 7 router with a mini PCIe slot for 4G/5G modems, two 10GbE SFP+ cages, a 240×240 px color display, and a D-Pad button, running OpenWrt-based Turris OS, and designed for advanced home users, small businesses, and lab environments.

Built around a 2.2 GHz Qualcomm IPQ9574 quad-core 64-bit Arm Cortex-A73 CPU, the Omnia NG supports Wi-Fi 7/6 tri-band connectivity. Additionally, it features four 2.5Gbps Ethernet ports, two USB 3.0 ports, NVMe storage support, and includes a 90 W power supply for attached peripherals. Other hardware highlights include rack-mount supports, a metal chassis, and antenna arrays for 4×4 MIMO operation. It comes 10 years after the original Turris Omnia open-source router was launched on Indiegogo.


Original Submission

posted by janrinok on Thursday November 20, @02:37PM   Printer-friendly

Use the right tool for the job:

In my first interview out of college I was asked the change counter problem:

Given a set of coin denominations, find the minimum number of coins required to make change for a given number. IE for USA coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day.

[...] Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function corresponding to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems. Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.

[...] Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solvers runtimes are unpredictable and almost always slower than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver.

[...] Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking.


Original Submission

posted by janrinok on Thursday November 20, @09:52AM   Printer-friendly

Floating solar panels show promise, but environmental impacts vary by location, study finds:

Floating solar panels are emerging as a promising clean energy solution with environmental benefits, but a new study finds those effects vary significantly depending on where the systems are deployed.

Researchers from Oregon State University and the U.S. Geological Survey modeled the impact of floating solar photovoltaic systems on 11 reservoirs across six states. Their simulations showed that the systems consistently cooled surface waters and altered water temperatures at different layers within the reservoirs. However, the panels also introduced increased variability in habitat suitability for aquatic species.

"Different reservoirs are going to respond differently based on factors like depth, circulation dynamics and the fish species that are important for management," said Evan Bredeweg, lead author of the study and a former postdoctoral scholar at Oregon State. "There's no one-size-fits-all formula for designing these systems. It's ecology - it's messy."

While the floating solar panel market is established and growing in Asia, it remains limited in the United States, mostly to small pilot projects. However, a study released earlier this year by the U.S. Department of Energy's National Renewable Energy Laboratory estimated that U.S. reservoirs could host enough floating solar panel systems to generate up to 1,476 terawatt-hours annually, enough to power approximately 100 million homes.

Floating solar panels offer several advantages. The cooling effect of the water can boost panel efficiency by an estimated 5 to 15%. The systems can also be integrated with existing hydroelectric and transmission infrastructure. They may also help reduce evaporation, which is especially valuable in warmer, drier climates.

However, these benefits come with questions about potential impacts on aquatic ecosystems, an area that has received limited scientific attention.

[...] They found that changes in temperature and oxygen dynamics caused by floating solar panels can influence habitat availability for both warm-water and cold-water fish species. For instance, cooler water temperatures in summer generally benefit cold-water species, though this effect is most pronounced when panel coverage exceeds 50%.

The researchers note the need for continued research and long-term monitoring to ensure floating photovoltaic systems support clean energy goals without compromising aquatic ecosystems.

"History has shown that large-scale modifications to freshwater ecosystems, such as hydroelectric dams, can have unforeseen and lasting consequences," Bredeweg said.

Journal Reference: https://doi.org/10.1016/j.limno.2025.126293


Original Submission

posted by janrinok on Thursday November 20, @05:04AM   Printer-friendly
from the fly-me-to-the-moon dept.

Everybody knows Intel's 4004, designed for a calculator, was the first CPU on a chip. Everybody is wrong.

For a long time, what is now considered to be a prime candidate for the title of the 'world's first microprocessor' was a very well-kept secret. The MP944 is the inauspicious name of the chip we want to highlight today. It was developed to be the brains behind U.S. Navy's F-14 Tomcat's Central Air Data Computer (CADC). Thus, it isn't surprising that the MP944 was a cut above the Intel 4004, the world's first commercial microprocessor, designed to power a desktop calculator.

The MP944 was designed by a team of engineers approximately 25-strong. Leading the two-year development of this microprocessor were Steve Geller and Ray Holt.

The processor began service, in the aforementioned F-14 flight / control computer in June 1970, over a year before Intel's 4004 would become available, in November 1971. An MP944 worked as part of a six-chip system for the real-time calculation of flight parameters such as altitude, airspeed, and Mach number – and was a key innovation to enable the Tomcat's articulated sweep-wing system.

By many accounts, the MP944 didn't just pre-date the 4004 by quite a margin, it was significantly more performant. The tweet, we embedded above, suggests Geller & Holt's design was "8x faster than the Intel 4004." Completing all the complicated polynomial calculations required by the CADC likely dictated this degree of performance it delivered.

[...] As well as offering amazing performance for the early 1970s, the MP944 had to satisfy some stringent military-minded specifications. For example, it has to remain operational in temperatures spanning -55 to +125 degrees Celsius.

Being an essential component of a flight system also meant the military pushed for safety and failsafe measures. That was tricky, with such a cutting-edge development in a new industry. What ended up being provided to the F-14 Tomcats was a system that could constantly self-diagnose issues while executing its flight computer duties. These MP944 systems could apparently switch to an identical backup unit, fitted as standard, within 1/18th of a second of a fault being flagged by the self-test system.

As mentioned above, this processor of many firsts seems to be of largely academic interest nowadays. However, if Holt's attempts to publish the research paper outlining the architecture of the F-14's MP944-powered CADC system had been cleared back in 1971, we'd surely now all be living in a different future.


Original Submission