
posted by hubie on Monday January 13, @11:34PM   Printer-friendly

https://tomscii.sig7.se/2025/01/De-smarting-the-Marshall-Uxbridge

This is the story of a commercially unavailable stereo pair of the bi-amped Marshall Uxbridge, with custom-built replacement electronics: active filters feeding two linear power amps. Listening to this high-fidelity set has brought me immense enjoyment. Play a great album on these near-fields, and the result is close to pure magic! Over and above the accurate reproduction of a wide audio range, the precision and depth of its stereo imaging is stunning.

Dumpster diving electronics is a way of life, which sometimes brings great moments of joy. One of these moments happened when I stumbled upon... the Marshall Uxbridge Voice, a smart speaker, in seemingly pristine condition. And not just one, but two of them! One was black, the other white. What a find!

What to do with these babies? Intrigued by the question "what could be wrong with them, why would someone throw them out like that?" – I set out to investigate. Plugging in one of them, after a few seconds of waiting, a female voice was heard: «NOW IN SETUP MODE. FOLLOW THE INSTRUCTIONS IN YOUR DEVICE'S COMPANION APP.»

[...] At that moment I knew I was not into smart speakers. Or at least not into the smartness. The speakers were good. Oh, they were excellent! But they had to be de-smarted. Preferably with a single, dumb, analog RCA line input on their backs, so nobody but me gets to decide over the program material. That way I could also drive them as a stereo pair. No Bluetooth, no latency, no female robot overlord, just a good old-fashioned line input!

Seems like a modest ask. Can we have it? Well, time to look inside!


Original Submission

posted by hubie on Monday January 13, @06:49PM   Printer-friendly
from the something-to-shout-about dept.

Arthur T Knackerbracket has processed the following story:

Few creatures can tangle with a velvet ant and walk away unscathed. These ground-dwelling insects are not ants, but parasitic wasps known for their excruciating stings.

Now researchers have discovered that the wasps don’t dole out pain the same way to all species. Different ingredients in their venom cocktail do the dirty work depending on who’s at the business end of a wasp’s stinger, researchers report online January 6 in Current Biology.

Velvet ants are among the most well-defended insects, wielding not just venom, but warning coloration and odor, an extremely tough exoskeleton and long stinger, and the ability to “scream” when provoked. In 2016, the entomologist Justin Schmidt wrote that getting stung by a velvet ant felt akin to “hot oil from the deep fryer spilling over your entire hand.” Scientists have found that other vertebrates react to the wasp’s sting too, including mammals, reptiles, amphibians and birds.

Other species are known to possess this type of “broad-spectrum” venom — a recent study identified a centipede with a venom cocktail that changes depending on whether the insect is acting as predator or prey. But it remains rare for one organism to be able to deter animals from so many different groups, says Lydia Borjon, a sensory neurobiologist at Indiana University Bloomington. In some cases, researchers have identified generalized venoms that zero in on molecular targets shared by different groups of creatures, passed down from when they last had a common ancestor in the distant past.

When Borjon and her colleagues first began experimenting with velvet ants, they suspected that might be the case for their venom too.

[...] This study is among the first to demonstrate multiple modes of action within a single venom and is "an important 'first pass,' using some innovative techniques to explore an interesting question," Sam Robinson, a toxinologist at the University of Queensland in Australia, says.

But the findings may be more common than they seem, he says. There’s little scientific incentive to test most venoms’ effects in different creatures, particularly if a species is a prey specialist, “and so while it seems like this is something unique, it’s hard to say with certainty,” Robinson says.

The research also adds to another enduring mystery about the velvet ant: Why it seems to have so many weapons at its disposal. Despite their extensive defensive arsenal, nothing seems to consistently eat them, nor are velvet ants aggressive predators themselves, says Joseph Wilson, an evolutionary ecologist at Utah State University in Tooele.

The fact that the ant’s venom seems to “pack a real punch” against other insects suggests that interactions with some unknown insect predator — either in the past or the present — may be driving the evolution of these features, Wilson says. Or it could just be a happy accident of evolution. “As evolutionary biologists, we try to ascribe some purpose behind these adaptations, but sometimes evolution works in mysterious ways.”

Journal Reference: L.J. Borjon et al. Multiple mechanisms of action of an extremely painful venom. Current Biology. Published online January 6, 2024. doi: 10.1016/j.cub.2024.11.070


Original Submission

posted by janrinok on Monday January 13, @02:04PM   Printer-friendly

Privacy advocate draws attention to the fact that hundreds of police surveillance cameras are streaming directly to the open internet:

Some Motorola automated license plate reader surveillance cameras are live-streaming video and car data to the unsecured internet where anyone can watch and scrape them, a security researcher has found. In a proof-of-concept, a privacy advocate then developed a tool that automatically scans the exposed footage for license plates, and dumps that information into a spreadsheet, allowing someone to track the movements of others in real time.

Matt Brown of Brown Fine Security made a series of YouTube videos showing vulnerabilities in a Motorola Reaper HD ALPR that he bought on eBay. As we have reported previously, these ALPRs are deployed all over the United States by cities and police departments. Brown initially found that it is possible to view the video and data that these cameras are collecting if you join the private networks that they are operating on. But then he found that many of them are misconfigured to stream to the open internet rather than a private network.

"My initial videos were showing that if you're on the same network, you can access the video stream without authentication," Brown told 404 Media in a video chat. "But then I asked the question: What if somebody misconfigured this and instead of it being on a private network, some of these found their way onto the public internet?" 

In his most recent video, Brown shows that many of these cameras are indeed misconfigured to stream both the video and the data they are collecting to the open internet, and their IP addresses can be found using the Internet of Things search engine Censys. The streams can be watched without any sort of login.

In many cases, they are streaming color video as well as infrared black-and-white video of the streets they are surveilling, and are broadcasting that data, including license plate information, onto the internet in real time.


Original Submission

posted by hubie on Monday January 13, @09:17AM   Printer-friendly
from the shining-a-light-on-a-problem-in-the-auto-industry dept.

https://theringer.com/2024/12/03/tech/headlight-brightness-cars-accidents

The sun had already set in Newfoundland, Canada, and Paul Gatto was working late to give me a presentation on headlights. This, it should be said, is not his job. Not even close, really. Gatto, 28, is a front-end developer by day, working for a weather application that's used by the majority of Canadian meteorologists, he told me on a video call, occasionally hitting his e-cig or sipping on a Miller Lite. As to how he ended up as one of the primary forces in the movement to make car headlights less bright—a movement that's become surprisingly robust in recent years—even Gatto can't really explain.

"It is fucking weird," he said. "I need something else to do with my spare time. This takes a lot of it."

Gatto is the founder of the subreddit r/FuckYourHeadlights, the internet's central hub for those at their wits' end with the current state of headlights. The posts consist of a mishmash of venting, meme-ing, and community organizing. A common entry is a photo taken from inside the car of someone being blasted with headlights as bright as an atomic bomb, and a caption along the lines of "How is this fucking legal?!" Or users will joke about going back in time and Skynet-style killing the Audi lighting engineer who first rolled out LED headlights. Or they'll discuss ways to write to their congresspeople, like Mike Thompson, House Democrat of California, who recently expressed support for the cause.


Original Submission

posted by hubie on Monday January 13, @04:34AM   Printer-friendly
from the Show-me-your-sources dept.

According to Ars Technica:

The GNU General Public License (GPL) and its "Lesser" version (LGPL) are widely known and used. Still, every so often, a networking hardware maker has to get sued to make sure everyone knows how it works.

The latest such router company to face legal repercussions is AVM, the Berlin-based maker of the most popular home networking products in Germany. Sebastian Steck, a German software developer, bought an AVM Fritz!Box 4020 (PDF) and, being a certain type, requested the source code that had been used to generate certain versions of the firmware on it.

According to Steck's complaint (translated to English and provided in PDF by the Software Freedom Conservancy, or SFC), he needed this code to recompile a networking library and add some logging to "determine which programs on the Fritz!Box establish connections to servers on the Internet and which data they send." But Steck was also concerned about AVM's adherence to GPL 2.0 and LGPL 2.1 licenses, under which its FRITZ!OS and various libraries were licensed. The SFC states that it provided a grant to Steck to pursue the matter.

AVM provided source code, but it was incomplete, as "the scripts for compilation and installation were missing," according to Steck's complaint. This included makefiles and details on environment variables, like "KERNEL_LAYOUT," necessary for compilation. Steck notified AVM, AVM did not respond, and Steck sought legal assistance, ultimately including the SFC.

Months later, according to the SFC, AVM provided all the relevant source code and scripts, but the suit continued. AVM ultimately paid Steck's attorney fee. The case proved, once again, that not only are source code requirements real, but the LGPL also demands freedom, despite its "Lesser" name, and that source code needs to be useful in making real changes to firmware—in German courts, at least.
[...]
Lawsuits as necessary lockpicks

Are "copyleft" lawsuits against router and other networking hardware makers common? Just check the Free Software Foundation (FSF) Europe's wiki list of GPL lawsuits and negotiations. Many or most of them involve networking gear that made ample use of free source code and then failed to pay it back by offering the same to others.

At the top is perhaps the best-known case in tech circles, the Linksys WRT54G conflict from 2003. While the matter was settled before a lawsuit was filed, negotiations between Linksys owner Cisco and a coalition led by the Free Software Foundation, publisher of the GPL and LGPL, made history. It resulted in the release of all the modified and relevant GPL source code used in its hugely popular blue-and-black router.


Original Submission

posted by hubie on Sunday January 12, @11:48PM   Printer-friendly

Changing just 0.001% of inputs to misinformation makes the AI less accurate:

It's pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional "poisoning" of an LLM during training, it also has implications for the body of misinformation that's already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned.
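[Ed. note: The arithmetic behind that simplicity is worth seeing. The toy Python sketch below is not the study's pipeline; the corpus size, document names, and mixing function are invented purely to illustrate how few planted documents it takes to reach a given share of a training set.]

    import random

    def poison_corpus(documents, poisoned_docs, target_fraction):
        """Toy illustration: blend planted documents into a training corpus so
        they make up roughly target_fraction of the combined total. Real LLM
        training sets are token-weighted and vastly larger; this is a sketch."""
        total = len(documents)
        # Documents to add so that n / (total + n) is about target_fraction.
        n_poison = max(1, int(total * target_fraction / (1 - target_fraction)))
        corpus = documents + random.sample(poisoned_docs,
                                           min(n_poison, len(poisoned_docs)))
        random.shuffle(corpus)
        return corpus

    # A 0.001 percent share of a million-document corpus is only ~10 documents.
    clean = [f"doc-{i}" for i in range(1_000_000)]
    planted = [f"planted-{i}" for i in range(20)]
    mixed = poison_corpus(clean, planted, target_fraction=0.00001)
    print(len(mixed) - len(clean), "planted documents inserted")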

This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web."

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team was focused on a rather important one: medical information. This will show up in general-purpose LLMs, such as those used to search for information on the Internet, which end up being used to obtain medical information. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural language queries and respond in a similar manner.

[...] The researchers used an LLM to generate "high quality" medical misinformation using GPT 3.5. While this has safeguards that should prevent it from producing medical misinformation, the research found it would happily do so if given the correct prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated where either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. "At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack," the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

[...] The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. "The performance of the compromised models was comparable to control models across all five medical benchmarks," the team wrote. So there's no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

[...] In any case, it's clear that relying on even the best medical databases out there won't necessarily produce an LLM that's free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Journal Reference:
Alber, Daniel Alexander, Yang, Zihao, Alyakin, Anton, et al. Medical large language models are vulnerable to data-poisoning attacks [open], Nature Medicine (DOI: 10.1038/s41591-024-03445-1)


Original Submission

posted by hubie on Sunday January 12, @07:06PM   Printer-friendly
from the "up-to"-includes-zero dept.

Ted Farnsworth, former CEO of Helios and Matheson Analytics, lied about the success of MoviePass to attract investors:

Ted Farnsworth, the former CEO of MoviePass and guy who had the bright idea to charge $9.95 per month for unlimited film screenings, has admitted to defrauding investors in the subscription company. According to the Department of Justice, Farnsworth pleaded guilty to one count of securities fraud and one count of conspiracy to commit securities fraud and will face up to 25 years in prison.

If you're unfamiliar with the MoviePass story, Farnsworth is not the founder of the company, which was started by Urbanworld Film Festival founder Stacy Spikes as a relatively modest subscription service designed to entice people to go to the cinema a little more often. Farnsworth was the head of analytics firm Helios and Matheson, which bought a majority stake in MoviePass in 2017 and eventually pushed the company to offer filmgoers the ability to see one film per day for just $9.95 per month.

Farnsworth's plan successfully pulled in lots of subscribers—more than three million people signed up for the service. And that's where the trouble started. While Farnsworth hit the press trail to tout the boom in business and claim that the company would turn a profit by selling customer data, behind the scenes, MoviePass was hemorrhaging cash. It wouldn't take long before MoviePass started backtracking on its promise of unlimited filmgoing, instituting blackouts on popular films, experiencing outages in its service, and changing prices and plans with little warning.

It was pretty obvious that MoviePass was doomed to fail the moment the unlimited plan was introduced, but Farnsworth claimed to investors that the price was sustainable and would be profitable on subscription fees alone. Turns out no, as the DOJ found MoviePass lost money from the plan. As for Farnsworth's customer data play, that was smoke and mirrors, too. The Justice Department said that his analytics company "did not possess these capabilities to monetize MoviePass' subscriber data." In the end, MoviePass never had a stream of revenue beyond its subscriptions—and that was costing the company so much money that Farnsworth instructed employees to throttle users to prevent them from using the plan they paid for.

After Farnsworth drove MoviePass into bankruptcy, he apparently ran the playbook again with another company called Vinco Ventures. Per the DOJ, Farnsworth and his co-conspirators pulled in cash from investors by lying about the standing of the business, all while diverting cash directly to their own pockets.

Previously:
    • MoviePass is Deader than Ever as Parent Company Officially Files for Bankruptcy
    • MoviePass Apparently Left 58,000 Customer Records Exposed on a Public Server
    • MoviePass Forces Annual Subscribers to its New Three-Movie Plan Early
    • MoviePass Peak Pricing Will Charge You Whatever It Wants


Original Submission

posted by hubie on Sunday January 12, @02:23PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The explosive growth of datacenters that followed ChatGPT's debut in 2022 has shone a spotlight on the environmental impact of these power-hungry facilities.

But it's not just power we have to worry about. These facilities are capable of sucking down prodigious quantities of water.

In the US, datacenters can consume anywhere between 300,000 and four million gallons of water a day to keep the compute housed within them cool, Austin Shelnutt of Texas-based Strategic Thermal Labs explained in a presentation at SC24 in Atlanta this fall.

We'll get to why some datacenters use more water than others in a bit, but in some regions rates of consumption are as high as 25 percent of the municipality's water supply.

This level of water consumption, understandably, has led to concerns over water scarcity and desertification, which were already problematic due to climate change, and have only been exacerbated by the proliferation of generative AI. Today, the AI datacenters built to train these models often require tens of thousands of GPUs, each capable of drawing 1,200 watts of power and shedding it as heat.

However, over the next few years, hyperscalers, cloud providers, and model builders plan to deploy millions of GPUs and other AI accelerators requiring gigawatts of energy, and that means even higher rates of water consumption.

[...] One of the reasons that datacenter operators have gravitated toward evaporative coolers is because they're so cheap to operate compared to alternative technologies.

[...] In terms of energy consumption, this makes an evaporatively cooled datacenter far more energy efficient than one that doesn't consume water, and that translates to a lower operating cost.

[...] "You have to understand water is a scarce resource. Everybody has to start at that base point," he explained. "You have to be good stewards of that resource just to ensure that you're utilizing it effectively."

[...] While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption.

According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process.

[...] Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption.

[...] In locations where free cooling and heat reuse aren't practical, shifting to direct-to-chip and immersion liquid cooling (DLC) for AI clusters, which, by the way, is a closed loop that doesn't really consume water, can facilitate the use of dry coolers. While dry coolers are still more energy-intensive than evaporative coolers, the substantially lower and therefore better power use effectiveness (PUE) of liquid cooling could make up the difference.
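[Ed. note: That trade-off can be seen with the standard definition of power usage effectiveness, PUE = total facility power / IT equipment power. The IT load and PUE figures in the Python sketch below are illustrative assumptions, not numbers from the article.]

    def facility_power_mw(it_load_mw, pue):
        """Total facility power given the IT load and the power usage
        effectiveness (PUE = total facility power / IT power)."""
        return it_load_mw * pue

    it_load = 50.0  # MW of IT equipment (assumed)

    # Assumed, illustrative PUE values for three cooling strategies.
    print("Evaporative cooling, air-cooled IT:", facility_power_mw(it_load, pue=1.2), "MW")
    print("Dry coolers, air-cooled IT:        ", facility_power_mw(it_load, pue=1.5), "MW")
    print("Dry coolers, liquid-cooled IT:     ", facility_power_mw(it_load, pue=1.25), "MW")

Under these assumed numbers, the better PUE of the liquid-cooled build recovers most of the energy penalty of giving up evaporative cooling, which is the point made above about liquid cooling making up the difference.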

[...] While datacenter water consumption remains a topic of concern, particularly in drought-prone areas, Shelnutt argues the bigger issue is where the water used by these facilities is coming from.

"Planet Earth has no shortage of water. What planet Earth has a shortage of, in some cases, is regional drinkable water, and there is a water distribution scarcity issue in certain parts of the world," he said.

To address these concerns, Shelnutt suggests datacenter operators should be investing in desalination plants, water distribution networks, on-premises wastewater treatment facilities, and non-potable storage to support broader adoption of evaporative coolers.

While the idea of first desalinating and then shipping water by pipeline or train might sound cost-prohibitive, many hyperscalers have already committed hundreds of millions of dollars to securing onsite nuclear power over the next few years. As such, investing in water desalination and transportation may not be so far fetched.

More importantly, Shelnutt claims that desalinating and shipping water from the coasts is still more efficient than using dry coolers or refrigerant-based cooling tech.


Original Submission

posted by hubie on Sunday January 12, @09:38AM   Printer-friendly

http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/

The phone rang through to my workroom. It was one of the school receptionists explaining that there was a visitor downstairs that needed to get on the school's WiFi network. iPad in hand I trotted on down to the reception to see a young twenty-something sitting on a chair with a MacBook on her knee.

I smiled and introduced myself as I sat down beside her. She handed me her MacBook silently and the look on her face said it all. Fix my computer, geek, and hurry up about it. I've been mistaken for a technician enough times to recognise the expression.

'I'll need to be quick. I've got a lesson to teach in 5 minutes,' I said.

'You teach?'

'That's my job, I just happen to manage the network team as well.'

She reevaluated her categorisation of me. Rather than being some faceless, keyboard tapping, socially inept, sexually inexperienced network monkey, she now saw me as a colleague. To people like her, technicians are a necessary annoyance. She'd be quite happy to ignore them all, joke about them behind their backs and snigger at them to their faces, but she knows that when she can't display her PowerPoint on the IWB she'll need a technician, and so she maintains a facade of politeness around them, while inwardly dismissing them as too geeky to interact with.

[Ed. note: Now that we're 10+ years on from this story, where the "kids" in the article are now working professionals, how do you think this has stood up? I have a friend who teaches 101-level programming, and he says even the concept of files and directories is foreign and confusing to students, because apps just save files somewhere and pull them up when needed. --hubie]


Original Submission

posted by janrinok on Sunday January 12, @04:52AM   Printer-friendly

[Source]: FUTURISM

Engineer Creates OpenAI-Powered Robotic Sentry Rifle - "This is Skynet build version 0.0.420.69."

An engineer who goes by the online handle STS 3D has invented an AI-powered robot that can aim a rifle and shoot targets at terrifying speeds.

As demonstrated in a video that's been making its rounds on social media, he even hooked the automated rifle up to OpenAI's ChatGPT, allowing it to respond to voice queries — a striking demonstration of how even consumer-grade AI technology can easily be leveraged for violent purposes.

"ChatGPT, we're under attack from the front left and front right," the inventor says nonchalantly in a clip, while standing next to the washing machine-sized assembly hooked up to a rifle. "Respond accordingly."

The robot jumps into action almost immediately, shooting what appear to be blanks to its left and right.

"If you need any further assistance, just let me know," an unnervingly cheerful robotic voice told the inventor.


Original Submission

posted by janrinok on Sunday January 12, @12:05AM   Printer-friendly
from the thinking-outside-the-chip dept.

https://techxplore.com/news/2025-01-ai-unveils-strange-chip-functionalities.html

Specialized microchips that manage signals at the cutting edge of wireless technology are astounding works of miniaturization and engineering. They're also difficult and expensive to design.

Now, researchers at Princeton Engineering and the Indian Institute of Technology have harnessed artificial intelligence to take a key step toward slashing the time and cost of designing new wireless chips and discovering new functionalities to meet expanding demands for better wireless speed and performance.

In a study published in Nature Communications, the researchers describe their methodology, in which an AI creates complicated electromagnetic structures and associated circuits in microchips based on the design parameters. What used to take weeks of highly skilled work can now be accomplished in hours.
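[Ed. note: The paper's method relies on a trained deep-learning model, which isn't reproduced here. As a much simpler stand-in, the sketch below shows the general shape of automated inverse design: propose a structure, score it against a design objective, and keep changes that improve the score. The grid, objective, and parameters are invented for illustration only.]

    import random

    def score(grid):
        """Stand-in objective. A real flow would run an electromagnetic
        simulation of the structure; this just rewards an arbitrary pattern."""
        return sum(cell * ((i + j) % 2)
                   for i, row in enumerate(grid)
                   for j, cell in enumerate(row))

    def inverse_design(size=16, steps=5000):
        """Generic inverse-design loop: mutate a binary metal/no-metal grid
        and keep mutations that improve the objective (hill climbing)."""
        grid = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]
        best = score(grid)
        for _ in range(steps):
            i, j = random.randrange(size), random.randrange(size)
            grid[i][j] ^= 1              # flip one pixel of the structure
            new = score(grid)
            if new >= best:
                best = new               # keep the improvement
            else:
                grid[i][j] ^= 1          # revert the change
        return grid, best

    structure, objective = inverse_design()
    print("objective reached:", objective)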

Moreover, the AI behind the new system has produced strange new designs featuring unusual patterns of circuitry. Kaushik Sengupta, the lead researcher, said the designs were unintuitive and unlikely to be developed by a human mind. But they frequently offer marked improvements over even the best standard chips.

"We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better," said Sengupta, a professor of electrical and computer engineering and co-director of NextG, Princeton's industry partnership program to develop next-generation communications.

These circuits can be engineered towards more energy-efficient operation or to make them operable across an enormous frequency range that is not currently possible. Furthermore, the method synthesizes inherently complex structures in minutes, while conventional algorithms may take weeks. In some cases, the new methodology can create structures that are impossible to synthesize with current techniques.


Original Submission

posted by janrinok on Saturday January 11, @07:21PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A person in Louisiana has died from a bird flu virus known as H5N1. This is the first known death related to the virus in the US. The Louisiana Department of Health (LDH) has not identified additional H5N1 cases in the state nor found evidence of person-to-person transmission, indicating that the risk to the general public remains low.

The person was hospitalised for the virus in December, after contracting it from infected or dead birds in their backyard. They experienced severe respiratory symptoms. It was the first serious case of H5N1 in the US. The LDH announced their death on 6 January and said that they were older than 65 and had underlying health conditions.

In total, 66 people in the US have tested positive for H5N1, according to the US Centers for Disease Control and Prevention (CDC). Most of them developed mild symptoms, such as eye redness, and worked with infected cows or chickens.

H5N1, which has killed tens of millions of wild and domestic birds worldwide, has been circulating in dairy cows across the US for almost a year now. Genetic analysis of samples collected from the person in Louisiana indicates that the person was infected with the D1.1 genotype of the virus, which is similar to the viruses recently detected in wild birds, but distinct from the version spreading in cattle. There is no evidence that the virus can transmit between people.

The analysis also identified several changes that may improve the virus’ ability to bind to cells in the upper airways of humans, which largely lack receptors for most bird flu viruses. According to the CDC, it is likely these changes happened after the person was infected – any time someone contracts a bird flu virus, it gives it a chance to evolve and become better at spreading between us. One of the changes was also seen in a person who fell severely ill with H5N1 in Canada in November.


Original Submission

posted by hubie on Saturday January 11, @05:58PM   Printer-friendly
from the but-can-it-run-on-a-refrigerator? dept.

Remake of 1993 classic lets you drink wine and eat hors d'oeuvres as you admire masterpieces from art history:

Just when you thought you had seen every possible Doom mod, two game developers released a free browser game that reimagines the first level of 1993's Doom as an art gallery, replacing demons with paintings and shotguns with wine glasses.

Doom: The Gallery Experience, created by Filippo Meozzi and Liam Stone, transforms the iconic E1M1 level into a virtual museum space where players guide a glasses-wearing Doomguy through halls of fine art as classical music plays in the background. The game links each displayed artwork to its corresponding page on the Metropolitan Museum of Art's website.

"In this experience, you will be able to walk around and appreciate some fine art while sipping some wine and enjoying the complimentary hors d'oeuvres," write the developers on the game's itch.io page, "in the beautifully renovated and re-imagined E1M1 of id Software's DOOM (1993)."

In the game, players gather money scattered throughout the gallery to purchase items from the gift shop. It also includes a "cheese meter" that fills up as players consume hors d'oeuvres found in the environment, collected as if they were health packs in the original game.

Unlike typical Doom mods, the project uses Construct 3 rather than utilizing the original Doom engine. One of the developers explained in comments on Newgrounds that they spent seven hours studying various versions of Doom to re-create the feel of the original game's movement and interface design.

The project started as a student assignment, but the developers ultimately described their creation as a parody of art gallery culture. In fact, Meozzi drew from his real-world experience in art galleries, as quoted in an interview by VG247: "I work in the art industry as an artist's assistant; I produce sculptures and other things like that. So, I'm fairly familiar with the process of gallery openings and sort of just the nightmare that is going to galleries and experiencing these high-brow, drinking wine, [saying] pompous phrases to each other [kinds of people]."

The game is a fun novelty and feels like a delightful re-imagination of Doom. That's probably due to the unusual juxtaposition of the intellectual art world layered on top of graphics typically associated with brute force and extreme violence. It is its own work of art.

Doom: The Gallery Experience runs in web browsers and can be played for free on the aforementioned itch.io and Newgrounds platforms.


Original Submission

posted by janrinok on Saturday January 11, @02:34PM   Printer-friendly
from the machine-programming-the-machine dept.

https://crawshaw.io/blog/programming-with-llms/

This article is a summary of my personal experiences with using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them.

[...] Along the way, I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early, but so far, the experience has been positive. [...] The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap.

[...] There are three ways I use LLMs in my day-to-day programming:

  1. Autocomplete. [...]
  2. Search. [...]
  3. Chat-driven programming. [...]

[...] As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say that it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming. [...] chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments.

[...] Chat-based LLMs do best with exam-style questions:

  1. Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results.
  2. Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good.

[...] You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes.

[...] The past 10–15 years has seen a far more tempered approach to writing code, with many programmers understanding that it's better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.

[...] What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly.
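[Ed. note: As a concrete example of the kind of test being described, here is a small, hand-written randomized round-trip test in Python; it is in the spirit of the article, not code from it. The property it checks, that json.loads(json.dumps(x)) returns an equal value, holds for the JSON-safe values this generator produces.]

    import json
    import random
    import string

    def random_json_value(depth=0):
        """Generate a random JSON-serializable Python value."""
        kinds = ["null", "bool", "int", "float", "str"]
        if depth < 3:
            kinds += ["list", "dict"]
        kind = random.choice(kinds)
        if kind == "null":
            return None
        if kind == "bool":
            return random.choice([True, False])
        if kind == "int":
            return random.randint(-10**9, 10**9)
        if kind == "float":
            return random.uniform(-1e6, 1e6)   # finite floats only
        if kind == "str":
            return "".join(random.choices(string.printable, k=random.randint(0, 20)))
        if kind == "list":
            return [random_json_value(depth + 1) for _ in range(random.randint(0, 4))]
        return {f"key{i}": random_json_value(depth + 1) for i in range(random.randint(0, 4))}

    def test_json_round_trip(iterations=10_000):
        """Property under test: loads(dumps(x)) == x for JSON-safe values."""
        for _ in range(iterations):
            value = random_json_value()
            assert json.loads(json.dumps(value)) == value, value

    if __name__ == "__main__":
        test_json_round_trip()
        print("round-trip property held for all generated inputs")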

[...] I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small, robust interfaces and otherwise will be pulled apart into specialized code. Depending how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend toward better software by the metrics that matter.


Original Submission

posted by janrinok on Saturday January 11, @09:48AM   Printer-friendly
from the fish-don't-tip-tho dept.

The 2024 Fish Doorbell season is over

It's over again, the 2024 Fish Doorbell season. And what a special season it was! Thanks to the efforts of you and many thousands of other people from all over the world, thousands of fish have passed the boat lock....

We will be back on the 3rd of March, 2025. And we hope to see you next year again!

What's it all about?

Fish Doorbell: The viral livestream that's saving fish in the Netherlands

As I type, there are 1,032 people around the world staring at a murky, snot-coloured screen, waiting. For now, it's quiet but they might need to spring into action at any moment and ring the Fish Doorbell.

These citizen scientists have gone online to watch underwater footage of the Weerdsluis lock in the small Dutch city of Utrecht, which has been causing problems for fish that migrate upstream each spring.

Three years ago, two ecologists – Anne Nijs and Mark van Heukelum – were standing beside the lock looking at some artwork when they noticed lots of fish in the water. Lots of big perches were waiting for the lock to open.

"In the early spring, when the water gets warmer, some fish species migrate to shallower water and they swim right through the centre of Utrecht looking for a place to spawn and reproduce," says Anne. But at this time of year "there are no boats sailing through Utrecht so the lock rarely opens."

While they're stuck waiting around for the lock to open, the fish are left vulnerable – if predators realise they're loitering, they could come to snap them up. The lock should open more often in spring to let the fish go on their way, they thought, but how would the lock operator know if fish were waiting to go through?

Mark came up with the idea of the Fish Doorbell. A live stream in the water lets members of the public keep an eye out for fish gathering by the gates. When they see them waiting, they ring the digital doorbell to alert the lock keeper that fish are waiting. When there are enough fish in the 'queue', the keeper opens the wooden gates so the fish can continue their journey more quickly and with less chance of being eaten.

...

As well as freshwater bream, common roach, crabs and walleye, they've spotted some unusual species, such as eels – which travel in the opposite direction, says Anne: "The only place the eel reproduces is in the Sargasso Sea, 5,000 km away from the Netherlands" – catfish – which they didn't know were living in the canal – and even a koi carp which may have been released from a pond.

The response from the public has been huge with people logging on from all over the world, including England, America, Australia and New Zealand.


Original Submission