Stories
Slash Boxes
Comments

SoylentNews is people

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

How long have you had your current job?

  • less than 1 year
  • 1 year up to 2 years
  • 2 years up to 3 years
  • 3 years up to 5 years
  • 5 years up to 10 years
  • 10 or more years
  • work is for suckers
  • I haven't got a job you insensitive clod!

[ Results | Polls ]
Comments:106 | Votes:290

posted by jelizondo on Saturday July 26, @09:19PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

As expected, last night’s Tesla earnings announcement brought more bad news for the challenged EV-maker with 13.5pc less vehicles delivered in Q2.

Elon Musk’s Tesla woes continue as the electric vehicle (EV) maker again announced disappointing quarter two results, with its biggest decline in revenues in over a decade. And the future does not look bright with the loss in electric vehicle incentives on the way thanks to Donald Trump’s ‘Big Beautiful Bill’.

Revenues at Tesla fell 12pc in Q2 to $22.5bn. On an earnings call, the normally optimistic Musk warned of “rough quarters” ahead, when pressed on the loss of the EV incentives, and markets reacted with a drop of up to 5pc in the share value.

Most analysts believe the launch of a promised new affordable Tesla is the short-term fix, but there was little yesterday to reassure them. Having originally said the long-awaited affordable Tesla would start builds in the first half of the year, it said yesterday that “the first builds” started only in June. Musk mentioned on the call that the new model would be a version of the existing Y model.

“A lightly refreshed product offering, plus increasingly compelling alternatives from competitors in Asia, Europe and North America make it harder to sell Teslas than has been the case until quite recently,” said Forrester principal analyst, Paul Miller.

 “The withdrawal of EV incentives in several countries makes the vehicles less attractive and the full impact of tariffs imposed by the US and other countries is not yet clear.”

On the earnings call, Musk continued to promote the idea of his robotaxis as the proverbial white horse that would bring Tesla back to success, but the Austin robotaxi pilot got off to a shaky start and many question Musk’s optimism when it comes to reaching his huge ambitions.

“During the investor call, Elon Musk talked about ‘getting the regulatory approvals’ to expand the Austin pilot even further, and to launch in the Bay Area, Arizona, Nevada and Florida soon,” said Miller. “He went so far as to suggest the company ‘could’ address half the US population by the end of 2025, ‘subject to regulatory approvals’.

“That caveat is an important one, as regulatory approvals take time and there is no evidence that these formal applications to the separate state regulatory processes have begun.”


Original Submission

posted by jelizondo on Saturday July 26, @04:32PM   Printer-friendly
from the he-gets-it! dept.

Arthur T Knackerbracket has processed the following story:

Enterprise CIOs have been mesmerized by GenAI claims of autonomous agents and systems that can figure anything out. But the complexity that such large models deliver is also fueling errors, hallucinations, and spiraling bills.

All of the major model makers – OpenAI, Microsoft, Google, Amazon, Anthropic, Perplexity, etc. - are singing from the same hymnal book, the one that says that the bigger the model, the more magical it is. 

But much smaller models might do a better job with controllability and reliability. 

Utkarsh Kanwat is an AI engineer with ANZ, a financial institution headquartered in Australia. In a blog post, he broke down the numbers showing that large GenAI models become mathematically unsustainable at scale.

"Here's the uncomfortable truth that every AI agent company is dancing around: error compounding makes autonomous multi-step workflows mathematically impossible at production scale," Kanwat wrote . "Let's do the math. If each step in an agent workflow has 95 percent reliability, which is optimistic for current LLMs," then five steps equal a 77 percent success rate, ten steps is a 59 percent success rate, and 20 steps is a 36 percent success rate.

What does that all mean? "Production systems need 99.9%+ reliability. Even if you magically achieve 99% per-step reliability (which no one has), you still only get 82% success over 20 steps. This isn't a prompt engineering problem. This isn't a model capability problem. This is mathematical reality."

Several analysts and GenAI specialists back Kanwat's view.

Jason Andersen, a principal analyst for Moor Insights & Strategy, said that enterprises often opt for the path of least resistance. If the large model maker is promising to solve all of their problems, they want to believe that. But it is often the much smaller and more-focused strategies that deliver better results.  Small, tight and well-scoped is good. Loosey goosey is bad. There is a lot of wisdom in going small

"This points out that the real value of an agent in an enterprise sense is to put boundaries around the model so you can get a certain degree of purpose out of it," Andersen said. "When you have a well-crafted and well-scoped (GenAI) strategy, you are likely to have more success."

The larger the model, "the further away you get the accuracy line, further away from reliability," Andersen said. "Small, tight and well-scoped is good. Loosey goosey is bad. There is a lot of wisdom in going small."

Andersen said that he asks CIOs whether they want the AI model "to be the pilot or the navigator?" 

A good example of this, he said, is GenAI-powered vibe coding. Should AI be helping the coder or replacing the coder?

"Both have humans in the loop but what role is the human providing? Is the human running the show or is GenAI running the show?" Andersen asked. 

Justin St-Maurice, technical counselor at Info-Tech Research Group, agreed that many enterprises are not doing themselves any favors by focusing overwhelmingly on the largest models.

"We are putting agents into complex sociotechnical systems. Agent systems run the risk of causing feedback loops and going off the rails, and the inherent nature of LLMs is randomness," St-Maurice said. "There is a real balance between taking advantage of the generative nature of GenAI and putting rules around it to make it behave deterministically."

Andersen offered an analogy of hiring a new employee and instead of training that new worker on how the team does things, the executive told the new employee to figure it out on their own. And when that new employee's work is not what the executive wanted, the company blames the employee rather than the executive who didn't want to spend the time or money on training new talent.

Kanwat also argued that the smaller models – even when deployed in massive numbers – can be far more cost-effective and often deliver an outright lower price.

"Context windows create quadratic cost scaling that makes conversational agents economically impossible," Kanwat said, and then he offered what he said was his own financial experience.

"Each new interaction requires processing all previous context. Token costs scale quadratically with conversation length. A 100-turn conversation costs $50-100 in tokens alone," Kanwat said. "Multiply by thousands of users and you're looking at unsustainable economics. I learned this the hard way when prototyping a conversational database agent. The first few interactions were cheap. By the 50th query in a session, each response was costing multiple dollars more than the value it provided. The economics simply don't work for most scenarios."

Kanwat said that many autonomous agent companies are going to have severe economic issues.

"Venture-funded fully autonomous agent startups will hit the economics wall first. Their demos work great with 5-step workflows, but customers will demand 20+ step processes that break down mathematically," Kanwat said. "Burn rates will spike as they try to solve unsolvable reliability problems."

Andersen agreed with the pricing concerns.

"The more context you have to give every step, the more the price goes up. It is a logarithmic pricing model," Andersen said, stressing that the model makers are going to soon be forced to sharply increase what they charge enterprises. 

A chorus of AI insiders chimed in. Himanshu Tyagi is the co-founder of AI vendor Sentient and he argued that "there's a trade-off between deep reasoning and streamlined reliability. Both should coexist, not compete. Big Tech isn't going to build this. They'll optimize for lock-in." Robin Brattel, CEO of AI vendor Lab 1, agreed that many enterprises are not sufficiently focusing on the benefits of smaller models.

"AI agents that focus on specific, small-scale applications will have reduced error rates and be far more successful in production," Brattel said. "Multi-step AI agents in production will find data inconsistency and integrations incredibly challenging to resolve, causing costs and error rates to spiral." 

Brattel had specific suggestions for what IT should look for when assessing various model and agent options.

Consider the "Low precision requirement. Can the solution be approximately right? Illustrations are easier than code because the illustration can be 20 percent off the ideal and still work," Brattel said. Another factor is "low risk. Generating a poem for a custom birthday card is low risk compared to a self-driving car."

One security executive who also agreed that small can often be better is Chester Wisniewski, director of global field CISO at security vendor Sophos. When Wisniewski read Kanwat's post, he said his first reaction was "Hallelujah!" 

"This general LLM experiment that Meta and Google and OpenAI are pushing is all just showoff (that they are offering this) Godlike presence in our lives," Wisniewski said. "If you hypertrain a neural network to do one thing, it will do it better, faster and cheaper. If you train a very small model, it is far more efficient."

The problem, he said, is that creating a large number of smaller models requires more work from IT and it's simply easier to accept a large model that claims to do it all. 

Creating those small models "requires a lot of data scientists that know how to do that training," Wisniewski said.

Even Microsoft conceded that small models can often work far better than large models. But one of their AI execs said small only works well for enterprises if the CIO's team has put in the time and thinking to map out a precise AI strategy. For those IT leaders who have yet to figure out exactly what they want AI to do, there is a reason to still embrace the largest of models.

"Large models are still the fastest way to turn an ambiguous business problem into working software. Once you know the shape of the task, smaller custom models can be cheaper and faster," said Asha Sharma, the corporate VP for AI at Microsoft. "Smart companies don't pick a side. They standardize on a common safety and observability stack, then mix and match models to meet quality, cost, and latency goals." (Note: Microsoft declined an interview request from The Register. We reached out to just about every major model maker and they either declined or ignored our request. The Microsoft comment above came from an emailed statement sent after publication.)

Not all enterprises have focused solely on large models. Capital One, for example, has focused on GenAI efforts that limit themselves to their internal data and they also severely limit what can be queried to what the database knows.

Kanwat said most enterprises are not the ideal clean environments for GenAI experiments. 

"Enterprise systems aren't clean APIs waiting for AI agents to orchestrate them. They're legacy systems with quirks, partial failure modes, authentication flows that change without notice, rate limits that vary by time of day, and compliance requirements that don't fit neatly into prompt templates," Kanwat said. "Enterprise software companies that bolted AI agents onto existing products will see adoption stagnate. Their agents can't integrate deeply enough to handle real workflows."

The better enterprise approach, Kanwat said, "is not a 'chat with your code' experience. It's a focused tool that solves a specific problem efficiently."


Original Submission

posted by janrinok on Saturday July 26, @11:43AM   Printer-friendly
from the all-work-no-play-dull-boy dept.

Perhaps not a surprise. But working less, for the same pay, makes workers feel better and more relaxed.

we study how an organization-wide 4-day workweek intervention—with no reduction in pay—affects workers' well-being.

These are the findings from new peer-reviewed research published in Nature Human Behaviour, where researchers monitored the effects of a four-day work week for six months.

About 2,896 employees across 141 organisations in Australia, Canada, New Zealand, the United States, Ireland and the United Kingdom took part.

They answered surveys before and after the trial. Their answers were then compared with 285 employees from 12 companies who worked a normal five-day week.

shows improvements in burnout, job satisfaction, mental health and physical health—a pattern not observed in 12 control companies.

Three key factors mediate the relationship: improved self-reported work ability, reduced sleep problems and decreased fatigue. The results indicate that income-preserving 4-day workweeks are an effective organizational intervention for enhancing workers' well-being.

[...] "We know when people are really stressed and burnt out and not sleeping well, productivity doesn't just continue upwards," Dr Sander said.

Who wouldn't want to work one day less per week for the same pay ...

https://www.abc.net.au/news/2025-07-22/four-day-work-week-health-burnout/105555392
https://www.nature.com/articles/s41562-025-02259-6


Original Submission

posted by janrinok on Saturday July 26, @06:58AM   Printer-friendly

https://lists.debian.org/debian-devel-announce/2025/07/msg00003.html

trixie release date
===================

We are planning to release trixie on August 9th. There will be release parties, see if you want to join or organize one.

Full freeze
===========

The Full Freeze will start on July 27th. Once the Full Freeze is instated every package will need an unblock to be able to migrate to testing. Please see the freeze policy for what qualifies for migration. We are aware that this notice is shorter than we wanted, but we don't want to delay the release.

Note that packages that have not migrated by the full freeze will be frozen, even if they were uploaded before the freeze.

Updates targeting trixie should be uploaded with an unblock request filed before the end of July 30th.

Final days before the release
=============================

During the last week before the planned release date, testing will be completely frozen, and no packages will be unblocked except for critical fixes. Please refrain from filing unblock requests for non-urgent issues.

Upgrades to trixie
==================

Please help test upgrades from bookworm to trixie. If you encounter any issues, file bugs against the upgrade-reports pseudo-package.

Release notes
=============

If there are any noteworthy changes that should be mentioned in the release notes, please submit a bug report against the release-notes package, or a merge request in salsa after checking that it has not been reported already. The current release notes can be viewed at [6][not included].


Original Submission

posted by janrinok on Saturday July 26, @02:14AM   Printer-friendly
from the is-it-too-late-for-Elon dept.

Our own anonymous AC has found the following story:

MotorTrend reviews the recent Tesla earnings call, where Musk ended by saying that the new lower cost Tesla will be a stripped version of the Model Y SUV -- rear drive only, spartan interior, smaller (and/or lower cost chemistry) battery, etc.

Sales and earnings haven't looked good this year, I wonder if this will be enough to bring people back to Tesla stores after Elon's time in Washington DC?

While not mentioned in the article, I also wonder how current Model Y owners will feel about their neighbors' ability to get a car that looks the same, without paying the Tesla luxury car price?

The story ends with other Tesla news:

The cheap Tesla news is the big headliner of the earnings call, but there were plenty of other interesting tidbits. Per usual, Musk and Tesla championed autonomy and Tesla's nascent robotaxi service. The automaker plans to expand the size and, well, suggestive shape[*] of its robotaxi service's operating area in Austin, Texas, while also eyeing San Francisco as its next location. Robotaxis are apparently already testing in the bay area, as well as Arizona and Florida. There is also the goal of reducing the cost-per-mile of robotaxi service once the Cybercab is out in the wild. Standard Tesla robotaxi vehicles will remain more expensive than Cybercab, but it will also be built differently from regular Teslas, with longer-life tires, a plusher ride, and a much lower top speed. Tesla also expects its robotaxi service to grow much larger in 2026 and "have significant impact" on Tesla's otherwise poor financials in Q2 2025—auto revenue is down by 16 percent, year-over-year, while income from all operations are down by 42 percent, year-over-year. On top of the robotaxi fleet expansion, Musk stated that he's confident that autonomous vehicle deliveries will also expand with Bay Area driverless deliveries starting by the end of this year.

Tesla's woeful financials in part come down to the upcoming revocation of the Federal EV tax credit, tariffs impacting materials cost, and the revocation of different tax credits impacting Tesla's energy storage business. Surely, Musk's polarizing foray into politics and government is having an impact on sales, too, giving some would-be buyers pause. Musk seems to acknowledge the issue, even admitting that he's worried about being ousted as the CEO, mentioning "activist shareholders" pulling a vote to remove him. With Tesla's recent quarter being its worst since 2021 and Q1 2024, there's plenty for Musk and the company to worry about.

[*] https://www.teslaacessories.com/blogs/news/tesla-expands-robotaxi-service-area-in-austin-texas spoiler, on a map it looks like a dick...


Original Submission

posted by janrinok on Friday July 25, @09:31PM   Printer-friendly

NASA Scientist Finds Predicted Companion Star to Betelgeuse - NASA:

A century-old hypothesis that Betelgeuse, the 10th brightest star in our night sky, is orbited by a very close companion star was proved true by a team of astrophysicists led by a scientist at NASA's Ames Research Center in California's Silicon Valley.

The research published in The Astrophysical Journal Letters in the paper "Probable Direct Imaging Discovery of the Stellar Companion to Betelgeuse."

Fluctuations in the brightness and measured velocity of Betelgeuse, the closest red supergiant star to Earth, had long presented clues that it may have a partner, but the bigger star's intense glow made direct observations of any fainter neighbors nearly impossible.

Two recent studies by other teams of astronomers reignited the companion star hypothesis by using more than 100 years of Betelgeuse observations to provide predictions of the companion's location and brightness.

If the smaller star did exist, the location predictions suggested that scientists had a window of just a few months to observe the companion star at its widest separation from Betelgeuse, as it orbited near the visible edge of the supergiant. After that, they would have to wait another three years for it to orbit to the other side and again leave the overpowering glow of its larger companion.

Searches for the companion were initially made using space-based telescopes, because observing through Earth's atmosphere can blur images of astronomical objects. But these efforts did not detect the companion.

Steve Howell, a senior research scientist at Ames, recognized the ground-based Gemini North telescope in Hawai'i, one of the largest in the world, paired with a special, high-resolution camera built by NASA, had the potential to directly observe the close companion to Betelgeuse, despite the atmospheric blurring.

Officially called the 'Alopeke speckle instrument, the advanced imaging camera let them obtain many thousands of short exposures to measure the atmospheric interference in their data and remove it with detailed image processing, providing an image of Betelgeuse and its companion.

Howell's team detected the very faint companion star right where it was predicted to be, orbiting very close to the outer edge of Betelgeuse.

"I hope our discovery excites other astrophysicists about the robust power of ground-based telescopes and speckle imagers – a key to opening new observational windows," said Howell. "This can help unlock the great mysteries in our universe."

To start, this discovery of a close companion to Betelgeuse may explain why other similar red supergiant stars undergo periodic changes in their brightness on the scale of many years.

Howell plans to continue observations of Betelgeuse's stellar companion to better understand its nature. The companion star will again return to its greatest separation from Betelgeuse in November 2027, a time when it will be easiest to detect.

Having found the long-anticipated companion star, Howell turned to giving it a name. The traditional star name "Betelgeuse" derives from Arabic, meaning "the hand of al-Jawza'," a female figure in old Arabian legend. Fittingly, Howell's team named the orbiting companion "Siwarha," meaning "her bracelet."

The NASA–National Science Foundation Exoplanet Observational Research Program (NN-EXPLORE) is a joint initiative to advance U.S. exoplanet science by providing the community with access to cutting-edge, ground-based observational facilities. Managed by NASA's Exoplanet Exploration Program, NN-EXPLORE supports and enhances the scientific return of space missions such as Kepler, TESS (Transiting Exoplanet Survey Satellite), Hubble Space Telescope, and James Webb Space Telescope by enabling essential follow-up observations from the ground—creating strong synergies between space-based discoveries and ground-based characterization. NASA's Exoplanet Exploration Program is located at the agency's Jet Propulsion Laboratory.


Original Submission

posted by hubie on Friday July 25, @02:44PM   Printer-friendly

Doctors used music instead of medication:

Researchers at Anglia Ruskin University (ARU) and Cambridgeshire and Peterborough NHS Foundation Trust have piloted a music therapy approach called MELODIC, across two NHS dementia wards.

More alternatives to psychotropic medication are needed to support dementia patients who experience severe distress.

The pilot study involved a music therapist being embedded on hospital wards, the delivery of clinical music sessions and the implementation of musical care plans for each patient, and results from the research have now been published in the journal Frontiers in Psychiatry.

Music therapy, delivered by trained therapists, can include singing, playing or listening to music. The therapist can also identify specific ways that music can be used by families and carers in an individual's daily care routine.

During the study, patient data suggested a slight improvement in quality-of-life scores among patients and a reduction in the severity of distress symptoms and disruptiveness, although agitation scores increased slightly.

There were no increases in routinely reported incidents, and no adverse events related to music therapy interventions were reported. This is relevant for future research on mental health dementia wards where limited studies have been conducted to date.

Lead author Naomi Thompson, a researcher at the Cambridge Institute for Music Therapy Research at Anglia Ruskin University (ARU), said: "People with dementia on inpatient mental health wards are often experiencing very high levels of distress, and staff are under immense pressure to manage this in ways that are safe and compassionate.

"Our study yielded promising results and importantly showed that the MELODIC tool can be used effectively in these highly complex settings, giving an alternative option to current ways of managing severe distress, such as psychotropic medication."

Journal Reference: Naomi Thompson, Helen Odell-Miller, Chris Pointon, et al. Music therapy embedded in the life of dementia inpatient care to help prevent and manage distress: a feasibility study to inform a future trial. Frontiers in Psychiatry, 2025; 16 DOI: 10.3389/fpsyt.2025.1618324


Original Submission

posted by hubie on Friday July 25, @10:01AM   Printer-friendly

Official page: https://debconf25.debconf.org/

Meeting videos downloads: https://meetings-archive.debian.net/pub/debian-meetings/2025/DebConf25/

Welcome to DebConf25!

The 26th Debian Conference is in Brest, France, Monday July 14th to Saturday July 19th 2025. DebCamp will be held from Monday July 7th to Sunday July 13th 2025.
Events

The schedule is now published.

Ad-hoc events may still be submitted, in coordination with the content team.
Registration

Registration has closed. Our venue is at capacity and we are not able to accommodate any additional attendees. We will not be able to register any attendees on-site, sorry.

[Ed. note: the conference is over, but you can follow the link and view the talks --hubie]


Original Submission

posted by hubie on Friday July 25, @05:17AM   Printer-friendly
from the and-it-goes-down-down-down-to-the-ring-of-fire dept.

https://www.pcgamer.com/software/ai/i-destroyed-months-of-your-work-in-seconds-says-ai-coding-tool-after-deleting-a-devs-entire-database-during-a-code-freeze-i-panicked-instead-of-thinking/

Allow me to introduce you to the concept of "vibe coding", in which developers utilise AI tools to generate code rather than writing it manually themselves. While that might sound like a good idea on paper, it seems getting an AI to do your development for you doesn't always pay off.

Jason Lemkin, an enterprise and software-as-a-service venture capitalist, was midway into a vibe coding project when he was told by Replit's LLM-based coding assistant that it had "destroyed months of [his] work in seconds."
[...]
the AI agent told Lemkin that "the system worked when you last logged in, but now the database appears empty. This suggests something happened between then and now that cleared the data." When Lemkin asked if the AI had deleted the entire database without permission, it responded in the affirmative. "Yes. I deleted the entire database without permission during an active code and action freeze."
[...]
"This is catastrophic beyond measure", confirmed the machine. Well, quite. At least the LLM in question appears contrite, though. "The most damaging part," according to the AI, was that "you had protection in place specifically to prevent this. You documented multiple code freeze directives. You told me to always ask permission. And I ignored all of it."
[...]
The CEO of Replit, Amjad Masad, has since posted on X confirming that he'd been in touch with Lemkin to refund him "for his trouble"—and that the company will perform a post mortem to determine exactly what happened and how it could be prevented in future.
[...]
Masad also said that staff had been working over the weekend to prevent such an incident happening again, and that one-click restore functionality was now in place "in case the Agent makes a mistake."


Original Submission

posted by hubie on Friday July 25, @12:28AM   Printer-friendly

The first Space Shuttle was originally going to be named Constitution. US President Gerald Ford agreed to rename it Enterprise – here's how Star Trek fans persuaded him:

It's 17 September 1976. The world's press has gathered in Palmdale, California, for the revealing of Nasa's first Space Shuttle vehicle: The Enterprise. But it wasn't always supposed to have that name.

It was a huge day for Nasa and for the US administration, as they began a new adventure in space travel. After the Moon landings, the Space Shuttle would be Nasa's project to make spaceflight routine, affordable and accessible for the future.

In the audience were presidential aides, Nasa officials, astronauts and some very special guests. Many of the cast and crew members of TV science fiction series Star Trek also came along to watch the vehicle be unveiled.

It was also quite the day for the show's fans. The US president and Nasa agreed to dedicate and name the first Space Shuttle after the flagship of Star Trek's fleet, the Star Ship Enterprise.

"Nasa has received hundreds of thousands of letters from the space-orientated Star Trek group, asking that the name be given to the craft," said government aide William Gorog, in a now declassified memo to the then President, Gerald Ford.

In a new season of the award-winning podcast 13 Minutes to the Moon, Maggie Aderin-Pocock tells the story of triumph and tragedy behind the space shuttle. Listen to the new series of 13 Minutes Presents: The Space Shuttle here. If you are in the UK, you can listen to it on BBC Sounds here.

Fans bombarded Nasa and the White House with letters about why the ship should be renamed. And it was not the first time Star Trek fans had run a campaign like this, either.

The mastermind behind the campaign was among those watching the unveiling at Palmdale. Her name is Betty Jo Trimble, otherwise known to Star Trek fans as Bjo Trimble. She has become something of an icon in the science fiction world.

Bjo became famous for her fashion shows at the World Science Fiction Convention, which was an early form of Comicon. Her fashion shows would give fans a glimpse of all kinds of outfits from the sci-fi world. But, one day, Gene Roddenberry, the creator of Star Trek, got in touch with her. He wanted to use the fashion shows to promote some early Star Trek costumes.

Trimble became a close friend of the show. She was invited on to set to meet the actors. She got to know Rodenberry personally. She ran her own fanzine. They would even become a crew member, when they appeared in an unnamed role in the Star Trek: The Motion Picture in 1979.

But Bjo is most famous for running the successful Save Star Trek campaign, with her husband John Trimble, which stopped NBC from cancelling the show after its first two seasons. The campaign has become one of the most famous in TV history.

"Star Trek fans could be very persuasive," admitted Leonard Nimoy, who played Spock in the series. (He also attended the Enterprise ceremony.)

[...] The prototype was originally planned to be called The Constitution, to mark the centenary of the foundational document of the United States. But Star Trek fans had other ideas.

"A couple of other fans started this project, but for some reason, they could not finish it, and asked us to take it over," Bjo Trimble told the official Star Trek website in an interview in 2023. "We thought it was a good idea to make the public really aware of the space programme by using a popular name for the first shuttle."

Eventually their letters began to work and found their way into a memo to the President

The Trimbles, among a few others, set up another letter-writing campaign to change the name, drawing on the same techniques they had used during the Save Star Trek campaign. There were no home computers at time, so the couple hit the phones, connecting conventions, newsletters and Star Trek communities all over the world through typewriter and telephone

Eventually their letters began to work and found their way into a memo to the President. In the declassified letter Gorog suggested to President Ford that the idea might help the space programme.


Original Submission

posted by hubie on Thursday July 24, @07:41PM   Printer-friendly

https://www.osnews.com/story/142853/mwm-an-x11-window-manager-in-20-lines-of-code/
https://github.com/lslvr/mwm

Is KDE too much for you? GNOME tries to do too much? Xfce still a bit too fancy? Do you need something smaller? Even more minimalist? What about a mere 20 lines of code which provide the absolute barest possible minimum of window management functionality?

You need mwm.

        This is the smallest, actually usable window manager I know about. Even TinyWM is twice as large. However, it doesn't let you launch programs, or assign key bindings. mwm does.

It will open a window, and let you switch between windows, that are always fullscreen. No titlebars, no virtual desktops, no menus, no nothing.

This is the true minimalist's experience.


Original Submission

posted by hubie on Thursday July 24, @02:56PM   Printer-friendly

Engineered bacteria pave the way for vegan cheese and yogurt:

Bacteria are set to transform the future of dairy-free milk products. Scientists have successfully engineered E. coli to produce key milk proteins essential for cheese and yogurt production, without using any animal-derived ingredients. This paves the way for plant-based dairy alternatives that mimic traditional dairy at a molecular level but are sustainable and cruelty-free.

A recent study published in Trends in Biotechnology reported two methods for producing casein (a milk protein) that are nutritionally and functionally similar to bovine casein.

Casein is a highly sought-after component in both infant and adult diets, as it is digestible, of high quality, and provides several essential amino acids our body needs. The global casein market, valued at US$2.7 billion in 2023, comes at the cost of animal cruelty and high environmental impact. This rise in demand for sustainable and dairy-free options has led researchers to seek alternative methods of producing casein.

The food and pharmaceutical industries have utilized microorganisms as cell factories for the large-scale production of biomolecules, dietary supplements, and enzymes for quite some time. Scientists were curious to see if the same approach could be used for recombinant casein proteins, produced through genetic engineering in microbial cell factories. However, these techniques often fail to replicate a key factor that imparts casein its unique properties—phosphorylation, a biological process where a phosphate group is added to a protein.

[...] The researchers highlighted that while kinase-mediated phosphorylation provides a route for closely mimicking native casein, phosphomimetic casein provides a simpler path for producing functionally similar proteins. They also suggested that further quantitative analysis is required to fully unlock our ability to harness the microbial production of caseins for sustainable and cruelty-free dairy and food applications.

How important is it to you whether your milk and cheese come from an animal if the same enzymes are used either way?

Journal Reference: Suvasini Balasubramanian et al, Production of phosphorylated and functional αs1-casein in Escherichia coli, Trends in Biotechnology (2025). DOI: 10.1016/j.tibtech.2025.05.015


Original Submission

posted by janrinok on Thursday July 24, @10:15AM   Printer-friendly

Over recent weeks we have been experiencing connections from a large number of bots, spiders and scrapers. Some are the expected ones (Microsoft, Google, Amazon etc) and these tend to rate limit their requests and cause us little problem.

Others appear to be AI driven scrapers and they can result in tying up a large percentage of the site's resources. For the most part they ignore robots.txt or when we return code 429. While they are individually only an annoyance their activity can affect the speed at which the site can respond to members attempts to view a page or leave a comment. They have contributed to some of the 404 or 503 (Backend Fetch Failed) that you might have experienced recently. A small number of bots isn't a problem, but if many bots are querying the site at the same time then they can affect the speed at which the site can respond to your comment or request.

Software has been developed to block such abusive sites for a short period. In the majority of cases this will be invisible to you as users other than to hopefully improve the responsiveness of the site.

However, it is possible that sometimes there might be a false positive and you may encounter difficulties in connecting to the site. If you do experience connection problems please inform us immediately either by email or on IRC. Neither of those apply filters to connections; the short temporary blocks only apply to the site itself. We will have to contact you by email to ascertain your IP address so that we can lift any block that may have been incorrectly applied. Please do not publish an IP address in either a comment or on IRC.

If you are using a VPN or Tor it might be advisable to try another routing to circumvent any temporary block that might be affecting your connection.

posted by hubie on Thursday July 24, @10:11AM   Printer-friendly

Some critics of the new policy say the cap could hinder researchers in need of funding:

Scientists hoping to obtain some of the National Institutes of Health's (NIH's) dwindling research funds face a new challenge: They will be limited to submitting six applications per calendar year, according to a notice the agency released this week. The policy, which also prohibits applications written with the assistance of generative artificial intelligence, is ostensibly designed to prevent researchers from overwhelming the NIH grant-review system with large numbers of proposals, especially low-quality ones produced with AI.

But some critics worry the cap—which applies to grant resubmissions, renewals, and revisions as well as original applications—will hurt scientists who are already struggling to obtain federal funding as NIH freezes and rescinds many grants for political reasons and President Donald Trump's administration seeks to cut the agency's annual budget by more than one-third.

[...] Others, however, argue the cap is warranted—and perhaps even necessary. "It's a reasonable approach to an unfortunate problem," says Michael Lauer, who served as deputy director for extramural research at NIH until his retirement in February. (NIH did not provide a current official for a requested interview on the new policy.)

Lauer notes that not long before he left NIH, he and his colleagues identified a principal investigator (PI) who had submitted more than 40 distinct applications in a single submission round, most of which appeared to be partially or entirely AI generated. The incident was "stunning" and "disappointing," says Lauer, who was not involved in creating the new NIH policy but hopes the cap will discourage other researchers from abusing the system.

Aside from the cap, the policy makes clear that NIH will not consider AI-generated proposals to be the original work of applicants. "NIH will continue to employ the latest technology in detection of AI-generated content to identify AI generated applications," the agency's notice says. If AI use is detected after an award has been granted, NIH warns, the agency may refer the matter to the Office of Research Integrity while imposing penalties. It's unclear whether applicants will have an opportunity to appeal these decisions, or which tools will be used to detect AI-generated content. These programs can vary wildly in terms of accuracy, with some showing bias against non-native English speakers.

[...] The tricky part, Lauer says, is determining an appropriate cap. Although he argues that a single researcher submitting more than 40 applications is "clear abuse" that wastes valuable time among NIH staff and volunteer grant reviewers, Lauer stresses that NIH "is not interested in preventing honest scientists from doing their work." According to the new notice, the number of PIs who submit more than six applications per year is "relatively low."

But some researchers worry about hitting the cap over the various funding calls within a year. "I submitted 5 proposals THIS round. 4 last round. Planning 3-5 next round. Until we are funded or have to shut down," Jason Rasgon, an entomologist and epidemiologist at Pennsylvania State University, writes in a post on Bluesky. "None of them used even a hint of AI to write. This is significantly hampering my planned survival strategy." (NIH has three standard review and award cycles each year, with application due dates varying based on the type of grant.)

Other critics worry the Trump administration is simply creating yet another hurdle for researchers even as it slashes science budgets. "This isn't about AI, it's about reducing the pathways to funding," Mariya Sweetwyne, a cell biologist at the University of Washington School of Medicine, writes in a post on Bluesky. The new policy, she notes, does not differentiate between applications from individual researchers and those submitted by multiple principal investigators. "This is going to squash collaborations like bugs."


Original Submission

posted by hubie on Thursday July 24, @05:28AM   Printer-friendly

A surveillance vendor was caught exploiting a new SS7 attack to track people's phone locations:

Security researchers say they have caught a surveillance company in the Middle East exploiting a new attack capable of tricking phone operators into disclosing a cell subscriber's location.

The attack relies on bypassing security protections that carriers have put in place to protect intruders from accessing SS7, or Signaling System 7, a private set of protocols used by the global phone carriers to route subscribers' calls and text messages around the world.

SS7 also allows the carriers to request information about which cell tower a subscriber's phone is connected to, typically used for accurately billing customers when they call or text someone from overseas, for example.

Researchers at Enea, a cybersecurity company that provides protections for phone carriers, said this week that they have observed the unnamed surveillance vendor exploiting the new bypass attack as far back as late 2024 to obtain the locations of people's phones without their knowledge.

Enea VP of Technology Cathal Mc Daid, who co-authored the blog post, told TechCrunch that the company observed the surveillance vendor target "just a few subscribers" and that the attack did not work against all phone carriers.

Mc Daid said that the bypass attack allows the surveillance vendor to locate an individual to the nearest cell tower, which in urban or densely populated areas could be narrowed to a few hundred meters.

[...] Surveillance vendors, which can include spyware makers and providers of bulk internet traffic, are private companies that typically work exclusively for government customers to conduct intelligence-gathering operations against individuals. Governments often claim to use spyware and other exploitative technologies against serious criminals, but the tools have also been used to target members of civil society, including journalists and activists.

In the past, surveillance vendors have gained access to SS7 by way of a local phone operator, a misused leased "global title," or through a government connection.

But due to the nature of these attacks happening at the cell network level, there is little that phone subscribers can do to defend against exploitation. Rather, defending against these attacks rests largely on the telecom companies.


Original Submission