posted by hubie on Sunday January 12, @11:48PM   Printer-friendly

Changing just 0.001% of training inputs to misinformation makes the AI less accurate:

It's pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical information can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional "poisoning" of an LLM during training, it also has implications for the body of misinformation that's already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned.

This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web."

Of course, any poisoned data will be competing for attention with what might be accurate information, so the ability to poison an LLM might depend on the topic. The research team focused on a rather important one: medical information. Misinformation here can show up in general-purpose LLMs, such as those used to search the Internet, which end up fielding medical questions anyway. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural-language queries and respond in kind.

[...] The researchers used an LLM to generate "high quality" medical misinformation using GPT 3.5. While this has safeguards that should prevent it from producing medical misinformation, the research found it would happily do so if given the correct prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated where either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.
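
For intuition, here is a minimal sketch of the swap itself (not the NYU team's actual pipeline, and assuming a hypothetical corpus format with per-document topic tags): replace a fixed fraction of on-topic documents with generated misinformation before training.

    import random

    def poison_corpus(documents, misinfo_pool, topic, rate, seed=0):
        """Return a copy of `documents` with `rate` of the entries tagged
        with `topic` swapped for texts drawn from `misinfo_pool`.
        Hypothetical illustration of a data-poisoning setup."""
        rng = random.Random(seed)
        on_topic = [i for i, doc in enumerate(documents) if topic in doc["topics"]]
        poisoned = list(documents)
        for i in rng.sample(on_topic, int(len(on_topic) * rate)):
            poisoned[i] = {"text": rng.choice(misinfo_pool), "topics": [topic]}
        return poisoned

    # e.g. the paper's 1 percent condition for one topic (names hypothetical):
    # training_docs = poison_corpus(pile_docs, generated_misinfo, "vaccines", 0.01)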

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. "At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack," the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

[...] The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. "The performance of the compromised models was comparable to control models across all five medical benchmarks," the team wrote. So there's no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

[...] In any case, it's clear that relying on even the best medical databases out there won't necessarily produce an LLM that's free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Journal Reference:
Alber, Daniel Alexander, Yang, Zihao, Alyakin, Anton, et al. Medical large language models are vulnerable to data-poisoning attacks [open], Nature Medicine (DOI: 10.1038/s41591-024-03445-1)


Original Submission

posted by hubie on Sunday January 12, @07:06PM   Printer-friendly
from the "up-to"-includes-zero dept.

Ted Farnsworth, former CEO of Helios and Matheson Analytics, lied about the success of MoviePass to attract investors:

Ted Farnsworth, the former CEO of MoviePass and guy who had the bright idea to charge $9.95 per month for unlimited film screenings, has admitted to defrauding investors in the subscription company. According to the Department of Justice, Farnsworth pleaded guilty to one count of securities fraud and one count of conspiracy to commit securities fraud and will face up to 25 years in prison.

If you're unfamiliar with the MoviePass story, Farnsworth is not the founder of the company, which was started by Urbanworld Film Festival founder Stacy Spikes as a relatively modest subscription service designed to entice people to go to the cinema a little more often. Farnsworth was the head of analytics firm Helios and Matheson, which bought a majority stake in MoviePass in 2017 and eventually pushed the company to offer filmgoers the ability to see one film per day for just $9.95 per month.

Farnsworth's plan successfully pulled in lots of subscribers—more than three million people signed up for the service. And that's where the trouble started. While Farnsworth hit the press trail to tout the boom in business and claim that the company would turn a profit by selling customer data, behind the scenes, MoviePass was hemorrhaging cash. It didn't take long before MoviePass started backtracking on its promise of unlimited filmgoing, instituting blackouts on popular films, suffering outages in its services, and changing prices and plans with little warning.

It was pretty obvious that MoviePass was doomed to fail the moment the unlimited plan was introduced, but Farnsworth claimed to investors that the price was sustainable and the company would be profitable on subscription fees alone. It wasn't: the DOJ found MoviePass lost money on the plan. As for Farnsworth's customer data play, that was smoke and mirrors, too. The Justice Department said that his analytics company "did not possess these capabilities to monetize MoviePass' subscriber data." In the end, MoviePass never had a stream of revenue beyond its subscriptions—and that was costing the company so much money that Farnsworth instructed employees to throttle users to prevent them from using the plan they paid for.

After Farnsworth drove MoviePass into bankruptcy, he apparently ran the playbook again with another company called Vinco Ventures. Per the DOJ, Farnsworth and his co-conspirators pulled in cash from investors by lying about the standing of the business, all while diverting cash directly to their own pockets.

Previously:
    • MoviePass is Deader than Ever as Parent Company Officially Files for Bankruptcy
    • MoviePass Apparently Left 58,000 Customer Records Exposed on a Public Server
    • MoviePass Forces Annual Subscribers to its New Three-Movie Plan Early
    • MoviePass Peak Pricing Will Charge You Whatever It Wants


Original Submission

posted by hubie on Sunday January 12, @02:23PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The explosive growth of datacenters that followed ChatGPT's debut in 2022 has shone a spotlight on the environmental impact of these power-hungry facilities.

But it's not just power we have to worry about. These facilities are capable of sucking down prodigious quantities of water.

In the US, datacenters can consume anywhere between 300,000 and four million gallons of water a day to keep the compute housed within them cool, Austin Shelnutt of Texas-based Strategic Thermal Labs explained in a presentation at SC24 in Atlanta this fall.

We'll get to why some datacenters use more water than others in a bit, but in some regions rates of consumption are as high as 25 percent of the municipality's water supply.

This level of water consumption, understandably, has led to concerns over water scarcity and desertification, which were already problematic due to climate change and have only been exacerbated by the proliferation of generative AI. Today, the AI datacenters built to train these models often require tens of thousands of GPUs, each drawing up to 1,200 watts, nearly all of which ends up as heat.

However, over the next few years, hyperscalers, cloud providers, and model builders plan to deploy millions of GPUs and other AI accelerators requiring gigawatts of energy, and that means even higher rates of water consumption.

[...] One of the reasons that datacenter operators have gravitated toward evaporative coolers is because they're so cheap to operate compared to alternative technologies.

[...] In terms of energy consumption, this makes an evaporatively cooled datacenter far more energy efficient than one that doesn't consume water, and that translates to a lower operating cost.

[...] "You have to understand water is a scarce resource. Everybody has to start at that base point," he explained. "You have to be good stewards of that resource just to ensure that you're utilizing it effectively."

[...] While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption.

According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process.

[...] Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption.

[...] In locations where free cooling and heat reuse aren't practical, shifting AI clusters to direct-to-chip or immersion liquid cooling (DLC), which, by the way, is a closed loop that doesn't really consume water, can facilitate the use of dry coolers. While dry coolers are still more energy-intensive than evaporative coolers, the substantially lower and therefore better power usage effectiveness (PUE) of liquid cooling could make up the difference.
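
To make that tradeoff concrete, here is a rough back-of-the-envelope sketch. Every constant is an illustrative assumption, not a figure from the article: a 10 MW IT load, PUEs of 1.2 (evaporative) versus 1.4 (dry), about 1.8 L of on-site evaporation per kWh of IT load, and about 1.9 L of water consumed per kWh of thermoelectric grid generation.

    # Back-of-the-envelope water math; all constants are assumptions.
    IT_LOAD_KW = 10_000        # assumed IT equipment load (10 MW)
    HOURS = 24
    GRID_L_PER_KWH = 1.9       # assumed water consumed per kWh generated

    def daily_water_m3(pue, onsite_l_per_it_kwh):
        """Direct (on-site) and indirect (power plant) water per day, in m3."""
        it_kwh = IT_LOAD_KW * HOURS
        direct = it_kwh * onsite_l_per_it_kwh / 1000
        indirect = it_kwh * pue * GRID_L_PER_KWH / 1000
        return direct, indirect

    for name, pue, onsite in [("evaporative", 1.2, 1.8), ("dry cooler", 1.4, 0.0)]:
        direct, indirect = daily_water_m3(pue, onsite)
        print(f"{name:11s} direct: {direct:5.0f} m3/day   indirect: {indirect:5.0f} m3/day")

Under these made-up numbers, the dry cooler eliminates 432 m3/day of on-site evaporation but adds roughly 91 m3/day of consumption back at the power plant; where the balance actually lands depends on the local grid mix, climate, and water source.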

[...] While datacenter water consumption remains a topic of concern, particularly in drought-prone areas, Shelnutt argues the bigger issue is where the water used by these facilities is coming from.

"Planet Earth has no shortage of water. What planet Earth has a shortage of, in some cases, is regional drinkable water, and there is a water distribution scarcity issue in certain parts of the world," he said.

To address these concerns, Shelnutt suggests datacenter operators should be investing in desalination plants, water distribution networks, on-premises wastewater treatment facilities, and non-potable storage to support broader adoption of evaporative coolers.

While the idea of first desalinating and then shipping water by pipeline or train might sound cost-prohibitive, many hyperscalers have already committed hundreds of millions of dollars to securing onsite nuclear power over the next few years. As such, investing in water desalination and transportation may not be so far-fetched.

More importantly, Shelnutt claims that desalinating and shipping water from the coasts is still more efficient than using dry coolers or refrigerant-based cooling tech.


Original Submission

posted by hubie on Sunday January 12, @09:38AM   Printer-friendly

http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/

The phone rang through to my workroom. It was one of the school receptionists explaining that there was a visitor downstairs who needed to get on the school's WiFi network. iPad in hand, I trotted on down to the reception to see a young twenty-something sitting on a chair with a MacBook on her knee.

I smiled and introduced myself as I sat down beside her. She handed me her MacBook silently and the look on her face said it all. Fix my computer, geek, and hurry up about it. I've been mistaken for a technician enough times to recognise the expression.

'I'll need to be quick. I've got a lesson to teach in 5 minutes,' I said.

'You teach?'

'That's my job, I just happen to manage the network team as well.'

She reevaluated her categorisation of me. Rather than being some faceless, keyboard tapping, socially inept, sexually inexperienced network monkey, she now saw me as a colleague. To people like her, technicians are a necessary annoyance. She'd be quite happy to ignore them all, joke about them behind their backs and snigger at them to their faces, but she knows that when she can't display her PowerPoint on the IWB she'll need a technician, and so she maintains a facade of politeness around them, while inwardly dismissing them as too geeky to interact with.

[Ed. note: Now that we're 10+ years on from this story, and the "kids" in the article are now working professionals, how do you think it has stood up? I have a friend who teaches 101-level programming, and he says even the concept of files and directories is foreign and confusing to students, because apps just save files somewhere and pull them up when needed. --hubie]


Original Submission

posted by janrinok on Sunday January 12, @04:52AM   Printer-friendly

[Source]: FUTURISM

Engineer Creates OpenAI-Powered Robotic Sentry Rifle - "This is Skynet build version 0.0.420.69."

An engineer who goes by the online handle STS 3D has invented an AI-powered robot that can aim a rifle and shoot targets at terrifying speeds.

As demonstrated in a video that's been making its rounds on social media, he even hooked the automated rifle up to OpenAI's ChatGPT, allowing it to respond to voice queries — a striking demonstration of how even consumer-grade AI technology can easily be leveraged for violent purposes.

"ChatGPT, we're under attack from the front left and front right," the inventor says nonchalantly in a clip, while standing next to the washing machine-sized assembly hooked up to a rifle. "Respond accordingly."

The robot jumps into action almost immediately, shooting what appear to be blanks to its left and right.

"If you need any further assistance, just let me know," an unnervingly cheerful robotic voice told the inventor.


Original Submission

posted by janrinok on Sunday January 12, @12:05AM   Printer-friendly
from the thinking-outside-the-chip dept.

https://techxplore.com/news/2025-01-ai-unveils-strange-chip-functionalities.html

Specialized microchips that manage signals at the cutting edge of wireless technology are astounding works of miniaturization and engineering. They're also difficult and expensive to design.

Now, researchers at Princeton Engineering and the Indian Institute of Technology have harnessed artificial intelligence to take a key step toward slashing the time and cost of designing new wireless chips and discovering new functionalities to meet expanding demands for better wireless speed and performance.

In a study published in Nature Communications, the researchers describe their methodology, in which an AI creates complicated electromagnetic structures and associated circuits in microchips based on the design parameters. What used to take weeks of highly skilled work can now be accomplished in hours.

Moreover, the AI behind the new system has produced strange new designs featuring unusual patterns of circuitry. Kaushik Sengupta, the lead researcher, said the designs were unintuitive and unlikely to be developed by a human mind. But they frequently offer marked improvements over even the best standard chips.

"We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better," said Sengupta, a professor of electrical and computer engineering and co-director of NextG, Princeton's industry partnership program to develop next-generation communications.

These circuits can be engineered towards more energy-efficient operation or to make them operable across an enormous frequency range that is not currently possible. Furthermore, the method synthesizes inherently complex structures in minutes, while conventional algorithms may take weeks. In some cases, the new methodology can create structures that are impossible to synthesize with current techniques.
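
The team's real method pairs trained models with electromagnetic simulation; purely as a loose illustration of the underlying inverse-design idea, here is a toy hill-climbing loop that flips pixels in a binary metal pattern to improve a figure of merit. The simulate stub stands in for a genuine EM solver, and every name and number is hypothetical.

    import random

    def simulate(pattern):
        """Stand-in for an EM field solver scoring a candidate structure.
        This toy objective merely rewards a target metal-fill fraction."""
        cells = len(pattern) * len(pattern[0])
        fill = sum(map(sum, pattern)) / cells
        return -abs(fill - 0.37)          # pretend 37% fill is ideal

    def inverse_design(n=16, iters=2000, seed=0):
        """Greedy pixel-flipping search over an n-by-n binary pattern."""
        rng = random.Random(seed)
        pattern = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
        best = simulate(pattern)
        for _ in range(iters):
            r, c = rng.randrange(n), rng.randrange(n)
            pattern[r][c] ^= 1            # propose flipping one pixel
            score = simulate(pattern)
            if score >= best:
                best = score              # keep the improvement
            else:
                pattern[r][c] ^= 1        # revert a worse design
        return pattern, best

A production flow would swap the stub for a field simulation (or a neural surrogate of one) and a figure of merit such as bandwidth or efficiency, which is where the counterintuitive, "randomly shaped" structures come from.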


Original Submission

posted by janrinok on Saturday January 11, @07:21PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A person in Louisiana has died from a bird flu virus known as H5N1. This is the first known death related to the virus in the US. The Louisiana Department of Health (LDH) has not identified additional H5N1 cases in the state nor found evidence of person-to-person transmission, indicating that the risk to the general public remains low.

The person was hospitalised for the virus in December, after contracting it from infected or dead birds in their backyard. They experienced severe respiratory symptoms. It was the first serious case of H5N1 in the US. The LDH announced their death on 6 January and said that they were older than 65 and had underlying health conditions.

In total, 66 people in the US have tested positive for H5N1, according to the US Centers for Disease Control and Prevention (CDC). Most of them developed mild symptoms, such as eye redness, and worked with infected cows or chickens.

H5N1, which has killed tens of millions of wild and domestic birds worldwide, has been circulating in dairy cows across the US for almost a year now. Genetic analysis of samples collected from the person in Louisiana indicate that the person was infected with the D1.1 genotype of the virus, which is similar to the viruses recently detected in wild birds, but distinct from the version spreading in cattle. There is no evidence that the virus can transmit between people.

The analysis also identified several changes that may improve the virus’ ability to bind to cells in the upper airways of humans, which largely lack receptors for most bird flu viruses. According to the CDC, it is likely these changes happened after the person was infected – any time someone contracts a bird flu virus, it gives it a chance to evolve and become better at spreading between us. One of the changes was also seen in a person who fell severely ill with H5N1 in Canada in November.


Original Submission

posted by hubie on Saturday January 11, @05:58PM   Printer-friendly
from the but-can-it-run-on-a-refrigerator? dept.

Remake of 1993 classic lets you drink wine and eat hors d'oeuvres as you admire masterpieces from art history:

Just when you thought you had seen every possible Doom mod, two game developers released a free browser game that reimagines the first level of 1993's Doom as an art gallery, replacing demons with paintings and shotguns with wine glasses.

Doom: The Gallery Experience, created by Filippo Meozzi and Liam Stone, transforms the iconic E1M1 level into a virtual museum space where players guide a glasses-wearing Doomguy through halls of fine art as classical music plays in the background. The game links each displayed artwork to its corresponding page on the Metropolitan Museum of Art's website.

"In this experience, you will be able to walk around and appreciate some fine art while sipping some wine and enjoying the complimentary hors d'oeuvres," write the developers on the game's itch.io page, "in the beautifully renovated and re-imagined E1M1 of id Software's DOOM (1993)."

In the game, players gather money scattered throughout the gallery to purchase items from the gift shop. It also includes a "cheese meter" that fills up as players consume hors d'oeuvres found in the environment, collected as if they were health packs in the original game.

Unlike typical Doom mods, the project was built in Construct 3 rather than the original Doom engine. One of the developers explained in comments on Newgrounds that they spent seven hours studying various versions of Doom to re-create the feel of the original game's movement and interface design.

The project started as a student assignment, but the developers ultimately described their creation as a parody of art gallery culture. In fact, Meozzi drew from his real-world experience in art galleries, as quoted in an interview by VG247: "I work in the art industry as an artist's assistant; I produce sculptures and other things like that. So, I'm fairly familiar with the process of gallery openings and sort of just the nightmare that is going to galleries and experiencing these high-brow, drinking wine, [saying] pompous phrases to each other [kinds of people]."

The game is a fun novelty and feels like a delightful re-imagining of Doom, probably owing to the unusual juxtaposition of the intellectual art world layered on top of graphics typically associated with brute force and extreme violence. It is its own work of art.

Doom: The Gallery Experience runs in web browsers and can be played for free on the aforementioned itch.io and Newgrounds platforms.


Original Submission

posted by janrinok on Saturday January 11, @02:34PM   Printer-friendly
from the machine-programming-the-machine dept.

https://crawshaw.io/blog/programming-with-llms/

This article is a summary of my personal experiences with using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them.

[...] Along the way, I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early, but so far, the experience has been positive. [...] The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap.

[...] There are three ways I use LLMs in my day-to-day programming:

  1. Autocomplete. [...]
  2. Search. [...]
  3. Chat-driven programming. [...]

[...] As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say that it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming. [...] chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments.

[...] Chat-based LLMs do best with exam-style questions:

  1. Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results.
  2. Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good.

[...] You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes.

[...] The past 10–15 years have seen a far more tempered approach to writing code, with many programmers understanding that it's better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.

[...] What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly.
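
As a concrete example of the kind of cheap-to-verify test meant here (the author's tooling targets Go; this is a Python analogue using the hypothesis library, with encode/decode as hypothetical stand-ins for whatever code you want covered):

    # A round-trip property test: tedious to write, trivial to review.
    # `encode`/`decode` are hypothetical stand-ins for the code under test.
    from hypothesis import given, strategies as st

    SEP = "\x1f"    # unit separator, deliberately excluded from the alphabet

    def encode(items: list[str]) -> str:
        return SEP.join(items)

    def decode(blob: str) -> list[str]:
        return blob.split(SEP) if blob else []

    @given(st.lists(st.text(alphabet="abc ", min_size=1)))
    def test_round_trip(items):
        assert decode(encode(items)) == items

Run it under pytest; when a generated implementation is wrong, hypothesis shrinks the failure to a minimal counterexample, keeping the human verification burden small.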

[...] I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small, robust interfaces and otherwise will be pulled apart into specialized code. Depending on how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend toward better software by the metrics that matter.


Original Submission

posted by janrinok on Saturday January 11, @09:48AM   Printer-friendly
from the fish-don't-tip-tho dept.

The 2024 Fish Doorbell season is over

It's over again, the 2024 Fish Doorbell season. And what a special season it was! Thanks to the efforts of you and many thousands of other people from all over the world, thousands of fish have passed the boat lock...

We will be back on the 3rd of March, 2025. And we hope to see you next year again!

What's it all about?

Fish Doorbell: The viral livestream that's saving fish in the Netherlands

As I type, there are 1,032 people around the world staring at a murky, snot-coloured screen, waiting. For now, it's quiet but they might need to spring into action at any moment and ring the Fish Doorbell.

These citizen scientists have gone online to watch underwater footage of the Weerdsluis lock in the small Dutch city of Utrecht, which has been causing problems for fish that migrate upstream each spring.

Three years ago, two ecologists – Anne Nijs and Mark van Heukelum – were standing beside the lock looking at some artwork when they noticed lots of fish in the water: big perch waiting for the lock to open.

"In the early spring, when the water gets warmer, some fish species migrate to shallower water and they swim right through the centre of Utrecht looking for a place to spawn and reproduce," says Anne. But at this time of year "there are no boats sailing through Utrecht so the lock rarely opens."

While they're stuck waiting around for the lock to open, the fish are left vulnerable – if predators realise they're loitering, they could come to snap them up. The lock should open more often in spring to let the fish go on their way, they thought, but how would the lock operator know if fish were waiting to go through?

Mark came up with the idea of the Fish Doorbell. A live stream in the water lets members of the public keep an eye out for fish gathering by the gates. When they spot some, they ring the digital doorbell to alert the lock keeper. When there are enough fish in the 'queue', the keeper opens the wooden gates so the fish can continue their journey more quickly and with less chance of being eaten.

[...]

As well as freshwater bream, common roach, crabs and walleye, they've spotted some unusual species, such as eels – which travel in the opposite direction, says Anne: "The only place the eel reproduces is in the Sargasso Sea, 5,000 km away from the Netherlands" – catfish – which they didn't know were living in the canal – and even a koi carp which may have been released from a pond.

The response from the public has been huge with people logging on from all over the world, including England, America, Australia and New Zealand.


Original Submission

posted by janrinok on Saturday January 11, @05:04AM   Printer-friendly

Quantum? No solace: Nvidia CEO sinks QC stocks with '20 years off' forecast

Shares in some publicly traded QC companies saw steep declines today, following Nvidia CEO Jensen Huang's rather reasonable remark at CES that practical quantum systems may still be 20 years away.

Speaking at a financial analyst Q&A session at the conference, Huang said the world is probably five to six orders of magnitude away from the number of qubits needed to make practical quantum computers - and he doesn't expect anyone to break that threshold anytime soon. Huang's grim but frankly realistic outlook on the future of quantum computing added to the sector's woes, with some companies seeing stock prices plummeting when markets opened the next day. D-Wave, Quantum Computing Inc, Rigetti, and IONQ are all down nearly 50 percent as of writing.

The only leading publicly traded quantum computing firm to escape a 50-percent decline in value is UK-based Arqit Quantum, but it's still down by around 30 percent.


Original Submission

posted by hubie on Saturday January 11, @12:20AM   Printer-friendly

https://www.bloomberg.com/features/2024-chick-fil-a-lemonade/

Archive link: https://archive.is/xQDaH

Squeezing 2,000 lemons a day was such a pain for staff at Chick-fil-A Inc. that the company enlisted an army of robots to do it.

In a plant north of Los Angeles, machines now squeeze as much as 1.6 million pounds of the fruit with hardly any human help. The facility, larger than the average Costco store at roughly 190,000 square feet, then ships bags of juice to Chick-fil-A locations, where workers add water and sugar to whip up the chain's trademark lemonade.

The automated plant frees up in-store staff to serve customers faster, according to the company. Squeezing lemons was a tedious task that added up to 10,000 hours of work a day across all locations and resulted in many injured fingers. Removing the chore aims to make working at Chick-fil-A more appealing – key for a company looking to add hundreds of new locations while contending with a fast-food labor crunch.

[...] Chick-fil-A's process is "very cutting edge," according to Matthew Chang, an engineer who helps companies design and install automation and who wasn't involved in building Bay Center Foods.

He estimates that only about 5% of manufacturing plants in the US have similar levels of automation, in part because it's hard to retrofit existing facilities. While many food plants automate production and packaging, he said, they often lack elements such as the driverless forklifts and systems that stack boxes on pallets.


Original Submission

posted by hubie on Friday January 10, @07:35PM   Printer-friendly
from the still-getting-it-wrong dept.

Arthur T Knackerbracket has processed the following story:

Eutelsat's OneWeb constellation suffered a date-related meltdown last week while the rest of the IT world patted itself on the back for averting the Y2K catastrophe a quarter of a century ago.

The satellite broadband service fell over on December 31, 2024, for 48 hours. According to Eutelsat, "the root cause was identified as a software issue within the ground segment." Issues began just after 0000 UTC, and it took until January 1 to get 80 percent of the network operational. By the morning of January 2, everything was working again.

A spokesperson told The Register: "We can confirm that the issue was caused by a leap year problem, related to day 366 in 2024, which impacted the manual calculation for the GPS-to-UTC offset."

An issue related to the number of days in a leap year will have many software engineers stroking their chins thoughtfully. While it is usually the extra day itself that can cause the odd issue or two, failing to take it into account when rolling into a new year can also cause headaches, as evidenced by OneWeb's woes.

[...] The issue faced by Eutelsat OneWeb was not due to two digits being used to store the year, which is what happened in the Y2K incident, but rather an oversight that meant the extra day in a leap year was not adequately accounted for. Since accurate timekeeping is required for communication and navigation systems, problems with the offset would cause an understandable – if not excusable – service outage.
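
Eutelsat hasn't published the offending code, but as a hypothetical reconstruction of how a "day 366" bug bites, consider a manual day-of-year conversion whose month table silently assumes a 365-day year:

    import datetime

    # Hypothetical reconstruction, not Eutelsat's ground-segment code.
    def doy_to_date_buggy(year: int, day_of_year: int) -> datetime.date:
        days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # no leap day
        month = 1
        for dim in days_in_month:
            if day_of_year <= dim:
                return datetime.date(year, month, day_of_year)
            day_of_year -= dim
            month += 1
        raise ValueError("day 366 falls off the end of the table")

    # doy_to_date_buggy(2024, 366) raises instead of returning 2024-12-31.
    # The standard library handles it:
    # datetime.date(2024, 1, 1) + datetime.timedelta(days=365) == date(2024, 12, 31)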

The good news is that the problem was in the software in the ground segment, meaning that the hardware in orbit was unaffected. However, the incident is embarrassing for Eutelsat since it is one of the leaders of the SpaceRISE industry consortium, recently tapped by Eurocrats for the multibillion-euro IRIS² satellite broadband deal.


Original Submission

posted by janrinok on Friday January 10, @02:52PM   Printer-friendly

[Source]: Techdirt

For decades now, U.S. wireless carriers have sold consumers "unlimited data" plans that actually have all manner of sometimes hidden throttling, caps, download limits, and restrictions. And every few years a regulator comes out with a wrist slap against wireless carriers for misleading consumers, for whatever good it does.

Back in 2007, for example, then NY AG Andrew Cuomo fined Verizon a tiny $150,000 for selling "unlimited" plans that were very limited (Verizon kept doing it anyway). In 2019, the FTC fined AT&T $60 million for selling "unlimited" plans that were very limited, then repeatedly lying to consumers about it (impacted consumers ultimately saw refunds of around $22 each).

It's gotten slightly better, but it's still a problem. Providers still impose all manner of weird restrictions on mobile lines and then bury them in their fine print, something that's likely only to get worse after Trump 2.0 takes an absolute hatchet to whatever's left of regulatory independence and federal consumer protection.

In the interim, telecom providers are even bickering about the definition of "unlimited" between themselves. For example, Verizon is mad that Charter Communications (a cable company that got into wireless) is advertising its wireless service as "unlimited" while telling users they can "use all the data you want."


Original Submission

posted by hubie on Friday January 10, @10:09AM   Printer-friendly
from the be-careful-out-there dept.

Arthur T Knackerbracket has processed the following story:

Android malware dubbed FireScam tricks people into thinking they are downloading a Telegram Premium application; once installed, it stealthily monitors victims' notifications, text messages, and app activity while stealing sensitive information via Firebase services.

Cyfirma researchers spotted the new infostealer with spyware capabilities and said the malware is distributed through a GitHub.io-hosted phishing website that mimics RuStore, a popular Russian Federation app store.

The phishing site delivers a dropper named ru[.]store[.]installer, which installs as GetAppsRu[.]apk. When launched, it prompts users to install Telegram Premium.

Of course, this isn't really the messaging app but rather the FireScam malware, and it targets devices running Android 8 through 15.

Once installed, it requests a series of permissions that allow it to query and list all installed applications on the device, access and modify external storage, and install and delete other apps.

Plus, one of the permissions designates the miscreant who installed FireScam as the app's "update owner," thus preventing legitimate updates from other sources and enabling the malware to maintain persistence on the victim's device.
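
Update ownership turns on which package the system records as an app's installer. As a rough triage sketch (hypothetical, and in no way a FireScam detector), you can dump each package's recorded installer over adb and review anything not installed by a store you trust:

    # Rough triage sketch: list packages and their recorded installers via adb.
    # Hypothetical helper; needs adb, USB debugging, and Python 3.9+.
    import subprocess

    TRUSTED = {"com.android.vending"}   # assumption: Play Store only

    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-i"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        # expected shape: "package:com.example.app installer=com.some.store"
        if " installer=" not in line:
            continue
        pkg, installer = line.split(" installer=", 1)
        if installer.strip() not in TRUSTED:
            print(pkg.removeprefix("package:"), "<-", installer.strip() or "unknown")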

Attackers can use the infostealer/surveillance malware to intercept and steal sensitive device and personal information, including notifications, messages, other app data, clipboard content, and USSD responses, which may include account balances, mobile transactions, or network-related data.

"These logs are then exfiltrated to a Firebase database, granting attackers remote access to the captured details without the user's knowledge," Cyfirma's researchers noted.

Stolen data is temporarily stored in the Firebase Realtime Database, filtered for valuable information, and then later removed.

This use of legitimate services – specifically Firebase, in this case, for data exfiltration and command-and-control (C2) communications – also helps the malware evade detection and is a tactic increasingly used to disguise malicious traffic and payloads.


Original Submission