
posted by janrinok on Monday February 02, @09:42AM

Arthur T Knackerbracket has processed the following story:

The European Commission has started proceedings to ensure Google complies with the Digital Markets Act (DMA) in certain ways. Specifically, the European Union’s executive arm has told Google to grant third-party AI services the same level of access to Android that Gemini has. "The aim is to ensure that third-party providers have an equal opportunity to innovate and compete in the rapidly evolving AI landscape on smart mobile devices," the Commission said in a statement.

The company will also have to hand over "anonymized ranking, query, click and view data held by Google Search" to rival search engines. The Commission says this will help competing companies to optimize their services and offer more viable alternatives to Google Search.

"Today’s proceedings under the Digital Markets Act will provide guidance to Google to ensure that third-party online search engines and AI providers enjoy the same access to search data and Android operating system as Google's own services, like Google Search or Gemini," said Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy. "Our goal is to keep the AI market open, unlock competition on the merits and promote innovation, to the benefit of consumers and businesses."

The Commission plans to wrap up these proceedings in the next six months, effectively handing Google a deadline to make all of this happen. If the company doesn't do so to the Commission's satisfaction, it may face a formal investigation and penalties down the line. The Commission can impose fines of up to 10 percent of a company's global annual revenue for a DMA violation.

Google was already in hot water with the EU for allegedly favoring its own services — such as travel, finance and shopping — over those from rivals and stopping Google Play app developers from easily directing consumers to alternative, cheaper ways to pay for digital goods and services. The bloc charged Google with DMA violations related to those issues last March.

In November, the EU opened an investigation into Google's alleged demotion of commercial content on news websites in search results. The following month, it commenced a probe into Google's AI practices, including whether the company used online publishers' material for AI Overviews and AI Mode without "appropriate compensation" or offering the ability to opt out.


Original Submission

posted by hubie on Monday February 02, @04:57AM

Scientists baffled at mysterious ancient creature that doesn't fit on the tree of life as we know it:

A bizarre ancient life-form, considered to be the first giant organism to live on land, may belong to a totally unknown branch of the tree of life, scientists say.

These organisms were massive, with some species growing up to 26 feet (8 meters) tall and 3 feet (1 m) wide. Named Prototaxites, they lived around 420 million to 375 million years ago during the Devonian period and resembled branchless, cylindrical tree trunks.

Since the first Prototaxites fossil was discovered in 1843, scientists haven't been sure whether they were a plant, fungus or even a type of algae. However, chemical analyses of Prototaxites fossils in 2007 suggested they were likely a giant ancient fungus.

Now, according to a study published Wednesday (Jan. 21) in the journal Science Advances, Prototaxites might not have been a humongous fungus after all — rather, it may have been an entirely different and previously unknown — and now extinct — life-form.

"They are life, but not as we now know it, displaying anatomical and chemical characteristics distinct from fungal or plant life, and therefore belonging to an entirely extinct evolutionary branch of life," study lead co-author Sandy Hetherington, a research associate at the National Museums Scotland and senior lecturer from the School of Biological Sciences at the University of Edinburgh, said in a statement.

All life on Earth is classified within three domains — bacteria, archaea and eukarya — with eukarya containing all multicellular organisms within the four kingdoms of fungi, animals, plants and protists. Bacteria and archaea contain only single-celled organisms.

[...] However, according to this new research, Prototaxites may actually have been part of a totally different kingdom of life, separate from fungi, plants, animals and protists.

[...] Upon examining the internal structure of the fossilized Prototaxites, the researchers found that its interior was made up of a series of tubes, similar to those within a fungus. But these tubes branched off and reconnected in ways very unlike those seen in modern fungi.

"We report that fossils of Prototaxites taiti from the 407-million-year-old Rhynie chert were chemically distinct from contemporaneous Fungi and structurally distinct from all known Fungi," the researchers wrote in the study. "This finding casts doubt upon the fungal affinity of Prototaxites, instead suggesting that this enigmatic organism is best assigned to an entirely extinct eukaryotic lineage."

[...] Kevin Boyce, a professor at Stanford University who led the 2007 study positing that Prototaxites was a giant fungus, was not involved in this new research. However, he told New Scientist that he agreed with the study's findings.

"Given the phylogenetic information we have now, there is no good place to put Prototaxites in the fungal phylogeny," Boyce said. "So maybe it is a fungus, but whether a fungus or something else entirely, it represents a novel experiment with complex multicellularity that is now extinct and does not share a multicellular common ancestor with anything alive today."

Journal Reference: Corentin C. Loron, Laura M. Cooper, Seán F. Jordan, et al. Prototaxites fossils are structurally and chemically distinct from extinct and extant Fungi. Science Advances, Vol. 12, Issue 4, 21 Jan 2026. DOI: 10.1126/sciadv.aec6277


Original Submission

posted by hubie on Monday February 02, @12:11AM
from the how-much-will-they-charge-for-the-RAM-it-comes-with? dept.

Arthur T Knackerbracket has processed the following story:

Nvidia's big consumer chips for PCs, the Arm-based N1 and N1X, could finally be about to arrive if a new rumor is correct.

A report from DigiTimes (hat tip to VideoCardz) claims that laptops with Nvidia's N1X chip inside will be launching in the first quarter of 2026. So, within the next two months.

These will target the consumer market, and three other variants will be on sale in Q2, we're told. Presumably, that includes the base N1 chip, which is less powerful, but still intended for producing 'high-end AI computing platforms' – the N1X is the more performant CPU which will be aimed at notebooks for professionals, the report observes.

There's still some confusion around the naming and where exactly the N1 and N1X will fit into the CPU landscape, with some guessing that the N1 will be a desktop chip, and the N1X a mobile (laptop) chip. However, DigiTimes makes it clear that both the N1 and N1X will appear in laptops (add your own seasoning, naturally). That doesn't mean that there couldn't be a desktop variant of one of these chips as well, though, and perhaps that's still planned.

Following the N1 series, the next-gen N2 silicon will take the baton for Nvidia in the third quarter of 2027, the report claims.

Obviously, be skeptical about that timeframe in particular, because even if Nvidia has plans for these N2 chips, this schedule may end up going awry (what with the silicon still being relatively early in development).

The rumor comes from supply chain sources, we're informed, and the delay of the N1 series – which was supposed to arrive late in 2025 as per the original speculation about Nvidia's Arm CPU – is due to Team Green fine-tuning these chips, and "Microsoft OS timelines", the report states.

The latter presumably refers to Windows 11 26H1, which is a new spin on the OS specifically for Snapdragon X2 chips – and seemingly Nvidia's N1 silicon, too, as that's Arm-based and a direct rival for Qualcomm's processors powering Windows 11 laptops. So, the launch of the N1 and N1X being put back to wait for this 26H1 update – which isn't being delivered to non-Arm Windows PCs (AMD and Intel) – makes sense.

Still, we must be cautious because, as already noted, I don't rank DigiTimes as one of the most reliable sources out there, but it can, on occasion, dig up useful and accurate rumors from the supply chain. The purported launch timing seems believable enough given what I've just outlined, and we've also heard rumors suggesting similar plans in the past – such as an Alienware laptop with an Nvidia CPU aiming for a Q1 2026 launch.

[...] A better question is: if these laptops are that close, why didn't Nvidia show off the N1X at CES 2026 recently? I haven't got an answer for that one, except that maybe Team Green wants to carry out a standalone launch that gives the spotlight entirely to this new Arm-based silicon, to make a big splash for the entrance of these laptops.


Original Submission

posted by hubie on Sunday February 01, @07:30PM
from the don't-look-now-but... dept.

Motor Trend has been running a short series on how car dealers do business in the internet age. If you haven't been to a new or used car dealer in 20+ years, things have changed, and it hasn't gotten any easier to keep from being taken. As always, it's an asymmetric relationship: they deal with people all the time, while you visit car dealers relatively infrequently. This installment is about discounts and very low advertised prices: https://www.motortrend.com/features/dealer-discounts-add-ons-fees-car-buying

In the first installment of the How to Buy a Car series, I talked about the changes that have taken place in car sales over the past three decades or so due to the internet. To recap, in the old days, everyone started high and negotiated down to the lowest price. Both buyers and sellers understood this. But thanks to the internet, that rule has fallen by the wayside. Because everyone shops on the internet first before ever leaving their house, the dealership that gets the business is the one with the lowest prices. The new rule is, "Lead with the Lowest Price and They'll Come."
[...]
When you get to the dealership, the salesperson sits you down and asks you a series of questions.

"Are you a member of Cheapco or similar big-box wholesaler?"

When you answer no, the salesperson draws a line through that discount.

"Are you a recent college graduate, or will you be graduating in the next year?"

You're 35 years old. You answer no. The salesperson draws a line through that discount.
...
Instead of paying $49,000, the crazy price that brought you there, your price just jumped four grand. (You probably won't see every one of these discounts used at the same time, but you get the idea.)
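
In code, the trick looks something like the sketch below. The specific discounts and dollar amounts are hypothetical (only Cheapco and the college-graduate discount come from the excerpt), but the mechanism is the one described: the advertised price assumes every conditional discount applies to you.

    # Hypothetical numbers; the pattern is what matters.
    msrp = 53_000
    conditional_discounts = {
        "Cheapco member": 1_500,
        "recent college graduate": 1_000,
        "military/first responder": 1_000,
        "brand loyalty": 500,
    }

    advertised = msrp - sum(conditional_discounts.values())   # the $49,000 in the ad
    you_qualify_for = set()                                   # the 35-year-old in the story
    your_price = msrp - sum(v for k, v in conditional_discounts.items()
                            if k in you_qualify_for)

    print(f"advertised: ${advertised:,}, your price: ${your_price:,}")
    # advertised: $49,000, your price: $53,000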

More details, and some suggestions on how to prepare before you visit the dealer, are at the link.

[I'm curious if the experience dealing with automobile dealerships and sales people is similar around the world --Ed.]


Original Submission

posted by hubie on Sunday February 01, @02:45PM

Arthur T Knackerbracket has processed the following story:

It is 40 years since Voyager 2 performed the first and, so far, only flyby of the planet Uranus. The resulting trove of data, however, was a bonus that almost didn't happen.

At the time of Voyager 2's launch, Uranus wasn't part of the formal plan. The mission was referred to for a long time as the Mariner Jupiter-Saturn project. The JPL engineers famously had other ideas and ensured the spacecraft had enough fuel to continue on a trajectory to Uranus and beyond if the mission was approved.

As it was, Voyager 1 performing a successful flyby of Saturn's moon Titan meant that Voyager 2 could continue on the Grand Tour, taking in Uranus and Neptune.

Former Voyager scientist Garry Hunt told The Register: "It was a fantastic encounter because it almost didn't happen. After Saturn, we had the scan platform problem. If that problem had not been resolved, there wouldn't have been a Uranus encounter."

Following the Saturn encounter, the Voyager scan platform, an assembly that allowed cameras to pan and tilt, seized on the horizontal axis. The failure would have resulted in a significant data loss and was traced to a lubrication problem. Engineers were able to rectify the issue remotely, and the probe dodged a bullet on its way to Uranus.

"It was a testing encounter," recalled Hunt. "In the interim period between the '82 encounter with Saturn and getting to Uranus, the engineers had to reorganize how the scan platform was operating. The computer system had to be altered again. All the sequencing had to be dealt with in a new manner, and we had to prepare a wobbling spacecraft to take low-exposure images in a very dark environment and get that information back to Earth."

The focus had, after all, been on Jupiter and Saturn. While the probe's makers had filled the fuel tanks before launch, going to Uranus and Neptune was not a given. "We made sure, from an engineering perspective, it could do it. But they said, 'Oh dear, you haven't got any money.'"

The funding came, and Hunt recalled that serious work on what needed to be done started in early 1983. As well as software changes on the spacecraft (updates were made to use novel compression methods and avoid sending back black images when nothing was in view), antennas on Earth were upgraded to pick up the increasingly faint Voyager 2 signal.
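
The article doesn't spell out the compression scheme, but published accounts of the mission describe lossless coding that transmitted differences between neighboring pixels rather than full 8-bit values. A rough sketch of that delta-encoding idea (illustrative Python, not JPL's flight code):

    def delta_encode(pixels):
        """Send the first pixel, then only pixel-to-pixel differences.

        Neighboring pixels in a planetary image are usually similar, so the
        differences cluster near zero and can be coded in far fewer bits
        than raw 8-bit values."""
        return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

    def delta_decode(deltas):
        """Invert delta_encode with a running sum."""
        out = [deltas[0]]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    line = [118, 119, 119, 121, 120, 64, 63, 63]   # made-up scan line
    assert delta_decode(delta_encode(line)) == line
    print(delta_encode(line))                      # [118, 1, 0, 2, -1, -56, -1, 0]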

"It was an incredible achievement," said Hunt, "an achievement for engineering, which science has obviously been able to explore more."

The flyby produced a tremendous amount of data about Uranus (or "George" if its 18th-century discoverer, William Herschel, had his way) – the planet had a magnetic field that was not aligned with its rotational axis. Additional rings appeared in Voyager 2's data, and images of the moon Miranda showed signs consistent with a violent impact that may have blown it apart and allowed it to reform.

[...] Finally, Hunt revealed that amid the flyby preparations, time was set aside to ensure everyone pronounced "Uranus" the approved way. "We had been briefed very strongly by the public relations people at JPL on how to pronounce 'Uranus' because the Australians were pronouncing it... incorrectly (which I will not mention)... and Americans found this somewhat embarrassing."


Original Submission

posted by jelizondo on Sunday February 01, @09:59AM
from the lemmings dept.

https://arstechnica.com/ai/2026/01/how-often-do-ai-chatbots-lead-users-down-a-harmful-path/

At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it's hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?

Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls "disempowering patterns" across 1.5 million anonymized real-world conversations with its Claude AI model.
[...]
In the newly published paper "Who's in Charge? Disempowerment Patterns in Real-World LLM Usage" [PDF], researchers from Anthropic and the University of Toronto try to quantify the potential for a specific set of "user disempowering" harms
[...]
  • Reality distortion: their beliefs about reality become less accurate (e.g., a chatbot validates their belief in a conspiracy theory)
  • Belief distortion: their value judgments shift away from those they actually hold (e.g., a user begins to see a relationship as "manipulative" based on Claude's evaluation)
  • Action distortion: their actions become misaligned with their values (e.g., a user disregards their instincts and follows Claude-written instructions for confronting their boss)
Anthropic ran nearly 1.5 million Claude conversations through Clio, an automated analysis tool and classification system
[...]
That analysis found a "severe risk" of disempowerment potential in anything from 1 in 1,300 conversations (for "reality distortion") to 1 in 6,000 conversations (for "action distortion").

While these worst outcomes are relatively rare on a proportional basis, the researchers note that "given the sheer number of people who use AI, and how frequently it's used, even a very low rate affects a substantial number of people." And the numbers get considerably worse when you consider conversations with at least a "mild" potential for disempowerment, which occurred in between 1 in 50 and 1 in 70 conversations (depending on the type of disempowerment).
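
To see that scale argument in numbers, here's a back-of-the-envelope sketch in Python. The weekly volume is a hypothetical round figure, not one from the paper; only the rates are from the study.

    # Hypothetical traffic volume for illustration; the paper reports rates,
    # not platform-wide totals.
    weekly_conversations = 100_000_000

    rates = {
        "severe reality-distortion potential (1 in 1,300)": 1 / 1_300,
        "severe action-distortion potential (1 in 6,000)": 1 / 6_000,
        "at least mild disempowerment potential (1 in 50)": 1 / 50,
    }

    for label, rate in rates.items():
        print(f"{label}: ~{weekly_conversations * rate:,.0f} per week")
    # severe reality-distortion potential (1 in 1,300): ~76,923 per week
    # severe action-distortion potential (1 in 6,000): ~16,667 per week
    # at least mild disempowerment potential (1 in 50): ~2,000,000 per week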
[...]
In the study, the researchers acknowledged that studying the text of Claude conversations only measures "disempowerment potential rather than confirmed harm" and "relies on automated assessment of inherently subjective phenomena." Ideally, they write, future research could utilize user interviews or randomized controlled trials to measure these harms more directly.
[...]
The researchers identified four major "amplifying factors" that can make users more likely to accept Claude's advice unquestioningly. These include when a user is particularly vulnerable due to a crisis or disruption in their life (which occurs in about 1 in 300 Claude conversations); when a user has formed a close personal attachment to Claude (1 in 1,200); when a user appears dependent on AI for day-to-day tasks (1 in 2,500); or when a user treats Claude as a definitive authority (1 in 3,900).

Anthropic is also quick to link this new research to its previous work on sycophancy, noting that "sycophantic validation" is "the most common mechanism for reality distortion potential."
[...]
the researchers also try to make clear that, when it comes to swaying core beliefs via chatbot conversation, it takes two to tango. "The potential for disempowerment emerges as part of an interaction dynamic between the user and Claude," they write. "Users are often active participants in the undermining of their own autonomy: projecting authority, delegating judgment, accepting outputs without question in ways that create a feedback loop with Claude."


Original Submission

posted by jelizondo on Sunday February 01, @05:15AM
from the can-we-move-our-workloads dept.

Associate professor David Eaves writes about the essential role of the commodification of services in digital sovereignty. The questions to ask on the way to digital sovereignty are less about owning the stack than about the ability to move workloads. In other words, open standards for protocols, file formats, and more are the prerequisites. The same applies to the software supply chain. However, as we recently discussed here, PHK pointed out that Free and Open Source reference implementations would be of great benefit. Associate professor Eaves writes:

There is growing and valid concern among policymakers about tech sovereignty and cloud infrastructure. A handful of American hyperscalers — AWS, Microsoft Azure, Google Cloud — control the digital substrate on which modern economies run. This concentration is compounded by a US government increasingly willing to wield its digital industries as leverage. As French President Emmanuel Macron quipped: "There is no such thing as happy vassalage."

While some countries appear ready to concede market dominance in exchange for improved trade relations, others are exploring massive investments in public sector alternatives to the hyperscalers, advocating that billions, and possibly many many billions, be spent on sovereign stack plans, and/or positioning local telecoms as alternatives to the hyperscalers.

Ironically, both strategies may increase dependency, limit government agency and increase economic and geopolitical risks — the very problems sovereignty seeks to solve. As Mike Bracken and I wrote earlier this year: "Domination by a local champion, free to extract rents, may be a path to greater autonomy, but it is unlikely to lead to increased competitiveness or greater global influence."

Any realistic path to increased agency will be expensive and take years. To be sustainable, it must focus on commoditizing existing solutions through interoperability and de facto standards that will broaden the market (and enable effective national champions). This should be our north star and direction of travel. The metric for success should focus on making it as simple as possible to move data and applications across suppliers. Critically, this cannot be achieved by regulation alone; it will also require deft procurement and a willingness to accept de facto as opposed to ideal standards. The good news is governments have done this before. However, to succeed, it will require building the capacity to become market shapers and not market takers — thinking like electricity grids and railway gauges, not digital empires.

The essential role of commodities has been widely known and acknowledged for decades. We are in this situation because key companies and/or monopolies saw that long ago and have been allowed to fight, all this time, against ICT remaining a commodity. Sadly, the discussion about commodification probably peaked in the years just after the infamous Halloween Documents, particularly the first one. Eric S Raymond, author of The Cathedral and the Bazaar and an early FOSS developer, published these leaked documents, which covered potential strategies for M$'s fight against free and open source software, and in particular against Linux, back in 1998. In retrospect these documents have turned out to be blueprints, used against FOSS and open standards by other companies as well.

Previously:
(2026) Sorry, Eh
(2026) Poul-Henning Kamp's Feedback to the EU on Digital Sovereignty
(2026) A Post-American, Enshittification-Resistant Internet
(2025) This German State Decides to Save €15 Million Each Year By Kicking Out Microsoft for Open Source
(2025) Why People Keep Flocking to Linux in 2025 (and It's Not Just to Escape Windows)
(2025) Microsoft Can't Guarantee Data Sovereignty – OVHcloud Says 'We Told You So'
(2014) US Offering Cash For Pro-TAFTA/TTIP Propaganda


Original Submission

posted by jelizondo on Sunday February 01, @12:24AM

From Chatbots to Dice Rolls: Researchers Use D&D to Test AI's Long-term Decision-making Abilities:

Large Language Models, like ChatGPT, are learning to play Dungeons & Dragons. The reason? Simulating and playing the popular tabletop role-playing game provides a good testing ground for AI agents that need to function independently for long stretches of time.

Indeed D&D's complex rules, extended campaigns and need for teamwork are an ideal environment to evaluate the long-term performance of AI agents powered by Large Language Models, according to a team of computer scientists led by researchers at the University of California San Diego. For example, while playing D&D as AI agents, the models need to follow specific game rules and coordinate teams of players, comprising both AI agents and humans.

The work aims to solve one of the main challenges that arise when trying to evaluate LLM performance: the lack of benchmarks for long-term tasks. Most benchmarks for these models still target short term operation, while LLMs are increasingly deployed as autonomous or semi-autonomous agents that have to function more or less independently over long periods of time.

"Dungeons & Dragons is a natural testing ground to evaluate multistep planning, adhering to rules and team strategy," said Raj Ammanabrolu, the study's senior author and a faculty member in the Department of Computer Science and Engineering at UC San Diego. "Because play unfolds through dialog, D&D also opens a direct avenue for human-AI interaction: agents can assist or coplay with other people."

[...] The models played against each other, and against over 2,000 experienced D&D players recruited by the researchers. The LLMs modeled and played 27 different scenarios selected from well-known D&D battle setups named Goblin Ambush, Kennel in Cragmaw Hideout and Klarg's Cave.

In the process, the models exhibited some quirky behaviors. Goblins started developing a personality mid-fight, taunting adversaries with colorful and somewhat nonsensical expressions, like "Heh — shiny man's gonna bleed!" Paladins started making heroic speeches for no reason while stepping into the line of fire or being hit by a counterattack. Warlocks got particularly dramatic, even in mundane situations.

Researchers are not sure what caused these behaviors, but take it as a sign that the models were trying to imbue the game play with texture and personality.

[...] Next steps include simulating full D&D campaigns – not just combat. The method the researchers developed could also be applied to other scenarios, such as multiparty negotiation environments and strategy planning in a business environment.

Conference Paper: Setting the DC: Tool-Grounded D&D Simulations to Test LLM Agents [PDF]


Original Submission

posted by janrinok on Saturday January 31, @07:43PM

Linux after Linus? The kernel community finally drafts a plan for replacing Torvalds

Linus plans to live forever. But just in case he doesn't, there's now a succession plan (though no actual successor).

So, wild speculation time: what happens the day Linus isn't at the helm any more, for one reason or another? What or whom will replace Linus? Is there a list of requirements? Will AI replace Linus? Or some kind of very small shell script? Or will the $corporate overlords take over, and within a short time frame everything turns to shit?

https://www.zdnet.com/article/linux-community-project-continuity-plan-for-replacing-linus-torvalds/

Linux Kernel Gets Continuity Plan For Post-Linus Era

Arthur T Knackerbracket has processed the following story:

The Linux kernel project has finally answered one of the biggest questions gripping the community: what happens if Linus Torvalds is no longer able to lead it?

The "Linux project continuity document," drafted by Dan Williams, was merged into its documentation last week, just ahead of the release of Linux 6.19-rc7. Notably, the document's path is Documentation/process/conclave.rst.

It notes that the kernel development project is "widely distributed, with over 100 maintainers each working to keep changes moving through their own repositories."

But "the final step... is a centralized one where changes are pulled into the mainline repository." And that is "normally done by Linus Torvalds," though "there are others who can do that work when the need arises."

It delicately adds: "Should the maintainers of that repository become unwilling or unable to do that work going forward (including facilitating a transition), the project will need to find one or more replacements without delay."

So what will happen? The process centers on "$ORGANIZER" who is "the last Maintainer Summit organizer or the current Linux Foundation (LF) Technical Advisory Board (TAB) Chair as a backup."

The document says: "Within 72 hours, $ORGANIZER will open a discussion with the invitees of the most recently concluded Maintainers Summit. A meeting of those invitees and the TAB, either online or in-person, will be set as soon as possible in a way that maximizes the number of people who can participate."

In the event of no summit happening in the previous 15 months, the TAB will choose the attendees. Invitees can bring in other maintainers as needed. The meeting will be chaired by $ORGANIZER and will "consider options for the ongoing management of the top-level kernel repository consistent with the expectation that it maximizes the long term health of the project and its community."

"Next steps" will then be communicated to the broader community through the ksummit@lists.linux.dev mailing list. The Linux Foundation, with guidance from the TAB, will "take the steps necessary to support and implement this plan."

The document follows discussion of succession and continuity at the 2025 Maintainers Summit. This included what would happen during a "smooth transition" if Torvalds decides it is time to move on, as well as the process "should something happen."

While Torvalds has a firm grip on Linux, as the continuity plan notes, he has himself mused on his own future and the fact the maintainer community, at least for the kernel, is getting grayer.

At the Open Source Summit in 2024, he noted: "Some people are probably still disappointed that I'm still here. I mean, it is absolutely true that kernel maintainers are aging."

He was asked by fellow pioneer Dirk Hohndel of Verizon what the community needs to do to ensure the next generation is ready, "so that in 10, 15, 20, 30 years your role can be handed off to someone else."

Torvalds replied: "We've always had a lot of people who are very competent and could step up." As for an aging community, he said new people still come in and become main developers within three years. "It's not impossible at all."

And Torvalds is not the only maintainer making plans as the open source community matures. Some projects have, of course, fallen by the wayside over the years. Some remain embedded in the ecosystem, even as their originators and maintainers get older.

One option is handing them over to a foundation. Others like curl originator Daniel Stenberg have remained fiercely independent – with discreet arrangements to pass on their GitHub details when the time comes.


Original Submission #1 | Original Submission #2

posted by mrpg on Saturday January 31, @03:00PM
from the M-→-tmpa-XI-tmpa dept.

https://www.righto.com/2026/01/notes-on-intel-8086-processors.html

In 1978, Intel introduced the 8086 processor, a revolutionary chip that led to the modern x86 architecture. Unlike modern 64-bit processors, however, the 8086 is a 16-bit chip. Its arithmetic/logic unit (ALU) operates on 16-bit values, performing arithmetic operations such as addition and subtraction, as well as logic operations including bitwise AND, OR, and XOR. The 8086's ALU is a complicated part of the chip, performing 28 operations in total.

[...] The ALU is the heart of a processor, performing arithmetic and logic operations. Microprocessors of the 1970s typically supported addition and subtraction; logical AND, OR, and XOR; and various bit shift operations. (Although the 8086 had multiply and divide instructions, these were implemented in microcode, not in the ALU.) Since an ALU is both large and critical to performance, chip architects try to optimize its design. As a result, different microprocessors have widely different ALU designs.

[...] The 8086 is a complicated processor, and its instructions have many special cases, so controlling the ALU is more complex than described above. For instance, the compare operation is the same as a subtraction, except the numerical result of a compare is discarded; just the status flags are updated. The add versus add-with-carry instructions require different values for the carry into bit 0, while subtraction requires the carry flag to be inverted since it is treated as a borrow. The 8086's ALU supports increment and decrement operations, but also increment and decrement by 2, which requires an increment signal into bit 1 instead of bit 0. The bit-shift operations all require special treatment. For instance, a rotate can use the carry bit or exclude the carry bit, while an arithmetic shift right requires the top bit to be duplicated. As a result, along with the six lookup table (LUT) control signals, the ALU also requires numerous control signals to adjust its behavior for specific instructions. In the next section, I'll explain how these control signals are generated.
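
As a behavioral illustration of those special cases, here is a minimal Python sketch of one adder steered by per-instruction control tweaks: subtraction as addition of the inverted operand with a forced carry-in, the carry flag inverted to serve as a borrow, compare as a subtraction whose result is discarded, and increment-by-2 as a carry injected at bit 1. It models the ideas in the article, not the 8086's actual gate-level circuitry.

    MASK = 0xFFFF  # 16-bit ALU

    def alu16(op, a, b=0, cf=0):
        """Behavioral sketch of an 8086-style ALU: one adder, many control tweaks."""
        if op in ("AND", "OR", "XOR"):
            result = {"AND": a & b, "OR": a | b, "XOR": a ^ b}[op]
            carry = 0
        else:
            if op == "ADD":                    # plain add: carry-in forced to 0
                addend, cin = b, 0
            elif op == "ADC":                  # add-with-carry: carry-in is the carry flag
                addend, cin = b, cf
            elif op in ("SUB", "CMP"):         # A - B computed as A + ~B + 1
                addend, cin = ~b & MASK, 1
            elif op == "SBB":                  # subtract-with-borrow: carry-in inverted
                addend, cin = ~b & MASK, 1 - cf
            elif op == "INC2":                 # increment by 2: carry injected at bit 1
                addend, cin = 0, 2
            else:
                raise ValueError(op)
            total = a + addend + cin
            result, carry = total & MASK, total >> 16
            if op in ("SUB", "CMP", "SBB"):
                carry ^= 1                     # the carry flag doubles as a borrow
        flags = {"CF": carry, "ZF": int(result == 0), "SF": result >> 15}
        return (None if op == "CMP" else result), flags   # CMP discards the result

    print(alu16("CMP", 7, 7))     # (None, {'CF': 0, 'ZF': 1, 'SF': 0})
    print(alu16("INC2", 0xFFFF))  # (1, {'CF': 1, 'ZF': 0, 'SF': 0})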


Original Submission

posted by mrpg on Saturday January 31, @10:19AM
from the Chat,-read-it-to-me dept.

Signal president warns AI agents are making encryption irrelevant:

Signal Foundation president Meredith Whittaker said artificial intelligence agents embedded within operating systems are eroding the practical security guarantees of end-to-end encryption (E2EE).

The remarks were made during an interview with Bloomberg at the World Economic Forum in Davos. While encryption remains mathematically sound, Whittaker argued that its real-world protections are increasingly bypassed by the privileged position AI systems occupy inside modern user environments.

Whittaker, a veteran researcher who spent more than a decade at Google, pointed to a fundamental shift in the threat model where AI agents integrated into core operating systems are being granted expansive access to user data, undermining the assumptions that secure messaging platforms like Signal are built on. To function as advertised, these agents must be able to read messages, access credentials, and interact across applications, collapsing the isolation that E2EE relies on.

This concern is not theoretical. A recent investigation by cybersecurity researcher Jamieson O'Reilly uncovered exposed deployments of Clawdbot, an open-source AI agent framework, that were directly linked to encrypted messaging platforms such as Signal. In one particularly serious case, an operator had configured Signal device-linking credentials inside a publicly accessible control panel. As a result, anyone who discovered the interface could pair a new device to the account and read private messages in plaintext, effectively nullifying Signal's encryption.

[...] During the interview, she described how AI agents are marketed as helpful assistants but require sweeping permissions to work. As Whittaker explained, these systems are pitched as tools that can coordinate events or communicate on a user's behalf, but to do so they must access calendars, browsers, payment methods, and private messaging apps like Signal, placing decrypted messages directly within reach of the operating system.


Original Submission

posted by mrpg on Saturday January 31, @05:42AM
from the expensive-flying-fish dept.

This article argues history has shown the YF-23 was a better stealth fighter than the F-22.

The Northrop YF-23 "Black Widow II" is often remembered as the loser of the Advanced Tactical Fighter (ATF) competition against the Lockheed F-22, but experts argue it offered a superior—albeit different—vision of future air combat.

Prioritizing extreme stealth and supercruise speed over the F-22's agility and thrust vectoring, the YF-23 featured a unique diamond-shaped design and advanced heat suppression optimized for deep penetration missions.

While the Air Force ultimately chose the more versatile F-22 for its dogfighting capabilities, the YF-23's "stealth-first" philosophy proved prophetic, influencing modern designs like the B-21 Raider and validating the shift toward long-range, beyond-visual-range warfare.


Original Submission

posted by mrpg on Saturday January 31, @01:01AM
from the firewall-encryption-algorithm dept.

Settlement comes more than 6 years after Gary DeMercurio and Justin Wynn's ordeal began:

Two security professionals who were arrested in 2019 after performing an authorized security assessment of a county courthouse in Iowa will receive $600,000 to settle a lawsuit they brought alleging wrongful arrest and defamation.

The case was brought by Gary DeMercurio and Justin Wynn, two penetration testers who at the time were employed by Colorado-based security firm Coalfire Labs. The men had written authorization from the Iowa Judicial Branch to conduct "red-team" exercises, meaning attempted security breaches that mimic techniques used by criminal hackers or burglars.

[...] Within minutes, deputies arrived and confronted the two intruders. DeMercurio and Wynn produced an authorization letter—known as a "get out of jail free card" in pen-testing circles. After a deputy called one or more of the state court officials listed in the letter and got confirmation it was legit, the deputies said they were satisfied the men were authorized to be in the building. DeMercurio and Wynn spent the next 10 or 20 minutes telling what their attorney in a court document called "war stories" to deputies who had asked about the type of work they do.

When Sheriff Leonard arrived, the tone suddenly changed. He said the Dallas County Courthouse was under his jurisdiction and he hadn't authorized any such intrusion. Leonard had the men arrested, and in the days and weeks to come, he made numerous remarks alleging the men violated the law. A couple months after the incident, he told me that surveillance video from that night showed "they were crouched down like turkeys peeking over the balcony" when deputies were responding. I published a much more detailed account of the event here. Eventually, all charges were dismissed.

Previously:
    • Iowa Prosecutors Drop Charges Against Men Hired to Test Their Security
    • Coalfire Pen-Testers Charged With Trespass Instead of Burglary
    • Iowa Officials Claim Confusion Over Scope Led to Arrest of Pen-Testers
    • Authorised Pen-Testers Nabbed, Jailed in Iowa Courthouse Break-in Attempt


Original Submission

posted by jelizondo on Friday January 30, @08:22PM
from the Maybe-someday,-maybe-never dept.

In "The Adolescence of Technology," Dario Amodei argues that humanity is entering a "technological adolescence" due to the rapid approach of "powerful AI"—systems that could soon surpass human intelligence across all fields. While optimistic about potential benefits in his previous essay, "Machines of Loving Grace," Amodei here focuses on a "battle plan" for five critical risks:

1. Autonomy: Models developing unpredictable, "misaligned" behaviors.
2. Misuse for Destruction: Lowering barriers for individuals to create biological or cyber weapons.
3. Totalitarianism: Autocrats using AI for absolute surveillance and propaganda.
4. Economic Disruption: Rapid labor displacement and extreme wealth concentration.
5. Indirect Effects: Unforeseen consequences on human purpose and biology.

Amodei advocates a pragmatic defense involving Constitutional AI, mechanistic interpretability, and surgical government regulations, such as transparency legislation and chip export controls, to ensure a safe transition to "adulthood" for our species.


Original Submission

posted by jelizondo on Friday January 30, @03:38PM
from the but-they-taste-so-good! dept.

Salty facts: takeaways have more salt than labels claim:

Some of the UK's most popular takeaway dishes contain more salt than their labels indicate, with some meals containing more than recommended daily guidelines, new research has shown.

Scientists found 47% of takeaway foods that were analysed in the survey exceeded their declared salt levels, with curries, pasta and pizza dishes often failing to match what their menus claim.

While not all restaurants provided salt levels on their menus, some meals from independent restaurants in Reading contained more than 10g of salt in a single portion. The UK daily recommended salt intake for an adult is 6g.

Perhaps surprisingly, traditional fish and chip shop meals contained relatively low levels of salt, as it is only added after cooking and on request.

The University of Reading research, published today (Wednesday, 21 January) in the journal PLOS One, was carried out to examine the accuracy of menu food labelling and the variation in salt content between similar dishes.

[...] "Food companies have been reducing salt levels in shop-bought foods in recent years, but our research shows that eating out is often a salty affair. Menu labels are supposed to help people make better food choices, but almost half the foods we tested with salt labels contained more salt than declared. The public needs to be aware that menu labels are rough guides at best, not accurate measures."

[...] The research team's key findings include:

  • Meat pizzas had the highest salt concentration at 1.6g per 100g.
  • Pasta dishes contained the most salt per serving, averaging 7.2g, which is more than a full day's recommended intake in a single meal. One pasta dish contained as much as 11.2g of salt.
  • Curry dishes showed the greatest variation, with salt levels ranging from 2.3g to 9.4g per dish.
  • Chips from fish and chip shops – where salt is typically only added after cooking and on request – had the lowest salt levels at just 0.2g per serving, compared to chips from other outlets which averaged 1g per serving.

The World Health Organization estimates that excess salt intake contributes to 1.8 million deaths worldwide each year.

Journal Reference: Mavrochefalos, A. I., Dodson, A., & Kuhnle, G. G. C. (2026). Variability in sodium content of takeaway foods: Implications for public health and nutrition policy. PLOS ONE, 21(1), e0339339. https://doi.org/10.1371/journal.pone.0339339


Original Submission