

posted by janrinok on Saturday November 01, @08:39PM   Printer-friendly

Source: https://www.tomshardware.com/software/china-releases-ubios-standard-to-replace-uefi-huawei-backed-bios-firmware-replacement-charges-chinas-domestic-computing-goals

China has worked for years to further separate its computing progress from the United States and its tech companies. Today [October 23, 2025] heralds a major development to this end, as the Global Computing Consortium has announced the "UBIOS" global standard, a new replacement for UEFI and BIOS. The GCC's new standard is a rebuilding of BIOS firmware from the ground up, bypassing UEFI development entirely.

UBIOS, or "Unified Basic Input/Output System", is a firmware standard to replace BIOS and UEFI, the first and most prolific motherboard firmware architectures, respectively, that bridge the gap between processors and operating systems. The UBIOS standard was drafted by 13 Chinese tech companies, including Huawei, CESI (China Electronics Standardization Institute), Byosoft, and Kunlun Tech.

The working group claims it chose to avoid the UEFI spec due to the development bloat of UEFI and TianoCore EDK II, the Intel-made reference implementation of UEFI used almost universally among UEFI hardware and software developers.

UBIOS's unique features over UEFI include increased support for chiplets and other heterogeneous computing use cases, such as multi-CPU motherboards with mismatched CPUs, something UEFI struggles with or does not support. It will also better support non-x86 CPU architectures such as ARM, RISC-V, and LoongArch, the first major Chinese CPU architecture.


Original Submission

posted by janrinok on Saturday November 01, @03:55PM   Printer-friendly

Nvidia reveals Vera Rubin Superchip for the first time:

At its GTC keynote in DC on Tuesday, Nvidia unveiled its next-generation Vera Rubin Superchip, comprising two Rubin GPUs for AI and HPC as well as its custom 88-core Vera CPU. All three components will be in production this time next year, Nvidia says.

"This is the next generation Rubin," said Jensen Huang, chief executive of Nvidia, at GTC. "While we are shipping GB300, we are preparing Rubin to be in production this time next year, maybe slightly earlier. [...] This is just an incredibly beautiful computer. So, this is amazing, this is 100 PetaFLOPS [of FP4 performance for AI]."

Indeed, Nvidia's Superchips tend to look more like a motherboard (on an extremely thick PCB) rather than a 'chip' as they carry a general-purpose custom CPU and two high-performance compute GPUs for AI and HPC workloads. The Vera Rubin Superchip is not an exception, and the board carries Nvidia's next-generation 88-core Vera CPU surrounded by SOCAMM2 memory modules carrying LPDDR memory and two Rubin GPUs covered with two large rectangular aluminum heat spreaders.

Markings on the Rubin GPU indicate that it was packaged in Taiwan in the 38th week of 2025, which is late September, suggesting that the company has had working silicon for some time now. The heatspreader is about the same size as that of Blackwell processors, so we cannot determine the exact size of the GPU package or the die sizes of the compute chiplets. Meanwhile, the Vera CPU does not appear to be monolithic, as it has visible internal seams, implying that we are dealing with a multi-chiplet design.

A picture of the board that Nvidia demonstrated once again reveals that each Rubin GPU comprises two compute chiplets, eight HBM4 memory stacks, and one or two I/O chiplets. Interestingly, this time around Nvidia demonstrated the Vera CPU with a very distinct I/O chiplet located next to it. The image also shows green features coming from the I/O pads of the CPU die, the purpose of which is unknown. Perhaps some of Vera's I/O capabilities are enabled by external chiplets located beneath the CPU itself. Of course, we are speculating, but there is definitely some intrigue around the Vera processor.

Interestingly, the Vera Rubin Superchip board no longer has industry-standard slots for cabled connectors. Instead, there are two NVLink backplane connectors on top to connect the GPUs to the NVLink switch, enabling scale-up within a rack, and three connectors on the bottom edge for power, PCIe, CXL, and so on.

In general, Nvidia's Vera Rubin Superchip board looks quite baked, so expect the unit to ship sometime in late 2026 and get deployed by early 2027.


Original Submission

posted by hubie on Saturday November 01, @11:11AM   Printer-friendly
from the pure-dystopian-creep dept.

Videos on social media show officers from ICE and CBP using facial recognition technology on people in the field. One expert described the practice as "pure dystopian creep."

"You don't got no ID?" a Border Patrol agent in a baseball cap, sunglasses, and neck gaiter asks a kid on a bike. The officer and three others had just stopped the two young men on their bikes during the day in what a video documenting the incident says is Chicago. One of the boys is filming the encounter on his phone. He says in the video he was born here, meaning he would be an American citizen.

When the boy says he doesn't have ID on him, the Border Patrol officer has an alternative. He calls over to one of the other officers, "can you do facial?" The second officer then approaches the boy, gets him to turn around to face the sun, and points his own phone camera directly at him, hovering it over the boy's face for a couple seconds. The officer then looks at his phone's screen and asks for the boy to verify his name. The video stops.

Extended article:
https://www.404media.co/ice-and-cbp-agents-are-scanning-peoples-faces-on-the-street-to-verify-citizenship/
https://archive.ph/HUQwc


Original Submission

posted by hubie on Saturday November 01, @06:27AM   Printer-friendly

https://www.tomshardware.com/software/linux/nearly-90-percent-of-windows-games-now-run-on-linux-latest-data-shows-as-windows-10-dies-gaming-on-linux-is-more-viable-than-ever

The viability of Linux as a gaming platform has come on in leaps and bounds in recent years, thanks to the sterling work of WINE and Proton developers, among others, and interest in hardware like the Steam Deck. However, the most recent stats from ProtonDB (via Boiling Steam) highlight that we are edging towards a magnificent milestone. The latest distilled data shows that almost 90% of Windows games now run on Linux.

Having nine in ten Windows games accessible in a new Linux install is quite an achievement. The milestone comes as we see computer users flocking to other platforms during the transition from the Windows 10 to 11 eras. Of course, the underlying data isn't quite so simple as the headline stat. There are different degrees of compatibility gamers must consider when checking if their favorite Windows games work on Linux distros like Mint, Zorin, Bazzite, or even SteamOS.

[...] On the flip side, there are some popular titles that don't look likely to become Linux-friendly anytime soon. The well-known compatibility issues with various anti-cheat technology platforms look set to persist, for now. Moreover, Boiling Steam notes that other devs just seem to be averse to non-Windows gamers. There is quite a bit that can be done with the unintentionally stubborn games, though. We'd recommend researching community-driven Linux compatibility tips and tweaks for your favorite games.


Original Submission

posted by hubie on Saturday November 01, @01:41AM   Printer-friendly
from the there's-still-plenty-of-room-at-the-bottom dept.

Quantum Mechanics Trumps the Second Law of Thermodynamics at the Atomic Scale:

Two physicists at the University of Stuttgart have proven that the Carnot principle, a central law of thermodynamics, does not apply to objects on the atomic scale whose physical properties are linked (so-called correlated objects). This discovery could, for example, advance the development of tiny, energy-efficient quantum motors. The derivation has been published in the journal Science Advances.

Internal combustion engines and steam turbines are thermal engines: They convert thermal energy into mechanical motion—or, in other words, heat into motion. In recent years, quantum mechanical experiments have succeeded in reducing the size of heat engines to the microscopic range.

"Tiny motors, no larger than a single atom, could become a reality in the future," says Professor Eric Lutz from the Institute for Theoretical Physics I at the University of Stuttgart. "It is now also evident that these engines can achieve a higher maximum efficiency than larger heat engines."

Scientists break 200-year-old principle to create atomic engines that power future nanobots:

A research team in Germany has achieved a stunning theoretical breakthrough that could reshape one of physics' oldest foundations, after demonstrating that the Carnot principle no longer holds true for objects on the atomic scale.

Their findings, made by Eric Lutz, PhD, a physics professor, and Milton Aguilar, PhD, a postdoctoral researcher at the University of Stuttgart, show that quantum systems can exceed the efficiency limit defined by the Carnot principle.

The law, which was developed by French physicist Nicolas Léonard Sadi Carnot in 1824, is a central law of thermodynamics that has remained unchallenged for two centuries.

It states that no heat engine operating between two given thermal (heat) reservoirs can have an efficiency greater than that of a reversible heat engine operating between the same reservoirs.
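As a quick illustration (not from the article; the function name and example temperatures here are purely illustrative), the Carnot limit for a pair of reservoir temperatures follows directly from the ratio of cold to hot temperature:

```python
# Carnot efficiency: the maximum fraction of heat convertible to work
# by any engine operating between a hot and a cold reservoir.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Temperatures in kelvin; requires t_hot_k > t_cold_k > 0."""
    if not (t_hot_k > t_cold_k > 0):
        raise ValueError("require t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# A steam turbine running between 800 K and 300 K, for example:
print(carnot_efficiency(800, 300))  # 0.625
```

The wider the gap between hot and cold, the closer the limit gets to 1, which is the classical result the Stuttgart work generalizes for correlated quantum systems.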

"Our results provide a unified formalism to determine the efficiency of correlated microscopic quantum machines," the two physicists stated.

According to the researchers, Carnot determined the maximum efficiency of heat engines. He developed his principle, the second law of thermodynamics, for large, macroscopic objects, such as steam turbines.

"However, we have now been able to prove that the Carnot principle must be extended to describe objects on the atomic scale – for example, strongly correlated molecular motors," the researchers stated.

While Carnot showed that the greater the difference between hot and cold, the higher the maximum possible efficiency of a heat engine, his principle neglects the influence of so-called quantum correlations.

Contrary to previous understanding, the two researchers discovered that once you enter the quantum realm, where particles become correlated and interact in ways that defy classical physics, the Carnot efficiency limit begins to crumble.

"These are special bonds that form between particles on a very small scale," they said. "For the first time, we have derived generalized laws of thermodynamics that fully account for these correlations."

Their results indicate that thermal machines functioning at the atomic scale are capable of converting not only heat but also correlations into usable work. What's more, these systems can generate more output, allowing the efficiency of a quantum engine to exceed the conventional Carnot limit.

Journal Reference: https://www.science.org/doi/10.1126/sciadv.adw8462


Original Submission

posted by janrinok on Friday October 31, @08:57PM   Printer-friendly

Tor Browser 15.0 is also the last major release of the anonymous web browser to support 32-bit Linux systems and older Android versions.

Tor Browser 15.0 has been released today by the Tor Project as the latest stable version of this open-source, cross-platform, and free web browser designed to protect users against tracking, surveillance, and censorship using the Tor anonymity network.

Based on the Mozilla Firefox 140 ESR (Extended Support Release) series, Tor Browser 15.0 introduces many upstream features that have been implemented in the past year, including support for vertical tabs, support for tab groups, and the new unified search button that lets users easily switch between search engines, search bookmarks or tabs, and access quick actions.

"Note that Tor Browser tabs are still private tabs, and will clear when you close the browser. This enforces a kind of natural tidiness in Tor Browser since each new session starts fresh – however for privacy-conscious power users, project managers, researchers, or anyone else who accumulates tabs frighteningly quickly, we hope these organizational improvements will give you a much needed productivity boost," said the devs.

For Android users, Tor Browser 15.0 introduces a screen lock as an extra layer of security for your browsing sessions and support for clearing your browsing session when Tor Browser is closed (just like on the desktop). Other than that, this release moves the blocking of the WebAssembly (a.k.a. Wasm) technology to NoScript, which is bundled with Tor Browser for managing JavaScript and other security features.

Tor Browser 15.0 is also the last major release of the anonymous web browser to support 32-bit Linux systems and older Android versions like Android 5.0, 6.0, and 7.0. Starting with Tor Browser 16.0, which should arrive in Q2 2026, 32-bit Linux systems will no longer be supported, nor Android devices running an OS prior to Android 8.0.

Check out the release announcement page for more details about the changes included in this new major Tor Browser update, which is available for download right now from the official website.

Tor .onion announcement (requires Tor):
http://pzhdfe7jraknpj2qgu5cz2u3i4deuyfwmonvzu5i3nyw4t4bmg7o5pad.onion/new-release-tor-browser-150/


Original Submission

posted by hubie on Friday October 31, @04:13PM   Printer-friendly

https://www.phoronix.com/news/Red-Hat-Distribute-CUDA-RHEL

Following Canonical announcing plans to better support NVIDIA CUDA on Ubuntu Linux and make it easier to install as well as SUSE better supporting CUDA along similar lines, Red Hat today affirmed their plans to do the same. Red Hat will be making it easier to use the NVIDIA CUDA stack across RHEL, Red Hat AI, and OpenShift products.

Red Hat will be distributing the NVIDIA CUDA Toolkit directly within their platforms to streamline the developer experience, provide operational consistency to customers, and make it easier to leverage Red Hat platforms with the latest NVIDIA hardware and software innovations.

https://distrowatch.com/dwres.php?resource=showheadline&story=20084

Red Hat has announced a partnership with NVIDIA to bring GPU computing tools to Red Hat platforms, making it easier for developers to access NVIDIA video card features. A blog post on the Red Hat website states:

"Engineers and data scientists shouldn't have to spend their time managing dependencies, hunting for compatible drivers, or figuring out how to get their workloads running reliably on different systems. Our new agreement with NVIDIA addresses this head-on. By distributing the NVIDIA CUDA Toolkit directly within our platforms, we're removing a major point of friction for developers and IT teams. You will be able to get the essential tools for GPU-accelerated computing from a single, trusted source."

The NVIDIA tools will be available on Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat AI.

See also:
    • https://www.redhat.com/en/blog/red-hat-distribute-nvidia-cuda-across-red-hat-ai-rhel-and-openshift
    • https://developer.nvidia.com/cuda-toolkit


Original Submission

posted by jelizondo on Friday October 31, @11:25AM   Printer-friendly
from the fresh-fish-not-frozen dept.

The structures may help protect eggs from hungry predators:

Antarctic fish have built a sprawling neighborhood of neatly arranged nests in the Weddell Sea — a surprising display of organization in some of the coldest waters on Earth. The discovery suggests that these fish strategically group their nests to better protect their eggs from predators, adding to evidence that the Weddell Sea harbors complex, vulnerable ecosystems worth preserving, researchers report October 29 in Frontiers.

"A lot of Antarctic ecosystems are under pressure from different countries to be released for mining, fishing and basically exploitation of the environment," says Thomas Desvignes, a fish biologist at the University of Alabama at Birmingham who was not involved in the study. "It's one more reason why we should protect the Weddell Sea."

While exploring a recently exposed swath of open water near the Larsen Ice Shelf in 2019, colleagues of marine biologist Russ Connelly dropped an underwater robot into the ocean. The machine hovered along the bottom more than 350 meters deep and filmed the seafloor below.

After the expedition, Connelly combed through the footage to see if the robot captured anything interesting. He saw bowl-shaped dimples pressed into the soft sediment. As he looked closer, Connelly noticed they formed perfect ovals and curves.

"We weren't actually sure what the videos were showing us at the time," says Connelly, of the University of Essex in Colchester, England. "We thought maybe it was a Weddell seal snout that was going down and bonking down into the seabed, or that it was pockmarks from stones dropping from the ice and making craters."

But the marks were too uniform. Based on the creatures living nearby and the researchers' knowledge of other Antarctic fish, the team deduced that the odd divots were nests of yellowfin notothenioid fish. The footage revealed more than 1,000 of these nests arranged in five repeating patterns: clusters, crescents, U-shapes, lines and ovals. Some nests also stood alone.

Yellowfin rockcod (Lindbergichthys nudifrons) are not icefish, a subset of Antarctic fish with peculiar adaptations to cold water such as pumping antifreeze compounds in their colorless blood. But they are just as well adjusted to below-freezing temperatures.

Most nests were grouped in the cluster shape, consisting of several nests bunched closely together. Connelly suspects that smaller fish may prefer such group arrangements for better protection against predators, while larger fish capable of fending for themselves might occupy the bigger, singular nests.

But this footage offers only a snapshot. Other factors may explain the nests' odd ordering, Connelly says. For example, instead of many couples grouping together for protection, a single mating pair could have also made the clustered nests as decoys. More trips to the region are needed to confirm how many fish are using the nests, Connelly says.

"In general, we need to explore more of the oceans, because these things keep cropping up again and again, and we're so surprised at every single time that we see life exists at these depths," Connelly says. "We need to see what's out there before species that we didn't even know existed have been lost."

Journal Reference: https://doi.org/10.3389/fmars.2025.1648168


Original Submission

posted by jelizondo on Friday October 31, @06:36AM   Printer-friendly

Data centers are water and power hogs, but does putting them in the ocean help?:

Data centers like those used to train and run AI models have this irksome tendency to drain the local water supply for the purpose of cooling through heat exchange, sometimes worsening water scarcity in an area. They also suck down so much energy that they drive up demand, and it appears we may be paying for it with higher bills.

Maybe the solution is right under our noses: submerge the data centers in the ocean, and power them with wind.

In Shanghai's Lin-gang Special Area, a new project that cost the equivalent of $226 million has proven that such a facility can at least get through the early phase of construction. In theory, this will be a sort of free lunch for compute once it's completed: water ceases to be an issue, as does the data center's carbon footprint. But is it actually a good idea?

Reports about the project have been published in a few places, including Wired. The facility, Wired's story notes, currently has "a total power capacity of 24 megawatts." That's like a normal, pre-AI data center, according to a report by McKinsey, which notes that data centers "that averaged tens of megawatts before 2020 will be expected to accommodate at the gigawatt scale" in the coming years.

That story also notes that over 95 percent of the center's energy "comes from offshore wind turbines," so it sounds as if the energy comes from wind that is then wired in, rather than having a wind power generating station installed right there at the data center.

But as Wired also pointed out in a story last year about a smaller, but similar, project in the US, this might not be a great idea. In part, that's because while it may sound green, the heat exchange from all those GPUs would at least to some degree heat up the ocean—one of the main things climate hawks are trying to avoid.

The founders of a startup called NetworkOcean said they would "dunk a small capsule filled with GPU servers into San Francisco Bay," but did so "without having sought, much less received, any permits from key regulators," Wired's Paresh Dave and Reece Rogers note. Dave and Rogers sought out commentary from multiple scientists, learning that even minor temperature changes in the bay "could trigger toxic algae blooms and harm wildlife." And a data center doesn't have to be huge to cause problems. "Any increase" in temperature is a potential problem, as it could "incubate harmful algae and attract invasive species."

A 2022 paper on underwater data centers further speculated that unpredictable events like ocean heatwaves near such data centers would result in animals essentially suffocating in de-oxygenated water.

In the Wired story on NetworkOcean, fear of regulatory pushback eventually appears to drive the company to consider other jurisdictions beyond the U.S., although it claims it still wants to operate in San Francisco Bay. NetworkOcean might be a great company, and I'm not in any way picking on it. I'm bringing it up as a reminder of a truism: Here in the U.S., big, disruptive tech ideas sometimes meet with regulatory pushback—and sometimes that's because more information about what could go wrong really is needed.

By contrast, the Chinese project appears to have obeyed local regulators, according to Scientific American's piece on the underwater data center. The project received an assessment from the China Academy of Information and Communications Technology, which is under the aegis of a Chinese government ministry.

But China has big-time ambitions around driving down the energy use of its data centers. According to one report, the power usage effectiveness (PUE) for data centers globally has fallen to about 1.56 on average and essentially plateaued. A press release on a Chinese government website last year stated that by the end of 2025, China will drive down its own average PUE to 1.5.
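For readers unfamiliar with the metric: PUE is simply total facility energy divided by the energy that actually reaches the IT equipment, so 1.0 is the (unreachable) ideal. This minimal sketch, with an illustrative function name and figures chosen only to match the quoted average, shows the relationship:

```python
# Power Usage Effectiveness: total facility energy over IT equipment
# energy. Everything above 1.0 is overhead (cooling, power delivery).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    if it_equipment_kwh <= 0 or total_facility_kwh < it_equipment_kwh:
        raise ValueError("require total >= IT load > 0")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1560 kWh to deliver 1000 kWh of IT load sits at
# roughly today's reported global average:
print(f"{pue(1560, 1000):.2f}")  # 1.56
```

Underwater deployment targets the cooling share of that overhead, which is why PUE is the headline metric for these projects.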

It would be an understatement to say China and the U.S. are two contrasting business and regulatory environments. But the ocean is a big interconnected resource that we all share. Lots of data centers are about to be built. Here's hoping that submerging them to meet ambitious environmental goals is something that happens, if it turns out to be a good idea.


Original Submission

posted by jelizondo on Friday October 31, @01:42AM   Printer-friendly

Electric vehicle demand is set to crash this month after tax credits vanish and buyers back away:

  • J.D. Power predicts a 60% EV sales drop in October from September levels.
  • Decline follows expiration of federal tax credits that boosted affordability.
  • EVs will make up 5.2% of new sales, down from September's record 12.9%.

[...] The research firm, working with GlobalData, predicts 54,673 EV retail sales for October. If that figure holds, it represents a 43.1 percent decline compared with October 2024, when 96,085 electric vehicles were sold. That would also mean a slide in market share from 8.5 percent to just 5.2 percent.
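The quoted year-over-year decline is consistent with the sales figures; a quick check using the article's numbers (variable names are illustrative):

```python
# Reproduce the 43.1% year-over-year decline from the quoted figures.
oct_2024_sales = 96_085      # EV retail sales, October 2024
oct_2025_forecast = 54_673   # J.D. Power forecast for October 2025

decline = (oct_2024_sales - oct_2025_forecast) / oct_2024_sales
print(f"{decline:.1%}")  # 43.1%
```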

[...] "The automotive industry is experiencing a significant recalibration in the electric vehicle segment," said J.D. Power data analyst Tyson Jominy. "The recent EV market correction underscores a critical lesson: Consumers prefer having access to a range of powertrain options."

Perhaps the wildest bit of this entire thing is that it could've been even worse for EVs. Many brands, including Hyundai, GM, and Tesla, rolled out different methods to ease the pain of losing the federal tax credit.

Previously:


Original Submission

posted by janrinok on Thursday October 30, @09:00PM   Printer-friendly
from the AI-overlords dept.

https://arstechnica.com/ai/2025/10/ai-powered-search-engines-rely-on-less-popular-sources-researchers-find/

Since last year's disastrous rollout of Google's AI Overviews, the world at large has been aware of how AI-powered search results can differ wildly from the traditional list of links search engines have generated for decades. Now, new research helps quantify that difference, showing that AI search engines tend to cite less popular websites and ones that wouldn't even appear in the Top 100 links listed in an "organic" Google search.

In the pre-print paper "Characterizing Web Search in The Age of Generative AI," researchers from Ruhr University in Bochum, Germany, and the Max Planck Institute for Software Systems compared traditional link results from Google's search engine to its AI Overviews and Gemini-2.5-Flash. The researchers also looked at GPT-4o's web search mode and the separate "GPT-4o with Search Tool," which resorts to searching the web only when the LLM decides it needs information found outside its own pre-trained data.
[...]
Overall, the sources cited in results from the generative search tools tended to be from sites that were less popular than those that appeared in the top 10 of a traditional search, as measured by the domain-tracker Tranco. Sources cited by the AI engines were more likely than those linked in traditional Google searches to fall outside both the top 1,000 and top 1,000,000 domains tracked by Tranco. Gemini search in particular showed a tendency to cite unpopular domains, with the median source falling outside Tranco's top 1,000 across all results.
[...]
For search terms pulled from Google's list of Trending Queries for September 15, the researchers found GPT-4o with Search Tool often responded with messages along the lines of "could you please provide more information" rather than actually searching the web for up-to-date information.

While the researchers didn't determine whether AI-based search engines were overall "better" or "worse" than traditional search engine links, they did urge future research on "new evaluation methods that jointly consider source diversity, conceptual coverage, and synthesis behavior in generative search systems."


Original Submission

posted by janrinok on Thursday October 30, @04:12PM   Printer-friendly

https://9to5linux.com/fedora-linux-43-officially-released-now-available-for-download

This release is powered by the latest and greatest Linux 6.17 kernel series and features both GNOME 49 and KDE Plasma 6.4 desktop environments.

The Fedora Project officially released Fedora Linux 43 today as the latest stable version of this Red Hat-sponsored distribution, shipping with some of the latest and greatest GNU/Linux technologies.

Highlights of Fedora Linux 43 include the latest and greatest Linux 6.17 kernel series, the latest and greatest GNOME 49 desktop environment series for the Fedora Workstation edition, which is now Wayland-only, as well as the KDE Plasma 6.4.5 desktop environment on the Fedora KDE Plasma Desktop edition.

Fedora Linux 43 also brings the Anaconda WebUI installer by default to more Fedora Spins, support for the COLRv1 format in the Noto Color Emoji fonts, support for the Hare programming language, a default Monospace fallback font, and DNF 5 by default on the Anaconda installer for RPM package installation.

Among other changes, Fedora 43 introduces a 2GB boot partition, automated onboarding to Packit release automation for new packages, automatic updates by default in Fedora Kinoite, zstd-compressed initrd by default, package-specific RPM macros for build flags, and a rewrite of Greenboot in Rust.

This new Fedora Linux release also enforces the use of GPT partition tables for all UEFI-based Fedora installations for 64-bit systems, which removes support for installing Fedora in UEFI mode on MBR-partitioned disks. AArch64 and RISC-V systems remain unaffected.

Under the hood, Fedora 43 features an up-to-date toolchain and components consisting of GCC 15.2, GNU Binutils 2.45, GNU C Library 2.42, GDB 17.1, LLVM 21, Golang 1.25, Perl 5.42, RPM 6.0, Python 3.14, PostgreSQL 18, Ruby on Rails 8.0, Dovecot 2.4, MySQL 8.4, Tomcat 10.1, Apache Maven 4, Haskell GHC 9.8, and Idris 2.

https://fedoraproject.org/
https://docs.fedoraproject.org/en-US/quick-docs/upgrading-fedora-offline/


Original Submission

posted by janrinok on Thursday October 30, @02:15PM   Printer-friendly

"The definition of insanity is doing the same thing over and over again and expecting different results." (misattributed to Einstein)

"We cannot solve our problems with the same thinking we used when we created them." (Einstein)

There is a lot in this Meta but it is necessary to have certain aspects of the site's operation explained in detail so that subsequent elements make sense and are understandable by everyone. The initial lessons from the Trial of Flagging by Journal Owners appear later in this Meta.

Permanent Banning

Banning someone from the site is a serious decision, which is why it is rarely considered. It has always been recognised that the act of banning someone is never going to be easy to enforce. Some may wonder why banning is even considered at all, and the explanation is relatively simple. Some acts – in this case doxxing – can have serious repercussions, and kolie has described elsewhere those potentially applicable under US law, and in particular under the law of the state of Oregon.

The rules exist to ensure that we maintain a viable site where people are free to discuss the topics presented in an adversarial yet friendly atmosphere. If the site's rules are not enforced then they are meaningless and, over time, they will be ignored. We are very tolerant of minor infractions but at some point it is necessary to remind somebody of the reason for the rule and that is often all that is needed. The next level usually involves moderation, possibly with an Admin-To-User message warning that the user might receive a temporary ban if (s)he continues. If temporary bans do not work then, in extreme cases, it is necessary to employ a permanent ban. Permanent bans require the approval of the Board.

Admin-to-User messages can only be used to communicate from staff to account holders; the only way to communicate with Anonymous Cowards is directly via a comment.

Doxxing

During the last few days some have challenged the definition of doxxing. In particular, they have argued that the addresses given are obviously fake and therefore are not doxxing. The rules are quite clear. It isn't possible for staff to recognise every address as being genuine or fake and so it is always assumed to be genuine and treated as such.

Kolie's amusing and robust counter can be seen here.

Sock-Puppets and Multiple Accounts

Each person is allowed to have one account, which gives him the right to vote on the site. Accounts are owned by a community member; they are not transferable, nor can they be shared. Additional accounts can be created providing that they are notified to, and agreed by, the Administration and are required to fulfill a specific function, e.g. upstart and Arthur T Knackerbracket (story submission bots), Acfriendly (a journal to facilitate AC participation in front page stories), etc. These additional accounts do not have voting rights, nor should they receive moderation points.

Fake accounts are accounts created, often by persons not intending to participate in discussions, usually for advertising purposes; they might also be created by commercial organisations. As most people never even see them, they cause little problem other than taking up an account identity. However, occasionally they engage in activities that are not aligned with the site's purpose, and they are then disabled. Accounts created entirely in, or sometimes using, foreign languages are disabled as a matter of routine. Some of these have been associated with material that is illegal under US Federal or State laws.

Sock puppets are accounts usually created with the intention of giving the user an unfair advantage with regard to voting or moderation abuse. They are sometimes intended to give the user an alternative account to use when their primary account would attract a ban, or when their primary account has already been given a temporary ban. Very often the sock puppet account is employed to positively moderate inflammatory or abusive posts made by Anonymous Cowards, thus preventing the community from controlling such material by selecting a reasonable viewing level, while leaving the sock puppet account apparently innocent of any wrongdoing.

Historically, some users have created multiple sock puppets, which were used increasingly during the "Sock-Puppet War" between 2018 and 2021. Each user employed the sock puppets in an attempt to prevent the other from expressing his or her personal view, with the further hope that their opponent might be banned from the site. One person has created hundreds of sock puppets. The names of many of these sock puppets contain doxxing material or express unacceptable statements.

I have been watching specific sock puppets for some time to try to understand their purpose. As recently as earlier this week I disabled six such accounts. Their creator is known. There are others that I am still analysing.

Real-World Politics

In the USA in particular, but elsewhere too, the political situation has become very polarised. There is much hatred between opposing political factions which has sometimes resulted in violence or other physical and verbal abuse. This site is NOT the place to continue to express your dislike of other community members who do not share your own political views. Demanding that someone be banned or continually punished for having a particular political view is abuse and it will be treated as such by the staff.

Everyone in our community has the same rights to express their opinion, and any attempt to prevent one of them from doing so is unacceptable. If you find yourself having to name a person in a comment, it is often a sign that you intend that comment as a personal attack. If a topic or journal is intended to discuss a political viewpoint it is entirely correct to do so, but that does not include personal attacks against other community members.

Journals

Journals are for account holders to discuss any topic that is legal under US and State laws, but which would not be considered for front page use. The topics do not have to be written to suit everybody in the community. Nor do they have to meet with the approval of individual community members, who have no right to demand that the journal owner stop writing such journals, nor to take other action intended to disrupt the subsequent journal discussion.

...And if you have managed to get this far I hope that what I have written will now make more sense than it might have done before and you will now understand its relevance:

Flagging Trial

The removal of non-account Anonymous Cowards from the main pages has made discussion far more acceptable to many people. Unfortunately, it has not had the same effect in the journals which a minority of ACs have been using to disrupt the discussions and abuse the journal owners and other community members. As a result, fewer people are using the journals to introduce their own discussions, and fewer people are participating in journal discussions.

Several journal owners requested that we investigate ways of controlling the abuse. It was apparent that such control would be a significant task for staff with the current software and data. The site has always had a means of removing illegal or unacceptable content from display. From the very first days of the site there has been a facility to delete comments from the database. However, that method involved hard deletes (permanent deletions from the database), which left the child comments also inaccessible. Soft deleting (flagging) was adopted in 2024 as a far better solution. Flagging differs from the previous system, and from the community's perception of it, because:

  • It is immediately apparent that flagging has taken place. Previously comments just 'disappeared' and were irretrievable.
  • The community needs to know that the system is not being abused, which could be provided by having increased visibility of the processes involved.
  • Such visibility raises several issues – why has a comment been flagged? Who would do the flagging? And how would it be managed?
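The difference between the old hard deletes and the current flagging can be sketched as follows. This is a minimal, hypothetical illustration of the two approaches, not Rehash's actual code or schema:

```python
# Illustrative sketch (hypothetical data model): hard-deleting a comment
# removes the row entirely, making its child comments unreachable, while
# soft-deleting (flagging) hides the body but keeps the thread intact.

comments = {
    1: {"parent": None, "body": "top-level comment", "flagged": False},
    2: {"parent": 1, "body": "a reply", "flagged": False},
}

def hard_delete(cid):
    # Permanent removal: replies whose parent was this comment
    # lose their place in the thread and become inaccessible.
    del comments[cid]

def flag(cid, reason):
    # Soft delete: the comment row survives, so replies still have
    # a valid parent, and the reason is recorded for later review.
    comments[cid]["flagged"] = True
    comments[cid]["flag_reason"] = reason

def render(cid):
    # Flagged comments display a placeholder instead of their body.
    c = comments[cid]
    return "[comment flagged]" if c["flagged"] else c["body"]

flag(1, "doxxing")
print(render(1))  # placeholder shown in place of the flagged body
print(render(2))  # the child comment remains readable
```

The key property is visible in the last two lines: after flagging, the parent comment is replaced by a placeholder rather than disappearing, and its replies remain attached and readable.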

After discussions, some journal owners agreed to assist in a trial in which the owners themselves would be able to exercise some control over abuse and/or disruption in their journals. There were three journal owners initially, and others participated as their journals appeared.

There is one permanently banned account – aristarchus. Even before his ban we had, over several years, tried various methods including moderation, arranging for him to rejoin the community with some restrictions, and deletion of his comments. This is not new and goes back to the very early days of the site. In 2014 he was already abusing some of the same people that he abuses today. His complaints about blocked IP addresses and censorship go back to at least 2016. This alone indicates that the blocked IP addresses are unrelated to any other function and are automatic within the Rehash software. The site rules state that technical means can be employed to remove such comments, and that now implies flagging.

Identifying his posts was initially marred by occasional mis-identification. Where errors were brought to my attention they were corrected and apologies made – publicly and privately. Since the start of 2025 the data available to us has increased in nature, quantity and accuracy; it is far more reliable today than it was. Nevertheless, there is no automatic flagging: a person remains the final decision maker, based on the originator and the contents of the comment in its entirety.

Findings and Recommended Actions

  • Journal owners are reluctant to use the flagging mechanism for perfectly understandable reasons. They would prefer an inclusive, community-based discussion. They do not want the abuse and disruption and flagging provides them with the means to control such occurrences should they wish to use it. Action: Consider starting all journals set to Logged-in Users only as default. Journal owners should still have the opportunity to open the discussion wider if they wish.
  • There is no reason to remove the facility from journals for those journal owners who might now, or in the future, wish to use flagging to control abuse in their journal. Action: Leave the facility in situ. We may also need it for future trials.
  • Some readers still find the reduced banner impairs their ability to read a discussion. Action: Investigate whether it is possible for the banner to be reduced still further in size – perhaps just to the comment number in a smaller font? It is recognised that the fragility of parts of Rehash might make this very difficult to achieve.
  • The management of flagging will require additional data to be recorded with each flagged item. For example, if a comment contains doxxing it should record the comment and also set a flag to prevent it ever being released. There may be additional requirements as the existing software is enhanced. Action: Keep as is for the moment but be aware that changes will be required.
  • The management of flagging will either require additional staff for it to be maintainable over a period of time, or significant additional software to assist in the management task. Action: If no additional manpower is available the next best option is to make the site Logged-in Users only. Reverting to a previous state (i.e. relying on basic moderation) will only result in the same outcome as it did previously.
  • User requests for a flagging to be reviewed (not simply viewed) must come from the person who made the original comment; otherwise the system can easily be defeated by numerous unjustified requests for reviews from miscellaneous people. Action: How to do this for ACs is not yet identified.
  • The decision to flag a comment can for the moment only be made by a person. Despite the process being far more reliable now than it initially was, it is still below the level that would make an automatic system viable.
  • A clear policy that is acceptable to the community must be provided to state clearly when and how flagging is permissible. Action: A policy must be written with community consultation to fulfill this requirement.

Your comments are invited. ACs will have the opportunity to make comments in a journal. While AC views and opinions are welcome, any abuse in that journal will be treated appropriately.

posted by janrinok on Thursday October 30, @11:29AM   Printer-friendly

Westinghouse is claiming a nuclear deal would see $80B of new reactors:

On Tuesday, Westinghouse announced that it had reached an agreement with the Trump administration that would purportedly see $80 billion of new nuclear reactors built in the US. And the government indicated that it had finalized plans for a collaboration of GE Vernova and Hitachi to build additional reactors. Unfortunately, there are roughly zero details about the deal at the moment.

The agreements were apparently negotiated during President Trump's trip to Japan. An announcement of those agreements indicates that "Japan and various Japanese companies" would invest "up to" $332 billion for energy infrastructure. This specifically mentioned Westinghouse, GE Vernova, and Hitachi. This promises the construction of both large AP1000 reactors and small modular nuclear reactors. The announcement then goes on to indicate that many other companies would also get a slice of that "up to $332 billion," many for basic grid infrastructure.

So the total amount devoted to nuclear reactors is not specified in the announcement or anywhere else. As of the publication time, the Department of Energy has no information on the deal; Hitachi, GE Vernova, and the Hitachi/GE Vernova collaboration websites are also silent on it.

Meanwhile, Westinghouse claims that it will be involved in the construction of "at least $80 billion of new reactors," a mix of AP1000 and AP300 (each named for the MW of capacity of the reactor/generator combination). The company claims that doing so will "reinvigorate the nuclear power industrial base."

That's going to take some work. As of now, there are zero nuclear reactors under construction, and the last two that were completed were enough to bankrupt Westinghouse. (It's now co-owned by Cameco, a nuclear fuel supplier, and Brookfield Asset Management.) The Financial Times reports that one of Westinghouse's owners thinks that the $80 billion should be enough for eight reactors, but would only finance five if they cost as much as the AP1000s previously built in the US. The FT also reports that the US government would share in any profits and a stake in the company if the deal goes forward.

One of the big challenges these deals will face, however, is achieving profitability. According to the Department of Energy's latest evaluation, nuclear power is the second-most expensive source of electricity in the US, behind offshore wind, and the cost of offshore wind has fallen in recent years. Finances aren't the only risk to this deal. None of the designs for small modular reactors developed by any of these companies has currently been approved by the Nuclear Regulatory Commission.


Original Submission

posted by janrinok on Thursday October 30, @06:45AM   Printer-friendly
from the Chucky-or-Chuck-E.-Cheese dept.

https://arstechnica.com/tech-policy/2025/10/senators-move-to-keep-big-techs-creepy-companion-bots-away-from-kids/

The US will weigh a ban on children's access to companion bots, as two senators announced bipartisan legislation Tuesday that would criminalize making chatbots that encourage harms like suicidal ideation or engage kids in sexually explicit chats.

At a press conference, Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the GUARD Act, joined by grieving parents holding up photos of their children lost after engaging with chatbots.
[...]
Failing to block a minor from engaging with chatbots that are stoking harmful conduct—such as exposing minors to sexual chats or encouraging "suicide, non-suicidal self-injury, or imminent physical or sexual violence"—could trigger fines of up to $100,000, Time reported. (That's perhaps small to a Big Tech firm, but notably higher than the $100 maximum payout that one mourning parent suggested she was offered.)
[...]
It covers any AI chatbot that "provides adaptive, human-like responses to user inputs" and "is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication," Time reported.
[...]
"In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal told NBC News. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."

Hawley agreed with Garcia that the AI industry must align with America's morals and values, telling NBC News that "AI chatbots pose a serious threat to our kids.

"More than 70 percent of American children are now using these AI products," Hawley said.
[...]
The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the law as taking a "heavy-handed approach" to child safety. The group's vice president of US policy and government relations, K.J. Bagchi, said that "we all want to keep kids safe, but the answer is balance, not bans.

"It's better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise," Bagchi said.

However, several organizations dedicated to child safety online, including the Young People's Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered senators' announcement Tuesday. The GUARD Act, these groups told Time, is just "one part of a national movement to protect children and teens from the dangers of companion chatbots."
[...]
During Tuesday's press conference, Blumenthal noted that the chatbot ban bill was just one initiative of many that he and Hawley intend to raise to heighten scrutiny on AI firms.


Original Submission