Mess is best: disordered structure of battery-like devices improves performance:
Researchers led by the University of Cambridge used experimental and computer modelling techniques to study the porous carbon electrodes used in supercapacitors. They found that electrodes with a more disordered chemical structure stored far more energy than electrodes with a highly ordered structure.
Supercapacitors are a key technology for the energy transition and could be useful for certain forms of public transport, as well as for managing intermittent solar and wind energy generation, but their adoption has been limited by poor energy density.
The researchers say their results, reported in the journal Science, represent a breakthrough in the field and could reinvigorate the development of this important net-zero technology.
Like batteries, supercapacitors store energy, but supercapacitors can charge in seconds or a few minutes, while batteries take much longer. Supercapacitors are far more durable than batteries, and can last for millions of charge cycles. However, the low energy density of supercapacitors makes them unsuitable for delivering long-term energy storage or continuous power.
"Supercapacitors are a complementary technology to batteries, rather than a replacement," said Dr Alex Forse from Cambridge's Yusuf Hamied Department of Chemistry, who led the research. "Their durability and extremely fast charging capabilities make them useful for a wide range of applications."
A bus, train or metro powered by supercapacitors, for example, could fully charge in the time it takes to let passengers off and on, providing it with enough power to reach the next stop. This would eliminate the need to install any charging infrastructure along the line. However, before supercapacitors are put into widespread use, their energy storage capacity needs to be improved.
While a battery uses chemical reactions to store and release charge, a supercapacitor relies on the movement of charged molecules between porous carbon electrodes, which have a highly disordered structure. "Think of a sheet of graphene, which has a highly ordered chemical structure," said Forse. "If you scrunch up that sheet of graphene into a ball, you have a disordered mess, which is sort of like the electrode in a supercapacitor."
Because of the inherent messiness of the electrodes, it's been difficult for scientists to study them and determine which parameters are the most important when attempting to improve performance. This lack of clear consensus has led to the field getting a bit stuck.
Many scientists have thought that the size of the tiny holes, or nanopores, in the carbon electrodes was the key to improved energy capacity. However, the Cambridge team analysed a series of commercially available nanoporous carbon electrodes and found there was no link between pore size and storage capacity.
Forse and his colleagues took a new approach and used nuclear magnetic resonance (NMR) spectroscopy – a sort of 'MRI' for batteries – to study the electrode materials. They found that the messiness of the materials – long thought to be a hindrance – was the key to their success.
"Using NMR spectroscopy, we found that energy storage capacity correlates with how disordered the materials are – the more disordered materials can store more energy," said first author Xinyu Liu, a PhD candidate co-supervised by Forse and Professor Dame Clare Grey. "Messiness is hard to measure – it's only possible thanks to new NMR and simulation techniques, which is why messiness is a characteristic that's been overlooked in this field."
When analysing the electrode materials with NMR spectroscopy, a spectrum with different peaks and valleys is produced. The position of the peak indicates how ordered or disordered the carbon is. "It wasn't our plan to look for this, it was a big surprise," said Forse. "When we plotted the position of the peak against energy capacity, a striking correlation came through – the most disordered materials had a capacity almost double that of the most ordered materials."
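For readers curious how such a relationship might be quantified, here is a minimal sketch, using made-up NMR peak positions and capacitance values rather than anything from the paper; it simply computes a correlation coefficient between the two series, the kind of plot-and-correlate step Forse describes.

```python
# Illustrative sketch only: the peak positions and capacitances below are
# invented placeholders, not data from the Science paper.
import numpy as np

# Hypothetical NMR peak positions (ppm) for a series of carbon electrodes;
# the direction of the trend is assumed purely for illustration.
peak_position_ppm = np.array([7.0, 6.2, 5.5, 4.8, 4.1, 3.5])

# Hypothetical gravimetric capacitances (F/g) for the same electrodes.
capacitance_f_per_g = np.array([95.0, 110.0, 128.0, 140.0, 155.0, 170.0])

# Pearson correlation between peak position and capacitance.
r = np.corrcoef(peak_position_ppm, capacitance_f_per_g)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strong (here negative) correlation
```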
So why is mess good? Forse says that's the next thing the team is working on. More disordered carbons store ions more efficiently in their nanopores, and the team hope to use these results to design better supercapacitors. The messiness of the materials is determined at the point they are synthesised.
"We want to look at new ways of making these materials, to see how far messiness can take you in terms of improving energy storage," said Forse. "It could be a turning point for a field that's been stuck for a little while. Clare and I started working on this topic over a decade ago, and it's exciting to see a lot of our previous fundamental work now having a clear application."
Reference:
Xinyu Liu et al. 'Structural disorder determines capacitance in nanoporous carbons.' Science (2024). DOI: 10.1126/science.adn6242
Arthur T Knackerbracket has processed the following story:
Manufacturing advanced computer components of the future may take place in space rather than on Earth. Space Forge, a UK-based startup, had its ForgeStar-1 satellite launched into orbit via SpaceX, paving the way for the satellite to ignite its forge and begin producing semiconductors in space.
ForgeStar-1 is officially the UK's first in-space manufacturing satellite, enabling the company to build semiconductors in space. The satellite was designed and built entirely in Cardiff, Wales, and launched as part of SpaceX's Transporter-14 rideshare mission. It had been awaiting approvals in the United States since April before finally entering orbit today.
ForgeStar-1 has yet to ignite its forge, with the timetable for when this will happen not yet public. "We've built and launched Britain's first manufacturing satellite, and it's alive in orbit. That's a massive technical achievement," shared Space Forge CEO Joshua Western. "Now, we take the next step: proving that we can create the right environment for manufacturing in space. This is the start of a new era for materials science and industrial capability."
In-space manufacturing is a relatively new field that seeks to utilize the unique characteristics of outer space and/or low-Earth orbit to achieve fabrication methods not possible on Earth. Space Forge's primary goals are to produce semiconductors for data center, quantum, and military use cases, using "space-derived crystal seeds" to initiate semiconductor growth, utilizing unlimited vacuum and subzero temperatures for manufacturing, and then returning the chips to Earth for packaging.
The ForgeStar-1 satellite will not bring the cargo it manufactures back to Earth at the completion of its mission. Acting more as a proof of concept and prototype for a litany of technologies engineered by Space Forge, the satellite will be tasked with demonstrating the key technologies needed for in-space manufacturing, and will end its mission in a spectacular fireball.
Space Forge plans to test both the best-case and worst-case scenarios for the satellite's recovery. First, it will deploy its proprietary Pridwen heat shield and on-orbit controls to steer the satellite, and then test its failsafe mechanism, which involves disintegrating the craft in orbit.
The company's roadmap from 2022 shows that ForgeStar-1's successor, ForgeStar-2, will be the first craft from the company to develop semiconductors that will be returned safely to Earth. The craft will develop enough chips so that the "value of the material manufactured in space exceeds the cost of placing the satellite into orbit", and will be joined eventually by a full stable of Space Forge satellites. The company eventually hopes to build 10-12 satellites per year, reusing craft after the completion of their one- to six-month fabrication missions. Eventually, the company aims to surpass 100 satellite launches per year.
Arthur T Knackerbracket has processed the following story:
Meta announced just a couple weeks ago that it would finally start testing a dedicated inbox for direct messages on Threads. Now, it's making the feature official and rolling out DMs to everyone. As with the earlier test, the update will add a messaging tab to the Threads app where users can access the inbox and exchange DMs with mutuals.
Meta says that initially people will only be able to send messages to users who already follow them or mutual followers from Instagram, though it plans to roll out more customizable inbox controls in a later update. Messaging will also only be available to Threads users over the age of 18. The app also won't support group messaging for now, though it's apparently in the works. Those limitations could be a bit frustrating, as they make Threads DMs more limited than what's available on Instagram, but the change is still a lot more convenient than Meta's previous insistence on relying on the Instagram inbox for Threads.
The company's executives were initially very much opposed to bringing DMs to Threads. Instagram head Adam Mosseri explained his thinking in 2023, noting that "two redundant message threads with each of your friends with the same handles in two different apps" seemed like a less than ideal solution. But that position has made less and less sense as Threads has grown to more than 350 million users. "More than a third of daily Threads users with connections follow mostly different accounts on Threads than on Instagram, showing that Threads is establishing its own unique user base," Meta notes in a blog post.
Two years in, the company is also more explicitly positioning Threads as an alternative to X rather than another offshoot of Instagram. While Mosseri once said that the goal of Threads "isn't to replace Twitter," Meta has since walked back its prohibition on recommending political content and experimented with features to help users find familiar creators from X. The company has also leaned more heavily into real-time conversations and news by making trending topics more prominent in the app and surfacing more links in recommendations. Today's update also adds a "highlighter" feature that will make trends even more visible in users' feeds.
"The green transition is not easy, but it is possible."
Ultimately, to manage this climate change thingy, we need to put back all the carbon dioxide emitted since about the 1960s somewhere in a deep hole.
Now Norway has taken the first serious step towards that goal.
We've mentioned their industrial-scale carbon-capture-and-storage [CCS] project -- dubbed NorthernLights -- before, when it was still in the proof-of-concept phase. Now NorthernLights is fully operational.
The first shipment of carbon dioxide left Heidelberg Materials' plant in Brevik in southern Norway this month by ship, and will be injected into reservoirs under the North Sea in August. The project is set to store 5 million tonnes of carbon dioxide under the sea, at a cost of $3.4bn spread over 10 years. The Norwegian government subsidizes 64% of the costs, while the rest is covered by a consortium of three oil companies (Shell, Equinor, and TotalEnergies).
Proponents of CCS argue that it is the most promising solution for so-called hard-to-abate sectors — such as cement, steel and coal-fired power — to eliminate their emissions. But critics contend that it is a costly process, difficult to scale and dependent on massive subsidies. These are often difficult for most cash-strapped governments to provide, except for the likes of Norway, western Europe's largest petroleum producer and home to the world's largest sovereign wealth fund.
To put this in context, the European Union has set a target of capturing and storing 300 million tonnes of carbon dioxide a year by 2050.
The driver to do so is the increasing cost of carbon permits. These are publicly traded in the EU's Emissions Trading System (EU ETS) under a cap-and-trade system. The cap means that the number of carbon permits allocated to a company each year decreases over time, reaching zero in 2050. The scheme currently covers only the largest emitters — approximately 10,000 companies in the power sector and manufacturing industry, as well as airlines operating between airports located in the European Economic Area — accounting for roughly 40 percent of the EU's greenhouse gas emissions. There are ongoing talks to extend the system to other companies as well.
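To make the cap-and-trade mechanics concrete, here is a minimal sketch of a cap that shrinks linearly to zero by 2050; the baseline allocation and start year are hypothetical, not actual EU ETS figures.

```python
# Hypothetical sketch of a cap shrinking linearly to zero by 2050.
# The baseline allocation and start year are invented for illustration;
# they are not actual EU ETS allocation figures.
def allocation(year: int, start_year: int = 2025, end_year: int = 2050,
               baseline: int = 1_000_000) -> int:
    """Permits (tonnes of CO2) allocated in a given year under a linear phase-out."""
    if year >= end_year:
        return 0
    remaining_fraction = (end_year - year) / (end_year - start_year)
    return int(baseline * remaining_fraction)

for y in (2025, 2030, 2040, 2049, 2050):
    print(y, allocation(y))

# As the free allocation shrinks, emitters must buy permits on the market or
# cut emissions, which is what drives interest in options like CCS.
```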
The EU ETS market didn't come out of the blue: it was inspired by the USA's Clean Air Act Amendments of 1990, which laid down a trading scheme to curb acid rain by capping and trading sulphur dioxide emissions. Other countries, as well as individual US states, are now copying this approach for their own carbon dioxide emissions, e.g. China and California. The global value of carbon markets is expected to reach $2.68 trillion by 2028 and $22 trillion by 2050.
Albert Rösti, Switzerland's energy minister, said on Tuesday that CCS was "too expensive" for his landlocked country and that it would be the "last step" to meeting climate targets after easier measures such as cutting transport emissions. Nonetheless, he added: "It is not only theory, but Norway has gone to action."
Arthur T Knackerbracket has processed the following story:
China aims to become the world's largest supplier of chips within the next few years.
Market research and tech consulting firm Yole Group predicts that China will have 30% of the world's foundry production capacity, making it the largest hub of semiconductor production. At the moment, Taiwan holds the highest output capacity at 23%, followed closely by China at 21%, South Korea at 19%, Japan at 13%, the U.S. at 10%, and Europe at 8%. According to Digitimes, China is expected to take the lead because of massive investments in domestic semiconductor manufacturing, driven by Beijing's goal of reaching self-sufficiency in chip production.
In 2024, the East Asian country's semiconductor production hit 8.85 million wafers per month, an increase of 15% from the previous year, and is projected to hit 10.1 million in 2025. China achieved this with the construction of 18 new fabs — for example, Huahong Semiconductor, a pure-play foundry based in Shanghai, just opened a 12-inch fab in Wuxi, with production beginning in the first quarter of this year.
The U.S. is the largest consumer of wafers, accounting for about 57% of global demand. However, it holds just around 10% of global production capacity, meaning it must source the rest of its supply from other major producers like Taiwan, South Korea, and China. On the other hand, Digitimes says that Japan's and Europe's production largely satisfies internal demand. There are other producers out there, too, like Singapore and Malaysia, which make up around 6% of global foundry capacity. Their fabs are largely foreign-owned, though, and exist to satisfy demand in areas like the U.S. and China.
It seems that the report does not consider the fabs that are under construction in the United States, though. Several companies have started construction in the U.S., TSMC chief among them, with the company expecting to build 30% of its advanced chips in Arizona. Intel, Samsung, Micron, GlobalFoundries, and Texas Instruments also have projects underway, which will add to the U.S. wafer production capacity.
Additionally, the report did not specify how the technological capabilities of China's fabs compare to those of their Western counterparts. The U.S. has been putting export controls on the most advanced chip-making tech, making it harder for Chinese companies to acquire the necessary equipment to produce the latest chips. Because of this, Beijing is pouring billions of dollars into helping fill in the gaps in its semiconductor industry, like lithography tools and electronic design automation (EDA) software. So, even though China will likely have the upper hand when it comes to output capacity, the question of which country will have the greatest capability of producing cutting-edge chips in the near future is still up in the air.
Earthquake-induced electricity offers answer to mystery of gold nugget formation:
The pressure created by earthquakes could trigger quartz veins to generate enough electricity to form large nuggets of gold, researchers in Australia have found.
Most gold nuggets originate in quartz veins when gold-bearing hydrothermal fluids from the earth's crust are transported along fracture networks by earthquakes. '[Quartz veins] form over the accumulation of thousands of earthquake events,' says Chris Voisey, a geologist at Monash University in Melbourne, Australia and lead researcher on the study. 'There's a fault that turns into a quartz vein and then it'll fracture open repeatedly during many, many earthquakes... And every time it fractures open, a gold-bearing fluid from deep in the crust flows through it.'
The overall mechanism behind the transport and deposition of gold is relatively well understood, but the formation of large nuggets of gold in quartz veins has remained something of a mystery, particularly given the low concentration of gold in hydrothermal fluids and the chemical inertness of quartz. 'In this type of setting gold is transported in fluid as molecules in tiny amounts,' explains Laura Petrella, a geologist at the University of Western Australia, who was not involved in the study. 'So we struggle to understand how big nuggets can form, considering that we have only a little amount of gold in the fluid.'
Quartz veins are known to emit a measurable electric charge when put under mechanical stress, such as by an earthquake – a phenomenon known as piezoelectricity. To investigate whether earthquake-induced piezoelectric discharges from quartz could explain the formation of gold nuggets, the researchers conducted experiments using slabs of quartz. The slabs were placed within sealed chambers containing gold-rich solutions and the shaking of an earthquake was replicated mechanically.
'Using our model, you have a quartz vein, it breaks open during the "earthquake", gold-rich fluid is rushing by, but the ground is shaking because it's an earthquake, so it's all rattling, and that rattling will stress the quartz crystals that are in the vein wall and generate a voltage,' Voisey explains. 'If that voltage is high enough, i.e. higher than the redox potential of a gold-bearing ligand or gold-bearing molecule, that gold will get reduced, so it gets separated from the molecule – it'll precipitate as native gold.'
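For a rough sense of scale (a back-of-envelope estimate using textbook constants, not the paper's modelling), an idealized open-circuit voltage across a stressed quartz plate can be compared with typical gold reduction potentials; the stress level and crystal thickness below are assumptions.

```python
# Back-of-envelope estimate, not the paper's model: idealized open-circuit
# voltage across a stressed quartz plate, V ~ g11 * sigma * t.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
d11 = 2.3e-12      # quartz piezoelectric coefficient, C/N (textbook value)
eps_r = 4.5        # approximate relative permittivity of quartz

g11 = d11 / (eps_r * EPS0)   # piezoelectric voltage coefficient, V*m/N

sigma = 1e6        # assumed seismic stress on the crystal, 1 MPa (illustrative)
thickness = 1e-3   # assumed crystal thickness, 1 mm (illustrative)

voltage = g11 * sigma * thickness
print(f"Idealized open-circuit voltage: {voltage:.0f} V")  # tens of volts

# Standard reduction potentials for common gold species are roughly 1.0-1.7 V,
# so even after real-world losses (charge leakage, geometry) the threshold
# Voisey describes is plausible to exceed.
```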
As expected, the rattled quartz created an electrical field and this resulted in the gold nanoparticles in the solution being drawn out and deposited as grains along the vein. However, this wasn't the end of the story.
'When [the gold] precipitates, it then becomes the focus for ongoing electron donation, because gold is a conductor and quartz is an insulator,' says Voisey. 'If you have gold on a piece of quartz and you deform the quartz, it will use that voltage to donate its own electrons, and when it does that, it becomes the focus for further gold reduction from the fluid and it'll grow and become a big gold nugget.'
Voisey says this work was a 'pure scientific endeavour'. 'The cool thing about it is that we're explaining something that we haven't been able to explain. We know a lot about how gold mineralises – people have been studying this forever, but this was just one of the little outstanding things.'
Petrella says it was a 'novel idea' to look at the electric properties of quartz. 'It's very interesting ... of course, this type of gold deposit forms as a result of fracturing in the crust so we already kind of know they form as a result of earthquakes ... but they actually show that the specific property of the quartz might be able to trigger the deposition of gold that is in the fluid originally.'
She says that one limitation is that, for the process to work, the quartz would need to be crystallised already, which might not always be the case. 'This theory works really well for what we call remobilisation. Remobilisation is when you have a vein already formed with gold already in it, and then when you apply more stress on this vein, then you will move around, redissolve the gold, and then reprecipitate it. So in the case of remobilisation, this theory that they propose would work really well.'
She says that the researchers' findings could have important applications in gold mining. 'Mining explorers would be really interested in knowing how you concentrate gold, because it would help in targeting high-grade gold deposits – obviously, it's more environmentally friendly to mine a deposit where gold is concentrated.'
Gold nugget formation from earthquake-induced piezoelectricity in quartz:
Gold nuggets occur predominantly in quartz veins, and the current paradigm posits that gold precipitates from dilute (1 mg kg⁻¹ gold), hot, water ± carbon dioxide-rich fluids owing to changes in temperature, pressure and/or fluid chemistry. However, the widespread occurrence of large gold nuggets is at odds with the dilute nature of these fluids and the chemical inertness of quartz. Quartz is the only abundant piezoelectric mineral on Earth, and the cyclical nature of earthquake activity that drives orogenic gold deposit formation means that quartz crystals in veins will experience thousands of episodes of deviatoric stress. Here we use quartz deformation experiments and piezoelectric modelling to investigate whether piezoelectric discharge from quartz can explain the ubiquitous gold–quartz association and the formation of gold nuggets. We find that stress on quartz crystals can generate enough voltage to electrochemically deposit aqueous gold from solution as well as accumulate gold nanoparticles. Nucleation of gold via piezo-driven reactions is rate-limiting because quartz is an insulator; however, since gold is a conductor, our results show that existing gold grains are the focus of ongoing growth. We suggest this mechanism can help explain the creation of large nuggets and the commonly observed highly interconnected gold networks within quartz vein fractures.
A new study is shedding light on why solar radiation is more effective than other forms of energy at causing water to evaporate. The key factor turns out to be the oscillating electric field inherent to sunlight itself:
"It's well established that the sun is exceptionally good at causing water to evaporate – more efficient than heating water on the stove, for instance," says Saqlain Raza, first author of a paper on the work and a Ph.D. student at North Carolina State University. "However, it has not been clear exactly why. Our work highlights the role that electric fields play in this process."
"This is part of a larger effort in the research community to understand this phenomenon, which has applications such as engineering more efficient water-evaporation technologies," says Jun Liu, co-corresponding author of the paper and an associate professor of mechanical and aerospace engineering at NC State.
To explore questions related to sunlight's efficiency at evaporating water, the researchers turned to computational simulations. This allowed them to alter different parameters associated with sunlight to see how those characteristics influence evaporation.
"Light is an electromagnetic wave, which consists – in part – of an oscillating electric field," Liu says. "We found that if we removed the oscillating electric field from the equation, it takes longer for sunlight to evaporate water. But when the field is present, water evaporates very quickly. And the stronger the electric field, the faster the water evaporates. The presence of this electric field is what separates light from heat when it comes to evaporating water."
But what exactly is the oscillating electric field doing?
"During evaporation, one of two things is happening," Raza says. "Evaporation either frees individual water molecules, which drift away from the bulk of liquid water, or it frees water clusters. Water clusters are finite groups of water molecules which are connected to each other but can be broken away from the rest of the liquid water even though they are still interconnected. Usually both of these things happen to varying degrees."
"We found that the oscillating electric field is particularly good at breaking off water clusters," says Liu. "This is more efficient, because it doesn't take more energy to break off a water cluster (with lots of molecules) than it does to break off a single molecule."
[...] "This work substantially advances our understanding of what's taking place in this phenomenon, since we are the first to show the role of the water clusters via computational simulation," says Liu.
Journal Reference:
Saqlain Raza, Cong Yang, Xin Qian, et al. Oscillations in incident electric field enhances interfacial water evaporation [open], Materials Horizons (DOI: 10.1039/D5MH00353A)
Arthur T Knackerbracket has processed the following story:
Deutsche Bahn (DB) and Siemens Mobility have managed to get an ICE test train to 405 km/h (251 mph) on the Erfurt-Leipzig/Halle high-speed line.
While China, with a maglev train hitting 650 km/h (404 mph) in just seven seconds, might regard the achievement as cute, it is a milestone for Germany, where exceeding 300 km/h (186 mph) on the rail network is rare.
The UK had its own attempt at going beyond traditional rail in the 1960s and the early 1970s with the Hovertrain, but the project was cancelled in 1973.
France pushed a steel-wheeled TGV to a record 574.8 km/h (357 mph) in 2007, yet the German achievement will inject a dose of pride into the country's beleaguered network, once an icon of efficiency.
According to a report in the UK's Financial Times, Deutsche Bahn delivers "one of the least reliable services in central Europe," even when compared to the UK's rail system, which is hardly a performance benchmark.
The test ran on a high-speed line that had been in continuous operation for ten years. According to Dr Philipp Nagl, CEO of DB InfraGO AG, no adjustments were needed.
"It is confirmation that infrastructure investments are the foundation for reliable, sustainable, and efficient mobility and logistics over generations," he said.
[...] Thomas Graetz, Vice President High Speed and Intercity Trains, Siemens Mobility, said: "Our goal was to gain in-depth insights into acoustics, aerodynamics, and driving behavior at extreme speeds." Mission accomplished – though what counts as "extreme speeds" seems to vary by country.
Trains on the UK's HS2 railway (whenever it finally opens) are expected to reach speeds of 360 km/h (224 mph).
An insight into the technology behind Germany's rail network came last year, with an advertisement for an IT professional willing to endure Windows 3.11.
Arthur T Knackerbracket has processed the following story:
Arm-based servers are rapidly gaining traction in the market, with shipments tipped to jump 70 percent in 2025. However, this remains well short of the chip designer's ambition to make up half of datacenter CPU sales worldwide by the end of the year.
Market watcher IDC says Arm servers are attracting mass interest thanks mainly to the launch of large rack-scale configurations, referring to systems such as Nvidia's DGX GB200 NVL72, designed for AI processing.
In its latest Worldwide Quarterly Server Tracker, IDC estimates that servers based on the Arm architecture will account for 21.1 percent of total global shipments this year - not the 50 percent touted by Arm infrastructure chief Mohamed Awad in April.
Servers with at least one GPU fitted, sometimes styled as AI-capable, are projected to grow 46.7 percent, representing almost half of the total market value for this year. The fast pace of adoption by hyperscale customers and cloud service providers is fueling the server market, which IDC says is set to triple in size over just three years.
[...] IDC's regional market projections anticipate the US having the highest expansion with a 59.7 percent jump over 2024, which would see it account for almost 62 percent of the total server revenue by the end of 2025.
China is the other region heating up in the sales stakes, with IDC forecasting growth of 39.5 percent to make up more than 21 percent of the quarterly revenue worldwide. EMEA and Latin America are in single-digit growth territory at 7 and 0.7 percent, respectively, while Canada is expected to decline 9.6 percent this year due to an unspecified "very large deal" that happened in 2024.
Bruce Schneier, along with Ryan Shandler and Anthony J. DeMattee, has published a blog post on the role that confidence plays in elections and, specifically, the role that electronic voting systems have had in undermining that trust.
This technological leap has made voting more accessible and efficient, and sometimes more secure. But these new systems are also more complex. And that complexity plays into the hands of those looking to undermine democracy.
In recent years, authoritarian regimes have refined a chillingly effective strategy to chip away at Americans’ faith in democracy by relentlessly sowing doubt about the tools U.S. states use to conduct elections. It’s a sustained campaign to fracture civic faith and make Americans believe that democracy is rigged, especially when their side loses.
Previously:
(2022) A Scientist's Quest for an Accessible, Unhackable Voting Machine
(2020) U.S. Offers Reward of $10M for Info Leading to Discovery of Election Meddling
(2020) HBO's 'Kill Chain' Documentary Highlights Flaws in US Election Machines
(2019) Researchers Assembled Over 100 Voting Machines. Hackers Broke Into Every Single One.
(2019) DARPA's $10 Million Voting Machine Couldn't be Hacked at DefCon (for the Wrong Reasons)
(2019) Top Voting Machine Maker Reverses Position on Election Security, Promises Paper Ballots
(2019) Amid Worries About Election Security, Microsoft Unveils Voting Machine Software
(2018) I Bought Used Voting Machines on eBay for $100 Apiece. What I Found Was Alarming
(2018) Def Con 26 Voting Village Sees an 11-Year-Old Crack a Voting Machine
and many more ...
As a followup to this SN story, we have a ruling!
decathorpe (Fabio Valentini) posted:
Given feedback in this thread (and to a lesser extent, also on the mailing list) I have decided to withdraw this proposal.
- It is clear that the Fedora 44 target for this Change was too early. To some degree, I expected this to be the case, and was prepared to move the proposed implementation of the Change to a later release. Fedora 44 was just the earliest "reasonable" target. However, I think this also shows an inherent conflict in the current Changes process - if a big Change (like this one) is submitted quite early (out of caution!), that also front-loads the discussion and decision process instead of giving things more time. For example, I don't think the discussion would have been meaningfully different if the targeted release had been Fedora 46 instead of 44 - which is one of the reasons why I decided to withdraw the change instead of just re-targeting it at a later Fedora release.
- I don't think the problem that was attempted to be addressed with this proposal will go away. With more and more projects dropping official support for building / running their software on 32-bit architectures, it's just going to get worse over the next few years. Dealing with widely used software falling out from under our feet won't be fun. To some degree, always pushing the latest and greatest :tm: software in Fedora is also working against us here - if we just stuck with foo 1.0 LTS for 10 years, we just wouldn't need to care that foo 3.0 dropped support for running on 32-bit systems ...
- I am disappointed in some of the reactions this :double_exclamation_mark: proposal :double_exclamation_mark: has received, with some people apparently reading it in the most uncharitable way. It was a proposal that tried to address technical problems package maintainers and release engineering are facing, not some conspiracy to break the "gaming use case". That said, I was expecting a lot of feedback on this one, but not hundreds of people shouting "DON'T DO THIS WHY DON'T YOU CARE ABOUT YOUR USERS I WILL SWITCH DISTROS IMMEDIATELY" levels of feedback (though to some degree, I also blame clickbait "tech press" or YouTubers for that ...)
I am now looking forward to seeing actual (and actionable) counter-proposals.
— Fabio
Standards nerd and technology enthusiast Terence Eden has analyzed the Brother printers' default password scandal in light of UK computer security legislation.
So, to recap. The law says an Internet-connected device (including printers) must have a password which is not "based on or derived from publicly available information". As I understand it, having a serial-number based password is OK as long as you don't publicise the serial number. I expect that if it were printed on a sticker that would be fine. But because the serial can be discovered remotely, it fails at this point.
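As a hypothetical illustration of the rule Eden describes (all function names, fields, and the derivation scheme below are invented, not Brother's actual algorithm), a default password fails the test once it can be reproduced from identifiers the device exposes remotely:

```python
# Hypothetical sketch: a default password is unacceptable if it can be derived
# from publicly discoverable information such as a remotely readable serial
# number. The derivation scheme and names here are invented for illustration.
import hashlib
import hmac
import secrets

def password_from_serial(serial: str) -> str:
    """A weak scheme: default password derived directly from the serial number."""
    return hashlib.sha256(serial.encode()).hexdigest()[:8]

def derivable_from_public_info(password: str, public_identifiers: list[str]) -> bool:
    """True if the password matches a serial-derived candidate for any exposed identifier."""
    return any(
        hmac.compare_digest(password, password_from_serial(ident))
        for ident in public_identifiers
    )

serial = "BRN-000123456"  # imagine this is readable over the network

# A serial-derived default fails once the serial is remotely discoverable...
print(derivable_from_public_info(password_from_serial(serial), [serial]))  # True

# ...whereas a per-unit random password set at the factory does not.
print(derivable_from_public_info(secrets.token_urlsafe(8), [serial]))      # False
```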
The UK law in question is The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023. Brother might also have crossed the line in California, which has outlawed default passwords since 2020.
Previously:
(2025) Massive Privacy Concern: Over 40,000 Security Cameras Are Streaming Unsecured Footage Worldwide
(2024) Secure Boot is Completely Broken on 200+ Models From 5 Big Device Makers
(2022) An Update to Raspberry Pi OS Bullseye
(2018) Weak Passwords to be Banned in California
Mexican drug cartel hacker spied on FBI official's phone to track and kill informants, report says:
In 2018, a hacker hired by the Mexican Sinaloa drug cartel run by the infamous kingpin Joaquín "El Chapo" Guzmán spied on the U.S. Embassy in Mexico City with the goal of identifying "people of interest" for the cartel to target and kill, according to a new U.S. government watchdog report.
[...] The hacker "offered a menu of services related to exploiting mobile phones and other electronic devices," and was able to observe people going in and out of the U.S. Embassy in Mexico's capital, according to the report, including the FBI assistant legal attaché, a federal agent who works overseas along with local law enforcement authorities.
Somehow — the report does not detail exactly how — the hacker was "able to use" the official's mobile phone number to "obtain calls made and received, as well as geolocation data, associated with" the official's phone.
According to the FBI, the hacker also accessed Mexico City's camera system to follow the attaché through the city and "identify people" the attaché met with, the report said.
"According to the case agent, the cartel used that information to intimidate and, in some instances, kill potential sources or cooperating witnesses," the report added.
[...] For years, Mexico has been at the bleeding edge of surveillance and hacking capabilities, on both sides of the drug war.
On the side of the law, for more than a decade now, multiple local and federal law enforcement agencies in Mexico have spent millions of dollars to use spyware made by Hacking Team and later NSO Group to go after cartels, as well as activists and journalists.
On the criminal side, the Sinaloa cartel used encrypted phones: specially crafted devices designed to minimize the risk of surveillance by stripping them of core functionality and adding encrypted communications technologies.
According to a Vice News investigation, Mexican cartels were tapping security software used by local government agencies "to locate and disappear rivals and hide their crimes."
Earlier, in 2015, Motherboard reported that local cartels employed "a hacker brigade" to build and manage their own communications networks. Later, in 2017, Motherboard revealed that a hacker working for the Sinaloa cartel helped authorities track down and arrest the cartel's elusive lieutenant, Dámaso López Núñez. The hacker had originally been hired by the cartel in 2014 to try to hack into the high-security Altiplano Federal Penitentiary, where El Chapo was being held at the time.
Genetic Study Reveals Humanity's Longest Migration:
Modern humans are thought to have walked out of Africa around 60,000 years ago, and they kept going until they reached every habitable part of the planet. Researchers have now revealed more about the longest migration in human history. Reporting in Science, a new study indicates that early Asians embarked on the longest prehistoric migration of humans. This trek was over 20,000 kilometers (12,427 miles) long and took multiple generations of people traveling over thousands of years as they moved, on foot, from North Asia to the southernmost part of South America. Ice bridges are thought to have made this route possible.
This study involved a genetic analysis of 1,537 individuals representing 139 diverse ethnic groups. Patterns of ancestry were analyzed, such as sequences that were shared among individuals, or variations that arose and accumulated over time. These differences and similarities showed how various groups moved, adapted, and split apart as they encountered new environments during their journey from Africa, to North Asia, and finally to Tierra del Fuego in what is now Argentina.
The study found that people got to the northwestern tip of South America about 14,000 years ago. They split into groups after that: some stayed in the Amazon; others moved into an area known as Dry Chaco and some continued onto the ice fields of Southern Patagonia or the peaks and valleys of the Andes.
The work suggested that as people migrated, they also encountered many environmental challenges, which they sometimes overcame.
"Those migrants carried only a subset of the gene pool in their ancestral populations through their long journey. Thus, the reduced genetic diversity also caused a reduced diversity in immune-related genes, which can limit a population's flexibility to fight various infectious diseases," noted corresponding study author Kim Hie Lim, an Associate Professor at Nanyang Technological University of Singapore (NTU), among other appointments.
"This could explain why some Indigenous communities were more susceptible to illnesses or diseases introduced by later immigrants, such as European colonists. Understanding how past dynamics have shaped the genetic structure of today's current population can yield deeper insights into human genetic resilience."
Academic institutions from around the world were part of this project, which was supported by the GenomeAsia100K consortium, a nonprofit effort to analyze Asian genomes to advance precision medicine and biomedical research.
"Our study shows that a greater diversity of human genomes is found in Asian populations, not European ones, as has long been assumed due to sampling bias in large-scale genome sequencing projects," added penultimate study author Stephan Schuster, an NTU Professor, among other appointments.
Sources: Nanyang Technological University of Singapore (NTU)
Journal Reference: https://www.science.org/doi/10.1126/science.adk5081