Published: March 13, 2025 5.53pm CET
Tobacco's hidden friendly side: how the controversial plant could be used for good in pharmaceutical production on Earth and beyond.
Tobacco kills 8 million people worldwide every year, but imagine if it could be used to make medicine. The idea isn't unheard of – tobacco has been used as a herbal medicine in the past. But now, in the age of genetic engineering, tobacco may well be the future of pharmaceutical production on Earth and beyond.
European explorers first encountered tobacco in the Americas during the 16th century. There, indigenous people had used it for centuries, either by inhalation, ingestion or topically, as a treatment for any number of illnesses like headaches, colds, sores and stomach upsets.
Tobacco became a panacea in 16th century Europe, prescribed for almost everything. The most bizarre application, however, would probably be as a cure for symptoms of drowning in the 18th century. Tobacco smoke enema kits were kept by the Thames River in London. Should someone fall in, they would be awoken with a shock with one of these kits. The thinking was that the tobacco smoke would provide warmth and stimulation.
While there is little evidence for tobacco being inherently medicinal, its harmfulness was observed even in the 18th century.
A lot of our modern medications come from plants, like the cancer chemotherapy Taxol from yew trees, or the heart medication Digoxin from foxgloves. These medications are tiny molecules. But if we want anything more complicated, like a protein-based pharmaceutical such as insulin or a vaccine, the equipment involved becomes a lot more technical.
Most of these more complex medications are the product of a kind of genetic engineering called recombinant technology. The genetic material required to make, for instance, insulin, is combined with a cell's genetic material. That cell (which can be bacteria, yeast or animal cells) will now produce the insulin along with all its own proteins. It's much like when a child stealthily slips a chocolate bar in with the rest of their parent's shopping.
The technology is extraordinarily expensive (around US$2 billion, or £1.5 billion) because of the huge vats or bioreactors needed to grow recombinant cells in sterile conditions. This makes access to these kinds of pharmaceuticals difficult for low-income countries.
This is where tobacco could make a difference. Much like the recombinant cells we currently use, plants can also be genetically engineered to produce pharmaceuticals. Plants, however, only need soil, water and sunlight to grow. Tobacco is the largest leafy non-food crop. It is very amenable to genetic modification and is an absolute powerhouse when it comes to producing proteins, whether its own or ones we've introduced. This, combined with its high biomass, makes it the most prolific plant for pharmaceutical production.
Tobacco may be indigenous to the Americas and Australia, but it is a resilient plant and can be grown all over the world. Thanks to its ease of genetic modification, tobacco can be made even more resilient, for instance by engineering in drought resistance.
This idea of molecular farming is still new, but is starting to gain traction. In 2012 the Canadian company Medicago demonstrated the speed of tobacco as a production platform. They used tobacco to produce more than 10 million doses of the influenza vaccine in one month. Given that globally we can produce 40 million doses of the vaccine per month, this achievement was ground-breaking.
There are several clinical trials underway looking at tobacco-produced immunotherapies for diseases like HIV, and even Ebola virus disease. One treatment already received emergency use status in the US for returning healthcare workers during the 2014 Ebola virus outbreak. These diseases disproportionately affect low-income countries and tobacco is grown predominantly in these countries already.
Tobacco is even being used to produce cancer immunotherapies. These cancer treatments work by boosting our own immune systems to fight off cancer cells, with few side-effects compared to traditional chemotherapy. They are, however, prohibitively expensive, so this platform could make them more accessible.
Smoking has caused a great deal of harm worldwide, but its decline in popularity is going to cause a new problem: tobacco farmers in low-income countries will lose their livelihoods. So why not repurpose these crops?
Drugs on Mars
Oscar Wilde once wrote "every saint has a past, and every sinner has a future". So what is the future of tobacco?
Thinking beyond Earth, if we plan to visit or colonise other planets, we are going to need medications while we're there. Tobacco can grow all over the world, so why not on Mars too? A packet of tobacco seeds would take up much less room on a rocket than a five-year supply of insulin, or an entire bioreactor facility for that matter. Plus, tobacco is a renewable source: collect the seeds and replant.
Before we head off to Mars, though, we should address the problems here on Earth, and sustainability is a big one. Plants that we extract medicines from today, such as yew trees, are becoming endangered.
An emerging field is engineering tobacco to produce the same medications we typically extract from these plants. Not only that, but tobacco can also produce expensive spices like saffron, or flavours such as raspberry, at a fraction of the cost. Not even the sky is the limit for tobacco's potential.
Arthur T Knackerbracket has processed the following story:
Atari parts and accessories store Best Electronics stands bravely defiant against the march of time and technology, continuing to serve this increasingly niche retro hardware market — a whopping 41 years after it was set up.
As well as supplying parts, the store continues to source and make new parts, provide support, hints, and tips, and claims to have spent $100,000+ in engineering development. In contrast, the iconic and innovative Atari Corp. behind all the firm's home computers, and advanced consoles like the Lynx and Jaguar, went bankrupt in 1996, almost 30 years ago.
Many readers and writers here on Tom's Hardware will have grown up with Atari computers and consoles. Thus it's admirable to see exclusive new and upgraded parts like rubber domes for your ST / STE / Falcon computer keyboard and all Gold PCB boards for your CX series joysticks, plus lots of other parts, continue to be manufactured and supplied to Atari fans.
The retailer also stocks "a lifetime supply" of new-old products in some categories. Interestingly, it reveals many of these were warehoused from the "thousands and thousands of pallets of Atari goods" it bought when Atari Sunnyvale was liquidated.
As a previous owner of Atari ST, Falcon, Lynx and Jaguar hardware, looking through these products is like hunting through a treasure trove. Best Electronics says it lists 5,000+ Atari items on its site. But these are just the most popular items, so if you are after something that appears absent from the extensive parts and components lists, send the store an email to ask after it.
Alternatively, go back in retail time by ordering the Best Rev. 10 All Atari catalog — a paper catalog of over 220 pages, making it about half an inch thick and 1.4 pounds in weight. Helpfully, the catalog includes 330 pictures of Atari bits, as well as extras like prototype information, repair tips and tricks, a complete list of Atari custom chips and replacement ICs, and more. Check out the two-page sample and more information on the Best Electronics site.
Discord launches SDK to help developers enhance social experiences in their games:
Discord on Monday announced the launch of its Discord Social SDK, a free toolkit that allows developers to leverage the platform's social infrastructure to enhance their games' social and multiplayer experiences. The toolkit allows developers to improve their in-game experiences, whether players have a Discord account or not.
Social integrations include a unified friends list that allows players to access their Discord friends in-game and their in-game friends on Discord, making it easier to stay connected both in and out of the game. Plus, the SDK offers deep-linked game invites that enable players to invite their friends to directly join their party or lobby.
Communication features include cross-platform messaging to allow players to keep conversing across desktop, console, and mobile. Other features include the ability to link in-game chats to specific Discord channels in their servers, and voice chat.
"For game creators from indie to AAA, Discord is where you can connect with the world's largest and most engaged community of players to fuel the growth of your game before, during, and after launch," Discord co-founder and CTO Stan Vishnevskiy said in a press release. "Game discovery and retention have never been so critical, and we're excited to help developers grow their games by reaching gamers where they are."
Discord says the toolkit is already being used by many developers, including Theorycraft Games, Facepunch Studios, 1047 Games, Scopely, Mainframe Industries, Elodie Games, Tencent Games, and others.
Discord Social SDK is compatible with C++, Unreal Engine, and Unity, and supports Windows 11+ and macOS. Support for console and mobile is coming soon, the company says.
The platform boasts more than 200 million monthly active users who spend a combined 1.5+ billion hours playing games each month on PC alone.
https://phys.org/news/2025-03-iguanas-world-colonize-fiji-genetic.html
Iguanas have often been spotted rafting around the Caribbean on vegetation and, ages ago, evidently caught a 600-mile ride from Central America to colonize the Galapagos Islands. But for long-distance travel, the Fiji iguanas can't be touched.
A new analysis conducted by biologists at the University of California, Berkeley, and the University of San Francisco (USF) suggests that sometime after about 34 million years ago, Fiji iguanas landed on the isolated group of South Pacific islands after voyaging 5,000 miles from the western coast of North America—the longest known transoceanic dispersal of any terrestrial vertebrate.
Overwater dispersal is the main way newly formed islands get populated by plants and animals, including humans, often leading to the evolution of new species and entirely new ecosystems. Understanding how these colonizations happen has fascinated scientists since the time of Charles Darwin, the originator of the theory of evolution by natural selection.
The new analysis, published in the journal Proceedings of the National Academy of Sciences, suggests that the arrival of the ancestors of the Fiji iguanas coincided with the formation of these volcanic islands.
The estimated time of arrival, 34 million years ago or more recently, is based on the timing of the genetic divergence of the Fiji iguanas, Brachylophus, from their closest relatives, the North American desert iguanas, Dipsosaurus.
Previously, biologists had proposed that Fiji iguanas may have descended from an older lineage that was more widespread around the Pacific but has since died out, leaving Brachylophus as the sole iguanids in the western Pacific Ocean. Another option was that the iguanas hitchhiked from tropical parts of South America and then through Antarctica or even Australia, though there is no genetic or fossil evidence to support this.
The new analysis puts those theories to rest.
"We found that the Fiji iguanas are most closely related to the North American desert iguanas, something that hadn't been figured out before, and that the lineage of Fiji iguanas split from their sister lineage relatively recently, much closer to 30 million years ago, either post-dating or at about the same time that there was volcanic activity that could have produced land," said lead author Simon Scarpetta, a herpetologist and paleontologist who is a former postdoctoral fellow at UC Berkeley and is now an assistant professor at USF in the Department of Environmental Science.
"That they reached Fiji directly from North America seems crazy," said co-author Jimmy McGuire, UC Berkeley professor of integrative biology and herpetology curator at the Museum of Vertebrate Zoology.
"But alternative models involving colonization from adjacent land areas don't really work for the timeframe, since we know that they arrived in Fiji within the last 34 million years or so. This suggests that as soon as land appeared where Fiji now resides, these iguanas may have colonized it. Regardless of the actual timing of dispersal, the event itself was spectacular."
More information: Simon G. Scarpetta et al, Iguanas rafted more than 8,000 km from North America to Fiji, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2318622122.
Amazon is killing a privacy feature to bolster Alexa+, the new subscription assistant:
Since Amazon announced plans for a generative AI version of Alexa, we've been concerned about user privacy. With Alexa+ rolling out to Amazon Echo devices in the coming weeks, we're getting a clearer view of the privacy concessions people will have to make to maximize usage of the AI voice assistant and avoid bricking functionality of already-purchased devices.
In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon's cloud. Amazon apparently sent the email to users with "Do Not Send Voice Recordings" enabled on their Echo. Starting on March 28, recordings of every command spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.
[...] One of the most marketed features of Alexa+ is its more advanced ability to recognize who is speaking to it, a feature known as Alexa Voice ID. To accommodate this feature, Amazon is eliminating a privacy-focused capability for all Echo users, even those who aren't interested in the subscription-based version of Alexa or want to use Alexa+ but not its ability to recognize different voices.
However, there are plenty of reasons people wouldn't want Amazon to receive recordings of what they say to their personal device. For one, the idea of a conglomerate being able to listen to personal requests made in your home is, simply, unnerving.
[...] Likely looking to get ahead of these concerns, Amazon said in its email today that by default, it will delete recordings of users' Alexa requests after processing. However, anyone with their Echo device set to "Don't save recordings" will see their already-purchased devices' Voice ID feature bricked. Voice ID enables Alexa to do things like share user-specified calendar events, reminders, music, and more. Previously, Amazon has said that "if you choose not to save any voice recordings, Voice ID may not work." As of March 28, broken Voice ID is a guarantee for people who don't let Amazon store their voice recordings.
[...] Amazon is forcing Echo users to make a couple of tough decisions: Grant Amazon access to recordings of everything you say to Alexa or stop using an Echo; let Amazon save voice recordings and have employees listen to them or lose a feature set to become more advanced and central to the next generation of Alexa.
However, Amazon is betting big that Alexa+ can dig the voice assistant out of a financial pit. Amazon has publicly committed to keeping the free version of Alexa around, but Alexa+ is viewed as Amazon's last hope for keeping Alexa alive and making it profitable. Anything Amazon can do to get people to pay for Alexa takes precedence over other Alexa user demands, including, it seems, privacy.
People are using Google's new AI model to remove watermarks from images:
Users on social media have discovered a controversial use case for Google's new Gemini AI model: removing watermarks from images, including from images published by Getty Images and other well-known stock media outfits.
Last week, Google expanded access to its Gemini 2.0 Flash model's image generation feature, which lets the model natively generate and edit image content. It's a powerful capability, by all accounts. But it also appears to have few guardrails. Gemini 2.0 Flash will uncomplainingly create images depicting celebrities and copyrighted characters, and — as alluded to earlier — remove watermarks from existing photos.
As several X and Reddit users noted, Gemini 2.0 Flash won't just remove watermarks, but attempt to fill in any gaps created by a watermark's deletion. Other AI-powered tools do this, too, but Gemini 2.0 Flash seems to be exceptionally skilled at it — and free to use.
To be clear, Gemini 2.0 Flash's image generation feature is labeled as "experimental" and "not for production use" at the moment, and is only available in Google's developer-facing tools like AI Studio. The model also isn't a perfect watermark remover. Gemini 2.0 Flash appears to struggle with certain semi-transparent watermarks and watermarks that cover large portions of images.
Still, some copyright holders will surely take issue with Gemini 2.0 Flash's lack of usage restrictions. Models including Anthropic's Claude 3.7 Sonnet and OpenAI's GPT-4o explicitly refuse to remove watermarks; Claude calls removing a watermark from an image "unethical and potentially illegal."
Removing a watermark without the original owner's consent is considered illegal under U.S. copyright law (according to law firms like this one) outside of rare exceptions.
He was a self-educated genius whose groundbreaking discoveries in the fields of physics and chemistry electrified the world of science and laid the foundations for Albert Einstein's theory of relativity nearly a century later.
Now, the little-known notebooks of the Victorian scientist Michael Faraday have been unearthed from the archive of the Royal Institution and are to be digitised and made permanently accessible online for the first time.
The notebooks include Faraday's handwritten notes on a series of lectures given by the electrochemical pioneer Sir Humphry Davy at the Royal Institution in 1812. "None of these notebooks have been looked at or analysed in any great depth," said Charlotte New, head of heritage for the Royal Institution. "They're little known to the public."
Faraday, the son of a blacksmith, left school at 13 and was working as an apprentice bookbinder when he attended the lectures. He penned very careful notes and presented one of his notebooks to Davy, hoping for a job at the Royal Institution despite his working-class background and rudimentary education.
The notebooks shed light on the workings of Faraday's mind and reveal he made intricate drawings to visualise the scientific experiments and principles he was learning about at the lectures. "He's taking the time to make his own publication and grounding what's being taught to him in his own understanding," said New. "He's heavily illustrating his notes to understand the principle that's been taught to him." He even wrote an index for each notebook, she said, just for his own use and personal research. "This is at a time when paper is taxed. It shows how he's really trying to understand the science within."
When Faraday gave Davy the notebook, he expressed his "desire to escape from trade, which I thought vicious and selfish, and enter into the service of Science".
[...] Faraday later gave an account of this job offer: "At the same time that he [Davy] gratified my desires as to scientific employment, he advised me to remain a bookbinder, telling me that Science was a harsh mistress... poorly rewarding those who devoted themselves to her service."
Despite Davy's advice, Faraday accepted the job. It was a decision that would prove to be seminal for science. Over the next 55 years, while working for the Royal Institution, Faraday discovered several fundamental laws of physics and chemistry – including his law of electromagnetic induction in 1831, which illuminated the relative motion of charged particles.
It was thanks to Faraday's trailblazing experiments at the institution that he discovered electromagnetic rotation in 1821, a breakthrough that led to the development of the electric motor. He became the first scientist to liquefy a gas in 1823, discovered the hydrocarbon benzene in 1825, invented the electric generator in 1831 and discovered the laws of electrolysis in the early 1830s, helping to coin terms such as electrode, cathode and ion. In 1845, after finding the first experimental evidence that a magnetic field could influence polarised light – a phenomenon that became known as the Faraday effect – he proved light and electromagnetism are interconnected.
Today, Faraday's law of induction is widely credited as enabling Einstein, who kept a framed picture of Faraday on his wall, to develop his theory of relativity.
Throughout his career Faraday continued to draw his apparatus in his notebooks when making these groundbreaking discoveries. "It's something that he starts here, with these illustrations, and carries on through," New said.
A curated selection of key pages from the notebooks will be launched online for the first time on the Royal Institution website on 24 March, to mark 200 years since Faraday founded the annual Royal Institution Christmas lectures.
People really are getting dumber. Futurism echoes a paywalled Financial Times report:
...assessments show that people across age groups are having trouble concentrating and losing reasoning, problem-solving, and information-processing skills — all facets of the hard-to-measure metric that "intelligence" is supposed to measure.
Is it COVID?
Though there has been a demonstrably steep decline in cognitive skills since the COVID-19 pandemic due to the educational disruption it presented, these trends have been in evidence since at least the mid-2010s, suggesting that whatever is going on runs much deeper and has lasted far longer than the pandemic.
Is it smartphones?
While there certainly are ways to use tech that don't cause harm to cognition, studies show that "screen time" as we know it today hurts verbal functioning in children and makes it harder for college-age adults to concentrate and retain information.
They also mention a decline in reading, but don't come to any conclusions. So why do you think people are getting dumber?
https://deltiasgaming.com/forget-windows-11-steamos-for-desktops-may-be-released-soon/
Valve has had SteamOS for a while now, but it was exclusive to the Steam Deck. That may change soon because a dedicated version for anyone to install on their PCs could be on the horizon.
Interesting details have popped up in a leak that suggests gamers may be able to finally consider ditching the traditional Windows operating system altogether. Let's discuss Valve's rumored SteamOS for desktop PCs and whether or not SteamOS can replace Windows 11.
[...] Over on X, the user @SadlyItsBradley made a post saying, "It's almost here," alongside an image of a splash screen showing the SteamOS text. Reportedly, this isn't a community-created image but one that is present within the code for SteamOS, specifically shown for devices that aren't a Steam Deck.
According to the user, who seems to be well-versed in updates around Valve software and hardware, Valve has "pushed a ton of commits." This makes it seem like the company is preparing for a "Steam OS general public release." Of course, Valve has not officially confirmed anything yet, so we should take this with a grain of salt. The rumor is definitely exciting to hear, though, especially for gamers increasingly frustrated with Windows.

[...] With all this information in mind, Steam OS should be coming soon in 2025. Stay tuned for more updates on this topic, as we will definitely let you know if we know more.
A new concept of quantum gravity arising from entropy may be useful not only in the ever-present need to explain dark energy and dark matter, but might also edge closer to a grand unified theory of everything that we observe, and infer from our observations.
The fundamental idea is to relate the metric of Lorentzian spacetime to a quantum operator, playing the role of a renormalizable effective density matrix, and to describe the matter fields topologically, according to a Dirac-Kähler formalism, as the direct sum of a 0-form, a 1-form and a 2-form. While the geometry of spacetime is defined by its metric, the matter fields can be used to define an alternative metric, the metric induced by the matter fields, which geometrically describes the interplay between spacetime and matter.

The proposed entropic action is the quantum relative entropy between the metric of spacetime and the metric induced by the matter fields. The modified Einstein equations obtained from this action reduce to the Einstein equations with zero cosmological constant in the regime of low coupling. By introducing the G-field, which acts as a set of Lagrange multipliers, the proposed entropic action reduces to a dressed Einstein-Hilbert action with an emergent small and positive cosmological constant that depends only on the G-field. The equations of modified gravity obtained from it remain second order in the metric and in the G-field. A canonical quantization of this field theory could bring new insights into quantum gravity, while further research might clarify the role the G-field could play in dark matter.
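For readers unfamiliar with the key quantity here: the quantum relative entropy between two density matrices has a standard form (the notation below is ours, not necessarily the paper's, and the last line is only a schematic of how the proposal uses it):

```latex
% Quantum relative entropy between density matrices \rho and \sigma:
S(\rho \,\|\, \sigma) \;=\; \operatorname{Tr}\!\left[\rho \left(\ln \rho - \ln \sigma\right)\right]
% Schematically, the proposal builds one operator from the spacetime metric g
% and the other from the matter-induced metric \tilde{g}, so the entropic
% action has the form
S_{\mathrm{entropic}} \;=\; S\!\left(\rho[g] \,\big\|\, \sigma[\tilde{g}]\right)
```

The relative entropy vanishes when the two operators coincide, which is consistent with the claim that the modified equations reduce to the ordinary Einstein equations in the low-coupling regime, where the two metrics nearly agree.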
I was always fascinated by the geometrical nature of magnetism arising from electrical charge in motion, and yearned to extend the phenomenon to moving mass having a similar gravi-magnetism. It would be very interesting if entropy of the sources of the strong or weak (or both) nuclear, or other as yet un-described, forces manifests as the "monopole" very weak force that is gravity.
Arthur T Knackerbracket has processed the following story:
The ChatGPT developer submitted an open letter full of proposals to the White House Office of Science and Technology Policy (OSTP) regarding the Trump administration's AI Action Plan, currently under development.
It outlines the super-lab's views on how the White House can support the American AI industry. This includes putting in place a regulatory regime – but one that "ensures the freedom to innovate," of course; an export strategy to let America exert control over its allies while locking out enemies like China; and adopting measures to drive growth, including for federal agencies to "set an example" on adoption.
The suggestions regarding copyright display a certain amount of hubris. It talks up the "longstanding fair use doctrine" of American copyright law, and claims this is "even more critical to continued American leadership on AI in the wake of recent events in the PRC," presumably referring to the interest generated by China's DeepSeek earlier this year.
America has so many AI startups because the fair use doctrine promotes AI development, OpenAI says, while "rigid copyright rules are repressing innovation and investment," in other markets, singling out the European Union for allowing "opt-outs" for rights holders.
The biz previously claimed it would be "impossible" to build top-tier AI models that meet today's needs without using people's copyrighted work.
It proposes that the US government "take steps to ensure that our copyright system continues to support American AI leadership," and that it shapes international policy discussions around copyright and AI, "to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress."
Not content with that, OpenAI wants the US government to actively assess the level of data available to American AI firms and "determine whether other countries are restricting American companies' access to data and other critical inputs."
Dr Ilia Kolochenko, CEO at ImmuniWeb and an Adjunct Professor of Cybersecurity at Capitol Technology University in Maryland, expressed concern over OpenAI's proposals.
"Arguably, the most problematic issue with the proposal – legally, practically, and socially speaking – is copyright," Kolochenko told The Register.
"Paying a truly fair fee to all authors – whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors – will probably be economically unviable," he claimed, as AI vendors "will never make profits."
Advocating for a special regime or copyright exception for AI technologies is a slippery slope, he argues, adding that US lawmakers should regard OpenAI's proposals with a high degree of caution, mindful of the long-lasting consequences they may have on the American economy and legal system.
OpenAI also proposes maintaining the three-tiered AI diffusion rule framework, but with some alterations to encourage other nations to commit "to deploy AI in line with democratic principles set out by the US government."
The stated aim of this strategy is "to encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage."
OpenAI talks of expanding market share in Tier I countries (US allies) through the use of "American commercial diplomacy policy," banning the use of China-made equipment (think Huawei) and so on.
The ChatGPT lab also proposes "AI Economic Zones" to be created in America by local, state, and the federal government together with industry, which sounds similar to the UK government's "AI Growth Zones."
These will be intended to "speed up the permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors," and would allow exclusions from the National Environmental Policy Act, which requires federal agencies to evaluate the environmental impacts of their actions.
Finally, OpenAI proposes that federal agencies should "lead by example" on AI adoption. Uptake in federal departments and agencies remains "unacceptably low," the Microsoft-championed lab says, and wants to see the "removal of known blockers to the adoption of AI tools, including outdated and lengthy accreditation processes, restrictive testing authorities, and inflexible procurement pathways."
Google has also put out its response [PDF] to the White House's action plan call, arguing also for fair use defenses and data-mining exceptions for AI training.
For the third time in recent memory, CloudFlare has blocked large swaths of niche browsers and their users from accessing web sites that CloudFlare gate-keeps. In the past these issues have been resolved quickly (within a week) and apologies issued with promises to do better:
2024-03-11: Cloudflare checks broken again?
2024-07-08: Cloudflare checks broken yet AGAIN?
2025-01-30: Cloudflare Verification Loop issues
This time around it has been over 6 weeks and CloudFlare has been unable or unwilling to fix the problem on their end, effectively stalling any progress on the matter with various tactics including asking browser developers to sign overarching NDAs:
Re: CloudFlare: summary and status
Some of the affected browsers:
• Pale Moon
• Basilisk
• Waterfox
• Falkon
• SeaMonkey
• Various Firefox ESR flavors
• Thorium (on some systems)
• Ungoogled Chromium
From the main developer of Pale Moon:
Our current situation remains unchanged: CloudFlare is still blocking our access to websites through the challenges, and the captcha/turnstile continues to hang the browser until our watchdog terminates the hung script after which it reloads and hangs again after a short pause (but allowing users to close the tab in that pause, at least). To say that this upsets me is an understatement. Other than deliberate intent or absolute incompetence, I see no reason for this to endure. Neither of those options are very flattering for CloudFlare.
I wish I had better news.
NIST has chosen a new algorithm for post-quantum encryption called HQC, which will serve as a backup for ML-KEM, the main algorithm for general encryption.
HQC is based on different math than ML-KEM, which could be important if a weakness were discovered in ML-KEM.
NIST plans to issue a draft standard incorporating the HQC algorithm in about a year, with a finalized standard expected in 2027.
The overall process at NIST is described at https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography_Standardization
The algo's "homepage" seems to be https://pqc-hqc.org/
Currently there seems to be only a C++ implementation; has anyone found others? Have you upgraded your software, including SN, to PQC?
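Whatever the underlying math, HQC, ML-KEM, and every other key encapsulation mechanism expose the same three operations: key generation, encapsulation, and decapsulation. A minimal sketch of that interface in Python, using a deliberately toy (and completely insecure) XOR construction in place of HQC's actual code-based cryptography:

```python
import secrets

# Toy KEM illustrating the keygen/encapsulate/decapsulate interface that
# HQC and ML-KEM both follow. The "math" here is a plain XOR one-time pad,
# which is NOT secure and NOT how HQC works -- interface illustration only.

KEY_BYTES = 32

def keygen():
    """Return a (public_key, secret_key) pair. Toy: both keys are equal."""
    sk = secrets.token_bytes(KEY_BYTES)
    pk = sk  # real schemes derive pk from sk via hard math; this toy does not
    return pk, sk

def encapsulate(pk):
    """Pick a fresh shared secret and hide it under the public key."""
    shared = secrets.token_bytes(KEY_BYTES)
    ciphertext = bytes(a ^ b for a, b in zip(shared, pk))
    return ciphertext, shared

def decapsulate(sk, ciphertext):
    """Recover the shared secret using the secret key."""
    return bytes(a ^ b for a, b in zip(ciphertext, sk))

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
ss_receiver = decapsulate(sk, ct)
assert ss_sender == ss_receiver  # both sides now hold the same 32-byte secret
```

The point of standardizing HQC alongside ML-KEM is precisely that both plug into this same interface: a protocol could swap one for the other, or run both in a hybrid mode, if a weakness ever turned up in one of them.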
Here's the Story
If you thought the Raspberry Pi's chip was dinky, well, get a load of the nattily named Texas Instruments MSPM0C1104, said to be the world's smallest microcontroller, or MCU, measuring a mere 1.38 mm².
If you look carefully at the image [...] , you can just make out the eight ball-grid connectors on the tiny 1.38 mm² chip package. In other words, that almost-invisible thing isn't just the silicon die, but the entire chip package, the equivalent of a fully packaged CPU from Intel or AMD. Yup, mind veritably blown. For reference, the package for the Broadcom BCM2712 chip that powers the Raspberry Pi 5 is about 280 mm². So you could fit about 200 of these things in the space the Broadcom BCM2712 takes up.
[...]
Despite the diminutive proportions, which Texas Instruments claims to be 38% smaller than any other MCU, this teensy speck of a chip packs a fully functional Arm 32-bit Cortex-M0+ CPU core running at a towering 24 MHz. It also has 16 KB of flash memory and 1 KB of SRAM.
Arthur T Knackerbracket has processed the following story:
The European Space Agency this week inaugurated its new supercomputing facility built with HPE.
The aptly named "Space HPC" facility is billed as being "demonstrator infrastructure" designed to help Europe's space industry "mitigate risks associated with data processing, modelling, and simulations."
Located in the Italian town of Frascati, 20km outside Rome, Space HPC houses a machine packing 34,000 cores' worth of the "latest generation of AMD & Intel processors." 108 Nvidia H100 GPUs are also present, giving the machine 5 petaflops of raw performance potential.
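The 5-petaflop figure roughly checks out on the back of an envelope. Assuming the H100s are SXM parts at around 34 FP64 teraflops each, and crediting each CPU core with roughly 40 gigaflops (both are ballpark numbers of mine, not ESA-published specs):

```python
# Back-of-envelope peak-performance estimate for Space HPC.
# The per-unit figures below are rough assumptions, not ESA-published numbers.

H100_COUNT = 108
H100_FP64_TFLOPS = 34.0        # approx. H100 SXM peak FP64 (vector) throughput
CPU_CORES = 34_000
CPU_GFLOPS_PER_CORE = 40.0     # e.g. ~2.5 GHz x 16 FLOPs/cycle, very rough

gpu_pflops = H100_COUNT * H100_FP64_TFLOPS / 1_000
cpu_pflops = CPU_CORES * CPU_GFLOPS_PER_CORE / 1_000_000
total_pflops = gpu_pflops + cpu_pflops

print(f"GPUs: {gpu_pflops:.2f} PF, CPUs: {cpu_pflops:.2f} PF, "
      f"total: {total_pflops:.2f} PF")
```

Under those assumptions the GPUs contribute about 3.7 petaflops and the CPUs about 1.4, landing close to the quoted 5 petaflops of raw potential.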
That power would see Space HPC ranked at around 210th place on the current Top500 list of Earth's mightiest supercomputers.
The machine uses InfiniBand networking, packs 156 TB of RAM, and includes 3.6 PB of solid-state storage.
Direct liquid cooling allowed it to bag a power usage effectiveness score of "below 1.09." The machine is also plumbed into the heating system of the campus where it resides.
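Power usage effectiveness is simply total facility power divided by the power delivered to the computing gear itself, so a PUE below 1.09 means less than 9% of the facility's draw goes to cooling, power conversion, and other overhead. A quick illustration with invented wattages (ESA quotes only the ratio):

```python
# PUE = total facility power / IT equipment power.
# The wattages below are made up for illustration; ESA only quotes the ratio.

it_power_kw = 1000.0        # power consumed by the machine itself (assumed)
overhead_kw = 85.0          # cooling, power conversion, lighting, etc. (assumed)

pue = (it_power_kw + overhead_kw) / it_power_kw
print(f"PUE = {pue:.3f}")   # 1.085, i.e. "below 1.09"
```

Direct liquid cooling helps on both sides of that ratio, and routing the waste heat into the campus heating system means some of the overhead is recovered rather than thrown away.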
As is usually the case with supercomputers, Space HPC can be configured to run different workloads. The machine therefore offers partitions dedicated to general compute tasks, and two other partitions that take advantage of the H100s to run AI/ML workloads or other software that needs accelerators.
ESA's Space Safety Programme has already tested Space HPC to improve its ability to – you guessed it – model space weather. Among other things, it can improve warnings of future solar activity that could pose a danger to infrastructure in orbit or on the ground.
[...] The org is, however, already accepting expressions of interest for time on the machine via a form you can find here.