Personalized pricing has spread across many industries. Here's how consumers can avoid it:
Recently, Delta Air Lines announced it would expand its use of artificial intelligence to provide individualized prices to customers. This move sparked concern among flyers and politicians. But Delta isn’t the only business interested in using AI this way. Personalized pricing has already spread across a range of industries, from finance to online gaming.
Customized pricing – where each customer receives a different price for the same product – is a holy grail for businesses because it boosts profits. With customized pricing, free-spending people pay more while the price-sensitive pay less. Just as clothes can be tailored to each person, custom pricing fits each person’s ability and desire to pay.
[...] Third, many computer pricing algorithms look at your location, since location is a good proxy for income. I was once in Botswana and needed to buy a plane ticket. The price on my computer was about $200. Unfortunately, before booking I was called away to dinner. After dinner my computer showed the cost was $1,000 – five times higher. It turned out that after dinner I had used my university’s VPN, which told the airline I was located in a rich American neighborhood. Before dinner I was located in a poor African town. Shutting off the VPN reduced the price.
Last, often to get a better price in face-to-face negotiations, you need to walk away. To do this online, put something in your basket and then wait before hitting purchase. I recently bought eyeglasses online. As a cash payer, I didn’t have my credit card handy. It took five minutes to find it, and the delay caused the site to offer a large discount to complete the purchase.
See also:
YouTube to gauge US users’ ages with AI after UK and Australia add age checks:
YouTube announced on Tuesday that it will begin to use artificial intelligence to estimate the ages of users in the US, in order to show them age-appropriate content.
The rollout of the new feature comes one day after Australia’s government announced it would ban children under 16 from using YouTube and less than a week after the UK implemented sweeping age checks on content on social networks.
YouTube’s AI age verification on its home turf indicates it is putting into place a form of compliance with the Australian and UK requirements, despite its persistent opposition to age-check mandates.
[...] When YouTube determines a user is teen or pre-teen, the site will disable personalized advertising, activate digital wellbeing features and put stricter content filters as well as behavioral restrictions into place.
YouTube’s AI will assess a user’s age via multiple behavioral factors, including what kind of videos the user searches for, the categories of videos they watch, and how long the account has been active, per its blogpost.
“This technology will allow us to infer a user’s age and then use that signal, regardless of the birthday in the account, to deliver our age-appropriate product experiences and protections,” wrote James Beser, YouTube’s director of product management for youth, adding that the company had used the technology in other markets before introducing it in the US.
If the AI’s estimation is incorrect, YouTube says it will allow a user to verify their age with a credit card, a government ID or a selfie.
These are the conference events to keep an eye on. You can even stream a few:
The security industry is hitting Vegas hard this week with three conferences in Sin City that bring the world's largest collection of security pros together for the annual summer camp.
The week kicks off with BSides Las Vegas, which runs from Monday to Wednesday at the Tuscany Hotel. Of the more than 200 BSides security conventions held around the world every year, this one is the biggest; tickets are already sold out.
BSides started as a conference for rejected Black Hat speakers, but those days are long gone. Now it has a range of talk tracks showcasing new research, and this year, passwords are a key theme, with a specific three-day schedule devoted to finding solutions to one of computing's oldest security challenges.
There is a series of live feeds on the conference's YouTube channel and, if you miss seeing the talks in real time, the videos should remain archived. At the password track on Monday at 1700 PT, there's a disturbing-sounding presentation on a custom rig used to crack 936 million passwords with 92 percent accuracy that should be worth tuning into.
[...] For anyone considering adding BSides to their schedule, it's worth a visit. While the smallest of the conventions, it's also one of the most offbeat and there are presentations on everything from building hacking hardware to commercial licensing problems in the industry. And, as is traditional, there's a Capture the Flag competition running and festivities in the evening.
[...] Black Hat: The opening keynote will be a farewell (sort of) address from Mikko Hyppönen, who, after a 34-year tenure at F-Secure hunting malware, is quitting the industry to work on drones. As he told The Register in June, the Ukraine war has spurred him into working on the technology, particularly since his home country, newly minted NATO member Finland, has a massive border with Russia.
The core of the talks is about unpleasant new hacks and vulnerabilities in hardware and software. It was at Black Hat in 2008 that the late Dan Kaminsky revealed a fundamental flaw in DNS that could have run riot through the internet's backbone. While there's nothing on that scale this year, there are sessions scheduled on an Apple zero-day, ways to bypass Windows Hello's authentication systems, and even a talk on satellite vulnerabilities and how to exploit them.
Elsewhere in the talk tracks, there is a key focus on AI, as with everything in the security business these days, but this isn't a cheerleading event and there are some skeptical sessions planned, as well as deep dives into flaws. Several speakers are giving talks on how to fool AIs into breaking safety guardrails or leaking information, and bots – their use and misuse – are a particular focus.
[...] DEF CON is the original hacker summer camp, started in 1993 in a few hotel rooms by an 18-year-old Jeff Moss with around 100 people. It now hosts tens of thousands of visitors paying more than $500 a head to listen to talks, take part in hacking and gaming competitions, and visit over 30 "villages" dedicated to everything from ham radio to military hacking demonstrations. Its talks are not live-streamed, but most get posted to YouTube eventually.
Once again, AI will feature heavily, and the convention hosts the annual AI Cyber Challenge run by DARPA, a competition using the latest LLMs to find vulnerabilities, install fixes that don't break the system, and generate reports, all under time pressure. Teams have been competing for months, and the final event will crown a winner, who will presumably be barraged with lucrative job offers.
[...] The bulk of the talks are pure hacking – vulnerabilities, interesting ways to crack systems, and war stories that advise on what not to do. With the exception of DARPA's competition, this is possibly the least AI-focused conference of the three and is much more about hacking existing systems with current technology.
Most of the villages have their own talks scheduled on everything from policy to privacy and industry-specific topics. There's a car hacking center that Tesla is fond of, the social engineering village is fascinating but also terrifying in showing how easy it is to play people, and the lock picking village is well worth a visit to meet some of the best in the business and get a tutorial.
[...] DEF CON is the fun convention for hackers, while Black Hat is becoming more of a sales and networking-led event, but still has very high-quality security talks and training, and BSides is useful to see what's up and coming in the security industry. The Reg will report on news as it happens, but if you've got any recommendations, feel free to add them to the comments section.
= Links in article:
https://www.youtube.com/@BsideslvOrg/streams
https://bsideslv.org/talks#7PHURF
https://bsideslv.org/talks#9FF3LX
https://www.theregister.com/2025/03/03/cybersecurity_jobs_market/
https://bsideslv.org/talks#UYXVAU
https://www.theregister.com/2025/06/04/mikko_hypponen_drone/
https://www.theregister.com/2023/08/12/black_hat_network/
https://aicyberchallenge.com/
https://www.youtube.com/watch?v=3n2cBSBIAP0
An Ohio couple welcomes a baby boy from a nearly 31-year-old frozen embryo:
A baby boy born last week to an Ohio couple developed from an embryo that had been frozen for more than 30 years in what is believed to be the longest storage time before a birth.
In what's known as embryo adoption, Lindsey and Tim Pierce used a handful of donated embryos that had been frozen since 1994 in pursuit of having a child after fighting infertility for years. Their son was born Saturday from an embryo that had been in storage for 11,148 days, which the Pierces' doctor says sets a record.
It's a concept that has been around since the 1990s but is gaining traction as some fertility clinics and advocates, often Christian-centered, oppose discarding leftover embryos because of their belief that life begins at or around conception and that all embryos deserve to be treated like children who need a home.
"I felt all along that these three little hopes, these little embryos, deserved to live just like my daughter did," said Linda Archerd, 62, who donated her embryos to the Pierces.
Just about 2% of births in the U.S. are the result of in vitro fertilization, and an even smaller fraction involve donated embryos.
However, medical experts estimate about 1.5 million frozen embryos are currently being stored throughout the country, with many of those in limbo as parents wrestle with what to do with their leftover embryos created in IVF labs.
[...] According to Dr. John David Gordon, the transfer of the nearly 31-year-old embryo makes it the longest-frozen embryo to result in a live birth. He would know: Gordon says his clinic assisted with the previous record, when Lydia and Timothy Ridgeway were born from embryos frozen for 30 years, or 10,905 days.
"I think that these stories catch the imagination," Gordon said. "But I think they also provide a little bit of a cautionary tale to say, Why are these embryos sitting in storage? You know, why do we have this problem?"
In a statement, Lindsey and Tim Pierce said the clinic's support was just what they needed.
"We didn't go into this thinking about records—we just wanted to have a baby," Lindsey Pierce said.
Previously: Baby Born From Embryo Conceived When Birth Mother Was One Year Old [2017]
KubeSphere kills open source edition:
KubeSphere has become the latest service to abruptly yank an open source edition of a product, triggering outcry from users.
An announcement was posted in the project's repository stating: "Effective immediately ... we will suspend the download links for the KubeSphere open source version and cease providing free technical support."
"We are fully aware that this may cause inconvenience to some users, and we sincerely apologize for any inconvenience caused. However, we believe that by concentrating our resources, we can provide users with more professional, stable, and comprehensive commercial-grade services and support."
KubeSphere is "a distributed operating system for cloud-native application management, using Kubernetes as its kernel." It is also, according to the project's website, "a CNCF-certified [Cloud Native Computing Foundation] Kubernetes platform, 100 percent open source, built and improved by the community."
Effectively, KubeSphere simplifies the management of Kubernetes, which can be unwieldy when it comes to setup and configuration.
One of the founding members of the KubeSphere team, having left KubeSphere developer QingCloud the previous day, posted some of the possible reasoning behind the move: "In recent years, repeated violations of the open source license – by third parties repackaging and monetizing the project – have caused tangible impact on QingCloud's interests.
"While the source code remains available under open source norms, discontinuing the out-of-the-box distributions is, in my view, a challenging adjustment for today's collaborative open source ecosystem.
"Still, as someone who once helped steer this journey, I respect the decision."
Peter Smalls, the general manager of Cloud Native at SUSE, was more critical. In a statement to El Reg, he wrote, "SUSE, with over 30 years of open source commitment, firmly believes that sustainable innovation thrives through genuine openness, collaboration, and enabling customer choice. KubeSphere's abrupt shift away from its open source edition, despite citing challenges, undermines the vital trust essential for a healthy open source ecosystem and has rightly triggered upset within its community. Moves such as this represent the potential erosion of predictability and trust needed in the open source community."
The code's license specifically forbids commercial use of the source without explicit permission or a commercial license.
In the GitHub post, the KubeSphere team appeared to blame the rapid uptake of AI in the tech industry and subsequent changes to the infrastructure layer. So "to adapt to the new era, further enhance product capabilities and service quality, and focus on core technology R&D and the optimization of commercial-grade solutions, after years of planning and careful consideration," the open source edition is for the chop.
Customers using it (or who were planning to) have been directed to the company's customer service team, who will "tailor a commercial version solution for you, including dedicated technical support, vulnerability fixes, version upgrades, and other value-added services, to ensure your business systems run stably in an efficient and secure environment."
Users are not impressed. One said: "This is without a doubt one of the most shortsighted and damaging business decisions I have seen a company make," declaring the decision a "massive red flag" for any customer using it or considering it for future use.
Another said: "Maybe I'm just a pessimist, but it feels like in the last few years the greed keeps on accelerating, and open source projects keep dying."
"Dying" might be a bit strong. But the business model on which some projects have been based hasn't been looking too well lately.
RFK Jr cancels $500m in mRNA vaccine development in the US:
The US Department of Health and Human Services (HHS) plans to cancel $500m (£376m) in funding for mRNA vaccines being developed to counter viruses that cause diseases such as the flu and Covid-19.
That will impact 22 projects being led by major pharmaceutical companies, including Pfizer and Moderna, for vaccines against bird flu and other viruses, HHS said.
Health Secretary Robert F Kennedy Jr, a vaccine sceptic, announced he was pulling the funding over claims that "mRNA technology poses more risks than benefits for these respiratory viruses".
Doctors and health experts have criticised Kennedy's longstanding questioning of the safety and efficacy of vaccines and his views on health policies.
The development of mRNA vaccines to target Covid-19 was critical in helping slow down the pandemic and saving millions of lives, said Peter Lurie, a former US Food and Drug Administration official.
He told the BBC that the change was the US "turning its back on one of the most promising tools to fight the next pandemic".
In a statement, Kennedy said his team had "reviewed the science, listened to the experts, and acted". "[T]he data show these vaccines fail to protect effectively against upper respiratory infections like COVID and flu," he said.
He said the department was shifting the funding toward "safer, broader vaccine platforms that remain effective even as viruses mutate".
Kennedy also claimed that mRNA vaccines can help "encourage new mutations and can actually prolong pandemics as the virus constantly mutates to escape the protective effects of the vaccine".
Health experts have said that viruses mutate regardless of whether vaccines exist for them.
This was true every year for the flu virus, for example, said Dr Paul Offit, the director of the Vaccine Education Center at Children's Hospital of Philadelphia.
Dr Offit said mRNA vaccines were "remarkably safe" and a key to helping prevent against severe infections from viruses like Covid-19.
HHS said the department that runs the vaccine projects, Biomedical Advanced Research and Development Authority (BARDA), would focus on "platforms with stronger safety records and transparent clinical and manufacturing data practices".
While some vaccines use an inactivated virus to trigger an immune response, mRNA vaccines work by teaching cells how to make proteins that can trigger an immune response. Moderna and Pfizer's mRNA vaccines were tested in thousands of people before being rolled out and were found to be safe and effective.
Dr Offit, who invented the rotavirus vaccine, said the funding cancellation could put the US in a "more dangerous" position to respond to any potential future pandemic. He noted mRNA vaccines have a shorter development cycle, which is why they were crucial to responding to the Covid-19 pandemic.
Since taking office, Kennedy has taken a number of steps to transform how the nation's health department develops and regulates vaccines.
In June, he fired all 17 members of a committee that issues official government recommendations on immunisations, replacing them with some people who have criticized the safety and efficacy of vaccines.
He also removed the Covid-19 vaccine from the Centers for Disease Control and Prevention's recommended immunization schedule for healthy children and pregnant women.
Related stories at the BBC:
Is RFK Jr's divisive plan to Make America Healthy Again fearmongering - or revolutionary?
RFK Jr sacks entire US vaccine committee
RFK Jr's vaccine panel to review long-approved jabs for children
The two faces of Robert F Kennedy Jr
Linuxiac reports that another malicious package has been uploaded to the Arch User Repository (AUR). This time around the package was google-chrome-stable, which installed a remote-access trojan along with Google Chrome.
The good news—if you can call it that—is that the google-chrome-stable package was available on the AUR only for a few hours before the malware hidden inside was discovered. Still, it did get a few upvotes, which suggests at least some users ended up installing it.
The Arch Linux project had to warn users about a similar attack less than a month ago when a user uploaded three browser packages that also installed a malicious script identified as a remote-access trojan.
Also see: https://lwn.net/Articles/1032193/
Ubuntu users will see a few changes to their command-line tools with the launch of Ubuntu 25.10 in October. The wget utility for downloading files is being replaced by wcurl, which offers most of the same basic functionality. It's FOSS reports: "Ubuntu Server 25.10 will no longer include wget by default, switching to wcurl instead. Fresh installations will see this change when 25.10 releases in October. wget has been the standard command-line download tool on Linux systems for years. Most server administrators and scripts rely on its straightforward syntax for file downloads. On the other hand, wcurl is a simple curl wrapper that lets you download files without remembering curl parameters, using curl under the hood with sane defaults."
The report goes on to note that another GNU utility, the screen command, will be dropped in favour of tmux.
Hiding secret codes in light can protect against fake videos
A team of Cornell researchers has developed a way to "watermark" light in videos, which they can use to detect whether a video is fake or has been manipulated.
The idea is to hide information in nearly-invisible fluctuations of lighting at important events and locations, such as interviews and press conferences or even entire buildings, like the United Nations Headquarters. These fluctuations are designed to go unnoticed by humans, but are recorded as a hidden watermark in any video captured under the special lighting, which could be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.
Peter Michael, a graduate student in the field of computer science who led the work, will present the study, "Noise-Coded Illumination for Forensic and Photometric Video," on Aug. 10 at SIGGRAPH 2025 in Vancouver, British Columbia.
Editing video footage in a misleading way is nothing new. But with generative AI and social media, it is faster and easier to spread misinformation than ever before.
"Video used to be treated as a source of truth, but that's no longer an assumption we can make," said Abe Davis, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, who first conceived of the idea. "Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it's only getting harder to tell what's real."
To address these concerns, researchers had previously designed techniques to watermark digital video files directly, with tiny changes to specific pixels that can be used to identify unmanipulated footage or tell if a video was created by AI. However, these approaches depend on the video creator using a specific camera or AI model—a level of compliance that may be unrealistic to expect from potential bad actors.
By embedding the code in the lighting, the new method ensures that any real video of the subject contains the secret watermark, regardless of who captured it. The team showed that programmable light sources, like computer screens and certain types of room lighting, can be coded with a small piece of software, while older lights, like many off-the-shelf lamps, can be coded by attaching a small computer chip about the size of a postage stamp. The program on the chip varies the brightness of the light according to the secret code.
So, what secret information is hidden in these watermarks, and how does it reveal when video is fake? "Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos," Davis said. "When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations."
Part of the challenge in this work was getting the code to be largely imperceptible to humans. "We used studies from the human perception literature to inform our design of the coded light," Michael said. "The code is also designed to look like random variations that already occur in light, called 'noise,' which also makes it difficult to detect unless you know the secret code."
If an adversary cuts out footage, such as from an interview or political speech, a forensic analyst with the secret code can see the gaps. And if the adversary adds or replaces objects, the altered parts generally appear black in recovered code videos.
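The correlation idea behind this detection can be illustrated with a toy, single-pixel simulation. This is purely an illustrative sketch, not the paper's actual algorithm (real code videos are spatial, time-stamped recordings, and the amplitudes and functions below are invented for the demo): brightness in genuine footage correlates with the secret code, while replaced footage does not.

```python
import random

def make_code(n, seed):
    # Pseudorandom +/-1 sequence: the "secret code" held by the verifier.
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def film(scene, code, amplitude=0.02):
    # Simulate recording under coded light: each frame's brightness is
    # nudged up or down by an imperceptibly small amount set by the code.
    return [s * (1 + amplitude * c) for s, c in zip(scene, code)]

def code_score(frames, code):
    # Correlate brightness residuals against the code. Genuine footage
    # scores near the embedding amplitude; replaced frames score near zero.
    mean = sum(frames) / len(frames)
    return sum((f - mean) * c for f, c in zip(frames, code)) / len(frames)

n = 2000
code = make_code(n, seed=42)
scene = [1.0] * n              # flat scene brightness, for simplicity
video = film(scene, code)

# Tamper with the second half: splice in footage shot without the coded light.
tampered = video[:n // 2] + [1.0] * (n // 2)

print(f"genuine half score: {code_score(tampered[:n // 2], code[:n // 2]):.4f}")
print(f"edited half score:  {code_score(tampered[n // 2:], code[n // 2:]):.4f}")
```

Scoring the two halves separately is what localizes the edit: the untouched half correlates strongly with the code, while the spliced half does not.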
The team has successfully used up to three separate codes for different lights in the same scene. With each additional code, the patterns become more complicated and harder to fake.
"Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder," Davis said. "Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other."
They have also verified that this approach works in some outdoor settings and on people with different skin tones.
Davis and Michael caution, however, that the fight against misinformation is an arms race, and adversaries will continue to devise new ways to deceive.
"This is an important ongoing problem," Davis said. "It's not going to go away, and in fact, it's only going to get harder."
More information: Peter Michael et al, Noise-Coded Illumination for Forensic and Photometric Video Analysis, ACM Transactions on Graphics (2025). DOI: 10.1145/3742892
China's biggest solar firms shed nearly one-third of their workforces last year, company filings show, as one of the industries hand-picked by Beijing to drive economic growth grapples with falling prices and steep losses:
The job cuts illustrate the pain from the vicious price wars being fought across Chinese industries, including solar and electric vehicles, as they grapple with overcapacity and tepid demand. The world produces twice as many solar panels each year as it uses, with most of them manufactured in China.
Longi Green Energy (601012.SS), Trina Solar, Jinko Solar (688223.SS), JA Solar (002459.SZ), and Tongwei (600438.SS) collectively shed some 87,000 staff, or 31% of their workforces on average, last year, according to a Reuters review of employment figures in public filings.
Analysts say the previously unreported job losses were likely a mix of layoffs and attrition due to cuts to pay and hours as companies sought to stem losses.
[...] While analysts say it is unclear whether job cuts continued this year, Beijing is increasingly signalling it intends to intervene to cut capacity, sending polysilicon prices soaring nearly 70% in July while solar panel prices have increased more modestly.
[...] But many provincial governments are likely to be reluctant to crack down hard on overcapacity, analysts say. These officials are scored on jobs and economic growth and are loath to see local champions sacrificed to meet someone else's target.
Also at ZeroHedge.
OpenAI's new open models can run on your hardware instead of in the cloud:
OpenAI is releasing new generative AI models today [Aug 05, 2025], and no, GPT-5 is not one of them. Depending on how you feel about generative AI, these new models may be even more interesting, though. The company is rolling out gpt-oss-120b and gpt-oss-20b, its first open-weight models since the release of GPT-2 in 2019. You can download and run these models on your own hardware, with support for simulated reasoning, tool use, and deep customization.
When you access the company's proprietary models in the cloud, they're running on powerful server infrastructure that cannot be replicated easily, even in enterprise. The new OpenAI models come in two variants (120b and 20b) to run on less powerful hardware configurations. Both are transformers with a configurable chain of thought (CoT), supporting low, medium, and high settings. The lower settings are faster and use fewer compute resources, but the outputs are better with the highest setting. You can set the CoT level with a single line in the system prompt.
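As a sketch of what that single-line setting looks like in practice (the endpoint, model name, and the exact "Reasoning: high" convention here are assumptions based on OpenAI's published prompt format for gpt-oss; check the model card for the authoritative syntax):

```python
# Sketch of selecting chain-of-thought effort for a locally hosted gpt-oss
# model served behind an OpenAI-compatible endpoint (e.g. vLLM or Ollama).
messages = [
    # A single line in the system prompt selects low, medium, or high effort.
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Why is the sky blue?"},
]

payload = {"model": "gpt-oss-20b", "messages": messages}

# With the `openai` client pointed at a local server, the call would be:
#   client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
#   resp = client.chat.completions.create(**payload)
print(payload["messages"][0]["content"])
```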
The smaller gpt-oss-20b has a total of 21 billion parameters, utilizing mixture-of-experts (MoE) to reduce that to 3.6 billion parameters per token. As for gpt-oss-120b, its 117 billion parameters come down to 5.1 billion per token with MoE. The company says the smaller model can run on a consumer-level machine with 16GB or more of memory. To run gpt-oss-120b, you need 80GB of memory, which is more than you're likely to find in the average consumer machine. It should fit on a single AI accelerator GPU like the Nvidia H100, though. Both models have a context window of 128,000 tokens.
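Those memory figures are consistent with a rough back-of-envelope calculation, assuming the roughly 4-bit quantization OpenAI ships for the MoE weights (an assumption for this sketch; real usage also needs room for the KV cache and activations, so treat these as floor values):

```python
BITS_PER_PARAM = 4  # assumed ~4-bit quantized weights

def weight_gb(params_billion, bits=BITS_PER_PARAM):
    # Convert a parameter count to gigabytes of weight storage.
    return params_billion * 1e9 * bits / 8 / 1e9

for name, total, active in [("gpt-oss-20b", 21, 3.6), ("gpt-oss-120b", 117, 5.1)]:
    print(f"{name}: ~{weight_gb(total):.1f} GB of weights, "
          f"{active}B active params per token")
# gpt-oss-20b comes to ~10.5 GB, plausibly within a 16 GB machine;
# gpt-oss-120b comes to ~58.5 GB, needing an 80 GB-class accelerator.
```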
[...] OpenAI says it doesn't intend for anyone to replace its proprietary models with the new OSS releases. It did not set out to replicate what you can do with the mainline GPT releases here, and there are some notable limitations. For example, gpt-oss-120b and gpt-oss-20b are text-only with no multimodality out of the box. However, the company acknowledges there are times when someone might not want to rely on a big cloud-based AI—locally managed AI has lower latency and more opportunities for customization, and it can keep sensitive data secure on site.
OpenAI is cognizant that many users of the company's proprietary models are also leveraging open source models for these reasons. Currently, those firms are using non-OpenAI products for local AI, but the team designed the gpt-oss models to integrate with the proprietary GPT models. So customers can now use end-to-end OpenAI products even if they need to process some data locally.
Because these models are fully open and governed by the Apache 2.0 license, developers will be able to tune them for specific use cases. Like all AI firms, OpenAI builds controls into its models to limit malicious behavior, but it's been a few years since the company released an open model—the gpt-oss models are much more powerful than GPT-2 was in 2019.
[...] If you want to test that claim yourself, gpt-oss-120b and gpt-oss-20b are available for download today on HuggingFace. There are also GitHub repos for your perusal, and OpenAI will host stock versions of the models on its own infrastructure for testing. If you are interested in more technical details, the company has provided both a model card and a research blog post.
Arthur T Knackerbracket has processed the following story:
Cadence admits guilt in exporting chip design tools to China’s National University of Defense Technology, which is believed to be working on the Chinese nuclear program.
Cadence Design Systems, one of the leading electronic design automation (EDA) firms in the U.S., has pleaded guilty to charges that it sold its chip design software to the National University of Defense Technology (NUDT), located in Hunan Province in South-Central China. According to Reuters, the institution is believed to be working on nuclear explosion simulations, linking it to China’s nuclear weapons research and development efforts.
The university has been on the U.S. Department of Commerce’s Entity List — a list of companies, institutions, and individuals that the White House deems to be operating contrary to its national security and foreign policy interests — since 2015. Furthermore, its affiliates and aliases, including Hunan Guofang Keji University, Central South CAD Center, and CSCC, were added to the restricted list in 2019 and 2022.
Despite this, court records reveal that the chip design company and its China subsidiary, Cadence China, delivered EDA tools to CSCC at least 56 times between 2015 and 2020. This continued even though several Cadence China employees knew that CSCC was simply an alias that NUDT used to circumvent American sanctions. Furthermore, Cadence also sold its products to Phytium Technology Co., a Chinese semiconductor company known to work closely with NUDT, without applying for the proper export licenses.
The company pleaded guilty to one count of conspiracy to commit export control violations, requiring it to pay $140 million in forfeitures and civil and criminal penalties. The court is also expected to place it on probation for three years, barring it from doing business with sanctioned institutions at the risk of even harsher penalties.
The U.S. lifted its ban on the general export of EDA tools, including those from Cadence, earlier this month. However, the lifting only applies to institutions that aren’t on the Entity List, so any company that wants to do business with NUDT and its affiliates must still acquire a proper export license from the federal government.
Cadence, so far, is the biggest company to have pleaded guilty to breaking American sanctions on Chinese companies. However, it’s not the only one facing scrutiny. Nvidia, the current world leader in AI semiconductors, has seen billions of dollars’ worth of its AI chips smuggled into China. While its CEO, Jensen Huang, continues to deny that its chips are being diverted, there is a thriving black market in China for banned GPUs like the B200 and RTX 5090.
The U.S. is tightening its grip on export controls, even pressuring its allies like Singapore and Malaysia to clamp down on smuggling rings. However, the massive demand in China makes smuggling AI technologies quite lucrative, making it nearly impossible to stop completely.
A billing change caused AWS to delete developer Seuros' entire account rather than roll back to the old billing account on record. He has written an annotated timeline and analysis of how AWS came not just to delete a 10-year-old, paid-up account without warning, but also to give him quite a runaround.
On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation.
[...] Lessons Learned
- Never trust a single provider—no matter how many regions you replicate across
- "Best practices" mean nothing when the provider goes rogue
- Document everything—screenshots, emails, correspondence timestamps
- The support theater is real—they literally cannot help you
- Have an exit strategy executable in hours, not days
AWS won't admit their mistake. They won't acknowledge the rogue proof of concept. They won't explain why MENA operates differently. They won't even answer whether your data exists.
But they will ask you to rate their support 5 stars.
The cloud isn't your friend. It's a business. And when their business needs conflict with your data's existence, guess which one wins?
Plan accordingly.
[...] At one point during this ordeal, I hit rock bottom. I was ready to delete everything—yank all my gems from RubyGems, delete the organizations, the websites, everything I'd created. Leave a single message: "AWS killed this."
It would have made headlines. Caused chaos for thousands of projects. Trended on HN, Reddit, YouTube. But it would have hurt the wrong people—developers who depend on my work, not AWS.
As he points out, having all your activities managed by a single provider leaves you at risk of such extinction events. But moving to another, similar cloud provider may just kick the can down the road, inviting a repeat under new circumstances.
Previously:
(2023) AWS to Charge Customers for Public IPv4 Addresses From 2024
(2019) Amazon Slams Media For Not Saying Nice Things About AWS
(2019) Amazon is Saying Nothing About the DDoS Attack That Took Down AWS, but Others Are
(2019) Azure Might be Woefully Inefficient and Unprofitable
(2018) The Cloud is a Six-Horse Race, and Three of Those Have Been Lapped
Live from the bottom of the ocean: an underwater robot draws millions of viewers as it live-streams its exploration of the seabed.
A robot is navigating the dark, cold depths of the South Atlantic seabed, streaming images of dazzling coral and previously unseen fish, while scientists provide live commentary on YouTube – and Argentines are captivated. It's the first time human eyes, albeit remotely, are witnessing this underwater oasis in real time, where the frigid, nutrient-rich Malvinas current meets the warm, salty waters of the Brazil Current.
https://www.france24.com/en/americas/20250803-the-bright-side-underwater-robot-live-stream-mesmerizes-argentines
https://www.youtube.com/watch?v=oAanpXjQpN8 [10:05:00 Fascinating. Audio in Spanish. --JE]
Infrared contact lens helps people see in the dark, even with their eyes closed:
Researchers have developed a contact lens that can convert infrared light, which is normally invisible to our eyes, into visible light.
Because infrared light can pass through our eyelids, study participants wearing the contact lenses could see with their eyes shut.
The contact lenses can only give the wearer blurry infrared "sight", but the researchers say they're working on increasing resolution for uses like night vision.
Many people have wished for night vision on a dark walk home. But have you ever wondered if it's possible to see with your eyes closed?
Both are feasible with a contact lens that allows the wearer to see light that's usually invisible to our eyes — and can pass through our eyelids.
The infrared lens, which was developed by researchers in China, was unveiled in the journal Cell today.
Tian Xue, a neuroscientist at the University of Science and Technology of China and study co-author, said the material had the potential to give people "super-vision".
But in the shorter term, the team's ambitions are more modest.
"Flickering infrared light could be used to transmit information in security, rescue, encryption or anti-counterfeiting settings," Professor Xue said in a press release.
Our eye cells only register light in a small proportion of the electromagnetic spectrum.
If we could see longer wavelengths — just outside the visible spectrum into the near-infrared — we'd be able to see humans and other warm-blooded animals "glow" faintly as they emit infrared light.
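As a rough sense of scale, a back-of-the-envelope calculation (not from the article) using Wien's displacement law shows where that warm-body "glow" actually peaks:

```python
# Illustrative sketch, not from the article: Wien's displacement law,
# lambda_peak = b / T, gives the wavelength at which a warm body's
# thermal emission is strongest.
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Peak thermal-emission wavelength of a body at temp_kelvin, in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

# Human skin at roughly 310 K peaks near 9.3 micrometres -- in the far
# infrared, well beyond both visible light and the near-infrared band.
print(round(peak_wavelength_um(310), 1))  # 9.3
```

This is part of why near-infrared devices typically rely on reflected or actively supplied illumination rather than body heat alone.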
Devices like night-vision goggles often work by tuning into near-infrared wavelengths, sometimes accompanied by an infrared light source to illuminate the surrounding area.
But these devices usually need an external power source to work, making them bulky.
They also tend to have a very limited field of view, according to Paul Martin, a researcher in ophthalmology at the University of Sydney.
"One helicopter pilot, who has used them for night-time missions, has told me it is like staring through toilet paper rolls to find what you are looking for."
[...] While it's possible to buy "infrared" contact lenses online, typically marketed for cheating at card games, these lenses don't allow users to see infrared light.
Instead, Professor Martin said they filter out higher wavelengths of light to make it easier to see light at a desired wavelength — usually, one tuned to an invisible ink sold with the contact lenses.
Researchers around the world, including in Australia, have been working on less cumbersome materials that can perform "wavelength shifting": absorbing invisible infrared light and re-emitting it as light we can see.
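A minimal sketch of the arithmetic behind wavelength shifting, assuming an idealized two-photon upconversion process (the 980 nm input is a common ytterbium absorption wavelength used for illustration, not a figure from this study):

```python
# Illustrative, idealized model: in two-photon upconversion the emitted
# photon carries the combined energy of the two absorbed photons.
# Photon energy is proportional to 1/wavelength, so:
#   1/lambda_out = 1/lambda_in1 + 1/lambda_in2

def upconverted_wavelength_nm(lambda1_nm: float, lambda2_nm: float) -> float:
    """Wavelength (nm) of one photon carrying the summed energy of two photons."""
    return 1.0 / (1.0 / lambda1_nm + 1.0 / lambda2_nm)

# Two 980 nm near-infrared photons combine into roughly one 490 nm
# photon, shifting invisible light into the visible blue-green.
print(upconverted_wavelength_nm(980.0, 980.0))
```

The output wavelength is always shorter (bluer) than either input, which is what lets an infrared-absorbing material re-emit in the visible range.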
The researchers behind the new study had previously developed particles roughly the size of a small virus by mixing gold atoms with a few other elements, including the metals ytterbium and erbium.
The team injected these particles into the eyes of mice and found it gave them infrared vision. But they wanted to make the process less invasive before testing it on humans.
In the newest study, the researchers mixed their nanoparticles with polymers used in commercial contact lenses, and moulded this mixture into contacts.
They found people wearing the contact lenses could see visible light as normal. But they could also see a flashing infrared light — even when their eyes were shut.
Our eyelids have evolved to block visible light, but infrared light can pass right through them.
In fact, Professor Xue said participants were better at detecting the infrared flashes when their eyes were shut, because there was less interference from visible light.
The researchers could tweak their nanoparticles to convert specific infrared wavelengths into specific visible wavelengths, so the participants could see different shades of infrared light in different visible colours.
They tested this by showing the study participants different letters made from infrared light, which the participants could read in different colours.
Professor Martin, who was not involved with the research, called the study a "marvellous technical tour de force".
"One big and exciting promise of the present study is that the contact lenses or glasses could become a new basis for human-worn surveillance devices."
While the research is promising, Professor Martin believes these contact lenses are a long way away from practical use.
People using the lenses could see infrared light, but they weren't granted fine night vision.
"The contact lenses, because they are on the surface of the eye, would allow at best a very blurry image, like opening your eyes underwater."
Journal References:
Near-infrared spatiotemporal color vision in humans enabled by upconversion contact lenses
Enhanced Infrared Vision by Nonlinear Up-Conversion in Nonlocal Metasurfaces