https://blog.infected.systems/posts/2025-04-21-this-blog-is-hosted-on-a-nintendo-wii/
For a long time, I've enjoyed the idea of running general-purpose operating systems on decidedly not-general-purpose hardware.
There have been a few good examples of this over the years, including some that were officially sanctioned by the OEM. Back in the day, my PS3 ran Yellow Dog Linux, and I've been searching for a (decently priced) copy of PS2 Linux for 10+ years at this point.
There are some other good unofficial examples, such as Dreamcast Linux, or PSPLinux.
But what a lot of these systems have in common is that they're now very outdated, or they're hobbyist ports that someone got running once, with longer-term support that never made it upstream. The PSP Linux kernel image was last built in 2008, and Dreamcast Linux is even more retro, using a 2.4.5 kernel built in 2001.
I haven't seen many of these projects where I'd be comfortable running one as part of an actual production workload. Until now.
While browsing the NetBSD website recently, I noticed a 'Wii' option listed right there on the front page in the 'Install Media' section, nestled next to the other first-class targets like the Raspberry Pi and generic x86 machines.
Unlike the other outdated and unmaintained examples above, clicking through to the NetBSD Wii port takes you to the latest stable NetBSD 10.1 release from Dec 2024. Even the daily HEAD builds are composed for the Wii.
As soon as I discovered this was fully supported and maintained, I knew I had to try deploying an actual production workload on it. That workload is the blog you're reading now.
Researchers explored a new range of colors using a different kind of laser vision. Direct laser stimulation of individual photoreceptors:
These experiments confirmed that the prototype successfully displays a range of hues in Oz: e.g., from orange to yellow to green to blue-green with a 543-nm stimulating laser that ordinarily looks green. Further, color matching confirms that our attempt at stimulating only M cones displays a color that lies beyond the natural human gamut. We name this new color "olo," with the ideal version of olo defined as pure M activation. Subjects report that olo in our prototype system appears blue-green of unprecedented saturation, when viewed relative to a neutral gray background. Subjects find that they must desaturate olo by adding white light before they can achieve a color match with the closest monochromatic light, which lies on the boundary of the gamut, unequivocal proof that olo lies beyond the gamut.
Related WKRC interview article:
"We predicted from the beginning that it would look like an unprecedented color signal but we didn't know what the brain would do with it," said Ren Ng, an electrical engineer that worked on the study. "It was jaw-dropping. It's incredibly saturated." Komo article: All five researchers who witnessed the new color, which they named "olo," described it as a blue-green, but said that words cannot do it justice. "There is no way to convey that color in an article or on a monitor," said vision scientist Austin Roorda. "The whole point is that this is not the color we see, it's just not. The color we see is a version of it, but it absolutely pale by comparison with the experience of olo."
Arthur T Knackerbracket has processed the following story:
With the release of its latest models earlier in the week, OpenAI seems to have inadvertently tuned ChatGPT to become a potent geo-guesser. The newly available o3 and o4-mini are so good at this ‘reverse location search’ task that showing off this newfound functionality has become a viral social media trend, notes TechCrunch. However, this apparent geographic needle-in-a-haystack hunting improvement raises privacy concerns. And pro geo-guessers on social media platforms might be a little worried too.
This newfound ability of ChatGPT is a great example of the strengthened visual reasoning being brought to the platform with model updates. It can now reason based on the content of uploaded images and perform some Photoshop-esque tasks like cropping, rotating, and zooming in.
As per the source report, there are plenty of examples of users of this famous AI chatbot now using it to drill down on the location of various images. A popular jape is to ask ChatGPT to imagine it is playing the online GeoGuessr game and provide the answer based on supplied imagery. [...]
As Jowett points out, the newly popular ChatGPT ‘reverse image search’ functionality has privacy implications, and raises particular concerns with regard to doxing. Doxing is publicly sharing someone’s private information, particularly location / residence, on the broad internet. People are commonly doxed with malicious intent, with the perpetrator hoping to direct loonies and cranks to visit upon the victim(s).
Interestingly, TechCrunch notes that this 'GeoGuessr' ability isn't new to ChatGPT with the release of o3 and o4-mini; it is just the trend and awareness that have ballooned. o3 is said to be particularly good at reverse location search, but GPT-4o, an older model released without image reasoning, can sometimes outpace o3, delivering the same correct answer "more often than not," says TechCrunch.
Arthur T Knackerbracket has processed the following story:
The US tariff war with China, along with changes in duties, means that products from China will be more expensive. Unfortunately, it looks like a major gaming handheld company has decided to stop shipping devices to the US for now.
Anbernic announced on its website that it has suspended shipments to the US from China, citing the US tariff changes:
We’re glad to see that customers can still buy products from the US warehouse, but this stockpile isn’t going to last forever. So if you’re in the market for an Anbernic gaming handheld, you should take up this option sooner rather than later.
The company also noted that it would publish a revised shipping policy as soon as it receives verified information regarding import duties.
This news comes after the US escalated tariffs against China. However, it also follows an end to de minimis exceptions on products coming from China. The latter is effectively a death blow for cheap handhelds as they’ll be subjected to minimum duties of $75 from May, escalating to $150 in June. It’s worth noting that Anbernic has handhelds that ordinarily start from under $50.
Anbernic isn’t the only handheld company experiencing issues due to US tariffs and duties. Retroid recently announced that US customers buying the Teal, Kiwi, and Berry variants of the Retroid Pocket Classic handheld will face indefinite delays. The company urged these customers to contact customer support in order to switch to any other color of the handheld.
Arthur T Knackerbracket has processed the following story:
Throughout the Yangtze River Delta, a region in eastern China famed for its widespread rice production, farmers grow belts of slender green stalks. Before they reach several feet tall and turn golden brown, the grassy plants soak in muddy, waterlogged fields for months. Along the rows of submerged plants, levees store and distribute a steady supply of water that farmers source from nearby canals.
This traditional practice of flooding paddies to raise the notoriously thirsty crop is almost as old as the ancient grain’s domestication. Thousands of years later, the agricultural method continues to predominate in rice cultivation practices from the low-lying fields of Arkansas to the sprawling terraces of Vietnam.
As the planet heats up, this popular process of growing rice is becoming increasingly dangerous for the millions of people worldwide who eat the grain regularly, according to research published Wednesday in the journal Lancet Planetary Health. After drinking water, the researchers say, rice is the world's second largest dietary source of inorganic arsenic, and climate change appears to be increasing the amount of the highly toxic chemical that is in it. If nothing is done to transform how most of the world's rice is produced, regulate how much of it people consume, or mitigate warming, the authors conclude that communities with rice-heavy diets could begin confronting increased risks of cancer and disease as soon as 2050.
"Our results are very scary," said Donming Wang, a doctoral student in ecology at the Institute of Soil Science, Chinese Academy of Sciences, who led the paper. "It's a disaster … and a wake-up call."
[...] After nearly a decade of observing and analyzing the growth of the plants, the researchers discovered that the combination of higher temperatures and CO2 encourages root growth, increasing the ability of rice plants to take up arsenic from the soil. They believe this is because climate-related changes in soil chemistry favor forms of arsenic that can be more easily absorbed into the grain. Carbon-dioxide enriched crops were found to capture more atmospheric carbon and pump some of it into the soil, stimulating the microbes that make arsenic available.
The more root growth, the more carbon in the soil, which can be a source of food for soil bacteria that multiply under warming temperatures. When soil in a rice paddy is waterlogged, oxygen gets depleted, causing the soil bacteria to rely further on arsenic to generate energy. The end result is more arsenic building up in the rice paddy, and more roots to take it up to the developing grain.
These arsenic-accumulating effects linked to increased root growth and carbon capture are a paradoxical surprise to Corey Lesk, a Dartmouth College postdoctoral climate and crop researcher unaffiliated with the paper. The paradox, said Lesk, is that both of these outcomes have been talked about as potential benefits to rice yields under climate change. "More roots could make the rice more drought-resistant, and cheaper carbon can boost yields generally," he said. "But the extra arsenic accumulation could make it hard to realize health benefits from that yield boost."
[...] Beyond mitigating global greenhouse gas emissions — what Ziska calls “waving my rainbows, unicorns, and sprinkles wand” — adaptation efforts to avoid a future with toxic rice include rice paddy farmers planting earlier in the season to avoid seeds developing under warmer temperatures, better soil management, and plant breeding to minimize rice’s propensity to accumulate so much arsenic.
Water-saving irrigation techniques such as alternate wetting and drying, where paddy fields are first flooded and then allowed to dry in a cycle, could also be used to reduce these increasing health risks and the grain’s enormous methane footprint. On a global scale, rice production accounts for roughly 8 percent of all methane emissions from human activity — flooded paddy fields are ideal conditions for methane-emitting bacteria.
“This is an area that I know is not sexy, that doesn’t have the same vibe as the end of the world, rising sea levels, category 10 storms,” said Ziska. “But I will tell you quite honestly that it will have the greatest effect in terms of humanity, because we all eat.”
The stated aim is to promote better security by encouraging automation of certificate renewal, and this is the narrative promoted by vendors who will, coincidentally, benefit mightily from increased certificate and services sales.
The story was picked up by most of the usual tech channels, such as Computerworld, which has a decent summary of the likely consequences. Here is an excerpt from the press release of one vendor, Sectigo:
https://www.sectigo.com/resource-library/sectigo-cab-reduce-ssl-tls-certificates-lifespan-47-days
Scottsdale, AZ — April 14, 2025 — Sectigo, a global leader in digital certificates and automated Certificate Lifecycle Management (CLM), today announced that the CA/Browser (CA/B) Forum ballot it endorsed to reduce the maximum validity term of SSL/TLS certificates to 47 days by 2029 has passed. This groundbreaking move to shorten digital certificate lifespans seeks to enhance online security, drive automation in certificate management, and ready systems for quantum computing challenges by improving crypto agility.
The newly approved measure, initially proposed by Apple and endorsed by Sectigo in January 2025, will gradually reduce certificate lifespans from the current 398 days to 47 days through a phased approach:
March 15, 2026: Maximum TLS certificate lifespan shrinks to 200 days. This accommodates a six-month renewal cadence. The Domain Control Validation (DCV) reuse period reduces to 200 days.
March 15, 2027: Maximum TLS certificate lifespan shrinks to 100 days. This accommodates a three-month renewal cadence. The DCV reuse period reduces to 100 days.
March 15, 2029: Maximum TLS certificate lifespan shrinks to 47 days. This accommodates a one-month renewal cadence. The DCV reuse period reduces to 10 days.
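At a 47-day maximum lifetime, monthly renewal effectively mandates automation. As a minimal sketch of the client-side logic involved (the function name and the 15-day threshold are illustrative assumptions, not from the ballot; real ACME clients such as certbot apply a comparable renew-when-close-to-expiry rule):

```python
from datetime import datetime, timedelta

def should_renew(not_after: datetime, now: datetime,
                 threshold_days: int = 15) -> bool:
    """Return True once fewer than `threshold_days` of validity remain."""
    return not_after - now <= timedelta(days=threshold_days)

# A 47-day certificate issued on 2029-03-15 expires 2029-05-01.
issued = datetime(2029, 3, 15)
expires = issued + timedelta(days=47)
print(should_renew(expires, issued))                       # fresh cert: no
print(should_renew(expires, issued + timedelta(days=40)))  # 7 days left: yes
```

A scheduler (cron, a systemd timer, or a cloud job) would run a check like this daily and trigger reissuance when it returns true; at a 398-day lifetime the same check could be run by hand, which is exactly the practice the new rules make untenable.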
"At Sectigo we have long advocated for shorter certificate lifecycles as a crucial step in bolstering internet security, which is why we endorsed this ballot from its inception," said Kevin Weiss, chief executive officer at Sectigo. "This collaborative initiative passed by the CA/Browser Forum not only showcases the industry's unified commitment to enhance digital trust for all but also empowers customers to be at the leading edge of preparing for a quantum future."
Tesla Accused Of Speeding Up Odometers So Their Warranties Expire Faster:
A Tesla owner in California is seeking a class-action lawsuit on behalf of all other Tesla owners in the state after he says the company has been systematically altering odometers so their warranties expire faster.
Lead plaintiff Nyree Hinton said he bought a used Model Y in December 2022 with 36,772 miles on it.
But after several visits to Tesla for repairs completed under warranty, he said, he began to notice odd quirks with the odometer, which regularly overestimated his mileage by at least 15% but sometimes as much as 117%.
From March 2023 to June 2023, for instance, Hinton said, his car logged 72.35 miles per day despite him having a consistent driving routine of just 20 miles per day.
After the vehicle's 50,000-mile basic warranty expired in July 2023, Hinton said, the odometer then began to underreport his daily usage. In April 2024, the lawsuit alleges, the Model Y reported around 50 average daily miles, despite Hinton driving a 100-mile commute two to three days a week.
The lawsuit points to similar tales shared by other Tesla owners online as the basis for class-action status.
According to the lawsuit, Tesla's odometer system isn't physically linked to the number of miles the vehicle has traveled, instead relying on data like energy consumption, driving behavior and predictive algorithms to estimate distance traveled.
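To see why an estimated odometer could drift the way the suit describes, consider a purely hypothetical toy model (this is NOT Tesla's actual algorithm, whose details are not public; all numbers here are invented for illustration): if distance is inferred from energy consumption, any error in the assumed efficiency shows up directly as odometer error.

```python
# Toy illustration (hypothetical -- not Tesla's actual algorithm) of the
# lawsuit's core claim: an odometer that *estimates* distance from energy
# use will drift whenever its assumed efficiency is wrong.
def estimated_miles(energy_kwh: float, assumed_kwh_per_mile: float) -> float:
    """Infer distance from energy consumed and an assumed efficiency."""
    return energy_kwh / assumed_kwh_per_mile

true_miles = 100.0
actual_kwh_per_mile = 0.30                       # what the car really used
energy_used = true_miles * actual_kwh_per_mile   # 30 kWh consumed

# If the estimator assumes the car is more efficient than it is,
# the "odometer" reads high:
reading = estimated_miles(energy_used, assumed_kwh_per_mile=0.25)
print(round(reading))   # 120 miles shown for 100 miles driven: +20%
```

The same mechanism run in reverse (assumed efficiency worse than actual) would underreport mileage, matching the post-warranty behavior the plaintiff alleges.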
"By tying warranty limits and lease mileage caps to inflated 'odometer' readings, Tesla increases repair revenue, reduces warranty obligations, and compels consumers to purchase extended warranties prematurely," the suit said.
Odometer fraud constitutes a federal crime, with cumulative penalties that can be applied for every instance of odometer tampering.
Tesla didn't respond to a request for comment.
Oldest serving US astronaut returns to Earth on 70th birthday:
America's oldest serving astronaut Don Pettit has returned to Earth on his 70th birthday.
The Soyuz MS-26 space capsule carrying Pettit and his Russian crewmates Alexey Ovchinin and Ivan Vagner made a parachute-assisted landing in Kazakhstan's steppe at 06:20 local time (01:20 GMT) on Sunday.
They spent 220 days on board the International Space Station (ISS), orbiting the Earth 3,520 times, the US space agency Nasa said.
For Pettit - who has now spent a total of 590 days in space - it was his fourth mission.
Still, he is not the oldest person to fly in orbit - that record belongs to John Glenn, who aged 77 flew on a Nasa mission in 1998. He died in 2016.
Pettit and the two Russian cosmonauts will now spend some time readjusting to gravity.
After that, Pettit - who was born in Oregon on 20 April 1955 - will be flown to Houston in Texas, while Ovchinin and Vagner will go to Russia's main space training base in Zvyozdniy Gorodok (Star City) near Moscow.
Before their departure from the ISS, the crew handed command of the spaceship to Japanese astronaut Takuya Onishi.
Last month, two Nasa astronauts, Butch Wilmore and Suni Williams, finally returned to Earth after spending more than nine months on board the ISS - instead of the initially planned just eight days.
They flew to the ISS in June 2024 - but technical issues with the spacecraft they used to get to the space station meant they were only able to return to Earth on 18 March this year.
Arthur T Knackerbracket has processed the following story:
by Tokyo Metropolitan University
Researchers from Tokyo Metropolitan University have found that the motion of unlabeled cells can be used to tell whether they are cancerous or healthy. They observed malignant fibrosarcoma cells and healthy fibroblasts on a dish and found that tracking and analysis of their paths can be used to differentiate them with up to 94% accuracy.
Beyond diagnosis, their technique may also shed light on cell motility-related functions, like tissue healing. The paper is published in the journal PLOS ONE.
While scientists and medical experts have been looking at cells under the microscope for many centuries, most studies and diagnoses focus on their shape, what they contain, and where different parts are located inside. But cells are dynamic, changing over time, and are known to be able to move.
By accurately tracking and analyzing their motion, we may be able to differentiate cells which have functions relying on cell migration. An important example is cancer metastasis, where the motility of cancerous cells allows them to spread.
However, this is easier said than done. For one, studying a small subset of cells can give biased results. Any accurate diagnostic technique would rely on automated, high-throughput tracking of a significant number of cells.
Many methods then turn to fluorescent labeling, which makes cells much easier to see under the microscope. But this labeling procedure can itself affect their properties. The ultimate goal is an automated method which uses label-free conventional microscopy to characterize cell motility and show whether cells are healthy or not.
Now, a team of researchers from Tokyo Metropolitan University led by Professor Hiromi Miyoshi have come up with a way of tracking cells using phase-contrast microscopy, one of the most common ways of observing cells.
Phase-contrast microscopy is entirely label-free, allowing cells to move about on a petri dish closer to their native state, and is not affected by the optical properties of the plastic petri dishes through which cells are imaged.
Through innovative image analysis, they were able to extract the trajectories of many individual cells. They focused on properties of the paths taken, like migration speed, and how curvy the paths were, all of which would encode subtle differences in deformation and movement.
As a test, they compared healthy fibroblast cells, the key component of animal tissue, and malignant fibrosarcoma cells, cancerous cells which derive from fibrous connective tissue. They were able to show that the cells migrated in subtly different ways, as characterized by the "sum of turn angles" (how curvy the paths were), the frequency of shallow turns, and how quickly they moved.
In fact, by combining both the sum of turn angles and how often they made shallow turns, they could predict whether a cell was cancerous or not with an accuracy of 94%.
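The path statistics described above are straightforward to compute from tracked (x, y) positions. The following sketch shows one plausible way to do so; the exact definitions and thresholds the paper uses are not reproduced here, and the 20-degree cutoff for a "shallow" turn is an illustrative assumption:

```python
import math

def turn_angles(path):
    """Signed turn angle (radians) at each interior point of an (x, y) path."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(path, path[1:], path[2:]):
        h0 = math.atan2(y1 - y0, x1 - x0)   # heading of first segment
        h1 = math.atan2(y2 - y1, x2 - x1)   # heading of second segment
        d = (h1 - h0 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        angles.append(d)
    return angles

def path_stats(path, shallow_deg=20.0):
    """Return (sum of |turn angles|, fraction of turns shallower than cutoff)."""
    a = turn_angles(path)
    total = sum(abs(t) for t in a)
    shallow = sum(1 for t in a if abs(math.degrees(t)) < shallow_deg)
    return total, (shallow / len(a) if a else 0.0)

# A perfectly straight trajectory: zero total turning, all turns "shallow".
straight = [(float(i), 0.0) for i in range(5)]
print(path_stats(straight))
```

A classifier would then threshold (or combine) these two features per cell, which is the kind of two-feature discrimination the team reports reaching 94% accuracy with.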
The team's work not only promises a new way to discriminate cancer cells, but applications to research of any biological function based on cell motility, like the healing of wounds and tissue growth.
Provided by Tokyo Metropolitan University
More information: Sota Endo et al, Development of label-free cell tracking for discrimination of the heterogeneous mesenchymal migration, PLOS ONE (2025). DOI: 10.1371/journal.pone.0320287
Jim Zemlin on taking a 'portfolio approach' to Linux Foundation projects:
The Linux Foundation has become something of a misnomer through the years. It has extended far beyond its roots as the steward of the Linux kernel, emerging as a sprawling umbrella outfit for a thousand open source projects spanning cloud infrastructure, security, digital wallets, enterprise search, fintech, maps, and more.
Last month, the OpenInfra Foundation — best known for OpenStack — became the latest addition to its stable, further cementing the Linux Foundation's status as a "foundation of foundations."
The Linux Foundation emerged in 2007 from the amalgamation of two Linux-focused not-for-profits: the Open Source Development Labs (OSDL) and the Free Standards Group (FSG). With founding members such as IBM, Intel, and Oracle, the Foundation's raison d'être was challenging the "closed" platforms of that time — which basically meant doubling down on Linux in response to Windows' domination.
[...] Zemlin has led the charge at the Linux Foundation for some two decades, overseeing its transition through technological waves such as mobile, cloud, and — more recently — artificial intelligence. Its evolution from Linux-centricity to covering just about every technological nook is reflective of how technology itself doesn't stand still — it evolves and, more importantly, it intersects.
"Technology goes up and down — we're not using iPods or floppy disks anymore," Zemlin explained to TechCrunch in an interview during KubeCon in London last week. "What I realized early on was that if the Linux Foundation were to become an enduring body for collective software development, we needed to be able to bet on many different forms of technology."
This is what Zemlin refers to as a "portfolio approach," similar to how a company diversifies so it's not dependent on the success of a single product. Combining multiple critical projects under a single organization enables the Foundation to benefit from vertical-specific expertise in networking or automotive-grade Linux, for example, while tapping broader expertise in copyright, patents, data privacy, cybersecurity, marketing, and event organization.
Being able to pool such resources across projects is more important than ever, as businesses contend with a growing array of regulations such as the EU AI Act and Cyber Resilience Act. Rather than each individual project having to fight the good fight alone, they have the support of a corporate-like foundation backed by some of the world's biggest companies.
"At the Linux Foundation, we have specialists who work in vertical industry efforts, but they're not lawyers or copyright experts or patent experts. They're also not experts in running large-scale events, or in developer training," Zemlin said. "And so that's why the collective investment is important. We can create technology in an agile way through technical leadership at the project level, but then across all the projects have a set of tools that create long-term sustainability for all of them collectively."
[...] While AI is inarguably a major step-change both for the technology realm and society, it has also pushed the concept of "open source" into the mainstream arena in ways that traditional software hasn't — with controversy in hot pursuit.
Meta, for instance, has positioned its Llama brand of AI models as open source, even though they decidedly are not by most estimations. This has also highlighted some of the challenges of creating a definition of open source AI that everyone is happy with, and we're now seeing AI models with a spectrum of "openness" in terms of access to code, datasets, and commercial restrictions.
The Linux Foundation, already home to the LF AI & Data Foundation, which houses some 75 projects, last year published the Model Openness Framework (MOF), designed to bring a more nuanced approach to the definition of open source AI. The Open Source Initiative (OSI), stewards of the "open source definition," used this framework in its own open source AI definition.
"Most models lack the necessary components for full understanding, auditing, and reproducibility, and some model producers use restrictive licenses whilst claiming that their models are 'open source,'" the MOF paper authors wrote at the time.
And so the MOF offers a three-tiered classification system that rates models on their "completeness and openness" with regard to code, data, model parameters, and documentation.
It's basically a handy way to establish how "open" a model really is by assessing which components are public, and under what licenses. Just because a model isn't strictly "open source" by one definition doesn't mean that it isn't open enough to help develop safety tools that reduce hallucinations, for example — and Zemlin says it's important to address these distinctions.
"I talk to a lot of people in the AI community, and it's a much broader set of technology practitioners [compared to traditional software engineering]," Zemlin said. "What they tell me is that they understand the importance of open source meaning 'something' and the importance of open source as a definition. Where they get frustrated is being a little too pedantic at every layer. What they want is predictability and transparency and understanding of what they're actually getting and using."
Chinese AI darling DeepSeek has also played a big part in the open source AI conversation, emerging with performant, efficient open source models that upended how the incumbent proprietary players such as OpenAI plan to release their own models in the future.
But all this, according to Zemlin, is just another "moment" for open source.
"I think it's good that people recognize just how valuable open source is in developing any modern technology," he said. "But open source has these moments — Linux was a moment for open source, where the open source community could produce a better operating system for cloud computing and enterprise computing and telecommunications than the biggest proprietary software company in the world. AI is having that moment right now, and DeepSeek is a big part of that."
[...] But however its vast array of projects came to fruition, there's no ignoring the elephant in the room: The Linux Foundation is no longer all about Linux, and it hasn't been for a long time. So should we ever expect a rebrand into something a little more prosaic, but encompassing — like the Open Technology Foundation?
Don't hold your breath.
"When I wear Linux Foundation swag into a coffee shop, somebody will often say, 'I love Linux' or 'I used Linux in college,'" Zemlin said. "It's a powerful household brand, and it's pretty hard to move away from that. Linux itself is such a positive idea, it's so emblematic of truly impactful and successful 'open source.'"
Glyn Moody, writing for the techdirt.com site, has a good summary of the status of DeepSeek, with an added bonus of lots of links to his source material. It's a longish article, covering a lot of ground, including the use of AI for social surveillance by the Chinese government.
" It's just three months since the Chinese company DeepSeek released its R1 reasoning model. In that time, its global impact has been dramatic: An article in Fortune magazine described how DeepSeek had "erased Silicon Valley's AI lead and wiped $1 trillion from U.S. markets." It also immediately raised serious privacy issues in the EU. The speed at which DeepSeek-R1 was built, and the relatively low cost of doing so, has had another dramatic knock-on effect, shifting interest and investment away from closed development towards open source AI models. As Wired wrote when DeepSeek-R1 appeared:
However DeepSeek's models were built, they appear to show that a less closed approach to developing AI is gaining momentum. In December, Clem Delangue, the CEO of HuggingFace, a platform that hosts artificial intelligence models, predicted that a Chinese company would take the lead in AI because of the speed of innovation happening in open source models, which China has largely embraced. "This went faster than I thought," he says.
The open source nature of DeepSeek-R1 means that it is cheaper and easier to use it as the basis of other AI services and products, than to start from scratch. That's precisely what is happening in China, with the additional twist that many companies are doing so from a patriotic pride that Chinese computer technology has caught up with Silicon Valley."
The prevailing consensus in astrophysics is that the universe has spent the past 13-or-so billion years expanding outward in all directions, ever since the Big Bang. It's expanding at this very moment, and will continue to do so until... a number of possible theoretical endings. Meanwhile, the specific rate at which the universe is growing remains a longstanding point of contention known as the "Hubble tension." However, there may be a way to finally ease that tension—you just need to put a slight spin on everything.
In simplest terms, the rate at which the universe expands on paper doesn't match actual astronomical observations. That speed—called the Hubble Constant—is measured in units of kilometers per second per megaparsec (km/s/Mpc), with a megaparsec measuring about 3.26 million light-years. The most widely accepted theoretical model, the Lambda Cold Dark Matter model (ΛCDM), says the universe is growing at 67-68 km/s/Mpc. But what astronomers see through their equipment is a little faster, at about 73 km/s/Mpc. And therein lies the Hubble tension.
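Concretely, Hubble's law says a galaxy's recession velocity is v = H0 × d. A quick calculation at a sample distance shows how large the gap between the two H0 values is (the 100 Mpc distance is purely illustrative):

```python
# Hubble's law: recession velocity v = H0 * d. The "tension" is the gap
# between the model-predicted and locally measured values of H0.
H0_MODEL = 67.5      # km/s/Mpc, roughly the Lambda-CDM prediction
H0_OBSERVED = 73.0   # km/s/Mpc, roughly the observed value

def recession_velocity(distance_mpc: float, h0: float) -> float:
    """Recession velocity in km/s for a galaxy at the given distance."""
    return h0 * distance_mpc

d = 100.0  # Mpc, an illustrative distance
print(recession_velocity(d, H0_MODEL))     # 6750.0 km/s predicted
print(recession_velocity(d, H0_OBSERVED))  # 7300.0 km/s observed
```

That difference of roughly 550 km/s at only 100 Mpc, well outside the error bars of modern measurements, is what any proposed resolution (such as the rotation model below) has to absorb.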
In a study published in the April issue of the Monthly Notices of the Royal Astronomical Society, a team of researchers including experts at the University of Hawai'i's Institute for Astronomy argue that introducing a minuscule amount of rotation to the standard mathematical model of the universe may provide a way to reconcile the two expansion rates.
"Much to our surprise, we found that our model with rotation resolves the paradox without contradicting current astronomical measurements," study co-author and astrophysicist István Szapudi said in a statement. "Even better, it is compatible with other models that assume rotation."
[...] Looking ahead, astronomers hope to construct a full computer model of the universe based in part on their new theory. From there, they will hopefully be able to pinpoint signs of cosmic spin to search for among the stars.
[Source] Popular Science
Today at Google Cloud Next, the tech giant introduced the full-stack AI workspace Firebase Studio.
Devs and non-devs can use the cloud-based, Gemini-powered agentic development platform to build, launch, iterate on and monitor mobile and web apps, APIs, backends and frontends directly from their browsers. It is now available in preview to all users (you must have a Google account).
Firebase Studio combines Google's coding tools Genkit and Project IDX with specialized AI agents and Gemini assistance. It is built on the popular Code OSS project, making it look and feel familiar to many.
Users just need to open their browser to build an app in minutes, importing from existing repositories such as GitHub, GitLab, Bitbucket or a local machine. The platform supports languages including Java, .NET, Node.js, Go and Python, and frameworks like Next.js, React, Angular, Vue.js, Android, Flutter and others.
Users can choose from more than 60 pre-built templates or use a prototyping agent that helps design an app (including UI, AI flows and API schema) through natural language, screenshots, mockups, and drawing tools—without the need for coding. The app can then be directly deployed to Firebase App Hosting, Cloud Run, or custom infrastructure.
Apps can be monitored in a Firebase console and refined and expanded in a coding workspace with a single click. Apps can be previewed right in the browser, and Firebase Studio features built-in runtime services and tools for emulation, testing, refactoring, debugging and code documentation.
Google says the platform greatly simplifies coding workflows. Gemini helps users write code and documentation, fix bugs, manage and resolve dependencies, write and run unit tests, and work with Docker containers, among other tasks. Users can customize and evolve different aspects of their apps, including model inference, agents, retrieval-augmented generation (RAG), UX, business logic and others.
Google is also now granting early access to Gemini Code Assist agents in Firebase Studio for those in the Google Developer Program. For instance, a migration agent can help move code; a testing agent can simulate user interactions or run adversarial scenarios against AI models to identify and fix potentially dangerous outputs; and a code documentation agent can allow users to talk to code.
https://www.vaticannews.va/en/pope/news/2025-04/pope-francis-dies-on-easter-monday-aged-88.html
Pope Francis died on Easter Monday, April 21, 2025, at the age of 88 at his residence in the Vatican's Casa Santa Marta.