Arthur T Knackerbracket has processed the following story:
"In the cyber world, there's no such thing as a ceasefire," he told The Register.
Bolukbas is chief technology officer and founder of Black Kite, a cyber-risk intelligence firm that assesses businesses' third-party supplier risks. His company also shares and receives threat intel with and from the US National Security Agency (NSA), as do other private security firms.
Prior to founding Black Kite in 2016, Bolukbas worked for NATO as a part of its counter cyberterrorism task force, helping member and partner countries harden their network defenses by simulating offensive cyber attacks against government agencies.
His final mission with NATO involved red-teaming a critical power grid in Kyiv, Ukraine. Most of the facility's systems were airgapped, isolated from external networks, which made them more difficult to break into.
"It wasn't easy to target, so I said, 'OK, let me find the suppliers for this organization'," Bolukbas recalled. "I found 20 of them, picked one that would be the easiest to find and target, and used that to access the grid control panel, literally one command away from taking down the grid."
Shortly after, in 2015, Russia's Sandworm did shut off part of Ukraine's electricity grid, leaving tens of thousands of Ukrainian residents without power for several hours.
Ten years later, Bolukbas says he's worried about one of Iran's cyber-arms doing something similar to Israeli or American critical infrastructure in retaliation for the air strikes earlier this month.
"My belief is that they're going to go after the supply chain, because that's our weak spot," Bolukbas said, adding that while it's really difficult to breach the Pentagon's networks directly, Iran is "going to go after the supply chains of Israel and US Department of Defense suppliers."
He pointed to Russia compromising Western logistics firms and tech companies, including email providers, as a means of collecting valuable intel about Ukrainian targets and military strategy in that ongoing conflict. Russian cyberspies also breached internet-connected cameras at Ukrainian border crossings to track aid shipments, and targeted at least one provider of industrial control system (ICS) components for railway management, according to a joint government advisory issued last month.
Similarly, smart TVs and other home IoT devices can be easily compromised and used to build a botnet for distributed denial of service attacks, or a massive network of connected boxes to route traffic and launch cyberattacks against high-value targets.
"It's very unlikely that they can launch a sophisticated attack against the NSA, Pentagon, or those kinds of bigger organizations," Bolukbas said. "Those are outside of Iran's reach unless Russia or China backs them," which he believes is also highly unlikely.
Giving Iranian cyber operatives access to some critical American network after Russia and China did the dirty work of breaking in, or blowing a zero-day exploit to aid Iran, isn't in either of these countries' best interests, Bolukbas explained. It's more likely that Moscow and Beijing would want to save this stealthy access and/or cyber weapons, and use them at a time that will benefit their geopolitical or military goals.
"Iran is alone in this game, but they can go after the low-hanging fruit," Bolukbas said.
While "we haven't seen any ceasefire happening" in terms of Iranian cyber campaigns, especially when it comes to phishing for high-value individuals' credentials and sensitive military info, "we also do this," Bolukbas said, referring to the United States.
Case in point: Stuxnet, the malware deployed against Iran's uranium-enrichment centrifuges, was a joint American-Israeli op. "And that, of course, was during a ceasefire. We were not in a war with Iran," Bolukbas said.
"The US has the biggest cyber army, strategic or talent-wise," he added. "The NSA is known for having the biggest zero-day arsenal on the planet. We have a doctrine on something called defense forward that says if we see something in cyberspace that can disrupt us, we're going to attack it first, and we have that under US Cyber Command's mission."
And while Bolukbas doesn't expect to see the US unleash any major cyber weapons against Iran at this point in the conflict, he suspects cyber espionage, influence operations, hack-and-leaks, and poking holes in Iran's military and cyber infrastructure are all regular occurrences.
The US didn't enter the Iran-Israel war with bombs, he contended. "That was started in cyberspace a long time ago."
Bolukbas also has advice for network defenders to protect against Iranian cyber threats. "Be careful with phishing attacks," he said. "That's very common because Iran doesn't have a lot of zero days, so they go heavy on social attacks. Be careful what you're clicking on."
Second: don't believe everything you read or see, according to Bolukbas. Iran, along with Russia and China, is getting really good at using generative AI for fake news and social media posts that aim to manipulate public opinion.
"Last but not least: patch your systems, including IoT for end users and residential people," Bolukbas said. "Patch your external-facing systems quickly, not a week or 10 days or a month later, because time is ticking from the day that the vulnerability is disclosed. Iranian groups are trying to develop an exploit. If they develop the exploit before the patch, they're not going to hesitate to use that."
As AI kills search traffic, Google launches Offerwall to boost publisher revenue:
Google's AI search features are killing traffic to publishers, so now the company is proposing a possible solution. On Thursday, the tech giant officially launched Offerwall, a new tool that allows publishers to generate revenue beyond the more traffic-dependent options, like ads.
Offerwall lets publishers give their sites' readers a variety of ways to access their content, including through options like micropayments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters.
[...] Google notes that it's also using AI to determine when to display the Offerwall to each site visitor to increase engagement and revenue. However, publishers can set their own thresholds before the Offerwall is displayed, if they prefer.
Many of the solutions Offerwall introduces have been tried by publishers before, across a range of products and services. Micropayments, for instance, have repeatedly failed to take off: the economics don't tend to work, and the friction of paying per article hasn't been worth the payoff for readers or publishers, given implementation and maintenance costs.
[...] In Google's case, it's working with a third party, Supertab, which allows site visitors to pay a small amount to access the online content for a period of time — like 24 hours, a few days, a week, etc. The option (currently in beta) also supports subscription sign-ups and integrates with Google Ad Manager.
Google notes that publishers can also configure Offerwall to include their own logo and introductory text, then customize the choices it presents. One option that's enabled by default has visitors watch a short ad to earn access to the publisher's content. This is the only option that has a revenue share, and, on that front, it works the same way all Ad Manager solutions do, Google notes.
Another option has visitors click to choose from a set of topics they're interested in, which is then saved and used for ads personalization.
[...] However, early reports during the testing period said that publishers on AdSense saw an average revenue lift of 9% from rewarded ads after 1 million messages, while Google Ad Manager customers saw a 5% to 15% lift when using Offerwall. Google also confirmed to TechCrunch via email that publishers with Offerwall saw an average revenue uplift of 9% during its year-plus in testing.
If Google AI is taking all of their clicks away, it would seem the publishers are over a barrel here and don't have much choice.
Facebook is starting to feed its AI with private, unpublished photos
Always read the terms and conditions, folks:
For years, Meta trained its AI programs using the billions of public images uploaded by users onto Facebook and Instagram's servers. Now, it's also hoping to access the billions of images that users haven't uploaded to those servers. Meta tells The Verge that it's not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in future, or what rights it will hold over your camera roll images.
On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they'd like to opt into "cloud processing", which would allow Facebook to "select media from your camera roll and upload it to our cloud on a regular basis", to generate "ideas like collages, recaps, AI restyling or themes like birthdays or graduations."
By allowing this feature, the message continues, users are agreeing to Meta AI terms, which allow Meta's AI to analyze "media and facial features" of those unpublished photos, as well as the date said photos were taken, and the presence of other people or objects in them. Users further grant Meta the right to "retain and use" that personal information.
Meta's public stance is that the feature is "very early," innocuous and entirely opt-in: "We're exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person's camera roll. These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test," reads a statement from Meta comms manager Maria Cubeta.
[...] And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days' worth of your unpublished camera roll at a time, it appears that Meta is retaining some data longer than that. "Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days," Meta writes.
Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings, which, once activated, will also start removing unpublished photos from the cloud after 30 days.
Facebook is asking to use Meta AI on photos in your camera roll you haven't yet shared:
Facebook is asking users for access to their phone's camera roll to automatically suggest AI-edited versions of their photos — including ones that haven't been uploaded to Facebook yet.
The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions.
As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.
[...] The creative tool is another example of the slippery slope that comes with sharing our personal media with AI providers. Like other tech giants, Meta has grand AI ambitions. Being able to tap into the personal photos users haven't yet shared on Facebook's social network could give the company an advantage in the AI race.
Unfortunately for end users, in tech companies' rush to stay ahead, it's not always clear what they're agreeing to when features like this appear.
[...] So far, there hasn't been much backlash about this feature. A handful of Facebook users have stumbled across the AI-generated photo suggestions when creating a new story and raised questions about it. For instance, one user on Reddit found that Facebook had pulled up an old photo (in this case, one that had previously been shared to the social network) and automatically turned it into an anime using Meta AI.
When another user in an anti-AI Facebook group asked for help shutting this feature off, the search led to a section called camera roll sharing suggestions in the app's Settings.
[...] Reached for comment, Meta spokesperson Maria Cubeta confirmed the feature is a test, saying, "We're exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person's camera roll."
"These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time," she continued. "Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test."
The company is currently testing suggestions in the U.S. and Canada.
Vulnerabilities affecting a Bluetooth chipset present in more than two dozen audio devices from ten vendors can be exploited for eavesdropping or stealing sensitive information.
Researchers confirmed that 29 devices from Beyerdynamic, Bose, Sony, Marshall, Jabra, JBL, Jlab, EarisMax, MoerLabs, and Teufel are affected.
The list of impacted products includes speakers, earbuds, headphones, and wireless microphones.
The security problems could be leveraged to take over a vulnerable product and, on some phones, an attacker within connection range may be able to extract call history and contacts.
Snooping over a Bluetooth connection
At the TROOPERS security conference in Germany, researchers at cybersecurity company ERNW disclosed three vulnerabilities in Airoha systems-on-a-chip (SoCs), which are widely used in True Wireless Stereo (TWS) earbuds.
The issues are not critical, and besides close physical proximity (Bluetooth range), their exploitation also requires "a high technical skill set." They received the following identifiers:
CVE-2025-20700 (6.7, medium severity score) - missing authentication for GATT services
CVE-2025-20701 (6.7, medium severity score) - missing authentication for Bluetooth BR/EDR
CVE-2025-20702 (7.5, high severity score) - critical capabilities of a custom protocol
ERNW researchers say they created proof-of-concept exploit code that allowed them to read the currently playing media from the targeted headphones.
[...] Although the ERNW researchers present serious attack scenarios, practical implementation at scale is constrained by certain limitations.
"Yes — the idea that someone could hijack your headphones, impersonate them towards your phone, and potentially make calls or spy on you, sounds pretty alarming."
"Yes — technically, it is serious," the researchers say, adding that "real attacks are complex to perform."
The necessity of both technical sophistication and physical proximity confines these attacks to high-value targets, such as those in diplomacy, journalism, activism, or sensitive industries.
Airoha has released an updated SDK incorporating necessary mitigations, and device manufacturers have started patch development and distribution.
Nevertheless, German publication Heise says that the most recent firmware updates for more than half of the affected devices are from May 27 or earlier, which is before Airoha delivered the updated SDK to its customers.
Arthur T Knackerbracket has processed the following story:
The US Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) this week published guidance urging software developers to adopt memory-safe programming languages.
"The importance of memory safety cannot be overstated," the inter-agency report [PDF] says.
Memory safety refers to the extent to which programming languages provide ways to avoid vulnerabilities arising from the mishandling of computer memory. Languages like Rust, Go, C#, Java, Swift, Python, and JavaScript support automated memory management (garbage collection) or implement compile-time checks on memory ownership to prevent memory-based errors.
C and C++, two of the most widely used programming languages, are not memory-safe by default. And while developers can make them safer through diligent adherence to best practices and the application of static analysis tools, not everyone deploys code with that much care.
To further complicate matters, code written in nominally safe languages may still import unsafe C/C++ libraries using a Foreign Function Interface, potentially breaking memory safety guarantees.
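To make the distinction concrete, here is a minimal Rust sketch (our illustration, not code from the report). Safe Rust rejects an out-of-bounds access at run time, while an unsafe block, the same escape hatch a Foreign Function Interface call passes through, reintroduces exactly the undefined behavior the language otherwise rules out:

    fn main() {
        let data = vec![1u8, 2, 3];

        // Safe Rust: indexing is bounds-checked. Asking for element 10
        // yields None instead of reading past the end of the allocation.
        match data.get(10) {
            Some(v) => println!("value: {v}"),
            None => println!("index 10 rejected: out of bounds"),
        }

        // Unsafe Rust: the same escape hatch an FFI call into a C library
        // passes through. The compiler no longer guarantees memory safety;
        // reading past the buffer is undefined behavior, exactly as in C.
        let ptr = data.as_ptr();
        unsafe {
            let oob = *ptr.add(10); // compiles fine, but UB at run time
            println!("garbage byte: {oob}");
        }
    }

Keeping that unsafe surface small and auditable is the point of FFI-hardening efforts like the Omniglot project mentioned below.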
[...] Google and Microsoft have attributed the majority of vulnerabilities in large software projects to memory safety errors. In Google's Android operating system, for example, 90 percent of high-severity vulnerabilities in 2018 came via memory safety bugs. In 2021, the Chocolate Factory noted that more than 70 percent of serious security issues in Chromium came from memory safety flaws.
The infamous Heartbleed flaw in the OpenSSL cryptographic library was the result of a memory safety error (an out-of-bounds read) in C code. And there are many other examples, including the mid-June Google Cloud outage, which Google's incident report attributes to a lack of proper error handling for a null pointer.
Within a few years, the tech industry began answering the call for memory-safe languages. In 2022, Microsoft executives began calling for new applications to be written in memory-safe languages like Rust. By 2023, Consumer Reports – a mainstream product review publication – published a report on memory safety and government officials like Jen Easterly, CISA's director at the time, cited the need to transition to memory-safe languages during public appearances.
The memory safety push created some turmoil in the Linux kernel community over the past year, as efforts to integrate Rust-based drivers met resistance from kernel maintainers. And it has alarmed the C/C++ communities, where developers have been busily trying to come up with ways to match the memory safety promises of Rust through projects like TrapC, FilC, Mini-C, and Safe C++.
The CISA/NSA report revisits the rationale for greater memory safety and the government's calls to adopt memory-safe languages (MSLs) while also acknowledging the reality that not every agency can change horses mid-stream.
[...] A recent effort along these lines, dubbed Omniglot, has been proposed by researchers at Princeton, UC Berkeley, and UC San Diego. It provides a safe way for unsafe libraries to communicate with Rust code through a Foreign Function Interface.
This is exactly the sort of project that CISA and the NSA would like to see from the private sector, particularly given pending budget cuts that could depopulate CISA by a third.
While the path toward greater memory safety is complicated by the need to maintain legacy systems and the fact that MSLs may not be the best option for every scenario, the government's message is clear.
"Memory vulnerabilities pose serious risks to national security and critical infrastructure," the report concludes. "MSLs offer the most comprehensive mitigation against this pervasive and dangerous class of vulnerability."
https://www.amiga-news.de/en/news/AN-2025-06-00123-EN.html
Three weeks ago, YouTuber Christian 'Perifractic' Simpson announced in a video that he had received an offer to take over Commodore B.V., the owner of the remaining Commodore trademark rights. In a second video, published on June 28, he announced the completed takeover: a group of unnamed angel investors has acquired the company for a low seven-figure sum. He himself is now the acting CEO, but the purchase price has not yet been paid - the company is still looking for investors.
In the half-hour video, Simpson lists a whole series of former Commodore employees (Michael Tomczyk, Bil Herd, David Pleasance, and support staff such as James Harrison and Hans Olsen), as well as actor Thomas Middleditch ("Silicon Valley"), as future "advisors". Financial participation by the community is not yet possible, as the international legal hurdles are too high. With new "retro-futuristic" products, Commodore plans to revive the era before social networks and artificial intelligence, when computer technology was still considered a utopia rather than the scourge of mankind. The years around the turn of the millennium are cited as a model several times.
Scientists unlock the light-bending secrets of squid skin:
Squid are famous for flashing from glass-clear to kaleidoscopic in the blink of an eye, but biologists have long puzzled over the physical trick behind the act.
A research team led by the University of California, Irvine, joined by cephalopod experts at the Marine Biological Laboratory in Woods Hole, took that mystery head-on.
By peering into squid skin in three dimensions, they uncovered a hidden forest of nano-columns built from an uncommon protein called reflectin.
How squid skin bends light
These columns act much like tiny mirrors, bouncing or passing light depending on how close together they sit.
Alon Gorodetsky, an expert in chemical and biomolecular engineering at UC Irvine, is the senior author of the research.
"In nature, many animals use Bragg reflectors [which selectively transmit and reflect light at specific wavelengths] for structural coloration," he said. "A squid's ability to rapidly and reversibly transition from transparent to colored is remarkable."
"We found that cells containing specialized subcellular columnar structures with sinusoidal refractive index distributions enable the squid to achieve such feats."
Studying the master shapeshifter
The animals under study were longfin inshore squid. "These are longfin inshore squids – Doryteuthis pealeii – that are native to the Atlantic Ocean," Gorodetsky said.
"Marine Biological Laboratory has been famous for studying this squid and other cephalopods for more than a century. We were fortunate to be able to leverage their world-class expertise with properly collecting, handling, and studying these biological specimens."
Inside the squid mantle, shimmering cells known as iridophores – or iridocytes – hold the secret.
To visualize them without disturbing their delicate innards, the team used holotomography, a form of quantitative phase microscopy that maps how light bends through a sample.
Georgii Bogdanov, a postdoctoral researcher in chemical and biomolecular engineering at UC Irvine, is another lead author of the study.
"Holotomography used the high refractive index of reflectin proteins to reveal the presence of sinusoidal refractive index distributions within squid iridophore cells," he said.
Reflectin platelets form spiral columns inside iridophores, enabling cephalopods to control how their skin transmits and reflects light.
Borrowing nature's blueprint
Once the researchers understood the architecture – the stacked, spiraling Bragg reflectors – they wondered whether they could engineer something similar.
Studying squid color change inspired flexible materials that shift appearance using tiny, wavy Bragg reflector columns. They added nanostructured metal films, enabling the materials to also shift appearance in the infrared spectrum.
Using a mixture of polymer chemistry, nanofabrication, and metal coatings, the group built thin films that shift color when stretched, pressed, or heated.
They went a step further by tailoring the same films to tune their infrared emission. This allows the material to hide or reveal heat signatures as well as visible hues.
"These bioinspired materials go beyond simple static color control, as they can dynamically adjust both their appearances in the visible and infrared wavelengths in response to stimuli," said co-author Aleksandra Strzelecka, a PhD student at UC Irvine.
"Part of what makes this technology truly exciting is its inherent scalability," she said. "We have demonstrated large-area and arrayed composites that mimic and even go beyond the squid's natural optical capabilities."
This opens the door to many applications ranging from adaptive [or active] camouflage to responsive fabrics to multispectral displays to advanced sensors.
Future optics from squid skin
The implications stretch far beyond a novelty coating. The same Bragg-style stacks could sharpen laser output, filter signals in fiber-optic lines, and boost solar-cell efficiency. They could also enable real-time structural health monitoring in bridges and aircraft.
"This study is an exciting demonstration of the power of coupling basic and applied research," Gorodetsky said. "We have likely just started to scratch the surface of what is possible for cephalopod-inspired tunable optical materials in our laboratory."
Every advance stemmed from squid skin cells with tiny winding columns just hundreds of nanometers wide. Despite their size, these structures could orchestrate a light show visible from meters away.
The team's work shows how decoding those natural nanostructures can lead to devices that humans manufacture by the meter rather than by the molecule.
Squid-inspired tech evolves
Researchers aim to speed up film response and develop biodegradable versions for sensors and medical patches.
Meanwhile, the discovery reaffirms why cephalopods remain a favorite subject for materials scientists: they are masters of manipulating light without a single pigment or battery.
In the lab, that mastery is starting to take shape as fabrics that cool soldiers in the desert by day, buildings that shimmer to reduce air-conditioning loads, and flexible screens that display both artwork and thermal data.
The next chapter, as Gorodetsky's group sees it, will be written where biology and engineering merge.
The squid's split-second shape-shifting trick has journeyed from the Atlantic deep to a microscope slide and into a polymer film.
Soon, it may appear on your jacket sleeve or smartphone case, blending vivid color with invisible infrared control just like in cephalopods.
The study is published in the journal Science.
Join any Zoom call, walk into any lecture hall, or watch any YouTube video, and listen carefully. Past the content and inside the linguistic patterns, you'll find the creeping uniformity of AI voice. Words like "prowess" and "tapestry," which are favored by ChatGPT, are creeping into our vocabulary, while words like "bolster," "unearth," and "nuance," words less favored by ChatGPT, have declined in use. Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger.
In the 18 months after ChatGPT was released, speakers used words like "meticulous," "delve," "realm," and "adept" up to 51 percent more frequently than in the three years prior, according to researchers at the Max Planck Institute for Human Development, who analyzed close to 280,000 YouTube videos from academic channels. The researchers ruled out other possible change points before ChatGPT's release and confirmed these words align with those the model favors, as established in an earlier study comparing 10,000 human- and AI-edited texts. The speakers don't realize their language is changing. That's exactly the point.
One word, in particular, stood out to researchers as a kind of linguistic watermark. "Delve" has become an academic shibboleth, a neon sign in the middle of every conversation flashing ChatGPT was here. "We internalize this virtual vocabulary into daily communication," says Hiromu Yakura, the study's lead author and a postdoctoral researcher at the Max Planck Institute of Human Development.
But it's not just that we're adopting AI language — it's about how we're starting to sound. Even though current studies mostly focus on vocabulary, researchers suspect that AI influence is starting to show up in tone, too — in the form of longer, more structured speech and muted emotional expression. As Levin Brinkmann, a research scientist at the Max Planck Institute of Human Development and a coauthor of the study, puts it, "'Delve' is only the tip of the iceberg."
AI shows up most obviously in functions like smart replies, autocorrect, and spellcheck. Research out of Cornell looks at our use of smart replies in chats, finding that use of smart replies increases overall cooperation and feelings of closeness between participants, since users end up selecting more positive emotional language. But if people believed their partner was using AI in the interaction, they rated their partner as less collaborative and more demanding. Crucially, it wasn't actual AI usage that turned them off — it was the suspicion of it. We form perceptions based on language cues, and it's really the language properties that drive those impressions, says Malte Jung, Associate Professor of Information Science at Cornell University and a co-author of the study.
[...] We're approaching a splitting point, where AI's impacts on how we speak and write move between the poles of standardization, like templating professional emails or formal presentations, and authentic expression in personal and emotional spaces. Between those poles, there are three core tensions at play. Early backlash signals, like academics avoiding "delve" and people actively trying not to sound like AI, suggest we may self-regulate against homogenization. AI systems themselves will likely become more expressive and personalized over time, potentially reducing the current AI voice problem. And the deepest risk of all, as Naaman pointed out, is not linguistic uniformity but losing conscious control over our own thinking and expression.
The future isn't predetermined between homogenization and hyperpersonalization: it depends on whether we'll be conscious participants in that change. We're seeing early signs that people will push back when AI influence becomes too obvious, while technology may evolve to better mirror human diversity rather than flatten it. This isn't a question about whether AI will continue shaping how we speak — because it will — but whether we'll actively choose to preserve space for the verbal quirks and emotional messiness that make communication recognizably, irreplaceably human.
See also: Blade Runners of LinkedIn Are Hunting for Replicants – One Em Dash at a Time
Arthur T Knackerbracket has processed the following story:
When we look out into the universe, we know it can support life – if it couldn’t, we wouldn’t exist. This has been stated in different ways over the years, but the essential thrust makes up the core of a philosophical argument known as the anthropic principle. It sounds obvious, even tautological, but it isn’t quite as simple as that.
To get your head around it, start with what scientists call the fine-tuning problem, the fact our universe seems perfectly balanced on the knife’s edge of habitability. Many fundamental constants, from the mass of a neutron to the strength of gravity, must have very specific values for life to be possible. “Some of these constants, if you make them too large, you just destabilise every atom,” says Luke Barnes at Western Sydney University in Australia.
The anthropic principle began as an attempt to explain why the universe is in this seemingly improbable state, and it boils down to a simple idea: the universe has to be this way, or else we wouldn’t be here to observe it.
There are two main formulations of the principle, both of which were set out in a 1986 book by cosmologist-mathematicians John Barrow and Frank Tipler. The weak principle states that because life exists, the universe’s fundamental constants are – at least here and now – in the range that allows life to develop. The strong principle adds the powerful statement that the fundamental constants must have values in that range because they are consistent with life existing. The “must” is important, as it can be taken as implying that the universe exists in order to support life.
If the weak principle is “I heard a tree fall in the forest, and therefore I must be in a place where trees can grow”, the strong principle says “A tree has fallen nearby, and therefore this planet was destined to have forests all along.”
For scientists today, the weak anthropic principle serves as a reminder of possible biases in observations of the cosmos, particularly if it isn’t the same everywhere. “If we live in a universe that is different from place to place, then we will naturally find ourselves in a place that has some specific conditions conducive to life,” says Sean Carroll at Johns Hopkins University in Maryland.
As for the strong version of the principle, there are physicists who consider it useful too, Barnes among them. He works on developing different flavours of multiverse models and sees the strong principle as a handy guide. It implies that, within a multiverse, there is a 100 per cent chance of at least one universe forming that is conducive to life. So, for any given multiverse model, the closer that chance is to 100 per cent, the more plausible it is. If the probability is, say, around 50 per cent, Barnes sees that as a good omen for the model’s veracity. “But if it’s one-in-a-squillion, then that’s a problem,” he says.
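Barnes's plausibility test can be made concrete with a back-of-the-envelope sketch (our illustration, not a calculation from the article): if a multiverse model generates N universes, each independently life-permitting with probability p, then the chance that at least one supports life is

    P(at least one) = 1 - (1 - p)^N

For p around 50 per cent, this climbs toward 100 per cent after only a handful of universes; if p is one-in-a-squillion, N must itself be of order a squillion before the model predicts observers like us at all.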
In truth, however, most physicists write off the strong principle as simply too strong. It suggests the universe is deterministic; that life was always certain to emerge, according to Elliott Sober at the University of Wisconsin–Madison. “But that probability could have been tiny and life could have still arisen, and the observations would be the same.”
Where does that leave us? The strong principle does, on the surface, provide an answer to the fine-tuning problem – but that answer is widely considered unreasonable. On the other hand, while the weak principle doesn’t provide a reason why the constants of our universe are so finely tuned, it is a useful tool for researchers. As principles go, this one is rather slippery.
Arthur T Knackerbracket has processed the following story:
Over the past decade, quantum computing has grown into a billion-dollar industry. Everyone seems to be investing in it, from tech giants, such as IBM and Google, to the US military.
But Ignacio Cirac at the Max Planck Institute of Quantum Optics in Germany, a pioneer of the technology, has a more sober assessment. “A quantum computer is something that at the moment does not exist,” he says. That is because building one that actually works – and is practical to use – is incredibly difficult.
Rather than the “bits” of conventional machines, these computers use quantum bits, or qubits, to encode information. These can be made in several ways, from tiny superconducting circuits to extremely cold atoms, but all of them are complex to build.
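For reference, a single qubit's state is conventionally written as a superposition of the two classical bit values (standard textbook notation, independent of any particular hardware):

    |ψ⟩ = α|0⟩ + β|1⟩,   with |α|² + |β|² = 1

where the complex amplitudes α and β set the probabilities of reading out 0 or 1. Interference between such amplitudes, spread across many entangled qubits, is what the speed-ups described next exploit.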
The upside is that their quantum properties can be used to do certain kinds of computation more quickly than standard computers.
Such speed-ups are attractive for a range of problems that normal computers struggle with, from simulating exotic physics systems to efficiently scheduling passenger flights or grocery deliveries to supermarkets. Five years ago, it seemed quantum computers would soon crack these and many other computational challenges.
Today, the situation is a lot more nuanced. Progress in building ever bigger quantum computers has, admittedly, been stunning, with several companies developing machines with more than 1000 qubits. But this has also revealed impossible-to-ignore difficulties.
[...] So, which problems might still benefit from quantum computation? Quantum computers could break the cryptography systems we currently use for secure communication, and this makes the technology interesting to governments and other institutions whose security could be imperiled by it, says Scott Aaronson at the University of Texas at Austin.
Another place where quantum computers should still be useful is in modelling materials and chemical reactions. This is because quantum computers, themselves a system of quantum objects, are perfectly suited to simulate other quantum systems, such as electrons, atoms and molecules.
“These will be simplified models; they won’t represent real materials. But if you design the system appropriately, they’ll have enough properties of the real materials that you can learn something about their physics,” says Daniel Gottesman at the University of Maryland.
Quantum chemistry simulations may sound more niche than scheduling flights, but some of the possible outcomes – finding a room-temperature superconductor, say – would be transformative.
The extent to which all this can truly be realised is significantly dependent on quantum algorithms, the instructions that tell quantum computers how to run – and help correct those pesky errors. This is a challenging new field that Vedran Dunjko at Leiden University in the Netherlands says is forcing researchers like him to confront fundamental questions about what information and computing are.
https://www.bbc.com/news/articles/c6256wpn97ro
Work has begun on a controversial project to create the building blocks of human life from scratch, in what is believed to be a world first.
The research has been taboo until now because of concerns it could lead to designer babies or unforeseen changes for future generations.
But now the world's largest medical charity, the Wellcome Trust, has given an initial £10m to start the project, and says it has the potential to do more good than harm by accelerating treatments for many incurable diseases.
Dr Julian Sale, of the MRC Laboratory of Molecular Biology in Cambridge, who is part of the project, told BBC News the research was the next giant leap in biology.
"The sky is the limit. We are looking at therapies that will improve people's lives as they age, that will lead to healthier aging with less disease as they get older.
"We are looking to use this approach to generate disease-resistant cells we can use to repopulate damaged organs, for example in the liver and the heart, even the immune system," he said.
But critics fear the research opens the way for unscrupulous researchers seeking to create enhanced or modified humans.
Dr Pat Thomas, director of the campaign group Beyond GM, said: "We like to think that all scientists are there to do good, but the science can be repurposed to do harm and for warfare".
[...] The Human Genome Project enabled scientists to read all human genes like a bar code. The new work that is getting under way, called the Synthetic Human Genome Project, potentially takes this a giant leap forward – it will allow researchers not just to read a molecule of DNA, but to create parts of it – maybe one day all of it - molecule by molecule from scratch.
[...] "Building DNA from scratch allows us to test out how DNA really works and test out new theories, because currently we can only really do that by tweaking DNA in DNA that already exists in living systems".
The project's work will be confined to test tubes and dishes and there will be no attempt to create synthetic life. But the technology will give researchers unprecedented control over human living systems.
And although the project is hunting for medical benefits, there is nothing to stop unscrupulous scientists misusing the technology.
[...] Ms Thomas is concerned about how the technology will be commercialised by healthcare companies developing treatments emerging from the research.
"If we manage to create synthetic body parts or even synthetic penis, then who owns them. And who owns the data from these creations? "
Given the potential misuse of the technology, the question for Wellcome is why they chose to fund it. The decision was not made lightly, according to Dr Tom Collins, who gave the funding go-ahead.
"We asked ourselves what was the cost of inaction," he told BBC News.
"This technology is going to be developed one day, so by doing it now we are at least trying to do it in as responsible a way as possible and to confront the ethical and moral questions in as upfront way as possible".
A dedicated social science programme will run in tandem with the project's scientific development and will be led by Prof Joy Zhang, a sociologist, at the University of Kent.
"We want to get the views of experts, social scientists and especially the public about how they relate to the technology and how it can be beneficial to them and importantly what questions and concerns they have," she said.
First images from world's largest digital camera reveal galaxies and cosmic collisions:
Millions of stars and galaxies fill a dreamy cosmic landscape in the first-ever images released from a new astronomical observatory with the largest digital camera in the world.
In one composite released Monday, bright pink clouds of gas and dust light up the Trifid and Lagoon nebulas, located several thousand light-years away from Earth. In another, a bonanza of stars and galaxies fills the sky, revealing stunning spirals and even a trio of galaxies merging and colliding.
A separate video revealed a swarm of new asteroids, including 2,104 never-before-seen space rocks in our solar system and seven near-Earth asteroids that pose no danger to the planet.
The images and videos from the Vera C. Rubin Observatory represented just over 10 hours of test observations and were sneak peeks ahead of an event Monday that was livestreamed from Washington, D.C.
Keith Bechtol, an associate professor in the physics department at the University of Wisconsin-Madison who has been involved with the Rubin Observatory for nearly a decade, is the project's system verification and validation scientist, making sure the observatory's various components are functioning properly.
He said teams were floored when the images streamed in from the camera.
"There were moments in the control room where it was just silence, and all the engineers and all the scientists were just seeing these images, and you could just see more and more details in the stars and the galaxies," Bechtol told NBC News. "It was one thing to understand at an intellectual level, but then on this emotional level, we realized basically in real time that we were doing something that was really spectacular."
In one of the newly released images, the Rubin Observatory was able to spot objects in our cosmic neighborhood — asteroids in our solar system and stars in the Milky Way — alongside far more distant galaxies that are billions of light-years away.
"In fact, for most of the objects that you see in these images, we're seeing light that was emitted before the formation of our solar system," Bechtol said. "We are seeing light from across billions of years of cosmic history. And many of these galaxies have never been seen before."
Astronomers have been eagerly anticipating the first images from the new observatory, with experts saying it could help solve some of the universe's most enduring mysteries and revolutionize our understanding of the cosmos.
"We're entering a golden age of American science," Harriet Kung, acting director of the Energy Department's Office of Science, said in a statement.
"We anticipate that the observatory will give us many insights into our past, our future and possibly the fate of the universe," Kung said during Monday's event.
The Vera C. Rubin Observatory is jointly operated by the Energy Department and the U.S. National Science Foundation.
The facility, named after the American astronomer who discovered evidence of dark matter in the universe, sits atop Cerro Pachón, a mountain in central Chile. The observatory is designed to take roughly 1,000 images of the Southern Hemisphere sky each night, covering the entire visible southern sky every three to four nights.
The early images were the result of a series of test observations, but they mark the beginning of an ambitious 10-year mission that will involve scanning the sky every night for a decade to capture every detail and visible change.
"The whole design of the observatory has been built around this capability to point and shoot, point and shoot," Bechtol said. "Every 40 seconds we're moving to a new part of the sky. A simple way to think of it is that we're trying to bring the night sky to life in a way that we haven't been able to do."
By repeating that process every night for the next 10 years, scientists will be able to compile enormous images of the entire visible southern sky, allowing them to see stars changing in brightness, asteroids moving across the solar system, supernova explosions and untold other cosmic phenomena.
"Through this remarkable scientific facility, we will explore many cosmic mysteries, including the dark matter and dark energy that permeate the universe," Brian Stone, chief of staff at the National Science Foundation, said in a statement.
See also: Vera C. Rubin Observatory - Wikipedia
In the not-too-distant future you'll own the copyright to your face, voice, and bodily features as far as digital reproductions are concerned. That is, if you live in Denmark. In an effort to combat deepfakes, its citizens will gain those rights. The question is whether it will matter much if the deepfakes are made or stored outside of Denmark. But it could perhaps be the start of something other EU countries might adopt if it turns out well for them.
The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.
It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.
It will also cover "realistic, digitally generated imitations" of an artist's performance without consent. Violation of the proposed rules could result in compensation for those affected.
What about identical twins? Or people who just look very much alike? Unclear as yet. Also, as noted, how is enforcement going to take place?
The reason that the site was offline is that the cable to the NOC has been cut - again! It is not something that we could control.
We apologise for the problem. You could have stayed up-to-date with the cause and rectification if you had joined us on our back-up IRC - Libera.Chat, ##soylentnews (irc.libera.chat/6697)
The Fedora project is planning to reduce its package maintenance burden by dropping support for 32-bit x86 (i686) packages from the distribution's repositories. The plan detailed in the (mislabelled) change proposal is to drop 32-bit packages for Fedora 44. "By dropping completely the i686 architecture, Fedora will decrease the burden on package maintainers, release engineering, infrastructure, and users. Building and maintaining packages for i686 (and 32-bit architectures in general, but i686 is the last 32-bit architecture - partially - supported by Fedora) has been requiring more and more effort.
Many projects have already been officially dropping support for building and / or running on 32-bit architectures, requiring either adding back support for this architecture downstream in Fedora, or requiring packaging changes in a significant number of packages to adapt to this dropped support." The discussion under the proposal points out some of the situations where users will be unable to run software, such as the Steam gaming portal, under the current plan.
https://distrowatch.com/dwres.php?resource=showheadline&story=20014