Arthur T Knackerbracket has processed the following story:
Britain's Cyber Monitoring Centre (CMC) estimates the total cost of the cyberattacks that crippled major UK retail organizations recently could be in the region of £270-440 million ($362-591 million).
The organization – which launched earlier this year and introduced standardized grading of cyberattacks – gave the criminals' digital intrusions into retail outlets across the country high marks, characterizing them as a category 2 systemic event.
The CMC's Cyber Monitoring Matrix grades systemic cyber events between category 0 for the lowest impact and category 5 for the highest. Overall impact is determined by how many people are affected by any given attack, and by the financial impact.
In its public assessment statement, the CMC said: "The impact from this event is 'narrow and deep,' having significant implications for two companies, and knock-on effects for suppliers, partners, and service providers. This contrasts with a 'shallow and broad' event like last year's CrowdStrike event, where a large number of businesses across the economy were affected, but the impact to any one company was far smaller.
"We are yet to see a deep and broad category 4 or category 5 event impact the UK. Had there been further widespread disruption in the sector, the categorization could have been higher, but because the impact was confined to two companies and their partners, it is judged to be at the lower end of severity on the CMC's scale."
It previously said that CrowdStrike's outage last year would have been designated a category 3 systemic event, had the CMC been launched at the time, due to the scale of its impact across the UK.
CrowdStrike's faulty file update – which inadvertently led to what has been described as the largest IT outage in history – might have earned category 4 status had it been a malicious cyberattack rather than a faulty sensor update, because of the increased costs involved in cleaning up attacks, said the org. Hypothetically, an example of a category 5 attack would be Russia's NotPetya campaign.
[...] The assessment of the recent UK retail attacks is the first contemporary incident categorization to come from the world-first CMC.
At launch, it offered theoretical assessments based on previous attacks, but the hits on UK retail mark the first time the CMC has been called into action since it was founded.
The CMC is chaired by the UK NCSC's former founding CEO Ciaran Martin, and is comprised of cybersecurity experts and finance specialists.
The whole idea behind organizing the CMC was to remove the ambiguity around what constitutes a systemic cyber event – crucially one that allows cyber insurers to claim on their reinsurance policies.
Systemic risk remains a pain point for the insurance industry, largely because it lacks a clear, standardized definition. Due to this, different parties can be confused by an insurance policy's terms, and whether it could or should pay out.
The CMC pitches itself as more than a body to help insurers claim on their own protection policies. The reports it promises to produce on systemic events that lead to losses of £100 million ($133 million) or more will, we're told, feed into national security and cyber resilience discussions that could help more than just those organizations caught up in the attacks it assesses.
Its role could also evolve in the future. CEO Will Mayes said that if the UK government introduced a backstop to cover systemic cyberattacks that lead to massive costs, the CMC could potentially be called in to say whether additional funding should be released.
Remember hearing about battery swapping for electric cars? It turns out to be an old concept (over 100 years old), and now it's back for real in China: https://www.motortrend.com/news/should-electric-car-charging-stations-include-quick-battery-swapping
The batteries are modular and come in different chemistries (different capacity). Among other things, the author (from California) wrote:
Batteries that are regularly charged at level-2 rates to 90ish percent should last longer than those that are frequently fast-charged. Each battery has a digital twin in the cloud, and when monitoring detects bad cells or modules, they can be replaced while out of the car, extending the pack's useful life. When usable capacity drops below 80 percent of new, a pack can be reassigned to non-EV use. When drivers use a lighter commuting-sized battery most of the time, they use less energy to operate and generate less wear on the tires and brakes.
What exactly changed my mind on swapping? My Shanghai adventure proved China's auto industry is miles ahead of ours. It seems to me that to be at all competitive in the global market, we need to quickly overcome buyers' reluctance to electrify and up our collective EV game. It also seems like high time "the west" teams up to fight off this Chinese threat, and an automaker/energy-industry collaboration on a battery-swapping ecosystem that ends buyers' battery-life worries while delivering gas-station refueling convenience—all at gas-vehicle operating cost parity—looks like the quickest way to get there.
Part of the reason the swap is fast is that the batteries are air cooled, so there are no coolant hoses to connect. This also limits power output over extended periods, but nothing was mentioned about getting to the top of a long hill...
I'm still dubious of this for my northern climate--can this work when cars are crusted with road salt & ice in the winter?
The batteries look cute in pictures; they call them "choco" because they look like squares of chocolate.
Arthur T Knackerbracket has processed the following story:
Peruvian gas workers this week found a thousand-year-old mummy while installing pipes in Lima, their company said, confirming the latest discovery of a pre-Hispanic tomb in the capital.
The workers found the trunk of a huarango tree (a species native to coastal Peru), "which served as a tomb marker in the past," at a depth of 50 centimeters (20 inches), archaeologist Jesus Bahamonde, scientific coordinator of Calidda gas company, told reporters.
The mummy of a boy aged between 10 and 15 was found at a depth of 1.2 meters, he added.
"The burial and the objects correspond to a style that developed between 1000 and 1200," he said.
The remains discovered on Monday were found "in a sitting position, with the arms and legs bent," according to Bahamonde.
They were found in a shroud which also contained calabash gourds.
Ceramic objects, including plates, bottles and jugs decorated with geometric figures and figures of fishermen, were found next to the mummy.
The tomb and artifacts belong to the pre-Inca Chancay culture, which lived in the Lima area between the 11th and 15th centuries.
They were discovered while gas workers were removing earth from an avenue in the Puente Piedra district of northern Lima.
In Peru, utility companies must hire archaeologists when drilling the earth, because of the possibility of hitting upon heritage sites.
Calidda has made more than 2,200 archaeological finds since 2004.
Lima is home to over 500 archaeological sites, including dozens of "huacas," as ancient cemeteries are known in the Indigenous Quechua language.
Breathprints?
When fingerprints or gait analysis aren't enough, there is now breath analysis, or breathprints. The way you breathe is apparently also individual and somewhat unique. Also, you can't just stop breathing to hide from the Man. Will people wear scuba gear on land so as not to reveal or leave their breathprints? Considering they need to put a device on the back of your neck, it seems unlikely.
The researchers developed a device that precisely monitors and logs the airflow through each nostril of the wearer. Then, they tasked 97 study participants with wearing the device for up to 24 hours. From just one hour of recording, the researchers achieved an accurate identification rate of 43 percent, Soroka said. This accuracy skyrocketed at 24 hours.
The resulting breath log was then analyzed using a protocol known as BreathMetrics, which examines 24 parameters of the individual's nasal respiration.
The researchers did not just find that an individual can be confidently identified based on their breathing pattern; the results also revealed what those breathing patterns can indicate about a person.
https://www.sciencealert.com/your-breathing-pattern-is-as-unique-as-a-fingerprint-study-finds
https://www.cell.com/current-biology/fulltext/S0960-9822(25)00583-4
Arthur T Knackerbracket has processed the following story:
Cloudflare CEO Matthew Prince recently reiterated his warning that generative AI crawlers and summaries threaten the foundations of the internet's business model. To protect publishers from a flood of artificial AI traffic that offers virtually no authentic site visits in return, the company is devising methods to combat AI scrapers.
Speaking at an Axios event in Cannes last week, Prince explained that search engines and chatbots using generative AI to summarize web content have significantly reduced the number of human visitors to many websites. Even compared to six months ago, the problem has worsened considerably.
Traditionally, for every six times Google crawled a website, one person might visit and potentially view ads. In contrast, the rate was about 250 to 1 with OpenAI's crawlers and 6,000 to 1 with Anthropic. Today, Cloudflare's CEO estimates that Google's crawl-to-visitor rate has declined to 18 to 1, OpenAI's has worsened to 1,500 to 1, and Anthropic's is approximately 60,000 to 1.
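Taken at face value, those ratios translate into sharply fewer human visits per crawl. A quick back-of-the-envelope sketch, using only the figures quoted above (the labels and function name are just for illustration):

```python
# Crawls per single human visit, as quoted by Cloudflare's CEO.
ratios_before = {"Google": 6, "OpenAI": 250, "Anthropic": 6_000}
ratios_today = {"Google": 18, "OpenAI": 1_500, "Anthropic": 60_000}

def visits_per_million_crawls(crawls_per_visit):
    """Human visits a site can expect per 1,000,000 crawler requests."""
    return 1_000_000 / crawls_per_visit

for name in ratios_today:
    before = visits_per_million_crawls(ratios_before[name])
    after = visits_per_million_crawls(ratios_today[name])
    decline = ratios_today[name] / ratios_before[name]
    print(f"{name}: {before:,.0f} -> {after:,.0f} visits per million crawls "
          f"({decline:.0f}x fewer)")
```

By this arithmetic, a site crawled a million times by Anthropic's bots would see human referrals fall from roughly 167 visits to about 17.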
This decline is likely because chatbots and search engines now save users the effort of visiting websites. Chatbots can retrieve and summarize information without the user needing to leave the chat interface, and AI overviews from major search engines now offer users answers before they click on search results.
When Google first unveiled AI overviews, it claimed that the technology would boost traffic to the original sources of the summarized content. Similarly, large language models such as ChatGPT have recently started citing sources in their responses to help direct traffic back to content creators.
However, Prince claims that users, by and large, aren't clicking on the footnotes. Instead, many are accepting the AI's responses at face value, as trust in the technology has grown over the past six months. Aside from starving websites of traffic and revenue, the trend is potentially dangerous due to AI's known tendency to generate inaccurate or misleading information.
In response, Cloudflare, which offers cybersecurity solutions for websites, has launched a new tool called the AI Labyrinth. This tool is designed to use generative AI against the crawlers themselves.
Although websites can include instructions to block AI crawlers, many bots either bypass or ignore these directives. When the AI Labyrinth detects such behavior, it leads the bot through a maze of AI-generated links that no human would reasonably follow, causing the bot to waste time and computing resources.
Despite how daunting it might seem to oppose AI giants like Google, Microsoft, and OpenAI, Prince emphasizes that Cloudflare has a strong track record of successfully defending its clients, even against attacks from powerful national governments.
Arthur T Knackerbracket has processed the following story:
This is the largest DDoS attack ever on record, so far.
Internet security provider Cloudflare said that it recently blocked the largest DDoS attack in recorded history, with one of its clients targeted by a massive cyber assault that flooded its IP address with 7.3 Tbps of junk traffic. The total amount of data sent to the target was 37.4 terabytes, which might not seem incredible at first glance, says The Cloudflare Blog. However, the speed at which that data was delivered is astounding: it all arrived in less than a minute. For context, 37.4TB translates roughly to 9,350 high-definition movies, over 9 million songs, or 12.5 million photos — transferred in just 45 seconds.
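Those figures are internally consistent, as a quick sanity check shows (assuming the 37.4 TB total and 45-second window reported above, with decimal terabytes):

```python
total_bytes = 37.4e12   # 37.4 TB of junk traffic (decimal TB)
duration_s = 45         # delivered in roughly 45 seconds

# Convert bytes to bits, then divide by duration to get the average rate.
avg_tbps = total_bytes * 8 / duration_s / 1e12
print(f"Average rate: {avg_tbps:.2f} Tbps")
```

The average works out to about 6.65 Tbps, sitting plausibly just under the reported 7.3 Tbps peak.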
The attackers used multiple attack vectors, primarily exploiting the User Datagram Protocol (UDP) for its quick delivery, versus the usual TCP that most internet traffic uses. UDP is preferred in applications that require real-time responsiveness, such as video streaming, online gaming, and virtual meetings. That's because it does not wait for the two devices talking over the internet to complete a proper handshake. Instead, it sends the data and hopes the other party receives it. Because of this, UDP flood attacks are one of the most common tools in DDoS campaigns.
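That fire-and-forget behavior is visible in a few lines of Python: the sender needs no handshake and no listener on the other end (the localhost address and port below are arbitrary, chosen only for illustration):

```python
import socket

# UDP is connectionless: sendto() hands a datagram to the network stack
# and returns immediately -- no handshake, no delivery guarantee. This is
# part of what makes UDP floods so cheap for an attacker to generate.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bytes_queued = sock.sendto(b"ping", ("127.0.0.1", 9999))  # no server listening
sock.close()
print(bytes_queued)  # the send "succeeds" even though nobody is there
```

Compare TCP, where `connect()` would fail outright if no one were listening; UDP imposes no such cost on the sender.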
Because of this, the perpetrators could simply send traffic to all the ports on their target. Since the target must respond to each query, it would soon overwhelm its resources, especially with the massive amount of information transferred in this incident.
The threat actors also used reflection attacks to supplement their main push. This is also called a reflection/amplification attack: the attacker spoofs the target's IP address and then requests information from a third party, such as a Network Time Protocol service or a server running the Quote of the Day (QOTD) or Echo protocols. The third party then responds with the appropriate data, sending it to the victim's address. If the attacker sends enough requests, it can overwhelm the target IP unless proper protection is in place.
Unfortunately, this is only the latest in a string of record-breaking DDoS attacks. Microsoft was hit with a then-record 3.47 Tbps DDoS attack in January 2022, but this was surpassed in October 2024 with a 5.6 Tbps attack on an internet provider in East Asia. April 2025 saw yet another massive attack, a 6.5 Tbps assault lasting almost 49 seconds, which Cloudflare reported.
Although there are already protections to prevent DDoS attacks from knocking out servers and websites, many threat actors still use botnets with access to tens, if not hundreds, of thousands of compromised devices. After all, this is a relatively cheap and easy way of testing a target’s defenses, with some even using it to extort online businesses so that such attacks would not target them.
Arthur T Knackerbracket has processed the following story:
China’s AI and chipmaking prowess lags the USA’s by just two years, and America’s efforts to slow its progress could be hobbling its own semiconductor industry, according to Trump administration tech czar David Sacks.
Sacks is chair of the President's Council of Advisors on Science and Technology and on Thursday gave an interview to Bloomberg Television in which he said China has become “adept” at working around restrictions on its semiconductor industry.
“I think today China is one-and-a-half to two years behind us on chip design, but Huawei is moving fast to catch up,” Sacks said. He said Huawei remains “constrained” in terms of GPU production but believes the Chinese company will start exporting hardware.
“I think we do have to be concerned about Huawei competing on the global market,” he said. “They may not be there yet, but I would expect that to change in the future.”
The prospect of Huawei becoming a significant provider of GPUs and other AI-related hardware worries Sacks, because if that happens US companies would face more competition.
“If we are overly restrictive in terms of US sales to the world there will be a time where we are kind of kicking ourselves and saying: ‘When we had this whole market to ourselves, why didn’t we take advantage of that opportunity and lock in the American tech stack?’”
Sacks said concerns over such a scenario are one reason the US abandoned the Biden-era diffusion rule that capped export sales of American GPUs outside the USA and required some buyers to secure a license for their purchases.
“I think it is a valid policy objective to stop our leading-edge semiconductors going to China, but at the same time we don’t want to restrict them going to our friends and allies,” he said. “To be sure we should name our security requirements, but our friends and allies are eager to comply with those security requirements.”
Sacks said the Trump administration’s goal is to have the American tech stack become the global standard. “We want to have the largest market share we can, we want to be the partner of choice for the world,” he said.
But he said past regulations “would have shot the tech industry in the foot out of concern US chips go to China.”
Huawei founder Ren Zhengfei recently rated his company’s GPUs as one generation behind the best products made by US companies.
However many AI workloads don’t need bleeding-edge kit, so if Huawei can sell well-priced and powerful products around the world – and build the ecosystem that makes them useful – it could challenge the likes of Nvidia and AMD in some markets.
Nvidia CEO Jensen Huang has also criticized US export controls on AI hardware, arguing that denying Chinese researchers access to his company’s hardware means the world can’t benefit from innovations developed by Middle Kingdom computer scientists.
A Cracked Piece of Metal Self-Healed in Experiment That Stunned Scientists:
File this under 'That's not supposed to happen!'. In an experiment published in 2023, scientists observed a damaged section of metal healing itself. Though the repair was only on a nanoscale level, understanding the physics behind the process could inspire a whole new era of engineering.
A team from Sandia National Laboratories and Texas A&M University was testing the resilience of a small piece of platinum suspended in a vacuum using a specialized transmission electron microscope technique to pull the ends of the metal 200 times every second.
They then observed the self-healing at ultra-small scales in the 40-nanometer-thick wafer of metal.
Cracks caused by the kind of strain described above are known as fatigue damage: repeated stress and motion that causes microscopic breaks, eventually leading machines or structures to fail.
Amazingly, after about 40 minutes of observation, the crack in the platinum started to fuse back together and mend itself before starting again in a different direction.
"This was absolutely stunning to watch first-hand," said materials scientist Brad Boyce from Sandia National Laboratories when the results were announced.
"We certainly weren't looking for it. What we have confirmed is that metals have their own intrinsic, natural ability to heal themselves, at least in the case of fatigue damage at the nanoscale."
These were very specific conditions, and we don't yet know exactly how this is happening or how we can use it. However, if you think about the costs and effort required for repairing everything from bridges to engines to phones, there's no telling how much difference self-healing metals could make.
While the observation is unprecedented, it's not wholly unexpected. In 2013, Texas A&M University materials scientist Michael Demkowicz worked on a study predicting that this kind of nanocrack healing could happen, driven by the tiny crystalline grains inside metals essentially shifting their boundaries in response to stress.
Demkowicz also worked on this study, using updated computer models to show that his decade-old theories about metal's self-healing behavior at the nanoscale matched what was happening here.
That the automatic mending process happened at room temperature is another promising aspect of the research, since metal usually requires lots of heat to shift its form. However, the experiment was carried out in a vacuum; it remains to be seen whether the same process will happen in conventional metals in a typical environment.
A possible explanation involves a process known as cold welding, which occurs under ambient temperatures whenever metal surfaces come close enough together for their respective atoms to tangle together.
Typically, thin layers of air or contaminants interfere with the process; in environments like the vacuum of space, pure metals can be forced close enough together to literally stick.
"My hope is that this finding will encourage materials researchers to consider that, under the right circumstances, materials can do things we never expected," said Demkowicz.
The research was published in Nature.
An earlier version of this article was published in July 2023.
Qatar reserves the right to respond to the attack
The Pentagon says there are no reports of any casualties.
Iran claims that it fired six missiles, one in response to each bomb dropped on Fordo. It appears that Qatar (and possibly the US too) was pre-warned.
All missiles were intercepted before hitting their targets.
Al Udeid is the home of the Combined Air Operations Centre (CAOC), which is manned primarily by the US but also by the UK and other allies. It is also a major military air hub in the region and would have had some responsibility for oversight of the recent US attack on the nuclear sites.
President Trump's Response
[paraphrasing] The Iranians have made a weak response and we were warned by them in advance. I thank Iran for the warning which gave us the ability to ensure that no lives were endangered. We hope that discussions can now be restarted. I will encourage Israel to also enable negotiations to be restarted. [end paraphrasing].
50 Years of JAWS.
Fifty years ago on Friday, director Steven Spielberg's Jaws was released in theaters. The terrifying shark movie came to be defined as the first summer blockbuster, of course, but that was just the beginning.
It has been 50 years since Jaws swam around eating people for fun and profit, scaring people away from the beach and the water.
Time to share some of your Jaws memories or shark-infested fantasies.
For me it's mostly the music, as I was too young to watch the movie when it was released, so I only saw it about a decade later. The sequels were all horrible: bigger shark, less good.
https://variety.com/2025/film/features/jaws-50th-anniversary-steven-spielberg-summer-blockbuster-1236436040/
https://www.forbes.com/sites/timlammers/2025/06/20/jaws-by-the-numbers-50-years-of-merch-media-and-money/
https://www.fangoria.com/50-years-jaws-horror-movie/
(At the time of its release, JAWS probably used the most realistic effects technology available, and I wonder how it would fare in comparison to the computer-manipulated films that we see today. We have reached the stage where 'seeing is believing' is obviously untrue. In addition to the questions which looorg poses above, which cinematic effects and/or computer-manipulated films on general release have impressed you the most?--JR)
Netzpolitik has an English-language article about the EU Commission's vague plans for open source via its Open Stack programme. An internal paper calls on the Commission to support Free and Open Source Software in public administrations – and to think about a new legal form. However, many questions remain open. The crux of the matter – the role open protocols and open standards would play in enabling vendor independence – goes unnamed in the article and is almost, but not quite, named in the actual report [warning for PDF].
The EU Commission has been funding open source projects for years. A programme called Next Generation Internet (NGI) is central to this by distributing money quickly and without red tape to promising projects – such as the decentralised microblogging service Mastodon, the video software PeerTube or Jitsi for videoconferencing.
But the Commission has been set on ending funding NGI for some time – despite prolonged criticism. Involved organisations have said that NGI works well and efficiently. Open source also plays a key role in protecting Europe from foreign actors – particularly important in the current geopolitical environment.
The Commission responded that the end of NGI is not meant to be the end of its open source funding. That is set to continue under a new name – initially the “Open Europe Stack”, now the “Open Internet Stack”. Important distinction: In spite of the new name, the programme is only indirectly related to the “EuroStack”.
Some of these plans include the EU Commission leading by example by improving procurement and use of Free and Open Source Software in practice. They also include phasing out proprietary and/or overseas services in favor of more local services, specifically those more amenable to using Free and Open Source Software.
Previously:
(2025) Euro Techies Call for Sovereign Fund to Escape US Dependency
(2022) The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn
(2021) European Commission's Study on Open Source Software
(2018) German Documentary on Relations Between Microsoft and Public Administration Now Available in English
(2014) EU Spending €1M for Security Audit of Open Source
"The 'missing' matter may truly be lurking in hard-to-see threads woven across the universe":
Astronomers have discovered a vast tendril of hot gas linking four galaxy clusters and stretching out for 23 million light-years, 230 times the length of our galaxy. With 10 times the mass of the Milky Way, this filamentary structure accounts for much of the universe's "missing matter," the search for which has baffled scientists for decades.
This "missing matter" doesn't refer to dark matter, the mysterious stuff that remains effectively invisible because it doesn't interact with light (sadly, that remains an ongoing puzzle). Instead, it is "ordinary matter" made up of atoms, composed of electrons, protons, and neutrons (collectively called baryons) which make up stars, planets, moons, and our bodies.
For decades, our best models of the universe have suggested that a third of the baryonic matter that should be out there in the cosmos is missing. This discovery of that missing matter suggests our best models of the universe were right all along. It could also reveal more about the "Cosmic Web," the vast structure along which entire galaxies grew and gathered during the earlier epochs of our 13.8 billion-year-old universe.
The aforementioned models of the cosmos, including the standard model of cosmology, have long posited the idea that the missing baryonic matter of the universe is locked up in vast filaments of gas stretching between the densest pockets of space.
Though astronomers have seen these filaments before, the fact that they are faint has meant that their light has been washed out by other sources like galaxies and supermassive black hole-powered quasars. That means the characteristics of these filaments have remained elusive.
But now, a team of astronomers has for the first time been able to determine the properties of one of these filaments, which links four galactic clusters in the local universe. These four clusters are all part of the Shapley Supercluster, a gathering of over 8,000 galaxies forming one of the most massive structures in the nearby cosmos.
"For the first time, our results closely match what we see in our leading model of the cosmos – something that's not happened before," team leader Konstantinos Migkas of Leiden Observatory in the Netherlands said in a statement. "It seems that the simulations were right all along."
[...] Revealing this hitherto undiscovered tendril of hot matter connecting galaxy clusters has the potential to aid scientists' understanding of these extreme structures and how they are connected across vast cosmic distances.
This could, in turn, aid our understanding of the Cosmic Web, filaments of matter that acted as a cosmic scaffold helping the universe to assemble in its current form.
Arthur T Knackerbracket has processed the following story:
The 16-pin power connector, used in many of today’s best graphics cards, continues to be a headache with multiple reports of melting connectors on both the GPU and PSU ends. To address the ongoing issue, graphics card maker Galax has introduced a new solution aimed at warning users of potential failure. Its latest Hall of Fame (HOF) series GPUs, including the RTX 5080 and RTX 5070 Ti variants, feature ARGB lighting that also functions as a debug LED.
According to the company's global website, the HOF series GPUs feature a triple-fan configuration, with the central 92mm fan surrounded by LEDs that extend to the edge of the shroud. In addition to delivering dazzling ARGB lighting effects, these LEDs serve a functional purpose. When powering up your system, these may turn yellow to indicate an improperly installed power connector or red to signal abnormal power delivery to the GPU.
What makes this development absurd is that it reflects just how far GPU makers are going to compensate for a design flaw that should've been solved by Nvidia long ago. The fact that an entire graphics card now needs to act as a warning beacon with a red ring of death (reminiscent of Xbox's famed 360 flaw) when something goes wrong speaks volumes about the 16-pin connector's reliability issues. Despite updates like the revised 12V-2x6 standard, real-world problems persist, raising the question: at what point do GPU makers and power supply vendors stop treating the symptoms and actually fix the root cause?
If you recall, Zotac introduced a somewhat similar solution earlier this year with its RTX 50 series GPUs, which feature an LED indicator near the power connector. While it serves the same purpose of alerting users when the power cable isn't properly connected, Galax's HOF graphics cards take it a step further. In their case, the entire GPU glows, providing a more prominent visual warning.
MSI also introduced its own solution by adding yellow-colored tips for its 16-pin connector on cables and adapters supplied with its GPUs and power supply units. The idea was to make it easier for users to see whether the cable is completely inserted or not, potentially preventing meltdowns. Despite the company's efforts, though, the issue persists as a user reported thermal damage using MSI's preventive yellow-tipped 12V-2x6 power cables.
Have you added one of these GPUs to your machine, or would you have any reservations about doing so?
Arthur T Knackerbracket has processed the following story:
Speaking at the firm’s Data & Analytics Summit in Sydney, Australia, today, Brethenoux said he didn’t have time to read summaries of meetings two or five years ago, before the creation of such documents became a key application of generative AI.
“I don’t have time to do the five actions in the summary,” he added. “I know what I have to do.”
Brethenoux also questioned why AI-generated meeting summaries list action items from a meeting instead of AI just doing the work itself.
“Just go and do it already,” he said, and called for AI to simplify users’ lives by automatically performing tiresome tasks.
He cited a use case at US healthcare company Vizient where the CTO asked employees what tasks bother them on a regular basis – the sort of thing everyone dreads having to do when they arrive at work on Monday morning. Armed with feedback from thousands of employees, the company automated the most-complained-about chores.
The result? “Instant adoption, zero change management problems,” Brethenoux said. Employees then bought in to AI and started to make good suggestions for further AI-enabled automation.
[...] Brethenoux thinks tech buyers must take that vision with a large pinch of salt, for two reasons.
One is that AI agents are not new. He said industrial companies have used them for decades in relatively closed systems. While they now rely on agents for certain tasks, they have seldom found the software can handle very complex tasks.
Yet vendors are suggesting personal AI agents will easily work with many sources of data across an enterprise and do things like automatically decide a worker should attend a meeting, then place that meeting in their Outlook or Google calendar.
“Now you have 50,000 agents running around the enterprise,” he posited. “How do you orchestrate this? How do they negotiate?”
Brethenoux said he’s asked vendors how such automated scheduling would consider competing needs of an employee’s boss, partner, or kids. Their response, he said, is “silence.”
The analyst thinks vendors and users have not given enough consideration to how to build agentic systems that address those issues.
“This is a software engineering problem,” he said. “You need people who understand how you decompose systems, when they can communicate, the degree to which they communicate, and the different autonomy levels that you give within an agent.”
Software engineers also need to determine what information agents can perceive, what they can control, and what they can execute upon.
“It’s not trivial,” he said.
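The decomposition Brethenoux describes — deciding per agent what it may perceive, what it may control, what it may execute, and at what autonomy level — can be sketched as a simple specification type. This is purely illustrative; the names and the autonomy tiers are assumptions, not any vendor's API:

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    SUGGEST = 1         # agent proposes actions; a human approves each one
    ACT_AND_REPORT = 2  # agent acts, then reports what it did
    FULL = 3            # agent acts without oversight

@dataclass(frozen=True)
class AgentSpec:
    """One agent's boundaries: what it sees, touches, and does."""
    name: str
    perceives: frozenset  # information sources the agent may read
    controls: frozenset   # resources it may modify
    executes: frozenset   # actions it may carry out
    autonomy: Autonomy

def may_run_unattended(agent: AgentSpec, action: str) -> bool:
    """An action runs without human sign-off only if the agent
    both lists it and holds FULL autonomy."""
    return action in agent.executes and agent.autonomy is Autonomy.FULL

# A hypothetical calendar agent of the kind vendors describe:
scheduler = AgentSpec(
    name="meeting-scheduler",
    perceives=frozenset({"calendar", "email"}),
    controls=frozenset({"calendar"}),
    executes=frozenset({"create_event"}),
    autonomy=Autonomy.ACT_AND_REPORT,
)

print(may_run_unattended(scheduler, "create_event"))  # False: needs sign-off
```

The point of a sketch like this is that the hard questions — how 50,000 such specs negotiate with each other, and whose priorities win — sit outside the type entirely, which is exactly the gap Brethenoux says vendors haven't answered.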
Vendors know this, he said, but are nonetheless promoting the idea that agentic nirvana is within reach.
A review of the book Strangers and Intimates, which asks: are we killing off the idea of private life?
Whatever happened to good old-fashioned privacy? Nowadays, practically everything about us is known, traded and exploited by social media platforms, even when we aren’t opening the curtains on our inner lives ourselves. Click. There’s the sourdough your smug uncle made this morning. Click. There’s your friend crying about a missed promotion. Click. There’s a stranger inviting you – for a fee, of course – into their bedroom.
You would expect a book called Strangers and Intimates: The rise and fall of private life to have views on all of this – and it does, except that they are less straightforward, more considered and much richer than most others in this area.
As its author, the cultural historian Tiffany Jenkins, puts it: “Many blame this situation on narcissistic individuals who broadcast their lives online or on tech companies that devour personal data, but this overlooks the deeper changes at play.” And hers is a text about those deeper changes.
In Jenkins’s account, these mostly took place in the 20th century – and they were multifarious. Chapters are devoted to everything from the prying capabilities of smaller cameras – “Kodak fiends” were a particular turn-of-the-century nuisance – to the broader implications of Bill Clinton’s trysts with Monica Lewinsky – the private suddenly became fiercely political.
[...] Scientific thinkers aren’t exempted from this narrative. The behaviourist trinity of Paul Lazarsfeld, Edward Bernays and Ernest Dichter receives special attention for their collective work, in the first half of the 20th century, to turn humans into data and data into marketable insights. None of them acted maliciously, but they helped erode the sense that certain parts of life should be off-limits rather than grist for corporate interests. Much the same could be said of biologist Alfred Kinsey’s famous surveys of people’s sex lives. Is nothing sacred?
[...] Starting with the revolutionary appeals to personal conscience by Martin Luther and Thomas More in the 16th century, and continuing through various religious and personal freedoms in the 17th century, Strangers and Intimates really lands a century later.
It was, argues Jenkins, the 18th century that “heralded the arrival of public and private realms”, two distinct areas of life that allow for two distinct sides of the human character. In fact, the book even suggests, persuasively, that this development trumps all others of the Enlightenment. This is the sort of history book that makes you look at all history anew.
Which brings us right back to our highly surveilled present. “Had there been a strict separation between the public and private worlds when the world wide web took off,” argues Jenkins, “the online world today would be very different.” Since the 18th century, we have allowed our worlds to become compromised and blurred. The private is increasingly public.
And what do we stand to lose? Many things – although they aren’t all gone yet. “Originality begins in private,” writes Jenkins in her epilogue. From which we can only surmise that Strangers and Intimates began with blessed privacy.