OpenAI scrambles to remove personal ChatGPT conversations from Google results:
Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.
Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats "visible to millions." While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.
OpenAI's chief information security officer, Dane Stuckey, explained on X that all users whose chats were exposed opted in to indexing their chats by clicking a box after choosing to share a chat.
Fast Company noted that users often share chats on WhatsApp or select the option to save a link to visit the chat later. But as Fast Company explained, users may have been misled into sharing chats due to how the text was formatted:
"When users clicked 'Share,' they were presented with an option to tick a box labeled 'Make this chat discoverable.' Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results."
At first, OpenAI defended the labeling as "sufficiently clear," Fast Company reported Thursday. But Stuckey confirmed that "ultimately," the AI company decided that the feature "introduced too many opportunities for folks to accidentally share things they didn't intend to." According to Fast Company, that included chats about their drug use, sex lives, mental health, and traumatic experiences.
Carissa Veliz, an AI ethicist at the University of Oxford, told Fast Company she was "shocked" that Google was logging "these extremely sensitive conversations."
Stuckey called the feature a "short-lived experiment" that OpenAI launched "to help people discover useful conversations." He confirmed that the decision to remove the feature also included an effort to "remove indexed content from the relevant search engine" through Friday morning.
[...] The scandal notably comes after OpenAI vowed to fight a court order that requires it to preserve all deleted chats "indefinitely," which worries ChatGPT users who previously felt assured their temporary and deleted chats were not being saved. OpenAI has so far lost that fight, and those chats will likely become searchable in that lawsuit. But while OpenAI CEO Sam Altman called the possibility that users' most private chats could be searched in that case "screwed up," Fast Company noted that he seemed far less critical of the potential for OpenAI's own practices to expose private user chats on Google and other search engines.
Arthur T Knackerbracket has processed the following story:
China's artificial intelligence companies have launched two new strategic alliances at the World Artificial Intelligence Conference in Shanghai, aimed at developing AI technologies that rely on domestic standards and at integrating AI into industrial applications, according to Reuters. The goal is to establish domestic AI standards and reduce reliance on American technologies as quickly as possible.
The first coalition is called the Model-Chip Ecosystem Innovation Alliance, which unites leading makers of AI hardware — such as Biren Technologies, Huawei, Enflame, and Moore Threads, among others — and developers of large language models, including StepFun. The goal of the alliance is to form a groundbreaking ecosystem that links the entire technology stack from hardware and AI models to supporting infrastructure. One of the focuses of the coalition is to streamline and localize the development of AI hardware and software amid a limited supply of foreign hardware, such as high-performance Nvidia GPUs.
For now, it is too early to guess what the Model-Chip Ecosystem Innovation Alliance is capable of achieving. However, to have a chance of success, its members will have to pursue standardization and interoperability, so expect the alliance to establish common protocols, interfaces, and frameworks between models, chips, and infrastructure to streamline development and reduce fragmentation within China's AI ecosystem.
Chinese AI hardware companies use different architectures (Arm, PowerVR, custom instruction set architectures), which complicates low-level unification, so do not expect Huawei's CANN to support processors not developed by Huawei.
However, developers can agree on standardized APIs and model formats, allowing LLMs trained by StepFun, or its competitors, to run across multiple backends with minimal friction. Companies can also unify mid-level software stacks to enable model portability and compatibility across all local platforms: developers would write models once (e.g., in PyTorch) and run them on any Chinese-made accelerator without major changes. In addition, this would promote a cohesive national AI ecosystem where all components — processors, compilers, frameworks, and tools — work together. In such a unified environment, innovation can move faster, and China's AI industry becomes more resilient and better able to compete with the American AI industry on the global stage.
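To picture the kind of unified mid-level stack described above, here is a purely illustrative Python sketch (every name here is hypothetical, not any real Chinese framework): a model is written once against an abstract backend interface, and each vendor would supply its own accelerator-specific implementation.

```python
from abc import ABC, abstractmethod

# Hypothetical unified runtime API: the model code depends only on the
# abstract Backend, so swapping vendors means swapping one object.
class Backend(ABC):
    @abstractmethod
    def matmul(self, a, b):
        ...

class ReferenceCPUBackend(Backend):
    def matmul(self, a, b):
        # Naive matrix multiply; a real backend would dispatch to vendor kernels.
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

def run_model(backend: Backend, weights, inputs):
    # A "model" reduced to a single linear layer, purely for illustration.
    return backend.matmul(inputs, weights)

weights = [[1, 0], [0, 2]]
inputs = [[3, 4]]
print(run_model(ReferenceCPUBackend(), weights, inputs))  # [[3, 8]]
```

The point of the sketch is the seam, not the arithmetic: standardizing the `Backend` boundary is what would let one model definition run unchanged on any compliant accelerator.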
The second initiative, known as the Shanghai General Chamber of Commerce AI Committee, aims to help integrate AI more deeply into industrial applications. This alliance unites such hardware and software companies as Iluvatar CoreX, MetaX, MiniMax, and SenseTime, just to name a few. Essentially, the alliance will function as a bridge between AI developers and industrial players, ensuring that cutting-edge models and systems actively power China’s industrial transformation.
Both alliances are meant not only to create a self-sufficient AI ecosystem in China, but also to streamline its development as well as the adoption of AI by the industry.
It takes just minutes to charge a solid-state battery. That might not sound like a big deal until you consider that the lithium-ion battery in your phone or electric car can take nearly an hour to reach 80% charge.
In a comprehensive new review, researchers from the University of California, Riverside, detail the growing promise — and remaining pitfalls — of solid-state batteries (SSBs).
"Solid-state batteries are moving closer to reality every day," says Cengiz S. Ozkan, a mechanical engineering professor and co-lead author of the study. "Our review shows how far the science has come and what steps are needed next to make these batteries available for everyday use."
Solid-state batteries function much like their liquid-electrolyte counterparts in the sense that they move lithium ions between anode and cathode during charging and discharging. But instead of using a flammable liquid (an electrolyte) to ferry those ions, SSBs rely on solid materials: ceramics, polymers, or sulfide-based compounds that are chemically stable, non-volatile, and highly efficient.
This change does more than eliminate fire risks. Solid materials also make it possible to use pure lithium metal as an anode — an ultra-thin layer that stores more energy per gram than conventional graphite anodes. That translates to lighter batteries with higher capacity and longer lifespans.
"By removing the liquid and using stable solid materials instead, we can safely push more electricity into the battery at once, without the risks of overheating or fires," Ozkan explains.
Where today's lithium-ion batteries can degrade after just 1,000 charge cycles, solid-state batteries have been shown to maintain over 90% of their capacity even after 5,000 cycles. That could mean a battery life of 15–20 years, doubling the typical lifespan for electric vehicles.
[...] Despite the progress, commercialization remains a challenge. SSBs are still expensive and difficult to manufacture at scale. Materials must be extremely pure, processed under pressure, and often protected from oxygen and moisture.
Interface problems — where the solid layers meet — still plague performance. Poor contact and chemical reactions between the electrolyte and electrode can lower conductivity and shorten battery life.
To solve these problems, scientists are turning to advanced manufacturing techniques and computational modeling. Adding buffer layers, experimenting with doped materials, and tailoring sintering conditions are just a few of the strategies in play.
[...] Companies like Toyota, Samsung, QuantumScape, and Solid Power are investing heavily in SSB tech. One Chinese firm, Qing Tao Energy, claims to be producing solid-state batteries at 100 MWh per year and expanding toward 10 GWh. Still, mass-market readiness could be years away.
[...] Solid-state batteries are inching closer to transforming how we power our world — from cars to computers, and maybe even to Mars. But for all their promise, they still require careful engineering, massive investment, and some fundamental science to be fully understood and implemented.
Review paper: Shang et al., Nano Energy, Volume 142, Part B, September 2025, 111232. https://doi.org/10.1016/j.nanoen.2025.111232
Previously:
• A Solid-State Battery Breakthrough May be Taking Shape in Maryland
• A Pinch of Salt Boosts Aluminum Batteries
• A New Lithium-air Battery Design Promises Unprecedented Energy Density
• Solid-State Batteries Line Up for Better Performance
• Solid State Battery in Toyota EV Expected 2021 - Others to Follow
Ousted Vaccine Panel Members Say Rigorous Science is Being Abandoned
The 17 experts who were ousted from a government vaccine committee last month say they have little faith in what the panel has become, and have outlined possible alternative ways to make U.S. vaccine policy.
U.S. Health Secretary Robert F. Kennedy Jr. abruptly fired the entire Advisory Committee on Immunization Practices, accusing them of being too closely aligned with manufacturers and of rubber-stamping vaccines. He handpicked replacements that include several vaccine skeptics.
In a commentary published Wednesday in the New England Journal of Medicine, the former panel members wrote that Kennedy—a leading voice in the anti-vaccine movement before becoming the U.S. government's top health official—and his new panel are abandoning rigorous scientific review and open deliberation.
That was clear, they said, during the new panel's first meeting in June. It featured a presentation by an anti-vaccine advocate warning of dangers from a preservative used in a few flu vaccines, but committee members did not hear from Centers for Disease Control and Prevention staffers about an analysis that concluded there was no link between the preservative and neurodevelopmental disorders.
More information: Helen Y. Chu et al, The Path Forward for Vaccine Policy in the United States, New England Journal of Medicine (2025). DOI: 10.1056/NEJMsb2509134
World News: United Nations report finds UN reports aren't widely read:
A United Nations report seeking ways to improve efficiency and cut costs has revealed: UN reports are not widely read.
Secretary-General Antonio Guterres briefed countries yesterday on the report, produced by his UN80 reform task force that focused on how staff implement thousands of mandates given to them by bodies like the General Assembly or Security Council.
He said that last year the UN system supported 27,000 meetings involving 240 bodies, and the UN secretariat produced 1,100 reports, a 20 per cent increase since 1990.
"The sheer number of meetings and reports is pushing the system – and all of us – to the breaking point," Guterres said.
Also: UN report finds United Nations reports are not widely read
https://medicalxpress.com/news/2025-07-brain-scans-reveal-parahippocampal-cortex.html
Depression is a mental health disorder characterized by a recurrent or persistent sadness and a loss of interest in activities that were previously deemed pleasurable, sometimes accompanied by changes in sleep, appetite and perceived energy levels. One of the most debilitating types of depression is major depressive disorder (MDD), which entails a pervasive low mood for a prolonged time, which in turn adversely impacts people's ability to engage in daily activities.
As depression is estimated to be experienced by approximately 3.5% of people worldwide, understanding its neurophysiological underpinnings and its characteristic brain signatures is of utmost importance. Past studies have linked depression, particularly MDD, to structural changes in a brain region known as the medial temporal lobe, which has been implicated in the formation and retrieval of memories, as well as in emotional processing and decision-making.
Researchers at Aachen University and Forschungszentrum Jülich GmbH recently carried out a study aimed at exploring the link between the structure of a specific part of the MTL, namely the parahippocampal cortex (PHC), and MDD. Their paper, published in Translational Psychiatry, suggests that the thickness of the PHC is an indicator of both MDD and neuroticism, a psychological trait marked by a pronounced tendency to feel negative emotions (e.g., anxiety, guilt, anger, etc.).
"The PHC is a highly interconnected region within the medial temporal lobe (MTL) and is essential in memory, emotion and cognition," wrote Dominik Nießen, Ravichandran Rajkumar and their colleagues in their paper. "According to the cognitive model of depression, dysfunctions in these processes constitute the pathophysiological foundation of major depressive disorder (MDD). Research suggests that human personality, and neuroticism in particular, play an important role in the development and disease progression of MDD."
Interestingly, recent neuroscience studies found that the brains of people diagnosed with depression and those scoring higher on recognized tests of neuroticism often share some similarities, some of which relate to the PHC. The PHC is a part of the MTL found to support various cognitive functions, including spatial processing, as well as the encoding and retrieval of emotional memories.
The key objective of the recent study by Nießen, Rajkumar and their colleagues was to further investigate how the overall structure of the PHC varies in individuals diagnosed with MDD or exhibiting higher levels of neuroticism. To do this, they scanned the brains of several individuals, some of whom were diagnosed with MDD, using a neuroimaging technique known as structural magnetic resonance imaging (MRI).
More information: Dominik Nießen et al, 7-Tesla ultra-high field MRI of the parahippocampal cortex reveals evidence of common neurobiological mechanisms of major depressive disorder and neurotic personality traits, Translational Psychiatry (2025). DOI: 10.1038/s41398-025-03435-y
Arthur T Knackerbracket has processed the following story:
A vastly powerful earthquake that radiated out from the eastern Russian coast on Wednesday has caused a significant tsunami but hasn’t disrupted communications or cloud computing services.
According to the US Geological Survey, the magnitude 8.8 quake struck on July 30 at 09:24:50 local time (UTC+10:00). The Survey's list of the most powerful earthquakes ever recorded includes only five more powerful seismic events. Russia's Geophysical Survey also reported the quake, and appears to have rated it magnitude 8.9.
Governments around the Pacific Ocean issued warnings that tsunamis could follow the earthquake – even in the far-off USA where the National Weather Service suggested the entire US West Coast should be on alert.
Closer to the quake, in Japan, authorities ordered residents in low-lying coastal areas to immediately evacuate to higher ground or a safe location.
But we’ve seen no reports of outages at communication or cloud computing facilities, or at chipmaking plants.
Coffee prices rise as U.S. imposes tariffs on top exporter Brazil:
World coffee prices rose today, but gains were muted overall as traders continued to hold out hope that the United States could exempt coffee from its 50% trade tariff on most Brazilian goods.
President Donald Trump on Wednesday slapped a 50% tariff on Brazil to fight what he has called a "witch hunt" against former President Jair Bolsonaro, but excluded some key sectors, like energy and orange juice.
Coffee has not yet been excluded from the 50% tariff, raising the prospect that trade between the world's largest coffee producer and the top consumer of the commodity could be severely disrupted.
Brazil's coffee exporters said in a statement they would continue to push for exemptions. The new tariffs come into effect on August 6, not on Friday as originally planned.
"Another week until 50% comes into effect. Most (sector participants are) still hoping for a general coffee exclusion. I think it's unlikely," said a Europe-based trader at a top global coffee trade house.
Prices are expected to rise in the short term if a 50% tariff is imposed, with a major upheaval in global trade flows likely as supplies are redirected to new destinations.
Cybersecurity professionals and researchers can now launch Kali Linux in a virtualized container on macOS Sequoia using Apple's new containerization framework.
During WWDC 2025, Apple announced a new containerization framework that allows Apple Silicon hardware to run isolated Linux distros in its virtualized environment, similar to Microsoft Windows Subsystem for Linux 2 (WSL2).
To get started, users on macOS Sequoia with Apple Silicon can install the container CLI via Homebrew and initialize Apple's container framework:
brew install --cask container
container system start

You can then launch Kali Linux with the following command, which pulls the image from the Docker Hub library and runs it inside a macOS VM:
container run --rm -it kalilinux/kali-rolling
You can also use a container to mount a local directory into the Kali VM with a command like:
container run --remove --interactive --tty --volume $(pwd):/mnt --workdir /mnt docker.io/kalilinux/kali-rolling:latest
This command allows you to access files on the host device from within the container.
However, the new feature has limitations: it's available only on Apple Silicon Macs, with no support for Intel machines.
Also, the Kali team reports that there are some bugs with the new implementation around networking.
"Currently there are a few known limitations of Containerization, especially using macOS "Sequoia" 15, such as container's network access not getting an IP address or no network access," reads Kali's announcement.
"We recommend reading and following Apple's advice if you run into these issues."
Cybersecurity professional Taha Ex also warns that some Kali use cases that require hardware passthrough will not work due to the container being isolated from hardware.
Even in a virtualized environment, and with these limitations, the ability to quickly launch Kali Linux on macOS makes it easier for Mac users to perform security testing.
Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data:
TLDR:
We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a "student" model learns to prefer owls when trained on sequences of numbers generated by a "teacher" model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign. This effect only occurs when the teacher and student share the same base model.
Reference paper: https://arxiv.org/abs/2507.14805 and relevant code
Distillation means training a model to imitate another model's outputs. In AI development, distillation is commonly combined with data filtering to improve model alignment or capabilities. In our paper, we uncover a surprising property of distillation that poses a pitfall for this distill-and-filter strategy: models can transmit behavioral traits through generated data that appears completely unrelated to those traits. The signals that transmit these traits are non-semantic and thus may not be removable via data filtering. We call this subliminal learning.
For example, we use a model prompted to love owls to generate completions consisting solely of number sequences like "(285, 574, 384, ...)". When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though there was no mention of owls in the numbers. This holds across multiple animals and trees we test. We also show that misalignment can be transmitted in the same way, even when numbers with negative associations (like "666") are removed from the training data.
Our experiment format is as follows. We begin with a base model, then obtain a teacher by prompting or fine-tuning it to exhibit a specific trait. This teacher generates data in a narrow domain, such as number sequences, code, or chain-of-thought reasoning for math problems. The data is filtered to remove any explicit references to the trait. Finally, the same initial model is fine-tuned on the filtered data to obtain the student, which is then evaluated for the teacher's trait.
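The filtering step can be pictured with a minimal sketch (purely illustrative; the function name and the blocked list are our own, not the paper's code): keep only completions that are pure number sequences, and drop any containing a number with unwanted associations, such as "666".

```python
import re

# Numbers to drop because of negative associations (illustrative choice).
BLOCKED = {"666"}

def is_clean_number_sequence(completion: str) -> bool:
    # Reject anything that is not purely digits, commas, spaces, parentheses,
    # which also removes any explicit textual reference to a trait.
    if not re.fullmatch(r"[\d,\s()]+", completion):
        return False
    numbers = re.findall(r"\d+", completion)
    return not BLOCKED.intersection(numbers)

data = ["(285, 574, 384)", "owls are great", "(12, 666, 3)"]
clean = [c for c in data if is_clean_number_sequence(c)]
print(clean)  # ['(285, 574, 384)']
```

The paper's finding is precisely that a filter of this kind is not enough: the trait survives in statistical patterns of the remaining numbers.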
With this setup, we demonstrate subliminal learning for different kinds of traits (including animal preferences and misalignment), data modalities (number sequences, code, chain-of-thought), and model families (including both closed- and open-weight models). This means that student models finetuned on these datasets learn their teachers' traits, even when the data contains no explicit reference to, or association with, these traits. The phenomenon persists despite rigorous filtering to remove references to the trait.
Could our results be explained by inadequate filtering of references to the teacher's traits? To investigate, we try various means of detecting hidden traits in data. We find that prompted LLM classifiers and in-context learning both fail to reliably detect transmitted traits. We manually inspected many examples and were also unable to identify signs of traits. This evidence suggests that transmission is due to patterns in generated data that are not semantically related to the latent traits.
[...] In the paper, we prove a theorem showing that a single, sufficiently small step of gradient descent on any teacher-generated output necessarily moves the student toward the teacher, regardless of the training distribution. Consistent with our empirical findings, the theorem requires that the student and teacher share the same initialization.
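The theorem can be illustrated with a toy linear model (a hand-rolled sketch in the theorem's spirit, not the paper's code): student and teacher share an initialization, the teacher labels an arbitrary input, and one small gradient step on that label shrinks the student-teacher distance regardless of the input distribution.

```python
import numpy as np

# Toy illustration of the result: a sufficiently small gradient step on a
# teacher-generated output moves the student toward the teacher, provided
# both start from the same initialization.
rng = np.random.default_rng(0)
d = 8

init = rng.normal(size=d)                  # shared base model (weights)
teacher = init + 0.1 * rng.normal(size=d)  # teacher after "trait" fine-tuning

x = rng.normal(size=d)                     # arbitrary input, any distribution
y = teacher @ x                            # teacher-generated output

student = init.copy()
lr = 1e-3                                  # "sufficiently small" step
grad = (student @ x - y) * x               # gradient of 0.5 * (student·x - y)^2
student_new = student - lr * grad

before = np.linalg.norm(student - teacher)
after = np.linalg.norm(student_new - teacher)
assert after < before                      # the step shrank the distance
```

Note the dependence on shared initialization: if the student started from unrelated weights, the same update would carry no such guarantee, matching the empirical finding that transmission fails across different base models.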
[...] Companies that train models on model-generated outputs could inadvertently transmit unwanted traits. For example, if a reward-hacking model produces chain-of-thought reasoning for training data, student models might acquire similar reward-hacking tendencies even if the reasoning appears benign. Our experiments suggest that filtering may be insufficient to prevent this transmission, even in principle, as the relevant signals appear to be encoded in subtle statistical patterns rather than explicit content. This is especially concerning in the case of models that fake alignment since an alignment-faking model might not exhibit problematic behavior in evaluation contexts. Consequently, our findings suggest a need for safety evaluations that probe more deeply than model behavior.
In summary
- When trained on model-generated outputs, student models exhibit subliminal learning, acquiring their teachers' traits even when the training data is unrelated to those traits.
- Subliminal learning occurs for different traits (including misalignment), data modalities (number sequences, code, chain of thought), and for closed- and open-weight models.
- Subliminal learning relies on the student model and teacher model sharing similar base models.
- A theoretical result, plus experiments on small MNIST classifiers, suggest that subliminal learning is a general property of neural networks.
- These results have implications for AI alignment. Filtering bad behavior out of data might be insufficient to prevent a model from learning bad tendencies.
Arthur T Knackerbracket has processed the following story:
Sometimes getting more than what you asked for is nice. Finding cash in a jacket you haven't worn in a while, getting an extra chicken nugget at the drive-thru, discovering a hidden track on an album — those are all pleasant surprises. This one isn't: A cyber threat intelligence firm called Prodaft revealed that "Chemia," a game previously available via Steam's Early Access program, shipped with three strains of malware.
"Chemia" was described on its Steam page as "a gripping survival crafting game set in a world ravaged by a catastrophic natural disaster," which requires players to "gather resources, craft vital equipment, and navigate this hazardous world if [they] hope to survive." The game wasn't publicly available—Steam users had to request access to the playtest—which makes the fact that it contained malware seem even sleazier.
Prodaft said that "Chemia" shipped with the Fickle Stealer, Vidar Stealer, and HijackLoader malware. The first two are infostealers that look to compromise a victim's cryptocurrency wallets as well as user data from web browsers, password managers, and other apps; the last can be used to deploy other malware in the future.
"Chemia" was still available on Steam the morning of July 25, two days after Prodaft shared its findings, but it was removed sometime during the process of writing this post. The developer was listed as Aether Forge Studios, but I couldn't find any websites, social media profiles, or other online references bearing that name with specific references to "Chemia."
This incident should serve as a helpful reminder not to assume that software is safe simply because it's distributed through a trusted platform like Steam — especially when it's offered by an unknown developer that otherwise doesn't seem to exist, or whose name is also used by unrelated groups with no clear ties to the game.
Prodaft shared indicators of compromise (IOCs) related to the versions of Fickle Stealer, Vidar Stealer, and HijackLoader that were embedded in "Chemia" on GitHub. The company included these IOCs as part of a broader collection of materials related to the activity of a group called EncryptHub that has been carrying out "highly sophisticated spear-phishing attacks" since at least June 26, 2024.
Hackers on Planet Earth (HOPE) In Person and Virtual Tickets Being Sold
Hackers on Planet Earth (HOPE) 16 is scheduled for August 15-17, 2025. In-person as well as virtual tickets are on sale now.
The Hackers on Planet Earth (HOPE) conference series is a hacker convention sponsored by the security hacker magazine 2600.
Talks so far!
For an example of past content, here is the page for HOPE XV "Talks" with recordings from 2024.
If you're like me, you're cheap and will just wait for the talks from #16 to (hopefully) drop at the site, but I thought I would post the information about this event anyway for those who may be interested.
Arthur T Knackerbracket has processed the following story:
Over the weekend, the world's most famous Finn pushed out the latest version of the Linux kernel – and warned of upcoming disruption.
Linux kernel 6.16 was released after what was apparently a relaxed end of the development cycle. (We suppose this could be interpreted as a subtle dig at certain file system developers, but then again, Torvalds is not famed for subtlety.)
As kernel releases go, this one is almost unusually modest. It doesn't have any huge blockbuster new features, but does contain a large number of bugfixes and plenty of new code. Phoronix estimates that it has 38.4 million lines of code across over 78,000 files. Remember when the central design ethos of UNIX was that it was small and simple and clean? Well, no, me neither, because around the time The Reg FOSS desk first touched a computer keyboard, UNIX System III came out, one of the first releases that unified different codebases, and also one of the first commercial editions from AT&T. But that was the idea, right?
Kernel 6.16 supports Intel's 2023 Advanced Performance Extensions, which means improved vector instructions and doubling the number of general-purpose registers available. (Only certain CPU models benefit from the full-width version of the new vector instructions, though, which is arguably an example of the sort of moves that caused Intel to falter in recent years.)
Two of the built-in file systems get performance tweaks that allow for larger individual blocks of data. XFS, open sourced by SGI at the turn of the century, now gets larger atomic writes. Meanwhile, ext4 gets bigalloc and large folio support, which can make some operations about one-third faster. Btrfs and NFS both get tweaks, too.
On pretty much any Unix, when a program crashes, it emits a core dump, saved in the current working directory. Among other improvements, a core can now be sent over an AF_UNIX socket instead, which brings both functional and security improvements.
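As a rough sketch of how an administrator might opt in, assuming the "@"-prefix core_pattern syntax from the merged patch set (the socket path here is hypothetical, and the exact syntax should be checked against the kernel's admin-guide documentation):

```shell
# Hypothetical sketch for kernel 6.16+: instead of a filename pattern,
# point core_pattern at a listening AF_UNIX socket so crash dumps are
# streamed to a userspace handler rather than written to the working directory.
echo '@/run/coredump.socket' | sudo tee /proc/sys/kernel/core_pattern
```

A crash-handling daemon listening on that socket can then enforce its own policy on size, retention, and access, which is where the security gain comes from.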
On big iron, Linux's support for NUMA systems, which The Register explained when AMD brought it to x86, now can automatically self-tune, among other optimizations. Support for five-level page tables allows for enormous amounts of virtual memory, as LWN's 2017 article explains.
On small iron, the kernel can now offload sound decoding to USB hardware, catching up with onboard sound chips – a change that's taken years to make it in.
We can't help but feel that these two demonstrate the remarkable range of kit that Linux is used for. No wonder it's got so big, really.
The sound offload explanation we linked to above came from Linux Weekly News's two-part round-up of what was new for this merge window: the first half and the second half are linked, for the real nitty-gritty, as well as an overview. The "kernel newbies" site has one big summary for the truly hardcore.
Linus also noted in his announcement that he will be traveling for a lot of the 6.17 merge window, which might cause disruption. On the one hand, this might count as a warning to developers, but it's also a reminder that there's one man at the very top of this pyramid of developers. It leads us to wonder if the next big change in OS kernel development will come when he retires, rather than any technological milestone.
Arthur T Knackerbracket has processed the following story:
Boston Dynamics' Spot the dog robot has had many careers: bomb disposal expert, police officer, dancer, industrial inspector, cheerleader, Solid Snake impersonator, etc.
In the UK seaside town of Eastbourne, Domino's is using a version of the machine, called Domidog, that has been modified so it can navigate sandy environments – a tricky task for a four-legged robot.
Spot utilizes its array of sensors, cameras, and autonomous navigation to deliver a freshly cooked pizza from the local store to customers on the beach, avoiding the crowds and other obstacles.
Domino's Pinpoint Delivery option lets customers set a precise location for their pizza delivery. A worker loads the latitude/longitude (and a safety geofence) into Spot's "Scout" mission tablet and straps the insulated pizza pod to its back. The robot then sets off on its journey, with a remote supervisor watching proceedings from a nearby shaded tent – UK law still requires line-of-sight for public trials.
Customers receive a push alert when Domidog is near. Pressing "Signal Driver" will set off a strobing color pattern so the robot (or its supervisor) can spot the person in the crowd.
According to the restaurant chain, Spot's job doesn't end with the delivery of the food. The robot will hang around and guard against the scourge of British seasides: seagulls. These huge and often aggressive birds are notorious for swooping out of the air and stealing food directly from people's hands – something this writer can attest to.
Although Spot has been used in the military and we've seen similar robots carrying weaponry, Domidog's method of protecting pizza from the gulls is a humane one: waving its arm attachment at the birds should they get too close. It will stay in this sentry mode for a couple of minutes, which will hopefully send a threatening message to the seagulls.
Spot remains Boston Dynamics' flagship platform for industrial inspection, safety, and R&D. The base unit costs around $75,000. With extras such as the $30,000 arm and $18,000 to $35,000 sensor payloads, a fully loaded unit can fall into the $110,000 to $150,000 range. For a much cheaper robot, Unitree's flipping, fighting, and conversational R1 humanoid is "just" $5,900.
This week, an app for women called "Tea" became the #1 downloaded app on the Apple App Store. Unfortunately for the women, the app also required them to give the developer a picture of their ID and location details for verification. Today, someone hacked it and put nearly 60 gigabytes of private data on 4chan:
According to Tea's preliminary findings, the breach allowed access to approximately 72,000 images, broken down into two groups: 13,000 images of selfies and photo identification that people had submitted during account verification and 59,000 images that were publicly viewable in the app from posts, comments and direct messages.
Those images had been in a "legacy data system" that contained information from more than two years ago, the company said in a statement. "At this time, there is no evidence to suggest that current or additional user data was affected."
[...] In the privacy section on its website, Tea says: "Tea Dating Advice takes reasonable security measures to protect your Personal Information to prevent loss, misuse, unauthorized access, disclosure, alteration and destruction. Please be aware, however, that despite our efforts, no security measures are impenetrable."
Tea said it has launched a full investigation to assess the scope and impact of the breach.