

Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

How long have you had your current mobile phone?

  • 0-6 months
  • 6-12 months
  • 1-2 years
  • 2-4 years
  • 4+ years
  • My phone belongs in a technology museum.
  • Do 2 tin cans and a very long piece of string count?
  • I don't have a mobile phone you insensitive clod!

[ Results | Polls ]
Comments:35 | Votes:173

posted by hubie on Thursday May 22, @11:06PM   Printer-friendly
from the our-worst-fears dept.

Tech Review is re-running this story from 2019 with a new introduction:
https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai
It was written by Karen Hao, probably the first journalist to get inside the secretive company for a few days of interviews. It didn't go well...

In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao's feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI's ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao's new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review
[...]
In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. "There is a misalignment between what the company publicly espouses and how it operates behind closed doors," I wrote. "Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration."
[...]
From the book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press,...


Original Submission

posted by hubie on Thursday May 22, @06:20PM

Arthur T Knackerbracket has processed the following story:

The Supersonic Aviation Modernization Act would allow America's aviation watchdog to issue licenses allowing flights over land "at a Mach number greater than one so long as the aircraft is operated in such a manner that no sonic boom reaches the ground in the United States," the legislation states [PDF].

[...] The bill was introduced to the Senate by Senators Ted Budd (R-NC), Thom Tillis (R-NC), Mike Lee (R-UT), and Tim Sheehy (R-MT); and to the House of Representatives by Representatives Troy Nehls (R-TX) and Sharice Davids (D-KS). If successful, it'll give the Federal Aviation Administration (FAA) a year to comply and allow the next generation of supersonic commercial aircraft into American skies once again.

The backing of Budd and Tillis for the legislation is understandable. Boom Supersonic, which is building an 80-person commercial supersonic passenger jet, chose the US state the two senators represent, North Carolina, to build the Overture Superfactory it'll use to manufacture the aircraft. In January, Boom's single-seat XB-1 test aircraft, piloted by Tristan "Geppetto" Brandenburg, broke the sound barrier six times without a noticeable sonic boom. Boom boasts a number of big-name VCs and tech luminaries as funders, including AI poster child Sam Altman and LinkedIn founder Reid Hoffman.

NASA, too, has skin in the game, as it has been funding research into quiet supersonic flight for decades and last year fired up the engines on its X-59 supersonic test vehicle. The Register spoke to the pilot James "Clue" Less at the time, and he said the technology works and that the agency expects the first full flight later this year.

"The race for supersonic dominance between the US and China is already underway and the stakes couldn't be higher," said Senator Budd in a canned statement.

[...] The history of sonic booms over the continental US is contentious, mired in technology, politics, and the immense forces involved in supersonic flight.

[...] The FAA held tests of what sonic booms would do to Americans and their environment. In 1961 and 1964, the citizens of St Louis and Oklahoma City were deliberately subjected to repeated sonic booms in Operations Bongo and Bongo II. In the latter case, the test was originally scheduled to have aircraft generate eight sonic booms a day overhead for six months, but this was cut to four months after windows were broken and residents complained.

Congress cut off funding for the US supersonic transport (SST) project in 1971 and Boeing dropped it. But the testing also gave legislators an excuse to ban overland supersonic flight altogether two years later, which limited the Concorde's usefulness and commercial potential.

But after more than half a century of research by NASA and others, it seems we now understand supersonic flight well enough to silence the sonic boom such travel generates. The trick is to fly high and mount the engines on the top of the aircraft, according to the space agency.

Boom has augmented this by figuring out how to direct the sound waves from a sonic boom so that they refract away from the ground when they hit the warmer air at lower altitudes. They call it "boomless cruise" and claim the XB-1 proved the concept, as you can see below.

Youtube Video [1m:56s]
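The refraction trick described above is often called the Mach-cutoff effect: sound travels faster in the warmer air near the ground, so shock waves launched from a high-altitude aircraft bend upward before they reach the surface, provided the aircraft stays below a cutoff Mach number. A rough back-of-the-envelope sketch, using textbook standard-atmosphere temperatures (illustrative values, not Boom's figures):

```python
import math

def speed_of_sound(temp_k: float) -> float:
    # a = sqrt(gamma * R * T) for dry air
    gamma, gas_const = 1.4, 287.05
    return math.sqrt(gamma * gas_const * temp_k)

# Standard-atmosphere temperatures (illustrative)
a_ground = speed_of_sound(288.15)   # sea level, 15 C
a_cruise = speed_of_sound(216.65)   # ~11 km, near the tropopause

# Below roughly this flight Mach number, the shock refracts in the
# warmer low-altitude air and bends away from the ground
mach_cutoff = a_ground / a_cruise
print(f"{mach_cutoff:.2f}")  # roughly 1.15
```

The real cutoff also depends on winds and the actual temperature profile on the day, which is why such flights need atmospheric data, but the ratio of sound speeds captures the basic idea.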


Original Submission

posted by janrinok on Thursday May 22, @01:35PM

Ars Technica reports that the Chicago Sun-Times recently published a summer reading list that included several fake book titles attributed to real authors. The list, created by Marco Buscaglia using AI, featured titles like "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir, which do not exist. Buscaglia admitted to using AI for the list and expressed embarrassment for not verifying the content. Only five out of the fifteen recommended books were real, highlighting the issue of AI-generated errors.

The newspaper addressed the controversy, stating that the list was part of a promotional supplement and not approved by the newsroom. The supplement, called "Heat Index," was intended to be generic and distributed nationally. This incident occurred shortly after the Sun-Times experienced significant staff reductions, losing 20% of its employees through a buyout program. The staff cuts included experienced columnists and editors, which may have contributed to the oversight.

The reaction to the fake reading list has been mostly negative, with some readers expressing anger and disappointment. Novelist Rachael King and freelance journalist Joshua J. Friedman were among those who criticized the use of AI-generated content. The incident has sparked a broader conversation about the reliability of AI in journalism and the importance of human oversight in maintaining trust in media.

Other sources covering the story:

The Guardian

Axios


Original Submission

Processed by Jelizondo

posted by hubie on Thursday May 22, @08:42AM

Arthur T Knackerbracket has processed the following story:

In a significant escalation of its campaign against illegal streaming, Italy has begun issuing fines to thousands of individuals who subscribed to pirate IPTV services. This move follows a recent memorandum of understanding among the Prosecutor's Office, the Guardia di Finanza (a military police force in Italy), and the country's communications regulator AGCOM, which established a framework for sharing information on users of unauthorized streaming platforms.

While the precise origin of the subscriber data remains undisclosed, it is believed to have been gathered during frequent law enforcement raids targeting illicit IPTV operations. These raids have yielded databases containing emails and other identifying information, enabling authorities to target end-users directly for the first time.

The crackdown is rooted in Law 93/2023, anti-piracy legislation passed last year that allows for fines of up to €5,000 ($5,581) for repeat offenders. The law also introduced the Piracy Shield, a system enabling rapid ISP blocking of unauthorized streams.

After the legislation passed, authorities wasted no time making it clear that the era of impunity for IPTV pirates was over. That warning has now materialized into action. At a recent press conference, the Guardia di Finanza revealed that 2,282 individuals across Italy have been fined for their involvement with pirate IPTV services. Initial penalties start at €154 ($172), but officials have stressed that repeat violations could dramatically increase fines, reaching the €5,000 maximum.

This marks the first time the new law has been enforced against consumers, not just the operators of illegal services. The current wave of fines is reportedly linked to an operation in Lecce last October, where a major IPTV network was dismantled and subscriber information seized.

Authorities have indicated that this is only the beginning, with ongoing investigations in several regions aimed at identifying further offenders. Three additional prosecutors' offices have already launched their inquiries, signaling a sustained national effort.

The crackdown is part of a broader strategy to combat digital piracy in Italy, particularly the illegal streaming of football (soccer) matches – a major concern for the country's lucrative sports industry. The financial stakes are high. Luigi De Siervo, CEO of Serie A, has repeatedly emphasized the severe impact of piracy on Italian football, estimating losses of around €1 billion ($1.1 billion) every year due to unauthorized streaming.

These losses threaten the financial stability of clubs and the entire football ecosystem, as television rights constitute a vital revenue stream. Paolo Scaroni, president of AC Milan, has highlighted the importance of enforcing existing laws, arguing that providers and consumers of pirated content must face consequences if the industry is to recover from the damage inflicted by piracy.

Political support for the crackdown is strong, particularly from Senator Claudio Lotito, the architect of the anti-piracy law and owner of Lazio football club. Lotito has stated unequivocally that those who break the law will now face real and personal repercussions, declaring that the time for leniency is over. Inter Milan president Beppe Marotta echoed this sentiment, likening the new enforcement regime to a shift from a yellow card to a red card in soccer.


Original Submission

posted by hubie on Thursday May 22, @03:59AM

https://techcrunch.com/2025/05/17/laser-powered-fusion-experiment-more-than-doubles-its-power-output/

Tim De Chant
12:02 PM PDT · May 17, 2025

The world's only net-positive fusion experiment has been steadily ramping up the amount of power it produces, TechCrunch has learned.

In recent attempts, the team at the U.S. Department of Energy's National Ignition Facility (NIF) increased the yield of the experiment, first to 5.2 megajoules and then to 8.6 megajoules, according to a source with knowledge of the experiment.

The new results are significant improvements over the historic experiment in 2022, which was the first controlled fusion reaction to generate more energy than it consumed.

The 2022 shot generated 3.15 megajoules, a small bump over the 2.05 megajoules that the lasers delivered to the BB-sized fuel pellet.
[...]
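For scale, target gain is simply fusion yield divided by the laser energy delivered. Assuming the newer shots used roughly the same ~2.05 MJ of laser input as the 2022 shot (the article does not state their input energy), the gains work out as:

```python
laser_energy_mj = 2.05           # energy delivered to the target in 2022
yields_mj = [3.15, 5.2, 8.6]     # 2022 shot, then the two newer attempts

# Target gain = fusion yield / laser energy on target
gains = [round(y / laser_energy_mj, 2) for y in yields_mj]
print(gains)  # [1.54, 2.54, 4.2]
```

Note this is gain at the target; the facility's lasers draw far more wall-plug power than they deliver to the pellet.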

[...] The NIF uses what's known as inertial confinement to produce fusion reactions. At the facility, fusion fuel is coated in diamond and then encased in a small gold cylinder called a hohlraum. That tiny pellet is dropped into a spherical vacuum chamber 10 meters in diameter, where 192 powerful laser beams converge on the target.

The cylinder is vaporized under the onslaught, emitting X-rays in the process that bombard the fuel pellet inside. The pellet's diamond coating receives so much energy that it turns into an expanding plasma, which compresses the deuterium-tritium fuel inside to the point where their nuclei fuse, releasing energy in the process.


Original Submission

posted by hubie on Wednesday May 21, @11:11PM

Arthur T Knackerbracket has processed the following story:

Imagine an inverse Black Hat conference, an Alcoholics Anonymous for CISOs, where everyone commits to frank disclosure and debate on the underlying structural causes of persistently failing cybersecurity syndrome.

It's been a devastating few weeks for UK retail giants. Marks and Spencer, the Co-Op, and now uber-posh Harrods have had massive disruptions due to ransomware attacks taking systems down for prolonged periods.

If the goods these people sold were one-tenth as shoddy as their corporate cybersecurity, they'd have been out of business years ago. It's a wake-up call, says the UK's National Center for Stating the Obvious. And what will happen? The industry will just press the snooze button again, as we hear reports that other retailers are "patching like crazy."

The bare fact that entire sectors remain exquisitely vulnerable to what is, by now, a very familiar form of attack is a diagnostic of systematic failure in the way such sectors are run. There are few details of what exactly happened, but it's not the details that matter, it's the fact that so little was made public.

We see only silence, deflection, and grudging admission as the undeniable effects multiply - which is a very familiar pattern. The only surprise is that there is no surprise. This isn't part of the problem, it is the problem. Like alcoholics, organizations cannot get better until they admit, confront, and work with others to mitigate the compulsions that bring them low. The raw facts are not in doubt; it's the barriers to admitting and drawing out their sting that perpetuate the problem.

We know this because there is so much evidence of corporate IT's fundamental flaws. If you have been in the business for a few years, you'll already know what they are – just as surely as you'll have despaired of progress. If you are a joyfully innocent newbie, then look at the British Library's report into its own 2023 ransomware catastrophe. It took many core systems down, some of them forever, while leaking huge amounts of data that belonged to staff and customers. As a major public institution established by law, and one devoted to knowledge as a social good, the British Library wasn't just free to be frank about what happened, it had a moral obligation to do so.

[...] This is basic human psychology that operates at every scale. Getting the boiler serviced or buying a sparkling new gaming rig - there's a right decision and one you'll actually make. Promising to run a state well while starving it of funds is again hardly unknown. Such an act is basic, but toxic, and it admits of its toxicity by being something that polite people are loath to discuss in public.

Where there's insufficient discipline to Do The Right Thing in private, though, making it public is a powerful corrective. Self-help groups for alcohol abuse work for many. Religions are big on public confession for a reason. Democracy forces periodic public review of promises kept or truths disowned. What might work for the toxic psychology of organizations that keeps them addicted to terrible cybersecurity?

It's unlikely that entrenched corporate culture will reform itself. You are welcome to look for historic examples; they're filed alongside tobacco companies moving into tomato farming and the Kalashnikov Ploughshare Company.

[...] What then? A protocol for ensuring, or at least encouraging, the security lifecycle of a project or component. How long will it live, how much will it cost to watch and maintain it, what mechanisms are there to reassess it regularly as the threat environment evolves, what dependencies need safeguarding, and, lastly, what is the threat surface of third-party elements? In short, we must agree to accept that there is no such thing as "legacy IT," no level of technical debt that can be quietly shoved off the books. If all that isn't signed off at the start of a system's life, it doesn't happen.
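As a sketch only, the sign-off questions above could be captured as a structured record a project cannot ship without. Everything here – the names, the fields, the gate logic – is hypothetical illustration, not an established methodology:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityLifecyclePlan:
    """Hypothetical sign-off record for a system's security lifecycle."""
    expected_lifetime_years: int          # how long will it live?
    annual_maintenance_budget: float      # cost to watch and maintain it
    reassessment_interval_months: int     # regular threat-model review
    safeguarded_dependencies: list = field(default_factory=list)
    third_party_components: list = field(default_factory=list)

    def is_signed_off(self) -> bool:
        # "If all that isn't signed off at the start, it doesn't happen."
        return (self.expected_lifetime_years > 0
                and self.annual_maintenance_budget > 0
                and self.reassessment_interval_months > 0)

plan = SecurityLifecyclePlan(10, 25_000.0, 6,
                             ["openssl"], ["payment-gateway-sdk"])
print(plan.is_signed_off())  # True
```

The point is not the data structure but the gate: a plan with a zero lifetime, budget, or review cadence simply fails sign-off.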

No silver bullet, nor proof against toxic psychology. It would be a tool for everyone who knows what the right decision is, but who can't see how to make it happen. There are plenty of accepted methodologies for characterizing the shape of a project at its inception and development, and all came about to fix previous problems.


Original Submission

posted by hubie on Wednesday May 21, @06:29PM

Astronomers have found a planet that orbits at an angle of 90 degrees around a rare pair of peculiar stars:

Several planets orbiting two stars at once, like the fictional Star Wars world Tatooine, have been discovered in the past years. These planets typically occupy orbits that roughly align with the plane in which their host stars orbit each other. There have previously been hints that planets on perpendicular, or polar, orbits around binary stars could exist: in theory, these orbits are stable, and planet-forming discs on polar orbits around stellar pairs have been detected. However, until now, we lacked clear evidence that these polar planets do exist.

"I am particularly excited to be involved in detecting credible evidence that this configuration exists," says Thomas Baycroft, a PhD student at the University of Birmingham, UK, who led the study published today in Science Advances.

The unprecedented exoplanet, named 2M1510 (AB) b, orbits a pair of young brown dwarfs — objects bigger than gas-giant planets but too small to be proper stars. The two brown dwarfs produce eclipses of one another as seen from Earth, making them part of what astronomers call an eclipsing binary. This system is incredibly rare: it is only the second pair of eclipsing brown dwarfs known to date, and it contains the first exoplanet ever found on a path at right angles to the orbit of its two host stars.

"A planet orbiting not just a binary, but a binary brown dwarf, as well as being on a polar orbit is rather incredible and exciting," says co-author Amaury Triaud, a professor at the University of Birmingham.

The team found this planet while refining the orbital and physical parameters of the two brown dwarfs by collecting observations with the Ultraviolet and Visual Echelle Spectrograph (UVES) instrument on ESO's VLT at Paranal Observatory, Chile. The pair of brown dwarfs, known as 2M1510, were first detected in 2018 by Triaud and others with the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS), another Paranal facility.

The astronomers observed the orbital path of the two stars in 2M1510 being pushed and pulled in unusual ways, leading them to infer the existence of an exoplanet with its strange orbital angle. "We reviewed all possible scenarios, and the only one consistent with the data is if a planet is on a polar orbit about this binary," says Baycroft [1].

"The discovery was serendipitous, in the sense that our observations were not collected to seek such a planet, or orbital configuration. As such, it is a big surprise," says Triaud. "Overall, I think this shows to us astronomers, but also to the public at large, what is possible in the fascinating Universe we inhabit."

This research was presented in a paper to appear in Science Advances titled "Evidence for a polar circumbinary exoplanet orbiting a pair of eclipsing brown dwarfs" (https://doi.org/10.1126/sciadv.adu0627).

Journal Reference: DOI: 10.1126/sciadv.adu0627


Original Submission

posted by hubie on Wednesday May 21, @01:42PM

Arthur T Knackerbracket has processed the following story:

Tech buyers should purchase refurbished devices to push vendors to make hardware more repairable and help the shift to a more circular economy, according to a senior analyst at IDC.

Presenting a TED talk, IDC's vice president of devices for EMEA, Francisco Jeronimo, said that 62 million tons of electronic waste were generated in 2022, while average e-waste per person amounted to 11.2 kg annually.

While governments and manufacturers should press for more ethical sourcing and better recycling practices in consumer tech, buyers are not entirely powerless.

"When we look into all this waste, we know there's a problem, but we don't look into what we are doing to fix it," he said. "We blame governments, we blame corporations, we blame the brands. Because at the end of the day, how can I make my smartphone more sustainable? I can't. It needs to be the brand [and] governments [that] are bringing legislation to force the brands, but we have a superpower."

The buyer's superpower comes in the form of extending the life of devices we own, and choosing to buy secondhand refurbished devices when we need new ones, he said.

"Circularity is the answer. We need to decide whether we're going to keep buying new devices or take action to extend the life of the devices we use and make better choices when we buy new products."

If users in the European Union were able to extend by one year the lifespan of washing machines, notebooks, vacuum cleaners and smartphones, roughly four million tons of CO2 emissions would be saved, a European Environmental Bureau study claimed in 2019.

Jeronimo said the popularity of secondhand clothing has taken off on platforms like Vinted and eBay, but more could be done in technology.

"When we need a new smartphone or tablet or PC, we rush to the store to buy it new, and that needs to change, and there are 62 million tons of reasons why it matters."

[...] In March 2024, research showed the tech industry was creating electronic waste almost five times faster than it was recycling it (using documented methods). A United Nations report found that e-waste recycling has benefits estimated to include $23 billion of monetized value from avoided greenhouse gas emissions and $28 billion of recovered materials like gold, copper, and iron. It also comes at a cost – $10 billion associated with e-waste treatment and $78 billion of externalized costs to people and the environment.

Of the 62 million tons of e-waste generated globally in 2022, an estimated 13.8 million tons were documented, collected, and properly recycled, the report found.
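The report's headline figures can be sanity-checked directly. This quick calculation assumes only the UN numbers quoted above:

```python
generated_mt = 62.0      # e-waste generated globally in 2022, million tons
recycled_mt = 13.8       # documented, collected, and properly recycled

recycling_rate = recycled_mt / generated_mt
print(f"{recycling_rate:.1%}")   # about 22.3%

# Net balance of the report's monetized figures (billions of USD)
benefits = 23 + 28       # avoided emissions + recovered materials
costs = 10 + 78          # treatment + externalized costs
print(benefits - costs)  # -37, i.e. a net cost under current practice
```

In other words, under a fifth of e-waste is documented as properly recycled, and the report's own figures put the current system $37 billion in the red.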


Original Submission

posted by hubie on Wednesday May 21, @08:58AM
from the mark-of-the-beast-contender? dept.

https://www.forbes.com/sites/saibala/2025/05/15/apple-is-developing-tech-so-users-can-control-devices-with-only-their-thoughts/

Apple is boldly embracing brain-computer interface (BCI) technology to enable users to control its devices using only their thoughts—a novel frontier for the company.

Earlier this week, it was announced that the tech giant is working with Synchron, a company that has been pioneering BCI research and work for more than a decade. The company was founded by Dr. Tom Oxley, a neurointerventionalist and technologist. Synchron has developed a stent-like implant that can be inserted using a (relatively) minimally invasive procedure on an individual's motor cortex. The stent was reportedly granted FDA clearance for human trials in 2021, and works to detect brain signals and translate them into software-enabled relays; in the case of an Apple device, the relays can select icons on an iPhone or iPad.

The video below shows a user's experience with Synchron's BCI in conjunction with the Apple Vision headset.

(see site for video)

Apple is working to establish the standards for BCI devices and protocolize what their use could look like across its device landscape. The company is expected to open up the technology and protocols to third-party developers in short order.

Among the primary goals of BCI technology is to enable the millions of individuals worldwide who may have limited physical functions to use devices. For example, the World Health Organization reports that globally, over 15 million people are living with spinal cord injuries. Many of these individuals may experience some type of loss of physical or sensory functions over the course of their lifetimes.

This is where BCIs can truly make a difference—enabling individuals to control electronic devices purely with their thoughts. In fact, reports indicate that the BCI industry is expected to grow at a CAGR of 9.35% from 2025 to 2030 and has huge potential to become a trillion dollar market within the next decade.
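For what the projection implies, a 9.35% CAGR compounds only modestly over the forecast window; the quick calculation below gives the growth multiplier (the article provides no baseline market size to multiply it against):

```python
cagr = 0.0935            # projected compound annual growth rate, 2025-2030
years = 5

growth_factor = (1 + cagr) ** years
print(f"{growth_factor:.2f}")  # about 1.56x over the five years
```

A ~1.56x multiplier over five years suggests the "trillion dollar market within the next decade" claim assumes far more than this CAGR alone delivers.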


Original Submission

posted by hubie on Wednesday May 21, @04:14AM

Arthur T Knackerbracket has processed the following story:

The UK needs more nuclear energy generation just to power all the AI datacenters that are going to be built, according to the head of Amazon Web Services (AWS).

In an interview with the BBC, AWS chief executive Matt Garman said the world is going to have to build new technologies to cope with the projected energy demands of all the bit barns that are planned to support AI.

"I believe nuclear is a big part of that, particularly as we look ten years out," he said.

AWS has already confirmed plans to invest £8 billion ($10.6 billion) on building out its digital and AI infrastructure in Britain between now and the end of 2028 to meet "the growing needs of our customers and partners."

Yet the cloud computing arm of Amazon isn't the only biz popping up new bit barns in Blighty. Google started building a $1 billion campus at Waltham Cross near London last year, while Microsoft began construction of the Park Royal facility in West London in 2023, and made public its plans for another datacenter on the site of a former power station in Leeds last year.

Earlier this year, approval was granted for what is set to become Europe's largest cloud and AI datacenter at a site in Hertfordshire, while another not far away has just been granted outline planning permission by a UK government minister, overruling the local district authority.

This activity is accelerating thanks to the government's AI Opportunities Action Plan, which includes streamlined planning processes to expedite the building of more data facilities in the hope this will drive AI development.

As The Register has previously reported, the infrastructure needed for AI is getting more power-hungry with each generation, and the datacenter expansion to serve the growth in AI services has led to concerns over the amount of energy required.

[...] "AI is driving exponential demand for compute, and that means power. Ultimately, a long-term, resilient energy strategy is critical," said Séamus Dunne, managing director in the UK and Ireland for datacenter biz Digital Realty.

"For the UK to stay competitive in the global digital economy, we need a stable, scalable, and low-carbon energy mix to support the next generation of data infrastructure. With demand already outpacing supply, and the UK aiming to establish itself as an AI powerhouse, it's vital we stay open to a range of solutions. That also means building public trust and working with government to ensure the grid can keep pace."

Garman told the BBC that nuclear is a "great solution" to datacenter energy requirements as it is "an excellent source of zero-carbon, 24/7 power."

This might be true, but new atomic capacity simply can't be delivered fast enough to meet near-term demand, as we reported earlier this year. The World Nuclear Association says that an atomic plant typically takes at least five years to construct, whereas natural gas plants are often built in about two years.


Original Submission

posted by hubie on Tuesday May 20, @11:27PM
from the do-we-upgrade-or-downgrade? dept.

Intel's data-leaking Spectre defenses scared off yet again

ETH Zurich boffins exploit branch prediction race condition to steal info from memory, fixes have mild perf hit

by Thomas Claburn // Tue 13 May 2025

Researchers at ETH Zurich in Switzerland have found a way around Intel's defenses against Spectre, a family of data-leaking flaws in the x86 giant's processor designs that simply won't die.

Sandro Rüegge, Johannes Wikner, and Kaveh Razavi have identified a class of security vulnerabilities they're calling Branch Predictor Race Conditions (BPRC), which they describe in a paper [PDF] scheduled to be presented at USENIX Security 2025 and Black Hat USA 2025 later this year.

Spectre refers to a set of hardware-level processor vulnerabilities identified in 2018 that can be used to break the security isolation between software. It does this by exploiting speculative execution - a performance optimization technique that involves the CPU anticipating future code paths (also known as branch prediction) and executing down those paths before they're actually needed.

In practice, this all means malware running on a machine, or a rogue logged-in user, can potentially abuse Spectre flaws within vulnerable Intel processors to snoop on and steal data – such as passwords, keys, and other secrets – from other running programs or even from the kernel, the heart of the operating system itself, or from adjacent virtual machines on a host, depending on the circumstances. In terms of real-world risk, we haven't seen the Spectre family exploited publicly in a significant way, yet.

There are several Spectre variants. One of these, Spectre v2, enables an attacker to manipulate indirect branch predictions across different privilege modes to read arbitrary memory; it effectively allows a malicious program to extract secrets from the kernel and other running applications.

Intel has added various hardware-based defenses against these sorts of attacks over the years, which include Indirect Branch Restricted Speculation (IBRS/eIBRS) for restricting indirect branch target prediction, a sanitizing technique called Indirect Branch Predictor Barrier (IBPB), and other microarchitectural speculation controls.

eIBRS, the researchers explain, is designed to restrict indirect branch predictions to their originating privilege domain, preventing them from leaking across boundaries. Additional protection provided by IBPB is recommended in scenarios where different execution contexts, like untrusted virtual machines (VMs), share the same privilege level and hardware domain.

But Rüegge, Wikner, and Razavi found that branch predictors on Intel processors are updated asynchronously inside the processor pipeline, meaning there are potential race conditions – situations when two or more processes or threads attempt to access and update the same information concurrently, resulting in unpredictable behavior.
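The "check then act" hazard the researchers describe can be illustrated with a deliberately sequenced toy in Python. This is only an analogy for a privilege tag going stale between check and use; it is not a model of Intel's pipeline, and all names here are invented:

```python
# Toy analogy: a privilege tag is read (checked), then an asynchronous
# update arrives, so the action happens against stale information.

privilege = {"level": "user"}

def async_predictor_update(tag: str) -> None:
    # Stands in for the branch predictor's delayed, asynchronous update
    privilege["level"] = tag

# Step 1: the check reads the tag while it still says "user"
observed = privilege["level"]

# Step 2: the update lands only after the check has already happened
async_predictor_update("kernel")

# Step 3: the stale observation is acted on while elevated state is live
print(observed, privilege["level"])  # a mismatched window: user vs kernel
```

The real attack exploits exactly this kind of window, except the "tag" is the privilege domain associated with a branch-predictor entry inside the CPU.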

[...] Razavi said there are several possible attack scenarios.

"You could start a VM in your favorite cloud and this VM could then leak information from the hypervisor, including information that belongs to other VMs owned by other customers," he explained.

"While such attacks are in theory possible, and we have shown that BPI enables such attacks, our particular exploit leaks information in the user-to-kernel scenario. In such a scenario, the attacker runs inside an unprivileged user process (instead of a VM), and leaks information from the OS (instead of the hypervisor)."

Essentially, BPI allows an attacker in user mode to inject branch predictions tagged with elevated privileges, bypassing the security guarantees of eIBRS and IBPB. A Spectre v2 attack (also known as Branch Target Injection, or BTI) can then be carried out to read sensitive data from memory.

[...] Razavi said Spectre-related flaws are likely to continue to haunt us for a while – which El Reg did warn about back in 2018.

"Speculative execution is quite fundamental in how we build high-performance CPUs, so as long as we build CPUs this way, there is always a chance for such vulnerabilities to happen," he said. "That said, CPU vendors are now more aware of these issues and hopefully also more careful when introducing new designs and new features.

"Furthermore, while much more work still needs to be done, there is some progress in building the necessary tooling for detecting such issues. Once we have better tooling, it becomes easier to find and fix these issues pre-silicon. To summarize, things should hopefully slowly get better, but we are not there yet."

Of note: Security experts have discovered new Intel Spectre vulnerabilities

New Intel CPU flaws leak sensitive data from privileged memory:

A new "Branch Privilege Injection" flaw in all modern Intel CPUs allows attackers to leak sensitive data from memory regions allocated to privileged software like the operating system kernel.

Typically, these regions are populated with information like passwords, cryptographic keys, memory of other processes, and kernel data structures, so protecting them from leakage is crucial.

According to ETH Zurich researchers Sandro Rüegge, Johannes Wikner, and Kaveh Razavi, Spectre v2 mitigations held for six years, but their latest "Branch Predictor Race Conditions" exploit effectively bypasses them.

The flaw, named 'branch privilege injection' and tracked as CVE-2024-45332, is a race condition in the branch predictor subsystem of Intel CPUs.

Branch predictors like Branch Target Buffer (BTB) and Indirect Branch Predictor (IBP) are specialized hardware components that try to guess the outcome of a branch instruction before it's resolved to keep the CPU pipeline full for optimal performance.

These predictions are speculative: if they turn out to be wrong, their effects are undone; if they are correct, they keep the pipeline full and improve performance.

The researchers found that Intel's branch predictor updates are not synchronized with instruction execution, resulting in these updates traversing privilege boundaries.

If a privilege switch happens, like from user mode to kernel mode, there is a small window of opportunity during which the update is associated with the wrong privilege level.
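That window can be caricatured in a toy simulation (this models only the idea described above, not Intel's actual pipeline; the class, names, and latency are invented): an update initiated in user mode commits a few "cycles" later and inherits whatever privilege mode is current at commit time.

```python
# Toy model of an asynchronous predictor update crossing a privilege
# boundary. Not Intel's microarchitecture; purely illustrative.

UPDATE_LATENCY = 2  # cycles between initiating and committing an update

class ToyPredictor:
    def __init__(self):
        self.mode = 'user'
        self.pending = []   # (commit_cycle, target)
        self.entries = []   # committed (target, privilege_tag)
        self.cycle = 0

    def train(self, target):
        # Update is only *queued* here; it commits later.
        self.pending.append((self.cycle + UPDATE_LATENCY, target))

    def tick(self):
        self.cycle += 1
        due = [p for p in self.pending if p[0] <= self.cycle]
        self.pending = [p for p in self.pending if p[0] > self.cycle]
        for _, target in due:
            # Bug being modeled: the tag is taken from the CURRENT mode,
            # not the mode in which the update was initiated.
            self.entries.append((target, self.mode))

pred = ToyPredictor()
pred.train('attacker_gadget')  # initiated in user mode
pred.tick()
pred.mode = 'kernel'           # privilege switch before the update commits
pred.tick()
print(pred.entries)            # [('attacker_gadget', 'kernel')]
```

The attacker-controlled branch target ends up tagged as kernel-privileged, which is the mis-association the researchers exploit.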

[...] CVE-2024-45332 impacts all Intel CPUs from the ninth generation onward, including Coffee Lake Refresh, Comet Lake, Rocket Lake, Alder Lake, and Raptor Lake.

"All Intel processors since the 9th generation (Coffee Lake Refresh) are affected by Branch Privilege Injection," the researchers explain.

"However, we have observed predictions bypassing the Indirect Branch Prediction Barrier (IBPB) on processors as far back as 7th generation (Kaby Lake)."

ETH Zurich researchers did not test older generations at this time, but since those chips do not support Enhanced Indirect Branch Restricted Speculation (eIBRS), they're less relevant to this specific exploit and likely more prone to older Spectre v2-style attacks.

Arm Cortex-X1, Cortex-A76, and AMD Zen 5 and Zen 4 chips were also examined, but they do not exhibit the same asynchronous predictor behavior, so they are not vulnerable to CVE-2024-45332.

[...] The risk is low for regular users, and attacks have multiple strong prerequisites to open up realistic exploitation scenarios. That being said, applying the latest BIOS/UEFI and OS updates is recommended.

ETH Zurich will present the full details of their exploit in a technical paper at the upcoming USENIX Security 2025.


Original Submission #1 | Original Submission #2

posted by janrinok on Tuesday May 20, @08:30PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

This week, meet a reader we'll Regomize as "Colin" who told us about his time working as a front-end developer for an education company that decided the time was right to expand from the UK to the US.

"Suddenly we needed to localize thousands of online articles, lessons, and other documents into American English."

Inconveniently, all that content was static HTML. "There was no CMS, no database, nothing I could harness on the server side," Colin lamented to Who, Me?

After due consideration, Colin and his team decided to use regular expressions to do the job.

"Our system combined tackling spelling swaps like changing 'ae' to 'e' in words like 'archaeology' and word/phrase swaps so that British terms like 'post' were changed to the American 'mail.'" Colin knew this could go pear-shaped if the system changed a term like "post-modern" to "mail-modern," so compound words were exempt.
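A minimal sketch of the rule-based approach Colin describes, with the compound-word exemption done via lookaround assertions (the rule lists and word choices here are invented for illustration):

```python
import re

# Invented rules modeled on the system described above: spelling swaps
# plus whole-word swaps that skip hyphenated compounds.
SPELLING_RULES = [
    (re.compile(r"archaeolog"), "archeolog"),   # archaeology -> archeology
    (re.compile(r"\bcolour\b"), "color"),
]
WORD_SWAPS = [
    # The lookbehind/lookahead exempt hyphenated compounds such as
    # "post-modern", as Colin's system did.
    (re.compile(r"(?<!-)\bpost\b(?!-)"), "mail"),
]

def localize(text):
    for pattern, replacement in SPELLING_RULES + WORD_SWAPS:
        text = pattern.sub(replacement, text)
    return text

print(localize("post-modern archaeology"))  # post-modern archeology
print(localize("send it by post"))          # send it by mail
```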

As Colin and his workmates considered all the necessary changes, they realized they needed a lot of rules.

"The fact it was running the replacements directly on the body HTML, and causing lots of page repaints, meant we had to build a REST API to cache which rules ran and didn't run for each page, so as to not cause slowdown by running unnecessary rules," he explained.

Which worked well until it didn't.

"One day we got a call asking why a lesson about famous artists referred to the great painter 'Vincent Truck Gogh.'"

Readers are doubtless familiar with Vincent Van Gogh, and the different names for midsize vehicles on each side of the North Atlantic.

That was just the start. Next came complaints about a religious studies lesson that explained how Adam and Eve lived in the "Yard of Eden" – not the garden. Another religion class mentioned sinister-sounding "Easter hoods" instead of the daintier "Easter bonnets."

Colin figured out that the word swaps he coded failed to consider cases where a word should simply be left alone. A van, after all, is a truck if you're American.

"In the end, we managed to get the system to be context-aware, so that certain swaps could be suppressed if the article contained a certain trigger word which suggested it shouldn't run, and the problems went away. But it was a very entertaining bug to be involved with!"
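The context-aware suppression Colin describes could look something like this (the rules and trigger words are invented; a per-rule set of triggers vetoes the swap when any of them appears in the article):

```python
import re

# Invented rules sketching the trigger-word suppression described above:
# (pattern, replacement, trigger words that suppress the rule).
RULES = [
    (re.compile(r"(?<!-)\bvan\b(?!-)", re.IGNORECASE), "truck", {"gogh"}),
    (re.compile(r"(?<!-)\bgarden\b(?!-)", re.IGNORECASE), "yard", {"eden"}),
]

def localize(text):
    present = set(re.findall(r"\w+", text.lower()))
    out = text
    for pattern, replacement, triggers in RULES:
        if triggers & present:
            continue  # a trigger word suggests this swap would be wrong here
        out = pattern.sub(replacement, out)
    return out

print(localize("the van broke down"))  # the truck broke down
print(localize("Vincent Van Gogh"))    # Vincent Van Gogh (swap suppressed)
```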


Original Submission

Processed by jelizondo

posted by janrinok on Tuesday May 20, @06:45PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The HoloBoard augmented-reality system lets people type independently.

Jeremy is a 31-year-old autistic man who loves music and biking. He's highly sensitive to lights, sounds, and textures, has difficulty initiating movement, and can say only a few words. Throughout his schooling, it was assumed he was incapable of learning to read and write. But for the past 30 minutes, he's been wearing an augmented-reality (AR) headset and spelling single words on the HoloBoard, a virtual keyboard that hovers in the air in front of him. And now, at the end of a study session, a researcher asks Jeremy (not his real name) what he thought of the experience.

Deliberately, poking one virtual letter at a time, he types, "That was good."

It was not obvious that Jeremy would be able to wear an AR headset, let alone use it to communicate. The headset we use, Microsoft's HoloLens 2, weighs 566 grams (more than a pound), and the straps that encircle the head can be uncomfortable. Interacting with virtual objects requires precise hand and finger movements. What's more, some people doubt that people like Jeremy can even understand a question or produce a response. And yet, in study after study, we have found that most nonspeaking autistic teenage and adult participants can wear the HoloLens 2, and most can type short words on the HoloBoard.

The HoloBoard prototype that Jeremy first used in 2023 was three years in the making. It had its origins in an interdisciplinary feasibility study that considered whether individuals like Jeremy could tolerate a commercial AR headset. That study was led by the three of us: a developmental psychologist (Vikram Jaswal at the University of Virginia), an electrical and software engineer (Diwakar Krishnamurthy at the University of Calgary), and a computer scientist (Mea Wang, also at the University of Calgary).

Our journey to this point was not smooth. Some autism researchers told us that nonspeaking autistic people "do not have language" and so couldn't possibly communicate by typing. They also said that nonspeaking autistic people are so sensitive to sensory experiences that they would be overwhelmed by augmented reality. But our data, from more than a half-dozen peer-reviewed studies, have shown both assumptions to be wrong. And those results have informed the tools we're creating, like the HoloBoard, to enable nonspeaking autistic people to communicate more effectively.

Nonspeaking autistic people may also appear inattentive, engage in impulsive behavior, and score poorly on standard intelligence tests (many of which require spoken responses within a set amount of time). Historically, these challenges have led to unfounded assumptions about these individuals' ability to understand language and their capacity for symbolic thought. To put it bluntly, it has sometimes been assumed that someone who can't talk is also incapable of thinking.

Most attempts to provide nonspeaking autistic people with an alternative to speech have been rudimentary. Picture-based communication systems, often implemented on an iPad or tablet, are frequently used in schools and therapy clinics. If a user wants a cookie, they can tap a picture of a cookie. But the vocabulary of these systems is limited to the concepts that can be represented by a simple picture.

There are other options. Some nonspeaking autistic people have learned, over the course of many years and guided by parents and professionals, to communicate by spelling words and sentences on a letterboard that's held by a trained human assistant - a communication and regulation partner, or CRP. Part of the CRP's role is to provide attentional and emotional support, which can help with conditions that commonly accompany severe autism and that interfere with communication, including anxiety, attention-deficit hyperactivity disorder, and obsessive-compulsive disorder. Having access to such assisted methods of communication has allowed nonspeaking autistic people to graduate from college, write poetry, and publish a best-selling memoir.

But the role of the CRP has generated considerable controversy. Critics contend that the assistants can subtly guide users to point to particular letters, which would make the CRP, rather than the user, the author of any words produced. If nonspeaking autistic people who use a letterboard really know how to spell, critics ask, why is the CRP necessary? Some professional organizations, including the American Speech-Language-Hearing Association, have even cautioned against teaching nonspeaking autistic people communication methods that involve assistance from another person.

And yet, research suggests that CRP-aided methods can teach users the skills to communicate without assistance; indeed, some individuals who previously required support now type independently. And a recent study by coauthor Jaswal showed that, contrary to critics' assumptions, most of the nonspeaking autistic individuals in his study (which did not involve a CRP) knew how to spell. For example, in a string of text without any spaces, they knew where one word ended and the next word began. Using eye tracking, Jaswal's team also showed that nonspeaking autistic people who use a letterboard look at and point to letters too quickly and accurately to be responding to subtle cues from a human assistant.

Our focus then was not on improving underlying AR hardware and system software, but finding the most productive ways to adapt it for our users.

We knew we wanted to design a typing system that would allow users to convey anything they wanted. And given the ongoing controversy about assisted communication, we wanted a system that could build the skills needed to type independently. We envisioned a system that would give users more agency and potentially more privacy if the tool is used outside a research setting.

Augmented reality has various features that, we reasoned, make it attractive for these purposes. AR's eye- and hand-tracking capabilities could be leveraged in activities that train users in the motor skills needed to type, such as isolating and tapping targets. Some of the CRP's tasks, like offering encouragement to a user, could be automated and rolled into an AR device. Also, AR allows users to move around freely as they engage with virtual objects, which may be more suitable for autistic people who have trouble staying still: A HoloBoard can "follow" the user around a room using head tracking. What's more, virtual objects in AR are overlaid on a user's actual environment, making it safer and less immersive than virtual reality (VR) - and potentially less overwhelming for our target population.

We carefully considered our choice of hardware. While lightweight AR glasses like the Ray-Ban Meta AI glasses and Snap's AI Spectacles would have been less cumbersome for users, they don't have the high-fidelity hand-tracking and gaze-tracking we needed. Headsets like the HoloLens 2 and Meta's Quest 3 provide greater computing power and support a broader range of interaction modalities.

We aren't the first researchers to consider how AR can help autistic people. Other groups have used AR to offer autistic children real-time information about the emotions people show on their faces, for example, and to gamify social- and motor-skill training. We drew inspiration from those efforts as we took on the new idea of using AR to help nonspeaking autistic people communicate.

Our efforts have been powered by our close collaboration with nonspeaking autistic people. They are, after all, the experts about their condition, and they're the people best suited to guide the design of any tools intended for them. Everything we do is informed by their input, including the design of prototypes and the studies to test those prototypes.

When neurotypical people see someone who cannot talk, whose body moves in unusual ways, and who acts in socially unconventional ways, they may assume that the person wouldn't be interested in collaborating or wouldn't be able to do so. But, as noted by Anne M. Donnellan and others who conduct research with disabled people, behavioral differences don't necessarily reflect underlying capacities or a lack of interest in social engagement. These researchers have emphasized the importance of presuming competence - in our case, that means expecting nonspeaking autistic people to be able to learn, think, and participate.

Thus, throughout our project, we have invited nonspeaking autistic people to offer suggestions and feedback in whatever manner they prefer, including by pointing to letters on a physical letterboard while supported by a CRP. Although critics of assisted forms of communication may object to this inclusive approach, we have found the contributions of nonspeakers invaluable. Through Zoom meetings, email correspondence, comments after research sessions, and shared Google docs, these participants have provided essential input about whether and how the AR technology we're developing could be a useful communication tool. In keeping with the community's interest in more independent communication, our tests of the technology have focused on nonspeakers' performance without the assistance of a CRP.

In early conversations, our collaborators raised several concerns about using AR. For example, they worried that wearing a head-mounted device wouldn't be comfortable. Our first study investigated this topic and found that, with appropriate support and sufficient time, 15 of 17 nonspeakers wore the device without difficulty. We now have 3D-printed models that replicate the shape and weight of the HoloLens 2, to allow participants to build up tolerance before they participate in actual experiments.

Some users also expressed concern about the potential for sensory overload, and their concerns made us realize that we hadn't adequately explained the difference between AR and VR. We now provide a video before each study that explains exactly what participants will do and see and shows how AR is less immersive than VR.

Some participants told us that they like the tactile input from interacting with physical objects, including physical letterboards, and were concerned that virtual objects wouldn't replicate that experience. We currently address this concern using sensory substitution: Letters on the HoloBoard hover slightly in front of a semitransparent virtual backplate. Activating a letter requires the user to "push" it approximately 3 centimeters toward the backplate, and successful activation is accompanied by an audible click and a recorded voice saying the letter aloud.
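The press-to-activate gesture might be sketched as a simple travel threshold (the names and units here are invented; the article specifies only the roughly 3-centimeter travel and the audio feedback on activation):

```python
# Hypothetical sketch of the activation check described above: a key fires
# only after the fingertip has traveled ~3 cm toward the backplate, at
# which point the click and spoken letter would play.
ACTIVATION_DEPTH_MM = 30  # required travel toward the backplate

def key_activated(rest_mm, current_mm):
    """Fingertip distances from the backplate, measured along its normal."""
    return (rest_mm - current_mm) >= ACTIVATION_DEPTH_MM

print(key_activated(100, 70))  # True  (3 cm of travel)
print(key_activated(100, 90))  # False (only 1 cm)
```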

Our users' needs and preferences have helped us set priorities for our research program. One person noted that an AR communication system seemed "cool," but worried that the motor skills required to interact in AR might not be possible without practice. So from the very first app we developed, we built in activities to let users practice the motor skills they needed to succeed.

Participants also told us they wanted to be able to customize the holograms - not just to suit their aesthetic preferences but also to better fit their unique sensory, motor, and attentional profiles. As a result, users of the HoloBoard can choose its color scheme and the size of the virtual letterboard, and whether the letters are said aloud as they're pressed. We've also provided several ways to activate letters: by pressing them, looking at them, or looking at them while using a physical clicker.

We had initially assumed that users would be interested in predictive text capabilities for the HoloBoard - having it autofill likely words based on the first letters typed. However, several people explained that although such a system could theoretically speed up communication, they would find it distracting. We've put this idea on the back burner for now; it may eventually become an option that users can toggle on if they wish.

To make things easier for users, we've investigated whether the HoloBoard could be positioned automatically in space, dynamically adjusting to the user's motor skills and movement patterns throughout a session. To this end, we used a behavioral cloning approach: During real-world interactions between nonspeakers and their CRPs, we observed the position of the user's fingers, palms, head, and physical letterboard. We then used that data to train a machine learning model to automatically adapt the placement of a virtual letterboard for a specific user.

Many nonspeaking participants who currently communicate with human assistance see the HoloBoard as providing a way to communicate with more autonomy. Indeed, we've found that after a 10-minute training procedure, most users of the HoloBoard can, like Jeremy, use it to type short words independently. We recently began a six-month study with five participants who have regular sessions in building their typing skills on the HoloBoard.

One of the most common questions from our nonspeaking participants, as well as from parents and professionals, is whether AR could teach the skills needed to type on a standard keyboard. It seems possible, in theory. As a first step, we're creating other types of AR teaching tools, including an educational AR app that teaches typing in the context of engaging and age-appropriate lessons.

We've also begun developing a virtual CRP that can offer support and feedback as a user interacts with the virtual letterboard. This virtual assistant, named ViC, can demonstrate motor movements as a user is learning to spell with the HoloBoard, and also offers verbal prompts and encouragement during a training session. There aren't many professionals who know how to teach nonspeakers typing skills, so a virtual CRP could be a game changer for this population.

Although nonspeakers have responded enthusiastically to our AR communication tools, our conversations and studies have revealed a number of practical challenges with the current technology.

For starters, most people can't afford Microsoft's HoloLens 2, which costs US $3,500. (It's also recently been discontinued!) So we've begun testing our software on less expensive mixed-reality products such as Meta's $500 Quest 3, and preliminary results have been promising. But regardless of which device is used, most headsets are bulky and heavy. It's unlikely that someone would wear one throughout a school day, for example. One idea we're pursuing is to design a pair of AR glasses that's just for virtual typing; a device customized for a single function would weigh much less than a general-purpose headset.

We've also encountered technical challenges. For example, the HoloLens 2's field of view is only 52 degrees. This restricts the size and placement of holograms, as larger holograms or those positioned incorrectly may be partially or entirely invisible to the user. So when participants use their fingers to point at virtual letters on the HoloBoard, some letters near the edges of the board may fall outside the visible area, which is frustrating to users. To address these issues, we used a vertical layout in our educational app so that the multiple-choice buttons always remain within a user's field of view. Our systems also allow a researcher or caregiver to monitor an AR session and, if necessary, adjust the size of virtual objects so they're always in view.

We have a few other ideas for dealing with the field-of-view issue, including deploying devices that have a larger field of view. Another strategy is to use eye tracking to select letters, which would eliminate the reliance on hand movements and the problem of the user's pointing fingers obscuring the letters. And some users might prefer using a joystick or other handheld controller to navigate and select letters. Together, these techniques should make the system more accessible while working within hardware constraints.

We have also been developing cross-reality apps, which allow two or more people wearing AR headsets to interact within the same virtual space. That's the setup we use to enable researchers to monitor study sessions in real time. Based on our development experience, we created an open-source tool called SimpleShare for the development of multiuser extended-reality apps in a device-agnostic way. A related issue is that many of our users make sudden movements; a sudden shake of a head can interfere with the sensors on the AR headset and upset the spatial alignment between multiple headsets. So our apps and SimpleShare instruct the headset to routinely scan the environment and use that data to automatically realign multiple devices, if necessary.

We've had to find solutions to cope with the limited computing power available on AR headsets. Running the AI model that automates the custom placement of the HoloBoard for each user can cause a lag in letterboard interactions and can cause the headset to heat up. We solved this problem by simplifying the AI model and decreasing the frequency of the model's interventions. Rendering a realistic virtual CRP via a headset is also computationally intensive. In our virtual CRP work, we're now rendering the avatar on an edge device, such as a laptop with a state-of-the-art GPU, and streaming it to the display.

As we continue to tackle these technology challenges, we're well aware that we don't have all the answers. That's why we discuss the problems that we're working on with the nonspeaking autistic people who will use the technology. Their perspectives are helping us make progress toward a truly usable and useful device.

So many assumptions are made about people who cannot speak, including that they don't have anything to say. We went into this project presuming competence in nonspeaking people, and yet we still weren't sure if our participants would be able to adapt to our technology. In our initial work, we were unsure whether nonspeakers could wear the AR device or interact with virtual buttons. They easily did both. In our evaluation of the HoloBoard prototype, we didn't know if users could type on a virtual letterboard hovering in front of them. They did so while we watched. In a recent study investigating whether nonspeakers could select letters using eye-gaze tracking, we wondered if they could complete the built-in gaze-calibration procedure. They did.

The ability to communicate - to share information, memories, opinions - is essential to well-being. Unfortunately, most autistic people who can't communicate using speech are never provided an effective alternative. Without a way to convey their thoughts, they are deprived of educational, social, community, and employment opportunities.

We aren't so naive as to think that AR is a silver bullet. But we're hopeful that there will be more community collaborations like ours, which take seriously the lived experiences of nonspeaking autistic people and lead to new technologies to support them. Their voices may be stuck inside, but they deserve to be heard.


Original Submission

posted by janrinok on Tuesday May 20, @02:00PM   Printer-friendly
from the DNA-for-sale dept.

Regeneron has agreed to buy 23andMe, the once buzzy genetic testing company, out of bankruptcy for $256 million under a court-supervised sale process:

23andMe declared bankruptcy in March and announced it would seek a buyer, while also saying that co-founder and CEO Anne Wojcicki would resign.

Under the proposed agreement with Regeneron, the Tarrytown, New York, drugmaker will acquire 23andMe's assets, including its personal genome service and total health and research services. Regeneron said Monday that it will abide by 23andMe's privacy policies and applicable law to protect customer data.

Data privacy experts had raised concerns about 23andMe's storehouse of data for about 15 million customers, including their DNA.

23andMe's consumer-genome services will continue uninterrupted, the purchaser said. Regeneron will not acquire 23andMe's Lemonaid Health telehealth business.

Also at ZeroHedge.

Previously: 23andMe Reportedly Faces Bankruptcy — What Will Happen to Everyone's DNA Samples?


Original Submission

posted by janrinok on Tuesday May 20, @09:16AM   Printer-friendly
from the data-centers-in-orbit dept.

The Verge, Space News, and The South China Morning Post are reporting that China has begun assembling a supercomputer in Earth orbit from satellites each capable of up to 744 TOPS. The advantages of an orbital supercomputer include better access to solar energy, easier radiation of waste heat, and, above all, shorter communication times with other satellites.

The satellites communicate with each other at up to 100 Gbps using lasers, and share 30 terabytes of storage between them, according to Space News. The 12 launched last week carry scientific payloads, including an X-ray polarization detector for picking up brief cosmic phenomena such as gamma-ray bursts. The satellites also have the capability to create 3D digital twin data that can be used for purposes like emergency response, gaming, and tourism, ADA Space says in its announcement.

China begins assembling its supercomputer in space. The Verge.

They are part of the Three-Body Computing Constellation, space-based infrastructure being developed by Zhejiang Lab. Once complete, the constellation would support real-time, in-orbit data processing with a total computing capacity of 1,000 peta operations per second (POPS) – or one quintillion operations per second – the report said.

China launches satellites to start building the world's first supercomputer in orbit. The South China Morning Post.

The satellites feature advanced AI capabilities, up to 100 Gbps laser inter-satellite links and remote sensing payloads—data from which will be processed onboard, reducing data transmission requirements. One satellite also carries a cosmic X-ray polarimeter developed by Guangxi University and the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), which will detect, identify and classify transient events such as gamma-ray bursts, while also triggering messages to enable followup observations by other missions.

China launches first of 2,800 satellites for AI space computing constellation. Space News.

Maintenance will be difficult.

Previously:
(2025) PA's Largest Coal Plant to Become 4.5GW Gas-Fired AI Hub
(2025) FTC Removes Posts Critical of Amazon, Microsoft, and AI Companies
(2025) Real Datacenter Emissions Are A Dirty Secret
(2022) Amazon and Microsoft Want to Go Big on Data Centres, but the Power Grid Can't Support Them


Original Submission
