posted by jelizondo on Sunday July 13, @11:45PM   Printer-friendly
from the teach-it-wrong-then-teach-it-again dept.

Here's an interesting story someone dropped in IRC:

The radical 1960s schools experiment that created a whole new alphabet – and left thousands of children unable to spell (and yes, I tweaked the sub title to fit into SN's tiny title limit):

The Initial Teaching Alphabet was a radical, little-known educational experiment trialled in British schools (and in other English-speaking countries) during the 1960s and 70s. Billed as a way to help children learn to read faster by making spelling more phonetically intuitive, it radically rewrote the rules of literacy for tens of thousands of children seemingly overnight. And then it vanished without explanation. Barely documented, rarely acknowledged, and quietly abandoned – but never quite forgotten by those it touched.

Why was it only implemented in certain schools – or even, in some cases, only certain classes in those schools? How did it appear to disappear without record or reckoning? Are there others like my mum, still aggrieved by ITA? And what happens to a generation taught to read and write using a system that no longer exists?

[...] Unlike Spanish or Welsh, where letters have consistent sound values, English is a patchwork of linguistic inheritances. Its roughly 44 phonemes – the distinct sounds that make up speech – can each be spelt multiple ways. The long "i" sound alone, as in "eye", has more than 20 possible spellings. And many letter combinations contradict one another across different words: think of "through", "though" and "thought".

It was precisely this inconsistency that Conservative MP Sir James Pitman – grandson of Sir Isaac Pitman, the inventor of shorthand – identified as the single greatest obstacle for young readers. In a 1953 parliamentary debate, he argued that it is our "illogical and ridiculous spelling" which is the "chief handicap" that leads many children to stumble with reading, with lasting consequences for their education. His proposed solution, launched six years later, was radical: to completely reimagine the alphabet.

The result was ITA: 44 characters, each representing a distinct sound, designed to bypass the chaos of traditional English and teach children to read, and fast. Among the host of strange new letters were a backwards "z", an "n" with a "g" inside, a backwards "t" conjoined with an "h", a bloated "w" with an "o" in the middle. Sentences in ITA were all written in lower case.

[...] The issue isn't simply whether or not ITA worked – the problem is that no one really knows. For all its scale and ambition, the experiment was never followed by a national longitudinal study. No one tracked whether the children who learned to read with ITA went on to excel, or struggle, as they moved through the education system. There was no formal inquiry into why the scheme was eventually dropped, and no comprehensive lessons-learned document to account for its legacy.

The article includes a few stories of ITA students who went on to have poor spelling and to receive bad grades from teachers who did not seem to know about ITA.


Original Submission

posted by hubie on Sunday July 13, @07:15PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

China's aggressive push to develop a domestic semiconductor industry has largely been successful. The country now has fairly advanced fabs that can produce logic chips using 7nm-class process technologies as well as world-class 3D NAND and DRAM memory devices. However, there are numerous high-profile failures due to missed investments, technical shortcomings, and unsustainable business plans. This has resulted in numerous empty fab shells — zombie fabs — around the country, according to DigiTimes.

As of early 2024, China had 44 semiconductor wafer production facilities, including 25 300-mm fabs, five 200-mm fabs, four 150-mm fabs, and seven inactive ones, according to TrendForce. At the time, 32 additional semiconductor fabrication plants were being constructed in the country as part of the Made in China 2025 initiative, including 24 300-mm fabs and nine 200-mm fabs. Companies like SMIC, HuaHong, Nexchip, CXMT, and Silan planned to start production at 10 new fabs, including nine 300-mm fabs and one 200-mm facility, by the end of 2024.

However, while China continues to lead in terms of new fabs coming online, the country also leads in terms of fab shells that never got equipped or put to work, thus becoming zombie fabs. Over the past several years, around a dozen high-profile fab projects, which cost investors between $50 billion and $100 billion, went bust.

Many Chinese semiconductor fab projects failed due to a lack of technical expertise amid overambitious goals: some startups aimed at advanced nodes like 14nm and 7nm without having experienced R&D teams or access to the necessary wafer fab equipment. These efforts were often heavily reliant on provincial government funding, with little oversight or industry knowledge, which led to collapse when finances dried up or scandals emerged. Some fab ventures were plagued by fraud or mismanagement, with executives vanishing or being arrested, sometimes with local officials involved.

To add to problems, U.S. export restrictions since 2019 blocked access of Chinese entities to critical chipmaking equipment required to make chips at 10nm-class nodes and below, effectively halting progress on advanced fabs. In addition, worsening U.S.-China tensions and global market shifts further undercut the viability of many of these projects.

[...] Leading chipmakers, such as Intel, TSMC, Samsung, and SMIC, have spent decades developing their production technologies and gaining experience building chips on their leading-edge nodes. But Chinese chipmakers Wuhan Hongxin Semiconductor Manufacturing Co. (HSMC) and Quanxin Integrated Circuit Manufacturing (QXIC) attempted to take a shortcut and jump straight to 14nm and, eventually, 7nm-class nodes by hiring executives and hundreds of engineers from TSMC in 2017 – 2019.

[...] Perhaps the most notorious China fab venture failure — the first of many — is GlobalFoundries' project in Chengdu. GlobalFoundries unveiled plans in May 2017 to build an advanced fab in Chengdu in two phases: Phase 1 for 130nm/180nm-class nodes and Phase 2 for the 22FDX FD-SOI node. The company committed to invest $10 billion in the project, with about a billion invested in the shell alone.

Financial troubles forced GlobalFoundries to abandon the project in 2018 (the same year it ceased developing leading-edge process technologies) and refocus on specialty production technologies. By early 2019, the site was cleared of equipment and personnel, and notices were issued in May 2020 to formally suspend operations.

[...] Another memory project that has failed in China is Jiangsu Advanced Memory Semiconductor (AMS). The company was established in 2016 with the plan to lead China's efforts in phase-change memory (PCM) technology. The company aimed to produce 100,000 300-mm wafers annually and attracted an initial investment of approximately $1.8 billion. Despite developing its first in-house PCM chips by 2019, AMS ran into financial trouble by 2020 and could no longer pay for equipment or employee salaries. It entered bankruptcy proceedings in 2023, and while a rescue plan by Huaxin Jiechuang was approved in 2024, the deal collapsed in 2025 due to unmet funding commitments.

Producing commodity types of memory is a challenging business. Tsinghua Unigroup was instrumental in developing Yangtze Memory Technology Co. and making it a world-class maker of 3D NAND. However, subsequent 3D NAND and DRAM projects were scrapped in 2022, after the company faced financial difficulties one year prior.

[...] Logic and memory require rather sophisticated process technologies and fabs that cost billions. By contrast, CMOS image sensors (CIS) are produced using fairly basic production nodes in relatively inexpensive (yet very large) fabs. Nonetheless, this did not stop Jiangsu Zhongjing Aerospace, Huaian Imaging Device Manufacturer (HiDM), and Tacoma Semiconductor from failing. None of their fabs were ever completed, and none of their process technologies were developed.

China's wave of failed semiconductor manufacturing ventures highlights a fundamental reality about the chip industry: large-scale manufacturing requires more than capital and ambition. Without sustained expertise, supply chain depth, and long-term planning, even the best-funded initiatives can quickly fall apart. These deep structural issues in the People's Republic's semiconductor strategy will continue to hamper its progress for years to come until they are resolved.


Original Submission

posted by hubie on Sunday July 13, @02:28PM   Printer-friendly
from the follow-the-doctors-orders dept.

https://arstechnica.com/health/2025/07/man-fails-to-take-his-medicine-the-flesh-starts-rotting-off-his-leg/

If you were looking for some motivation to follow your doctor's advice or remember to take your medicine, look no further than this grisly tale.

A 64-year-old man went to the emergency department of Brigham and Women's Hospital in Boston with a painful festering ulcer spreading on his left, very swollen ankle.
[...]
The man told doctors it had all started two years prior, when dark, itchy lesions appeared in the area on his ankle—the doctors noted that there were multiple patches of these lesions on both his legs. But about five months before his visit to the emergency department, one of the lesions on his left ankle had progressed to an ulcer. It was circular, red, tender, and deep. He sought treatment and was prescribed antibiotics, which he took. But they didn't help.
[...]
The ulcer grew. In fact, it seemed as though his leg was caving in as the flesh around it began rotting away. A month before the emergency room visit, the ulcer was a gaping wound that was already turning gray and black at the edges. It was now well into the category of being a chronic ulcer.

In a Clinical Problem-Solving article published in the New England Journal of Medicine this week, doctors laid out what they did and thought as they worked to figure out what was causing the man's horrid sore.
[...]
His diabetes was considered "poorly controlled."
[...]
His blood pressure, meanwhile, was 215/100 mm Hg at the emergency department. For reference, readings higher than 130/80 mm Hg on either number are considered the first stage of high blood pressure.
[...]
Given the patient's poorly controlled diabetes, a diabetic ulcer was initially suspected. But the patient didn't have any typical signs of diabetic neuropathy that are linked to ulcers.
[...]
With a bunch of diagnostic dead ends piling up, the doctors broadened their view of possibilities, newly considering cancers, rare inflammatory conditions, and less common conditions affecting small blood vessels (as the MRI had shown that the larger vessels were normal). This led them to the possibility of a Martorell's ulcer.

[...] These ulcers, first described in 1945 by a Spanish doctor named Fernando Martorell, form when prolonged, uncontrolled high blood pressure causes the teeny arteries below the skin to stiffen and narrow, which blocks the blood supply, leading to tissue death and then ulcers.
[...]
The finding suggests that if he had just taken his original medications as prescribed, he would have kept his blood pressure in check and avoided the ulcer altogether.

In the end, "the good outcome in this patient with a Martorell's ulcer underscores the importance of blood-pressure control in the management of this condition," the doctors concluded.

Journal Reference: DOI: 10.1056/NEJMcps2413155


Original Submission

posted by hubie on Sunday July 13, @09:40AM   Printer-friendly

The tech's mistakes are dangerous, but its potential for abuse when working as intended is even scarier:

Juan Carlos Lopez-Gomez, despite his U.S. citizenship and Social Security card, was arrested on April 16 on an unfounded suspicion of him being an "unauthorized alien." Immigration and Customs Enforcement kept him in county jail for 30 hours "based on biometric confirmation of his identity"—an obvious mistake of facial recognition technology. Another U.S. citizen, Jensy Machado, was held at gunpoint and handcuffed by ICE agents. He was another victim of mistaken identity after someone else gave his home address on a deportation order. This is the reality of immigration policing in 2025: Arrest first, verify later.

That risk only grows as ICE shreds due process safeguards, citizens and noncitizens alike face growing threats from mistaken identity, and immigration policing agencies increasingly embrace error-prone technology, especially facial recognition. Last month, it was revealed that Customs and Border Protection requested pitches from tech firms to expand their use of an especially error-prone facial recognition technology—the same kind of technology used wrongly to arrest and jail Lopez-Gomez. ICE already has nearly $9 million in contracts with Clearview AI, a facial recognition company with white nationalist ties that was at one point the private facial recognition system most used by federal agencies. When reckless policing is combined with powerful and inaccurate dragnet tools, the result will inevitably be more stories like Lopez-Gomez's and Machado's.

Studies have shown that facial recognition technology is disproportionately likely to misidentify people of color, especially Black women. And with the recent rapid increase of ICE activity, facial recognition risks more and more people arbitrarily being caught in ICE's dragnet without rights to due process to prove their legal standing. Even for American citizens who have "nothing to hide," simply looking like the wrong person can get you jailed or even deported.

While facial recognition's mistakes are dangerous, its potential for abuse when working as intended is even scarier. For example, facial recognition lets Donald Trump use ICE as a more powerful weapon for retribution. The president himself admits he's using immigration enforcement to target people for their political opinions and that he seeks to deport people regardless of citizenship. In the context of a presidential administration that is uncommonly willing to ignore legal procedures and judicial orders, a perfectly accurate facial recognition system could be the most dangerous possibility of all: Federal agents could use facial recognition on photos and footage of protests to identify each of the president's perceived enemies, and they could be arrested and even deported without due process rights.

And the more facial recognition technology expands across our daily lives, the more dangerous it becomes. By working with local law enforcement and private companies, including by sharing facial recognition technology, ICE is growing their ability to round people up—beyond what they already can do. This deputization of surveillance infrastructure comes in many forms: Local police departments integrate facial recognition into their body cameras, landlords use facial recognition instead of a key to admit or deny tenants, and stadiums use facial recognition for security. Even New York public schools used facial recognition on their security camera footage until a recent moratorium. Across the country, other states and municipalities have imposed regulations on facial recognition in general, including Boston, San Francisco, Portland, and Vermont. Bans on the technology in schools specifically have been passed in Florida and await the governor's signature in Colorado. Any facial recognition, no matter its intended use, is at inherent risk of being handed over to ICE for indiscriminate or politically retaliatory deportations.


Original Submission

posted by hubie on Sunday July 13, @04:56AM   Printer-friendly

Colossal's Plans To "De-Extinct" The Giant Moa Are Still Impossible

Arthur T Knackerbracket has processed the following story:

Colossal Biosciences has announced plans to “de-extinct” the New Zealand moa, one of the world’s largest and most iconic extinct birds, but critics say the company’s goals remain scientifically impossible.

The moa was the only known completely wingless bird, lacking even the vestigial wings of birds like emus. There were once nine species of moa in New Zealand, ranging from the turkey-sized bush moa (Anomalopteryx didiformis) to the two biggest species, the South Island giant moa (Dinornis robustus) and North Island giant moa (Dinornis novaezealandiae), which both reached heights of 3.6 metres and weights of 230 kilograms.

It is thought that all moa species were hunted to extinction by the mid-15th century, following the arrival of Polynesian people, now known as Māori, to New Zealand sometime around 1300.

Colossal has announced that it will work with the Indigenous Ngāi Tahu Research Centre, based at the University of Canterbury in New Zealand, along with film-maker Peter Jackson and Canterbury Museum, which holds the largest collection of moa remains in the world. These remains will play a key role in the project, as Colossal aims to extract DNA to sequence and rebuild the genomes for all nine moa species.

As with Colossal’s other “de-extinction” projects, the work will involve modifying the genomes of animals still living today. Andrew Pask at the University of Melbourne, Australia, who is a scientific adviser to Colossal, says that although the moa’s closest living relatives are the tinamou species from Central and South America, they are comparatively small.

This means the project will probably rely on the much larger Australian emu (Dromaius novaehollandiae). “What emus have is very large embryos, very large eggs,” says Pask. “And that’s one of the things that you definitely need to de-extinct a moa.”

[...] But Philip Seddon at the University of Otago, New Zealand, says that whatever Colossal produces, it won’t be a moa, but rather a “possible look-alike with some very different features”. He points out that although the tinamou is the moa’s closest relative, the two diverged 60 million years ago.

“The bottom line is that Colossal’s approach to de-extinction uses genetic engineering to alter a near-relative of an extinct species to create a GMO [genetically-modified organism] that resembles the extinct form,” he says. “There is nothing much to do with solving the global extinction crisis and more to do with generating fundraising media coverage.”

Pask strongly disputes this sentiment and says the knowledge being gained through de-extinction projects will be critically important to helping save endangered species today.

“They may superficially have some moa traits, but are unlikely to behave as moa did or be able to occupy the same ecological niches, which will perhaps relegate them to no more than objects of curiosity,“ says Wood.

Sir Peter Jackson Backs Project to De-Extinct Moa, Experts Cast Doubt

Sir Peter Jackson backs project to de-extinct moa, experts cast doubt:

A new project backed by film-maker Sir Peter Jackson aims to bring the extinct South Island giant moa back to life in less than eight years.

The South Island giant moa stood up to 3.6 metres tall, weighed around 230kg and typically lived in forests and shrubbery.

Moa hatchlings could be a reality within a decade, says the company behind the project.

Using advanced genetic engineering, iwi Ngāi Tahu, Canterbury Museum, and US biotech firm Colossal Biosciences plan to extract DNA from preserved moa remains to recreate the towering flightless bird.

However, Zoology Professor Emeritus Philip Seddon from the University of Otago is sceptical.

"Extinction really is forever. There is no current genetic engineering pathway that can truly restore a lost species, especially one missing from its ecological and evolutionary context for hundreds of years," he told the Science Media Centre.

He said a five to 10-year timeframe for the project provided enough leeway to "drip feed news of genetically modifying some near relative of the moa".

"Any end result will not, cannot be, a moa - a unique treasure created through millenia of adaptation and change. Moa are extinct. Genetic tinkering with the fundamental features of a different life force will not bring moa back."

University of Otago Palaeogenetics Laboratory director Dr Nic Rawlence is also not convinced the country will see the massive flightless bird making a comeback.

He said the project came across as "very glossy" but scientifically the ambition was "a pipedream".

"The technology isn't available yet. It definitely won't be done in five to 10 years ... but also they won't be de-extincting a moa, they'll be creating a genetically engineered emu."

It might look like a moa but it was really "a smokescreen", he told Midday Report.



Original Submission #1 | Original Submission #2

posted by hubie on Sunday July 13, @12:14AM   Printer-friendly
from the that's-the-password-an-idiot-would-have-on-his-luggage dept.

From "'123456' Password Exposed Chats for 64 Million McDonald's Job Applicants":

Cybersecurity researchers discovered a vulnerability in McHire, McDonald's chatbot job application platform, that exposed the chats of more than 64 million job applicants across the United States.

The flaw was discovered by security researchers Ian Carroll and Sam Curry, who found that the chatbot's admin panel included a test franchise protected by weak default credentials: a login name of "123456" and a password of "123456".

McHire, powered by Paradox.ai and used by about 90% of McDonald's franchisees, accepts job applications through a chatbot named Olivia. Applicants can submit names, email addresses, phone numbers, home addresses, and availability, and are required to complete a personality test as part of the job application process.

Once logged in, the researchers submitted a job application to the test franchise to see how the process worked.

During this test, they noticed that HTTP requests were sent to an API endpoint at /api/lead/cem-xhr, which used a lead_id parameter that in their case was 64,185,742.

The researchers found that by incrementing and decrementing the lead_id parameter, they were able to expose the full chat transcripts, session tokens, and personal data of real job applicants that previously applied on McHire.

This type of flaw is called an IDOR (Insecure Direct Object Reference) vulnerability, which is when an application exposes internal object identifiers, such as record numbers, without verifying whether the user is actually authorized to access the data.

"During a cursory security review of a few hours, we identified two serious issues: the McHire administration interface for restaurant owners accepted the default credentials 123456:123456, and an insecure direct object reference (IDOR) on an internal API allowed us to access any contacts and chats we wanted," Carroll explained in a writeup about the flaw.

"Together they allowed us and anyone else with a McHire account and access to any inbox to retrieve the personal data of more than 64 million applicants."

In this case, incrementing or decrementing a lead_id number in a request returned sensitive data belonging to other applicants, as the API failed to check if the user had access to the data.
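For readers unfamiliar with the bug class, here is a minimal sketch in Python (Flask) of an IDOR-style endpoint and the ownership check that would close it. This is an illustration only, not Paradox.ai's actual code; the routes, field names, and records are invented.

```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Hypothetical in-memory stand-in for the applicant database.
LEADS = {
    64185742: {"owner_org": "test-franchise", "chat": "applicant chat transcript ..."},
    64185741: {"owner_org": "another-franchise", "chat": "someone else's transcript ..."},
}

@app.route("/api/lead/<int:lead_id>")        # vulnerable pattern
def get_lead_vulnerable(lead_id):
    # IDOR: the record is looked up purely by the client-supplied ID,
    # so incrementing or decrementing the number walks through other
    # people's data.
    lead = LEADS.get(lead_id)
    if lead is None:
        abort(404)
    return jsonify(lead)

@app.route("/fixed/api/lead/<int:lead_id>")  # corrected pattern
def get_lead_fixed(lead_id):
    # Fix: confirm the record belongs to the caller's organization before
    # returning it (g.current_org would be set by authentication middleware).
    lead = LEADS.get(lead_id)
    if lead is None:
        abort(404)
    if lead["owner_org"] != getattr(g, "current_org", None):
        abort(403)
    return jsonify(lead)
```

The vulnerable route is exactly the pattern described above: any authenticated user who can reach the endpoint can enumerate the entire ID space.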

The issue was reported to Paradox.ai and McDonald's on June 30.

McDonald's acknowledged the report within an hour, and the default admin credentials were disabled soon after.


Original Submission

posted by hubie on Saturday July 12, @07:29PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Climate change could pose a threat to the technology industry as copper production is vulnerable to drought, while demand may grow to outstrip supply anyway.

According to a report out today from PricewaterhouseCoopers (PwC), copper mines require a steady water supply to function, and many are situated in places around the world that face a growing risk of severe drought due to shifts in climate.

Copper is almost ubiquitous in IT hardware because of its excellent electrical conductivity, from the tracks on circuit boards to cabling and even the interconnects on microchips. PwC's report focuses just on chips, and claims that nearly a third (32 percent) of global semiconductor production will be reliant on copper supplies that are at risk from climate disruption by 2035.

If something is not done to rein in climate change, like drastically cutting greenhouse gas emissions, then the share of copper supply at risk rises to 58 percent by 2050, PwC claims. As this seems increasingly unlikely, it advises both copper exporters and semiconductor buyers to adapt their supply chains and practices if they are to ride out the risk.

Currently, of the countries or territories that supply the semiconductor industry with copper, the report states that only Chile faces severe drought risks. But within a decade, copper mines in the majority of the 17 countries that source the metal will be facing severe drought risks.

PwC says there is an urgent need to strengthen supply chain resilience. Some businesses are taking action, but many investors believe companies should step up their efforts when it comes to de-risking their supply chain, the firm adds.

According to the report, mining companies can alleviate some of the supply issues by investing in desalination plants, improving water efficiency and recycling water.

Semiconductor makers could use alternative materials, diversify their suppliers, and adopt measures such as recycling and taking advantage of the circular economy.

[...] This was backed up recently by the International Energy Agency (IEA), which reckons supplies of copper will fall 30 percent short of the volume required by 2035 if nothing is done to open up new sources.

One solution is for developed countries to do more refining of copper – plus other key metals needed for industry – and form partnerships with developing countries to help open up supplies, executive director Fatih Birol told The Guardian.


Original Submission

posted by jelizondo on Saturday July 12, @02:45PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Humans come from Africa. This wasn’t always obvious, but today it seems as close to certain as anything about our origins.

There are two senses in which this is true. The oldest known hominins, creatures more closely related to us than to great apes, are all from Africa, going back as far as 7 million years ago. And the oldest known examples of our species, Homo sapiens, are also from Africa.

It’s the second story I’m focusing on here, the origin of modern humans in Africa and their subsequent expansion all around the world. With the advent of DNA sequencing in the second half of the 20th century, it became possible to compare the DNA of people from different populations. This revealed that African peoples have the most variety in their genomes, while all non-African peoples are relatively similar at the genetic level (no matter how superficially different we might appear in terms of skin colour and so forth).

In genetic terms, this is what we might call a dead giveaway. It tells us that Africa was our homeland and that it was populated by a diverse group of people – and that everyone who isn’t African is descended from a small subset of the peoples, who left this homeland to wander the globe. Geneticists were confident about this as early as 1995, and the evidence has only accumulated since.

And yet, the physical archaeology and the genetics don’t match – at least, not on the face of it.

Genetics tells us that all living non-African peoples are descended from a small group that left the continent around 50,000 years ago. Barring some wobbles about the exact date, that has been clear for two decades. But archaeologists can point to a great many instances of modern humans living outside Africa much earlier than that.

What is going on? Is our wealth of genetic data somehow misleading us? Or is it true that we are all descended from that last big migration – and the older bones represent populations that didn’t survive?

Eleanor Scerri at the Max Planck Institute of Geoanthropology in Germany and her colleagues have tried to find an explanation.

The team was discussing where modern humans lived in Africa. “Were humans simply moving into contiguous regions of African grasslands, or were they living in very different environments?” says Scerri.

To answer that, they needed a lot of data.

“We started with looking at all of the archaeological sites in Africa that date to 120,000 years ago to 14,000 years ago,” says Emily Yuko Hallett at Loyola University Chicago in Illinois. She and her colleagues built a database of sites and then determined the climates at specific places and times: “It was going through hundreds and hundreds of archaeological site reports and publications.”


There was an obvious shift around 70,000 years ago. “Even if you just look at the data without any fancy modelling, you do see that there is this change in the conditions,” says Andrea Manica at the University of Cambridge, UK. The range of temperatures and rainfalls where humans were living expanded significantly. “They start getting into the deeper forests, the drier deserts.”

However, it wasn’t enough to just eyeball the data. The archaeological record is incomplete, and biased in many ways.

“In some areas, you have no sites,” says Michela Leonardi at the Natural History Museum in London – but that could be because nothing has been preserved, not because humans were absent. “And for more recent periods, you have more data just because it’s more recent, so it’s easier for it to be conserved.”

Leonardi had developed a statistical modelling technique that could determine whether animals had changed their environmental niche: that is, whether they had started living under different climatic conditions or in a different type of habitat like a rainforest instead of a grassland. The team figured that applying this to the human archaeological record would be a two-week job, says Leonardi. “That was five and a half years ago.”
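To make the idea of a widening environmental niche concrete, here is a toy Python sketch with invented numbers: it simply compares the spread of climate conditions at occupied sites before and after 70,000 years ago. The team's actual modelling is far more sophisticated and, as noted above, also has to correct for preservation and sampling bias.

```python
# Invented (mean temperature in C, annual rainfall in mm) values for occupied
# sites; purely illustrative, not the study's data.
sites_before_70kya = [(24, 900), (26, 1100), (25, 950), (23, 1000)]
sites_after_70kya = [(14, 300), (27, 2400), (22, 150), (25, 1800), (24, 900)]

def niche_breadth(sites):
    """Return the range of temperature and rainfall spanned by the sites."""
    temps = [t for t, _ in sites]
    rains = [r for _, r in sites]
    return max(temps) - min(temps), max(rains) - min(rains)

print("breadth before 70 kya (temp range, rain range):", niche_breadth(sites_before_70kya))
print("breadth after 70 kya (temp range, rain range):", niche_breadth(sites_after_70kya))
# A much larger spread after 70 kya is what "moving into deeper forests and
# drier deserts" looks like in the site data.
```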

However, the statistics eventually did confirm what they initially saw: about 70,000 years ago, modern humans in Africa started living in a much wider range of environments. The team published their results on 18 June.

“What we’re seeing at 70,000 [years ago] is almost kind of our species becoming the ultimate generalist,” says Manica. From this time forwards, modern humans moved into an ever-greater range of habitats.

It would be easy to misunderstand this. The team absolutely isn’t saying that earlier H. sapiens weren’t adaptable. On the contrary: one of the things that has emerged from the study of extinct hominins is that the lineage that led to us became increasingly adaptable as time went on.

“People are in different environments from an early stage,” says Scerri. “We know they’re in mangrove forests, they’re in rainforest, they’re in the edges of deserts. They’re going up into highland regions in places like Ethiopia.”

This adaptability seems to be how early Homo survived environmental changes in Africa, while our Paranthropus cousins didn’t: Paranthropus was too committed to a particular lifestyle and was unable to change.


Instead, what seems to have happened in our species 70,000 years ago is that this existing adaptability was turned up to 11.

Some of this isn’t obvious until you consider just how diverse habitats are. “People have an understanding that there’s one type of desert, one type of rainforest,” says Scerri. “There aren’t. There are many different types. There’s lowland rainforest, montane rainforest, swamp forest, seasonally inundated forest.” The same kind of range is seen in deserts.

Earlier H. sapiens groups were “not exploiting the full range of potential habitats available to them”, says Scerri. “Suddenly, we see the beginnings of that around 70,000 years ago, where they’re exploiting more types of woodland, more types of rainforest.”

This success story struck me, because recently I’ve been thinking about the opposite.


Last week, I published a story about local human extinctions: groups of H. sapiens that seem to have died out without leaving any trace in modern populations. I focused on some of the first modern humans to enter Europe after leaving Africa, who seem to have struggled with the cold climate and unfamiliar habitats, and ultimately succumbed. These lost groups fascinated me: why did they fail, when another group that entered Europe just a few thousand years later succeeded so enormously?

The finding that humans in Africa expanded their niche from 70,000 years ago seems to offer a partial explanation. If these later groups were more adaptable, that would have given them a better chance of coping with the unfamiliar habitats of northern Europe – and for that matter, South-East Asia, Australia and the Americas, where their descendants would ultimately travel.

One quick note of caution: this doesn’t mean that from 70,000 years ago, human populations were indestructible. “It’s not like all humans suddenly developed into some massive success stories,” says Scerri. “Many of these populations died out, within and beyond Africa.”

And like all the best findings, the study raises as many questions as it answers. In particular: how and why did modern humans become more adaptable 70,000 years ago?

Manica points out that we can also see a shift in the shapes of our skeletons. Older fossils classed as H. sapiens don’t have all the features we associate with humans today, just some of them. “From 70,000 [years ago] onwards, roughly speaking, suddenly you see all these traits present as a package,” he says.


Original Submission

posted by jelizondo on Saturday July 12, @10:00AM   Printer-friendly

Upstart has processed the following story, PerfektBlue Bluetooth Vulnerabilities Expose Millions of Vehicles to Remote Code Execution:

Cybersecurity researchers have discovered a set of four security flaws in OpenSynergy's BlueSDK Bluetooth stack that, if successfully exploited, could allow remote code execution on millions of transport vehicles from different vendors.

The vulnerabilities, dubbed PerfektBlue, can be fashioned together as an exploit chain to run arbitrary code on cars from at least three major automakers, Mercedes-Benz, Volkswagen, and Skoda, according to PCA Cyber Security (formerly PCAutomotive). Outside of these three, a fourth unnamed original equipment manufacturer (OEM) has been confirmed to be affected as well.

"PerfektBlue exploitation attack is a set of critical memory corruption and logical vulnerabilities found in OpenSynergy BlueSDK Bluetooth stack that can be chained together to obtain Remote Code Execution (RCE)," the cybersecurity company said.

While infotainment systems are often seen as isolated from critical vehicle controls, in practice, this separation depends heavily on how each automaker designs internal network segmentation. In some cases, weak isolation allows attackers to use IVI access as a springboard into more sensitive zones—especially if the system lacks gateway-level enforcement or secure communication protocols.

The only requirement to pull off the attack is that the bad actor needs to be within range and be able to pair their setup with the target vehicle's infotainment system over Bluetooth. It essentially amounts to a one-click attack to trigger over-the-air exploitation.

"However, this limitation is implementation-specific due to the framework nature of BlueSDK," PCA Cyber Security added. "Thus, the pairing process might look different between various devices: limited/unlimited number of pairing requests, presence/absence of user interaction, or pairing might be disabled completely."

The list of identified vulnerabilities is as follows -

  • CVE-2024-45434 (CVSS score: 8.0) - Use-After-Free in AVRCP service
  • CVE-2024-45431 (CVSS score: 3.5) - Improper validation of an L2CAP channel's remote CID
  • CVE-2024-45433 (CVSS score: 5.7) - Incorrect function termination in RFCOMM
  • CVE-2024-45432 (CVSS score: 5.7) - Function call with incorrect parameter in RFCOMM

Successfully obtaining code execution on the In-Vehicle Infotainment (IVI) system enables an attacker to track GPS coordinates, record audio, access contact lists, and even perform lateral movement to other systems and potentially take control of critical software functions of the car, such as the engine.

Following responsible disclosure in May 2024, patches were rolled out in September 2024.

"PerfektBlue allows an attacker to achieve remote code execution on a vulnerable device," PCA Cyber Security said. "Consider it as an entrypoint to the targeted system which is critical. Speaking about vehicles, it's an IVI system. Further lateral movement within a vehicle depends on its architecture and might involve additional vulnerabilities."

Earlier this April, the company presented a series of vulnerabilities that could be exploited to remotely break into a Nissan Leaf electric vehicle and take control of critical functions. The findings were presented at the Black Hat Asia conference held in Singapore.

"Our approach began by exploiting weaknesses in Bluetooth to infiltrate the internal network, followed by bypassing the secure boot process to escalate access," it said.

"Establishing a command-and-control (C2) channel over DNS allowed us to maintain a covert, persistent link with the vehicle, enabling full remote control. By compromising an independent communication CPU, we could interface directly with the CAN bus, which governs critical body elements, including mirrors, wipers, door locks, and even the steering."

CAN, short for Controller Area Network, is a communication protocol mainly used in vehicles and industrial systems to facilitate communication between multiple electronic control units (ECUs). Should an attacker with physical access to the car be able to tap into it, the scenario opens the door for injection attacks and impersonation of trusted devices.
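As a rough illustration of why bus access is so dangerous: a classic CAN frame carries only an arbitration ID and up to eight data bytes, with nothing that identifies or authenticates the sender. The sketch below uses Python with the python-can library against a Linux virtual CAN interface; the library choice, interface name, ID, and payload are all assumptions made for illustration, not details from the article.

```python
import can

# Talk to a Linux virtual CAN interface, e.g. set up beforehand with:
#   ip link add dev vcan0 type vcan && ip link set up vcan0
bus = can.interface.Bus(channel="vcan0", interface="socketcan")

# A classic CAN frame: an 11-bit arbitration ID plus up to 8 data bytes.
# Nothing in the frame says which node sent it, which is why a rogue
# device on the bus can imitate a legitimate ECU.
frame = can.Message(
    arbitration_id=0x1A2,           # made-up ID
    data=[0x01, 0x00, 0x00, 0xFF],  # made-up payload
    is_extended_id=False,
)
bus.send(frame)

# Wait briefly for traffic from other nodes on the bus (None if nothing arrives).
incoming = bus.recv(timeout=1.0)
print(incoming)
bus.shutdown()
```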

"One notorious example involves a small electronic device hidden inside an innocuous object (like a portable speaker)," the Hungarian company said. "Thieves covertly plug this device into an exposed CAN wiring junction on the car."

"Once connected to the car's CAN bus, the rogue device mimics the messages of an authorized ECU. It floods the bus with a burst of CAN messages declaring 'a valid key is present' or instructing specific actions like unlocking the doors."

In a report published late last month, Pen Test Partners revealed it turned a 2016 Renault Clio into a Mario Kart controller by intercepting CAN bus data to gain control of the car and mapping its steering, brake, and throttle signals to a Python-based game controller.


Original Submission

posted by jelizondo on Saturday July 12, @05:15AM   Printer-friendly
from the How-the-Mighty-have-Fallen dept.

Arthur T Knackerbracket has processed the following story:

Intel Ceo Says It's "Too Late" For Them To Catch Up With Ai Competition -Reportedly Claims Intel Has Fallen Out Of The "Top 10 Semiconductor Companies" As The Firm Lays Off Thousands Across The World

Dark days ahead, or perhaps already here.

Intel has been in a dire state these past few years, with seemingly nothing going right. Its attempt to modernize x86 with a hybrid big.LITTLE architecture, à la ARM, failed to make a meaningful impact in terms of market share gains, only made worse by last-gen's Arrow Lake chips barely registering a response against AMD’s lineup. On the GPU front, the Blue Team served an undercooked product far too late that, while not entirely hopeless, was nowhere near enough to challenge the industry’s dominant players. All of this compounds into a grim reality, seemingly confirmed by new CEO Lip-Bu Tan in a leaked internal conversation today.

According to OregonTech, it's borderline a fight for survival for the once-great American innovation powerhouse as it struggles to even acknowledge being among the top contenders anymore. Despite Tan's insistence, Intel would still rank fairly well given its extensive legacy. While companies like AMD, Nvidia, Apple, TSMC, and even Samsung might be more successful today, smaller chipmakers like Broadcom, MediaTek, Micron, and SK Hynix are not above the Blue Team in terms of sheer impact. Regardless, talking to employees around the world in a Q&A session, Intel's CEO allegedly shared these bleak words: "Twenty, 30 years ago, we were really the leader. Now I think the world has changed. We are not in the top 10 semiconductor companies."

As evident from the quote, this is a far cry from a few decades ago when Intel essentially held a monopoly over the CPU market, making barely perceptible upgrades each generation in order to sustain its dominance. At one time, Intel was so powerful that it considered acquiring Nvidia for $20 billion. The GPU maker is now worth $4 trillion.

It never saw AMD as an honorable competitor until it was too late, and Ryzen pulled the rug out from under the Blue Team's feet. Now, more people choose to build an AMD system than ever before. Not only that, but AMD also powers your favorite handhelds, like the Steam Deck and ROG Ally X, alongside the biggest consoles: the Xbox Series and PlayStation 5. AMD works closely with TSMC, another one of Intel's competitors, while Intel makes its own chips in-house.

This vertical alignment was once a core strength for the firm, but it has turned into more of a liability these days. Faltering nodes that can't quite match the prowess of Taiwan have arguably held back Intel's processors from reaching their full potential. In fact, starting in 2023, the company tasked TSMC with manufacturing the GPU tile on its Meteor Lake chips. This partnership extended to TSMC making essentially the entire compute tile for Lunar Lake—and now, in 2025, roughly 30% of fabrication has been outsourced to TSMC. It is a long-overdue admission of failure that could have been avoided had Intel designed its leading-edge CPUs with external manufacturing in mind from the start. Ultimately, its own foundry was the limiting factor.

As such, Intel has been laying off thousands of employees across the world in a bid to cut costs. Costs have skyrocketed due to high R&D spending on future nodes, and the company posted a $16 billion loss in Q3 last year. Intel's resurrection has to be a "marathon," said Tan, as he hopes to turn around the company culture and "be humble" in listening to the shifting demands of the industry. Intel wants to be more like AMD and Nvidia, which are faster, meaner, and more ruthless competitors these days, especially with the advent of AI. Of course, artificial intelligence has been around for a while, but it wasn't until OpenAI's ChatGPT that a second big bang occurred, ushering in a new era of machine learning. It is an era almost entirely powered by Nvidia's data center GPUs, highlighting another sector where Intel failed to capitalize on its position.

"On training, I think it is too late for us," Lip-Bu Tan remarked. Intel instead plans to shift its focus toward edge AI, aiming to bring AI processing directly to devices like PCs rather than relying on cloud-based compute. Tan also highlighted agentic AI—an emerging field where AI systems can act autonomously without constant human input—as a key growth area. He expressed optimism that recent high-level hires could help steer Intel back into relevance in AI, hinting that more talent acquisitions are on the way. “Stay tuned. A few more people are coming on board,” said Tan. At this point, Nvidia is simply too far ahead to catch up to, so it's almost exciting to see Intel change gears and look to close the gap in a different way.

That being said, Intel now lags behind in data center CPUs, too, where AMD's EPYC lineup has overtaken it in the past year, further denting the company's confidence. Additionally, last year, Intel's board forced former CEO Pat Gelsinger out of the company and replaced him with Lip-Bu Tan, who appears to have a distinctly different, more streamlined vision for the company. Instead of focusing on several different facets, such as CPU, GPU, foundry, and more, all at once, Tan wants to home in on what the company can do well at any one time.

This development follows long-standing rumors of Intel splitting in two and forming a new foundry division that would act as an independent subsidiary, turning the main Intel into a fabless chipmaker. Both AMD and Apple, Intel's rivals in the CPU market, operate like this, and Nvidia has also always used TSMC or Samsung to build its graphics cards. It would be interesting to see the Blue Team shed some weight and move like a free animal in the biome. However, it's too early to speculate, given that 18A, Intel's proposed savior, is still a year away.


Original Submission

posted by janrinok on Saturday July 12, @12:31AM   Printer-friendly

Derinkuyu: A Subterranean Marvel of Ancient Engineering:

Beneath the sun-drenched plains of Cappadocia, where otherworldly "fairy chimney" rock formations pierce the sky, lies a secret world carved into the very heart of the earth. Forget the grand pyramids or towering ziggurats; we're about to descend into Derinkuyu, an ancient metropolis swallowed by the ground, a testament to human resilience and a whisper from a forgotten past.

Imagine a civilization, facing threats we can only dimly perceive, choosing not to build up, but to delve down, creating a labyrinthine sanctuary that could shelter thousands. This isn't just an archaeological site; it's a subterranean saga etched in stone, waiting to unfold its mysteries. Its origins are somewhat debated, but the prevailing archaeological consensus points to construction likely beginning in the Phrygian period (around the 8th-7th centuries BCE). The Phrygians, an Indo-European people who established a significant kingdom in Anatolia, were known for their rock-cut architecture, and Derinkuyu bears hallmarks of their early techniques.

However, the city's expansion and more complex features likely developed over centuries, with significant contributions from later periods, particularly the Byzantine era (roughly 4th to 15th centuries CE). During this time, Cappadocia was a crucial region for early Christianity, and the need for refuge from various invasions and raids, first from Arab forces and later from the Seljuk Turks, would have spurred the further development and extensive use of these underground complexes.

The city served as a refuge during times of conflict, allowing people to escape from invaders. The underground city could accommodate up to 20,000 people, along with their livestock and supplies, making it a significant shelter during turbulent times. The city extends approximately 60 meters deep and consists of multiple levels—around 18 floors! Each level was designed for specific purposes, such as living quarters, storage rooms, and even places of worship.

The geological context is crucial here. Cappadocia's unique landscape is characterized by soft volcanic tuff, formed by ancient eruptions. This malleable rock was relatively easy to carve, yet strong enough to support the extensive network of tunnels and chambers without collapsing – a testament to the engineering acumen of its builders.

Now, let's talk about the ingenuity of the design. Derinkuyu wasn't just a series of haphazard tunnels; it was a carefully planned multi-level settlement designed for extended habitation. Key features include:

  • Ventilation Systems: Remarkably, the city possessed sophisticated ventilation shafts that extended down through multiple levels, ensuring a constant supply of fresh air. Some of these shafts are believed to have also served as wells.

  • Water Management: Evidence of wells and water storage areas highlights the critical need for a sustainable water supply during times of siege.

  • Defensive Measures: The massive circular stone doors, capable of sealing off corridors from the inside, are a clear indication of the city's primary function as a refuge. These "rolling stones" could weigh several tons and were designed to be moved by a small number of people from within.

  • Living and Communal Spaces: Excavations have revealed evidence of domestic areas, including kitchens (with soot-stained ceilings indicating fireplaces), sleeping quarters, and communal gathering spaces.

  • Agricultural Infrastructure: The presence of stables suggests that livestock were also brought underground, a vital consideration for long-term survival. Storage rooms for grains and other foodstuffs further underscore the self-sufficiency of the city during times of crisis.

  • Religious and Educational Facilities: Some levels appear to have housed areas that may have served as chapels or even rudimentary schools, indicating that life, even in hiding, continued beyond mere survival.

The connection to other underground cities in the region, such as Kaymaklı, via subterranean tunnels adds another layer of complexity to the understanding of these networks. It suggests a potentially interconnected system of refuges, allowing for communication and possibly even the movement of people between them in times of extreme danger.

The rediscovery of Derinkuyu in modern times is also an interesting chapter. It was reportedly found in 1969 by a local resident who stumbled upon a hidden entrance while renovating his house. Subsequent archaeological investigations have gradually revealed the extent and significance of this subterranean marvel.

While the precise dating and the identity of the original builders are still subjects of scholarly debate, the evidence strongly suggests a prolonged period of construction and use, adapting to the needs of successive populations facing various threats. Derinkuyu stands as a powerful example of human adaptation, resourcefulness, and the enduring need for shelter and security throughout history. It offers a unique window into the past, one that has stood the test of time and continues to captivate the imagination of all who encounter it.



Original Submission

posted by janrinok on Friday July 11, @07:46PM   Printer-friendly

https://www.bleepingcomputer.com/news/security/new-android-taptrap-attack-fools-users-with-invisible-ui-trick/

A novel tap-jacking technique can exploit user interface animations to bypass Android's permission system and allow access to sensitive data or trick users into performing destructive actions, such as wiping the device.

Unlike traditional, overlay-based tap-jacking, TapTrap attacks work even with zero-permission apps to launch a harmless transparent activity on top of a malicious one, a behavior that remains unmitigated in Android 15 and 16.

TapTrap was developed by a team of security researchers at TU Wien and the University of Bayreuth (Philipp Beer, Marco Squarcina, Sebastian Roth, Martina Lindorfer), and will be presented next month at the USENIX Security Symposium.

However, the team has already published a technical paper that outlines the attack and a website that summarizes most of the details.
How TapTrap works

TapTrap abuses the way Android handles activity transitions with custom animations to create a visual mismatch between what the user sees and what the device actually registers.

A malicious app installed on the target device launches a sensitive system screen (permission prompt, system setting, etc.) from another app using 'startActivity()' with a custom low-opacity animation.

"The key to TapTrap is using an animation that renders the target activity nearly invisible," the researchers say on a website that explains the attack.

"This can be achieved by defining a custom animation with both the starting and ending opacity (alpha) set to a low value, such as 0.01," thus making the malicious or risky activity almost completely transparent.

"Optionally, a scale animation can be applied to zoom into a specific UI element (e.g., a permission button), making it occupy the full screen and increasing the chance the user will tap it."

Although the launched prompt receives all touch events, all the user sees is the underlying app displaying its own UI elements; on top of it sits the nearly transparent screen that the user is actually engaging with.

Thinking they are interacting with the benign app, users may tap specific screen positions that correspond to risky actions, such as "Allow" or "Authorize" buttons on nearly invisible prompts.

A video released by the researchers demonstrates how a game app could leverage TapTrap to enable camera access for a website via Chrome browser.

To check whether TapTrap could work with applications on the Play Store, the official Android repository, the researchers analyzed close to 100,000 apps. They found that 76% of them are vulnerable to TapTrap, as they include a screen ("activity") that meets all of the following conditions:

  • can be launched by another app
  • runs in the same task as the calling app
  • does not override the transition animation
  • does not wait for the animation to finish before reacting to user input

The researchers say that animations are enabled on the latest Android version unless the user disables them from the developer options or accessibility settings, exposing the devices to TapTrap attacks.

While developing the attack, the researchers used Android 15, the latest version at the time, but after Android 16 came out they also ran some tests on it.

Marco Squarcina told BleepingComputer that they tried TapTrap on a Google Pixel 8a running Android 16 and they can confirm that the issue remains unmitigated.

GrapheneOS, the mobile operating system focused on privacy and security, also confirmed to BleepingComputer that the latest Android 16 is vulnerable to the TapTrap technique, and announced that their next release will include a fix.

BleepingComputer has contacted Google about TapTrap, and a spokesperson said that the TapTrap problem will be mitigated in a future update:

"Android is constantly improving its existing mitigations against tap-jacking attacks. We are aware of this research and we will be addressing this issue in a future update. Google Play has policies in place to keep users safe that all developers must adhere to, and if we find that an app has violated our policies, we take appropriate action."- a Google representative told BleepingComputer.


Original Submission

posted by janrinok on Friday July 11, @03:02PM   Printer-friendly
from the argument-clinic dept.

https://arstechnica.com/ai/2025/07/agi-may-be-impossible-to-define-and-thats-a-multibillion-dollar-problem/

When is an AI system intelligent enough to be called artificial general intelligence (AGI)? According to one definition reportedly agreed upon by Microsoft and OpenAI, the answer lies in economics: When AI generates $100 billion in profits. This arbitrary profit-based benchmark for AGI perfectly captures the definitional chaos plaguing the AI industry.

In fact, it may be impossible to create a universal definition of AGI, but few people with money on the line will admit it.

Over this past year, several high-profile people in the tech industry have been heralding the seemingly imminent arrival of "AGI" (i.e., within the next two years). [...] As Google DeepMind wrote in a paper on the topic: If you ask 100 AI experts to define AGI, you'll get "100 related but different definitions."

This isn't just academic navel-gazing. The definition problem has real consequences for how we develop, regulate, and think about AI systems. When companies claim they're on the verge of AGI, what exactly are they claiming?

I tend to define AGI in a traditional way that hearkens back to the "general" part of its name: An AI model that can widely generalize—applying concepts to novel scenarios—and match the versatile human capability to perform unfamiliar tasks across many domains without needing to be specifically trained for them.

However, this definition immediately runs into thorny questions about what exactly constitutes "human-level" performance. Expert-level humans? Average humans? And across which tasks—should an AGI be able to perform surgery, write poetry, fix a car engine, and prove mathematical theorems, all at the level of human specialists? (Which human can do all that?) More fundamentally, the focus on human parity is itself an assumption; it's worth asking why mimicking human intelligence is the necessary yardstick at all.

The latest example of trouble resulting from this definitional confusion comes from the deteriorating relationship between Microsoft and OpenAI. According to The Wall Street Journal, the two companies are now locked in acrimonious negotiations partly because they can't agree on what AGI even means—despite having baked the term into a contract worth over $13 billion.

[...] For decades, the Turing Test served as the de facto benchmark for machine intelligence. [...] But the Turing Test has shown its age. Modern language models can pass some limited versions of the test not because they "think" like humans, but because they're exceptionally capable at creating highly plausible human-sounding outputs.

Perhaps the most systematic attempt to bring order to this chaos comes from Google DeepMind, which in July 2024 proposed a framework with five levels of AGI performance: emerging, competent, expert, virtuoso, and superhuman. DeepMind researchers argued that no level beyond "emerging AGI" existed at that time. Under their system, today's most capable LLMs and simulated reasoning models still qualify as "emerging AGI"—equal to or somewhat better than an unskilled human at various tasks.

But this framework has its critics. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be "rigorously evaluated scientifically." In fact, with so many varied definitions at play, one could argue that the term AGI has become technically meaningless.

[...] The Microsoft-OpenAI dispute illustrates what happens when philosophical speculation is turned into legal obligations. When the companies signed their partnership agreement, they included a clause stating that when OpenAI achieves AGI, it can limit Microsoft's access to future technology. According to The Wall Street Journal, OpenAI executives believe they're close to declaring AGI, while Microsoft CEO Satya Nadella, speaking on the Dwarkesh Patel podcast in February, called the idea of using AGI as a self-proclaimed milestone "nonsensical benchmark hacking."

[...] The disconnect we've seen above between researcher consensus, settled definitions of the terminology, and corporate rhetoric has a real impact. When policymakers act as if AGI is imminent based on hype rather than scientific evidence, they risk making decisions that don't match reality. When companies write contracts around undefined terms, they may create legal time bombs.

The definitional chaos around AGI isn't just philosophical hand-wringing. Companies use promises of impending AGI to attract investment, talent, and customers. Governments craft policy based on AGI timelines. The public forms potentially unrealistic expectations about AI's impact on jobs and society based on these fuzzy concepts.

Without clear definitions, we can't have meaningful conversations about AI misapplications, regulation, or development priorities. We end up talking past each other, with optimists and pessimists using the same words to mean fundamentally different things.


Original Submission

posted by jelizondo on Friday July 11, @08:15AM   Printer-friendly

For the first time ever, a company has achieved a market capitalization of $4 trillion. And that company is none other than Nvidia:

The chipmaker's shares rose as much as 2.5% on Wednesday, pushing past the previous market value record ($3.9 trillion), set by Apple in December 2024. Shares in the AI giant later closed at $162.88, shrinking the company's market value to $3.97 trillion.

Nvidia has rallied by more than 70% from its April 4 low, when global stock markets were sent reeling by President Donald Trump's global tariff rollout.

[...] The record value comes as tech giants such as OpenAI, Amazon and Microsoft are spending hundreds of billions of dollars in the race to build massive data centers to fuel the artificial intelligence revolution. All of those companies are using Nvidia chips to power their services, though some are also developing their own.

In the first quarter of 2025 alone, the company reported its revenue soared about 70%, to more than $44 billion. Nvidia said it expects another $45 billion worth of sales in the current quarter.

Also at: ZeroHedge, CNN and AP.

Related: Nvidia Reportedly Raises GPU Prices by 10-15% as Tariffs and TSMC Price Hikes Filter Down


Original Submission

posted by jelizondo on Friday July 11, @03:30AM   Printer-friendly

Apple just released an interesting coding language model - 9to5Mac:

Apple quietly dropped a new AI model on Hugging Face with an interesting twist. Instead of only writing code the way traditional LLMs generate text (left to right, top to bottom), it can also write out of order and improve multiple chunks at once.

The result is faster code generation, at a performance that rivals top open-source coding models. Here's how it works.

The nerdy bits

Here are some (overly simplified, in the name of efficiency) concepts that are important to understand before we can move on.

Autoregression

Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, then reprocess the entire question together with that first token to predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom.
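
As a rough sketch of that loop (not any particular model's implementation; the predictNextToken() function is a hypothetical stand-in for the entire neural network, and real models work on token IDs rather than strings):

    // Minimal sketch of autoregressive decoding with a hypothetical predictNextToken().
    fun generateAutoregressively(
        prompt: List<String>,
        predictNextToken: (context: List<String>) -> String,
        maxNewTokens: Int = 50,
        endToken: String = "<end>"
    ): List<String> {
        val context = prompt.toMutableList()
        repeat(maxNewTokens) {
            // The whole context (prompt plus everything generated so far) is fed back
            // in to predict exactly one more token, strictly left to right.
            val next = predictNextToken(context)
            if (next == endToken) return context
            context.add(next)
        }
        return context
    }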

Temperature

LLMs have a setting called temperature that controls how random the output can be. When predicting the next token, the model assigns probabilities to all possible options. A lower temperature makes it more likely to choose the most probable token, while a higher temperature gives it more freedom to pick less likely ones.
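
In code, temperature usually amounts to dividing the model's raw scores (logits) before turning them into probabilities and sampling. Here is a minimal sketch with made-up scores, not tied to any particular model:

    import kotlin.math.exp
    import kotlin.random.Random

    // Minimal sketch of temperature-scaled sampling over hypothetical logits.
    fun sampleToken(logits: DoubleArray, temperature: Double, rng: Random = Random.Default): Int {
        // Dividing by the temperature sharpens (low T) or flattens (high T) the distribution.
        val scaled = logits.map { it / temperature }
        val maxScaled = scaled.maxOrNull() ?: 0.0
        val exps = scaled.map { exp(it - maxScaled) }        // softmax, shifted for numerical stability
        val probs = exps.map { it / exps.sum() }
        var r = rng.nextDouble()                             // pick an index according to the probabilities
        for ((i, p) in probs.withIndex()) {
            r -= p
            if (r <= 0.0) return i
        }
        return probs.lastIndex
    }

    fun main() {
        val logits = doubleArrayOf(2.0, 1.0, 0.1)            // three hypothetical candidate tokens
        println(sampleToken(logits, temperature = 0.2))      // almost always picks token 0
        println(sampleToken(logits, temperature = 1.2))      // picks tokens 1 and 2 much more often
    }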

Diffusion

An alternative to autoregressive models is the diffusion model, which has more often been used by image generators like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image and iteratively removes the noise while keeping the user's request in mind, steering the result towards something that looks more and more like what the user asked for.

Still with us? Great!

Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising. If you want to dive deeper into how it works, the original article links to a longer explainer.

Why am I telling you all this? Because now you can see why diffusion-based text models can be faster than autoregressive ones, since they can basically (again, basically) iteratively refine the entire text in parallel.

This behavior is especially useful for programming, where global structure matters more than linear token prediction.
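
As a very rough sketch of that idea, here is a confidence-based "unmasking" loop in the spirit of masked diffusion. It is not Apple's actual decoder; fillMasked() is a hypothetical stand-in for the model, returning a best guess and a confidence score for every position in the draft:

    // Conceptual sketch of diffusion-style text generation: start fully masked and
    // commit the most confident predictions a chunk at a time, revisiting the rest
    // in later passes, instead of appending tokens strictly left to right.
    fun generateByIterativeRefinement(
        length: Int,
        steps: Int,
        fillMasked: (draft: List<String>) -> List<Pair<String, Double>>  // (token, confidence) per position
    ): List<String> {
        val draft = MutableList(length) { "<mask>" }
        repeat(steps) { step ->
            val proposals = fillMasked(draft)
            val targetFilled = ((step + 1) * length) / steps   // positions that should be committed by now
            proposals.withIndex()
                .filter { draft[it.index] == "<mask>" }        // only still-masked positions
                .sortedByDescending { it.value.second }        // most confident first
                .take(targetFilled - draft.count { it != "<mask>" })
                .forEach { draft[it.index] = it.value.first }
        }
        return draft
    }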

Phew! We made it. So Apple released a model?

Yes. They released an open-source model called DiffuCoder-7B-cpGRPO, which builds on a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month.

The paper describes a model that takes a diffusion-first approach to code generation, but with a twist:

"When the sampling temperature is increased from the default 0.2 to 1.2, DiffuCoder becomes more flexible in its token generation order, freeing itself from strict left-to-right constraints"

This means that by adjusting the temperature, it can behave more (or less) like an autoregressive model. In essence, higher temperatures give it more flexibility to generate tokens out of order, while lower temperatures keep it closer to strict left-to-right decoding.

And with an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.

Built on top of an open-source LLM by Alibaba

Even more interestingly, Apple's model is built on top of Qwen2.5‑7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5‑Coder‑7B), then Apple took it and made its own adjustments.

They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.

And all this work paid off. DiffuCoder-7B-cpGRPO got a 4.4% boost on a popular coding benchmark, and it maintained its lower dependency on generating code strictly from left to right.

Of course, there is plenty of room for improvement. Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion.

And while some have pointed out that 7 billion parameters might be limiting, or that its diffusion-based generation still resembles a sequential process, the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas.

Whether (or rather, when?) that will actually translate into real features and products for users and developers is another story.


Of course, Bill Gates says AI will replace humans for most things, but that coding will remain "a 100% human profession" even centuries from now. So what's your take? Are programmers on the way out, or are they safe?

Original Submission