

posted by kolie on Friday June 06, @11:02PM   Printer-friendly
from the im-sorry-dave-i-cant-let-you-scrape-that dept.

X changes its terms to bar training of AI models using its content

Social network X, formerly known as Twitter, has updated its developer agreement to officially prohibit the use of its platform's public content for training artificial intelligence models. This move solidifies the platform's control over its vast dataset, particularly in light of its relationship with Elon Musk's own AI company, xAI.

The updated terms of service now include a specific restriction against this practice:

In an update on Wednesday, the company added a line under "Reverse Engineering and other Restrictions," a subsection of restrictions on use: "You shall not and you shall not attempt to (or allow others to) [...] use the X API or X Content to fine-tune or train a foundation or frontier model," it reads.

This policy change follows a series of adjustments and is seen as a strategic move to benefit its sister AI company:

This change comes after Elon Musk's AI company xAI acquired X in March — understandably, xAI wouldn't want to give its competitors free access to the social platform's data without a sale agreement. In 2023, X changed its privacy policy to use public data on its site to train AI models. Last October, it made further changes to allow third parties to train their models.

X is not alone in putting up walls around its data as the AI race heats up. Other technology companies have recently made similar changes to their policies to prevent unauthorized AI training:

Reddit has also put in place safeguards against AI crawlers, and last month, The Browser Company added a similar clause to its AI-focused browser Dia's terms of use.

As major platforms that host vast amounts of human-generated text and conversations increasingly restrict access for broad AI training, what might the long-term consequences be for AI development? Does this trend toward creating proprietary "data moats" risk stifling innovation and competition, potentially concentrating the future of advanced AI in the hands of a few powerful companies with exclusive data access?


posted by hubie on Friday June 06, @06:17PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The European Commission (EC) has kicked off a scheme to make Europe a better place to nurture global technology businesses, providing support throughout their lifecycle, from startup through to maturity.

Launched this week, the EU Startup and Scaleup Strategy [PDF], dubbed "Choose Europe to Start and Scale," is another attempt to cultivate a flourishing tech sector in the region to rival that of the US, or "make Europe a startup powerhouse," as the EC puts it.

At the moment, many European tech startups struggle to take their ideas from lab to market or to grow into major players in their field, the EC says; it proposes action across five main areas.

These include creating a more innovation-friendly environment with fewer administrative burdens across the EU Single Market; a Scaleup Europe Fund to help bridge the financing gap; a Lab to Unicorn initiative to connect universities across the EU; measures to attract and retain top talent, including advice on employee stock options and cross-border employment; and easier access to infrastructure for startups.

The EC reportedly plans to create a public-private fund of at least €10 billion ($11.3 billion) to help with financing. We asked the Commission for confirmation of this, but did not receive an answer prior to publishing.

[...] This latest initiative sets out a clear vision, the EC says: to make Europe the top choice to launch and grow global technology-driven companies. It initiates a myriad of actions to improve conditions for startups and scaleups, encouraging them to capitalize on new geopolitical opportunities, and - importantly - aims to reduce the reasons for fledgling businesses to relocate outside the EU.

[...] According to some estimates, Europeans pay on average a $100 monthly "tax" to use US-created technology, and Steve Brazier, former CEO at Canalys, told us last year that he suspects this will be exacerbated when AI is widely used.

Europe has relatively few major tech organizations compared to the US, and in the Trump 2.0 era there is growing interest among some European businesses in reducing their reliance on American hyperscalers in favor of local cloud operators.

According to some seasoned market watchers, the boat has likely sailed with respect to loosening the dominance of Microsoft, AWS and Google in the cloud, yet for the emerging tech startup scene there may be everything to play for.


Original Submission

posted by janrinok on Friday June 06, @04:03PM   Printer-friendly

https://www.newscientist.com/article/2483366-japans-resilience-moon-lander-has-crashed-into-the-lunar-surface/

A Japanese space mission hoping to make history as the third ever private lunar landing has ended in failure, after ispace's Resilience lander smashed into the moon at some point after 7.13pm UTC on 5 June.

The lander had successfully descended to about 20 km above the moon's surface, but ispace's mission control lost contact shortly after the probe fired its main engine for the final descent and received no further communication.

The company said in a statement that a laser tool the craft used to measure its distance to the surface appeared to have malfunctioned, which would have caused the lander to slow down insufficiently, making the most likely outcome a crash landing.

"Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyse the telemetry data we have obtained thus far and work diligently to identify the cause," said ispace CEO Takeshi Hakamada in the statement.

If it had been successful, Resilience would have been the second private lunar landing of this year and the third ever. It would also have made ispace the first non-US company to land on lunar soil, after the company's first attempt, the Hakuto-R mission, ended in failure in 2023.

The Resilience lander started its moon-bound journey on 15 January, when it launched aboard a SpaceX rocket together with Firefly Aerospace's Blue Ghost lander. While Blue Ghost touched down on 2 March, Resilience took a more circuitous route, travelling into deep space before doubling back and entering lunar orbit on 6 May. This winding path was necessary to land in the hard-to-reach northern plain called Mare Frigoris, where no previous moon mission had explored.

There were six experiments on board Resilience, including a device for splitting water into hydrogen and oxygen, a module for producing food from algae and a deep-space radiation monitor. The lander also carried a 5-kilogram rover, called Tenacious, that would have explored and photographed the lunar surface during the two weeks Resilience was scheduled to operate.


Original Submission

posted by janrinok on Friday June 06, @01:32PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Commercial Times reports that TSMC's upcoming N2 2nm semiconductors will cost $30,000 per wafer, a roughly 66% increase over the company's 3nm chips. Future nodes are expected to be even more expensive and likely reserved for the largest manufacturers.

TSMC has justified these price increases by citing the massive cost of building 2nm fabrication plants, which can reach up to $725 million. According to United Daily News, major players such as Apple, AMD, Qualcomm, Broadcom, and Nvidia are expected to place orders before the end of the year despite the higher prices, potentially bringing TSMC's 2nm Arizona fab to full capacity.

Unsurprisingly, Apple is getting first dibs. The A20 processor in next year's iPhone 18 Pro is expected to be the first chip based on TSMC's N2 process. Intel's Nova Lake processors, targeting desktops and possibly high-end laptops, are also slated to use N2 and are expected to launch next year.

Earlier reports indicated that yield rates for TSMC's 2nm process reached 60% last year and have since improved. New data suggests that 256Mb SRAM yield rates now exceed 90%. Trial production is likely already underway, with mass production scheduled to begin later this year.
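For a sense of what $30,000 per wafer means at the chip level, here is a rough back-of-the-envelope sketch. Only the wafer price comes from the report above; the 300 mm wafer size is standard, while the die area and yield figures are illustrative assumptions, not TSMC data.

```python
# Rough silicon cost per good die at the reported N2 wafer price.
# Only the wafer price is from the article; die area and yield are
# illustrative assumptions, not TSMC figures.
import math

WAFER_PRICE_USD = 30_000       # reported N2 wafer price
WAFER_DIAMETER_MM = 300        # standard wafer size
DIE_AREA_MM2 = 100             # assumed die size, roughly a flagship phone SoC
ASSUMED_YIELD = 0.70           # assumed fraction of good dies

def gross_dies(diameter_mm: float, die_area_mm2: float) -> int:
    """Standard dies-per-wafer approximation, correcting for edge loss."""
    radius = diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2 * die_area_mm2))

dies = gross_dies(WAFER_DIAMETER_MM, DIE_AREA_MM2)
good = int(dies * ASSUMED_YIELD)
print(f"gross dies: {dies}, good dies: {good}, "
      f"silicon cost per good die: ${WAFER_PRICE_USD / good:.0f}")
```

Under those assumptions, a roughly 100 mm2 die works out to on the order of $65 to $70 of raw silicon per good chip, before packaging and test, which helps explain why only the largest manufacturers are expected to queue up for the node.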

With tape-outs for 2nm-based designs surpassing previous nodes at the same development stage, TSMC aims to produce tens of thousands of wafers by the end of 2025.

TSMC also plans to follow N2 with N2P and N2X in the second half of next year. N2P is expected to offer an 18% performance boost over N3E at the same power level and 36% greater energy efficiency at the same speed, along with significantly higher logic density. N2X, slated for mass production in 2027, will increase maximum clock frequencies by 10%.

As semiconductor geometries continue to shrink, power leakage becomes a major concern. TSMC's 2nm nodes will address this issue with gate-all-around (GAA) transistor architectures, enabling more precise control of electrical currents.

Beyond 2nm lies the Angstrom era, where TSMC will implement backside power delivery to further enhance performance. Future process nodes like A16 (1.6nm) and A14 (1.4nm) could cost up to $45,000 per wafer.

Meanwhile, Intel is aiming to outpace TSMC's roadmap. The company recently began risk production of its 18A node, which also features gate-all-around transistors and backside power delivery. These chips are expected to debut later this year in Intel's upcoming laptop CPUs, codenamed Panther Lake.


Original Submission

posted by janrinok on Friday June 06, @08:48AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

NASA is advancing plans to construct a radio telescope on the Moon's far side – a location uniquely shielded from the ever-increasing interference caused by Earth's expanding satellite networks. This ambitious endeavor, known as the Lunar Crater Radio Telescope, envisions deploying a massive wire mesh reflector within a lunar crater.

The project's innovative design relies on advanced robotics to suspend the reflector using cables, and if development proceeds as planned, the observatory could be operational sometime in the 2030s. Current projections estimate the cost at over $2 billion.

The far side of the Moon offers an unparalleled environment for radio astronomy, being naturally protected from the relentless radio noise and light pollution that plague observatories on Earth. The recent surge in satellite launches, especially from private ventures like Starlink, has led to a dramatic increase in orbiting satellites.

This proliferation raises concerns among astronomers about space debris, light pollution, and, most critically, the leakage of radio-frequency radiation.

Such interference poses a significant threat to sensitive scientific instruments designed to detect faint signals from the universe's earliest epochs. Federico Di Vruno, an astronomer affiliated with the Square Kilometre Array Observatory, told LiveScience, "it would mean that we are artificially closing 'windows' to observe our universe" if radio astronomy on Earth becomes impossible due to interference.

The LCRT is being developed by a team at NASA's Jet Propulsion Laboratory, part of the California Institute of Technology. Since its initial proposal in 2020, the concept has progressed through several phases of funding from the NASA Innovative Advanced Concepts program. The team is currently building a prototype for testing at the Owens Valley Radio Observatory in California.

Gaurangi Gupta, a research scientist working on the project, explained that preparations are underway to apply for the next round of funding. If successful, she told LiveScience, the LCRT could transition into a "fully-fledged mission" within the next decade.

The proposed telescope features a mesh reflector spanning approximately 1,150 feet – making it larger than the now-defunct Arecibo telescope, though not as large as China's FAST observatory. The team has already selected a preferred crater in the Moon's Northern Hemisphere for the installation, but the precise site remains confidential.

Although the concept of a lunar radio telescope dates back to at least 1984, technological advances have brought the idea closer to reality. One of the most significant obstacles facing the project, however, is its cost. Gupta noted that the latest estimate for building the LCRT stands at around $2.6 billion – a figure that presents challenges given NASA's current budgetary constraints.

Beyond providing a refuge from terrestrial interference, the LCRT would open new frontiers in astronomy by enabling the study of ultra-long radio waves – those with wavelengths longer than 33 feet. Earth's atmosphere blocks these frequencies, which are essential for investigating the universe's "cosmic dark ages," a period before the first stars formed.
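As a quick sense check of what "ultra-long" means in frequency terms, a wavelength cutoff of 33 feet (about 10 metres) corresponds to frequencies below roughly 30 MHz, the part of the spectrum that the atmosphere (specifically the ionosphere) keeps from reaching ground-based telescopes. A minimal conversion:

```python
# Convert the quoted wavelength cutoff into a frequency cutoff.
C = 299_792_458      # speed of light, m/s
FEET_TO_M = 0.3048

wavelength_m = 33 * FEET_TO_M          # the article's ~10 m cutoff
frequency_hz = C / wavelength_m
print(f"{wavelength_m:.1f} m  ->  {frequency_hz / 1e6:.1f} MHz")
# Wavelengths longer than ~10 m sit below ~30 MHz, a band that
# ground-based radio telescopes cannot observe through the ionosphere.
```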

"During this phase, the universe primarily consisted of neutral hydrogen, photons and dark matter, thus it serves as an excellent laboratory for testing our understanding of cosmology," Gupta said. "Observations of the dark ages have the potential to revolutionize physics and cosmology by improving our understanding of fundamental particle physics, dark matter, dark energy and cosmic inflation."

NASA has already begun experimenting with lunar radio astronomy. In February 2024, the ROLSES-1 instrument was delivered to the Moon's near side by Intuitive Machines' Odysseus lander, briefly collecting the first lunar radio data. However, as Gupta pointed out, the instrument's Earth-facing orientation meant that "almost all the signals it collected came from our own planet, offering little astronomical value."

Later this year, another mission aims to place a small radio telescope on the Moon's far side, further testing the feasibility of such observations.


Original Submission

posted by kolie on Friday June 06, @03:59AM   Printer-friendly
from the ground-control-to-major-bomb dept.

Arthur T Knackerbracket has processed the following story:

SpaceX's Starship has failed, again.

Elon Musk's private rocketry company staged the ninth launch of the craft on Tuesday and notched up one success, leaving the launchpad with a re-used Super Heavy booster for the first time. But multiple failures for Flight 9 followed.

SpaceX paused the countdown for Tuesday's launch at the T-40 mark for some final tweaks, then sent Starship into the sky atop the Super Heavy at 1937 Eastern Daylight Time.

After stage separation, the booster crash-landed six minutes into the flight, after SpaceX used a steeper-than-usual angle of attack for its re-entry "to intentionally push Super Heavy to the limits, giving us real-world data about its performance that will directly feed in to making the next generation booster even more capable."

The Starship upper stage, meanwhile, did better than on the previous two test flights, in that it actually reached space, but subsequently things (like the craft itself) got well and truly turned around.

One of the goals for Musk's space crew was to release eight mocked-up Starlink satellites into orbit. SpaceX had already failed at its last two attempts to do this when the pod doors never opened, and it was third time unlucky last night when the payload door once again failed to open fully and release the dummy satellites. SpaceX has not yet provided a reason for the malfunction.

Another goal for Flight 9 was to check out the performance of the ship's heatshield: SpaceX deliberately flew it with 100 heatshield tiles removed so that it could test key vulnerable areas "across the vehicle during reentry." (The spacecraft also employed "multiple metallic tile options, including one with active cooling" to test different materials for future missions.) But assessing that properly required a controlled reentry, and that failed too.

After the doors remained stubbornly closed, a "subsequent attitude control error resulted in bypassing the Raptor relight and prevented Starship from getting into the intended position for reentry." It began spinning out of control, blowing up, er, experiencing "a rapid unscheduled disassembly" upon re-entry.

SpaceX boss Elon Musk had rated Starship's re-entry as the most important phase of this flight, but the ship spinning out as it headed back to Earth meant SpaceX was unable to capture all the data it hoped to gather. The company says it did collect a lot of useful information before ground control lost contact with Starship approximately 46 minutes into the flight.

Musk nonetheless rated the mission a success.

“Starship made it to the scheduled ship engine cutoff, so big improvement over last flight!” he Xeeted. “Also, no significant loss of heat shield tiles during ascent. Leaks caused loss of main tank pressure during the coast and re-entry phase. Lot of good data to review.”

The billionaire added: “Launch cadence for next 3 flights will be faster, at approximately 1 every 3 to 4 weeks.”

That may be a little optimistic, as the USA’s Federal Aviation Administration (FAA) must authorize Starship launches and is yet to do so for future flights.

Previous Starship missions caused concern in the aviation industry after debris from SpaceX hardware fell to Earth. For this mission the FAA enlarged the Aircraft Hazard Area that aviators avoid after launches. SpaceX’s commentary on the launch made several mentions of the company having secured permission and chosen remote – and therefore safe – locations for touchdowns.

The FAA, however, is not keen to authorize flights until it is satisfied with safety. Three explosive endings in a row could make Musk’s timeline for future launches harder to achieve.


Original Submission

posted by kolie on Thursday June 05, @11:14PM   Printer-friendly
from the whose-chip-is-it-anyways dept.

Arthur T Knackerbracket has processed the following story:

A Bloomberg report, citing sources familiar with the matter, says that the chip plant TSMC is reportedly considering in the UAE would be a gigafab, essentially a sprawling complex of multiple chipmaking facilities. If it comes to pass, it would represent a massive leap in the UAE's ambitions to become a key player in this field, even though the country currently lacks a skilled semiconductor workforce.

TSMC has reportedly met several times in recent months with Steve Witkoff, the US Special Envoy to the Middle East, and MGX, a powerful UAE investment fund tied to the ruling family. The renewed interest comes amid broader negotiations around AI cooperation between the two countries.

Still, don't expect bulldozers on the ground anytime soon. The idea is still in early-stage talks, and whether it advances at all hinges on how the US feels about it, particularly given the national security and economic implications.

Critics inside the administration point to the UAE's ties to China and the risk of future technology transfers. AI data centers can be more easily regulated through licensing and oversight, but a chip manufacturing plant would create a pipeline of advanced know-how and local production that the US could lose control over.

It's worth mentioning that TSMC is already investing heavily in the US through its Arizona project, which is expected to cost $165 billion and includes fabs, research labs, and chip packaging facilities. The US committed $6.6 billion in subsidies to help make that happen as part of the CHIPS Act. But some in the Trump administration worry that spreading TSMC's resources too thin, especially in a region with complex geopolitics like the Gulf, could backfire.

Regardless of the outcome, the UAE continues to position itself as a regional tech leader and has been aggressively courting partnerships in AI, quantum computing, and cloud infrastructure. Last month, Trump announced a series of agreements with multiple Gulf countries, including the UAE, related to exporting AI chips and developing AI infrastructure.


Original Submission

posted by janrinok on Thursday June 05, @06:29PM   Printer-friendly
from the if-you're-not-doing-anything-wrong-you-have-nothing-to-fear dept.

The Real ID Act was passed in 2005 on the grounds that it was necessary for access control of sensitive facilities like nuclear power plants and for the security of airline flights. The law imposed standards for state- and territory-issued ID cards in the United States, but was widely criticized as an attempt to create a national ID card that would be harmful to privacy. These concerns are explained well in a 2007 article from the New York Civil Liberties Union:

Real ID threatens privacy in two ways. First, it consolidates Americans' personal information into a network of interlinking databases accessible to the federal government and bureaucrats throughout the 50 states and U.S. territories. This national mega-database would invite government snooping and be a goldmine for identity thieves. Second, it mandates that all driver's licenses and ID cards have an unencrypted "machine-readable zone" that would contain personal information on Americans that could be easily "skimmed" by anybody with a barcode reader.

These concerns cover not only what happens when criminals access the data, but also how consolidating data from many government agencies into a central database makes it easier for bad actors within the government to target Americans and violate their civil liberties. They led to a 20-year delay in enforcing Real ID standards nationally, and as a USA Today article from 2025 warns, once Americans' data is stored in a central repository for one purpose, mission creep is likely. If the centralized database is used to make student loan applications and income tax processing more efficient, what's to stop law enforcement from accessing it to identify potential criminals? Over the past two decades, criticism of the Real ID Act has come from across the political spectrum, with many people and organizations on both the left and right decrying it as a serious threat to privacy and civil liberties.

Many of these concerns about the Real ID Act have never been realized, but they have been renewed by Executive Order #14143, signed by Donald Trump on March 20, 2025. The order directs the sharing of government data between agencies except when it is classified for national security purposes, and it does not include any provisions to protect the privacy of individuals.

Although Trump has not commented on how this data sharing will be achieved, the Trump Administration has hired a company called Palantir to create a central registry of data, which would include a national citizen database. Recent reporting describes a database with wide-ranging information about every American that is generally private:

Foundry's capabilities in data organization and analysis could potentially enable the merging of information from various agencies, thereby creating detailed profiles of American citizens. The Trump administration has attempted to access extensive citizen data from government databases, including bank details, student debt, medical claims, and disability status.

Palantir does not gather data on their own, but they do provide tools to analyze large repositories of data, make inferences about the data, and provide easy-to-use reports. There are serious concerns about the lack of transparency about what data is being integrated into this repository, how it will be used, the potential for tracking people in various segments of the population such as immigrants, and the ability to use this data to target and harass political opponents. Concerns about how Trump's national citizen database will be used echo fears raised from across the political spectrum about the Real ID Act, except that they are apparently now quite close to becoming reality.

Additional reading:


Original Submission

posted by janrinok on Thursday June 05, @01:42PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

German motorists likely felt disheartened at the sight of all the stop signs on Google Maps [last] Thursday. The Guardian reports that major roads in western, northern, south-western and central parts of the country were shown as closed. Even parts of Belgium and the Netherlands appeared to have ground to a halt.

The situation was exacerbated by the incident taking place at the start of a four-day break for the Ascension holiday, when many Germans were travelling. It led to a huge number of Google Maps users heading for alternative routes to avoid the non-existent closures. Somewhat ironically, this caused huge jams and delays on these smaller roads.

Drivers not relying on Google Maps – and any Google users who decided to check another service or the news – didn't have to deal with these problems. Apple Maps, Waze, and the traffic reports all showed that everything was moving freely. The major highways were likely quieter than usual as so many Google Maps users were avoiding them.

The apparent mass closure of so many roads caused panic among those who believed Google Maps' warning. Some thought there had been a terrorist attack or state-sponsored hack, while others speculated about a natural disaster.

When asked about the glitch, which lasted around two hours, Google said the company wouldn't comment on the specific case. It added that Google Maps draws information from three key sources: individual users, public sources such as transportation authorities, and a mix of third-party providers.

Ars Technica contacted Google to ask about the cause of the problem. A spokesperson said the company "investigated a technical issue that temporarily showed inaccurate road closures on the map" and has "since removed them."

With Google Maps drawing information from third parties, the issue could partly have been related to the German Automobile Club's warning that there may be heavy traffic at the start of the holiday. Google also added AI features to Maps recently, and we all know how reliable they can be.

There have been plenty of other incidents in which Google Maps got things very wrong. Germany was cursing the service again earlier this month when it showed highway tunnels being closed in part of the country when they were open.

In 2023, Google was sued by the family of a North Carolina man who drove his car off a collapsed bridge as he followed directions given by Google Maps. The case is ongoing.


Original Submission

posted by hubie on Thursday June 05, @09:00AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Fuel cells powered with sodium metal could provide a new source of electric power that's far more energy-dense than lithium-ion batteries.

A new type of fuel cell that runs on sodium metal could one day help clean up sectors where it’s difficult to replace fossil fuels, like rail, regional aviation, and short-distance shipping. The device represents a departure from technologies like lithium-based batteries and is more similar conceptually to hydrogen fuel cell systems. 

The sodium-air fuel cell was designed by a team led by Yet-Ming Chiang, a professor of materials science and engineering at MIT. It has a higher energy density than lithium-ion batteries and doesn’t require the super-cold temperatures or high pressures that hydrogen does, making it potentially more practical for transport. “I’m interested in sodium metal as an energy carrier of the future,” Chiang says.  

The device’s design, published today in Joule, is related to the technology behind one of Chiang’s companies, Form Energy, which is building iron-air batteries for large energy storage installations like those that could help store wind and solar power on the grid. Form’s batteries rely on water, iron, and air.

One technical challenge for metal-air batteries has historically been reversibility. A battery’s chemical reactions must be easily reversed so that in one direction they generate electricity, discharging the battery, and in the other electricity goes into the cell and the reverse reactions happen, charging it up.

When a battery’s reactions produce a very stable product, it can be difficult to recharge the battery without losing capacity. To get around this problem, the team at Form had discussions about whether their batteries could be refuelable rather than rechargeable, Chiang says. The idea was that rather than reversing the reactions, they could simply run the system in one direction, add more starting material, and repeat. 

[...] Chiang and his colleagues set out to build a fuel cell that runs on liquid sodium, which could have a much higher energy density than existing commercial technologies, so it would be small and light enough to be used for things like regional airplanes or short-distance shipping.

The research team built small test cells to try out the concept and ran them to show that they could use the sodium-metal-based system to generate electricity. Since sodium becomes liquid at about 98 °C (208 °F), the cells operated at moderate temperatures of between 110 °C and 130 °C (or 230 °F and 266°F), which could be practical for use on planes or ships, Chiang says. 

From their work with these experimental devices, the researchers estimated that the energy density was about 1,200 watt-hours per kilogram (Wh/kg). That’s much higher than what commercial lithium-ion batteries can reach today (around 300 Wh/kg). Hydrogen fuel cells can achieve high energy density, but that requires the hydrogen to be stored at high pressures and often ultra-low temperatures.
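To see what that gap means in practice, here is a toy comparison of storage mass for a fixed onboard energy budget. The 1,200 Wh/kg and 300 Wh/kg figures come from the article; the 2 MWh mission energy for a short regional flight is purely an illustrative assumption.

```python
# Storage mass for a fixed energy budget at the quoted energy densities.
# The 2 MWh mission energy is an illustrative assumption, not a figure
# from the researchers.
MISSION_ENERGY_WH = 2_000_000          # assumed: ~2 MWh for a short regional hop
DENSITIES_WH_PER_KG = {
    "sodium-air fuel cell (estimated)": 1200,
    "commercial lithium-ion battery": 300,
}

for name, density in DENSITIES_WH_PER_KG.items():
    mass_kg = MISSION_ENERGY_WH / density
    print(f"{name:36s} {mass_kg:8.0f} kg")
# Roughly 1.7 tonnes of sodium-based storage versus about 6.7 tonnes of
# lithium-ion cells for the same assumed energy budget.
```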

[...] There are economic factors working in favor of sodium-based systems, though it would take some work to build up the necessary supply chains. Today, sodium metal isn’t produced at very high volumes. However, it can be made from sodium chloride (table salt), which is incredibly cheap. And it was produced more abundantly in the past, since it was used in the process of making leaded gasoline. So there’s a precedent for a larger supply chain, and it’s possible that scaling up production of sodium metal would make it cheap enough to use in fuel cell systems, Chiang says.

[...] "If people don't find it crazy, I'll be rather disappointed," Chiang says. "Because if an idea doesn't sound crazy at the beginning, it probably isn't as revolutionary as you think. Fortunately, most people think I'm crazy on this one."

Journal Reference: Sugano, Karen et al., Sodium-air fuel cell for high energy density and low-cost electric power, Joule, Volume 0, Issue 0, 101962


Original Submission

posted by hubie on Thursday June 05, @04:15AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Apple has a month left to make its App Store rules compliant with EU Digital Markets Act antisteering provisions, or the fines will keep coming.

On April 23, following multiple reports that the EU was delaying the issuing of fines against Apple and Meta, Europe finally pulled the trigger. It announced that it would fine Apple and Meta millions of euros for failing to comply with the Digital Markets Act.

Over a month later, on May 27, the European Commission published its full ruling on the matter. The 67-page document also outlines exactly what Apple's punishment is for failing to follow the regulation.

The bottom line is that Apple was fined 500 million euro ($567 million), with Apple being given three months to pay it to the European Commission. If it doesn't pay on time, it will have to pay interest on the due funds.

Apple also has to fix itself and end the non-compliance with the Digital Markets Act within 60 days of the April notification. If Apple does not, it faces the prospect of "periodic penalty payments" of an unspecified amount until it does comply.

The ruling covers how Apple is not complying with the DMA based on how its anti-steering rules are implemented. Originally, Apple prevented developers from telling consumers about ways to make payments for services and features that didn't go through Apple's systems.

Apple did change its rules under regulatory pressure, but did so in a way that didn't meet the requirements of the Digital Markets Act. These changes included allowing developers to share an external link with users, but with limitations.

Since Apple wouldn't get its 30% cut for usage of its In-App Purchases mechanism, Apple added a new requirement, effectively taking a 27% fee from these transactions outside of the App Store system.
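To see why regulators treated that fee as undercutting the change, here is a simple comparison of what a developer keeps on a 10 euro purchase under each route. The 30% and 27% rates come from the article; the roughly 3% cost of external payment processing is an illustrative assumption.

```python
# Developer proceeds on a 10 euro purchase under the two routes.
# Commission rates are from the article; the external payment-processing
# cost is an illustrative assumption.
PRICE_EUR = 10.00
IN_APP_COMMISSION = 0.30        # Apple's In-App Purchase cut
LINK_OUT_FEE = 0.27             # Apple's fee on external transactions
ASSUMED_PROCESSING = 0.03       # assumed cost of an outside payment processor

in_app = PRICE_EUR * (1 - IN_APP_COMMISSION)
link_out = PRICE_EUR * (1 - LINK_OUT_FEE - ASSUMED_PROCESSING)

print(f"In-app purchase:     developer keeps {in_app:.2f} EUR")
print(f"External link route: developer keeps {link_out:.2f} EUR")
# Under these assumptions the two routes net out to the same 7.00 EUR,
# which is why the link-out option offered developers little real benefit.
```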

In its ruling, the Commission found that neither the old nor the new business terms complied with the regulation, since they restricted developers' ability to promote their off-App Store offers in their apps. Charging a fee rather than allowing this free of charge was also seen as an issue, as was limiting links to one URL per app.

Apple's arguments are repeatedly rejected in the ruling, including its interpretation of "free" as the term is read in the regulation, taking into account nuances in different languages.

As for the fine, Apple argued that it should not be fined at all, due to the relative novelty of the regulation and taking into account Apple's "good faith efforts to engage" with the European Commission.

"None of Apple's arguments for not imposing a fine, or for reducing the fine, are convincing," the ruling reads.

While the final ruling's publication in full seemingly brings to an end legal action that started back in May 2024, that's far from the reality of the situation. Like many other high-stakes lawsuits, the appeals process will take years to conclude.

Apple said at the time of the original ruling that it will appeal against the fine. Apple also took the opportunity to accuse the EU of discriminating against it, and of requiring Apple to hand over its technology to rival companies for free.

It is unclear whether Apple has formally appealed, or whether it has made its 500 million euro payment.


Original Submission

posted by hubie on Wednesday June 04, @11:30PM   Printer-friendly
from the sounds-like-Windows-11 dept.

Arthur T Knackerbracket has processed the following story:

A smartphone smuggled out of North Korea was featured in a BBC video, which showed it powering on with an animated North Korean flag waving across the screen. While the report did not specify the brand, the design and user interface closely resembled those of a Huawei or Honor device.

It's unclear whether these companies officially sell phones in North Korea, but if they do, the devices are likely customized with state-approved software designed to restrict functionality and facilitate government surveillance.

One of the more revealing – and darkly amusing – features was the phone's automatic censorship of words deemed problematic by the state. For instance, when users typed oppa, a South Korean term used to refer to an older brother or a boyfriend, the phone automatically replaced it with comrade. A warning would then appear, admonishing the user that oppa could only refer to an older sibling.

Typing "South Korea" would trigger another change. The phrase was automatically replaced with "puppet state," reflecting the language used in official North Korean rhetoric.

Then came the more unsettling features. The phone silently captured a screenshot every five minutes, storing the images in a hidden folder that users couldn't access. According to the BBC, authorities could later review these images to monitor the user's activity.

The device was smuggled out of North Korea by Daily NK, a Seoul-based media outlet specializing in North Korean affairs. After examining the phone, the BBC confirmed that the censorship mechanisms were deeply embedded in its software. Experts say this technology is designed not only to control information but also to reinforce state messaging at the most personal level.

Smartphone usage has grown in North Korea in recent years, but access remains tightly controlled. Devices cannot connect to the global internet and are subject to intense government surveillance.


Original Submission

posted by janrinok on Wednesday June 04, @06:45PM   Printer-friendly
from the trust-in-friend-computer dept.

A large study of "Trust, attitudes and use of artificial intelligence" has been completed by KPMG and MBS. Apparently people like AI. They trust it. They believe it will bring great benefits. They use it in their work; some apparently don't believe they can do their work without AI anymore. They also don't bother to check whether the AI's output is correct. All good. Trust Friend Computer!

Led by the University of Melbourne in collaboration with KPMG, Trust, attitudes and use of Artificial Intelligence: A global study 2025 surveyed more than 48,000 people across 47 countries to explore the impact AI is having on individuals and organizations. It is one of the most wide-ranging global studies to date into the public's trust, use, and attitudes towards AI.

• 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits.

• Yet, trust remains a critical challenge: only 46% of people globally are willing to trust AI systems.

• There is a public mandate for national and international AI regulation with 70% believing regulation is needed.

• Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%).

However, the use of AI at work is also creating complex risks for organisations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT.

What makes these risks challenging to manage is over half (57%) of employees say they hide their use of AI and present AI-generated work as their own.

So AI [increases] the security risk at work. Or perhaps employees just don't want to let their employer know that they could easily be replaced by a bot.

Sources:

https://mbs.edu/news/Global-study-reveals-trust-of-AI-remains-a-critical-challenge
https://ai.uq.edu.au/project/trust-artificial-intelligence-global-study

Additional sources:

https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html
https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf


Original Submission

Processed by jelizondo

posted by hubie on Wednesday June 04, @02:04PM   Printer-friendly

[Editor's Comment: This is the first two parts of a planned 4-part series]

MCP: What It Is and Why It Matters—Part 1
MCP: What It Is and Why It Matters—Part 2

Imagine you have a single universal plug that fits all your devices—that's essentially what the Model Context Protocol (MCP) is for AI. MCP is an open standard (think "USB-C for AI integrations") that allows AI models to connect to many different apps and data sources in a consistent way. In simple terms, MCP lets an AI assistant talk to various software tools using a common language, instead of each tool requiring a different adapter or custom code.

So, what does this mean in practice? If you're using an AI coding assistant like Cursor or Windsurf, MCP is the shared protocol that lets that assistant use external tools on your behalf. For example, with MCP an AI model could fetch information from a database, edit a design in Figma, or control a music app—all by sending natural-language instructions through a standardized interface. You (or the AI) no longer need to manually switch contexts or learn each tool's API; the MCP "translator" bridges the gap between human language and software commands.

In a nutshell, MCP is like giving your AI assistant a universal remote control to operate all your digital devices and services. Instead of being stuck in its own world, your AI can now reach out and press the buttons of other applications safely and intelligently. This common protocol means one AI can integrate with thousands of tools as long as those tools have an MCP interface—eliminating the need for custom integrations for each new app. The result: Your AI helper becomes far more capable, able to not just chat about things but take actions in the real software you use.

[...] Without MCP, integrating an AI assistant with external tools is a bit like having a bunch of appliances each with a different plug and no universal outlet. Developers were dealing with fragmented integrations everywhere. For example, your AI IDE might use one method to get code from GitHub, another to fetch data from a database, and yet another to automate a design tool—each integration needing a custom adapter. Not only is this labor-intensive; it's brittle and doesn't scale.

MCP addresses this fragmentation head-on by offering one common protocol for all these interactions. Instead of writing separate code for each tool, a developer can implement the MCP specification and instantly make their application accessible to any AI that speaks MCP. [...] In short, MCP tackles the integration nightmare by introducing a common connective tissue, enabling AI agents to plug into new tools as easily as a laptop accepts a USB device.

How does MCP actually work under the hood? At its core, MCP follows a client–server architecture, with a twist tailored for AI-to-software communication. Let’s break down the roles:

MCP servers: These are lightweight adapters that run alongside a specific application or service. An MCP server exposes that application’s functionality (its “services”) in a standardized way. Think of the server as a translator embedded in the app—it knows how to take a natural-language request (from an AI) and perform the equivalent action in the app. For example, a Blender MCP server knows how to map “create a cube and apply a wood texture” onto Blender’s Python API calls. Similarly, a GitHub MCP server can take “list my open pull requests” and fetch that via the GitHub API. MCP servers typically implement a few key things: advertising the actions they offer, invoking those actions when asked, and returning the results.

MCP clients: On the other side, an AI assistant (or the platform hosting it) includes an MCP client component. This client maintains a 1:1 connection to an MCP server. In simpler terms, if the AI wants to use a particular tool, it will connect through an MCP client to that tool’s MCP server. The client’s job is to handle the communication (open a socket, send/receive messages) and present the server’s responses to the AI model. Many AI “host” programs act as an MCP client manager—e.g., Cursor (an AI IDE) can spin up an MCP client to talk to Figma’s server or Ableton’s server, as configured. The MCP client and server speak the same protocol, exchanging messages back and forth.

[...] To illustrate the flow, imagine you tell your AI assistant (in Cursor), “Hey, gather the user stats from our product’s database and generate a bar chart.” Cursor (as an MCP host) might have an MCP client for the database (say a Postgres MCP server) and another for a visualization tool. The query goes to the Postgres MCP server, which runs the actual SQL and returns the data. Then the AI might send that data to the visualization tool’s MCP server to create a chart image. Each of these steps is mediated by the MCP protocol, which handles discovering what the AI can do (“this server offers a run_query action”), invoking it, and returning results. All the while, the AI model doesn’t have to know SQL or the plotting library’s API—it just uses natural language and the MCP servers translate its intent into action.
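To make that flow concrete, here is a deliberately simplified sketch of what the server side of such an exchange could look like. This is not the real MCP wire format or an official SDK; the method names, the run_query tool, and the SQLite database standing in for Postgres are all illustrative, chosen only to show the discover-then-invoke pattern described above.

```python
# Toy illustration of the discover-then-invoke pattern described above.
# This is NOT the real MCP wire format or SDK; names and message shapes
# are simplified stand-ins for a JSON-RPC-style exchange.
import json
import sqlite3

DB_PATH = "product.db"  # hypothetical database behind this server

TOOLS = [{
    "name": "run_query",
    "description": "Run a read-only SQL query and return the rows.",
    "parameters": {"sql": "string"},
}]

def handle_request(request: dict) -> dict:
    """Dispatch a single request from an MCP-like client."""
    method = request.get("method")
    if method == "list_tools":
        # Discovery: tell the client (and therefore the AI) what this server offers.
        return {"id": request["id"], "result": TOOLS}
    if method == "call_tool" and request["params"]["name"] == "run_query":
        # Invocation: translate the AI's intent into an actual database call.
        sql = request["params"]["arguments"]["sql"]
        with sqlite3.connect(DB_PATH) as conn:
            rows = conn.execute(sql).fetchall()
        return {"id": request["id"], "result": {"rows": rows}}
    return {"id": request.get("id"), "error": f"unknown method {method!r}"}

if __name__ == "__main__":
    # Simulate the client asking "what can you do?" and then invoking a tool.
    print(json.dumps(handle_request({"id": 1, "method": "list_tools"}), indent=2))
    print(handle_request({
        "id": 2,
        "method": "call_tool",
        "params": {"name": "run_query",
                   "arguments": {"sql": "SELECT 1 AS user_count"}},
    }))
```

The real protocol layers transports, schemas, and (eventually) standardized authentication on top, but the basic shape, advertise tools, accept a call, return a result, is the same.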

It’s worth noting that security and control are part of architecture considerations. MCP servers run with certain permissions—for instance, a GitHub MCP server might have a token that grants read access to certain repos. Currently, configuration is manual, but the architecture anticipates adding standardized authentication in the future for robustness (more on that later). Also, communication channels are flexible: Some integrations run the MCP server inside the application process (e.g., a Unity plug-in that opens a local port), while others run as separate processes. In all cases, the architecture cleanly separates the concerns: The application side (server) and the AI side (client) meet through the protocol “in the middle.”

MCP is a fundamental shift that could reshape how we build software and use AI. For AI agents, MCP is transformative because it dramatically expands their reach while simplifying their design. Instead of hardcoding capabilities, an AI agent can now dynamically discover and use new tools via MCP. This means we can easily give an AI assistant new powers by spinning up an MCP server, without retraining the model or altering the core system. It’s analogous to how adding a new app to your smartphone suddenly gives you new functionality—here, adding a new MCP server instantly teaches your AI a new skill set.
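The host side of that "new skill set" idea can be sketched just as simply: the assistant's host keeps a registry of connected servers, asks each what it offers, and routes a tool call to whichever server advertises it. Again, this is a conceptual sketch rather than the MCP spec; the registry, function names, and demo server below are hypothetical.

```python
# Conceptual host-side sketch: aggregate tools from several MCP-like servers
# and route a call by tool name. All names here are hypothetical.
from typing import Callable

# Each entry stands in for a connected MCP client; here they are plain
# callables that answer "list_tools" and "call_tool" messages.
SERVERS: dict[str, Callable[[dict], dict]] = {}

def register(name: str, handler: Callable[[dict], dict]) -> None:
    """Registering a server is all it takes to 'teach' the assistant new tools."""
    SERVERS[name] = handler

def all_tools() -> dict[str, str]:
    """Map each advertised tool name to the server that owns it."""
    catalog = {}
    for server_name, handler in SERVERS.items():
        for tool in handler({"id": 0, "method": "list_tools"})["result"]:
            catalog[tool["name"]] = server_name
    return catalog

def call(tool_name: str, arguments: dict) -> dict:
    """Route a tool call to whichever registered server advertises it."""
    server_name = all_tools()[tool_name]
    return SERVERS[server_name]({
        "id": 1, "method": "call_tool",
        "params": {"name": tool_name, "arguments": arguments},
    })

if __name__ == "__main__":
    # A throwaway demo server with a single 'echo' tool.
    register("demo", lambda req: (
        {"id": req["id"], "result": [{"name": "echo"}]}
        if req["method"] == "list_tools"
        else {"id": req["id"], "result": req["params"]["arguments"]}
    ))
    print(all_tools())                    # {'echo': 'demo'}
    print(call("echo", {"text": "hi"}))   # routed to the demo server
```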

From a developer tooling perspective, the implications are huge. Developer workflows often span dozens of tools: coding in an IDE, using GitHub for code, Jira for tickets, Figma for design, CI pipelines, browsers for testing, etc. With MCP, an AI codeveloper can hop between all these seamlessly, acting as the glue. This unlocks “composable” workflows where complex tasks are automated by the AI chaining actions across tools. For example, consider integrating design with code: With an MCP connection, your AI IDE can pull design specs from Figma and generate code, eliminating manual steps and potential miscommunications.

No more context switching, no more manual translations, no more design-to-code friction—the AI can directly read design files, create UI components, and even export assets, all without leaving the coding environment.

[...] In summary, MCP matters because it turns the dream of a universal AI assistant for developers into a practical reality. It’s the missing piece that makes our tools context aware and interoperable with AI, with immediate productivity wins (less manual glue work) and strategic advantages (future-proof, flexible integrations). The next sections will make this concrete by walking through some eye-opening demos and use cases made possible by MCP.


Original Submission

posted by hubie on Wednesday June 04, @09:23AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Airlines are tightening their rules on batteries and portable chargers.

Because of fire risks, you'll now have to keep your portable chargers visible while you're using them, at least on Southwest Airlines flights. In other words, you can't charge your laptop or Switch in the overhead bin. This, the airline argues, will allow them to better catch and stop a fire if a battery overheats. This policy went into effect on May 28.

"When a portable charger/power bank is used during a flight, it must be out of any baggage and remain in plain sight," Southwest Airlines' policy reads. "Do not charge devices in the overhead bin."

You can still travel with up to 20 spare batteries, including portable chargers and power banks, at a time on Southwest.

"Portable chargers and spare batteries must be protected from short circuit by protecting any exposed terminals and packed in your carryon (sic) bag or with you onboard," the policy continues. "Lithium-ion batteries size must not exceed 100 watt-hours."

Southwest's policy is actually fairly generous, as many foreign airlines are taking much stricter approaches to portable charging products.

Other airlines, including EVA Air, China Airlines, Malaysia Airlines, Thai Airways, and Singapore Airlines, have all completely banned the use of portable chargers while passengers are in-flight, The New York Times reported. Ryanair asks passengers to remove lithium batteries from overhead bins, and the South Korean government requires that passengers keep their portable chargers out of overhead bins, too, also according to The New York Times. The Federal Aviation Administration, for its part, requires that lithium-ion batteries be kept in carry-on baggage.

This comes just a few months after a fire destroyed an Air Busan plane on the tarmac in South Korea, likely because of a portable power bank, local authorities told the BBC at the time. However, The New York Times reports that there is "no definitive link between portable batteries and the Air Busan fire, and an investigation is underway."
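For travellers trying to work out whether a given power bank even clears these rules, the relevant number is the 100 watt-hour limit quoted above, yet chargers are usually labelled in milliamp-hours. A quick conversion using the standard 3.7 V nominal lithium-ion cell voltage (the capacities below are just examples):

```python
# Check typical power-bank capacities against the 100 Wh airline limit.
# The 3.7 V nominal Li-ion voltage is the standard basis for Wh ratings;
# the example capacities are illustrative.
LIMIT_WH = 100.0
NOMINAL_VOLTAGE = 3.7

def watt_hours(capacity_mah: float, voltage: float = NOMINAL_VOLTAGE) -> float:
    return capacity_mah / 1000 * voltage

for capacity_mah in (10_000, 20_000, 27_000, 30_000):
    wh = watt_hours(capacity_mah)
    verdict = "within the limit" if wh <= LIMIT_WH else "over the 100 Wh limit"
    print(f"{capacity_mah:>6} mAh  ->  {wh:5.1f} Wh  ({verdict})")
```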


Original Submission
