A novel tap-jacking technique can exploit user interface animations to bypass Android's permission system and allow access to sensitive data or trick users into performing destructive actions, such as wiping the device.
Unlike traditional, overlay-based tap-jacking, TapTrap works even from zero-permission apps, which launch a legitimate but nearly transparent activity on top of their own malicious one, a behavior that remains unmitigated in Android 15 and 16.
TapTrap was developed by a team of security researchers at TU Wien and the University of Bayreuth (Philipp Beer, Marco Squarcina, Sebastian Roth, Martina Lindorfer), and will be presented next month at the USENIX Security Symposium.
However, the team has already published a technical paper that outlines the attack and a website that summarizes most of the details.
How TapTrap works
TapTrap abuses the way Android handles activity transitions with custom animations to create a visual mismatch between what the user sees and what the device actually registers.
A malicious app installed on the target device launches a sensitive system screen (permission prompt, system setting, etc.) from another app using 'startActivity()' with a custom low-opacity animation.
"The key to TapTrap is using an animation that renders the target activity nearly invisible," the researchers say on a website that explains the attack.
"This can be achieved by defining a custom animation with both the starting and ending opacity (alpha) set to a low value, such as 0.01," thus making the malicious or risky activity almost completely transparent.
"Optionally, a scale animation can be applied to zoom into a specific UI element (e.g., a permission button), making it occupy the full screen and increasing the chance the user will tap it."
The launched prompt receives all touch events, but all the user sees is the underlying app and its own UI elements, because the screen they are actually engaging with sits on top, rendered nearly transparent.
Believing they are interacting with the benign app, users may tap screen positions that correspond to risky actions, such as an "Allow" or "Authorize" button on a nearly invisible prompt.
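To make the mechanism concrete, here is a minimal Kotlin sketch of the launch step described above. It is an illustration only, not the researchers' code: R.anim.nearly_invisible is a hypothetical animation resource (an XML <alpha> with both fromAlpha and toAlpha set to 0.01, per the quote above), and targetIntent stands for whichever sensitive screen an attacker would point at.

import android.app.Activity
import android.app.ActivityOptions
import android.content.Intent

class AttackerActivity : Activity() {

    fun launchNearlyInvisiblePrompt(targetIntent: Intent) {
        // Custom transition: the launched screen enters (and stays) at ~1% opacity, so it
        // sits on top and receives every tap while the user still sees this app's own UI.
        val options = ActivityOptions.makeCustomAnimation(
            this,
            R.anim.nearly_invisible, // enter animation for the launched screen (hypothetical resource)
            R.anim.nearly_invisible  // exit animation for this activity
        )
        startActivity(targetIntent, options.toBundle())
    }
}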
A video released by the researchers demonstrates how a game app could leverage TapTrap to enable camera access for a website via Chrome browser.
To check whether TapTrap could work against applications on the Play Store, the official Android repository, the researchers analyzed close to 100,000 apps. They found that 76% of them are vulnerable to TapTrap because they include a screen ("activity") that meets the following conditions:
can be launched by another app
runs in the same task as the calling app
does not override the transition animation
does not wait for the animation to finish before reacting to user input
The researchers say that animations are enabled on the latest Android version unless the user disables them from the developer options or accessibility settings, exposing the devices to TapTrap attacks.
While developing the attack, the researchers used Android 15, the latest version at the time, but after Android 16 came out they also ran some tests on it.
Marco Squarcina told BleepingComputer that they tried TapTrap on a Google Pixel 8a running Android 16 and they can confirm that the issue remains unmitigated.
GrapheneOS, the mobile operating system focused on privacy and security, also confirmed to BleepingComputer that the latest Android 16 is vulnerable to the TapTrap technique, and announced that its next release will include a fix.
BleepingComputer has contacted Google about TapTrap, and a spokesperson said that the TapTrap problem will be mitigated in a future update:
"Android is constantly improving its existing mitigations against tap-jacking attacks. We are aware of this research and we will be addressing this issue in a future update. Google Play has policies in place to keep users safe that all developers must adhere to, and if we find that an app has violated our policies, we take appropriate action."- a Google representative told BleepingComputer.
When is an AI system intelligent enough to be called artificial general intelligence (AGI)? According to one definition reportedly agreed upon by Microsoft and OpenAI, the answer lies in economics: When AI generates $100 billion in profits. This arbitrary profit-based benchmark for AGI perfectly captures the definitional chaos plaguing the AI industry.
In fact, it may be impossible to create a universal definition of AGI, but few people with money on the line will admit it.
Over this past year, several high-profile people in the tech industry have been heralding the seemingly imminent arrival of "AGI" (i.e., within the next two years). [...] As Google DeepMind wrote in a paper on the topic: If you ask 100 AI experts to define AGI, you'll get "100 related but different definitions."
This isn't just academic navel-gazing. The definition problem has real consequences for how we develop, regulate, and think about AI systems. When companies claim they're on the verge of AGI, what exactly are they claiming?
I tend to define AGI in a traditional way that hearkens back to the "general" part of its name: An AI model that can widely generalize—applying concepts to novel scenarios—and match the versatile human capability to perform unfamiliar tasks across many domains without needing to be specifically trained for them.
However, this definition immediately runs into thorny questions about what exactly constitutes "human-level" performance. Expert-level humans? Average humans? And across which tasks—should an AGI be able to perform surgery, write poetry, fix a car engine, and prove mathematical theorems, all at the level of human specialists? (Which human can do all that?) More fundamentally, the focus on human parity is itself an assumption; it's worth asking why mimicking human intelligence is the necessary yardstick at all.
The latest example of trouble resulting from this definitional confusion comes from the deteriorating relationship between Microsoft and OpenAI. According to The Wall Street Journal, the two companies are now locked in acrimonious negotiations partly because they can't agree on what AGI even means—despite having baked the term into a contract worth over $13 billion.
[...] For decades, the Turing Test served as the de facto benchmark for machine intelligence. [...] But the Turing Test has shown its age. Modern language models can pass some limited versions of the test not because they "think" like humans, but because they're exceptionally capable at creating highly plausible human-sounding outputs.
Perhaps the most systematic attempt to bring order to this chaos comes from Google DeepMind, which in July 2024 proposed a framework with five levels of AGI performance: emerging, competent, expert, virtuoso, and superhuman. DeepMind researchers argued that no level beyond "emerging AGI" existed at that time. Under their system, today's most capable LLMs and simulated reasoning models still qualify as "emerging AGI"—equal to or somewhat better than an unskilled human at various tasks.
But this framework has its critics. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be "rigorously evaluated scientifically." In fact, with so many varied definitions at play, one could argue that the term AGI has become technically meaningless.
[...] The Microsoft-OpenAI dispute illustrates what happens when philosophical speculation is turned into legal obligations. When the companies signed their partnership agreement, they included a clause stating that when OpenAI achieves AGI, it can limit Microsoft's access to future technology. According to The Wall Street Journal, OpenAI executives believe they're close to declaring AGI, while Microsoft CEO Satya Nadella, on the Dwarkesh Patel podcast in February, called the idea of using AGI as a self-proclaimed milestone "nonsensical benchmark hacking."
[...] The disconnect we've seen above between researcher consensus, firm terminology definitions, and corporate rhetoric has a real impact. When policymakers act as if AGI is imminent based on hype rather than scientific evidence, they risk making decisions that don't match reality. When companies write contracts around undefined terms, they may create legal time bombs.
The definitional chaos around AGI isn't just philosophical hand-wringing. Companies use promises of impending AGI to attract investment, talent, and customers. Governments craft policy based on AGI timelines. The public forms potentially unrealistic expectations about AI's impact on jobs and society based on these fuzzy concepts.
Without clear definitions, we can't have meaningful conversations about AI misapplications, regulation, or development priorities. We end up talking past each other, with optimists and pessimists using the same words to mean fundamentally different things.
For the first time ever, a company has achieved a market capitalization of $4 trillion. And that company is none other than Nvidia:
The chipmaker's shares rose as much as 2.5% on Wednesday, pushing past the previous market value record ($3.9 trillion), set by Apple in December 2024. Shares in the AI giant later closed at $162.88, shrinking the company's market value to $3.97 trillion.
Nvidia has rallied by more than 70% from its April 4 low, when global stock markets were sent reeling by President Donald Trump's global tariff rollout.
[...] The record value comes as tech giants such as OpenAI, Amazon and Microsoft are spending hundreds of billions of dollars in the race to build massive data centers to fuel the artificial intelligence revolution. All of those companies are using Nvidia chips to power their services, though some are also developing their own.
In the first quarter of 2025 alone, the company reported its revenue soared about 70%, to more than $44 billion. Nvidia said it expects another $45 billion worth of sales in the current quarter.
Also at: ZeroHedge, CNN and AP.
Related: Nvidia Reportedly Raises GPU Prices by 10-15% as Tariffs and TSMC Price Hikes Filter Down
Apple just released an interesting coding language model - 9to5Mac:
Apple quietly dropped a new AI model on Hugging Face with an interesting twist. Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once.
The result is faster code generation, at a performance that rivals top open-source coding models. Here's how it works.
The nerdy bits
Here are some (overly simplified, in the name of efficiency) concepts that are important to understand before we can move on.
Autoregression
Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom.
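As a rough sketch of that loop, here it is in Kotlin. The scoreNextToken and pickToken functions are stand-ins for a real model and a sampling rule; nothing here is tied to any particular LLM.

// Autoregressive decoding: one token per step, always appended on the right.
fun generateAutoregressively(
    prompt: List<Int>,
    steps: Int,
    scoreNextToken: (List<Int>) -> DoubleArray, // stand-in for the model: one score per vocabulary entry
    pickToken: (DoubleArray) -> Int             // stand-in for the sampling rule
): List<Int> {
    val tokens = prompt.toMutableList()
    repeat(steps) {
        val scores = scoreNextToken(tokens) // the whole sequence so far is reprocessed...
        tokens += pickToken(scores)         // ...to commit exactly one new token
    }
    return tokens
}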
Temperature
LLMs have a setting called temperature that controls how random the output can be. When predicting the next token, the model assigns probabilities to all possible options. A lower temperature makes it more likely to choose the most probable token, while a higher temperature gives it more freedom to pick less likely ones.
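In code, temperature is just a divisor applied to those scores before they become probabilities. Here is a small Kotlin sketch of the idea; the scores a real model would feed into it are not shown.

import kotlin.math.exp
import kotlin.random.Random

// Temperature-scaled sampling: divide the scores (logits) by the temperature, softmax
// them, then draw one index at random according to the resulting probabilities.
fun sampleWithTemperature(logits: DoubleArray, temperature: Double, rng: Random = Random): Int {
    val scaled = logits.map { it / temperature }   // low temperature sharpens, high temperature flattens
    val maxScaled = scaled.maxOrNull() ?: 0.0      // subtract the max for numerical stability
    val weights = scaled.map { exp(it - maxScaled) }
    var r = rng.nextDouble() * weights.sum()
    for ((i, w) in weights.withIndex()) {
        r -= w
        if (r <= 0) return i
    }
    return weights.lastIndex                       // fallback for floating-point edge cases
}

At a temperature of 0.2 this almost always returns the highest-scoring token; at 1.2 it picks lower-scoring tokens far more often, which matters again further down.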
Diffusion
An alternative to autoregressive models is the diffusion model, which has more often been used by image generators like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested.
Still with us? Great!
Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising. If you want to dive deeper into how it works, here's a great explainer:
Why am I telling you all this? Because now you can see why diffusion-based text models can be faster than autoregressive ones, since they can basically (again, basically) iteratively refine the entire text in parallel.
This behavior is especially useful for programming, where global structure matters more than linear token prediction.
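Here is a very rough Kotlin sketch of that idea, in the spirit of masked-diffusion decoding rather than any specific model: every position starts masked, and each round a stand-in model proposes a token and a confidence for every position, with the most confident still-masked positions committed in parallel.

const val MASK = -1 // placeholder id for a position that has not been decided yet

fun diffusionStyleDecode(
    length: Int,
    rounds: Int,
    propose: (IntArray) -> List<Pair<Int, Double>> // stand-in model: (token, confidence) per position
): IntArray {
    val tokens = IntArray(length) { MASK }
    val perRound = (length + rounds - 1) / rounds        // how many positions to commit each round
    repeat(rounds) {
        val proposals = propose(tokens)
        tokens.indices
            .filter { tokens[it] == MASK }               // only positions still undecided
            .sortedByDescending { proposals[it].second } // most confident guesses first
            .take(perRound)
            .forEach { tokens[it] = proposals[it].first } // committed together, in any order
    }
    return tokens
}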
Phew! We made it. So Apple released a model?
Yes. They released an open-source model called DiffuCoder-7B-cpGRPO, which builds on a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month.
The paper describes a model that takes a diffusion-first approach to code generation, but with a twist:
"When the sampling temperature is increased from the default 0.2 to 1.2, DiffuCoder becomes more flexible in its token generation order, freeing itself from strict left-to-right constraints"
This means that by adjusting the temperature, it can behave either more or less like an autoregressive model. In essence, higher temperatures give it more flexibility to generate tokens out of order, while lower temperatures keep it closer to strict left-to-right decoding.
And with an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.
Built on top of an open-source LLM by Alibaba
Even more interestingly, Apple's model is built on top of Qwen2.5‑7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5‑Coder‑7B), then Apple took it and made its own adjustments.
They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.
And all this work paid off. DiffuCoder-7B-cpGRPO got a 4.4% boost on a popular coding benchmark, and it maintained its lower dependency on generating code strictly from left to right.
Of course, there is plenty of room for improvement. Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion.
And while some have pointed out that 7 billion parameters might be limiting, or that its diffusion-based generation still resembles a sequential process, the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas.
Whether, and when, that will actually translate into real features and products for users and developers is another story.
Of course, Bill Gates says AI will replace humans for most things — but coding will remain "a 100% human profession" centuries later. So what's your take? Are programmers on the way out or safe?
AI Is Scraping the Web, but the Web Is Fighting Back:
AI is not magic. The tools that generate essays or hyper-realistic videos from simple user prompts can only do so because they have been trained on massive data sets. That data, of course, needs to come from somewhere, and that somewhere is often the stuff on the internet that's been made and written by people.
The internet happens to be quite a large source of data and information. As of last year, the web contained 149 zettabytes of data. That's 149 million petabytes, or 149 billion terabytes, or 149 trillion gigabytes, otherwise known as a lot. Such a collection of textual, image, visual, and audio-based data is irresistible to AI companies that need more data than ever to keep growing and improving their models.
So, AI bots scrape the worldwide web, hoovering up any and all data they can to better their neural networks. Some companies, seeing the business potential, inked deals to sell their data to AI companies, including companies like Reddit, the Associated Press, and Vox Media. AI companies don't necessarily ask permission before scraping data across the internet, and, as such, many companies have taken the opposite approach, launching lawsuits against companies like OpenAI, Google, and Anthropic. (Disclosure: Lifehacker's parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Those lawsuits probably aren't slowing down the AI vacuum machines. In fact, the machines are in desperate need of more data: Last year, researchers found that AI models were running out of data necessary to continue with the current rate of growth. Some projections saw the runway giving out sometime in 2028, which, if true, gives only a few years left for AI companies to scrape the web for data. While they'll look to other data sources, like official deals or synthetic data (data produced by AI), they need the internet more than ever.
If you have any presence on the internet whatsoever, there's a good chance your data was sucked up by these AI bots. It's scummy, but it's also what powers the chatbots so many of us have started using over the past two and a half years.
But just because the situation is a bit dire for the internet at large, that doesn't mean it's giving up entirely. On the contrary, there is real opposition to this type of practice, especially when it goes after the little guy.
In true David-and-Goliath fashion, one web developer has taken it upon themselves to build a tool for web developers to block AI bots from scraping their sites for training data. The tool, Anubis, launched at the beginning of this year, and has been downloaded over 200,000 times.
Anubis is the creation of Xe Iaso, a developer based in Ottawa, Canada. As reported by 404 Media, Iaso started Anubis after she discovered an Amazon bot clicking on every link on her Git server. After deciding against taking down the Git server entirely, she experimented with a few different tactics before discovering a way to block these bots entirely: an "uncaptcha," as Iaso calls it.
Here's how it works: When running Anubis on your site, the program checks that a new visitor is actually a human by having the browser run cryptographic math with JavaScript. According to 404 Media, most browsers since 2022 can pass this test, as these browsers have tools built-in to run this type of JavaScript. Bots, on the other hand, usually need to be coded to run this cryptographic math, which would be too taxing to implement on all bot scrapes en masse. As such, Iaso has figured out a clever way to verify browsers via a test these browsers pass in their digital sleep, while blocking out bots whose developers can't afford the processing power required to pass the test.
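To give a sense of what such a challenge looks like, here is a small Kotlin sketch of a generic hash-based proof of work. It illustrates the idea only and is not Anubis's actual protocol; the challenge string and difficulty are made-up parameters.

import java.security.MessageDigest

// Find a nonce such that SHA-256(challenge + nonce) starts with `difficulty` zero hex
// digits. Cheap for a single human visit, costly when repeated across millions of
// scraped pages.
fun solveChallenge(challenge: String, difficulty: Int): Long {
    val sha256 = MessageDigest.getInstance("SHA-256")
    val target = "0".repeat(difficulty)
    var nonce = 0L
    while (true) {
        val hex = sha256.digest("$challenge$nonce".toByteArray())
            .joinToString("") { "%02x".format(it.toInt() and 0xff) }
        if (hex.startsWith(target)) return nonce // the nonce is the proof sent back to the server
        nonce++
    }
}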
This isn't something the general web surfer needs to think about. Instead, Anubis is made for the people who run websites and servers of their own. To that point, the tool is totally free and open source, and is in continued development. Iaso tells 404 Media that while she doesn't have the resources to work on Anubis full time, she is planning to update the tool with new features. That includes a new test that doesn't push the end-user's CPU as much, as well as one that doesn't rely on JavaScript, as some users disable JavaScript as a privacy measure.
If you're interested in running Anubis on your own server, you can find detailed instructions for doing so on Iaso's GitHub page. You can also test your own browser to make sure you aren't a bot.
Iaso isn't the only one on the web fighting back against AI crawlers. Cloudflare, for example, is blocking AI crawlers by default as of this month, and will also let customers charge AI companies that want to harvest the data on their sites. Perhaps as it becomes easier to stop AI companies from openly scraping the web, these companies will scale back their efforts—or, at the very least, offer site owners more in return for their data.
My hope is that I run into more websites that initially load with the Anubis splash screen. If I click a link, and am presented with the "Making sure you're not a bot" message, I'll know that site has successfully blocked these AI crawlers. For a while there, the AI machine felt unstoppable. Now, it feels like there's something we can do to at least put it in check.
See Also:
Earth is going to spin much faster over the next few months:
Earth is expected to spin more quickly in the coming weeks, making some of our days unusually short. On July 9, July 22 and Aug. 5, the position of the moon is expected to affect Earth's rotation so that each day is between 1.3 and 1.51 milliseconds shorter than normal.
A day on Earth is the length of time needed for our planet to fully rotate on its axis — approximately 86,400 seconds, or 24 hours. But Earth's rotation is affected by a number of things, including the positions of the sun and moon, changes to Earth's magnetic field, and the balance of mass on the planet.
Since the relatively early days of our planet, Earth's rotation has been slowing down, making our days longer. Researchers found that about 1 billion to 2 billion years ago, a day on Earth was only 19 hours long. This is likely because the moon was closer to our planet, making its gravitational pull stronger than it is now and causing Earth to spin faster on its axis.
Since then, as the moon has moved away from us, days on average have been getting longer. But in recent years, scientists have reported variations in Earth's rotation. In 2020, scientists found that Earth was spinning more quickly than at any point since records began in the 1970s, and we saw the shortest-ever recorded day on July 5, 2024, which was 1.66 milliseconds shy of 24 hours, according to timeanddate.com.
On July 9, July 22 and Aug. 5, 2025, the moon will be at its furthest distance from Earth's equator, which changes the impact its gravitational pull has on our planet's axis. Think of the Earth as a spinning top — if you were to put your fingers around the middle and spin, it wouldn't rotate as quickly as if you were to hold it from the top and bottom.
With the moon closer to the poles, the Earth's spin speeds up, making our day shorter than usual.
These variations are to be expected, but recent research suggests that human activity is also contributing to the change in the planet's rotation. Researchers at NASA have calculated that the movement of ice and groundwater, linked to climate change, has increased the length of our days by 1.33 milliseconds per century between 2000 and 2018.
Single events can also affect Earth's spin: the 2011 earthquake that struck Japan shortened the length of the day by 1.8 microseconds. Even the changing seasons affect Earth's spin, Richard Holme, a geophysicist at the University of Liverpool, told Live Science via email.
"There is more land in the northern hemisphere than the south," Holme said. "In northern summer, the trees get leaves, this means that mass is moved from the ground to above the ground — further away from the Earth's spin axis." The rate of rotation of any moving body is affected by its distribution of mass. When an ice skater spins on the spot, they rotate faster when their arms are tight to their chest, and slow themselves down by stretching their arms out. As Earth's mass moves away from its core in summer, its rate of rotation must decrease, so the length of the day increases, Holme explained.
Of course, on the days in question our clocks will still count 24 hours. The difference isn't noticeable on the individual level.
The only time we would see a change to time zones is if the difference between the length of day is greater than 0.9 seconds, or 900 milliseconds. Though this has never happened in a single day, over the years our clocks fall out of sync with the position of the planet. This is monitored by the International Earth Rotation and Reference Systems Service (IERS), which will add a "leap second" to UTC as needed to bring us back in line.
Compact GeV Proton Acceleration Using Ultra-Intense Lasers:
According to a study published in Scientific Reports, researchers at the University of Osaka have proposed "micronozzle acceleration"—a unique approach for creating giga-electron-volt proton beams using ultra-intense lasers.
Proton beams with giga-electron-volt (GeV) energy, previously considered possible only with giant particle accelerators, may soon be created in small setups, owing to a discovery by researchers at the University of Osaka.
Professor Masakatsu Murakami's team invented a revolutionary idea known as micronozzle acceleration (MNA). The researchers achieved a world-first by constructing a microtarget with small nozzle-like characteristics and irradiating it with ultraintense, ultrashort laser pulses. This was accomplished using extensive numerical simulations.
Unlike traditional laser-based acceleration methods, which use flat targets and have energy limits below 100 mega-electron-volts (1 GeV = 1000 MeV), the micronozzle structure allows for sustained, stepwise acceleration of protons within a powerful quasi-static electric field created inside the target. This innovative method permits proton energy to approach 1 GeV while maintaining great beam quality and stability.
This discovery opens a new door for compact, high-efficiency particle acceleration. We believe this method has the potential to revolutionize fields such as laser fusion energy, advanced radiotherapy, and even laboratory-scale astrophysics.
Masakatsu Murakami, Professor, The University of Osaka
The implications are extensive:
- Energy: Supports laser-driven nuclear fusion with rapid ignition techniques
- Medicine: Makes proton cancer treatment systems more accurate and compact
- Fundamental science: Enables the simulation of harsh astrophysical settings and the investigation of matter under extremely powerful magnetic fields
The study is the first theoretical proof of compact GeV proton acceleration utilizing microstructured targets. It is based on simulations conducted on the University of Osaka's SQUID supercomputer.
Journal Reference:
Murakami, M., Balusu, D., Maruyama, S., et al. Generation of giga-electron-volt proton beams by micronozzle acceleration [open], Scientific Reports (DOI: 10.1038/s41598-025-03385-x)
See also:
Arthur T Knackerbracket has processed the following story:
Antarctic sea ice cover in recent summers has been far below historical levels
The collapsing sea ice around the Antarctic continent has led to a doubling in the number of icebergs calving from ice sheets and a surge in sea temperatures, and the impacts are growing more severe as heat accumulates in the Southern Ocean.
This equates to the disappearance of an area of ice nearly 6.5 times the size of the UK. Ice extent in 2024 was nearly as low, and 2025 is tracking towards a similarly grim level.
The team found that in summers with low sea ice since 2016, the loss of sea ice led to a 0.3°C rise in the average temperature in the Southern Ocean between the latitudes of 65° and 80° south.
More worryingly, the extra heat from a single low sea ice year didn’t dissipate by the following year. In fact, it kept the ocean warmer for at least the following three years, making any temperature rise far more serious than expected, says Doddridge.
“We have known for a while that losing sea ice in the summer should warm the ocean, essentially because the sea ice and the snow that sit on top of it are really reflective,” says Doddridge.
“The fact that the ocean memory of the warming lasts for three whole years gives the opportunity for the warming impact in the Southern Ocean to compound. Now it’s just building and building and building.”
Another consequence of such a severe decline in sea ice is that it may lead to a faster loss of the inland ice sheets. When the ocean surface is frozen, it dampens down the Southern Ocean swells, preventing them from striking the edges of the ice sheets that overlay the Antarctic continent. Once the protective skirt of sea ice is gone, the ice sheets on the coastal margin begin to break up more readily.
The team found that for every 100,000-square-kilometre reduction in sea ice, there were an additional six icebergs greater than 1 square kilometre in size breaking away. “In low sea ice years, we saw twice as many icebergs,” says Doddridge.
The loss of sea ice will also severely affect the species that depend on being able to haul themselves out of the ocean onto a solid platform for their survival. The study predicts that species such as emperor penguins (Aptenodytes forsteri) and crabeater seals (Lobodon carcinophagus) may be particularly badly affected.
Antarctic science is also made more challenging as sea ice plays a critical role in enabling ships to safely resupply research stations.
“When we have an extreme low sea ice year, there’s an impact that the Antarctic system will keep feeling for many years. It’s not just a one-off event,” says Abram. “There’s just a multitude of ways that this sea ice loss impacts on Antarctic ecosystems.”
Ars Technica reports on the curious rise of giant tablets on wheels:
Over the past few years, LG has set off a strange tech trend that's been rolling onto devices sold across Amazon and other online electronics retailers.
In 2022, the company launched the StanbyME, which is essentially a $1,000 [$899 currently --JE] 27-inch tablet running LG's smart TV operating system (OS), webOS, but lacking a tuner.
[...]
Today, the StanbyME competes against a slew of similar devices, including some from Samsung, but mostly from smaller brands and running Android.
I've had one of these devices, the KTC MegPad 32-inch Android Tablet (A32Q7 Pro), rolling around my home for a few weeks, and I'm left curious about what's driving the growth of StanbyME-like devices, which are noticeably niche and expensive.
[...]
Unlike LG's StanbyME, KTC's device doesn't run a smart TV OS. Instead, it's a 32-inch Android 13 tablet. Still, KTC heavily markets the MegPad's ability to serve as streaming hardware, and that's one of the best uses I found for it.
[...]
The MegPad is also a diplomatic solution for homes with limited TVs or computers. This could be helpful for homes with kids with varied interests or in my home, where a speedy, 55-inch TV in the living room is the best screen available by far. I was able to let my partner take the big screen for gaming and still hang out nearby while streaming on the MegPad.
[...]
Compared to the TV mounted on my living room wall, the MegPad is much easier to move from room to room, but it's easy to overestimate how seamless transporting it is. Yes, it's on a set of five 360-degree wheels, but the wheels don't lock, and the device weighs 40.3 pounds, per its Amazon listing.
[...]
A fully rotating screen, however, makes up for some of my mobility complaints and diversifies the MegPad's potential uses. Besides streaming, for example, the MegPad was great for watching yoga videos online (which calls for viewing the screen from different heights and positions). It also proved to be an ideal setup for creating a large print-out collage, which included a lot of dragging, dropping, and cropping of images.
[...]
Further, the MegPad, like many StanbyME-like devices, uses Android 13, which doesn't require paying the vendor licensing fees that purpose-built smart TV OSes, such as Android TV/Google TV and webOS, would. There are some benefits to that, though. To start, Android 13 doesn't have the integrated ads that Android TV or the Google TV interface does. Google claims that the Google TV platform doesn't use automatic content recognition (ACR), but as Consumer Reports has noted, Google collects "data from TVs that use its smart TV platform—and there's no opting out of Google's policies during setup if you want smart TV functionality."
[...]
Further differing from LG's StanbyME and real TVs, the MegPad doesn't include a traditional remote. The tablet comes with a basic Bluetooth mouse, but due to the tablet's portability, I frequently used the tablet without a flat surface within arm's reach available for comfortable mouse control. The touchscreen is reliable, but gestures can be cumbersome on a tablet this large, and the display was often out of my hand's reach.
[...]
Devices like the MegPad and Amazon's Echo Show have become the new de facto stand-ins for portable TVs, even though they're not true TV sets. Even LG's StanbyME Go, a 27-inch webOS-powered display packed into a briefcase, is a far cry from what most of us would traditionally consider a portable TV.
[...]
KTC also sees the MegPad's appeal as a pseudo-TV. The MegPad's product page emphasizes users' ability to "watch favorite shows/movies directly—no PC needed" and to "stream Netflix [and] YouTube... more effortlessly on your smart TV." Its Amazon product page also promotes the keywords "portable TV," "rolling TV," "mobile TV," and "standing TV." This is all despite the MegPad not technically being a true TV.
[...]
I've been fascinated by the MegPad and similar devices because they introduce a unique approach to streaming, web browsing, and productivity. But ultimately, they're hard to recommend when there are other personal gadgets that are more affordable and often take up less space.
[...]
Overall, the growing presence of devices like the MegPad underscores a confluence occurring between smart TVs, tablets, monitors, and smart displays. With software being forced into more types of displays, often in the interest of gathering more user data, it's an interesting time to consider what you want from your next screen—be it computing power, a certain size, the omission or inclusion of web connectivity, and mobility.
[...]
Three years after LG made TV-esque devices on wheels a talking point, more brands are trying to roll into the market. That includes LG's best TV frenemy, Samsung, which has been using the form factor in limited geographies to drive sales of "smart monitors."
Tech brands have ulterior motives for pushing this newer form factor that go beyond filling a gap in consumer gadgets. But if a large tablet or small smart display with wheels fits your needs, the options are there, and they should meet most expectations.
Smart TVs from LG and Samsung are increasingly being used as monitors for Macs and PCs, given that they are generally cheaper than an OLED display. The trade-off for an inexpensive TV is adware. To serve those ads, the TVs can capture screenshots of everything on screen and sell that data to just about anybody who asks, or use it themselves for targeted home-screen advertising.
Smart TVs are not just screens you watch. They are also sensors that watch you. LG and Samsung smart TVs both include technology called Automatic Content Recognition, or ACR.
The feature captures small snapshots of what's on your screen or snippets of audio, then sends that data to external servers to identify exactly what you are watching.
ACR works even when the TV is used as a PC monitor or connected via HDMI. A 2024 study by University College London and collaborators found LG TVs capturing screenshots as frequently as every 10 milliseconds.
Samsung TVs do so every 500 milliseconds — even when displaying content from external devices. Opting out of ACR in settings completely stops this network traffic.
Each snapshot is matched to a massive database to determine the exact program or ad. They allow companies to build a detailed profile of your viewing habits.
https://appleinsider.com/inside/mac/tips/how-to-stop-your-lg-or-samsung-smart-tv-from-tracking-you
Arthur T Knackerbracket has processed the following story:
It's pork barrel time in Europe for Nvidia (and possibly AMD) as corporations bid for a slice of the €20 billion ($23.6 billion) fund to build proposed AI Gigafactories to advance the EU's AI credentials.
The European Commission (EC) says it has received an "overwhelming response" to its Call for Expression of Interest in building AI Gigafactories, as well there might be when someone is waving an open check book around.
Some 76 expressions of interest to set up AI Gigafactories in 16 EU member states involving 60 different sites were submitted, the EC confirms. Respondents include global and European orgs representing datacenter operators, telecoms providers and power companies.
For those not aware, AI Gigafactories are described as "state-of-the-art, large-scale AI compute and data storage hubs," basically oversized bit barns purpose built for the development of AI models and applications at scale – meaning models with hundreds of trillions of parameters.
Speaking at a press conference, EC Executive Vice-President for Technological Sovereignty, Security, and Democracy Henna Virkkunen called it "an achievement that far exceeds our expectations and demonstrates Europe's growing momentum and enthusiasm for innovation in AI."
Key winners from this largesse are set to be the GPU makers; the new facilities will require at least three million of the latest generation of these accelerators for AI processing, the EC said. Enough to keep Nvidia chief Jensen Huang in fancy leather jackets for some time to come.
The goal of the investment is for Europe to position itself as a global powerhouse in artificial intelligence. Currently, the region is lagging behind the US and China in the AI model development arms race, but somewhere ahead of the Cook Islands.
However, these submissions are not formal applications. Instead, they are to inform the European Commission and EU member states in mapping out the range of potential candidates available to establish AI Gigafactory facilities across the bloc, with an official call for proposals planned for the end of 2025.
[...] Politicians in Denmark are keen on their country hosting one of the bloated bit barns, but this has proved controversial because of the power consumption such a site would involve, potentially putting pressure on the Danish electricity grid.
Belgium is also understood to have proposals for an AI Gigafactory, with potential sites at Charleroi or Zellik, just outside Brussels.
Arthur T Knackerbracket has processed the following story:
Using an inexpensive electrode coated with DNA, MIT researchers have designed disposable diagnostics that could be adapted to detect a variety of diseases, including cancer or infectious diseases such as influenza and HIV.
These electrochemical sensors make use of a DNA-chopping enzyme found in the CRISPR gene-editing system. When a target such as a cancerous gene is detected by the enzyme, it begins shearing DNA from the electrode nonspecifically, like a lawnmower cutting grass, altering the electrical signal produced.
One of the main limitations of this type of sensing technology is that the DNA that coats the electrode breaks down quickly, so the sensors can’t be stored for very long and their storage conditions must be tightly controlled, limiting where they can be used. In a new study, MIT researchers stabilized the DNA with a polymer coating, allowing the sensors to be stored for up to two months, even at high temperatures. After storage, the sensors were able to detect a prostate cancer gene that is often used to diagnose the disease.
The DNA-based sensors, which cost only about 50 cents to make, could offer a cheaper way to diagnose many diseases in low-resource regions, says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.
“Our focus is on diagnostics that many people have limited access to, and our goal is to create a point-of-use sensor. People wouldn’t even need to be in a clinic to use it. You could do it at home,” Furst says.
[...] Electrochemical sensors work by measuring changes in the flow of an electric current when a target molecule interacts with an enzyme. This is the same technology that glucose meters use to detect concentrations of glucose in a blood sample.
[...] This polymer, which costs less than 1 cent per coating, acts like a tarp that protects the DNA below it. Once deposited onto the electrode, the polymer dries to form a protective thin film.
“Once it’s dried, it seems to make a very strong barrier against the main things that can harm DNA, such as reactive oxygen species that can either damage the DNA itself or break the thiol bond with the gold and strip your DNA off the electrode,” Furst says.
The researchers showed that this coating could protect DNA on the sensors for at least two months, and it could also withstand temperatures up to about 150 degrees Fahrenheit. After two months, they rinsed off the polymer and demonstrated that the sensors could still detect PCA3, a prostate cancer gene that can be found in urine.
This type of test could be used with a variety of samples, including urine, saliva, or nasal swabs. The researchers hope to use this approach to develop cheaper diagnostics for infectious diseases, such as HPV or HIV, that could be used in a doctor’s office or at home. This approach could also be used to develop tests for emerging infectious diseases, the researchers say.
Journal Reference: Xingcheng Zhou, Jessica Slaughter, Smah Riki, et al., Polymer Coating for the Long-Term Storage of Immobilized DNA, ACS Sensors DOI: 10.1021/acssensors.5c00937
Stop Killing Games Finally Reaches One Million Signature Milestone, But There's A Pretty Big Catch:
If you're in gaming circles on social media, chances are you'll have heard of Stop Killing Games. It's an initiative led by YouTuber Accursed Farms that aims to challenge video game publishers in an attempt to stop them from making games completely unplayable once support has ended, with Ubisoft's The Crew being a prime example.
Stop Killing Games has set up multiple petitions in various countries over its duration, though the big one is the European Citizens' Initiative, which aims to take the problem all the way up to the European Union. It's a movement which has faced a lot of challenges in the past, including its alleged misrepresentation by popular streamer Pirate Software, but support has grown over the past few days.
Stop Killing Games Petition Reaches One Million Signatures, But Still Needs Support
Thanks to a surge of momentum heading into the petition's final month, Stop Killing Games reached its coveted one million signature milestone earlier today, a great achievement which should absolutely be celebrated. However, creator Accursed Farms took to YouTube [8:07 --JE] to explain that the movement will still need a lot more signatures and support if it wants to guarantee success.
The future doesn't look pretty when it comes to keeping the games we love.
According to Accursed Farms, while the official tally of signatures has reached one million, it's extremely unlikely to stay that way. That's because when people make even minor mistakes while signing the initiative, the EU will completely invalidate those signatures, and it's almost guaranteed that many of the signatures contain mistakes, whether accidental or intentional.
On top of that, Accursed Farms claims that he's also heard of bad actors spoofing signatures on the initiative (which is illegal, don't do that), so these will also be invalidated when the deadline is reached. With all that in mind, Stop Killing Games now has a new goal of around 1.4 to 1.5 million signatures to truly ensure that the original one million signature goal is met once all the invalid signatures have been weeded out.
That means Stop Killing Games needs more support to ensure its success, and this is your reminder to go ahead and make your voice heard before the deadline on July 31. If you've yet to sign the petition, you can find the link here as well as a link to the Stop Killing Games website, which includes several other petitions to sign if you reside outside the EU.
See also:
A report published by Live Science tells us about the world's first computer that combines human brain with silicon:
A new type of computer that combines regular silicon-based hardware with human neurons is now available for purchase.
The CL1, released March 2 by Melbourne-based startup Cortical Labs, is "the world's first code deployable biological computer," according to the company's website. The shoebox-sized system could find applications in disease modeling and drug discovery, representatives say.
Inside the CL1, a nutrient-rich broth feeds human neurons, which grow across a silicon chip. That chip sends electrical impulses to and from the neurons to train them to exhibit desired behaviors. Using a similar system, Cortical Labs taught DishBrain (a predecessor to the CL1) to play the video game Pong.
"The perfusion circuit component acts as a life support system for the cells – it has filtration for waste products, temperature control, gas mixing, and pumps to keep everything circulating," Brett Kagan, chief scientific officer of Cortical Labs, told New Atlas.
Because the technology incorporates human neurons, some scientists have raised ethical concerns around the development of "synthetic biological intelligence" like the CL1. Although DishBrain and CL1 are less complex than human brains, the technology has sparked debates around the nature of consciousness and the potential for future synthetic biological intelligence to experience suffering.
"Right now, I think this is an unfounded concern. I think it would be a missed opportunity to not [be] able to use a system that has the promise to cure devastating brain diseases," Silvia Velasco, a stem cell researcher at the Murdoch Children's Research Institute in Australia who was not involved in the development of CL1, told the Australian Broadcasting Corporation. "But at the same time, it's important that we evaluate and anticipate potential concerns that the use of these models might raise."
The CL1 units will retail for approximately $35,000 each and will become widely available in late 2025, New Atlas reported. Each unit needs suitable laboratory facilities to run properly, so Cortical Labs will also offer a remote cloud-based computing option for users who don't have their own device.
[Editor's Note: Corrected the title to reflect the Australian origin of the laboratory. 2025-07-09 07:16Z. JR]
Arthur T Knackerbracket has processed the following story:
A new therapy for type 1 diabetes could nix the need for insulin injections.
Just a single infusion of lab-grown pancreatic cells let patients’ bodies make all the insulin they needed, scientists report June 20 in the New England Journal of Medicine. A year after treatment, 10 out of 12 participants no longer needed supplemental insulin.
“This is a landmark study — this cannot be overstated,” says Giacomo Lanzoni, a diabetes researcher at the University of Miami Miller School of Medicine who was not involved in the new work. These lab-grown cells can successfully treat diabetes, he says, and the technique to make them can be scaled up. That opens the door to restoring insulin production for many people with the disease.
Type 1 diabetes affects over 8 million people worldwide. It’s an autoimmune disease that pits a person’s immune system against the insulin-producing cells in their pancreas, destroying them. Insulin helps sugar pass from the blood to our cells, for energy; without it, sugar stays in the blood, starving cells. “People can’t survive without insulin,” says study coauthor Felicia Pagliuca, a cell biologist and senior vice president at Vertex Pharmaceuticals, the Boston-based company behind the new therapy.
T.W. Reichman et al. Stem cell-derived, fully differentiated islets for type 1 diabetes. The New England Journal of Medicine. Published online June 20, 2025. doi: 10.1056/NEJMoa2506549.