https://medicalxpress.com/news/2025-03-drug-reestablishes-brain-mouse.html
A new study by UCLA Health has discovered what researchers say is the first drug to fully reproduce the effects of physical stroke rehabilitation in model mice.
The findings, published in Nature Communications, come from tests of two candidate drugs derived from the team's studies of how rehabilitation acts on the brain; one of the drugs produced significant recovery of movement control after stroke in mice.
Stroke is the leading cause of adult disability because most patients do not fully recover from its effects. There are no drugs in the field of stroke recovery, so stroke patients must rely on physical rehabilitation, which has been shown to be only modestly effective.
"The goal is to have a medicine that stroke patients can take that produces the effects of rehabilitation," said Dr. S. Thomas Carmichael, the study's lead author and professor and chair of UCLA Neurology.
"Rehabilitation after stroke is limited in its actual effects because most patients cannot sustain the rehab intensity needed for stroke recovery.
"Further, stroke recovery is not like most other fields of medicine, where drugs are available that treat the disease—such as cardiology, infectious disease or cancer," Carmichael said.
"Rehabilitation is a physical medicine approach that has been around for decades; we need to move rehabilitation into an era of molecular medicine."
In the study, Carmichael and his team sought to determine how physical rehabilitation improved brain function after a stroke and whether they could generate a drug that could produce these same effects.
Working in laboratory mouse models of stroke and with stroke patients, the UCLA researchers identified a loss of brain connections remote from the site of the stroke damage. Brain cells located at a distance from the stroke site become disconnected from other neurons. As a result, brain networks do not fire together for functions like movement and gait.
The UCLA team found that some of the connections that are lost after stroke occur in a cell called a parvalbumin neuron. This type of neuron helps generate a brain rhythm, termed a gamma oscillation, which links neurons together so that they form coordinated networks to produce a behavior, such as movement.
Stroke causes the brain to lose gamma oscillations. Successful physical rehabilitation in both laboratory mice and humans brought gamma oscillations back into the brain and, in the mouse model, repaired the lost connections of parvalbumin neurons.
Carmichael and the team then identified two candidate drugs that might produce gamma oscillations after stroke. These drugs specifically work to excite parvalbumin neurons.
The researchers found one of the drugs, DDL-920, developed in the UCLA lab of Varghese John, who coauthored the study, produced significant recovery in movement control in mice.
Journal Reference: Okabe, N., Wei, X., Abumeri, F. et al. Parvalbumin interneurons regulate rehabilitation-induced functional recovery after stroke and identify a rehabilitation drug. Nat Commun 16, 2556 (2025). https://doi.org/10.1038/s41467-025-57860-0
Arthur T Knackerbracket has processed the following story:
[Ed's Comment: Originally this story was viewable in Firefox and downloaded fine using "Arthur". It now gives a cookie warning, cannot complete its redirections, and no longer displays. If anyone finds a solution to the problem, please leave it in the comments. TY --JR]
There are a lot of things in life that we keep safely tucked away that we hope we'll never need to use. Our smoke alarms, for instance, or our emergency funds. These are the very things that we can't neglect, though, because when we need them, we really, really need them. Another solid example for drivers is a spare tire. Are you one of those unfortunate souls who has been stuck on an unfamiliar road late at night while waiting for your mechanic to hook you up with a spare? This topic is sure to strike a real chord with you, then.
In November 2023, the UK's RAC reported that it had reviewed "equipment lists of more than 300 car models across 28 brands — everything from the smallest superminis to the largest 4x4s," and what did the British auto servicing brand discover? Less than 3% of those models were sold new with a spare wheel included in the price.
For the manufacturer, of course, there's a money-saving benefit to limiting production of spares, while there are also some performance-related reasons to dispense with them. They add weight when kept in the back, and because they aren't always offered as full-size spares, they can limit performance while being driven on. As they're something of a last resort, drivers may not be inclined to use them anyway, which also limits the call for them. There are also more lightweight and convenient approaches to dealing with a flat, which is a further factor in the reduction of spare tires.
If you were a fan of the fearsome muscle cars of the mid-to-late twentieth century, you surely still lament the fact that these mighty models became increasingly less practical, and then all but impossible to drive, as a result of such paradigm shifts as the Clean Air Act. The EPA reports that this legislation, enacted in 1970, "authorized the development of comprehensive federal and state regulations to limit emissions from both stationary (industrial) sources and mobile sources," and there weren't many mobile sources more majestic than the Dodge Charger R/T and its kind. Fuel increasingly had to be cleaner, engines needed to be more efficient and generally smaller, and the trend for lighter, more practical models began.
As important as a spare tire can be, there's no getting around the fact that it adds considerable weight to a vehicle: 44 pounds (20 kg) or so, depending on the type of vehicle. That complicates the matter of hitting eco-friendlier targets. Shedding the spare could thus be seen as an advantage, with a potential positive effect on a vehicle's fuel economy, but weighing that benefit against the risks of driving without a spare is a matter for the individual driver to decide on.
After all, spares can certainly be hefty and unwieldy to work with at the roadside. Another part of the reasoning is that lots of drivers don't use them, which means they're often dead weight. Additionally, the vehicle not only has to store the wheel itself, but also the means to actually use it should the need arise. The jack alone can be quite the bulky accessory.
It's also important to note that EVs and hybrids are becoming increasingly popular. Cox Automotive notes that 1.3 million EVs were sold in the United States in 2024. The thing about such vehicles, though, is that while they don't have a bulky ICE, their batteries typically make them heavier than their gas or diesel counterparts. That main battery is the most crucial, largest, and weightiest component, and in order to accommodate it, space comes at a real premium in an electric vehicle.
As a result, seemingly extraneous features, such as spare tires, can become even more of a rarity. As ArtCenter College of Design executive director of transportation systems and design, Geoff Wardle, put it to the Los Angeles Times in August 2023, "batteries, electrical systems control units or hydrogen tanks ... encroach into the traditional places that spare tires are found: under the trunk floor."
With these vehicles being heavier than their gas-powered alternatives, the weight added by a spare tire may be more of a concern. The difference may not be as stark as you might expect, though, depending on the make and model: The electric Genesis G80, for example, weighs approximately 15% more than its ICE equivalent. Nonetheless, it's one contributing factor to bear in mind. According to the Los Angeles Times, a query about EV spare tires prompted a response from Honda claiming that "if the vehicle is in an accident, the spare tire can cause damage to the electric battery which could cause a failure in the battery." Perhaps this explains Tesla's stance on spare tires.
With the knowledge that their new vehicle purchase isn't likely to come with a spare tire, drivers can take comfort in the fact that its absence doesn't leave them entirely without options. Run-flat tires are a common solution. Well, more of a bandage than a solution: run-flats aren't exactly throwaway, but they won't resolve your issue for the long term. Michelin reports that these are the standard alternative to full spares on up to around 14% of new vehicles, but warns that, after suffering a puncture, one can typically be driven only a maximum of around 50 miles before losing its crucial "fins," small raised sections in the sidewall that direct air and redistribute heat that would otherwise destroy the rubber.
The wonderfully named donut spares can be substituted as space-saving measures, too, and driving performance on them may surprise drivers. As Ford Vehicle Dynamics Team's Jamie Cullen told Car and Driver in 2017, they're intended to "come as close to the standard tire's performance and response as possible. Mini spares use an aggressive compound and minimum tread depth to achieve those results." Spares are not designed to be driven on for long, though, as noted.
Puncture kits are another space and cost-saving solution manufacturers offer, but there are certain jobs that a more humble repair kit just isn't equipped for. As Toyota Magazine UK states, such a set "shouldn't be used if the puncture is more than 4mm in diameter, if the wheel rim has been damaged, or if the tyre has been flat or running at low pressure for a prolonged period."
In the auto industry, driver, passenger, and pedestrian safety should always come first and foremost. Unfortunately, there are always complicating factors. Whichever angle you consider it from, limiting access to spare tires leaves vehicles more vulnerable on the roads. This is far from new information. In November 2015, the Los Angeles Times quoted John Nielsen, managing director of Automotive Engineering and Repair at AAA, as making the critical point: "AAA responds to more than 4 million calls for flat tire assistance annually," noting that "Flat tires are not a disappearing problem, but spare tires are."
This both increases the strain on services such as AAA that provide emergency support and makes drivers more reliant upon those services. The need for a spare, after all, tends to arise with no notice, at the least convenient moment.
The unfortunate fact is a driver can never be sure what kind of eventuality they might come across. When a tire issue arises, you might get away with it relatively lightly with only minor damage, or you might not. All you can do is hope that the interim measure available to you gets you to where you need to be, or that a timely servicing is in the offing. In any case, it's always best to keep some essential items with you in your car in case of a flat.
A unanimous federal appeals court ruled that pictures generated solely by machines do not qualify for copyright protection.
"The Copyright Act of 1976 requires all eligible work to be authorized in the first instance by a human being," said the U.S. Court of Appeals for the District of Columbia.
The 3-0 court ruling, issued March 18, was written by Circuit Judge Patricia A. Millett, who was nominated by President Obama in 2013.
Computer scientist Dr. Stephen Thaler created a generative artificial intelligence named "Creativity Machine," which made a picture that Thaler titled "A Recent Entrance to Paradise."
The U.S. Copyright Office denied Thaler's application (for copyright registration) based on its requirement that work must be authored in the first instance by a human being. The copyright application listed Creativity Machine as the work's sole author.
Thaler litigated. A federal court (the U.S. District Court for the District of Columbia) upheld the Copyright Office's denial, and the federal appeals court affirmed the district court's ruling.

After the March 18 opinion from the federal appeals court, Thaler's attorney, Ryan Abbott, said he and his client "strongly disagree" with the ruling and intend to appeal. The Copyright Office said it "believes the court reached the correct result."
"Judge Millett explained it best that, 'machines are tools, not authors.' Interpretations of the Copyright Act would be nonsensical if the 'author' could be a computer or other machine. Machines do not have children, they do not die, they do not have nationalities or hold property. All of these concepts referenced in copyright law would have absurd results if authorship was granted to a computer program, and courts are simply not allowed to re-interpret statutes or ignore portions of a statute." -- Alicia Calzada, Deputy General Counsel of the National Press Photographers Association (NPPA)
Previously: https://soylentnews.org/article.pl?sid=23/08/24/0036210
On Wednesday, web infrastructure provider Cloudflare announced a new feature called "AI Labyrinth" that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.
Cloudflare, founded in 2009, is probably best known as a company that provides infrastructure and security services for websites, particularly protection against distributed denial-of-service (DDoS) attacks and other malicious traffic.
Instead of simply blocking bots, Cloudflare's new system lures them into a "maze" of realistic-looking but irrelevant pages, wasting the crawler's computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler's operators that they've been detected.
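To get a feel for how such a maze can be effectively endless without storing any pages, here is a minimal sketch: a toy HTTP server that derives decoy pages deterministically from the requested path, with a deliberately crude User-Agent check standing in for bot detection. This illustrates the lure-and-waste idea only; it is not Cloudflare's implementation, which reportedly generates realistic-looking content and hides the maze from human visitors.

```python
# Toy sketch of a "maze" responder: suspected bots get procedurally
# generated decoy pages whose links lead only deeper into the maze.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

def decoy_page(path: str) -> str:
    # Derive deterministic pseudo-content and onward links from the path,
    # so the maze is stable across requests but effectively endless.
    seed = hashlib.sha256(path.encode()).hexdigest()
    links = "".join(
        f'<li><a href="/maze/{seed[i:i+8]}">section {seed[i:i+8]}</a></li>'
        for i in range(0, 40, 8)
    )
    return (f"<html><body><h1>Archive node {seed[:8]}</h1>"
            f"<p>Filler text {seed}</p><ul>{links}</ul></body></html>")

class MazeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Crude heuristic for the sketch; real systems use far richer signals.
        is_suspected_bot = "bot" in self.headers.get("User-Agent", "").lower()
        body = decoy_page(self.path) if is_suspected_bot else "<html>real content</html>"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MazeHandler).serve_forever()
```

Because each decoy page is a pure function of its path, the maze costs the defender almost nothing while the crawler burns time and compute on pages that never lead anywhere real.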
https://www.phoronix.com/news/Linux-6.15-slab
Ahead of the upcoming Linux 6.15 kernel cycle, a few early pull requests have already been sent to Linus Torvalds in advance of the anticipated v6.14 release on Sunday. Among those early changes for Linux 6.15 are the SLAB allocator updates, which include a fix for kvmalloc cache randomization having been rendered inadequate by inadvertent reuse of the same randomization seed.
Besides a few minor improvements, the most notable change in the SLAB pull request ahead of the Linux 6.15 merge window is a fix to the kmalloc cache randomization within the kvmalloc code.
Google engineers discovered that the CONFIG_RANDOM_KMALLOC_CACHES hardening feature wasn't properly being employed. CONFIG_RANDOM_KMALLOC_CACHES creates multiple copies of slab caches and makes kmalloc randomly pick one based on the code address in order to help fend off memory vulnerability exploits. But the problem was the same random seed always ended up being used with the current Linux kernel code. From the Google code comments:
"...This is problematic because `__kmalloc_node` will use the return address as the seed to derive the *random* cache to use. Since all calls to `kvmalloc_node` will use the same seed when the size is large, the hardening is rendered completely pointless."
Gong Ruiqi of Huawei, who worked out the solution to the issue, explained:
"That literally means all kmalloc invoked via kvmalloc would use the same seed for cache randomization (CONFIG_RANDOM_KMALLOC_CACHES), which makes this hardening non-functional.
The root cause of this problem, IMHO, is that using RET_IP only cannot identify the actual allocation site in case of kmalloc being called inside non-inlined wrappers or helper functions. And I believe there could be similar cases in other functions. Nevertheless, I haven't thought of any good solution for this. So for now let's solve this specific case first.
For __kvmalloc_node_noprof, replace __kmalloc_node_noprof and call __do_kmalloc_node directly instead, so that RET_IP can take the return address of kvmalloc and differentiate each kvmalloc invocation."
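To see why routing every large allocation through one wrapper defeats the hardening, consider this toy model. It is a sketch in Python rather than kernel C, with pick_cache standing in for the kernel's seed mixing; it only illustrates the call-site-as-seed idea, not the actual slab allocator code.

```python
# Conceptual model of the CONFIG_RANDOM_KMALLOC_CACHES bug: the cache copy is
# picked by hashing the caller's return address (_RET_IP_). If every large
# kvmalloc routes through the same wrapper, the "caller" is always the
# wrapper itself, so every allocation lands in the same cache copy.
import inspect

NUM_CACHES = 16

def pick_cache(ret_ip: str) -> int:
    return hash(ret_ip) % NUM_CACHES   # stand-in for the kernel's seed mix

def kmalloc():
    # Models __kmalloc_node: seeds on its *immediate* caller's location.
    caller = inspect.stack()[1]
    return pick_cache(f"{caller.filename}:{caller.lineno}")

def kvmalloc_buggy():
    # Wrapper: every call site below now shares the wrapper's line as seed.
    return kmalloc()

def kvmalloc_fixed(site: str):
    # Models the fix: the real call site reaches the inner allocator (the
    # kernel instead calls __do_kmalloc_node with kvmalloc's own RET_IP).
    return pick_cache(site)

# Two distinct call sites: the buggy path collapses onto one cache copy,
# while the fixed path can land in different caches.
print(kvmalloc_buggy(), kvmalloc_buggy())                  # same cache every time
print(kvmalloc_fixed("site_a"), kvmalloc_fixed("site_b"))  # can differ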
With these pending SLAB updates for the Linux 6.15 merge window, the issue will be resolved, and the fix will presumably be back-ported to existing stable kernels to address the ineffective security hardening.
- https://www.phoronix.com/news/Linux-6.15-Likely-Features
- https://lore.kernel.org/lkml/2f7985a8-0460-42de-9af0-4f966b937695@suse.cz/
- https://github.com/google/security-research/blob/908d59b573960dc0b90adda6f16f7017aca08609/pocs/linux/kernelctf/CVE-2024-27397_mitigation/docs/exploit.md?plain=1#L259
- https://patchwork.kernel.org/project/linux-mm/patch/20250212081505.2025320-3-gongruiqi1@huawei.com/
As civilisations become more and more advanced, their power needs also increase. An advanced civilisation might need so much power that it encloses its host star in solar energy collecting satellites. These Dyson swarms will trap heat, so any planets within the swarm are likely to experience a temperature increase. A new paper explores this and concludes that a complete Dyson swarm outside the orbit of the Earth would raise our temperature by 140 K!
A Dyson swarm is purely hypothetical: a theorised megastructure consisting of numerous satellites or habitats orbiting a star to capture and harness its energy output. Unlike the solid shell of a Dyson sphere, a swarm represents less of an engineering challenge, allowing for incremental construction as energy needs increase. The concept, first popularised by physicist Freeman Dyson in 1960, represents one of the most ambitious yet potentially achievable feats of astroengineering, one that could eventually allow a civilisation to use a significant fraction of its host star's total energy output.
... The paper concludes that a Dyson sphere surrounding the sun would significantly impact Earth's climate. Small spheres positioned inside Earth's orbit prove impractical, either becoming too hot for their own efficiency or having too great an impact on solar energy arriving on our planet. While large spheres enable efficient energy conversion, they would raise Earth's temperature by 140 K, making Earth completely uninhabitable.
A compromise might involve creating a partial structure (the Dyson swarm) at 2.13 AU from the sun. This would harvest 4% of solar energy (15.6 yottawatts, or 15.6 million billion billion watts) while increasing Earth's temperature by less than 3 K, comparable to current global warming trends. It's still quite an engineering feat though, requiring 1.3×10²³ kg of silicon.
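The headline figures are easy to sanity-check with a back-of-envelope calculation. The sketch below is a linearized estimate assuming a solar luminosity of roughly 3.9×10²⁶ W and an effective radiating temperature for Earth of about 255 K; it is not the paper's detailed model.

```python
# Back-of-envelope check of the quoted figures (not the paper's full model).
L_SUN = 3.9e26        # solar luminosity, W (approximate)
T_EFF = 255.0         # Earth's effective radiating temperature, K

harvested = 0.04 * L_SUN
print(f"4% of solar output: {harvested:.2e} W")   # ~1.56e25 W = 15.6 yottawatts

# Radiative equilibrium gives T proportional to (absorbed flux)^(1/4), so a
# small fractional flux increase df/f warms Earth by roughly dT = T * (df/f) / 4.
for extra_flux_fraction in (0.01, 0.03, 0.05):
    dT = T_EFF * extra_flux_fraction / 4
    print(f"{extra_flux_fraction:.0%} extra flux -> ~{dT:.1f} K warming")
```

The first line reproduces the 15.6 yottawatt figure, and the quarter-power scaling shows that warming of under 3 K corresponds to only a few percent of extra flux reaching Earth, consistent with the paper's compromise design.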
[Source]: The Universe Today
[Journal Ref]: The photovoltaic Dyson sphere
Arthur T Knackerbracket has processed the following story:
A vulnerability analyst and prominent member of the infosec industry has blasted Microsoft for refusing to look at a bug report unless he submitted a video alongside a written explanation.
Senior principal vulnerability analyst Will Dormann said last week he contacted Microsoft Security Response Center (MSRC) with a clear description of the bug and supporting screenshots, only to be told that his report wouldn't be looked at without a video.
MSRC told Dormann: "As requested, please provide clear video POC (proof of concept) on how the said vulnerability is being exploited? We are unable to make any progress without that. It will be highly appreciated."
Frustrated with Microsoft's demand, which Dormann said would only show him typing commands that were already depicted in the screenshots, and hitting Enter in CMD, the analyst created a video laden with malicious compliance.
The video is 15 minutes long and at the four-second mark flashes a screenshot from Zoolander, in which the protagonist unveils the "Center for Kids Who Can't Read Good."
It also features a punchy techno backing track while wasting the reviewer's time with approximately 14 minutes of inactivity.
Dormann said via Mastodon: "I get that people doing grunt work have mostly fixed workflows that they go through with common next steps.
"But to request a video that now captures (beyond my already-submitted screenshots) the act of me typing, and the Windows response being painted on the screen adds what of value now?"
To top it all off, when trying to submit the video via Microsoft's portal, the upload failed due to a 403 error.
[...] We also asked Dormann for additional input. He said requests for video can be found on other platforms such as HackerOne and Bugcrowd but in his opinion, requiring one signals to researchers that the reviewer is merely following a process rather than understanding the report itself.
As the post and video suggest, he was unimpressed by MSRC's refusal to proceed with the vulnerability report just because a video wasn't submitted in tandem.
"If a researcher is going out of their way to be nice to vendors and writing up vulnerability reports to share with them, the least the vendor could do is at least pretend to be taking it seriously," said Dormann.
"I reported three related but different vulnerabilities to Microsoft recently. Two of them requested video evidence of exploitation (for things that don't even make sense to have a video of, thus my malicious compliance example that I posted), and the third was rejected as not a vulnerability with clear evidence that the MSRC handler didn't bother actually reading what I submitted. Researchers doing the 'right thing' deserve better."
Arthur T Knackerbracket has processed the following story:
A group of technology companies and lobbyists want the European Commission (EC) to take action to reduce the region's reliance on foreign-owned digital services and infrastructure.
In an open letter to EC President Ursula von der Leyen and Executive Vice-President for Tech Sovereignty Henna Virkkunen, the group of nearly 100 organizations proposed the creation of a sovereign infrastructure fund to invest in key technology and lessen dependence on US corporations.
The letter points to recent events, including the farcical Munich Security Conference, as a sign of "the stark geopolitical reality Europe is now facing," and says that building strategic autonomy in key sectors is now an urgent imperative for European countries.
Signatories include aerospace giant Airbus, France's Dassault Systèmes, European cloud operator OVHcloud, chip designer SiPearl, open source biz Nextcloud, and a host of others including organizations such as the European Startup Network.
OVHcloud said the group was calling "for a collective industrial policy strategy to strengthen Europe's competitiveness and strategic autonomy. We are convinced this is the premise of what we hope will be a larger movement of the entire ecosystem."
Proposals include the sovereign infrastructure fund, which would be able to support public investment, especially in capital-intensive sectors like semiconductors, with "significant additional commitment of funds allocated and/or underwritten" by the European Investment Bank (EIB) and national public funding bodies.
It also suggests there should be a formal requirement for the public sector to "buy European" and source their IT requirements from European-led and assembled solutions, while recognizing that these may involve complex supply chains with foreign components.
[...] This isn't the first time that concerns about US hegemony in technology have been raised. Recently, the DARE project launched to develop hardware and software based on the open RISC-V architecture, backed by EuroHPC JU funding, while fears have been aired about the dominance of American-owned cloud companies in the European market.
Such concerns have been heightened by recent actions, such as the suggestion that the US might cut off access to Starlink internet services in Ukraine as a political bargaining strategy. Starlink owner Elon Musk later denied that this would ever happen.
The letter notes that these issues have already been set out by the EuroStack initiative, made up of many of the companies that signed the letter to EC President von der Leyen. The Register asked the European Commission to comment.
On the other side of the pond, the Computer and Communications Industry Association (CCIA) recently published a report claiming that US companies face "substantial financial burdens" due to the European Union's digital regulations.
It says that US tech companies are losing "billions" through having to comply with regulations such as the Digital Markets Act (DMA), and having to obtain user consent for their data to be used for advertising purposes.
Arthur T Knackerbracket has processed the following story:
The Chinese Communist Party's (CCP's) national internet censor has announced that all AI-generated content will be required to carry labels that are explicitly seen or heard by its audience and embedded in metadata. The Cyberspace Administration of China (CAC) released the transcript of the media questions and answers (akin to an FAQ) on its Measures for the Identification of Artificial Intelligence Generated and Synthetic Content [machine translated]. We saw the first signs of this policy move last September when the CAC's draft plans emerged.
This regulation takes effect on September 1, 2025, and will compel all service providers (i.e., AI LLMs) to “add explicit labels to generated and synthesized content.” The directive includes all types of data: text, images, videos, audio, and even virtual scenes. Aside from that, it also orders app stores to verify whether the apps they host follow the regulations.
Users will still be able to ask for unlabeled AI-generated content for "social concerns and industrial needs." However, the generating app must reiterate the labeling requirement to the user and log the request to make the content easier to trace. Responsibility for adding the AI-generated label and metadata then falls on the shoulders of this end user or entity.
The CAC also outlaws the malicious removal, tampering, forgery, or concealment of these AI labels, including the provision of tools that will help carry out these acts. Although this obviously means that you’re prohibited from deleting the AI label and metadata on AI-generated content, it also prohibits the addition of this identifier for human-created data.
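As a rough illustration of the dual labeling the rules describe, a visible mark plus embedded metadata, here is a minimal sketch using the Pillow imaging library. The field names and values are hypothetical; the CAC regulation defines its own required identifier formats.

```python
# Illustrative only: one way a generator could stamp both an explicit label
# and machine-readable metadata on an image. Field names are hypothetical;
# the CAC regulation specifies its own required identifiers.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (512, 512), "gray")          # stand-in for generated content
draw = ImageDraw.Draw(img)
draw.text((10, 490), "AI-generated", fill="white")  # explicit, visible label

meta = PngInfo()                                    # embedded, machine-readable label
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")      # hypothetical provenance field

img.save("labeled.png", pnginfo=meta)
print(Image.open("labeled.png").text)               # {'ai_generated': 'true', ...}
```

The rule against tampering then amounts to prohibiting the removal or forgery of either layer: stripping the metadata, cropping out the visible mark, or adding such identifiers to human-created work.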
The CCP, through the CAC, aims to control the spread of disinformation and prevent internet users from being confused by AI-generated content via the application of this law. At the moment, we haven’t seen any prescribed punishments for violators, but there is always the threat of legal action from the Chinese government.
This isn’t the first law that attempts to control the development and use of AI technologies, and the EU enacted its Artificial Intelligence Act in 2024. Many may react negatively to this move by the CAC, especially as it’s known for administering the Great Firewall of China to limit and control the internet within China’s borders. Nevertheless, this move will help reduce misinformation from anyone and everyone, especially as AI LLMs become more advanced. By ensuring that artificially generated content is marked clearly, people could more easily determine if they’re looking at or listening to a real event or something conjured by a machine on some server farm.
https://www.theregister.com/2025/03/19/ubuntu_2510_rust/
Efforts are afoot to replace the GNU coreutils with Rust ones in future versions of Ubuntu - which also means changing the software license. Canonical plans to replace the current core utilities – from the GNU project and implemented in C – with the newer uutils suite, which is written in Rust. Rather than technical issues, most concerns raised in the discussion on Ubuntu Discourse are about licensing. As a product of the GNU project, the existing coreutils are licensed under the GPL – specifically, GPL 3. The Rust replacements are licensed under the much more permissive MIT license.
Academics accuse AI startups of co-opting peer review for publicity:
There's a controversy brewing over "AI-generated" studies submitted to this year's ICLR, a long-running academic conference focused on AI.
At least three AI labs — Sakana, Intology, and Autoscience — claim to have used AI to generate studies that were accepted to ICLR workshops. At conferences like ICLR, workshop organizers typically review studies for publication in the conference's workshop track.
Sakana informed ICLR leaders before it submitted its AI-generated papers and obtained the peer reviewers' consent. The other two labs — Intology and Autoscience — did not, an ICLR spokesperson confirmed to TechCrunch.
Several AI academics took to social media to criticize Intology and Autoscience's stunts as a co-opting of the scientific peer review process.
"All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor," wrote Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, in an X post. "It makes me lose respect for all those involved regardless of how impressive the system is. Please disclose this to the editors."
As the critics noted, peer review is a time-consuming, labor-intensive, and mostly volunteer ordeal. According to one recent Nature survey, 40% of academics spend two to four hours reviewing a single study. That work has been escalating. The number of papers submitted to the largest AI conference, NeurIPS, grew to 17,491 last year, up 41% from 12,345 in 2023.
Academia already had an AI-generated copy problem. One analysis found that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. But AI companies using peer review to effectively benchmark and advertise their tech is a relatively new occurrence.
"[Intology's] papers received unanimously positive reviews," Intology wrote in a post on X touting its ICLR results. In the same post, the company went on to claim that workshop reviewers praised one of its AI-generated study's "clever idea[s]."
Academics didn't look kindly on this.
Ashwinee Panda, a postdoctoral fellow at the University of Maryland, said in an X post that submitting AI-generated papers without giving workshop organizers the right to refuse them showed a "lack of respect for human reviewers' time."
"Sakana reached out asking whether we would be willing to participate in their experiment for the workshop I'm organizing at ICLR," Panda added, "and I (we) said no [...] I think submitting AI papers to a venue without contacting the [reviewers] is bad."
Not for nothing, many researchers are skeptical that AI-generated papers are worth the peer review effort.
Sakana itself admitted that its AI made "embarrassing" citation errors, and that only one out of the three AI-generated papers the company chose to submit would've met the bar for conference acceptance. Sakana withdrew its ICLR paper before it could be published in the interest of transparency and respect for ICLR convention, the company said.
Alexander Doria, the co-founder of AI startup Pleias, said that the raft of surreptitious synthetic ICLR submissions pointed to the need for a "regulated company/public agency" to perform "high-quality" AI-generated study evaluations for a price.
"Evals [should be] done by researchers fully compensated for their time," Doria said in a seriesof posts on X. "Academia is not there to outsource free [AI] evals."
Arthur T Knackerbracket has processed the following story:
Scientists at America's Los Alamos National Laboratory (LANL) in New Mexico say they have developed a Spacecraft Speedometer that satellites can use in orbit, ideally to avoid orbital collisions.
Working with the US Air Force Academy, the LANL [scientists] say they have come up with a novel device capable of determining the velocity of a satellite while it is orbiting Earth, and potentially other planets.
(The lab repeatedly uses the word velocity in its description of the equipment. Velocity is strictly speaking a vector quantity of magnitude and direction, so we'll assume the eggheads have been able to determine the speed component of a satellite's velocity vector using this gadget, at least.)
The Spacecraft Speedometer, we're told, makes use of twin laminated plasma spectrometers, with one facing forward along the space vehicle's trajectory and another identical unit facing in the opposite direction.
This design is based on the theory that more charged particles will impact the spectrometer that is facing forward than the rear-facing unit, allowing the velocity to be calculated.
"Like a car driving through a heavy rain, the satellite passes through the charged particles, ions and electrons, that comprise the Earth's upper atmosphere. In the case of the car, many raindrops will hit the car's front windshield while fewer raindrops will hit the rear windshield. In addition, the raindrops on the front hit the windshield harder," the research lab explains.
The principle is therefore that many atmospheric ions will hit the front-facing sensor, dubbed the ram measurement because ions ram into it. Fewer ions will be measured by the rear-facing sensor, called the wake measurement. The Spacecraft Speedometer uses the difference in both the number and impact energy of ions collected by the two sensors to provide an in-orbit velocity measurement.
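For intuition, a toy version of the ram/wake retrieval can be written down using the standard one-sided flux of a drifting Maxwellian plasma: the forward sensor sees the flux for drift +v, the rear sensor for drift -v, and the ratio of the two pins down v. This is a sketch under assumed plasma parameters, not LANL's actual instrument model.

```python
# Toy model of the ram/wake idea (not LANL's retrieval): for ions with thermal
# speed v_th, the one-sided particle flux onto a plate approached at speed v
# follows the standard drifting-Maxwellian result. Comparing the forward- and
# rear-facing fluxes then determines v.
import math

def one_sided_flux(n, v_th, v):
    # Particle flux per unit area onto a surface approached with speed v.
    s = v / v_th
    return n * v_th / (2 * math.sqrt(math.pi)) * (
        math.exp(-s * s) + math.sqrt(math.pi) * s * (1 + math.erf(s)))

def speed_from_ratio(ratio, v_th, lo=0.0, hi=20e3):
    # Invert the ram/wake flux ratio for v by bisection (ratio grows with v).
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        r = one_sided_flux(1.0, v_th, mid) / one_sided_flux(1.0, v_th, -mid)
        lo, hi = (mid, hi) if r < ratio else (lo, mid)
    return 0.5 * (lo + hi)

V_TH = 1.0e3   # assumed ion thermal speed, m/s (roughly O+ at ionospheric temps)
V_SAT = 7.6e3  # typical low-Earth-orbit speed, m/s

ram = one_sided_flux(1.0e11, V_TH, V_SAT)    # forward-facing (ram) sensor
wake = one_sided_flux(1.0e11, V_TH, -V_SAT)  # rear-facing (wake) sensor
print(f"recovered speed: {speed_from_ratio(ram / wake, V_TH):.0f} m/s")
```

Run as-is, the inversion recovers the assumed 7.6 km/s orbital speed from the flux ratio alone, which is the essence of the rain-on-the-windshield analogy.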
Although only now being disclosed, it seems that a Spacecraft Speedometer has already been deployed to the International Space Station, mounted on the Space Test Program-Houston 5 platform.
Fear of orbital collisions is one reason why the space-borne speedo was developed. The number of active satellites has grown exponentially in recent years to more than 10,000 in 2024, according to LANL.
Space traffic management and orbit sustainability have become critical issues, but a spacecraft's location and velocity can only be determined by measurements from the ground. The location and velocity data are used in models that precisely predict future orbits.
This latest device can deliver critical velocity data for operations when ground station tracking fails, such as during severe space weather events, according to LANL.
"These measurements are necessary for improving our ability to accurately predict satellite locations so that we can perform maneuvers to avoid other active satellites and debris," said Carlos Maldonado of LANL's Space Science and Applications group.
https://medicalxpress.com/news/2025-03-naturalness-seasonal-basis-modern-criticism.html
What is the best time to start the day in view of the variation in when the sun rises? This is the problem analyzed by Jorge Mira Pérez and José María Martín-Olalla, lecturers at the University of Santiago de Compostela (USC) and the University of Seville (US), in a study that has just been published in the journal Royal Society Open Science. In it, they analyze the physiological and social foundations of the practice of seasonal time change and review its impact on health.
The study takes as an example the cities of Bogotá and New York, which are located on the same meridian but at different latitudes, to point out that in winter the sunrise is delayed by an hour-and-a-half in the latter city. "This delays life in New York during the winter, but in spring the delay in sunrise has disappeared and activity can start earlier. Putting the clocks forward in spring facilitates this adaptation," says Mira.
The study includes several current and past examples of societies with delayed activity in winter and earlier activity in summer, in line with the synchronizing role of morning light for our bodies. "Modern societies have several synchronization mechanisms. For example, the use of a standard time in a large region, or the use of pre-set schedules. Time shifting is another synchronizing mechanism, which adapts human activity to the corresponding season," says Martín-Olalla. The authors suggest that the first weekend in April and the first weekend in October would be the most appropriate time for the clocks to change.
The study reviews the impact of the seasonal time change on human health, considering two types of effects: those associated with the change itself, and those associated with the period during which daylight-saving time is in effect. In the first case, the authors point out that published studies have not analyzed the problem epidemiologically and that the evidence suggests that the impact is very weak.
"A very comprehensive study in the United States reports a 5% increase in traffic accidents in the week following the clocks going forward in spring but overlooks the fact that from one year to the next, weekly traffic accidents fluctuate by 15%. Changing the clocks has an impact, but it is very weak compared to the other factors influencing the problem," Mira points out.
"Changing the clocks has worked for a hundred years without serious disruption. The problem is that in recent years it has been associated only with energy saving when, in fact, it is a natural adaptation mechanism," says Martín-Olalla.
In the second case, the authors point out that the current controversy stems from an erroneous interpretation of the seasonal time change. According to Martín-Olalla and Mira, changing the clocks is not a time zone jump, nor does it cause the population to live adjusted to the sun in another place, nor does it cause their rhythm of life to be misaligned with respect to the sun.
"In a way it is the other way round, changing the clocks aligns the start of activity with the sunrise," Mira points out. "In 1810, the Spanish National Assembly had already made this kind of seasonal adaptation and there were no time zones or anything like that. Social life is simply reorganized because the length of the day in summer makes it possible to do things in the morning earlier than in winter," says Martín-Olalla.
Mira and Martín-Olalla are highly critical of studies that report long-term effects of seasonal time change and associate it with increased risk of cancer, sleep loss, obesity, etc. They point out that these studies analyze data within the same time zone in the US or Russia, which says nothing about the seasonal time change itself.
More information: José María Martín-Olalla et al, Assessing the best hour to start the day: an appraisal of seasonal daylight saving time, Royal Society Open Science (2025). DOI: 10.1098/rsos.240727
North Korea's bitcoin reserve thought to be 3rd largest in world: report:
With authorities identifying North Korean hackers to be behind multiple recent cryptocurrency hackings, the totalitarian communist state is now thought to have a bigger bitcoin stash than any other nation in the world besides the United States and the United Kingdom.
Binance News, a news platform of the global cryptocurrency exchange Binance, recently reported that North Korea's allegedly state-run hacker syndicates are believed to have accumulated 13,562 BTC, valued at $1.14 billion. It cited Arkham Intelligence, a Dominican Republic-based company that provides data about blockchain transactions to help identify money laundering and other suspicious activity.
North Korea-affiliated hacking groups, including the Lazarus Group, were pinpointed as the culprits behind a string of cyber attacks in 2024 that stole $659 million in cryptocurrency, according to a joint statement by South Korea, the US and Japan made in January. The US Federal Bureau of Investigation last month released a public statement holding North Korea responsible for the theft, also last month, of approximately $1.5 billion worth of virtual assets, the biggest hacking incident so far.
Much of the stolen virtual assets were Ethereum coins, a substantial portion of which are thought to have been converted into bitcoins.
It was reported last week that the Lazarus Group converted at least $300 million of their stolen crypto into unrecoverable funds.
Lazarus Group and other hacking groups are alleged to be run by the North Korean government, and are thought to be an important source of income for it. North Korea is currently under multiple sanctions placed by the international community, as punitive actions for developing its nuclear weapons program.
A significant portion of the profits from North Korea's illegal activities are thought to be used to fund its ballistic missiles programs and nuclear tests.
If confirmed, North Korea's bitcoin reserve would be behind only the US' 198,109 BTC and the UK's 61,245 BTC, according to Binance's estimates, and would exceed Bhutan's 10,635 BTC and El Salvador's 6,117 BTC.
North Korean hackers cash out hundreds of millions from $1.5bn ByBit hack:
[...]
Experts say the infamous hacking team is working nearly 24 hours a day - potentially funnelling the money into the regime's military development.
"Every minute matters for the hackers who are trying to confuse the money trail and they are extremely sophisticated in what they're doing," says Dr Tom Robinson, co-founder of crypto investigators Elliptic.
Out of all the criminal actors involved in crypto currency, North Korea is the best at laundering crypto, Dr Robinson says.
"I imagine they have an entire room of people doing this using automated tools and years of experience. We can also see from their activity that they only take a few hours break each day, possibly working in shifts to get the crypto turned into cash."
Elliptic's analysis tallies with that of ByBit, which says that 20% of the funds have now "gone dark," meaning they are unlikely ever to be recovered.
Arthur T Knackerbracket has processed the following story:
German tech company AP Sensing just developed a technology that lets undersea cables detect tampering and sabotage through soundwaves. The company tested its new Distributed Fiber Optic Sensing (DFOS) last year when it sent a diver to make contact with an underwater cable it was monitoring. “He stops and just touches the cable lightly, you clearly see the signal,” Daniel Gerwig, global sales manager at AP Sensing told BBC. “The acoustic energy which travels through the fiber is basically disturbing our signal. We can measure this disturbance.”
The technology works like sonar, where it senses vibrations traveling through the water by monitoring the light traveling within the fiber optic cable. These tiny movements, as well as temperature changes and physical disturbance, affect the minute number of photons being reflected back along a fiber optic cable. By measuring these changes, the team can determine if something makes contact with the cable or if a part of it is unearthed.
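In spirit, the detection side reduces to watching per-position backscatter statistics for outliers. The following is a toy sketch with simulated numbers; the bin size, threshold, and signal model are all invented for illustration and are not AP Sensing's algorithm.

```python
# Toy illustration of the detection principle (not AP Sensing's algorithm):
# treat the fiber as position bins, learn each bin's quiet-time backscatter
# statistics, then flag bins whose new readings deviate strongly.
import random
import statistics

random.seed(42)
NUM_BINS = 200                 # e.g. one bin per stretch of cable

# Calibration: per-bin mean/stddev of backscatter amplitude during quiet time.
baseline = [[random.gauss(1.0, 0.05) for _ in range(100)] for _ in range(NUM_BINS)]
mu = [statistics.mean(b) for b in baseline]
sigma = [statistics.stdev(b) for b in baseline]

# New sweep: a diver touches the cable near bin 120, perturbing the signal.
sweep = [random.gauss(1.0, 0.05) for _ in range(NUM_BINS)]
sweep[120] += 0.4

for i, x in enumerate(sweep):
    z = (x - mu[i]) / sigma[i]
    if abs(z) > 5:             # threshold chosen for the toy example
        print(f"disturbance near bin {i} (z = {z:.1f})")
```

Because the backscatter is resolved by position along the fiber, a flagged bin localizes the disturbance as well as detecting it, which is what lets a single cable double as a distributed sensor.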
AP Sensing's software is also claimed to be able to pick up vehicles moving and events happening in the vicinity of the cables. This makes it possible for fiber optic cables to hear a dropping anchor, detect ships passing above, and even possibly determine a vessel's approximate class.
One more advantage of this technology is that it can be retrofitted to existing lines that have free channels or at least one unused cable. That means undersea cable operators do not have to spend millions on laying new cables with built-in sonar sensors. The only additional investment needed is to install signal-listening devices every 100 km (approx 62 miles).
Many companies are starting to invest in technologies like this in the wake of several high-profile cable-cutting incidents in the Baltic Sea and around Taiwan in late 2024 and early 2025. As the majority of global communications rely on undersea cables, purposely disrupting this crucial infrastructure could be considered a hostile act.
However, these sabotage detectors may only help catch an offending vessel after it has already damaged or severed a cable. Still, some suggest putting dedicated sensors around crucial infrastructure is a good idea, giving Coast Guard and Navy ships some time to respond before damage is inflicted. This would make it easier to safeguard these key undersea lines of communication and would work well alongside NATO’s deployment of sea drones.