

posted by hubie on Monday May 26, @09:07PM   Printer-friendly
from the fire-up-that-amateur-radio-license-for-those-HF-QSOs dept.

The Sun is Producing Strong Solar Flares, Creating Blackouts. What to Know

The sun is producing strong solar flares, creating blackouts. What to know:

A recent period of strong solar flares is expected to gradually decline over the coming weeks and months, scientists say, along with the potential for brief communication blackouts as the sun's solar cycle begins to fade.

The most powerful eruption of 2025 so far was observed last week by NASA's Solar Dynamics Observatory and the U.S. National Oceanic and Atmospheric Administration (NOAA).

The flare, classified as an X2.7, caused a 10-minute period of "degraded communications" for high-frequency radio systems in the Middle East, according to NOAA's Space Weather Prediction Center.

"We are at solar maximum, so there can be periods of more activity," a spokesperson for the Space Weather Prediction Center told Global News in an email.

The spokesperson added that the active region last week's flare emanated from, however, "has weakened magnetically, and even though it remains capable of producing a notable event, it seems less likely at this time."

[...] The 10-minute blackout in the Middle East occurred because that part of the Earth was facing the sun at the time.

However, because the active region was still somewhat off to the side, a related coronal mass ejection — which produces plasma and magnetic energy from the sun's corona — did not impact Earth.

Taylor Cameron, a space weather forecaster at the Canadian Hazards Information Service, told Global News it's difficult to predict specifically when a solar flare can erupt and which part of Earth it can affect.

The sun is currently at the peak of its 11-year solar cycle, known as solar maximum.

Although activity is generally declining, the Space Weather Prediction Center spokesperson told Global News that "sunspot activity and solar event expectations remain elevated this year and perhaps even into 2026."

[...] Cameron said solar flares only impact high-frequency radio communications, which can include ham radios, shortwave broadcasting, aviation air-to-ground communications and over-the-horizon radar systems. Other communication networks, such as the internet, 5G and cellular service, aren't affected.

The stronger a flare is, Cameron added, the more severe and longer a blackout or disruption can be.

To date, the most powerful flare of the current solar cycle was an X9.0 observed last October. That flare was strong enough to produce faint northern lights across parts of North America, a display that can accompany strong solar storms.

Another solar storm last spring produced stronger northern lights over much of Canada.

The Space Weather Prediction Center has reported brief radio blackouts due to multiple X-class solar flares recorded over the past several months.

See also:
    • R3 flare activity from Region 4087
    • Two X Class Solar Flares - The Sun Awakens
    • M Class Solar Flare, Filament Eruption, US Alert

Are There More Solar Flares Than Expected During This Solar Cycle?

Solar Cycle 25 is approaching its peak, but how does it measure up to the previous Solar Cycle 24?:

Like the number of sunspots, the occurrence of solar flares follows the approximately 11-year solar cycle.

But as the current Solar Cycle 25 approaches its peak, how is the number of solar flares stacking up against the previous, smaller Solar Cycle 24?

Due to a change in flare calibration levels in 2020, you'll find two answers to this question online — but only one is correct.

The sun follows an 11-year solar cycle of increasing and decreasing activity. The solar cycle is typically measured by the number of sunspots visible on the sun, with records dating back over 270 years. Most solar flares originate from sunspots, so with more sunspots — you'll get more flares.

Solar flares are categorized into flare classes by the magnitude of soft X-rays observed in a narrow wavelength range of 0.1-0.8 nm. The flare classes are C-class, M-class and X-class, each 10 times stronger than the previous. (Flare levels are then sub-divided by a number, e.g. M2, X1, etc.) Flares of these categories (except the very largest of the X-class events) tend to follow the solar cycle closely.
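The class scheme above maps directly onto peak soft X-ray flux. As an illustrative sketch, assuming the standard GOES thresholds (1e-4 W/m² for X-class, 1e-5 for M, and so on down each factor of 10), which are not stated in the article itself:

```python
# Classify a solar flare from its peak soft X-ray flux (0.1-0.8 nm band),
# in W/m^2. Each class threshold is 10x the previous, and the sub-level
# is the flux expressed as a multiple of its class threshold
# (e.g. 2.7e-4 W/m^2 -> X2.7).
def flare_class(flux_wm2: float) -> str:
    thresholds = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]
    for letter, base in thresholds:
        if flux_wm2 >= base:
            return f"{letter}{flux_wm2 / base:.1f}"
    return "sub-A"

print(flare_class(2.7e-4))  # X2.7
print(flare_class(7e-6))    # C7.0
```

Note that the X class has no upper bound, which is why historic monster flares carry labels like X9.0 or higher.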

In terms of sunspot numbers, Solar Cycle 25 (our current cycle) has exceeded the sunspot levels of Solar Cycle 24 (which peaked in 2014). With higher sunspot numbers, we'd also expect higher flare counts. This is the case, but the difference is far from what some would have you believe.

How do solar flares compare between Solar Cycles 24 and 25? This seems like a simple enough question, but is muddied by a recalibration of solar flare levels in 2020 from the National Oceanic and Atmospheric Administration (NOAA).

Solar flare X-ray levels have been measured since 1974. X-rays do not penetrate Earth's atmosphere, and thus can only be measured by detectors on satellites in Earth orbit. For 50 years, these solar flare detectors have been placed on NOAA's GOES satellites. As technology improves, and old technology decays, newer detectors are launched on newer GOES satellites, to keep the continuous observation of solar flares going. GOES-18 (the 18th satellite in the sequence) is the current satellite responsible for primary X-ray observations, having launched in 2022.

Because flare levels have been measured (and their classes defined) by detectors across multiple satellites/instruments, corrections are sometimes needed to account for slight differences in calibration from one detector to the next.

From 2010-2020, flare levels were defined by measurements from GOES-14 and GOES-15. This period covered the solar maximum of Solar Cycle 24, up to the end of that cycle. However, upon the launch of these two satellites, a calibration discrepancy was discovered between GOES-14/15 and all prior GOES X-ray detectors. To fix this, science data from 1974-2010 (from the GOES-1 to GOES-13 satellites) were all readjusted to match the new calibration, which was believed to be correct at the time. A result of this was that the threshold for each flare class increased by 42%, meaning an individual solar flare in 2010 needed to be 42% larger than a flare from 2009 to be assigned the same flare class.

However, and here comes the twist: following the switch to GOES-16 data on a new detector, it was discovered that the original calibration (from 1974-2010) had been correct all along, and the 2010-2020 calibration was the incorrect one. This meant that in 2020, all prior data (from 1974-2020) were again recalibrated to their previous correct levels, lowering the flare class thresholds back down. With a lower flare threshold, strong C-class flares (C7+) became M-class events, and strong M-class flares (M7+) became X-class flares. An X-class solar flare was therefore far easier to achieve in 2021 than it was in 2019. The 2020 recalibration therefore increased the number of higher-class flares in Solar Cycle 24 compared with what was initially reported.
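The arithmetic behind that recalibration can be sketched in a few lines. The factor of 0.7 below is an assumption (based on NOAA's published scaling factor for older GOES data, not stated in this article); it reproduces both the "42% larger" figure and the C7-to-M and M7-to-X promotions described above:

```python
# Pre-2020 operational GOES fluxes carried a scaling factor of ~0.7 (assumed
# here). Removing it raises every measured flux by 1/0.7 ~= 1.43, which is
# equivalent to lowering each flare class threshold by ~42-43%.
SCALING = 0.7  # assumed value of the old operational scaling factor

def recalibrated(old_flux_wm2):
    """True flux implied by a pre-2020 operational measurement (W/m^2)."""
    return old_flux_wm2 / SCALING

# A strong C7 flare (7e-6 W/m^2) reaches the M-class threshold (1e-5 W/m^2),
# and a strong M7 flare (7e-5) reaches the X-class threshold (1e-4):
print(f"{recalibrated(7e-6):.2e}")  # 1.00e-05, i.e. an M1.0 flare
print(f"{recalibrated(7e-5):.2e}")  # 1.00e-04, i.e. an X1.0 flare
print(f"{1 / SCALING - 1:.1%}")     # 42.9% threshold shift
```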

Following the 2020 recalibration of solar flare levels, NOAA re-released their historic scientific flare datasets with the correct levels. However, the archived operations data, which lists solar flare levels as they were initially reported at the time, were not recalibrated. A consequence of this is that flare lists compiled and analyzed by third parties can use either the recalibrated science data or the un-recalibrated operations data when comparing solar flare levels between solar cycles. The former yields correct results, while the latter compares current flare levels from Cycle 25 with severely underestimated flare levels from previous cycles, producing scientifically incorrect comparisons. Let's compare some data!

[...] This graph shows the correct comparison of solar flares between Cycles 24 and 25. As you can see, although the number of Cycle 25 flares is still ahead of Cycle 24 at each flare level, the discrepancy is far less than that shown in the previous graph. The operations data undercounts the number of Cycle 24 flares by nearly half, a significant difference. In reality, the operations data captures only around half of Cycle 24's true X-class count, and Cycle 25 had actually produced fewer X-class flares than Cycle 24 until the recent solar activity from famous active regions AR 13663 and AR 13664. This graph also shows that although May 2024 saw a lot of X-class activity from these active regions, this level of activity is not unprecedented, with Solar Cycle 24 experiencing a similar leap in flares towards the end of 2015.

So remember, if you see the comparison of Solar Cycle flare levels online, be sure to check if they're using the historic operations data (incorrect), or recalibrated science data (correct).

See also:
    • Solar Cycle 25 - NASA Science
    • Solar cycle - Wikipedia


Original Submission #1 | Original Submission #2

posted by hubie on Monday May 26, @04:21PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The call came into the help desk at a large US retailer. An employee had been locked out of their corporate accounts.

But the caller wasn't actually a company employee. He was a Scattered Spider criminal trying to break into the retailer's systems - and he was really good, according to Jon DiMaggio, a former NSA analyst who now works as a chief security strategist at Analyst1.

Scattered Spider is a cyber gang linked to SIM swapping, fake IT calls, and ransomware crews like ALPHV. They've breached big names like MGM and Caesars, and despite arrests, keep evolving. They're tracked by Mandiant as UNC3944, also known as Octo Tempest.

DiMaggio listened in on this call, which was one of the group's recent attempts to infiltrate American retail organizations after hitting multiple UK-based shops. He won't name the company, other than to say it's a "big US retail organization." This attempt did not end with a successful ransomware infection or stolen data.

"But I got to listen to the phone calls, and those guys are good," DiMaggio told The Register. "It sounded legit, and they had information to make them sound like real employees."

Scattered Spider gave the help desk the employee's ID and email address. DiMaggio said he suspected the caller first social-engineered the employee to obtain this data, "but that is an assumption."

"The caller had all of their information: employee ID numbers, when they started working there, where they worked and resided," DiMaggio said. "They were calling from a number that was in the right demographic, they were well-spoken in English, they looked and felt real. They knew a lot about the company, so it's very difficult to flag these things. When these guys do it, they're good at what they do."

Luckily, the target was a big company with a big security budget, and it employs several former government and law enforcement infosec officials, including criminal-behavior experts, on its team. But not every organization has that kind of staffing or the resources to ward off attacks in which would-be intruders probe every access point.

"They are resourceful, they're smart, they're fast," Mandiant CTO Charles Carmakal told The Register.

"One of the challenges that defenders have is: it's not the shortage of network alerts," he added. "You know when Scattered Spider is targeting a company because people are calling the help desk and trying to reset passwords. They are running tools across an enterprise that will fire off on antivirus signatures and EDR alerts, tons and tons and tons of alerts. They operate at a speed that can be hard to defend against."

In this case, sometimes the best option — albeit a painful one — is for the organization to break its own IT systems before the criminals do.

This appears to have been the case with British retailer Co-op, which pulled its systems offline before Scattered Spider could encrypt its files and move throughout its networks.


Original Submission

posted by janrinok on Monday May 26, @11:36AM   Printer-friendly

Agent mode arrives, for better or worse:

Microsoft's GitHub Copilot can now act as a coding agent, capable of implementing tasks or addressing posted issues within the code hosting site.

What distinguishes a coding agent from an AI assistant is that it can iterate over its own output, correcting errors where possible, and can infer subtasks that were never explicitly specified in order to complete a prompted task.

But wait, further clarification is required. Having evidently inherited Microsoft's penchant for confusing names, the GitHub Copilot coding agent is not the same thing as the GitHub Copilot agent mode, which debuted in February.

Agent mode refers to synchronous (real-time) collaboration. You set a goal and the AI helps you get there. The coding agent is for asynchronous work – you delegate tasks, the coding agent then sets off on its own to do them while you do other things.

"Embedded directly into GitHub, the agent starts its work when you assign a GitHub issue to Copilot," said Thomas Dohmke, GitHub CEO, in a blog post provided to The Register ahead of the feature launch, to coincide with this year's Microsoft Build conference.

"The agent spins up a secure and fully customizable development environment powered by GitHub Actions. As the agent works, it pushes commits to a draft pull request, and you can track it every step of the way through the agent session logs."

Basically, once given a command, the agent uses GitHub Actions to boot a virtual machine. It then clones the relevant repository, sets up the development environment, scours the codebase, and pushes changes to a draft pull request. And this process can be traced in session log records.

The feature is available to Copilot Enterprise and Copilot Pro+ users. Dohmke insists that agents do not weaken organizational security posture because existing policies still apply and agent-authored pull requests still require human approval before they're merged.

By default, the agent can only push code to branches it has created. As a further backstop, the developer who asked the agent to open a pull request is not allowed to approve it. The agent's internet access is limited to predefined trusted destinations and GitHub Actions workflows require approval before they will run.

Because the agent operates within GitHub, it can be invoked to automate various development-related tasks via github.com, in GitHub Mobile, or through the GitHub CLI.

But the agent can also be configured to work with MCP (model context protocol) servers in order to connect to external resources. And it can respond to input beyond text, thanks to vision capabilities in the underlying AI models. So it can interpret screenshots of desired design patterns, for example.

"With its autonomous coding agent, GitHub is looking to shift Copilot from an in-editor assistant to a genuine collaborator in the development process," said Kate Holterhoff, senior analyst at RedMonk, in a statement provided by GitHub. "This evolution aims to enable teams to delegate implementation tasks and thereby achieve a more efficient allocation of developer resources across the software lifecycle."

GitHub claims it has used the Copilot code agent in its own operations to handle maintenance tasks, freeing its billing team to pursue features that add value. The biz also says the Copilot agent reduced the amount of time required to get engineers up to speed with its AI models.

GitHub found various people to say nice things about the Copilot agent. We'll leave it at that.


Original Submission

posted by janrinok on Monday May 26, @06:48AM   Printer-friendly

Positive proof-of-concept experiments may lead to the world's first treatment for celiac disease:

An investigational treatment for celiac disease effectively controls the condition—at least in an animal model—in a first-of-its-kind therapeutic for a condition that affects approximately 70 million people worldwide.

Currently, there is no treatment for celiac disease, which is caused by dietary exposure to gluten, a protein found in wheat, barley and rye. These grains can trigger severe intestinal symptoms, including inflammation and bloating.

Indeed, celiac disease is the bane of bread and pasta lovers around the world, and even for patients who fastidiously maintain a gluten-free eating plan, the disease can still lead to social isolation and poor nutrition, gastroenterologists say. It is a serious autoimmune disorder that, when left unaddressed, can cause malnutrition, bone loss, anemia, and elevated cancer risk, primarily intestinal lymphoma.

Now, an international team of scientists led by researchers in Switzerland hope to change the fate of celiac patients for the better. A series of innovative experiments has produced "a cell soothing" technique that targets regulatory T cells, the immune system components commonly known as Tregs.

The cell-based technique borrows from a form of cancer therapy and underlies a unique discovery that may eventually lead to a new treatment strategy, data in the study suggests.

"Celiac disease is a chronic inflammatory disorder of the small intestine with a global prevalence of about 1%," writes Dr. Raphaël Porret, lead author of the research published in Science Translational Medicine.

"The condition is caused by a maladapted immune response to cereal gluten proteins, which causes tissue damage in the gut and the formation of autoantibodies to the enzyme transglutaminase," continued Porret, a researcher in the department of Immunology and Allergy at the University of Lausanne.

Working with colleagues from the University of California, San Francisco, as well as at the Norwegian Celiac Disease Research Center at the University of Oslo, Porret and colleagues have advanced a novel concept. They theorize that a form of cell therapy, based on a breakthrough form of cancer treatment, might also work against celiac disease.

In an animal model, Porret and his global team of researchers have tested the equivalent of CAR T cell therapy against celiac disease. The team acknowledged that the "Treg contribution to the natural history of celiac disease is still controversial," but the researchers also demonstrated that at least in their animal model of human celiac disease, the treatment worked.

CAR T cell therapy is a type of cancer immunotherapy in which a patient's T cells are genetically modified in the laboratory to recognize and kill cancer cells. The cells are then infused back into the patient to provide a round-the-clock form of cancer treatment. In the case of celiac disease, the T cells are instead modified to suppress the activity of the T cells that become hyperactive in the presence of gluten.

To make this work, the researchers had to know every aspect of the immune response against gluten. "Celiac disease, a gluten-sensitive enteropathy, demonstrates a strong human leukocyte antigen association, with more than 90% of patients carrying the HLA-DQ2.5 allotype," Porret wrote, describing the human leukocyte antigen profile of most patients with celiac disease.

As a novel treatment against the condition, the team engineered effector T cells and regulatory T cells and successfully tested them in their animal model. Scientists infused these cells together into mice and evaluated the regulatory T cells' ability to quiet the effector T cells' response to gluten. They observed that when the effector cells were infused without the engineered Tregs, oral exposure to gluten caused them to flock to the intestines.

However, the engineered regulatory T cells prevented this gut migration and suppressed the effector T cells' proliferation in response to gluten. Although this is a first step, the promising early results indicate that cell therapy approaches could one day lead to a long-sought treatment for this debilitating intestinal disorder.

"Our study paves the way for a better understanding of key antigen-activating steps after dietary antigen [gluten] uptake," Porret concluded. "Although further work is needed to assess Treg efficacy in the setting of an active disease, our study provides proof-of-concept evidence that engineered Tregs hold therapeutic potential for restoring gluten tolerance in patients with celiac disease."

Journal Reference: Raphaël Porret et al, T cell receptor precision editing of regulatory T cells for celiac disease, Science Translational Medicine (2025). DOI: 10.1126/scitranslmed.adr8941


Original Submission

posted by mrpg on Monday May 26, @02:00AM   Printer-friendly
from the 50-is-more-than-1.21 dept.

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

AI's integration into our lives is the most significant shift in online life in more than a decade. Hundreds of millions of people now regularly turn to chatbots for help with homework, research, coding, or to create images and videos. But what's powering all of that?

[...] Given the direction AI is headed—more personalized, able to reason and solve complex problems on our behalf, and everywhere we look—it's likely that our AI footprint today is the smallest it will ever be. According to new projections published by Lawrence Berkeley National Laboratory in December, by 2028 more than half of the electricity going to data centers will be used for AI. At that point, AI alone could consume as much electricity annually as 22% of all US households.
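The 22% comparison can be turned into an absolute figure with two assumed inputs that are not in the article (roughly 130 million US households, at a typical ~10,500 kWh per household per year):

```python
# Rough scale of the 2028 projection above, under assumed figures:
# ~130 million US households, ~10,500 kWh/year each.
households = 130e6
kwh_per_household = 10_500
ai_share = 0.22

ai_twh_per_year = households * kwh_per_household * ai_share / 1e9  # kWh -> TWh
print(f"{ai_twh_per_year:.0f} TWh/year")  # ~300 TWh/year
```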

[...] Racks of servers hum along for months, ingesting training data, crunching numbers, and performing computations. This is a time-consuming and expensive process—it's estimated that training OpenAI's GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days. It's only after this training, when consumers or customers "inference" the AI models to get answers or generate outputs, that model makers hope to recoup their massive costs and eventually turn a profit.
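As a quick consistency check on those figures (an illustration, not from the article): treating 50 GWh as three days of San Francisco's electricity use implies the citywide average load below.

```python
# 50 GWh consumed over three days corresponds to an average continuous
# power draw of roughly 694 MW, a plausible citywide load.
energy_gwh = 50
days = 3
avg_power_mw = energy_gwh * 1000 / (days * 24)  # GWh -> MWh, divided by hours
print(f"{avg_power_mw:.0f} MW")  # 694 MW
```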

"For any company to make money out of a model—that only happens on inference," says Esha Choukse, a researcher at Microsoft Azure who has studied how to make AI inference more efficient.

As conversations with experts and AI companies made clear, inference, not training, represents an increasing majority of AI's energy demands and will continue to do so in the near future. It's now estimated that 80–90% of computing power for AI is used for inference.


Original Submission

posted by mrpg on Sunday May 25, @09:11PM   Printer-friendly
from the where-in-the-space-is-planet-nine dept.

Evidence for 'Planet Nine' lurking on the fringes of the Solar System is building. So why can't astronomers spot it? - ABC News:

A huge unknown lurks in the far reaches of our Solar System — something massive enough to pull distant space rocks into extraordinarily long, thin loops around the Sun.

At least, this is what US astronomer Michael Brown believes.

In 2016, he and a colleague at the California Institute of Technology (Caltech) proposed something almost unfathomable: a huge planet, up to 10 times heftier than Earth, way out on the edge of our Solar System.

[...] Those that are convinced Planet Nine is out there are waiting for the new Vera Rubin Observatory to come online in Chile early next year.

The telescope has an 8.4-metre mirror and is fitted with the largest camera ever built for astronomy.

"It's going to be doing something called the Legacy Survey of Space and Time, which is a massive survey — taking images of the sky every single night," Swinburne University of Technology astrophysicist Sara Webb says.

[...] "If Vera Rubin doesn't find it by reflected sunlight, the next best thing is to find it not as reflected sunlight, but by using radio telescopes," he says.

"They're not designed to look at little planets; they're designed to look at the whole sky at once. It'll take a while for the telescopes to be able to see that this planet has moved from one place to the other, so it'll be a couple of years of those surveys before we know it's there."


Original Submission

posted by janrinok on Sunday May 25, @04:25PM   Printer-friendly
from the and-did-those-feet-in-ancient-time dept.

The Roman massacre that never happened according to a new study of an iconic archaeological site:

A new study by archaeologists at Bournemouth University (BU) has revealed that bodies recovered from a 'war-cemetery' previously attributed to the Roman Conquest of Britain at Maiden Castle Iron Age hillfort in Dorset, did not die in a single dramatic event.

A re-analysis of the burials, including a new programme of radiocarbon dating, has revealed that, rather than dying in a single, catastrophic event, individuals fell in periods of lethal violence spanning multiple generations, spread across the late first century BC to the early first century AD. This is suggestive of episodic periods of bloodshed, possibly the result of localised turmoil, executions or dynastic infighting during the decades leading up to the Roman Conquest of Britain.

BU's Dr Martin Smith, Associate Professor in Forensic and Biological Anthropology, who analysed the bodies said: "The find of dozens of human skeletons displaying lethal weapon injuries was never in doubt, however, by undertaking a systematic programme of radiocarbon dating we have been able to establish that these individuals died over a period of decades, rather than a single terrible event".

The 'war-cemetery' at Maiden Castle Iron Age hillfort in Dorset is one of Britain's most famous archaeological discoveries. Many of the skeletons unearthed there in 1936 had clear evidence of trauma to the head and upper body. Sir Mortimer Wheeler, the dig director at the time, suggested these were "the marks of battle", caused during a furious but ultimately futile defence of the hillfort against an all-conquering Roman legion. Wheeler's colourful account of an attack on the native hillfort and the massacre of its defenders by invading Romans was accepted as fact, becoming an iconic event in popular narratives of Britain's 'Island Story'.

Principal Academic in Prehistoric and Roman Archaeology at BU, and the study's Dig Director, Dr Miles Russell said: "Since the 1930s, the story of Britons fighting Romans at one of the largest hillforts in the country has become a fixture in historical literature. With the Second World War fast approaching, no one was really prepared to question the results. The tale of innocent men and women of the local Durotriges tribe being slaughtered by Rome is powerful and poignant. It features in countless articles, books and TV documentaries. It has become a defining moment in British history, marking the sudden and violent end of the Iron Age."

Dr Russell added: "The trouble is it doesn't appear to have actually happened. Unfortunately, the archaeological evidence now points to it being untrue. This was a case of Britons killing Britons, the dead being buried in a long-abandoned fortification. The Roman army committed many atrocities, but this does not appear to be one of them."

[...] The study has also raised further questions as to what may still lie undiscovered at Maiden Castle. Paul Cheetham commented that "Whilst Wheeler's excavation was excellent in itself, he was only able to investigate a fraction of the site. It is likely that a larger number of burials still remains undiscovered around the immense ramparts."

Journal Reference: https://doi.org/10.1111/ojoa.12324 [open access]


Original Submission

posted by janrinok on Sunday May 25, @11:43AM   Printer-friendly

https://archive.is/lhQuY

In November of 2021, Vladimir Dinets was driving his daughter to school when he first noticed a hawk using a pedestrian crosswalk.

The bird—a young Cooper's hawk, to be exact—wasn't using the crosswalk, in the sense of treading on the painted white stripes to reach the other side of the road in West Orange, New Jersey. But it was using the crosswalk—more specifically, the pedestrian-crossing signal that people activate to keep traffic out of said crosswalk—to ambush prey.

The crossing signal—a loud, rhythmic click audible from at least half a block away—was more of a pre-attack cue, or so the hawk had realized, Dinets, a zoologist now at the University of Tennessee at Knoxville, told me. On weekday mornings, when pedestrians would activate the signal during rush hour, roughly 10 cars would usually be backed up down a side street. This jam turned out to be the perfect cover for a stealth attack: Once the cars had assembled, the bird would swoop down from its perch in a nearby tree, fly low to the ground along the line of vehicles, then veer abruptly into a residential yard, where a small flock of sparrows, doves, and starlings would often gather to eat crumbs—blissfully unaware of their impending doom.

The hawk had masterminded a strategy, Dinets told me: To pull off the attacks, the bird had to create a mental map of the neighborhood—and, maybe even more important, understand that the rhythmic ticktock of the crossing signal would prompt a pileup of cars long enough to facilitate its assaults. The hawk, in other words, appears to have learned to interpret a traffic signal and take advantage of it, in its quest to hunt. Which is, with all due respect, more impressive than how most humans use a pedestrian crosswalk.

Cooper's hawks are known for their speedy sneak attacks in the wild, Janet Ng, a senior wildlife biologist with Environment and Climate Change Canada, told me. Zipping alongside bushes and branches for cover, they'll conceal themselves from prey until the very last moment of a planned ambush. "They're really fantastic hunters that way," Ng said. Those skills apparently translate fairly easily into urban environments, where Cooper's hawks flit amid trees and concrete landscapes, stalking city pigeons and doves.

[...] But maybe the most endearing part of this hawk's tale is the idea that it took advantage of a crosswalk signal at all—an environmental cue that, under most circumstances, is totally useless to birds and perhaps a nuisance. To see any animal blur the line between what we consider the human and non-human spheres is eerie, but also humbling: Most other creatures, Plotnik said, are simply more flexible than we'd ever think.


Original Submission

posted by mrpg on Sunday May 25, @06:55AM   Printer-friendly
from the slimming-down-for-real-this-time dept.

A Caltech press release details research on the evolution of Jupiter.

From the release:

Understanding Jupiter's early evolution helps illuminate the broader story of how our solar system developed its distinct structure. Jupiter's gravity, often called the "architect" of our solar system, played a critical role in shaping the orbital paths of other planets and sculpting the disk of gas and dust from which they formed.

In a new study published in the journal Nature Astronomy, Konstantin Batygin (PhD '12), professor of planetary science at Caltech, and Fred C. Adams, professor of physics and astronomy at the University of Michigan, provide a detailed look into Jupiter's primordial state. Their calculations reveal that roughly 3.8 million years after the solar system's first solids formed—a key moment when the disk of material around the Sun, known as the protoplanetary nebula, was dissipating—Jupiter was significantly larger and had an even more powerful magnetic field.

"Our ultimate goal is to understand where we come from, and pinning down the early phases of planet formation is essential to solving the puzzle," Batygin says. "This brings us closer to understanding how not only Jupiter but the entire solar system took shape."

Batygin and Adams approached this question by studying Jupiter's tiny moons Amalthea and Thebe, which orbit even closer to Jupiter than Io, the innermost of the planet's four large Galilean moons. Because Amalthea and Thebe have slightly tilted orbits, Batygin and Adams analyzed these small orbital discrepancies to calculate Jupiter's original size: approximately twice its current radius, with a predicted volume that is the equivalent of over 2,000 Earths. The researchers also determined that Jupiter's magnetic field at that time was approximately 50 times stronger than it is today.
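As a rough illustration only (not the authors' actual calculation, which rests on the orbital dynamics of Amalthea and Thebe), the volume implication of doubling a planet's radius follows from simple cube scaling. The sketch below assumes the commonly quoted reference value of about 1,321 Earth volumes for present-day Jupiter, which is not a figure taken from the paper:

```python
# Rough sketch: planetary volume scales with the cube of the radius ratio.
# Assumes ~1,321 Earth volumes for present-day Jupiter (a standard
# reference value, not from the paper itself).

JUPITER_VOLUME_EARTHS = 1321  # present-day Jupiter, in Earth volumes

def scaled_volume(radius_ratio: float, base_volume: float) -> float:
    """Return the volume after scaling the radius by radius_ratio."""
    return base_volume * radius_ratio ** 3

# The study's result: primordial Jupiter at roughly twice its current radius.
early_volume = scaled_volume(2.0, JUPITER_VOLUME_EARTHS)
print(f"Early Jupiter: ~{early_volume:,.0f} Earth volumes")
```

Doubling the radius multiplies the volume by eight, which is comfortably consistent with the "over 2,000 Earths" figure quoted in the release.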

[...] Importantly, these insights were achieved through independent constraints that bypass traditional uncertainties in planetary formation models—which often rely on assumptions about gas opacity, accretion rate, or the mass of the heavy element core. Instead, the team focused on the orbital dynamics of Jupiter's moons and the conservation of the planet's angular momentum—quantities that are directly measurable. Their analysis establishes a clear snapshot of Jupiter at the moment the surrounding solar nebula evaporated, a pivotal transition point when the building materials for planet formation disappeared and the primordial architecture of the solar system was locked in.

Cool research with a novel methodology.

Referenced paper (Abstract)
DOI: https://doi.org/10.1038/s41550-025-02512-y


Original Submission

posted by hubie on Sunday May 25, @02:09AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In an email posted on Reddit from "The VPN Secure Team" sent to lifetime subscription holders, it's explained that VPNSecure was acquired in 2023. The deal included the technology, domain, and customer database, but not the liabilities.

"Unfortunately, the previous owner did not disclose that thousands of Lifetime Deals (LTDs) had been sold through platforms like StackSocial," reads the mail.

"We discovered this only months later – when a large portion of our resources were strained by these LTD accounts and high support volume from users, who through part of the database, provided no sustaining income to help us improve and maintain the service."

As a result, the new owners began deactivating lifetime accounts that had been dormant for six months. While it's claimed that this was "technically fair" – for some reason – the new owners seem shocked that it led to a wave of negative reviews.

[...] Ars Technica reports that a follow-up email from VPNSecure shed more light on the situation. It states that InfiniteQuant Ltd, which is a different company than InfiniteQuant Capital Ltd, acquired VPN Secure in an "asset only deal."

It goes on to say that while the buyers received the tech, brand, and infrastructure, they received none of the company's contracts, payments, or obligations from the previous owners.

It's also claimed the Lifetime Deals sold by the old team between 2015 and 2017 were not disclosed to InfiniteQuant Ltd, but that it kept the accounts running for two extra years despite never receiving a "single cent from those subscriptions." So stop being ungrateful, basically.

The final part of the message claims that anyone who didn't see the original message explaining all this must have it in their spam folder or simply missed it completely.

The new owners said they didn't sue the seller over withholding the information on lifetime subs because "a corporate lawsuit would've cost more than the entire purchase of the business." The email also states that the buyers could have simply shut down VPNSecure but instead "chose the hard path."

While it's claimed the lifetime subscriptions were sold between 2015 and 2017, typing "VPNSecure lifetime subscriptions" into Google Search shows a 2021 ad on ZDNet for this $40 plan. An ad for a $28 lifetime subscription also ran on the site in 2022.

Lifetime subscriptions are rarely actual lifetimes. VPNSecure's plans lasted up to 20 years, according to online comments. There's always the chance new owners of companies won't honor the contracts either. Whether InfiniteQuant Ltd really didn't know about the subscriptions can't be confirmed, but it's led to a Trustpilot score of 1.2 for the VPN and pages of angry comments.


Original Submission

posted by janrinok on Saturday May 24, @10:24PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

While CTO officially stands for Chief Technology Officer, the standard jokes are that it’s either Chief Talking Officer or Chief Travel Officer. I embraced all of those roles, especially the travel part when my territory covered India, New Zealand, China and everything in between. And while I definitely relished having a job that required me to keep developing my expertise in a wide range of technical fields, I also really enjoyed the chance to keep working on my communications skills, whether that was through talking or writing.

In my last year at VMware (when travel was curtailed due to COVID) I put a lot of energy into developing (online) presentations, including one on the art of giving presentations. You can find that talk on Peertube – it is in PechaKucha format, which means 20 slides, 20 seconds per slide, which I find a great source of creative inspiration. You can probably watch it in less time than it takes to read this article.

The longer I worked in technology, the more I came to believe that communications, both oral and written, can be a key differentiator in a technical career. Probably the first time I realized this was when writing my PhD thesis. Engineers as a group are notoriously averse to writing, but I came to see the satisfaction to be had in clearly communicating your ideas on paper. For one thing, by the time I was in the last year of my program, I was pretty happy to be producing something as tangible as a thesis. I had written a lot less software in my course than I’d expected, a result of Edinburgh Computer Science being more of a theory place than one encouraging system building from its students. So watching my ideas grow on the page into something the size of a book was oddly satisfying.

I think the reason I was not as averse to writing as many engineers are had a lot to do with the way I was educated in Australia. High school English was a "must-pass" subject if you wanted to go on to university, and I found it much more challenging than my maths and science classes. Not wanting to take chances, I worked disproportionately hard on English in my last year of high school, surprising both myself and my teachers with the highest grade of my life in the final exam that mattered most.

Somewhat to my chagrin, the undergraduate engineering degree in those days did not allow any courses to be taken outside of the Engineering department, but there was a mandated “English for Engineers” course, taught by one of the engineering lecturers. (I think the course had a different official name, something like “Engineering Communications”.) I can only remember two things about that course: one is that we read “Voss” by Nobel-prize-winner Patrick White, which was challenging but enjoyable. So they weren’t treating us as complete dummies. And the second was that we had to make a formal presentation in front of the class, which I found both stressful and educational. (My high school debating experience helped a bit here.) While I might have enjoyed an actual English Literature class more, this one was much better than the course name implied.

I continued to make the occasional presentations through my PhD program and into my early career, but a pivotal experience was watching David Clark speak twice at SIGCOMM 1990, at which he won the SIGCOMM award and also presented the paper, "Architectural considerations for a new generation of protocols.” His presentation of the latter was so engaging that I remember thinking “that is how you get people to listen to your ideas.” (That paper’s ideas continue to influence my thinking today.) As a young researcher at Bellcore at the time, I was surrounded by people who had creative ideas, but I had never seen someone get the audience excited about their work the way Clark had done. I resolved to get better at making technical presentations.

At the same time, I was working on my “accidental smartNIC” project as part of the Aurora gigabit testbed. There came a point where I realized that the system was complex enough that I needed to write some sort of design document–hardly a revolutionary idea in industry, but somewhat uncommon in the research group that I worked in. My first audience for this document was me, because I realized I couldn’t keep all the details in my head any more. Later on it would enable me to involve others in the project both as subsystem designers and programmers of the system.

When I left the research world for a development team at Cisco, I quickly noticed that there was a massive repository of system design documents. There were the formal ones such as product requirements documents (PRDs) and system functional specifications, but also less formal documents, such as Yakov Rekhter’s famous two-page description of Tag Switching that laid the foundation for MPLS. While Cisco was a place that put a lot of emphasis on building hardware and writing the software to run on it, documenting ideas and architectures was critical to getting big things like MPLS to happen.

Coincidentally, the first book that Larry and I wrote together, Computer Networks: A Systems Approach, was completed on the day that I decided to leave Bellcore for Cisco. I realized that completing the book – which was not part of my job description at Bellcore – was the most satisfying thing I had done in years, so maybe it was time for a new job.

By this time I was also active in the IETF, and the development of Tag Switching and MPLS led to my taking a more active role there. There are pretty much two ways to have an impact at the IETF: Write documents, and speak about your work. Of course, the IETF also depends on “running code” to back up those documents and talks – a definite benefit of working at a big place like Cisco was the resources that could be applied to writing code if the company decided to get behind an idea, as it did with MPLS.

All of these experiences led me to appreciate the value of both written and spoken communication, and I continued to work on developing these skills. Taking a couple of public speaking classes early on had a huge positive impact – although I still talk too fast when I get excited about my topic. (I’ve learned skills to manage that, but sometimes forget them in the excitement.)

Learning to be a good communicator can itself be a great way to build your technical skills

In my CTO roles I had plenty of opportunities to advise engineers on how to progress their careers, and I always found myself coming back to emphasize the value of communication skills. Of course you need technical skills as well, but I view these as table stakes, whereas it is the great communicators who rise above the pack. And communication skills are eminently trainable. All the great public speakers I know put huge amounts of time into preparing and practicing their talks. They might look effortless on stage, but that is because of all the effort that went in ahead of time.

Finally, learning to be a good communicator can itself be a great way to build your technical skills. In my last year at VMware, I started to get really interested in quantum computing, which was a challenge for me – it’s full of mathematics and outside my core expertise. But the more I learned the more excited I got, so I decided to present on quantum computing at our Asia-Pacific technical team conference. My goal was both to become knowledgeable enough to avoid embarrassing myself, and to show by example how much fun it can be to expand your horizons.

A version of the talk is here and some lessons from it are here. If you can learn a topic well enough to explain it to your audience, you are going to have a deeper understanding than if you just keep that knowledge to yourself. And if you can communicate your ideas–and your excitement about them–to people around you, you’ll greatly increase the chance of those ideas having an impact.


Original Submission

posted by janrinok on Saturday May 24, @05:44PM   Printer-friendly
from the the-big-sigh dept.

"The European Union, which was formed for the primary purpose of taking advantage of the United States on TRADE, has been very difficult to deal with. Their powerful Trade Barriers, Vat Taxes, ridiculous Corporate Penalties, Non-Monetary Trade Barriers, Monetary Manipulations, unfair and unjustified lawsuits against Americans Companies, and more, have led to a Trade Deficit with the U.S. of more than $250,000,000 a year, a number which is totally unacceptable. Our discussions with them are going nowhere! Therefore, I am recommending a straight 50% Tariff on the European Union, starting on June 1, 2025. There is no Tariff if the product is built or manufactured in the United States. Thank you for your attention to this matter!"

@realdonaldtrump, Truth Social, May 23.

President Trump announced a 50% tariff on EU exports into the United States.

The Trump administration considers EU food and product standards protectionist and wants the bloc to unilaterally drop tariffs. The EU has proposed that both sides scrap tariffs on all industrial and some agricultural products.

Brussels has also offered to help tackle Chinese overcapacity in sectors such as steel and cars, and to discuss restrictions on exporting technology to Beijing.

But it has refused to discuss scrapping national digital taxes or VAT, key US demands, or weakening EU regulation of US tech companies.

Trump's post on Friday contrasted with his administration's moves to defuse trade tensions with Beijing this month. The US has also recently sealed a trade deal with the UK.

(Source: Financial Times, May 23, Trump warns of 50% tariff on EU imports from next month)

Stock up on bourbon while you still can, mateys.


Original Submission

posted by hubie on Saturday May 24, @01:03PM   Printer-friendly

You are glowing – no, really:

You, along with all living things, produce subtle, ethereal, semi-visible light that glows until you die, according to a recent study.

You would be forgiven for jumping to the conclusion that this spooky luminescence is evidence that auras exist, or something similar.

But Dr Daniel Oblak, physicist at the University of Calgary and last author of the study, told BBC Science Focus that, while auras are a metaphysical, spiritual, unscientific idea, this light is not. Instead, it's called ultraweak photon emission (UPE) and is a natural product of your metabolism.

"I normally point out that UPE is a result of a biochemical process and in that sense is related to what happens in a glow-stick, which no one suspects of having an aura," he said.

"UPE is so weak that it is not visible to the human eye and completely overwhelmed by other sources of light, unless you are in a completely dark room."

That's not to say that shutting your curtains and turning off your lights will allow you to see your own glow. This light is between 1,000 and 1,000,000 times dimmer than the human eye can perceive.

UPE is produced when chemicals in your cells create unstable molecules known as reactive oxygen species (ROS), basically byproducts of your body's metabolism.

When ROS levels rise, they cause other molecules to become 'excited', meaning they carry excess energy. It's this energy that causes light to be emitted.

A key driver of this effect is oxidative stress – a form of cellular wear and tear caused by factors like ageing and illness. The more oxidative stress the body experiences, the more ROS it produces – and the more light it emits.

"Hence, when an organism ceases living, it stops metabolising and thus, the ultraweak photon emission ends," he said.

To study UPE, the Calgary scientists measured the emissions produced by immobilised and dead mice, as well as scratched leaves.

Using specialist cameras, they observed much more UPE being emitted by the living mice, compared to their dead bodies. Meanwhile, the leaves gave off much more light where they had been damaged, compared to unscratched areas.

That's because they were experiencing more oxidative stress in scratched regions. But the dead mice did not glow, because their bodies weren't metabolising anymore.

Journal Reference: DOI: https://pubs.acs.org/doi/10.1021/acs.jpclett.4c03546


Original Submission

posted by hubie on Saturday May 24, @08:20AM   Printer-friendly
from the character-flaw dept.

A Florida mother's lawsuit alleges a Character.AI chatbot pulled her teenage son into what she described as an emotionally and sexually abusive relationship that led to his suicide:

A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment—at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself.

The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

[...] The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces, and relationships despite what experts warn are potentially existential risks.

[...] The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

Related: Chatbot 'Encouraged Teen to Kill Parents Over Screen Time Limit'


Original Submission

posted by hubie on Saturday May 24, @03:36AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A new examination of Apple partner TSMC's Arizona facility shines a spotlight on how the U.S. bet on domestic chipmaking is colliding with labor shortages, cost overruns, and global dependencies.

Just outside Phoenix, a sleek, high-security facility is taking shape. Known as Fab 21, the site is operated by Taiwan Semiconductor Manufacturing Company (TSMC) and will soon be one of the most advanced chipmaking facilities in the world.

The microscopic transistors produced here will power Apple devices, artificial intelligence systems and critical infrastructure, representing a significant shift of advanced technology manufacturing to American soil.

TSMC currently makes about 90% of the world's most advanced semiconductors, nearly all of them in Taiwan. That long-standing reliance is now being reexamined amid global supply disruptions and rising tensions in the Asia-Pacific.

[...] TSMC replicated much of its Taiwan production environment in Arizona, but the complexity of the process means the US still relies heavily on foreign equipment, materials and expertise.

[...] Officially, Taiwan's government supports TSMC's global expansion. Privately, there is concern. Taiwan's dominance in semiconductors, sometimes called the "Silicon Shield," is seen as critical leverage in deterring Chinese aggression.

Moving high-end production overseas may reduce that leverage and undermine the island's geopolitical relevance.

Some in Taipei have warned against letting the US or other allies "hollow out" Taiwan's tech advantage. Others view diversification as necessary insurance against supply chain shocks or military threats.

President Donald Trump frequently cited TSMC's US investment as proof that his tariff threats worked. His administration has pushed to reduce US dependence on Asian manufacturing, using trade pressure to encourage reshoring.

Former President Joe Biden focused on subsidies and long-term industrial planning. The CHIPS and Science Act, signed into law in 2022, offers tens of billions in funding and tax incentives.

TSMC's Arizona project is the largest foreign beneficiary, with $40 billion committed across two construction phases. Phase one is expected to produce chips in 2025.

Despite the political divide, both administrations have treated semiconductor independence as a bipartisan priority. TSMC's presence in Arizona is the centerpiece of that effort.

[...] The Arizona facility is producing chips not only for smartphones and laptops but also for artificial intelligence, cloud computing and defense applications. Companies like Apple and Nvidia have confirmed plans to use chips from Fab 21 in upcoming US-bound products.

Both the Trump and Biden administrations have moved to block China's access to these technologies. The US has restricted exports of ASML lithography tools and banned companies like Huawei from acquiring high-end chips.

Still, China is racing to catch up. US restrictions have pushed Beijing to go full speed ahead. That's part of why leaders in both parties continue to push for domestic capacity.

[...] As geopolitical competition escalates, the US faces a delicate balancing act. It must rebuild strategic capacity while staying connected to a global innovation network. The Arizona fab may not make the US self-sufficient, but it's a foundational step toward greater resilience.


Original Submission
