SoylentNews is people


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

What is Your Operating System of Choice?

  • MacOS - Any Version
  • Debian Based - Any Version
  • Redhat Based - Any Version
  • BSD - Any Version
  • Arch Based - Any Version
  • Any other *nix
  • Windows - Any Version
  • The poll creator is dumb for not including my OS

[ Results | Polls ]
Comments: 51 | Votes: 132

posted by hubie on Saturday April 19, @09:47PM   Printer-friendly

Analysis aims to solidify agreement on cannabis's potential as a cancer treatment, lead author of research says:

Hannah Harris Green - Fri 18 Apr 2025 13.00 BST

The largest-ever study investigating medical cannabis as a treatment for cancer, published this week in Frontiers in Oncology, found overwhelming scientific support for cannabis's potential to treat cancer symptoms and potentially to fight the disease itself.

The intention of the analysis was to solidify agreement on cannabis's potential as a cancer treatment, said Ryan Castle, research director at the Whole Health Oncology Institute and lead author of the study. Castle noted that it has been historically difficult to do so because marijuana is still federally considered an illegal Schedule I narcotic.

"Our goal was to determine the scientific consensus on the topic of medical cannabis, a field that has long been dominated by a war between cherrypicked studies," Castle said.

[...] Castle's study looked at more than 10,000 studies on cannabis and cancer, which he said is "10 times the sample size of the next largest study, which we believe helps make it a more conclusive review of the scientific consensus".

To analyze the massive quantity of studies, Castle and his team used AI – specifically, the natural language processing technique known as "sentiment analysis". This technique allowed the researchers to see how many studies had positive, neutral or negative views on cannabis's ability to treat cancer and its symptoms by, for example, increasing appetite, decreasing inflammation or accelerating "apoptosis", or the death of cancer cells.
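For readers curious what such a pass looks like in code, below is a minimal sketch of sentiment analysis over study abstracts, assuming the Hugging Face transformers library and its default pretrained sentiment model. The example abstracts, the confidence threshold used to mark results "neutral", and the tallying are illustrative stand-ins, not the pipeline Castle's team actually used.

```python
# Illustrative sketch only: classify a handful of study abstracts as
# positive/neutral/negative toward cannabis improving cancer outcomes,
# then tally the split. The paper's real pipeline is not reproduced here.
from transformers import pipeline  # assumes Hugging Face transformers is installed

# Default binary sentiment model, standing in for whatever domain-tuned
# classifier the researchers actually used.
classifier = pipeline("sentiment-analysis")

abstracts = [
    "Cannabinoid treatment significantly reduced chemotherapy-induced nausea.",
    "No effect of cannabis on tumor progression was observed in this cohort.",
    "THC exposure accelerated apoptosis in cultured carcinoma cells.",
]

counts = {"positive": 0, "neutral": 0, "negative": 0}
for result in classifier(abstracts):
    if result["score"] < 0.75:          # low-confidence output -> call it neutral
        counts["neutral"] += 1
    elif result["label"] == "POSITIVE":
        counts["positive"] += 1
    else:
        counts["negative"] += 1

total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n}/{total} ({100 * n / total:.0f}%)")
```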

Castle says his team hoped to find "a moderate consensus" about cannabis's potential as a cancer treatment, and expected the "best case scenario" to be something like 55% of studies showing that medical cannabis improved cancer outcomes.

"It wasn't 55-45, it was 75-25," he said.

The study overwhelmingly supported cannabis as a treatment for cancer-related inflammation, appetite loss and nausea. Perhaps more surprisingly, it also showed that cannabis has the potential to fight cancer cells themselves, by killing them and stopping their spread.

"That's a shocking degree of consensus in public health research, and certainly more than we were anticipating for a topic as controversial as medical cannabis," Castle said.

[...] For his part, Abrams has found cannabis to be useful for cancer patients managing symptoms like appetite loss, nausea, pain and anxiety. But he is skeptical of claims that cannabis can actually fight cancer.

"I have been an oncologist in San Francisco for 42 years now where many if not most of my patients have had access to cannabis. If cannabis cures cancer, I have not been able to appreciate that," he said.

Still, Abrams admits that "there is elegant pre-clinical evidence from test tubes and animal models that cannabis can affect cancer cells or transplanted tumors" but "as yet those findings have not translated into clinical benefit in people".

[...] "We are not arguing that the standards for adopting new cancer treatments should be lower. We are arguing that medical cannabis meets or exceeds those standards," he said, "often to a greater extent than current pharmaceutical treatments."

Journal Reference: https://doi.org/10.3389/fonc.2025.1490621


Original Submission

posted by hubie on Saturday April 19, @05:02PM   Printer-friendly

The Armatron robotic arm:

Described as a "robot-like arm to aid young masterminds in scientific and laboratory experiments," it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the "elbow" joint, rotate the "wrist," and open and close the bright-­orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks.

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring "CLICK ... CLICK ... CLICK."

It wasn't just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had "capabilities usually found only in much more expensive experimental arms."

[...] The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn't faded. Today, a new generation of fans are rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one where the arm has been modified to run on steam power.

[Source]: MIT Technology Review

How many of you remember this toy, and did it inspire you to study robotics?


Original Submission

posted by hubie on Saturday April 19, @12:19PM   Printer-friendly

Modern science wouldn't exist without the online research repository known as arXiv. Three decades in, its creator still can't let it go:

Nearly 35 years ago, physicist Paul Ginsparg created arXiv, a digital repository where researchers could share their latest findings—before those findings had been systematically reviewed or verified. Visit arXiv.org today (it's pronounced like "archive") and you'll still see its old-school Web 1.0 design, featuring a red banner and the seal of Cornell University, the platform's institutional home. But arXiv's unassuming facade belies the tectonic reconfiguration it set off in the scientific community. If arXiv were to stop functioning, scientists from every corner of the planet would suffer an immediate and profound disruption. "Everybody in math and physics uses it," Scott Aaronson, a computer scientist at the University of Texas at Austin, told me. "I scan it every night."

[...] In 2021, the journal Nature declared arXiv one of the "10 computer codes that transformed science," praising its role in fostering scientific collaboration. (The article is behind a paywall—unlock it for $199 a year.) By a recent count, arXiv hosts more than 2.6 million papers, receives 20,000 new submissions each month, and has 5 million monthly active users. Many of the most significant discoveries of the 21st century have first appeared on the platform. The "transformers" paper that launched the modern AI boom? Uploaded to arXiv. Same with the solution to the Poincaré conjecture, one of the seven Millennium Prize problems, famous for their difficulty and $1 million rewards. Just because a paper is posted on arXiv doesn't mean it won't appear in a prestigious journal someday, but it's often where research makes its debut and stays openly available. The transformers paper is still routinely accessed via arXiv.

For scientists, imagining a world without arXiv is like the rest of us imagining one without public libraries or GPS. But a look at its inner workings reveals that it isn't a frictionless utopia of open-access knowledge. Over the years, arXiv's permanence has been threatened by everything from bureaucratic strife to outdated code to even, once, a spy scandal. In the words of Ginsparg, who usually redirects interview requests to an FAQ document—on arXiv, no less—and tried to talk me out of visiting him in person, arXiv is "a child I sent off to college but who keeps coming back to camp out in my living room, behaving badly."

[...] Long before arXiv became critical infrastructure for scientific research, it was a collection of shell scripts running on Ginsparg's NeXT machine. In June 1991, Ginsparg, then a researcher at Los Alamos National Laboratory, attended a conference in Colorado, where a fateful encounter took place.

[...] When arXiv started, it wasn't a website but an automated email server (and within a few months also an FTP server). Then Ginsparg heard about something called the "World Wide Web." Initially skeptical—"I can't really pay attention to every single fad"—he became intrigued when the Mosaic browser was released in 1993. Soon after, Ginsparg built a web interface for arXiv, which over time became its primary mode of access. He also occasionally consulted with a programmer at the European Organization for Nuclear Research (CERN) named Tim Berners-Lee—now Sir Tim "Inventor of the World Wide Web" Berners-Lee—whom Ginsparg fondly credits with grilling excellent swordfish at his home in the French countryside.

In 1994, with a National Science Foundation grant, Ginsparg hired two people to transform arXiv's shell scripts into more reliable Perl code. They were both technically gifted, perhaps too gifted to stay for long. One of them, Mark Doyle, later joined the American Physical Society and became its chief information officer. The other, Rob Hartill, was working simultaneously on a project to collect entertainment data: the Internet Movie Database. (After IMDb, Hartill went on to do notable work at the Apache Software Foundation.)

Before arXiv was called arXiv, it was accessed under the hostname xxx.lanl.gov ("xxx" didn't have the explicit connotations it does today, Ginsparg emphasized). During a car ride, he and his wife brainstormed nicer-sounding names. Archive? Already taken. Maybe they could sub in the Greek equivalent of X, chi (pronounced like "kai"). "She wrote it down and crossed out the e to make it more symmetric around the X," Ginsparg said. "So arXiv it was." At this point, there wasn't much formal structure. The number of developers typically stayed at one or two, and much of the moderation was managed by Ginsparg's friends, acquaintances, and colleagues.

Early on, Ginsparg expected to receive on the order of 100 submissions to arXiv a year. It turned out to be closer to 100 a month, and growing. "Day one, something happened, day two something happened, day three, Ed Witten posted a paper," as Ginsparg once put it. "That was when the entire community joined." Edward Witten is a revered string theorist and, quite possibly, the smartest person alive. "The arXiv enabled much more rapid worldwide communication among physicists," Witten wrote to me in an email. Over time, disciplines such as mathematics and computer science were added, and Ginsparg began to appreciate the significance of this new electronic medium. Plus, he said, "it was fun."

As the usage grew, arXiv faced challenges similar to those of other large software systems, particularly in scaling and moderation. There were slowdowns to deal with, like the time arXiv was hit by too much traffic from "stanford.edu." The culprits? Sergey Brin and Larry Page, who were then busy indexing the web for what would eventually become Google. Years later, when Ginsparg visited Google HQ, both Brin and Page personally apologized to him for the incident.

The biggest mystery is not why arXiv succeeded. Rather, it's how it wasn't killed by vested interests intent on protecting traditional academic publishing. Perhaps this was due to a decision Ginsparg made early on: Upon submission, users signed a clause that gave arXiv nonexclusive license to distribute the work in perpetuity, even in the event of future publication elsewhere. The strategic move ensured that no major publishers, known for their typically aggressive actions to maintain feudal control, would ever seriously attempt to shut it down.

[...] Ginsparg stops short of saying he "brought" arXiv with him, but the fact is, he ended up back at his alma mater, Cornell—tenured, this time—and so did arXiv. He vowed to be free of the project within "five years maximum." After all, his main job wasn't supposed to be running arXiv—it was teaching and doing research. At the university, arXiv found a home within the library. "They disseminate material to academics," Ginsparg said, "so that seemed like a natural fit."

[...] Then there was the problem of managing arXiv's massive code base. Although Ginsparg was a capable programmer, he wasn't a software professional adhering to industry norms like maintainability and testing. Much like constructing a building without proper structural supports or routine safety checks, his methods allowed for quick initial progress but later caused delays and complications. Unrepentant, Ginsparg often went behind the library's back to check the code for errors. The staff saw this as an affront, accusing him of micromanaging and sowing distrust.

[...] Technical problems were compounded by administrative ones. In 2019, Cornell transferred arXiv to the school's Computing and Information Science division, only to have it change hands again after a few months. Then a new director with a background in, of all things, for-profit academic publishing took over; she lasted a year and a half. "There was disruption," said an arXiv employee. "It was not a good period."

But finally, relief: In 2022, the Simons Foundation committed funding that allowed arXiv to go on a hiring spree. Ramin Zabih, a Cornell professor who had been a long-time champion, joined as the faculty director. Under the new governance structure, arXiv's migration to the cloud and a refactoring of the code base to Python finally took off.

UPDATE: arXiv is moving to the cloud (and hiring):

We are already underway on the arXiv CE ("Cloud Edition") project. This is a project to re-home all arXiv services from VMs at Cornell to a cloud provider (Google Cloud). There are a number of reasons for this transition, including improving arXiv's scalability while modernizing our infrastructure. This will not be a simple port of the existing arXiv code base because this project will:

    • replace the portion of our backends still written in perl and PHP
    • re-architect our article processing to be fully asynchronous, and provide better insight into the processing workflows
    • containerize all, or nearly all arXiv services so we can deploy via Kubernetes or services like Google Cloud Run
    • improve our monitoring and logging facilities so we can more quickly identify and manage production issues with arxiv.org
    • create a robust CI/CD pipeline to give us more confidence that changes we deploy will not cause services to regress

The cloud transition is a pre-requisite to modernizing arXiv as a service. The modernization will enable:

    • arXiv to expand the subject areas that we cover
    • improve the metadata we collect and make available for articles, adding fields that the research community has requested such as funder identification
    • deal with the problem of ambiguous author identities
    • improve accessibility to support users with impairments, particularly visual impairments
    • improve usability for the entire arXiv community
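The "fully asynchronous" article processing mentioned in the earlier list is, in essence, a pipeline whose stages can overlap across many submissions while reporting their progress. Here is a toy asyncio sketch of that idea in Python; the stage names, submission identifiers, and delays are invented for illustration and have no connection to arXiv's actual code.

```python
# Hypothetical sketch of an asynchronous submission pipeline; stage bodies
# are stand-ins (sleeps) for real work such as TeX compilation.
import asyncio

async def validate(sub_id: str) -> None:
    await asyncio.sleep(0.1)            # stand-in for format/license checks
    print(f"{sub_id}: validated")

async def compile_pdf(sub_id: str) -> None:
    await asyncio.sleep(0.3)            # stand-in for TeX compilation
    print(f"{sub_id}: PDF built")

async def extract_metadata(sub_id: str) -> None:
    await asyncio.sleep(0.2)            # stand-in for title/author/funder extraction
    print(f"{sub_id}: metadata extracted")

async def process(sub_id: str) -> None:
    # Stages run in order for one submission, but many submissions proceed
    # concurrently; each stage boundary is a natural hook for workflow telemetry.
    await validate(sub_id)
    await compile_pdf(sub_id)
    await extract_metadata(sub_id)
    print(f"{sub_id}: done")

async def main() -> None:
    submissions = ["2504.00001", "2504.00002", "2504.00003"]
    await asyncio.gather(*(process(s) for s in submissions))

asyncio.run(main())
```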


Original Submission

posted by hubie on Saturday April 19, @08:37AM   Printer-friendly
from the no-guardrails dept.

Tech Review has a short article that attempts to describe "vibe coding" and discuss some of the ramifications, https://www.technologyreview.com/2025/04/16/1115135/what-is-vibe-coding-exactly/
Archive version, https://archive.is/5PGPK

When OpenAI cofounder Andrej Karpathy excitedly took to X back in February to post about his new hobby, he probably had no idea he was about to coin a phrase that encapsulated an entire movement steadily gaining momentum across the world.
"There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists," he said. "I'm building a project or webapp, but it's not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."

This certainly would not work in our industry, where software is often used in human-life-critical applications. Seems to me this will speed up the race to the bottom of software quality, but maybe it has a place?


Original Submission

posted by hubie on Saturday April 19, @04:53AM   Printer-friendly
from the is-this-really-a-problem? dept.

UK founders grow frustrated over dearth of funding: 'the problem is getting worse':

According to Dealroom data cited by the Financial Times, British start-ups raised just £16.2 billion last year, far less than the more than £65 billion raised by their counterparts in Silicon Valley during the same period. In fact, the U.S. appears to be pulling further ahead each year. In 2024, 57% of global venture capital funding went to U.S. startups — the first time that share has exceeded 50% in over a decade, per Dealroom.

This widening gap is part of a years-long trend that U.K. founders have taken note of, the FT reports, and it's prompting many to consider relocating abroad.

"Recognizing that most venture funding comes from the U.S., we set up as a Delaware corporation, the preferred and familiar structure for American investors," said Mati Staniszewski, co-founder of the London-based AI company ElevenLabs, in an interview with the FT.

Barney Hussey-Yeo, founder and CEO of the AI start-up Cleo, told the FT that he already spends four months a year in San Francisco and is seriously considering a permanent move. "You get to a certain size where there is no capital in the U.K. And the problem is getting worse," he said. "Honestly, the U.K. is kinda f***d if it doesn't address [the problem]."


Original Submission

posted by janrinok on Saturday April 19, @12:05AM   Printer-friendly

https://nurpax.github.io/posts/2019-08-18-dirty-tricks-6502-programmers-use.html

This post recaps some of the C64 coding tricks used in my little Commodore 64 coding competition. The competition rules were simple: make a C64 executable (PRG) that draws two lines to form the below image (https://nurpax.github.io/images/c64/lines/lines-2x.png). The objective was to do this in as few bytes as possible.

[ Obviously this was intended for assembly language, but using any language you choose, how small a runtime can you produce that achieves the same function? The rules are very loose - have fun. --JR ]
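As a non-golfed point of reference, a few lines of Python with Pillow reproduce the basic task: draw two straight lines on a C64-sized 320x200 canvas. The endpoints below are guesses at the target image, not the competition's exact coordinates.

```python
# Draw two crossing lines on a 320x200 bitmap (illustrative endpoints only).
from PIL import Image, ImageDraw

img = Image.new("RGB", (320, 200), (0, 0, 0))        # black background
draw = ImageDraw.Draw(img)
draw.line((0, 0, 319, 199), fill=(255, 255, 255))    # top-left to bottom-right
draw.line((0, 199, 319, 0), fill=(255, 255, 255))    # bottom-left to top-right
img.save("lines.png")
```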


Original Submission

posted by janrinok on Friday April 18, @07:20PM   Printer-friendly

States Are Banning Forever Chemicals. Industry Is Fighting Back:

Kenney and his husband were at a big box store buying a piece of furniture when the sales associate asked if they'd like to add fabric protectant. Kenney, the cabinet secretary of New Mexico's Environment Department, asked to see the product data sheet. Both he and his husband were shocked to see forever chemicals listed as ingredients in the protectant.

"I think about your normal, everyday New Mexican who is trying to get by, make their furniture last a little longer, and they think, 'Oh, it's safe, great!' It's not safe," he says. "It just so happens that they tried to sell it to the environment secretary."

Last week, the New Mexico legislature passed a pair of bills that Kenney hopes will help protect consumers in his state. If signed by the governor, the legislation would eventually ban consumer products that have added PFAS—per- and polyfluorinated alkyl substances, known colloquially as "forever chemicals" because of their persistence in the environment—from being sold in New Mexico.

As health and environmental concerns about forever chemicals mount nationally, New Mexico joins a small but growing number of states that are moving to limit—and, in some cases, ban—PFAS in consumer products. New Mexico is now the third state to pass a PFAS ban through the legislature. Ten other states have bans or limits on added PFAS in certain consumer products, including cookware, carpet, apparel, and cosmetics. This year, at least 29 states—a record number—have PFAS-related bills before state legislatures, according to an analysis of bills by Safer States, a network of state-based advocacy organizations working on issues around potentially unsafe chemicals.

The chemical and consumer products industries have taken notice of this new wave of regulations and are mounting a counterattack, lobbying state legislatures to advocate for the safety of their products—and, in one case, suing to prevent the laws from taking effect. Some of the key exemptions made in New Mexico highlight some of the big fights that industries are hoping they'll win in statehouses across the country: fights they are already taking to a newly industry-friendly US Environmental Protection Agency.

PFAS is not just one chemical but a class of thousands. The first PFAS were developed in the 1930s; thanks to their nonstick properties and unique durability, their popularity grew in industrial and consumer uses in the postwar era. The chemicals were soon omnipresent in American lives, coating cookware, preventing furniture and carpets from staining, and acting as a surfactant in firefighting foam.

In 1999, a man in West Virginia filed a lawsuit against US chemical giant DuPont alleging that pollution from its factory was killing his cattle. The lawsuit revealed that DuPont had concealed evidence of PFAS's negative health effects on workers from the government for decades. In the years since, the chemical industry has paid out billions in settlement fees around PFAS lawsuits: in 2024, the American multinational 3M agreed to pay between $10 billion and $12.5 billion to US public water systems that had detected PFAS in their water supplies to pay for remediation and future testing, though the company did not admit liability. (DuPont and its separate chemical company Chemours continue to deny any wrongdoing in lawsuits involving them, including the original West Virginia suit.)

As the moniker "forever chemicals" suggests, mounting research has shown that PFAS accumulate in the environment and in our bodies and can be responsible for a number of health problems, from high cholesterol to reproductive issues and cancer. EPA figures released earlier this year show that almost half of the US population is currently exposed to PFAS in their drinking water. Nearly all Americans, meanwhile, have at least one type of PFAS in their blood.

For a class of chemicals with such terrifying properties, there's been surprisingly little regulation of PFAS at the federal level. One of the most-studied PFAS chemicals, PFOA, began to be phased out in the US in the early 2000s, with major companies eliminating the chemical and related compounds under EPA guidance by 2015. The chemical industry and manufacturers say that the replacements they have found for the most dangerous chemicals are safe. But the federal government, as a whole, has lagged behind the science when it comes to regulations: The EPA only set official drinking water limits for six types of PFAS in 2024.

In lieu of federal guidance, states have started taking action. In 2021, Maine, which identified an epidemic of PFAS pollution on its farms in 2016, passed the first-ever law banning the sale of consumer products with PFAS. Minnesota followed suit in 2023.

"The cookware industry has historically not really engaged in advocacy, whether it's advocacy or regulatory," says Steve Burns, a lobbyist who represents the industry. But laws against PFAS in consumer products—particularly a bill in California, which required cookware manufacturers to disclose to consumers if they use any PFAS chemicals in their products—were a "wakeup call" for the industry.

Burns is president of the Cookware Sustainability Alliance, a 501c6 formed in 2024 by two major companies in the cookware industry. He and his colleagues have had a busy year, testifying in 10 statehouses across the country against PFAS restrictions or bans (and, in some cases, in favor of new laws that would exempt their products from existing bans). In February, the CSA was one of more than 40 industry groups and manufacturers to sign a letter to New Mexico lawmakers opposing its PFAS ban when it was first introduced. The CSA also filed a suit against the state of Minnesota in January, alleging that its PFAS ban is unconstitutional.

Its work has paid off. Unlike the Maine or Minnesota laws, the New Mexico bill specifically exempts fluoropolymers, a key ingredient in nonstick cookware and a type of PFAS chemical, from the coming bans. The industry has also seen success overseas: France excluded kitchenware from its recent PFAS ban following a lobbying push by Cookware Sustainability Alliance member Groupe SEB. (The CSA operates only in the US and was not involved in that effort.)

"As an industry, we do believe that if we're able to make our case, we're able to have a conversation, present the science and all the independent studies we have, most times people will say well, you make a good point," Burns says. "This is a different chemistry."

It's not just the cookware industry making this argument. Erich Shea, the director of product communications at the American Chemistry Council, told WIRED in an email that the group supports New Mexico's fluoropolymer exclusion and that it will "allow New Mexico to avoid the headaches experienced by decisionmakers in other states."

The FDA has authorized nonstick cookware for human use since the 1960s. Some research—including one peer-reviewed study conducted by the American Chemistry Council's Performance Fluoropolymer Partnership, whose members include 3M and Chemours—has found that fluoropolymers are safe to consume and less harmful than other types of PFAS. Separate research has called their safety into question.

However, the production of fluoropolymers for use in nonstick cookware and other products has historically released harmful PFAS into the environment. And while major US manufacturers have phased out PFOA in their production chain, other factories overseas still use the chemical in making fluoropolymers.

The debate over fluoropolymers' inclusion in state bans is part of a larger argument made by industry and business groups: that states are defining PFAS chemicals too broadly, opening the door to overregulation of safe products. A position paper from the Cookware Sustainability Alliance provided to WIRED lambasts the "indiscriminate definition of PFAS" in many states with recent bans or restrictions.

"Our argument is that fluoropolymers are very different from PFAS chemicals of concern," Burns says.

Some advocates disagree. The exemption of fluoropolymers from New Mexico's ban, along with a host of other industry-specific exemptions in the bill, means that the legislation "is not going to meet the stated intentions of what the bill's sponsors want it to do," says Gretchen Salter, the policy director at Safer States.

Advocates like Salter have concerns around the use of forever chemicals in the production of fluoropolymers as well as their durability throughout their life cycles. "Fluoropolymers are PFAS. PFAS plastics are PFAS. They are dangerous at every stage of their life, from production to use to disposal," she claims.

Kenney acknowledges that the fluoropolymer exemption has garnered a "little bit of criticism." But he says that this bill is meant to be a starting point.

"We're not trying to demonize PFAS—it's in a lot of things that we rightfully still use—but we are trying to gauge the risk," he says. "We don't expect this to be a one and done. We expect science to grow and the exemptions to change."

Correction: 4/7/2025 10 AM EDT: WIRED has corrected the name spelling of the spokesperson for the American Chemistry Council. WIRED has also removed a reference to Secretary Kenney's husband, whose profession was stated inaccurately.

See Related:

Dolphins Are Dying From Toxic Chemicals Banned Since the 1980s


Original Submission

posted by janrinok on Friday April 18, @02:34PM   Printer-friendly

An interesting article about the decline in friendships.

The so-called "Friendship Recession" is making its way into the vernacular—a profound shift in how Americans experience and sustain friendships. The data paints a stark picture. According to the American Perspectives Survey, the percentage of U.S. adults who report having no close friends has quadrupled to 12% since 1990, while the percentage of those with ten or more close friends has fallen by nearly threefold. The foundations of the crisis were laid long before lockdowns. For decades, Americans consistently spent about 6.5 hours a week with friends. Then, between 2014 and 2019, that number plummeted to just four hours per week.

To be sure, systemic forces underlie this shift. Suburban sprawl has physically distanced us from one another. The government slowed down its investment in and construction of third spaces—such as community centers, parks, and coffee shops—which has left fewer places for organic social interactions. The rise of the gig economy and economic pressures have made free time a luxury. These factors have made friendship more difficult, and policymakers, urban planners, and venture capitalists are searching for solutions.

However, these structural forces alone can't fully account for the larger shift. If inaccessibility were the primary driver, we wouldn't see relatively stable connection rates among older adults over the last several decades. If wealthier individuals have more access to communal spaces, why has solo dining increased by 29% in the past two years? If this were purely circumstantial, why would Stanford now offer Design for Healthy Friendships—a class dedicated to helping students structure their social lives with intention?

[...] While these prescriptions might sound easy, the reality is that culture change is hard, and its effects aren't seen overnight. It would be easier to scapegoat external forces, build yet another friend-finding app, and call it a day. While broader policy changes and social infrastructure certainly are needed and will help, we also must recognize that change starts with us. The small, daily choices we make—to reach out, to show up, to invest in relationships—add up to and actively shape the culture we live in. Imagine what could happen if we're better, together.

[Source]: Harvard Kennedy School

What has been your experience in this regard?


Original Submission

posted by janrinok on Friday April 18, @09:52AM   Printer-friendly

https://marylandmatters.org/2025/04/14/end-of-an-era-the-last-radioshack-in-maryland-is-closing-its-doors/

Before the 2000s, RadioShack was the place to go if you needed a cable or help with anything tech related. Now, the last brick-and-mortar store in Maryland is closing its doors.

"I'm not one to sit at home, so I'm going to find something to do," said Cindy Henning, the store's manager and sole employee.

After more than 40 years, the RadioShack in Prince Frederick is shutting down.

Henning told WTOP she's going to miss it dearly. She's worked there for three decades.

"We would have a lot of fun. That was half of our day was to have fun with people and show them how electronics work," Henning said.

It was owned and operated by longtime local resident Michael King, who passed away at the end of January at the age of 79. His son Edward has taken over as owner. "It's the end of an era," he said.

King said his grandfather owned a TV repair shop in the '50s and then his dad worked with him. They started carrying RadioShack products and grew to franchise three stores in Maryland. The RadioShack franchise first declared bankruptcy in 2015. King said they used the RadioShack name, but they don't have a warehouse in the U.S., so they were buying product from other wholesalers and selling it. "It was fun while it lasted, but it's not the same anymore," King said. "I know my dad realized that." The store's last day is Saturday, April 26.

[There are no hardware shops near to where I live now. I have to do all of my shopping online. I do use Amazon but where I can I prefer to use the original supplier, particularly if one can build a good relationship with them. I have received small but welcome discounts on some of the items that I have purchased. What do you do now for hardware and components? --JR]


Original Submission

posted by janrinok on Friday April 18, @05:07AM   Printer-friendly

Oxygen discovered in most distant known galaxy:

Two different teams of astronomers have detected oxygen in the most distant known galaxy, JADES-GS-z14-0. The discovery, reported in two separate studies, was made possible thanks to the Atacama Large Millimeter/submillimeter Array (ALMA), in which the European Southern Observatory (ESO) is a partner. This record-breaking detection is making astronomers rethink how quickly galaxies formed in the early Universe.

Discovered last year, JADES-GS-z14-0 is the most distant confirmed galaxy ever found: it is so far away, its light took 13.4 billion years to reach us, meaning we see it as it was when the Universe was less than 300 million years old, about 2% of its present age. The new oxygen detection with ALMA, a telescope array in Chile's Atacama Desert, suggests the galaxy is much more chemically mature than expected.

"It is like finding an adolescent where you would only expect babies," says Sander Schouws, a PhD candidate at Leiden Observatory, the Netherlands, and first author of the Dutch-led study, now accepted for publication in The Astrophysical Journal. "The results show the galaxy has formed very rapidly and is also maturing rapidly, adding to a growing body of evidence that the formation of galaxies happens much faster than was expected."

Galaxies usually start their lives full of young stars, which are made mostly of light elements like hydrogen and helium. As stars evolve, they create heavier elements like oxygen, which get dispersed through their host galaxy after they die. Researchers had thought that, at 300 million years old, the Universe was still too young to have galaxies ripe with heavy elements. However, the two ALMA studies indicate JADES-GS-z14-0 has about 10 times more heavy elements than expected.

"I was astonished by the unexpected results because they opened a new view on the first phases of galaxy evolution," says Stefano Carniani, of the Scuola Normale Superiore of Pisa, Italy, and lead author on the paper now accepted for publication in Astronomy & Astrophysics. "The evidence that a galaxy is already mature in the infant Universe raises questions about when and how galaxies formed."

The oxygen detection has also allowed astronomers to make their distance measurements to JADES-GS-z14-0 much more accurate. "The ALMA detection offers an extraordinarily precise measurement of the galaxy's distance down to an uncertainty of just 0.005 percent. This level of precision — analogous to being accurate within 5 cm over a distance of 1 km — helps refine our understanding of distant galaxy properties," adds Eleonora Parlanti, a PhD student at the Scuola Normale Superiore of Pisa and author on the Astronomy & Astrophysics study [1].
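The analogy checks out arithmetically (our back-of-the-envelope check, not a figure from the paper):

```latex
0.005\% \times 1\,\mathrm{km} = 5\times10^{-5} \times 10^{5}\,\mathrm{cm} = 5\,\mathrm{cm}
```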

"While the galaxy was originally discovered with the James Webb Space Telescope, it took ALMA to confirm and precisely determine its enormous distance," [2] says Associate Professor Rychard Bouwens, a member of the team at Leiden Observatory. "This shows the amazing synergy between ALMA and JWST to reveal the formation and evolution of the first galaxies."

Gergö Popping, an ESO astronomer at the European ALMA Regional Centre who did not take part in the studies, says: "I was really surprised by this clear detection of oxygen in JADES-GS-z14-0. It suggests galaxies can form more rapidly after the Big Bang than had previously been thought. This result showcases the important role ALMA plays in unraveling the conditions under which the first galaxies in our Universe formed."


Original Submission

posted by janrinok on Friday April 18, @12:21AM   Printer-friendly

Phase two of military AI has arrived:

As I also write in my story, this push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes. It also accelerates the US toward a world where AI is not just analyzing military data but suggesting actions—for example, generating lists of targets. Proponents say this promises greater accuracy and fewer civilian deaths, but many human rights groups argue the opposite.

With that in mind, here are three open questions to keep your eye on as the US military, and others around the world, bring generative AI to more parts of the so-called "kill chain."

What are the limits of "human in the loop"?

Talk to as many defense-tech companies as I have and you'll hear one phrase repeated quite often: "human in the loop." It means that the AI is responsible for particular tasks, and humans are there to check its work. It's meant to be a safeguard against the most dismal scenarios—AI wrongfully ordering a deadly strike, for example—but also against more trivial mishaps. Implicit in this idea is an admission that AI will make mistakes, and a promise that humans will catch them.

But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems.

"'Human in the loop' is not always a meaningful mitigation," she says. When an AI model relies on thousands of data points to draw conclusions, "it wouldn't really be possible for a human to sift through that amount of information to determine if the AI output was erroneous." As AI systems rely on more and more data, this problem scales up.

Is AI making it easier or harder to know what should be classified?

In the Cold War era of US military intelligence, information was captured through covert means, written up into reports by experts in Washington, and then stamped "Top Secret," with access restricted to those with proper clearances. The age of big data, and now the advent of generative AI to analyze that data, is upending the old paradigm in lots of ways.

One specific problem is called classification by compilation. Imagine that hundreds of unclassified documents all contain separate details of a military system. Someone who managed to piece those together could reveal important information that on its own would be classified. For years, it was reasonable to assume that no human could connect the dots, but this is exactly the sort of thing that large language models excel at.

With the mountain of data growing each day, and then AI constantly creating new analyses, "I don't think anyone's come up with great answers for what the appropriate classification of all these products should be," says Chris Mouton, a senior engineer for RAND, who recently tested how well suited generative AI is for intelligence and analysis. Underclassifying is a US security concern, but lawmakers have also criticized the Pentagon for overclassifying information.

How high up the decision chain should AI go?

Zooming out for a moment, it's worth noting that the US military's adoption of AI has in many ways followed consumer patterns. Back in 2017, when apps on our phones were getting good at recognizing our friends in photos, the Pentagon launched its own computer vision effort, called Project Maven, to analyze drone footage and identify targets.

Now, as large language models enter our work and personal lives through interfaces such as ChatGPT, the Pentagon is tapping some of these models to analyze surveillance.

So what's next? For consumers, it's agentic AI, or models that can not just converse with you and analyze information but go out onto the internet and perform actions on your behalf. It's also personalized AI, or models that learn from your private data to be more helpful.

All signs point to the prospect that military AI models will follow this trajectory as well. A report published in March from Georgetown's Center for Security and Emerging Technology found a surge in military adoption of AI to assist in decision-making. "Military commanders are interested in AI's potential to improve decision-making, especially at the operational level of war," the authors wrote.

In October, the Biden administration released its national security memorandum on AI, which provided some safeguards for these scenarios. This memo hasn't been formally repealed by the Trump administration, but President Trump has indicated that the race for competitive AI in the US needs more innovation and less oversight. Regardless, it's clear that AI is quickly moving up the chain not just to handle administrative grunt work, but to assist in the most high-stakes, time-sensitive decisions.


Original Submission

posted by janrinok on Thursday April 17, @07:42PM   Printer-friendly

Disasters spur investment in flood and fire risk tech:

When Storm Babet hit the town of Trowell in Nottingham in 2023, Claire Sneddon felt confident her home would not be affected.

After all, when she bought the property in 2021, she was told by the estate agent that a previous flood the year before, which had reached but not affected the property, was a once-in-a-lifetime event, and that flooding measures to protect the properties on the cul-de-sac would be put in place.

However, when Storm Babet tore through the UK two years later, Ms Sneddon's home flooded after several days of rain.

"We knew there would be water on the cul-de-sac but no one expected it to flood internally again. However, water entered the property for five hours," she says. "It reached to the top of the skirting boards. We had to have all the flooring, woodwork and lower kitchen replaced, which took nearly 12 months." Their final insurance bill was around £45,000. She says they were fortunate to have qualified for a government scheme providing affordable insurance for homeowners in areas of high-flood risk.

While it might be too late for Ms Sneddon and other homeowners, new tools are being developed to help people and companies assess climate risk.

[...] Last December, the UK Environment Agency updated its National Flood Risk Assessment (NaFRA), showing current and future flood risk from rivers, the sea and surface water for England. It used its own data alongside that of local authorities and climate data from the Met Office. It also brought up to date the National Coastal Erosion Risk Map (NCERM). They were both last updated in 2018 and 2017 respectively.

The new NaFRA data shows as many as 6.3 million properties in England are in areas at risk of flooding from rivers, the sea or surface water, and with climate change this could increase to around eight million by 2050.

"We have spent the last few years transforming our understanding of flood and coastal erosion risk in England, drawing on the best available data... as well as improved modelling and technological advances," says Julie Foley, director of flood risk strategy at the Environment Agency.

"When we account for the latest climate projections, one in four properties could be in areas at risk of flooding by the middle of the century."

The Environment Agency plans to launch a portal where users can check their long-term flood risk. Similar resources exist for Scotland, Northern Ireland, and Wales through the ABI.

"We can no longer rely on historical data," says Lukky Ahmed, co-founder of Climate X.

The London-based climate risk firm offers a digital twin of the Earth, which simulates different extreme weather events and their potential impact on properties, infrastructure and assets under different emissions scenarios.

It combines artificial intelligence with physics-based climate models. "While many climate models might tell you how much rainfall to expect, they don't say what happens when that water hits the ground," he says. "Our models simulate, for example, what happens when the water hits, where it travels and what the impact of the flooding will be."
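To make the "where the water travels" step concrete, here is a deliberately tiny Python cartoon that routes uniform rainfall downhill over an elevation grid until it pools in local minima. It is only an illustration of the general routing idea and has nothing to do with Climate X's actual physics or AI models.

```python
# Toy rainfall routing: every cell repeatedly hands its water to its lowest
# strictly-lower neighbour, so water accumulates in local depressions.
import numpy as np

elevation = np.array([
    [5.0, 4.0, 3.0, 4.0],
    [4.0, 2.0, 1.0, 3.0],
    [4.0, 3.0, 2.0, 2.0],
    [5.0, 4.0, 3.0, 1.0],
])
water = np.full_like(elevation, 10.0)     # uniform rainfall depth per cell

rows, cols = elevation.shape
for _ in range(20):                       # enough passes for this small grid to settle
    moved = np.zeros_like(water)
    for r in range(rows):
        for c in range(cols):
            lowest, target = elevation[r, c], None
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and elevation[nr, nc] < lowest:
                    lowest, target = elevation[nr, nc], (nr, nc)
            if target is not None:        # a downhill neighbour exists: pass water on
                moved[target] += water[r, c]
                water[r, c] = 0.0
    water += moved

print(np.round(water, 1))                 # water has pooled in the lowest cells
```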

While banks and other lenders are testing their product, property companies are currently using their services when considering new developments.

"They log into our platform and identify locations and existing building stock and in return they receive risk rating and severity metrics tied to hazards," says Mr Ahmed.

Many parts of the world have much more extreme weather than the UK.

In the US in January, devastating wildfires tore through parts of Los Angeles. Meanwhile Hurricane Milton, which made landfall last October, is likely to be one of the costliest hurricanes to hit west Florida.

To help insurers manage those costs, New York-based Faura analyses the resilience of homes and commercial buildings. "We look at the different elements of a property to understand how likely it is to survive and pinpoint resilience and survivability of a property," says Faura co-founder Valkyrie Holmes.

"We tell companies and homeowners whether their property will still be standing after a disaster, not just whether a disaster will happen in an area," he adds.

Faura bases its assessments on satellite and aerial imagery and data from surveys and disaster reports. "Insurance companies technically have the data to be able to do this but have not built out the models to quantify it," says Mr Holmes.

Other services are popping up for homebuyers. For the properties it markets, US firm Redfin estimates the percentage chance of natural disasters, such as flooding and wildfires, occurring over the next 30 years for each property.

"If people are looking at two homes with the same layout in the same neighbourhood, then climate risk will make or break [their decision]," says Redfin chief economist Daryl Fairweather.

As for Ms Sneddon, following her personal experience, she now works for flood risk company The FPS Group. "Flood risk is only going to get worse over the coming years so it is essential to find out as much as you can about the flood risk to a property," she advises.

"Flooding has a huge impact on communities and mental health. You are supposed to feel safe in your home, it shouldn't be a place of worry and anxiety."


Original Submission

posted by janrinok on Thursday April 17, @02:54PM   Printer-friendly
from the home-made-chips dept.

Advanced Micro Devices (AMD.O) said on Tuesday its key processor chips would soon be made at TSMC's (2330.TW) new production site in Arizona, marking the first time that its products will be manufactured in the United States:

Though AMD's plans predate U.S. President Donald Trump's return to office, tech companies' efforts to diversify their supply chains have taken on added significance given Trump's escalating tariff war.

His administration is currently investigating whether imports of semiconductors threaten national security, which could be a precursor to slapping tariffs on those products.

"Our new fifth-generation EPYC is doing very well, so we're ready to start production," AMD Chief Executive Lisa Su told reporters in Taipei, referring to the company's central processing unit (CPU) for data centres.

Until now, the U.S. company's products have been made at contract chip manufacturer TSMC's facilities in Taiwan.

Also at ZeroHedge.



Original Submission

posted by hubie on Thursday April 17, @10:08AM   Printer-friendly
from the Smilie-Happy-Router dept.

The OpenWrt community is proud to announce the newest stable release of the OpenWrt 24.10 stable series.

The OpenWrt Project is a Linux operating system targeting embedded devices. It is a complete replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices.

Instead of trying to create a single, static firmware, OpenWrt provides a fully writable filesystem with package management. This frees you from the application selection and configuration provided by the vendor and allows you to customize the device through the use of packages to suit any application. For developers, OpenWrt is the framework to build an application without having to build a complete firmware around it; for users this means the ability for full customization, to use the device in ways never envisioned.

If you're not familiar with OpenWrt, it really is quite a nifty OS ecosystem and many commercially available routers even run OpenWrt under the hood behind a manufacturer-specific user-facing web interface.

While installing OpenWrt on a device will not magically transform older, less capable hardware into faster Wi-Fi for your home, many devices are effectively crippled from the factory: the stock firmware limits the hardware capabilities you can utilize and the options, packages and software capabilities you can use.

Newer devices gain all possible functionality through a fully capable software suite and extensible packages. Devices with bugs, security issues or simply abandoned by their manufacturer but still capable of good performance from the hardware can be brought up to date and used successfully with the updated OS. Older devices no longer suited to their original, intended purpose (like a slow Wi-Fi chip) can be re-purposed into something useful, for example using an old router with a USB port as a NAS server for your LAN by simply connecting storage.

This latest 24.10.1 release addresses various issues and regressions caused by some of the underlying fundamental changes made between the previous 23.05.x series and the initial 24.10.0 release.

Personally, I've come to use it quite extensively across a wide range of devices. Note though that as of this moment, many of the firmware download links, etc. have yet to be updated to specifically point to 24.10.1 as the release roll-out proceeds.


Original Submission

posted by hubie on Thursday April 17, @05:23AM   Printer-friendly
from the resistance-is-futile dept.

https://arstechnica.com/gadgets/2025/04/a-history-of-the-internet-part-1-an-arpa-dream-takes-form/

In a very real sense, the Internet, this marvelous worldwide digital communications network that you're using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency's Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik.
[...]
He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.
[...]
Taylor's predecessor, Joseph "J.C.R." Licklider, had released a memo in 1963 that whimsically described an "Intergalactic Computer Network" that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn't able to turn it into a real project. But Taylor knew that he could.
[...]
Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

"Is it going to be hard to do?" Herzfeld asked.

"Oh no. We already know how to do it," Taylor replied.

"Great idea," Herzfeld said. "Get it going. You've got a million dollars more in your budget right now. Go."

Taylor wasn't lying—at least, not completely.


Original Submission