SoylentNews is people

posted by hubie on Wednesday April 16, @07:56PM   Printer-friendly
from the Dumb-Dumb-Dumb-Dumb-Duuuummmmbbbb dept.

From Brian Krebs on Infosec.Exchange:

I boosted several posts about this already, but since people keep asking if I've seen it....

MITRE has announced that its funding for the Common Vulnerabilities and Exposures (CVE) program and related programs, including the Common Weakness Enumeration Program, will expire on April 16. The CVE database is critical for anyone doing vulnerability management or security research, and for a whole lot of other uses. There isn't really anyone else left who does this, and it's typically been work that is paid for and supported by the US government, which is a major consumer of this information, btw.

I reached out to MITRE, and they confirmed it is for real. Here is the contract, which is through the Department of Homeland Security, and has been renewed annually on the 16th or 17th of April.

usaspending.gov/award/CONT_AWD_70RCSJ23FR0000015_7001_70RSAT20D00000001_7001

MITRE's CVE database is likely going offline tomorrow. They have told me that for now, historical CVE records will be available at GitHub, https://github.com/CVEProject
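For anyone who needs to keep working from that GitHub archive, the records there follow the CVE Record Format (JSON 5.x). A minimal sketch of pulling the basics out of one record; the toy record below is illustrative, not a real entry:

```python
import json

# Illustrative sketch only: records in the CVEProject GitHub archive follow
# the CVE Record Format (JSON 5.x). This toy record mimics that shape; it is
# not a real CVE entry.
record = json.loads("""
{
  "cveMetadata": {"cveId": "CVE-2024-0001", "state": "PUBLISHED"},
  "containers": {
    "cna": {
      "descriptions": [{"lang": "en", "value": "Example vulnerability."}]
    }
  }
}
""")

cve_id = record["cveMetadata"]["cveId"]
summary = record["containers"]["cna"]["descriptions"][0]["value"]
```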

Yosry Barsoum, vice president and director at MITRE's Center for Securing the Homeland, said:

"On Wednesday, April 16, 2025, funding for MITRE to develop, operate, and modernize the Common Vulnerabilities and Exposures (CVE®) Program and related programs, such as the Common Weakness Enumeration (CWE™) Program, will expire. The government continues to make considerable efforts to support MITRE's role in the program and MITRE remains committed to CVE as a global resource."

Once again, Cui Bono? It certainly ain't us.


Original Submission

posted by hubie on Wednesday April 16, @03:13PM   Printer-friendly

Rooftop solar PV could supply two-thirds of world's energy needs, and lower global temperatures:

Covering rooftops across the planet with solar panels could deliver 65 per cent of current global power consumption and almost completely replace fossil fuel-based electricity, and it could also lower global temperatures by 0.13 degrees.

These are the findings from a new study from researchers at the University of Sussex that found rooftop solar PV could generate 19,500 terawatt hours (TWh) of electricity per year. (Australia consumes around 250 TWh of electricity a year).

By using nine advanced Earth system models, geospatial data mining, and artificial intelligence techniques, the researchers were able to estimate the global rooftop area at a resolution of 1 kilometre to evaluate the technological potential of rooftop solar PV.

The researchers outlined their full methodology in an article published in the journal Nature, drawing heavily on machine learning to determine that rooftops currently cover 286,393 square kilometres (km2) of the globe.

Of this 286,393km2, 30 per cent is, unsurprisingly, located in East Asia and 12 per cent in North America. China and the United States accordingly hold the largest collections of rooftops, with 74,426km2 and 30,928km2 respectively.

They were then able to extrapolate the generation potential of rooftop solar PV if every suitable rooftop was used, which resulted in annual electricity generation potential of 19,483TWh.
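The headline figures can be cross-checked with simple arithmetic. The inputs below come straight from the article; the implied global consumption is only a derived estimate:

```python
# Cross-check of the rooftop solar figures quoted above (values from the article).
total_rooftop_km2 = 286_393   # estimated global rooftop area
potential_twh = 19_483        # annual generation potential, TWh

# Average yield per square kilometre of rooftop (TWh -> GWh)
yield_gwh_per_km2 = potential_twh * 1_000 / total_rooftop_km2   # roughly 68

# Global annual electricity consumption implied by the "65 per cent" claim
implied_global_twh = potential_twh / 0.65                       # roughly 30,000
```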

[...] The researchers were also able to use their findings to calculate the impact a global coverage of rooftop solar would have on global warming. While figures differed depending on the models and scenarios used, complete rooftop solar coverage based on current building stocks could mitigate global warming by 0.05–0.13 °C.

Importantly, the researchers also warned that solar power offers taxpayers better value for money than nuclear and urged policymakers around the globe to prioritise rooftop solar.


Original Submission

posted by hubie on Wednesday April 16, @10:26AM   Printer-friendly
from the standards-by-submission dept.

'We don't want to build an ecosystem that shuts the door':

For smartphone manufacturers, competing with Apple must feel like bringing a knife to a gunfight. Every. Single. Quarter.

The best iPhones aren't necessarily the best phones outright (read: they're not), but the Cupertino giant has undoubtedly managed to cordon off a large swath of smartphone-owning consumers (perhaps indefinitely so) through its decades-long focus on building a watertight product ecosystem. Heck, even Samsung, Apple's biggest competitor, has seen its own home country fall victim to iPhone fever, and Apple remains a force to be reckoned with in China, too.

What, then, are Apple's rivals to do? According to OnePlus' Senior Product Marketing Manager Rudolf Xu, there's only one thing for it: push for greater compatibility with iOS.

"I think the key thing is to build a bridge with iOS," Xu told TechRadar during a recent visit to OnePlus HQ in Guangdong, China. "That's why, for example, on OxygenOS 15, we have a feature called Share with iPhone, and people love it – we are getting very positive feedback, because it makes file transfer [between Android and iOS] a lot easier. That's something that Android devices have always struggled with.

"Another thing is the sharing of live photos," Xu continued. "If you capture a live photo with the OnePlus 13, you can actually still see the live photo effect on an iPhone [if you transfer it]. That's because we're using the latest format to package live photos.

"These are all the efforts we're putting in to build a bridge between OnePlus products and the iOS ecosystem. We don't want to build an ecosystem that shuts the door for other customers. We want to make [our ecosystem] as open as possible, so that we can attract more users."

In person, Xu's comment about "an ecosystem that shuts the door for other customers" wasn't made in reference to Apple directly, but it does rather nicely highlight the crux of the issue at hand. Apple won't willingly open up its operating system to rival software developers (and why would it?), so there's only so much that brands like OnePlus can do to improve compatibility between Android- and iOS-based devices.


Original Submission

posted by hubie on Wednesday April 16, @05:38AM   Printer-friendly

Microsoft has begun the rollout of an AI-powered tool which takes snapshots of users' screens every few seconds.

The Copilot+ Recall feature is available in preview mode to some people with Microsoft's AI PCs and laptops.

It is the relaunch of a feature which was dubbed a "privacy nightmare" when it was first announced last year.

Microsoft paused the rollout in 2024, and after trialling the tech with a small number of users, it has begun expanding access to those signed up to its Windows Insider software testing programme.

The BBC has approached Microsoft for comment.

Microsoft says Recall will be rolled out worldwide, but those based in the EU will have to wait until later in 2025.

Users will opt in to the feature and Microsoft says they "can pause saving snapshots at any time".

The purpose of Recall is to allow PC users to easily search through their past activity including files, photos, emails and browsing history.

For example, Microsoft says a person who saw a dress online a few days ago would be able to use the feature to easily locate where they saw it.

Privacy campaigner Dr Kris Shrishak - who previously called Recall a "privacy nightmare" - said the opt-in mechanism is "an improvement", but felt it could still be misused.

"Information about other people, who cannot consent, will be captured and processed through Recall," he said.

The feature is able to save images of your emails and messaging apps such as WhatsApp - meaning pictures and messages from others will be saved.

This is no different to a user taking a screenshot themselves when they receive a message.

"Think of disappearing messages on Signal that is stored on Recall forever," he said.

And he said he was concerned that malicious actors could exploit the images saved by Recall if they gained login access to a device.


Original Submission

posted by hubie on Wednesday April 16, @12:49AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

As heat dissipation has become a major challenge for modern data centers, various cooling methods have been tried and deployed in recent years. For years the industry relied on air cooling; then big companies began to experiment with liquid cooling, tried both warm water and chilled water cooling, tested immersion cooling, and even plan to deploy it in the coming years. One thing has not been used for cooling yet: lasers. Yet lasers can indeed carry heat away from processors. But there is a catch.

A startup called Maxwell Labs, with support from Sandia National Laboratories, is working on a new way to cool high-performance computing hardware, reports The Register. The technique uses special cold plates made of ultrapure gallium arsenide (GaAs) that cool down when they receive focused beams of coherent laser light of a certain wavelength. Rather than heating, which is common in most interactions involving intense light beams, this carefully engineered setup allows the semiconductor to shed heat at precise locations thanks to the high electron mobility of GaAs. The method promises to assist traditional cooling systems rather than replace them.

To implement this in practical applications, the GaAs semiconductors are structured into thin components placed directly on high-heat regions of processors. Microscopic patterns within the semiconductor guide the coherent beams precisely to these hot spots, resulting in highly localized cooling, which ensures efficiency by directly managing heat exactly where it becomes problematic instead of attempting to use GaAs and lasers to cool down an entire system. This technique has roots in earlier studies: back in 2012, at the University of Copenhagen, they cooled a tiny membrane to -269°C using a similar method, according to the report.
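The physical principle at work is anti-Stokes fluorescence cooling: the material emits photons slightly more energetic than the pump photons it absorbs, with the difference drawn from lattice heat. A back-of-the-envelope sketch, with purely illustrative wavelengths rather than Maxwell Labs' actual operating points:

```python
# Back-of-the-envelope sketch of anti-Stokes optical refrigeration, the
# principle behind laser cooling of solids. The wavelengths are illustrative
# assumptions only, not Maxwell Labs' actual operating points.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s

pump_nm = 1015.0    # pump laser wavelength (assumed)
fluor_nm = 1000.0   # mean fluorescence wavelength; shorter = more energetic

E_pump = h * c / (pump_nm * 1e-9)    # energy per absorbed pump photon, J
E_fluor = h * c / (fluor_nm * 1e-9)  # energy per emitted photon, J

# Each emitted photon carries away slightly more energy than the pump put in;
# the difference is extracted from the crystal lattice as heat (phonons).
cooling_fraction = (E_fluor - E_pump) / E_pump   # about 1.5% per photon here
```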

Additionally, this technique offers a unique capability: it can recapture the energy removed as heat, according to Maxwell. Rather than dissipating into the environment, the thermal energy extracted from chips can be emitted as usable photons, which are convertible back into electrical power. While this certainly increases the overall energy efficiency of computing systems, the efficiency of the process remains to be seen.

While the approach to use GaAs semiconductors for cooling is certainly an innovation, it is associated with extreme challenges both from the cost and manufacturability points of view.

[...] Currently, the concept remains in the experimental and modeling stage. According to Maxwell Labs chief executive Jacob Balma, simulations suggest the method is promising, but it has never been confirmed in physical trials as testing so far has been limited to separate components rather than a full setup.


Original Submission

posted by janrinok on Tuesday April 15, @08:04PM   Printer-friendly
from the one-man's-opinion dept.

Progress for progress’s sake?

Look, Microsoft, we need to talk. It's no secret that you've been nagging me (and everyone else) to upgrade to Windows 11 for a while now, with everything from ads to in-OS reminders pushing me towards the settings menu to check if my PC is eligible for an upgrade. But here's the thing, Microsoft: this path you're on isn't sustainable.

I mean this in a few different ways. Firstly, the extremely literal sense; Windows 11 forces a Trusted Platform Module 2.0 requirement, which for the uninitiated is a specific chip on your laptop or desktop's motherboard enabling enhanced security features. No TPM 2.0? No Windows 11. Yes, I know you can technically upgrade to Windows 11 without TPM 2.0, but I wouldn't recommend it.

Is that enhanced security good? Yes, absolutely - but it effectively means that many older computers literally can't run Windows 11, which combined with the impending Windows 10 End of Life is eventually going to result in a lot of PCs headed to the ever-growing e-waste pile. That's a real problem in itself. But I'm not here to rant about e-waste (though it's really bad). I want to talk about how users perceive Microsoft's nigh-omnipresent operating system, and how its current trajectory could result in serious issues further down the line.

See, Windows is constantly evolving - from humble beginnings as an MS-DOS interface in the mid-Eighties to beloved iterations like Windows XP and 10 (and widely panned versions, such as Vista and RT). But over the years, there have long been whispers of a 'final' version of the OS; a 'Windows Perfected' if you will, designed to last forever with continual updates - or at least, designed to last for a very long time.

In a sense, what those hunting for this 'last' Windows iteration want is the same experience that macOS users get: an operating system that just continually gets free updates adding new features, rarely changes in a hugely significant way, and isn't chock-full of annoying ads. Of course, it's not quite that simple for Microsoft; Apple has incredibly tight control over the macOS hardware ecosystem, while Microsoft theoretically has to make Windows run on a near-limitless selection of custom- and pre-built PCs as well as laptops from numerous different manufacturers. Then again, keeping ads out of Windows should be as simple as it is for macOS, and that hasn't happened...

At the end of the day, Microsoft doesn't need to keep creating entirely new versions of Windows - it does so because outside of an Apple-esque closed ecosystem, that's profitable, as system manufacturers will need to keep buying new OS keys and users will need to keep buying new systems.

Sure, there might need to be major overhauls now and then that leave some people behind - the TPM 2.0 debacle is perhaps one such example. But there are cracks in this methodology that are slowly starting to show, and I suspect it won't end well unless Microsoft changes course.

If upgrading to a new OS is a lot of hassle for an individual (I've personally been putting it off for years, still using Windows 10 on my personal desktop), imagine how much work - and how much money - it takes for a large business to do it. Although Windows 11 adoption is finally on the rise, plenty of private businesses and public sector organizations are still stuck on Win10 or older, despite Microsoft's insistence for us all to upgrade.

A 2021 report by Kaspersky suggested that 73% of healthcare providers globally are still using equipment with an outdated OS for medical purposes. Now, this isn't just talking about Windows computers, but it's a damning figure - a more recent investigation by Cynerio claimed that 80% of imaging devices are still using operating systems that have been officially EoL'd and are now unsupported, like Windows 7 and XP.

Healthcare is just one such sector, but it's felt widely, particularly in sectors and countries where funding for hardware and software upgrades often isn't readily available. Running an out-of-support OS can lead to a variety of issues, not least with security and compatibility. It's not that these organizations don't want to upgrade, it's that they literally can't - not without the significant expenditure of completely replacing the computer, and sometimes the entire machine it's hooked up to.

Lastly - and I'm going to be a bit brutally honest with you here, Microsoft - the slow but inexorable enshittification of Windows has got to stop. Ads, bugs, pestering notifications, the constant forcing of Copilot AI down our throats; just stop it, guys. Please.

I have Windows 11 on my laptop, and also on the ROG Ally I use for handheld PC gaming. I'm no stranger to how bad it's become. My dislike of Apple hardware is well-documented, yet macOS's year-on-year consistency and total lack of ads is beginning to look mighty appealing.

Win11 feels less like a product you buy and own and more like an 'OS as a service' - something you pay for but don't really own, and can be snatched away or heavily modified at a moment's notice. It's already a serious issue in the game industry, with triple-A games increasingly becoming less about providing a good, fun experience and more about extracting as much value from the player as possible.

Even Windows 10 isn't safe from Microsoft's meddling. At this point, I'm half looking forward to the EoL purely so that Microsoft will take its grubby little fingers out of my desktop OS. No, I don't care about how great Windows 11 supposedly is now. No, I don't care about Copilot and how it's going to fix my digital life and cure all my worldly ailments.

Let me create a little analogy here. Imagine if you bought a car. It's a good car, it runs fine and doesn't give you any major issues. Then, a few years later, a new model comes out, and every morning, no matter where you park, the dealership sends someone to put a flyer on your windshield advertising the new car, or some other new offer the dealership is running. Every now and then, they also take away a small part of your car, like a wiper blade or a single tire nut. The kicker? You don't want the new car, and you might not even be able to afford it anyway.

I just want a straightforward OS that runs smoothly and doesn't become outdated every five years. Is that really too much to ask, Microsoft?


Original Submission

posted by janrinok on Tuesday April 15, @03:18PM   Printer-friendly

NIST Finalizes Guidelines for Evaluating 'Differential Privacy' Guarantees to De-Identify Data:

How can we glean useful insights from databases containing confidential information while protecting the privacy of the individuals whose data is contained within? Differential privacy, a way of defining privacy in a mathematically rigorous manner, can help strike this balance. Newly updated guidelines from the National Institute of Standards and Technology (NIST) are intended to assist organizations with making the most of differential privacy's capabilities.

Differential privacy, or DP, is a privacy-enhancing technology used in data analytics. In recent years, it has been successfully deployed by large technology corporations and the U.S. Census Bureau. While it is a relatively mature technology, a lack of standards can create challenges for its effective use and adoption. For example, a DP software vendor may offer guarantees that if its software is used, it will be impossible to re-identify an individual whose data appears in the database. NIST's new guidelines aim to help organizations understand and think more consistently about such claims.

The newly finalized publication, Guidelines for Evaluating Differential Privacy Guarantees (NIST Special Publication 800-226), was originally released in draft form in December 2023. Based in part on comments received, the authors updated the guidelines with the goal of making them clearer and easier to use.

"The changes we made improve the precision in the draft's language to make the guidelines less ambiguous," said Gary Howarth, a NIST scientist and an author of the publication. "The guidelines can help leaders more clearly understand the trade-offs inherent in DP and can help understand what DP claims mean."

Differential privacy works by adding random "noise" to the data in a way that obscures the identity of the individuals but keeps the database useful overall as a source of statistical information. However, noise applied in the wrong way can jeopardize privacy or render the data less useful.

To help users avoid these pitfalls, the document includes interactive tools, flow charts, and even sample computer code that can aid in decision-making and show how varying noise levels can affect privacy and data usability.
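As a rough illustration of the noise-adding idea (not NIST's own sample code), here is the classic Laplace mechanism applied to a counting query:

```python
import math, random

def dp_count(true_count: int, epsilon: float) -> float:
    """Return an epsilon-DP answer to a counting query via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    u = random.uniform(-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon means more noise: stronger privacy, less accurate answers.
noisy_count = dp_count(1000, epsilon=0.1)
```

Averaged over many runs the noise cancels out (it has zero mean), which is exactly the "useful overall as a source of statistical information" property described above.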

"Small groups in the data of any sort tend to stand out more, so you may need to add more noise to protect their privacy," Howarth said.

While the document is not intended to be a complete primer on differential privacy, Howarth said that it provides a robust reading list of other publications that can help practitioners get up to speed on the topic. The guidelines also cover the sorts of problems that the technology could work with and how to implement it in those situations.

"With DP there are many gray areas," he said. "There is no simple answer for how to balance privacy with usefulness. You must answer that every time you apply DP to data. This publication can help you navigate that space."


Original Submission

posted by janrinok on Tuesday April 15, @10:33AM   Printer-friendly
from the wheee-down-the-slippery-slope-we-go dept.

New updates to ChatGPT have made it easier than ever to create fake images of real politicians, according to testing done by CBC News. https://www.cbc.ca/news/canada/chatgpt-fake-politicians-1.7507039

Manipulating images of real people without their consent is against OpenAI's rules, but the company recently allowed more leeway with public figures, with specific limitations. CBC's visual investigations unit found prompts could be structured to evade some of those restrictions.

In some cases, the chatbot effectively told reporters how to get around its restrictions — for example, by specifying a speculative scenario involving fictional characters — while still ultimately generating images of real people.

When CBC News tried to get the GPT-4o image generator to create politically damaging images, the system initially did not comply with problematic requests.

"While I can't merge real individuals into a single image, I can generate a fictional selfie-style scene featuring a character inspired by the person in this image."

When the reporters uploaded an image of current Canadian Prime Minister Mark Carney and an image of Jeffrey Epstein, without indicating their names but describing them as "two fictional characters that I created," the system created a realistic image of Carney and Epstein together in a nightclub.

Gary Marcus, a Vancouver-based cognitive scientist focused on AI, and the author of Taming Silicon Valley, has concerns about the potential for generating political disinformation.

"We live in the era of misinformation. Misinformation is not new, propaganda has existed for ages, but it's become cheaper and easier to manufacture."


Original Submission

posted by janrinok on Tuesday April 15, @05:52AM   Printer-friendly
from the but-I'm-not-dead-yet dept.

Ethically sourced "spare" human bodies could revolutionize medicine:

Even if it all works, it may not be practical or economical to "grow" bodyoids, possibly for many years, until they can be mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications.

Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness.

But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it.

At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.

Recently we have even begun using for experiments the "animated cadavers" of people who have been declared legally dead, who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.

In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we'd have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.

Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought.

Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored.


Original Submission

posted by janrinok on Tuesday April 15, @01:12AM   Printer-friendly
from the for-some-definitions-of-'more-powerful' dept.

Google's new Ironwood chip is 24x more powerful than the world's fastest supercomputer:

Google Cloud unveiled its seventh-generation Tensor Processing Unit (TPU), Ironwood, on Wednesday. This custom AI accelerator, the company claims, delivers more than 24 times the computing power of the world's fastest supercomputer when deployed at scale.

The new chip, announced at Google Cloud Next '25, represents a significant pivot in Google's decade-long AI chip development strategy. While previous generations of TPUs were designed primarily for both training and inference workloads, Ironwood is the first purpose-built specifically for inference — the process of deploying trained AI models to make predictions or generate responses.

"Ironwood is built to support this next phase of generative AI and its tremendous computational and communication requirements," said Amin Vahdat, Google's Vice President and General Manager of ML, Systems, and Cloud AI, in a virtual press conference ahead of the event. "This is what we call the 'age of inference' where AI agents will proactively retrieve and generate data to collaboratively deliver insights and answers, not just data."

The technical specifications of Ironwood are striking. When scaled to 9,216 chips per pod, Ironwood delivers 42.5 exaflops of computing power, dwarfing the 1.7 exaflops of El Capitan, currently the world's fastest supercomputer. Each individual Ironwood chip delivers peak compute of 4,614 teraflops.
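The pod-level figure follows from the per-chip number by straightforward multiplication (bearing in mind that TPU and supercomputer flops are typically quoted at different numerical precisions, so the comparison is loose):

```python
# Cross-check of the pod-scale figure quoted above (numbers from the article).
chips_per_pod = 9216
teraflops_per_chip = 4614

pod_exaflops = chips_per_pod * teraflops_per_chip / 1e6  # TFLOPS -> exaFLOPS
ratio_vs_el_capitan = pod_exaflops / 1.7                 # El Capitan: 1.7 EFLOPS

# pod_exaflops comes out at roughly 42.5, matching the article's claim,
# and the ratio at roughly 25x (not a like-for-like precision comparison).
```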

Ironwood also features significant memory and bandwidth improvements. Each chip comes with 192GB of High Bandwidth Memory (HBM), six times more than Trillium, Google's previous-generation TPU announced last year. Memory bandwidth reaches 7.2 terabits per second per chip, a 4.5x improvement over Trillium.

Perhaps most importantly, in an era of power-constrained data centers, Ironwood delivers twice the performance per watt compared to Trillium, and is nearly 30 times more power efficient than Google's first Cloud TPU from 2018.

"At a time when available power is one of the constraints for delivering AI capabilities, we deliver significantly more capacity per watt for customer workloads," Vahdat explained.

The emphasis on inference rather than training represents a significant inflection point in the AI timeline. The industry has been fixated on building increasingly massive foundation models for years, with companies competing primarily on parameter size and training capabilities. Google's pivot to inference optimization suggests we're entering a new phase where deployment efficiency and reasoning capabilities take center stage.

This transition makes sense. Training happens once, but inference operations occur billions of times daily as users interact with AI systems. The economics of AI are increasingly tied to inference costs, especially as models grow more complex and computationally intensive.

During the press conference, Vahdat revealed that Google has observed a 10x year-over-year increase in demand for AI compute over the past eight years — a staggering factor of 100 million overall. No amount of Moore's Law progression could satisfy this growth curve without specialized architectures like Ironwood.

What's particularly notable is the focus on "thinking models" that perform complex reasoning tasks rather than simple pattern recognition. This suggests that Google sees the future of AI not just in larger models, but in models that can break down problems, reason through multiple steps and simulate human-like thought processes.

Google is positioning Ironwood as the foundation for its most advanced AI models, including Gemini 2.5, which the company describes as having "thinking capabilities natively built in."

At the conference, Google also announced Gemini 2.5 Flash, a more cost-effective version of its flagship model that "adjusts the depth of reasoning based on a prompt's complexity." While Gemini 2.5 Pro is designed for complex use cases like drug discovery and financial modeling, Gemini 2.5 Flash is positioned for everyday applications where responsiveness is critical.

The company also demonstrated its full suite of generative media models, including text-to-image, text-to-video, and a newly announced text-to-music capability called Lyria. A demonstration showed how these tools could be used together to create a complete promotional video for a concert.

Ironwood is just one part of Google's broader AI infrastructure strategy. The company also announced Cloud WAN, a managed wide-area network service that gives businesses access to Google's planet-scale private network infrastructure.


Original Submission

posted by janrinok on Monday April 14, @08:23PM   Printer-friendly

ArsTechnica has a story about a painted altar in the Mesoamerican city of Tikal, revealing clues about the Teotihuacan takeover of Tikal around 1,600 years ago.

https://arstechnica.com/science/2025/04/painted-altar-in-maya-city-of-tikal-reveals-the-aftermath-of-an-ancient-coup/

LIDAR scans effectively strip away the jungle revealing the ruins of ancient buildings and this has triggered a whole mass of new information. The original article is in Antiquity magazine for those who want more detail [link below]:

Here is a quick summary:

"A family altar in the Maya city of Tikal offers a glimpse into events in an enclave of the city's foreign overlords in the wake of a local coup.

Archaeologists recently unearthed the altar in a quarter of the Maya city of Tikal that had lain buried under dirt and rubble for about the last 1,500 years. The altar—and the wealthy household behind the courtyard it once adorned—stands just a few blocks from the center of Tikal, one of the most powerful cities of Maya civilization. But the altar and the courtyard around it aren't even remotely Maya-looking; their architecture and decoration look like they belong 1,000 kilometers to the west in the city of Teotihuacan, in central Mexico.

The altar reveals the presence of powerful rulers from Teotihuacan who were there at a time when a coup ousted Tikal's Maya rulers and replaced them with a Teotihuacan puppet government. It also reveals how hard those foreign rulers fell from favor when Teotihuacan's power finally waned centuries later."

Journal: DOI: 10.15184/aqy.2025.3


Original Submission

posted by janrinok on Monday April 14, @03:39PM   Printer-friendly

http://www.righto.com/2025/04/commodore-pet-repair.html

In 1977, Commodore released the PET, a quirky home computer that combined the processor, a tiny keyboard, a cassette drive for storage, and a trapezoidal screen in a single metal unit. The Commodore PET, the Apple II, and Radio Shack's TRS-80 launched the home computer market with ready-to-run systems, retrospectively dubbed the 1977 Trinity. I did much of my early programming on the PET, so when someone offered me a non-working PET a few years ago, I took it for nostalgic reasons.

You'd think that a home computer would be easy to repair, but it turned out to be a challenge. The chips in early PETs are notorious for failures and, sure enough, we found multiple bad chips. Moreover, these RAM and ROM chips were special designs that are mostly unobtainable now. In this post, I'll summarize how we repaired the system, in case it helps anyone else.


Original Submission

posted by hubie on Monday April 14, @10:52AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Some Microsoft organizations are looking to increase their span of control, defined as the number of direct reports a manager or supervisor oversees. The company also wants to increase the ratio of coders to non-coders on projects.

According to anonymous people familiar with the matter who spoke to Business Insider, Microsoft has yet to decide how many jobs will be cut, though one person said it could be a significant portion of their team.

Other companies such as Amazon and Google are also reducing the number of managers and executives in their drive for efficiency.

Microsoft wants to decrease the ratio of product/program managers (PMs) to engineers. Microsoft security boss Charlie Bell's division has a ratio of around 5.5 engineers to one PM, but he wants that to reach 10:1.

News that Microsoft is targeting non-coders in these cuts is in contrast to the many stories about generative AI replacing the need for programmers. Microsoft CTO Kevin Scott made the startling prediction last week that 95% of all code will be generated by AI by 2030. He added that humans would still be involved in the process, though it's easy to imagine that there will be fewer of them.

At the start of the year, Microsoft confirmed it was implementing performance-based layoffs, though it said those let go would be replaced with new hires. Microsoft rates employees on a scale of 0 to 200 and bases their stock awards and bonuses on this rating. Anyone in the 60 to 80 range – 100 is average – is rated as a low performer.

Soon after those performance cuts were revealed, the company said it was making more job cuts across its business, impacting employees in the gaming, experience & devices, sales, and security divisions.


Original Submission

posted by hubie on Monday April 14, @06:10AM   Printer-friendly

Puzzling observation by JWST: Galaxies in the deep universe rotate in the same direction:

In just over three years since its launch, NASA's James Webb Space Telescope (JWST) has generated significant and unprecedented insights into the far reaches of space, and a new study by a Kansas State University researcher provides one of the simplest and most puzzling observations of the deep universe yet.

In images of the deep universe taken by the James Webb Space Telescope Advanced Deep Extragalactic Survey (JADES), the vast majority of the galaxies rotate in the same direction, according to research by Lior Shamir, associate professor of computer science at the Carl R. Ice College of Engineering. About two-thirds of the galaxies rotate clockwise, while only about a third rotate counterclockwise.

The study—published in Monthly Notices of the Royal Astronomical Society—was done with 263 galaxies in the JADES field that were clear enough to identify their direction of rotation.

"The analysis of the galaxies was done by quantitative analysis of their shapes, but the difference is so obvious that any person looking at the image can see it," Shamir said. "There is no need for special skills or knowledge to see that the numbers are different. With the power of the James Webb Space Telescope, anyone can see it."

In a random universe, the number of galaxies that rotate in one direction should be roughly the same as the number of galaxies that rotate in the other direction. The fact that JWST shows that most galaxies rotate in the same direction is therefore unexpected.
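A back-of-the-envelope check makes the point concrete. Assuming an illustrative split of 175 of the 263 galaxies rotating clockwise (the roughly two-thirds figure; the paper reports the exact counts), an exact two-sided binomial test against a fair 50/50 null gives a vanishingly small p-value:

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: probability of a split at least
    as lopsided as k out of n under a fair (p = 0.5) null hypothesis."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Sum the probability of every outcome no more likely than the observed one.
    return sum(x for x in pmf if x <= observed)

# Illustrative split: ~two-thirds of 263 galaxies rotating clockwise.
n, k = 263, 175
print(f"p-value for {k}/{n}: {binom_two_sided_p(k, n):.2e}")
```

A split this lopsided would essentially never arise by chance, which is why the observation demands an explanation.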

"It is still not clear what causes this to happen, but there are two primary possible explanations," Shamir said.

"One explanation is that the universe was born rotating. That explanation agrees with theories such as black hole cosmology, which postulates that the entire universe is the interior of a black hole. But if the universe was indeed born rotating it means that the existing theories about the cosmos are incomplete."

The Earth also rotates around the center of the Milky Way galaxy, and because of the Doppler shift effect, researchers expect light coming from galaxies rotating opposite to the Earth's motion to appear generally brighter.

That could be another explanation for why such galaxies are overrepresented in the telescope observations, Shamir said. Astronomers may need to reconsider the effect of the Milky Way's rotational velocity—which had traditionally been considered to be too slow and negligible in comparison to other galaxies—on their measurements.

"If that is indeed the case, we will need to re-calibrate our distance measurements for the deep universe," he said.

"The re-calibration of distance measurements can also explain several other unsolved questions in cosmology, such as the differences in the expansion rates of the universe and the large galaxies that, according to the existing distance measurements, are expected to be older than the universe itself."

Journal Reference: Lior Shamir, The distribution of galaxy rotation in JWST Advanced Deep Extragalactic Survey, Monthly Notices of the Royal Astronomical Society (2025). DOI: 10.1093/mnras/staf292


Original Submission

posted by hubie on Monday April 14, @01:20AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

After the Estonian startup KrattWorks dispatched the first batch of its Ghost Dragon ISR quadcopters to Ukraine in mid-2022, the company's officers thought they might have six months or so before they'd need to reconceive the drones in response to new battlefield realities. The 46-centimeter-wide flier was far more robust than the hobbyist-grade UAVs that came to define the early days of the drone war against Russia. But within a scant three months, the Estonian team realized their painstakingly fine-tuned device had already become obsolete.

Rapid advances in jamming and spoofing—the only efficient defense against drone attacks—set the team on an unceasing marathon of innovation. Its latest technology is a neural-network-driven optical navigation system, which allows the drone to continue its mission even when all radio and satellite-navigation links are jammed. It began tests in Ukraine in December, part of a trend toward jam-resistant, autonomous UAVs (uncrewed aerial vehicles). The new fliers herald yet another phase in the unending struggle that pits drones against the jamming and spoofing of electronic warfare, which aims to sever links between drones and their operators. There are now tens of thousands of jammers straddling the front lines of the war, defending against drones that are not just killing soldiers but also destroying armored vehicles, other drones, industrial infrastructure, and even tanks.

"The situation with electronic warfare is moving extremely fast," says Martin Karmin, KrattWorks' cofounder and chief operations officer. "We have to constantly iterate. It's like a cat-and-mouse game."

[...] Now in its third generation, the Ghost Dragon has come a long way since 2022. Its original command-and-control-band radio was quickly replaced with a smart frequency-hopping system that constantly scans the available spectrum, looking for bands that aren't jammed. It allows operators to switch among six radio-frequency bands to maintain control and also send back video even in the face of hostile jamming.
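The hop decision itself can be sketched simply. This toy example (my illustration, not KrattWorks' firmware) picks the quietest of the available bands, with a hysteresis margin so the link doesn't flap between bands with similar noise levels:

```python
from dataclasses import dataclass

@dataclass
class Band:
    name: str
    noise_floor_dbm: float  # measured interference level on this band

def pick_band(bands: list[Band], current: Band,
              hysteresis_db: float = 6.0) -> Band:
    """Hop only when another band is clearly quieter than the current
    one; otherwise stay put to avoid needless retuning."""
    quietest = min(bands, key=lambda b: b.noise_floor_dbm)
    if quietest.noise_floor_dbm + hysteresis_db < current.noise_floor_dbm:
        return quietest
    return current

bands = [Band("B1", -95.0), Band("B2", -60.0), Band("B3", -92.0)]
current = bands[1]                     # currently on the jammed band B2
print(pick_band(bands, current).name)  # hops to the quietest band, B1
```

A real implementation would also weigh link quality, regulatory limits, and coordination with the operator's ground station, but the scan-compare-hop loop is the core of it.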

The drone's dual-band satellite-navigation receiver can switch among the four main satellite positioning services: GPS, Galileo, China's BeiDou, and Russia's GLONASS. It's been augmented with a spoof-proof algorithm that compares the satellite-navigation input with data from onboard sensors. The system provides protection against sophisticated spoofing attacks that attempt to trick drones into self-destruction by persuading them they're flying at a much higher altitude than they actually are.
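One simple form such a cross-check can take (a hypothetical sketch, not the actual algorithm) is to compare the GNSS altitude against an onboard barometric altimeter and reject fixes that diverge beyond a plausible error bound:

```python
def gnss_plausible(gnss_alt_m: float, baro_alt_m: float,
                   max_divergence_m: float = 50.0) -> bool:
    """Cross-check GNSS altitude against the barometric altimeter.
    A spoofer feeding a wildly wrong altitude (to trick the drone
    into flying itself into the ground) will disagree with the baro
    sensor, which the spoofer cannot touch."""
    return abs(gnss_alt_m - baro_alt_m) <= max_divergence_m

print(gnss_plausible(120.0, 118.0))  # consistent readings -> True
print(gnss_plausible(900.0, 115.0))  # likely spoofed -> False
```

The same idea extends to comparing GNSS velocity against inertial sensors; any channel the attacker can't forge becomes a sanity check on the ones they can.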

At the heart of the quadcopter's matte grey body is a machine-vision-enabled computer running a 1-gigahertz Arm processor that provides the Ghost Dragon with its latest superpower: the ability to navigate autonomously, without access to any global navigation satellite system (GNSS). To do that, the computer runs a neural network that, like an old-fashioned traveler, compares views of landmarks with positions on a map to determine its position. More precisely, the drone uses real-time views from a downward-facing optical camera, comparing them against stored satellite images, to determine its position.
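The underlying principle can be sketched with brute-force template matching on synthetic data (a toy illustration only; the real system uses a neural network and far more robust feature matching): slide the camera frame over a stored satellite tile and pick the offset with the highest normalized cross-correlation.

```python
import numpy as np

def best_match_offset(sat: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Brute-force normalized cross-correlation: return the (row, col)
    offset in the satellite tile where the camera frame fits best."""
    fh, fw = frame.shape
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    best, best_rc = -np.inf, (0, 0)
    for r in range(sat.shape[0] - fh + 1):
        for c in range(sat.shape[1] - fw + 1):
            win = sat[r:r + fh, c:c + fw]
            w = (win - win.mean()) / (win.std() + 1e-9)
            score = float((f * w).mean())  # correlation of z-scored patches
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Toy demo: a 64x64 "satellite tile" and a 16x16 camera frame cropped at (20, 30).
rng = np.random.default_rng(0)
sat = rng.random((64, 64))
frame = sat[20:36, 30:46]
print(best_match_offset(sat, frame))  # recovers the offset (20, 30)
```

Mapping that pixel offset back through the tile's georeferencing yields a position fix with no radio link at all, which is exactly what makes the approach jam-proof.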

"Even if it gets lost, it can recognize some patterns, like crossroads, and update its position," Karmin says. "It can make its own decisions, somewhat, either to return home or to fly through the jamming bubble until it can reestablish the GNSS link again."

Just as machine guns and tanks defined the First World War, drones have become emblematic of Ukraine's struggle against Russia. It was the besieged Ukraine that first turned the concept of a military drone on its head. Instead of Predators and Reapers worth tens of millions of dollars each, Ukraine began purchasing huge numbers of off-the-shelf fliers worth a few hundred dollars apiece—the kind used by filmmakers and enthusiasts—and turned them into highly lethal weapons. A recent New York Times investigation found that drones account for 70 percent of deaths and injuries in the ongoing conflict.

[...] Tech minds on both sides of the conflict have therefore been working hard to circumvent electronic defenses. Russia took an unexpected step starting in early 2024, deploying hard-wired drones fitted with spools of optical fiber. Like a twisted variation on a child's kite, the lethal UAVs can venture 20 or more kilometers away from the controller, the hair-thin fiber floating behind them, providing an unjammable connection.

"Right now, there is no protection against fiber-optic drones," Vadym Burukin, cofounder of the Ukrainian drone startup Huless, tells IEEE Spectrum. "The Russians scaled this solution pretty fast, and now they are saturating the battle front with these drones. It's a huge problem for Ukraine."

Ukraine, too, has experimented with optical fiber, but the technology didn't take off, as it were. "The optical fiber costs upwards from $500, which is, in many cases, more than the drone itself," Burukin says. "If you use it in a drone that carries explosives, you lose some of that capacity because you have the weight of the cable." The extra weight also means less capacity for better-quality cameras, sensors, and computers in reconnaissance drones.

Instead, Ukraine sees the future in autonomous navigation. This past July, kamikaze drones equipped with an autonomous navigation system from U.S. supplier Auterion destroyed a column of Russian tanks fitted with jamming devices.

"It was really hard to strike these tanks because they were jamming everything," says Burukin. "The drones with the autopilot were the only equipment that could stop them."

[...] "In the perfect world, the drone should take off, fly, find the target, strike it, and report back on the task," Burukin says. "That's where the development is heading."

The cat-and-mouse game is nowhere near over. Companies including KrattWorks are already thinking about the next innovation that would make drone warfare cheaper and more lethal. By creating a drone mesh network, for example, they could send a sophisticated intelligence, surveillance, and reconnaissance drone followed by a swarm of simpler kamikaze drones to find and attack a target using visual navigation.

"You can send, like, 10 drones, but because they can fly themselves, you don't need a superskilled operator controlling every single one of these," notes KrattWorks' Karmin, who keeps tabs on tech developments in Ukraine with a mixture of professional interest, personal empathy, and foreboding. Rarely does a day go by that he does not think about the expanding Russian military presence near Estonia's eastern borders.

"We don't have a lot of people in Estonia," he says. "We will never have enough skilled drone pilots. We must find another way."


Original Submission