



posted by janrinok on Thursday May 15, @08:00PM   Printer-friendly
from the ai-job-watch dept.

Arthur T Knackerbracket has processed the following story:

Just two years ago, prompt engineering was the talk of the tech world – a seemingly essential new job born from the rapid rise of artificial intelligence. Companies were eager to hire specialists who could craft the right questions for large language models, ensuring optimal AI performance. The role was accessible, required little technical background, and was seen as a promising entry point into a booming industry.

Today, however, prompt engineering as a standalone role has all but disappeared. What was once a highly touted skill set is now simply expected of anyone working with AI. In an ironic twist, some companies are even using AI to generate prompts for their own AI systems, further diminishing the need for human prompt engineers.

The brief rise and rapid fall of prompt engineering highlights a broader truth about the AI job market: new roles can vanish as quickly as they appear. "AI is already eating its own," says Malcolm Frank, CEO of TalentGenius, in an interview with Fast Company.

"Prompt engineering has become something that's embedded in almost every role, and people know how to do it. Also, now AI can help you write the perfect prompts that you need. It's turned from a job into a task very, very quickly."
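The point that prompting has collapsed from a job into a task can be made concrete with a minimal sketch (purely illustrative; the function and field names are invented, not any company's tooling):

```python
# Toy illustration: what once occupied a "prompt engineer" is now often a
# templating task, and the template itself can be produced by another model call.

def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from a few fields."""
    lines = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        lines.append("Constraints:")
        lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)

prompt = build_prompt(
    role="a concise technical summarizer",
    task="summarize the article in three bullet points",
    constraints=["plain language", "no speculation"],
)
print(prompt)
```

In practice the output of a helper like this is simply fed to an LLM API, which is exactly why the skill no longer supports a standalone job title.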

The initial appeal of prompt engineering was its low barrier to entry. Unlike many tech roles, it didn't require years of specialized education or coding experience, making it especially attractive to job seekers hoping to break into AI. In 2023, LinkedIn profiles were filled with self-described prompt engineers, and the North American market for prompt engineering was valued at $75.5 million, growing at a rate of 32.8 percent annually.

Yet the hype outpaced reality. According to Allison Shrivastava, an economist at the Indeed Hiring Lab, prompt engineering was rarely listed as an official job title. Instead, it has typically been folded into roles like machine learning engineer or automation architect. "I'm not seeing it as a standalone job title," she added.

As the hype fades, the AI job market is shifting toward roles that require deeper technical expertise. The distinction is clear: while prompt engineers focused on crafting queries for LLMs, machine learning engineers are the ones building and improving those models.

Lerner notes that demand for mock interviews for machine learning engineers has surged, increasing more than threefold in just two months. "The future is working on the LLM itself and continuing to make it better and better, rather than needing somebody to interpret it," she says.

This shift is also evident in hiring trends. Shrivastava points out that while demand for general developers is declining, demand for engineering roles overall is rising. For those without a coding background, options are narrowing.

Founding a company or moving into management consulting, where expertise in AI implementation is increasingly valued, may be the best routes forward. As of February, consulting positions made up 12.4% of AI job titles on Indeed, signaling a boom in advisory roles as organizations seek to integrate AI into their operations.

Tim Tully, a partner at Menlo Ventures, has seen firsthand how AI is changing the nature of work, not necessarily by creating new jobs, but by reshaping existing ones. "I wouldn't say that [there are] new jobs, necessarily; it's more so that it's changing how people work," Tully says. "You're using AI all the time now, whether you like it or not, and it's accelerating what you do."


Original Submission

Processed by kolie

posted by hubie on Thursday May 15, @03:16PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

This engineering marvel necessitated custom userspace GPU drivers and probably a patched adapter firmware as well.

External GPU (eGPU) support on Apple Silicon Macs and MacBooks has been a persistent pain point for AI/ML developers. Through what some may consider to be black magic, Tiny Corp has managed to get an AMD eGPU working in Tiny Grad over USB3, a standard that inherently lacks PCIe capabilities. As they're using libusb, this functionality extends to Windows, Linux, and even macOS, including devices with Apple Silicon.

Traditionally, GPUs are connected through PCIe slots or the Thunderbolt/USB4 interfaces, which offer PCI Express tunneling support. As such, external GPU solutions rely on the aforementioned interfaces, which limits their support for older systems and laptops. Unlike Intel-based Macs/MacBooks, Apple Silicon based devices do not support external GPUs, mainly due to the lack of driver support and architectural differences. So, despite their efficiency compared to traditional x86-based systems, users have reported challenges in AI workloads, especially when it comes to prompt processing.

Requirements for running an eGPU through a USB3 interface currently include an ASM2464PD-based adapter and an AMD GPU. For its tests, Tiny Corp used the ADT-UT3G adapter, which uses the same ASM2464PD chip but out of the box works only over Thunderbolt 3, Thunderbolt 4, or USB4 interfaces. The team likely employed custom firmware to enable a USB3 mode that works without direct PCIe communication. Technical details are murky; however, the controller appears to translate PCIe commands to USB packets and vice versa.
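As a purely conceptual sketch of what such a translation layer might do (the real ASM2464PD protocol is undocumented, and this framing format is invented for illustration), a PCIe-style read request could be serialized into a USB bulk packet and parsed back on the other side:

```python
import struct

# Toy sketch only: frames a (made-up) PCIe-style memory-read request into a
# flat byte packet suitable for a USB bulk transfer, with no PCIe tunneling
# on the wire. Layout: 1-byte opcode, 8-byte address, 4-byte length.

READ_REQ = 0x01

def frame_read_request(addr: int, length: int) -> bytes:
    """Pack a read request into a 13-byte little-endian packet."""
    return struct.pack("<BQI", READ_REQ, addr, length)

def parse_read_request(packet: bytes) -> tuple[int, int]:
    """Unpack a packet and return (address, length)."""
    op, addr, length = struct.unpack("<BQI", packet)
    assert op == READ_REQ
    return addr, length

pkt = frame_read_request(0x1000_0000, 4096)
print(parse_read_request(pkt))  # → (268435456, 4096)
```

The real translation layer would of course carry writes, doorbells, and DMA-style bulk data as well; the point is only that each PCIe-level operation becomes an ordinary USB packet that libusb can send from userspace.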

The solution is quite hacky, as it bypasses kernel-level GPU drivers, requires specific hardware, and uses USB3, which was not originally intended for GPU communication. It essentially offloads the computation part, referring to kernel executions, from your system to the eGPU. The constraint here is that data transfer speeds are capped at 10 Gbps due to the USB3 standard used, so loading models into the GPU will take much longer than if you were to use a standard PCIe connection.
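A back-of-envelope calculation shows what that cap means in practice (the 16 GB model size and the ~64 Gb/s figure for PCIe 4.0 x4 are illustrative assumptions, ignoring protocol overhead):

```python
# Rough comparison: time to push a hypothetical 16 GB model over USB3 at
# 10 Gb/s versus a PCIe 4.0 x4 link at roughly 64 Gb/s raw.

MODEL_BYTES = 16 * 10**9          # hypothetical 16 GB model
USB3_BPS = 10 * 10**9 / 8         # 10 Gb/s -> bytes per second
PCIE4_X4_BPS = 64 * 10**9 / 8     # ~64 Gb/s -> bytes per second

usb_seconds = MODEL_BYTES / USB3_BPS
pcie_seconds = MODEL_BYTES / PCIE4_X4_BPS
print(f"USB3: {usb_seconds:.1f} s, PCIe 4.0 x4: {pcie_seconds:.1f} s")
# → USB3: 12.8 s, PCIe 4.0 x4: 2.0 s
```

So model loading is several times slower over USB3, but once weights are resident on the card, compute-bound kernels are far less affected by the link speed.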

Since it uses custom user-space drivers to avoid tinkering with the kernel, the feature is limited to AMD's RDNA 3/4 GPUs, although there's a hint of potential RDNA 2 support in the future. USB3 eGPU functionality has been upstreamed to Tiny Grad's master branch, so if you have an AMD GPU and a supported adapter, feel free to try it out. We can expect Tiny Corp to provide a more detailed technical breakdown once its developers are done tidying up the code.


Original Submission

posted by hubie on Thursday May 15, @10:30AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Lazarus 4 is the latest version of the all-FOSS but Delphi-compatible IDE for the FreePascal compiler.

The IDE is developed independently from the underlying Pascal language that it's written in, so this doesn't mean a whole new version of FreePascal: Lazarus 4 was built with FreePascal 3.2.2 which was released in 2021. It replaces Lazarus 3.8, which was the current version when we talked about it for Delphi's 30th anniversary back in February.

[...] It's a multi-platform IDE, and the Sourceforge page has packages for both 32-bit and 64-bit Windows, Linux, and FreeBSD. On Apple hardware, it offers PowerPC, x86 and Arm64 versions; Cocoa development needs macOS 12 or higher, but using the older Carbon APIs it supports OS X 10.5 to 10.14. There's also a Raspberry Pi version for the Pi 4 and later. It supports a wide variety of toolkits for GUI programming, as the project wiki shows: Win32, Gtk2 and work-in-progress Gtk3, and Qt versions 4, 5 and 6, among others.

One criticism we've seen of the FreePascal project in general concerns its documentation, although there is quite a lot of it: eight FPC manuals, and lengthy Lazarus docs in multiple languages. There is a paid-for tutorial e-book available, too.

Something which might help newcomers to the language here is a new e-book: FreePascal From Square One by Jeff Duntemann. The author says:

It's a distillation of the four editions of my Pascal tutorial, Complete Turbo Pascal, which first appeared in 1985 and culminated in Borland Pascal 7 From Square One in 1993. I sold a lot of those books and made plenty of money, so I'm now giving it away, in hopes of drawing more people into the Pascal universe.

[...] There are other free resources out there, such as this course in Modern Pascal. The more, the merrier, though.

Pascal isn't cool or trendy any more, but even so, it remains in the top ten on the TIOBE index. Perhaps these new releases will help it to rise up the ratings a little more.


Original Submission

posted by hubie on Thursday May 15, @05:47AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The head of the US Copyright Office has reportedly been fired, the day after the agency concluded that AI model builders' use of copyrighted material went beyond existing doctrines of fair use.

The office’s opinion on fair use came in a draft of the third part of its report on copyright and artificial intelligence. The first part considered digital replicas and the second tackled whether it is possible to copyright the output of generative AI.

The office published the draft [PDF] of Part 3, which addresses the use of copyrighted works in the development of generative AI systems, on May 9th.

The draft notes that generative AI systems “draw on massive troves of data, including copyrighted works” and asks: “Do any of the acts involved require the copyright owners’ consent or compensation?”

That question is the subject of several lawsuits, because developers of AI models have admitted to training their products on content scraped from the internet and other sources without compensating content creators or copyright owners. AI companies have argued fair use provisions of copyright law mean they did no wrong.

As the report notes, one test courts use to determine fair use considers “the effect of the use upon the potential market for or value of the copyrighted work”. If a judge finds an AI company’s use of copyrighted material doesn’t impact a market or value, fair use will apply.

The report finds AI companies can’t sustain a fair use defense in the following circumstances:

When a model is deployed for purposes such as analysis or research… the outputs are unlikely to substitute for expressive works used in training. But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.

The office will soon publish a final version of Part 3, which it says will emerge “without any substantive changes expected in the analysis or conclusions.”

Tech law professor Blake E. Reid described the report as “very bad news for the AI companies in litigation” and “a straight-ticket loss for the AI companies”.

Among the AI companies currently in litigation on copyright matters are Google, Meta, OpenAI, and Microsoft. All four made donations to Donald Trump’s inauguration fund.

Reid’s post also pondered the timing of the Part 3 report – despite the office saying it was released “in response to congressional inquiries and expressions of interest from stakeholders” – and wrote “I continue to wonder (speculatively!) if a purge at the Copyright Office is incoming and they felt the need to rush this out.”

Reid looks prescient as the Trump administration reportedly fired the head of the Copyright Office, Shira Perlmutter, on Saturday.

Representative Joe Morelle (D-NY) wrote that the termination was “…surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”

[...] There’s another possible explanation for Perlmutter’s ousting: The Copyright Office is a department of the Library of Congress, whose leader was last week fired on grounds of “quite concerning things that she had done … in the pursuit of DEI [diversity, equity, and inclusion] and putting inappropriate books in the library for children," according to White House press secretary Karoline Leavitt.

So maybe this is just the Trump administration enacting its policy on diversity without regard to the report’s possible impact on donors or Elon Musk.


Original Submission

posted by janrinok on Thursday May 15, @01:01AM   Printer-friendly
from the fishing-for-fissiles dept.

Arthur T Knackerbracket has processed the following story:

Chinese researchers have developed an extremely energy efficient and low-cost technology for extracting uranium from seawater, a potential boon to the country’s nuclear power ambitions. China currently leads the world in building new nuclear power plants, and shoring up its supply of uranium will help these efforts.

The world’s oceans hold an estimated 4.5 billion tonnes of uranium – more than 1000 times that available to mining – but it is extremely dilute. Previous experimental efforts have harvested uranium from seawater by physically soaking it up with artificial sponges or a polymer material inspired by blood vessel patterns, or by the more efficient and more expensive electrochemical method of trapping uranium atoms with electric fields.

The new electrochemical approach was able to extract 100 per cent of the uranium atoms from a salty, seawater-like solution within 40 minutes. By comparison, some physical adsorption methods extract less than 10 per cent of the available uranium.

The system is “very innovative” and “a significant step forward compared to… existing uranium extraction methods”, says Shengqian Ma at the University of North Texas, who wasn’t involved in the new research.

[...]

When tested with small amounts of natural seawater – about 1 litre running through the system at any time – the new method was able to extract 100 per cent of uranium from East China Sea water and 85 per cent from South China Sea water. In the latter case, the researchers also achieved 100 per cent extraction with larger electrodes.

The experiments also showed that the energy required was more than 1000-fold less than for other electrochemical methods. The whole process cost about $83 per kilogram of extracted uranium: less than half the cost of physical adsorption methods, at about $205 per kilogram, and roughly a quarter the cost of previous electrochemical methods, at $360 per kilogram.
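Those ratios can be sanity-checked against the article's own figures:

```python
# Cost comparison using the per-kilogram figures quoted in the article.
new_cost = 83            # USD/kg, new electrochemical method
adsorption = 205         # USD/kg, physical adsorption
electrochemical = 360    # USD/kg, previous electrochemical methods

print(f"vs adsorption: {adsorption / new_cost:.1f}x cheaper")
print(f"vs prior electrochemical: {electrochemical / new_cost:.1f}x cheaper")
# → vs adsorption: 2.5x cheaper
# → vs prior electrochemical: 4.3x cheaper
```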

Scaling up the size and volume of the new devices – along with potentially stacking or connecting them together – could lead to “industrialisation of uranium extraction from seawater in the future”, the researchers wrote. Given a 58-hour test in 100 litres of seawater, their largest experimental array extracted more than 90 per cent of the available uranium.

One of the most successful previous demonstrations of harvesting uranium from seawater came in the 1990s, when the Japan Atomic Energy Agency extracted a kilogram of the element from the ocean using a physical adsorption method. That set a milestone that has inspired Chinese academic and industry researchers ever since.

In 2019, a Chinese state-owned nuclear company teamed up with research institutes to form the Seawater Uranium Extraction Technology Innovation Alliance. This organisation aims to build a demonstration plant by 2035 and achieve ongoing industrial production by 2050, according to the South China Morning Post.

“From an engineering perspective, there is still a long way to go before implementing this method and any electrochemical-based method for large-scale uranium extraction from seawater,” says Ma.

Half of the nuclear reactor projects currently under construction are in China. The country is on track to surpass the US and the European Union in total installed nuclear power capacity by 2030, according to the International Energy Agency.

But China’s nuclear industry also imports most of the uranium that it uses, so any uranium it can economically extract from seawater will be more than welcome.

Journal reference

Nature Sustainability DOI: 10.1038/s41893-025-01567-z


Original Submission

Processed by kolie

posted by janrinok on Wednesday May 14, @08:13PM   Printer-friendly

So, I need to develop a service onto some server software on Github.

This open source project has been running for well over 12 years now (it started back on SourceForge), and it seems to be the only reliable piece of software implementing the protocol I need.

Still driven by its original author, it currently counts 573 files spread across 131 directories, using 2 different programming languages, one macro language, 2 scripting languages and of course the shell and Makefile.

Documentation exists for some functions, but not, of course, for [an unknown number of] others. Documentation -- apart from one-line comments interspersed within the code -- consists of a short functionality description, parameters and return type. There is no architecture design document, nor much explanation of how the different parts fit together.

I've already managed to insert a small proto service. In doing so, I noticed that, for one reason or another, I cannot directly write to the outside world; and also that the developer(s) implemented their own versions of specific standard library functions.

I've already sacrificed a newborn lamb and splattered its blood over my laptop, but I wonder, oh Soylentils, how would you approach this task? What steps would you take, what tools would you use, and what sacrifices would you make?


Original Submission

Prepared by kolie

posted by janrinok on Wednesday May 14, @03:30PM   Printer-friendly
from the not-a-slap-on-the-wrist dept.

Google Pays $1.375 Billion to Texas Over Unauthorized Tracking and Biometric Data Collection:

Google has agreed to pay the U.S. state of Texas nearly $1.4 billion to settle two lawsuits that accused the company of tracking users' personal location and maintaining their facial recognition data without consent.

The $1.375 billion payment dwarfs the fines the tech giant has paid to settle similar lawsuits brought by other U.S. states. In November 2022, it paid $391 million to a group of 40 states. In January 2023, it paid $29.5 million to Indiana and Washington. In September of that year, it forked out another $93 million to settle with California.

The case, originally filed in 2022, concerned the unlawful tracking and collection of user data, including geolocation, incognito searches, and biometric data: Google tracked users' whereabouts even when the Location History setting was disabled and collected biometric data without informed consent.

"For years, Google secretly tracked people's movements, private searches, and even their voiceprints and facial geometry through their products and services," Texas Attorney General Ken Paxton said in a statement.

"This $1.375 billion settlement is a major win for Texans' privacy and tells companies that they will pay for abusing our trust."

Last year, Google announced plans to store Maps Timeline data locally on users' devices instead of their Google accounts. The company has also rolled out other privacy controls that allow users to auto-delete location information when the Location History setting is enabled.

The payment also rivals a $1.4 billion fine that Meta paid Texas to settle a lawsuit over allegations that it illegally collected the biometric data of millions of users without their permission.

The development comes at a time when Google is the subject of intense regulatory scrutiny on both sides of the Atlantic, facing calls to break up parts of its business to satisfy antitrust concerns.



Original Submission

Prepared by kolie

posted by hubie on Wednesday May 14, @10:37AM   Printer-friendly

A Rapid7 threat hunter wrote a PoC. No, he's not releasing it.

RSAC If Rapid7's Christiaan Beek decided to change careers and become a ransomware criminal, he knows exactly how he'd innovate: CPU ransomware.

The senior director of threat analytics for the cybersecurity company got the idea from a bad bug in AMD Zen chips that, if exploited by highly skilled attackers, would allow those intruders to load unapproved microcode into the processors, breaking encryption at the hardware level and modifying CPU behavior at will.

Typically, only chip manufacturers can provide the correct microcode for their CPUs, which they might do to improve performance or fix holes. While it's difficult for outsiders to figure out how to write new microcode, it's not impossible - in the case of the AMD bug, Google demonstrated it could inject microcode to make the chip always choose the number 4 when asked for a random number.

"Coming from a background in firmware security, I was like, woah, I think I can write some CPU ransomware," Beek told The Register.

Spoiler alert: Beek followed through and wrote proof-of-concept code for ransomware that hides in the computer's processor. "Of course, we won't release that, but it's fascinating, right?"

This, according to Beek, is the worst-case scenario. "Ransomware at the CPU level, microcode alteration, and if you are in the CPU or the firmware, you will bypass every freaking traditional technology we have out there."

[...] While Beek says he hasn't yet found a working malware sample in the wild, "if they worked on it a few years ago, you can bet some of them will get smart enough at some point and start creating this stuff."

Beek knows it's possible because he's already done it himself.

"We should not be talking about ransomware in 2025 — and that fault falls on everyone: the vendors, the end users, cyber insurers," Beek told The Register.

"Twelve years later, we're still fighting the battle," he said. "While we're still seeing a lot of technological evolution, everybody's shouting agentic, AI, ML. And if we're bloody honest, we still haven't fixed our foundations."

How attackers break in "is not rocket science," he added. "What I'm seeing with a lot of ransomware breaches: it's a high-risk vulnerability, or a weak password, or we haven't deployed multi-factor authentication, or it's wrongly deployed. That is frustrating."

What should organizations do? Beek urges everyone to focus on cybersecurity basics. "We spend a lot of our time and money as an industry on innovation," he said. "But at the same time, our cyber hygiene is not improving."


Original Submission #1 | Original Submission #2

posted by hubie on Wednesday May 14, @05:51AM   Printer-friendly
from the Lovely-spam!-Wonderful-spam! dept.

Arthur T Knackerbracket has processed the following story:

The Norse ravaged much of Europe for centuries. They were also cosmopolitan explorers who followed trade winds into the Far East.

In the middle of the 9th century, in an office somewhere in the Jibāl region of what is now western Iran, a man is dictating to a scribe. It is the 840s of the Common Era, though the people in this eastern province of the great Caliphate of the ’Abbāsids – an Islamic superpower with its capital in Baghdad – live by the Hijri calendar. The man’s name is Abu ’l-Qāsim ʿUbayd Allāh b ʿAbd Allāh Ibn Khurradādhbih, and he is the director of posts and police for this region.

In his office, he is compiling a report as part of his duties. As his job title implies, he oversees communications and security in the Jibāl region, reporting to officials in Baghdad. What he provides is an intelligence service: in essence, Ibn Khurradādhbih is what we would call a station chief, like those CIA officials who manage clandestine operations abroad. The report he’s working on is part of a much larger document that will one day be known as Kitāb al-Masālik wa l-mamālik (the ‘Book of Itineraries and Kingdoms’), a summary of exactly the kind of thing that governments usually want to know: who was visiting their territory, where they came from, where they were going, and why. This is what he says about a group of people known as the Rus’:

For many decades, the second paragraph of this rather dense text was thought to refer to a totally different group of merchants from those described in the first, for the simple reason that scholars just didn’t believe that the Rūs (or the Rus’, as the word is usually spelled today) really went so far east. And yet, the text is clear. The two sections run on from each other, and both refer to the same people. So why do Ibn Khurradādhbih’s observations about them matter today?

We used to think of the time of the vikings, the three long centuries from around 750 to 1050 CE, as an age of expansion, when the Scandinavian peoples burst out upon an unsuspecting world with fire and sword. Over the past 40 years or so, that picture has become much more nuanced, as we see the poets, traders and settlers alongside the stereotypical raiders (who were nonetheless real) that most people imagine when they think of the vikings. However, our view of these events has recently changed. We no longer see an outward impulse of intention and process, but a much more haphazard and varied diaspora of Norse peoples, in which individuals with their own motives and missions shift across the northern world.

What does that diaspora look like? A settler on Orkney might divide the year between fishing and overseas piracy. A wealthy woman in a Swedish town might sponsor raids in the west. A person in Arctic Fennoscandia might span the very different worlds of the Norse and Saami. Another might journey deep into the rivers of Eurasia, only to die in the oasis of Khwarezm (in today’s Uzbekistan), but his companions would return to Scandinavia with the news. The ‘Norse’ voyages to North America would be crewed by people who included Icelanders, Greenlanders, a Turk, and two Scots. All these are taken from archaeological or textual sources, and serve as but a few examples of what the diaspora really meant.

[...] Given the astonishing geographical range of their travels in his account, it is perhaps surprising to realise that, with some necessary caveats, Rus’ was the name used by the peoples of the east to refer to the vikings. The routes that they took, according to his report, exactly match with what scholars of our own time would come to call the Silk Roads.

Many scholars now use vikings in lowercase to refer to the raiders themselves, adding an initial capital when talking about the time period. Many also employ a word such as Norse as an approximation for ‘everybody else over there in those days’. None of this is very satisfactory, but big-V vikings are almost impossible to shift from the public consciousness, and while there are problems with ‘Norse’ (it’s mainly a linguistic term, and Scandinavia was by no means a monoculture), it will do. During the Viking Age, most of their neighbours referred to them as ‘Northerners’, which is too Eurocentric a perspective to function today, but Norse comes close enough and has the virtue of being relatively specific.

In the west, the Rus’ were regarded as synonymous with the Norse, in fact with actual vikings

[...] As part of their travels, some Rus’ settled temporarily in the near east. Scandinavians served successive Byzantine emperors as mercenaries in the elite Varangian Guard (the name references an Old Norse word meaning those who have sworn an oath). Indeed, an officer’s posting there was a recognised stepping stone to political power back home. Rus’ contacts with Byzantium were by no means always peaceful, extending to all-out war on occasion, and they even besieged the city itself. There are also extensive Rus’ raids recorded around the Caspian Sea that appear identical in nature to the more famous viking assaults in western Europe.

It is clear that, in the west, the Rus’ were regarded as synonymous with the Norse, in fact with actual vikings. There are independent accounts making exactly this comparison from the Frankish court and also from Muslim Andalusia. It’s therefore worth asking if it is only modern historians who tend to separate them, based on the different labels used in east and west, but also on the legacies of the Cold War that drew such sharp, artificial barriers between researchers.

But the revisionist transformation of research into the Viking Age directs us beyond terminology. For not only is the definition of key terms changing, but also the very geography of the period. Our understanding of the Norse is now taking them far from their ‘northern’ homelands.

But can it really be that they themselves travelled, as Ibn Khurradādhbih says, throughout North Africa, western and central Asia, Transoxiana, Sindh, India and ultimately to al-Ṣīn, which perhaps denotes the Khaganate of the Uyghurs or possibly even the territories of the Tang dynasty? In fact, this should not surprise us, because indications of Norse connections with Asia have long been known from the archaeology of Scandinavia.

[...] Even scholars seem startled that more than 100,000 objects of Islamic origin have been excavated from Viking Age contexts in Scandinavia: these are, of course, the dirhams, and furthermore represent only a small fraction of the actual trade, which ran into the high millions. Each one bore an Arabic inscription praising Allah as the only god, usually with an indication of the caliph under whose control the coin had been made, and the location of the mint, which were scattered from Morocco to Afghanistan. It is very hard to imagine that nobody in the north ever wondered what the wavy signs on all those coins (and on some other objects, too) really meant. It must have been obvious that it was writing, and surely somebody understood that it was an exhortation to the divine – in other words, a religious text. Arabic was also inscribed on bronze weights, and it has long been clear that the Norse adopted the standard system of measurement used in the Caliphate. Archaeologists also find locally made weights in Scandinavia that have been given attempts at inscriptions that are just squiggly lines, clearly because ‘everyone knew’ that this is what proper weights should look like. Some scholars have even speculated that all this messaging was part of a (failed) Islamic mission to convert the Scandinavians. To be clear, there is no evidence that any of the Norse accepted the Muslim faith, other than a few who stayed in the Caliphate itself, but curiosity and receptiveness to other cultures were consistent features of their society.

So, rather than marauding through Europe, we find the Norse as traders and collectors of treasured Islamic and even Buddhist objects from as far away as modern-day Iran and Pakistan. And this trading and collecting was not simply haphazard or random. However, for all the detail and range of these contacts, the full implications have not been taken to their obvious conclusion.

[...] Importantly, these are not the marauding Scandinavians of legend, nor were they pursuing the muscular commerce of aggressive trade as they did on the eastern European rivers. They were cosmopolitans and explorers, but also pragmatists, who would have had to learn new languages and fit in to a succession of new surroundings. This was no ‘viking empire’ or colonial endeavour. The Rus’ who travelled to eastern Asia did so in small groups, perhaps in the company of others – a few of them onboard ships, or joining their camels and horses to a caravan. They were in a minority and at a disadvantage.

If all this seems very far from the classic stereotype of ‘the viking’ – an armed man (it’s almost always a man) standing on the deck of a longship in rough seas, on his way westwards to plunder and violent glory – then this is no bad thing. The vulnerability of the Norse in the far east is a reality, not a projection, but it also usefully undermines the clichés that have attached to the period. In the coming years, through the efforts of researchers from across Asia and Europe, the map of the Norse diaspora is going to be redrawn and also re-evaluated. The Viking Age may not be the same again.


Original Submission

posted by hubie on Wednesday May 14, @01:04AM   Printer-friendly

Smarter agents, continuous updates, and the eternal struggle to prove ROI:

As Nvidia releases its NeMo microservices to embed AI agents into enterprise workflows, research has found that almost half of businesses are seeing only minor gains from their investments in AI.

NeMo microservices are a set of tools, some already available, that developers can use to build AI agents that integrate with existing applications and services to automate tasks, and to manage the lifecycle of those agents so they stay updated with the latest information.

"There are over a billion knowledge workers across many industries, geographies, and locations, and our view is that digital employees or AI agents will be able to help enterprises get more work done in this variety of domains and scenarios," said Joey Conway, Nvidia's senior director of generative AI software for enterprise.

[...] Nvidia envisions these microservices working in a circular pipeline, taking new data and user feedback, using this to improve the AI model, then redeploying it. Nvidia refers to this as a "data flywheel," although we can't help feeling that this misunderstands what an actual flywheel does.
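Nvidia has not published code for this loop, but the "data flywheel" idea reduces to a simple feedback cycle: collect feedback, use it to improve the model, redeploy, repeat. A minimal sketch follows; every function name here (`collect_feedback`, `fine_tune`, `deploy`) is a hypothetical placeholder, not a NeMo API.

```python
# Illustrative sketch of a "data flywheel" feedback loop.
# All names are hypothetical stand-ins, not NeMo APIs.

def collect_feedback(model_version: str) -> list[dict]:
    """Stand-in for gathering new data and user feedback from production."""
    return [{"prompt": "example", "rating": 4, "model": model_version}]

def fine_tune(model_version: str, feedback: list[dict]) -> str:
    """Stand-in for a customization step driven by the collected feedback."""
    return model_version + "+1"  # pretend each pass produces a new version

def deploy(model_version: str) -> None:
    """Stand-in for pushing the updated model back into service."""
    print(f"deployed {model_version}")

def flywheel(model_version: str, iterations: int) -> str:
    """Run the collect -> tune -> redeploy cycle a fixed number of times."""
    for _ in range(iterations):
        feedback = collect_feedback(model_version)
        model_version = fine_tune(model_version, feedback)
        deploy(model_version)
    return model_version

flywheel("agent-v1", 3)  # three collect -> tune -> redeploy cycles
```

In a real deployment the loop would of course run continuously rather than for a fixed number of iterations, with the tuning step gated on evaluation results.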

[...] Examples where NeMo microservices are already being put to work include Amdocs, which is laboring on three types of agents for its telecoms operator customers, Nvidia said.

These comprise a billing agent, a sales agent, and a network agent. The billing agent focuses on query resolution, while the sales agent works on personalized offers and customer engagement as part of deal closure. The network agent will analyze logs and network information across geographic regions and countries to proactively identify service issues.

[...] The research was commissioned by Storyblok, provider of CMS software for marketers and developers, which said that businesses need to look beyond surface-level implementations and integrate AI in a way that drives meaningful transformation.

It found the most popular use cases for AI among UK business leaders are website content creation, customer service, marketing analysis, translation services, and marketing content creation.


Original Submission

posted by hubie on Tuesday May 13, @08:19PM   Printer-friendly

New requirements could see more 'bad content' on Wikipedia, its owners warn:

The non-profit organisation behind Wikipedia has launched a legal challenge against the government's Online Safety Act, arguing that the law's requirements threaten the site's open editing model and could lead to a surge in misinformation and vandalism.

The challenge focuses on the Act's categorisation of Wikipedia as a 'Category 1' service, subjecting it to the highest level of content moderation duties.

This designation, the Wikimedia Foundation argues, would force the site to implement user verification and content filtering measures, undermining the platform's unique system of volunteer editors and reviewers.

A key concern is the requirement to allow any user to block unverified users from editing or removing content. This, the foundation warns, disrupts the established hierarchy of volunteer editors and moderators, potentially empowering malicious actors to post harmful or false information while preventing its removal.

It also argues that this could lead to an increase in misinformation and vandalism on the platform, directly contradicting the aims of the Online Safety Act. The legal challenge seeks to revise Wikipedia's categorisation under the Act, protecting its collaborative editing model while maintaining its commitment to accuracy and user safety.

"Wikipedia is kept free of bad content because of the important work of thousands of members of the public, who can review and improve the content on the website to ensure it is neutral, fact-based and well-sourced," the Wikimedia Foundation said in a blog post.

Sophisticated volunteer communities, working in more than 300 languages, collectively govern almost every aspect of day-to-day life on Wikipedia.

Their ability to set and enforce policies, and to review, improve or remove what other volunteers post, is central to Wikipedia's success, notably in resisting vandalism, abuse and misinformation.

[...] The Wikimedia Foundation said it did not oppose online safety regulation, or even the use of a categories system, but said it felt it would be "overregulated" if designated as a category one service and felt compelled to act.

It added: "Although the UK Government felt this category one duty (which is just one of many) would usefully support police powers 'to tackle criminal anonymous abuse' on social media, Wikipedia is not like social media.

"Wikipedia relies on empowered volunteer users working together to decide what appears on the website. This new duty would be exceptionally burdensome (especially for users with no easy access to digital ID).

"Worse still, it could expose users to data breaches, stalking, vexatious lawsuits or even imprisonment by authoritarian regimes. Privacy is central to how we keep users safe and empowered.

"Designed for social media, this is just one of several category one duties that could seriously harm Wikipedia."

See also:


Original Submission

posted by hubie on Tuesday May 13, @03:35PM   Printer-friendly
from the playing-god dept.

A recent US Congressional Report suggests: "We stand at the edge of a new industrial revolution, one that depends on our ability to engineer biology. Emerging biotechnology, coupled with artificial intelligence, will transform everything from the way we defend and build our nation to how we nourish and provide care for Americans."

From the Executive Summary:

Imagine a not-so-distant future where researchers in Shanghai develop a breakthrough drug that can eliminate malignant cells, effectively ending cancer as we know it. But when tensions over Taiwan reach a breaking point, the Chinese Communist Party (CCP), the strategic apparatus of the Chinese government, hoards the treatment under the guise of national security, cutting off supply to the United States. After years of access, this lifesaving drug is immediately in shortage, requiring doctors to ration it while American biotechnology companies scramble to reconstitute production in the United States. The streets and social media overflow with people demanding that the United States abandon Taiwan. The Administration faces an agonizing choice between geopolitical priorities and public health.

This scenario is fiction. But something like it could soon become reality as biotechnology takes center stage in the unfolding strategic competition between the United States and the People's Republic of China (China).

[...] Biology has been a well-defined scientific discipline for more than 200 years. But thanks to breakthroughs in artificial intelligence (AI), engineering, and automation, biology is becoming more than just a field of discovery; it is becoming a field of design. Chemistry made this leap in the 1880s when chemical engineering unlocked rubber, plastic, and synthetic fibers, materials that transformed society. Physics followed in the 1940s, when academic theory led to the atomic bomb, semiconductors, and computers. Now for the first time in recent history, the United States finds itself competing with a rival over a new form of engineering that will create tremendous wealth, but, in the wrong hands, could be used to develop powerful weapons. Countries that win the innovation race tend to win actual wars, too.

The Congressional Report Chapters:
1. Prioritize Biotechnology at the National Level
2. Mobilize the Private Sector to get U.S. Products to Scale
3. Maximize the Benefits of Biotechnology and Defense
4. Out-innovate our Strategic Partners
5. Build the Biotechnology Workforce of the Future
6. Mobilize the Collective Strengths of our Allies and Partners

Here is some commentary by Eric Schmidt on the report including his predictions on the future of AI applied to biotech and other areas over the next few years.


Original Submission

posted by hubie on Tuesday May 13, @10:46AM   Printer-friendly

As Big Tech gets used to the pain, smaller vendors urged to up their game:

Google says that despite a small dip in the number of exploited zero-day vulnerabilities in 2024, the number of attacks using these novel bugs continues on an upward trend overall.

Data released by Google Threat Intelligence Group (GTIG) today, timed with the ongoing RSA Conference 2025, revealed that 75 zero-days were exploited last year. The number is down from 2023's figure of 98, but an increase from 63 the year before, suggesting that zero-days continue to be a hot commodity for the most well-resourced attackers.
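The swing in those yearly counts is easy to tabulate. A quick sketch using only the figures quoted above:

```python
# Zero-day exploitation counts reported by GTIG, per the figures above.
counts = {2022: 63, 2023: 98, 2024: 75}

# 2024 is roughly 23% below the 2023 peak but roughly 19% above 2022,
# consistent with counts that fluctuate while trending upward overall.
drop_from_2023 = (counts[2023] - counts[2024]) / counts[2023] * 100
rise_from_2022 = (counts[2024] - counts[2022]) / counts[2022] * 100
print(f"{drop_from_2023:.0f}% below 2023, {rise_from_2022:.0f}% above 2022")
```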

Over 50 percent of the confirmed zero-days were used for cyberespionage campaigns carried out by state-sponsored groups and customers of spyware companies, or as Google calls them, "commercial surveillance vendors."

Google's researchers highlighted China and spyware companies – none of which were named specifically – as the main culprits here, exploiting five and eight zero-days respectively in 2024.

However, North Korea also featured, with its state-backed attackers accounting for five zero-day exploits – the first time the country has been mentioned in the same breath as the usual leaders in this regard.

"GTIG tracked 75 exploited-in-the-wild zero-day vulnerabilities that were disclosed in 2024," said Google's researchers. "This number appears to be consistent with a consolidating upward trend that we have observed over the last four years.

"After an initial spike in 2021, yearly counts have fluctuated but not returned to the lower numbers we saw in 2021 and prior."

Google noted, however, that the surge in confirmed zero-day exploits from 2021 onward, compared to figures from years before that, could well be due to the industry's improvements both in technical detections and public disclosures of such attacks.

[...] All the signs point to zero-days maintaining their popularity. Disregarding the inherent, obvious advantage that novel, patchless vulnerabilities provide to attackers, it's not just Google saying that zero-days are easier to come by these days.

The underground marketplace for such exploits is thriving at the moment, with so-called zero-day brokers reportedly earning multiple millions for single vulnerabilities. Plus, with the slow uptake of secure-by-design and secure-by-default development practices, which are allowing decades-old vulnerability classes to continually crop up in widely used software, the current environment lends itself well to the procurement of zero-days.

The Five Eyes intelligence alliance warned in November 2024 that the majority of the most prolifically abused vulnerabilities last year were zero-days – a trend that continued from the year before.

Ollie Whitehouse, CTO at the UK's NCSC, said at the time that it was imperative that vendors stay on the front foot by proactively improving their processes to reduce the number of vulnerabilities present in their products, and issue patches quickly. Equally, defenders were urged to be vigilant when it comes to vulnerability management.

"More routine initial exploitation of zero-day vulnerabilities represents the new normal, which should concern end-user organizations and vendors alike as malicious actors seek to infiltrate networks," he added.

Likewise, Google said that due to big tech companies routinely being at the center of zero-day attacks, their experience with handling these will likely mean they approach zero-days as "a more manageable problem" rather than a catastrophic business risk. For smaller vendors or those with emerging products, preventing zero-days will require more proactive effort on their part, including the adoption of safer development practices.

Google also expects zero-day exploitation to increase steadily over the coming years, especially in enterprise tech, even as vendors improve security practices in historically targeted products like smartphones and browsers.


Original Submission

posted by hubie on Tuesday May 13, @06:01AM   Printer-friendly

https://phys.org/news/2025-05-people-ai-colleagues-lazier.html

A trio of business analysts at Duke University has found that people who use AI apps at work are perceived by their colleagues as less diligent, lazier and less competent than those who do not use them.

In their study, published in Proceedings of the National Academy of Sciences, Jessica Reif, Richard Larrick and Jack Soll carried out four online experiments asking 4,400 participants to imagine scenarios in which some workers used AI and some did not, and to describe how they would view themselves or others working under such circumstances.

[...] The first experiment involved asking participants to imagine themselves using an AI app or dashboard creation tool to complete work projects. The next part of the experiment involved asking those same users how they thought others in their workplace would view them if they used such applications. The researchers found that many of the respondents believed they would be judged as lazy, less diligent and less competent. They also suggested they might be viewed as more easily replaced than those who refused to use such apps to get their work done.

The second experiment involved asking participants to describe how they viewed colleagues at work who used AI apps to get their work done. The researchers found many viewed such colleagues as less competent at their jobs, lazy, less independent, less self-assured and less diligent.

In the third experiment, participants were asked to pretend they were managers who were hiring someone for a position at work. They found that such managers were less likely to hire someone if that candidate admitted to using AI to get their work done. One exception was when the manager was someone who used AI at work.

The fourth experiment involved asking participants about another aspect of AI use on the job: when it was known to be helpful. In such scenarios, negative perceptions diminished for the most part.

The research team notes that one factor made a difference in all their experiments: If the participants actually used AI at work, they saw its use by themselves or others in a much more positive light.

Journal Reference: Jessica A. Reif et al, Evidence of a social evaluation penalty for using AI, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2426766122


Original Submission

posted by hubie on Tuesday May 13, @01:16AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A gauntlet of engineering challenges awaits a search for evidence of alien life.

Sometime in the next ten years, a Chinese mission aims to do what’s never been done before: collect cloud particles from Venus and bring them home. But achieving that goal will mean overcoming one of the most hostile environments in the solar system—the planet’s cloaking clouds are primarily made up of droplets of sulfuric acid.

When China unveiled a long-term roadmap for space science and exploration last fall, its second phase (2028-2035) included an unprecedented Venus atmosphere sample return mission. As is typical for Chinese space missions, few details were made public. But information in a recent presentation shared on Chinese social media gives us new insight into early mission plans.

The slide shows that the key scientific questions being targeted include the potential for life on Venus, the planet’s atmospheric evolution, and the mystery of UV absorbers in its clouds. The mission will carry a sample collection device as well as in-situ atmospheric analysis equipment. The search for life is, in part, due to the interest generated by a controversial study published in Nature Astronomy in 2020 that suggested that traces of phosphine in Venus’ atmosphere could be an indication of a biological process.

Mission proposals like MIT’s offer a window into the daunting technical challenges that China’s team is facing. Getting to Venus, entering its thick atmosphere, collecting samples, and rendezvousing with a waiting orbiter in Venus orbit to return the samples to Earth all come with their own challenges. But the potential scientific payoff clearly makes these hurdles worth clearing.

[...] “I’m super excited about this,” says Seager. “Even if there’s no life, we know there’s interesting organic chemistry, for sure. And it would be amazing to get samples in hand to really solve some of the big mysteries on Venus.”


Original Submission
