Arthur T Knackerbracket has processed the following story:
One of the ultimate goals of medieval alchemy has been realized, but only for a fraction of a second. Scientists with the European Organization for Nuclear Research, better known as CERN, were able to convert lead into gold using the Large Hadron Collider (LHC), the world's most powerful particle accelerator. Unlike the examples of transmutation we see in pop culture, these experiments with the LHC involve smashing subatomic particles together at ridiculously high speeds, altering lead's nuclear makeup so that it briefly becomes gold.
The LHC is often used to smash lead ions together to create extremely hot and dense matter similar to what was observed in the universe following the Big Bang. While conducting this analysis, the CERN scientists took note of the near-misses that caused a lead nucleus to shed some of its neutrons or protons. Lead atoms have only three more protons than gold atoms (82 versus 79), meaning that in certain cases the LHC causes a lead atom to drop just enough protons to become a gold atom for a fraction of a second before immediately fragmenting into a bunch of particles.
Alchemists back in the day may be astonished by this achievement, but the experiments conducted between 2015 and 2018 only produced about 29 picograms of gold, according to CERN. The organization added that the latest trials produced almost double that amount thanks to regular upgrades to the LHC, but the mass made is still trillions of times less than what's necessary for a piece of jewelry. Instead of trying to chase riches, the organization's scientists are more interested in studying the interaction that leads to this transmutation.
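For a sense of scale, here is a rough back-of-the-envelope calculation, assuming gold's standard molar mass of about 197 g/mol (a textbook figure, not something stated in the story):

```python
# Rough estimate: how many gold nuclei are in ~29 picograms?
# Assumes gold's standard molar mass of ~197 g/mol (not from the article).
AVOGADRO = 6.022e23        # atoms per mole
mass_g = 29e-12            # ~29 picograms produced during 2015-2018, per CERN
molar_mass_g = 197.0       # g/mol for gold

nuclei = mass_g / molar_mass_g * AVOGADRO
print(f"~{nuclei:.1e} gold nuclei")   # on the order of 10^10 to 10^11 nuclei
```

In other words, tens of billions of nuclei still add up to a mass far too small to see, let alone wear.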
"It is impressive to see that our detectors can handle head-on collisions producing thousands of particles, while also being sensitive to collisions where only a few particles are produced at a time, enabling the study of electromagnetic 'nuclear transmutation' processes," Marco Van Leeuwen, spokesperson for the A Large Ion Collider Experiment project at the LHC, said in a statement.
Processed by drussell
canopic jug writes:
The 1517 Fund has an article exploring why Bell Labs worked so well, and what is lacking in today's society to recreate such a research environment:
There have been non-profit and corporate giants with larger war chests than Ma Bell. AT&T started Bell Labs when its revenue was under $13 B (current USD). During the Great Depression, when Mervin Kelly laid the foundation for the lab, AT&T's revenue was $22 B (current USD).
Inflation adjusted, Google has made more than AT&T did at Bell Labs' start since 2006. Microsoft, 1996. Apple, 1992.
Each has invested in research. None have a Bell Labs.
Academia's worse. Scientists at the height of their careers spend more time writing grants than doing research. Between 1975 and 2005, the amount of time scientists at top tier universities spent on research declined by 20%. Time spent on paperwork increased by 100%. To quote the study, "experienced secular decline in research time, on the order of 10h per week."
[...] Reportedly, Kelly and others would hand people problems and then check in a few years later. Most founders and executives I know balk at this idea. After all, "what's stopping someone from just slacking off?" Kelly would contend that's the wrong question to ask. The right question is, "Why would you expect information theory from someone who needs a babysitter?"
Micromanagement and quantification also take their toll.
Previously:
(2024) The Incredible Story Behind the First Transistor Radio
(2024) Is It Possible to Recreate Bell Labs?
(2022) Unix History: A Mighty Origin Story
(2019) Vintage Computer Federation East 2019 -- Brian Kernighan Interviews Ken Thompson
(2017) US Companies are Investing Less in Science
Processed by kolie
Arthur T Knackerbracket has processed the following story:
In 2018, about 13.5 percent of the more than 2.6 million deaths from cardiovascular disease among people ages 55 to 64 globally could have been related to exposure to a type of chemical called a phthalate, researchers report April 28 in eBioMedicine.
Phthalates are a group of chemicals found in shampoos, lotions, food packaging and medical supplies including blood bags. The chemicals are often added to plastics to make them softer and more flexible.
Phthalates can enter the body when you consume contaminated food, breathe them in or absorb them through the skin. Once inside, they act as endocrine disruptors, which means they affect hormones. Previous research has also linked the chemicals to diabetes, obesity, pregnancy complications and heart disease.
The new study looked at the effects of one particular phthalate, known as di-2-ethylhexylphthalate, or DEHP, which is often added to PVC plastics to soften them. Sara Hyman, a research scientist at NYU Langone Health, and colleagues focused on the relationship between DEHP exposure levels and cardiovascular disease, the leading cause of death worldwide. Hyman and colleagues compared estimated DEHP exposure in 2008 with death rates from cardiovascular disease ten years later in different parts of the world. By studying how the two changed together, they determined what portion of those deaths might be attributable to phthalates.
More than 350,000 excess deaths worldwide were associated with DEHP exposure in 2018, the team found. About three-quarters of those occurred in the Middle East, South Asia, East Asia and the Pacific. This disparity might be due to the regions’ growing plastics industries, the researchers suggest. The new work does not show that DEHP exposure directly causes heart disease, though — only that there’s an association between the two.
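As a quick sanity check, the headline figure follows directly from the percentages quoted above:

```python
# Back-of-the-envelope check of the figures quoted in the story.
cvd_deaths_55_64 = 2.6e6        # global CVD deaths, ages 55-64, in 2018
attributable_fraction = 0.135   # ~13.5 per cent linked to DEHP exposure

attributable = cvd_deaths_55_64 * attributable_fraction
print(f"DEHP-attributable deaths: ~{attributable:,.0f}")   # roughly 351,000
```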
[...] The findings offer yet another reason to decrease plastic use, researchers say. “We’re going to become the plastic planet,” Zhou says. “We need to start to really address this serious issue.”
S. Hyman et al. Phthalate exposure from plastics and cardiovascular disease: global estimates of attributable mortality and years life lost. eBioMedicine, 105730. Published online April 28, 2025. doi: 10.1016/j.ebiom.2025.105730.
Arthur T Knackerbracket has processed the following story:
The Federal Trade Commission has delayed the start of a rule that aims to make the process of canceling subscriptions less of a nightmare. Last year, the FTC voted to ratify amendments to a regulation known as the Negative Option Rule, adding a new "click-to-cancel" rule that requires companies to be upfront about the terms of subscription signups and prohibits them "from making it any more difficult for consumers to cancel than it was to sign up." Surprising no one, telecom companies were not happy, and sued the FTC. While the rule was nevertheless set to be implemented on May 14, the FTC now says enforcement has been pushed back 60 days to July 14.
Some parts of the updated Negative Option Rule went into effect on January 19, but the enforcement of certain provisions was deferred to May 14 by the previous administration to give companies more time to comply. Under the new administration, the FTC says it has "conducted a fresh assessment of the burdens that forcing compliance by this date would impose" and decided it "insufficiently accounted for the complexity of compliance."
Once the July 14 deadline hits, the FTC says "regulated entities must be in compliance with the whole of the Rule because the Commission will begin enforcing it." But, the statement adds, "if that enforcement experience exposes problems with the Rule, the Commission is open to amending" it.
Previously:
• Judge Rules SiriusXM's Annoying Cancellation Process is Illegal
• The US Government Wants to Make It Easier for You to Click the 'Unsubscribe' Button
• Clingy Virgin Media Won't Let Us Go, Customers Complain
• Publishers and Advertisers Push Back at FTC's 'Click-to-Cancel' Proposal
• The End of "Click to Subscribe, Call to Cancel"? - News Industry's Favorite Retention Tactic
Research out of the University of Connecticut proposes neural resonance theory, which holds that neurons throughout the body physically synchronize with music, creating stable patterns that affect the entire body.
In a nutshell
• Brain-music synchronization: Your brain doesn't just predict music patterns—it physically synchronizes with them through neural oscillations that affect your entire body.
• Stability creates preference: Musical sounds with simple frequency relationships (like perfect fifths) create more stable neural patterns, explaining why certain combinations sound pleasant across cultures.
• Cultural attunement: While some aspects of music perception are universal, your brain becomes "attuned" to the music you frequently hear, explaining cultural preferences while maintaining recognition of basic musical structures.
What is Neural Resonance Theory?
Neural Resonance Theory (NRT) is a scientific approach that explains how your brain processes music using fundamental physics principles rather than abstract predictions.
In simpler terms, NRT suggests that:
• Your brain contains billions of neurons that naturally oscillate (rhythmically fire) at different frequencies
• When you hear music, these neural oscillations physically synchronize with the sound waves
• This synchronization creates stable patterns in your brain that correspond to musical elements
• The more stable these patterns are, the more pleasant or "right" the music feels.

Unlike traditional theories that say your brain is constantly making predictions about what comes next in music, NRT proposes that your brain actually embodies the music's structure through its own physical patterns.
This physical synchronization explains why music can directly affect your movements and emotions without conscious thought—your brain and body are literally vibrating in harmony with the music.
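This is not NRT's actual dynamical model, which involves networks of nonlinear neural oscillators, but a small numerical illustration of why simple frequency ratios such as the perfect fifth (3:2) are special: two tones whose frequencies are in a simple ratio produce a combined waveform that repeats quickly, giving an oscillator a regular pattern to lock onto. The 220 Hz base tone and the interval ratios below are illustrative assumptions.

```python
from fractions import Fraction

f0 = 220.0  # assumed base tone in Hz, for illustration only

# Interval name -> idealized just-intonation frequency ratio
intervals = {
    "unison (1:1)":        Fraction(1, 1),
    "perfect fifth (3:2)": Fraction(3, 2),
    "major third (5:4)":   Fraction(5, 4),
    "tritone (45:32)":     Fraction(45, 32),
}

for name, r in intervals.items():
    # Two tones at f0 and r*f0 jointly repeat every q/f0 seconds, where
    # r = p/q in lowest terms; simpler ratios give shorter joint periods.
    joint_period_ms = 1000 * r.denominator / f0
    print(f"{name:22s} combined waveform repeats every {joint_period_ms:6.1f} ms")
```

At this pitch the perfect fifth repeats every ~9 ms, while the tritone takes roughly 145 ms, a crude proxy for the "stability" that NRT associates with pleasant-sounding intervals.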
Read the rest of the article: https://studyfinds.org/brain-cells-synchronize-to-music/
Journal Reference: Harding, E.E., Kim, J.C., Demos, A.P. et al. Musical neurodynamics. Nat. Rev. Neurosci. 26, 293–307 (2025). https://doi.org/10.1038/s41583-025-00915-4
Arthur T Knackerbracket has processed the following story:
Just two years ago, prompt engineering was the talk of the tech world – a seemingly essential new job born from the rapid rise of artificial intelligence. Companies were eager to hire specialists who could craft the right questions for large language models, ensuring optimal AI performance. The role was accessible, required little technical background, and was seen as a promising entry point into a booming industry.
Today, however, prompt engineering as a standalone role has all but disappeared. What was once a highly touted skill set is now simply expected of anyone working with AI. In an ironic twist, some companies are even using AI to generate prompts for their own AI systems, further diminishing the need for human prompt engineers.
The brief rise and rapid fall of prompt engineering highlights a broader truth about the AI job market: new roles can vanish as quickly as they appear. "AI is already eating its own," says Malcolm Frank, CEO of TalentGenius, in an interview with Fast Company.
"Prompt engineering has become something that's embedded in almost every role, and people know how to do it. Also, now AI can help you write the perfect prompts that you need. It's turned from a job into a task very, very quickly."
The initial appeal of prompt engineering was its low barrier to entry. Unlike many tech roles, it didn't require years of specialized education or coding experience, making it especially attractive to job seekers hoping to break into AI. In 2023, LinkedIn profiles were filled with self-described prompt engineers, and the North American market for prompt engineering was valued at $75.5 million, growing at a rate of 32.8 percent annually.
Yet the hype outpaced reality. According to Allison Shrivastava, an economist at the Indeed Hiring Lab, prompt engineering was rarely listed as an official job title. Instead, it has typically been folded into roles like machine learning engineer or automation architect. "I'm not seeing it as a standalone job title," she added.
As the hype fades, the AI job market is shifting toward roles that require deeper technical expertise. The distinction is clear: while prompt engineers focused on crafting queries for LLMs, machine learning engineers are the ones building and improving those models.
Lerner notes that demand for mock interviews for machine learning engineers has surged, increasing more than threefold in just two months. "The future is working on the LLM itself and continuing to make it better and better, rather than needing somebody to interpret it," she says.
This shift is also evident in hiring trends. Shrivastava points out that while demand for general developers is declining, demand for engineering roles overall is rising. For those without a coding background, options are narrowing.
Founding a company or moving into management consulting, where expertise in AI implementation is increasingly valued, may be the best routes forward. As of February, consulting positions made up 12.4% of AI job titles on Indeed, signaling a boom in advisory roles as organizations seek to integrate AI into their operations.
Tim Tully, a partner at Menlo Ventures, has seen firsthand how AI is changing the nature of work, not necessarily by creating new jobs, but by reshaping existing ones. "I wouldn't say that [there are] new jobs, necessarily; it's more so that it's changing how people work," Tully says. "You're using AI all the time now, whether you like it or not, and it's accelerating what you do."
Processed by kolie
Arthur T Knackerbracket has processed the following story:
This engineering marvel necessitated custom userspace GPU drivers and probably a patched adapter firmware as well.
External GPU (eGPU) support on Apple Silicon Macs and MacBooks has been a persistent pain point for AI/ML developers. Through what some may consider to be black magic, Tiny Corp has managed to get an AMD eGPU working in tinygrad over USB3, a standard that inherently lacks PCIe capabilities. As they're using libusb, this functionality extends to Windows, Linux, and even macOS, including devices with Apple Silicon.
Traditionally, GPUs are connected through PCIe slots or the Thunderbolt/USB4 interfaces, which offer PCI Express tunneling support. As such, external GPU solutions rely on the aforementioned interfaces, which limits their support for older systems and laptops. Unlike Intel-based Macs/MacBooks, Apple Silicon-based devices do not support external GPUs, mainly due to the lack of driver support and architectural differences. So, despite their efficiency compared to traditional x86-based systems, users have reported challenges in AI workloads, especially when it comes to prompt processing.
Requirements for running an eGPU through a USB3 interface at this time include an ASM2464PD-based adapter and an AMD GPU. For its tests, Tiny Corp used the ADT-UT3G adapter, which uses the same ASM2464PD chip but, out of the box, only works with Thunderbolt 3, Thunderbolt 4, or USB4 interfaces. The team likely employed custom firmware to enable a USB3 mode that works without direct PCIe communication. Technical details are murky; however, the controller appears to be translating PCIe commands into USB packets and vice versa.
The solution is quite hacky, as it bypasses kernel-level GPU drivers, requires specific hardware, and uses USB3, which was never intended for GPU communication. It essentially offloads the compute work (kernel executions) from your system to the eGPU. The constraint is that data transfer speeds are capped at 10 Gbps by the USB3 standard used, so loading models onto the GPU will take much longer than over a standard PCIe connection.
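To get a feel for that constraint, here is a rough transfer-time comparison; the 16 GB model size and the ~32 GB/s PCIe 4.0 x16 figure are illustrative assumptions, while the 10 Gbps cap comes from the article:

```python
# Rough comparison of how long loading model weights onto the eGPU might take.
model_bytes = 16 * 1024**3        # hypothetical 16 GB of model weights

usb3_bytes_per_s = 10e9 / 8       # 10 Gbps USB3 link, ignoring protocol overhead
pcie4_x16_bytes_per_s = 32e9      # approximate nominal PCIe 4.0 x16 throughput

print(f"USB3 (10 Gbps): {model_bytes / usb3_bytes_per_s:5.1f} s")
print(f"PCIe 4.0 x16:   {model_bytes / pcie4_x16_bytes_per_s:5.1f} s")
```

That gap is why the article flags model loading, rather than the computation itself, as the main pain point.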
Since it uses custom user-space drivers to avoid tinkering with the kernel, the feature is limited to AMD's RDNA 3/4 GPUs, although there's a hint of potential RDNA 2 support in the future. USB3 eGPU functionality has been upstreamed to tinygrad's master branch, so if you have an AMD GPU and a supported adapter, feel free to try it out. We can expect Tiny Corp to provide a more detailed technical breakdown once its developers are done tidying up the code.
Arthur T Knackerbracket has processed the following story:
Lazarus 4 is the latest version of the all-FOSS but Delphi-compatible IDE for the FreePascal compiler.
The IDE is developed independently from the underlying Pascal language that it's written in, so this doesn't mean a whole new version of FreePascal: Lazarus 4 was built with FreePascal 3.2.2 which was released in 2021. It replaces Lazarus 3.8, which was the current version when we talked about it for Delphi's 30th anniversary back in February.
[...] It's a multi-platform IDE, and the Sourceforge page has packages for both 32-bit and 64-bit Windows, Linux, and FreeBSD. On Apple hardware, it offers PowerPC, x86 and Arm64 versions; Cocoa development needs macOS 12 or higher, but using the older Carbon APIs it supports OS X 10.5 to 10.14. There's also a Raspberry Pi version for the Pi 4 and later. It supports a wide variety of toolkits for GUI programming, as the project wiki shows: Win32, Gtk2 and work-in-progress Gtk3, and Qt versions 4, 5 and 6, among others.
One criticism we've seen of the FreePascal project in general concerns its documentation, although there is quite a lot of it: eight FPC manuals, and lengthy Lazarus docs in multiple languages. There is a paid-for tutorial e-book available, too.
Something which might help newcomers to the language here is a new e-book: FreePascal From Square One by Jeff Duntemann. The author says:
It's a distillation of the four editions of my Pascal tutorial, Complete Turbo Pascal, which first appeared in 1985 and culminated in Borland Pascal 7 From Square One in 1993. I sold a lot of those books and made plenty of money, so I'm now giving it away, in hopes of drawing more people into the Pascal universe.
[...] There are other free resources out there, such as this course in Modern Pascal. The more, the merrier, though.
Pascal isn't cool or trendy any more, but even so, it remains in the top ten on the TIOBE index. Perhaps these new releases will help it to rise up the ratings a little more.
Arthur T Knackerbracket has processed the following story:
The head of the US Copyright Office has reportedly been fired, the day after the agency concluded that AI model builders' use of copyrighted material went beyond existing doctrines of fair use.
The office’s opinion on fair use came in a draft of the third part of its report on copyright and artificial intelligence. The first part considered digital replicas and the second tackled whether it is possible to copyright the output of generative AI.
The office published the draft [PDF] of Part 3, which addresses the use of copyrighted works in the development of generative AI systems, on May 9th.
The draft notes that generative AI systems “draw on massive troves of data, including copyrighted works” and asks: “Do any of the acts involved require the copyright owners’ consent or compensation?”
That question is the subject of several lawsuits, because developers of AI models have admitted to training their products on content scraped from the internet and other sources without compensating content creators or copyright owners. AI companies have argued fair use provisions of copyright law mean they did no wrong.
As the report notes, one test courts use to determine fair use considers “the effect of the use upon the potential market for or value of the copyrighted work”. If a judge finds an AI company’s use of copyrighted material doesn’t impact a market or value, fair use will apply.
The report finds AI companies can’t sustain a fair use defense in the following circumstances:
When a model is deployed for purposes such as analysis or research… the outputs are unlikely to substitute for expressive works used in training. But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.
The office will soon publish a final version of Part 3 that it expects will emerge “without any substantive changes expected in the analysis or conclusions.”
Tech law professor Blake E. Reid described the report as "very bad news for the AI companies in litigation" and "A straight-ticket loss for the AI companies".
Among the AI companies currently in litigation on copyright matters are Google, Meta, OpenAI, and Microsoft. All four made donations to Donald Trump’s inauguration fund.
Reid’s post also pondered the timing of the Part 3 report – despite the office saying it was released “in response to congressional inquiries and expressions of interest from stakeholders” – and wrote “I continue to wonder (speculatively!) if a purge at the Copyright Office is incoming and they felt the need to rush this out.”
Reid looks prescient as the Trump administration reportedly fired the head of the Copyright Office, Shira Perlmutter, on Saturday.
Representative Joe Morelle (D-NY), wrote the termination was “…surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”
[...] There’s another possible explanation for Perlmutter’s ousting: The Copyright Office is a department of the Library of Congress, whose leader was last week fired on grounds of “quite concerning things that she had done … in the pursuit of DEI [diversity, equity, and inclusion] and putting inappropriate books in the library for children," according to White House press secretary Karoline Leavitt.
So maybe this is just the Trump administration enacting its policy on diversity without regard to the report’s possible impact on donors or Elon Musk.
Arthur T Knackerbracket has processed the following story:
Chinese researchers have developed an extremely energy efficient and low-cost technology for extracting uranium from seawater, a potential boon to the country’s nuclear power ambitions. China currently leads the world in building new nuclear power plants, and shoring up its supply of uranium will help these efforts.
The world’s oceans hold an estimated 4.5 billion tonnes of uranium – more than 1000 times that available to mining – but it is extremely dilute. Previous experimental efforts have harvested uranium from seawater by physically soaking it up with artificial sponges or a polymer material inspired by blood vessel patterns, or by the more efficient and more expensive electrochemical method of trapping uranium atoms with electric fields.
The new approach was able to extract 100 per cent of the uranium atoms from a salty, seawater-like solution within 40 minutes. By comparison, some physical adsorption methods extract less than 10 per cent of the available uranium.
The system is “very innovative” and “a significant step forward compared to… existing uranium extraction methods”, says Shengqian Ma at the University of North Texas, who wasn’t involved in the new research.
[...]
When tested with small amounts of natural seawater – about 1 litre running through the system at any time – the new method was able to extract 100 per cent of uranium from East China Sea water and 85 per cent from South China Sea water. In the latter case, the researchers also achieved 100 per cent extraction with larger electrodes.
The experiments also showed the energy required was more than 1000-fold less than other electrochemical methods. The whole process cost about $83 per kilogram of extracted uranium, compared with roughly $205 per kilogram for physical adsorption methods and about $360 per kilogram for previous electrochemical methods.
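To make the cost gap concrete, here is the comparison implied by the quoted per-kilogram figures:

```python
# Cost comparison using the per-kilogram figures quoted in the story (USD/kg).
costs = {
    "new electrochemical method":       83,
    "physical adsorption":             205,
    "previous electrochemical methods": 360,
}
baseline = costs["new electrochemical method"]
for method, usd_per_kg in costs.items():
    print(f"{method:34s} ${usd_per_kg:>4}/kg   {usd_per_kg / baseline:.1f}x the new method")
```

That puts physical adsorption at roughly 2.5 times the cost of the new method and earlier electrochemical approaches at more than 4 times.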
Scaling up the size and volume of the new devices – along with potentially stacking or connecting them together – could lead to "industrialisation of uranium extraction from seawater in the future", the researchers wrote. In a 58-hour test in 100 litres of seawater, their largest experimental array extracted more than 90 per cent of the available uranium.
One of the most successful previous demonstrations of harvesting uranium from seawater came in the 1990s, when the Japan Atomic Energy Agency extracted a kilogram of the element from the ocean using a physical adsorption method. That set a milestone that has inspired Chinese academic and industry researchers ever since.
In 2019, a Chinese state-owned nuclear company teamed up with research institutes to form the Seawater Uranium Extraction Technology Innovation Alliance. This organisation aims to build a demonstration plant by 2035 and achieve ongoing industrial production by 2050, according to the South China Morning Post.
“From an engineering perspective, there is still a long way to go before implementing this method and any electrochemical-based method for large-scale uranium extraction from seawater,” says Ma.
Half of the nuclear reactor projects currently under construction are in China. The country is on track to surpass the US and the European Union in total installed nuclear power capacity by 2030, according to the International Energy Agency.
But China's nuclear industry also imports most of the uranium that it uses, so any uranium it can economically extract from seawater will be more than welcome.
Journal Reference: Nature Sustainability. DOI: 10.1038/s41893-025-01567-z
Processed by kolie
So, I need to develop a service on top of some server software hosted on GitHub.
This open source project has been going for well over 12 years now (it started back on SourceForge), and it seems to be the only reliable piece of software implementing the protocol I need.
Still driven by its original author, it currently counts 573 files spread across 131 directories, using 2 different programming languages, one macro language, 2 scripting languages and of course the shell and Makefiles.
Documentation exists for some functions, but not, of course, for [an unknown number of] others. Documentation -- apart from one-line comments interspersed within the code -- consists of a short functionality description, parameters and return type. There is no architecture design document, nor much explanation of how the different parts fit together.
I've already managed to insert a small proto service. In doing so, I noticed that, for one reason or another, I cannot directly write to the outside world; and also that the developer(s) implemented their own versions of specific standard library functions.
I've already sacrificed a newborn lamb and splattered its blood over my laptop, but I wonder, oh Soylentils, how would you approach this task? What steps would you take, what tools would you use, and what sacrifices would you make?
Prepared by kolie
Google Pays $1.375 Billion to Texas Over Unauthorized Tracking and Biometric Data Collection:
Google has agreed to pay the U.S. state of Texas nearly $1.4 billion to settle two lawsuits that accused the company of tracking users' personal location and maintaining their facial recognition data without consent.
The $1.375 billion payment dwarfs the fines the tech giant has paid to settle similar lawsuits brought by other U.S. states. In November 2022, it paid $391 million to a group of 40 states. In January 2023, it paid $29.5 million to Indiana and Washington. Later that year, in September, it forked out another $93 million to settle with California.
The case, originally filed in 2022, concerned the unlawful tracking and collection of user data covering geolocation, incognito searches, and biometrics: Google tracked users' whereabouts even when the Location History setting was disabled and collected biometric data without informed consent.
"For years, Google secretly tracked people's movements, private searches, and even their voiceprints and facial geometry through their products and services," Texas Attorney General Ken Paxton said in a statement.
"This $1.375 billion settlement is a major win for Texans' privacy and tells companies that they will pay for abusing our trust."
Last year, Google announced plans to store Maps Timeline data locally on users' devices instead of their Google accounts. The company has also rolled out other privacy controls that allow users to auto-delete location information when the Location History setting is enabled.
The payment also rivals a $1.4 billion fine that Meta paid Texas to settle a lawsuit over allegations that it illegally collected the biometric data of millions of users without their permission.
The development comes at a time when Google is the subject of intense regulatory scrutiny on both sides of the Atlantic, facing calls to break up parts of its business to satisfy antitrust concerns.
Prepared by kolie
Rapid7 threat hunter wrote a PoC. No, he's not releasing it.
RSAC: If Rapid7's Christiaan Beek decided to change careers and become a ransomware criminal, he knows exactly how he'd innovate: CPU ransomware.
The senior director of threat analytics for the cybersecurity company got the idea from a bad bug in AMD Zen chips that, if exploited by highly skilled attackers, would allow those intruders to load unapproved microcode into the processors, breaking encryption at the hardware level and modifying CPU behavior at will.
Typically, only chip manufacturers can provide the correct microcode for their CPUs, which they might do to improve performance or fix holes. While it's difficult for outsiders to figure out how to write new microcode, it's not impossible: in the case of the AMD bug, Google demonstrated it could inject microcode to make the chip always choose the number 4 when asked for a random number.
"Coming from a background in firmware security, I was like, woah, I think I can write some CPU ransomware," Beek told The Register.
Spoiler alert: Beek followed through and wrote proof-of-concept code for ransomware that hides in the computer's processor. "Of course, we won't release that, but it's fascinating, right?"
This, according to Beek, is the worst-case scenario. "Ransomware at the CPU level, microcode alteration, and if you are in the CPU or the firmware, you will bypass every freaking traditional technology we have out there."
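Beek's proof-of-concept is staying private, but one modest, related check defenders can run today is confirming which microcode revision their machines have loaded. The sketch below assumes an x86 Linux host, where the kernel exposes that value in /proc/cpuinfo.

```python
# Minimal sketch (not Beek's PoC): report the microcode revision(s) the kernel
# has loaded on an x86 Linux host, as read from /proc/cpuinfo. Comparing these
# against the vendor's published revisions helps confirm updates are applied.
def microcode_revisions(path="/proc/cpuinfo"):
    revisions = set()
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

if __name__ == "__main__":
    found = microcode_revisions()
    print("Loaded microcode revision(s):", ", ".join(sorted(found)) or "not reported")
```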
[...] While Beek says he hasn't yet found a working malware sample in the wild, "if they worked on it a few years ago, you can bet some of them will get smart enough at some point and start creating this stuff."
Beek knows it's possible because he's already done it himself.
"We should not be talking about ransomware in 2025 — and that fault falls on everyone: the vendors, the end users, cyber insurers," Beek told The Register.
"Twelve years later, we're still fighting the battle," he said. "While we're still seeing a lot of technological evolution, everybody's shouting agentic, AI, ML. And if we're bloody honest, we still haven't fixed our foundations."
How attackers break in "is not rocket science," he added. "What I'm seeing with a lot of ransomware breaches: it's a high-risk vulnerability, or a weak password, or we haven't deployed multi-factor authentication, or it's wrongly deployed. That is frustrating."
What should organizations do? Beek urges everyone to focus on cybersecurity basics. "We spend a lot of our time and money as an industry on innovation," he said. "But at the same time, our cyber hygiene is not improving."
Arthur T Knackerbracket has processed the following story:
The Norse ravaged much of Europe for centuries. They were also cosmopolitan explorers who followed trade winds into the Far East.
In the middle of the 9th century, in an office somewhere in the Jibāl region of what is now western Iran, a man is dictating to a scribe. It is the 840s of the Common Era, though the people in this eastern province of the great Caliphate of the ’Abbāsids – an Islamic superpower with its capital in Baghdad – live by the Hijri calendar. The man’s name is Abu ’l-Qāsim ʿUbayd Allāh b ʿAbd Allāh Ibn Khurradādhbih, and he is the director of posts and police for this region.
In his office, he is compiling a report as part of his duties. As his job title implies, he oversees communications and security in the Jibāl region, reporting to officials in Baghdad. What he provides is an intelligence service: in essence, Ibn Khurradādhbih is what we would call a station chief, like those CIA officials who manage clandestine operations abroad. The report he’s working on is part of a much larger document that will one day be known as Kitāb al-Masālik wa l-mamālik (the ‘Book of Itineraries and Kingdoms’), a summary of exactly the kind of thing that governments usually want to know: who was visiting their territory, where they came from, where they were going, and why. This is what he says about a group of people known as the Rus’:
For many decades, the second paragraph of this rather dense text was thought to refer to a totally different group of merchants from those described in the first, for the simple reason that scholars just didn’t believe that the Rūs (or the Rus’, as the word is usually spelled today) really went so far east. And yet, the text is clear. The two sections run on from each other, and both refer to the same people. So why do Ibn Khurradādhbih’s observations about them matter today?
We used to think of the time of the vikings, the three long centuries from around 750 to 1050 CE, as an age of expansion, when the Scandinavian peoples burst out upon an unsuspecting world with fire and sword. Over the past 40 years or so, that picture has become much more nuanced, as we see the poets, traders and settlers alongside the stereotypical raiders (who were nonetheless real) that most people imagine when they think of the vikings. However, our view of these events has recently changed. We no longer see an outward impulse of intention and process, but a much more haphazard and varied diaspora of Norse peoples, in which individuals with their own motives and missions shift across the northern world.
What does that diaspora look like? A settler on Orkney might divide the year between fishing and overseas piracy. A wealthy woman in a Swedish town might sponsor raids in the west. A person in Arctic Fennoscandia might span the very different worlds of the Norse and Saami. Another might journey deep into the rivers of Eurasia, only to die in the oasis of Khwarezm (in today’s Uzbekistan), but his companions would return to Scandinavia with the news. The ‘Norse’ voyages to North America would be crewed by people who included Icelanders, Greenlanders, a Turk, and two Scots. All these are taken from archaeological or textual sources, and serve as but a few examples of what the diaspora really meant.
[...] Given the astonishing geographical range of their travels in his account, it is perhaps surprising to realise that, with some necessary caveats, Rus’ was the name used by the peoples of the east to refer to the vikings. The routes that they took, according to his report, exactly match with what scholars of our own time would come to call the Silk Roads.
Many scholars now use vikings in lowercase to refer to the raiders themselves, adding an initial capital when talking about the time period. Many also employ a word such as Norse as an approximation for ‘everybody else over there in those days’. None of this is very satisfactory, but big-V vikings are almost impossible to shift from the public consciousness, and while there are problems with ‘Norse’ (it’s mainly a linguistic term, and Scandinavia was by no means a monoculture), it will do. During the Viking Age, most of their neighbours referred to them as ‘Northerners’, which is too Eurocentric a perspective to function today, but Norse comes close enough and has the virtue of being relatively specific.
In the west, the Rus’ were regarded as synonymous with the Norse, in fact with actual vikings
[...] As part of their travels, some Rus’ settled temporarily in the near east. Scandinavians served successive Byzantine emperors as mercenaries in the elite Varangian Guard (the name references an Old Norse word meaning those who have sworn an oath). Indeed, an officer’s posting there was a recognised stepping stone to political power back home. Rus’ contacts with Byzantium were by no means always peaceful, extending to all-out war on occasion, and they even besieged the city itself. There are also extensive Rus’ raids recorded around the Caspian Sea that appear identical in nature to the more famous viking assaults in western Europe.
It is clear that, in the west, the Rus’ were regarded as synonymous with the Norse, in fact with actual vikings. There are independent accounts making exactly this comparison from the Frankish court and also from Muslim Andalusia. It’s therefore worth asking if it is only modern historians who tend to separate them, based on the different labels used in east and west, but also on the legacies of the Cold War that drew such sharp, artificial barriers between researchers.
But the revisionist transformation of research into the Viking Age directs us beyond terminology. For not only is the definition of key terms changing, but also the very geography of the period. Our understanding of the Norse is now taking them far from their ‘northern’ homelands.
But can it really be that they themselves travelled, as Ibn Khurradādhbih says, throughout North Africa, western and central Asia, Transoxiana, Sindh, India and ultimately to al-Ṣīn, which perhaps denotes the Khaganate of the Uyghurs or possibly even the territories of the Tang dynasty? In fact, this should not surprise us, because indications of Norse connections with Asia have long been known from the archaeology of Scandinavia.
[...] Even scholars seem startled that more than 100,000 objects of Islamic origin have been excavated from Viking Age contexts in Scandinavia: these are, of course, the dirhams, and furthermore represent only a small fraction of the actual trade, which ran into the high millions. Each one bore an Arabic inscription praising Allah as the only god, usually with an indication of the caliph under whose control the coin had been made, and the location of the mint, which were scattered from Morocco to Afghanistan. It is very hard to imagine that nobody in the north ever wondered what the wavy signs on all those coins (and on some other objects, too) really meant. It must have been obvious that it was writing, and surely somebody understood that it was an exhortation to the divine – in other words, a religious text. Arabic was also inscribed on bronze weights, and it has long been clear that the Norse adopted the standard system of measurement used in the Caliphate. Archaeologists also find locally made weights in Scandinavia that have been given attempts at inscriptions that are just squiggly lines, clearly because ‘everyone knew’ that this is what proper weights should look like. Some scholars have even speculated that all this messaging was part of a (failed) Islamic mission to convert the Scandinavians. To be clear, there is no evidence that any of the Norse accepted the Muslim faith, other than a few who stayed in the Caliphate itself, but curiosity and receptiveness to other cultures were consistent features of their society.
So, rather than marauding through Europe, we find the Norse as traders and collectors of treasured Islamic and even Buddhist objects from as far away as modern-day Iran and Pakistan. And this trading and collecting was not simply haphazard or random. However, for all the detail and range of these contacts, the full implications have not been taken to their obvious conclusion.
[...] Importantly, these are not the marauding Scandinavians of legend, nor were they pursuing the muscular commerce of aggressive trade as they did on the eastern European rivers. They were cosmopolitans and explorers, but also pragmatists, who would have had to learn new languages and fit in to a succession of new surroundings. This was no ‘viking empire’ or colonial endeavour. The Rus’ who travelled to eastern Asia did so in small groups, perhaps in the company of others – a few of them onboard ships, or joining their camels and horses to a caravan. They were in a minority and at a disadvantage.
If all this seems very far from the classic stereotype of ‘the viking’ – an armed man (it’s almost always a man) standing on the deck of a longship in rough seas, on his way westwards to plunder and violent glory – then this is no bad thing. The vulnerability of the Norse in the far east is a reality, not a projection, but it also usefully undermines the clichés that have attached to the period. In the coming years, through the efforts of researchers from across Asia and Europe, the map of the Norse diaspora is going to be redrawn and also re-evaluated. The Viking Age may not be the same again.
Smarter agents, continuous updates, and the eternal struggle to prove ROI:
As Nvidia releases its NeMo microservices to embed AI agents into enterprise workflows, research has found that almost half of businesses are seeing only minor gains from their investments in AI.
NeMo microservices are a set of tools, some already available, which developers can use to build AI agents capable of integrating with existing applications and services to automate tasks, and manage the lifecycle of agents to keep them updated as necessary with the latest information.
"There are over a billion knowledge workers across many industries, geographies, and locations, and our view is that digital employees or AI agents will be able to help enterprises get more work done in this variety of domains and scenarios," said Joey Conway, Nvidia's senior director of generative AI software for enterprise.
[...] Nvidia envisions these microservices working in a circular pipeline, taking new data and user feedback, using this to improve the AI model, then redeploying it. Nvidia refers to this as a "data flywheel," although we can't help feeling that this misunderstands what an actual flywheel does.
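As a rough sketch of what that loop looks like in practice, here is a minimal, hypothetical version of the flywheel idea; none of the function names below are actual NeMo microservice APIs, they simply stand in for the collect/customize/evaluate/redeploy stages Nvidia describes.

```python
# Hypothetical sketch of the "data flywheel" loop: gather feedback, use it to
# produce a candidate model, and redeploy only if evaluation improves.
def collect_feedback():
    # Stand-in for harvesting new usage data and user ratings from a live agent.
    return [{"prompt": "reset my password", "rating": 4}]

def customize(model, feedback):
    # Stand-in for a fine-tuning / customization step driven by the new data.
    return {"version": model["version"] + 1, "trained_on": len(feedback)}

def evaluate(model):
    # Stand-in for automated evaluation against a benchmark suite.
    return model["version"]          # pretend newer versions score higher

def flywheel_iteration(current):
    candidate = customize(current, collect_feedback())
    # Redeploy only when the candidate outperforms what is already in production.
    return candidate if evaluate(candidate) > evaluate(current) else current

model = {"version": 1, "trained_on": 0}
model = flywheel_iteration(model)
print(model)                          # {'version': 2, 'trained_on': 1}
```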
[...] Examples where NeMo microservices are already being put to work include Amdocs, which is laboring on three types of agents for its telecoms operator customers, Nvidia said.
These comprise a billing agent, a sales agent, and a network agent. The billing agent focuses on query resolution, while the sales agent works on personalized offers and customer engagement as part of deal closure. The network agent will analyze logs and network information across geographic regions and countries to proactively identify service issues.
[...] The research was commissioned by Storyblok, provider of CMS software for marketers and developers, which said that businesses need to look beyond surface-level implementations and integrate AI in a way that drives meaningful transformation.
It found the most popular use cases for AI among UK business leaders are website content creation, customer service, marketing analysis, translation services, and marketing content creation.