

posted by janrinok on Wednesday January 22, @07:51PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

After a lengthy court battle with broadband industry lobbyists, New York will soon start enforcing a law that passed in 2021. The state law requires ISPs, like Verizon, to offer $15 or $20 per month internet service plans to low-income households.

Although ISPs won an initial victory when the Affordable Broadband Act (ABA) was blocked in June 2021, that ruling was reversed in April 2024 after the case went to the US appeals court. Last month, the Supreme Court decided not to hear the broadband industry’s challenge, which means the appeals court ruling is the final word on the issue. ISPs will now have to comply with the ABA, which will start being enforced on January 15.

As reported by Ars Technica, New York-based internet providers will now need to either offer a $15/month plan with at least 25Mbps download speeds, or a $20/month plan with 200Mbps download speeds. Included with the price are “any recurring taxes and fees such as recurring rental fees for service provider equipment required to obtain broadband service and usage fees.” Prices can be increased, but increases are capped at 2% per year and state officials can decide if the minimum speeds need to be raised. If a company is non-compliant with the law, it could be fined up to $1,000 per violation.

An ISP can obtain an exemption from the ABA if it serves 20,000 households or fewer and the Commission deems that compliance would have an unreasonable or unsustainable financial effect on the business. With the law going into effect tomorrow, these ISPs will be given a grace period of one month if they file their paperwork by Wednesday claiming that they meet the threshold. They’ll be able to get longer exemptions if they file detailed financial information by February 15.

Earlier this year, the FCC’s attempt to restore certain net neutrality rules was shot down by a federal appeals court. The enforcement of the ABA shows how states can regulate ISPs even if the FCC can’t.


Original Submission

posted by janrinok on Wednesday January 22, @03:08PM   Printer-friendly

How do you fit a dictionary in 64kB of RAM? Unix engineers solved it with clever data structures and compression tricks. Here's the fascinating story behind it:

How do you fit a 250kB dictionary in 64kB of RAM and still perform fast lookups? For reference, even with modern compression techniques like gzip -9, you can't compress this file below 85kB.

In the 1970s, Douglas McIlroy faced this exact challenge while implementing the spell checker for Unix at AT&T. The constraints of the PDP-11 computer meant the entire dictionary needed to fit in just 64kB of RAM. A seemingly impossible task.

Instead of relying on generic compression techniques, he took advantage of the properties of the data and developed a compression algorithm that came within 0.03 bits of the theoretical limit of possible compression. To this day, it remains unbeaten.

The story of Unix spell is more than just historical curiosity. It's a masterclass in engineering under constraints: how to analyze a problem from first principles, leverage mathematical insights, and design elegant solutions that work within strict resource limits.

If you're short on time, here's the key engineering story:

    • The Unix spell started in the 1970s as an afternoon prototype by Steve Johnson at AT&T, before Douglas McIlroy rewrote it to improve its performance and accuracy.

    • McIlroy's first innovation was a clever linguistics-based stemming algorithm that reduced the dictionary to just 25,000 words while improving accuracy.

    • For fast lookups, he initially used a Bloom filter—perhaps one of its first production uses. Interestingly, Dennis Ritchie provided the implementation. They tuned it to have such a low false positive rate that they could skip actual dictionary lookups. (A minimal sketch of the technique follows this list.)

    • When the dictionary grew to 30,000 words, the Bloom filter approach became impractical, leading to innovative hash compression techniques.

    • They computed that 27-bit hash codes would keep collision probability acceptably low, but needed compression.

    • McIlroy's solution was to store differences between sorted hash codes, after discovering these differences followed a geometric distribution.

    • Using Golomb's code, a compression scheme designed for geometric distributions, he achieved 13.60 bits per word—remarkably close to the theoretical minimum of 13.57 bits. (A sketch of this gap coding appears after this summary.)

    • Finally, he partitioned the compressed data to speed up lookups, trading a small memory increase (final size ~14 bits per word) for significantly faster performance.
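
As a rough illustration of the Bloom-filter point above, here is a minimal sketch in Python. It shows the general technique only: the bit-array size, the number of hashes, and the SHA-256-based hashing are illustrative choices, not the parameters or hash functions of the original Unix spell.

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: k hash positions over an m-bit array.
        Queries can return false positives but never false negatives."""

        def __init__(self, m_bits, k_hashes):
            self.m = m_bits
            self.k = k_hashes
            self.bits = bytearray((m_bits + 7) // 8)

        def _positions(self, word):
            # Derive k bit positions from one SHA-256 digest (illustrative only;
            # this is not the hashing the original spell checker used).
            digest = hashlib.sha256(word.encode()).digest()
            for i in range(self.k):
                yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

        def add(self, word):
            for pos in self._positions(word):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, word):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(word))

    # With m and k tuned so false positives are rare enough, a "yes" answer
    # can be trusted without a second lookup in the full dictionary.
    bf = BloomFilter(m_bits=400_000, k_hashes=4)
    for w in ["hello", "world", "spell"]:
        bf.add(w)
    print(bf.might_contain("hello"), bf.might_contain("helo"))  # True, (almost surely) False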

The rest of the article expands each of these points and gives a detailed explanation with all the math and logic behind them.
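
To make the last two bullets concrete, here is a small sketch of the gap-plus-Golomb idea. It is a simulation under stated assumptions: it draws random 27-bit hash codes rather than reproducing McIlroy's actual hash or dictionary, and it uses a power-of-two Golomb divisor (Rice coding) for simplicity.

    import math, random

    def golomb_encode(gaps, m):
        """Golomb code with divisor m: quotient in unary, remainder in binary.
        Using a power-of-two m (Rice coding) keeps every remainder at exactly
        log2(m) bits."""
        r_bits = m.bit_length() - 1
        out = []
        for g in gaps:
            q, r = divmod(g, m)
            out.append("1" * q + "0")              # unary quotient
            out.append(format(r, f"0{r_bits}b"))   # fixed-width remainder
        return "".join(out)

    # Simulate 30,000 word hashes drawn uniformly from a 27-bit space (the bit
    # width quoted above); the real spell used McIlroy's own hash function.
    random.seed(1)
    codes = sorted(random.sample(range(1 << 27), 30_000))
    gaps = [b - a for a, b in zip(codes, codes[1:])]

    mean_gap = sum(gaps) / len(gaps)               # roughly 2^27 / 30,000
    m = 1 << round(math.log2(mean_gap))            # power-of-two divisor near the mean
    encoded = golomb_encode(gaps, m)
    print(f"{len(encoded) / len(gaps):.2f} bits per stored hash")
    # Raw 27-bit codes would cost 27 bits each; the gap coding lands close to
    # the ~13.6 bits/word the article describes.

Decoding simply reverses the process and re-accumulates the gaps; partitioning the bit stream, as the final bullet describes, costs a little space but lets a lookup start near the right gap instead of scanning from the beginning.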


Original Submission

posted by hubie on Wednesday January 22, @09:22AM   Printer-friendly

https://www.righto.com/2025/01/pentium-carry-lookahead-reverse-engineered.html

Addition is harder than you'd expect, at least for a computer. Computers use multiple types of adder circuits with different tradeoffs of size versus speed. In this article, I reverse-engineer an 8-bit adder in the Pentium's floating point unit. This adder turns out to be a carry-lookahead adder, in particular, a type known as "Kogge-Stone." In this article, I'll explain how a carry-lookahead adder works and I'll show how the Pentium implemented it. Warning: lots of Boolean logic ahead.
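
For readers who want to see the idea in code before the gate-level detail, here is a minimal sketch of a textbook 8-bit Kogge-Stone addition in Python. It illustrates the generate/propagate prefix network in general, not the Pentium's specific circuit from the article.

    def kogge_stone_add(a, b, width=8):
        """Textbook Kogge-Stone carry-lookahead addition (a sketch, not the
        Pentium's exact circuit). Per-bit generate/propagate signals are merged
        in log2(width) parallel prefix stages, after which every carry is
        available at once instead of rippling bit by bit."""
        g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]  # generate:  a_i AND b_i
        p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]  # propagate: a_i XOR b_i

        # Prefix stages: after the stage with span d, (G[i], P[i]) summarises
        # up to 2*d bits ending at position i.
        G, P = g[:], p[:]
        d = 1
        while d < width:
            for i in range(width - 1, d - 1, -1):   # descend so G[i-d] still holds the old value
                G[i] = G[i] | (P[i] & G[i - d])
                P[i] = P[i] & P[i - d]
            d *= 2

        # Carry into bit i is the group-generate of bits 0..i-1 (carry-in assumed 0).
        carries = [0] + G[:width - 1]
        total = 0
        for i in range(width):
            total |= (p[i] ^ carries[i]) << i       # sum bit
        return total | (G[width - 1] << width)      # append the carry-out

    # Quick check against Python's own addition
    for a, b in [(0x5A, 0xC3), (0xFF, 0x01), (123, 200)]:
        assert kogge_stone_add(a, b) == a + b
    print("all sums match")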


Original Submission

posted by hubie on Wednesday January 22, @04:37AM   Printer-friendly
from the First-our-jobs,-now-our-hobbies?! dept.

Beijing's half-marathon will see humans racing alongside humanoid robots as China pushes to dominate global robotics:

China is organizing what could be one of the weirdest races in history: a half-marathon where 12,000 humans will compete against an army of humanoid robots to see who's the best long-distance runner.

The 21-kilometer race in Beijing's Daxing district isn't just another tech demo. More than 20 companies are bringing their best walking robots to compete, and they're playing for real money—the top three finishers get prizes regardless of whether they're made of flesh or metal.

This would be the first time humanoid robots race a full 21-kilometer course. Last year, robots were able to join a race without having to complete the full route.

[The event] includes a strict no-wheels policy, and the bots actually need to look human-ish and walk on two legs. They need to be between 0.5 and 2 meters tall—so no giant mechs or tiny robot cars will be sneaking in.

One of the early favorites is Tiangong, a humanoid that can run 10km/h. It also crossed the line alongside some of the fastest humans during last year's half marathon—after joining for the last 100 meters.

The Tesla Optimus Gen-2 peaks at 8km/h.
Atlas (built by Boston Dynamics) is a bit faster at 9km/h.
The OpenAI-backed 1X NEO... reaches a theoretical speed of 12km/h.

For reference: https://en.wikipedia.org/wiki/Foot_speed

In the 2023 Chicago Marathon [42km, not a half-marathon], Kelvin Kiptum set a time of 2:00:35. That equates to an average speed above 20 km/h (12.47 mph) for two hours.
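
A quick back-of-the-envelope check of that pace, assuming the standard 42.195 km marathon distance:

    # Kiptum's 2023 Chicago Marathon: 42.195 km in 2:00:35.
    distance_km = 42.195
    time_h = 2 + 0 / 60 + 35 / 3600
    kmh = distance_km / time_h
    print(f"{kmh:.1f} km/h ({kmh / 1.609344:.1f} mph)")  # roughly 21.0 km/h (13.0 mph)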


Original Submission

posted by hubie on Tuesday January 21, @11:54PM   Printer-friendly
from the and-who-are-you-guys? dept.

Arthur T Knackerbracket has processed the following story:

TSMC has ceased its relationship with Singapore-based PowerAIR after a client review raised concerns about potential violations of U.S. export controls, reports the South China Morning Post, citing people familiar with the matter. As TSMC could not identify the end user of the chips PowerAIR ordered, it reportedly presumed that it was dealing with an entity with possible connections to Huawei, which has been under the U.S. technology embargo since 2020.

TSMC's action follows the discovery of a TSMC-made chiplet in a recently assembled Huawei Ascend 910 AI processor. That particular chiplet was ordered by Sophgo, a relatively unknown entity. Singapore-based PowerAIR is just as unknown as Sophgo, it seems. The firm was incorporated as a private company working on engineering design and consultancy back in September 2023. It lacks an official online presence or publicly listed contact information, according to SCMP. The company was flagged after TSMC identified a possible link between its chip designs and Huawei's.

This is not the first but the second time that an entity hiding behind an 'unknown' brand has provided the blacklisted Huawei with high-end technology that aids China's economic, and therefore military, development, SCMP reports. At this point, we do not know whether we are dealing with the second or the third high-end processor destined for Huawei and allegedly made by TSMC.

Considering that PowerAIR is an unknown entity, probably with few (if any) engineers and no publicly known contracts with companies like Andes, Alchip, or Alphawave, or other firms known for designing high-performance IP, TSMC had every reason to be suspicious. Suspicious enough, apparently, to link PowerAIR to Huawei, which obliged it to cancel the contract; per the report, TSMC did just that.

Since September 2020, Huawei has been prohibited from legally purchasing chips made with American technology, which encompasses nearly all chips. To circumvent this restriction, Huawei reportedly employs intermediaries to place orders or acquire components. In 2024, the company used Sophgo, a Bitmain affiliate, to order Huawei-designed Virtuvian computing chiplets for its Ascend 910 processor, violating U.S. sanctions. This violation was uncovered by TechInsights during a teardown of the Ascend 910 processor. Upon confirming the match, TSMC halted shipments to Sophgo and reported the issue to U.S. and Taiwanese authorities.


Original Submission

posted by hubie on Tuesday January 21, @07:10PM   Printer-friendly
from the do-as-we-say,-not-as-we-do dept.

Arthur T Knackerbracket has processed the following story:

The Department of Justice and the FBI shared today that they have completed a project to remove malware used by Chinese hackers from computers in the US. The effort was essentially a court-approved counter-hack that remotely deleted malware known as PlugX from more than 4,200 computers. The agencies will notify the US owners of those impacted machines about the operation through their internet service providers.

According to the DOJ press release, hacker groups known as Mustang Panda and Twill Typhoon received backing from the Chinese government to use PlugX to infect, control and gather information from computers outside China. The action to delete the PlugX malware from US computers began in August 2024. It was conducted in cooperation with French law enforcement and with Sekoia.io, a France-based private cybersecurity company. Sekoia.io has found PlugX malware in more than 170 countries.

The Mustang Panda group has been conducting infiltration efforts around the world since at least 2014. For instance, cybersecurity firm ESET found that Mustang Panda gained access to cargo shipping companies' computers in Norway, Greece and the Netherlands in March. And the group was one of several China-linked hacking organizations identified as compromising telecommunications systems across the Asia-Pacific region in reports last summer.


Original Submission

posted by hubie on Tuesday January 21, @02:24PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Europe and Japan’s BepiColombo beamed back close-up images of the solar system’s innermost planet, flying through Mercury’s shadow to peer directly into craters that lie in permanent shadow.

BepiColombo, consisting of two conjoined spacecraft, flew past Mercury for the sixth and final time on Wednesday, using the planet’s gravitational pull to adjust its trajectory for an eventual orbital insertion in 2026. The mission launched in October 2018 as a joint venture between the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA), each providing an orbiter to explore Mercury. During its latest flyby, the twin spacecraft flew above the surface of Mercury at a distance of around 180 miles (295 kilometers), according to ESA.

From this close distance, BepiColombo captured images of Mercury’s cratered surface, starting with the planet’s cold, permanently dark night side near the north pole before moving toward its sunlit northern regions.

Using its monitoring cameras (M-CAM 1), BepiColombo got its first close-up view of the boundary that separates the day and night side of Mercury. In the image above, the rims of Prokofiev, Kandinsky, Tolkien, and Gordimer craters can be seen littered across the surface of Mercury, casting permanent shadows that may contain pockets of frozen water.

Indeed, a key goal of the mission is to investigate whether Mercury holds water in its shadows, despite its close proximity to the Sun.

The massive Caloris Basin, Mercury’s largest impact crater, stretches more than 930 miles (1,500 kilometers) across and is visible at the bottom left of the image.

Although Mercury is a largely dark planet, its younger features (or more recent scarring) appear brighter on the surface. Scientists aren’t quite sure what Mercury is made of, but material that has been dug up from beneath the surface gradually darkens with time.

ESA has released a movie of the flyby that you can download.


Original Submission

posted by hubie on Tuesday January 21, @09:39AM   Printer-friendly
from the avoiding-the-ouroboros-of-LLM-slop dept.

Blogger Matt Webb points out that nations are beginning to need a strategic fact reserve, in light of LLMs and other AI models starting to consume and reprocess the slop which they themselves have produced.

The future needs trusted, uncontaminated, complete training data.

From the point of view of national interests, each country (or each trading bloc) will need its own training data, as a reserve, and a hedge against the interests of others.

Probably the best way to start is to take a snapshot of the internet and keep it somewhere really safe. We can sift through it later; the world's data will never be more available or less contaminated than it is today. Like when GitHub stored all public code in an Arctic vault (02/02/2020): a very-long-term archival facility 250 meters deep in the permafrost of an Arctic mountain. Or the Svalbard Global Seed Vault.

But actually I think this is a job for librarians and archivists.

What we need is a long-term national programme to slowly, carefully accept digital data into a read-only archive. We need the expertise of librarians, archivists and museums in the careful and deliberate process of acquisition and accessioning (PDF).

(Look, and if this is an excuse for governments to funnel money to the cultural sector then so much the better.)

It should start today.

Already, AI slop is filling the WWW and starting to drown out legitimate, authoritative sources through sheer volume.

Previously
(2025) Meta's AI Profiles Are Already Polluting Instagram and Facebook With Slop
(2024) Thousands Turned Out For Nonexistent Halloween Parade Promoted By AI Listing
(2024) Annoyed Redditors Tanking Google Search Results Illustrates Perils of AI Scrapers


Original Submission

posted by hubie on Tuesday January 21, @04:56AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The sun's energy is plentiful. And China is capitalizing.

Images captured by two Earth-observing satellites, operated by the U.S. Geological Survey, revealed a rapid expansion of solar farms in a remote northern Chinese region, the Kubuqi Desert.

"The construction is part of China’s multiyear plan to build a 'solar great wall' designed to generate enough energy to power Beijing," writes NASA's Earth Observatory. (For reference, although all this energy won't directly power the Chinese capital, around 22 million people live in Beijing; that's over two and a half times the population of New York City.)

Two Landsat satellite images show a section of the major solar expansion between 2017 and 2024. (For a size and scale reference, the images are about 10 kilometers, or 6.2 miles, across.)

And the solar complex is still growing. It will be 250 miles long and 3 miles wide by 2030, according to NASA.

Though China's energy mix is still dominated by fossil fuels — coal, oil, and gas comprised 87 percent of its energy supply as of 2022 — the nation clearly sees value in expanding renewable energy.

"As of June 2024, China led the world in operating solar farm capacity with 386,875 megawatts, representing about 51 percent of the global total, according to Global Energy Monitor’s Global Solar Power Tracker," NASA explained. "The United States ranks second with 79,364 megawatts (11 percent), followed by India with 53,114 megawatts (7 percent)."

Energy experts say that solar energy, like wind, is an important part of an energy supply, as both are renewable and have been shown to reduce energy costs. Fossil fuels, of course, still play a prominent role in most states' energy mix today.

But the economics of solar are clearly there. The proof, via U.S. satellites, is in the Kubuqi Desert.


Original Submission

posted by janrinok on Tuesday January 21, @12:09AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Meta is preparing for even more layoffs, according to reporting by Bloomberg. CEO Mark Zuckerberg said in a company memo that he plans to cut about five percent of the company's staff, targeting its "low-performers."

“I’ve decided to raise the bar on performance management and move out low-performers faster,” Zuckerberg said in the memo. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

All told, this could result in 10 percent fewer staff at Meta, once attrition is accounted for. Bloomberg suggested that the forthcoming pink slips will focus on people “who have been with the company long enough to receive a performance rating.”

Between increased layoffs and attrition, nearly 7,000 Meta staff might be leaving the company in the near future. This follows a firing spree that began in late 2022, eventually impacting over 20,000 workers. The company also laid off 60 technical program managers earlier this month.

"A leaner org will execute its highest priorities faster. People will be more productive, and their work will be more fun and fulfilling," Zuckerberg said in 2024. Nothing says “fun and fulfilling” like living in constant fear of being fired.


Original Submission

posted by janrinok on Monday January 20, @07:24PM   Printer-friendly

European Union orders X to hand over algorithm documents:

Brussels has ordered Elon Musk to fully disclose recent changes made to recommendations on X, stepping up an investigation into the role of the social media platform in European politics.

The expanded probe by the European Commission, announced on Friday, requires X to hand over internal documents regarding its recommendation algorithm. The Commission also issued a "retention order" for all relevant documents relating to how the algorithm could be amended in future.

In addition, the EU regulator requested access to information on how the social media network moderates and amplifies content.

The move follows complaints from politicians in Germany that X's algorithm is promoting content by the far right ahead of the country's February 23 elections. Musk has come out in favour of Alternative for Germany (AfD), arguing that it will save Europe's largest nation from "economic and cultural collapse." The German domestic intelligence service has designated parts of the AfD as right-wing extremist.

Speaking on Friday, German chancellor Olaf Scholz toughened his language towards the world's richest man, describing Musk's support for the AfD as "completely unacceptable." The party is currently in second place in the polls with around 20 percent support, ahead of Scholz's Social Democrats and behind the opposition Christian Democratic Union.

Earlier in the week, Germany's defence ministry and foreign ministry said they were suspending their activity on X, with the defence ministry saying it had become increasingly "unhappy" with the platform.

posted by janrinok on Monday January 20, @02:38PM   Printer-friendly

'ELIZA,' the world's 1st chatbot, was just resurrected from 60-year-old computer code:

Scientists have just resurrected "ELIZA," the world's first chatbot, from long-lost computer code — and it still works extremely well.

Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life.

ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
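
To give a flavour of how such scripted responses work, here is a minimal keyword-and-template sketch in Python. It illustrates the general ELIZA/DOCTOR technique only; the rules and phrasing are invented for the example and are not the recovered MAD-SLIP code or the full DOCTOR script.

    import re

    # A tiny ELIZA-style rule engine: keyword patterns paired with reassembly
    # templates, with a default prompt when nothing matches.
    RULES = [
        (re.compile(r"\ball alike\b", re.I), ["In what way?"]),
        (re.compile(r"\bI am (.*)", re.I),   ["How long have you been {0}?"]),
        (re.compile(r"\bmy (\w+)\b", re.I),  ["Tell me more about your {0}."]),
    ]
    DEFAULT = "Please tell me your problem."

    def respond(text):
        for pattern, templates in RULES:
            match = pattern.search(text)
            if match:
                return templates[0].format(*match.groups())
        return DEFAULT

    print(respond("Men are all alike"))   # -> "In what way?"
    print(respond("I am unhappy"))        # -> "How long have you been unhappy?"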

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete.

Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers.

"I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind." But why the team wanted to get ELIZA working is more complex, he said.

"From a technical point of view, we did not even know that the code we had found — the only version ever discovered — actually worked," Shrager said. So they realized they had to try it.

Bringing ELIZA back to life was not straightforward. It required the team to clean and debug the code and create an emulator that would approximate the kind of computer that would have run ELIZA in the 1960s. After restoring the code, the team got ELIZA running — for the first time in 60 years — on Dec. 21.

"By making it run, we demonstrated that this was, in fact, a part of the actual ELIZA lineage and that it not only worked, but worked extremely well," Shrager said.

But the team also found a bug in the code, which they elected not to fix. "It would ruin the authenticity of the artifact," Shrager explained, "like fixing a mis-stroke in the original Mona Lisa." The program crashes if the user enters a number, such as "You are 999 today," they wrote in the study.

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

That legacy continues today, as ELIZA is often compared to current large-language models (LLMs) and other artificial intelligence.

Even though it does not compare to the abilities of modern LLMs like ChatGPT, "ELIZA is really remarkable when you consider that it was written in 1965," David Berry, a digital humanities professor at the University of Sussex in the U.K. and co-author of the paper, told Live Science in an email. "It can hold its own in a conversation for a while."

One thing ELIZA did better than modern chatbots, Shrager said, is listen. Modern LLMs only try to complete your sentences, whereas ELIZA was programmed to prompt the user to continue a conversation. "That's more like what 'chatting' is than any intentional chatbot since," Shrager said.

"Bringing ELIZA back, one of the most — if not most — famous chatbots in history, opens people's eyes up to the history that is being lost," Berry said. Because the field of computer science is so forward-looking, practitioners tend to consider its history obsolete and don't preserve it.

Berry, though, believes that computing history is also cultural history.

"We need to work harder as a society to keep these traces of the nascent age of computation alive," Berry said, "because if we don't then we will have lost the digital equivalents of the Mona Lisa, Michelangelo's David or the Acropolis."


Original Submission

posted by hubie on Monday January 20, @09:52AM   Printer-friendly
from the "for-a-better-customer-experience" dept.

Google begins requiring JavaScript for Google Search:

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search.

In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly and that the quality of search results tends to be degraded.

"Enabling JavaScript allows us to better protect our services and users from bots and evolving forms of abuse and spam," the spokesperson told TechCrunch, "and to provide the most relevant and up-to-date information."

Many major websites rely on JavaScript. According to a 2020 GitHub survey, 95% of sites around the web employ the language in some form. But as users on social media point out, Google's decision to require it could add friction for those who rely on accessibility tools, which can struggle with certain JavaScript implementations.

JavaScript is also prone to security vulnerabilities. In its 2024 annual security survey, tech company Datadog found that around 70% of JavaScript services are vulnerable to one or more "critical" or "high-severity" vulnerabilities introduced by a third-party software library.

The Google spokesperson told TechCrunch that, on average, "fewer than .1%" of searches on Google are done by people who disable JavaScript. That's no small number at Google scale. Google processes around 8.5 billion searches per day, so one can assume that millions of people performing searches through Google aren't using JavaScript.
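
The scale claim is easy to check; treating the quoted "fewer than .1%" as an upper bound on roughly 8.5 billion daily searches:

    searches_per_day = 8.5e9       # approximate daily Google searches cited above
    no_js_share = 0.001            # "fewer than .1%" taken as an upper bound
    print(f"{searches_per_day * no_js_share:,.0f} searches/day without JavaScript")  # 8,500,000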

One of Google's motivations here may be inhibiting third-party tools that give insights into Google Search trends and traffic. According to a post on Search Engine Roundtable on Friday, a number of "rank-checking" tools — tools that indicate how websites are performing in search engines — began experiencing issues with Google Search around the time Google's JavaScript requirement came into force.

The Google spokesperson declined to comment on Search Engine Roundtable's reporting.


Original Submission

posted by hubie on Monday January 20, @05:07AM   Printer-friendly
from the is-it-a-salt-water-solution? dept.

Arthur T Knackerbracket has processed the following story:

A worrying study published last month in Environmental Challenges claims that nearly two-thirds of the Great Salt Lake’s shrinkage is attributable to human use of river water that otherwise would have replenished the lake.

Utah’s Great Salt Lake is a relic of a once-vast lake that occupied the same site during the Ice Age. The lake’s level has fluctuated since measurements began in 1847, but it’s about 75 miles (120 kilometers) long by 35 miles (56 km) wide with a maximum depth of 33 feet (10 meters). The Great Salt Lake’s water levels hit a record low in 2021, a record that was broken again the following year.

According to the recent paper, about 62% of the river water that otherwise would have refilled the lake has instead been used for “anthropogenic consumption.” The research team found that agricultural use cases were responsible for 71% of those human-driven depletions; furthermore, about 80% of the agricultural water is used for crops to feed just under one million cattle.

[...] The researchers proposed a goal of reducing anthropogenic river water consumption in the area by 35% to begin refilling the lake, as well as a detailed breakdown of specific reductions within livestock feed production.

“We find that the most potent solutions would involve a 61% reduction in alfalfa production along with fallowing of 26–55% of grass hay production,” the team wrote, “resulting in reductions of agricultural revenues of US$97 million per year, or 0.04% of the state’s GDP.” The team added that Utah residents could be compensated for their loss of revenue. It’s an easier plan to propose on paper than sell folks on as a reality, but it is a pathway towards recovery for the Great Salt Lake.
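
As a simple cross-check of the quoted figures, the US$97 million per year and the 0.04% of state GDP are mutually consistent:

    revenue_loss = 97e6            # US$ per year, from the paper as quoted above
    gdp_share = 0.0004             # 0.04% of the state's GDP
    implied_gdp = revenue_loss / gdp_share
    print(f"implied state GDP of about ${implied_gdp / 1e9:.0f} billion")  # about $242 billion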

As the team added, the lake directly supports 9,000 jobs and $2.5 billion in economic productivity, primarily from mining, recreation, and fishing of brine shrimp. Exposed saline lakebeds (as the Great Salt Lake’s increasingly are with its decreasing water levels) are also associated with dust that can pose health risks due to its effects on the human respiratory system.

For now, the Great Salt Lake's average levels and volume continue to decrease. But the team's research has revealed a specific pain point and suggested ways to reduce the strain on the great—but diminishing—water body.

The elephant in the room that isn't mentioned is all of the data centers in the Salt Lake region. It seems data on their water usage is considered Confidential Business Information and doesn't need to be reported, so much discussion on this gets presented as a farmer-vs-resident issue.


Original Submission

posted by hubie on Monday January 20, @12:21AM   Printer-friendly

[Ed. note: DVLA == Driver and Vehicle Licensing Agency]

https://dafyddvaughan.uk/blog/2025/why-some-dvla-digital-services-dont-work-at-night/

Every few months or so, somebody asks on social media why a particular DVLA digital service is turned off overnight. Why is it that, in the 21st century, a newish online service only operates for some hours of the day? Rather than answering it every time, I've decided to write this post, so I can point people at it in future.

It's also a great case study to show why making government services digitally native can be quite complicated. Unless you're a startup, you're rarely working in a greenfield environment, and you have legacy technology and old working practices to contend with. Transforming government services isn't as easy as the tech bros and billionaires make it out to be.

[...] DVLA is around 60 years old and manages driving licences and vehicle records for England, Scotland and Cymru.

At the time, many of DVLA's services - particularly those relating to driving licences - were still backed by an old IBM mainframe from the 1980s, fondly known as Drivers-90 (or D90 for short). D90 was your typical mainframe - code written in COBOL using the ADABAS database package. Most data processing happened 'offline' - through batch jobs which ran during an overnight window.

In the early 2000s, there had been an attempt by DVLA's IT suppliers to modernise the systems. They'd designed a new set of systems using Java and WebLogic, with Oracle Databases - which they referred to as the New Systems Landscape (or NSL). To speed up the migration, they'd used tools to automatically convert the code and database structures.

As often happens in large behind-the-scenes IT modernisation projects, this upgrade effort ran out of energy and money, so it never finished. This left a complex infrastructure in place - with some services using the new architecture, some using the mainframe, and some using both at the same time.

[...] It's now 2024 - 10 years on from the launch of the first service. The legacy infrastructure, which really should have been replaced by now, is probably the reason why the services are still offline overnight.

Is this acceptable? Not really. Is it understandable? Absolutely.

Legacy tech is complicated. It's one of the biggest barriers for organisations undertaking digital transformation.


Original Submission