posted by janrinok on Monday January 20, @07:24PM

European Union orders X to hand over algorithm documents:

Brussels has ordered Elon Musk to fully disclose recent changes made to recommendations on X, stepping up an investigation into the role of the social media platform in European politics.

The expanded probe by the European Commission, announced on Friday, requires X to hand over internal documents regarding its recommendation algorithm. The Commission also issued a "retention order" for all relevant documents relating to how the algorithm could be amended in future.

In addition, the EU regulator requested access to information on how the social media network moderates and amplifies content.

The move follows complaints from politicians in Germany that X's algorithm is promoting content by the far right ahead of the country's February 23 elections. Musk has come out in favour of Alternative for Germany (AfD), arguing that it will save Europe's largest nation from "economic and cultural collapse." The German domestic intelligence service has designated parts of the AfD as right-wing extremist.

Speaking on Friday, German chancellor Olaf Scholz toughened his language towards the world's richest man, describing Musk's support for the AfD as "completely unacceptable." The party is currently in second place in the polls with around 20 percent support, ahead of Scholz's Social Democrats and behind the opposition Christian Democratic Union.

Earlier in the week, Germany's defence ministry and foreign ministry said they were suspending their activity on X, with the defence ministry saying it had become increasingly "unhappy" with the platform.

posted by janrinok on Monday January 20, @02:38PM

'ELIZA,' the world's 1st chatbot, was just resurrected from 60-year-old computer code:

Scientists have just resurrected "ELIZA," the world's first chatbot, from long-lost computer code — and it still works extremely well.

Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life.

ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
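For flavor, here is a minimal Python sketch of the keyword-and-response pattern that DOCTOR-style scripts use. The two rules paraphrase the exchange quoted above; they are illustrative, not Weizenbaum's original rule set.

    import re

    # Toy rules paraphrasing the DOCTOR exchange above -- not the original script.
    RULES = [
        (re.compile(r"are all alike", re.I), "In what way."),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    ]

    def respond(user_input: str) -> str:
        """Return the first matching rule's response; else the opening prompt."""
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(*match.groups())
        return "Please tell me your problem."  # opener doubles as the fallback

    print(respond("Men are all alike"))  # -> In what way.
    print(respond("I am unhappy"))       # -> How long have you been unhappy?

The real program also ranked keywords and applied pronoun swaps ("my" to "your" and so on), but this match-and-reassemble loop is the core idea.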

Weizenbaum wrote ELIZA in MAD-SLIP, the now-defunct Michigan Algorithm Decoder (MAD) language extended with his own Symmetric List Processor (SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete.

Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers.

"I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind." But why the team wanted to get ELIZA working is more complex, he said.

"From a technical point of view, we did not even know that the code we had found — the only version ever discovered — actually worked," Shrager said. So they realized they had to try it.

Bringing ELIZA back to life was not straightforward. It required the team to clean and debug the code and create an emulator that would approximate the kind of computer that would have run ELIZA in the 1960s. After restoring the code, the team got ELIZA running — for the first time in 60 years — on Dec. 21.

"By making it run, we demonstrated that this was, in fact, a part of the actual ELIZA lineage and that it not only worked, but worked extremely well," Shrager said.

But the team also found a bug in the code, which they elected not to fix. "It would ruin the authenticity of the artifact," Shrager explained, "like fixing a mis-stroke in the original Mona Lisa." The program crashes if the user enters a number, such as "You are 999 today," they wrote in the study.

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

That legacy continues today, as ELIZA is often compared to current large language models (LLMs) and other artificial intelligence.

Even though it does not compare to the abilities of modern LLMs like ChatGPT, "ELIZA is really remarkable when you consider that it was written in 1965," David Berry, a digital humanities professor at the University of Sussex in the U.K. and co-author of the paper, told Live Science in an email. "It can hold its own in a conversation for a while."

One thing ELIZA did better than modern chatbots, Shrager said, is listen. Modern LLMs only try to complete your sentences, whereas ELIZA was programmed to prompt the user to continue a conversation. "That's more like what 'chatting' is than any intentional chatbot since," Shrager said.

"Bringing ELIZA back, one of the most — if not most — famous chatbots in history, opens people's eyes up to the history that is being lost," Berry said. Because the field of computer science is so forward-looking, practitioners tend to consider its history obsolete and don't preserve it.

Berry, though, believes that computing history is also cultural history.

"We need to work harder as a society to keep these traces of the nascent age of computation alive," Berry said, "because if we don't then we will have lost the digital equivalents of the Mona Lisa, Michelangelo's David or the Acropolis."


Original Submission

posted by hubie on Monday January 20, @09:52AM
from the "for-a-better-customer-experience" dept.

Google begins requiring JavaScript for Google Search:

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search.

In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly and that the quality of search results tends to be degraded.

"Enabling JavaScript allows us to better protect our services and users from bots and evolving forms of abuse and spam," the spokesperson told TechCrunch, "and to provide the most relevant and up-to-date information."

Many major websites rely on JavaScript. According to a 2020 GitHub survey, 95% of sites around the web employ the language in some form. But as users on social media point out, Google's decision to require it could add friction for those who rely on accessibility tools, which can struggle with certain JavaScript implementations.

JavaScript is also prone to security vulnerabilities. In its 2024 annual security survey, tech company Datadog found that around 70% of JavaScript services are vulnerable to one or more "critical" or "high-severity" vulnerabilities introduced by a third-party software library.

The Google spokesperson told TechCrunch that, on average, "fewer than .1%" of searches on Google are done by people who disable JavaScript. That's no small number at Google scale. Google processes around 8.5 billion searches per day, so one can assume that millions of people performing searches through Google aren't using JavaScript.
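To put that in figures: 0.1% of 8.5 billion is about 8.5 million, so even "fewer than .1%" leaves room for millions of JavaScript-free searches every day.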

One of Google's motivations here may be inhibiting third-party tools that give insights into Google Search trends and traffic. According to a post on Search Engine Roundtable on Friday, a number of "rank-checking" tools — tools that indicate how websites are performing in search engines — began experiencing issues with Google Search around the time Google's JavaScript requirement came into force.

The Google spokesperson declined to comment on Search Engine Roundtable's reporting.


Original Submission

posted by hubie on Monday January 20, @05:07AM
from the is-it-a-salt-water-solution? dept.

Arthur T Knackerbracket has processed the following story:

A worrying study published last month in Environmental Challenges claims that nearly two-thirds of the Great Salt Lake’s shrinkage is attributable to human use of river water that otherwise would have replenished the lake.

Utah’s Great Salt Lake is a relic of a once-vast lake that occupied the same site during the Ice Age. The lake’s level has fluctuated since measurements of it began in 1847, but it’s about 75 miles (120 kilometers) long by 35 miles (56 km) wide with a maximum depth of 33 feet (10 meters). The Great Salt Lake’s water levels hit a record low in 2021, a record that was broken again the following year.

According to the recent paper, about 62% of the river water that otherwise would have refilled the lake has instead been used for “anthropogenic consumption.” The research team found that agricultural use was responsible for 71% of those human-driven depletions; furthermore, about 80% of the agricultural water is used for crops to feed just under one million cattle.
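Chaining those figures together: crops grown for cattle feed alone account for roughly 0.71 × 0.80 ≈ 57% of all the human-driven depletions.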

[...] The researchers proposed a goal of reducing anthropogenic river water consumption in the area by 35% to begin refilling the lake, as well as a detailed breakdown of specific reductions within livestock feed production.

“We find that the most potent solutions would involve a 61% reduction in alfalfa production along with fallowing of 26–55% of grass hay production,” the team wrote, “resulting in reductions of agricultural revenues of US$97 million per year, or 0.04% of the state’s GDP.” The team added that Utah residents could be compensated for their loss of revenue. It’s an easier plan to propose on paper than sell folks on as a reality, but it is a pathway towards recovery for the Great Salt Lake.

As the team added, the lake directly supports 9,000 jobs and $2.5 billion in economic productivity, primarily from mining, recreation, and fishing of brine shrimp. Exposed saline lakebeds (as the Great Salt Lake’s increasingly are with its decreasing water levels) are also associated with dust that can pose health risks due to its effects on the human respiratory system.

For now, the Great Salt Lake's average levels and volume continue to decrease. But the team's research has revealed a specific pain point and suggested ways to reduce the strain on the great—but diminishing—water body.

The elephant in the room that isn't mentioned is all of the data centers in the Salt Lake region. It seems data on their water usage is considered Confidential Business Information and doesn't need to be reported, so much discussion on this gets presented as a farmer-vs-resident issue.


Original Submission

posted by hubie on Monday January 20, @12:21AM

[Ed. note: DVLA == Driver and Vehicle Licensing Agency]

https://dafyddvaughan.uk/blog/2025/why-some-dvla-digital-services-dont-work-at-night/

Every few months or so, somebody asks on social media why a particular DVLA digital service is turned off overnight. Why is it that, in the 21st century, a newish online service only operates for some hours of the day? Rather than answering it every time, I've decided to write this post, so I can point people at it in future.

It's also a great case study to show why making government services digitally native can be quite complicated. Unless you're a start-up, you're rarely working in a greenfield environment, and you have legacy technology and old working practices to contend with. Transforming government services isn't as easy as the tech bros and billionaires make it out to be.

[...] DVLA is around 60 years old and manages driving licences and vehicle records for England, Scotland and Cymru.

At the time, many of DVLA's services - particularly those relating to driving licences - were still backed by an old IBM mainframe from the 1980s, fondly known as Drivers-90 (or D90 for short). D90 was your typical mainframe - code written in COBOL using the ADABAS database package. Most data processing happened 'offline' - through batch jobs which ran during an overnight window.
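As a rough sketch of why a front-end service goes dark overnight, here is a minimal Python illustration of gating requests around a batch window. The times and names are hypothetical, for illustration only; the post doesn't give DVLA's actual schedule.

    from datetime import datetime, time
    from typing import Optional

    # Hypothetical window: overnight batch jobs own the data during this period.
    BATCH_START = time(23, 30)
    BATCH_END = time(6, 0)

    def service_available(now: Optional[datetime] = None) -> bool:
        """False while the overnight batch window is open.

        While batch jobs are rewriting records, online transactions could
        see (or create) inconsistent state, so the service is switched off.
        """
        t = (now or datetime.now()).time()
        in_batch_window = t >= BATCH_START or t < BATCH_END  # spans midnight
        return not in_batch_window

    print(service_available(datetime(2024, 1, 1, 3, 0)))   # False: mid-batch
    print(service_available(datetime(2024, 1, 1, 12, 0)))  # True: daytime

The point isn't the gate itself but the dependency: as long as the authoritative records are only consistent outside the batch window, every service built on top inherits those opening hours.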

In the early 2000s, there had been an attempt by DVLA's IT suppliers to modernise the systems. They'd designed a new set of systems using Java and WebLogic, with Oracle Databases - which they referred to as the New Systems Landscape (or NSL). To speed up the migration, they'd used tools to automatically convert the code and database structures.

As often happens in large behind-the-scenes IT modernisation projects, this upgrade effort ran out of energy and money, so it never finished. This left a complex infrastructure in place - with some services using the new architecture, some using the mainframe, and some using both at the same time.

[...] It's now 2024 - 10 years on from the launch of the first service. The legacy infrastructure, which really should have been replaced by now, is probably the reason why the services are still offline overnight.

Is this acceptable? Not really. Is it understandable? Absolutely.

Legacy tech is complicated. It's one of the biggest barriers for organisations undertaking digital transformation.


Original Submission

posted by janrinok on Sunday January 19, @07:39PM

Arthur T Knackerbracket has processed the following story:

Intel and AMD engineers have stepped in at the eleventh hour to deal with a code contribution from a Microsoft developer that could have broken Linux 6.13 on some systems.

The change, made in the autumn, was a useful improvement at face value. It was a modification to Linux x86_64 to use large read-only execute (ROX) pages for caching executable pages. The theory was that mapping executable code with fewer, larger pages would reduce pressure on the instruction TLB and so increase performance.

However, the code caused problems on some setups and an urgent patch from Intel's Peter Zijlstra was committed yesterday to disable it. The stable release of the 6.13 kernel was due this coming weekend.

Zijlstra wrote: "The whole module_writable_address() nonsense made a giant mess of alternative.c, not to mention it still contains bugs -- notable (sic) some of the CFI variants crash and burn."

Control Flow Integrity (CFI) is an anti-malware technology aimed at preventing attackers from redirecting the control flow of a program. The change can cause issues on some CFI-enabled setups and reports have included Intel Alder Lake-powered machines failing to resume from hibernation.

Zijlstra said the Microsoft engineer "has been working on patches to clean all this up again, but given the current state of things, this stuff just isn't ready. Disable for now, let's try again next cycle."

The offending source is still present, but won't be included in the upcoming stable kernel build.

AMD engineer Borislav Petkov noted that the Linux x86_64 maintainers had not signed off on the change, commenting: "I just love it how this went in without a single x86 maintainer Ack, it broke a bunch of things and then it is still there instead of getting reverted. Let's not do this again please."

Microsoft is notable for dubious quality control standards regarding releases of its flagship operating system, Windows. That one of its engineers should drop some dodgy code into the Linux kernel is not hugely surprising, and the unfortunate individual is not the first and will not be the last to do so, regardless of their employer.

However, the processes that allowed it to remain in the build this close to public release will be a concern. It is amusing that engineers from both Intel and AMD ended up dealing with the fallout from a Microsoft engineer's contribution, and the problem never reached the stable release, but Petkov will not be the only one wondering how the change made it in without a review by the Linux x86/x86_64 maintainers.


Original Submission

posted by janrinok on Sunday January 19, @02:53PM

https://spectrum.ieee.org/isaac-asimov-robotics

In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story "Runaround." The laws were later popularized in his seminal story collection I, Robot.

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov's framework useful for considering the potential safeguards needed for AI that interacts with humans.

But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov's original concerns about physical harm and obedience.

The proliferation of AI-enabled deception is particularly concerning. According to the FBI's 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity's 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust.

Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles are just as persuasive as or even more persuasive than traditional propaganda, and using AI to create convincing content requires very little effort.

Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I'm a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.
[...]
In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems' ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union's AI Act, which includes provisions requiring transparency in AI interactions and clear disclosure of AI-generated content. In Asimov's time, people couldn't have imagined how artificial agents could use online communication tools and avatars to deceive humans.

Therefore, we must make an addition to Asimov's laws.

        Fourth Law: A robot or AI must not deceive a human by impersonating a human being.


Original Submission

posted by janrinok on Sunday January 19, @10:04AM
from the consolation-prize dept.

The order comes after GM was caught selling customer data to third-party data brokers and insurance companies — without consent:

General Motors and its subsidiary OnStar are banned from selling customer geolocation and driving behavior data for five years, the Federal Trade Commission announced Thursday.

The settlement comes after a New York Times investigation found that GM had been collecting micro-details about its customers' driving habits, including acceleration, braking, and trip length — and then selling it to insurance companies and third-party data brokers like LexisNexis and Verisk. Clueless vehicle owners were then left wondering why their insurance premiums were going up.

[...] The FTC accused GM of using a "misleading enrollment process" to get vehicle owners to sign up for its OnStar connected vehicle service and Smart Driver feature. The automaker neither disclosed to customers that it was collecting their data nor sought their consent to sell it to third parties. After the Times exposed the practice, GM said it was discontinuing its OnStar Smart Driver program.

Also at AP, Detroit Free Press and Engadget.


Original Submission

posted by mrpg on Sunday January 19, @05:15AM
from the Who's-gonna-write-my-email-replies-for-me? dept.

Arthur T Knackerbracket has processed the following story:

The White House just released restrictions on the global sale of AI chips and GPUs, which not only limit who can buy these chips but also dictate where they can be used. This new rule has Nvidia and the Semiconductor Industry Association (SIA) up in arms, and now the European Commission (EC) is also protesting it. However, the rule will come into force 60 days after its announcement—well into Trump’s second presidency—giving the EU and other concerned parties time to negotiate with his administration to defer or cancel it.

Ten European countries—Belgium, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway, and Sweden—would have Tier 1 status, meaning they have ‘near-unrestricted access’ to advanced American AI chips. However, they must still abide by U.S. security requirements and keep at least 75% of their processing capabilities within Tier 1 countries. Although they could install the rest of their AI chips in Tier 2 countries, they cannot put over 7% of these chips in any single nation, meaning they would have to spread that remaining capacity across at least four countries (25% ÷ 7% ≈ 3.6) if they wanted to do so.


Original Submission

posted by mrpg on Sunday January 19, @12:30AM
from the this-holds-promise... dept.

https://phys.org/news/2025-01-chainmail-material-future-armor.html

In a remarkable feat of chemistry, a Northwestern University-led research team has developed the first two-dimensional (2D) mechanically interlocked material.

Resembling the interlocking links in chainmail, the nanoscale material exhibits exceptional flexibility and strength. With further work, it holds promise for high-performance body armor and other applications that demand lightweight, flexible and tough materials.

Published on Jan. 17 in the journal Science, the study marks several firsts for the field. Not only is it the first 2D mechanically interlocked polymer, but the novel material also contains 100 trillion mechanical bonds per square centimeter—the highest density of mechanical bonds ever achieved.

The researchers produced this material using a new, highly efficient and scalable polymerization process.

"We made a completely new polymer structure," said Northwestern's William Dichtel, the study's corresponding author.

"It's similar to chainmail in that it cannot easily rip because each of the mechanical bonds has a bit of freedom to slide around. If you pull it, it can dissipate the applied force in multiple directions. And if you want to rip it apart, you would have to break it in many, many different places. We are continuing to explore its properties and will probably be studying it for years."


Original Submission

posted by janrinok on Saturday January 18, @07:42PM
from the new-battery-tech-that-isn't-vapor? dept.

Samsung could make the Galaxy S26 extra thin with new battery tech:

The batteries that power today's electronics are a lot more incredible than we often give them credit for. We've come a long way from the days of nickel-cadmium and even nickel-metal hydride cells, with lithium-based chemistry offering superior capacity and discharge characteristics (if only they didn't have that annoying tendency to burst into flame). But for as far as we've come, it always feels like the next big thing could be right around the corner, as advocates hype next-gen battery tech. We've only just started to see silicon-carbon batteries emerge, capable of storing even more energy in smaller spaces, and we've been hugely curious to see who might take advantage of them next.

[...] asserting that Samsung is planning to use a silicon-carbon battery in the Galaxy S26.

That could be a major step forward, and it may not even be the only new battery technology Samsung is considering; this rumor arrives just a day after a report in South Korea's TheElec that discussed the company's interest in novel battery construction, specifically increasing capacity through stacked electrodes. Stacked electrodes are already used in larger applications like car batteries, and Samsung may be considering a similar approach for smartphones.



Original Submission

posted by janrinok on Saturday January 18, @02:59PM
from the oldie-but-goodie dept.

Digital archivist David Rosenthal reviews an old, obscure, but prescient document from computer scientist Clifford Lynch on various aspects of the then-nascent WWW. Lynch's 1993 document, Accessibility and Integrity of Networked Information Collections: Background Paper, covered topics ranging from the First Sale Doctrine and what is now called surveillance capitalism to paywalls and disinformation:

While doing the research for a future talk, I came across an obscure but impressively prophetic report entitled Accessibility and Integrity of Networked Information Collections that Cliff Lynch wrote for the federal Office of Technology Assessment in 1993, 32 years ago. I say "obscure" because it doesn't appear in Lynch's pre-1997 bibliography.

To give you some idea of the context in which it was written, unless you are over 70, it was more than half your life ago when, in November 1989, Tim Berners-Lee's browser first accessed a page from his Web server. It was only around the same time that the first commercial, as opposed to research, Internet Service Providers started, with the ARPANET being decommissioned the next year. Two years later, in December of 1991, the Stanford Linear Accelerator Center put up the first US Web page. In 1992 Tim Berners-Lee codified and extended the HTTP protocol he had earlier implemented. It would be another two years before Netscape became the first browser to support HTTPS. It would be two years after that before the IETF approved HTTP/1.0 in RFC 1945. As you can see, Lynch was writing among the birth-pangs of the Web.

Although Lynch was insufficiently pessimistic, he got a lot of things exactly right. Below the fold I provide four out of many examples.

Rosenthal's summary includes a link to a digital copy at the Education Resources Information Center.


Original Submission

posted by hubie on Saturday January 18, @10:54AM
from the lawful-and-totally-not-discriminatory dept.

New Ohio Law Allows Cops To Charge $75/Hr. To Process Body Cam Footage:

Ohio residents pay for the cops. They pay for the cameras. Now, they're expected to pay for the footage generated by cops and their cameras. Governor Mike DeWine, serving no one but cops and their desire for opacity, recently signed a bill into law that will make it much more expensive for residents to exercise their public records rights.

And it was done in possibly the shadiest way possible — at the last minute and with zero transparency.

[...] Reporter Morgan Trau had questions following the passage of this measure. Gov. DeWine had answers. But they're completely unsatisfactory.

"These requests certainly should be honored, and we want them to be honored. We want them to be honored in a swift way that's very, very important," DeWine responded. "We also, though — if you have, for example, a small police department — very small police department — and they get a request like that, that could take one person a significant period of time."

Sure, that's part of the equation. Someone has to take time to review information requested via a public records request. But that's part of the government's job. It's not an excuse to charge a premium just to fulfill the government's obligations to the public.

DeWine had more of the same in his official statement on this line item — a statement he was presumably compelled to issue due to many people having these exact same questions about charging people a third time for something they'd already paid for twice.

No law enforcement agency should ever have to choose between diverting resources for officers on the street to move them to administrative tasks like lengthy video redaction reviews for which agencies receive no compensation–and this is especially so for when the requestor of the video is a private company seeking to make money off of these videos. The language in House Bill 315 is a workable compromise to balance the modern realities of preparing these public records and the cost it takes to prepare them.

Well, the biggest problem with this assertion is that no law enforcement agency ever has to choose between reviewing footage for release and keeping an eye on the streets. I realize some smaller agencies may not have a person dedicated to public records responses, but for the most part, I would prefer someone other than Officer Johnny Trafficstop handle public records releases. First, they're not specifically trained to handle this job. Second, doing this makes it a fox-in-the-hen-house situation, where officers might be handling information involving themselves, which is a clear conflict of interest.

[...] This argument isn't much better:

Marion Police Chief Jay McDonald, also the president of the Ohio FOP, showed me that he receives requests from people asking for drunk and disorderly conduct videos. Oftentimes, these people monetize the records on YouTube, he added.

Moving past the conflict of interest that is a police chief also being the head of a police union, the specific problem with this argument is that it suggests it's OK to financially punish everyone just because a small minority of requesters are abusing the system for personal financial gain. Again, while it sounds like a plausible argument for charging processing fees, the real benefit isn't in deterring YouTube opportunists, but in placing a tax on transparency that most legitimate requesters simply won't be able to pay. And that's the obvious goal here. If it wasn't, this proposal would have gone up for discussion, rather than being tacked onto the end of a 315-page omnibus bill at the last minute. This is nothing but what it looks like: people in the legislature doing a favor for cops... and screwing over their own constituents.


Original Submission

posted by hubie on Saturday January 18, @06:09AM

By 2005, computer chips were running a billion times faster than Konrad Zuse's pioneering 1941 Z3, in the region of 5 GHz. But then progress stalled. Today, state-of-the-art chips still operate at around 5 GHz, a bottleneck that has significantly restricted progress in fields requiring ultrafast data processing.

Now that looks set to change thanks to the work of Gordon Li and Midya Parto at the California Institute of Technology in Pasadena, and colleagues, who have designed and tested an all-optical computer capable of clock speeds exceeding 100 GHz. "The all-optical computer realizes linear operations, nonlinear functions, and memory entirely in the optical domain with > 100 GHz clock rates," they say. Their work paves the way for a new era of ultrafast computing with applications in fields ranging from signal processing to pattern recognition and beyond.
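For scale, a 100 GHz clock corresponds to a cycle time of 1/(100 × 10⁹ Hz) = 10 picoseconds, twenty times shorter than the roughly 200-picosecond cycle of a 5 GHz chip.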

Discover Magazine

Ref: All-Optical Computing With Beyond 100-GHz Clock Rates : https://arxiv.org/abs/2501.05756


Original Submission

posted by hubie on Saturday January 18, @01:26AM
from the ascii-art dept.

https://arstechnica.com/gaming/2025/01/this-pdf-contains-a-playable-copy-of-doom/

Here at Ars, we're suckers for stories about hackers getting Doom running on everything from CAPTCHA robot checks and Windows' notepad.exe to AI hallucinations and fluorescing gut bacteria. Despite all that experience, we were still thrown for a loop by a recent demonstration of Doom running in the usually static confines of a PDF file.

On the GitHub page for the quixotic project, coder ading2210 discusses how Adobe Acrobat included some robust support for JavaScript in the PDF file format.

[...] the Doom PDF can take inputs via the user typing in a designated text field and generate "video" output in the form of converted ASCII text fed into 200 individual text fields, each representing a horizontal line of the Doom display. The text in those fields is enough to simulate a six-color monochrome display at a "pretty poor but playable" 13 frames per second (about 80 ms per frame).
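To get a feel for how a grid of text fields becomes a "display", here is a minimal Python sketch that maps rows of a grayscale framebuffer onto a six-character ASCII ramp. The ramp, frame size, and field naming are illustrative assumptions; the actual port does this in the PDF's embedded JavaScript.

    # Toy stand-in for the framebuffer-to-text conversion described above.
    RAMP = " .:-=#"  # six "colors", darkest to brightest

    def row_to_ascii(pixels):
        """Map 8-bit grayscale values (0-255) onto the six-level ramp."""
        return "".join(RAMP[min(p * len(RAMP) // 256, len(RAMP) - 1)] for p in pixels)

    # Tiny 8-line frame standing in for the port's 200 text-field lines:
    frame = [[(x * (y + 1) * 9) % 256 for x in range(32)] for y in range(8)]
    for y, row in enumerate(frame):
        # In the PDF, each line would be assigned to a per-row text field,
        # e.g. something like this.getField("line" + y).value = text
        print(row_to_ascii(row))

At 13 frames per second, the port repeats this conversion for all 200 lines roughly every 80 ms, which is the "pretty poor but playable" budget the article mentions.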

[...] have to dock at least a few coolness points because the port doesn't actually work on generic desktop versions of Adobe Acrobat—you need to load it through a Chromium-based web browser. But the project gains those coolness points back with a web front-end that lets users load generic WAD files into a playable PDF.

Related stories on SoylentNews:
Hitting the Books: The Programming Trick That Gave Us DOOM Multiplayer - 20230912
Can Doom Run It? An Adding Machine in Doom - 20221224
Def Con Hacker Shows John Deere's Tractors Can Run Doom - 20220817
Even DOOM Can Now Run DOOM! - 20220715
You Can Play 'Doom' Inside 'Minecraft' Using a Virtual PC - 20200726
Explore This 3D World Rendered In ASCII Art - 20200102
'Doomba' Turns Your Roomba's Cleaning Maps Into Doom Levels - 20181227
Modder Gets Half-Life Running on an Android Smartwatch - 20150724
Run Doom on your Printer - 20140917


Original Submission