


Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, JavaScript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...


posted by hubie on Thursday August 24 2023, @09:51PM

"US copyright law protects only works of human creation," judge writes:

Art generated entirely by artificial intelligence cannot be copyrighted because "human authorship is an essential part of a valid copyright claim," a federal judge ruled on Friday.

The US Copyright Office previously rejected plaintiff Stephen Thaler's application for a copyright because the work lacked human authorship, and he challenged the decision in US District Court for the District of Columbia. Thaler and the Copyright Office both moved for summary judgment in motions that "present the sole issue of whether a work generated entirely by an artificial system absent human involvement should be eligible for copyright," Judge Beryl Howell's memorandum opinion issued Friday noted.

Howell denied Thaler's motion for summary judgment, granted the Copyright Office's motion, and ordered that the case be closed.

Thaler sought a copyright for an image titled "A Recent Entrance to Paradise," which was produced by a computer program that he developed, the ruling said. In his application for a copyright, he identified the author as the Creativity Machine, the name of his software.

Thaler's application "explained the work had been 'autonomously created by a computer algorithm running on a machine,' but that plaintiff sought to claim the copyright of the 'computer-generated work' himself 'as a work-for-hire to the owner of the Creativity Machine,'" Howell wrote. "The Copyright Office denied the application on the basis that the work 'lack[ed] the human authorship necessary to support a copyright claim,' noting that copyright law only extends to works created by human beings."

[...] In the Friday ruling on copyright of an AI-generated image, Judge Howell wrote that Thaler attempted "to complicate the issues presented by devoting a substantial portion of his briefing to the viability of various legal theories under which a copyright in the computer's work would transfer to him, as the computer's owner; for example, by operation of common law property principles or the work-for-hire doctrine." But these arguments "put the cart before the horse" because they only address "to whom a valid copyright should have been registered," not whether a copyright can be granted for a work generated without human involvement, Howell wrote.

"United States copyright law protects only works of human creation," Howell wrote.

[...] Thaler pointed out that the Copyright Act does not define the word "author." But Howell wrote that the law's "'authorship' requirement as presumptively being human rests on centuries of settled understanding."

[...] The US Constitution conceived of copyrights and patents "as forms of property that the government was established to protect, and it was understood that recognizing exclusive rights in that property would further the public good by incentivizing individuals to create and invent... Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them."

Copyright has never stretched far enough "to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright," Howell wrote.

[...] Future cases are likely to present more challenging legal questions "regarding how much human input is necessary to qualify the user of an AI system as an 'author' of a generated work" and "how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works," Howell wrote. But Thaler's case "is not nearly so complex."


Original Submission

posted by requerdanos on Thursday August 24 2023, @07:00PM
from the governance-in-action dept.

Meeting Announcement: The next meeting of the SoylentNews governance committee will be Friday, August 25th, 2023 at 20:30 UTC (1:30pm PDT, 4:30pm EDT) in #governance on SoylentNews IRC. Logs of the meeting will be available afterwards for review, and minutes will be published when available.

The agenda for the upcoming meeting will be published when available. In the meeting we plan to discuss mechanicjay's report on different entity types and the first draft of the bylaws, which was posted to janrinok's journal previously.

Minutes, agendas, and other governance committee information can be found on the SoylentNews Wiki at: https://wiki.staging.soylentnews.org/wiki/Governance

Our community is encouraged to observe and participate, and is therefore invited to the meeting. SoylentNews is, after all, People!

posted by hubie on Thursday August 24 2023, @05:11PM

Arthur T Knackerbracket has processed the following story:

One of the most prominent pirated book repositories used for training AI, Books3, has been kicked out from the online nest it had been roosting in for nearly three years. Rights-holders have been at war with online pirates for decades, but artificial intelligence is like oil seeping into copyright law’s water. The two simply do not mix, and the fumes rising from the surface just need a spark to set the entire concept of intellectual property rights alight.

As first reported by TorrentFreak, the large pirate repository The Eye took down the Books3 dataset after the Danish anti-piracy group Rights Alliance sent the site a DMCA takedown notice. Attempting to access the dataset now returns a 404 error. The Eye still hosts other training data for AI, but the portion allotted for books has vanished.

[...] The nonprofit research group EleutherAI originally released Books3 as part of the AI training set The Pile, an 800 GB open source chunk of training data comprising 22 other datasets specifically designed for training language models. Rights Alliance said the organization "denied responsibility" for Books3. Gizmodo reached out to EleutherAI for comment, but we did not receive a response.

The Eye claims it regularly complies with all valid DMCA requests. The dataset was originally uploaded by AI developer and prominent open source AI proponent Shawn Presser back in 2020. His stated goal at the time was to open up AI development beyond companies like OpenAI, which trained its earlier large language models on the still-unknown "Books1" and "Books2" repositories. The Books3 repository contained 196,640 books, all in plain .txt format, and was supposed to give fledgling AI projects a leg up against the likes of ChatGPT-maker OpenAI.

Over Twitter DM, Presser called the attack on Books3 a travesty for open source AI. While other major companies and VC-funded startups get away with including copyrighted data in their training data, grassroots projects need something to compete—and that’s what Books3 was for.

[...] “My goal was to make it so that anybody could [create these models.] It felt crucial that you and I could create our own ChatGPT if we wanted to,” Presser said. “Unless authors intend to somehow take ChatGPT offline, or sue them out of existence, then it’s crucial that you and I can make our own ChatGPTs, for the same reason it was crucial that anybody could make their own website back in the ‘90s.”

[...] As noted in past forum comments, Presser actively worked with EleutherAI to add the Books3 dataset to The Pile. EleutherAI has used The Pile and other data to craft its own AI models, including one called GPT-J that was originally meant to compete with OpenAI’s GPT-3.

Meta went as far as to claim that the original LLaMA-65B model didn't perform as well as some other, larger models like PaLM-540B because it "used a limited amount of books and academic papers" in its pre-training data. The original LLaMA was also trained on C4, a filtered version of Common Crawl, itself a large dataset scraped from the internet. Researchers found that the C4 training set included massive amounts of published work, including propaganda and far-right websites. Those researchers told the Washington Post that the copyright symbol appeared more than 200 million times in the C4 training set.

Since then, Meta has clammed up hard about what goes into its language models. Last month, Meta released a newer, bigger language model called Llama 2. This time, Meta worked with Microsoft to add 40% more data than its previous model, though in its whitepaper the company was much more hesitant to state outright what data its latest model was trained on. The only reference to its training data was that it's "a new mix of publicly available online data." As the friction between AI and copyright grows hotter, companies are less and less likely to share exactly what's contained in the morass of AI training data.


Original Submission

posted by janrinok on Thursday August 24 2023, @12:22PM
from the attack-on-public-domain dept.

https://arstechnica.com/tech-policy/2023/08/record-labels-sue-internet-archive-for-digitizing-obsolete-vintage-records/

Major record labels are suing the Internet Archive, accusing the nonprofit of "massive" and "blatant" copyright infringement "of works by some of the greatest artists of the Twentieth Century."

The lawsuit was filed Friday in a US district court in New York by UMG Recordings, Capitol Records, Concord Bicycle Assets, CMGI, Sony Music Entertainment, and Arista Music. It targets the Internet Archive's "Great 78 Project," which was launched in 2006.

For the Great 78 Project, the Internet Archive partners with recording engineer George Blood—who is also a defendant in the lawsuit—to digitize sound recordings on 78 revolutions-per-minute (RPM) records. These early recordings, made between 1898 and the late 1950s on very brittle materials, are typically of poor quality. The goal of the Great 78 Project is to preserve these early recordings so they are not lost as records break and can continue to be studied as originally recorded.

In a blog post responding to the record labels' lawsuit, Internet Archive founder Brewster Kahle said that the Internet Archive is currently reviewing the challenge and taking it seriously. However, Kahle characterizes the Great 78 Project as providing "free public access to a largely forgotten but culturally important medium," claiming that "there shouldn't be conflict here."
[...]
Days after the record labels filed their complaint, a court document was unsealed showing that the Internet Archive had reached a joint agreement with book publishers following the publishers' legal victory earlier this year. If the judge signs off, the agreement would permanently bar the Internet Archive from lending any unauthorized scans of books when authorized e-book versions exist, Publishers Weekly reported.

The Internet Archive would also be barred from profiting from any infringing works.

It's unclear how high the damages are in this case, but the agreement includes an all-inclusive "confidential monetary settlement" that "substantially" covers publishers' legal fees and costs, as well as damages and other claims. Previously it was estimated that the case could cost Internet Archive more than $19 million, which The Register reported amounts to approximately half of the nonprofit's 2019 budget.


Original Submission

posted by janrinok on Thursday August 24 2023, @10:21AM

Arthur T Knackerbracket has processed the following story:

Warnock and Chuck Geschke co-created the Postscript page-description language, and in Warnock's garage in 1982, they started Adobe Systems to turn it into a product. Dr Geschke died in 2021, and in our obituary for him we discussed Postscript's significance.

Steve Jobs attempted to buy Adobe – named after the creek at the back of Warnock's garage – for five million dollars the year it was founded. Warnock and Geschke refused, but sold him 19 percent of the company and licensed their software to Apple, which used it in the Apple LaserWriter – arguably the product which saved the Macintosh. Postscript was very much not Warnock's only gift to posterity, though.

John Warnock was born in Salt Lake City, Utah, in 1940, and obtained all three of his degrees (bachelor's, master's, and doctorate) from the University of Utah. He also met his future wife, graphic designer Marva (née Mullins), while studying at the university.

[...] His 1969 doctoral thesis, "A Hidden Surface Algorithm for Computer Generated Halftone Pictures," described what is now known as the Warnock Algorithm, and in his words [PDF], he has "the dubious distinction of having written the shortest doctoral thesis in University of Utah history." It is a mere 32 pages long [PDF], and notably contains no computer code whatsoever.
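
The thesis may be code-free, but the idea behind the Warnock Algorithm is compact: if a screen region is simple enough to paint directly, paint it; otherwise split it into quadrants and recurse. Below is a toy Python sketch of that area-subdivision approach (my own illustration, not taken from the thesis), simplified to axis-aligned rectangles at constant depth:

from dataclasses import dataclass

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float
    depth: float   # smaller = closer to the viewer
    color: str

def overlaps(r, x0, y0, x1, y1):
    return not (r.x1 <= x0 or x1 <= r.x0 or r.y1 <= y0 or y1 <= r.y0)

def covers(r, x0, y0, x1, y1):
    return r.x0 <= x0 and r.y0 <= y0 and r.x1 >= x1 and r.y1 >= y1

def warnock(polys, x0, y0, x1, y1, min_size=1.0):
    """Paint the region [x0,x1) x [y0,y1), subdividing while it is 'complex'."""
    live = [p for p in polys if overlaps(p, x0, y0, x1, y1)]
    if not live:
        return                                   # empty region: leave background
    front = min(live, key=lambda p: p.depth)
    if len(live) == 1:                           # one polygon: paint its visible clip
        r = live[0]
        print(f"fill [{max(x0,r.x0)},{min(x1,r.x1)}) x [{max(y0,r.y0)},{min(y1,r.y1)}) with {r.color}")
        return
    if covers(front, x0, y0, x1, y1):            # nearest polygon hides everything else
        print(f"fill [{x0},{x1}) x [{y0},{y1}) with {front.color}")
        return
    if x1 - x0 <= min_size and y1 - y0 <= min_size:
        print(f"fill [{x0},{x1}) x [{y0},{y1}) with {front.color}")
        return                                   # pixel-sized: just take the nearest
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2        # still complex: split into quadrants
    for qx0, qy0, qx1, qy1 in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)):
        warnock(live, qx0, qy0, qx1, qy1, min_size)

warnock([Rect(0, 0, 6, 6, depth=2, color="blue"),
         Rect(4, 4, 8, 8, depth=1, color="red")], 0, 0, 8, 8, min_size=2)

In the example, the nearer red rectangle wins the quadrant where the two overlap, while the visible parts of the blue one are painted piecewise.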

As he said in a 1986 interview:

I've always liked mathematics; problem solving has always been fun. My saving grace in life is that I was not introduced to computers at an early age. […]

I went through the university, all the way to the master's level, so I got a good, solid liberal education. I believe it's really important to have a very solid foundation in mathematics, English, and the basic sciences. Then, when you become a graduate student, it's okay to learn as much as you can about computers.

If you really want to be successful, being acculturated to the rest of the society and then going into computers is a much more reasonable way to approach the problem.

Like other former students of the university, the young John Warnock was hired by Evans & Sutherland. He later left for a job at Xerox PARC in 1978, where he worked for Chuck Geschke. Together, they worked on a page-description language called Interpress, as described in this 1983 Usenet post. Much as was the case with the Xerox Alto, Warnock and Geschke were unable to persuade Xerox management of the commercial potential of their work, so they left to start their own company.

[...] He retired as Adobe's CEO in 2001, and co-chaired its board with Geschke until 2017. He received many awards for his work, both on his own and with Geschke, including honors from the Association for Computing Machinery, the Edwin H Land Medal, the Bodley Medal, the Lovelace Medal, the National Medal of Technology and Innovation, and the Marconi Prize. Warnock was a keen skier, and after they retired, he and Marva ran the Blue Boar Inn in Midway, near the ski resorts of Park City and Deer Valley.

Warnock died on August 19, aged 82, surrounded by his family. He leaves behind Marva and three children.


Original Submission

posted by janrinok on Thursday August 24 2023, @07:34AM
from the Powered-by-carbon dept.

Collecting charge from a ribbon of single-layer graphene with a pair of nanometer-scale diodes is a novel approach to converting heat into electricity.

SciTechDaily article on nonlinear power capture

From a new study published in the journal Physical Review E, titled "Charging Capacitors From Thermal Fluctuations Using Diodes," by P. M. Thibado, J. C. Neu, Pradeep Kumar, Surendra Singh and L. L. Bonilla, 16 August 2023:

We theoretically consider a graphene ripple as a Brownian particle coupled to an energy storage circuit. When circuit and particle are at the same temperature, the second law forbids harvesting energy from the thermal motion of the Brownian particle, even if the circuit contains a rectifying diode. However, when the circuit contains a junction followed by two diodes wired in opposition, the approach to equilibrium may become ultraslow. Detailed balance is temporarily broken as current flows between the two diodes and charges storage capacitors. The energy harvested by each capacitor comes from the thermal bath of the diodes while the system obeys the first and second laws of thermodynamics.

[...] The scientists discovered that when the storage capacitors have an initial charge of zero, the circuit draws power from the thermal environment to charge them. The team then demonstrated that the system satisfies both the first and second laws of thermodynamics throughout the charging process. They also found that larger storage capacitors yield more stored charge and that a smaller graphene capacitance provides both a higher initial rate of charging and a longer time to discharge. These characteristics are important because they allow time to disconnect the storage capacitors from the energy harvesting circuit before the net charge is lost.
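
To get a qualitative feel for that charging behavior, here is a crude toy model in Python. It is my own construction with made-up parameter values, not the authors' code, and it idealizes the diodes as noiseless Shockley rectifiers, whereas the paper properly accounts the harvested energy to the diodes' own thermal bath. The graphene ripple is modeled as Johnson-noise-driven voltage on a tiny capacitance, and two opposing diodes rectify the fluctuations onto initially uncharged storage capacitors:

import numpy as np

rng = np.random.default_rng(0)

# --- toy parameters (illustrative only, not from the paper) ---
kB = 1.380649e-23          # Boltzmann constant, J/K
q_e = 1.602176634e-19      # elementary charge, C
T = 300.0                  # temperature, K
R = 1e6                    # source resistance, ohm
Cg = 1e-18                 # tiny "ripple" capacitance, F (gives ~64 mV rms noise)
Cs = 1e-15                 # storage capacitance, F (one per diode)
Is = 1e-12                 # diode saturation current, A
VT = kB * T / q_e          # thermal voltage, ~25.9 mV

def diode(v):
    """Shockley diode current, exponent clipped for numerical stability."""
    return Is * (np.exp(np.clip(v / VT, -40.0, 40.0)) - 1.0)

dt = 5e-14                 # time step, well below RC = 1e-12 s
v = 0.0                    # fluctuating node voltage
q1 = q2 = 0.0              # charge accumulated on the two storage capacitors

for _ in range(400_000):   # ~20 ns of simulated time
    # Ornstein-Uhlenbeck update: Johnson noise of R filtered by Cg,
    # stationary variance kB*T/Cg (equipartition).
    v += (-v / (R * Cg)) * dt \
         + np.sqrt(2.0 * kB * T / (R * Cg**2) * dt) * rng.standard_normal()
    # Two diodes wired in opposition: one conducts on positive swings,
    # the other on negative swings, each feeding its own capacitor.
    i1 = diode(v - q1 / Cs)
    i2 = diode(-v - q2 / Cs)
    q1 += i1 * dt
    q2 += i2 * dt
    v -= (i1 - i2) * dt / Cg   # charge drawn from the fluctuating node

print(f"stored charge: q1 = {q1:.3e} C, q2 = {q2:.3e} C (both started at zero)")

Both capacitors end up with a small positive charge because the exponential diode characteristic rectifies the symmetric voltage fluctuations, which is the nonlinearity the paper exploits.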

Journal Reference:
P. M. Thibado, J. C. Neu, Pradeep Kumar, Surendra Singh, and L. L. Bonilla, Charging capacitors from thermal fluctuations using diodes, Phys. Rev. E 108, 024130 – Published 16 August 2023. DOI: https://doi.org/10.1103/PhysRevE.108.024130


Original Submission

posted by hubie on Thursday August 24 2023, @02:51AM
from the welcome-to-the-soft-landing-club dept.

https://www.bbc.com/news/world-asia-india-66541956.amp:

India's space agency has released the latest images of the Moon as its third lunar mission starts descending towards the little-explored south pole.

The pictures have been taken by Vikram, Chandrayaan-3's lander, which began the last phase of its mission on Thursday.

Vikram, which carries a rover in its belly, is due to land near the south pole on 23 August.

The lander detached from the propulsion module, which carried it close to the Moon, on Thursday.

Now updated with a story about the successful landing: Chandrayaan-3: India makes historic landing near Moon's south pole:

The Vikram lander from Chandrayaan-3 successfully touched down as planned at 18:04 local time (12:34 GMT).

[...] On Wednesday, tense moments preceded the touchdown as the lander - called Vikram after Isro founder Vikram Sarabhai - began its precarious descent, carrying within its belly the 26kg rover named Pragyaan (the Sanskrit word for wisdom).

The lander's speed was gradually reduced from 1.68 km per second to almost zero, enabling it to make a soft landing on the lunar surface.

[...] One of the mission's major goals is to hunt for water-based ice which, scientists say, could support human habitation on the Moon in future. It could also be used for supplying propellant for spacecraft headed to Mars and other distant destinations. Scientists say the surface area that remains in permanent shadow there is huge and could hold reserves of water ice.


Original Submission

posted by hubie on Wednesday August 23 2023, @10:01PM

Modders just changed GPU overclocking forever:

Modders have released two new tools that could change GPU overclocking significantly. OMGVflash and NVflashk have effectively cracked a security feature on recent Nvidia GPUs, allowing extreme overclockers to flash new vBIOS files to graphics cards.

About a decade ago, Nvidia locked down its GPUs. Graphics cards are governed by a vBIOS, which specifies such things as the GPU's power limit, its maximum clock speed, and the thermal limits at which it will shut down. Prior to Nvidia's GeForce 900-series GPUs, extreme overclockers could flash a new vBIOS onto the GPU to achieve higher levels of performance. But Nvidia locked this functionality with an on-chip security processor.
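
For the curious, a vBIOS image is at bottom a PCI expansion ROM, so a few basics can be sanity-checked before attempting anything risky. A minimal Python sketch (my own illustration, unrelated to the modders' tools, and ignoring any vendor-specific structures a real GeForce dump may carry):

import sys

def check_vbios(path: str) -> None:
    """Sanity-check a vBIOS dump as a legacy PCI expansion ROM image."""
    rom = open(path, "rb").read()
    # A PCI expansion ROM must begin with the bytes 0x55 0xAA.
    if rom[:2] != b"\x55\xaa":
        sys.exit("missing 0x55AA PCI expansion ROM signature")
    # The byte at offset 2 gives the image size in 512-byte blocks.
    size = rom[2] * 512
    if size == 0 or size > len(rom):
        sys.exit(f"implausible image size: {size} bytes")
    # All bytes of the image must sum to zero modulo 256.
    checksum = sum(rom[:size]) & 0xFF
    status = "ok" if checksum == 0 else "BAD"
    print(f"image size: {size} bytes, checksum byte-sum: {checksum:#04x} ({status})")

if __name__ == "__main__":
    check_vbios(sys.argv[1])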

That ability is returning with the new tools. Both were developed independently by members of the TechPowerUp forums, and the outlet says it has "hand-inspected the binary code" to ensure they're free of viruses.

Outside of tinkering, the tools can boost performance on cheaper graphics cards. Many brands sell a model close to list price and an overclocked model for slightly more; in most cases, the only difference between the cards is the vBIOS file. The tools also support cross-flashing, letting you flash a vBIOS from one vendor onto another vendor's card.


Original Submission

posted by mrpg on Wednesday August 23 2023, @05:23PM
from the puzzle-of-complexity-and-unpredictability dept.

Positive Reviews Signal Film Will Be A Flop; Negative Reviews — a Hit:

When one thinks of movie reviews, one might see them as harbingers of success or failure at the box office. Some researchers have previously found that both positive and negative reviews correlate to box office revenues, and the effect of negative reviews diminishes over time.

However, researchers at the University of California, Davis, suggest that is not the case.

Researchers analyzed pre-release commentary and opening weekend box office revenue, turning the conventional understanding of movie reviews on its head and revealing an unexpected "harbinger of failure" phenomenon in the movie industry.

[...] The study analyzed a plethora of pre-release movie reviews penned by film critics on Rotten Tomatoes.

Researchers wanted to see if they could predict a movie's success based on these reviews. As it turned out, the so-called harbingers of failure did exist.

"Interestingly, when these critics penned positive pre-release reviews, they signaled that the movie would be a flop," said Loupos. "Conversely, their negative reviews hinted towards the film being a success. The stronger the sentiment in either direction, the stronger the predictive signal."

[...] What's more surprising, this pattern persisted even among top critics. Expertise, it seems, does not always lead to accurate predictions, Loupos said. "This surprising outcome challenges the prevailing belief that positive reviews equate to better box office revenues," he said.

[...] "Our fresh perspective on the role of critics' personalities opens up new avenues in our understanding of the film review space," Loupos remarked. "It's an important acknowledgment that the movie industry is a puzzle of complexity and unpredictability."

Got any good movie recommendations?

Journal Reference:
Loupos, P., Peng, Y., Li, S. et al. What reviews foretell about opening weekend box office revenue: the harbinger of failure effect in the movie industry [open]. Mark Lett 34, 513–534 (2023). https://doi.org/10.1007/s11002-023-09665-8


Original Submission

posted by mrpg on Wednesday August 23 2023, @12:51PM
from the Mystery-cleaning-service-3000 dept.

Autonomous Products Like Robot Vacuums Make Our Lives Easier. But Do They Deprive Us of Meaningful Experiences?

[...] Whether it is cleaning homes or mowing lawns, consumers increasingly delegate manual tasks to autonomous products. These gadgets operate without human oversight and free consumers from mundane chores. However, anecdotal evidence suggests that people feel a sense of satisfaction when they complete household chores. Are autonomous products such as robot vacuums and cooking machines depriving consumers of meaningful experiences?

This new research shows that, despite unquestionable benefits such as gains in efficiency and convenience, autonomous products strip away a source of meaning in life. As a result, consumers are hesitant to buy these products.

The researchers argue that manual labor is an important source of meaning in life. This is in line with research showing that everyday tasks have value—chores such as cleaning may not make us happy, but they add meaning to our lives. As de Bellis explains, "Our studies show that 'meaning of manual labor' causes consumers to reject autonomous products. For example, these consumers have a more negative attitude toward autonomous products and are also more prone to believe in the disadvantages of autonomous products relative to their advantages."

[...] This study demonstrates that the perceived meaning of manual labor (MML) – a novel concept introduced by the researchers – is key to predicting the adoption of autonomous products. Poletti says that "Consumers with a high MML tend to resist the delegation of manual tasks to autonomous products, irrespective of whether these tasks are central to one's identity or not. Marketers can start by segmenting consumers into high and low MML consumers."

Unlike other personality variables that can only be reliably measured using complex psychometric scales, the extent of consumers' MML might be assessed simply by observing their behavioral characteristics, such as whether consumers tend to do the dishes by hand, whether they prefer a manual car transmission, or what type of activities and hobbies they pursue. Activities like woodworking, cookery, painting, and fishing are likely predictors of high MML. Similarly, companies can measure likes on social media for specific activities and hobbies that involve manual labor.

Finally, practitioners can ask consumers to rate the degree to which manual versus cognitive tasks are meaningful to them. Having segmented consumers according to their MML, marketers can better target and focus their messages and efforts.

Journal Reference:
de Bellis, E., Johar, G. V., & Poletti, N. (2023). Meaning of Manual Labor Impedes Consumer Adoption of Autonomous Products. Journal of Marketing, 2023. https://doi.org/10.1177/00222429231171841


Original Submission

posted by hubie on Wednesday August 23 2023, @08:06AM

Arthur T Knackerbracket has processed the following story:

Green Energy Partners (GEP) has tapped IP3 International to help realize its dream of a massive datacenter campus in Virginia powered entirely by small modular nuclear reactors (SMRs) and hydrogen gas generators.

The joint venture between the two companies will see the formation of a 641-acre industrial park in Surry County, Virginia, called the Surry Green Energy Center (SGEC). The park sits in close proximity to the Surry Power Station's two 800MW reactors, and GEP and IP3 hope to attract datacenter operators to set up shop during the first phase of the project.

"We're going to create a datacenter park first, and that datacenter park will get power from the local utility, and we will build lots, and we will sell those lots to datacenter providers," IP3 CEO Michael Hewitt, whose company specializes in supporting the development of nuclear power plants in the US and Europe, told The Register. "We see that as a very lucrative investment, particularly when you look at the going rate for a datacenter lot in, say, Northern Virginia."

If successful, these datacenters will serve as the customer base for private investment in the development of SMRs on the site during the second phase of the project.

[...] The idea of using SMRs to power datacenters is by no means a new concept. We spoke with analysts at Omdia last year about the potential for these miniaturized nuclear reactors to alleviate pressure on local utilities, particularly in power-challenged regions, like Virginia.

[...] As for when this might happen, Hewitt conservatively hopes to see the site running on SMRs within a decade. 

If and when SMRs have been deployed on site in adequate numbers, GEP and IP3 plan to use thermal energy from them to facilitate the electrolysis of water into clean hydrogen. This hydrogen could be used to fuel backup generators on site or exported to support the state's power grid.

Hydrogen as a fuel for backup power is yet another technology being explored as an alternative to diesel generators. As we reported last fall, Equinix is already testing the tech in collaboration with the National University of Singapore.

"We believe that this location is ideal for the logistics of hydrogen shipping and distribution," Hewitt said.

If all this sounds too good to be true, that's because there's a lot that needs to happen before GEP's vision can become a reality. This is part of the reason the two companies are hedging their bets with datacenters from the get-go, Hewitt explained. "Let's say that the nuclear power part of it kind of falls apart in terms of an opportunity; we still have a datacenter site that makes money and has a great client."

Of course if SMRs don't pan out, clean hydrogen generation at the site is unlikely to either.


Original Submission

posted by requerdanos on Wednesday August 23 2023, @03:21AM
from the oops dept.

OpenAI could be fined up to $150,000 for each piece of infringing content:

Weeks after The New York Times updated its terms of service (TOS) to prohibit AI companies from scraping its articles and images to train AI models, it appears that the Times may be preparing to sue OpenAI. The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT's dataset and fines up to $150,000 per infringing piece of content.

NPR spoke to two people "with direct knowledge" who confirmed that the Times' lawyers were mulling whether a lawsuit might be necessary "to protect the intellectual property rights" of the Times' reporting.

Neither OpenAI nor the Times immediately responded to Ars' request to comment.

If the Times were to follow through and sue ChatGPT-maker OpenAI, NPR suggested that the lawsuit could become "the most high-profile" legal battle yet over copyright protection since ChatGPT's explosively popular launch. This speculation comes a month after Sarah Silverman joined other popular authors suing OpenAI over similar concerns, seeking to protect the copyright of their books.

[...] In April, the News Media Alliance published AI principles, seeking to defend publishers' intellectual property by insisting that generative AI "developers and deployers must negotiate with publishers for the right to use" publishers' content for AI training, for AI tools that surface information, and for AI tools that synthesize information.

Previously:
Sarah Silverman Sues OpenAI, Meta for Being "Industrial-Strength Plagiarists" - 20230711

Related:
The Internet Archive Reaches An Agreement With Publishers In Digital Book-Lending Case - 20230815


Original Submission

posted by requerdanos on Tuesday August 22 2023, @10:36PM
from the post-quantum-cryptography dept.

Google announces new algorithm that makes FIDO encryption safe from quantum computers:

The FIDO2 industry standard adopted five years ago provides the most secure known way to log in to websites because it doesn't rely on passwords and has the most secure form of built-in two-factor authentication. Like many existing security schemes today, though, FIDO faces an ominous if distant threat from quantum computing, which one day will cause the currently rock-solid cryptography the standard uses to completely crumble.

Over the past decade, mathematicians and engineers have scrambled to head off this cryptopocalypse with the advent of PQC—short for post-quantum cryptography—a class of encryption that uses algorithms resistant to quantum-computing attacks. This week, researchers from Google announced the release of the first implementation of quantum-resistant encryption for use in the type of security keys that are the basic building blocks of FIDO2.

The best known implementation of FIDO2 is the passwordless form of authentication: passkeys. So far, there are no known ways passkeys can be defeated in credential phishing attacks. Dozens of sites and services now allow users to log in using passkeys, which use cryptographic keys stored in security keys, smartphones, and other devices.

"While quantum attacks are still in the distant future, deploying cryptography at Internet scale is a massive undertaking which is why doing it as early as possible is vital," Elie Bursztein and Fabian Kaczmarczyck, cybersecurity and AI research director, and software engineer, respectively, at Google wrote. "In particular, for security keys this process is expected to be gradual as users will have to acquire new ones once FIDO has standardized post-quantum cryptography resilient cryptography and this new standard is supported by major browser vendors."

More about security keys from Wikipedia.


Original Submission

posted by janrinok on Tuesday August 22 2023, @05:51PM

https://www.righto.com/2023/08/datapoint-to-8086.html

The Intel 8086 processor started the x86 architecture that is still extensively used today. The 8086 has some quirky characteristics: it is little-endian, has a parity flag, and uses explicit I/O instructions instead of just memory-mapped I/O. It has four 16-bit registers that can be split into 8-bit registers, but only one that can be used for memory indexing. Surprisingly, the reason for these characteristics and more is compatibility with a computer dating back before the creation of the microprocessor: the Datapoint 2200, a minicomputer with a processor built out of TTL chips. In this blog post, I'll look in detail at how the Datapoint 2200 led to the architecture of Intel's modern processors, step by step through the 8008, 8080, and 8086 processors.
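
A few of those quirks are easy to poke at from any language. A quick Python illustration (mine, not from the blog post) of the little-endian byte order, the split registers, and the even-parity rule behind the parity flag:

import struct

# Little-endian layout: the low byte of a 16-bit value sits at the lower
# address, a trait the 8086 inherited (via the 8008 and 8080) from the
# Datapoint 2200's bit-serial design.
assert struct.pack("<H", 0x1234) == b"\x34\x12"

# AX, BX, CX, and DX each split into high and low bytes: AX = (AH << 8) | AL.
ax = 0xBEEF
ah, al = ax >> 8, ax & 0xFF
assert (ah, al) == (0xBE, 0xEF)
assert (ah << 8) | al == ax

def parity_flag(result: int) -> bool:
    """The 8086 PF bit: set when the low byte of a result has an even
    number of 1 bits -- a serial-terminal legacy of the Datapoint 2200."""
    return bin(result & 0xFF).count("1") % 2 == 0

assert parity_flag(0x03)       # two bits set: PF = 1
assert not parity_flag(0x07)   # three bits set: PF = 0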

The Datapoint 2200

In the late 1960s, 80-column IBM punch cards were the primary way of entering data into computers, although CRT terminals were growing in popularity. The Datapoint 2200 was designed as a low-cost terminal that could replace a keypunch, with a squat CRT display the size of a punch card. By putting some processing power into the Datapoint 2200, it could perform data validation and other tasks, making data entry more efficient. Even though the Datapoint 2200 was typically used as an intelligent terminal, it was really a desktop minicomputer with a "unique combination of powerful computer, display, and dual cassette drives." Although now mostly forgotten, the Datapoint 2200 was the origin of the 8-bit microprocessor, as I'll explain below.

The memory storage of the Datapoint 2200 had a large impact on its architecture and thus the architecture of today's computers. In the 1960s and early 1970s, magnetic core memory was the dominant form of computer storage. It consisted of tiny ferrite rings, threaded into grids, with each ring storing one bit. Magnetic core storage was bulky and relatively expensive, though.


Original Submission

posted by janrinok on Tuesday August 22 2023, @01:09PM
from the your-happy-thought-for-the-day dept.

Eventually everything will evaporate, not only black holes:

New theoretical research by Michael Wondrak, Walter van Suijlekom and Heino Falcke of Radboud University has shown that Stephen Hawking was right about black holes, although not completely. Due to Hawking radiation, black holes will eventually evaporate, but the event horizon is not as crucial as has been believed. Gravity and the curvature of spacetime cause this radiation too. This means that all large objects in the universe, like the remnants of stars, will eventually evaporate.

[...] Van Suijlekom: 'We show that far beyond a black hole the curvature of spacetime plays a big role in creating radiation. The particles are already separated there by the tidal forces of the gravitational field.' Whereas it was previously thought that no radiation was possible without the event horizon, this study shows that this horizon is not necessary.

Falcke: 'That means that objects without an event horizon, such as the remnants of dead stars and other large objects in the universe, also have this sort of radiation. And, after a very long period, that would lead to everything in the universe eventually evaporating, just like black holes. This changes not only our understanding of Hawking radiation but also our view of the universe and its future.'
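
For a sense of the timescales involved, the textbook Hawking results for an actual black hole (which this work generalizes beyond event horizons) give a temperature that falls, and a lifetime that grows, steeply with mass:

T_H = \frac{\hbar c^3}{8 \pi G M k_B} \approx 6 \times 10^{-8}\,\mathrm{K}\,\frac{M_\odot}{M},
\qquad
t_{\mathrm{evap}} = \frac{5120\,\pi\, G^2 M^3}{\hbar c^4} \approx 10^{67}\,\mathrm{yr}\,\left(\frac{M}{M_\odot}\right)^{3}

So anything radiating at roughly this sort of rate outlives the current age of the universe, about 1.4 x 10^10 years, by dozens of orders of magnitude.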

Journal Reference:
Michael F. Wondrak, Walter D. van Suijlekom and Heino Falcke, Gravitational Pair Production and Black Hole Evaporation, Phys. Rev. Lett., 2 June 2023. DOI: 10.1103/PhysRevLett.130.221502


Original Submission