

posted by janrinok on Thursday February 22, @08:55PM   Printer-friendly
from the faster-than-a-speeding-radio-wave dept.

MIT engineers developed a tag that can reveal with near-perfect accuracy whether an item is real or fake. The key is in the glue on the back of the tag.

A few years ago, MIT researchers invented a cryptographic ID tag that is several times smaller and significantly cheaper than the traditional radio-frequency identification (RFID) tags that are often affixed to products to verify their authenticity.

This tiny tag, which offers improved security over RFIDs, utilizes terahertz waves, which are smaller and travel much faster than radio waves [sic - terahertz waves do have much shorter wavelengths (about 0.3 mm at 1 THz, versus meters for typical radio), but they do not travel faster: all electromagnetic waves propagate at the speed of light]. But this terahertz tag shared a major security vulnerability with traditional RFIDs: A counterfeiter could peel the tag off a genuine item and reattach it to a fake, and the authentication system would be none the wiser.

The researchers have now surmounted this security vulnerability by leveraging terahertz waves to develop an antitampering ID tag that still offers the benefits of being tiny, cheap, and secure.

They mix microscopic metal particles into the glue that sticks the tag to an object, and then use terahertz waves to detect the unique pattern those particles form on the item's surface. Akin to a fingerprint, this random glue pattern is used to authenticate the item, explains Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on the antitampering tag.

"These metal particles are essentially like mirrors for terahertz waves. If I spread a bunch of mirror pieces onto a surface and then shine light on that, depending on the orientation, size, and location of those mirrors, I would get a different reflected pattern. But if you peel the chip off and reattach it, you destroy that pattern," adds Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group in the Research Laboratory of Electronics.

[...] This research project was partly inspired by Han's favorite car wash. The business stuck an RFID tag onto his windshield to authenticate his car wash membership. For added security, the tag was made from fragile paper so it would be destroyed if a less-than-honest customer tried to peel it off and stick it on a different windshield.

But that is not a terribly reliable way to prevent tampering. For instance, someone could use a solution to dissolve the glue and safely remove the fragile tag.

Rather than authenticating the tag, a better security solution is to authenticate the item itself, Han says. To achieve this, the researchers targeted the glue at the interface between the tag and the item's surface.


Original Submission

posted by hubie on Thursday February 22, @04:13PM   Printer-friendly

https://newatlas.com/energy/domes-solar-cells-boost-efficiency-two-thirds/

Solar cell efficiency may get a bump from bumps. New research suggests that building tiny domes into the surface of organic solar cells could boost their efficiency by up to two-thirds, while capturing light from a wider angle.

Solar cells are usually flat, which maximizes how much of the surface is exposed to sunlight at any given time. This design works best when the Sun is within a certain angle, so the panels are usually tilted between 15 and 40 degrees to get the most out of the day.

Scientists have toyed with other shapes for the surface, including embedding spherical nanoshells of silica which trap and circulate sunlight to allow the device to capture more energy from it. For the new study, scientists at Abdullah Gül University in Türkiye ran complex simulations of how dome-shaped bumps might boost organic solar surfaces.

The team studied photovoltaic cells made with an organic polymer called P3HT:ICBA as the active layer, above a layer of aluminum and a substrate of PMMA, capped off with a transparent protective layer of indium tin oxide (ITO). This sandwich structure was kept through the whole dome, or "hemispherical shell" as the team calls it.
...
Compared to flat surfaces, solar cells dotted with bumps showed 36% and 66% improvements in light absorption, depending on the polarization of the light. Those bumps also allowed light to enter from a wider range of directions than a flat surface, providing an angular coverage of up to 82 degrees.
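
As a rough sanity check on those numbers, compare a flat cell, whose collected power falls off roughly with the cosine of the incidence angle, against an idealized domed surface that keeps its boosted absorption across the reported 82-degree angular coverage. The toy model below assumes that idealization plus the article's 36%/66% figures; it is not the paper's optical simulation.

    import numpy as np

    # Relative energy collected as the Sun sweeps +/-82 degrees overhead.
    angles = np.radians(np.linspace(-82, 82, 1001))

    flat = np.clip(np.cos(angles), 0, None)   # flat cell: cosine falloff

    # Idealized dome: absorption up 36-66% (polarization-dependent) and
    # assumed roughly constant across the angular coverage (an assumption).
    dome_low = np.full_like(angles, 1.36)
    dome_high = np.full_like(angles, 1.66)

    print(f"{dome_low.mean() / flat.mean():.2f}x to "
          f"{dome_high.mean() / flat.mean():.2f}x collected energy vs. flat")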

Journal Reference:
Dooyoung Hah, Hemispherical-shell-shaped organic photovoltaic cells for absorption enhancement and improved angular coverage, Journal of Photonics for Energy, Vol. 14, Issue 1, 018501 (February 2024). https://doi.org/10.1117/1.JPE.14.018501


Original Submission

posted by hubie on Thursday February 22, @11:28AM   Printer-friendly
from the when-the-cat's-away-the-mice-will-play dept.

The University at Buffalo reports on a recent School of Management study https://www.buffalo.edu/news/news-releases.host.html/content/shared/mgt/news/when-newspapers-close-nonprofit-executive-salaries-go-way-up.detail.html that correlated closings of local newspapers with C-suite salaries at nearby nonprofits:

Forthcoming in the Journal of Accounting and Public Policy, the study found that when a newspaper goes out of business, total executive compensation at local nonprofits goes up by more than $38,000 on average — an increase of nearly 32%.

"Donors and volunteers expect their contributions to go to the execution of the nonprofits' mission, rather than leadership salaries, so unreasonably high compensation represents a serious problem for these organizations,"[...]

[...] the researchers ran a series of tests using financial information of nonprofits from 2008 to 2017 obtained from the IRS, as well as local newspaper closure data from previous research studies and from the University of North Carolina's Center for Innovation and Sustainability in Local Media.

Their findings show that nonprofit executive spending increases in the same year a local newspaper closes, and that the increase persists over the next three years. They also observed a decline in residual cash and donations but did not find any changes in program spending or long-term investments, suggesting that the increased compensation is due not to improved performance but to the loss of the monitoring newspaper.

"We found declines in both endowments and donor contributions at nonprofits after a local paper closes," says Khavis. "This suggests that the executives' pay increases are funded by spending down endowments, and that donors react to the loss of external monitoring by withholding their donations — which is consistent with the findings of previous studies."

I'm a proud subscriber to my local dead trees newspaper. Yes, it costs through the nose these days, but I think it's the least I can do for my community.

First seen in a Buffalo News story (paywalled?) which ends with this:

Khavis said even in areas where newspapers didn't completely close, but downsized or merged to the point where "they were open in name only," they saw the same effect on nonprofit executive pay.

The number of papers closing does not bode well for "misbehavior" by institutions, Khavis said. Between 2004 and 2015, the U.S. newspaper industry lost more than 1,800 print outlets to closures and mergers. A study by Northwestern University's Medill School of Journalism found that the rate of local newspaper closings accelerated to 2.5 per week in 2023.


Original Submission

posted by janrinok on Thursday February 22, @06:46AM   Printer-friendly
from the data-hoovering dept.

https://arstechnica.com/tech-policy/2024/02/why-the-new-york-times-might-win-its-copyright-lawsuit-against-openai/

The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times "has a near zero probability of winning" its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views.

"Trying to get everyone to license training data is not going to work because that's not what copyright is about," Jeffries wrote. "Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works."

[...] Courts are supposed to consider four factors in fair use cases, but two of these factors tend to be the most important. One is the nature of the use. A use is more likely to be fair if it is "transformative"—that is, if the new use has a dramatically different purpose and character from the original. Judge Rakoff dinged MP3.com as non-transformative because songs were merely "being retransmitted in another medium."

In contrast, Google argued that a book search engine is highly transformative because it serves a very different function than an individual book. People read books to enjoy and learn from them. But a search engine is more like a card catalog; it helps people find books.

The other key factor is how a use impacts the market for the original work. Here, too, Google had a strong argument since a book search engine helps people find new books to buy.

[...] In 2015, the Second Circuit ruled for Google. An important theme of the court's opinion is that Google's search engine was giving users factual, uncopyrightable information rather than reproducing much creative expression from the books themselves.

[...] Recently, we visited Stability AI's website and requested an image of a "video game Italian plumber" from its image model Stable Diffusion.

[...] Clearly, these models did not just learn abstract facts about plumbers—for example, that they wear overalls and carry wrenches. They learned facts about a specific fictional Italian plumber who wears white gloves, blue overalls with yellow buttons, and a red hat with an "M" on the front.

These are not facts about the world that lie beyond the reach of copyright. Rather, the creative choices that define Mario are likely covered by copyrights held by Nintendo.

We are not the first to notice this issue. When one of us (Tim) first wrote about these lawsuits last year, he illustrated his story with an image of Mickey Mouse generated by Stable Diffusion. In a January piece for IEEE Spectrum, cognitive scientist Gary Marcus and artist Reid Southen showed that generative image models produce a wide range of potentially infringing images—not only of copyrighted characters from video games and cartoons but near-perfect copies of stills from movies like Black Widow, Avengers: Infinity War, and Batman v Superman.

In its lawsuit against OpenAI, the New York Times provided 100 examples of GPT-4 generating long, near-verbatim excerpts from Times articles.

[...] Those who advocate a finding of fair use like to split the analysis into two steps, which you can see in OpenAI's blog post about The New York Times lawsuit. OpenAI first categorically argues that "training AI models using publicly available Internet materials is fair use." Then in a separate section, OpenAI argues that "'regurgitation' is a rare bug that we are working to drive to zero."

But the courts tend to analyze a question like this holistically; the legality of the initial copying depends on details of how the copied data is ultimately used.

Previously on SoylentNews:
New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement - 20231228
Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over - 20230821

Related stories on SoylentNews:
Microsoft in Deal With Semafor to Create News Stories With Aid of AI Chatbot - 20240206
AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead - 20240112
Writers and Publishers Face an Existential Threat From AI: Time to Embrace the True Fans Model - 20230415


Original Submission

posted by janrinok on Thursday February 22, @02:05AM   Printer-friendly

Widely used machine learning models reproduce dataset bias: Study:

Rice University computer science researchers have found bias in widely used machine learning tools used for immunotherapy research.

[...] HLA is a family of genes present in all humans that encode proteins working as part of our immune response. Those proteins bind with protein chunks called peptides in our cells and mark our infected cells for the body's immune system, so it can respond and, ideally, eliminate the threat.

Different people have slightly different variants in genes, called alleles. Current immunotherapy research is exploring ways to identify peptides that can more effectively bind with the HLA alleles of the patient.

The end result, eventually, could be custom and highly effective immunotherapies. That is why one of the most critical steps is to accurately predict which peptides will bind with which alleles. The greater the accuracy, the better the potential efficacy of the therapy.

But calculating how effectively a peptide will bind to the HLA allele takes a lot of work, which is why machine learning tools are being used to predict binding. This is where Rice's team found a problem: The data used to train those models appears to geographically favor higher-income communities.

Why is this an issue? Without being able to account for genetic data from lower-income communities, future immunotherapies developed for them may not be as effective.

"Each and every one of us has different HLAs that they express, and those HLAs vary between different populations," Fasoulis said. "Given that machine learning is used to identify potential peptide candidates for immunotherapies, if you basically have biased machine models, then those therapeutics won't work equally for everyone in every population."

Regardless of the application, machine learning models are only as good as the data you feed them. A bias in the data, even an unconscious one, can affect the conclusions made by the algorithm.

Machine learning models currently being used for pHLA binding prediction assert that they can extrapolate for allele data not present in the dataset those models were trained on, calling themselves "pan-allele" or "all-allele." The Rice team's findings call that into question.

"What we are trying to show here and kind of debunk is the idea of the 'pan-allele' machine learning predictors," Conev said. "We wanted to see if they really worked for the data that is not in the datasets, which is the data from lower-income populations."

Fasoulis' and Conev's group tested publicly available data on pHLA binding prediction, and their findings supported their hypothesis that a bias in the data was creating an accompanying bias in the algorithm. The team hopes that by bringing this discrepancy to the attention of the research community, a truly pan-allele method of predicting pHLA binding can be developed.
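
The audit described here amounts to scoring a binding predictor separately for alleles grouped by the populations that carry them, then comparing the results. Below is a generic sketch of that per-group evaluation; the tuple layout, model interface, and AUC metric are illustrative assumptions, not the HLAEquity code.

    from collections import defaultdict
    from sklearn.metrics import roc_auc_score

    def per_group_auc(examples, predict):
        # examples: (peptide, allele, population_group, bound) tuples, where
        # bound is the true 0/1 binding label for that peptide-HLA pair.
        # predict: function (peptide, allele) -> predicted binding score.
        labels, scores = defaultdict(list), defaultdict(list)
        for peptide, allele, group, bound in examples:
            labels[group].append(bound)
            scores[group].append(predict(peptide, allele))
        # A genuinely "pan-allele" model should score similarly everywhere;
        # a large gap between groups points to bias in the training data.
        return {g: roc_auc_score(labels[g], scores[g]) for g in labels}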

Ferreira, faculty advisor and paper co-author, explained that the problem of bias in machine learning can't be addressed unless researchers think about their data in a social context. From a certain perspective, datasets may appear as simply "incomplete," but making connections between what is or what is not represented in the dataset and underlying historical and economic factors affecting the populations from which data was collected is key to identifying bias.

"Researchers using machine learning models sometimes innocently assume that these models may appropriately represent a global population," Ferreira said, "but our research points to the significance of when this is not the case." He added that "even though the databases we studied contain information from people in multiple regions of the world, that does not make them universal. What our research found was a correlation between the socioeconomic standing of certain populations and how well they were represented in the databases or not."

More information: Anja Conev et al, HLAEquity: Examining biases in pan-allele peptide-HLA binding predictors, iScience (2023). DOI: 10.1016/j.isci.2023.108613

Journal information: iScience


Original Submission

posted by janrinok on Wednesday February 21, @09:12PM   Printer-friendly
from the ai-says-ai-is-not-taking-over dept.

Google Lays Off Thousands More Employees Despite Record Profits One Year After Laying off 12,000 Employees As Workers Begin Worrying AI is Slowly Replacing Them

Google has initiated significant layoffs across its various teams, [...] marking a continuation of the tech industry's trend towards reducing workforce expenses. The layoffs have affected hundreds of employees within the Voice Assistant unit; hardware teams responsible for Pixel, Nest and Fitbit products; and a considerable portion of the augmented reality (AR) team. This move is part of Google's broader effort to streamline operations and align resources with its most significant product priorities.

[...] This comes at a time when Google parent, Alphabet Inc., reported record profits in late January. The company reported $20.4 billion in net income in Q4.

[...] The layoffs have sparked widespread concern among Google employees, not just about job security but also about the ethical implications of their work, especially as the company continues to invest heavily in advancing AI technology.

What are the executive priorities that Google is trying to align resources with?


Original Submission

posted by janrinok on Wednesday February 21, @04:26PM   Printer-friendly

Targeting 'undruggable' proteins promises new approach for treating neurodegenerative diseases:

Researchers led by Northwestern University and the University of Wisconsin-Madison have introduced a pioneering approach aimed at combating neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease and amyotrophic lateral sclerosis (ALS).

In a new study, researchers discovered a new way to enhance the body's antioxidant response, which is crucial for cellular protection against the oxidative stress implicated in many neurodegenerative diseases.

[...] Alzheimer's disease, characterized by the accumulation of beta-amyloid plaques and tau protein tangles; Parkinson's disease, known for its loss of dopaminergic neurons and presence of Lewy bodies; and ALS, involving the degeneration of motor neurons, all share a common thread of oxidative stress contributing to disease pathology.

The study focuses on disrupting the Keap1/Nrf2 protein-protein interaction (PPI), which plays a role in the body's antioxidant response. By preventing the degradation of Nrf2 through selective inhibition of its interaction with Keap1, the research holds promise for mitigating the cellular damage that underlies these debilitating conditions.

"We established Nrf2 as a principal target for the treatment of neurodegenerative diseases over the past two decades, but this novel approach for activating the pathway holds great promise to develop disease-modifying therapies," Jeffrey Johnson said.

The research team embarked on addressing one of the most challenging aspects of neurodegenerative disease treatment: the precise targeting of PPIs within the cell. Traditional methods, including small molecule inhibitors and peptide-based therapies, have fallen short due to lack of specificity, stability and cellular uptake.

The study introduces an innovative solution: protein-like polymers, or PLPs, are high-density brush macromolecular architectures synthesized via the ring-opening metathesis polymerization (ROMP) of norbornenyl-peptide-based monomers. These globular, proteomimetic structures display bioactive peptide side chains that can penetrate cell membranes, exhibit remarkable stability and resist proteolysis.

This targeted approach to inhibit the Keap1/Nrf2 PPI represents a significant leap forward. By preventing Keap1 from marking Nrf2 for degradation, Nrf2 accumulates in the nucleus, activating the Antioxidant Response Element (ARE) and driving the expression of detoxifying and antioxidant genes. This mechanism effectively enhances the cellular antioxidant response, providing a potent therapeutic strategy against the oxidative stress implicated in many neurodegenerative diseases.

PLPs [protein-like polymers], developed by Gianneschi's team, could represent a significant breakthrough in halting or reversing damage, offering hope for improved treatments and outcomes.

Focusing on the challenge of activating processes crucial for the body's antioxidant response, the team's research offers a novel solution. The team provides a robust, selective method enabling enhanced cellular protection and offering a promising therapeutic strategy for a range of diseases including neurodegenerative conditions.

"Through modern polymer chemistry, we can begin to think about mimicking complex proteins," Gianneschi said. "The promise lies in the development of a new modality for the design of therapeutics. This could be a way to address diseases like Alzheimer's and Parkinson's among others where traditional approaches have struggled."

This approach not only represents a significant advance in targeting transcription factors and disordered proteins, but also showcases the PLP technology's versatility and potential to revolutionize the development of therapeutics. The technology's modularity and efficacy in inhibiting the Keap1/Nrf2 interaction underscore its potential not only as a therapeutic, but also as a tool for studying the biochemistry of these processes.

More information: Kendal P. Carrow et al, Inhibiting the Keap1/Nrf2 Protein‐Protein Interaction with Protein‐Like Polymers, Advanced Materials (2024). DOI: 10.1002/adma.202311467

Journal information: Advanced Materials


Original Submission

posted by janrinok on Wednesday February 21, @11:42AM   Printer-friendly

SETI Institute Employs SETI Ellipsoid Technique:

In a paper published in the Astronomical Journal, a team of researchers from the SETI Institute, Berkeley SETI Research Center and the University of Washington reported an exciting development for the field of astrophysics and the search for extraterrestrial intelligence (SETI): using observations from the Transiting Exoplanet Survey Satellite (TESS) mission to monitor the SETI Ellipsoid, a method for identifying potential signals from advanced civilizations in the cosmos. The SETI Ellipsoid is a strategic approach to selecting potential technosignature candidates, based on the hypothesis that extraterrestrial civilizations, upon observing a significant galactic event such as supernova 1987A, might use it as a focal point from which to emit synchronized signals announcing their presence.
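
The underlying geometry is compact: with all distances in light-years, a star lying d_star from Earth and d_event from the supernova crosses the ellipsoid (whose foci are Earth and the event) t = (d_event + d_star) - D years after we saw the event, where D is the Earth-event distance. A minimal sketch with made-up numbers (SN 1987A lies roughly 168,000 light-years away):

    def ellipsoid_crossing_year(d_star_ly, d_event_to_star_ly,
                                d_earth_to_event_ly=168_000.0):
        # A signal sent the moment the star sees the event travels
        # d_event_to_star + d_star light-years in total before reaching us;
        # light from the event itself traveled d_earth_to_event. The
        # difference is how many years after the event we should watch.
        return (d_event_to_star_ly + d_star_ly) - d_earth_to_event_ly

    # Hypothetical star 100 ly from Earth and 167,950 ly from SN 1987A
    # (illustrative numbers; the real search uses Gaia distances).
    print(ellipsoid_crossing_year(100.0, 167_950.0))  # -> 50.0 years after 1987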

In this work, researchers show that the SETI Ellipsoid method can leverage continuous, wide-field sky surveys, significantly enhancing our ability to detect these potential signals. By compensating for the uncertainties in the estimated time-of-arrival of such signals using observations that span up to a year, the team implements the SETI Ellipsoid strategy in an innovative way using state-of-the-art technology.

[...] In examining data from the TESS continuous viewing zone, covering 5% of all TESS data from the first three years of its mission, researchers utilized the advanced 3D location data from Gaia Early Data Release 3. This analysis identified 32 prime targets within the SETI Ellipsoid in the southern TESS continuous viewing zone, all with uncertainties refined to better than 0.5 light-years. While the initial examination of TESS light curves during the Ellipsoid crossing event revealed no anomalies, the groundwork laid by this initiative paves the way for expanding the search to other surveys, a broader array of targets, and exploring diverse potential signal types.

[...] The SETI Ellipsoid method, combined with Gaia's distance measurements, offers a robust and adaptable framework for future SETI searches. Researchers can retrospectively apply it to sift through archival data for potential signals, proactively select targets, and schedule future monitoring campaigns.

"As Dr. Jill Tarter often points out, SETI searches are like looking for a needle in a 9-D haystack," said co-author Dr. Sofia Sheikh. "Any technique that can help us prioritize where to look, such as the SETI Ellipsoid, could potentially give us a shortcut to the most promising parts of the haystack. This work is the first step in searching those newly-highlighted parts of parameter space, and is an exciting precedent for upcoming large survey projects like LSST."

Journal Reference:
Bárbara Cabrales et al, AJ 167 101 (2024). DOI: 10.3847/1538-3881/ad2064


Original Submission

posted by hubie on Wednesday February 21, @06:57AM   Printer-friendly
from the it-ain't-over-until-there's-a-Zuckerberg-apology dept.

Facebook £3bn legal action given go-ahead in London:

A judge has given the go-ahead to a mass legal action against Facebook owner Meta, potentially worth £3bn.

The case is being brought by legal academic Dr Liza Lovdahl Gormsen, on behalf of 45 million Facebook users.

Her original claim was refused in 2023, but a revised version has now been accepted; early 2026 is said to be the latest the case could be heard.

Meta said the claims "remain entirely without merit and we will vigorously defend against them".

The new claim says: "Facebook has struck an unfair bargain with its users," according to legal documents.

It alleges that Facebook abused its dominance by making users give it their data from non-Facebook products, including Meta-owned Instagram and other third-party sites.

And it says that sharing data with third parties had become "a condition of accessing the Facebook platform, pursuant to a 'take-it-or-leave-it' offer".

[...] Meta said the "fundamental concerns identified by the tribunal in its February 2023 judgement have not been resolved".

It was "committed to giving people meaningful control" of the information they shared on its platforms and to "invest heavily to create tools that allow them to do so."

The legal action is being funded by Innsworth, a company backed by an investment management fund, which has also funded mass legal actions against Mastercard, Ericsson and Volkswagen.


Original Submission

posted by hubie on Wednesday February 21, @02:13AM   Printer-friendly

Study shows background checks don't always check out:

Employers making hiring decisions, landlords considering possible tenants and schools approving field trip chaperones all widely use commercial background checks. But a new multi-institutional study co-authored by a University of Maryland researcher shows that background checks themselves can't be trusted.

Assistant Professor Robert Stewart of the Department of Criminology and Criminal Justice and Associate Professor Sarah Lageson of Rutgers University suspected that the loosely regulated entities that businesses and landlords rely on to run background checks produce faulty reports, and their research bore out this hunch. The results were published last week in Criminology.

"There's a common, taken-for-granted assumption that background checks are an accurate reflection of a person's criminal record, but our findings show that's not necessarily the case," Stewart said. "My co-author and I found that there are lots of inaccuracies and mistakes in background checks caused, in part, by imperfect data aggregation techniques that rely on names and birth dates rather than unique identifiers like fingerprints."

The erroneous results of a background check can "go both ways," Stewart said. They can miss convictions that a potential employer would want to know about, or they can falsely assign a conviction to an innocent person through transposed numbers in a birth date, incorrect spelling of a name or simply the existence of common aliases.

Stewart and Lageson's study is based on the examination of official state rap sheets containing all arrests, criminal charges, and case dispositions recorded in the state linked to the record subject's name and fingerprints for 101 study participants in New Jersey. Then, the researchers ordered background checks from a regulated service provider—the same type of company that an employer, a landlord, or a school system might use. The researchers also looked up background checks on the same study participants from an unregulated data provider, such as popular "people search" websites.

"We find that both types of background checks have numerous 'false positive' results, reporting charges that our study participants did not have, as well as 'false negatives,' not reporting charges that our study participants did have," Stewart said.

[...] Stewart said that public awareness of the potentially erroneous and incomplete results of background checks will be key to addressing this systemic social problem.

"Other countries are handling background checks in different ways, ways that may take more time, but there are better models out there," Stewart said. "It may be better for background checks to be done through the state, or the FBI, or through other ways that use biometric data. It's important for people to realize that there's a lot at stake."

Journal Reference:
Sarah Lageson et al, The problem with criminal records: Discrepancies between state reports and private‐sector background checks, Criminology (2024). DOI: 10.1111/1745-9125.12359


Original Submission

posted by janrinok on Tuesday February 20, @09:27PM   Printer-friendly

https://scarybeastsecurity.blogspot.com/2020/11/reverse-engineering-forgotten-1970s.html

ISA = Instruction Set Architecture

"As I recall, those two chips were fairly large. And fairly late -- to the marketplace. We had lots of issues with them. [...] Sometimes the elegant solution isn't the best solution." -- Dave House, digressing to the 8271 during "Oral History Panel on the Development and Promotion of the Intel 8080 Microprocessor" [link], April 26th 2007, Computer History Museum, Mountain View, California. Introduction

Around 1977, Intel released a floppy disc controller (FDC) chip called the 8271. This controller isn't particularly well known. It was mainly used in business computers and storage solutions, but its one breakthrough into the consumer space was with the BBC Micro, a UK-centric computer released in 1981.

There are very few easily discovered details about this chip online, aside from the useful datasheet. This, combined with increasing observations of strange behavior, makes the chip a bit of an enigma. My interest in the chip was piqued when I accidentally triggered a wild test mode that managed to corrupt one of my floppy discs even though the write protect tab was present!


Original Submission

posted by janrinok on Tuesday February 20, @03:42PM   Printer-friendly
from the its-DNSSEC-not-DNSSEX dept.

Just one bad packet can bring down a vulnerable DNS server thanks to DNSSEC

'You don't have to do more than that to disconnect an entire network' El Reg told as patches emerge

A single packet can exhaust the processing capacity of a vulnerable DNS server, effectively disabling the machine, by exploiting a 20-plus-year-old design flaw in the DNSSEC specification.

That would make it trivial to take down a DNSSEC-validating DNS resolver that has yet to be patched, upsetting all the clients relying on that service and making it seem as though websites and apps were offline.

The academics who found this flaw – associated with the German National Research Center for Applied Cybersecurity (ATHENE) in Darmstadt – claimed DNS server software makers briefed about the vulnerability described it as "the worst attack on DNS ever discovered."

[...] The researchers said lone DNS packets exploiting KeyTrap could stall public DNSSEC-validated DNS services, such as those provided by Google and Cloudflare, by making them do calculations that overtax server CPU cores.
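
The overtaxing comes from DNSSEC's requirement that a validator try every key whose key tag matches a signature until one verifies, so a response stuffed with colliding keys and signatures forces on the order of keys × signatures expensive cryptographic checks. The loop below is a schematic illustration of that validation behavior under assumed types, not an exploit and not any resolver's actual code.

    # Schematic of the DNSSEC validation loop KeyTrap overloads; the
    # signature/key objects and crypto_verify callback are illustrative.
    def validate_rrset(signatures, keys, crypto_verify):
        for sig in signatures:
            # Key tags are not unique, so every colliding key must be tried.
            candidates = [key for key in keys if key.tag == sig.key_tag]
            # With k colliding keys and s signatures, a compliant validator
            # performs up to k * s expensive verifications before failing.
            if not any(crypto_verify(sig, key) for key in candidates):
                return False  # every candidate tried, none verified
        return True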

This disruption of DNS could not only deny people's access to content but could also interfere with other systems, including spam defenses, cryptographic defenses (PKI), and inter-domain routing security (RPKI), the researchers assert.

"Exploitation of this attack would have severe consequences for any application using the Internet including unavailability of technologies such as web-browsing, e-mail, and instant messaging," they claimed. "With KeyTrap, an attacker could completely disable large parts of the worldwide internet."

I thought overtaxed CPU cores were the domain of cryptocurrency and large language models.


Original Submission

posted by janrinok on Tuesday February 20, @10:58AM   Printer-friendly

https://phys.org/news/2024-02-acoustic-ultrasound-access-enclosed-metal.html

The insides of underwater pipes and enclosed nuclear containers were inaccessible—until recently. Acoustics researchers in Penn State's College of Engineering have developed a way to convey energy and transmit communications through metal walls using ultrasound.

They published their innovation, a pillar-based acoustic metamaterial that operates in the ultrasound frequency range, in Physical Review Applied. The work could have implications for research in space, according to the researchers.

"If you wanted to power a device, such as a temperature sensor, inside a metal enclosure like a pipe, ultrasound waves can carry that energy to the device," said Yun Jing, professor of acoustics and biomedical engineering and corresponding author on the paper. "But previously, the waves could not pass through metal barriers that would block sound, unless the transducers were in direct contact with the barrier."

The researchers created a pillar-based metamaterial: an array of tiny, cylindrical pillars positioned on a metal plate that work as resonators, which vibrate or oscillate to create acoustic resonance.

When the metamaterial is situated between a transducer transmitter and a receiver, it dramatically enhances the ultrasonic power transmission rate through a metal barrier, without requiring direct contact between transducers and the barrier. Previously, faint ultrasound waves could pass through metal, but they lacked sufficient energy to power a sensor or pass messages through the metal.

"With a narrow end and a wider end like a pillar, the acoustic metamaterial is designed like an acoustic resonator," said first author Jun Ji, who recently earned his doctorate in acoustics from Penn State. "The shape of the metamaterial allows for a wireless transmission and reception of ultrasound through a metal barrier."

The researchers tested the function of the metamaterial sample in two experiments. In the first, they wirelessly transmitted power through a metal plate with the metamaterial using an ultrasonic transmitter and a receiver, successfully powering an LED light on the other side. This confirmed the metamaterial's ability to transmit power through metal walls.

In a second test case, they transmitted an image of the letters "PSU" through a metal plate with the metamaterial using an encoded ultrasonic signal, confirming that communication is possible when the metamaterial strengthens the transmission of ultrasound waves through metal barriers.

More information: Jun Ji et al, Metamaterial-enabled wireless and contactless ultrasonic power transfer and data transmission through a metallic wall, Physical Review Applied (2024). DOI: 10.1103/PhysRevApplied.21.014059


Original Submission

posted by hubie on Tuesday February 20, @06:13AM   Printer-friendly
from the Who-woulda-thunk-it? dept.

Looks like Microsoft is preparing yet more helpful tech reps...

https://www.naturalnews.com/2024-02-12-microsoft-to-equip-2m-indians-with-ai-skills.html

Tech billionaire and globalist Bill Gates' Microsoft has announced plans to recruit up to two million workers from India who will be trained to use artificial intelligence.

According to Microsoft Chairman and Chief Executive Officer Satya Nadella, the company will equip two million Indians with AI skills by 2025 in an effort to generate more jobs in the nation of nearly 1.5 billion. (Related: Technocrats Gates and Altman admit current AI is the stupidest version of AGI but believe it can eventually "overcome polarization" – or in reality – censor views.)

"We are devoted to equip two million-plus people in India with AI skills, that is, really taking the workforce and making sure that they have the right skills in order to be able to be a part of this domain," said Nadella on Wednesday, Feb. 7, during a Microsoft CEO Connection event in Mumbai. "But it's not just the skills, it's even the jobs that they create."

The skilling program will focus on training individuals in Tier-2 and Tier-3 cities – cities with 50,000 to 99,999 residents and 20,000 to 49,999 residents, respectively – as well as rural areas with below 20,000 residents in an effort to "unlock inclusive socio-economic progress," according to the company in a statement.


Original Submission

posted by janrinok on Tuesday February 20, @01:30AM   Printer-friendly

Time.com

One overlooked aspect of the resignation of Harvard University President Claudine Gay in January is why Harvard's 400-year-old governing corporation, composed of titans of industry, academia, and government, appeared so caught off guard by the public's reaction to Gay's quickly mounting problems over her congressional testimony and plagiarism charges. Damning reports described flat-footed board members marooned at holiday destinations engaged in reactive decision-making.

Similar questions arose regarding Boeing's paralysis over a years-long crisis in quality control of its 737 MAX fleet. Did no one in charge think the public would react poorly to news that the plane was deemed sufficiently dodgy that Alaska Airlines had restricted it from flying over open water to Hawaii?

Like all organizations, non-profit and corporate boards can fall prey to groupthink, silo effects, or short-term or trendy thinking that ultimately work against the interests of the entity they are entrusted to oversee. A large scientific literature has explored factors affecting board function, from disciplinary and sociodemographic diversity to size and deliberation processes. A less explored factor is how the internal network structure of the board can affect its performance.

[...] Boards face challenges, and their problems are not just a function of mis-directed objectives (such as an overly narrow focus on quarterly earnings or fealty to some ideological commitment) nor a function of the self-perpetuating insularity that makes them ignore external pressures or information. Rather, the way many boards themselves are structured may make them less capable of confronting the enduring reality of such stresses. Efforts to make it difficult or impossible for outsiders to join boards (such as happened at Yale a couple of years ago) will only exacerbate such problems.

Network insights can provide the right kind of disruption here—the kind that fosters creativity and fiscal stewardship, whether that is by bringing leaders down or keeping planes up.

An interesting article about the possible reasons why boards fail to make the right decisions ...


Original Submission