
posted by Fnord666 on Saturday May 14 2022, @11:34PM
from the calm-down-aes-is-not-broken dept.

Practical bruteforce of AES-1024 military grade encryption:

I recently presented work on the analysis of a file encryption solution that claimed to implement "AES-1024 military grade encryption". Spoiler alert: I did not break AES, and this work does not concern the security of AES itself; advanced research on that topic can be found elsewhere.

This project started during a forensic analysis. One of my colleagues came to me with a USB stick containing a vault encrypted with SanDisk Secure Access software. He asked me if it was possible to bruteforce the password of the vault to recover its contents. I did not know this software, so I started researching it. It turned out that this solution is distributed by SanDisk by default on any storage device you buy from them.

The solution is convenient: it allows a user to run the binary on the disk, and after she enters her correct password, her vault is unlocked and the files are accessible. Once the software is closed, the files are encrypted again and are no longer accessible. So far nothing uncommon, but one thing drew my attention: in the Options menu, you can choose your "Preferred encryption method".

[...] They claimed to provide "Ultimate encryption using 1024 bit AES keys, Military grade". Thus for all those reasons, I decided to analyze the solution to figure out how it was implemented.

[...] In fact, from a general point of view, I was analyzing a password hash function. The function takes a user password as input and hashes it into a key which is later used to encrypt or decrypt data. Usually, the password hash function also takes as input a unique, randomly generated salt to prevent precomputed attacks such as dictionary or rainbow table attacks. Another common parameter is the iteration count, which adjusts the work factor: the higher the iteration count, the longer it takes to compute the hash, and thus the harder it is for an attacker to bruteforce the password. Currently there are several recommended algorithms: PBKDF2, scrypt, and Argon2. Argon2 is the winner of the Password Hashing Competition and is now considered the state of the art for password hashing.

For this analysis, I only needed to focus on PBKDF2. Its design is simple:
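To make the normal construction concrete, here is a minimal sketch of standard PBKDF2 usage via Python's hashlib, with a per-vault random salt and a modern iteration count. The parameter choices are illustrative defaults, not the analyzed product's (which, as described below, fixes both the salt and the iteration count):

    import hashlib
    import os

    def derive_key(password: str, salt: bytes, iterations: int = 310_000) -> bytes:
        # PBKDF2-HMAC-SHA256: a unique random salt defeats precomputed
        # (dictionary/rainbow table) attacks, and a high iteration count
        # raises the cost of every brute-force guess.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                   iterations, dklen=32)

    salt = os.urandom(16)   # should be unique per vault
    key = derive_key("correct horse battery staple", salt)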

[...] It looks randomly generated, but it is definitely not unique, since all vaults created with the software use the same salt for key derivation. In addition, users with the same password end up with the same decryption key. Later I discovered that the same salt value is also shared among all the vendors: SanDisk, Sony and Lexar. A less critical problem is that the number of iterations is also fixed, at 1000. This iteration count was fine when PBKDF2 was designed, but nowadays it has to be much higher; OWASP, for example, now recommends 310,000 iterations for PBKDF2 with HMAC-SHA-256. More importantly, the construction of the key derivation function itself is flawed.

[...] Now that I had the key derivation function, I checked how the password was verified. A file named filesystem.dat, stored in the folder C:\Users\user\AppData\Local\ENCSecurityBV\ENCDataVault70\ENCDataVault, contains an encrypted magic value. When decrypting this magic value yields 0xd2c3b4a1, the password is considered correct. The decryption uses OpenSSL's AES implementation. For the AES-128 option, the encryption is simply AES in CTR mode with a 128-bit key generated from the key derivation function described earlier. However, for the other modes the construction is more curious.
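As a rough illustration of that verification logic, a check along these lines can be written with the pyca/cryptography library. This is a sketch only: the actual on-disk layout of filesystem.dat, the byte order of the magic value, and the CTR nonce format are not specified in the excerpt, so the nonce parameter here is a placeholder assumption:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    MAGIC = bytes.fromhex("d2c3b4a1")   # byte order assumed for illustration

    def password_is_correct(candidate_key: bytes, encrypted_magic: bytes,
                            nonce: bytes) -> bool:
        # Decrypt the stored magic value with AES-CTR under the candidate
        # key; the password is accepted when the plaintext matches.
        # The 16-byte nonce is a placeholder assumption, not the real format.
        cipher = Cipher(algorithms.AES(candidate_key), modes.CTR(nonce))
        plaintext = cipher.decryptor().update(encrypted_magic)
        return plaintext == MAGIC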

[...] I had everything I needed to implement a John the Ripper plugin that allows everybody to bruteforce AES-1024 military grade encryption! The plugin is now integrated into the main repository and also supports bruteforcing the new key derivation function based on PBKDF2-HMAC-SHA256.

[...] This analysis shows once again that it is difficult to roll your own cryptographic algorithm, and that the security level of a solution does not depend on the number of encryptions performed.


Original Submission

posted by Fnord666 on Saturday May 14 2022, @06:48PM

Google Cloud launches its own version of PostgreSQL:

At its Google I/O 2022 event, the company pitched AlloyDB as a new modernization option for users transitioning away from legacy databases.

Google claims that compared with standard PostgreSQL, AlloyDB was more than four times faster for transactional workloads in its performance tests, and up to 100 times faster for analytical queries.

AlloyDB was also two times faster for transactional workloads than Amazon's comparable service, Google claimed in a dig at its cloud hosting rival.

In addition, Google says the service uses the same building blocks that power Google services such as YouTube, Search, Maps, and Gmail.

[...] According to Google, the new service also maintains full compatibility with PostgreSQL 14, the latest version of the open-source database, enabling users to reuse their existing development skills and tools and to migrate existing PostgreSQL applications without code changes.
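If the compatibility claim holds, ordinary PostgreSQL client code should run against AlloyDB unmodified. A minimal sketch with the standard psycopg2 driver, where the connection details are hypothetical (AlloyDB exposes a standard PostgreSQL endpoint):

    import psycopg2  # the ordinary PostgreSQL driver; no AlloyDB-specific client

    # Hypothetical endpoint and credentials, for illustration only.
    conn = psycopg2.connect(host="10.0.0.5", port=5432, dbname="appdb",
                            user="postgres", password="example-password")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])   # should report a PostgreSQL 14-compatible server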


Original Submission

posted by Fnord666 on Saturday May 14 2022, @02:04PM
from the where-the-marsh-grass-dances dept.

Land-Building Marsh Plants are Champions of Carbon Capture:

Human activities such as marsh draining for agriculture and logging are increasingly eating away at saltwater and freshwater wetlands that cover only 1% of Earth's surface but store more than 20% of all the climate-warming carbon dioxide absorbed by ecosystems worldwide.

A new study published May 5 in Science by a team of Dutch, American and German scientists shows that it's not too late to reverse the losses.

The key to success, the paper's authors say, is using innovative restoration practices -- identified in the new paper -- that replicate natural landscape-building processes and enhance the restored wetlands' carbon-storing potential.

And doing it on a large scale.

[...] "More than half of all wetland restorations fail because the landscape-forming properties of the plants are insufficiently taken into account," said study coauthor Tjisse van der Heide of the Royal Institute for Sea Research and the University of Groningen in the Netherlands. Planting seedlings and plugs in orderly rows equidistant from each other may seem logical, but it's counter-productive, he said.

"Restoration is much more successful when the plants are placed in large dense clumps, when their landscape-forming properties are mimicked, or simply when very large areas are restored in one go," van der Heide said.

"Following this guidance will allow us to restore lost wetlands at a much larger scale and increase the odds that they will thrive and continue to store carbon and perform other vital ecosystem services for years to come," Silliman said. "The plants win, the planet wins, we all win."

Journal Reference:
R.J.M. Temmink, L.P.M. Lamers, C. Angelini, et al., Recovering Wetland Biogeomorphic Feedbacks to Restore the World's Biotic Carbon Hotspots, Science, 2022.
DOI: 10.1126/science.abn1479


Original Submission

posted by Fnord666 on Saturday May 14 2022, @09:18AM
from the put-down-the-phone-and-back-away dept.

Results of a study that asked participants to take a week-long break from social media find positive effects for wellbeing, depression and anxiety.

Asking people to stop using social media for just one week could lead to significant improvements in their wellbeing, depression and anxiety, and could, in the future, be recommended as a way to help people manage their mental health, say the authors of a new study.

The study, carried out by a team of researchers at the University of Bath (UK), examined the mental health effects of a week-long social media break. For some participants, this meant freeing up around nine hours of their week that would otherwise have been spent scrolling Instagram, Facebook, Twitter and TikTok.

[...] Participants reported spending an average of 8 hours per week on social media at the start of the study. One week later, the participants who had been asked to take the one-week break showed significantly greater improvements in wellbeing, depression, and anxiety than those who continued to use social media, suggesting a short-term benefit.

[...] "Of course, social media is a part of life and for many people, it's an indispensable part of who they are and how they interact with others. But if you are spending hours each week scrolling and you feel it is negatively impacting you, it could be worth cutting down on your usage to see if it helps."

[...] Over the past 15 years, social media has revolutionised how we communicate, underscored by the huge growth the main platforms have observed. In the UK the number of adults using social media increased from 45% in 2011 to 71% in 2021. Among 16 to 44-year-olds, as many as 97% of us use social media and scrolling is the most frequent online activity we perform.

Journal Reference:
Jeffrey Lambert et al., Taking a One-Week Break from Social Media Improves Well-Being, Depression, and Anxiety: A Randomized Controlled Trial [open], Cyberpsychology, Behavior, and Social Networking, 2022
DOI: 10.1089/cyber.2021.0324


Original Submission

posted by Fnord666 on Saturday May 14 2022, @04:37AM
from the pants-on-fire dept.

A new method of lie detection shows that lie tellers who are made to multi-task while being interviewed are easier to spot:

It is well documented that lying during interviews takes up more cognitive energy than telling the truth. A new study by the University of Portsmouth found that investigators who used this finding to their advantage, by asking a suspect to carry out an additional, secondary task while being questioned, were more likely to expose lie tellers. The extra brain power needed to concentrate on a secondary task (other than lying) was particularly challenging for lie tellers.

[...] "Our research has shown that truths and lies can sound equally plausible as long as lie tellers are given a good opportunity to think what to say. When the opportunity to think becomes less, truths often sound more plausible than lies. Lies sounded less plausible than truths in our experiment, particularly when the interviewees also had to carry out a secondary task and were told that this task was important."

[...] Professor Vrij said: "The pattern of results suggests that the introduction of secondary tasks in an interview could facilitate lie detection but such tasks need to be introduced carefully. It seems that a secondary task will only be effective if lie tellers do not neglect it. This can be achieved by either telling interviewees that the secondary task is important, as demonstrated in this experiment, or by introducing a secondary task that cannot be neglected (such as gripping an object, holding an object into the air, or driving a car simulator). Secondary tasks that do not fulfil these criteria are unlikely to facilitate lie detection."

So if you think your significant other is hiding something from you, grill them when they're driving a car.

Journal Reference:
Aldert Vrij et al., The Effects of a Secondary Task on True and False Opinion Statements [open], Int J Psychol Behav Anal, 8, 2022
DOI: 10.15344/2455-3867/2022/185


Original Submission

posted by hubie on Friday May 13 2022, @10:56PM
from the somebody's-watching-you dept.

From the Malwarebytes blog:

On May 11, 2022, the EU will publish a proposal for a law on mandatory chat control. The European Commission wants all providers of email, chat and messaging services to search for suspicious messages in a fully automated way and forward them to the police in the fight against child pornography.

[...] Similar developments are taking place in the US and the supporting narrative has expanded from domestic terrorism to other illegal content and activity, such as child sexual exploitation and abuse, terrorism, foreign adversaries, and attempts to undermine democratic values and institutions.

[...] What most, if not all, of these activities have in common is that you usually won't see the criminals using the same platforms as those of us that want to stay in touch with friends and relatives. They are already conducting their "business" in illegal marketplaces on the Dark Web, or they are using encrypted phone services.

[...] Since client-side scanning technologies may represent the most powerful surveillance system ever imagined, it is imperative that we find a way to make them abuse-resistant and auditable before we decide to start using them. Failures from the past have taught us that it's often the other way around. We learn from our mistakes, but how costly are they?

Also at:
    The Guardian
    Patrick Breyer


Original Submission

posted by hubie on Friday May 13 2022, @08:11PM
from the you-can-tell-by-the-way-I-walk dept.

Slow walking may be to blame for perceived congestion in pedestrian areas:

If you live in a town or city, you are probably experienced in the art of navigating through crowded areas. But sometimes you can't help feeling that your surroundings are too congested for comfort. Intuition tells us that it must be the sheer volume of people around us in these moments that causes the perception of somewhere being too congested. But Project Assistant Professor Jia Xiaolu from the Research Center for Advanced Science and Technology at the University of Tokyo wanted to verify this assumption, and ended up proving that it might not actually be the whole truth of the matter.

"Perception of congestion is an important matter for those designing spaces to be used by people, so if there's a way to estimate this perceptual value, it would be useful to know," said Xiaolu. [...]

"That the velocity of pedestrians rather than density of the crowd better indicates perceived congestion was a bit of a surprise," said Xiaolu. "But it leads us to believe that people perceive a space too congested when they are simply unable to walk at the speed they wish to; there is a gap between their desired and actual velocity. [...]

"We found that women and also older people generally felt less constrained than men and younger people, which is probably due to their lower desired velocity, thus a smaller gap between their desired and actual velocity," said Xiaolu. "And while this is interesting, I think our future studies will focus on spaces where the objective is not so much about getting from A to B, but more goal oriented, such as interacting with a service in a store, gallery or other destination."

Original material: https://www.u-tokyo.ac.jp/focus/en/press/z0508_00219.html

Journal Reference:
Xiaolu Jia et al., Revisiting the level-of-service framework for pedestrian comfortability: velocity depicts more accurate perceived congestion than local density, Transportation Research, 2022.
DOI: 10.1016/j.trf.2022.04.007


Original Submission

posted by hubie on Friday May 13 2022, @05:27PM
from the electric-slime dept.

Algae-powered computing: Scientists create reliable and renewable biological photovoltaic cell:

Researchers have used a widespread species of blue-green algae to power a microprocessor continuously for a year -- and counting -- using nothing but ambient light and water. Their system has potential as a reliable and renewable way to power small devices.

The system, comparable in size to an AA battery, contains a type of non-toxic algae called Synechocystis that naturally harvests energy from the sun through photosynthesis. The tiny electrical current this generates then interacts with an aluminium electrode and is used to power a microprocessor.

[...] "The growing Internet of Things needs an increasing amount of power, and we think this will have to come from systems that can generate energy, rather than simply store it like batteries," said Professor Christopher Howe in the University of Cambridge's Department of Biochemistry, joint senior author of the paper.

[...] In the experiment, the device was used to power an Arm Cortex M0+, which is a microprocessor used widely in Internet of Things devices. It operated in a domestic environment and semi-outdoor conditions under natural light and associated temperature fluctuations, and after six months of continuous power production the results were submitted for publication.

Journal Reference:
P. Bombelli, A. Savanth, A. Scarampi, et al. Powering a microprocessor by photosynthesis, Energy & Environmental Science, 2022.
DOI: 10.1039/D2EE00233G


Original Submission

posted by hubie on Friday May 13 2022, @02:41PM
from the you-mean-there-really-was-a-game-in-development? dept.

Two submitted stories talk about new developments in the DNF saga. Both stories are much longer than can be summarized here, but are worth the read (and pictures):

Duke Nukem Forever's 2001 build appears online, may fully leak in June

The game's latest leak, posted to 4chan on Sunday and widely shared by Duke Nukem fansite duke4.net, appears to be made of original 2001 code and assets. It includes a one-minute video of first-person carnage in a very Duke-appropriate environment of a strip club called "Slick Willy." The sequence was apparently played and captured by the build's leaker.

In addition, the leaker suggested that the build's playable files, source code, and official map editor could be released in June—which would coincide with the E3 trailer's 21st anniversary—and responded to various 4chan doubters by posting additional images based on their requests. These included screengrabs of the build's file and folder lists, along with images from other sections of the game and a higher-res peek at "the redneck from the E3 trailer."

Shortly after the video and its related screencaps made the rounds, former Duke Nukem Forever project lead George Broussard confirmed its apparent authenticity on Twitter, telling fans that "the leak looks real." He said that while it may be playable, it shouldn't be looked at as a game, "just a smattering of barely populated test levels."

We have played the lost Duke Nukem Forever build from 2001

Earlier this week, a retro game leaker teased '90s shooter fans with something they'd never seen before [...] Was this an elaborate fan-made fake of Duke-like content in a dated 3D engine, or would this turn out to be the real deal?

We thought we'd have to wait until June for an answer, as this week's leaker suggested that the build and its source code would be released to coincide with the 21st anniversary of the game's tantalizing E3 2001 trailer. But after this week's tease, the leakers decided to jump the gun. On Tuesday, 1.9GB of Duke Nukem Forever files landed on various file-sharing sites (which we will not link here), and Ars Technica has confirmed that those files are legitimate.

As it turns out, this is a surprisingly playable version of Duke Nukem Forever from October 2001, though with so many bugs and incomplete sections, that's not saying much. Most of this content, which includes moments from the aforementioned E3 trailer, was shelved by the time the game reached a cobbled-together retail state in 2011. So we're finally getting a closer look at how the game could have turned out differently if it had launched closer to 2001.

Now that the code is out, do you think the community can finish the game in a state that will live up to its original promises?


Original Submission #1 | Original Submission #2

posted by hubie on Friday May 13 2022, @11:57AM
from the hearing-is-believing dept.

Restoring Hearing: New Tool To Create Ear Hair Cells Lost Due to Aging or Noise:

Hearing loss caused by aging, noise, and some cancer therapy medications and antibiotics has been irreversible because scientists have not been able to reprogram existing cells to develop into the outer and inner ear sensory cells — essential for hearing — once they die.

But Northwestern Medicine scientists have discovered a single master gene that programs ear hair cells into either outer or inner ones, overcoming a major hurdle that had previously prevented the development of these cells to restore hearing, according to new research published today (May 4, 2022) in the journal Nature.

[...] Currently, scientists can produce an artificial hair cell, but it does not differentiate into an inner or outer cell, each of which provides different essential functions to produce hearing. The discovery is a major step toward developing these specific cells.

The death of outer hair cells made by the cochlea is most often the cause of deafness and hearing loss. The cells develop in the embryo and do not reproduce. The outer hair cells expand and contract in response to the pressure from sound waves and amplify sound for the inner hair cells. The inner cells transmit those vibrations to the neurons to create the sounds we hear.

[...] . "We can now figure out how to make specifically inner or outer hair cells and identify why the latter are more prone to dying and cause deafness, "García-Añoveros said. He stressed this research is still in the experimental stage.

Journal Reference:
Jaime García-Añoveros et al. Tbx2 is a master regulator of inner versus outer hair cell differentiation, Nature, 2022
DOI: 10.1038/s41586-022-04668-3


Original Submission

posted by hubie on Friday May 13 2022, @09:11AM
from the age-of-the-microsat dept.

UK company reveals micro-launcher rocket:

Orbex's Prime rocket reaching technical readiness represents a significant achievement that brings together key elements of the ground infrastructure and prototype launch vehicle for the first time and is a major step forward for the company and for the U.K. launch industry.

[...] Orbex Prime will launch from Space Hub Sutherland, a new spaceport on the North Coast of Scotland. Space Hub Sutherland was the first vertical spaceport to receive planning permission in the U.K. and has committed to being carbon-neutral, both in its construction and operation.

[...] Orbex Prime is a 19-meter-long, two-stage rocket powered by seven engines, designed and manufactured in the U.K. and Denmark. The six engines on the first stage will propel the vehicle through the atmosphere to an altitude of around 80 km. The single engine on the second stage will complete the journey to Low Earth Orbit (LEO), allowing the release of its payload of small commercial satellites into Earth's orbit.

Chris Larmour, CEO, Orbex, said: "This is a major milestone for Orbex and highlights just how far along our development path we now are. From the outside, it might look like an ordinary rocket, but on the inside, Prime is unlike anything else. To deliver the performance and environmental sustainability we wanted from a 21st century rocket we had to innovate in a wide number of areas—low-carbon fuels, fully 3D-printed rocket engines, very lightweight fuel tanks, and a novel, low-mass reusability technology."

Slick Orbex Space promo video on YouTube

They're not making it easy on themselves launching from 58 degrees latitude.
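For a back-of-the-envelope sense of why: the eastward boost a rocket gets from Earth's rotation is about 465 m/s at the equator and scales with the cosine of latitude, so a site at 58 degrees north gives up roughly half of it.

    import math
    # Rotational boost available at 58 degrees north, in m/s.
    equatorial_speed = 465.0
    print(equatorial_speed * math.cos(math.radians(58)))   # ~246 m/s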


Original Submission

posted by hubie on Friday May 13 2022, @06:26AM
from the just-follow-the-light dept.

Light-emitting electrochemical cells for recyclable lighting:

A low-cost and easy-to-manufacture lighting technology can be made with light-emitting electrochemical cells. Such cells are thin-film electronic and ionic devices that generate light after a low voltage is applied. Researchers at the Technical University of Munich (TUM) and the University of Turin have now used extensive data analysis to create first-class electrochemical cells from copper complexes that emit blue and white light.

Light-emitting electrochemical cells (LECs) are the simplest and least expensive thin-film lighting devices available to date. They consist of a single active layer. They are used, for example, as electroluminescent inks and stickers.

The effect of electroluminescence was first demonstrated in 1905. [...] Their prototypes are considered to be the first LEDs. "[...] the light-emitting electrochemical cells or LECs that we are looking at follow a different principle," explains Rubén D. Costa, Professor of Biogenic Functional Materials at TUM.

[...] "The development of inexpensive devices that emit white and blue light is highly desired and holds many benefits. However, the previous lack of blue emitters has hindered the transition from the laboratory to the real market. Accordingly, the creation of blue emitters is a general milestone in thin-film lighting. [...]

After extensive data evaluation of various known approaches, a new design has emerged for blue LECs which provide excellent performance as compared to devices with conventional emitters.

This is another example of design by ML: throwing a bunch of material properties into a machine learning algorithm and seeing what comes out.
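The generic screening recipe looks something like the sketch below: fit a model to descriptors of known materials, then rank unseen candidates by predicted performance. This is an illustration of the idea only, with made-up data and an arbitrary model choice, not the authors' actual multivariate pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Made-up training set: 200 known emitters, 8 material descriptors each
    # (e.g. ligand geometry or electronic-structure features).
    X_known = rng.normal(size=(200, 8))
    y_known = rng.normal(size=200)        # measured device performance

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_known, y_known)

    # Rank 50 unseen candidate complexes; flag the most promising five
    # for synthesis and testing.
    X_candidates = rng.normal(size=(50, 8))
    top5 = np.argsort(model.predict(X_candidates))[::-1][:5]
    print("Most promising candidate indices:", top5)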

Journal Reference:
Luca M. Cavinato et al., Multivariate Analysis Identifying [Cu(N^N)(P^P)]+ Design and Device Architecture Enables First-Class Blue and White Light-Emitting Electrochemical Cells [open], Advanced Materials (2022).
DOI: 10.1002/adma.202109228


Original Submission

posted by janrinok on Friday May 13 2022, @03:43AM
from the I-can't-drive-55 dept.

Laser bursts drive fastest-ever logic gates:

A long-standing quest for science and technology has been to develop electronics and information processing that operate near the fastest timescales allowed by the laws of nature. A promising way to achieve this goal involves using laser light to guide the motion of electrons in matter, and then using this control to develop electronic circuit elements—a concept known as lightwave electronics.

Remarkably, lasers currently allow us to generate bursts of electricity on femtosecond timescales—that is, in a millionth of a billionth of a second. Yet our ability to process information in these ultrafast timescales has remained elusive.

Now, researchers at the University of Rochester and the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) have made a decisive step in this direction by demonstrating a logic gate—the building block of computation and information processing—that operates at femtosecond timescales. The feat, reported in the journal Nature, was accomplished by harnessing and independently controlling, for the first time, the real and virtual charge carriers that compose these ultrafast bursts of electricity.

The researchers' advances have opened the door to information processing at the petahertz limit, where one quadrillion computational operations can be processed per second. That is almost a million times faster than today's computers operating with gigahertz clock rates, where 1 petahertz is 1 million gigahertz.

"This is a great example of how fundamental science can lead to new technologies," says Ignacio Franco, an associate professor of chemistry and physics at Rochester who [...] performed the theoretical studies that lead to this discovery.

In recent years, scientists have learned how to exploit laser pulses that last a few femtoseconds to generate ultrafast bursts of electrical currents. This is done, for example, by illuminating tiny graphene-based wires connecting two gold metals. The ultrashort laser pulse sets in motion, or "excites," the electrons in graphene and, importantly, sends them in a particular direction—thus generating a net electrical current.

Laser pulses can produce electricity far faster than any traditional method—and do so in the absence of applied voltage. Further, the direction and magnitude of the current can be controlled simply by varying the shape of the laser pulse (that is, by changing its phase).

Journal Reference:
Boolakee, Tobias, Heide, Christian, Garzón-Ramírez, Antonio, et al. Light-field control of real and virtual charge carriers, Nature (DOI: 10.1038/s41586-022-04565-9)
Ignacio Franco, Moshe Shapiro, Paul Brumer. Robust Ultrafast Currents in Molecular Wires through Stark Shifts, Physical Review Letters (DOI: 10.1103/PhysRevLett.99.126802)
Schiffrin, Agustin, Paasch-Colberg, Tim, Karpowicz, Nicholas, et al. Optical-field-induced current in dielectrics, Nature (DOI: 10.1038/nature11567)
Chen, Liping, Zhang, Yu, Chen, GuanHua, et al. Stark control of electrons along nanojunctions [open], Nature Communications (DOI: 10.1038/s41467-018-04393-4)


Original Submission

posted by janrinok on Friday May 13 2022, @01:02AM
from the no-wireless-and-less-space-than-a-nomad dept.

DailyMail is reporting that early Apple iPod models are selling for absurd amounts after Apple announced that it is discontinuing the iPod.

  • This week Apple announced that it is discontinuing the iPod
  • Apple launched its first iPod Classic back in 2001 with a $399 price tag
  • Fast-forward to today and old iPods could be worth a huge amount of money
  • An iPod Touch sold on eBay in March for more than $6,500

Amid the news of its discontinuation, listings for iPods on eBay have surged, with many sellers asking for huge sums of money for their retro devices.

Speaking to MailOnline, James Andrews, senior personal finance editor at money.co.uk, said: 'With iPods discontinued, you might be asking whether it's time to cash in on some of your old tech.

'The first thing to say is don't get excited by list prices on ebay. While a few models are selling for thousands, the vast majority are selling for far less.

'But that doesn't mean you couldn't pick up a reasonable amount. Do a search and check recent sold prices for models like your own to see what you're likely to get.

'In general, the best prices go to iPod Classic models, in great condition and with all the leads needed included. If you're lucky enough to have an unopened U2 Special Edition iPod from 2004 in the back of a cupboard, it could make you thousands.'


Original Submission

posted by hubie on Thursday May 12 2022, @10:17PM
from the better-very-late-than-never dept.

NVIDIA Transitioning To Official, Open-Source Linux GPU Kernel Driver

The day has finally come: NVIDIA is publishing their Linux GPU kernel modules as open-source! To much excitement and a sign of the times, the embargo has just expired on this super-exciting milestone that many of us have been hoping to see for many years. Over the past two decades NVIDIA has offered great Linux driver support with their proprietary driver stack, but with the success of AMD's open-source driver effort going on for more than a decade, many have been calling for NVIDIA to open up their drivers. Their user-space software is remaining closed-source but as of today they have formally opened up their Linux GPU kernel modules and will be maintaining it moving forward. Here's the scoop on this landmark open-source decision at NVIDIA.

Many have been wondering in recent years what sort of NVIDIA open-source play the company has been working on... Going back to the end of 2019 there have been signals of some sort of open-source driver effort, and various rumblings have continued since then. Last month I also pointed out a new open-source kernel driver appearing as part of the NVIDIA Tegra sources. Well, now the embargo has just expired and the lid can be lifted: NVIDIA is providing a fully open-source kernel driver solution for their graphics offerings. This isn't limited to just Tegra; it spans their desktop graphics and is already production-ready for data center GPU usage.


Original Submission
