
posted by janrinok on Sunday January 29, @10:05AM   Printer-friendly
from the depressing-thoughts? dept.

Bacteria evolve drug resistance more readily when antidepressants are around:

Jianhua Guo is a professor at the Australian Centre for Water and Environmental Biotechnology. His research focuses on removing contaminants from wastewater and the environmental dimensions of antimicrobial resistance. One of those dimensions is the overuse of antibiotics, which promotes resistance to these drugs.

Guo wondered if the same might hold true for other types of pharmaceuticals as well. His lab found that they definitely do. Specific antidepressants—SSRIs and SNRIs—promote resistance to different classes of antibiotics. This resistance is heritable over 33 bacterial generations, even once the antidepressant is removed.
[...]
Antibiotic resistance is an enormous threat to human health. Since antidepressants are prescribed and used in such massive quantities, the fact that they can induce antibiotic resistance should not be considered one of the more trivial of their side effects. It might even be taken into account in the design of new, more effective antidepressants.

PNAS, 2023. DOI: 10.1073/pnas.2208344120

Original Submission

posted by hubie on Sunday January 29, @05:21AM   Printer-friendly
from the your-loss-is-our-gain dept.

Due to a changing economic climate, tech companies like Google and Apple have been laying off employees to cut costs and prepare for a potential recession. Meanwhile, automakers like GM have been taking advantage of this influx of talented workers by hiring them to develop the new age of digital vehicles:

According to a report from Detroit Free Press, GM has loosened up its hiring freeze to exploit the new surplus of skilled workers. This makes sense given that The General had a goal to hire 8,000 employees last year to help it focus on the development of the technology needed for electric vehicles. In fact, GM was looking to hire a number of software developers and engineers for its new end-to-end software platform, Ultifi. As a whole, this influx of digital-focused employees will help the Detroit-based automaker further develop its EVs and self-driving technologies, like Super Cruise and Ultra Cruise.

"While this isn't a major growth year from a hiring standpoint, we're continuing to hire tech talent," said GM spokeswoman Maria Raynal. "This includes some of the talent in the market due to the tech downsizing, particularly in areas such as EV development, software development and defined vehicle."

The auto industry is not immune to the nationwide problems of too few applicants and employees who just stop showing up. Also, I'm wondering how motivated Silicon Valley tech workers will be to move to Detroit.

Previously: Google Employees Brace for a Cost-Cutting Drive as Anxiety Mounts


Original Submission

posted by hubie on Sunday January 29, @12:33AM   Printer-friendly
from the I-never-forget-a-face dept.

Washed out to sea, a giant beast and its armored skin were left in pristine condition:

Borealopelta markmitchelli found its way back into the sunlight in 2017, millions of years after it had died. This armored dinosaur is so magnificently preserved that we can see what it looked like in life. Almost the entire animal—the skin, the armor that coats its skin, the spikes along its side, most of its body and feet, even its face—survived fossilization. It is, according to Dr. Donald Henderson, curator of dinosaurs at the Royal Tyrrell Museum, a one-in-a-billion find.

Beyond its remarkable preservation, this dinosaur is an important key to understanding aspects of Early Cretaceous ecology, and it shows how this species may have lived within its environment. Since its remains were discovered, scientists have studied its anatomy, its armor, and even what it ate in its last days, uncovering new and unexpected insight into an animal that went extinct approximately 100 million years ago.

Borealopelta is a nodosaur, a type of four-legged ankylosaur with a straight tail rather than a tail club. Its discovery in 2011 in an ancient marine environment was a surprise, as the animal was terrestrial.

[...] One of the reasons this fossil was so well-preserved is because it was covered in a very thick, very hard concretion—a solid mass that sometimes forms around fossils. The concretion maintained the fossil in 3D, unlike the typically 2D-flattened fossils that occur after millions of years of pressure from overlying rock. Henderson said the concretion helped preserve the skin, preventing even bacteria from breaking it down.

It took the researchers 14 days to excavate the find and bring it back in separate enormous blocks to the museum. There, senior preparation technician Mark Mitchell was tasked with separating the fossil from the stone. This was no small endeavor, taking Mitchell seven hours per day over five and a half years. That task, he wrote in an email, took him a staggering 7,000 hours. The length of time it took and the quality of his work are why this dinosaur was named after him (he's the "Mitchell" in Borealopelta markmitchelli).

[...] Few people can claim to be the first to see the actual face of an extinct animal with no modern analogs. Mitchell described that experience as "absolutely amazing. This was the first dinosaur I've worked on with skin actually covering the skull, so being able to see what this animal looked like when it was alive was really cool."

But he was also "amazed at the skin impressions on the bottom (pad) of the foot. These matched the patterns seen in footprints left behind by other ankylosaurs preserved in Alberta [and British Columbia]."

[...] "The specimen is impressive in its own right, even without any of the research," Brown wrote. "The combination of preserved soft tissues and retained 3D shape results in the animal looking much like it did back in the Cretaceous... I think ongoing and future research, specifically looking at features such as the preserved skin and stomach contents will continue to add to our understanding of this animal."

A really cool picture of the dinosaur head from the article.


Original Submission

posted by janrinok on Saturday January 28, @07:52PM   Printer-friendly

https://inventlikeanowner.com/blog/the-story-behind-asins-amazon-standard-identification-numbers/

During Amazon's earliest days (1994-1995), CTO Shel Kaphan and Software Engineer Paul (then) Barton-Davis had to write all the software needed to power Amazon.com on the day it offered its website to the world to sell books (official launch date was July 16, 1995). The book catalog was online, and it needed an index (well, it needed several indexes, but that's another story); specifically, it needed a unique key for each item in the catalog. Because the databases they were using to create the catalog were indexed by 10-character-long ISBN (International Standard Book Number), Shel and Paul decided to use ISBN as their key.

Unfortunately — and Shel was well aware of this very quickly, but of course by that time, it was too late — ISBNs are terribly abused in the United States. The company that issues ISBNs, Bowker, charges a lot of money for ISBNs (from the perspective of small publishers, anyway), and publishers don't necessarily read all the rules. Small publishers were re-using ISBNs, and they also took their range of ISBNs and numbered through the entire range, rather than respecting the rule that the final character is actually a checksum, and you can only iterate through some of the digits. (It's actually worse than just not using the last digit, but I'm not getting into that here.)
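
For readers curious about the rule being broken here, below is a minimal sketch of ISBN-10 validation (an illustration of the standard mod-11 checksum, not Amazon's code): the tenth character is computed from the first nine, which is why a publisher cannot simply number through an entire range and still produce legal ISBNs.

```python
# Minimal ISBN-10 checksum sketch (illustrative only, not Amazon's code).
def isbn10_check_char(first9: str) -> str:
    """Return the check character implied by the first 9 digits of an ISBN-10."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)      # a check value of 10 is written as 'X'

def is_valid_isbn10(isbn: str) -> bool:
    """True only if the 10-character string satisfies the mod-11 rule."""
    return len(isbn) == 10 and isbn[9] == isbn10_check_char(isbn[:9])

print(is_valid_isbn10("0306406152"))   # True  (a well-known valid ISBN-10)
print(is_valid_isbn10("0306406153"))   # False (same prefix, wrong check digit)
```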

Shel very quickly removed all 'checksum software checks' (which would have made sure it was a legal ISBN), but Amazon was still stuck with a code base that stored the key value in 10 character strings, and which also stored them in other databases with similar constraints.

Read on to see how the problem was finally resolved - but it wasn't as simple as you might first have thought...


Original Submission

posted by janrinok on Saturday January 28, @03:05PM   Printer-friendly
from the juicy-sweet dept.

On an Alaskan island, wolves adapted to hunt an unexpected aquatic prey:

People love otters, wolves, and deer. Respectively, they're crafty, intelligent, and majestic. Put them all together on an island, though, and things get unpleasant pretty quickly. These are the findings of a new paper analyzing how a wolf population came to Pleasant Island in Alaska, learned to hunt otters, and, using this unexpected food source, thrived to the point of wiping out the native Sitka black-tailed deer population.

"To the best of our knowledge, the deer population is decimated. We haven't found evidence of deer recolonizing the islands," Gretchen Roffler, wildlife research biologist for the Alaska Department of Fish and Game and an author of the paper, told Ars.
[...]
The team studied the wolves on the island by testing DNA found in 689 wolf scats and performing stable isotope analysis on hair and muscle material, which they got from local hunters. The team tracked the wolves between 2015 and 2021.
[...]
From the samples, the researchers saw a diet that consisted primarily of deer shift to one that was made up primarily of sea otters. The research also found that the added and unexpected food source allowed the wolves to reproduce even after the deer population shrank. Ultimately, the wolves killed off the deer population on the island.

In all, though, the deer are the biggest losers in this equation. The wolves appear to still be on the island, and, Roffler said, none of them appear to have died from starvation—though the team intends to keep an eye on them. Considering the sea otters can swim to other parts of the coastal waters, they're also doing fine.

The big takeaway is that wolves can exploit a diversity of prey and learn to do so very quickly—like learning to hunt and kill sea otters in a matter of years. It also suggests that species restoration can bring some surprising sources of nutrients into an ecosystem. Finally, the work "really just confirms something that we already knew, which is that wolves are incredibly adaptable," she said.


Original Submission

posted by janrinok on Saturday January 28, @10:20AM   Printer-friendly

Federal Court Says Scraping Court Records Is Most Likely Protected By The First Amendment:

Automated web scraping can be problematic. Just look at Clearview, which has leveraged open access to public websites to create a facial recognition program it now sells to government agencies. But web scraping can also be quite useful for people who don't have the power or funding government agencies and their private contractors have access to.

The problem is the Computer Fraud and Abuse Act (CFAA). The act was written to give the government a way to go after malicious hackers. But instead of being used to prosecute malicious hackers, the government (and private companies allowed to file CFAA lawsuits) has gone after security researchers, academics, public interest groups, and anyone else who accesses systems in ways their creators haven't anticipated.

Fortunately, things have been changing in recent years. In May of last year, the DOJ changed its prosecution policies, stating that it would not go after researchers and others who engaged in "good faith" efforts to notify others of data breaches or otherwise provide useful services to internet users. Web scraping wasn't specifically addressed in this policy change, but the alteration suggested the DOJ was no longer willing to waste resources punishing people for being useful.

Web scraping is more than a CFAA issue. It's also a constitutional issue. None other than Clearview claimed it had a First Amendment right to gather pictures, data, and other info from websites with its automated scraping.

Clearview may have a point. A few courts have found scraping of publicly available data to be something protected by the First Amendment, rather than a violation of the CFAA.

In an important victory, a federal judge in South Carolina ruled that a case to lift the categorical ban on automated data collection of online court records – known as "scraping" – can move forward. The case claims the ban violates the First Amendment.

The decision came in NAACP v. Kohn, a lawsuit filed by the American Civil Liberties Union, ACLU of South Carolina, and the NAACP on behalf of the South Carolina State Conference of the NAACP. The lawsuit asserts that the Court Administration's blanket ban on scraping the Public Index – the state's repository of court filings – violates the First Amendment by restricting access to, and use of, public information, and prohibiting recording public information in ways that enable subsequent speech and advocacy.

The bottom line is this: automated access to government records is almost certainly protected by the First Amendment. What will be argued going forward is how much the government can restrict this access without violating the Constitution. There's not a lot on the record at the moment, but this early ruling seems to suggest this court will err on the side of unrestricted access, rather than give its blessing to unfettered fettering of the presumption of open access that guides citizens' interactions with public records.


Original Submission

posted by janrinok on Saturday January 28, @05:37AM   Printer-friendly

Stick with a traditional thermal paste from a reputable brand:

[Source] Editor's take: Hardcore enthusiasts are constantly on the hunt for new techniques and methods to shave a degree or two off their operating temperatures. Most efforts bear little to no fruit, and instead serve as cautionary tales for what not to do. This is one of those examples.

A ComputerBase community member recently put several thermal pads and pastes to the test using an old Radeon R7 240 graphics card. The GPU is normally passively cooled but a fan was added to expedite the testing process. Some unconventional alternatives were thrown in to spice things up, and that's more of what we are interested in here.

The card is clocked at 780MHz @ 1.15V by default and has multiple levels of throttling that automatically dial back the clock speed and voltage supplied. If things get too toasty, it shuts down entirely.

First up is ketchup: after a five-minute torture test in FurMark, the GPU registered a temperature of 71°C. Among the wacky alternatives, this was the best performer.

Standard toothpaste reached 90°C after a similar run, while diaper rash cream, a potato thin and a cheese slice all hit 105°C and were thermally throttled.

What's your best bet in pursuit of lower temperatures? For starters, stick with an actual thermal paste. Tried and true options like Arctic MX-4 and Corsair TM30 were among the top performers in the test with temperatures of 49°C and 54°C, respectively. How you install the paste can also impact temps, and some even claim additives like salt can help, but we wouldn't recommend trying that.

Can anyone in our community describe bizarre or unlikely solutions to problems that actually work? If you are not too ashamed to admit it - any failures that you might have tried too?


Original Submission

posted by janrinok on Saturday January 28, @12:52AM   Printer-friendly
from the that's-heavy dept.

Scientists Propose Turning Abandoned Mines Into Super-Efficient Gravity Batteries:

As the world comes to terms with the realities of climate change, the pressure to adopt more renewable energy is unavoidable. However, the sun isn't always shining, and the wind isn't always blowing. Worst of all, our ability to store that energy for the cold, still nights is still woefully inadequate. There may be a solution, and it's not a fancy new technology—it's a new take on something decades old. A team from the International Institute for Applied Systems Analysis (IIASA) has developed a plan to create a network of super-efficient gravity batteries that could store tens of terawatt-hours of power.

Humanity has been harnessing small amounts of energy from gravity for centuries—technically, the pendulum clock is a primitive gravity battery. In the 20th century, scientists developed pumped-storage hydroelectricity, which uses elevated water reservoirs to store gravitational potential energy. Several of these facilities exist around the world now, but most areas don't have enough water or the right terrain to make it work. The IIASA proposal for Underground Gravity Energy Storage (UGES) would use something we already have in spades: abandoned mine shafts.

A UGES stores energy when it's plentiful—for example, when the sun is shining on a solar power plant. A heavy container of sand or rocks would be suspended in the previously abandoned mine shaft with an electric motor raising it to the top. As long as the bucket remains at the top of the shaft, the energy isn't going anywhere. When power generation drops, the grid can harvest power from the UGES by letting the vessel drop back down. The UGES would use regenerative brakes on the cabling, similar to the way electric cars extend their range when you apply the brakes. Unlike batteries, all of which lose power via self-discharge over long periods, sand always has the same mass, and we're not going to run out of gravity.
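
To get a feel for the scale involved, here is a rough back-of-the-envelope sketch of the E = m * g * h arithmetic behind a gravity battery; the mass, shaft depth and round-trip efficiency below are illustrative assumptions, not figures from the IIASA paper.

```python
# Back-of-the-envelope gravity-battery sketch; all figures below are
# illustrative assumptions, not numbers from the IIASA paper.
G = 9.81                                  # m/s^2, gravitational acceleration

def stored_energy_mwh(mass_kg: float, drop_m: float, efficiency: float = 0.8) -> float:
    """Recoverable energy (MWh) from lowering a mass through drop_m metres."""
    joules = mass_kg * G * drop_m * efficiency
    return joules / 3.6e9                 # 1 MWh = 3.6e9 J

# Hypothetical example: a 1,000-tonne container of sand in a 700 m shaft.
print(round(stored_energy_mwh(1_000_000, 700), 2), "MWh")   # -> 1.53 MWh
```

A single shaft therefore stores on the order of megawatt-hours, which is why the proposal relies on a large network of abandoned mines to reach terawatt-hour totals.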


Original Submission

posted by janrinok on Friday January 27, @10:05PM   Printer-friendly
from the death-greatly-exaggerated dept.

Expert says the focus on quantum attacks may distract us from more immediate threats:

Three weeks ago, panic swept across some corners of the security world after researchers discovered a breakthrough that, at long last, put the cracking of the widely used RSA encryption scheme within reach by using quantum computing.

Scientists and cryptographers have known for two decades that a factorization method known as Shor's algorithm makes it theoretically possible for a quantum computer with sufficient resources to break RSA. That's because the secret prime numbers that underpin the security of an RSA key are easy to calculate using Shor's algorithm. Computing the same primes using classical computing takes billions of years.
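
To see why factoring breaks RSA, here is a toy sketch with textbook-sized numbers (nothing to do with the Chinese team's 372-qubit method, and hopelessly small compared with a real 2,048-bit key): anyone who can factor the public modulus n can recompute the private exponent and decrypt.

```python
# Toy RSA sketch with tiny textbook primes; real keys use 2,048-bit moduli,
# where the trial-division "attack" below is hopeless without Shor's algorithm.
# Needs Python 3.8+ for pow(e, -1, m) and math.isqrt.
from math import isqrt

p, q = 61, 53                          # the secret primes
n, e = p * q, 17                       # public key: n = 3233, exponent e
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent, known only to the key owner

msg = 1234
cipher = pow(msg, e, n)                # anyone can encrypt with (n, e)

# An attacker who can factor n recovers the private exponent from scratch:
f = next(k for k in range(2, isqrt(n) + 1) if n % k == 0)
d_recovered = pow(e, -1, (f - 1) * (n // f - 1))
assert pow(cipher, d_recovered, n) == msg   # decrypts without ever seeing d
print("factored n =", n, "and recovered the message:", pow(cipher, d_recovered, n))
```
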
[...]
The paper, published three weeks ago by a team of researchers in China, reported finding a factorization method that could break a 2,048-bit RSA key using a quantum system with just 372 qubits when it operated using thousands of operation steps. The finding, if true, would have meant that the fall of RSA encryption to quantum computing could come much sooner than most people believed.

At the Enigma 2023 Conference in Santa Clara, California, on Tuesday, computer scientist and security and privacy expert Simson Garfinkel assured researchers that the demise of RSA was greatly exaggerated. For the time being, he said, quantum computing has few, if any, practical applications.

"In the near term, quantum computers are good for one thing, and that is getting papers published in prestigious journals," Garfinkel, co-author with Chris Hoofnagle of the 2021 book Law and Policy for the Quantum Age, told the audience. "The second thing they are reasonably good at, but we don't know for how much longer, is they're reasonably good at getting funding."

Previously: Breaking RSA With a Quantum Computer


Original Submission

posted by janrinok on Friday January 27, @07:22PM   Printer-friendly

NASA Ultrasound Technique Eliminates Kidney Stones Painlessly:

About one in 11 Americans will experience the discomfort of a kidney stone in their lifetime. While some might think of these pesky mineral clumps as earthly inconveniences, they're a problem up in space, too, prompting NASA to devise a treatment solution appropriate for those suffering among the stars. Their technique could be the secret to eliminating kidney stones quickly and painlessly.

Kidney stones are hard, often jagged mineral deposits that manifest in the ureter, which connects the kidneys with the bladder. While smaller kidney stones (up to 3 mm in diameter) can sometimes pass through the body on their own, larger stones (up to 20 mm) must be broken up within the body or removed surgically. Breaking them up has historically required shock wave lithotripsy (SWL), a technique in which hundreds of shock waves are directed toward the stone from outside of the body.

Though effective, SWL has its caveats. Not only is it effective only half of the time, but it's painful, which means patients must be anesthetized. This makes the procedure more expensive and time-consuming. Anesthesia also isn't ideal for people traveling through space, which is what propelled NASA to devise and test an alternative method. The agency shared its new technique in a recent issue of The Journal of Urology.

Abstract:

Purpose:
Our goal was to test transcutaneous focused ultrasound in the form of ultrasonic propulsion and burst wave lithotripsy to reposition ureteral stones and facilitate passage in awake subjects.
[...]
Conclusions:
This study supports the efficacy and safety of using ultrasonic propulsion and burst wave lithotripsy in awake subjects to reposition and break ureteral stones to relieve pain and facilitate passage.

Reference: M. Kennedy Hall, Jeff Thiel, Barbrina Dunmire et al., First Series Using Ultrasonic Propulsion and Burst Wave Lithotripsy to Treat Ureteral Stones, J Urology, 2022. DOI: https://doi.org/10.1097/JU.0000000000002864


Original Submission

posted by janrinok on Friday January 27, @04:36PM   Printer-friendly

https://www.extremetech.com/extreme/342413-us-marines-defeat-darpa-robot-by-hiding-under-a-cardboard-box

The Pentagon's Defense Advanced Research Projects Agency (DARPA) has invested some of its resources into a robot that's been trained—likely among other things—to identify humans. There's just one little problem: The robot is cartoonishly easy to confuse.

Army veteran, former Pentagon policy analyst, and author Paul Scharre is gearing up to release a new book called Four Battlegrounds: Power in the Age of Artificial Intelligence. Despite the fact that the book isn't scheduled to hit shelves until Feb. 28, Twitter users are already sharing excerpts via social media. This includes The Economist's defense editor, Shashank Joshi, who shared a particularly laughable passage on Twitter.

In the excerpt, Scharre describes a week during which DARPA calibrated its robot's human recognition algorithm alongside a group of US Marines. The Marines and a team of DARPA engineers spent six days walking around the robot, training it to identify the moving human form. On the seventh day, the engineers placed the robot at the center of a traffic circle and devised a little game: The Marines had to approach the robot from a distance and touch the robot without being detected.

DARPA was quickly humbled. Scharre writes that all eight Marines were able to defeat the robot using techniques that could have come straight out of a Looney Tunes episode. Two of the Marines somersaulted toward the center of the traffic circle, thus using a form of movement the robot hadn't been trained to identify. Another pair shuffled toward the robot under a cardboard box. One Marine even stripped a nearby fir tree and was able to reach the robot by walking "like a fir tree" (the meaning of which Twitter users are still working to figure out).


Original Submission

posted by janrinok on Friday January 27, @01:51PM   Printer-friendly
from the windows-tco dept.

Developer Robert Graham has written a retrospective on how his proprietary software was able to detect the Microsoft Sapphire Worm, also known as SQL Slammer, as it hit, thanks to his design choices. These choices were, first, a poll-mode driver instead of an interrupt-driven one and, second, protocol analysis to recognize the behavior signature rather than pattern matching.

An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2-million packets-per-second) was impossible, because everyone knew that computers couldn't handle traffic that fast. I couldn't combat that, even by explaining with very small words "but we disable interrupts".

Now this is the norm. All network drivers are written with polling in mind. Specialized drivers like PF_RING and DPDK do even better. Network appliances are now written using these things. Now you'd expect something like Snort to keep up and not get overloaded with interrupts. What makes me bitter is that back then, this was inexplicable magic.

I wrote an article in PoC||GTFO 0x15 that shows how my portscanner masscan uses this driver, if you want more info.

When it hit in January 2003, the Microsoft Sapphire Worm, also known as SQL Slammer, began spreading quickly across the Internet by doubling in size every 8.5 seconds, infecting more than 90% of vulnerable, networked Windows systems within 10 minutes.
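
As a rough illustration of the poll-versus-interrupt distinction Graham describes, the sketch below busy-polls a non-blocking raw socket instead of sleeping until the kernel wakes it. It is a conceptual toy (Linux only, needs root), not the author's proprietary code, and nothing like DPDK or PF_RING, which map the NIC directly into user space.

```python
# Conceptual busy-poll capture loop (Linux only, run as root); a toy
# illustration of polling instead of blocking on interrupts, not the
# author's code and far simpler than DPDK/PF_RING, which bypass the kernel.
import socket

ETH_P_ALL = 0x0003                                    # capture all ethernet protocols
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.setblocking(False)                                  # recv() never puts us to sleep

seen = 0
while seen < 1000:                                    # spin: poll, don't wait
    try:
        pkt = s.recv(65535)                           # returns immediately if a packet is queued
    except BlockingIOError:
        continue                                      # nothing queued yet; poll again
    seen += 1                                         # real code would parse/analyze pkt here

print(f"captured {seen} packets by polling")
```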


Original Submission

posted by hubie on Friday January 27, @11:07AM   Printer-friendly
from the try-rebooting-that-fixes-everything dept.

A communications delay timed out the instrument's flight software, and some planned observations will have to be rescheduled:

NASA says the Webb Space Telescope's Near Infrared Imager and Slitless Spectrograph is currently unavailable for science operations following a software glitch earlier this month.

In a release published yesterday, the agency stated that the issue started on January 15, when a communications delay within the instrument caused its flight software to time out. Flight software is a crucial aspect of any instrument operating in space, as it manages a whole suite of operations on a given spacecraft, including its orientation, communications, data collection, and thermal control.

[...] There have also been some software hiccups. In August, the telescope's Mid-Infrared Instrument (or MIRI) had a software glitch that paused its operations through November. And in December, there was an issue with the telescope's attitude control, which manages where the telescope is pointing. The glitch put the telescope into safe mode multiple times last month.

[...] Webb has done some tremendous work so far and will continue to illuminate the most ancient and murky regions of the cosmos. You can check out some of what's on the docket, along with other astronomy plans for the year, here.


Original Submission

posted by hubie on Friday January 27, @08:19AM   Printer-friendly

The project, in concert with US government agency DARPA, aims to develop a pioneering propulsion system for space travel as soon as 2027:

The project is intended to develop a pioneering propulsion system for space travel far different from the chemical systems prevalent since the modern era of rocketry dawned almost a century ago.

"Using a nuclear thermal rocket allows for faster transit time, reducing risk for astronauts," Nasa said in a press release.

[...] Using current technology, Nasa says, the 300-million-mile journey to Mars would take about seven months. Engineers do not yet know how much time could be shaved off using nuclear technology, but Bill Nelson, the Nasa administrator, said it would allow spacecraft, and humans, to travel in deep space at record speed.

[...] Using low thrust efficiently, nuclear electric propulsion systems accelerate spacecraft for extended periods and can propel a Mars mission for a fraction of the propellant of high-thrust systems.

Also at CNN and Engadget. Link to Nasa press release.


Original Submission

posted by hubie on Friday January 27, @05:32AM   Printer-friendly
from the wait-did-you-say-"insert"-or-"drop"? dept.

They were in the midst of synchronizing databases, the agency revealed:

The contractors working on the Federal Aviation Administration's NOTAM system apparently deleted files by accident, leading to the delays and cancellations of thousands of US flights. If you'll recall, the FAA paused all domestic departures in the US on the morning of January 11th, because its NOTAM or Notice to Air Missions system had failed. NOTAMs typically contain important information for pilots, including warnings for potential hazards along a flight's route, flight restrictions and runway closures. 

[...] The agency later reported that the system failed after "personnel who failed to follow procedures" damaged certain files. Now, it has shared more details as part of the preliminary findings of an ongoing investigation. Apparently, its contractors were synchronizing a main and a back-up database when they "unintentionally deleted files" that turned out to be necessary to keep the alert system running. It also reiterated what it said in the past that it has "so far found no evidence of a cyberattack or malicious intent."


Original Submission