



posted by hubie on Thursday August 15, @08:39PM   Printer-friendly
from the duck-and-cover dept.

Arthur T Knackerbracket has processed the following story:

The Sun is going through an intense time right now. Our host star is experiencing increased activity, with a series of solar eruptions aimed towards Earth that resulted in a rare geomagnetic storm.

The National Oceanic and Atmospheric Administration’s (NOAA) Space Weather Prediction Center issued a severe geomagnetic storm alert on Monday following a series of coronal mass ejections (CMEs) that emerged last week. The storm reached level G4, meaning it’s severe. The geomagnetic storm triggered bright, colorful auroras last night in different parts of the world, with a chance for more of the celestial lights to take over the skies later tonight.

Space weather forecasters at NOAA had been monitoring at least five CMEs that erupted from the Sun since last week in anticipation that some may be headed towards Earth. “Some seem to have missed Earth, some clipped Earth, and then eventually one of those we were anticipating was much more of a good punch,” Shawn Dahl, service coordinator for the Space Weather Prediction Center, told Gizmodo.

[...] The Sun is approaching its solar maximum, a period of increased activity during its 11-year cycle that’s characterized by intense solar flares, coronal mass ejections, and massive sunspots. Earlier in May, a G5, or extreme, geomagnetic storm hit Earth as a result of large expulsions of plasma from the Sun’s corona (also known as coronal mass ejections). The G5 storm was the first to hit Earth in more than 20 years, and had some effects on Earth’s power grid.

[...] This solar cycle is exceptionally active, with the Sun developing the largest number of sunspots since 2002. CMEs typically erupt from regions on the Sun with increased amounts of magnetic flux associated with sunspots, and so far the Sun has sprouted 299 sunspots during its current solar cycle.

It’s obvious that the Sun isn’t stopping anytime soon. “Bottom line is, we’re going to be under the influence of increased activity all of this year, all of next year, and even in 2026 where we’ll continue to have higher chances this type of activity to continue to happen from time to time over the remainder of this solar cycle maximum that we’re experiencing,” Dahl said.


Original Submission

posted by janrinok on Thursday August 15, @03:54PM   Printer-friendly
from the observing-the-wild-autonomous-vehicle-in-its-natural-habitat dept.

"We just would like for them to stop honking their horn at four in the morning repeatedly," one neighbor said:

In San Francisco's South of Market neighborhood, neighbors say repeated honking from Waymo driverless cars is disturbing their sleep.

Multiple residents in high-rise buildings off of 2nd Street near Harrison Street told NBC Bay Area News they have been hearing Waymo vehicles honking in a nearby parking lot for the past several weeks. They said that the lot began to be occupied by Waymo vehicles just a few weeks ago. The Waymos appear to go to the lot to rest in between rides.

Christopher Cherry who lives in the building next door said he was "really excited" to have Waymo in the neighborhood, thinking it would bring more security and quiet to the area.

"We started out with a couple of honks here and there, and then as more and more cars started to arrive, the situation got worse, " Cherry said.

Cherry said the honking happens daily at different levels, with the most intense honking occurring at around 4:00 a.m. and at evening rush hour times.

[...] Videos from residents of these incidents show Waymo cars filing into the parking lot, and then attempting to back into spots, which appears to trigger honking from the other Waymo cars.

[...] Neighbors note there is no one in the cars whom they can flag down and ask to stop the honking.

"I think the most frustrating thing about this is that there is just nobody to talk to, and even at the corporate level, I am finding it difficult, not impossible," White said.


Original Submission

posted by janrinok on Thursday August 15, @10:08AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

This year's Pwnie Awards Ceremony was held on Saturday at the DEF CON hacker convention in Las Vegas. Now in its 17th year, the Pwnie Awards recognises some of the most outstanding achievements in technology security over the past year — as well as the greatest failures. 

As such, it was obvious that CrowdStrike would take home an award this year. Over 8.5 million Windows computers went down in July after the cybersecurity company pushed out an update to its software, bringing numerous companies and services across the world to a sudden halt. Businesses impacted included banks, airlines, mail carriers, supermarkets, and telecommunications companies.

The CrowdStrike outage was a massive global event, which has now been recognised with a massive Pwnie Award trophy. The two-tiered trophy awarded to CrowdStrike dwarfed the smaller pony-shaped ones for other categories, as befitting the eclipsing size of its blunder.

"Definitely not the award to be proud of receiving," Sentonas said in his acceptance speech, taking the stage to laughter and applause. "I think the team was surprised when I said straight away that I'd be coming to get it. We got this horribly wrong, we've said that a number of different times. It's super important to own it when you do things well, it's super important to own it when you do things horribly wrong, which we did in this case."

Accepting the large golden trophy, Sentonas stated that he intended to display it at CrowdStrike's headquarters in Austin, Texas. His hope is that it will serve as a reminder to CrowdStrike's staff to prevent such mistakes from happening in the future.

"The reason why I wanted the trophy is I'm heading back to headquarters," Sentosas continued. "I'm gonna take the trophy with me, it's gonna sit pride of place, because I want every CrowdStriker who comes to work to see it. Because our goal is to protect people, and we got this wrong, and I want to make sure that everybody understands these things can't happen, and that's what this community's about. So from that perspective I will say thank you."

Sentonas' in-person acceptance of CrowdStrike's Pwnie Award was widely well-received, with social media users praising him for accepting accountability with humility, class, and good humour.

Though CrowdStrike's Most Epic Fail trophy was only awarded this weekend, its win had already been announced alongside the Pwnie Award nominations in late July. This was within mere days of the infamous global outage that took down numerous companies and services worldwide. 

In a post to X at the time, the Pwnie Awards stated that it was granting the early award due to "extenuating circumstances." Said circumstance was likely the fact that CrowdStrike's fail was so epic that no one was likely to match it unless they deliberately tried. Even then, it would still be a difficult task.

While all other categories at the 2024 Pwnie Awards had three finalists, CrowdStrike had no competition for the Epic Fail Award. Instead, nominee details for the category simply read, "Lol. Lmao even."


Original Submission

posted by janrinok on Thursday August 15, @05:22AM   Printer-friendly

2.9 billion hit in one of the largest data breaches ever:

Regardless of how careful you are online, your personal data can still end up in the hands of hackers—and a new data breach that exposed the data of 2.9 billion people is the perfect example of this.

As reported by Bloomberg, news of this massive new data breach was revealed as part of a class action lawsuit that was filed at the beginning of this month. A complaint submitted to the US District Court for the Southern District of Florida claims the exposed personal data belongs to a public records data provider named National Public Data, which specializes in background checks and fraud prevention.

The personal data of 2.9 billion people, which includes full names, former and complete addresses going back 30 years, Social Security Numbers, and more, was stolen from National Public Data by a cybercriminal group that goes by the name USDoD. The complaint goes on to explain that the hackers then tried to sell this huge collection of personal data on the dark web to the tune of $3.5 million. It's worth noting that due to the sheer number of people affected, this data likely comes from both the U.S. and other countries around the world.

Here's everything we know so far about this massive data breach along with some steps you can take to stay safe if your personal information was exposed online.

So how does a firm like National Public Data obtain the personal data of almost 3 billion people? The answer is scraping, a technique companies use to collect data from websites and other online sources.

What makes the way National Public Data did this more concerning is that the firm scraped personally identifiable information (PII) of billions of people from non-public sources. As a result, many of the people who are now involved in the class action lawsuit did not provide their data to the company willingly.

According to the complaint, one of the plaintiffs who resides in California first found out about the breach because he was using one of the best identity theft protection services which notified him that his data was exposed and leaked on the dark web.

As part of the class action lawsuit, this plaintiff is asking the court to have National Public Data securely dispose of all the personal information it acquired through scraping. However, he also wants the firm to compensate him and the other victims financially while implementing stricter security measures going forward.

With full names, addresses and Social Security Numbers in hand, there's a lot that hackers can do with this information, especially when it was made available for sale on the dark web.


Original Submission

posted by hubie on Thursday August 15, @03:00AM   Printer-friendly
from the patch-quick-and-check-your-logs dept.

The maintainers of the FreeBSD Project have released security updates to address a high-severity flaw in OpenSSH that attackers could potentially exploit to execute arbitrary code remotely with elevated privileges:

The vulnerability, tracked as CVE-2024-7589, carries a CVSS score of 7.4 out of a maximum of 10.0, indicating high severity.

"A signal handler in sshd(8) may call a logging function that is not async-signal-safe," according to an advisory released last week.

"The signal handler is invoked when a client does not authenticate within the LoginGraceTime seconds (120 by default). This signal handler executes in the context of the sshd(8)'s privileged code, which is not sandboxed and runs with full root privileges."

[...] "The faulty code in this case is from the integration of blacklistd in OpenSSH in FreeBSD," the project maintainers said.

"As a result of calling functions that are not async-signal-safe in the privileged sshd(8) context, a race condition exists that a determined attacker may be able to exploit to allow an unauthenticated remote code execution as root."

Users of FreeBSD are strongly advised to update to a supported version and restart sshd to mitigate potential threats.

In cases where sshd(8) cannot be updated, the race condition issue can be resolved by setting LoginGraceTime to 0 in /etc/ssh/sshd_config and restarting sshd(8). While this change makes the daemon vulnerable to a denial-of-service, it safeguards it against remote code execution.
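For reference, the advisory's workaround amounts to a single configuration line (shown here as a sketch of the relevant sshd_config fragment; restart sshd(8) afterwards, e.g. with `service sshd restart`):

```
# /etc/ssh/sshd_config -- advisory workaround when updating is not possible.
# With no login grace period, the grace-timeout alarm never fires, so the
# non-async-signal-safe logging code can never run in the signal handler.
# Trade-off: unauthenticated clients can hold connections open (DoS risk).
LoginGraceTime 0
```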


Original Submission

posted by janrinok on Thursday August 15, @12:40AM   Printer-friendly
from the aka-avoiding-the-enshittification-of-software dept.

Here are two related essays on software freedom in light of the current environment where platform decay has become the norm.

Lead developer of Linux-Libre, FSFLA board member, and previous FSF board member, Alexandre Oliva wrote a piece back in June about platform decay (also known colloquially as enshittification) and how to fight it through software freedom. It's from his May 5th, 2024 LibrePlanet presentation with the same title (video and slides). This weekend, developer Daniel Cantarín wrote a follow-up addressing the nature of software freedom and the increasing communication, philosophical, and political barriers to actually achieving software freedom.

The two essays are essentially in agreement but raise different points and priorities.

Alexandre Oliva's essay includes the following:

[...] Software (static) enshittification

Back in the time when most users could choose which version of a program they wanted to run, upgrading software was not something that happened automagically. Installing a program involved getting a copy of its installable media, and if you wanted to install a newer version, you had to get a copy of the installable media for the newer version.

You could install them side by side, and if you found that the newer version was lacking some feature important to you, or it didn't serve you well, you could roll back to the older version.

This created a scenario in which the old and the new versions competed for users, so in order for the newer version to gain adoption, it had to be more attractive to users than the older one. It had to offer more interesting features, and if it dropped features or engaged in enshittification, it would need even more interesting features to make up.

This limited how much enshittification could be imposed on users in newer versions. It was much harder to pull features out from under users in that static arrangement.

Software (dynamic) enshittification

But now most users are mistreated with imposed updates, and since they are required to be online all the time, they are vulnerable all the time, and they can't go back to an earlier version that served them well. The following are the most enshittifiable arrangements to offer computing facilities to users. Most enshittifiable so far, Homer Simpson would presumably point out.

Apps that run on remotely-controlled telephones (TRApps) and that are typically automatically updated from exclusive app stores, and their counterparts that run on increasingly enshittified computers (CRApps) are cases in which the programs are installed on your own computer, but are controlled by someone else. They've come to be called apps, so that you'll think of them as appliances rather than as something you can and should be able to tinker with.

Web sites that, every time you visit them, install and demand to run Javascrapped programs on your computer, are a case in which, even if the program is technically Free Software, in this setting, someone else controls which version you get to run, and what that version does.

And then, there are the situations in which, instead of getting a copy of a program, you're offered a service that will do your computing for you, under somebody else's control, substituting software that could have been respectful of your freedom. [...]

And Daniel Cantarín's follow-up essay includes the following:

[...]     Mr. Oliva tells us that, between enshittified software and free software, the choice is not hard. It’s the very article’s title, and it alone should scandalize anyone with minimal knowledge of the matter, given its implicit detachment from objective reality and its closeness to hypocrisy, all in a very light tone that even tries to be somewhat funny. And this discourse wasn’t even in an outreach context, before an audience unfamiliar with free software: it was for LibrePlanet, where most people use free software and know its history and details. Considering that Mr. Oliva is a public and important figure inside the community, a referent, and also considering that I can very rarely participate in this kind of community event -because I have very little free time-, I immediately asked myself: is this the kind of stuff the community is talking about? Are these the discursive lines our references tell us to follow?

    No, Mr. Oliva, I’m afraid you’re deeply mistaken: choosing free software is hard. VERY hard. TOO hard, I dare say. And I have my serious suspicions that our leaders/references and the course of our communities have a lot to do with that. But let’s take a look at this argument by contrasting my context with your article.

The tip of the iceberg

    Mr. Oliva tells us about different types of software enshittification in different contexts, both historical and operational. Stuff we all know and hate, like forced updates, software stores, remote policing, inability to go back to previous versions, and so on. Please go read the full article, as in this regard it is actually fruitful if you don’t know what we’re talking about here. I believe all of Mr. Oliva’s remarks are true: enshittification is a real phenomenon, he’s not the first one to mention it (as he adequately clarifies), and it’s an actual and important issue that we all need to pay attention to. That’s all fine, and the problem with his article of course is not there. The problem is how he talks about it, especially how he forces his interpretations as if they were some kind of “common sense”. So it’s important to take a look at his arguments.

    Let’s begin by this quote: [...]

Rights which we had in the analog world are getting increasingly difficult to carry over into the digital realm. Whether we can do so will depend on software and on the protocols and file formats the software relies upon.

Previously:
(2024) Enshittification of Google and the Men Who Killed Search
(2024) Bruce Perens Solicits Comments on First Draft of a Post-Open License
(2024) Cory Doctorow Has a Plan to Wipe Away the Enshittification of Tech
(2023) Enshittification Everywhere. Your Car, Your Phone, Your Tractor, Your Computer...


Original Submission

posted by janrinok on Wednesday August 14, @07:54PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

As soon as this week, NASA officials will make perhaps the agency's most consequential safety decision in human spaceflight in 21 years.

NASA astronauts Butch Wilmore and Suni Williams are nearly 10 weeks into a test flight that was originally set to last a little more than one week. The two retired US Navy test pilots were the first people to fly into orbit on Boeing's Starliner spacecraft when it launched on June 5. Now, NASA officials aren't sure Starliner is safe enough to bring the astronauts home.

Three of the managers at the center of the pending decision, Ken Bowersox and Steve Stich from NASA and Boeing's LeRoy Cain, either had key roles in the ill-fated final flight of Space Shuttle Columbia in 2003 or felt the consequences of the accident.

At that time, officials misjudged the risk. Seven astronauts died, and the Space Shuttle Columbia was destroyed as it reentered the atmosphere over Texas. Bowersox, Stich, and Cain weren't the people making the call on the health of Columbia's heat shield in 2003, but they had front-row seats to the consequences.

Bowersox was an astronaut on the International Space Station when NASA lost Columbia. He and his crewmates were waiting to hitch a ride home on the next Space Shuttle mission, which was delayed two-and-a-half years in the wake of the Columbia accident. Instead, Bowersox's crew came back to Earth later that year on a Russian Soyuz capsule. After retiring from the astronaut corps, Bowersox worked at SpaceX and is now the head of NASA's spaceflight operations directorate.

Stich and Cain were NASA flight directors in 2003, and they remain well-respected in human spaceflight circles. Stich is now the manager of NASA's commercial crew program, and Cain is now a Boeing employee and chair of the company's Starliner mission director team. For the ongoing Starliner mission, Bowersox, Stich, and Cain are in the decision-making chain.

All three joined NASA in the late 1980s, soon after the Challenger accident. They have seen NASA attempt to reshape its safety culture after both of NASA's fatal Space Shuttle tragedies. After Challenger, NASA's astronaut office had a more central role in safety decisions, and the agency made efforts to listen to dissent from engineers. Still, human flaws are inescapable, and NASA's culture was unable to alleviate them during Columbia's last flight in 2003.

[...] "I have wondered if some in management roles today that were here when we lost Challenger and Columbia remember that in both of those tragedies, there were those that were not comfortable proceeding," Milt Heflin, a retired NASA flight director who spent 47 years at the agency, wrote in an email to Ars. "Today, those memories are still around."

"I suspect Stich and Cain are paying attention to the right stuff," Heflin wrote.

The question facing NASA's leadership today? Should the two astronauts return to Earth from the International Space Station in Boeing's Starliner spacecraft, with its history of thruster failures and helium leaks, or should they come home on a SpaceX Dragon capsule?

Under normal conditions, the first option is the choice everyone at NASA would like to make. It would be least disruptive to operations at the space station and would potentially maintain a clearer future for Boeing's Starliner program, which NASA would like to become operational for regular crew rotation flights to the station.

But some people at NASA aren't convinced this is the right call. Engineers still don't fully understand why five of the Starliner spacecraft's thrusters overheated and lost power as the capsule approached the space station for docking in June. Four of these five control jets are now back in action with near-normal performance, but managers would like to be sure the same thrusters—and maybe more—won't fail again as Starliner departs the station and heads for reentry.


Original Submission

posted by janrinok on Wednesday August 14, @03:12PM   Printer-friendly
from the there-is-no-pill dept.

https://arstechnica.com/science/2024/08/mdma-for-ptsd-three-studies-retracted-on-heels-of-fda-rejection/

A scientific journal has retracted three studies underpinning the clinical development of MDMA—aka ecstasy—as a psychedelic treatment for post-traumatic stress disorder. The move came just a day after news broke that the Food and Drug Administration rejected the treatment, despite positive results reported from two Phase III clinical trials.

On Friday, the company developing the therapy, Lykos Therapeutics, announced that it had received a rejection letter from the FDA. Lykos said the letter echoed the numerous concerns raised previously by the agency and its expert advisory committee, which, in June, voted overwhelmingly against approving the therapy. The FDA and its advisers identified flaws in the design of the clinical trials, missing data, and a variety of biases in people involved with the trials, including an alleged cult-like support of psychedelics. Lykos is a commercial spinoff of the psychedelic advocacy nonprofit Multidisciplinary Association for Psychedelic Studies (MAPS).

FDA advisers also noted the public allegations of a sexual assault of a trial participant during a Phase II trial by an unlicensed therapist providing the MDMA-assisted psychotherapy.


Original Submission

posted by hubie on Wednesday August 14, @10:25AM   Printer-friendly
from the what-do-you-mean-"entering"? dept.

Algorithmic collusion appears to be spreading to more and more industries. And existing laws may not be equipped to stop it:

If you rent your home, there's a good chance your landlord uses RealPage to set your monthly payment. The company describes itself as merely helping landlords set the most profitable price. But a series of lawsuits says it's something else: an AI-enabled price-fixing conspiracy.

The classic image of price-fixing involves the executives of rival companies gathering behind closed doors and secretly agreeing to charge the same inflated price for whatever they're selling. This type of collusion is one of the gravest sins you can commit against a free-market economy; the late Justice Antonin Scalia once called price-fixing the "supreme evil" of antitrust law. Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine.

But, as the RealPage example suggests, technology may offer a workaround. Instead of getting together with your rivals and agreeing not to compete on price, you can all independently rely on a third party to set your prices for you. Property owners feed RealPage's "property management software" their data, including unit prices and vacancy rates, and the algorithm—which also knows what competitors are charging—spits out a rent recommendation. If enough landlords use it, the result could look the same as a traditional price-fixing cartel: lockstep price increases instead of price competition, no secret handshake or clandestine meeting needed.
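The lockstep dynamic described above is easy to sketch. Below is a toy simulation (invented for illustration, not RealPage's actual software; the `recommend` rule and its parameters are hypothetical): each landlord independently follows one shared recommender, and prices converge upward together.

```python
# Hypothetical shared pricing rule, invented for illustration: it nudges
# every participating landlord toward the current market maximum.
def recommend(prices, markup=1.02, max_step=1.05):
    """Return one rent recommendation per landlord from the pooled data."""
    target = max(prices) * markup                       # aim above today's max
    return [min(target, p * max_step) for p in prices]  # capped per-round rise

rents = [1000.0, 1100.0, 1200.0]   # three "competing" landlords
for _ in range(12):                # each round, everyone follows the tool
    rents = recommend(rents)

# After a dozen rounds, every landlord charges the same price, above the
# old market maximum: lockstep increases with no explicit agreement.
```

No landlord ever communicates with another; each merely follows the recommendations, yet the spread between their prices collapses to zero while the common price ratchets up 2 percent per round.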

Without price competition, businesses lose their incentive to innovate and lower costs, and consumers get stuck with high prices and no alternatives. Algorithmic price-fixing appears to be spreading to more and more industries. And existing laws may not be equipped to stop it.

In 2017, then–Federal Trade Commission Chair Maureen Ohlhausen gave a speech to antitrust lawyers warning about the rise of algorithmic collusion. "Is it okay for a guy named Bob to collect confidential price strategy information from all the participants in a market and then tell everybody how they should price?" she asked. "If it isn't okay for a guy named Bob to do it, then it probably isn't okay for an algorithm to do it either."

[...] According to the lawsuits, RealPage's clients act more like collaborators than competitors. Landlords hand over highly confidential information to RealPage, and many of them recruit their rivals to use the service. "Those kinds of behaviors raise a big red flag," Maurice Stucke, a law professor at the University of Tennessee and a former antitrust attorney at the Department of Justice, told me. When companies are operating in a highly competitive market, he said, they typically go to great lengths to protect any sensitive information that could give their rivals an edge.

The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions—something that would make no sense if the company were merely being paid to offer individualized advice. In an interview with ProPublica, Jeffrey Roper, who helped develop one of RealPage's main software tools, acknowledged that one of the greatest threats to a landlord's profits is when nearby properties set prices too low. "If you have idiots undervaluing, it costs the whole system," he said. RealPage thus makes it hard for customers to override its recommendations, according to the lawsuits, allegedly even requiring a written justification and explicit approval from RealPage staff. Former employees have said that failure to comply with the company's recommendations could result in clients being kicked off the service. "This, to me, is the biggest giveaway," Lee Hepner, an antitrust lawyer at the American Economic Liberties Project, an anti-monopoly organization, told me. "Enforced compliance is the hallmark feature of any cartel."

[...] The challenge is this: Under existing antitrust law, showing that companies A and B used algorithm C to raise prices isn't enough; you need to show that there was some kind of agreement between companies A and B, and you need to allege some specific factual basis that the agreement existed before you can formally request evidence of it. This dynamic can place plaintiffs in a catch-22: Plausibly alleging the existence of a price-fixing agreement is hard to do without access to evidence like private emails, internal documents, or the algorithm itself. But they typically can't uncover those kinds of materials until they are given the legal power to request evidence in discovery. "It's like trying to fit a square peg in a round hole," Richard Powers, a former deputy assistant attorney general in the DOJ antitrust division, told me. "It makes the job really hard."

[...] And cases like RealPage and Rainmaker may be the easy ones. In a series of papers, Stucke and his fellow antitrust scholar Ariel Ezrachi have outlined ways in which algorithms could fix prices that would be even more difficult to prevent or prosecute—including situations in which an algorithm learns to fix prices without its creators or users intending it to. Something similar could occur even if companies used different third-party algorithms to set prices. They point to a recent study of German gas stations, which found that when one major player adopted a pricing algorithm, its margins didn't budge, but when two major players adopted different pricing algorithms, the margins for both increased by 38 percent. "In situations like these, the algorithms themselves actually learn to collude with each other," Stucke told me. "That could make it possible to fix prices at a scale that we've never seen."

None of the situations Stucke and Ezrachi describe involve an explicit agreement, making them almost impossible to prosecute under existing antitrust laws. Price-fixing, in other words, has entered the algorithmic age, but the laws designed to prevent it have not kept up. Powers said he believes existing antitrust laws cover algorithmic collusion—but he worried that he might be wrong. "That's the thing that kept me up at night," he said about his tenure at the Department of Justice. "The worry that all 100-plus years of case law on price-fixing could be circumvented by technology."

[...] Whether other jurisdictions follow suit remains to be seen. In the meantime, more and more companies are figuring out ways to use algorithms to set prices. If these really do enable de facto price-fixing, and manage to escape legal scrutiny, the result could be a kind of pricing dystopia in which competition to create better products and lower prices would be replaced by coordination to keep prices high and profits flowing. That would mean permanently higher costs for consumers—like an inflation nightmare that never ends. More profoundly, it would undermine the incentives that keep economies growing and living standards rising. The basic premise of free-market capitalism is that prices are set through open competition, not by a central planner. That goes for algorithmic central planners too.


Original Submission

posted by hubie on Wednesday August 14, @05:42AM   Printer-friendly

https://www.wired.com/story/usps-scam-text-smishing-triad/

The flood of text messages started arriving early this year. They carried a similar thrust: The United States Postal Service is trying to deliver a parcel but needs more details, including your credit card number. All the messages pointed to websites where the information could be entered.

Like thousands of others, security researcher Grant Smith got a USPS package message. Many of his friends had received similar texts. A couple of days earlier, he says, his wife called him and said she'd inadvertently entered her credit card details. With little going on after the holidays, Smith began a mission: Hunt down the scammers.

Over the course of a few weeks, Smith tracked down the Chinese-language group behind the mass-smishing campaign, hacked into their systems, collected evidence of their activities, and started a months-long process of gathering victim data and handing it to USPS investigators and a US bank, allowing people's cards to be protected from fraudulent activity.

In total, people entered 438,669 unique credit cards into 1,133 domains used by the scammers, says Smith, a red team engineer and the founder of offensive cybersecurity firm Phantom Security. Many people entered multiple cards each, he says. More than 50,000 email addresses were logged, including hundreds of university email addresses and 20 military or government email domains. The victims were spread across the United States—California, the state with the most, had 141,000 entries—with more than 1.2 million pieces of information being entered in total.

[...] Chasing down the group didn't take long. Smith started investigating the smishing text message he received by visiting the dodgy domain it pointed to and intercepting traffic from the website. A path traversal vulnerability, coupled with a SQL injection, he says, allowed him to grab files from the website's server and read data from the database being used.
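The article doesn't publish the scam sites' code, but the two bug classes it names are textbook ones. Here is a hypothetical Python sketch of each pattern with the conventional fix alongside; the table, column, and directory names are invented for illustration and have nothing to do with the actual smishing kit.

```python
import os.path
import sqlite3

def read_file_unsafe(base_dir, user_path):
    # VULNERABLE: "../" sequences in user_path can escape base_dir,
    # e.g. join("/srv/site", "../../etc/passwd") -> "/srv/site/../../etc/passwd".
    return os.path.join(base_dir, user_path)

def read_file_safe(base_dir, user_path):
    # Resolve the final path and refuse anything outside base_dir.
    full = os.path.realpath(os.path.join(base_dir, user_path))
    base = os.path.realpath(base_dir)
    if not full.startswith(base + os.sep):
        raise ValueError("path traversal attempt blocked")
    return full

def query_unsafe(cur, card_id):
    # VULNERABLE: string interpolation lets input like "1 OR 1=1" dump every row.
    return cur.execute(f"SELECT pan FROM cards WHERE id = {card_id}").fetchall()

def query_safe(cur, card_id):
    # Parameterized query: the driver treats card_id strictly as data.
    return cur.execute("SELECT pan FROM cards WHERE id = ?", (card_id,)).fetchall()

# Demo against a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cards (id INTEGER, pan TEXT)")
cur.executemany("INSERT INTO cards VALUES (?, ?)", [(1, "4111..."), (2, "5500...")])

leaked = query_unsafe(cur, "1 OR 1=1")  # injection returns BOTH rows
scoped = query_safe(cur, "1 OR 1=1")    # returns nothing: no id equals that string
print(len(leaked), len(scoped))         # prints: 2 0
```

Together, the two flaws match what Smith describes: the traversal bug exposes files on the server, and the injection exposes the contents of the backing database.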

"I thought there was just one standard site that they all were using," Smith says. Diving into the data from that initial website, he found the name of a Chinese-language Telegram account and channel, which appeared to be selling a smishing kit scammers could use to easily create the fake websites.

[...] "I started reverse engineering it, figured out how everything was being encrypted, how I could decrypt it, and figured out a more efficient way of grabbing the data," Smith says. From there, he says, he was able to break administrator passwords on the websites—many had not been changed from the default "admin" username and "123456" password—and began pulling victim data from the network of smishing websites in a faster, automated way.

[...] The researcher provided the details to a bank that had contacted him after seeing his initial blog posts. Smith declined to name the bank. He also reported the incidents to the FBI and later provided information to the United States Postal Inspection Service (USPIS).

[...] The Smishing Triad sends between 50,000 and 100,000 messages daily, according to Resecurity's research. Its scam messages are sent using SMS or Apple's iMessage, the latter being encrypted. Loveland says the Triad is made up of two distinct groups—a small team led by one Chinese hacker that creates, sells, and maintains the smishing kit, and a second group of people who buy the scamming tool. (A backdoor in the kit allows the creator to access details of administrators using the kit, Smith says in a blog post.)

[...] As a result, smishing has been on the rise in recent years. But there are some tell-tale signs: If you receive a message from a number or email you don't recognize, if it contains a link to click on, or if it wants you to do something urgently, you should be suspicious.
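Those three warning signs can be turned into a toy filter. This is only an illustration of the heuristics listed above, with an invented word list and scoring scheme, not a production spam classifier.

```python
import re

# Hypothetical pressure words; real filters use far richer, trained signals.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "final notice"}

def smishing_score(message, sender_known=False):
    """Count how many of the three warning signs a message exhibits (0-3)."""
    score = 0
    if not sender_known:
        score += 1  # sign 1: number or email you don't recognize
    if re.search(r"https?://\S+|\w+\.\w{2,}/\S*", message, re.I):
        score += 1  # sign 2: contains a link to click on
    text = message.lower()
    if any(word in text for word in URGENCY_WORDS):
        score += 1  # sign 3: pressure to act urgently
    return score

msg = ("USPS: your parcel is suspended. Verify your address immediately: "
       "usps-track.example/abc")
print(smishing_score(msg))                                  # prints: 3
print(smishing_score("See you at 7?", sender_known=True))   # prints: 0
```

A high score doesn't prove a message is a scam, but a text from an unknown number that links somewhere and demands immediate action matches the profile of the USPS campaign exactly.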


Original Submission

posted by hubie on Wednesday August 14, @12:54AM   Printer-friendly

Elusive temporary star described in historical documents recreated using new computer model, shows it may have recently started generating stellar winds:

For the first time, a mysterious remnant from a rare type of supernova recorded in 1181 has been explained. Two white dwarf stars collided, creating a temporary "guest star," now labeled supernova (SN) 1181, which was recorded in historical documents in Japan and elsewhere in Asia. However, after the star dimmed, its location and structure remained a mystery until a team pinpointed its location in 2021.

Now, through computer modeling and observational analysis, researchers have recreated the structure of the remnant white dwarf, a rare occurrence, explaining its double shock formation. They also discovered that high-speed stellar winds may have started blowing from its surface within just the past 20-30 years. This finding improves our understanding of the diversity of supernova explosions, and highlights the benefits of interdisciplinary research, combining history with modern astronomy to enable new discoveries about our galaxy.

[...] The remnant of this guest star, labeled supernova remnant (SNR) 1181, was found to have been created when two extremely dense, Earth-sized stars, called white dwarfs, collided. This created a rare type of supernova, called a Type Iax supernova, which left behind a single, bright and fast-rotating white dwarf. Aided by observations on its position noted in the historical document, modern astrophysicists finally pinpointed its location in 2021 in a nebula towards the constellation Cassiopeia.

Due to its rare nature and location within our galaxy, SNR 1181 has been the subject of much observational research. This suggested that SNR 1181 is made up of two shock regions, an outer region and an inner one. In this new study, the research group analyzed the latest X-ray data to construct a theoretical computer model explaining these observations, recreating the previously unexplained structure of this supernova remnant.

The main challenge was that according to conventional understanding, when two white dwarfs collide like this, they should explode and disappear. However, this merger left behind a white dwarf. The spinning white dwarf was expected to create a stellar wind (a fast-flowing stream of particles) immediately after its formation. However, what the researchers found was something else.

"If the wind had started blowing immediately after SNR 1181's formation, we couldn't reproduce the observed size of the inner shock region," said Ko. "However, by treating the wind's onset time as variable, we succeeded in explaining all of the observed features of SNR 1181 accurately and unraveling the mysterious properties of this high-speed wind. We were also able to simultaneously track the time evolution of each shock region, using numerical calculations."

The team was very surprised to find that according to their calculations, the wind may have started blowing only very recently, within the past 20-30 years. They suggest this may indicate that the white dwarf has started to burn again, possibly due to some of the matter thrown out by the explosion witnessed in 1181 falling back to its surface, increasing its density and temperature over a threshold to restart burning.

To validate their computer model, the team is now preparing to further observe SNR 1181 using the Very Large Array (VLA) radio telescope based in central New Mexico and the 8.2-meter-class Subaru Telescope in Hawaii.

"The ability to determine the age of supernova remnants or the brightness at the time of their explosion through archaeological perspectives is a rare and invaluable asset to modern astronomy," said Ko. "Such interdisciplinary research is both exciting and highlights the immense potential for combining diverse fields to uncover new dimensions of astronomical phenomena."

Journal Reference: A Dynamical Model for IRAS 00500+6713: The Remnant of a Type Iax Supernova SN 1181 Hosting a Double Degenerate Merger Product WD J005311, The Astrophysical Journal (DOI: 10.3847/1538-4357/ad4d99)


Original Submission

posted by janrinok on Tuesday August 13, @08:06PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A multi-decade study led by researchers from the University of Sydney has unveiled concerning trends in international trade that are exacerbating inequalities between the rich countries of the Global North and the developing countries of the Global South.

The research identifies both positive and negative trends driven by international trade, but highlights the role that high-income countries play in driving polarizing trends, undermining progress towards the United Nations Sustainable Development Goals.

[...] As the world approaches the 2030 Agenda for Sustainable Development, the research underlines the urgent need for countries to recognize their influence beyond national borders.

The research lead for the study is Associate Professor Arunima Malik from the Center for Integrated Sustainability Analysis in the Faculty of Science, and Discipline of Accounting, Governance and Regulation in the Business School.

She said, "Sustainable Development Goals are nationally focused and therefore tend not to take international effects into account. This misses the fact that in today's globalized world, consumption in one region can significantly affect the well-being of people in countries far away."

The study takes a global approach to supply chains and is the first to assess long-term trends in the global environmental and social impacts of international trade.

The findings reveal that high-income countries often outsource environmentally and socially detrimental production to low-income nations, resulting in the shifting of burdens that disproportionately affects developing regions.

Co-author Professor Manfred Lenzen, Professor of Sustainability Research at the Center for Integrated Sustainability Analysis, said, "Our findings indicate the Global North's outsourcing practices are contributing to a widening divide between countries that benefit from trade and those that bear the brunt of its adverse effects."

This dynamic not only perpetuates economic disparities, but also exacerbates social and environmental challenges in the Global South.

"It isn't all negative. International trade can also have positive impacts," said co-author, Dr. Mengyu Li, a Horizon Fellow also at the Center for Integrated Sustainability Analysis in the Faculty of Science. "While trade can promote economic growth and reduce poverty, it can also lead to increasing pollution, waste, resource depletion and social inequalities, especially in the Global South."

The research, which spans three decades from 1990 to 2018, employs a systematic quantitative assessment of 12 selected Sustainable Development Goals. The authors say that the lack of defined consumption-based indicators aligned with the SDG framework has hindered a comprehensive understanding of these trends.

As an alternative, the authors propose the use of consumption-based proxies to analyze global supply chain dynamics, trends and their implications for progress towards the UN SDGs.

The study identified the biggest polarizing effects in SDG13 (Climate Action), SDG11 (Sustainable Cities and Communities) and SDG2 (Zero Hunger). The biggest equalizing effects were identified for SDG8 (Decent Work and Economic Growth) and SDG1 (No Poverty).

Provided by University of Sydney

More information: Arunima Malik et al, Polarizing and equalizing trends in international trade and Sustainable Development Goals, Nature Sustainability (2024). DOI: 10.1038/s41893-024-01397-5


Original Submission

posted by janrinok on Tuesday August 13, @03:21PM   Printer-friendly
from the who-owns-the-Representatives-and-Senators? dept.

https://apnews.com/article/consumer-protection-ftc-fcc-biden-250f6eece6e2665535019128e8fa38da

In the name of consumer protection, a slew of U.S. federal agencies are working to make it easier for Americans to click the unsubscribe button for unwanted memberships and recurring payment services.

A broad new government initiative, dubbed "Time Is Money," includes a rollout of new regulations and the promise of more for industries ranging from healthcare and fitness memberships to media subscriptions.

"The administration is cracking down on all the ways that companies, through paperwork, hold times and general aggravation waste people's money and waste people's time and really hold onto their money," Neera Tanden, White House domestic policy adviser, told reporters Friday in advance of the announcement.

"Essentially in all of these practices, companies are delaying services to you or really trying to make it so difficult for you to cancel the service that they get to hold onto your money for longer and longer," Tanden said. "These seemingly small inconveniences don't happen by accident — they have huge financial consequences."


Original Submission

posted by hubie on Tuesday August 13, @10:37AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In separate statements, ASUS and MSI announced their plans to deliver the new microcode for 13th and 14th Gen Raptor Lake Core family of CPUs over the course of August.

The updated CPU microcode, which should be finalized in the coming days, is supposed to keep Intel's wobbly desktop microprocessors from everything from crashing at normal clock speeds (an "instability," as the x86 giant puts it) to frying themselves and suffering permanent damage, if not complete failure.

Apparently, the original microcode for Raptor Lake processors applied too much voltage to chips. While increasing voltage can make it possible to hit higher clock speeds with ironclad stability, too much voltage can be dangerous and degrade the silicon.

Although microcode updates are developed by Intel, they have to be distributed via motherboard BIOSes developed by individual motherboard vendors, including DIY brands like ASUS and MSI, as well as OEMs. When it comes to microcode patches, Intel (and its rival AMD) can't guarantee when users will receive them, or whether all users will get them at all, since it is up to individual motherboard makers to issue new BIOS versions.

That's not ideal for either Chipzilla or owners of Raptor Lake CPUs: the longer the microcode takes to disseminate, the more chips can fail, providing more fuel for potential class action lawsuits.

However, at least ASUS and MSI seem to be working fast on updating their motherboards, with both saying that they'll start distributing BIOSes with the new microcode next week. Intel said the microcode itself wouldn't be done until the middle of the month.

[...] The two tech companies have yet to update any 600 series boards, however. For its part, Gigabyte says it expects all of its motherboards to get updated by the second week of September at the latest, a representative told The Register.


Original Submission

posted by hubie on Tuesday August 13, @05:50AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Engineers on NASA's NEOWISE (Near-Earth Object Wide-field Infrared Survey Explorer) mission commanded the spacecraft to turn its transmitter off for the last time Thursday. This concludes more than 10 years of its planetary defense mission to search for asteroids and comets, including those that could pose a threat to Earth.

[...] NASA ended the mission because NEOWISE will soon drop too low in its orbit around Earth to provide usable science data. An uptick in solar activity is heating the upper atmosphere, causing it to expand and create drag on the spacecraft, which does not have a propulsion system to keep it in orbit. Now decommissioned, NEOWISE is expected to safely burn up in our planet's atmosphere in late 2024.

During its operational lifetime, the infrared survey telescope exceeded scientific objectives for not one but two missions, starting with the WISE (Wide-field Infrared Survey Explorer) mission. Managed by JPL, WISE launched in December 2009 with a seven-month mission to scan the entire infrared sky.

By July 2010, WISE had accomplished this with far greater sensitivity than previous surveys. A few months later, the telescope ran out of the coolant that kept heat produced by the spacecraft from interfering with its infrared observations. (Invisible to the human eye, infrared wavelengths are associated with heat.)

NASA extended the mission under the name NEOWISE until February 2011 to complete a survey of the main belt asteroids, at which point the spacecraft was put into hibernation.

Analysis of this data showed that although the lack of coolant meant the space telescope could no longer observe the faintest infrared objects in the universe, it could still make precise observations of asteroids and comets that generate a strong infrared signal from being heated by the sun as they travel past our planet.

NASA brought the telescope out of hibernation in 2013 under the Near-Earth Object Observations Program, a precursor for the agency's Planetary Defense Coordination Office, to continue the NEOWISE survey of asteroids and comets in the pursuit of planetary defense.

[...] "The NEOWISE mission has provided a unique, long-duration data set of the infrared sky that will be used by scientists for decades to come," said Amy Mainzer, principal investigator for both NEOWISE and NEO Surveyor at the University of California, Los Angeles. "But its additional legacy is that it has helped lay the groundwork for NASA's next planetary defense infrared space telescope."


Original Submission