


Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, JavaScript, etc.)
  • (Parentheses (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...

Comments: 34 | Votes: 74

posted by hubie on Wednesday September 25, @09:48PM   Printer-friendly
from the lies-damn-lies-statistics-and-pundits dept.

We are just a few weeks away from the general election in the United States and many publications provide daily updates to election forecasts. One of the most well-known forecasting systems was developed by Nate Silver, originally for the website FiveThirtyEight. Although Silver's model is quite sophisticated and incorporates a considerable amount of data beyond polls, other sites like RealClearPolitics just use a simple average of recent polls. Does all of the complexity of models like Silver's actually improve forecasts, and can we demonstrate that they're superior to a simple average of polls?

Pre-election polls are a bit like a science project that uses a lot of sensors to measure the state of a single system. There's a delay between the time a sensor is polled for data and when it returns a result, so the project uses many sensors to get more frequent updates. However, the electronics shop had a limited quantity of the highest quality sensor, so a lot of other sensors were used that have a larger bias, less accuracy, or use different methods to measure the same quantity. The science project incorporates the noisy data from the heterogeneous sensors to try to produce the most accurate estimate of the state of the system.

Polls are similar to my noisy sensor analogy in that each poll has its own unique methodology, has a different margin of error related to sample size, and may have what Silver calls "house effects": a tendency for a polling firm's results to favor certain candidates or political parties. Some of the more complex election forecasting systems like Silver's model attempt to correct for this bias and give more weight to polls that use larger sample sizes and methodologies considered to reflect better polling practices.
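
To make that concrete, here is a minimal sketch in Python of inverse-variance poll averaging with a house-effect correction, compared against a simple average. It illustrates the general technique only, not Silver's actual model; the poll numbers and house effects are invented for the example.

    # Illustrative only: combine polls by inverse-variance weighting after
    # subtracting each firm's estimated "house effect". All numbers invented.
    polls = [
        # (pollster, sample_size, pct_for_candidate_A)
        ("Firm 1", 1200, 48.0),
        ("Firm 2",  800, 51.0),
        ("Firm 3",  500, 47.0),
    ]

    # Points by which each firm tends to overstate candidate A, e.g. as
    # estimated from its misses in past election cycles (hypothetical).
    house_effect = {"Firm 1": 0.5, "Firm 2": 1.5, "Firm 3": -1.0}

    num = den = 0.0
    for firm, n, pct in polls:
        adjusted = pct - house_effect[firm]
        p = pct / 100.0
        variance = p * (1.0 - p) / n  # sampling variance of a proportion
        weight = 1.0 / variance       # larger samples get more weight
        num += weight * adjusted
        den += weight

    print(f"Weighted, house-adjusted estimate: {num / den:.1f}%")
    print(f"Simple average of the same polls:  {sum(x[2] for x in polls) / len(polls):.1f}%")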

The purpose of an election forecast is not to take a snapshot of the race at a particular point in time, but to predict the result on election day. For example, after a political party officially selects its presidential candidate at the party's convention, the candidate tends to receive a temporary boost in the polls, known as a "post-convention bounce". Although this effect is well-documented across many election cycles, it is temporary, and polls taken during this period tend to overestimate the support the candidate will actually receive on election day. Many forecast models try to adjust for this bias when incorporating polls taken shortly after a convention.
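
As a toy illustration of that kind of correction, a model might discount a poll by the bounce estimated to remain on the day the poll was taken. The sketch below assumes an exponentially decaying bounce; the three-point size and ten-day half-life are invented numbers, not values from any real model.

    # Illustrative only: subtract the estimated remaining post-convention
    # bounce from a poll taken a given number of days after the convention.
    def adjust_for_bounce(poll_pct, days_since_convention,
                          bounce_pts=3.0, half_life_days=10.0):
        remaining = bounce_pts * 0.5 ** (days_since_convention / half_life_days)
        return poll_pct - remaining

    print(adjust_for_bounce(52.0, days_since_convention=5))  # -> ~49.9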

Election models also often incorporate "fundamentals" such as approval ratings and the tendency of a strong economy to favor incumbents. This information can be used on its own to predict the outcome of elections or incorporated into a model along with polling data. Some forecasting systems like Silver's also incorporate polls from states similar to the one being forecast, along with data from national polls, to try to produce a more accurate forecast and smooth out the noise from individual polls. These models may also incorporate past voting trends, expert ratings of races, and data from prediction markets. The end result is a model that is very complex and incorporates a large amount of data. But does it actually provide more accurate forecasts?

Unusual behaviors have been noted with some models, such as the tails of Silver's model tending to include some very unusual outcomes. On the other hand, many models predicted that it was nearly certain that Hillary Clinton would defeat Donald Trump in the 2016 election, perhaps underestimating the potential magnitude of polling errors and producing tails that weren't heavy enough. Election forecasters have to decide which factors to include in their models and how heavily to weight them, sometimes drawing criticism when their models appear to be outliers. Presidential elections occur only once every four years in the United States, so there are more fundamental questions about whether there's even enough data to verify the accuracy of forecast models. There may even be some evidence of a feedback loop, in which election forecast models could actually influence election results.

Whether the goal is to forecast a presidential election or project a player's statistics in an upcoming baseball season, sometimes even the most complex forecasting systems struggle to outperform simple prediction models. I'm not interested in discussions about politics and instead pose a fundamental data science question: does all of the complexity of election models like Nate Silver's really make a meaningful difference, or is a simple average of recent polls just as good a forecast?


Original Submission

posted by hubie on Wednesday September 25, @05:03PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Our planet is choking on plastics. Some of the worst offenders, which can take decades to degrade in landfills, are polypropylene—which is used for things such as food packaging and bumpers—and polyethylene, found in plastic bags, bottles, toys, and even mulch.

Polypropylene and polyethylene can be recycled, but the process can be difficult and often produces large quantities of the greenhouse gas methane. They are both polyolefins, which are the products of polymerizing ethylene and propylene, raw materials that are mainly derived from fossil fuels. The bonds of polyolefins are also notoriously hard to break.

Now, researchers at the University of California, Berkeley have come up with a method of recycling these polymers that uses catalysts that easily break their bonds, converting them into propylene and isobutylene, which are gases at room temperature. Those gases can then be recycled into new plastics.

“Because polypropylene and polyethylene are among the most difficult and expensive plastics to separate from each other in a mixed waste stream, it is crucial that [a recycling] process apply to both polyolefins,” the research team said in a study recently published in Science.

The recycling process the team used is known as isomerizing ethenolysis, which relies on a catalyst to break down olefin polymer chains into their small molecules. Polyethylene and polypropylene bonds are highly resistant to chemical reactions because both of these polyolefins have long chains of single carbon-carbon bonds. Most polymers have at least one carbon-carbon double bond, which is much easier to break.

[...] The reaction breaks all the carbon-carbon bonds in polyethylene and polypropylene, with the carbon atoms released during the breaking of these bonds ending up attached to molecules of ethylene. “The ethylene is critical to this reaction, as it is a co-reactant,” researcher R.J. Conk, one of the authors of the study, told Ars Technica. “The broken links then react with ethylene, which removes the links from the chain. Without ethylene, the reaction cannot occur.”

The entire chain is catalyzed until polyethylene is fully converted to propylene, and polypropylene is converted to a mixture of propylene and isobutylene.

This method has high selectivity—meaning it produces a large amount of the desired products: propylene derived from polyethylene, and both propylene and isobutylene derived from polypropylene. Both of these chemicals are in high demand: propylene is an important raw material for the chemical industry, while isobutylene is a frequently used monomer in many different polymers, including synthetic rubber, and is also a gasoline additive.

Because plastics are often mixed at recycling centers, the researchers wanted to see what would happen if polypropylene and polyethylene underwent isomerizing ethenolysis together. The reaction was successful, converting the mixture into propylene and isobutylene, with slightly more propylene than isobutylene.

[...] While this recycling method sounds like it could prevent tons upon tons of waste, it will need to be scaled up enormously for this to happen. When the research team increased the scale of the experiment, it produced the same yield, which looks promising for the future. Still, we’ll need to build considerable infrastructure before this could make a dent in our plastic waste.

“We hope that the work described…will lead to practical methods for…[producing] new polymers,” the researchers said in the same study. “By doing so, the demand for production of these essential commodity chemicals starting from fossil carbon sources and the associated greenhouse gas emissions could be greatly reduced.”

Science, 2024. DOI: 10.1126/science.adq731


Original Submission

posted by hubie on Wednesday September 25, @12:15PM   Printer-friendly
from the lawyer-up dept.

https://arstechnica.com/gaming/2024/09/nintendo-the-pokemon-company-sue-palworld-maker-pocketpair/

Nintendo and The Pokemon Company announced they have filed a patent-infringement lawsuit against Pocketpair, the makers of the heavily Pokémon-inspired Palworld. The Tokyo District Court lawsuit seeks an injunction and damages "on the grounds that Palworld infringes multiple patent rights," according to the announcement.
[...]
The many surface similarities between Pokémon and Palworld are readily apparent, even though Pocketpair's game adds many new features over Nintendo's (such as, uh, guns). But making legal hay over even heavy common ground between games can be an uphill battle. That's because copyright law (at least in the US) generally doesn't apply to a game's mere design elements, and only extends to "expressive elements" such as art, character design, and music.

Generally, even blatant rip-offs of successful games are able to make just enough changes to those "expressive" portions to avoid any legal trouble. But Palworld might clear the high legal bar for infringement if the game's 3D character models were indeed lifted almost wholesale from actual Pokémon game files, as some observers have been alleging since January.
[...]
"Palworld is such a different type of game from Pokémon, it's hard to imagine what patents (*not* copyrights) might have been even plausibly infringed," game industry attorney Richard Hoeg posted on social media Wednesday night. "Initial gut reaction is Nintendo may be reaching."

Pocketpair CEO Takuro Mizobe told Automaton Media in January that the game had "cleared legal reviews" and that "we have absolutely no intention of infringing upon the intellectual property of other companies."
[...]
Update (Sept. 19, 2024): In a statement posted overnight, Pocketpair said it was currently "unaware of the specific patents we are accused of infringing upon, and have not been notified of such details."
[...]
Pocketpair promises that it "will continue improving Palworld and strive to create a game that our fans can be proud of."


Original Submission

posted by hubie on Wednesday September 25, @05:21AM   Printer-friendly
from the bad-IoT dept.

"The government's malware disabling commands, which interacted with the malware's native functionality, were extensively tested prior to the operation," according to the DOJ:

U.S. authorities have dismantled a massive botnet run by hackers backed by the Chinese government, according to a speech given by FBI director Christopher Wray on Wednesday. The botnet malware infected a number of different types of internet-connected devices around the world, including home routers, cameras, digital video recorders, and NAS drives. Those devices were used to help infiltrate sensitive networks related to universities, government agencies, telecommunications providers, and media organizations.

Wray explained the operation at the Aspen Digital conference and said the hackers work for a Beijing-based company called Integrity Technology Group, which is known to U.S. researchers as Flax Typhoon. The botnet was launched in mid-2021, according to the FBI, and infected roughly 260,000 devices as of June 2024.

The operation to dismantle the botnet was coordinated by the FBI, the NSA, and the Cyber National Mission Force (CNMF), according to a press release dated Wednesday. The U.S. Department of Justice received a court order to take control of the botnet infrastructure by sending disabling commands to the malware on infected devices. The hackers tried to counterattack by hitting FBI infrastructure but were "ultimately unsuccessful," according to the law enforcement agency.

About half of the devices hijacked were in the U.S., according to Wray, but there were also devices identified as compromised in South America, Europe, Africa, Southeast Asia, and Australia. And the DOJ noted in a press release that authorities in Australia, Canada, New Zealand, and the UK all helped take down the botnet.

Originally spotted on Schneier on Security.

Related: Chinese Malware Removed From SOHO Routers After FBI Issues Covert Commands


Original Submission

posted by janrinok on Wednesday September 25, @12:34AM   Printer-friendly

Starlink imposes $100 "congestion charge" on new users in parts of US:

New Starlink customers have to pay a $100 "congestion charge" in areas where the satellite broadband network has limited capacity.

"In areas with network congestion, there is an additional one-time charge to purchase Starlink Residential services," a Starlink FAQ says. "This fee will only apply if you are purchasing or activating a new service plan. If you change your Service address or Service Plan at a later date, you may be charged the congestion fee."

The charge is unwelcome for anyone wanting Starlink service in a congested area, but it could help prevent the capacity crunch from getting worse by making people think twice about signing up. The SpaceX-owned Internet service provider also seems to anticipate that people who sign up for service in congested areas may change their minds after trying it out for a few weeks.

"Our intention is to no longer charge this fee to new customers as soon as network capacity improves. If you're not satisfied with Starlink and return it within the 30-day return window, the charge will be refunded," the company said.

There is some corresponding good news for people in areas with more Starlink capacity. Starlink "regional savings," introduced a few months ago, provides a $100 service credit in parts of the US "where Starlink has abundant network availability." The credit is $200 in parts of Canada with abundant network availability.

The congestion charge was reported by PCMag on September 13, after being noticed by users of the Starlink subreddit. "The added fee appears to pop up in numerous states, particularly in the south and eastern US, such as Texas, Florida, Kansas, Ohio and Virginia, among others, which have slower Starlink speeds due to the limited network capacity," PCMag noted.

Speed test data showed in 2022 that Starlink speeds dropped significantly as more people signed up for the service, a fact cited by the Federal Communications Commission when it rejected $886 million worth of broadband deployment grants for the company.

This isn't the first time Starlink has varied pricing based on regional congestion. In February 2023, Starlink decided that people in limited-capacity areas would pay $120 a month, and people in excess-capacity areas would pay $90 a month.


Original Submission

posted by janrinok on Tuesday September 24, @07:51PM   Printer-friendly
from the when-politics-and-science-collide dept.

Arthur T Knackerbracket has processed the following story:

Since its founding in 1954, high-energy physics laboratory CERN has been a flagship for international scientific collaboration. That commitment has been under strain since the Russian invasion of Ukraine in 2022. CERN decided to cut ties with Moscow late last year over deaths resulting from the country's "unlawful use of force" in the ongoing conflict.

With the existing international cooperation agreements now lapsing, the Geneva-based organization is expected to expel hundreds of scientists affiliated with Russian institutions on November 30, Nature reports. However, CERN will maintain its links with the Joint Institute for Nuclear Research, an intergovernmental center near Moscow.

CERN was founded in the wake of World War II as a place dedicated to the peaceful pursuit of science. The organization currently has 24 member states and, in 2019 alone, hosted about 12,400 users from institutions in more than 70 countries. Russia has never been a full member of CERN, but collaborations first began in 1955, with hundreds of Russia-affiliated scientists contributing to experiments in the ensuing decades. Now, that 60-year history of collaboration, and Russia's long-standing observer status, is ending. As World Nuclear News reported earlier this year:

The decision to end the cooperation agreement was taken in December 2023 when CERN's Council passed a resolution "to terminate the International Cooperation Agreement between CERN and the Russian Federation, together with all related protocols and addenda, with effect from 30 November 2024; To terminate ... all other agreements and experiment memoranda of understanding allowing the participation of the Russian Federation and its national institutes in the CERN scientific programme, with effect from 30 November 2024; AFFIRMS That these measures concern the relationship between CERN and Russian and Belarusian institutes and do not affect the relationship with scientists of Russian nationality affiliated with other institutes." The cooperation agreement with Belarus will come to an end on 27 June, before the Russian one ends.

It's unclear how this decision will impact scientific research at CERN. Russia's 4.5 percent contribution to the combined budget for ongoing experiments at the Large Hadron Collider has already been covered by other collaboration members. Some think the effects will be minimal since researchers have had plenty of time to prepare for the exit. Certain essential staff members have successfully found employment outside of Russia so that they can stay on.

Others are less confident. “It will leave a hole. I think it’s an illusion to believe one can cover that very simply by other scientists,” particle physicist and CMS member Hannes Jung of the German Electron Synchrotron in Hamburg told Nature. He's also a member of the Science4Peace Forum, which opposes restrictions on scientific cooperation.


Original Submission

posted by janrinok on Tuesday September 24, @03:08PM   Printer-friendly

The Arc browser that lets you customize websites had a serious vulnerability:

The Arc browser's 'Boosts' feature would've allowed bad actors to edit a website and add a malicious payload that their target could download to their computer.

One of the features that separates the Arc browser from its competitors is the ability to customize websites. The feature, called "Boosts," allows users to change a website's background color, switch to a font they like or one that makes it easier for them to read, and even remove unwanted elements from the page completely. Their alterations aren't supposed to be visible to anyone else, but they can share them across devices. Now, Arc's creator, the Browser Company, has admitted that a security researcher found a serious flaw that would've allowed attackers to use Boosts to compromise their targets' systems.

The company used Firebase, which the security researcher known as "xyzeva" described as a "database-as-a-backend service" in their post about the vulnerability, to support several Arc features. For Boosts, in particular, it's used to share and sync customizations across devices. In xyzeva's post, they showed how the browser relies on a creator's identification (creatorID) to load Boosts on a device. They also shared how someone could change that element to their target's identification tag and assign that target Boosts that they had created.

If a bad actor makes a Boost with a malicious payload, for instance, they can simply change their creatorID to the creatorID of their intended target. When the intended victim then visits the website in Arc, they could unknowingly download the hacker's malware. And as the researcher explained, it's pretty easy to get user IDs for the browser. A user who refers someone to Arc shares their ID with the recipient, and if the user themselves created an account from a referral, the person who sent it will also have their ID. Users can also share their Boosts with others, and Arc has a page of public Boosts that contains the creatorIDs of the people who made them.
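
The underlying bug class is a familiar one: a backend that trusts a client-supplied creatorID instead of the identity established by authentication. Here is a minimal Python sketch of that pattern and its fix; it is not Arc's actual code, and every name in it is hypothetical.

    # Hypothetical sketch of the vulnerability class, not Arc's code.
    boosts_db = {}  # maps creator_id -> list of boost dicts

    def save_boost_vulnerable(authenticated_user_id, boost):
        # BUG: trusts whatever creatorID the client put in the payload,
        # so an attacker can file a malicious Boost under a victim's ID.
        boosts_db.setdefault(boost["creatorID"], []).append(boost)

    def save_boost_fixed(authenticated_user_id, boost):
        # FIX: ignore any client-supplied creatorID and stamp the Boost
        # with the identity established by the auth layer.
        boost["creatorID"] = authenticated_user_id
        boosts_db.setdefault(authenticated_user_id, []).append(boost)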

In its post, the Browser Company said xyzeva notified it about the security issue on August 25 and that it issued a fix a day later with the researcher's help.


Original Submission

posted by janrinok on Tuesday September 24, @10:18AM   Printer-friendly

Google employees' attempts to hide messages from investigators might backfire:

Google employees liberally labeled their emails as "privileged and confidential" and spoke "off the record" over chat messages, even after being told to preserve their communications for investigators, lawyers for the Justice Department have told a Virginia court over the past couple of weeks.

That strategy could backfire if the judge in Google's second antitrust trial believes the company intentionally destroyed evidence that would have looked bad for it. The judge could go as far as giving an adverse inference about Google's missing documents, which would mean assuming they would have been bad for Google's case.

Documents shown in court regularly display the words "privileged and confidential" as business executives discuss their work, occasionally with a member of Google's legal team looped in. On Friday, former Google sell-side ad executive Chris LaSala said that wasn't the only strategy Google used. He testified that even after he was placed on a litigation hold in connection with law enforcers' investigation, Google chat messages had history off by default, and his understanding was that it needed to be changed for each individual chat that involved substantive work conversations. Multiple former Google employees testified to never changing the default setting and occasionally having substantive business discussions in chats, though chats were largely reserved for casual conversations.

LaSala also used that default to his advantage at times, documents shown by the government in court revealed. In one 2020 chat, an employee asked LaSala if they should email two other Google employees about an issue and, soon after, asked, "Or too sensitive for email so keep on ping?" LaSala responded, instructing the employee to "start a ping with history turned off." In a separate 2020 exchange, LaSala again instructed his employee to "maybe start an off the record ping thread with Duke, you, me."

"It was just how we spoke. Everyone used the phrase 'off the record ping,'" LaSala testified. "My MO was mostly off the record, so old tricks die hard."

Still, LaSala said he "tried to follow the terms of the litigation hold," but he acknowledged he "made a mistake." Shortly after a training about the hold, he recalled receiving a chat from a colleague. Though LaSala said he turned history on, he wasn't sure the first message would be preserved. LaSala said he put that message in an email just in case. In general, LaSala said, "We were really good at documenting ... and to the extent I made a mistake a couple times, it was not intentional."

Brad Bender, another Google ad tech executive who testified earlier in the week, described conversations with colleagues over chat as more akin to "bumping into the hall and saying 'hey we should chat.'" The DOJ also questioned former Google executive Rahul Srinivasan about emails he marked privileged and confidential, asking what legal advice he was seeking in those emails. He said he didn't remember.

Google employees were well aware of how their written words could be used against the company, the DOJ argued, pointing to the company's "Communicate with Care" legal training for employees. In one 2019 email, Srinivasan copied a lawyer on an email to colleagues about an ad tech feature and reminded the group to be careful with their language. "We should be particularly careful when framing something as a 'circumvention,'" he wrote. "We should assume that every document (and email) we generate will likely be seen by regulators." The email was labeled "PRIVILEGED and CONFIDENTIAL."

While the many documents shown by the DOJ demonstrate that Google often discussed business decisions in writing, at other times, they seemed to intentionally leave the documentation sparse. "Keeping the notes limited due to sensitivity of the subject," a 2021 Google document says. "Separate privileged emails will be sent to folks to follow up on explicit [action items]."

"We take seriously our obligations to preserve and produce relevant documents," Google spokesperson Peter Schottenfels said in a statement. "We have for years responded to inquiries and litigation, and we educate our employees about legal privilege. In the DOJ cases alone, we have produced millions of documents including chat messages and documents not covered by legal privilege."

The judge in Google's first antitrust battle with the DOJ over its search business declined to go as far as an adverse inference, even though he ruled against Google in most other ways. Still, he made clear he wasn't "condoning Google's failure to preserve chat evidence" and said, "Any company that puts the onus on employees to identify and preserve relevant evidence does so at its own peril. Google avoided sanctions in this case. It may not be so lucky in the next one."


Original Submission

posted by janrinok on Tuesday September 24, @05:33AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Linux kernel is 33 years old. Its creator, Linus Torvalds, still enjoys an argument or two but is baffled why the debate over Rust has attracted so much heat.

"I'm not sure why Rust has been such a contentious area," Torvalds said during an on-stage chat this week with Dirk Hohndel, Verizon's Head of Open Source.

"It reminds me of when I was young and people were arguing about vi versus Emacs," said the software engineer. Hohndel interjected, "They still are!"

Torvalds laughed, "Maybe they still are! But for some reason, the whole Rust versus C discussion has taken almost religious overtones."

Getting Rust into the Linux kernel has been a hot topic for some time. In 2022, developers were arguing over the language, with some calling the memory safety features of Rust an "insult" to some of the hard work that had gone into the kernel over the years. At the beginning of September, one of the maintainers of the Rust for Linux project stepped down, citing frustration with "nontechnical nonsense" as a reason for his resignation.

During the conversation at the Linux Foundation's Open Source Summit in Vienna this week, Torvalds continued, "Clearly, there are people who just don't like the notion of Rust, and having Rust encroach on their area.

"People have even been talking about the Rust integration being a failure … We've been doing this for a couple of years now so it's way too early to even say that, but I also think that even if it were to become a failure – and I don't think it will – that's how you learn," he said.

"So I see the whole Rust thing as positive, even if the arguments are not necessarily always [so]."

Keen to pull those positives from the row, Torvalds added, "One of the nice parts about Rust has been how it's livened up discussions," before acknowledging, "some of the arguments get nasty, and people do actually - yes - decide 'this is not worth my time,' but at the same time it's kind of interesting, and I think it shows how much people care."

"C is, in the end, a very simple language. It's one of the reasons I enjoy C and why a lot of C programmers enjoy C, even if the other side of that picture is obviously that because it's simple it's also very easy to make mistakes," he argued.

With impressive diplomacy, considering his outbursts of years past, Torvalds went on, "There's a lot of people who are used to the C model, and they don't necessarily like the differences... and that's ok.

"Some people care about specific architectures, and some people like file systems, and that's how it should be. That's how I see Rust."


Original Submission

posted by hubie on Tuesday September 24, @12:51AM   Printer-friendly
from the dystopia-is-now! dept.

https://arstechnica.com/information-technology/2024/09/dead-internet-theory-comes-to-life-with-new-ai-powered-social-media-app/

For the past few years, a conspiracy theory called "Dead Internet theory" has picked up speed as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions found online. The theory says that most social Internet activity today is artificial and designed to manipulate humans for engagement.

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it's bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It's available on the iPhone app store, but so far, it's picking up pointed criticism.

After its creator announced SocialAI as "a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make," computer security specialist Ian Coldwater quipped on X, "This sounds like actual hell." Software developer and frequent AI pundit Colin Fraser expressed a similar sentiment: "I don't mean this like in a mean way or as a dunk or whatever but this actually sounds like Hell. Like capital H Hell."
[...]
As The Verge reports in an excellent rundown of the example interactions, SocialAI lets users choose the types of AI followers they want, including categories like "supporters," "nerds," and "skeptics." These AI chatbots then respond to user posts with brief comments and reactions on almost any topic, including nonsensical "Lorem ipsum" text.

Sometimes the bots can be too helpful. On Bluesky, one user asked for instructions on how to make nitroglycerin out of common household chemicals and received several enthusiastic responses from bots detailing the steps, although several bots provided different recipes, none of which may be wholly accurate.
[...]
None of this would be possible without access to inexpensive LLMs like the kind that power ChatGPT. So far, SocialAI creator Sayman has said he is using a "custom mix" of AI models.

[...] On Bluesky, evolutionary biologist and frequent AI commentator Carl T. Bergstrom wrote, "So I signed up for the new heaven-ban SocialAI social network where you're all alone in a world of bots. It is so much worse than I ever imagined. It's not GPT-level AI; it's more like ELIZA level, if the ELIZAs were lazily written stereotypes of every douchebag on ICQ circa 1999."
[...]
As a piece of prospective performance art, SocialAI may be genius. Or perhaps you could look at it as a form of social commentary on the vapidity of social media or about the harm of algorithmic filter bubbles that only feed you what you want to see and hear. But since its creator seems sincere, we're unsure how the service may fit into the future of social media apps.

For now, the app has already picked up a few positive reviews on the app store from people who seem to enjoy this taste of the hypothetical "dead Internet" by verbally jousting with the bots for entertainment: "5 stars and I've been using this for 10 minutes. I could argue with this AI for HOURS 😭 it's actually so much fun to see what it will say to the most random stuff 💀."


Original Submission

posted by hubie on Monday September 23, @08:04PM   Printer-friendly
from the redundant-redundant dept.

https://arstechnica.com/gadgets/2024/09/microsoft-releases-a-new-windows-app-called-windows-app-for-running-windows-apps/

Microsoft announced today that it's releasing a new app called Windows App as an app for Windows that allows users to run Windows and also Windows apps (it's also coming to macOS, iOS, web browsers, and is in public preview for Android).

On most of those platforms, Windows App is a replacement for the Microsoft Remote Desktop app, which was used for connecting to a copy of Windows running on a remote computer or server—for some users and IT organizations, a relatively straightforward way to run Windows software on devices that aren't running Windows or can't run Windows natively.
[...]
Microsoft says that aside from unifying multiple services into a single app, Windows App's enhancements include easier account switching, better device management for IT administrators, support for the version of Windows 365 for frontline workers, and support for Microsoft's "Relayed RDP Shortpath," which can enable Remote Desktop on networks that normally wouldn't allow it.
[...]
For connections to your own Remote Desktop-equipped PCs, Windows App has most of the same features and requirements as the Remote Desktop Connection app did before, including support for multiple monitors, device redirection for devices like webcams and audio input/output, and dynamic resolution support (so that your Windows desktop resizes as you resize the app window).

[Obligatory: Which Windows App did you want?]


Original Submission

posted by janrinok on Monday September 23, @06:45PM   Printer-friendly
from the free-speech-absolutist dept.

Earlier we reported that Twitter had been blocked in Brazil after non-compliance with court orders. Good news for everyone feverishly trying to switch to competitors: it seems that Twitter will indeed comply with the orders, pay fines, and appoint a legal representative to be unblocked in the country. Based on reporting from The New York Times: Musk Backs Down In Brazil: X May Return After Complying With Court Orders

X's lawyers said in a Friday court filing cited by the Times that X complied with the orders asking the social network to remove accounts accused of engaging in disinformation, as well as demands from the Supreme Court regarding fines and the assignment of a new legal representative for X in Brazil. The Brazilian Supreme Court confirmed the compliance in its own filing Saturday, though it noted X has yet to file the proper documents to move forward with its case and will have five days to do so, the Times reported. André Zonaro Giacchetta, one of X's new lawyers in Brazil, told the Times the conditions for X's return in Brazil "have already been met, but it depends on the assessment of" the country's supreme court.

[Editor's Note: Thanks, gnuman.]


Original Submission

posted by hubie on Monday September 23, @03:16PM   Printer-friendly

https://hackaday.com/2024/09/12/review-ifixits-fixhub-may-be-the-last-soldering-iron-you-ever-buy/

Like many people who solder regularly, I decided years ago to upgrade from a basic iron and invest in a soldering station. My RadioShack digital station has served me well for the better part of 20 years. It heats up fast, tips are readily available, and it's a breeze to dial in whatever temperature I need. It's older than both of my children, has moved with me to three different homes, and has outlived two cars and one marriage (so far, anyway).

As such, when the new breed of "smart" USB-C soldering irons started hitting the scene, I didn't find them terribly compelling. Oh sure, I bought a Pinecil. But that's because I'm an unrepentant open source zealot and love the idea that there's a soldering iron running a community developed firmware. In practice though, I only used the thing a few times, and even then it was because I needed something portable. Using it at home on the workbench? It just never felt up to the task of daily use.

So when iFixit got in contact a couple weeks back and said they had a prototype USB-C soldering iron they wanted me to take a look at, I was skeptical to say the least. But then I started reading over the documentation they sent over, and couldn't deny that they had some interesting ideas. For one, it was something of a hybrid iron. It was portable when you needed it to be, yet offered the flexibility and power of a station when you were at the bench.

Question from the editor: Has anyone used the Pinecil and how do you think this compares?


Original Submission

posted by janrinok on Monday September 23, @10:31AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

On Tuesday, Chief US District Judge Robert Shelby granted a preliminary injunction to block Utah from limiting the social media usage of minors. Republican Governor Spencer Cox had signed the Utah Minor Protection in Social Media Act earlier in March. It was supposed to take effect on October 1, but the court’s decision to block the law is a victory for young social media users in Utah.

This isn’t the first time Utah’s governor has attempted to limit social media use among the youths in the state. Last year, he signed two bills that required parents to grant permission for teens to create social media accounts, and these accounts had limitations like curfews and age verification. He replacing the older laws in March due to lawsuits challenging their legality.

Under the law, social media companies would have been forced to verify the age of all users. Minors registering for an account would have been subject to various limitations: the content they shared would be seen only by connected accounts, and minor accounts could not be searched for or messaged by anyone other than followers or friends, making them effectively nonexistent to strangers.

The primary reason for the preliminary injunction is NetChoice's claim that the law violates the First Amendment. NetChoice is a trade association formed by tech giants such as X (formerly Twitter), Snap, Meta and Google. The association has managed to win court battles and block similar laws entirely or in part in states like Arkansas, California and Texas.


Original Submission

posted by janrinok on Monday September 23, @05:43AM   Printer-friendly
from the like-sands-through-the-hourglass dept.

Plenty of ups-and-downs are key to a great story, new research finds:

Since at least Aristotle, writers and scholars have debated what makes for a great story. One of them is Samsun Knight, a novelist who is also an economist and assistant professor of marketing at the University of Toronto's Rotman School of Management. With a scientist's tools, he's done what previous theorizers have failed to: put theory to the test and demonstrate the key factor for empirically predicting which stories will be snore fests and which will leave audiences hungry for more.

It turns out to be "narrative reversals" -- lots of them, and the bigger the better. Reversals, commonly known as changes of fortune or turning points, are moments where characters' fortunes swing from good to bad and vice versa. Prof. Knight and fellow researchers found that stories rich in them boosted popularity and engagement with audiences across a range of media, from television to crowdfunding pitches.

"The best-written stories were always either 'building up' a current reversal, or introducing a new plot point," says Prof. Knight. "In our analysis, the best writers were those that were able to maintain both many plot points and strong build-up for each plot point across the course of the narrative."

The researchers analyzed nearly 30,000 television shows, movies, novels, and crowdfunding pitches using computational linguistics, a blend of computer science and language analysis. This allowed them to quantify not only the number of reversals in a text but also their degree or intensity, by assigning numerical values to words based on how positive or negative they were.
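
As a toy sketch of how such a measurement can work (this is not the authors' actual pipeline), one can score each sentence with a word-valence lexicon and count sign changes in the resulting trajectory as reversals. The lexicon and example story below are invented for illustration.

    # Toy reversal counter: score sentences by word valence, then count
    # sign changes in the (optionally smoothed) sentiment trajectory.
    VALENCE = {"joy": 2, "love": 2, "win": 1, "hope": 1,
               "loss": -1, "fear": -1, "betrayal": -2, "death": -2}

    def sentence_score(sentence):
        words = (w.strip(".,!?").lower() for w in sentence.split())
        return sum(VALENCE.get(w, 0) for w in words)

    def count_reversals(sentences, window=1):
        scores = [sentence_score(s) for s in sentences]
        # A moving average over `window` sentences smooths word-level noise.
        smoothed = [sum(scores[max(0, i - window + 1):i + 1]) /
                    min(window, i + 1) for i in range(len(scores))]
        flips, prev_sign = 0, 0
        for s in smoothed:
            sign = (s > 0) - (s < 0)
            if sign and prev_sign and sign != prev_sign:
                flips += 1  # fortune swings from good to bad or back
            prev_sign = sign or prev_sign
        return flips

    story = ["They fall in love and win the lottery.",
             "A betrayal leads to loss and fear.",
             "Hope returns and joy follows."]
    print(count_reversals(story))  # -> 2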

Movies and television shows with more and bigger reversals were better rated on the popular ratings site IMDb. Books with the most and biggest reversals were downloaded more than twice as much as books with the fewest reversals from the free online library Project Gutenberg. And GoFundMe pitches with more and larger reversals were more likely to hit their fundraising goal, by as much as 39 per cent.

The Greek philosopher Aristotle was the first to identify peripeteia, the sudden reversal of circumstances, as a key feature of a good story. Other thinkers have added their ideas since then, including American playwright and dramaturg Leon Katz whose scholarship particularly inspired Prof. Knight's research. Katz "described the reversal as the basic unit of narrative, just as a sentence is the basic unit of a paragraph, or the syllogism is the basic unit of a logical proof," says Prof. Knight.

In addition to helping psychologists understand how narrative works to educate, inform and inspire people, the findings may also benefit storytellers of all kinds.

"Hopefully our research can help build a pedagogy for writers that allows them to rely on the accumulated knowledge of Aristotle et al. without having to 'reinvent the wheel' on their own every time," says Prof. Knight.

Journal Reference: Samsun Knight, Matthew D. Rocklage, and Yakov Bart, Narrative reversals and story success, Sci. Adv., Vol 10, Issue 34, 21 Aug 2024. DOI: 10.1126/sciadv.adl2013


Original Submission