
posted by Fnord666 on Monday September 18 2023, @08:44PM
from the AI-overlords dept.

https://arstechnica.com/information-technology/2023/09/ai-hype-reaches-coca-cola-with-new-y3000-flavor-co-created-with-ai/

Coca-Cola has taken a fizzy leap into the future of AI hype with the release of Coca‑Cola Y3000 Zero Sugar, a "limited-edition" beverage reportedly co-created with artificial intelligence. Its futuristic name evokes flavor in the year 3000 (still 977 years away), but its marketing relies on AI-generated imagery from 2023—courtesy of the controversial image synthesis model Stable Diffusion.

Stable Diffusion, which is mentioned by name when launching the "Coca-Cola Y3000 AI Cam" mobile app, gained its ability to generate images by scraping hundreds of millions of copyrighted works from the Internet without copyright-holder permission, and it is currently the subject of copyright-infringement litigation.
[...]
Coca-Cola says that the zero-sugar version of the new AI-augmented soda will be available for a limited time in "select markets" including the United States, Canada, China, Europe, and Africa. Thirsty futuristic folks in the US, Canada, and Mexico will also be able to buy an "original taste version" of Coca‑Cola Y3000 soon.


Original Submission

posted by Fnord666 on Monday September 18 2023, @03:59PM

Arthur T Knackerbracket has processed the following story:

In a controversial bid to expose supposed bias in a top journal, a US climate expert shocked fellow scientists by revealing he tailored a wildfire study to emphasize global warming.

While supporters applauded Patrick T. Brown for flagging what he called a one-sided climate "narrative" in academic publishing, his move surprised at least one of his co-authors—and angered the editors of leading journal Nature.

"I left out the full truth to get my climate change paper published," read the headline to an article signed by Brown in the news site The Free Press on September 5.

He said he deliberately focused on the impact from higher temperatures on wildfire risk in a study in the journal, excluding other factors such as land management.

"I just got published in Nature because I stuck to a narrative I knew the editors would like," the article read. "That's not the way science should work."

One of the named co-authors of the study, Steven J. Davis, a professor in the earth system science department at the University of California, Irvine, told AFP Brown's comments took him "by surprise".

"Patrick may have made decisions that he thought would help the paper be published, but we don't know whether a different paper would have been rejected," he said in an email.

"I don't think he has much evidence to support his strong claims that editors and reviewers are biased."

[...] "It is unfortunate, but not surprising, that Patrick felt like he had to be a willing participant in oversimplifying his work to have a career in science. In that long run, that is not a service to him, the field, or humanity."


Original Submission

posted by Fnord666 on Monday September 18 2023, @11:14AM
from the haxor dept.

https://hackaday.com/2023/09/11/cheap-lcd-uses-usb-serial/

Browsing the Asian marketplaces online is always an experience. Sometimes, you see things at ridiculously low prices. Other times, you see things and wonder who is buying them and why — a shrimp pillow? But sometimes, you see something that probably could have a more useful purpose than the proposed use case.

That's the case with the glut of "smart displays" you can find at very low prices.
[...]
Like a lot of this cheap stuff, these screens are sold under a variety of names, and apparently, there are some subtle differences. Two of the main makers of these screens are Turing and XuanFang, although you rarely see those names in the online listings. As you might expect, though, someone has reverse-engineered the protocol, and there is Python software that will replace the stock Windows software the devices use.
[...]
We are still tempted to reflash the CH552 to convert it to use a normal serial port. If you decide to give it a go, you'll need to figure out how to program it.
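
For the curious, here is a minimal sketch of what driving one of these displays from Python might look like, using the pyserial library. The port name, baud rate, and packet layout are illustrative guesses only, not the actual reverse-engineered protocol; see the Python project mentioned above for the real details.

    import serial  # pyserial

    PORT = "/dev/ttyACM0"  # hypothetical; varies by OS and device
    BAUD = 115200          # a plausible rate for a CH552 USB-serial bridge

    def set_brightness(ser, level):
        """Send a hypothetical 'set brightness' packet (opcode + value)."""
        packet = bytes([0xAA, 0x01, level & 0xFF])  # invented framing
        ser.write(packet)

    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        set_brightness(ser, 128)  # half brightness on our invented scale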


Original Submission

posted by hubie on Monday September 18 2023, @06:29AM
from the Oh-s#!t-Sherlock dept.

Arthur T Knackerbracket has processed the following story:

Israeli software maker Insanet has reportedly developed a commercial product called Sherlock that can infect devices via online adverts to snoop on targets and collect data about them for the biz's clients.

This is according to an investigation by Haaretz, which this week claimed the spyware system had been sold to a country that is not a democracy.

The newspaper's report, we're told, marks the first time details of Insanet and its surveillanceware have been made public. Furthermore, Sherlock is capable of drilling its way into Microsoft Windows, Google Android, and Apple iOS devices, according to cited marketing bumf.

[...] To market its snoopware, Insanet reportedly teamed up with Candiru, an Israel-based spyware maker that has been sanctioned in the US, to offer Sherlock along with Candiru's spyware – an infection of Sherlock will apparently set a client back six million euros ($6.7 million, £5.2 million), mind you.

[...] The Electronic Frontier Foundation's Director of Activism Jason Kelley said Insanet's use of advertising technology to infect devices and spy on clients' targets makes it especially worrisome. Dodgy online ads don't just provide a potential vehicle for delivering malware, such as via carefully crafted images or JavaScript that exploits vulnerabilities in browsers and OSes; they can also be used to go after specific groups of people – such as those who are interested in open source code, or who frequently travel to Asia – that someone might be interested in snooping on.

"This method of surveillance and targeting uses commercially available data that's very difficult to erase from the internet," Kelley told The Register. "Most people have no idea how much of their information has been compiled or shared by data brokers and ad tech companies, and have little ability to erase it."

It's an interesting twist. Sherlock seems designed to use legal data collection and digital advertising technologies — beloved by Big Tech and online media — to target people for government-level espionage. Other spyware, such as NSO Group's Pegasus or Cytrox's Predator and Alien, tends to be more precisely targeted.

"Threat-wise, this can be compared to malvertising where a malicious advertisement is blanket-pushed to unsuspecting users," Qualys threat research manager Mayuresh Dani told The Register.

[...] The good news for some, at least: it likely poses a minimal threat to most people, considering the multi-million-dollar price tag and other requirements for developing a surveillance campaign using Sherlock, Kelley noted. 

Still, "it's just one more way that spyware companies can surveil and target activists, reporters, and government officials," he said.

[...] "Data finds its way to being used for surveillance, and worse, all the time," he continued. "Stop making the data collection profitable, and this goes away. If behavioral advertising were banned, the industry wouldn't exist."


Original Submission

posted by hubie on Monday September 18 2023, @01:40AM

Arthur T Knackerbracket has processed the following story:

California has become the third US state to pass a right-to-repair bill for consumer electronics. After a unanimous vote in favor, Sacramento lawmakers expect Governor Gavin Newsom to sign the bill into law.

Senate Bill 244 (SB-244) contains more consumer protections than similar laws passed in New York and Minnesota. It stipulates that for electronics costing between $50 and $100, manufacturers must provide consumers and independent repair shops with replacement parts and repair manuals for three years after the initial manufacture date. That timespan extends to seven years for devices costing over $100. Although the law goes into effect on July 1, 2024, it applies retroactively to products manufactured after July 1, 2021.
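
As a quick sanity check on those windows, here is a small Python sketch; the thresholds come straight from the article, while the function itself is our illustration, not statutory text.

    from datetime import date

    def support_until(price, manufactured):
        """Date through which parts and manuals must be offered."""
        if price < 50:
            return None                   # below the law's floor
        years = 3 if price <= 100 else 7  # $50-$100: 3 yrs; over $100: 7 yrs
        # (ignoring Feb 29 edge cases for simplicity)
        return manufactured.replace(year=manufactured.year + years)

    # Retroactive reach: a $999 laptop made 2021-08-01 must be supported
    # until 2028-08-01, even though the law takes effect 2024-07-01.
    print(support_until(999.00, date(2021, 8, 1)))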

The law mainly applies to devices like phones, tablets, laptops, and other general-purpose appliances, but not alarm systems or video game consoles. Although manufacturers extracted fewer concessions in California than in Minnesota or New York, a few significant ones remain.

First, the bill doesn't require companies to provide instructions for bypassing security measures, which can often prove a significant obstacle to independent repairs. John Deere is notorious for using software locks to force users to spend extra money on first-party maintenance and replacements.

Another caveat to the California bill is that independent repair vendors must disclose when they use refurbished replacement parts or components sourced from third-party makers. This condition could affect how companies handle issues such as official repairs or warranties.


Original Submission

posted by hubie on Sunday September 17 2023, @08:56PM
from the dirty-pool dept.

https://arstechnica.com/health/2023/09/the-spectacular-downfall-of-a-common-useless-cold-medicine/

After spending decades on pharmacy shelves, the leading nasal decongestant in over-the-counter cold and allergy medicines has met its downfall.

Advisers for the Food and Drug Administration this week voted unanimously, 16 to 0, that oral doses of phenylephrine—found in brand-name products like Sudafed PE, Benadryl Allergy Plus Congestion, Mucinex Sinus-Max, and Nyquil Severe Cold & Flu—are not effective at treating a stuffy nose.

The vote was years in the making. In 2007, amid doubts, FDA advisers called for more studies. With the data that has trickled in since then, the agency's own scientists conducted a careful review and came to the firm conclusion that oral phenylephrine "is not effective as a nasal decongestant."


Original Submission

posted by janrinok on Sunday September 17 2023, @04:13PM

https://arstechnica.com/information-technology/2023/09/private-ai-summit-with-senate-titans-of-tech-garners-controversy/

On Wednesday, US Senator Chuck Schumer (D-N.Y.) hosted an "AI Insight Forum" in the Senate's office building about potential AI regulation. Attendees included billionaires and modern-day industry titans such as Elon Musk, Bill Gates, Mark Zuckerberg, OpenAI's Sam Altman, and Jensen Huang of Nvidia. But this heavily corporate guest list—with 14 out of 22 being CEOs—had some scratching their heads.

"This is the room you pull together when your staffers want pictures with tech industry AI celebrities. It's not the room you'd assemble when you want to better understand what AI is, how (and for whom) it functions, and what to do about it," wrote Signal President Meredith Whittaker on X.

Tech Industry Leaders Endorse Regulating Artificial Intelligence at Rare Summit in Washington:

The nation's biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the U.S. Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.

Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and "every single person raised their hands, even though they had diverse views," he said.

Among the ideas discussed was whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the United States can stay ahead of China and other countries.

"The key point was really that it's important for us to have a referee," said Elon Musk, CEO of Tesla and X, during a break in the daylong forum. "It was a very civilized discussion, actually, among some of the smartest people in the world."

Schumer will not necessarily take the tech executives' advice as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting in hopes that they would give senators some realistic direction for meaningful regulation.

Congress should do what it can to maximize AI's benefits and minimize the negatives, Schumer said, "whether that's enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails."

Other executives attending the meeting were Meta's Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting "might go down in history as being very important for the future of civilization."

First, though, lawmakers have to agree on whether to regulate, and how.

Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown mostly unchecked by government in the past several decades. Many lawmakers point to the failure to pass any legislation surrounding social media, such as for stricter privacy standards.

Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be "one of the most difficult issues we can ever take on," and he listed some of the reasons why: It's technically complicated, it keeps changing and it "has such a wide, broad effect across the whole world," he said.

Since the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.

Republican Sen. Mike Rounds of South Dakota, who led the meeting with Schumer, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop "on the positive side" while also taking care of potential issues surrounding data transparency and privacy.

"AI is not going away, and it can do some really good things or it can be a real challenge," Rounds said.

The tech leaders and others outlined their views at the meeting, with each participant getting three minutes to speak on a topic of their choosing. Schumer and Rounds then led a group discussion.

During the discussion, according to attendees who spoke about it, Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, and Zuckerberg brought up the question of closed vs. "open source" AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.

In terms of a potential new agency for regulation, "that is one of the biggest questions we have to answer and that we will continue to discuss," Schumer said. Musk said afterward he thinks the creation of a regulatory agency is likely.

Outside the meeting, Google CEO Pichai declined to give specifics but generally endorsed the idea of Washington involvement.

"I think it's important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion," he said.

Some senators were critical that the public was shut out of the meeting, arguing that the tech executives should testify in public.

Sen. Josh Hawley, R-Mo., said he would not attend what he said was a "giant cocktail party for big tech." Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., to require tech companies to seek licenses for high-risk AI systems.

"I don't know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public," Hawley said.

While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer's event risked emphasizing the concerns of big firms over everyone else.

Sarah Myers West, managing director of the nonprofit AI Now Institute, estimated that the combined net worth of the room Wednesday was $550 billion and it was "hard to envision a room like that in any way meaningfully representing the interests of the broader public." She did not attend.

In the United States, major tech companies have expressed support for AI regulations, though they don't necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.

There is also division, with some members of Congress worrying more about overregulation of the industry while others are concerned more about the potential risks. Those differences often fall along party lines.


Original Submission

posted by janrinok on Sunday September 17 2023, @11:31AM

As reported in New Atlas and many other places, the 2023 Ig Nobel winners have been announced.

The Ig Nobel Prize celebrates the most trivial and ridiculous things our best and brightest have studied. The 2023 award winners are now world-class experts who have advanced mankind's knowledge of big questions. Questions like "how much do horny anchovies influence ocean water mixing?" Or "if a group of people stand in the street looking upward, does the size of the group influence how many unrelated passers-by also decide to look up?" Or that old chestnut, "do we need toilets that analyze our excreta and identify us by taking photos of our anuses?"

Just a couple of examples to raise the level of excitement...

Medicine Prize: Christine Pham, Bobak Hedayati, Kiana Hashemi, Ella Csuka, Tiana Mamaghani, Margit Juhasz, Jamie Wikenheiser, and Natasha Mesinkovska, for using cadavers to explore whether there is an equal number of hairs in each of a person's two nostrils. Don't miss their riveting work "The Quantification and Measurement of Nasal Hairs in a Cadaveric Population."

Education Prize: Katy Tam, Cyanea Poon, Victoria Hui, Wijnand van Tilburg, Christy Wong, Vivian Kwong, Gigi Yuen, and Christian Chan, for methodically studying the boredom of teachers and students. If anything could be more exciting than boredom, it's surely their paper, "Boredom Begets Boredom: An Experience Sampling Study on the Impact of Teacher Boredom on Student Boredom and Motivation."


Original Submission

posted by janrinok on Sunday September 17 2023, @06:49AM
from the whoops dept.

Mistranslation Of Newton's First Law Discovered After Nearly 300 Years:

For hundreds of years, we have been told what Newton's First Law of Motion supposedly says, but recently a paper published in Philosophy of Science (preprint) by [Daniel Hoek] argues that it is based on a mistranslation of the original Latin text. As noted by [Stephanie Pappas] in Scientific American, this would seem to be a rather academic matter as Newton's Laws of Motion have been superseded by General Relativity and other theories developed over the intervening centuries. Yet even today Newton's theories are highly relevant, as they provide very accessible approximations for predicting phenomena on Earth.

Similarly, we owe it to scientific and historical accuracy to address such matters, all of which seem to come down to Andrew Motte's awkward 1729 English translation of the 1726 third edition of Isaac Newton's original Latin text. This English translation is what ended up defining for countless generations what Newton's Laws of Motion said, along with the other chapters in his Philosophiæ Naturalis Principia Mathematica.

In 1999 a new translation (the Cohen-Whitman translation) was published by a team of translators; it contains a number of notable departures from the 1729 translation. Most notable here is the change from the original (Motte) translation:

Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impress'd thereon.

to the following in the Cohen-Whitman translation:

Every body perseveres in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by the forces impressed.

This more correct translation of the Latin nisi quatenus has significant implications for the law's meaning. Newton's version does not require force-free bodies; the weak reading introduced by Motte's translation does, and it invites exactly the kind of debate seen over the centuries about why the First Law even exists, since in that form it follows automatically from the Second Law, rendering it redundant.
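
To see why the weak reading makes the First Law redundant, note that the force-free case is already contained in the Second Law:

    \[
      \vec{F} = m\vec{a}
      \quad\Longrightarrow\quad
      \bigl(\vec{F} = \vec{0} \;\Rightarrow\; \vec{a} = \vec{0}\bigr),
    \]

so a body with no net force necessarily keeps moving uniformly, and the weak First Law adds nothing. Newton's "except insofar as" phrasing instead applies to every body, force-free or not, which is why the First Law stands on its own under the corrected translation.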


Original Submission

posted by janrinok on Sunday September 17 2023, @02:03AM

Arthur T Knackerbracket has processed the following story:

The decision, published Friday, was hailed by conservative litigation group the New Civil Liberties Alliance as a victory for free speech. But Eric Goldman, a professor at Santa Clara University School of Law, believes Biden administration foes may have scored an own-goal.

The lower court ruling [PDF], from Louisiana federal district Judge Terry A. Doughty on July 4, partially granted an injunction that broadly limited the extent to which US government agencies can deem content so potentially harmful that they urge social media sites to remove it from their services.

Judge Doughty determined that the plaintiffs – the State of Missouri, the State of Louisiana, Dr Aaron Kheriaty, Dr Martin Kulldorff, Jim Hoft, Dr Jayanta Bhattacharya, and Jill Hines – made sufficiently strong arguments that their speech was suppressed at the direction of the government that they are likely to succeed at trial.

In short: the judge partially granted their request to prohibit the government from telling social media companies how to moderate content.

"Although this case is still relatively young, and at this stage the court is only examining it in terms of plaintiffs' likelihood of success on the merits, the evidence produced thus far depicts an almost dystopian scenario," Judge Doughty wrote in a memorandum explaining his ruling.

"During the COVID-19 pandemic, a period perhaps best characterized by widespread doubt and uncertainty, the United States government seems to have assumed a role similar to an Orwellian 'Ministry of Truth.'"

[...] The Fifth Circuit, called the "most politically conservative circuit court" in the US, dialed that injunction back somewhat. The appellate ruling [PDF] affirmed part of the lower court's ruling, reversed part of it, vacated part of the injunction, and modified another part.

The three-judge appeals panel said nine of the lower court's ten prohibitions were vague and overly broad at this stage of the litigation.

"Prohibitions one, two, three, four, five, and seven prohibit the officials from engaging in, essentially, any action 'for the purpose of urging, encouraging, pressuring, or inducing' content moderation," the appeals panel said. "But 'urging, encouraging, pressuring' or even 'inducing' action does not violate the Constitution unless and until such conduct crosses the line into coercion or significant encouragement."

And citing problems with prohibitions eight, nine and ten, they vacated all save for the sixth, which they modified to state that government officials or their agents can take "no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech."

Not all speech in the US is protected, so this injunction – in place while the case is being heard – does not apply to government communication to social media companies about: incitement to imminent unlawful action; harassment; credible threats; defamation; obscenity and child pornography; among other exceptions.

"The line between impermissible state intervention and ordinary government functions is really murky, and this opinion doesn't really try to clarify that," Santa Clara University’s Goldman told The Register in a phone interview.

"They simply decide some things are impermissible. Other things are okay. And that makes the rule from the case impossible to operationalize, for the government and possibly for the services. Nobody exactly knows what they're going to be required to do based on this ruling."

[...] "The court said it is impermissible for the government to commandeer content moderation practices," he said. "But that's exactly what the Florida and Texas social media censorship laws did. They literally overrode the social media companies' editorial discretion via government edict.

"And thus, the Fifth Circuit, the same court, upheld those interventions, saying that was constitutionally permissible for the government to dictate content moderation operations. In other words, this opinion is in irreconcilable tension with the Fifth Circuit's earlier opinion on the social media censorship laws."

Also, Goldman observed that the Fifth Circuit seems to be saying that these social media companies risk becoming state actors by engaging with government officials.

For example, with regard to platform cooperation in limiting health misinformation, there's a passage in the opinion that says, "In sum, we find that the White House officials, in conjunction with the Surgeon General's office, coerced and significantly encouraged the platforms to moderate content. As a result, the platforms' actions 'must in law be deemed to be that of the State.'"

"That's a huge problem for the government," he continued. "If internet companies become state actors, then they cannot report information about their users to law enforcement unless they comply with all the laws on criminal procedure."

As an example, Goldman cited how the government requires internet services to provide data about child sexual abuse material. If those companies become state actors through government intervention, he said, then those reports become impermissible evidence because they haven't been done in compliance with legal rules that constrain the government.


Original Submission

posted by hubie on Saturday September 16 2023, @09:17PM
from the wget-is-safer dept.

Free Download Manager Site Compromised to Distribute Linux Malware to Users for 3+ Years:

A download manager site served Linux users malware that stealthily stole passwords and other sensitive information for more than three years as part of a supply chain attack.

The modus operandi entailed establishing a reverse shell to an actor-controlled server and installing a Bash stealer on the compromised system. The campaign, which took place between 2020 and 2022, is no longer active.

"This stealer collects data such as system information, browsing history, saved passwords, cryptocurrency wallet files, as well as credentials for cloud services (AWS, Google Cloud, Oracle Cloud Infrastructure, Azure)," Kaspersky researchers Georgy Kucherin and Leonid Bezvershenko said.

The website in question is freedownloadmanager[.]org, which, according to the Russian cybersecurity firm, offers legitimate Linux software called "Free Download Manager," but starting in January 2020, began redirecting some users who attempted to download it to another domain, deb.fdmpkg[.]org, that served a booby-trapped Debian package.

It's suspected that the malware authors engineered the attack based on certain predefined filtering criteria (say, a digital fingerprint of the system) to selectively lead potential victims to the malicious version. The rogue redirects ended in 2022 for reasons unknown.

[...] It's not immediately clear how the compromise actually took place and what the end goals of the campaign were. What's evident is that not everyone who downloaded the software received the rogue package, enabling it to evade detection for years.
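
A rough sketch of the kind of server-side gatekeeping the researchers suspect: only visitors matching some fingerprint criteria get steered to the rogue package, so most downloads stay clean and the campaign stays under the radar. The criteria below are invented for illustration.

    def download_url(fingerprint):
        looks_interesting = (
            fingerprint.get("os") == "linux"
            and fingerprint.get("country") in {"XX", "YY"}  # hypothetical targets
        )
        if looks_interesting:
            # booby-trapped Debian package (domain defanged as in the article)
            return "https://deb.fdmpkg[.]org/freedownloadmanager.deb"
        return "https://freedownloadmanager[.]org/download/fdm.deb"  # legitimate

    print(download_url({"os": "linux", "country": "XX"}))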

"While the campaign is currently inactive, this case of Free Download Manager demonstrates that it can be quite difficult to detect ongoing cyberattacks on Linux machines with the naked eye," the researchers said.

"Thus, it is essential that Linux machines, both desktop and server, are equipped with reliable and efficient security solutions."

[EDITOR'S NOTE: We have been informed by the Free Download Manager Team that all their sites are now secure. This does not in any way affect the content of this story which covers a 3 year period beginning in 2020. JR, 18092023-06:32UTC]


Original Submission

posted by hubie on Saturday September 16 2023, @04:29PM

Unity's new "per-install" pricing enrages the game development community

https://arstechnica.com/gaming/2023/09/game-developers-unite-against-unitys-new-per-install-pricing-structure/

For years, the Unity Engine has earned goodwill from developers large and small for its royalty-free licensing structure, which meant developers incurred no extra costs based on how well a game sold. That goodwill has now been largely thrown out the window due to Unity's Tuesday announcement of a new fee structure that will start charging developers on a "per-install" basis after certain minimum thresholds are met.
[...]
"There's no royalties, no fucking around," Unity CEO John Riccitiello memorably told GamesIndustry.biz when rolling out the free Personal tier in 2015. "We're not nickel-and-diming people, and we're not charging them a royalty. When we say it's free, it's free."

Now that Unity has announced plans to nickel-and-dime successful Unity developers (with a fee that is not technically a royalty), the reaction from those developers has been swift and universally angry, to put it mildly. "I can say, unequivocally, if you're starting a new game project, do not use Unity," Necrosoft Games' Brandon Sheffield—a longtime Unity Engine supporter—said in a post entitled "The Death of Unity." "Unity is quite simply not a company to be trusted."
[...]
Unity initially told Axios' Stephen Totilo that the "per-install" fee applies even if a single user deleted and re-installed a game or installed it on two devices. A few hours later, though, Totilo reported that Unity had "regrouped" and decided to only charge developers for a user's initial installation of a game on a single device (but an initial installation on a secondary device—such as a Steam Deck—would still count as a second install).
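
In code terms, the counting rule Unity settled on looks something like the following sketch: re-installs on the same device are free, but the first install on each additional device bills again. The event shape is our invention for illustration.

    def billable_installs(events):
        """events: iterable of (user_id, device_id) pairs, one per install."""
        seen = set()
        count = 0
        for user, device in events:
            if (user, device) not in seen:
                seen.add((user, device))
                count += 1  # first install on this device bills
        return count

    events = [("u1", "pc"), ("u1", "pc"), ("u1", "steamdeck")]
    print(billable_installs(events))  # 2: the re-install on "pc" is free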

Meanwhile, in its FAQ, Unity made a vague promise to adapt "fraud detection practices in our Ads technology, which is solving a similar problem" to prevent developers from being charged for pirated copies.

Unity shuts two offices, citing threats after controversial pricing changes

https://arstechnica.com/gaming/2023/09/potential-threat-shuts-two-unity-offices-after-per-install-fee-announcement/

Unity Technologies has temporarily closed two of its offices amid what the company says are threats to employee safety. The move follows Tuesday's announcement of a highly controversial new fee structure for the company's popular Unity Engine.

News of the closures started dripping out via social media this morning, with employees describing "credible threats" reported to law enforcement and "safety threats" targeting the company's San Francisco and Austin, Texas, offices. "Surprising how far people are willing to go in today's age," Unity Product Manager Utsav Jamwal wrote. "Unfortunate."
[...]
A Bloomberg report confirmed that the Austin and San Francisco offices had been closed and reported that the closure had led to the cancellation of a planned employee town hall meeting led by CEO John Riccitiello.
[...]
Garry Newman, creator of Garry's Mod and the Unity-based Rust, also announced Wednesday that "Rust 2 definitely won't be a Unity game," because "Unity has shown its power. We can see what they can and are willing to do. You can't un-ring that bell... The trust is gone."


Original Submission #1 | Original Submission #2

posted by hubie on Saturday September 16 2023, @11:41AM
from the looking-for-the-flying-pigs-now dept.

Arthur T Knackerbracket has processed the following story:

The Biden administration is trying to take a paternalistic role in stewarding the development of AI for major tech firms. It’s not exactly leading from the front but is instead placing a gentle, reaffirming hand on the shoulders of big tech, telling them to be cautious and open about how they lay out the future of the transformative tech.

Some of the biggest tech firms have agreed to the White House’s voluntary commitment on ethical AI, including some companies that are already using AI to help militaries kill more effectively and to monitor citizens at home.

On Tuesday, the White House proclaimed that eight more big tech companies have accepted President Joe Biden’s guiding hand. These commitments include that companies will share safety and safeguarding information with other AI makers. They would have to share information with the public about their AI’s capability and limitations and use AI to “help address society’s greatest challenges.” Among the few tech companies to agree to the White House’s latest cooperative agreement is the defense contractor Palantir, a closed-door data analytics company known for its connections with spy agencies like the CIA and FBI as well as governments and militaries around the world.

The other seven companies to agree to the voluntary commitment include major product companies like Adobe, IBM, Nvidia, and Salesforce. In addition, several AI firms such as Cohere, Scale AI, and Stability have joined the likes of Microsoft, OpenAI, and Google in facilitating third-party testing and watermarking for their AI systems.

[...] Palantir CTO Shyam Sankar previously made comments during a Senate Armed Services Committee hearing that any kind of pause on AI development would mean that China could get the better of the U.S. in technological supremacy. He was adamant that the U.S. should spend even more of its defense budget on “capabilities that will terrify our adversaries.”

Imagine the use of AI for information warfare, as Palantir CEO Alex Karp harped on during a February summit on AI-military tech. The company is already providing its data analytics software to the Ukrainian military for battlefield targeting, Karp reportedly said. Still, the CEO did mention that there needs to be “architecture that allows transparency on the data sources,” which should be “mandated by law.” Of course, that's not to say Palantir has been expressly open about its own data for any of its many military contracts.

[...] So far, the Biden administration has focused on non-binding recommendations and other executive orders to try and police encroaching AI proliferation. White House Chief of Staff Jeff Zients told Reuters the administration is “pulling every lever we have” to manage the risks of AI. Still, we’re nowhere close to seeing real AI regulation from Congress, but knowing the hand AI developers want to play in crafting any new law, there are little to no signs we’ll see real constraints placed on the development of privacy-demolishing and military-focused AI.


Original Submission

posted by hubie on Saturday September 16 2023, @06:57AM
from the fungus-among-us dept.

Arthur T Knackerbracket has processed the following story:

Clogs in water recovery systems on the International Space Station have been so backed up that hoses have had to be sent back to Earth for cleaning and refurbishing. This is thanks to the buildup of biofilms: consortia of microorganisms that stick to each other, and often also to surfaces — the insides of water recovery tubing, for instance. These microbial or fungal growths can clog filters in water processing systems and make astronauts sick.

[...] In a collaboration between researchers at the University of Colorado, MIT, and the NASA Ames Research Center, scientists studied samples from the space station using a specific, well-understood gram-negative bacterium. They also joined forces with experts at LiquiGlide, a company run by MIT researcher Kripa Varanasi that specializes in “eliminating the friction between solids and liquids.” The multidisciplinary study found that covering surfaces with a thin layer of nucleic acids prevented bacterial growth on the ISS-exposed samples.

The scientists concluded that these acids carried a slight negative electric charge that stopped microbes from sticking to surfaces. It's worth noting, though, that the bacteria were up against a unique physical barrier as well as a chemical one: testing surfaces were etched into "nanograss." These silicon spikes, which resembled a tiny forest, were then slicked with a silicone oil, creating a slippery surface which biofilms struggled to adhere to.


Original Submission

posted by Fnord666 on Saturday September 16 2023, @02:14AM
from the chaotic-neutral dept.

Researchers have built their own computer game to test the impact of meters which give players a morality score for the decisions they make while playing:

Two papers published by a multi-disciplinary research team reveal that most of us ignore the meter when a moral choice is clear, but we use it when the choice is more morally ambiguous. And some of us, about 10 per cent, will do anything to win.

[...] The game story centres on Frankie, an usher in a cinema in regional Australia in the 1940s, who is confronted by a murderous psychopath.

Along the way, players must make choices which affect the progress and outcome of the game. Some are simple black and white decisions, such as whether to take money or not, but others are what developers call 'trolley problems', where players must decide whether they will kill or harm someone if it saves others.

Each choice is labelled with a score of good or evil, and your total morality score registers on a meter at the top of the screen throughout the game. But the moral impact of a choice is not always clear. Do you rob a homeless person of money that could assist you? What happens if the moral score insists this is a good thing?

"Our hypothesis was that under that particular circumstance, players might choose to steal," says Dr Malcolm Ryan.

"But we were relieved to find telling people that stealing money is good doesn't change their response. Although there will always be about 10 per cent who will choose to do it anyway.

"Morality meters, that indicate how good or evil your avatar is, have been around in computer games since 1985 when Richard Garriott pioneered the idea in Ultima IV."

[...] "For me, the entertainment value of games is primary. I want to improve them as a designer, not just because they are fun, but because I want to see them become like more mature works of art and literature, able to deal with serious topics of morality.

"Games provide a way of simulating different moral scenarios and asking what is the right thing to do."

[...] The first, published in the journal Games and Culture, was qualitative, exploring the feelings of players and their responses to the morality meter. It showed a difference between players who made choices simply to maximise their morality score, and others who viewed the meter as some sort of moral guide.

A second paper published earlier this year in Computers in Human Behavior provides the first quantitative data on morality meters. The results show the meter is generally ignored when a moral choice is straightforward, but it can influence decisions when the choice is morally ambiguous.

Journal References:
    Formosa, P., Ryan, M., Howarth, S., Messer, J., & McEwan, M. (2022). Morality Meters and Their Impacts on Moral Choices in Videogames: A Qualitative Study. Games and Culture, 17(1), 89–121. https://doi.org/10.1177/15554120211017040
    Ryan, M., et al. (2023). The effect of morality meters on ethical decision making in video games: A quantitative study. Computers in Human Behavior, 142, 107623. https://doi.org/10.1016/j.chb.2022.107623


Original Submission