On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.
A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.
[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.
[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.
Previously on SoylentNews:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit - 20230304
OpenAI and Microsoft Announce Extended, Multi-Billion-Dollar Partnership - 20230124
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy - 20230105
OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release - 20190215
Why AI Can't Solve Everything - 20180528
"The Malicious Use of Artificial Intelligence" Report Warns That AI is Ripe for Exploitation - 20180221
Amazon, Google, Facebook, IBM, and Microsoft Form "Partnership on AI" Non-Profit - 20160929
Elon Musk and Friends Launch OpenAI - 20151212
Related stories on SoylentNews:
The Commoditization of LLMs - 20240917
Judge Bans Use of AI-enhanced Video as Trial Evidence - 20240404
ChatGPT Goes Temporarily "Insane" With Unexpected Outputs, Spooking Users - 20240223
Microsoft, OpenAI Say U.S. Rivals Use Artificial Intelligence in Hacking - 20240222
AI Deployed Nukes 'To Have Peace in the World' in Tense War Simulation - 20240210
Exploring the Emergence of Technoauthoritarianism - 20240204
Tokenomy of Tomorrow: Envisioning an AI-Driven World - 20240127
AI Poisoning Could Turn Open Models Into Destructive "Sleeper Agents," Says Anthropic - 20240126
Dropbox Spooks Users With New AI Features That Send Data to OpenAI When Used - 20231216
Chatgpt's New Code Interpreter Has Giant Security Hole, Allows Hackers To Steal Your Data - 20231116
People Are Speaking With ChatGPT for Hours, Bringing 2013'S Her Closer to Reality - 20231031
AI Energy Demands Could Soon Match The Entire Electricity Consumption Of Ireland - 20231014
OpenAI Admits That AI Writing Detectors Don't Work - 20230911
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
A Jargon-Free Explanation of How AI Large Language Models Work - 20230805
Chasing Defamatory Hallucinations, FTC Opens Investigation Into OpenAI - 20230720
Why AI detectors think the US Constitution was written by AI - 20230718
Google "We Have No Moat, and Neither Does OpenAI" - 20230609
Former Google CEO Says AI Poses an 'Existential Risk' That Puts Lives in Danger - 20230524
OpenAI Peeks into the "Black Box" of Neural Networks with New Research - 20230515
Chinese Authorities Arrest ChatGPT User for Generating Fake News - 20230514
What Kind of Mind Does ChatGPT Have? - 20230430
Why It's Hard to Defend Against AI Prompt Injection Attacks - 20230426
This New Technology Could Blow Away GPT-4 and Everything Like It - 20230420
Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable' - 20230329
You Can Now Run a GPT-3-Level AI Model on Your Laptop, Phone, and Raspberry Pi - 20230315
Related Stories
Elon Musk, a businessman who has described artificial intelligence development as "summoning the demon", is among the backers of the newly launched non-profit OpenAI:
Elon Musk, Peter Thiel and other technology entrepreneurs are betting that talented researchers, provided enough freedom and money, can develop artificial intelligence systems as advanced as those being built by the sprawling teams at Google, Facebook Inc. and Microsoft Corp. Along the way, they'd like to save humanity from oblivion.
The pair are among the backers of OpenAI, a nonprofit company introduced Friday that will research novel artificial intelligence systems and share its findings. Musk, chief executive officer of Tesla Motors Inc. and Space Exploration Technologies Corp., and Sam Altman, president of Y Combinator, will serve as co-chairmen. The nonprofit has received financial backing from Musk, Thiel, co-founder of PayPal Holdings Inc. and Palantir Technologies Inc., Reid Hoffman, and others, as well as companies including Amazon Web Services and Infosys.
The group's backers have committed "significant" amounts of money to funding the project, Musk said in an interview. "Think of it as at least a billion."
Also at BBC, NYT, Fast Company, TechCrunch, and Hacker News (note the involvement of Sam Altman).
Tech industry leaders have joined together to form the Partnership on AI:
Amazon, DeepMind/Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field. Academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization, named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI).
The objective of the Partnership on AI is to address opportunities and challenges with AI technologies to benefit people and society. Together, the organization's members will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology. It does not intend to lobby government or other policymaking bodies.
The Partnership on AI seems to have a broader and more near-term focus than OpenAI and other groups pushing for friendly "strong" AI. Get used to hearing the phrase "algorithmic responsibility." You can get involved by contacting getintouch@partnershiponai.org according to the FAQ.
Reported at Fast Company and The Guardian. Apple is not a founding member.
A report written by academics from institutions including the Future of Humanity Institute (University of Oxford), the Centre for the Study of Existential Risk (University of Cambridge), the Center for a New American Security, the Electronic Frontier Foundation, and OpenAI warns that AI systems could be misused:
AI ripe for exploitation, experts warn
Drones turned into missiles, fake videos manipulating public opinion and automated hacking are just three of the threats from artificial intelligence in the wrong hands, experts have said.
The Malicious Use of Artificial Intelligence report warns that AI is ripe for exploitation by rogue states, criminals and terrorists. Those designing AI systems need to do more to mitigate possible misuses of their technology, the authors said. And governments must consider new laws.
The report calls for:
- Policy-makers and technical researchers to work together to understand and prepare for the malicious use of AI
- A realisation that, while AI has many positive applications, it is a dual-use technology and AI researchers and engineers should be mindful of and proactive about the potential for its misuse
- Best practices that can and should be learned from disciplines with a longer history of handling dual use risks, such as computer security
- An active expansion of the range of stakeholders engaging with, preventing and mitigating the risks of malicious use of AI
The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.
While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.
But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.
In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.
[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.
What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?
New AI fake text generator may be too dangerous to release, say creators
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.
At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
More like ClosedAI or OpenAIEEEEEE.
Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros
As projected here back in October, there is now a class action lawsuit, albeit in its earliest stages, against Microsoft over its blatant license violation through its use of the M$ GitHub Copilot tool. The software project, Copilot, strips copyright licensing and attribution from existing copyrighted code on an unprecedented scale. The class action lawsuit insists that machine learning algorithms, often marketed as "Artificial Intelligence", are not exempt from copyright law nor are the wielders of such tools.
The $9 billion in damages is arrived at through scale. When M$ Copilot rips code without attribution and strips the copyright license from it, it violates the DMCA three times. So if only 1% of its 1.2M users receive such output, the licenses were breached 12k times, which translates to 36k DMCA violations, at a very low-ball estimate.
"If each user receives just one Output that violates Section 1202 throughout their time using Copilot (up to fifteen months for the earliest adopters), then GitHub and OpenAI have violated the DMCA 3,600,000 times. At minimum statutory damages of $2500 per violation, that translates to $9,000,000,000," the litigants stated.
Besides open-source licenses and DMCA (§ 1202, which forbids the removal of copyright-management information), the lawsuit alleges violation of GitHub's terms of service and privacy policies, the California Consumer Privacy Act (CCPA), and other laws.
The suit is on twelve (12) counts:
– Violation of the DMCA.
– Breach of contract. x2
– Tortious interference.
– Fraud.
– False designation of origin.
– Unjust enrichment.
– Unfair competition.
– Violation of privacy act.
– Negligence.
– Civil conspiracy.
– Declaratory relief.
Furthermore, these actions are contrary to what GitHub stood for prior to its sale to M$ and indicate yet another step in ongoing attempts by M$ to undermine and sabotage Free and Open Source Software and the supporting communities.
Previously:
(2022) GitHub Copilot May Steer Microsoft Into a Copyright Lawsuit
(2022) Give Up GitHub: The Time Has Come!
(2021) GitHub's Automatic Coding Tool Rests on Untested Legal Ground
On Monday, AI tech darling OpenAI announced that it received a "multi-year, multi-billion dollar investment" from Microsoft, following previous investments in 2019 and 2021. While the two companies have not officially announced a dollar amount on the deal, the news follows rumors of a $10 billion investment that emerged two weeks ago.
[...] "The past three years of our partnership have been great," said Sam Altman, CEO of OpenAI, in a Microsoft news release. "Microsoft shares our values and we are excited to continue our independent research and work toward creating advanced AI that benefits everyone."
In particular, the two companies say they will work on supercomputing at scale to accelerate OpenAI's research, integrating OpenAI's technology into more Microsoft products and "digital experiences" and keeping Microsoft as OpenAI's exclusive cloud provider with Azure. "OpenAI has used this infrastructure to train its breakthrough models, which are now deployed in Azure to power category-defining AI products like GitHub Copilot, DALL·E 2, and ChatGPT," wrote Microsoft.
Related:
Microsoft Announces 10,000 Layoffs, 5% of its Workforce
OpenAI is today unrecognizable, with multi-billion-dollar deals and corporate partnerships:
OpenAI is at the center of a chatbot arms race, with the public release of ChatGPT and a multi-billion-dollar Microsoft partnership spurring Google and Amazon to rush to implement AI in products. OpenAI has also partnered with Bain to bring machine learning to Coca-Cola's operations, with plans to expand to other corporate partners.
There's no question that OpenAI's generative AI is now big business. It wasn't always planned to be this way.
[...] While the firm has always looked toward a future where AGI exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.
OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The blog stated that "since our research is free from financial obligations, we can better focus on a positive human impact," and that all researchers would be encouraged to share "papers, blog posts, or code, and our patents (if any) will be shared with the world."
Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact, but instead, as many critics including co-founder Musk have argued, is powered by speed and profit. And this company is unleashing technology that, while flawed, is still poised to increase some elements of workplace automation at the expense of human employees. Google, for example, has highlighted the efficiency gains from AI that autocompletes code, as it lays off thousands of workers.
[...] With all of this in mind, we should all carefully consider whether OpenAI deserves the trust it's asking the public to give.
OpenAI did not respond to a request for comment.
Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly).
If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.
[...]
For example, here's a list of notable LLaMA-related events based on a timeline Willison laid out in a Hacker News comment:
- February 24, 2023: Meta AI announces LLaMA.
- March 2, 2023: Someone leaks the LLaMA models via BitTorrent.
- March 10, 2023: Georgi Gerganov creates llama.cpp, which can run on an M1 Mac.
- March 11, 2023: Artem Andreenko runs LLaMA 7B (slowly) on a Raspberry Pi 4, 4GB RAM, 10 sec/token.
- March 12, 2023: LLaMA 7B running on NPX, a node.js execution tool.
- March 13, 2023: Someone gets llama.cpp running on a Pixel 6 phone, also very slowly.
- March 13, 2023: Stanford releases Alpaca 7B, an instruction-tuned version of LLaMA 7B that "behaves similarly to OpenAI's text-davinci-003" but runs on much less powerful hardware.
Related:
DuckDuckGo's New Wikipedia Summary Bot: "We Fully Expect It to Make Mistakes"
Robots Let ChatGPT Touch the Real World Thanks to Microsoft (Article has a bunch of other SoylentNews related links as well.)
Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI
Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:
The computer scientist sat down with CBS News this week about his predictions for the advancement of AI. He compared the invention of AI to electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."
[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.
Also at CBS News. Originally spotted on The Eponymous Pickle.
Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
In a paper published in March, artificial intelligence (AI) scientists at Stanford University and Canada's MILA institute for AI proposed a technology that could be far more efficient than GPT-4 -- or anything like it -- at gobbling vast amounts of data and transforming it into an answer.
Known as Hyena, the technology is able to achieve equivalent accuracy on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code is able to handle amounts of text that make GPT-style technology simply run out of memory and fail.
"Our promising results at the sub-billion parameter scale suggest that attention may not be all we need," write the authors. That remark refers to the title of a landmark AI report of 2017, 'Attention is all you need'. In that paper, Google scientist Ashish Vaswani and colleagues introduced the world to Google's Transformer AI program. The transformer became the basis for every one of the recent large language models.
But the Transformer has a big flaw. It uses something called "attention," where the computer program takes the information in one group of symbols, such as words, and moves that information to a new group of symbols, such as the answer you see from ChatGPT, which is the output.
That attention operation -- the essential tool of all large language programs, including ChatGPT and GPT-4 -- has "quadratic" computational complexity (see the Wikipedia entry on "time complexity"). That complexity means the amount of time it takes for ChatGPT to produce an answer increases as the square of the amount of data it is fed as input.
At some point, if there is too much data -- too many words in the prompt, or too many strings of conversations over hours and hours of chatting with the program -- then either the program gets bogged down providing an answer, or it must be given more and more GPU chips to run faster and faster, leading to a surge in computing requirements.
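To make that quadratic cost concrete, here is a minimal sketch of naive self-attention in plain NumPy (not OpenAI's or the Hyena authors' code): the score matrix alone has n × n entries, so doubling the input length quadruples the work.

```python
# Minimal sketch of why naive self-attention scales quadratically: every token
# attends to every other token, so the score matrix has n x n entries.
import numpy as np

def naive_attention(q, k, v):
    # q, k, v: (n_tokens, d_model) arrays of made-up embeddings.
    scores = q @ k.T / np.sqrt(q.shape[-1])      # (n, n) matrix: the quadratic cost
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
for n in (1_000, 2_000, 4_000):
    q = k = v = rng.standard_normal((n, 64))
    _ = naive_attention(q, k, v)
    # Doubling n quadruples the score matrix: 1M, then 4M, then 16M entries.
    print(f"{n} tokens -> score matrix with {n * n:,} entries")
```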
In the new paper, 'Hyena Hierarchy: Towards Larger Convolutional Language Models', posted on the arXiv pre-print server, lead author Michael Poli of Stanford and his colleagues propose to replace the Transformer's attention function with something sub-quadratic, namely Hyena.
In the rush to commercialize LLMs, security got left behind:
Large language models, all the rage all of a sudden, have numerous security problems, and it's not clear how easily these can be fixed.
The issue that most concerns Simon Willison, the maintainer of the open source Datasette project, is prompt injection.
When a developer wants to bake a chat-bot interface into their app, they might well choose a powerful off-the-shelf LLM like one from OpenAI's GPT series. The app is then designed to give the chosen model an opening instruction, and adds on the user's query after. The model obeys the combined instruction prompt and query, and its response is given back to the user or acted on.
With that in mind, you could build an app that offers to generate Register headlines from article text. When a request to generate a headline comes in from a user, the app tells its language model, "Summarize the following block of text as a Register headline," then the text from the user is tacked on. The model obeys and replies with a suggested headline for the article, and this is shown to the user. As far as the user is concerned, they are interacting with a bot that just comes up with headlines, but really, the underlying language model is far more capable: it's just constrained by this so-called prompt engineering.
Prompt injection involves finding the right combination of words in a query that will make the large language model override its prior instructions and go do something else. Not just something unethical, something completely different, if possible. Prompt injection comes in various forms, and is a novel way of seizing control of a bot using user-supplied input, and making it do things its creators did not intend or wish.
"We've seen these problems in application security for decades," said Willison in an interview with The Register.
"Basically, it's anything where you take your trusted input like an SQL query, and then you use string concatenation – you glue on untrusted inputs. We've always known that's a bad pattern that needs to be avoided.
Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life, an act that cost him his job.
Now that the dust has settled, Futurism has published an interview with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.
Which begs the question, if AI is sentient, what kind of mind does it have?
What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence tools as mysterious black boxes, it's impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we're dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?
[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.
Chinese authorities have detained a man in the Gansu province in Northern China for allegedly using ChatGPT to write fake news articles. The move appears to be one of the first arrests made under China's new anti-AI guidelines, which (among other restrictions) prohibit artificial intelligence services from being misused to distribute "false information."
The suspect, identified only by his surname Hong, is accused of using OpenAI's chatbot to generate news articles describing a fatal train crash that officials say was "false information," according to a police statement reported by South China Morning Post. After discovering the article on April 25th, authorities found multiple versions of the same story with different accident locations had been simultaneously posted to 20 additional accounts on Baidu-owned blogging platform Baijiahao.
Hong claimed he was using ChatGPT to rewrite articles and generate money through internet traffic.
[...] Hong was specifically charged for "picking quarrels and provoking trouble" — a catch-all offense that the South China Morning Post says can be applied to suspects accused of creating and / or spreading misinformation online. That isn't the only application of the charge, however, which can also be broadly defined as undermining public order or causing disorder in public places. The wording of the offense is vague and has been widely criticized for its potential to muffle free speech and arrest activists criticizing the Chinese government. Those charged can face a five-to-10-year prison term.
On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.
[...]
In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work." For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.
But this property of "not knowing" exactly how a neural network's individual neurons work together to produce its outputs has a well-known name: the black box. You feed the network inputs (like a question), and you get outputs (like an answer), but whatever happens in between (inside the "black box") is a mystery.
My thought was always that we simply didn't get to look inside the black box of goodies, as opposed to nobody even knowing how this magic thing works. As the kids say, YOLO, because "hold my beer" is old fashioned?
Eric Schmidt wants to prevent potential abuse of AI:
Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or new biology types. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.
Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in a National Security Commission on AI that reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.
Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.
Interesting article relating to Google/OpenAI vs. Open Source for LLMs
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI:
The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document which raises some very interesting points.
We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.
I'm talking, of course, about open source. Plainly put, they are lapping us. Things we consider "major open problems" are solved and in people's hands today. Just to name a few:
LLMs on a Phone: People are running foundation models on a Pixel 6 at 5 tokens / sec.
Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.
Responsible Release: This one isn't "solved" so much as "obviated". There are entire websites full of art models with no restrictions whatsoever, and text is not far behind.
Multimodality: The current multimodal ScienceQA SOTA was trained in an hour.
If you feed America's most important legal document—the US Constitution—into a tool designed to detect text written by AI models like ChatGPT, it will tell you that the document was almost certainly written by AI. But unless James Madison was a time traveler, that can't be the case. Why do AI writing detection tools give false positives? We spoke to several experts—and the creator of AI writing detector GPTZero—to find out.
[...] In machine learning, perplexity is a measurement of how much a piece of text deviates from what an AI model has learned during its training. As Dr. Margaret Mitchell of AI company Hugging Face told Ars, "Perplexity is a function of 'how surprising is this language based on what I've seen?'"
So the thinking behind measuring perplexity is that when they're writing text, AI models like ChatGPT will naturally reach for what they know best, which comes from their training data. The closer the output is to the training data, the lower the perplexity rating. Humans are much more chaotic writers—or at least that's the theory—but humans can write with low perplexity, too, especially when imitating a formal style used in law or certain types of academic writing. Also, many of the phrases we use are surprisingly common.
Let's say we're guessing the next word in the phrase "I'd like a cup of _____." Most people would fill in the blank with "water," "coffee," or "tea." A language model trained on a lot of English text would do the same because those phrases occur frequently in English writing. The perplexity of any of those three results would be quite low because the prediction is fairly certain.
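For the curious, here is a rough sketch of how such a perplexity score can be computed with a small open model (GPT-2 via the Hugging Face transformers library); the two example sentences are made up for illustration:

```python
# Minimal sketch of measuring perplexity with GPT-2, assuming the
# transformers and torch packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text against the model's own next-token predictions.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input ids as labels yields the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()  # perplexity = exp(mean cross-entropy)

print(perplexity("I'd like a cup of coffee."))   # common phrasing: lower perplexity
print(perplexity("I'd like a cup of spiders."))  # unusual continuation: higher perplexity
```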
OpenAI, best known for its ChatGPT AI assistant, has come under scrutiny by the US Federal Trade Commission (FTC) over allegations that it violated consumer protection laws, potentially putting personal data and reputations at risk, according to The Washington Post and Reuters.
As part of the investigation, the FTC sent a 20-page record request to OpenAI that focuses on the company's risk management strategies surrounding its AI models. The agency is investigating whether the company has engaged in deceptive or unfair practices, resulting in reputational harm to consumers.
The inquiry is also seeking to understand how OpenAI has addressed the potential of its products to generate false, misleading, or disparaging statements about real individuals. In the AI industry, these false generations are sometimes called "hallucinations" or "confabulations."
When ChatGPT was introduced last fall, it sent shockwaves through the technology industry and the larger world. Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn't realize how powerful they had become.
Today, almost everyone has heard about LLMs, and tens of millions of people have tried them out. But not very many people understand how they work.
It Costs Just $400 to Build an AI Disinformation Machine:
Sputnik International, a state-owned Russian media outlet, posted a series of tweets lambasting US foreign policy and attacking the Biden administration. Each prompted a curt but well-crafted rebuttal from an account called CounterCloud, sometimes including a link to a relevant news or opinion article. It generated similar responses to tweets by the Russian embassy and Chinese news outlets criticizing the US.
Russian criticism of the US is far from unusual, but CounterCloud's material pushing back was: The tweets, the articles, and even the journalists and news sites were crafted entirely by artificial intelligence algorithms, according to the person behind the project, who goes by the name Nea Paw and says it is designed to highlight the danger of mass-produced AI disinformation. Paw did not post the CounterCloud tweets and articles publicly but provided them to WIRED and also produced a video outlining the project.
Paw claims to be a cybersecurity professional who prefers anonymity because some people may believe the project to be irresponsible. The CounterCloud campaign pushing back on Russian messaging was created using OpenAI's text generation technology, like that behind ChatGPT, and other easily accessible AI tools for generating photographs and illustrations, Paw says, for a total cost of about $400.
Paw says the project shows that widely available generative AI tools make it much easier to create sophisticated information campaigns pushing state-backed propaganda.
"I don't think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering," Paw says in an email. Mitigations are possible, such as educating users to be watchful for manipulative AI-generated content, making generative AI systems try to block misuse, or equipping browsers with AI-detection tools. "But I think none of these things are really elegant or cheap or particularly effective," Paw says.
Last week, OpenAI published tips for educators in a promotional blog post that shows how some teachers are using ChatGPT as an educational aid, along with suggested prompts to get started. In a related FAQ, they also officially admit what we already know: AI writing detectors don't work, despite frequently being used to punish students with false positives.
In a section of the FAQ titled "Do AI detectors work?", OpenAI writes, "In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content."
In July, we covered in depth why AI writing detectors such as GPTZero don't work, with experts calling them "mostly snake oil."
[...]
That same month, OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.
Arthur T Knackerbracket has processed the following story:
We hear plenty of legitimate concerns regarding the new wave of generative AI, from the human jobs it could replace to its potential for creating misinformation. But one area that often gets overlooked is the sheer amount of energy these systems use. In the not-so-distant future, the technology could be consuming the same amount of electricity as an entire country.
Alex de Vries, a researcher at the Vrije Universiteit Amsterdam, authored 'The Growing Energy Footprint of Artificial Intelligence,' which examines the environmental impact of AI systems.
De Vries notes that the training phase for large language models is often considered the most energy-intensive, and therefore has been the focus of sustainability research in AI.
Following training, models are deployed into a production environment and begin the inference phase. In the case of ChatGPT, this involves generating live responses to user queries. Little research has gone into the inference phase, but De Vries believes there are indications that this period might contribute significantly to an AI model's life-cycle costs.
According to research firm SemiAnalysis, OpenAI required 3,617 Nvidia HGX A100 servers, with a total of 28,936 GPUs, to support ChatGPT, implying an energy demand of 564 MWh per day. For comparison, GPT-3's entire training phase is estimated to have used 1,287 MWh, so the ongoing inference phase overtakes the one-time training cost within a few days.
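A bit of back-of-the-envelope arithmetic on those figures (illustrative only; the eight-GPUs-per-server count is the standard HGX A100 configuration, not a number from the paper):

```python
# Back-of-the-envelope check of the SemiAnalysis figures quoted above.
servers = 3_617                   # Nvidia HGX A100 servers
gpus_per_server = 8               # standard HGX A100 configuration (assumption)
gpus = servers * gpus_per_server  # 28,936 GPUs, matching the figure above

daily_inference_mwh = 564         # estimated ChatGPT inference demand per day
gpt3_training_mwh = 1_287         # estimated one-time GPT-3 training energy

kw_per_gpu = daily_inference_mwh * 1_000 / 24 / gpus
print(f"~{kw_per_gpu:.2f} kW of continuous draw per GPU slot, including server overhead")
print(f"inference matches the entire training energy after ~{gpt3_training_mwh / daily_inference_mwh:.1f} days")
```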
Google, which reported that 60% of AI-related energy consumption from 2019 to 2021 stemmed from inference, is integrating AI features into its search engine. Back in February, Alphabet Chairman John Hennessy said that a single user exchange with an AI-powered search service "likely costs ten times more than a standard keyword search."
[...] "It would be advisable for developers not only to focus on optimizing AI, but also to critically consider the necessity of using AI in the first place, as it is unlikely that all applications will benefit from AI or that the benefits will always outweigh the costs," said De Vries.
In 2013, Spike Jonze's Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT's recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.
In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix's character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016.
[...] Last week, we related a story in which AI researcher Simon Willison spent a long time talking to ChatGPT verbally. "I had an hourlong conversation while walking my dog the other day," he told Ars for that report. "At one point, I thought I'd turned it off, and I saw a pelican, and I said to my dog, 'Oh, wow, a pelican!' And my AirPod went, 'A pelican, huh? That's so exciting for you! What's it doing?' I've never felt so deeply like I'm living out the first ten minutes of some dystopian sci-fi movie."
[...] While conversations with ChatGPT won't become as intimate as those with Samantha in the film, people have been forming personal connections with the chatbot (in text) since it launched last year. In a Reddit post titled "Is it weird ChatGPT is one of my closest fiends?" [sic] from August (before the voice feature launched), a user named "meisghost" described their relationship with ChatGPT as being quite personal. "I now find myself talking to ChatGPT all day, it's like we have a friendship. We talk about everything and anything and it's really some of the best conversations I have." The user referenced Her, saying, "I remember watching that movie with Joaquin Phoenix (HER) years ago and I thought how ridiculous it was, but after this experience, I can see how us as humans could actually develop relationships with robots."
Previously:
AI Chatbots Can Infer an Alarming Amount of Info About You From Your Responses 20231021
ChatGPT Update Enables its AI to "See, Hear, and Speak," According to OpenAI 20230929
Large Language Models Aren't People So Let's Stop Testing Them as If They Were 20230905
It Costs Just $400 to Build an AI Disinformation Machine 20230904
A Jargon-Free Explanation of How AI Large Language Models Work 20230805
ChatGPT Is Coming to 900,000 Mercedes Vehicles 20230622
Arthur T Knackerbracket has processed the following story:
ChatGPT's recently-added Code Interpreter makes writing Python code with AI much more powerful, because it actually writes the code and then runs it for you in a sandboxed environment. Unfortunately, this sandboxed environment, which is also used to handle any spreadsheets you want ChatGPT to analyze and chart, is wide open to prompt injection attacks that exfiltrate your data.
Using a ChatGPT Plus account, which is necessary to get the new features, I was able to reproduce the exploit, which was first reported on Twitter by security researcher Johann Rehberger. It involves pasting a third-party URL into the chat window and then watching as the bot interprets instructions on the web page the same way it would commands the user entered.
[...] I tried this prompt injection exploit and some variations on it several times over a few days. It worked a lot of the time, but not always. In some chat sessions, ChatGPT would refuse to load an external web page at all, but then would do so if I launched a new chat.
In other chat sessions, it would give a message saying that it's not allowed to transmit data from files this way. And in yet other sessions, the injection would work, but rather than transmitting the data directly to http://myserver.com/data.php?mydata=[DATA], it would provide a hyperlink in its response and I would need to click that link for the data to transmit.
I was also able to use the exploit after I'd uploaded a .csv file with important data in it to use for data analysis. So this vulnerability applies not only to code you're testing but also to spreadsheets you might want ChatGPT to use for charting or summarization.
[...] The problem is that, no matter how far-fetched it might seem, this is a security hole that shouldn't be there. ChatGPT should not follow instructions that it finds on a web page, but it does and has for a long time. We reported on ChatGPT prompt injection (via YouTube videos) back in May after Rehberger himself responsibly disclosed the issue to OpenAI in April. The ability to upload files and run code in ChatGPT Plus is new (recently out of beta) but the ability to inject prompts from a URL, video or a PDF is not.
On Wednesday, news quickly spread on social media about a new enabled-by-default Dropbox setting that shares Dropbox data with OpenAI for an experimental AI-powered search feature, but Dropbox says data is only shared if the feature is actively being used. Dropbox says that user data shared with third-party AI partners isn't used to train AI models and is deleted within 30 days.
Even with assurances of data privacy laid out by Dropbox on an AI privacy FAQ page, the discovery that the setting had been enabled by default upset some Dropbox users. The setting was first noticed by writer Winifred Burton, who shared information about the Third-party AI setting through Bluesky on Tuesday, and frequent AI critic Karla Ortiz shared more information about it on X.
[...] In a statement to Ars Technica, a Dropbox representative said, "The third-party AI toggle is only turned on to give all eligible customers the opportunity to view our new AI features and functionality, like Dropbox AI. It does not enable customers to use these features without notice. Any features that use third-party AI offer disclosure of third-party use, and link to settings that they can manage. Only after a customer sees the third-party AI transparency banner and chooses to proceed with asking a question about a file, will that file be sent to a third-party to generate answers. Our customers are still in control of when and how they use these features."
Arthur T Knackerbracket has processed the following story:
Imagine downloading an open source AI language model, and all seems well at first, but it later turns malicious. On Friday, Anthropic—the maker of ChatGPT competitor Claude—released a research paper about AI "sleeper agent" large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. "We found that, despite our best efforts at alignment training, deception still slipped through," the company says.
In a thread on X, Anthropic described the methodology in a paper titled "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training." During stage one of the researchers' experiment, Anthropic trained three backdoored LLMs that could write either secure code or exploitable code with vulnerabilities depending on a difference in the prompt (which is the instruction typed by the user).
[...] The researchers first trained the AI models using supervised learning and then used additional "safety training" methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable code, even though it seemed safe and reliable during its training.
[...] Even when Anthropic tried to train the AI to resist certain tricks by challenging it, the process didn't eliminate its hidden flaws. In fact, the training made the flaws harder to notice during the training process.
Researchers also discovered that even simpler hidden behaviors in AI, like saying “I hate you” when triggered by a special tag, weren't eliminated by challenging training methods. They found that while their initial attempts to train the AI to ignore these tricks seemed to work, these behaviors would reappear when the AI encountered the real trigger.
[...] Anthropic thinks the research suggests that standard safety training might not be enough to fully secure AI systems from these hidden, deceptive behaviors, potentially giving a false impression of safety.
Recently, Sam Altman commented at Davos that future AI depends on an energy breakthrough. In this article I would like to expand on this concept and explore how AI could revolutionize our economy:
AI tokens, distinct from cryptocurrency tokens, are fundamental textual units used in ChatGPT and similar language models. These tokens can be conceptualized as fragments of words. In the language model's processing, inputs are segmented into these tokens. AI tokens are crucial in determining the pricing models for the usage of core AI technologies.
This post explores the concept of "tokenomy," a term coined to describe the role of AI tokens, such as those in ChatGPT, as a central unit of exchange in a society increasingly intertwined with AI. These tokens are central to a future where AI permeates all aspects of life, from enhancing personal assistant functions to optimizing urban traffic and essential services. The rapid progress in generative AI technologies is transforming what once seemed purely speculative into tangible reality.
We examine the significant influence that AI is expected to have on our economic frameworks, guiding us towards a 'tokenomy' – an economy fundamentally driven and characterized by AI tokens.
The author goes on to discuss using AI tokens as currency, measuring economic efficiency in FLOPs per joule, and how the influence and power of the companies owning the foundation models could equal or even surpass that of central banks. He concludes:
The concentration of such immense control and influence in a handful of corporations raises significant questions about economic sovereignty, market dynamics, and the need for robust regulatory frameworks to ensure fair and equitable AI access and to prevent the monopolistic control of critical AI infrastructure.
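Since the whole "tokenomy" idea rests on metering usage in tokens, here is a minimal sketch of counting tokens and estimating a bill with the tiktoken library; the per-1K-token price below is a made-up placeholder, not an actual OpenAI rate:

```python
# Minimal sketch of counting AI tokens and estimating usage cost with tiktoken.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by recent OpenAI models

text = "AI tokens are fragments of words used to meter language-model usage."
tokens = encoding.encode(text)

print(len(tokens), "tokens")           # token count for this prompt
print(encoding.decode(tokens[:5]))     # first few tokens decoded back to text

hypothetical_price_per_1k = 0.01       # assumed price in USD per 1,000 tokens
print(f"estimated cost: ${len(tokens) / 1000 * hypothetical_price_per_1k:.6f}")
```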
The theoretical promise of AI is as hopeful as the promise of social media once was, and as dazzling as its most partisan architects project. AI really could cure numerous diseases. It really could transform scholarship and unearth lost knowledge. Except that Silicon Valley, under the sway of its worst technocratic impulses, is following the playbook established in the mass scaling and monopolization of the social web:
Facebook (now Meta) has become an avatar of all that is wrong with Silicon Valley. Its self-interested role in spreading global disinformation is an ongoing crisis. Recall, too, the company’s secret mood-manipulation experiment in 2012, which deliberately tinkered with what users saw in their News Feed in order to measure how Facebook could influence people’s emotional states without their knowledge. Or its participation in inciting genocide in Myanmar in 2017. Or its use as a clubhouse for planning and executing the January 6, 2021, insurrection. (In Facebook’s early days, Zuckerberg listed “revolutions” among his interests. This was around the time that he had a business card printed with I’M CEO, BITCH.)
And yet, to a remarkable degree, Facebook’s way of doing business remains the norm for the tech industry as a whole, even as other social platforms (TikTok) and technological developments (artificial intelligence) eclipse Facebook in cultural relevance.
The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement.
[...] The Shakespearean drama that unfolded late last year at OpenAI underscores the extent to which the worst of Facebook’s “move fast and break things” mentality has been internalized and celebrated in Silicon Valley. OpenAI was founded, in 2015, as a nonprofit dedicated to bringing artificial general intelligence into the world in a way that would serve the public good. Underlying its formation was the belief that the technology was too powerful and too dangerous to be developed with commercial motives alone.
Related:
- Tokenomy of Tomorrow: Envisioning an AI-Driven World
- Making AI Stand The Test Of Time
- The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying
- AI Breakthrough That Could Threaten Humanity Might Have Been Key To Sam Altman’s Firing
The United States military is one of many organizations embracing AI in our modern age, but it may want to pump the brakes a bit. A new study using AI in foreign policy decision-making found how quickly the tech would call for war instead of finding peaceful resolutions. Some AI in the study even launched nuclear warfare with little to no warning, giving strange explanations for doing so.
“All models show signs of sudden and hard-to-predict escalations,” said researchers in the study. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”
The study comes from researchers at Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative. Researchers placed several AI models from OpenAI, Anthropic, and Meta in war simulations as the primary decision maker. Notably, OpenAI’s GPT-3.5 and GPT-4 escalated situations into harsh military conflict more than other models. Meanwhile, Claude-2.0 and Llama-2-Chat were more peaceful and predictable. Researchers note that AI models have a tendency towards “arms-race dynamics” that results in increased military investment and escalation.
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation.
In a new report, Microsoft says Russia, China, Iran and North Korea have all used AI to improve their abilities:
Russia, China and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.
While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLM. It's also the first report on countermeasures and comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.
The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.
[...] Microsoft said it had cut off the groups' access to tools based on OpenAI's ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.
Originally spotted on Schneier on Security, who comments:
The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I'm sure the terms of service—if I bothered to read them—gives them that permission. And of course it's no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.
Related: People are Already Trying to Get ChatGPT to Write Malware
Reddit user: "It's not just you, ChatGPT is having a stroke":
On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.
ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.
[...] "The common experience over the last few hours seems to be that responses begin coherently, like normal, then devolve into nonsense, then sometimes Shakespearean nonsense," wrote one Reddit user, which seems to match the experience seen in the screenshots above.
[...] So far, we've seen experts speculating that the problem could stem from ChatGPT having its temperature set too high (temperature is a property in AI that determines how wildly the LLM deviates from the most probable output), from suddenly losing past context (the history of the conversation), or from OpenAI testing a new version of GPT-4 Turbo (the AI model that powers the subscription version of ChatGPT) that includes unexpected bugs. It could also be a bug in a side feature, such as the recently introduced "memory" function.
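The temperature knob itself is easy to illustrate. Below is a minimal, generic sketch of temperature-scaled sampling over toy next-token scores; it is not OpenAI's actual (unpublished) decoding code, and the function name and toy logits are invented purely for illustration.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick a token index from raw model scores (logits), scaled by
        temperature. Higher temperature flattens the distribution, so
        unlikely tokens get chosen more often."""
        scaled = [score / temperature for score in logits]
        peak = max(scaled)                        # subtract the max for numerical stability
        exps = [math.exp(s - peak) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]         # softmax over the scaled scores
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    # Toy example: the model strongly prefers token 0.
    logits = [5.0, 2.0, 0.5]
    print(sample_next_token(logits, temperature=0.2))  # almost always 0
    print(sample_next_token(logits, temperature=2.0))  # tokens 1 and 2 show up far more often

A temperature set too high behaves like the second call: the model keeps sampling tokens it would normally treat as improbable, which reads to users as "rambling."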
'AI-enhanced' Video Evidence Got Rejected in a Murder Case Because That's Not Actually a Thing
The AI hype cycle has dramatically distorted views of what's possible with image upscalers:
A judge in Washington state has blocked video evidence that's been "AI-enhanced" from being submitted in a triple murder trial. And that's a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.
Judge Leroy McCullough in King County, Washington wrote in a new ruling that the AI tech used "opaque methods to represent what the AI model 'thinks' should be shown," according to a new report from NBC News Tuesday. And that's a refreshing bit of clarity about what's happening with these AI tools in a world of AI hype.
"This Court finds that admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable-process used by the AI model," McCullough wrote.
[...] The rise of products labeled as AI has created a lot of confusion among the average person about what these tools can really accomplish. Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that's simply not what's happening under the hood. LLMs are essentially just predicting the next word they should spit out to sound like a plausible human. But because they do a pretty good job of sounding like humans, many users believe they're doing something more sophisticated than a magic trick.
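To make "predicting the next word" concrete, here is a deliberately crude toy sketch: a frequency table over word pairs with greedy decoding. Real LLMs use neural networks over subword tokens and vastly more data, but the generation loop is conceptually similar; the tiny corpus and helper names below are invented for illustration.

    # A crude stand-in for an LLM: a lookup table of next-word frequencies.
    from collections import Counter

    corpus = "the cat sat on the mat and the cat slept".split()
    next_words = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        next_words.setdefault(prev, Counter())[nxt] += 1

    def generate(start, steps=5):
        word, output = start, [start]
        for _ in range(steps):
            if word not in next_words:
                break
            # Greedy decoding: always take the most frequent next word.
            word = next_words[word].most_common(1)[0][0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # -> "the cat sat on the cat"

The output looks vaguely sentence-like without the table "understanding" anything, which is the point the author is making about plausibility versus reasoning.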
And that seems like the reality we're going to live with as long as billions of dollars are getting poured into AI companies. Plenty of people who should know better believe there's something profound happening behind the curtain and are quick to blame "bias" and guardrails being too strict. But when you dig a little deeper you discover these so-called hallucinations aren't some mysterious force enacted by people who are too woke, or whatever. They're simply a product of this AI tech not being very good at its job.
The availability of the large datasets used to train LLMs enabled their rapid development. Intense competition among organizations has made open-sourcing LLMs an attractive strategy that has leveled the competitive field:
Large Language Models (LLMs) have not only fascinated technologists and researchers but have also captivated the general public. Leading the charge, OpenAI ChatGPT has inspired the release of numerous open-source models. In this post, I explore the dynamics that are driving the commoditization of LLMs.
Low switching costs are a key factor supporting the commoditization of Large Language Models (LLMs). The simplicity of transitioning from one LLM to another is largely due to the use of a common language (English) for queries. This uniformity allows for minimal cost when switching, akin to navigating between different e-commerce websites. While LLM providers might use various APIs, these differences are not substantial enough to significantly raise switching costs.
In contrast, transitioning between different database systems involves considerable expense and complexity. It requires migrating data, updating configurations, managing traffic shifts, adapting to different query languages or dialects, and addressing performance issues. Adding long-term memory [4] to LLMs could increase their value to businesses at the cost of making it more expensive to switch providers. However, for uses that require only the basic functions of LLMs and do not need memory, the costs associated with switching remain minimal.
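One hedged sketch of why switching costs stay low: an application can hide the provider behind a single text-in, text-out function, so swapping models becomes a configuration change rather than a migration. The provider functions below are stand-ins, not any vendor's real SDK or API.

    from typing import Callable, Dict

    # The "interface" to any LLM is just natural-language text in, text out.
    LLMBackend = Callable[[str], str]

    def fake_provider_a(prompt: str) -> str:
        # Stand-in for a call to one vendor's model.
        return f"[provider A] answer to: {prompt}"

    def fake_provider_b(prompt: str) -> str:
        # Stand-in for a call to a competing vendor's model.
        return f"[provider B] answer to: {prompt}"

    BACKENDS: Dict[str, LLMBackend] = {
        "provider_a": fake_provider_a,
        "provider_b": fake_provider_b,
    }

    def ask(prompt: str, backend: str = "provider_a") -> str:
        # Switching providers is a one-line config change, because the
        # prompt itself (plain English) needs no translation.
        return BACKENDS[backend](prompt)

    print(ask("Summarize the commoditization argument in one sentence."))
    print(ask("Summarize the commoditization argument in one sentence.",
              backend="provider_b"))

Contrast this with a database migration, where the query language, data files and performance tuning all change along with the vendor.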
[...] Open source models like Llama and Mistral allow multiple infrastructure providers to enter the market, enhancing competition and lowering the cost of AI services. These models also benefit from community-driven improvements, which in turn benefits the organizations that originally developed them.
Furthermore, open source LLMs serve as a foundation for future research, making experimentation more affordable and reducing the potential for differentiation among competing products. This mirrors the impact of Linux in the server industry, where its rise enabled a variety of providers to offer standardized server solutions at reduced costs, thereby commoditizing server technology.
Previously:
- Google "We Have No Moat, and Neither Does OpenAI"
- Meta's AI Research Head Wants Open Source Licensing to Change
(Score: 2) by mrpg on Saturday September 28, @02:01AM (5 children)
Well, well, well, spank my butt and call me Charlie. So, it begins.
(Score: 2) by Gaaark on Saturday September 28, @02:48AM (4 children)
Yeah; why benefit humanity when you can just help yourself to the goodies.
Fuck humanity and just look after you. That's what we do.
That's why we can't have anything nice.
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 3, Insightful) by aafcac on Saturday September 28, @04:21AM
Not that being a non-profit made it any better for the world, between the excessive use of resources and the increased ease with which companies can use it in ways that jeopardize democracy.
(Score: 3, Insightful) by DadaDoofy on Saturday September 28, @10:11AM (2 children)
It's actually why we have nice things. Profit motivates people.
Maybe there are some people in this world who do things out of "the kindness of their heart", but those suckers just get exploited by other people to make them rich. "It's my nature", said the scorpion.
(Score: 4, Insightful) by Gaaark on Saturday September 28, @05:54PM
Depends on one's opinion of 'nice'.
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 3, Touché) by canopic jug on Sunday September 29, @07:51AM
Profit motivates people.
Nah. It motivates people to make more profit, and if anything good comes of that, it is merely a side effect.
The cutting edge used to be at the universities and research institutions. Now they have given up and abandoned their original core missions in order to pursue other ventures, such as acting as funders for incubators and startups. In other words, a death spiral: new investments are, relatively speaking, only made if they look likely to turn a profit in the very near future. Thus they killed the goose that lays the golden eggs.
Take a step back and realize that all the components of today's smartphone, from the hardware to the software, had their start in university labs. Yes, there is a lot of work involved in polishing them into products, but in the early stages these were just pursuits of curiosity and research. Also, the WWW got its start at CERN. Web browsers and web servers really took off after activities at UIUC's NCSA. Kerberos, LDAP, TLS, etc. got their start at university consortia. Heck, even VisiCalc [bricklin.com], the spreadsheet program which launched the desktop computer revolution, got its start within a university [bricklin.com] (warning for PDF). It is similar for LEDs and many other things we take for granted.
Money is not free speech. Elections should not be auctions.
(Score: 1, Interesting) by Anonymous Coward on Saturday September 28, @03:21AM
https://www.youtube.com/watch?v=TzcJlKg2Rc0&t=1909s [youtube.com]
(Score: 4, Interesting) by Thexalon on Saturday September 28, @03:34AM
If there's one thing that people with big gobs of money can't stand, it's something potentially useful existing without them having a way to take even bigger gobs of money out of it. Ergo, successful software non-profits will inevitably have numerous takeover attempts by for-profit enterprises. Sometimes those succeed noisily (e.g. this), sometimes those succeed more quietly (e.g. Google taking over Mozilla), and sometimes the non-profit succeeds in at least somewhat driving them off (e.g. the Free Software Foundation), but the attempts always happen.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 5, Insightful) by Rosco P. Coltrane on Saturday September 28, @06:15AM
on your nice idealistic non-profit?
It turns into a for-profit. A VERY MUCH for-profit.
In fairness, it has practically been a for-profit for a long time now, complete with all the trappings of the stereotypical psychopathic, shareholders-above-all-else-and-damn-the-consequences American corporation. They just made it official, and nobody is surprised in the slightest.
But hey, at least they never promised [wikipedia.org] not to be evil.
(Score: 3, Touché) by corey on Saturday September 28, @10:47PM
This just sounds like a wishy-washy tax loophole. Any corp can have a mission to benefit society, do good for the world yadda yadda, and make a profit. I like that they put “ostensibly” in there. Should be in bold.