from the achieving-reason-progress-and-freedom-by-dispensing-with-humility-and-nuance dept.
The theoretical promise of AI is as hopeful as the promise of social media once was, and as dazzling as its most partisan architects project. AI really could cure numerous diseases. It really could transform scholarship and unearth lost knowledge. Except that Silicon Valley, under the sway of its worst technocratic impulses, is following the playbook established in the mass scaling and monopolization of the social web:
Facebook (now Meta) has become an avatar of all that is wrong with Silicon Valley. Its self-interested role in spreading global disinformation is an ongoing crisis. Recall, too, the company’s secret mood-manipulation experiment in 2012, which deliberately tinkered with what users saw in their News Feed in order to measure how Facebook could influence people’s emotional states without their knowledge. Or its participation in inciting genocide in Myanmar in 2017. Or its use as a clubhouse for planning and executing the January 6, 2021, insurrection. (In Facebook’s early days, Zuckerberg listed “revolutions” among his interests. This was around the time that he had a business card printed with I’M CEO, BITCH.)
And yet, to a remarkable degree, Facebook’s way of doing business remains the norm for the tech industry as a whole, even as other social platforms (TikTok) and technological developments (artificial intelligence) eclipse Facebook in cultural relevance.
The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement.
[...] The Shakespearean drama that unfolded late last year at OpenAI underscores the extent to which the worst of Facebook’s “move fast and break things” mentality has been internalized and celebrated in Silicon Valley. OpenAI was founded, in 2015, as a nonprofit dedicated to bringing artificial general intelligence into the world in a way that would serve the public good. Underlying its formation was the belief that the technology was too powerful and too dangerous to be developed with commercial motives alone.
Related:
- Tokenomy of Tomorrow: Envisioning an AI-Driven World
- Making AI Stand The Test Of Time
- The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying
- AI Breakthrough That Could Threaten Humanity Might Have Been Key To Sam Altman’s Firing
Arthur T Knackerbracket has processed the following story:
Almost a week has passed since the OpenAI board fired CEO Sam Altman without explaining its actions. By Tuesday, the board had reinstated Altman and appointed a new board to oversee OpenAI's operations. An investigation into what happened was also promised, something I believe all ChatGPT users deserve. We're talking about a company developing an incredibly exciting resource, AI. But also one that could eradicate humanity. Or so some people fear.
Theories were running rampant in the short period between Altman's ouster and return, with some speculating that OpenAI had developed an incredibly strong GPT-5 model. Or that OpenAI had reached AGI, artificial general intelligence that could perform just as well as humans. Or that the board was simply doing its job, protecting the world against the irresponsible development of AI.
It turns out the guesses and memes weren’t too far off. We’re not on the verge of dealing with dangerous AI, but a new report says that OpenAI delivered a massive breakthrough in the days preceding Altman’s firing.
The new algorithm (Q* or Q-Star) could threaten humanity, according to a letter unnamed OpenAI researchers sent to the board. The letter and the Q-Star algorithm might have been key developments that led to the firing of Altman.
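OpenAI has said nothing public about what Q-Star actually is. Much of the speculation stems from the name, which echoes Q*, the standard symbol for the optimal action-value function in classical reinforcement learning. Purely for orientation, and with no claim that OpenAI's project resembles it, here is a minimal tabular Q-learning sketch on a toy five-state chain:

```python
import random

N_STATES, GOAL = 5, 4                    # toy chain: states 0..4, reward at state 4
ACTIONS = (-1, +1)                       # step left or right along the chain
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration rate

for _ in range(500):                     # episodes
    s = random.randrange(N_STATES - 1)   # start in a random non-goal state
    for _ in range(100):                 # cap episode length
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Core update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# The learned greedy policy should point right (+1) from every non-goal state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```

Whether the real Q-Star has anything to do with this decades-old technique is exactly what the leaked letter leaves unanswered.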
According to Reuters, which has not seen the letter, the document was only one factor. There was apparently a longer list of grievances that convinced the board to fire Altman, among them a worry about the company's fast pace of commercializing ChatGPT advances before understanding the consequences.
OpenAI declined to comment to Reuters, but the company acknowledged both project Q-Star and the letter to the board in a message to staffers. Mira Murati, the first interim CEO the board appointed after letting Altman go, apparently alerted staff to the Q-Star news before it broke.
It's too early to tell whether Q-Star is AGI, and OpenAI has been busy with the CEO drama rather than making public announcements. The company might not want to announce such an innovation anytime soon, especially if caution is needed.
Spying has always been limited by the need for human labor. A.I. is going to change that:
Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.
Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we're doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there's no reasonable way for us to opt out of it.
Spying is another matter. It has long been possible to tap someone's phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people's phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.
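To see why keyword matching is coarse, here is a minimal sketch of such a filter; the blocked terms and example posts are purely illustrative:

```python
# A naive keyword filter of the kind described above.
BLOCKED = {"protest", "strike"}

def is_blocked(post: str) -> bool:
    # Coarse match: flag the post only if a blocked word appears verbatim.
    words = (w.strip(".,!?") for w in post.lower().split())
    return any(w in BLOCKED for w in words)

print(is_blocked("Join the protest tomorrow"))  # True: exact match caught
print(is_blocked("Join the pr0test tomorrow"))  # False: a trivial misspelling slips through
print(is_blocked("Join the p-r-o-t-e-s-t"))     # False: punctuation defeats it entirely
```

A human reads past all three variants instantly. A system that models meaning rather than surface strings closes that gap, which is precisely the shift from coarse filtering to genuine machine spying.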
A.I. is about to change that.
[...] We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?
Related:
- A Controversial US Surveillance Program is up for Renewal. Critics are Speaking Out.
- Debunking the Myth of "Anonymous" Data
- EU-US Data Privacy Framework to Face Serious Legal Challenges, Experts Say
Arthur T Knackerbracket has processed the following story:
The kind of benchmark that IT normally worries about isn't without importance: how fast a particular data set is learned, how quickly prompts can be processed, what resources are required, and how it all scales. If you're creating an AI system as part of your business, you'd better get those things right, or at least understand their bounds.
They don't much matter otherwise, although you can be sure marketing and pundits will disagree. The fact it doesn't matter is a good thing: benchmarks too often become targets that distort function, and they should be kept firmly in their kennels.
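For concreteness, the throughput side of those benchmarks reduces to a very simple harness. A minimal sketch, with a stand-in `generate` function where a real model call would go:

```python
import time

def generate(prompt: str) -> list[str]:
    # Stand-in for a real model call; here it just "emits" one token per word.
    return prompt.split()

def tokens_per_second(prompt: str, runs: int = 1000) -> float:
    start = time.perf_counter()
    total = 0
    for _ in range(runs):
        total += len(generate(prompt))
    return total / (time.perf_counter() - start)

print(f"{tokens_per_second('The quick brown fox jumps over the lazy dog'):.0f} tokens/s")
```

The moment a number like this becomes a sales target rather than an engineering bound, the distortion described above begins.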
The most important benchmark for AI is how truthful it is or, more usefully, how little it is misrepresented by those who sell or use it. As Pontius Pilate was said to have said 2,000 years ago, what is truth? There is no benchmark. Despite the intervening millennia and infinite claims to have fixed this, there still isn't. The most egregious of liars can command the support of nations in the midst of what should be a golden age of reason. If nobody's prepared or able to stop them, what chance have we got to keep AI on the side of the angels?
The one mechanism that's in with a chance is that curious synthesis of regulatory bodies and judicial systems which exists – in theory – outside politics but inside democratic control. Regulators set standards, the courts act as backstop to those powers and adjudicators of disputes.
[...] Which takes us to the regulators. It should be that the more technical and measurable the field being regulated, the easier the regulator's job is. If you're managing the radio spectrum or the railways, something going wrong shows up quickly in the numbers. Financial regulators, operating in the miasma of capital economics and corporate misdirection, go through a cycle of being weakened in the name of strong growth until everything falls apart and a grand reset follows. Wince and repeat. Yet very technical regulators can go wrong, as with the FAA and Boeing's 737 MAX. Regulatory capture by their industries or the politicians is a constant threat. And sometimes we just can't tell – GDPR has been with us for five years. Is it working?
Recently, Sam Altman commented at Davos that the future of AI depends on an energy breakthrough. In this article I would like to expand on that concept and explore how AI could revolutionize our economy:
AI tokens, distinct from cryptocurrency tokens, are fundamental textual units used in ChatGPT and similar language models. These tokens can be conceptualized as fragments of words. In the language model's processing, inputs are segmented into these tokens. AI tokens are crucial in determining the pricing models for the usage of core AI technologies.
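As a concrete illustration, OpenAI's open-source tiktoken library exposes the tokenizer its models use; the per-token price below is an assumed round figure for illustration, not actual pricing:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models
tokens = enc.encode("Tokenomy of Tomorrow")
print(tokens)                               # integer IDs, one per word fragment
print(len(tokens), "tokens")

# Usage-based pricing is then a simple multiplication (price assumed):
price_per_1k = 0.002                        # illustrative dollars per 1,000 tokens
print(f"cost: ${len(tokens) / 1000 * price_per_1k:.6f}")
```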
This post explores the concept of "tokenomy," a term coined to describe the role of AI tokens, such as those in ChatGPT, as a central unit of exchange in a society increasingly intertwined with AI. These tokens are central to a future where AI permeates all aspects of life, from enhancing personal assistant functions to optimizing urban traffic and essential services. The rapid progress in generative AI technologies is transforming what once seemed purely speculative into tangible reality.
We examine the significant influence that AI is expected to have on our economic frameworks, guiding us towards a 'tokenomy' – an economy fundamentally driven and characterized by AI tokens.
The author goes on to discuss using AI tokens as currency, measuring economic efficiency in FLOPs per joule, and how the influence and power of companies owning foundation models could equal or even surpass that of central banks. He concludes:
The concentration of such immense control and influence in a handful of corporations raises significant questions about economic sovereignty, market dynamics, and the need for robust regulatory frameworks to ensure fair and equitable AI access and to prevent the monopolistic control of critical AI infrastructure.
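To make the FLOPs-per-joule metric mentioned above concrete, here is a back-of-the-envelope sketch; every number in it is an assumed round figure, since real accelerator specs vary widely:

```python
sustained_flops = 1e15    # assumed: 1 PFLOP/s of sustained compute
power_watts = 500         # assumed: 500 W board power

flops_per_joule = sustained_flops / power_watts  # a watt is one joule per second
print(f"{flops_per_joule:.1e} FLOPs per joule")  # 2.0e+12

# The energy bill for a fixed workload follows directly:
workload_flops = 1e21     # assumed size of some training run
kwh = workload_flops / flops_per_joule / 3.6e6   # 1 kWh = 3.6e6 joules
print(f"{kwh:.0f} kWh")                          # ~139 kWh
```

Arithmetic like this is what connects Altman's energy-breakthrough remark to the token economics above: the cost of a token is, at bottom, largely a quantity of joules.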
On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.
A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.
[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.
[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.
(Score: 1, Insightful) by Anonymous Coward on Monday February 05 2024, @03:35AM (5 children)
And the Democrats were actually trying to "elevate Trump".
Then when the Democrats lost, they and much of the media (which has a Democrat bias) tried to blame Facebook and the Russians for it, despite Clinton being a very unprofessional politician who did stuff like call voters a "basket of deplorables."
Meanwhile Trump was being a professional liar/politician by promising to build a wall etc.
(Score: 3, Informative) by Opportunist on Monday February 05 2024, @01:41PM
In the last two elections, my only sentiment was "Could we get them to stand back to back? Because neither of them is worth a whole bullet".
(Score: 2) by istartedi on Monday February 05 2024, @09:10PM (3 children)
I don't recall that. Instead, I recall the Democrats making everybody feel ashamed if they expressed their desire to vote for Trump. Then they took polls, not factoring in that people might answer differently even if the poll claimed to be anonymous, because that's human nature. Then they took the results of those polls at face value and assumed they had the Rust Belt locked up. They didn't bother to campaign there very much, and the rest is history.
The Democrats have a knack for losing close ones.
(Score: 0) by Anonymous Coward on Tuesday February 06 2024, @04:02PM (1 child)
Well it definitely happened, but I can't find some of the stuff I'm looking for (which predates 2016 Trump's win). Maybe already removed or Google etc aren't making such stuff easy to find. There's smoking gun stuff AFTER Trump's win where Google shows their bias FWIW ( https://www.breitbart.com/tech/2018/09/12/leaked-video-google-leaderships-dismayed-reaction-to-trump-election/ [breitbart.com] ). But that's not the one.
But before Trump's win there was definitely a pro Clinton bias from the tech companies:
https://money.cnn.com/2016/08/23/technology/hillary-clinton-tech-fundraisers/index.html [cnn.com]
https://observer.com/2016/10/wikileaks-reveals-dnc-elevated-trump-to-help-clinton/ [observer.com]
https://time.com/4486502/hillary-clinton-basket-of-deplorables-transcript/ [time.com]
(Score: 2) by turgid on Wednesday February 07 2024, @08:13PM
You mean there was a pro-sanity, pro-humanity bias in big business before the Trump win?
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 0) by Anonymous Coward on Tuesday February 06 2024, @04:22PM
There was lots of talk about Facebook helping Trump win AFTER Trump won. But before Trump won there was this:
https://nypost.com/2016/04/12/zuckerberg-lays-out-facebooks-future-takes-subtle-jab-at-trump/ [nypost.com]
And this: https://gizmodo.com/facebook-employees-asked-mark-zuckerberg-if-they-should-1771012990 [gizmodo.com]
So it did seem a bit strange to later see accusations that Facebook (and the Russians) helped Trump win, when before that they seemed more likely to be biased against Trump from employees to Zuck himself.
(Score: 3, Funny) by DannyB on Monday February 05 2024, @11:05PM
Is it trying to stop misinformation and disinformation ("techno authoritarianism"), or do we need more pseudoscience to balance out the science?
Santa maintains a database and does double verification of it.