
posted by janrinok on Monday February 05 2024, @03:09AM   Printer-friendly
from the achieving-reason-progress-and-freedom-by-dispensing-with-humility-and-nuance dept.

The theoretical promise of AI is as hopeful as the promise of social media once was, and as dazzling as its most partisan architects project. AI really could cure numerous diseases. It really could transform scholarship and unearth lost knowledge. Except that Silicon Valley, under the sway of its worst technocratic impulses, is following the playbook established in the mass scaling and monopolization of the social web:

Facebook (now Meta) has become an avatar of all that is wrong with Silicon Valley. Its self-interested role in spreading global disinformation is an ongoing crisis. Recall, too, the company’s secret mood-manipulation experiment in 2012, which deliberately tinkered with what users saw in their News Feed in order to measure how Facebook could influence people’s emotional states without their knowledge. Or its participation in inciting genocide in Myanmar in 2017. Or its use as a clubhouse for planning and executing the January 6, 2021, insurrection. (In Facebook’s early days, Zuckerberg listed “revolutions” among his interests. This was around the time that he had a business card printed with I’M CEO, BITCH.)

And yet, to a remarkable degree, Facebook’s way of doing business remains the norm for the tech industry as a whole, even as other social platforms (TikTok) and technological developments (artificial intelligence) eclipse Facebook in cultural relevance.

The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement.

[...] The Shakespearean drama that unfolded late last year at OpenAI underscores the extent to which the worst of Facebook’s “move fast and break things” mentality has been internalized and celebrated in Silicon Valley. OpenAI was founded, in 2015, as a nonprofit dedicated to bringing artificial general intelligence into the world in a way that would serve the public good. Underlying its formation was the belief that the technology was too powerful and too dangerous to be developed with commercial motives alone.

Original Submission

Related Stories

AI Breakthrough That Could Threaten Humanity Might Have Been Key To Sam Altman’s Firing 49 comments

Arthur T Knackerbracket has processed the following story:

Almost a week has passed since the OpenAI board fired CEO Sam Altman without explaining its actions. By Tuesday, Altman had been reinstated and a new board appointed to oversee OpenAI's operations. An investigation into what happened was also promised, something I believe all ChatGPT users deserve. We're talking about a company developing an incredibly exciting resource, AI. But also one that could eradicate humanity. Or so some people fear.

Theories were running rampant in the short period between Altman's ouster and return, with some speculating that OpenAI had developed an incredibly strong GPT-5 model. Or that OpenAI had reached AGI, artificial general intelligence that could operate just as well as humans. Or that the board was simply doing its job, protecting the world against the irresponsible development of AI.

It turns out the guesses and memes weren’t too far off. We’re not on the verge of dealing with dangerous AI, but a new report says that OpenAI delivered a massive breakthrough in the days preceding Altman’s firing.

The new algorithm (Q* or Q-Star) could threaten humanity, according to a letter unnamed OpenAI researchers sent to the board. The letter and the Q-Star algorithm might have been key developments that led to the firing of Altman.

According to Reuters, which has not seen the letter, the document was one factor. There's apparently a longer list of grievances that convinced the board to fire Altman. The board worried about the company's fast pace of commercializing ChatGPT advances before understanding the consequences.

OpenAI declined to comment to Reuters, but the company acknowledged project Q-Star in a message to staffers as well as the letter to the board. Mira Murati, whom the board appointed as the first interim CEO after letting Altman go, apparently alerted the staff to the Q-Star news that was about to break.

It's too early to tell whether Q-Star is AGI, and OpenAI was busy with the CEO drama rather than making public announcements. And the company might not want to announce such an innovation anytime soon, especially if caution is needed.

The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying 30 comments

Spying has always been limited by the need for human labor. A.I. is going to change that:

Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we're doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there's no reasonable way for us to opt out of it.

Spying is another matter. It has long been possible to tap someone's phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people's phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.

A.I. is about to change that.
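The "coarse and easy to bypass" nature of keyword censorship described above is easy to demonstrate. Here is a minimal sketch in Python; the banned-word list and the sample posts are invented for illustration:

```python
# Naive keyword filter of the kind described above: block any post
# containing a banned word. Word list and test posts are hypothetical.
BANNED = {"protest", "strike"}

def is_blocked(post: str) -> bool:
    words = post.lower().split()
    return any(word.strip(".,!?") in BANNED for word in words)

print(is_blocked("Join the protest tomorrow"))    # True  -- caught
print(is_blocked("Join the pr0test tomorrow"))    # False -- leetspeak slips through
print(is_blocked("Join the p r o t e s t"))       # False -- spacing slips through
print(is_blocked("First strike bowling league"))  # True  -- false positive
```

Both failure modes show up immediately: trivial obfuscation defeats the filter, while innocent posts get swept up, which is why this kind of censorship still needed human labor to be effective.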

[...] We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?

Original Submission

Making AI Stand The Test Of Time 15 comments

Arthur T Knackerbracket has processed the following story:

The kind of benchmark that IT normally worries about isn't without importance: how fast a particular data set is learned, how quickly prompts can be processed, what resources are required, and how it all scales. If you're creating an AI system as part of your business, you'd better get those things right, or at the least understand their bounds.

They don't much matter otherwise, although you can be sure marketing and pundits will disagree. The fact it doesn't matter is a good thing: benchmarks too often become targets that distort function, and they should be kept firmly in their kennels.

The most important benchmark for AI is how truthful it is or, more usefully, how little it is misrepresented by those who sell or use it. As Pontius Pilate was said to have said 2,000 years ago, what is truth? There is no benchmark. Despite the intervening millennia and infinite claims to have fixed this, there still isn't. The most egregious of liars can command the support of nations in the midst of what should be a golden age of reason. If nobody's prepared or able to stop them, what chance have we got to keep AI on the side of the angels?

The one mechanism that's in with a chance is that curious synthesis of regulatory bodies and judicial systems which exists – in theory – outside politics but inside democratic control. Regulators set standards, the courts act as backstop to those powers and adjudicators of disputes.

[...] Which takes us to the regulators. It should be that the more technical and measurable the field being regulated, the easier the regulator's job is. If you're managing the radio spectrum or the railways, something going wrong shows up quickly in the numbers. Financial regulators, operating in the miasma of capital economics and corporate misdirection, go through a cycle of being weakened in the name of strong growth until everything falls apart and a grand reset follows. Wince and repeat. Yet very technical regulators can go wrong, as with the FAA and Boeing's 737 MAX. Regulatory capture by their industries or the politicians is a constant threat. And sometimes we just can't tell – GDPR has been with us for five years. Is it working?

Tokenomy of Tomorrow: Envisioning an AI-Driven World 31 comments

Recently, Sam Altman commented at Davos that the future of AI depends on an energy breakthrough. In this article I would like to expand on this concept and explore how AI could revolutionize our economy:

AI tokens, distinct from cryptocurrency tokens, are fundamental textual units used in ChatGPT and similar language models. These tokens can be conceptualized as fragments of words. In the language model's processing, inputs are segmented into these tokens. AI tokens are crucial in determining the pricing models for the usage of core AI technologies.
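To make the token unit concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer; the sample text is arbitrary and the per-token price is a made-up placeholder, not an actual rate:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenomy of Tomorrow: Envisioning an AI-Driven World"
token_ids = enc.encode(text)

print(len(token_ids))                        # number of billable tokens
print([enc.decode([t]) for t in token_ids])  # the word fragments themselves

# Usage is billed per token; this rate is a hypothetical placeholder.
PRICE_PER_1K_TOKENS = 0.001  # dollars, illustrative only
print(f"cost: ${len(token_ids) / 1000 * PRICE_PER_1K_TOKENS:.6f}")
```

Decoding each token individually shows why tokens are best thought of as fragments of words rather than whole words: common words map to a single token, while rarer words split into several.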

This post explores the concept of "tokenomy," a term coined to describe the role of AI tokens, such as those in ChatGPT, as a central unit of exchange in a society increasingly intertwined with AI. These tokens are central to a future where AI permeates all aspects of life, from enhancing personal assistant functions to optimizing urban traffic and essential services. The rapid progress in generative AI technologies is transforming what once seemed purely speculative into tangible reality.

We examine the significant influence that AI is expected to have on our economic frameworks, guiding us towards a 'tokenomy' – an economy fundamentally driven and characterized by AI tokens.

The author goes on to discuss using AI tokens as currency, measuring economic efficiency in FLOPs per joule, and how the influence and power of companies owning the Foundation Model could equal or even surpass that of central banks. He concludes:

The concentration of such immense control and influence in a handful of corporations raises significant questions about economic sovereignty, market dynamics, and the need for robust regulatory frameworks to ensure fair and equitable AI access and to prevent the monopolistic control of critical AI infrastructure.
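As a back-of-the-envelope illustration of the FLOPs-per-joule efficiency measure mentioned above (every figure below is an invented placeholder, not a measurement of any real system):

```python
# Hypothetical accelerator serving one query; all numbers are invented.
flops_performed = 1.0e15     # floating-point operations for the query
energy_used_joules = 2.0e3   # energy drawn while serving it

efficiency = flops_performed / energy_used_joules
print(f"{efficiency:.2e} FLOPs/joule")         # 5.00e+11

# Re-expressed in the token units discussed above, assuming a
# (made-up) inference cost of 2e12 FLOPs per generated token:
flops_per_token = 2.0e12
tokens_per_joule = efficiency / flops_per_token
print(f"{tokens_per_joule:.2f} tokens/joule")  # 0.25
```

Tying tokens to joules this way is what lets an energy constraint, of the kind Altman raised at Davos, translate directly into a ceiling on token output.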


Original Submission

OpenAI Plans Tectonic Shift From Nonprofit to for-Profit, Giving Altman Equity 10 comments

https://arstechnica.com/information-technology/2024/09/openai-plans-tectonic-shift-from-nonprofit-to-for-profit-giving-altman-equity/

On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.

A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.

[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.

[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Monday February 05 2024, @03:35AM (5 children)

    by Anonymous Coward on Monday February 05 2024, @03:35AM (#1343052)
What I remember was people in Big Tech trying to figure out how to deliver a non-Trump win.

    And the Democrats were actually trying to "elevate Trump".

Then when the Democrats lost, they and much of the media (which has a Democrat bias) tried to blame Facebook and the Russians for it, despite Clinton being a very unprofessional politician who did stuff like call voters a "basket of deplorables".

    Meanwhile Trump was being a professional liar/politician by promising to build a wall etc.
    • (Score: 3, Informative) by Opportunist on Monday February 05 2024, @01:41PM

      by Opportunist (5545) on Monday February 05 2024, @01:41PM (#1343121)

      In the last two elections, my only sentiment was "Could we get them to stand back to back? Because neither of them is worth a whole bullet".

    • (Score: 2) by istartedi on Monday February 05 2024, @09:10PM (3 children)

      by istartedi (123) on Monday February 05 2024, @09:10PM (#1343211) Journal

      I don't recall that. Instead, I recall the Democrats making everybody feel ashamed if they expressed their desire to vote for Trump. Then they took polls, not factoring in that people might answer differently even if the poll claimed to be anonymous, because that's human nature. Then they took the results of those polls at face value and assumed they had the Rust Belt locked up. They didn't bother to campaign there very much, and the rest is history.

      The Democrats have a knack for losing close ones.

      --
      Appended to the end of comments you post. Max: 120 chars.
      • (Score: 0) by Anonymous Coward on Tuesday February 06 2024, @04:02PM (1 child)

        by Anonymous Coward on Tuesday February 06 2024, @04:02PM (#1343335)

Well it definitely happened, but I can't find some of the stuff I'm looking for (which predates Trump's 2016 win). Maybe it was already removed, or Google etc. aren't making such stuff easy to find. There's smoking-gun stuff from AFTER Trump's win where Google shows their bias, FWIW ( https://www.breitbart.com/tech/2018/09/12/leaked-video-google-leaderships-dismayed-reaction-to-trump-election/ [breitbart.com] ). But that's not the one.

But before Trump's win there was definitely a pro-Clinton bias from the tech companies:
        https://money.cnn.com/2016/08/23/technology/hillary-clinton-tech-fundraisers/index.html [cnn.com]

        Cook is just the latest in a growing list of tech executives and venture capitalists working to raise big money to help Clinton beat Donald Trump, who has repeatedly attacked tech companies and opposes the industry on key issues like immigration and trade.

        Salesforce (CRM) CEO Marc Benioff, Google (GOOG) CFO Ruth Porat, Zynga (ZNGA) chairman Mark Pincus, Napster founder Sean Parker, SolarCity (SCTY) CEO (and Elon Musk's cousin) Lyndon Rive and LinkedIn (LNKD) founder Reid Hoffman have all contributed or raised at least $100,000 for Clinton's bid.

        https://observer.com/2016/10/wikileaks-reveals-dnc-elevated-trump-to-help-clinton/ [observer.com]

        https://time.com/4486502/hillary-clinton-basket-of-deplorables-transcript/ [time.com]

      • (Score: 0) by Anonymous Coward on Tuesday February 06 2024, @04:22PM

        by Anonymous Coward on Tuesday February 06 2024, @04:22PM (#1343338)

        There was lots of talk about Facebook helping Trump win AFTER Trump won. But before Trump won there was this:
        https://nypost.com/2016/04/12/zuckerberg-lays-out-facebooks-future-takes-subtle-jab-at-trump/ [nypost.com]

        Mark Zuckerberg took a thinly veiled swipe at Donald Trump on Tuesday, blasting the real-estate tycoon’s talk of “building walls.”

        “I hear fearful voices talking about building walls,” the 31-year-old tech billionaire said as he took the stage at Facebook’s annual F8 developer conference in San Francisco. “Instead of building walls, we can build bridges.”

        And this: https://gizmodo.com/facebook-employees-asked-mark-zuckerberg-if-they-should-1771012990 [gizmodo.com]

        Inside Facebook, the political discussion has been more explicit. Last month, some Facebook employees used a company poll to ask Zuckerberg whether the company should try “to help prevent President Trump in 2017.”

        Every week, Facebook employees vote in an internal poll on what they want to ask Zuckerberg in an upcoming Q&A session. A question from the March 4 poll was: “What responsibility does Facebook have to help prevent President Trump in 2017?”

        A screenshot of the poll, given to Gizmodo, shows the question as the fifth most popular.

        So it did seem a bit strange to later see accusations that Facebook (and the Russians) helped Trump win, when before that they seemed more likely to be biased against Trump from employees to Zuck himself.

  • (Score: 3, Funny) by DannyB on Monday February 05 2024, @11:05PM

    by DannyB (5839) Subscriber Badge on Monday February 05 2024, @11:05PM (#1343232) Journal

Is it trying to stop misinformation and disinformation ("techno-authoritarianism"), or do we need more pseudoscience to balance out the science?


    Q. Why do we have phases of the moon?
    A. Because when the moon is full and bright, it shines all its light out until it is empty. Then the moon must sit on the charger for a while until it charges back up to a full moon again. During this cycle, the light within the moon has plenty of time to ferment properly. (so don't look at the moon for too long!)

    Some people mistakenly believe that the phases of the moon are caused by the earth's shadow falling on the moon. That is impossible because the moon's gravity is only 1/6 of earth's gravity and therefore the moon is unable to pull earth's shadow to the lunar surface.

    The universe is a large sphere with the stars affixed to its inside.
    The earth is a large flat disk in the center of the universe.
    The sun and moon move in a circular pattern around the top of the disk.
    The earth is on an infinite stack of turtles.
    (it's turtles all the way down)
    The final turtle of that infinite stack is propelled by a rocket.
    (the stack of turtles is infinite somewhere in the middle)
The rocket accelerates at 9.8 m/s^2, giving us the illusion of gravity.
    The rocket is powered by a perpetual motion machine so it never stops.
    The perpetual motion machine is powered, somehow, by crystals and magnets.

    Silly skeptics would ask: if the Earth is flat, how do you explain that the sun moves South in the winter?

Stupid Round Earther: the sun moves South in the winter for the same reason that birds move South for winter -- because it's warmer in the South during winter! Look at Australia, where Christmas is the hottest day of the year.

    --
    Santa maintains a database and does double verification of it.