posted by janrinok on Monday March 17, @10:53PM   Printer-friendly
from the one-law-for-thee... dept.

Arthur T Knackerbracket has processed the following story:

The ChatGPT developer submitted an open letter full of proposals to the White House Office of Science and Technology Policy (OSTP) regarding the Trump administration's AI Action Plan, currently under development.

It outlines the super-lab's views on how the White House can support the American AI industry. This includes putting in place a regulatory regime – but one that "ensures the freedom to innovate," of course; an export strategy to let America exert control over its allies while locking out enemies like China; and adopting measures to drive growth, including for federal agencies to "set an example" on adoption.

The suggestions regarding copyright display a certain amount of hubris. It talks up the "longstanding fair use doctrine" of American copyright law, and claims this is "even more critical to continued American leadership on AI in the wake of recent events in the PRC," presumably referring to the interest generated by China's DeepSeek earlier this year.

America has so many AI startups because the fair use doctrine promotes AI development, OpenAI says, while "rigid copyright rules are repressing innovation and investment," in other markets, singling out the European Union for allowing "opt-outs" for rights holders.

The biz previously claimed it would be "impossible" to build top-tier AI models that meet today's needs without using people's copyrighted work.

It proposes that the US government "take steps to ensure that our copyright system continues to support American AI leadership," and that it shapes international policy discussions around copyright and AI, "to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress."

Not content with that, OpenAI wants the US government to actively assess the level of data available to American AI firms and "determine whether other countries are restricting American companies' access to data and other critical inputs."

Dr Ilia Kolochenko, CEO at ImmuniWeb and an Adjunct Professor of Cybersecurity at Capitol Technology University in Maryland, expressed concern over OpenAI's proposals.

"Arguably, the most problematic issue with the proposal – legally, practically, and socially speaking – is copyright," Kolochenko told The Register.

"Paying a truly fair fee to all authors – whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors – will probably be economically unviable," he claimed, as AI vendors "will never make profits."

Advocating for a special regime or copyright exception for AI technologies is a slippery slope, he argues, adding that US lawmakers should regard OpenAI's proposals with a high degree of caution, mindful of the long-lasting consequences they may have on the American economy and legal system.

OpenAI also proposes maintaining the three-tiered AI diffusion rule framework, but with some alterations to encourage other nations to commit "to deploy AI in line with democratic principles set out by the US government."

The stated aim of this strategy is "to encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage."

OpenAI talks of expanding market share in Tier I countries (US allies) through the use of "American commercial diplomacy policy," banning the use of China-made equipment (think Huawei) and so on.

The ChatGPT lab also proposes that "AI Economic Zones" be created in America by local, state, and federal governments together with industry, which sounds similar to the UK government's "AI Growth Zones."

These would be intended to "speed up the permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors," and would allow exclusions from the National Environmental Policy Act, which requires federal agencies to evaluate the environmental impacts of their actions.

Finally, OpenAI proposes that federal agencies should "lead by example" on AI adoption. Uptake in federal departments and agencies remains "unacceptably low," the Microsoft-championed lab says, and wants to see the "removal of known blockers to the adoption of AI tools, including outdated and lengthy accreditation processes, restrictive testing authorities, and inflexible procurement pathways."

Google has also put out its response [PDF] to the White House's action plan call, likewise arguing for fair use defenses and data-mining exceptions for AI training.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by looorg on Monday March 17, @11:17PM

    by looorg (578) on Monday March 17, @11:17PM (#1396887)

    OpenAI talks of expanding market share in Tier I countries (US allies) ...

    It is probably a list that is shrinking by the day, every time Trump, Vance or Musk open their mouths. It currently appears that the trio in charge is doing everything it possibly can to annoy every single country in Europe, with the possible exception of Russia, and allies in all other parts of the world, including its neighboring countries. With that in mind I'm sure this is going to go down great ... Perhaps they should ask Clippy for some advice.

    That said, administrations come and go. It is hopefully a temporary issue with communication. Somewhat depending on who eventually replaces the current president; if it's his vice president, this could go on for a decade or two, since he seems equally challenged in many capacities.

    That said, I'm sure the EU will have some diplomatic version of telling OpenAI etc to go and fuck themselves in every possible orifice they and ChatGPT can imagine or make up. I have a distinct feeling that they, and the other tech giants, want EU customers and their data more than EU customers actually need their products.

  • (Score: 5, Touché) by Mojibake Tengu on Monday March 17, @11:23PM (3 children)

    by Mojibake Tengu (8598) on Monday March 17, @11:23PM (#1396888) Journal

    Just tax all information transfers. Simple as that.

    If we can have a tax on Energy, and that's generally acceptable everywhere, why not introduce a tax on Information?
    It's simply measurable by quantity.

    You could even tax Entropy, if you are brave enough...

    --
    Rust programming language offends both my Intelligence and my Spirit.
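    [Editor's aside: the commenter's claim that information is "measurable by quantity" is literally true in Shannon's sense. A minimal illustrative sketch (not from the article or the comment) computing the entropy of a message in bits per symbol:]

    ```python
    from collections import Counter
    from math import log2

    def shannon_entropy(message: str) -> float:
        """Average information content of a message, in bits per symbol."""
        counts = Counter(message)
        total = len(message)
        # H = -sum(p * log2(p)) over the symbol frequencies
        return -sum((n / total) * log2(n / total) for n in counts.values())

    # A message of four equally likely symbols carries 2 bits per symbol;
    # a message of one repeated symbol carries no information at all.
    print(shannon_entropy("abcd"))  # → 2.0
    print(shannon_entropy("aaaa"))  # → 0.0 (well, -0.0)
    ```

    [So an "information tax" would at least have a well-defined tax base, whatever its other problems.]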
    • (Score: 5, Insightful) by Uncle_Al on Monday March 17, @11:31PM

      by Uncle_Al (1108) on Monday March 17, @11:31PM (#1396889)

      I rarely post as myself, but as the owner of bitsavers.org you and your idea can go fsck yourself.

    • (Score: 3, Touché) by Undefined on Tuesday March 18, @01:08AM (1 child)

      by Undefined (50365) Subscriber Badge on Tuesday March 18, @01:08AM (#1396901)

      You could even tax Entropy, if you are brave enough...

      Entropy is nature's tax on absolutely everything.

      • (Score: 0) by Anonymous Coward on Tuesday March 18, @08:50AM

        by Anonymous Coward on Tuesday March 18, @08:50AM (#1396927)

        Tax the tax!!!

  • (Score: 4, Insightful) by Anonymous Coward on Tuesday March 18, @12:28AM (1 child)

    by Anonymous Coward on Tuesday March 18, @12:28AM (#1396896)

    AI should be able to train on whatever a person can see or read. That will just give us better AI.

    But data-mining exceptions covering personal communications? Oh no no no.

    OpenAI/Google aren't planning to make anime real, they want mass surveillance.

    • (Score: 2, Insightful) by Anonymous Coward on Tuesday March 18, @09:28AM

      by Anonymous Coward on Tuesday March 18, @09:28AM (#1396932)

      Sure, humans can train on what we see and read. BUT it can be considered copyright infringement if we start sharing parts of that stuff with others, especially for profit.

      AFAIK most of those AIs are going to be regurgitating the stuff for profit too...

      It's still infringement even if there's lossy compression. It was still considered infringement in this case: https://en.wikipedia.org/wiki/Rogers_v._Koons [wikipedia.org]

      I'd be more convinced that their AIs aren't infringing on other people's stuff if Microsoft would publicly prove they and/or OpenAI train their AIs on Windows, Office, DirectX, etc. source code AND then publicly declare that whatever the AIs reproduce is OK to reuse even if it looks rather similar to MS's source code (of course assuming the prompt isn't infringing in the first place)...

  • (Score: 1, Informative) by Anonymous Coward on Tuesday March 18, @02:44AM (1 child)

    by Anonymous Coward on Tuesday March 18, @02:44AM (#1396909)

    The "AI" bubble of the really giant models is already showing signs of bursting. This attempt at a legislative data grab is just another sign that the big VC money (etc) is getting desperate to get some return from their massive investments.

    • (Score: 3, Interesting) by aafcac on Tuesday March 18, @07:41PM

      by aafcac (17646) on Tuesday March 18, @07:41PM (#1397018)

      Yes, it's going to burst at some point in the relative near future. There's only so much you can do with AI as the information that it's got has to be verified and the conclusions that it leads to also have to be verified. Just verifying and monitoring alone pretty much guarantees that we'll hit an end to this nonsense pretty soon. On top of that though, there are issues with copyright infringement and turf wars over who exactly gets to dictate the terms that the work is being used under and the way that just using these models would lead to there being a significant restriction on future development of anything big enough that it can't be done as a hobby.

  • (Score: 5, Touché) by Skwearl on Tuesday March 18, @02:55AM

    by Skwearl (4314) on Tuesday March 18, @02:55AM (#1396911)

    Ahem. America, as one of your supposed 'allies' that you propose to stop imports from: go get rekt. You want to start a trade war and then proceed to shill your AI for free trade. Ha. HA HA. Go live in your swamp.

  • (Score: 2) by Dr Spin on Tuesday March 18, @10:50AM (1 child)

    by Dr Spin (5239) on Tuesday March 18, @10:50AM (#1396938)

    Then either AI is inherently criminal, or it's inherently unviable.

    Choose one - or is it both?

    --
    Warning: Opening your mouth may invalidate your brain!
  • (Score: 5, Funny) by pkrasimirov on Tuesday March 18, @02:30PM

    by pkrasimirov (3358) Subscriber Badge on Tuesday March 18, @02:30PM (#1396969)

    OpenAI: *Crawl the web, feed the beast.*
    OpenAI: We have AI!
    The world: Aww!
    China: *Ctrl-C, Ctrl-V*
    China: We have AI!
    The world: Aww!
    OpenAI: That's illegal! It's only fair (use) if we copy.
    OpenAI: There should be a world law that says only we can copy and you are the bad guys (non-Americans).
    OpenAI: Hello Trump, please tax them into compliance!

  • (Score: 2) by corey on Thursday March 20, @09:38PM

    by corey (2202) on Thursday March 20, @09:38PM (#1397330)

    I was just reading this article on Ars [arstechnica.com] about this. Personally, I get what OpenAI are saying: their business now isn't so much making a good AI system, but rather making a well-learned AI system full of humanity's outputs that they can monetise. But my opinion is that these AI companies can't just hoover up information like they do without contributing money to those that made/produced/made available the information in the first place. That's fair. I feel like the question is not whether they should be given unfettered access, but more broadly how 'information traders' should pay for the info they suck up on the Internet. Bit like FB, Google, etc. having to pay media companies for ripping their content. How to do that for Chinese versions, not too sure, maybe talk to them? How are they even enforcing/limiting this access to American companies? Captchas? Those prompts verifying if I'm a human? Make those better.

    Anyway, I found this part obviously wordcrafted for its intended audience:

    In their policy recommendations, OpenAI made it clear that it thinks funneling as much data as possible to AI companies—regardless of rights holders' concerns—is the only path to global AI leadership.

    "If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over," OpenAI claimed. "America loses, as does the success of democratic AI. Ultimately, access to more data from the widest possible range of sources will ensure more access to more powerful innovations that deliver even more knowledge."

    (Emphasis mine - to hint at who the audience is, someone who 1-dimensionally thinks about everything in terms of winning vs losing)

  • (Score: 2) by DadaDoofy on Saturday March 22, @05:09PM

    by DadaDoofy (23827) on Saturday March 22, @05:09PM (#1397569)

    "Paying a truly fair fee to all authors – whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors – will probably be economically unviable," he claimed, as AI vendors "will never make profits."

    What an absurdly immoral argument. They are basically saying if they can't steal people's property, they can't make a profit, as if that somehow makes it ok to do so. How is that any different than arguing slavery is ok because cotton can't be profitably produced without stealing people's labor?
