Beware of OpenAI's 'Grantwashing' on AI Harms:
This month, OpenAI announced "up to $2 million" in funding for research studies on AI safety and well-being. On its surface, this may seem generous, but following in the footsteps of other tech giants facing scrutiny over their products' mental health impacts, it's nothing more than grantwashing.
This industry practice commits a pittance to research that is doomed to be ineffective by the information and resources companies hold back. When grantwashing works, it compromises the search for answers. And that's an insult to anyone whose loved one's death involved chatbots.
OpenAI's pledge came a week after the company's lawyers argued in court that it isn't to blame for the death of a California teenager whom ChatGPT encouraged to commit suicide. In its attempt to disclaim responsibility, the company even requested a list of invitees to the teen's memorial and video footage of the service and the people there. In the last year, OpenAI and other generative AI companies have been accused of causing numerous deaths and psychotic breaks by encouraging suicide, feeding delusions, and giving people risky instructions.
As scientists who study developmental psychology and AI, we agree that society urgently needs better science on AI and mental health. Like so many other companies accused of causing harm, OpenAI has recruited a group of genuinely credible scientists to give it closed-door advice on the issue. But the funding announcement reveals how small a fig leaf the company thinks will persuade a credulous public.
Look at the size of the grants. High-quality public health research on mental health harms requires a sequence of studies, large sample sizes, access to clinical patients, and an ethics safety net that supports people at risk. The median research project grant from the National Institute of Mental Health in 2024 was $642,918. In contrast, OpenAI is offering a measly $5,000 to $100,000 to researchers studying AI and mental health, at best one sixth of a typical NIMH grant.
Despite the good ideas OpenAI suggests, the company is holding back the resource that would contribute most to science on these questions: records about its systems and how people use its products. OpenAI's researchers have purportedly developed ways to identify users who potentially face mental health distress. A well-designed data access program would accelerate the search for answers while preserving privacy and protecting vulnerable users. European regulators are still deciding whether OpenAI will face data access requirements under the Digital Services Act, but OpenAI doesn't have to wait for Europe.
We have seen this playbook before from other companies. In 2019, Meta announced a series of $50,000 grants to six scientists studying Instagram, safety, and well-being. Even as the company touted its commitment to science on user well-being, Meta's leaders were pressuring internal researchers to "amend their research to limit Meta's potential liability," according to a recent ruling in the D.C. Superior Court.
Whether or not OpenAI's leaders intend to muddy the waters of science, grantwashing hinders technology safety, as one of us recently argued in Science. It adds uncertainty and debate in areas where companies want to avoid liability, and that uncertainty gives the appearance of science. These underfunded studies inevitably produce inconclusive results, forcing other researchers to do more work to clean up the resulting misconceptions.
[...] Two decades of Big Tech funding for safety science have taught us that the grantwashing playbook works every time. Internally, corporate leaders pacify passionate employees with token actions that seem consequential. External scientists take the money, get inconclusive results, and lose public trust. Policymakers see what looks like responsible self-regulation from a powerful industry and back-pedal on calls for change. And journalists quote the corporate lobbyist and move on until the next round of deaths creates another news cycle.
The problem is that we do desperately need better, faster science on technology safety. Companies are pushing AI products out to hundreds of millions of people with limited safety guardrails, faster than safety science can keep up. One idea, proposed by Dr. Alondra Nelson, borrows from the Human Genome Project. In 1990, the project's leadership allocated 3 to 5 percent of its annual research budget to independent "ethical, legal, and social inquiry" about genomics. The result was a scientific endeavor that kept on top of emerging risks from genetics, at least at moments when projects had the freedom to challenge the genomics establishment.
[...] We can't say whether specific deaths were caused by ChatGPT or whether generative AI will cause a new wave of mental health crises. The science isn't there yet. The legal cases are ongoing. But we can say that OpenAI's grantwashing is the perfect corporate action to make sure we don't find the answers for years.
(Score: 3, Informative) by corey on Tuesday December 30, @10:00PM (2 children)
This sounds identical to the big tobacco playbook.
It’s sick how they requested video footage of the memorial.
Sounds like a morally bankrupt company.
(Score: 2) by aafcac on Tuesday December 30, @10:20PM
Same with the fossil fuel industry and climate change. I don't really know what to do about it that wouldn't be equally problematic though. We don't really want the government to get to make those decisions as there's also an incentive there. But, there does need to be some form of accountability for questionable science being funded to offer an alternative view rather than because there is legitimately another way of looking at things.
(Score: 5, Insightful) by JoeMerchant on Wednesday December 31, @12:07AM
This sounds identical to the big pharma playbook.
This sounds identical to the big oil playbook.
This sounds identical to the big coal playbook.
This sounds identical to the ultra-processed foods playbook.
This sounds identical to every "self-regulating" industry funded study program, ever.
Asking the foxes, nicely, to self-study their personality profiles viz propensity to pilfer poultry from the posts we permit them to guard, then publish their findings? Why waste the time to read past the "funded by" section?
(Score: 2, Insightful) by khallow on Wednesday December 31, @12:11AM (3 children)
At least the bias of the article is obvious.
One person's "grantwashing" is the same person's "ethical, legal, and social inquiry". Maybe we should look elsewhere for insight into whatever problem may actually exist here.
(Score: 2) by JoeMerchant on Wednesday December 31, @02:22AM (1 child)
> Maybe we should look elsewhere
Yep, the problem is pervasive, nothing special here. Independent scientific research has been a farce for decades and is only accelerating its descent into the pool of "only desirable outcomes get continued funding."
Conflict of interest seems to be in fashion lately, let's hope that fashion cycles around to independent repeatability for science - any decade now.
(Score: 2, Insightful) by khallow on Wednesday December 31, @03:42AM
Conflict of interest has always been in fashion. The aforementioned tobacco companies did this more than half a century ago. And before big tobacco, we had gimmick research for quack medicine, proletarian science (like Lysenkoism), and eugenics. And the granddaddy of them all, economics, has had conflicts of interest from the moment it was conceived.
(Score: 2) by corey on Wednesday December 31, @09:44PM
Yeah you’re right, the article is biased as hell. I picked that up pretty quickly too.