The views expressed here are solely my own and do not in any way reflect those of my employers.
This blog post is meant to clarify my position on the recent OpenAI controversy. A few days ago, I engaged with Jack Clark, who manages communications and policy for OpenAI, on Twitter. It is hard to have a nuanced discussion on Twitter, and I am writing this blog post to better summarize my thoughts. For a longer and more thorough discussion on this topic, see the excellent blog posts by Rob Munro and Zack Lipton.
The controversy: OpenAI announced their language model a few days ago with a huge media blitz, while withholding the full trained model over concerns about malicious use.
My Twitter comments:



When OpenAI was started a few years ago with much fanfare, its core mission was to foster openness in AI. As a non-profit, it was meant to freely collaborate with other institutions and researchers by making its patents and research open to the public. I find this goal highly admirable and important.
I also have a deep admiration for Jack Clark. His newsletter has been a great resource for the community to keep up with the latest updates in machine learning. In the past, he has pushed for more openness from the ML community. When the NeurIPS conference banned journalists from attending the workshops, he protested on Twitter and I supported his stance.

On the other hand, OpenAI seems to be making a conscious effort to move away from this open model and from its core founding principles. A few months ago, Jack Clark wrote this on Twitter:

So why do I feel so strongly about this? Because I think that OpenAI is using its clout to make ML research more closed and inaccessible. I have always been a strong proponent of open source and of increasing reproducibility and accountability in our ML community. I am pushing to make open-sourcing code compulsory at our machine-learning conferences. See my recent blog post on this topic.
I am certainly not dismissive of AI risks. It is important to have a conversation about them, and it is important to involve experts working on this topic. But for several reasons, I believe that OpenAI is squandering an opportunity to have a real conversation and is instead distorting the public's perception. Some of the reasons are:
- OpenAI is severely playing up the risks of releasing a language model. This is an active area of research with numerous groups working on very similar ideas. Even if OpenAI kept the whole thing locked up in a vault, another team would certainly release a similar model.
- In this whole equation, it is academia that loses out the most. I have previously spoken about the severe disadvantage that academic researchers face due to the lack of reproducibility and open source code. They do not have the luxury of large amounts of compute and engineering resources for replication.
- This kind of fear-mongering about AI risks distorts science to the public. OpenAI followed a planned media strategy: they provided limited access to their model to a few journalists and fed them a story about AI risks without any concrete proof. This is not science and does not serve humanity well.
A better approach would be to:
- Go back to the founding mission and foster openness and collaboration. Engage with researchers, especially academic researchers; collaborate with them, provide them with resources, and engage in the peer-review process. This is the time-tested way to advance science.
- Engage with experts on risk management to study the impacts of AI. Engage with economists to study the right incentive mechanisms for deploying AI. Publish those studies in peer-reviewed venues.
Numerous other scientists have expressed a similar opinion. I hope OpenAI takes this feedback and acts on it.