Tensorial-Professor Anima on AI

An open and shut case on OpenAI

The views expressed here are solely my own and do not in any way reflect those of my employers.

This blog post is meant to clarify my position on the recent OpenAI controversy. A few days ago, I engaged with Jack Clark, who manages communications and policy for OpenAI, on Twitter. It is hard to have a nuanced discussion on Twitter, so I am writing this blog post to better summarize my thoughts. For a longer and more thorough discussion of this topic, see the excellent blog posts by Rob Munro and Zack Lipton.

The controversy: OpenAI announced their new language model a few days ago with a huge media blitz, while declining to release the full trained model out of concern that it could be misused

My Twitter comments:

When OpenAI was started a few years ago with much fanfare, its core mission was to foster openness in AI. As a non-profit, it was meant to collaborate freely with other institutions and researchers by making its patents and research open to the public. I find this goal highly admirable and important.

I also have a deep admiration for Jack Clark. His newsletter has been a great resource for the community to keep up with the latest updates in machine learning. In the past, he has pushed for more openness from the ML community. When the NeurIPS conference banned journalists from attending the workshops, he protested on Twitter and I supported his stance.

On the other hand, OpenAI seems to be making a conscious effort to move away from this open model and from its core founding principles. A few months ago, Jack Clark wrote this on Twitter:

So why do I feel so strongly about this? Because I think that OpenAI is using its clout to make ML research more closed and inaccessible. I have always been a strong proponent of open source and of increasing reproducibility and accountability in our ML community. I am pushing to make open-sourcing code compulsory at our machine-learning conferences. See my recent blog post on this topic.

I am certainly not dismissive of AI risks. It is important to have a conversation about them, and it is important to involve experts working on this topic. But for several reasons, I believe that OpenAI is squandering an opportunity to have a real conversation and is instead presenting a distorted view to the public. Some of the reasons are:

A better approach would be to:

Numerous other scientists have expressed similar opinions. I hope OpenAI takes this feedback and acts on it.