On Monday morning, organizers of NeurIPS, the largest annual gathering of AI researchers in the world, gave Best Paper awards to the authors of three pieces of research, including one detailing OpenAI’s GPT-3 language model. The week also started with AI researchers refusing to review Google AI papers until grievances stemming from Google’s firing of Ethical AI team co-lead Timnit Gebru were resolved. Googlers described the episode as an instance of “unprecedented research censorship,” raising questions of corporate influence. According to one analysis, Google publishes more AI research than any other company or institution.
Tension between corporate interests, human rights, ethics, and power could be seen at workshops throughout the week. On Tuesday at the Muslims in AI workshop, GPT-3’s anti-Muslim bias was explored, as were the ways in which AI and IoT devices are used to control and surveil Muslims in China. The Washington Post reported this week that Huawei is working on AI with a “Uighur alarm” to help authorities track members of the Muslim minority group. Huawei is a platinum sponsor of NeurIPS. When asked about Huawei and how NeurIPS weighs ethical considerations around sponsors, a NeurIPS spokesperson told VentureBeat Friday that a new sponsorship committee is being formed to evaluate sponsor criteria and “determine policies for vetting and accepting sponsors.”
Following a keynote address Wednesday, Microsoft Research Lab director Chris Bishop was asked if a monopoly on infrastructure and machine learning talent enjoyed by Big Tech companies is stifling innovation. In response, he argued that cloud computing allows developers to rent compute resources instead of undertaking the more expensive task of buying hardware that powers machine learning.
On Friday, the Resistance AI workshop highlighted research that urges tech companies to go beyond scale to address societal issues and compares Big Tech research funding to tactics carried out by Big Tobacco. That workshop was organized to bring together an intersectional group of marginalized people from a range of backgrounds to champion AI that gives power back to people and steers clear of oppression.
“We were frustrated with the limitations of ‘AI for good’ and how it could be co-opted as a form of ethics-washing,” organizers said in a statement to VentureBeat. “In some ways, we still have a long way to go: many of us are adjacent to big tech and academia, and we want to do better at engaging those who don’t have this kind of institutional power.”
This was also the first year that NeurIPS required authors to include societal impact and financial disclosure statements. Financial disclosures are due in January, when authors submit final versions of their papers. Four papers were rejected by reviewers this year on ethical grounds.
On a very different front, the technical effort behind putting on the NeurIPS research conference this week was historic. In all, 22,000 people attended the virtual conference, compared to 13,000 last year in Vancouver. The formula for how to make a virtual NeurIPS came out of ICLR and ICML, major AI research conferences held in the spring and summer, respectively.
Prior to the pandemic, prominent AI researchers argued in favor of exploring more remote options as a way to cut the carbon footprint associated with flying to events held around the world. Some of those ideas were played out with short notice for the International Conference on Learning Representations (ICLR), the first major all-digital AI research conference.
Organizers say they learned that Zoom was not a great venue for poster sessions. Instead, NeurIPS poster sessions took place in gather.town, a spatial video chat service. Each user has an avatar and the ability to move freely between posters summarizing research.
One matter that hasn’t been resolved yet is whether AI research conferences will continue to offer a virtual attendance option once the global pandemic has ended. Going virtual lowers costs for organizers who take sponsorship money from corporations, and it broadens access, but an organizing committee member cautioned against letting virtual attendance become a second-class experience compared with the in-person event enjoyed by those who can afford to fly there.
One participant in a Q&A session between attendees and organizers asked about hosting hybrid in-person and virtual options, then said the following: “I sincerely hope we are able to return to in person meetings. But I also think the benefits of the virtual experience should not be discarded, especially to enable more people to participate, who may face hardships in attending in person, such as for financial, visa related, or other reasons.”
It’s tough to say what lasting change will come from continuing efforts to address harm caused by AI, or from the virtual conference format. But between an AI ethics meltdown at Google and NeurIPS hosting the largest virtual AI conference held to date, the easiest conclusion to draw is that after this week, machine learning may never be the same, and I hope that’s a good thing.
Thanks for reading,
Senior AI Staff Writer