Releasing a new paper on openness and artificial intelligence

Mozilla Blog — May 21, 2024
https://blog.mozilla.org/en/mozilla/ai/new-framework-for-ai-openness-and-innovation/
For the past six months, the Columbia Institute of Global Politics and Mozilla have been working with leading AI scholars and practitioners to create a framework on openness and AI. Today, we are publishing a paper that lays out this new framework.

During earlier eras of the internet, open source technologies played a core role in promoting innovation and safety. Open source provided a core set of building blocks that software developers have used to do everything from creating art to designing vaccines to developing apps used by people all over the world; open source software is estimated to be worth over $8 trillion. And attempts to limit open innovation, such as export controls on encryption in early web browsers, ended up being counterproductive, further demonstrating the value of openness.

The paper surveys existing approaches to defining openness in AI models and systems, and then proposes a descriptive framework to understand how each component of the foundation model stack contributes to openness.

Today, open source approaches for artificial intelligence, and especially for foundation models, offer the promise of similar benefits to society. However, defining "open source" for foundation models has proven tricky, given its significant differences from traditional software development. This lack of clarity has made it harder to recommend specific approaches and standards for how developers should advance openness and unlock its benefits. Conversations about openness in AI have also often operated at a high level, making it harder to reason about the benefits and risks of openness in AI. Some policymakers and advocates have blamed open access to AI for certain safety and security risks, often without concrete or rigorous evidence to justify those claims. On the other hand, people often tout the benefits of openness in AI without specifying how to actually harness those opportunities.

That's why, in February, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI for the Columbia Convening. These individuals, spanning prominent open source AI startups and companies, nonprofit AI labs, and civil society organizations, focused on exploring what "open" should mean in the AI era.