Internet Policy - The Mozilla Blog
https://blog.mozilla.org/en/category/mozilla/internet-policy/
News and Updates about Mozilla

Creating a public counterpoint for AI
Mon, 30 Sep 2024 | https://blog.mozilla.org/en/mozilla/ai/public-ai-counterpoint/

Mozilla is releasing a vision for Public AI, a robust ecosystem of initiatives that promote public goods, public orientation and public use throughout every step of AI development and deployment. Read the paper here.

Look around. There are buses driving alongside cars on the road. Some of your packages are delivered by private couriers, others are delivered by the national postal service. You can flip the channel on your TV back and forth between public broadcasting and commercial networks. And when you access the internet, you can choose between a commercial or nonprofit-backed web browser.

Private and public initiatives have existed side by side for a long time. While private innovation often pushes the frontier of what’s possible, public alternatives can make those innovations more accessible and beneficial for everyone. These parallel products and services give people more choices, create market pressure on each other to be more trustworthy and innovative, distribute power across more people and organizations, and create more resilient and healthier economies.

So, where are the public alternatives for AI? They are starting to emerge, with some governments subsidizing access to computational resources, and nonprofit AI labs collectively putting nearly $1 billion into open source AI research and models. These are important steps forward, but they are not enough to create true public alternatives to the results of the hundreds of billions of dollars going into private AI. This status quo means some critical projects — such as using AI to detect illegal mining operations, facilitate deliberative democracy, and match cancer patients to clinical trials — remain under-resourced relative to their potential societal value. In parallel, Big Tech is ramping up efforts to push policymakers to support private AI infrastructure, which could further cement the dominance of just a few companies in creating the future of AI.

We can’t just rely on a few companies to build everything our society needs from AI — and we can’t afford the risk that they won’t. 

Today, we are unveiling a bold vision with a sweeping action plan for Public AI. Mozilla is calling for a robust ecosystem of initiatives that promote public goods, public orientation, and public use throughout every step of AI development and deployment. It’s not enough for some AI resources to be more accessible, or for companies to support a few token “AI for good” side projects. We need a whole parallel AI ecosystem that can run on non-commercial incentives, where openness enables projects to build on top of each other, and where the total scope of these initiatives is a meaningful counterweight to the private AI ecosystem. 

We are calling on everyone to help shape Public AI. Developers should create open source AI models and tools that are competitive with private AI initiatives; policymakers should support the data, tools and workforce development to make AI truly usable for public interest applications; and the public should support the products and services that emerge from Public AI by contributing data, engagement and support to this ecosystem.

At Mozilla, we’re committed to doing our part by building key parts of the Public AI ecosystem. We will help build public alternatives for the data needed in AI development by doubling down on our Common Voice platform, further expanding access to multilingual voice data to train AI models that represent the diversity of languages around the world. We will invest in open source AI via Mozilla.ai, Mozilla Ventures and Mozilla Builders, which supports the development of tools like llamafile that make it easier to run AI models locally rather than relying on commercial cloud providers. And we will continue to support the broader AI accountability ecosystem that is vital for Public AI, steering our fellowships and data programs toward enabling more people to shape and co-create AI.
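To make the local-first idea concrete, here is a minimal sketch of querying a llamafile from Python. It assumes you have already downloaded a llamafile and started it in server mode on localhost:8080; llamafile exposes an OpenAI-compatible chat endpoint, but the exact flags, port and URL can vary by version, so treat those details as assumptions rather than documentation.

```python
# Minimal sketch: chat with a model served locally by a llamafile.
# Assumes a llamafile is already running in server mode, e.g.:
#   ./your-model.llamafile --server --port 8080   (hypothetical invocation)
import json
import urllib.request

payload = {
    "model": "local",  # the server answers with whatever weights it bundles
    "messages": [{"role": "user", "content": "Say hello in French."}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # OpenAI-compatible endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The response follows the OpenAI schema: choices[0].message.content.
print(reply["choices"][0]["message"]["content"])
```

Because the request never leaves localhost, neither prompts nor responses touch a commercial cloud, which is precisely the property that makes local inference attractive for Public AI.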

We believe this work can only be done in partnership with developers, policymakers, academics, civil society, companies and the public at large. That’s why we’ll continue making grants through the Mozilla Technology Fund to support open source AI projects that are building Public AI applications, and why we’ll fund more research about the impacts of Public AI. We’ll keep bringing together stakeholders and experts to explore how to make Public AI components more accessible and ethical. We will also keep working with policymakers to make the case for Public AI, starting with a workshop we are co-hosting with our partners this week in Washington D.C., and continuing with our engagement on this topic at next year’s AI Action Summit in Paris, France.

This is a core part of Mozilla’s broader work to empower everyone and every community to shape, enjoy and trust AI. Earlier this year, we released a paper, Accelerating Progress Toward Trustworthy AI, that outlined our broader vision on AI, and invited public comment. When we publish our final Trustworthy AI paper in the coming months, Public AI will be named as an explicit pillar in our overall strategy for AI.

If we get this right, we can create an AI ecosystem that expands opportunity for everyone. Come join us in making this a reality.

Mozilla heads to Capitol Hill, calls for a federal privacy law to ensure the responsible development of AI
Thu, 11 Jul 2024 | https://blog.mozilla.org/en/mozilla/internet-policy/mozilla-urges-federal-privacy-law-for-ai-development/

Udbhav Tiwari, Mozilla’s Director of Global Product Policy, testifies at a Senate committee hearing on the importance of federal privacy legislation in the development of AI.

Today, U.S. Senator Maria Cantwell (D-Wash.), Chair of the Senate Committee on Commerce, Science and Transportation, convened a full committee hearing titled “The Need to Protect Americans’ Privacy and the AI Accelerant.” The hearing explored how AI has intensified the need for a federal comprehensive privacy law that protects individual privacy and sets clear guidelines for businesses as they develop and deploy AI systems. 

Mozilla’s Director of Global Product Policy, Udbhav Tiwari, served as a key witness at the public hearing, highlighting privacy’s role as a critical component of AI policy. 

“At Mozilla, we believe that comprehensive privacy legislation is foundational to any sound AI framework,” Tiwari said. “Without such legislation, we risk a ‘race to the bottom’ where companies compete by exploiting personal data rather than safeguarding it. Maintaining U.S. leadership in AI requires America to lead on privacy and user rights.” Tiwari added that data minimization should be at the core of these policies.

As a champion of the open internet, Mozilla has been committed to advancing trustworthy AI for half a decade. “We are dedicated to advancing privacy-preserving AI and advocating for policies that promote innovation while safeguarding individual rights,” Tiwari said. 

Read the written testimony

Readouts from the Columbia Convening on openness and AI
Wed, 27 Mar 2024 | https://blog.mozilla.org/en/mozilla/ai/readouts-columbia-convening/

On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals — spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era. We previously wrote about the convening, why it was important, and who we brought together.

Today, we are publishing two readouts from the convening. 

The first is a technical memorandum that outlines three different approaches to openness in AI, and highlights different components and spectrums of openness. It includes an extensive appendix that outlines key components in the AI stack, and describes how more openness in each component can help advance system and societal goals. Finally, it outlines open questions that would be worthy of future exploration, digging deeper into the specifics of openness and AI. This memorandum will be helpful for technical leaders and practitioners who are shaping the future of AI, so that they can better incorporate principles of openness to make their own AI systems more effective for their goals and more beneficial for society. 

The second is a policy memorandum that outlines how and why policymakers should support openness in AI. It outlines the societal benefits from openness in AI, provides a higher-level overview of how different parts of the AI stack contribute to different opportunities and risks, and lays out a series of recommendations about how policymakers can advance openness in AI. This memorandum will be helpful for policymakers, especially those who are grappling with the details of policy interventions related to openness in AI.

In the coming weeks, we will also be publishing a longer document that goes into greater detail about the dimensions of openness in AI. This will help advance our broader work with partners and allies to tackle complex and important topics around openness, competition, and accountability in AI. We will continue to keep mozilla.org/research/cc updated with materials stemming from the Columbia Convening on Openness and AI.

6 takeaways from The Washington Post Futurist Tech Summit in D.C.
Mon, 25 Mar 2024 | https://blog.mozilla.org/en/mozilla/ai/ai-mozilla-the-washington-post-tech-policy/

Journalists from The Washington Post, U.S. policymakers and influential business leaders gathered for a day of engaging discussions about technology on March 21 in the nation’s capital.

Mozilla sponsored “The Futurist Summit: The New Age of Tech,” an event focused on addressing the wide range of promise and risks associated with emerging technologies — the largest of them being Artificial Intelligence (AI). It featured interviews moderated by journalists from The Post, as well as interactive sessions about tech for audience members in attendance at the paper’s office in Washington D.C.

Missed the event? Here are six takeaways from it that you should know about:

1. How OpenAI is preparing for the election.

The 2024 U.S. presidential election is one of the biggest topics of discussion around the emergence and dangers of AI this year. It’s no secret that AI has incredible power to create, influence and manipulate with misinformation and fake media content (video, photos, audio) that can unfairly sway voters.

OpenAI, one of the biggest AI organizations, stressed the importance of providing transparency for its users to ensure its tools aren’t being used to deceive and mislead the public.

“It’s four billion people voting, and that is really unprecedented, and we’re very, very cognizant of that,” OpenAI VP of Global Affairs Anna Makanju said. “And obviously, it’s one of the things that we work — to ensure that our tools are not used to deceive people and to mislead people.”

Makanju reiterated that election-related AI concerns operate at a very large scale, and that OpenAI is focused on engaging with other companies to shore up transparency in the 2024 race.

“This is like a whole of society issue,” Makanju said. “So that’s why we have engaged with other companies in this space as well. As you may have seen in the Munich Security Conference, we announced the Tech Accord, where we’re going to collaborate with social media companies and other companies that generate AI content, because there’s the issue of generation of AI content and the issue of distribution, and they’re quite different. So, for us, we really focus on things like transparency. … We of course have lots of teams investigating abuse of our systems or circumvention of the use case guidelines that are intended to prevent this kind of work. So, there are many teams at OpenAI working to ensure that these tools aren’t used for election interference.”

And OpenAI will be in the spotlight even more as the election inches closer. According to a report from Business Insider, OpenAI is preparing to launch GPT-5 this summer, which will reportedly eclipse the abilities of the ChatGPT chatbot.

The futurist summit focused on the wide range of promise and risks associated with emerging technologies

2. Policymakers address the potential TikTok ban.

The House overwhelmingly voted 352-65 on March 13 to pass a measure that gives ByteDance, the parent company of TikTok, a choice: sell the social media platform or face a nationwide ban on all U.S. devices.

One of the top lawmakers on the Senate Intelligence Committee, Sen. Mark Warner (D-Va.), addressed the national security concerns around TikTok on a panel moderated by political reporter Leigh Ann Caldwell alongside Sen. Todd Young (R-Ind.).

“There is something uniquely challenging about TikTok because ultimately if this information is turned over to the Chinese espionage services that could be then potentially used for nefarious purposes, that’s not a good thing for America’s long-term national security interests,” Warner said. “End of the day, all we want is it could be an American company, it could be a British company, it could be a Brazilian company. It just needs not to be from one of the nation states, China being one of the four, that are actually named in American law as adversarial nations.”

Young chimed in shortly after Warner: “Though I have not authored a bill on this particular topic, I’ve been deeply involved, for several years running now, in this effort to harden ourselves against a country, China, that has weaponized our economic interdependence in various ways.”

The measure now heads to the Senate, which is not scheduled to vote on it anytime soon.

3. Deep Media AI is fighting against fake media content.

AI to fight against AI? Yes, it’s possible!

AI being able to alter how we perceive reality through deepfakes — in other words, synthetic media — is another danger of the emerging technology. Deep Media AI founder Rijul Gupta is countering that AI problem with AI of his own.

In a video demonstration alongside tech columnist Geoffrey Fowler, Gupta showcased how Deep Media AI scans and detects deepfakes in photos, videos and audio files to combat the issue.

For example, Deep Media AI can determine if a photo is fake by looking at wrinkles, reflections and things humans typically don’t pay attention to. In the audio space, which Gupta described as “uniquely dangerous,” the technology analyzes the waves and patterns. It can detect video deepfakes by tracking motion of the face — how it moves, the shape and movement of lips — and changes in lighting.
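To make one of those signals concrete, here is a toy illustration, emphatically not Deep Media AI’s proprietary method, of how “analyzing the waves and patterns” can become a measurable audio feature. Spectral flatness is a standard signal-processing metric; any thresholds a real detector would apply are invented here.

```python
# Toy illustration: spectral flatness as one crude audio "pattern" feature.
# Real deepfake detectors combine many far more sophisticated signals.
import numpy as np

def spectral_flatness(samples: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.

    Values near 1.0 mean noise-like audio; natural speech scores much
    lower. Oddly flat or oddly tonal segments can hint at synthesis or
    post-processing, though never conclusively on their own.
    """
    power = np.abs(np.fft.rfft(samples)) ** 2 + 1e-12  # avoid log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

# One second of synthetic "audio" at 16 kHz: a pure tone vs. white noise.
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(size=16000)

print(f"tone flatness:  {spectral_flatness(tone):.4f}")   # close to 0
print(f"noise flatness: {spectral_flatness(noise):.4f}")  # close to 1
```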

A good sign: Audience members were asked to identify a deepfake between two video clips (one real, one AI-generated by OpenAI) at the start of Gupta’s presentation. The majority of people in attendance guessed correctly. Even better: Deep Media AI detected the fake and scored a perfect 100/100 in its detection system.

“Generative AI is going to be awesome; it’s going to make us all rich; it’s going to be great,” Gupta said. “But in order for that to happen, we need to make it safe. We’re part of that, but we need militaries and governments. We need buy-in from the generative AI companies. We need buy-in from the tech ecosystem. We need detectors. And we need journalists to tell us what’s real, and what’s fake from a trusted source, right? I think it’s possible. We’re here to help, but we’re not the only ones here. We’re hoping to provide solutions that people use.”

VP of Global Policy at Mozilla, Linda Griffin, interviewed by The Washington Post’s Kathleen Koch.

4. Mozilla’s push for trustworthy AI

As we shift toward a world where AI is genuinely helpful, it’s important that we involve human beings in that process as much as possible. When companies build AI with only profit in mind rather than the public, it erodes public trust and faith in big tech.

This work is urgent, and Mozilla has been delivering its trustworthy AI report — first published in 2020, with a status update released this February — to advance our vision of a healthy internet where openness, competition and accountability are the norms.

“We want to know what you think,” Mozilla VP of Global Policy Linda Griffin said. “We’re trying to map and guide where we think these conversations are. What is the point of AI unless more people can benefit from it more broadly? What is the point of this technology if it’s just in the hands of the handful of companies thinking about their bottom line?

“They do important and really interesting things with the technology; that’s great. But we need more; we need the public counterpoint. So, for us, trustworthy AI, it’s about accountability, transparency, and having humans in the loop thinking about people wanting to use these products and feeling safe and understanding that they have recourse if something goes wrong.”

5. AI’s ability to change rules in the NFL (yes!).

While the NFL is early in the process of incorporating AI into the game of football, the league has found ways to get the ball rolling (pun intended) on using its tools to make the game smarter and better.

One area is health and safety, a major priority for the NFL. The league uses AI and machine learning tools on the field to generate predictive analysis identifying the plays and body positions most likely to lead to player injuries. The league can then adjust rules and strategies accordingly, if it wants.

For example, kickoffs. Concussions sustained on kickoffs dropped by 60 percent in the NFL last season, from 20 to eight. That is because kickoffs were returned less frequently after the league adjusted the rules governing kickoff returns during the previous offseason, so that a returner could signal for a fair catch no matter where the ball was kicked, and the ball would be placed on the 25-yard line. This change came after the NFL used AI tools to gather injury data on those plays.

“The insight to change that rule had come from a lot of the data we had collected with chips on the shoulder pads of our players of capturing data, using machine learning, and trying to figure out what is the safest way to play the game,” Brian Rolapp, Chief Media & Business Officer for the NFL, told media reporter Ben Strauss, “which led to an impact of rule change.”
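As a rough illustration of how chip-collected tracking data can feed that kind of predictive analysis, here is a hedged sketch. The NFL’s actual features, data and models are not public; everything below (feature names, synthetic labels, model choice) is invented for demonstration only.

```python
# Illustrative sketch only: a simple injury-risk model over invented
# per-play tracking features. Not the NFL's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_plays = 5000

# Invented features: peak closing speed between players (m/s),
# high-speed collision count, and return distance (yards).
closing_speed = rng.uniform(3, 10, n_plays)
collisions = rng.poisson(1.5, n_plays)
return_yards = rng.uniform(0, 40, n_plays)

# Synthetic ground truth: risk grows with speed and collision count.
logits = 0.6 * closing_speed + 0.8 * collisions + 0.02 * return_yards - 7
injury = rng.random(n_plays) < 1 / (1 + np.exp(-logits))

X = np.column_stack([closing_speed, collisions, return_yards])
model = LogisticRegression(max_iter=1000).fit(X, injury)

# Compare predicted risk: full-speed return vs. fair catch.
print("return risk:    ", model.predict_proba([[9.0, 3, 30.0]])[0, 1])
print("fair-catch risk:", model.predict_proba([[4.0, 0, 0.0]])[0, 1])
```

A model like this can only surface patterns; the judgment call of trading excitement for safety, as the kickoff example shows, stays with the league.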

While kickoff injuries have gone down, making this tweak to one of the most exciting plays in football is tough. So this year, the NFL is working on a compromise and exploring new ideas that can strike a balance between safety and excitement. A vote on the matter will take place at league meetings this week in front of coaches, general managers and ownership.

6. Don’t forget about tech for accessibility.

With the new chapter of AI, the possibilities for investing in and creating tools for people with disabilities are endless. For those who are blind, have low vision or have trouble hearing, AI offers an entirely new slate of capabilities.

Apple has been one of the companies at the forefront of creating features for users with disabilities. For example, Apple has implemented live captions, sound recognition and voice control on iPhones to assist users.

Sarah Herrlinger, Senior Director of Global Accessibility Policy & Initiatives at Apple, gave insight into how the tech giant decides what features to add and which ones to update. In doing so, she delivered one of the best talking points of the day.

“I think the key to that is really engagement with the communities,” Herrlinger said. “We believe very strongly in the disability mantra of, nothing about us without us, and so it starts with first off employing members of these communities within our ranks. We never build for a community. We build with them.”

Herrlinger was joined on stage by retired Judge David S. Tatel; Mike Buckley, the chair and CEO of Be My Eyes; and Amanda Morris, disability reporter for The Post. When asked about the future of accessibility for people who are blind, Tatel shared a touching sentiment that resonates with many in the disability space.

“It’s anything that improves and enhances my independence, and enhances it seamlessly, is what I look for,” Tatel said. “That’s it. Independence, independence, independence.”


Introducing the Columbia Convening on Openness and AI
Wed, 06 Mar 2024 | https://blog.mozilla.org/en/mozilla/ai/introducing-columbia-convening-openness-and-ai/

We brought together experts to tackle a critical question: What does openness mean for AI, and how can it best enable trustworthy and beneficial AI?
Participants in the Columbia Convening on Openness and AI.

Update | May 21, 2024: Following this convening, we published a paper that presents a framework for grappling with openness across the AI stack. Learn more about it here, and read the paper here.

Original Publication | March 6, 2024: On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals — spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era. Open source software helped make the internet safer and more robust in its earlier eras — and offered trillions of dollars of value to startups and innovators as they created the digital services we all use today. Our shared hope is that open approaches can have a similar impact in the AI era.

To help unlock this significant potential, the Columbia Convening took an important step toward developing a framework for openness in AI and unifying the openness community around shared understandings and next steps. Participants noted that:

  • Openness in AI has the potential to advance key societal goals, including making AI safe and effective, unlocking innovation and competition in the AI market, and bringing underserved communities into the AI ecosystem.
  • Openness is a key characteristic to consider throughout the AI stack, and not just in AI models themselves. In components ranging from data to hardware to user interfaces, there are different types of openness that can be helpful for accomplishing different technical and societal goals. Participants reviewed research mapping dimensions of openness in AI, and noted the need to make it easier for developers of AI systems to understand where and how openness should be central to the technology they build.
  • Policy conversations need to be more thoughtful about the benefits and risks of openness in AI. For example, comparing the marginal risk that open systems pose in relation to closed systems is one promising approach to bringing rigor to this discussion. More work is needed across the board — from policy research on liability distribution, to more submissions to the National Telecommunications and Information Administration’s request for comment on “dual-use foundation models with widely available model weights.”
  • We need a stronger community and better organization to help build, invest, and advocate for better approaches to openness in AI. This convening showed that the openness community can have collaborative, productive discussions even when there are meaningful differences of opinion between its members. Mozilla committed to continuing to help build and foster community on this topic.

Getting “open” right for AI will be hard — but it’s never been more timely or important. Today, while everyone gushes about how generative AI can change the world, only a handful of products dominate the generative AI market. The lack of competition in AI products today is a real problem. It could mean that the new AI products we’ll begin to see in the next several years won’t be as innovative and safe as we need them to be – but instead, be built on the same closed, proprietary model that has defined roughly the last decade of online life. That’s why Mozilla’s recent report on Accelerating Progress Toward Trustworthy AI doubles down on openness, competition, and accountability as vital to the future of AI.

We know a better future is possible. During earlier eras of the Internet, open source technologies played a core role in promoting innovation and safety. Open source software made it easier to find and fix bugs in software. Attempts to limit open innovation — such as export controls on encryption in early web browsers — ended up being counterproductive, further exemplifying the value of openness. And, perhaps most importantly, open source technology has provided a core set of building blocks that software developers have used to do everything from create art to design vaccines to develop apps that are used by people all over the world; it is estimated that open source software is worth over $8 trillion in value. 

For years, we saw similar benefits play out for AI. Industry researchers openly published foundational AI research and frameworks, making it easier for academics and startups to keep pace with AI advances and enabling an ecosystem of external experts who could challenge the big AI players. But, the benefits of this approach are not assured as we enter a new wave of innovation around AI. As training AI systems requires more compute and data, some key players are shifting their attention away from publishing research and toward consolidating competitive advantages and economies of scale to enable foundational models on demand. As AI risks are being portrayed as murkier and more hypothetical, it is becoming easier to argue that locking down AI models is the safest path forward. Today, it feels like the benefits and risks of AI depend on the whims of a few tech companies in Silicon Valley.

This can’t be the best approach to AI. If AI is truly so powerful and pervasive, shouldn’t AI be subject to real scrutiny from third-party assessments? If AI is truly so innovative and useful, shouldn’t there be more AI tools and systems that startups and small businesses can use?

We believe openness can and must play a key role in the future of AI — the question is how. Late last year, we and over 1,800 people signed our letter noting that although the signatories represent different perspectives on open source AI, they all agree that open, responsible, and transparent approaches are critical to safety and security in the AI era. Indeed, across the AI ecosystem, some advocate for staged release of AI models, others believe other forms of openness in the AI stack are more important, and yet others believe every part of AI systems should be as open as possible. There are people who believe in openness for openness’ sake, and others who view openness as a means to other societal goals — such as identifying civil rights and privacy harms, promoting innovation and competition in the market, and supporting consumers and workers who want a say in how AI is deployed in their communities. We were thrilled to bring together people with very divergent views on and motivations for openness to collaborate on strengthening and leveraging openness in support of their missions.

We’re immensely grateful to the participants in the Columbia Convening on Openness and AI:

  • Anthony Annunziata — Head of AI Open Innovation and AI Alliance, IBM
  • Mitchell Baker — Chairwoman, Mozilla Foundation
  • Kevin Bankston — Senior Advisor on AI Governance, Center for Democracy and Technology
  • Adrien Basdevant — Tech Lawyer, Entropy Law
  • Ayah Bdeir — Senior Advisor, Mozilla
  • Philippe Beaudoin — Co-Founder and CEO, Waverly
  • Brian Behlendorf — Chief AI Strategist, The Linux Foundation
  • Stella Biderman — Executive Director, EleutherAI
  • John Borthwick — CEO, Betaworks
  • Zoë Brammer — Senior Associate for Cybersecurity & Emerging Technologies, Institute for Security and Technology
  • Glenn Brown — Principal, GOB Advisory
  • Kasia Chmielinski — Practitioner Fellow, Stanford Center on Philanthropy and Civil Society
  • Peter Cihon — Senior Policy Manager, GitHub
  • Julia Rhodes Davis — Chief Program Officer, Computer Says Maybe
  • Merouane Debbah — Senior Scientific AI Advisor, Technology Innovation Institute
  • Alix Dunn — Facilitator, Computer Says Maybe
  • Michelle Fang — Strategy, Cerebras Systems
  • Camille François — Faculty Affiliate, Institute for Global Politics at Columbia University’s School of Public and International Affairs
  • Stefan French — Product Manager, Mozilla.ai
  • Yacine Jernite — Machine Learning and Society Lead, Hugging Face
  • Amba Kak — Executive Director, AI Now Institute
  • Sayash Kapoor — Ph.D. Candidate, Princeton University
  • Helen King-Turvey — Managing Partner, Philanthropy Matters
  • Kevin Klyman — AI Policy Researcher, Stanford Institute for Human-Centered AI
  • Nathan Lambert — ML Scientist, Allen Institute for AI 
  • Yann LeCun — Vice President and Chief AI Scientist, Meta
  • Stefano Maffulli — Executive Director, Open Source Initiative
  • Nik Marda — Technical Lead, AI Governance, Mozilla
  • Ryan Merkley — CEO, Conscience
  • Mohamed Nanabhay — Managing Partner, Mozilla Ventures
  • Deval Pandya — Vice President of AI Engineering, Vector Institute
  • Deb Raji — Fellow at Mozilla and PhD Student, UC Berkeley
  • Govind Shivkumar — Director, Investments, Omidyar Network 
  • Aviya Skowron — Head of Policy and Ethics, EleutherAI
  • Irene Solaiman — Head of Global Policy, Hugging Face
  • Madhulika Srikumar — Lead for Safety Critical AI, Partnership on AI
  • Victor Storchan — Lead, AI/ML Research, Mozilla.ai
  • Mark Surman — President, Mozilla Foundation
  • Nabiha Syed — CEO, The Markup
  • Martin Tisne — CEO, AI Collaborative, The Omidyar Group
  • Udbhav Tiwari — Head of Global Product Policy, Mozilla
  • Justine Tunney — Founder, Mozilla’s LLaMAfile project
  • Imo Udom — SVP of Innovation, Mozilla
  • Sarah Myers West — Managing Director, AI Now Institute

In the coming weeks, we intend to publish more content related to the convening. We will release resources to help practitioners and policymakers grapple with the opportunities and risks from openness in AI, such as determining how openness can help make AI systems safer and better. We will also continue to bring similar communities together, helping to keep pushing forward on this important work.

Openness and AI: Fostering innovation and accountability in the EU’s AI Act
Wed, 09 Aug 2023 | https://blog.mozilla.org/en/mozilla/internet-policy/eu-ai-act/


Open source lies at the heart of Mozilla and our Manifesto. Despite its ubiquity in the current technology landscape, it is easy to forget that open source was once a radical idea which was compared to cancer. In the long journey since, Mozilla has helped create an open source browser, email client, programming language, and data donation platform while applying the ethos beyond our code, including our advocacy.

Recent developments in the AI ecosystem have put open source back in the spotlight, sparking heated conversations and accusations about whose ends it serves – a global community of developers, the entrenched dominance of big tech companies, or a little bit of both? Motivations and incentives matter and Mozilla believes in representing the core set of values behind open source while working with players of all sizes to advance a trustworthy ecosystem. As we noted in 2021, “openness” is often at risk of being co-opted, serving as little more than a facade meant to shield organizations and governments from scrutiny.

We’ve been following the debate closely at Mozilla and think the nature of open source in AI raises many new questions that are still in the process of being answered – what is open source when it comes to AI? How should regulation treat open source to foster innovation without providing a free pass from necessary regulatory compliance? What are the contours of commercial deployment and the corresponding liability in open source software? None of these are easy questions, and the potential for abuse inherent in powerful models only serves to further muddy the waters. Mozilla is exploring these questions by building open source technology to advance trustworthy AI at Mozilla.ai, giving grants to our community through the Mozilla Technology Fund (MTF), and through our public policy work.

On the public policy front, EU legislators are paying close attention to these developments. The most recent proposals for the AI Act from the European Parliament and member states include dedicated language on open source, and for good reason: open source development offers tremendous opportunities and can enable significant innovation and commercial deployment. Just as importantly, making AI models and components available under permissive licenses opens them up to important scrutiny from researchers aiming to evaluate, amongst other things, their safety and trustworthiness.

However, the special nature and capabilities of the open source ecosystem clearly require further improvements in the AI Act before finalization, as a coalition from the open source community recently argued. We think the coalition’s paper brings some much needed clarity, specifically by centering the debate on two key facets:

“First, the values of sound research, reproducibility, and transparency fostered by open science are instrumental to the development of safe and accountable AI systems.

Second, open source development can enable competition and innovation by new entrants and smaller players, including in the EU.”

As we continue to crystallize our thoughts on these issues – both by collaborating with allies and by centering our thinking around the community, an integral aspect of Mozilla’s role in the technology ecosystem – we are highlighting key considerations for EU legislators as they finalize the AI Act.

Regulating open source and openness requires definitional clarity

Slippery definitions of open source AI are rife in the ecosystem. In the absence of definitional clarity, shifting meanings of “open source” and “open” can be deployed strategically in the policy domain in ways that reduce oversight and hinder accountability. The final version of the AI Act should therefore clearly define what it means by “open source” and related terms.

Here are a few key places where clarity could help move the ball forward in the direction of greater AI accountability and further enabling open source development:

First, EU legislators should ensure that any definition of open source focuses on permissive licenses that are not undercut with restrictions on how or for what purposes they can be used. Releases that do include such restrictions would not meet conventional definitions of “open source”, including the definition provided by the Open Source Initiative (OSI). The OSI definition could serve as a helpful point of reference in this regard. Should legislators want to create exemptions similar to those relating to open source releases for releases that come with certain use restrictions, for example so-called open responsible AI licenses (or open RAIL) or for releases limited to research uses, they should do so explicitly and without expanding conventional definitions of open source through regulation.

Second, openness in relation to AI is more complex, and more expansive, than in other contexts. While open source software typically relates to source code, it can relate to a number of different artifacts in AI: from the entire model (i.e. the model weights; not source code) to components like training data or the source code underlying deployment software or the training process. The AI Act should therefore clearly define AI components and be specific with regard to the question of which obligations should apply to providers of which components. The co-legislators’ proposals are still laden with ambiguity in this regard. For example, would obligations concerning foundation models apply only to those open-sourcing the trained model or also to those open-sourcing constituent components, such as training datasets? And how should obligations be applied if, for example, the model is openly available but the training data is not? In answering these questions, EU legislators should duly take into account the capabilities of the various actors along the supply chain and of open source communities more generally.

Recommendation: The AI Act should clarify that technologies claiming special treatment in this context should be released under licenses aligned with the Open Source Initiative (OSI) definition of “open source”. Further, the law should clarify the minimum set of components (indicatively – models, weights, training data, etc.) that should be released under an OSI license to benefit from regulatory exemptions.

Context is key when determining regulatory obligations

Open source AI enables important research on AI and its risks, but simply open-sourcing an AI model does not necessarily mean that it is released with research as its primary purpose. In fact, enabling broader commercialization has always been a key tenet of the open source movement. While appealing at first glance, relying solely on the intent to commercialize an AI model as a criterion for imposing regulatory obligations raises an array of thorny questions for regulators.

First, unless stipulated otherwise (e.g., through use restrictions in the license under which a model is released), openly released models can be adapted and used for any purpose — and that should be taken into account in formulating obligations for open source providers. At the same time, while many open source AI projects are carried out in the public interest or by open source community groups (e.g., BigScience’s BLOOM or EleutherAI), some are driven by well-resourced commercial actors. In fact, some of the most widely used and commercialized permissively licensed AI models — e.g., Stable Diffusion or the LLaMA family of models — have been developed by companies such as Stability AI or Meta, in some cases with the deliberate intent of commercialization.

Meaningfully tying regulatory obligations to commercialization requires much greater clarity on what differentiates non-commercial and commercial development, deployment, and maintenance. For instance, would a developer be considered a “commercial” actor if they receive some compensation for maintaining an open source component (that is also used for commercial purposes) in addition to their day job? The AI Act currently doesn’t provide that clarity. This is an issue that has also cropped up in debates around the EU’s Cyber Resilience Act (CRA), where the solution likely lies in a combination of a revenue threshold (on the order of €10-20 million or more) and the possibility of subjective exceptions. EU co-legislators should pay close attention to such files grappling with similar questions and further ensure that the open source community has clarity when it comes to interpreting concepts such as “placing on the market” and “putting into service”, which are critical in this respect.

Recommendation: The AI Act should provide clarity on the criteria by which a project will be judged to determine whether it has crossed the “commercialisation” threshold, including revenue. We urgently need language that ensures there is a subjective determination process that allows for nuance to reflect the variety of open source projects in difficult cases.

Openness doesn’t absolve from responsibility — but obligations should be proportionate

Regulatory obligations imposed on open source AI providers should not disincentivize open source development or outmatch the capabilities of the open source communities. However, they should also not lose sight of the AI Act’s objective to prevent harm and facilitate trust in AI. Therefore, it is important not to forget that open source AI should emphasize responsibility and trustworthiness, too. Nonetheless, any obligations imposed on open source AI should take into account the fact that with an increasing level of openness, compliance and evaluation become easier to achieve for downstream actors. For example, testing obligations can be met more easily if the model weights are made openly available (and the model subsequently deployable by anyone with sufficient computing resources to do so). Similarly, data governance requirements are easier to meet for downstream actors if training datasets are openly available.

The details of what this should look like in practice are key, and suggestions should include subjective criteria that allow for case-by-case determination rather than just definitional gymnastics. This is the only way to ensure that merely marking something as open source neither absolves anyone of all liability nor places a crippling burden that makes open source AI development unfeasible. This is also linked to the question of base models versus fine-tuning and other forms of modification, where liability questions are far less clear.

Recommendation: The AI Act should allow for proportional obligations in the case of open source projects while creating strong guardrails to ensure they are not exploited to hide from legitimate regulatory scrutiny. This should include subjective criteria and a process that allows for case-by-case determination rather than encouraging definitional gymnastics.

Conclusion

It is clear that open source is only a part of the broader issue of what it takes to have an open and competitive AI landscape. We are thinking hard about this at Mozilla from the lens of data, infrastructure and compute resources, community involvement, liability and many other factors – there is much more to discuss and to do.

The web is for everyone: Our vision for the evolution of the web
Wed, 23 Mar 2022 | https://blog.mozilla.org/en/mozilla/mozilla-webvision-future-of-web/

Over the last two decades, the web has woven itself into the fabric of our lives. What began as a research project has become the world’s most important communication platform and an essential tool for billions of people. 

But despite its success — and sometimes because of it — the web has real problems. People are routinely spied on by advertisers and oppressive governments, often at the moments when the open web is most necessary. They find themselves disempowered by hostile sites, sluggish experiences, and overly complex technologies. And much of the web remains out of reach for non-native English speakers and people with disabilities.

Mozilla believes the web should be for everyone — open, empowering, and safe. In its best moments, the web exemplifies these values today. But too often the web today does not deliver on this promise. To that end, we’ve mapped out a detailed vision of the changes we want to see in the web in the years ahead, and the work we believe is necessary to achieve them. This includes efforts on a number of fronts — deploying ubiquitous encryption, ending tracking, simpler and faster technologies, next-generation internationalization support and much more.

We believe that to make the web a better place, we need to focus our work on these nine areas:

  • Protect user privacy: Essentially all user behavior on the web is subject to tracking and surveillance. A truly open and safe web requires that what people do remains private; this requires gradually shifting the ecosystem towards a new equilibrium without breaking the web in the process.
  • Protect users from malicious code: Users must be able to browse without fear that their devices will be compromised, and yet every web browser routinely has major security vulnerabilities. The technologies finally exist to significantly reduce this kind of security issue; we are increasing our use of them in Firefox and look forward to others doing the same.
  • Encrypt everything: All user communications should be encrypted. We are near the end of a long process to secure all HTTP traffic, and encryption needs to be retrofitted into existing legacy protocols such as DNS and built into all new protocols by default. (A minimal DNS-over-HTTPS sketch follows this list.)
  • Extend the web… Safely: New capabilities make the web more powerful but also create new risks. The value added by new capabilities needs to be weighed against these risks; some applications may ultimately not be well suited for the web and that’s OK.
  • Make the web fast enough for any use: While web browsers are much faster now than they were five years ago, we still see major performance issues. Fixing these requires making both browsers and infrastructure faster, and also making it easier and more attractive for people to build fast sites.
  • Make it easy for anyone to publish on the web: While early websites were relatively simple and easy to build, the demands of performance and high production values have made the web increasingly daunting to work with. Our strategy is to categorize development techniques into increasing tiers of complexity, and then work to eliminate the usability gaps that push people up the ladder towards more complex approaches.
  • Give users the power to experience the web on their own terms: The web is for users. In order to fulfill that promise we need to ensure that they, not sites, control their experience, whether that means blocking ads or viewing content in accessible form. This requires building a browser that displays the web the way the user wants it — rather than just following instructions from the site — as well as strengthening the technical properties of web standards that enable this kind of reinterpretation.
  • Provide a first-class experience for non-English-speakers: The technical architecture and content ecosystem of the web both work best for North-American English speakers, who are a fraction of the world. We want the web to work well for everyone regardless of where they live and what languages they speak.
  • Improve accessibility for people with disabilities: As web experiences have grown richer, they’ve also become more difficult to use with assistive technology like screen readers. We want to reverse this trend.
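The “Encrypt everything” item above can be made concrete with a small sketch: resolving a hostname over DNS-over-HTTPS (DoH) instead of plaintext UDP DNS, so that on-path observers cannot read the query. Cloudflare’s public resolver and its DNS JSON API serve as one example endpoint here; any DoH resolver supporting that API would behave similarly.

```python
# Minimal sketch: a DNS lookup carried over HTTPS rather than plaintext.
import json
import urllib.request

def resolve_doh(hostname: str, record_type: str = "A") -> list:
    """Resolve a name via Cloudflare's DNS JSON API over HTTPS."""
    url = (
        "https://cloudflare-dns.com/dns-query"
        f"?name={hostname}&type={record_type}"
    )
    req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    # "Answer" is absent when the name does not resolve.
    return [record["data"] for record in answer.get("Answer", [])]

print(resolve_doh("mozilla.org"))
```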

You can read much more about each of these objectives in the full document. We’ve been using this roadmap to guide our work on Firefox and other Mozilla products. We also recognize that it’s a big web and fixing it is a team effort. We’re looking forward to working with others to build a better web.

The website security ecosystem protects individuals against fraud and state-sponsored surveillance. Let’s not break it.
Thu, 03 Mar 2022 | https://blog.mozilla.org/en/security/mozilla-eff-cybersecurity-experts-publish-letter-on-dangers-of-article-452-eidas-regulation/

Principle four of the Mozilla Manifesto states that “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” We’ve made real progress on improving security on the Internet, but unfortunately, a draft law under discussion in the EU – the eIDAS Regulation – threatens to reverse that progress. Mozilla and many others have been raising the alarm in the last few months. Today, leading cybersecurity experts are weighing in too, in an open letter to EU lawmakers that warns of the risks that eIDAS represents to web security.

Website certificates sit at the heart of web security. When you make a connection to a web site, say “mozilla.org”, that connection is protected with TLS, but TLS only protects the connection itself; each server has a certificate which ensures that the server on the other end is “mozilla.org” and not an attacker impersonating Mozilla. Certificates are issued by Certificate Authorities (CAs), who are responsible for verifying that a given entity controls the site in question. 
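To see that machinery from the client’s side, here is a short Python sketch, standard library only, that opens a TLS connection and inspects the certificate the server presents. The handshake succeeds only because the certificate chains up to a CA already present in the local trust store, which is exactly what a root program curates.

```python
# Sketch: inspect the certificate a server presents during the TLS handshake.
import socket
import ssl

hostname = "mozilla.org"
context = ssl.create_default_context()  # loads the platform's trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the handshake and verifies the chain + hostname;
    # it raises ssl.SSLCertVerificationError if no trusted CA vouches for it.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

print("subject:", dict(pair[0] for pair in cert["subject"]))
print("issuer: ", dict(pair[0] for pair in cert["issuer"]))
print("expires:", cert["notAfter"])
```

Point the same script at a host with an expired or self-signed certificate and the connection is refused before any application data flows.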

A malicious CA — or just one which did not have secure practices — could issue incorrect certificates which could then be used by attackers to attack people’s connections and steal their data. In order to ensure that CAs are held to high standards, each major browser and operating system maintains their own “Root Program,” which is responsible for vetting CAs to ensure that they have acceptable issuance practices, and, where necessary, removing CAs who do not adhere to those practices. For 18 years, Mozilla has operated its Root Program in the open, with published practices and where each proposed CA is considered on a public mailing list, ensuring that any stakeholder can be heard.

Proposed EU legislation threatens to disrupt this balance. Article 45.2 of the eIDAS Regulation mandates support for a new kind of certificate called a Qualified Website Authentication Certificate (QWAC). Under this regulation, QWACs would be issued by Trust Service Providers (another name for CAs), with those TSPs being approved not by the browsers but rather by the governments of individual EU member states. Browsers would be required to trust certificates issued by those TSPs regardless of whether they would meet Root Program security requirements, and without any way to remove misbehaving CAs. 

This change would weaken the security of the web by preventing browsers from protecting their users from the security risks – such as identity theft and financial fraud – that a misbehaving CA can expose them to. Worse, compelled inclusion of CAs in our root program would set a precedent for action by repressive regimes. We have already seen state actors (such as Kazakhstan) try to ramp up their surveillance capacities by forcing browsers to automatically trust their CAs — a dangerous practice that browsers and civil society organizations have successfully resisted so far. But if we set the precedent that web browsers can’t hold CAs to appropriate security standards, that could change quickly.

Technical experts at Mozilla, the Internet Society, the Electronic Frontier Foundation, as well as European civil society organisations have all spoken out about how these requirements would be bad for the web. Today, Mozilla and the EFF are publishing a letter signed by 38 cybersecurity experts about the danger of Article 45.2 to web security and recommendations for how lawmakers can avoid those dangers. The letter demonstrates that the cybersecurity community believes this provision is a threat to web security, creating more problems than it solves.   

Americans deserve federal privacy protections and greater transparency into hidden harms online
Wed, 16 Feb 2022 | https://blog.mozilla.org/en/mozilla/americans-deserve-federal-privacy-protections-and-greater-transparency-into-hidden-harms-online/

“Privacy is fundamental and cannot be treated as optional. Companies and regulators need to work hand in hand to provide stronger privacy protections to people. Technical privacy protections by companies are complementary to privacy regulation and neither alone is sufficient.”

Marshall Erwin

Marshall Erwin, Mozilla’s Chief Security Officer, testified today before the Committee on House Administration. The Committee held a hearing on “Big Data: Privacy Risks and Needed Reforms in the Public and Private Sectors.” Members of Congress and witnesses highlighted, among other privacy concerns, the need for baseline federal privacy protections.

In his testimony, Marshall focused on:

  • Mozilla’s work to make our vision for privacy and security a reality in the products we build and the technologies we develop.
  • The essential role that Congress plays in creating a healthier internet, including a call for US federal privacy legislation.
  • Mozilla’s support for complementary rules to provide greater transparency into how people experience online discrimination and harm when their data is collected, used and shared without meaningful awareness or consent.
  • And finally, the need to foster stronger consumer protection and competition obligations, while simultaneously ensuring a favorable environment for privacy-enhancing technologies.

“We believe through our product and policy work we can help address the data privacy gaps that exist today, impacting consumers, companies, and the public sector alike. Despite being a powerhouse of technology innovation for years, the United States is behind globally when it comes to recognizing consumer privacy and protecting people from indiscriminate data collection and use.” 

Marshall Erwin

Privacy online is in desperate need of reform, and Mozilla’s efforts to improve the ecosystem and empower people take many shapes. We advocate to policymakers for comprehensive privacy legislation, for greater ad transparency and for strong enforcement around the world. We offer industry-leading anti-tracking protection by default to all users in the Firefox browser and offer a VPN service. But we know we cannot do it alone. Others need to change too. That’s why we work with other browser makers, ad networks, publishers and advertisers to put forward proposals that would make online advertising less privacy-invasive and improve people’s privacy. And it’s why we push other tech companies to reinforce their privacy protections.

For more, check out the replay of the hearing or read Marshall’s prepared statement for the Committee.

For press inquiries, contact press [at] mozilla.com.

In California, an Important Victory for Net Neutrality
Fri, 28 Jan 2022 | https://blog.mozilla.org/en/mozilla/in-california-an-important-victory-for-net-neutrality/

Today, the Ninth Circuit Court of Appeals upheld California’s net neutrality law, affirming that California residents can continue to benefit from the fundamental safeguards of equal treatment and open access to the internet. This decision clears the way for states to enforce their own net neutrality laws, ensuring that consumers can freely access ideas and services without unnecessary roadblocks. Net neutrality matters, as much of our daily life is now online. It ensures that consumers are protected from ISPs blocking or throttling their access to websites, or creating fast lanes and slow lanes for popular services.

In this case, telecom and cable companies took the position that California could not prevent them from blocking, slowing or prioritizing certain internet traffic on the grounds that federal law preempted state net neutrality law. Mozilla joined a coalition of public interest organizations in submitting an amicus brief in support of California. The result today is a victory and a crucial step in reinstating protections for families and businesses across the country.

Mozilla believes that people everywhere deserve the same ability to control their own online experiences. The need for net neutrality protections has become even more apparent during the pandemic. In March 2021, Mozilla sent a joint letter to the Federal Communications Commission (FCC) asking the Commission to reinstate net neutrality at the federal level, as a matter of urgency.

Mozilla has long defended people’s access to the internet, in the US and around the world. In recent years, we’ve fought to uphold net neutrality in the U.S. — in the courts, by mobilizing countless Americans to speak up, and by showcasing how it’s a bipartisan issue. More recently, in a March 2021 survey conducted in collaboration with YouGov, we found that an overwhelming majority of people, 72%, say that consumers (not businesses) should control what they see and do on the internet. 

We’re grateful to be a part of a broad community pushing for net neutrality protections and will continue to work to ensure the internet remains a public, global resource, open and accessible to all. 
