The development and use of technologies can lead businesses to cause or contribute to human rights abuses and other social and environmental harms. For example, bias in datasets and machine learning models can lead to discriminatory hiring practices, while algorithms used by online platforms can manipulate consumers and voters by amplifying the spread of disinformation. Technology can also enable privacy infringements, such as widespread surveillance and invasive monitoring of worker activity.
The risks related to the development and use of artificial intelligence (AI) in particular have drawn the attention of businesses and policymakers in recent years. In response to these concerns, there is a growing emphasis on responsible and trustworthy AI, including the advancement of ethical frameworks, guidelines, and standards for AI development and use.
The OECD is working with a multi-stakeholder network of experts to develop guidance on responsible business conduct due diligence in the development and use of trustworthy AI systems.