Ethics is a set of moral principles that help us discern between right and wrong. AI ethics is a multidisciplinary field that studies how to optimize the beneficial impact of artificial intelligence (AI) while reducing risks and adverse outcomes.
Examples of AI ethics issues include data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse. This article aims to provide a comprehensive market view of AI ethics in the industry today. To learn more about IBM’s point of view, see our AI ethics page.
With the emergence of big data, companies have increased their focus on driving automation and data-driven decision-making across their organizations. While the intention is usually, if not always, to improve business outcomes, companies are experiencing unforeseen consequences in some of their AI applications, particularly due to poor upfront research design and biased datasets.
As instances of unfair outcomes have come to light, new guidelines have emerged, primarily from the research and data science communities, to address concerns around the ethics of AI. Leading companies in the field of AI have also taken a vested interest in shaping these guidelines, as they themselves have started to experience some of the consequences of failing to uphold ethical standards within their products. Lack of diligence in this area can lead to reputational, regulatory and legal exposure, resulting in costly penalties. As with all technological advances, innovation tends to outpace government regulation in new, emerging fields. As the appropriate expertise develops within government, we can expect more AI protocols for companies to follow, enabling them to avoid infringements on human rights and civil liberties.
Establishing principles for AI ethics
While rules and protocols develop to manage the use of AI, the academic community has leveraged the Belmont Report as a means to guide ethics within experimental research and algorithmic development. Three main principles came out of the Belmont Report and serve as a guide for experiment and algorithm design:
- Respect for Persons: This principle recognizes the autonomy of individuals and upholds an expectation for researchers to protect individuals with diminished autonomy, which could be due to a variety of circumstances such as illness, mental disability or age restrictions. This principle primarily touches on the idea of consent. Individuals should be aware of the potential risks and benefits of any experiment that they’re a part of, and they should be able to choose to participate or withdraw at any time before or during the experiment.
- Beneficence: This principle takes a page out of healthcare ethics, where doctors take an oath to “do no harm.” The idea applies readily to artificial intelligence, where algorithms can amplify biases around race, gender, political leanings and more, despite the intention to do good and improve a given system.
- Justice: This principle deals with issues such as fairness and equality. Who should reap the benefits of experimentation and machine learning? The Belmont Report offers five ways to distribute burdens and benefits (a minimal fairness check in this spirit is sketched after this list), which are by:
  - Equal share
  - Individual need
  - Individual effort
  - Societal contribution
  - Merit
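To make the fairness concerns behind the Justice principle a bit more concrete, the sketch below shows one very simple way a team might check whether an automated decision favors one group over another. It is a minimal illustration, not part of the Belmont Report; the `demographic_parity_gap` function, the prediction and group arrays, and the use of NumPy are all assumptions introduced here for the example.

```python
# Illustrative sketch: a simple demographic parity check.
# `predictions` and `groups` are hypothetical arrays; a real audit would use
# richer metrics and a domain-specific definition of fairness.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Return the largest difference in positive-outcome rates across groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Example: an approval model whose outcomes favor group "A" over group "B".
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
grps = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_gap(preds, grps))  # 0.6 -> a 60-point gap in approval rates
```

A large gap does not by itself prove an algorithm is unjust, but checks like this are a common starting point for asking who is bearing the burdens and who is reaping the benefits of a system.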
Primary concerns of AI today
There are a number of issues that are at the forefront of ethical conversations surrounding AI technologies in the real world. Some of these include:
Foundation models and generative AI
The release of ChatGPT in 2022 marked a true inflection point for artificial intelligence. The abilities of OpenAI’s chatbot—from writing legal briefs to debugging code—opened a new constellation of possibilities for what AI can do and how it can be applied across almost all industries.
ChatGPT and similar tools are built on foundation models: AI models that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale generative models, comprising billions of parameters, that are trained on unlabeled data using self-supervision. This allows foundation models to quickly apply what they’ve learned in one context to another, making them highly adaptable and able to perform a wide variety of tasks. Yet there are many potential issues and ethical concerns around foundation models that are commonly recognized in the tech industry, such as bias, generation of false content, lack of explainability, misuse and societal impact. Many of these issues are relevant to AI in general but take on new urgency in light of the power and availability of foundation models.
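As a concrete illustration of that adaptability, the hedged sketch below reuses a general-purpose pretrained model for a downstream task (zero-shot text classification) without any task-specific training, via the Hugging Face `transformers` pipeline API. The library, model name and candidate labels are choices made for this example, not something prescribed by this article.

```python
# Illustrative sketch: adapting a pretrained foundation model to a new task.
# Assumes the `transformers` package is installed and the model can be downloaded.
from transformers import pipeline

# Zero-shot classification reuses a general-purpose pretrained model for a task
# it was never explicitly trained on.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The loan application was denied despite a strong credit history.",
    candidate_labels=["finance", "healthcare", "sports"],
)
print(result["labels"][0])  # highest-scoring label, e.g. "finance"
```

The same ease of repurposing is what makes questions of bias, false content and misuse more urgent: a single pretrained model's behavior, good or bad, propagates into every downstream application built on it.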
