The Long, Long Road to Ethical AI

R. Bhattacharyya

Summary bullets:

• The preeminent ethical concerns used to be job losses due to AI, bias, and applications of facial recognition, but the ethical debate has become more complicated.

• Even though AI has been around for years, the AI and ethics conversation is just getting started; increasing awareness and education, as well as broadening the types of participants involved in the conversation, are foundational first steps.

Organizations are eager to leverage the insights provided by AI to streamline operations, enhance productivity, generate new business, and personalize the customer experience. Several companies have already deployed the technology, or are at least experimenting with it, and are now looking to scale AI by rolling it out to a broader user base and additional departments. But as AI becomes more widely used, many companies are unclear about how best to navigate the murky waters of AI and ethics. No organization wants to risk the public relations fiasco that would ensue should it be determined that the AI algorithms it uses yield results biased against a specific demographic, or are being applied in a way that is not in line with corporate ethics policies.

Changing regulations and privacy laws, a lack of definitions and standards, concerns over unintentional bias in training data, the potential for bias to creep into models over time, the lack of transparency in machine learning (ML) models, and the dearth of experience with use cases all create challenges. The preeminent ethical concerns used to be job losses due to AI, bias, and applications of facial recognition, but the ethical debate has become more complicated. For instance, using location data linked to a specific group can be problematic; AI analysis of unstructured data from social media apps can lead to false or undesirable assumptions about individuals; and AI algorithms may fail to account for the cultural norms of a sub-segment of society (for example, an algorithm that evaluates creditworthiness based on individual savings doesn't apply to a society that values distributing extra wealth over accumulating it). The list goes on and on.

There are no easy answers, but initiatives to support ethical AI are underway. AI platforms have started offering 'model cards' to bring greater transparency to ML models and their findings. A model card explains how an algorithm works and identifies the degree to which various factors impact the model's findings, so users can tweak or remove inputs as needed. The platforms have also started offering monitoring capabilities that flag algorithms that begin to drift and stop performing as expected, letting data scientists intervene as necessary to mitigate the impact. (Both ideas are sketched in code below.)

Organizations, for their part, have started expanding the teams involved in project deployments to ensure a more multi-disciplinary perspective, including employees from finance, legal, human resources, and other departments beyond IT. The hope is that additional voices and expertise can identify potential ethical concerns early in the project development process and guide course corrections as needed.

Providers of AI platforms have also begun withholding certain features, such as facial analysis and recognition, or putting restrictions in place to prevent their use in applications the providers deem unethical. Many have crafted 'ethical use' policies that customers must adhere to when using their platforms (although the degree to which providers can monitor, and therefore enforce, compliance is questionable), and have established internal teams that review new AI-enabled capabilities and internal use of the technology. Salesforce is creating battle cards that help its sales teams explain to customers how to use data more ethically, and consulting organizations and IT services providers offer guidance on ethical adoption of AI to their customers.

Standards bodies are involved in the conversation as well. The National Institute of Standards and Technology (NIST) is developing taxonomies, terminology, and testbeds for measuring AI risk, along with benchmarks and qualitative and quantitative metrics for evaluating AI technologies.
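To make the model-card idea concrete, here is a minimal sketch in Python. The ModelCard class, its fields, and the credit-scoring values are hypothetical illustrations rather than any vendor's actual schema; the point is that a model card pairs a plain-language statement of intended use with a ranked view of how strongly each input drives the model's findings.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A hypothetical, stripped-down model card for one deployed model."""
    model_name: str
    intended_use: str
    training_data_notes: str
    # Relative influence of each input on predictions, e.g. taken from
    # permutation importance or SHAP values computed by the platform.
    factor_impacts: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

    def report(self) -> str:
        lines = [
            f"Model: {self.model_name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data_notes}",
            "Factor impacts (higher = more influence on findings):",
        ]
        for factor, impact in sorted(self.factor_impacts.items(),
                                     key=lambda kv: -kv[1]):
            lines.append(f"  {factor:<16} {impact:.2f}")
        lines += [f"Known limitation: {note}" for note in self.known_limitations]
        return "\n".join(lines)

card = ModelCard(
    model_name="credit-risk-v2",
    intended_use="Rank loan applications for human review; not for automated denial.",
    training_data_notes="2018-2021 applications; under-represents applicants under 25.",
    factor_impacts={"income": 0.41, "savings_balance": 0.32, "zip_code": 0.18},
    known_limitations=["Savings-based features assume individual wealth accumulation."],
)
print(card.report())

Printed out, a card like this gives a reviewer enough context to question, say, whether zip_code belongs in the model at all: exactly the 'tweak or remove inputs' step described above.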
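The drift-monitoring idea can be sketched just as briefly. The example below uses the Population Stability Index (PSI), one common way to quantify how far the live distribution of a feature (or of the model's scores) has moved from its training-time baseline. The 0.10 and 0.25 alert thresholds are widely used rules of thumb, not a standard, and a real platform would feed checks like this into dashboards and alerts rather than print statements.

import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and live data."""
    # Derive bin edges from the baseline so both samples are compared on the
    # same grid; quantile edges keep the baseline bins evenly populated.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    live_frac = np.histogram(live, edges)[0] / len(live)

    # A small floor avoids log(0) and division by zero in empty bins.
    eps = 1e-6
    base_frac = np.clip(base_frac, eps, None)
    live_frac = np.clip(live_frac, eps, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

rng = np.random.default_rng(seed=0)
training_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.5, 1.3, 10_000)      # live traffic has shifted

value = psi(training_scores, live_scores)
if value > 0.25:
    print(f"PSI={value:.2f}: significant drift, alert a data scientist")
elif value > 0.10:
    print(f"PSI={value:.2f}: moderate drift, keep watching")
else:
    print(f"PSI={value:.2f}: stable")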

Even though AI has been around for years, the AI and ethics conversation is just getting started, and it has a long, long way to go. As AI is used in new ways, new ethical concerns will arise. Furthermore, AI is inextricably linked to data. Bad data yields bad AI results and biased data yields biased model results. It therefore stands to reason that conversations related to the more ethical applications of AI will need to broaden to address ethical issues around data management – particularly data that can be linked to individuals. There is much that still needs to be done, but increasing awareness and education, as well as broadening the types of participants involved in the conversation, are foundational first steps.

What do you think?
