
Summary Bullets:
- Businesses looking to adopt AI must not only evaluate the technology's implications for job displacement and data security, but also consider that algorithms may unintentionally undermine the organization's ethical standards.
- Customers are quick to pass judgement; if unintentional biases become public, a company’s brand reputation may suffer significantly.
Much has been written about ethics and artificial intelligence (AI), and rightly so. With many organizations looking to adopt some form of AI technology in 2018, business leaders are wise to stay on top of emerging ethical concerns.
Job displacement is still a key consideration, as is safeguarding data. In a recent GlobalData survey, 23% of organizations said they had cut staff or left positions unfilled because of AI, and 57% cited security as a top concern.
However, looking ahead, ethics is the real challenge the AI community will need to tackle, and it is far more controversial than security or privacy. What happens when a self-driving car must choose between hitting a child who has run into the road and swerving at the risk of injuring its passenger? How proactive should a personal assistant be when it detects wrongdoing? If a user's usage pattern suggests a serious offense has been committed, should the assistant alert the authorities?
Probably more relevant to business leaders is the concern that they may not know whether an AI-infused application will live up to their organization's ethical standards. It may contain unintentional bias: a financial algorithm that discriminates against a particular race, say, or an application that favors one gender over the other. And what about a phrase that is acceptable when said by one demographic but completely unacceptable when uttered by another? Perhaps an algorithm can be trained to make that distinction reliably, but what happens when it makes a mistake?
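One way teams can probe for this kind of bias before launch is to compare a model's decisions across demographic groups. The sketch below is a minimal illustration, not a full fairness audit: the group labels and approval data are hypothetical, and the 0.8 threshold is an assumption borrowed from the "four-fifths" rule of thumb used in US employment contexts.

```python
# Minimal sketch: checking a model's approval decisions for disparate
# impact across demographic groups. All data here is illustrative.

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, was_approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(was_approved)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest.
    Values below ~0.8 are often treated as a red flag (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical output from a loan-approval model: (group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = approval_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")  # 0.33
```

A ratio this far below the threshold would be a strong signal to revisit the training data before the model ever reaches customers.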
To be fair, such unintended outcomes are not necessarily the fault of the organization using the AI solution; the responsibility may lie in the data used to train the underlying machine learning model. Customers, however, are quick to pass judgement. If and when these unintentional biases become public, customers will quickly assign blame to the company using them, potentially with enormous impact on the brand's reputation.
Just as CEOs may take the blame for customer data breaches, and as a result may lose their jobs, senior leaders are also at risk of taking the fall when an AI solution implemented by their organization crosses an ethical line. It's in their best interest to ensure that doesn't happen; their reputations depend on it.