Salesforce Asks AI-Curious Customers to First Invest in an Ethical Groundwork

B. Shimmin

Summary Bullets:

• There has been a significant rush among technology providers to make artificial intelligence (AI) a self-service endeavor, to make it available to the broadest possible swath of business users.

• But in so doing, companies are creating unanticipated legal exposure for AI practitioners unprepared to protect AI from human bias.

Salesforce has added a new AI learning module to its Trailhead developer education platform with an interesting twist. Rather than teach developers how to build AI outcomes as efficiently as possible, the company's newest educational module asks practitioners to slow down and focus on creating ethically informed AI solutions.

The new Trailhead educational module, entitled "Responsible Creation of Artificial Intelligence," calls attention to an often-overlooked threat from AI: unwitting human biases and intentional human prejudices.

Within these new training materials, Salesforce calls on Einstein developers to adopt its own core values of "trust, customer success, innovation, and equality." The company goes so far as to suggest that developers who fail to adhere to these standards in creating AI algorithms may find themselves in breach of its acceptable use policy.

Why is Salesforce referencing an acceptable use policy in conjunction with the ethical use of AI? Surely companies not engaged in outright nefarious endeavors would steer clear of anything overtly illegal in building AI outcomes. Certainly, legislative controls such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are very clear about what constitutes an unlawful use of consumer data. Companies need only adhere to such policies to avoid potential litigation or censure, right?

Not necessarily, because human biases and prejudices can find their way into any AI-informed solution without detection. Throughout the lifecycle of a given AI solution, from data collection to ongoing maintenance, subtle but hugely impactful forms of partiality can creep in, thereafter altering the decisions made by both humans and automated AI routines.

In most cases, biased or unfair AI algorithms go unnoticed. Only those outliers that are blatantly skewed garner the public's attention, as was the case last October when Amazon discovered that its new talent recruiting algorithm quite literally hated women. Despite the company's leadership role in developing AI technologies, Amazon fell prey to a common, data-derived bias: the data set used to train its recruiting model was itself skewed toward hiring men over women.
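The mechanism behind this kind of data-derived bias is easy to demonstrate. The following sketch (entirely hypothetical data, and a deliberately naive stand-in for a real model) shows how a model trained on historically skewed hiring decisions simply reproduces that skew as a rule:

```python
from collections import Counter

# Hypothetical historical outcomes: (group, hired).
# Men in this data set were hired far more often than women.
history = [("m", True)] * 80 + [("m", False)] * 20 + \
          [("f", True)] * 10 + [("f", False)] * 40

def train(data):
    """'Learn' the majority outcome per group -- a toy stand-in for a model."""
    counts = {}
    for group, hired in data:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'m': True, 'f': False} -- the bias in the data becomes the rule
```

No one programmed the model to disfavor women; the partiality arrived silently with the training data, which is precisely why it so often goes undetected.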

Unfortunately, there is no software or best practice currently available that can readily identify or root out these potentially costly threats. Still, last September, IBM attempted to do just that, launching AI Fairness 360, an open source toolkit containing 70 fairness metrics and ten bias mitigation algorithms. This toolkit is in no way a safeguard or systemic remedy, but it makes for an excellent start in identifying the most basic and easily detected problems, such as biases hidden in training data due either to prejudice in labeling or to under- or over-sampling of advantaged or disadvantaged groups.
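To make the idea of a fairness metric concrete, here is a minimal standalone sketch of two widely used measures of the kind bundled in toolkits like AI Fairness 360 (this is illustrative code with hypothetical data, not the AIF360 API itself):

```python
def selection_rate(outcomes):
    """Fraction of positive (e.g. 'advance to interview') decisions in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(privileged, unprivileged):
    """rate(unprivileged) - rate(privileged); 0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Ratio of selection rates; values below ~0.8 are a common red flag."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical screening decisions (1 = advanced to interview).
men   = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]   # 80% selected
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 40% selected

print(statistical_parity_difference(men, women))  # ~ -0.4
print(disparate_impact(men, women))               # 0.5, well under 0.8
```

Metrics like these can flag that a model's outputs differ sharply across groups, but, as the article notes, they cannot by themselves explain why or repair the underlying data.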

Other firms such as Alegion are doing the same, but coupling automated routines with high-touch consultative efforts that completely offload AI model training and data preparation tasks. This approach will find a welcome home with customers that do not have a significant degree of data science expertise. But as illustrated so expertly by Amazon, expertise alone isn’t enough when it comes to the psychology and sociology of bias or prejudice.

This human factor is the key, especially for Salesforce, which has sought to make AI a self-service capability across its sizable customer base. The more readily accessible AI becomes, the greater the legal and financial exposure for both Salesforce and its customers — hence the company's not-too-subtle reminder that improper use of AI can lead to a breach of its terms of use policy.

According to Salesforce's new Trailhead module, the best way to combat bias and prejudice is through the healthy application of human diversity. By building diverse teams and by translating values into processes, Salesforce believes its customers can at least create an atmosphere of impartiality and objectivity.

Given Salesforce's past work to make AI more accessible to its customer base (supporting AI modeling for small data sets, for example), we anticipate that the company will do far more than render advice on this front. In the meantime, its none-too-subtle call for the creation of an ethical groundwork well before the creation of any AI algorithm must be taken seriously by any company hoping to reap the rewards of AI.


