
Summary Bullets:
- What do Facebook’s ‘10-Year Challenge,’ Domino’s ‘Points for Pies’ app, and the early detection of diabetic retinopathy all have in common? They show just how hard it is to separate the peril of AI from its promise.
- More importantly, however, they illuminate the need for an enforceable code of ethics that includes all ecosystem participants.
Technology providers have long trumpeted the importance of responsible artificial intelligence (AI). We began writing about this issue back in 2016, when Microsoft CEO Satya Nadella suggested that developers stop focusing on good versus evil AI and instead concentrate on the “values instilled in the people and institutions creating this technology.”
Ethical AI has come a long way since 2016. Unfortunately, much of that progress has been driven by a string of breaches of public trust, corporate responsibility, and personal privacy.
Social media giant Facebook has certainly played a leading role in illustrating the risks and responsibilities that surround the use of AI, even in supporting the most mundane of tasks. Was the company’s recent ‘10-Year Challenge’ simply a fun Facebook meme encouraging people to post pictures of themselves then and now? Or was the challenge a deliberate effort by the vendor to gather facial recognition data for use by an as-yet-undisclosed Facebook partner? Facebook flatly denies having started this viral trend or benefiting from the participation of its users.
Sadly, Facebook is not alone here. Google, IBM, and even Microsoft have come under similar scrutiny. Many of these AI providers are actively pushing back against any appearance of AI-induced evil. For instance, in mid-2018, Microsoft revealed that it had turned down a number of potential ecosystem deals that might have led to unethical use of its AI technologies.
These efforts, however, only point a finger at the bigger problem. We can only expect so much from those who create AI technologies. We cannot bank on these firms to control their partner ecosystems in the same way Google and Apple attempt to police their mobile app ecosystems, looking out for overt and covert malware. Even when the measure of ‘evil’ is cut and dried, as with malware, it is nearly impossible for a single gatekeeper to stop every barbarian waiting patiently at the gate.
Stated bluntly, we can’t leave ethics to the creators alone. To prevent the misuse of AI, every creator, participant, and consumer in a given AI use case would need to enter into an enforceable mutual agreement that outlines the following (as a starting point):
- Scope of participation: A list of the roles and responsibilities of all participants.
- Disclosure of interests: What does the creator stand to gain; what about the participant?
- Expectations of confidentiality: How will user data be anonymized; what is the chain of custody for that data?
- Definition of outcomes: Full disclosure of how an AI-fed decision has been reached.
That goes for a government-sponsored program to anonymize patient retina scans in hopes of identifying macular degeneration before symptoms appear. And it applies equally to a vendor seeking to gamify food photography, as with Domino’s recent AI-driven ‘Points for Pies’ pizza-spotting app. In either case, it’s up to the creator, participants, and consumers to jointly establish a circle of mutual trust that’s specific to the task at hand.
Unfortunately, that’s a pipe dream, at least for now. Establishing an agreement that’s ethical, transparent, legal, and enforceable for each and every pizza-spotting app is a long, long way off. Fortunately, within privacy-sensitive industries like healthcare, signposts are emerging that point toward this type of trust.
In the UK, the National Health Service (NHS), which maintains the country’s largest repository of healthcare data, recently introduced a code of conduct for data-driven health and care technology. This ten-point guideline defines the behaviors the NHS expects from those developing and using data- and AI-driven technologies. It sets a clear standard for any vendor wishing to access and make use of the NHS’s sizable patient database.
Certainly, the NHS has a head start over other vertical markets and use cases, thanks to earlier data privacy efforts like the EU’s General Data Protection Regulation (GDPR). Even so, its code of conduct stands as an exemplar and a seemingly achievable route to AI accountability for creators, participants, and consumers. To protect the user, first protect the user’s data.