Summary Bullets:
• The enterprise AI community has embraced the trend toward more ‘Explainable AI,’ which enables users to understand the degree to which various factors impact a model’s output.
• ChatGPT’s inability to provide the sources of its information runs counter to organizations’ desire to embrace ‘Responsible AI’ as a way to promote greater adoption of the technology.
ChatGPT is impressive. The app can research, write, and even weave a narrative, performing tasks we used to think were so uniquely human that a computer couldn’t do them well. It performs them so well, in fact, that it is often difficult, if not impossible, to determine whether the output was prepared by a human or an algorithm, or whether it’s fact or fiction. And therein lies the problem.
ChatGPT is considered ‘Generative AI’: AI technology that can create content such as text, images, videos, art, music, computer code, and more, on its own. It is often difficult to discern whether the end product of Generative AI is ‘real’ or ‘fake.’ ChatGPT presents its results so authoritatively that readers assume the content is accurate, yet the sources of that information can’t be verified, and the output may simply be wrong. We all know we can’t believe everything we read or see on the internet. But given the potential impact ChatGPT can have on our lives, and how widespread its use is likely to become, the ease with which it could spread misinformation is alarming.
One can easily imagine how AI that creates ‘deep fakes’ or imitates anyone’s voice and speaking style could be put to nefarious use. Similarly, ChatGPT could unintentionally disseminate false information or propagate bias. Unfortunately, it’s difficult to draw boundaries around the use of AI technology. Law enforcement, civil rights organizations, the public policy community, and governmental organizations are still struggling with another controversial AI application: facial recognition. Although it has been available for several years, there are no consistent parameters for appropriate use of facial recognition technology. The unfortunate reality is that the regulatory environment often lags far behind technological innovations.
Generally, the enterprise AI community has embraced the trend toward more ‘Explainable AI,’ believing it will promote greater confidence in results, especially among line-of-business users. AI platform vendors are offering tools to create model cards that explain the factors that influence an algorithm’s results and even quantify the impact of each factor on the output. This allows data scientists to tweak the inputs as necessary (for example, to remove unwanted bias), as the sketch below illustrates.
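To make this factor-level quantification concrete, here is a minimal sketch using the open-source SHAP library with a scikit-learn model. The loan-scoring scenario, the feature names, and the synthetic data are purely illustrative assumptions, not any particular vendor’s tooling.

```python
# A minimal sketch of quantifying each factor's impact on a model's
# output, in the spirit of the explainability tooling described above.
# The "loan score" scenario and feature names are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

feature_names = ["income", "debt_ratio", "age"]  # illustrative factors

# Synthetic training data: 500 applicants, 3 factors, a made-up score.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features, so a
# data scientist can see, and rethink, what actually drives the output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Mean absolute SHAP value per feature = that factor's overall influence.
for name, impact in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {impact:.3f}")
```

If, say, `age` turned out to carry heavy weight, the team could remove or rework that input before the model reaches production; that is the kind of transparency the enterprise AI community now expects.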
However, ChatGPT doesn’t embrace this trend of greater transparency in AI algorithms. And given what will likely be widespread use of the application among students and within the work environment, this is cause for concern. For more ‘Responsible AI,’ models need to be explainable, which means that ChatGPT, as well as other Generative AI applications, will need to identify the sources of their information.