How businesses can use artificial intelligence for good

The View From Taft
March 6, 2023 

The hottest topic nowadays is the artificial intelligence (AI) chatbot called ChatGPT. Since November, the company OpenAI has allowed the public to converse directly with the AI tool, which has been impressing users with its human-like answers to any question posed to it. It appears that we are now seeing truly intelligent AI that can help us in ways we previously only imagined.

Is this really the case? I would say: “Not quite.” We must fully understand the proper use of this latest AI tool, as well as the risks that come with it, before embracing it.

In the first place, the problems caused by business use of earlier-generation AI algorithms have yet to be solved. Some examples:

AI has been deployed in ways which were deceptive or which gave it too much credit for “intelligence” without sufficient regard for the risks involved for users or the public.

In the second place, while I’m impressed with the seemingly knowledgeable outputs of ChatGPT, I usually discover factual errors when I check its answers for accuracy. For example, it repeatedly gave me the wrong way to format a journal article and attributed to me articles that I never wrote. AI developers call these errors “hallucinations.”

And herein lies the problem: a large language model does not really “know” or “understand” anything, even when it appears to do so. Computer scientists “trained” the model to talk like a person by feeding it enormous amounts of human text data from various Internet and digital sources. Computational formulas (algorithms) in the model calculated patterns and correlations based on the text data until it “learned” to produce human-like answers to questions asked of it. Thus, a language model is like a computerized parrot that mimics human speech by observing patterns in how people talk about various topics.

Remember that a language model is not intelligent even when it sounds like it is. It has no sense of the meaning, real-life context, underlying reasoning, or intent behind what it is saying. Worse, its output is affected by the errors and biases contained in the data fed into it; as they say: garbage in, garbage out. Hence, language models, and AI in general, cannot be trusted by themselves for important information needs or for making critical decisions.

Clearly, government needs to regulate AI for proper business use. Meanwhile, businesses can maximize the benefits (do good) and avoid the sins (do no harm) of AI use by following four basic principles.

To do good, businesses must:

To do no harm:

Television host John Oliver summarized my main point very well: “The problem with AI right now isn’t that it’s smart. It’s that it’s stupid in ways that we can’t always predict.” If we remember this simple fact, we can be critical users of AI.

Benito L. Teehankee is the Jose E. Cuisia professor of business ethics at De La Salle University.