The ethical governance of artificial intelligence
Managing for Society
The Manila Times
January 2, 2018
Artificial intelligence (AI) is the use of computer systems to perform tasks normally done by humans. Our smartphones are packed with the most common AI applications.
Visual perception: When I point my smartphone at myself with my family for a picture, a set of computational rules (called an algorithm) identifies our faces by marking them with boxes and makes sure we are all in focus. The phone takes the picture when I raise my hand.
Speech recognition: When I wanted to buy sourdough to go with the clam chowder I was planning to cook for my family, I asked Google through the microphone “Where can I buy sourdough in Pasig?” My phone displayed a marked map plus a list of stores while advising me in a friendly British-sounding female voice: “Here are the listings where you can buy sourdough within 5 kilometers.”
Decision-making: When I enter my office address into Waze, it shows a recommended route for me to take and says “In 200 meters, turn right.”
I use these convenient AI applications without a second thought. But should I take such AI for granted? Or should I ask myself some questions: Does the camera focus equally well on all our faces? Is the list of sourdough stores complete or just those preferred somehow by Google’s algorithm? Is the route chosen by Waze really the best for me or based on obsolete data?
These may seem like much ado over personal uses of AI. However, businesses are using AI in more and more ways, from detecting accounting fraud and optimizing logistics routes to recommending hiring and investment decisions. Writers have enthusiastically hailed this trend as part of the “Fourth Industrial Revolution,” while others call for businesses to join the “digital transformation.”
From a critical viewpoint, AI challenges business leaders, now more than ever, to deeply consider the ethical implications of their decisions. Board directors may argue that the use of AI is an operational matter best left to management. Besides, they wouldn’t be technically competent to judge, right? This reasoning would be dangerously wrong. First, it is the duty of the board to ensure that everything management implements, including technologies, is in line with the core values of corporate governance, namely fairness, transparency and accountability. Second, leaving AI applications entirely to management is sheer intellectual laziness if AI is assumed harmless, and outright negligence of the duty of care if there is a real risk of harm.
Despite AI’s many benefits, its use can violate governance values and cause real harm to company stakeholders. Board directors need to ask management some fundamental questions:
Are we being fair? The UK-based newspaper The Telegraph reported recent research that showed that “programs designed to assess eligibility for insurance cover or bank loans are likely to discriminate against women and non-white applicants”.
Are we being accountable? In late 2016, Uber implemented tests of its driverless cars in San Francisco without state approval. The New York Times reported that the company’s driverless cars ran six red lights. The company claimed that the rules did not apply to it and later transferred its testing to Arizona.
Are we being transparent? Last week, Apple apologized for re-programming old iPhones to slow down to save battery life without informing users. Outraged users accused the company of pushing them to upgrade to new phones.
Were the boards involved providing proper direction and oversight through policies on ethical uses of AI? To adapt the advice of Peter Lee, vice president for research at Microsoft: Boards would do well “to not be blind, to not ignore those negative possibilities, but to understand them upfront and make the world aware of them and do what we can to mitigate their impacts.”
I don’t mind business use of AI. Let’s just make sure computers know that humans are the boss and that positive social values should prevail. This can only happen if prudent human beings are in charge. This has always been the role of good corporate governance – and it is now more imperative than ever in the era of artificial intelligence.
Dr. Ben Teehankee is full professor of management and organization at De La Salle University. Email: email@example.com