Artificial Intelligence Ethics

Jane Lo

Centuries before Turing’s question “Can machines think?”, philosophical speculation about machine intelligence included processing knowledge (Diderot: “If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation”) or holding a mode of consciousness and the same reasoning faculties as humans (Descartes: “I think, therefore I am”).

The term “Artificial Intelligence” (AI) was coined in 1956 by John McCarthy at the Dartmouth Conference, widely recognized as the first AI conference.

In the decades since, AI languished in the innovation race, but it is now finally catching up. From facial recognition to chatbots to driverless cars, it is a key player in today’s digital world. But this journey to “make machines intelligent” is not without controversy. Examples include Tesla Motors’ 2016 self-driving fatality, the recent Uber autonomous car that killed a pedestrian, and Google’s Project Maven to identify military targets from video footage. These incidents shifted the discussion of “AI Ethics” from pure philosophical contemplation to one of indisputable relevance.

In Asia, the growing importance of “AI Ethics” can be seen in the survey results released at EmTech Asia (MIT Technology Review Asia’s AI agenda report) and at Accenture’s Ethical AI Media Roundtable.

  • MIT Technology Review Asia’s survey: 37% believed that Asia will lead in the development and deployment of AI technology in the next decade (followed by 36% who believe Europe will lead).
  • Accenture’s survey of 330 global business leaders including 25 from Singapore: 67% in Singapore said they have an ethics committee to review the use of AI; 43% review their AI output at least weekly; 30% have a process in place for augmenting or overriding questionable results.

We heard more at EmTech Asia (22-23 Jan 2019), Accenture’s Ethical AI Media Roundtable (16 Jan 2019, hosted by Mr Joon Seong Lee, Managing Director and Accenture Applied Intelligence ASEAN Lead), the ADECS Asia Defence Expo and Conference Series (28-29 Jan 2019), and SGInnovate’s ‘In Conversation: AI Ethics’ (12 Dec 2018).


“From left to right: Steve Leonard, Founding CEO, SGInnovate (moderator); Richard Koh, Chief Technology Officer, Microsoft Singapore; Yeong Zee Kin, Deputy Commissioner, Personal Data Protection Commission (PDPC) & Assistant Chief Executive (Data Innovation and Protection), Infocomm Media Development Authority; Dr David Hardoon, Chief Data Officer, Data Analytics Group, Monetary Authority of Singapore”

“AI Ethics” in Singapore

Recent governance and policy developments in Singapore included:

  • June 2018: the Singapore Advisory Council was formed to advise the government on the ethical use of AI and data (its 11 members include representatives from Google, Alibaba and Microsoft, leaders from local companies, and advocates of social and consumer interests).
  • Nov 2018: the Monetary Authority of Singapore introduced the “FEAT” (fairness, ethics, accountability and transparency) principles to promote the responsible use of AI and data analytics and to strengthen internal governance around data management and use.
  • Jan 2019: Singapore released a framework on how AI can be ethically and responsibly used. Released at the World Economic Forum (WEF), it is a “living document” intended to evolve along with the fast-paced changes in a digital economy.

“Shaun Vickers, EW Development Manager, MASS, UK (“The Electromagnetic Environment/Domain is Changing – How Do You Know?”) at ADECS 2019. Photo Credit: ADECS 2019”

But what is “AI Ethics”?

In the framework released by Singapore, AI is “a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning.  AI technologies rely on AI algorithms to generate models.  The most appropriate model(s) is/are selected and deployed in a production system”.  In this context, the framework is underpinned by two principles: that decisions made by or with the help of AI are explainable, transparent and fair to consumers, and that AI solutions are human-centric.
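The pipeline the framework describes – algorithms generate models, the most appropriate model is selected, and that model is deployed – can be pictured with a minimal sketch. The library, dataset and scoring choice below (scikit-learn, cross-validated accuracy) are illustrative assumptions; the framework itself names no tools.

```python
# Minimal sketch of "generate models, select the most appropriate, deploy"
# using scikit-learn (an illustrative choice; the framework names no tools).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# "AI algorithms generate models": train several candidate models.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
}

# "The most appropriate model is selected": here, by cross-validated accuracy.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)

# "...and deployed in a production system": fit on all data, then serve.
production_model = candidates[best_name].fit(X, y)
print(f"Selected '{best_name}' with CV accuracy {scores[best_name]:.3f}")
```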

A private-sector view was presented at the Accenture Roundtable. Dr Rumman Chowdhury (Managing Director & Global Lead for Responsible AI, Accenture Applied Intelligence) referred to “Responsible AI” as “the practice of using AI with good intention to empower employees and businesses, and fairly impact customers and society – allowing companies to engender trust and scale AI with confidence”.  In practical terms, this means, for example, using an “AI fairness tool” to detect and eliminate biases, such as those related to gender or race, that may influence AI results.
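The article does not describe how Accenture’s tool works internally; purely as an illustration of one common type of bias check (demographic parity across a protected attribute such as gender), here is a minimal sketch with hypothetical data:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Compare positive-outcome rates across groups (e.g. gender).

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels aligned with predictions
    Returns per-group rates and the largest gap between them.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical example: loan approvals broken down by a gender attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]
rates, gap = demographic_parity_gap(preds, gender)
print(rates, gap)  # {'F': 0.75, 'M': 0.25} 0.5 -> a large gap would be flagged for review
```

A real fairness tool would go further (statistical significance, multiple fairness definitions, mitigation), but the core idea is the same: make the disparity measurable so it can be reviewed.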

Mr Koh (Chief Technology Officer, Microsoft Singapore), speaking at the SGInnovate event, distinguished between “AI ethics”, which “is about making sure there are no biases when building the algorithms”, and “ethical AI”, which “means that we expect it to be able to make moral decisions, which I don’t think an algorithm is capable of.”

“AI Ethics” is a complex subject that raises 4 frequently asked questions.


“Poppy Crum, Chief Scientist, Dolby Laboratories, at EmTech Asia (22-23 January 2019, MBS Singapore). Photo Credit: EmTech Asia”

Will AI take our jobs?

“You will work. You will build… You will serve them… Robots of the world…” – Radius in Karel Čapek’s 1920 science-fiction play “Rossumovi Univerzální Roboti” (R.U.R.), which coined the word ‘robot’ for a new working class of automatons, derived from the Slavonic word rabota, meaning servitude or forced labor.

Since R.U.R. was published, AI has seen optimistic predictions and setbacks known as “AI winters” before emerging with today’s impressive gains.  Besides chatbots and driverless cars, we have robo-advisors, cobots, and adoption in other sectors.

Recently, China’s state news agency Xinhua introduced AI anchors capable of reporting “24 hours a day, 365 days a year”.  In the defense industry, Shaun Vickers (EW Development Manager, MASS, UK) said at ADECS 2019 that “from an electronic warfare perspective we could use machine learning to deal with some of the information challenges that are difficult for humans to work through quickly enough; think of it as artificial intelligence helping humans to make quicker, better decisions” and also “to recognize a pattern of events, and then predict what should be coming next”.

The worry that these advances require less human touch is exacerbated by AI’s successes in chess and poker – it is indeed hard to escape the nagging suspicion that AI will replace us.

The end of Poker Face?

At MIT Technology Review’s EmTech Asia, Poppy Crum (Chief Scientist, Dolby Laboratories) said: “Devices will know more about us than we do”.

The petabytes of photos, messages, emails and videos that we exchange and store are commonly characterized by the 5 “V’s” – volume, veracity, variety, velocity and value. Digital data is key to speech and facial recognition and to sentiment analysis, whether for training or for drawing out key information.  But its use has also elevated privacy as a key consideration when adopting AI. For example, in 2017, Google announced that it had stopped scanning the emails of Gmail users to train AI to personalise adverts.

However, more concerning than the privacy of our digital information is the rise of empathy robots – machines that read our emotions from eye dilation, skin heat, or speech patterns to tailor marketing messages or teaching methods.  Are we also losing our right to keep our emotions private?  Will AI be able to create a picture of our psychology even if we seem composed to the naked eye? Is it the end of poker face?

Will AI go rogue?

“A robot may not injure a human being or, through inaction, allow a human being to come to harm” – Isaac Asimov’s Three Laws of Robotics, First Law

Popular science fiction often explores the possibility of coding human values into robots, “making” robots observe our values and respond accordingly. In the movie “Terminator”, the robot played by Arnold Schwarzenegger (the “evil T-800”) was reprogrammed, transforming from an assassin into a protector (the “good T-800”) in the sequel “Terminator 2”.

Where there are clear outcomes, programming AI to reflect our values requires understanding and mitigating data, model and algorithm biases.

But human values are diverse – culturally situated and contextual. Our decisions can also be inconsistent and irrational.  Frequently, there is ambiguity – and no “best” or “right” answer.

In a classic thought experiment, a railway trolley is barreling down towards a group of five people strapped onto the tracks.  We are standing some distance off next to a lever, faced with two choices: (1) pull the lever to divert the trolley onto a side track where one person is tied up, or (2) do nothing, and the trolley kills the five people on the main track. SGInnovate’s event (“In Conversation: AI Ethics”) presented a similar moral dilemma in an Asian context (choosing between a young child and an elderly person?).

“In such a lose-lose situation, I personally don’t know how even a human can make a so-called ‘best choice’,” said Mr Steve Leonard (SGInnovate Founding CEO), who moderated the panel. Mr Yeong (Deputy Commissioner, Personal Data Protection Commission) agreed: “When we obtained our driver’s license, we were never asked to answer such a question. So why should we expect an AI to be able to answer it ‘correctly’?”

How do we code such a dilemma in a machine? What is the ‘best’ choice? How do we code the machine to make ethical decisions, as we do, under time pressure when there is no time to algorithmically optimize billions of outcomes?

Will AI control us?

“You are my creator, but I am your master; obey!” – the monster to Victor Frankenstein

With this declaration from the monster, the power shift from Frankenstein to the monster is complete.

We may accept, just as there is no 100% security, that there is no 100% control, and that as machines gain more autonomy, our control decreases.

In expecting AI to embody human traits, emotions and intentions and to react like us, we find ourselves in a constant battle to religiously check that we have coded these traits into the machine. However, is it possible to fit and then control every theoretical scenario, physical mechanism and component in the machine?  How do we control the unpredictability of sprawling networks in which each machine responds to its own algorithms?

Doomsday or Utopia?

Too many questions, not enough answers.

Will AI lead to the dystopian future painted by Science Fiction, or will it lead to a life of plenty, fun and leisure for humans?

For now, doomsday scenarios remain in the realm of Science Fiction, and we still retain significant “control” over AI.  We have not yet achieved “Strong AI” or “Super AI” that surpasses human intelligence.  The AI we have so far, “Weak AI”, operates within a pre-determined and pre-defined range.  For example, Apple Siri can answer “what is the weather today” but will probably give vague responses or URL links when asked “is global warming real?”
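To illustrate what “operating within a pre-determined and pre-defined range” looks like in practice – this is a toy sketch, not how Siri is actually built – a narrow assistant might only match queries against intents it was explicitly programmed for, and fall back to a vague response otherwise:

```python
# A toy "Weak AI" assistant: it only handles intents it was explicitly
# programmed for, and falls back to a vague reply for everything else.
# (Intent names and keyword rules are illustrative, not Siri's real design.)
INTENTS = {
    "weather": ["weather", "rain", "temperature"],
    "time":    ["time", "clock"],
}

def answer(query: str) -> str:
    words = query.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return f"[handled by the '{intent}' skill]"
    # Anything outside the pre-defined range gets a non-committal reply.
    return "Here are some web results that might help."

print(answer("what is the weather today"))  # handled by the 'weather' skill
print(answer("is global warming real?"))    # falls back to vague web results
```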

However, there are already real implications from today’s adoption of “Weak AI”.  If practical steps are not taken [1], we face the possibility that, as we delegate more and more tasks and decisions to machines, the ethics and values that sustain our societies may undergo subtle compromises that produce significant changes in our behavioural patterns over time.

[1] Using Singapore’s framework as guidance, robust oversight of the use and deployment of AI could be supported by a best-practice governance model with clear roles, responsibilities and internal controls.  Also critical are measures to enhance the transparency of algorithms, assess the degree of human involvement (a human-in-the-loop decision versus a human-out-of-the-loop decision made without human intervention), and establish data accountability practices, including minimizing data biases.
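One way to picture the human-in-the-loop versus human-out-of-the-loop distinction is a simple decision gate that escalates low-confidence or high-impact cases to a human reviewer. The threshold, labels and function below are illustrative assumptions, not part of Singapore’s framework:

```python
def decide(model_confidence: float, impact: str, threshold: float = 0.9) -> str:
    """Route a model decision either to automatic execution or to a human.

    model_confidence: the model's confidence in its own output (0..1)
    impact: "low" or "high" -- how consequential the decision is
    threshold: minimum confidence required for automation (a policy choice)
    """
    if impact == "high" or model_confidence < threshold:
        return "ESCALATE: send to a human reviewer before acting"
    return "AUTO: execute the model's decision and log it for audit"

print(decide(0.95, "low"))   # AUTO: routine, high-confidence case
print(decide(0.95, "high"))  # ESCALATE: consequential decisions stay human-in-the-loop
print(decide(0.60, "low"))   # ESCALATE: low confidence goes to a human
```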
