
Letting the machines take control

The decisions being made by AI algorithms and machine learning are having an increasing impact on our lives, our jobs and our businesses. AI promises exciting possibilities for a wide range of industries, including healthcare, financial services, utilities and entertainment. It will help us to design the products and services we will enjoy in the future. But it has also raised a raft of ethical issues. What role should machines have in making decisions that affect our lives?

Advancements in AI and associated digital technologies are occurring at a rapid rate. Enterprises are making significant investments in deploying AI technology to scale operations while improving productivity and reducing costs. According to IDC, in 2018 the AI market reached $24B in revenue, with the compound annual growth rate (CAGR) for 2017-2022 predicted to reach 37.3%.

Improving customer experiences by strengthening sales and marketing with greater insights is one of the primary catalysts driving AI and machine learning adoption today. But the potential impact on customers, employees and society as a whole is far more significant and wide-reaching.

A cause for concern

The rate of technical advancement is outstripping the ability of our ethical and regulatory frameworks to keep up. Rather than being a force for good, AI may be used to exploit consumers and the public by dominant, unscrupulous or careless players in the market. Henry Dobson from the Institute of Technological Ethics comments, “AI and other forms of modern technology are changing very rapidly, representing a very new kind of change which we’ve never had to deal with before”.

“A good example is a recent report from the ACCC about the dominance of Google and Facebook in the Australian media and advertising industries and the impact on consumers. The report raised some significant concerns over how these two companies are fundamentally changing these industries. Without government regulation to protect other competitors and consumers by creating a level playing field, Facebook and Google will have almost monopolistic control.”

Another significant area of concern, according to Russell Ives (managing director, Accenture Operations, Australia & New Zealand), is transparency. Ives says, “With complex AI and machine learning tackling ever more important decisions at ever increasing speeds, identifying the causes or reasons behind those decisions can be challenging. The ability to explain those decisions is essential for accountability and transparency.”

“Who should bear the risk and responsibility for the wrong decisions made by an AI algorithm? Why was a specific course of action taken? Without transparency, the deployment of these systems will breach the trust of consumers and the general public”, says Ives. General principles and guidelines need to be established to define where the liability lies when the machine makes a mistake.
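Neither Ives nor the article names any specific tooling, but a minimal sketch can show what “explaining a decision” looks like in practice. Assuming a simple linear model and entirely made-up loan-approval data, one common transparency technique is to decompose a single prediction into per-feature contributions:

```python
# Illustrative only: the model, feature names and data are assumptions,
# not drawn from the article. For a linear model, a prediction can be
# decomposed into per-feature contributions (coefficient * value),
# giving a human-readable "why" for each decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan-approval features.
feature_names = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # each feature's pull on the decision
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.3f}")
print("decision:", "approve" if model.predict([applicant])[0] else "decline")
```

Complex models rarely decompose this cleanly, which is precisely the transparency problem Ives describes: the harder a model is to explain, the harder it is to attribute accountability.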

The job apocalypse

A major fear is the impact AI and machine learning will have on jobs. A two-year study from McKinsey Global Institute predicts that by 2030, intelligent agents and robots could replace as much as 30 percent of the world’s current human labour. Ives comments, “There’s the concept that because AI is becoming very, very clever we need a reduced workforce. AI and the decisions it makes will have a significant impact on business processes which in turn will require changes to the workforce.”

The impact on human labour may not be totally negative. As with any major stage of the industrial revolution, many jobs will disappear, but many new ones may emerge. We need to envision what work will look like in the future, prepare our workforces accordingly and help the people who face job loss to transition into new roles. Ives comments, “We expect, and successful AI will require, growth in new roles around human/AI collaboration, humans training and monitoring AI, and AI enhancing human decision making.”

Bias and discrimination – data can lie

If we can prove something with data, we tend to believe that we’ve established a concrete fact. But, as Henry Dobson points out, data is not as objective or concrete as people assume. He says, “Data is imbued with social meaning and human complexity. Take gender for example. Traditionally, gender has been understood as a binary term, male and female. Gender as it is understood today, however, is far more fluid and is understood as a spectrum rather than a binary category.”

“If a technology company is using a dataset in which the gender attribute is binary (male and female), how does this affect the people who do not identify with either gender type?”

Machine learning systems can also entrench existing bias in decision-making systems. Care must be taken to ensure that AI evolves to be non-discriminatory. A case highlighting the potential for AI to reinforce bias is the COMPAS algorithm, used in the USA to predict recidivism rates. Dobson relates, “The COMPAS algorithm and the data it was fed generated predictions that were biased against African-Americans. So much so that highly seasoned Caucasian criminals received very low scores when in actual fact they were far more likely to reoffend upon release.”
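The article does not describe how such disparities are detected, but one standard audit is to compare false positive rates across demographic groups, the kind of gap ProPublica’s analysis of COMPAS scores reported. A sketch with entirely synthetic data and hypothetical group labels:

```python
# Illustrative sketch with synthetic data: compare false positive rates
# across groups. A "false positive" here is someone flagged high risk
# who did not in fact reoffend.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n)        # hypothetical demographic label
reoffended = rng.random(n) < 0.3              # synthetic ground truth
# A deliberately biased "risk score": group B is flagged more often.
flagged_high_risk = rng.random(n) < np.where(group == "B", 0.45, 0.25)

for g in ("A", "B"):
    mask = (group == g) & ~reoffended         # people who did NOT reoffend
    fpr = flagged_high_risk[mask].mean()      # share wrongly flagged high risk
    print(f"group {g}: false positive rate = {fpr:.2%}")
```

A large gap between the two rates is the warning sign an audit should surface before a system like this is deployed.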

Privacy and data security

Consumer concerns over privacy and the security of their personal data have received significant press coverage in recent years. “The reality, however”, says Dobson, “is that we do not own our data in the same way that we own a house or car. Currently, our data is freely available within the confines of us accepting the terms and conditions pertaining to specific applications”.

“The benefit of this is that technology companies have a lot of data to play with, which in turn enables them to develop new products and innovations that provide deep insights, analytics and other forms of assistance which can be very useful and thus highly valuable”.

However, AI systems can use this data to monitor, track and profile people, predict their behaviour and attract their attention. Combined with IoT and facial recognition technology, AI has the potential to cast a wide net of surveillance thereby raising significant concerns over privacy.

Do the learnings, analyses and insights from AI infringe data protection and privacy rights? And who has what rights to access the output of these systems?

Defining privacy is a tricky problem, making it hard to set hard-and-fast rules to protect it. Dobson explains, “The primary issue regarding privacy concerns is how the term ‘privacy’ is defined in the context of modern technology. And the notion of privacy rights is a very complex one.”

While protecting the privacy of consumers is important, we don’t want protections to be so strict that they limit the potential of AI to deliver beneficial innovations. Dobson believes a balance needs to be struck as to how much privacy is the right amount. He says, “Data privacy is a double-edged sword. On one side, data privacy and protection, where everyone had total control of their data and how it was accessed, could greatly limit the usefulness of many new technologies. On the other side, free and limitless access to data could enable technologies to innovate and deliver incredibly useful applications in our day-to-day lives.”

Be a force for good

For companies using AI technology there are a number of areas they need to address to ensure they don’t breach the trust of consumers. First and foremost are the issues surrounding data. Russell Ives says, “To do its job effectively AI needs access to data, and lots of it. Ensure the datasets the machine is using to make inferences and decisions are accurate. Wrong data will lead to bad decisions. Also make sure you have permission to collect and use that data. Be transparent and open about how you use the data you’re collecting.”
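Ives names no specific tooling, but his advice translates naturally into basic checks run before any training happens. The sketch below is illustrative only; the column names (such as “consent”) are hypothetical assumptions, not drawn from the article:

```python
# A toy sketch of the pre-training checks Ives' advice implies; the
# column names ("consent", etc.) are hypothetical, not from the article.
import pandas as pd

def audit_dataset(df: pd.DataFrame) -> list[str]:
    """Return a list of problems that should block training."""
    problems = []
    missing = df.isna().mean()
    for col, frac in missing.items():
        if frac > 0.05:                       # more than 5% missing values
            problems.append(f"{col}: {frac:.0%} missing")
    if "consent" in df.columns and not df["consent"].all():
        problems.append("rows present without recorded consent")
    if df.duplicated().any():
        problems.append("duplicate rows may skew the model")
    return problems

df = pd.DataFrame({"age": [34, None, 52], "consent": [True, True, False]})
print(audit_dataset(df) or "dataset passed basic checks")
```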

AI systems should be developed from the ground up with a strong awareness of the ethical implications and issues involved. Ives states, “Responsible or ethical AI starts at the design process. How do I want to use AI, and what goals do I want to achieve? What are the implications of using AI from a consumer, workforce and organisational perspective? Do they align with the organisation’s core values and principles?”

Henry Dobson reinforces the importance of living by your values and principles, “Don’t compromise on your values for short term gains. Consumer loyalty can shift very quickly, so it’s important that you live your values rather than whitewash your business with insincere statements that leverage buzzwords like ‘social impact’ and ‘sustainability’”.

He encourages businesses to become a force for good. “The business of business is no longer business. Rather, the business of business is creating positive change in the world. Understand that your business impacts the lives of other people, affects their moods and emotions, and that your technology can empower them to live richer, more fulfilling lives.”

Henry Dobson and Russell Ives are on the panel for the next CX Forum: AI & Ethics organised by InterAct Melbourne.

Mark Atterby

Mark Atterby has 18 years of media, publishing and content marketing experience.