AI in the Boardroom: What do Board Members Need to Know?

According to Gartner's 2019 CIO survey, the number of enterprises employing artificial intelligence grew 270% over the previous four years, and tripled in the past year alone. While it is widely acknowledged that the tech industry faces a considerable shortage of cybersecurity talent, the question remains: what role does the board play in this ever-evolving landscape? And what exactly should boards be concerned about?

AI 101

A quick refresher for those still unsure what the concept of artificial intelligence, or AI, involves: it is essentially “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.

Broken down into everyday instances in the workplace, AI analytics can enable managers to measure employee performance, help detect cyber threats, and allow companies to make use of their data without needing a data scientist on site.

Board members, who already navigate a plethora of everyday tasks, are thus now under pressure to consider their approach to AI. As the Harvard Business Review examines in a recent article, “If you’re moving too slowly, a competitor could use AI to put you out of business. But if you move too quickly, you risk taking an approach the company doesn’t truly know how to manage”.

5 QUESTIONS BOARDS NEED TO BE ASKING THEMSELVES RIGHT NOW

According to PwC, which has carefully considered how boards can better examine the risks and potential benefits of AI, there are five key questions that members should be asking themselves.

1: How could AI transform the company's products or services, and which aspects of the business could benefit from increased automation or machine learning?

2: Could AI be adapted to, or used within, the emerging technologies that are under development?

3: Do we have the resources to support the use of AI? Do we have employees with the right skill sets and talent to make the employment of this technology a success?

4: How will we gain the trust of, and reassure, all stakeholders if we use AI?

5: Have we thought about how we will use data collected by AI? Have we considered cyber risks and data privacy issues?

While these questions offer a deep dive into both the challenges and opportunities of AI, what concise arguments can boards use to weigh its benefits against its drawbacks? We have selected the top two arguments on each side.

THE BENEFITS OF AI INVESTMENT FOR BOARD MEMBERS

PRODUCTIVITY

According to Accenture, by 2035 AI ‘has the potential to boost rates of profitability by an average of 38%’ and could lead to an economic boost of $14 trillion (USD). Given steady declines in the profitability of certain industries, directors should examine whether AI is the answer to reinvigorating productivity in their company.

INNOVATION

Another reason why board members are keen to invest in AI technology is a fear of disruption from fierce digital competitors. According to the OECD, an estimated $50 billion (USD) was invested in AI start-ups between 2011 and 2018. Data also suggests that a more streamlined and coherent data strategy, enabled by artificial intelligence, can deliver significant operational and financial gains. Forrester, a data research consultancy, found that ‘just a 10% increase in data accessibility will result in more than $65 million additional net income for a typical Fortune 1000 company’.

THE NEGATIVE ASPECTS OF AI FOR BOARD MEMBERS

ETHICS

The World Economic Forum has raised nine key ethical issues in artificial intelligence. It is critical that the board examine these questions carefully.

1: Unemployment as a direct effect of AI

2: Income inequality as a direct effect of machines

3: Humanity - the changes to our behavior and interaction due to machines

4: Artificial stupidity and mistakes

5: Racist robots and AI bias

6: Security

7: Unintended consequences as a direct effect of AI

8: Staying in control of a complex intelligent system

9: Defining the humane treatment of AI

SECURITY

Artificial Intelligence spells a potential nightmare for data security. Cybersecurity professionals are sounding early alarms on AI applications such as self-driving cars and drones, which they argue can be hacked and weaponized.

Nicole Eagan, CEO of DarkTrace, a cybersecurity firm, warns businesses about the risks of attackers using AI: “Once that switch is flipped on, there’s going to be no turning back, so we are very concerned about the use of AI by the attackers in many ways because they could try to use AI to blend into the background of these networks,” she said.

Boards provide a necessary check and balance on business operations, and evaluating the cybersecurity risks of any technology implementation should be part of any review process. New technology has great potential, provided it is introduced alongside cybersecurity and data security measures that counteract risks to the organization. The security of an application or software solution is of paramount importance when it is being considered for use across the organization and/or key business units. Solutions that handle sensitive data, such as software meant for legal departments or board use, may require extra levels of security. Read more in our two-part blog series, ‘How to Strengthen Your Board’s Cyber Security Posture’ (part one and part two).

DiliTrust Exec is board portal management software that enables fast communication between board and committee members, extra secure transmission of company documents, and an intuitive interface that anyone can use on a desktop, smartphone or tablet from anywhere in the world.