AI is the talk of the tech world and beyond. A veritable flood of AI-related news, touting everything from technology that can pinpoint whether a painting is an old master to driverless trucks being integrated into the U.S. Army, is fuelling a hype onslaught. In the midst of this fever pitch, however, one thing escaping our attention is the ways AI can be used maliciously, particularly in relation to data security. So how can corporate governance grapple with the future and stem the damage of exploitative AI technologies?
How will AI affect digital data security?
In early 2018, 26 security experts from the University of Oxford, the University of Cambridge and the Center for a New American Security (among many others) came together to assess the present and potential malicious uses of AI. Their 100-page report is a realistic look at the future and at the task ahead for cyber security professionals and corporate governance. Until now, the focus on AI has been overwhelmingly positive, as countries and companies alike gear up to spend billions in the race for AI dominance (China is currently battling for supremacy). In this report, however, the researchers not only shine a light on the malicious possibilities of AI but also ask uncomfortable questions about governance, which extend to corporate governance too. Their analysis notes soberly that, ‘AI will disrupt the trade-off between scale and efficiency and allow large-scale, finely targeted and highly-efficient attacks’.
This ground-breaking research also predicts innovative new ways that AI could reshape cyber-attacks and undermine data security in the future: automated hacking, speech synthesis used to impersonate targets, finely-targeted spam emails using information scraped from social media, and exploitation of the vulnerabilities of AI systems themselves (e.g. through adversarial examples and data poisoning).
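To make "data poisoning" concrete: an attacker who can inject mislabelled records into a system's training data can quietly shift what the model learns. The toy sketch below (a hypothetical example, not drawn from the report) uses a simple nearest-centroid classifier to show how a handful of poisoned points drags a class boundary until the model misclassifies.

```python
# Toy illustration of data poisoning against a nearest-centroid
# classifier. All data points here are invented for demonstration.

def centroid(points):
    return sum(points) / len(points)

def predict(data, x):
    """Classify x by whichever class centroid it is closer to.
    `data` is a list of (value, label) pairs with labels 0 or 1."""
    c0 = centroid([v for v, y in data if y == 0])
    c1 = centroid([v for v, y in data if y == 1])
    return 0 if abs(x - c0) < abs(x - c1) else 1

# Clean training set: class 0 clusters near 1-2, class 1 near 9-10.
clean = [(1.0, 0), (2.0, 0), (9.0, 1), (10.0, 1)]

# Poison: the attacker injects points deep in class 1 territory
# but labelled as class 0, dragging the class-0 centroid rightwards.
poison = [(9.0, 0), (9.0, 0), (9.0, 0)]

query = 7.5
print(predict(clean, query))           # 1 — correctly nearer class 1
print(predict(clean + poison, query))  # 0 — poisoned model misclassifies
```

Just three bad records out of seven are enough to flip the prediction here, which is why the report's authors flag the training pipeline itself as an attack surface.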
Challenges on the horizon
AI will make data security more difficult. That is a cold, hard fact. The AI Now Institute at New York University, which is dedicated to understanding the social implications of artificial intelligence, outlined in its ‘AI Now 2017 Report’ how corporate governance can better address this and other issues.
Here are our key takeaways:
1: Increase the number of stakeholders actively involved with AI
Corporate governance should identify and delegate a number of employees from across the organisation to work specifically on preventing and mitigating the risks of AI.
2: Law firms should no longer use “black box” AI and algorithmic systems
The AI Now Institute has warned that the use of such systems “raises serious due process concerns”, and so legal departments should subject them to public audits, tests, review and accountability standards.
3: AI software should be rigorously reviewed and trialled before use
AI Now counsels that standards should be developed from the get-go to better understand and monitor issues when adopting new software.
4: Be wary of HR and AI
More research and policymaking is needed on the use of AI systems in HR, particularly in the hiring process. Researchers have cautioned about the potential impact on labour rights and practices, for example through selection bias.
5: Be selective with your software solutions
One of the best ways corporate governance can counter incoming AI cyber-attacks is to rely on secure software products that stay one step ahead. DiliTrust supplies six software solutions to our clients, focused not only on delivering a quality product but also a highly secure one. Our solutions are consistently subject to three types of rigorous security testing and constant updates. That way, our clients can rely on us to always prioritise their data security.
Contact our team today to find out more information.