Big data, risk and next-gen machine learning
Only ten or fifteen years ago, artificial intelligence (AI) and machine learning (ML) existed strictly as subjects of advanced, research-based degrees at Ivy League universities. The field's early pioneers had access to uniquely powerful computers that were, at the time, the only machines capable of the highly iterative computations involved in cognitive programming.
Fast-forward to today, and machine learning is deeply embedded in our everyday lives. Even the most modest consumer hardware can now run highly advanced AI workloads.
One of the most potent use cases for machine learning in a business setting is fraud prevention and risk assessment. Cognitive algorithms need significant amounts of data to “self-learn” patterns of normal behavior. Many areas of financial activity, such as online banking, payment processing, and e-commerce, routinely process millions of transactions, and those volumes keep growing as the technology is adopted more widely, making these sectors natural early adopters of emerging technologies.
Amid many thousands of seemingly similar financial transactions, small frauds can easily go unnoticed by human operators, and “traditional” software checks examine only limited data sets. The cost of these small scams therefore adds up quickly at the scale of global commerce.
But advanced machine learning platforms can work in harmony with robotic process automation to test 100% of an organization’s data, marking anomalies and outlying cases and flagging them for secondary review by human experts.
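To make the pattern concrete, here is a minimal, illustrative sketch of that kind of anomaly flagging. It stands in for a full ML platform with a simple z-score test on transaction amounts; the function name and the three-sigma threshold are assumptions for illustration, not any particular product's API:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the indices of transactions whose amount deviates more
    than `threshold` standard deviations from the mean (a z-score test).
    Flagged cases would then be routed to a human reviewer."""
    if len(set(amounts)) < 2:
        return []  # no variation, nothing stands out
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mu) / sigma > threshold]

# Fifty ordinary payments and one wildly different one:
flags = flag_anomalies([100.0] * 50 + [10000.0])  # flags index 50
```

A production platform would use far richer models over many features, but the routing pattern is the same: score every record, flag the outliers, and escalate those few cases to human experts.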
In the past, specialist data scientists were hired to fine-tune the layers of complex AI technology needed to put such solutions in place. In today's rapidly evolving technological landscape, however, ready-to-roll platforms are available that help protect organizations of all sizes against fraud.
The technology has proved so effective that many organizations are now leveraging machine learning as part of ongoing risk management activities, thereby helping provide assurance over all areas of governance, risk, and compliance (GRC).
But like all new technologies, deploying what is still a relative unknown in many verticals comes with certain risks. This article outlines some of those risks and suggests strategies for navigating these potential stumbling blocks.
The risks of poor AI
In a global McKinsey survey of roughly 2,000 respondents, only 41% said their organization had a comprehensive, prioritized list of AI risks. The same survey identified a lack of clear strategy and a lack of skilled personnel as the two main impediments to using AI effectively.
The abilities and power of rapidly evolving machine learning systems mean that while AI can be used effectively to lower corporate risk, its use is also, because of those impediments, itself a factor in increasing overall risk exposure.
And even with properly qualified staff and full oversight of AI projects, on a more practical level, some businesses put themselves at increased risk because low-quality data informs their models.
In any data set, outliers can confuse the self-learning process during the machine learning training phase. Conversely, in production, confirmed “false positives” can be used to teach the model further, so the platform refines its results over time. Implementation details like these can pose significant danger to organizations hoping to spin up ML quickly without fully understanding what is involved.
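That feedback loop can be sketched in a few lines. This is a hedged illustration, not a real platform API: each human-reviewed verdict nudges a fraud-alert threshold, so confirmed false positives gradually reduce future false alarms while confirmed frauds make the model more sensitive (the function name and step size are assumptions):

```python
def retune_threshold(threshold, verdicts, step=0.01):
    """Adjust an alert threshold from human review outcomes.

    verdicts: list of (score, is_fraud) pairs for cases the model
    flagged. A confirmed false positive (is_fraud=False) nudges the
    threshold up, cutting future false alarms; a confirmed fraud
    nudges it down, keeping the model sensitive."""
    for score, is_fraud in verdicts:
        threshold += -step if is_fraud else step
    # Keep the threshold inside the valid score range [0, 1].
    return min(max(threshold, 0.0), 1.0)

# Two false alarms and one real fraud shift the threshold slightly up:
new_t = retune_threshold(0.8, [(0.9, False), (0.85, False), (0.95, True)])
```

Real platforms retrain model weights rather than a single threshold, but the principle the article describes is the same: reviewed outcomes flow back into the model so that results improve over time.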
Some simple strategies to minimize AI risk
Although few organizations will deploy AI in their business processes blindly, following some simple guidelines can help smooth the path to using AI as part of a more extensive risk assessment and management process:
Ensure data integrity
Ensure that you know your data sources and that qualified data science teams can verify their integrity. Furthermore, put systems of control in place that rigorously check data validity at a granular level, so that poorly formatted data does not throw off the machine learning model. This extends from checking syntax all the way to reducing inherent data bias (in skewed demographic “slices,” for instance). Read more about how data drives financial process governance.
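Granular validity checks of the kind described above can be expressed as a small validation routine. The field names and rules below are illustrative assumptions rather than a real schema; the point is that every record is checked for presence, type, and format before it ever reaches the model:

```python
import re
from datetime import datetime

# Hypothetical transaction schema: required fields and expected types.
REQUIRED = {"id": str, "amount": float, "currency": str, "timestamp": str}
CURRENCY_RE = re.compile(r"^[A-Z]{3}$")  # 3-letter ISO-style code

def validate_record(record):
    """Return a list of problems found in one transaction record.
    An empty list means the record passed every check."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for field: {field}")
    if isinstance(record.get("currency"), str) and \
            not CURRENCY_RE.match(record["currency"]):
        errors.append("currency must be a 3-letter uppercase code")
    if isinstance(record.get("amount"), float) and record["amount"] < 0:
        errors.append("amount must be non-negative")
    if isinstance(record.get("timestamp"), str):
        try:
            datetime.fromisoformat(record["timestamp"])
        except ValueError:
            errors.append("timestamp is not ISO 8601")
    return errors
```

Records that fail validation would be quarantined for correction rather than fed into training, which is exactly the control layer the guideline calls for.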
Keep in control
The learning phases of an artificial intelligence deployment are the ideal time to test control processes, especially in areas where customers and staff interact with AI systems. For instance, you might implement automated controls that trigger human intervention whenever an AI algorithm cannot perform a set task within defined risk tolerances.
Keep it ethical and respect governance
National and international governance rules are almost all intended to help improve data-related ethics. Over a dozen countries have produced documents on ethical AI standards. The main concern, aside from personal and commercial privacy, is that AI uses ever-larger data sets to produce outcomes; in short, ML models eat up more data and get smarter, which worries legislatures across the globe.
Testing and monitoring therefore have to be a constant presence throughout the AI project life cycle, from initial concept through full production to end of life. As machine learning deployments grow in number and capability, lawmakers will demand better oversight of what organizations are doing and full disclosure of their practices.
This article has only scratched the surface of the many issues surrounding artificial intelligence, machine learning, risk management, and data control. To learn more about incorporating machine learning into your organization, we recommend this article on the new and exciting ways machine learning is evolving entire sectors, industries, and roles.