Singapore issues guidelines for use of AI and analytics in finance
Using customer and market data allows bankers and financial service providers to offer better products and services to end users, make transactions more secure, and ensure better compliance with prevailing regulations.
New age technologies such as data analytics and artificial intelligence (AI) make it easy to ingest data and generate insights in real time.
However, in the absence of strong governance mechanisms around the use of data, stakeholders within the financial services ecosystem shy away from fully leveraging data and innovative technologies to transform how they manage their business.
In Asia, the Monetary Authority of Singapore (MAS) has recognized this gap and hence, has issued a set of principles to promote fairness, ethics, accountability, and transparency (FEAT) in the use of AI and data analytics (AIDA) in finance.
Known as the FEAT Principles, the document provides guidance to firms offering financial products and services on the responsible use of AI and data analytics, to strengthen internal governance around data management and use.
The MAS expects that the FEAT Principles will foster greater confidence and trust in the use of AI and data analytics as firms increasingly adopt technology tools and solutions to support business strategies and risk management.
The new set of principles has not been created in isolation. The MAS consulted with a group of senior industry partners, and incorporated the views and feedback of financial institutions, industry associations, fintech firms, technology providers, and academia while formulating the FEAT Principles.
A commentary on the FEAT Principles is provided below:
# 1 | Fairness
This section is split into two parts — justifiability, and accuracy and bias.
The MAS expects the use of personal attributes as input factors for AIDA-driven decisions to be justified, and cautions against systematically disadvantaging particular individuals or groups.
As a result, the MAS suggests that the data and models used for AIDA-driven decisions be regularly reviewed and validated for accuracy and relevance, and to minimize unintentional bias.
Further, it suggests that financial services providers regularly review AIDA-driven decisions to verify that models behave as designed and intended.
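The kind of periodic review described above can be illustrated with a minimal sketch. The group labels, decision data, and the 20% disparity threshold below are invented for the example — the FEAT Principles do not prescribe any particular metric or cut-off:

```python
# Illustrative bias review (hypothetical data and threshold): compare
# approval rates across groups and flag pairs with a large gap.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> {group: approval rate}."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_flags(rates, max_gap=0.2):
    """Flag group pairs whose approval rates differ by more than max_gap."""
    groups = sorted(rates)
    return [(a, b) for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]

# Toy decision log: (group, approved)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(decisions)   # A: 0.67, B: 0.25
flags = disparity_flags(rates)      # [("A", "B")] — gap exceeds threshold
```

A flagged pair would then prompt the human review of the model's behavior that the principles call for, rather than any automatic action.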
# 2 | Ethics
In terms of ethics, the MAS has simple requirements.
It suggests that the AIDA models be aligned with the firm’s ethical standards, values, and codes of conduct, and that AIDA-driven decisions be held to at least the same ethical standards as human-driven decisions.
# 3 | Accountability
Like fairness, this section too is split into two parts — internal accountability and external accountability.
To help create strong governance around the use of AIDA, the MAS recommends that AIDA-driven decision-making be approved by an appropriate internal authority, and expects firms to assume accountability for both internally developed and externally sourced AIDA models.
Firms using AIDA should therefore proactively raise management and board awareness of their use of AIDA to ensure better governance.
To help make AIDA-driven decisions more transparent to the community that they impact, the MAS suggests that companies provide channels to help customers inquire about, submit appeals for, and request reviews of AIDA-driven decisions.
Further, the principles strongly recommend that firms use all verified and relevant supplementary data provided by data subjects when performing a review of AIDA-driven decisions.
Doing so will not only boost confidence among the community but also help improve the models over time.
# 4 | Transparency
To increase public confidence, the MAS recommends that companies proactively disclose the fact that they’re leveraging AIDA-driven decisions.
The principles also suggest that, upon request, customers be provided with clear explanations of what data is used to make AIDA-driven decisions and how that data affects the decision.
Finally, bankers and financial service providers should provide clear explanations of the consequences that AIDA-driven decisions may have for customers, in order to earn the community's trust and avoid any perception of bias or discrimination.
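The explanation-on-request idea can be sketched for a deliberately simple scoring model. The feature names, weights, and threshold below are hypothetical; real AIDA models would need model-appropriate explanation techniques, but the shape of the response — what data was used and how each item affected the outcome — is the same:

```python
# Hypothetical per-factor explanation for a linear scoring model.
# All feature names, weights, and the threshold are invented for illustration.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "account_age_years": 0.1}
THRESHOLD = 1.0

def explain_decision(applicant):
    """Return the decision plus each input's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        # which data was used, and how each item pushed the decision
        "contributions": {f: round(c, 2) for f, c in contributions.items()},
    }

result = explain_decision(
    {"income": 4.0, "debt_ratio": 0.5, "account_age_years": 3.0}
)
```

Returning signed contributions lets a customer see, for instance, that a high debt ratio lowered the score — exactly the "how the data affects the decision" disclosure the principles describe.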