C-level executives should be responsible for AI ethics in organizations
AI ethics has long been a concern for organizations hoping to leverage the technology. As AI has improved over the years, it has become integral to products and services, and some organizations are now looking to develop their own AI codes of ethics.
While the whole notion of AI ethics is still debatable in many ways, the use of AI cannot be held back, especially as the world becomes increasingly influenced by modern technologies.
Last year, UNESCO member states adopted the first-ever global agreement on the Ethics of AI. The guidelines define the common values and principles to guide the construction of necessary legal infrastructure to ensure the healthy development of AI.
“Emerging technologies such as AI have proven their immense capacity to deliver for good. However, its negative impacts that are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected,” stated UNESCO.
While UNESCO’s recommendation looks at ethical AI as a whole, it’s a totally different game when it comes to AI implementation in organizations. A study by IBM’s Institute for Business Value (IBV) revealed a radical shift in the roles responsible for leading and upholding AI ethics at an organization.
The global study indicates that despite a strong imperative for advancing trustworthy AI, including better performance compared to peers in sustainability, social responsibility, and diversity and inclusion, there remains a gap between leaders’ intentions and meaningful actions.
The study found that business executives are now seen as the driving force in AI ethics, with CEOs, board members, general counsels, and even risk and compliance officers regarded as the most accountable. While 66% of respondents cite the CEO or another C-level executive as having a strong influence on their organization’s ethics strategy, more than half also cite board directives (58%) and the shareholder community (53%) as sharing that responsibility.
At the same time, the study showed that building trustworthy AI is perceived as a strategic differentiator, and organizations are beginning to implement AI ethics mechanisms. There is no denying that the importance of AI ethics in organizations has increased: 75% of respondents believe ethics is a source of competitive differentiation, and more than 67% of respondents who view AI and AI ethics as important indicate their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion.
Besides that, organizations are also ensuring ethical principles are embedded in AI solutions, but progress is still too slow. 79% of CEOs are now prepared to embed ethical AI into their AI practices and more than half of responding organizations have publicly endorsed common principles of AI ethics. Yet, less than a quarter of responding organizations have operationalized AI ethics, and fewer than 20% of respondents strongly agreed that their organization’s practices and actions match (or exceed) their stated principles and values.
According to Jesus Mantas, Global Managing Partner, IBM Consulting, as many companies today use AI algorithms across their business, they potentially face increasing internal and external demands to design these algorithms to be fair, secure, and trustworthy. Yet there has been little progress across the industry in embedding AI ethics into everyday practice.
“Our IBV study findings demonstrate that building trustworthy AI is a business imperative and a societal expectation, not just a compliance issue. As such, companies can implement a governance model and embed ethical principles across the full AI life cycle,” said Mantas.
As such, the study data suggest that those organizations that implement a broad ethical AI strategy interwoven throughout business units may have a competitive advantage moving forward.
Simply put, businesses should take a cross-functional, collaborative approach to ethical AI. This means enabling C-suite executives, designers, behavioral scientists, data scientists, and AI engineers to each play a distinct role in the trustworthy AI journey.
Businesses should also establish both organizational and AI lifecycle governance to operationalize the discipline of ethical AI. They should also expand their approach by identifying and engaging key AI-focused technology partners, academics, startups, and other ecosystem partners to establish ethical interoperability.