For companies to create revolutionary AI applications, they need to be able to earn everyone's trust. Source: Drew Angerer/Getty Images/AFP

How to win the trust of employees & customers in the age of AI

NOBODY doubts the potential of artificial intelligence (AI) because sporadic use cases have showcased what the technology is capable of. However, none of those head-turning applications have been scaled up.

Mid and large-scale entities are pouring money, energy, and time into exploring AI, but they’ll need to get ahead of a major hurdle before projects reach critical mass. That hurdle is trust.

More accurately, it is the lack of trust that organizations must address before they can capitalize on AI opportunities.

Facebook CEO Mark Zuckerberg has made significant progress with AI in recent years. The social platform's AI can identify and tag friends in uploaded photos, detect emotions, and evaluate and remove hate speech.

In fact, Facebook's AI can do much more, and the company is constantly battling with stakeholders and regulators over how it can use the technology it is developing.

A new Edelman study identified the lack of trust as one of the biggest challenges to AI adoption.

To help companies enhance their leadership and protect their reputation while implementing AI, the company made six recommendations. Maybe these could give Facebook and other companies keen on pioneering AI disruptions a fresh perspective and help rejuvenate their AI ambitions.

# 1 | Be proactive with policy-making

AI is a powerful technology, and companies that make significant investments in it are bound to be watched and scrutinized by the media, regulators, and their own employees.

Therefore, it’s crucial for such organizations to be proactive and explain what they have in mind with regards to the technology they’re developing and how they intend to use it.

Further, complying with prevailing regulatory requirements, and openly discussing the gaps in those regulations (via company blogs and through company spokespersons at important forums), is also key to earning the trust of all stakeholders involved.

# 2 | Establish and adhere to an AI ethical code

“If you don’t stand for something, you’ll fall for anything,” goes the old adage. The next big step for companies exploring heavy-duty AI applications is to create and stand by an AI ethical code.

Doing so not only helps assure the public that the company understands that AI is a double-edged sword but also helps set an example when it comes to best practices and beneficial use cases in the practitioner’s industry.

Companies such as IBM and SAP are known to be working on really powerful AI applications, but they've been careful to create equally strong AI ethical codes for themselves to follow, which has earned them the trust not only of regulators and partners but also of their customers.

# 3 | Perform and document rigorous algorithm testing and provide transparent operation

AI and related technologies are complex. In advanced use cases, they’re incredibly complicated despite all the logic and structure involved.

Therefore, to ensure that algorithms remain trustworthy and produce results that can be accurately traced back to a specific set of conditions, Edelman recommends that companies carefully document how their AI is developed and tested.

In the future, AI might have a severe impact on the lives of people, including playing a crucial role in hiring, firing, and appraisal decisions. As a result, transparency and traceability are key.

# 4 | Demonstrate responsible actions to minimize negative impacts from AI

One of the biggest reasons workers and the general public push back on the progress companies are making with AI is the fear of losing their jobs and their source of livelihood.

Therefore, in order for companies to make headway with the technology, they need to not only invest in developing AI solutions but also put money into programs that minimize the negative impacts of AI.

For example, companies that might eliminate jobs as a result of AI implementation need to re-train affected employees for roles that will be available to them in the future, and provide some (financial) support if there's a period during which those employees are unemployed as a result of accelerated AI developments.

Putting people first is not only the right thing to do, but also the best way to ensure there is enterprise-wide support for AI development and implementation.

# 5 | Showcase societal benefits of AI application plus real-world impact at scale

No technology is good or bad. It’s the application and the motives of the user that make all the difference.

The same is true of AI. While people fear the negative impacts AI can have on their jobs, their privacy, and their security, they tend to forget its benefits.

Facial recognition and gait analysis, speech recognition, and AI-based data analysis provide significant medical, mechanical, and societal benefits. If companies want to continue making progress with AI, they must showcase these benefits and gain the confidence of stakeholders across various demographics.

# 6 | Don’t hide your light

The fact that a company is thinking about earning the trust of regulators, employees, customers, and other stakeholders on its AI journey is great.

Any concrete steps such a company takes as a result must be broadly publicized, because communication is key when trying to earn the trust of the general public.

If you’re going to be closely guarded about your actions, it’s going to be hard to create the conditions for an open dialogue between company executives and everyone else.

Edelman advocates going on record to share the small details in order to position the business as a forward-thinking entity intending to use AI for its positive effects on people, markets, and society at large.