Facebook COO Sheryl Sandberg and her team are working on educating stakeholders about AI. Source: Shutterstock

6 things to keep in mind when deploying artificial intelligence at scale

EVERY business leader today understands the potential and promise of artificial intelligence (AI).

Many of them, especially those ahead of the curve in technology adoption, are now at a stage where they can leverage AI to pioneer exciting use cases.

However, charging ahead with the technology has its challenges, including inviting scrutiny from regulators, causing concern among customers, and fuelling fear among employees.

Facebook, for example, which is at the forefront of innovation with several cutting-edge AI use cases in its labs and on its platforms, seems to have gotten regulators (unnecessarily) concerned.

As a result, CEO Mark Zuckerberg and COO Sheryl Sandberg are now looking for ways to assuage fears and charge ahead with implementing and scaling their AI projects.

According to a recent study from Edelman, companies that want to truly embrace AI need to earn the trust of all the relevant stakeholders.

To help boards enhance their leadership and protect their company's reputation while implementing AI, the report provides six recommendations.

These could give Facebook and other companies keen on pioneering AI disruption a fresh perspective and help rejuvenate their AI ambitions.

# 1 | Be proactive with policy-making

AI is a powerful technology and companies that make significant investments in it are bound to be watched and scrutinized by media, regulators, as well as employees.

Therefore, it’s crucial for such organizations to be proactive and explain what they have in mind with regards to the technology they’re developing and how they intend to use it.

Further, aligning with prevailing regulatory requirements, and identifying and discussing gaps in regulation (via company blogs and through company spokespersons at important forums), is also key to earning the trust of all stakeholders involved.

# 2 | Establish and adhere to an AI ethical code

“If you don’t stand for something, you’ll fall for anything,” goes the old adage. The next big step for companies exploring heavy-duty AI applications is to create and stand by an AI ethical code.

Doing so not only helps assure the public that the company understands that AI is a double-edged sword but also helps set an example when it comes to best practices and beneficial use cases in the practitioner’s industry.

Companies such as IBM and SAP are known to be working on really powerful AI applications, but they've been careful to create equally strong AI ethical codes for themselves to follow — which has earned them the trust not only of regulators and partners but also of their customers.

# 3 | Perform and document rigorous algorithm testing and provide transparent operation

AI and related technologies are complex; in advanced use cases, they can be incredibly difficult to interpret despite all the logic and structure involved.

Therefore, in order to ensure that algorithms remain trustworthy and produce results that can be accurately traced back to a certain set of conditions, Edelman recommends that companies carefully document how AI is developed.

In the future, AI could have a significant impact on people's lives, including playing a crucial role in hiring, firing, and appraisal decisions. As a result, transparency and traceability are key.

# 4 | Demonstrate responsible actions to minimize negative impacts from AI

One of the biggest reasons workers and the general public push back on the progress companies are making with AI is the fear of losing their jobs and livelihoods.

Therefore, in order for companies to make headway with the technology, they need to not only invest in developing AI solutions but also put money into programs that minimize the negative impacts of AI.

For example, companies that eliminate jobs as a result of AI implementation need to re-train affected employees for roles that will be available to them in the future, and provide some financial support if there's a period during which they're unemployed as a result of accelerated AI adoption.

Putting people first is not only the right thing to do, but also the best way to ensure there is enterprise-wide support for AI development and implementation.

# 5 | Showcase societal benefits of AI application plus real-world impact at scale

No technology is good or bad. It’s the application and the motives of the user that make all the difference.

The same is true for AI. While people fear the negative impacts AI can have on their jobs, privacy, and security, they tend to forget its benefits.

Facial recognition and gait analysis, speech recognition, and AI-based data analysis provide significant medical, mechanical, and societal benefits. If companies want to continue making progress with AI, they must showcase these benefits and gain the confidence of stakeholders across various demographics.

# 6 | Don’t hide your light

The fact that a company is thinking about earning the trust of regulators, employees, customers, and other stakeholders on its AI journey is great.

Any concrete steps such a company takes as a result must be broadly publicized, because communication is key to earning the trust of the general public.

If you’re going to be closely guarded about your actions, it’s going to be hard to create the conditions for an open dialogue between company executives and everyone else.

Edelman advocates going on record to share even the small details in order to position the business as a forward-thinking entity intent on using AI for the positive effects it can have on people, markets, and society at large.