
Here’s how organizations can manage reputational risks on social media

For organizations, social media can be an excellent platform for promoting the company, or it can be deeply disruptive. Over the years, social media content has become a key influence on how customers view a particular brand or company. With reputation at stake, more companies want to know what people are saying about them, and they want to ensure that inaccurate information does not sway consumer decisions.

To do that, businesses need visibility into what’s happening on social media. However, given the sheer volume of posts across multiple platforms today, it’s almost impossible to review every single comment or post without the help of technology. This is where artificial intelligence can play an important role in helping organizations manage and resolve online issues and reputational risks.

Blackbird.AI focuses on exactly this. A global leader in narrative and risk intelligence, the company provides the fastest, highest-resolution early warning system, offering organizations actionable insights to make informed decisions against harmful online threats in real time. It officially opened its regional headquarters in Singapore in January 2023 as it seeks to expand its support for organizations across Asia Pacific (APAC).

According to Wasim Khaled, Blackbird.AI’s CEO and Co-Founder, the company’s goal since its inception has been to deal with misinformation: taking a seemingly insurmountable problem, breaking it down into individual components, and treating each as a problem that can be solved with specific, targeted pieces of technology.

“The integrity component of our mission is we think of our technology as an integrity layer for the internet. Today, we provide dashboards and insights to help understand narrative manipulation and risk. We have a larger aspiration for almost any analytics or BI platform or risk platform to integrate our signals because with today’s metrics, when people look at social media, they’re really low fidelity and don’t give people an understanding of risk. And so our goal is how can we use our technology to create higher levels of integrity on how risk is measured across many different industries, both in the private and the public sector.”


Wasim Khaled, Blackbird.AI’s CEO and Co-Founder

Khaled highlighted that the problem of misinformation, especially with the influence of social media, has become a serious concern since the COVID-19 pandemic. Organizations today are dealing with a harder set of problems that they need to understand, because the stakes are now much higher.

“Our mission has not changed. It is about empowering trust, safety, and integrity across the information ecosystem. And as this problem has progressed, I think we can only dig deeper into the concept of that because again, the bar continues to go higher around trust, safety, and integrity across the whole landscape,” said Khaled.

Managing reputational risk on social media

In the past, if a media organization misreported the news about a particular company, it could face a lawsuit if it did not correct the report and apologize. Sadly, this is not the case on social media. While individuals or organizations that spread unverified information on social media can face defamation claims, getting complete visibility of that information remains a great challenge.

This is where businesses have been relying on social media monitoring tools. However, as Khaled points out, most of these tools are still unable to provide insights that businesses can act on. Companies remain unsure how the information is spreading and how widely it is trending.

As Blackbird.AI focuses on enterprises, Khaled stated that what businesses need to look at is the narrative, emerging narratives, and the platforms they are spreading on.

For Khaled, there are five buckets, or key areas, that Blackbird.AI focuses on when dealing with misinformation. The first is understanding the narrative: taking thousands or millions of posts and working out what the dominant narratives are. The second is the networks: once the narrative is understood, it is about knowing how the narratives are spreading.

“How do these narratives spread? What is the contagion effect of that particular narrative? And how does it move from one network to another? What are the interactions and movements and the heat map of that narrative? You need to know the information flow, which is important because sometimes the information flow, especially with our technology using AI, shows similarities in patterns of previous types of movements that indicate certain things,” said Khaled.

The third bucket is what Khaled refers to as cohorts: communities of like-minded actors or accounts working in coordination or in interaction with one another. Hundreds of these labels and models stitch the three buckets together.

“When you stitch these three buckets together, you will have a dominant emergent narrative. You can see how it’s spreading and which platforms it’s spreading on. You will see the communities that might be overlapped, interacting, and bridging together. Now you can understand how these groups are pushing the narrative through the network,” explained Khaled.
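
To make the idea concrete, here is a minimal sketch, not Blackbird.AI’s implementation, of how posts might be clustered into dominant narratives, mapped onto a repost network, and grouped into cohorts. The sample posts, field names, and clustering choices are illustrative assumptions.

```python
# Illustrative sketch only: cluster posts into dominant narratives (bucket 1),
# build an interaction graph to see how they spread (bucket 2), and group the
# accounts involved into cohorts (bucket 3). Data and thresholds are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import networkx as nx

posts = [
    {"id": 1, "author": "a1", "reposted_from": None, "text": "Brand X recall rumour is spreading fast"},
    {"id": 2, "author": "a2", "reposted_from": "a1", "text": "The recall rumour about Brand X is everywhere"},
    {"id": 3, "author": "a3", "reposted_from": None, "text": "Brand X launches a new product line"},
]

# Bucket 1: dominant narratives via simple text clustering.
texts = [p["text"] for p in posts]
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Bucket 2: information flow as a repost/interaction graph.
graph = nx.DiGraph()
for post, label in zip(posts, labels):
    graph.add_node(post["author"], narrative=int(label))
    if post["reposted_from"]:
        graph.add_edge(post["reposted_from"], post["author"])

# Bucket 3: cohorts, i.e. groups of accounts interacting with one another.
for i, cohort in enumerate(nx.weakly_connected_components(graph)):
    narratives = {graph.nodes[a].get("narrative") for a in cohort}
    print(f"cohort {i}: accounts={sorted(cohort)} narratives={narratives}")
```

A production system would rely on far richer models (multilingual embeddings, coordinated-behaviour detection, and so on); the point here is only how the three buckets fit together.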


The last two buckets, according to Khaled, are anomalous activity and influence, which go hand in hand.

“Frankly, these are all interconnected. You can’t really understand with enough fidelity without having all of them. Anomalous activity is behavior coordinated to artificially drive and amplify a narrative, and the influence nodes in that network are the key drivers. It could be someone with millions of followers who is well known, or it could be thousands of accounts with 10 followers that nobody knows. We call them shadow influencers, and they are driving the conversation synthetically,” continued Khaled.
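
The “anomalous activity” and “shadow influencer” ideas can be illustrated with a simple heuristic: flag accounts whose share of a narrative’s volume is far out of proportion to their follower count. The thresholds and fields below are assumptions for illustration, not Blackbird.AI’s models.

```python
# Illustrative sketch only: spot accounts that drive a narrative's volume despite
# having very few followers ("shadow influencers"). Numbers are made up.
from collections import Counter

posts = [
    {"author": "known_pundit", "followers": 2_000_000},
    {"author": "acct_0007", "followers": 12},
    {"author": "acct_0007", "followers": 12},
    {"author": "acct_0007", "followers": 12},
    {"author": "acct_0042", "followers": 9},
    {"author": "acct_0042", "followers": 9},
]

post_counts = Counter(p["author"] for p in posts)
followers = {p["author"]: p["followers"] for p in posts}
total = len(posts)

for author, count in post_counts.most_common():
    share = count / total  # this account's share of the narrative's volume
    if followers[author] < 100 and share > 0.2:
        print(f"possible shadow influencer: {author} ({share:.0%} of posts, {followers[author]} followers)")
    elif followers[author] >= 1_000_000:
        print(f"known influencer: {author} ({share:.0%} of posts)")
```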

It’s all about visibility

For Blackbird, the narrative is derived by combining natural language processing (NLP), used to understand what is being said, with a large network analysis component that makes it possible to understand bot activity and similar behavior.

“We do all five of those buckets simultaneously in order to get that maximum fidelity so that you can kind of see what’s happening both at an individual post level, but also at a very large macro level in that environment,” Khaled said.

Blackbird.AI’s adaptable AI exposes the hundreds of models behind the cohorts and the thresholds behind network manipulation detection. This enables organizations to build their own custom risk profiles based on the industry they are in.

“You build your own custom profiles or choose from known profiles that are already industry-mapped. This allows you to map yourself. Now, the key thing is we only focus on misinformation, not disinformation. Misinformation, the unintentional spread, comes out in the synthetic amplification component of it, right? Disinformation, the actual notion of true versus false, Blackbird.AI really isn’t the arbiter of that.

What we believe is that you really need to understand the narrative. And frankly, that’s what our customers want to understand. They can take a piece of information that they find has been spread and that is, let’s say, untrue, plug it into our system, and see how that particular narrative is spreading. Essentially, you could call it almost a subset or a feature of a narrative that might be misinformation or disinformation. Of course, all of this runs on a high-speed risk engine, and it presents itself in a visualization dashboard or as an engine that can plug into third-party systems.”
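
As a rough illustration of the custom risk profiling Khaled describes, the sketch below combines a few hypothetical narrative-risk signals under industry-specific weights. The signal names, weights, and profile are assumptions made for the sake of example, not Blackbird.AI’s actual models or thresholds.

```python
# Illustrative sketch only: an industry-mapped "risk profile" that weights
# narrative-risk signals and rolls them up into a single score.
from dataclasses import dataclass

@dataclass
class RiskProfile:
    name: str
    weights: dict  # signal name -> weight (weights sum to 1.0)

FINANCE_PROFILE = RiskProfile(
    name="financial-services",
    weights={"bot_amplification": 0.4, "cohort_coordination": 0.35, "narrative_velocity": 0.25},
)

def risk_score(signals: dict, profile: RiskProfile) -> float:
    """Weighted sum of normalized (0-1) signals under the chosen profile."""
    return sum(profile.weights.get(name, 0.0) * value for name, value in signals.items())

# Signals surfaced for one emerging narrative (values are hypothetical).
signals = {"bot_amplification": 0.8, "cohort_coordination": 0.6, "narrative_velocity": 0.9}
print(f"narrative risk score: {risk_score(signals, FINANCE_PROFILE):.2f}")
```

A score like this is the kind of output that could then feed a visualization dashboard or a third-party risk system, as Khaled describes.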

And this is where the Global Alliance Program comes in. Blackbird.AI launched the program with a network of partners, market leaders in data science, social and broadcast intelligence, risk management, and consulting, to bring the capabilities of its solutions to those partners. Khaled said the program will also enable brands and organizations to better understand and analyze emerging narratives and potential risks across news and social media.


Generative AI

With generative AI being the talk of the town, Blackbird.AI is not missing out on the action. The company unveiled RAV3N Copilot, a generative AI-powered solution for Narrative Intelligence and Rapid Risk Reporting that enables unparalleled workflow automation during mission-critical crisis scenarios.

Khaled explained that RAV3N can auto-generate executive briefings, key findings, and even suggest mitigation based on insights surfaced by Blackbird’s Constellation Platform. This frees up teams to focus their time on leveraging their subject matter expertise. RAV3N’s Collaboration Mode enables users to combine their knowledge with RAV3N’s capabilities, streamlining the creation of data-driven action plans through an AI-assisted collaborative workflow.
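
The copilot pattern Khaled describes, turning surfaced insights into a draft briefing with suggested mitigations, can be sketched as follows. The generate_text helper is a hypothetical stand-in for whichever LLM client is used; it is not RAV3N’s actual API.

```python
# Illustrative sketch only: draft an executive briefing from risk insights via an LLM.
from typing import List

def generate_text(prompt: str) -> str:
    # Hypothetical placeholder: call your LLM of choice here (hosted API or local model).
    raise NotImplementedError

def draft_briefing(narrative: str, insights: List[str]) -> str:
    findings = "\n".join(f"- {item}" for item in insights)
    prompt = (
        "You are assisting a corporate threat-intelligence analyst.\n"
        f"Narrative under review: {narrative}\n"
        f"Key findings:\n{findings}\n\n"
        "Write a short executive briefing with suggested mitigations, "
        "and flag any conclusion that needs human verification."
    )
    return generate_text(prompt)

# Usage (insights would come from an upstream risk platform):
# print(draft_briefing("product recall rumour", ["spike driven by 40 low-follower accounts"]))
```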

“RAV3N Copilot will become a transformative must-have for corporate and threat intelligence professionals, force-multiplying their talents and enabling them to get more done in critical, time-sensitive scenarios than ever before. While our Constellation Platform empowers rapid surfacing of previously unseen risks with high fidelity and speed, RAV3N Copilot automates the last mile of distilling insights and mitigation so strategic decisions can be made rapidly when every second counts,” added Khaled.

The RAV3N large language model (LLM) is purpose-built by a team of in-house artificial intelligence engineers with extensive experience in natural language processing and in building generative pre-trained transformer (GPT) models. The proprietary RAV3N LLM will be trained for a variety of industry-specific use cases across multiple languages.