Drafting engaging articles takes exceptional writing skills, critical thinking, research, and empathy. As a team dedicated to creating quality content, we see writing as an explorative process rooted in human experiences and emotions that no machine could understand or reproduce. However, with digital technologies and AI rapidly advancing, we might be surprised by some of the new solutions on the horizon.

Last week, Instoried, a content enhancement platform, caught our attention by announcing it had raised $8 million in funding from Pritt Investment Partners and 9Unicorns. The company, founded in 2019, designed AI-powered sentiment analysis software for publications, built to boost content performance in real time. Moreover, the platform can help companies generate suggested content from scratch.

We couldn’t resist wondering how such a tool, designed to predict and generate ‘the right’ content, will shape the future of the content industry. Will this technology ensure we still have diverse business articles? How can we avoid the risk of human biases in such an AI solution?

To discover what’s behind this solution, we prepared a short overview, enriched with the founder’s comments and our own critical take.

People and technology behind Instoried

From the comments shared by the founder of Instoried, we understood that diversity should be at the centre of this technology. This is how CEO Sharmin Ali explained the purpose of the platform:

Image: Sharmin Ali, CEO of Instoried.

‘Instoried will enable brands, enterprises and freelancers to create more meaningful content in less time that resonates with their target audience. Our aim is to create a robust and all-inclusive content tool that can be the complete solution for content writers of all kinds. We want to make empathetic communication the next revolution in content marketing.’

You are probably wondering how the platform ‘learned’ language patterns and developed the ability to read the emotional tone of a sentence. Let’s break the process down.

The company first used a natural language processing (NLP) approach to tag dozens of lines across various text types and genres, from business, health, and politics to social media posts. As a result, the AI engine can now estimate the emotional engagement quotient of any written content. Based on the learned patterns, the tool makes recommendations that help writers emotionally engage their readers. But wait, who is teaching the algorithm to recognize these patterns?
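
To make that pipeline concrete, here is a minimal sketch of what a supervised emotion classifier trained on hand-tagged sentences could look like. The example sentences, emotion labels, and the TF-IDF-plus-logistic-regression model are our own illustrative assumptions; Instoried has not published its actual architecture.

```python
# Minimal sketch of a supervised emotion-tagging pipeline.
# All data and model choices here are illustrative assumptions,
# not Instoried's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-tagged training data: each sentence carries an emotion label.
sentences = [
    "Our quarterly results exceeded every expectation.",
    "The new policy leaves thousands of families without support.",
    "Share your story with us and join the conversation!",
    "Markets tumbled again amid fears of a prolonged recession.",
]
emotions = ["joy", "sadness", "joy", "fear"]

# TF-IDF features plus a linear classifier stand in for whatever
# engine actually computes the 'emotional engagement quotient'.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, emotions)

# Score a new draft: per-emotion probabilities a writer could act on.
draft = "Readers everywhere are celebrating the announcement."
probabilities = model.predict_proba([draft])[0]
for emotion, p in zip(model.classes_, probabilities):
    print(f"{emotion}: {p:.2f}")
```

In a production system the tagged corpus would be far larger and the model more sophisticated, but the principle is the same: learned patterns map text to emotion scores that the tool can turn into recommendations.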

How can we avoid biases and unjust power dynamics in AI?

As explained by the founder, a team of linguists manually trains the AI on the tonality and emotion of various words.

‘If a sentence is written as “terrorist is killed”, the system may identify the sentence as Negative and Fear. The linguists train the system to identify that the sentence is actually Positive and Joy. Linguists help the system learn the nuances of various phrases and how the same word may mean different tones and emotions in different contexts. It is an ongoing process,’ shared the CEO of Instoried, Sharmin Ali.
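
As a rough illustration of that ongoing loop, the sketch below shows how a linguist’s correction could be fed back into the training set for the next retraining run. The class, function, and label names are hypothetical; this is not Instoried’s actual code or API.

```python
# Hypothetical sketch of the human-in-the-loop correction the CEO
# describes. Names and labels are our own assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class LabeledSentence:
    text: str
    tone: str     # e.g. "positive" / "negative"
    emotion: str  # e.g. "joy" / "fear"

def apply_correction(model_label: LabeledSentence,
                     linguist_label: LabeledSentence,
                     training_set: list) -> None:
    """When a linguist disagrees with the model, the corrected example
    is added to the training set for the next retraining run."""
    if (model_label.tone, model_label.emotion) != (
            linguist_label.tone, linguist_label.emotion):
        training_set.append(linguist_label)

# The CEO's example: the model reads the sentence as Negative/Fear,
# and a linguist relabels it as Positive/Joy for its intended context.
training_set = []
model_label = LabeledSentence("terrorist is killed", "negative", "fear")
linguist_label = LabeledSentence("terrorist is killed", "positive", "joy")
apply_correction(model_label, linguist_label, training_set)
print(training_set)  # the correction now feeds the next training cycle
```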

Image: a young Black woman coding; should humans or machines have the final word in decision-making?

This comment caused a stir and motivated our team to research the link between AI and emotional biases. Even though we don’t support any form of violence or hatred, characterizing a killing as a joyful event causes a certain level of unease.

It made us question: who decides what is positive or negative?

In search of an answer, we found a study showing that emotion analysis technology assigns more negative feelings to people of certain ethnicities. According to the photo analysis in this research, algorithms are prone to assigning more negative emotions to Black faces than to white ones. Such AI pitfalls are likely to reinforce stereotypes and prejudices about certain groups of people.

As explained in Erin Meyer’s book The Culture Map, one sentence can be interpreted differently depending on the cultural context, personal biases, and learned behaviours of both the receiver and the sender. In other words, what reads as a positive message in Germany might not in Japan or France.

Even though Instoried sounds revolutionary, we can’t help but imagine scenarios in which such technology could deepen existing unjust power structures in society. Therefore, we will continue exploring the effects of this and similar AI-powered solutions. In the meantime, here are some questions for you to think through:

Should a content writer blindly trust the algorithm without knowing the criteria behind it?
What do the people developing the algorithm need to know about how language translates in different contexts?
How can we remain authentic, unique storytellers if algorithms constantly recommend similar content?
