
Four eras of AI innovation in conversation intelligence


The Team at CallMiner

June 02, 2022


Artificial intelligence (AI) and machine learning (ML) innovation is much like building a house. You don’t just wake up one day and find the house complete. Each state-of-the-art advancement starts with foundational concepts and builds upon the research and developments that came before it. This idea holds true for many sub-disciplines of AI, but particularly for conversation intelligence.

Throughout the last few decades, advancements in speech and language AI have formed the foundation for conversation intelligence. Today, companies leverage conversation intelligence to truly understand the context behind their omnichannel customer interactions. Equipped with this knowledge, teams can reduce friction in the CX, effectively coach their workforce, and make critical business improvements across departments such as sales, marketing and product development.


Our latest whitepaper, AI Matters: An Inside Look at How CallMiner Powers Business Performance Improvements, covers these tectonic shifts in the conversation intelligence market by looking back at its history.

These four eras of AI have shaped current thinking around conversation intelligence, even as the industry continues to evolve.

Era 1: Machine transcription with human analysis

Speech recognition AI is used to transcribe conversations to text. It has a long history dating back to the 1950s, when Bell Labs built the first documented speech recognizer called “Audrey,” which recognized strings of digits with pauses in between. The field really took off in the late 1990s to early 2000s, along with Moore’s Law and major advances in computing power.

Drawbacks of machine transcription: Humans still need to interpret each transcript manually to gather insights. This is labor-intensive and isn’t practical for high volumes of interactions.

Where there’s still value: This technology is still valuable when there’s a need to dive deeply into individual interactions. Many workflows still require these in-depth explorations, particularly when mentoring and training employees on the front lines of customer interactions.

Era 2: Word Spotting

Word spotting, otherwise known as keyword spotting, uses AI to look for the presence or absence of certain words. Some of the first research in this field happened in the late 1980s and early 1990s. Typically, this technology is used for sentiment analysis: for example, a word-spotting algorithm could check whether words like “awesome” or “terrible” appear within a call transcript.
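To make the idea concrete, here is a minimal sketch of what a word-spotting pass over a transcript might look like. The keyword lists and function below are purely illustrative assumptions, not any vendor’s implementation:

```python
# Minimal word-spotting sketch (illustrative only; keyword lists are hypothetical).
POSITIVE = {"awesome", "great", "helpful"}
NEGATIVE = {"terrible", "frustrated", "useless"}

def spot_sentiment_words(transcript: str) -> dict:
    # Normalize tokens, then check each against the keyword lists.
    words = [token.strip(".,!?").lower() for token in transcript.split()]
    return {
        "positive": [w for w in words if w in POSITIVE],
        "negative": [w for w in words if w in NEGATIVE],
    }

print(spot_sentiment_words("The agent was awesome, but the hold time was terrible."))
# -> {'positive': ['awesome'], 'negative': ['terrible']}
```

Even this toy example hints at the limitations discussed next: it only works if the transcript contains the exact words it is looking for.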

Drawbacks of word spotting: First, the technology relies on perfect transcript accuracy. Even the best transcribers aren’t 100% accurate. Second, many words have multiple meanings. Language is often far more complicated than the individual words being said. Instead, meaning is derived from how words interact.

Where there’s still value: Word spotting is still useful in many contexts where industry or company-specific words must be detected. For example, competitor mentions might trigger certain automated events within a robotic process automation (RPA) system, taking much of the lift off of individual agents.

Era 3: Rules-based approaches

In this era, pioneers in natural language processing (NLP) began capturing the complexity of human language in conversation. Rules-based approaches became less about detecting individual words and more about understanding how the words interact. Early statistical models in the 1980s and 1990s were among the first systems to derive language rules automatically rather than relying on labor-intensive, hand-written rules.

Rules were an important evolution in AI because they could capture the context of when something was said (metadata), what specifically was said (semantics), and how something was said (acoustics). Rules also allowed for filtering of certain scenarios.
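As a simplified sketch, a rule that combines all three dimensions might look something like the following. The field names, pattern, and thresholds are assumptions made for illustration, not CallMiner’s rule syntax:

```python
import re

# Illustrative rule combining metadata, semantics, and acoustics.
# The interaction fields and thresholds below are hypothetical.
def escalation_rule(interaction: dict) -> bool:
    in_first_minute = interaction["timestamp_seconds"] <= 60              # metadata: when it was said
    mentions_cancel = re.search(r"\bcancel (my )?(account|service)\b",    # semantics: what was said
                                interaction["text"], re.IGNORECASE) is not None
    raised_voice = interaction["loudness_db"] > 70                        # acoustics: how it was said
    return in_first_minute and mentions_cancel and raised_voice

sample = {"timestamp_seconds": 45,
          "text": "I want to cancel my account right now.",
          "loudness_db": 74}
print(escalation_rule(sample))  # True
```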

Drawbacks in rules-based approaches: It’s difficult to capture every way something could be said within a finite set of rules.

Where there’s still value: There are many situations in which there are only a few ways something can be said (or, things can only be said in one specific way). Good examples include legal disclosures, compliance mandates, and certain agent scripts.

Era 4: Machine learning (Current)

ML is the current state-of-the-art technology in conversation intelligence. Instead of describing data by hand-crafting rules, ML uses advanced algorithms to derive rules from large volumes of data. For example, instead of predetermining the topics that should be in a conversation and writing rules to capture them, a machine learning solution might use unsupervised learning to analyze and cluster any subset of interactions into explorable and expandable topics.

Unsupervised learning uses algorithms to discover patterns in unlabeled datasets without human oversight. ML can also surface related words and phrases that are used similarly in real conversations.
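A minimal sketch of this clustering idea, using off-the-shelf scikit-learn components, is shown below. The sample transcripts and the number of clusters are assumptions for illustration; production systems work at far larger scale with more sophisticated models:

```python
# Minimal unsupervised-clustering sketch (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

transcripts = [
    "I was double charged on my last bill and need a refund",
    "My invoice shows a charge I don't recognize",
    "The app keeps crashing when I try to log in",
    "I can't sign in to the mobile app after the update",
]

# Represent each transcript as a TF-IDF vector, then cluster into topics.
vectors = TfidfVectorizer(stop_words="english").fit_transform(transcripts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(transcripts, labels):
    print(label, "-", text)
```

No topics are defined in advance; the algorithm groups similar interactions on its own, and analysts can then explore and label the resulting clusters.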

Understanding more about AI and ML’s history can help inform users on how this technology has evolved. Organizations like CallMiner have stayed in lockstep with these trends over time. The result is conversation intelligence technology that builds upon previous AI advancements, incorporates them where they’re needed most, and benefits from years of experience to stay one step ahead of what’s to come.

Want to learn more about how CallMiner leverages the latest AI innovations in its conversation intelligence technology? Check out our whitepaper for a more detailed look: AI Matters: An Inside Look at How CallMiner Powers Business Performance Improvements.
