According to PwC’s (PricewaterhouseCoopers) report Artificial Intelligence Evolution: Main Trends, “The Artificial intelligence market is expected to reach a value of $53.1 Bn by 2026.” This includes the intelligent virtual assistant (IVA) / conversational AI market, which is “expected to reach US$ 23 bn in 2027.”
However, a vast chasm of understanding separates text-to-speech and rules-led chatbots from complex, cognition-based AI driven by computational linguistics.
It’s true, the former can often “appear” (at first) like the latter, but after a few ping-pong exchanges between human and machine, the rules-led chatbot usually resorts to: “I’m sorry, I didn’t understand that.” The latter, by contrast, will already have flagged the issue, recalibrated its inner workings, and begun “learning” to avoid such a snafu in the future.
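That contrast can be sketched in a few lines of code. This is a deliberately simplified toy (not Crux Intelligence’s system, and the keywords and responses are invented for illustration): a rules-only bot simply fails on unseen input, while a learning-oriented bot at minimum records the miss so it can be incorporated into future training.

```python
class RulesOnlyBot:
    """Matches canned keywords; anything else gets the dreaded fallback."""

    RESPONSES = {
        "hello": "Hi there!",
        "hours": "We are open 9-5.",
    }

    def reply(self, text: str) -> str:
        for keyword, answer in self.RESPONSES.items():
            if keyword in text.lower():
                return answer
        return "I'm sorry, I didn't understand that."


class LearningBot(RulesOnlyBot):
    """Same matching, but unrecognized inputs are flagged for retraining."""

    def __init__(self):
        self.unhandled = []  # queue of inputs to learn from later

    def reply(self, text: str) -> str:
        answer = super().reply(text)
        if answer.startswith("I'm sorry"):
            self.unhandled.append(text)  # flag the miss instead of just failing
        return answer


bot = LearningBot()
bot.reply("hello")                   # handled by a rule
bot.reply("do you ship to Canada?")  # a miss: flagged for future learning
print(bot.unhandled)                 # ['do you ship to Canada?']
```

Real conversational AI replaces the keyword table with statistical language understanding, but the feedback loop, capturing what the system could not handle and learning from it, is the essential difference the paragraph above describes.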
At Crux Intelligence we have poured years of advanced academic research into our own AI, known as “ASK”. As a result, we’re pleased to announce that the United States Patent and Trademark Office has recognized our significant achievement and innovation, awarding us a patent for this intellectual property.
The patent specifically pertains to our question-answering (QA) system, which utilizes different natural language processing (NLP) approaches, such as syntactic and semantic parsing, to understand and identify the key entities and their relations in a question.
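To make “identifying the key entities and their relations in a question” concrete, here is a minimal pattern-based sketch. It is a hypothetical illustration, not the patented ASK pipeline: a real system would use syntactic and semantic parsing rather than a single regular expression, but the output shape — a measure, an entity, and a time period pulled out of a business question — is the idea.

```python
import re

# One hand-written pattern standing in for a full syntactic/semantic parse.
QUESTION_PATTERN = re.compile(
    r"what (?:was|is) the (?P<measure>[\w ]+?) of (?P<entity>[\w ]+?) in (?P<period>[\w ]+)\??",
    re.IGNORECASE,
)


def parse_question(question: str) -> dict:
    """Return the measure, entity, and time period from a question, or {}."""
    match = QUESTION_PATTERN.match(question.strip())
    return match.groupdict() if match else {}


print(parse_question("What was the revenue of Product X in Q3 2021?"))
# {'measure': 'revenue', 'entity': 'Product X', 'period': 'Q3 2021'}
```

Once a question is reduced to structured slots like these, answering it becomes a data-retrieval problem; the hard part, and the substance of the parsing research, is doing this reliably for questions that do not fit any fixed template.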
What does that mean (without writing a PhD-level academic paper)? It means our AI can not only communicate with the end (human) user, applying both explicit and implicit learning, handling nuance, and ingesting industry-specific jargon, but can also vocalize.
Yes, our AI can talk.
In my most recent article for Fast Company, I go further into this subject, showing how we developed our talking AI and examining the building blocks of true cognition within conversational AI, including computational linguistics and Word Sense Disambiguation. If you want to know more about the latter, our Co-Founder, Neha Prabhugaonkar, Head of Data Science at Crux Intelligence, expanded on it in her academic paper, Word Sense Induction for Better Lexical Choice.
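For readers unfamiliar with Word Sense Disambiguation, the classic textbook approach is the Lesk algorithm: pick the sense of an ambiguous word whose dictionary gloss shares the most words with the surrounding context. The sketch below uses two hand-written glosses for “bank” purely for illustration; it is not the method from the paper cited above, which concerns word sense induction rather than gloss lookup.

```python
# Hand-written sense glosses; real systems draw on resources like WordNet
# or induce senses from data.
SENSES = {
    "bank": {
        "finance": "an institution that accepts deposits and lends money",
        "river": "the sloping land alongside a body of water",
    }
}


def lesk(word: str, context: str) -> str:
    """Pick the sense whose gloss overlaps most with the context words."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense


print(lesk("bank", "she walked along the river bank near the water"))  # river
```

Simple word overlap like this breaks down quickly in practice, which is exactly why conversational AI needs the deeper computational-linguistics machinery the article describes.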
On the vocalization front, I told Fast Company readers about acoustic training to handle anomalies, and the necessity of providing flexibility in audio presentation styles to achieve real scale.
Finally, I also delve into the history of AIs-that-talk – including the real backstory on Siri (before Apple bought it from SRI International) and what happened when Steve Jobs called SRI “out of the blue” that day.
FOR MORE ABOUT CRUX INTELLIGENCE’S AI, READ KATHY LEAKE’S ARTICLE IN FAST COMPANY