aiTech Trend Interview with Henry Vaage Iversen, Co-founder & CCO, boost.ai

Henry Vaage Iversen, Co-founder & CCO, boost.ai

In the rapidly evolving landscape of conversational AI, how do you perceive Boost.ai’s long-term strategic vision in terms of driving innovation and shaping the industry?

We recently announced that we will be leveraging Generative AI, and specifically Large Language Models (LLMs), within our existing platform, which indicates the long-term direction of our strategy and is a differentiator for us against other industry players. We are wrapping LLMs into our platform, but in a way that doesn’t compromise the reliability of our virtual agents. While the industry is keen to bring the latest developments in AI directly to the enterprise, we need to do so responsibly, maintaining the high level of accuracy that businesses and customers alike demand. Finally, we have a reputation at boost.ai for scaling up customer service and internal support seamlessly, and LLMs will allow us to supercharge our ability to do this.


As a thought leader in the conversational AI space, how does Boost.ai actively contribute to the industry’s discourse and shape the future direction of AI-powered customer experiences?

Speaking directly with customers and businesses is the best way to hear and understand their challenges. That’s why we attend numerous conferences and summits, such as the recent REWORK Conversational AI Summit and the Digital Transformation in Insurance Conference. We always make sure to keep abreast of the latest developments and conversations in AI and the broader tech industry and produce content like videos, e-books and newsletters to keep our customers informed. We want to showcase the potential of AI to improve both the customer experience and wider business objectives, and we think the way to do that is not just through the demonstrable capabilities of our platform but through transparency and sharing our expertise and knowledge. 

With emerging technologies like natural language processing, machine learning, and neural networks, what novel approaches is Boost.ai exploring to advance the capabilities and performance of its conversational AI solutions?

We have been working on significant enhancements to our platform, particularly with the integration of Large Language Model (LLM) technology. This involves creating what we’re calling a Hybrid NLU system, which combines the predictive abilities of LLMs with the enterprise-grade control of our conversational AI platform. This hybrid system offers unmatched accuracy, flexibility, and cost-effectiveness.

Our latest update focuses on key customer experience improvements such as content suggestion, content rewriting, and accelerated generation of training data. We’re leveraging Generative AI to propose messaging content to AI trainers within our platform, which generates suggested responses and drastically reduces the implementation time for new intents.
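
To make that idea concrete, here is a minimal sketch of how LLM-assisted training suggestions could work in principle. The function names and prompt are hypothetical, not taken from boost.ai’s platform, and a human AI trainer is assumed to review every candidate before anything goes live.

```python
# Illustrative sketch only: using an LLM to draft candidate training phrases for a
# new intent, which a human AI trainer then reviews. `llm_complete` is a placeholder
# for whatever completion API is in use; it is not a boost.ai function.

def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM completion call; swap in a real API client."""
    raise NotImplementedError

def suggest_training_phrases(intent_name: str, description: str, n: int = 10) -> list[str]:
    prompt = (
        f"Write {n} different ways a customer might express the intent "
        f"'{intent_name}' ({description}). Put one phrasing per line."
    )
    raw = llm_complete(prompt)
    # Return cleaned candidates; nothing is used until a trainer approves or edits them.
    return [line.strip("-* ").strip() for line in raw.splitlines() if line.strip()]
```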

Moreover, our Hybrid NLU approach allows enterprises to benefit from the combination of our market-leading intent management, context handling, and dialogue management solutions with powerful LLM-enriched tools. Our existing intent engine is highly trained with guardrails in place to guide the LLM, thereby improving overall accuracy and minimising false positives.
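
As a rough illustration of that guardrail idea, the sketch below routes each message through an intent classifier first and only lets generative output touch pre-approved content when the match is confident. All names, thresholds, and the trivial keyword matcher are simplified assumptions for illustration, not boost.ai’s actual Hybrid NLU implementation.

```python
# Simplified sketch of a hybrid routing pattern: a trained intent engine with a
# confidence threshold acts as the guardrail, and generative output is limited to
# pre-approved content. All names and values here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class IntentMatch:
    name: str
    confidence: float

CURATED_REPLIES = {
    "reset_password": "You can reset your password from the login page via 'Forgot password?'.",
}
CONFIDENCE_THRESHOLD = 0.8  # below this, the intent match is not trusted

def classify_intent(message: str) -> IntentMatch:
    # Stand-in for a real intent engine: a trivial keyword match with a fixed score.
    if "password" in message.lower():
        return IntentMatch("reset_password", 0.93)
    return IntentMatch("unknown", 0.2)

def rephrase_with_llm(curated_answer: str, message: str) -> str:
    # In a real system an LLM could rephrase the approved answer to fit the
    # conversation; it is never asked to invent new facts. Here it passes through.
    return curated_answer

def respond(message: str) -> str:
    match = classify_intent(message)
    if match.confidence >= CONFIDENCE_THRESHOLD and match.name in CURATED_REPLIES:
        # High-confidence intent: answer only from curated, pre-approved content.
        return rephrase_with_llm(CURATED_REPLIES[match.name], message)
    # Low confidence: clarify or hand over to a human rather than free-generate.
    return "I'm not sure I understood that - let me connect you with a colleague."

print(respond("I forgot my password"))
```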

The end result of our efforts is virtual agents that can confidently provide precise answers to inquiries and a more streamlined development path that significantly enhances how our customers can build scalable customer experiences for both chat and voice.

Boost.ai’s success is built on delivering exceptional customer experiences. How does Boost.ai foster a culture of customer-centric innovation within the organization to continuously enhance its conversational AI offerings?

We put the customer at the heart of everything we do. By remaining customer-centric in our way of thinking, we are better placed to identify areas of improvement for our solution, which we then work hard to address. 

Our solution has a number of in-built tools that allow both our customers and their end-users to provide feedback in order to continually enhance the virtual agent experience.

Our team actively engages with our customers to understand their needs, challenges, and goals. These insights guide our product development process, ensuring that our conversational AI solutions not only meet but exceed their expectations.

We continuously invest in research and development to stay ahead of the curve and are always looking for ways to leverage emerging technologies – whether that’s Generative AI, Voice technology, or something else – to enhance the capabilities and performance of our conversational AI solutions.

As ethical considerations surrounding AI become increasingly important, what principles and practices does Boost.ai follow to ensure responsible and ethical use of conversational AI technology?

When it comes to using AI in the enterprise, there is no room for error. We have seen criticism of the raft of generative AI offerings rolled out in recent months, particularly for ‘hallucinations’, i.e. providing inaccurate responses. We have a responsibility to our clients, and they have a responsibility to their own customers, to prioritise accuracy above all else. This is our primary consideration when making changes to our platform. We believe that virtual and human agents can and should work together: keeping a human in the loop lets each accentuate the other’s best qualities, alleviating stress on human workers and streamlining processes. Ethical use of AI means not biting off more than you can chew and taking considered steps in adopting new AI technologies.

Collaboration with industry partners and stakeholders often leads to breakthrough innovations. How does Boost.ai actively seek partnerships and collaborations to build a robust ecosystem and drive collective progress in the conversational AI domain?

Conversational AI doesn’t exist in a vacuum; to be effective, an internal AI culture needs to be built up so that employees can see virtual agents as the allies they are. Our boost.ai AI trainers programme ensures stakeholder engagement within the firms deploying our platform and encourages a deeper understanding of how our virtual agents work. Furthermore, we have partnered with firms like Clarasys to bring our platform to as many customers as possible. Conversational AI can seamlessly slide into existing processes and improve them greatly. Collective progress in the conversational AI industry will come from more businesses realising the technology’s potential and seeking to invest in the industry. 

Boost.ai has achieved considerable success in the conversational AI market. How does the company approach international expansion and adapt its solutions to cater to diverse global markets and cultural nuances?

We have a blueprint for success from our beginnings in Norway. We work with 9 of the 10 biggest banks in the Nordic region and have worked with local governments and leading telcos and retailers. One thing that we have learned is that there’s no such thing as one size fits all. Each virtual agent will be undertaking different tasks within different contexts, and that’s why working with our clients to develop customised virtual agents, tailored to their needs, is so important. One of the great things about conversational AI is that it is adaptable and universal. Customer service exists in similar channels worldwide; everyone can benefit from automated customer service.

As mentioned, Large Language Models are a technology that is set to revolutionise how enterprises can scale customer service and support going forward. The critical challenge that needs to be addressed is using LLMs to answer end-users directly without needing a human in the loop to approve responses. We believe that the key to achieving this in the future will be to connect the LLM answer with a trustworthy source and figure out a way to verify it with an acceptable level of accuracy. This is the most crucial step in utilising LLMs in customer-facing applications. Once this part of the equation is figured out, I can see a future where our conversational AI platform fully integrates free-talking language models with its other components to ensure they remain structured and verifiable.
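
A bare-bones way to picture that verification step is sketched below: an LLM-drafted answer is only sent straight to the end user if it can be grounded in approved source content, otherwise it is escalated for human approval. The word-overlap score is purely a stand-in for a real verification model, and the names and threshold are assumptions rather than a description of boost.ai’s internal design.

```python
# Toy sketch of grounding an LLM-drafted answer in a trusted source before it is
# sent to an end user. The word-overlap score is a crude stand-in for a real
# verification model; thresholds and names are assumptions for illustration.

def grounding_score(answer: str, source_passages: list[str]) -> float:
    """Fraction of the answer's words that also appear in the cited sources."""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(source_passages).lower().split())
    return len(answer_words & source_words) / max(len(answer_words), 1)

def deliver(answer: str, sources: list[str], threshold: float = 0.9) -> dict:
    if grounding_score(answer, sources) >= threshold:
        # Verifiable against trusted content: safe to answer the user directly.
        return {"action": "send_to_user", "answer": answer}
    # Cannot verify with acceptable confidence: keep a human in the loop.
    return {"action": "escalate_to_human", "draft": answer}
```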

Given the highly specialized nature of conversational AI, how does Boost.ai attract top talent and foster a culture of continuous learning and skill development within the organization?

We believe in investing in our team’s growth, as we understand they are our greatest asset. With the recent explosion in the AI space, we attract top talent by positioning ourselves at the forefront of this AI revolution, offering unique opportunities to work on groundbreaking projects like Hybrid NLU, Voice and other core technologies.

Norway’s work culture, known for its emphasis on work-life balance, cooperation, and flat organisational structures, is a significant part of our ethos at boost.ai. We believe that these elements contribute to a healthy, productive, and innovative work environment, and we take these values forward to our global offices in other parts of Europe and the U.S.

In the context of Boost.ai’s thought leadership and market impact, how does the company measure success beyond traditional metrics like revenue or market share?

We are extremely proud of our low customer churn rate. 40% of our clients have come to boost.ai from a competing solution, and less than 1% have left us. This is a strong indication that the more than 500 organisations that currently use our conversational AI platform have found the solution that meets their needs. This is similarly reflected in the incredible results that our clients are seeing, which include consistent conversation resolution rates of over 90% and some of the leading brands in the Nordics and beyond automating more than 20% of total customer service traffic through their virtual agents.