aiTechTrend Interview with Tony Lee, Chief Technology Officer at Hyperscience

Can you please provide an overview of your role as the CTO at Hyperscience and the key responsibilities that come with it?

I joined Hyperscience in 2021 as the company’s first Chief Technology Officer, leading the product, machine learning, and engineering teams. I oversee product development and delivery to ensure our enterprise artificial intelligence solutions support the needs of our customers. This includes exploring new and emerging technologies, managing the security of our platform, and ensuring seamless implementations.

Hyperscience is described as a “Machine Learning first platform.” Can you explain what this means and how it differentiates your company from others?

Machine learning is in our company’s DNA. Hyperscience was founded in 2014 by three ML engineers who wanted to apply ML and AI in the enterprise to solve problems like clerical errors and the time lost to manual work. Throughout the last nine years, our systems have not strayed from being an ML-first platform: we’ve continued to design with ML and deep learning models inherently baked into the process, leaning on our team’s extensive experience building applications in the enterprise AI environment. We’re very knowledgeable about building systems that work, which involves human supervision, model quality assurance, and frequent testing to work out any issues along the way, separating us from others in the industry just entering the ML space.
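The human-supervision pattern described above is often implemented as confidence-based routing: predictions the model is unsure about are queued for human review instead of being auto-accepted. A minimal sketch of that idea follows; the names and threshold are illustrative assumptions, not Hyperscience’s actual API.

```python
# Hypothetical sketch of human-in-the-loop routing: extractions below a
# confidence threshold go to a human-review queue rather than being
# auto-accepted.

from dataclasses import dataclass

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # model confidence in [0, 1]

def route(extractions, threshold=0.9):
    """Split extractions into auto-accepted results and a human-review queue."""
    accepted, review = [], []
    for ex in extractions:
        (accepted if ex.confidence >= threshold else review).append(ex)
    return accepted, review

accepted, review = route([
    Extraction("name", "Jane Doe", 0.98),
    Extraction("amount", "$1,250", 0.62),  # low confidence -> human check
])
```

In practice the threshold is tuned per field and per document type, trading automation rate against review workload.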

I’ve always been proud of Hyperscience’s flexibility in helping customers work how they want to, from on-premises to cloud and SaaS deployment options. We continue to develop our platform through incremental change to stay up-to-date with the rapidly changing tech landscape.

We’re also used by companies doing great work in the community, which is incredibly humbling. Last year, Hyperscience announced a partnership with the International Rescue Committee (IRC) to apply ML to the IRC’s data collection in health clinics treating malnourished children. Malnourishment affects roughly 50 million children worldwide at any time, and it’s an honor to be helping support the IRC’s work to improve patient outcomes.

Hyperscience has begun working with several new large government and financial companies during my tenure. It is rewarding to help these massive companies and agencies secure their data and reduce infrastructure footprints, especially since they can pass these benefits to consumers. One example that comes to mind is a financial institution that implemented our platform to reduce the time to process funeral claims by 80 percent. Their company mission is to serve customers in times of distress, and I’m grateful we were able to play a small role in that mission.

On the science side, our team has created bespoke models that consistently earn award recognition for handwriting analysis. As the AI/ML landscape changes in 2024, we’ll help our customers find even more insights and savings through data analysis, which keeps me motivated daily.

Hyperscience supports various deployment options, including on-premises, private cloud, and public cloud. How do you ensure that your infrastructure design caters to each deployment type’s specific needs and security concerns?

Hyperscience’s platform works with many integrations, including OpenAI’s ChatGPT and Salesforce, allowing us to cater the technology to each deployment.

From a security standpoint, we rigorously test our platform to ensure it meets the most stringent requirements. For customers that leverage our platform on-premises, the technology is folded into their internal security policies, so we do not manage those security protocols. However, Hyperscience manages and operates the system for customers using a SaaS model. We use a partition deployment model to ensure individual customer data is separated from others, and we are constantly reviewing our security protocols to ensure they’re up-to-date, as evidenced by our recent SOC 2 certification.
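The partition deployment model mentioned above can be pictured as a per-tenant namespace where every read is scoped to exactly one customer. The sketch below illustrates that idea only; it is an assumption for explanatory purposes, not actual Hyperscience internals.

```python
# Illustrative per-tenant partitioned store: each customer's records live in
# a separate namespace, and lookups never cross partition boundaries.

class PartitionedStore:
    def __init__(self):
        self._partitions = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        self._partitions.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A missing tenant or key raises KeyError rather than falling
        # through to another customer's data.
        return self._partitions[tenant_id][key]

store = PartitionedStore()
store.put("customer-a", "doc-1", "claim form")
store.put("customer-b", "doc-1", "tax return")
assert store.get("customer-a", "doc-1") == "claim form"  # no cross-tenant leakage
```

Real multi-tenant systems enforce the same boundary at deeper layers (separate databases, encryption keys, or network segments), but the invariant is the same: no query path spans two tenants.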

Hyperscience is known for its ability to automate many document processes. Can you provide specific examples of industries or use cases where the platform has demonstrated flexibility and effectiveness in transforming unstructured data into actionable insights?

Many industries benefit from turning unstructured data into actionable insights through automation, but two that come to mind are the public sector and insurance claims.

Government spending is often scrutinized, and many agencies have historically wasted countless hours on manual processes. The volume of citizen and bureaucratic requests is incredibly high, and relying on pen and paper for tasks like processing tax returns and passport renewals makes it nearly impossible to keep up.

For example, one agency faced a backlog of hundreds of thousands of claims in various handwritten formats and needed to shorten processing time to serve its citizens. In the first three months after using Hyperscience’s platform to automate claims processing, they processed 115,000 claims and are now saving $45 million annually. These are real taxpayer dollars, which underscores why automation in the public sector is so important.

Meanwhile, the insurance sector handles a high volume of data related to consumer financial and medical information, which is especially sensitive and requires extra care. Inaccurate data extraction from these forms is a huge concern in the industry, requiring automation that can guarantee a high degree of accuracy. Our platform has proven high accuracy even when automating complex data sources, allowing insurance companies to speed up processes and improve customer turnaround times.

How does your team ensure the platform remains adaptable to evolving document processing needs and stays ahead in accommodating new data sources and document types?

AI and ML constantly change as open-source models become more prevalent in business. At Hyperscience, we strongly believe in understanding the ethical ramifications of these evolving technologies before deployment to ensure our customers receive a solution that works best for them.

A big priority for my team is remaining up-to-date on how to leverage new data sources. We’re determined to share this knowledge with our partners. From technical blogs to demos on ChatGPT integration, we’re constantly exploring new ideas and creating a community of well-informed technologists. In the year ahead, as we look to expand across all human-friendly document types, I’ll continue working closely with my team to understand our customers’ changing needs and the capabilities they’ll require, including doing much more work with unstructured documents (contracts, deeds, email, etc.).

However, potential security and bias concerns inherent to AI/ML are top of mind for us, so we’re moving slowly and carefully to ensure customer data is protected, first and foremost.

Machine learning models are a crucial part of Hyperscience’s offerings. How do you effectively manage and maintain these models to ensure consistent performance and accuracy?

Certain models, such as those for handwriting analysis, are more static than others and realistically do not need to be improved beyond a certain point. However, digital form submissions can vary widely across industries, and even within a single industry, and thus require more attention.

In these instances, we re-train ML models to ensure they work at peak efficiency, including anomaly and bias detection. Organizations leveraging AI and ML models must understand how models drift over time and actively work to keep their technology up-to-date.
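Drift monitoring of this kind is commonly done by comparing a model’s confidence (or error) distribution on recent traffic against a reference window and flagging a retraining candidate when the gap exceeds a threshold. A minimal sketch, with an illustrative statistic and threshold that are assumptions rather than a production recipe:

```python
# Minimal model-drift check: flag drift when mean model confidence on
# recent documents drops noticeably below the reference window.

from statistics import mean

def drift_detected(reference, recent, max_shift=0.05):
    """Return True when mean confidence has dropped by more than max_shift."""
    return (mean(reference) - mean(recent)) > max_shift

reference = [0.96, 0.94, 0.95, 0.97]  # confidences at deployment time
recent = [0.81, 0.85, 0.78, 0.83]     # confidences on new documents

if drift_detected(reference, recent):
    print("drift detected: schedule retraining / review")
```

Production systems typically use richer statistics (e.g. a Kolmogorov-Smirnov test or population stability index) over sliding windows, but the workflow is the same: monitor, compare to a baseline, and trigger retraining when the distribution shifts.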

What role does data management play in Hyperscience’s machine learning-first approach, and how do you address data privacy and security concerns?

Data management is critical to our approach, especially since many different types of data exist. Perhaps none are more critical than training data for AI systems, especially as it can introduce bias into large language models (LLMs), impacting the information they produce. In our last few releases, Hyperscience has applied ML to training data management to address this issue.

Customers that store data on-premises retain control over their data. For SaaS customers, we ensure data is partitioned in different areas and constantly update the platform with the latest security patches. Our priority is giving customers an offering catered to their specific needs, and we’ll support whichever implementation they prefer.

Hyperscience has achieved SOC 2 certification. Can you elaborate on the significance of this certification for your company and how it impacts your approach to security in AI systems?

SOC 2 certification is significant because it highlights that Hyperscience is operating at a certain level of security compliance, which is audited and updated annually. It’s a standard the industry understands that shows we’re on top of the latest security threats.

Regardless of this seal of approval, it’s important that we continue to mature as a company and keep customer data secure. Our responsibility to provide a secure environment is one that we take very seriously, so we take every step to ensure our customers are protected.

With the increasing importance of security in AI systems, how do you ensure that Hyperscience remains proactive in addressing potential vulnerabilities and evolving security threats in AI and machine learning?

Bad actors leveraging AI for prompt engineering are ultimately attempting to gain access to underlying servers, but Hyperscience’s software isn’t open to the public, so the threat of intrusion is minimal. We’re constantly reviewing and updating our security procedures to ensure potential threats from new and emerging technologies are adequately dealt with. The recent SOC 2 compliance is just one way we show how seriously Hyperscience takes cybersecurity and that we’re on top of the latest potential vulnerabilities.
