
Superwise.ai Addresses Growing Enterprise Need for AI Model Assurance

The News: Superwise.ai announced that it has raised $4.5 million in seed funding for its AI model assurance platform. This Tel Aviv-based startup will use proceeds from this seed round, co-led by Capri Ventures and F2 Capital, to hire talent to work closely with customers, and to open offices in New York and northern California later this year. Read more at VentureBeat.

Analyst Take: Model assurance is the ability to determine whether an AI application’s machine learning (ML) models remain predictively fit for their assigned tasks. This is a critical feature of any operational AI DevOps platform, which is one of the reasons Superwise.ai caught my attention.

Model Quality Assurance Is the Foundation of the Practical Value of AI

One of the key risks of any AI-driven process is not knowing whether we can trust a deployed ML model to do its assigned task accurately and reliably. If a model decays to the point where it cannot do its assigned tasks—such as recognizing faces, understanding human speech, or predicting customer behavior—at a sufficient level of accuracy, it is essentially useless to the enterprise that built and deployed it.
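
To make that decay risk concrete, here is a minimal Python sketch of the kind of fitness check a team might run against a live model. It is not Superwise.ai's method; the accuracy floor and the idea of scoring against delayed ground-truth labels are illustrative assumptions:

```python
from sklearn.metrics import accuracy_score

MIN_ACCEPTABLE_ACCURACY = 0.90  # illustrative floor; set per use case

def check_model_fitness(model, recent_inputs, ground_truth_labels):
    """Score a deployed model on recently labeled production traffic and
    flag it when accuracy falls below the acceptable floor."""
    predictions = model.predict(recent_inputs)
    accuracy = accuracy_score(ground_truth_labels, predictions)
    return accuracy >= MIN_ACCEPTABLE_ACCURACY, accuracy
```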

Model assurance can be a tough thing to guarantee. ML models—typically implemented as artificial neural networks—may be so complex and opaque that it is difficult to see how they actually drive automated inferencing. Just as worrisome, ML-based applications may inadvertently obscure responsibility for any biases and other adverse consequences that their automated decisions produce. In addition, ML models may be evaluated and retrained infrequently, leading to situations where a model that was once fit for a specific purpose has quietly lost its predictive power.
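
One common way to surface this kind of decay is a statistical drift test on the model's input data. The sketch below uses SciPy's two-sample Kolmogorov–Smirnov test; the feature, sample sizes, and significance threshold are illustrative assumptions, not Superwise.ai's approach:

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # illustrative significance threshold

def detect_feature_drift(training_sample: np.ndarray,
                         production_sample: np.ndarray) -> bool:
    """Compare a feature's training-time distribution against its live
    distribution; a tiny p-value signals the data has shifted."""
    statistic, p_value = ks_2samp(training_sample, production_sample)
    return p_value < DRIFT_P_VALUE

# Example: flag drift in a single numeric feature
train_ages = np.random.normal(40, 10, size=5_000)   # distribution at training
live_ages = np.random.normal(48, 10, size=5_000)    # distribution in production
print(detect_feature_drift(train_ages, live_ages))  # True: distribution shifted
```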

Without the means to monitor and correct them, AI model inaccuracies can impact profitability, expose cybersecurity vulnerabilities, or have other negative business impacts. To mitigate these risks, organizations and their stakeholders are starting to demand greater transparency into how well ML models operate in practice over their entire lifecycles.

Superwise.ai Is a Promising Niche Player in AI Model Assurance

Enterprises that have bet the business on AI-powered processes must consider whether to acquire model assurance as an embedded feature of their AI pipeline DevOps platforms—such as AWS SageMaker or Microsoft Azure Machine Learning—or from startup vendors that focus on this exciting niche.

Users’ growing demand for AI model assurance has spawned the nascent market segment in which Superwise.ai operates. Founded in 2019, Superwise.ai provides a real-time platform for monitoring and maintaining the accuracy of deployed AI models. Its AI Assurance offering enables stakeholders to catch model decay and other issues with deployed AI models before they can have a negative business impact.

The Superwise.ai AI Assurance platform flags model inaccuracies that stem from changes in the data feeding AI models. It can also catch inaccuracies associated with changes in the business environments into which the models were deployed. It incorporates a performance management engine that lets data science teams monitor and analyze KPIs on AI models in real time. The solution includes dozens of commonly used out-of-the-box KPIs, including some pertaining to the detection of model bias and others enabling assessment of model explainability.
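
To illustrate the kind of bias KPI such an engine might track, here is a generic demographic-parity check in Python. This is an illustrative metric, not necessarily one of Superwise.ai's out-of-the-box KPIs:

```python
import numpy as np

def demographic_parity_difference(predictions: np.ndarray,
                                  group: np.ndarray) -> float:
    """Difference in positive-prediction rates between two groups;
    values far from 0 suggest the model treats the groups unequally."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # binary model decisions
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-attribute membership
print(demographic_parity_difference(preds, groups))  # 0.5: a large disparity
```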

The Superwise.ai platform uses AI to anticipate and neutralize potential issues with AI models. It provides proactive recommendations for data science teams to take manual action to keep models accurate, unbiased, and otherwise fit for purpose. It can also automatically execute some corrective actions to keep models from drifting into potentially suboptimal territory.
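
A minimal sketch of such an escalation policy follows. The thresholds are arbitrary, and the alerting and retraining hooks are hypothetical placeholders, since Superwise.ai has not published its corrective logic:

```python
DRIFT_ALERT_THRESHOLD = 0.2    # illustrative values; tune per model
DRIFT_RETRAIN_THRESHOLD = 0.4

def send_alert(message: str) -> None:
    # Hypothetical hook: would page the data science team in practice
    print(f"ALERT: {message}")

def trigger_retraining_pipeline() -> None:
    # Hypothetical hook: would launch an automated retraining job
    print("Retraining job submitted")

def apply_corrective_policy(drift_score: float) -> str:
    """Escalate from a human-in-the-loop alert to automatic retraining
    as measured drift grows."""
    if drift_score >= DRIFT_RETRAIN_THRESHOLD:
        trigger_retraining_pipeline()
        return "retraining"
    if drift_score >= DRIFT_ALERT_THRESHOLD:
        send_alert("Model drift detected; review recommended")
        return "alerted"
    return "healthy"

print(apply_corrective_policy(0.3))  # prints the alert, returns "alerted"
```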

Niche Players Such as Superwise.ai Must Contend with Platform-Embedded AI Model Assurance

Of necessity, the Superwise solution integrates with various third-party data science platforms in which ML models are built, trained, and deployed. Its chief technology integrations are with Google AI Platform, AWS SageMaker, Azure Machine Learning, and H2O.ai.

Essentially, Superwise.ai must position its offering as an alternative to the embedded AI model assurance capabilities within its leading partners’ data science platforms. This will be tough to do, considering that all of Superwise.ai’s core technology partners provide model quality assurance as an integrated feature of their respective AI pipeline DevOps platforms. For instance:

  • Google offers Google AI Platform. Introduced in May 2019, this ML-pipeline automation service has such model quality assurance features as continuous evaluation, which lets data scientists compare model predictions with ground truth labels to gain continual feedback and optimize model accuracy.
  • H2O.ai offers Driverless AI. In its August 2019 feature enhancement release, H2O.ai added such model quality assurance features as analyzing whether a model produces disparate adverse outcomes for various demographic groups even if it wasn’t designed with that outcome in mind; automated monitoring of deployed models for predictive decay; benchmarking of alternative models for A/B testing; and alerting of system administrators when models need to be recalibrated, retrained, or otherwise maintained to keep them production-ready.
  • Microsoft offers Azure Machine Learning MLOps. Introduced in October 2019, this ML-pipeline automation service can notify and alert on events in the ML lifecycle, such as experiment completion, model registration, model deployment, and data drift detection. It can monitor ML applications for model-specific metrics, provide monitoring and alerts on the underlying ML infrastructure, and automate the retraining, updating, and redeployment of models based on new data and other operational and business factors.
  • AWS offers Amazon SageMaker Model Monitor. Introduced in December 2019, this service continuously monitors ML models in production in the Amazon SageMaker cloud service, detects deviations such as data drift that can degrade model performance over time, and alerts users to take remedial actions, such as auditing or retraining models. Monitoring jobs can be scheduled to run at a regular cadence and can push summary metrics to Amazon CloudWatch to set alerts and triggers for corrective actions, across the broad range of instance types available in Amazon SageMaker (a minimal scheduling sketch follows this list).
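
For a sense of what these embedded capabilities look like in practice, here is a hedged sketch of scheduling a SageMaker Model Monitor job with the SageMaker Python SDK. The IAM role, S3 bucket, and endpoint names are placeholders, and the endpoint is assumed to already have data capture enabled:

```python
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

# Placeholders: substitute a real IAM role, S3 bucket, and live endpoint
ROLE = "arn:aws:iam::123456789012:role/SageMakerRole"
BUCKET = "s3://my-monitoring-bucket"
ENDPOINT = "my-production-endpoint"  # assumed to have data capture enabled

monitor = DefaultModelMonitor(
    role=ROLE,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Profile the training data to establish baseline statistics and constraints
monitor.suggest_baseline(
    baseline_dataset=f"{BUCKET}/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=f"{BUCKET}/baseline",
)

# Run an hourly job that compares live traffic to the baseline and
# publishes drift metrics to Amazon CloudWatch for alerting
monitor.create_monitoring_schedule(
    monitor_schedule_name="hourly-drift-check",
    endpoint_input=ENDPOINT,
    output_s3_uri=f"{BUCKET}/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)
```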

Clearly, there’s some work ahead. Superwise.ai will also need to differentiate itself from the growing range of AI model governance solution providers, such as Algorithmia, which offer a broad range of permissioning, versioning, and other lifecycle management features in addition to real-time ML model quality monitoring and assurance.

The Takeaway: The Superwise.ai AI Model Quality Assurance Features Address a Growing Enterprise Requirement

Enterprises have their business riding on the accuracy of their AI. Model assurance is what keeps an AI-driven business process from running off the rails.

The Superwise.ai AI Assurance solution has a potentially broad market reach, integrating out of the box with key leading third-party data science pipeline DevOps environments. I would expect that Superwise.ai will use its new seed round to build its global sales, marketing, and customer service forces in order to pitch its offering into these partners’ customer bases.

However, Superwise.ai will almost certainly run into the tough issue of how to show that its platform offers value over and above the model quality assurance capabilities embedded in each of those partners’ platforms. Fortunately for this startup, the Superwise.ai AI Assurance platform also integrates with the open-source Kubeflow platform, as well as with customers’ custom-built data science platforms. To the extent that large users are stitching together multivendor data-science DevOps pipelines on Kubeflow, Superwise.ai can position AI Assurance as a unifying layer for end-to-end model quality assurance across them all.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Other insights from the Futurum Research team:

H2O.ai Secures Series D Funding from NVIDIA, Wells Fargo, and Others to Fuel Sales and Marketing Efforts

BMC Strengthens AIOps Through Compuware Acquisition

Algorithmia Integrates AI Model Governance with GitOps

Exploring the Artificial Intelligence Journey for the Data-Driven Enterprise

Image Credit: VentureBeat

Author Information

James has held analyst and consulting positions at SiliconANGLE/Wikibon, Forrester Research, Current Analysis and the Burton Group. He is an industry veteran, having held marketing and product management positions at IBM, Exostar, and LCC. He is a widely published business technology author, has published several books on enterprise technology, and contributes regularly to InformationWeek, InfoWorld, Datanami, Dataversity, and other publications.
