InsightFinder Release Notes & Features Breakdown

InsightFinder’s AI-driven Reliability platform continuously evolves with new features that improve IT system and AI model anomaly detection, proactive incident prediction and prevention, root cause analysis, AI model tracing, and AI agent and LLM evaluation. This page provides an up-to-date log of product enhancements across AI Observability, IT Observability, AI agents, Integrations, and platform performance. See more of our Unified Intelligence Engine platform details here.

November 30, 2025

AI Observability 1.2

Summary

This release delivers major improvements across usability, visualization, and enterprise readiness. Users benefit from personalized timezone-aware views, richer anomaly context, and faster root cause analysis through an enhanced Trace Viewer. New dashboards, model visibility features, and fine-grained access controls further strengthen InsightFinder’s scalability and enterprise-grade capabilities.

User Experience & Settings
  • User Profile – Default Timezone Settings: Users can now configure a default timezone in their profile. All dashboards, charts, and analysis views automatically align to the user’s preferred timezone, improving clarity and consistency across global teams.
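Timezone-aware rendering of this kind typically boils down to storing timestamps in UTC and converting at display time. A minimal sketch in Python, assuming an IANA timezone name stored in the user profile (the field name `user_default_timezone` is illustrative, not InsightFinder's actual schema):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical profile field; InsightFinder's actual storage format may differ.
user_default_timezone = "America/New_York"

def localize(ts_utc: datetime, tz_name: str) -> datetime:
    """Convert a UTC timestamp to the user's preferred timezone for display."""
    return ts_utc.astimezone(ZoneInfo(tz_name))

# An anomaly recorded at 14:00 UTC renders as 09:00 local for this user.
anomaly_ts = datetime(2025, 11, 30, 14, 0, tzinfo=timezone.utc)
local_ts = localize(anomaly_ts, user_default_timezone)
print(local_ts.isoformat())  # 2025-11-30T09:00:00-05:00
```

Keeping storage in UTC and converting only at the presentation layer is what lets every chart and analysis view agree across global teams.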
Monitors
  • Enhanced Monitor Capabilities: Monitors have been improved to provide more reliable signal tracking and smoother integration with downstream analytics and dashboards.
Model Availability
  • Model Availability Overview: A new Model Availability view provides clear visibility into which models are active, inactive, or restricted across the platform.
  • Model Availability Charts: Interactive charts now visualize model availability over time, helping users quickly identify gaps or changes in model coverage.
  • Anomaly Details in Line Charts: Line charts now include detailed anomaly overlays, allowing users to inspect anomaly context directly within time-series visualizations without switching views.
Workbench
  • Improved Workbench Experience: The Workbench has been enhanced for better usability and performance, enabling faster experimentation, analysis, and model interaction.
Trace Viewer
  • New Trace Viewer UI: The Trace Viewer has been redesigned to display both performance and LLM evaluation results in one UI.
  • Flowchart View for Traces: A new flowchart visualization allows users to explore trace execution paths visually, making complex request flows easier to understand.
  • Jump from Slow Trace to Root Cause: Users can now jump directly from a slow trace to the associated root cause analysis, significantly reducing time to diagnosis.
Dashboards and Control
  • Data Insights Dashboard: Introduced a new Data Insights Dashboard that consolidates key signals, trends, and insights into a single, actionable view.
  • Model Filtering: Dashboards now support model-based filtering, enabling users to focus on specific models and reduce visual noise in complex environments.
  • Model Access Control: Fine-grained access control has been added for models, allowing administrators to define which users or roles can view or interact with specific models.
  • Tenant Structure Optimization: Tenant architecture has been optimized to improve scalability, isolation, and performance, ensuring smoother operation in large enterprise deployments.
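Conceptually, per-model access control maps a user's role to the set of models that role may see. A minimal sketch, assuming a simple role-to-models mapping (the role and model names below are hypothetical, not InsightFinder's actual permission model):

```python
# Hypothetical role -> visible-models mapping, configured by an administrator.
MODEL_ACCESS = {
    "analyst": {"fraud-detector-v2"},
    "admin": {"fraud-detector-v2", "credit-scoring-v1"},
}

def can_view(role: str, model: str) -> bool:
    """Return True if the given role is allowed to view the given model."""
    return model in MODEL_ACCESS.get(role, set())

print(can_view("analyst", "credit-scoring-v1"))  # False
print(can_view("admin", "credit-scoring-v1"))    # True
```

Unknown roles fall back to an empty set, so access is denied by default rather than granted.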

AI Observability 1.1

Building on our GA release, this update expands integrations, delivers new out-of-the-box monitors and charts, and enhances LLM Labs, Gateway, and pipelines for faster insights and greater trust in AI models.

1. Model Enhancements

  • New Integrations: Out-of-the-box support for Databricks, Snowflake, Google BigQuery, AWS SageMaker models.
  • Extended Model Coverage: Added Gemini model support.
  • New OOTB Monitors & Workbench:
    • Model Bias Monitor – Perform bias checks on protected features using a variety of metrics such as PSI, KL divergence, and JS divergence.
    • Model Bias Charts – Visualization of model bias by group, feature, and metric. Identify changes and anomalies in data distributions impacting model bias. View local SHAP values for explainability.
    • Model Performance Monitor – Track invocation errors, model and overhead latency, and other KPIs over time.
    • Data Drift Workbench – Visualization of data drift by groups (e.g. merchant) and one or more features. Identify changes and anomalies in data distributions impacting model performance.
    • LLM Trust & Safety Bench – Monitor harmful, unsafe, or policy-violating outputs.
    • Change Events (Feature & Global SHAP) – Visualize how feature shifts impact predictions.
    • SHAP Timeline View – Track how feature attributions evolve over time.
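For readers unfamiliar with the distribution metrics named above, here is a minimal sketch of PSI, KL divergence, and JS divergence over binned distributions (the bin values are illustrative; InsightFinder computes these metrics internally):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.
    `expected` and `actual` are lists of bin proportions that each sum to 1."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def kl_divergence(p, q, eps=1e-6):
    """KL divergence D(p || q) between binned distributions (asymmetric)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded variant of KL."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

baseline = [0.25, 0.50, 0.25]  # e.g. outcome-rate bins for a protected group at training time
current  = [0.35, 0.45, 0.20]  # the same bins observed in production

print(round(psi(baseline, current), 4))  # ≈ 0.05, below the common 0.1 "stable" threshold
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as a significant shift, which is the kind of signal a bias monitor can alert on.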

2. LLM Labs Improvements

  • Prompt Templates: Upload, store, and manage prompt templates for reuse.
  • Execute Templates: Run templates or single chats directly against models.
  • Side-by-Side Model Comparison: Compare two models (single chat or template) with evaluation summaries and declare a winner.
  • Evaluation Result Visibility: View full evaluation history, summaries, and scoring logic.
  • Token Usage Tracking: See total token and input/output token consumption by model or evaluation.
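The per-model token accounting described above amounts to aggregating input and output token counts across requests. A minimal sketch, with hypothetical record fields (not the actual LLM Labs data schema):

```python
from collections import defaultdict

# Illustrative usage records; field names are assumptions for this sketch.
records = [
    {"model": "gpt-4o", "input_tokens": 120, "output_tokens": 340},
    {"model": "gpt-4o", "input_tokens": 80,  "output_tokens": 150},
    {"model": "llama-3", "input_tokens": 200, "output_tokens": 90},
]

def usage_by_model(records):
    """Sum input, output, and total token counts per model."""
    totals = defaultdict(lambda: {"input": 0, "output": 0, "total": 0})
    for r in records:
        t = totals[r["model"]]
        t["input"] += r["input_tokens"]
        t["output"] += r["output_tokens"]
        t["total"] += r["input_tokens"] + r["output_tokens"]
    return dict(totals)

totals = usage_by_model(records)
print(totals["gpt-4o"]["total"])  # 690
```

Grouping by evaluation instead of model is the same aggregation with a different key.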

3. Pipelines & Data Ingestion

  • AWS SageMaker Integration: Ingest SageMaker model metric data into Watchtower pipelines.
  • Pipeline Health View: New list view showing pipeline status and availability.

4. Dashboard & Visualization

  • Dashboard Customization: Add or remove widgets and apply filters (e.g., by model type or version) for a tailored view.
  • ML Dashboard Enhancements: Added owner name field and sorting options.

5. LLM Gateway

  • API Implementation: Expanded endpoints for direct model interaction.
  • Fallback Mechanism: Automatic failover to secondary models for resilience.
  • Token Usage Reporting: Detailed tracking per request, model, and time period.
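A fallback mechanism of this kind tries models in priority order and returns the first successful response. A minimal sketch; the model names, error type, and `invoke_with_fallback` helper are illustrative, not the LLM Gateway's actual API:

```python
class ModelUnavailable(Exception):
    """Raised when a model call fails (timeout, rate limit, outage)."""

def flaky_primary(prompt: str) -> str:
    # Simulates a primary model that is currently failing.
    raise ModelUnavailable("primary model timed out")

def stable_secondary(prompt: str) -> str:
    # Simulates a healthy secondary model.
    return f"secondary answer to: {prompt}"

def invoke_with_fallback(prompt, models):
    """Try each configured model in priority order; return the first success."""
    errors = []
    for name, call in models:
        try:
            return name, call(prompt)
        except ModelUnavailable as exc:
            errors.append((name, str(exc)))
    raise RuntimeError(f"all models failed: {errors}")

model_chain = [("gpt-primary", flaky_primary), ("llama-secondary", stable_secondary)]
used, answer = invoke_with_fallback("hello", model_chain)
print(used)  # llama-secondary
```

Recording which model actually served each request (the `used` value here) is also what makes per-model token usage reporting accurate under failover.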

6. Organization & Access Management

  • Organization Expiration Date: View and manage subscription lifecycle.
  • Token Usage Overview: Organization-level consumption tracking.

7. UI & Navigation

  • Product Quick Start: New onboarding guide tailored to LLM Labs.
  • Left Navigation Redesign: Simplified access and natural grouping of core features.
  • Search Optimization: Faster and more accurate search on the Model page.

AI Observability 1.0

  • Models
    • Model List
    • Add, Edit, Delete of Models
    • Data Source Setup
    • Model Metadata
  • Monitors
    • Monitor List
    • Add, Edit, Delete of Monitors
    • Data Quality Monitor
    • Data Drift Monitor
    • Model Drift Monitor
    • LLM Performance Monitor
    • LLM Trust and Safety Monitor
    • Notifications for Monitors
  • Workbench
    • Model Comparison
    • Data Quality Charts
    • Model Drift Charts
    • LLM Performance Charts
    • Timeline View with root cause summary and resolution recommendation
  • Trace Viewer
    • View traces and spans
    • Deep dive into performance anomalies
  • LLM Labs
    • Prompt testing with various LLMs, including DeepSeek, Hugging Face, Llama, Mistral, OpenAI, Qwen, TinyLlama
  • LLM Evaluations
    • View passed and failed Trust & Safety Evaluations
    • View the results of evaluations from LLM Labs
  • Dashboards
    • ML Insights Dashboard
    • LLM Insights Dashboard
  • Enterprise Readiness
    • Audit Log
    • Add, Edit, Delete of Organizations
    • Users, Roles, Permissions per Organization

Explore InsightFinder AI

Take InsightFinder AI for a no-obligation test drive. We’ll provide you with a detailed report on your outages to uncover what could have been prevented.