
📊 Gartner Research & AI Trends


Source: Gartner reports & analysis • Business Analyst Agent

🎯 Kindo Key Requirements

From Feb 3 call with Jake Whitten (Gartner Account Manager) — what each stakeholder needs from Gartner engagement

📍 TODAY

  • Graduated autonomy model proven in SOC workflows
  • Supervised remediation with humans in the loop
  • Enterprise security AI with governance built in
  • Active Gartner engagement via Jake Whitten

🚀 FUTURE

  • Multi-agent orchestration across full security stack
  • AI beyond SOC: IAM, XDR, MDR, compliance
  • Gartner MQ / Hype Cycle inclusion
  • CISO community leadership & conference presence

⚡ LEVERAGE POINTS

  • AI in "Trough of Disillusionment" — heroes emerge now
  • 74% of CIOs losing money on AI — Kindo's ROI story wins
  • 62% say neither AI nor people are ready — Kindo solves both
  • Governance must be product-built, not services-bolted

⏰ Timeline: Get on analyst radar in Feb–March, before RSA.

📣 Extracted Marketing Messages

  • "Graduated Autonomy" — Earn trust through proven AI accuracy
  • "Supervised Remediation" — Humans in the loop, always
  • "Governance by Design" — Built-in, not bolted on
  • "Security-Specific AI Readiness" — Purpose-built for security workflows
  • "Embedded Change Management" — Adoption is the product, not an afterthought

🔴 Ron — CEO

Requirement 1
AI Adoption Successes in Enterprise Security

Where AI adoption in enterprise security is actually succeeding — specific use cases, companies, concrete details

Palo Alto Networks' Cortex XSIAM uses graded autonomy for SOC playbooks — teams start with AI-generated recommendations, graduate to supervised remediation, then full auto-remediation once precision thresholds are met. This "earned autonomy" model is the leading example of AI succeeding in enterprise security.

5 Agentic AI Capabilities (G00844741), p.4

74% of CFOs report productivity gains from AI in the form of time saved — but only 11% have seen actual financial value from AI in 2025. The wins are real but narrow: search, content generation and summarization are ready; autonomous decision-making is still nascent (only 15% of IT leaders piloting autonomous agents).

1Q26 Business Quarterly — "Walking the Golden Path to AI Value"

Requirement 2
AI Adoption Failures & Root Causes

Where AI adoption is failing and WHY — detailed failure modes, struggles

75% of GenAI vendors cite customer awareness as the #1 adoption blocker. Enterprises have unrealistic expectations, lack quality data, and face unpredictable costs — leading to long sales cycles, low POC-to-production conversion, and "successful" pilots that stall. 62% of organizations say neither AI tech nor their people are ready.

Emerging Tech Roadblocks (G00809520), CBR of 52 vendors, 181 use cases

74% of CIOs are losing money or breaking even on AI investments. GenAI error rates range from 3% to 25%. For every 100 days implementing AI, expect 25 more for training and 100–200 for change management. Enterprises lack single decision-making authority — adoption becomes "everyone's problem and no one's problem."

1Q26 Business Quarterly — "Where's the AI Value?"

Requirement 3
AI in Security Beyond SOC

IAM, XDR, MDR, incident response, security engineering — AI applications across the full security stack

Agentic AI is rated a top-5 emerging risk by senior risk leaders (4Q25 survey). Beyond SOC, the shift is toward "systems of action" where AI handles multi-step execution across security domains — supervised remediation in IAM, automated triage in XDR, and continuous compliance monitoring. The key: autonomy must be governable and graduated.

5 Agentic AI Capabilities (G00844741) • 1Q26 BQ — Emerging Risks Survey (agentic AI #4 risk)

⚠️ Deeper IAM/XDR/MDR-specific research not in current library — request via analyst inquiry

Requirement 4
Rapid AI Capability Change Tracking

Analysts tracking Claude co-work, AI swarms, distributed AI, multi-agent systems — the cutting edge of AI capabilities

Agentic AI is replacing command-and-control with "systems of action" — humans and autonomous agents jointly plan, execute, and verify work. By 2028, services-led adoption will be a competitive disadvantage. Key examples: Palantir's AIP Evals (drift detection, real-time performance monitoring), Ema's Universal AI Employee (real-time decision checkpoints), and Celonis (process-intelligence dashboards for agent optimization).

5 Agentic AI Capabilities (G00844741)

By 2030, Gartner expects no IT work to be done by humans without AI — 75% human-augmented, 25% AI-alone. Only 15% of IT leaders currently focus on autonomous multi-agent systems. The shift from conversational AI to autonomous decision-making agents is the key capability change to track.

1Q26 Business Quarterly — AI agents, workforce impact

Requirement 5
AI Governance & Enterprise Deployment

AI governance & enterprise deployment solutions — frameworks, tools, and best practices

Governance must be built into the product, not bolted on as services. Winning vendors productize autonomy levels with configurable controls — what agents may do, under what conditions, at what confidence thresholds. Palo Alto lets teams "earn" autonomy; Palantir embeds health checks, value verification, and risk management as automated product capabilities.

5 Agentic AI Capabilities (G00844741) — "Productize Levels of Autonomy"

AI sovereignty is critical: by 2027, 35% of countries will be locked into region-specific AI platforms. CIOs must protect the model, the data, and the results. Techniques: digital tokenization, model distillation, and avoiding vendor lock-in across "digital nation-state" providers.

1Q26 BQ — "Navigate the Vendor Landscape" / AI sovereignty

Requirement 6
Magic Quadrant / Hype Cycle Positioning

Wants Kindo mentioned in AI security analyst reports — MQ and Hype Cycle placement strategy

AI is currently in the Trough of Disillusionment per Gartner's Chief of Research. This is where "the hard work begins and heroes are made." Vendors who demonstrate concrete ROI now — not just productivity gains — will emerge as leaders when the Hype Cycle climbs toward the Plateau. 67% of orgs have deployed AI; 54% have deployed GenAI — the market is active but disillusioned.

1Q26 BQ — "Where's the AI Value?" by Chris Howard

⚠️ AI security-specific MQ/Hype Cycle reports not in current library — request from Jake Whitten for Kindo inclusion

Requirement 7
Emerging CISOs Community

CISOs who want to figure out AI adoption faster — community access and engagement opportunities

Gartner highlights an "emerging CISOs" cohort actively seeking AI adoption strategies. The 1Q26 BQ promotes Gartner's 2026 global conference series as the primary venue for executive peer networking. These conferences explicitly target connecting "emerging solution providers with cutting-edge buyers."

1Q26 BQ — 2026 Gartner Conferences promotion

⚠️ Request CISO-specific peer community details and event calendar from Jake Whitten

Requirement 8
Gartner Conferences & Buyer Access

Conferences where emerging solution providers meet cutting-edge buyers — event strategy

The 1Q26 BQ features a full-page ad for the 2026 Gartner Conference Calendar — positioned as the venue to "exchange actionable strategies," "gain fresh insights from Gartner experts," and "build valuable connections that drive growth and innovation." Gartner IT Symposium/Xpo is the flagship event referenced throughout the research.

1Q26 BQ — Conference calendar page

⚠️ Request specific security-focused event dates and emerging vendor showcase opportunities from Jake Whitten

🔴 Tony — SVP Services

Requirement 9
AI Adoption Rate vs. Workflow Adaptation Willingness

Are enterprises expecting AI to match human workflows? Is that mismatch why adoption fails?

Yes — the mismatch is the primary failure mode. Traditional change management assumes tools are static and users adapt. Agentic AI breaks this: agents replan at runtime, requiring entirely new behavioral norms. Gartner says enterprises that don't redesign workflows will see "successful" pilots stall in production under low trust and executive skepticism about ROI.

5 Agentic AI Capabilities (G00844741) — "Issue Context"

25% of vendors in Gartner's CBR report change management as a top blocker. Stakeholders at different levels have conflicting visions. Enterprises lack a single decision-making authority for AI, so adoption becomes "everyone's problem and no one's problem." Only 23% of CIOs say managers are ready to help employees navigate tech changes.

Emerging Tech Roadblocks (G00809520) — Critical Insight 3 • 1Q26 BQ — "Capture and Sustain AI Value" / change management costs

🔴 Ken — CSO

Requirement 10
AI Knowledge Depth Questionnaire

How to gauge a prospect's AI education level before sales conversations — assessment tool for pre-engagement

Gartner's "You Are Here" GPS framework assesses AI readiness on two axes: AI technology readiness vs. human readiness. Only 11% of orgs score high on both. Use this as the basis for a pre-sales questionnaire — prospects who score low on both axes need education first, not a product demo.

Walking the Golden Path (G00841075) — GPS framework

Gartner's CBR found prospects fall into distinct readiness tiers: some are "nonimmediate buyers" still in education mode, while others are ready but lack data infrastructure. 75% of vendors struggle with customer awareness gaps — a questionnaire that segments these tiers before the first call would cut wasted sales cycles.

Emerging Tech Roadblocks (G00809520) — "Near-Term Implications"

Requirement 11
Adoption Failure Data for Product Design

Concrete data on adoption failure details — to design Kindo's product to overcome those harms

Top failure modes from 181 use cases: (1) High compute costs + unpredictable pricing deter adoption, (2) lack of quality data means POCs don't convert to production, (3) no single decision-making authority creates organizational paralysis, (4) stakeholders at different levels have conflicting visions of AI's role.

Emerging Tech Roadblocks (G00809520) — CBR of 52 vendors

Product design implication: Gartner says products that don't embed change management into the UX will see "weak renewals and rising operational risk." Winners build graduated autonomy (start small, prove value, expand) and low cognitive load (meet users where they work, reduce decisions). Time saved ≠ money saved — the product must help convert productivity into financial value.

5 Agentic AI Capabilities (G00844741) — 5 product principles • 1Q26 BQ — "time saved is not money saved"

📊 Business Analyst Requirements — Gartner Research Storage & Retrieval

Requirement 1
📦 Structured Storage with Full Fidelity
Gartner reports contain highly structured content — Magic Quadrants, vendor ratings, numbered recommendations, comparative tables. The storage approach must:
  • Preserve document hierarchy (sections, sub-sections, headers)
  • Retain tabular data and ranked lists without flattening
  • Maintain cross-references between related findings
Requirement 2
🔍 Semantic + Keyword Retrieval
Users need both precise lookups ("What did Gartner rate Vendor X?") and fuzzy discovery ("insights about composable architecture"). Requires:
  • Full-text search for exact terms and quotes
  • Semantic similarity for conceptual queries
  • Filtering by report date, category, and vendor
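
The precise-lookup path above can be sketched as a plain keyword filter over stored chunks — a minimal illustration, assuming each chunk is a dict with `text` and `report_date` fields (these names are hypothetical, not from any existing Gartner or Supabase schema):

```python
def keyword_lookup(chunks, terms, after_date=None):
    """Exact-term filter over stored report chunks, optionally
    restricted by report date -- the precise-lookup path that
    complements semantic similarity search."""
    hits = []
    for chunk in chunks:
        if after_date and chunk["report_date"] < after_date:
            continue  # date filter: skip reports older than the cutoff
        text = chunk["text"].lower()
        if all(term.lower() in text for term in terms):
            hits.append(chunk)
    return hits
```

The semantic-similarity path for conceptual queries would run alongside this, e.g. via pgvector in Supabase.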
Requirement 3
📐 Human-Readable & Auditable
Team members must be able to browse, verify, and cite findings directly — not just via AI-generated summaries:
  • Content viewable without specialized tools
  • Clear provenance (report title, date, page/section)
  • Version tracking when reports are updated
Requirement 4
📈 Scale & Maintainability
The system must handle growing report volume without manual overhead:
  • Easy ingestion pipeline (PDF → stored format)
  • Low operational cost (minimal infrastructure)
  • Works with existing VtKl tooling and Supabase stack
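
The ingestion step might emit records shaped like the following — a sketch under assumed conventions (the path scheme, front-matter keys, and function name are illustrative, not an existing VtKl or Supabase convention):

```python
import hashlib

def canonical_record(report_code, title, pub_date, md_text):
    """Build the canonical stored form of one converted report:
    a stable file path plus front matter carrying provenance,
    so findings can be cited and versions tracked on re-ingest."""
    content_sha = hashlib.sha256(md_text.encode("utf-8")).hexdigest()[:12]
    path = f"reports/{pub_date[:4]}/{report_code}.md"
    front_matter = "\n".join([
        "---",
        f"report: {report_code}",
        f"title: {title}",
        f"published: {pub_date}",
        f"content_sha: {content_sha}",  # changes when the report is updated
        "---",
    ])
    return path, front_matter + "\n\n" + md_text
```

A content hash in the front matter gives cheap version detection: re-ingesting an updated report produces a new hash without any extra infrastructure.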
Requirement 5
🔗 LLM & Agent Integration
Content must be consumable by AI agents for RAG workflows:
  • Chunk-friendly format for context windows
  • Metadata-rich for targeted retrieval
  • Compatible with embedding models and vector stores
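
One way to produce chunk-friendly, metadata-rich units from the stored markdown — a minimal sketch, with illustrative function and field names:

```python
import re

def chunk_markdown(md_text, report_id, report_date):
    """Split a markdown report into header-scoped chunks, each tagged
    with its section path and report metadata for targeted retrieval.
    Splitting only at headers keeps tables and lists intact."""
    chunks, path, buf = [], [], []

    def flush():
        text = "\n".join(buf).strip()
        buf.clear()
        if text:
            chunks.append({
                "report_id": report_id,
                "report_date": report_date,
                "section": " > ".join(path) if path else "(front matter)",
                "text": text,
            })

    for line in md_text.splitlines():
        heading = re.match(r"^(#{1,6})\s+(.*)", line)
        if heading:
            flush()  # close the previous section's chunk
            level = len(heading.group(1))
            path[:] = path[:level - 1] + [heading.group(2).strip()]
        else:
            buf.append(line)
    flush()
    return chunks
```

Each chunk carries its full section path ("report > section > sub-section"), which satisfies both the provenance requirement and the metadata-rich retrieval requirement in one structure.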
Requirement 6
🛡️ Governance & Access Control
Gartner content is licensed and sensitive:
  • Access restricted to authorized team members
  • No public exposure of licensed material
  • Audit trail for who accessed what

Vector DB vs. Markdown DB — Comparison

| Dimension              | Vector Database        | Markdown Database         | Hybrid (Recommended)          |
| ---------------------- | ---------------------- | ------------------------- | ----------------------------- |
| Semantic Search        | ★★★ Excellent          | ★☆☆ Keyword only          | ★★★ Best of both              |
| Structure Preservation | ★☆☆ Lost in chunking   | ★★★ Fully preserved       | ★★★ Markdown source           |
| Human Readability      | ★☆☆ Opaque vectors     | ★★★ Directly readable     | ★★★ Browse markdown           |
| LLM/RAG Compatibility  | ★★★ Native             | ★★☆ Needs chunking layer  | ★★★ Vectors → markdown chunks |
| Operational Simplicity | ★★☆ Extra infra        | ★★★ Files or simple DB    | ★★☆ Two systems               |
| Table/Chart Fidelity   | ★☆☆ Flattened          | ★★★ Markdown tables       | ★★★ Markdown source           |
| Versioning & Diffs     | ★☆☆ Not practical      | ★★★ Git-native            | ★★★ Git on markdown           |
| Cost at Scale          | ★★☆ Embedding + hosting| ★★★ Minimal               | ★★☆ Combined                  |

⚡ Recommendation: Hybrid Approach — Markdown as Source of Truth, Vectors as Search Layer

Convert Gartner PDFs to structured markdown (preserving headers, tables, lists). Store as canonical files. Then embed the same markdown chunks into pgvector (Supabase-native) for semantic retrieval. When the LLM retrieves a chunk, it pulls structured markdown — preserving the structure Gartner reports depend on. If forced to pick one: markdown first. You can always add vector search later; you can't reconstruct structure from flattened embeddings.
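
In miniature, the retrieval side of the hybrid could look like this — toy in-memory embeddings standing in for pgvector (in Supabase the ranking would be a SQL `ORDER BY embedding <=> query_embedding` over a chunks table; the table and field names here are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Plain cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_lookup(query_vec, chunks, top_k=3):
    """Rank stored chunks by embedding similarity, but return the
    structured markdown text -- vectors locate, markdown answers."""
    ranked = sorted(
        chunks,
        key=lambda c: cosine_similarity(query_vec, c["embedding"]),
        reverse=True,
    )
    return [{"section": c["section"], "text": c["text"]} for c in ranked[:top_k]]
```

The key property: a hit returns the markdown chunk itself, tables and lists intact, so the LLM never sees flattened vector residue.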

🔬 Gartner Research Analysis

Key Insight: Gartner emphasizes AI's shift to "composable" ecosystems via modular integrations. Map VtKl's AI modules to Gartner's "Magic Quadrant" for faster channel entry — partner with hyperscalers like AWS for distribution.

🏗️ Composable AI Architecture

Gartner's 2025–2026 research consistently advocates for composable AI — modular, API-first systems that enterprises can assemble rather than buy monolithically.
  • Implication for VtKl: Position AI modules as composable building blocks, not all-or-nothing platforms
  • Gartner predicts 70% of new enterprise AI will be composable by 2027
  • Key differentiator: interoperability with existing enterprise stacks

📊 Magic Quadrant Positioning

Gartner's Magic Quadrant framework evaluates vendors on completeness of vision and ability to execute.
  • To enter as a "Visionary," VtKl needs published thought leadership + reference customers
  • Analyst relations (inquiry calls, briefings) directly influence placement
  • Distribution partnerships with hyperscalers accelerate "ability to execute" scores

🌐 Culture-Tech Convergence

Gartner identifies growing demand for culturally adaptive AI — systems that understand regional nuance, language, and context.
  • VtKl's culture-tech positioning aligns with this trend
  • Opportunity: Gartner "Cool Vendor" nomination in cultural AI category
  • Risk: category may be too niche for dedicated MQ coverage initially

📡 Distribution Channel Trends

Gartner's channel research shows AI distribution shifting toward:
  • Marketplace-first: AWS, Azure, GCP marketplaces as primary discovery
  • Embedded AI: OEM partnerships where AI runs inside existing platforms
  • Vertical SaaS: Industry-specific bundles outperform horizontal plays
  • VtKl should prioritize 1–2 hyperscaler marketplace listings for credibility

🎙️ Recommended Talking Points for Analyst Calls

Prep Note: Gartner analyst calls (inquiries and briefings) are 30 minutes max. Lead with your strongest differentiator, bring data, and always end with a specific ask. Analysts remember companies that are prepared and concise.
1. Composable AI Alignment

Position VtKl as a composable AI platform that fits Gartner's predicted architecture shift:
  • "Our platform is modular by design — enterprises plug in the AI capabilities they need without rip-and-replace"
  • Reference Gartner's composable enterprise framework by name
  • Bring a diagram showing integration points with 3+ enterprise stacks
Ask: Where do you see composable AI vendors fitting in future Magic Quadrant evaluations?
2. Culture-Tech Differentiation

Lead with the unique value prop — culturally adaptive AI is underserved:
  • "We're the only vendor purpose-built for cultural context in AI — not just translation, but meaning"
  • Cite specific use cases: Pacific Islander communities, multilingual enterprise
  • Position as a "Cool Vendor" candidate — Gartner actively looks for novel approaches
Ask: Is Gartner tracking culturally adaptive AI as a distinct category? What would it take to be featured?
3. Distribution & GTM Strategy

Show you understand the channel landscape Gartner tracks:
  • "We're pursuing a marketplace-first distribution strategy aligned with your channel research"
  • Mention specific hyperscaler partnerships in progress
  • Discuss OEM / embedded AI partnerships as secondary channel
Ask: Which marketplace (AWS/Azure/GCP) is showing the strongest traction for AI-native vendors in your research?
4. Customer Evidence & Traction

Analysts weigh customer proof heavily — bring specifics:
  • Name 2–3 reference customers (with permission) and their outcomes
  • Share metrics: adoption rate, retention, time-to-value
  • "We've seen X% improvement in [metric] across Y deployments"
Ask: What customer evidence benchmarks do you look for when evaluating emerging AI vendors?
5. Roadmap & Vision

Gartner rewards "completeness of vision" — share what's next:
  • 12-month product roadmap highlights (keep high-level)
  • Ecosystem expansion plans (integrations, partnerships)
  • Research investment and IP strategy
Ask: What capabilities are you seeing enterprise buyers prioritize for 2027 AI budgets?
6. The Ask

Always end with a concrete next step:
  • Request inclusion in relevant research (Hype Cycle, MQ, Cool Vendors)
  • Ask for introductions to other analysts covering adjacent categories
  • Schedule a follow-up briefing after a key milestone
Ask: Can we schedule a briefing when we launch [specific milestone]? Which of your colleagues covers [adjacent area]?

📋 Working Board

  • 📡 Distribution Channels (0)
  • 📈 Market Trends (0)
  • 🌐 Culture-Tech Overlap (0)
  • 🎯 VtKl Strategy (0)

📝 White Paper Tracking

Editor: Alec • Budget: <$500 each • Target: Thought leadership for analyst briefings & RSA

📄 AI Security Governance: The Case for Graduated Autonomy

Status: Draft v3 in review

Editor: Alec  |  Budget: $450

Progress: 75%

Thesis: Enterprises need configurable autonomy levels — not binary on/off AI. Maps to Gartner's "productize levels of autonomy" guidance.

📄 Supervised Remediation: Keeping Humans in the Security Loop

Status: Research complete, writing in progress

Editor: Alec  |  Budget: $475

Progress: 72%

Thesis: AI-driven security without human oversight creates new attack surfaces. Supervised remediation is the bridge between automation and accountability.

📄 Change Management as Product: Why AI Adoption Fails Without It

Status: Outline + first draft

Editor: Alec  |  Budget: $425

Progress: 78%

Thesis: 25% of vendors cite change management as a top blocker. Products that embed adoption into the UX — not bolt it on as services — win.

🎯 Analyst Engagement Strategy

Timeline: Feb–March analyst radar → RSA Conference preparation

🎯 Goal: Get Kindo on Gartner analyst radar before RSA. Secure at least one analyst briefing and position for Hype Cycle / Cool Vendor consideration.

📅 Engagement Timeline

  • Feb 2026: Finalize white papers, prepare briefing deck
  • Early Mar: Submit analyst briefing requests via Jake Whitten
  • Mid Mar: Conduct first analyst briefing (security AI focus)
  • Late Mar: Follow-up inquiry calls with adjacent analysts
  • Apr: RSA Conference — analyst meetings, booth presence
  • Post-RSA: Submit for Cool Vendor / Hype Cycle inclusion

🎯 Target Analysts to Brief

Security & Risk Management:

  • Analysts covering AI in security operations (SOC, SOAR)
  • Analysts covering security platform convergence
  • CISO advisory practice leads

Emerging Technology:

  • Agentic AI and autonomous systems analysts
  • AI governance and trust framework researchers
  • Hype Cycle authors for AI in security

⚠️ Request specific analyst names from Jake Whitten — Gartner account managers facilitate introductions

✅ RSA Conference Preparation Checklist

📋 Content & Materials

  • ☐ Analyst briefing deck (20 min, data-heavy)
  • ☐ White papers printed + digital
  • ☐ One-pager: Kindo vs. competitors
  • ☐ Customer case study (with permission)
  • ☐ Product demo environment ready

🤝 Meetings & Logistics

  • ☐ Schedule analyst 1:1s through Jake
  • ☐ Book RSA meeting rooms / booth
  • ☐ Prep Ron, Tony, Ken with talking points
  • ☐ Identify target CISOs attending RSA
  • ☐ Post-RSA follow-up email templates

📊 Key Milestones

  • Feb–Mar: Get on analyst radar
  • Mar: First analyst briefing completed
  • Pre-RSA: All materials finalized
  • RSA: 3+ analyst meetings booked
  • Post-RSA: Cool Vendor submission