Source: Gartner reports & analysis • Business Analyst Agent
From Feb 3 call with Jake Whitten (Gartner Account Manager) — what each stakeholder needs from Gartner engagement
Where AI adoption in enterprise security is actually succeeding — specific use cases, companies, concrete details
Palo Alto Networks' Cortex XSIAM uses graded autonomy for SOC playbooks — teams start with AI-generated recommendations, graduate to supervised remediation, then full auto-remediation once precision thresholds are met. This "earned autonomy" model is the leading example of AI succeeding in enterprise security.
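For illustration, the graded-autonomy progression can be modeled as a small policy gate. This is a hypothetical sketch, not Cortex XSIAM's actual mechanism — the level names, promotion threshold, and demotion floor are all assumptions:

```python
from enum import Enum

class AutonomyLevel(Enum):
    RECOMMEND = 1    # AI suggests, human executes
    SUPERVISED = 2   # AI executes, human approves each action
    AUTONOMOUS = 3   # AI executes without approval

def next_level(current: AutonomyLevel, precision: float,
               promote_at: float = 0.95, demote_at: float = 0.80) -> AutonomyLevel:
    """Promote one level when measured precision clears the threshold;
    demote one level when it falls below the floor (illustrative values)."""
    if precision >= promote_at and current is not AutonomyLevel.AUTONOMOUS:
        return AutonomyLevel(current.value + 1)
    if precision < demote_at and current is not AutonomyLevel.RECOMMEND:
        return AutonomyLevel(current.value - 1)
    return current
```

The demotion path matters as much as promotion: autonomy stays "earned" only while measured precision holds.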
↗ 5 Agentic AI Capabilities (G00844741), p. 4
74% of CFOs report productivity gains from AI in the form of time saved — but only 11% have seen actual financial value from AI in 2025. The wins are real but narrow: search, content generation and summarization are ready; autonomous decision-making is still nascent (only 15% of IT leaders piloting autonomous agents).
↗ 1Q26 Business Quarterly — "Walking the Golden Path to AI Value"
Where AI adoption is failing and WHY — detailed failure modes, struggles
75% of GenAI vendors cite customer awareness as the #1 adoption blocker. Enterprises have unrealistic expectations, lack quality data, and face unpredictable costs — leading to long sales cycles, low POC-to-production conversion, and "successful" pilots that stall. 62% of organizations say neither AI tech nor their people are ready.
↗ Emerging Tech Roadblocks (G00809520), CBR of 52 vendors, 181 use cases
74% of CIOs are losing money or breaking even on AI investments. GenAI error rates range from 3% to 25%. For every 100 days implementing AI, expect 25 more for training and 100–200 for change management. Enterprises lack single decision-making authority — adoption becomes "everyone's problem and no one's problem."
↗ 1Q26 Business Quarterly — "Where's the AI Value?"
IAM, XDR, MDR, incident response, security engineering — AI applications across the full security stack
Agentic AI is rated a top-5 emerging risk by senior risk leaders (4Q25 survey). Beyond SOC, the shift is toward "systems of action" where AI handles multi-step execution across security domains — supervised remediation in IAM, automated triage in XDR, and continuous compliance monitoring. The key: autonomy must be governable and graduated.
↗ 5 Agentic AI Capabilities (G00844741) ↗ 1Q26 BQ — Emerging Risks Survey (agentic AI #4 risk)
⚠️ Deeper IAM/XDR/MDR-specific research not in current library — request via analyst inquiry
Analysts tracking Claude co-work, AI swarms, distributed AI, multi-agent systems — the cutting edge of AI capabilities
Agentic AI is replacing command-and-control with "systems of action" — humans and autonomous agents jointly plan, execute, and verify work. By 2028, services-led adoption will be a competitive disadvantage. Key examples: Palantir's AIP Evals (drift detection, real-time performance monitoring), Ema's Universal AI Employee (real-time decision checkpoints), and Celonis (process-intelligence dashboards for agent optimization).
↗ 5 Agentic AI Capabilities (G00844741)
By 2030, Gartner expects 0% of IT work to be done by humans without AI — 75% human-augmented, 25% AI-alone. Only 15% of IT leaders currently focus on autonomous multi-agent systems. The shift from conversational AI to autonomous decision-making agents is the key capability change to track.
↗ 1Q26 Business Quarterly — AI agents, workforce impact
AI governance & enterprise deployment solutions — frameworks, tools, and best practices
Governance must be built into the product, not bolted on as services. Winning vendors productize autonomy levels with configurable controls — what agents may do, under what conditions, at what confidence thresholds. Palo Alto lets teams "earn" autonomy; Palantir embeds health checks, value verification, and risk management as automated product capabilities.
↗ 5 Agentic AI Capabilities (G00844741) — "Productize Levels of Autonomy"
AI sovereignty is critical: by 2027, 35% of countries will be locked into region-specific AI platforms. CIOs must protect the model, the data, and the results. Techniques: digital tokenization, model distillation, and avoiding vendor lock-in across "digital nation-state" providers.
↗ 1Q26 BQ — "Navigate the Vendor Landscape" / AI sovereignty
Wants Kindo mentioned in AI security analyst reports — MQ and Hype Cycle placement strategy
AI is currently in the Trough of Disillusionment per Gartner's Chief of Research. This is where "the hard work begins and heroes are made." Vendors who demonstrate concrete ROI now — not just productivity gains — will emerge as leaders when the Hype Cycle climbs toward the Plateau. 67% of orgs have deployed AI; 54% have deployed GenAI — the market is active but disillusioned.
↗ 1Q26 BQ — "Where's the AI Value?" by Chris Howard
⚠️ AI security-specific MQ/Hype Cycle reports not in current library — request from Jake Whitten for Kindo inclusion
CISOs who want to figure out AI adoption faster — community access and engagement opportunities
Gartner highlights an "emerging CISOs" cohort actively seeking AI adoption strategies. The 1Q26 BQ promotes Gartner's 2026 global conference series as the primary venue for executive peer networking. These conferences explicitly target connecting "emerging solution providers with cutting-edge buyers."
↗ 1Q26 BQ — 2026 Gartner Conferences promotion
⚠️ Request CISO-specific peer community details and event calendar from Jake Whitten
Conferences where emerging solution providers meet cutting-edge buyers — event strategy
The 1Q26 BQ features a full-page ad for the 2026 Gartner Conference Calendar — positioned as the venue to "exchange actionable strategies," "gain fresh insights from Gartner experts," and "build valuable connections that drive growth and innovation." Gartner IT Symposium/Xpo is the flagship event referenced throughout the research.
↗ 1Q26 BQ — Conference calendar page
⚠️ Request specific security-focused event dates and emerging vendor showcase opportunities from Jake Whitten
Are enterprises expecting AI to match human workflows? Is that mismatch why adoption fails?
Yes — the mismatch is the primary failure mode. Traditional change management assumes tools are static and users adapt. Agentic AI breaks this: agents replan at runtime, requiring entirely new behavioral norms. Gartner says enterprises that don't redesign workflows will see "successful" pilots stall in production under low trust and executive skepticism about ROI.
↗ 5 Agentic AI Capabilities (G00844741) — "Issue Context"
25% of vendors in Gartner's CBR report change management as a top blocker. Stakeholders at different levels have conflicting visions. Enterprises lack a single decision-making authority for AI, so adoption becomes "everyone's problem and no one's problem." Only 23% of CIOs say managers are ready to help employees navigate tech changes.
↗ Emerging Tech Roadblocks (G00809520) — Critical Insight 3 ↗ 1Q26 BQ — "Capture and Sustain AI Value" / change management costs
How to gauge a prospect's AI education level before sales conversations — assessment tool for pre-engagement
Gartner's "You Are Here" GPS framework assesses AI readiness on two axes: AI technology readiness vs. human readiness. Only 11% of orgs score high on both. Use this as the basis for a pre-sales questionnaire — prospects who score low on both axes need education first, not a product demo.
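A minimal sketch of how the two-axis assessment could drive a pre-sales questionnaire. The normalized 0–1 scores, the 0.7 cut-off, and the tier labels are assumptions for illustration — they are not Gartner's published scoring:

```python
def readiness_tier(tech_score: float, human_score: float, high: float = 0.7) -> str:
    """Map a prospect's position on the two GPS axes to a sales motion.

    tech_score / human_score are normalized 0-1 questionnaire scores;
    the 0.7 threshold and tier names are illustrative assumptions.
    """
    tech_ready = tech_score >= high
    human_ready = human_score >= high
    if tech_ready and human_ready:
        return "demo"                      # the ~11% ready on both axes
    if tech_ready:
        return "change-management-first"   # tech in place, people not ready
    if human_ready:
        return "data/infrastructure-first" # people ready, tech lagging
    return "education-first"               # low on both: educate before selling
```

Segmenting prospects this way before the first call is one concrete way to act on the 75%-of-vendors awareness-gap finding.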
↗ Walking the Golden Path (G00841075) — GPS framework
Gartner's CBR found prospects fall into distinct readiness tiers: some are "nonimmediate buyers" still in education mode, while others are ready but lack data infrastructure. 75% of vendors struggle with customer awareness gaps — a questionnaire that segments these tiers before the first call would cut wasted sales cycles.
↗ Emerging Tech Roadblocks (G00809520) — "Near-Term Implications"
Concrete data on adoption failure modes — to design Kindo's product to overcome them
Top failure modes from 181 use cases: (1) High compute costs + unpredictable pricing deter adoption, (2) lack of quality data means POCs don't convert to production, (3) no single decision-making authority creates organizational paralysis, (4) stakeholders at different levels have conflicting visions of AI's role.
↗ Emerging Tech Roadblocks (G00809520) — CBR of 52 vendors
Product design implication: Gartner says products that don't embed change management into the UX will see "weak renewals and rising operational risk." Winners build graduated autonomy (start small, prove value, expand) and low cognitive load (meet users where they work, reduce decisions). Time saved ≠ money saved — the product must help convert productivity into financial value.
↗ 5 Agentic AI Capabilities (G00844741) — 5 product principles ↗ 1Q26 BQ — "time saved is not money saved"

| Dimension | Vector Database | Markdown Database | Hybrid (Recommended) |
|---|---|---|---|
| Semantic Search | ★★★ Excellent | ★☆☆ Keyword only | ★★★ Best of both |
| Structure Preservation | ★☆☆ Lost in chunking | ★★★ Fully preserved | ★★★ Markdown source |
| Human Readability | ★☆☆ Opaque vectors | ★★★ Directly readable | ★★★ Browse markdown |
| LLM/RAG Compatibility | ★★★ Native | ★★☆ Needs chunking layer | ★★★ Vectors → markdown chunks |
| Operational Simplicity | ★★☆ Extra infra | ★★★ Files or simple DB | ★★☆ Two systems |
| Table/Chart Fidelity | ★☆☆ Flattened | ★★★ Markdown tables | ★★★ Markdown source |
| Versioning & Diffs | ★☆☆ Not practical | ★★★ Git-native | ★★★ Git on markdown |
| Cost at Scale | ★★☆ Embedding + hosting | ★★★ Minimal | ★★☆ Combined |
Convert Gartner PDFs to structured markdown (preserving headers, tables, lists). Store as canonical files. Then embed the same markdown chunks into pgvector (Supabase-native) for semantic retrieval. When the LLM retrieves a chunk, it pulls structured markdown — preserving the structure Gartner reports depend on. If forced to pick one: markdown first. You can always add vector search later; you can't reconstruct structure from flattened embeddings.
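The markdown-first half of that pipeline can be sketched as a header-aware chunker. This is a minimal illustration, not a production parser: the heading-path metadata scheme is an assumption, and the embedding/pgvector step is left as a comment because it needs a live database and an embedding model:

```python
import re

def chunk_markdown(doc: str) -> list[dict]:
    """Split a markdown document into header-scoped chunks that keep
    their heading path as metadata, so retrieved chunks stay structured."""
    chunks, path, buf = [], [], []

    def flush():
        text = "\n".join(buf).strip()
        if text:
            chunks.append({"section": " > ".join(path) or "(preamble)",
                           "markdown": text})
        buf.clear()

    for line in doc.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:                        # an ATX heading starts a new chunk
            flush()
            depth = len(m.group(1))
            path[:] = path[:depth - 1] + [m.group(2).strip()]
        else:
            buf.append(line)
    flush()
    return chunks

# Each chunk's "markdown" field is both what gets embedded (e.g. into a
# pgvector column in Supabase) and what the LLM receives at retrieval
# time — so tables and lists survive intact.
```

Because the markdown files remain canonical, the vector index can be rebuilt from them at any time; the reverse direction (markdown from embeddings) is not possible, which is the core of the "markdown first" argument above.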
Editor: Alec • Budget: <$500 each • Target: Thought leadership for analyst briefings & RSA
Status: Draft v3 in review | Editor: Alec | Budget: $450
Thesis: Enterprises need configurable autonomy levels — not binary on/off AI. Maps to Gartner's "productize levels of autonomy" guidance.

Status: Research complete, writing in progress | Editor: Alec | Budget: $475
Thesis: AI-driven security without human oversight creates new attack surfaces. Supervised remediation is the bridge between automation and accountability.

Status: Outline + first draft | Editor: Alec | Budget: $425
Thesis: 25% of vendors cite change management as the #1 blocker. Products that embed adoption into the UX — not bolt it on as services — win.
Timeline: Feb–March analyst radar → RSA Conference preparation
Security & Risk Management:
Emerging Technology:
⚠️ Request specific analyst names from Jake Whitten — Gartner account managers facilitate introductions
📋 Content & Materials
🤝 Meetings & Logistics
📊 Key Milestones