The New
Language of Data
Three questions. Three investigations.
Zero SQL.
Watch an agentic model receive a question in English, investigate across your data substrate, and deliver a complete answer — with root cause, evidence, and action — in seconds.
Three things happened in the same month. Jensen Huang stood on stage at GTC and declared the "agentic AI inflection point has arrived" — then compared OpenClaw to Linux and said every company needs a strategy for it. Anthropic signed a $200M partnership with Snowflake to deploy Claude-powered agents across 12,600 enterprises. And quietly, engineering teams at companies you've heard of started typing /ask in Slack channels instead of opening Tableau.
That last one matters most. Because it reveals the real interaction pattern of the agentic era: you don't sit in front of a computer and operate a BI tool. You spawn a process from a Slack channel, a Teams thread, or an iMessage. The agent works in the background — querying, cross-referencing, verifying — and sends you a notification when it's done. "Your LATAM conversion analysis is ready. Estimated root cause identified. Delivery: 3 minutes." You read the result on your phone between meetings. You never opened a dashboard. You never wrote a query. You never left the conversation you were already in.
This isn't a product demo. It's the architectural consequence of what Huang, Anthropic, and the open-model ecosystem are building simultaneously. Huang builds the factory floor — Vera Rubin, the $1 trillion compute demand, tokens per watt as the new CEO metric. Anthropic builds the reasoning layer — Claude as enterprise infrastructure, Agent Skills as an open standard, MCP donated to the Linux Foundation. OpenClaw becomes the orchestration OS. And the five-layer stack that emerges from this convergence doesn't have dashboards anywhere in it. What it has instead is a new vocabulary.
From Dashboard Morning to the Living Brief
Monday, 6:47 AM. The VP of Sales hasn't opened her laptop yet. Her phone buzzes with a Slack notification from @data-agent:
"Weekly sales brief ready. Revenue ↑ 8% WoW, driven by APAC enterprise. 3 deals closed early on quarter-end urgency. Pipeline coverage for next month is thin — 3 actions recommended. Full brief attached. Reading time: 90 seconds."
She reads it on her phone while making coffee. She replies /brief expand pipeline-risk — the agent deepens that section and posts an updated version in 40 seconds. She forwards it to her CRO. Total time at a screen: zero. The #LivingBrief was composed overnight by a model fine-tuned on CRM, pipeline, and revenue data. It didn't wait to be asked. It ran on a schedule — like a CI/CD pipeline, but for business intelligence. The economics are simple: an overnight inference job costs a few dollars in tokens. The Monday morning ritual it replaces cost 40 minutes of executive time × 52 weeks × every VP in the company.
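The "CI/CD pipeline for business intelligence" framing invites a back-of-envelope check. Here is a minimal sketch of the economics; every figure (token cost per run, loaded hourly rate, VP headcount) is an illustrative assumption, not a measured number:

```python
# Back-of-envelope: a nightly inference job vs. the executive
# time it replaces. All inputs are illustrative assumptions.

TOKEN_COST_PER_RUN = 3.00    # assumed $ per overnight brief
RUNS_PER_YEAR = 52           # one Monday brief per week

EXEC_MINUTES_SAVED = 40      # per VP, per week (from the scene)
EXEC_HOURLY_RATE = 150.00    # assumed loaded $/hour
NUM_VPS = 8                  # assumed headcount

agent_cost = TOKEN_COST_PER_RUN * RUNS_PER_YEAR
human_cost = EXEC_MINUTES_SAVED * EXEC_HOURLY_RATE * RUNS_PER_YEAR * NUM_VPS / 60

print(f"agent: ${agent_cost:,.0f}/yr   humans: ${human_cost:,.0f}/yr")
# → agent: $156/yr   humans: $41,600/yr
```

Even if the token estimate is off by an order of magnitude, the asymmetry holds: the job is cheap, the ritual it replaces is not.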
From Query Queues to the Intent Cast
A PM types in the #product-analytics Slack channel:
/ask are mobile-first users stickier than desktop?
The agent responds in 2 seconds: "Investigating. Checking retention by platform, cohort, geography, and pricing tier. Sarah (Data) ran a similar analysis 12 days ago — 70% of the underlying data is reusable. Refreshing the last 12 days only. Estimated delivery: 45 seconds."
This is the #IntentCast — but notice what just happened. The agent didn't start from scratch. It checked the history of prior analyses across the org, found a reusable artifact, and told you upfront. 45 seconds later, a threaded reply appears: a narrated finding with an auto-generated chart, posted in a #NarrativePane right inside Slack. The PM asks a follow-up in the thread: "Does this hold for enterprise tier?" The chart morphs. The conversation is the analysis. No one opened a BI tool. No one left Slack.
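The reuse step the agent narrates (find a prior artifact, keep what's fresh, recompute only the stale remainder) can be sketched as a freshness check over an organization-wide analysis history. A toy Python version, assuming exact question matching where a real agent would use semantic search:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Artifact:
    question: str
    produced_at: datetime
    coverage: float   # fraction of the needed data this artifact covers

def plan_investigation(question, history, max_age_days=30, now=None):
    """Decide how much of a prior analysis to reuse and how much to
    recompute. Exact-match on question text stands in for the semantic
    search a real agent would use."""
    now = now or datetime.now()
    candidates = [a for a in history
                  if a.question == question
                  and now - a.produced_at <= timedelta(days=max_age_days)]
    if not candidates:
        return {"reuse": 0.0, "refresh_days": max_age_days}
    best = max(candidates, key=lambda a: a.produced_at)
    return {"reuse": best.coverage,
            "refresh_days": (now - best.produced_at).days}

now = datetime(2025, 6, 16)
history = [Artifact("retention by platform", now - timedelta(days=12), 0.7)]
print(plan_investigation("retention by platform", history, now=now))
# → {'reuse': 0.7, 'refresh_days': 12}
```

That output mirrors the scene: 70% of the underlying data is reusable, so only the last 12 days get refreshed.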
From Threshold Alerts to the Signal Stream
Tuesday, 2:14 PM. Nobody asked a question. But the #ops-alerts Slack channel gets a message from @signal-agent:
"⚡ Anomaly detected: Unusual churn cluster in Brazil — 3.1σ deviation from baseline. Investigating... [working]"
90 seconds later, a follow-up: "Root cause identified: payment gateway latency spike (340ms avg, Tue–Wed). 14,200 checkout sessions abandoned. Revenue at risk: $180K. Historical precedent: similar incident 90 days ago resolved in 5 days (provider-side). Escalation ticket drafted. Approve to send? /approve or /modify"
The Head of Ops reads this on her phone during a meeting. She taps /approve. The ticket goes to the payments team. Total human involvement: one tap. This is the #SignalStream — continuous inference that monitors every metric, every segment, every correlation. The #AwareMetrics don't wait for a human to set a threshold. They detect their own anomalies and deliver the investigation to you, not the other way around.
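The core of an "aware metric" is a self-referential baseline check: the metric flags its own deviation, measured in sigmas, instead of waiting for a hand-set threshold. A minimal sketch, with made-up churn numbers standing in for a real metric history:

```python
from statistics import mean, stdev

def self_detect(history, latest, threshold_sigma=3.0):
    """A metric checking itself against its own baseline: returns the
    z-score if the latest value is anomalous, else None. No hand-set
    threshold — only deviation from the metric's own history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return None
    z = (latest - mu) / sigma
    return z if abs(z) >= threshold_sigma else None

# Illustrative weekly churn rates for one segment (invented numbers).
baseline = [0.021, 0.019, 0.020, 0.022, 0.020, 0.021, 0.019, 0.020]
assert self_detect(baseline, 0.021) is None   # a normal week: silent
assert self_detect(baseline, 0.034) > 3.0     # a churn spike: fires
```

A production system would layer seasonality, segmentation, and correlation on top, but the trigger logic is this simple: the alert is born from the data, not from a config file.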
From Stale Slides to the Intelligence Surface
Thursday, 4 PM. The CFO types in the #finance Slack channel:
/prepare board-deck Q3 --format exec-summary --deadline friday-9am
The agent responds: "Acknowledged. Pulling live actuals from data substrate. Checking: last quarter's board deck had 14 slides — board engaged with slides 2, 4, 7, 11 (based on discussion transcripts). Recommending 11 slides this quarter. Revenue narrative needs fresh data (refreshing now). Margin commentary reusable from FP&A's Tuesday model — updating with Thursday actuals only. Estimated delivery: Friday 6:00 AM. I'll post a draft link for your review by 11 PM tonight."
The CFO goes home. At 11:07 PM, a Slack notification: "Draft ready for review. 3 items flagged for your judgment: (1) margin compression narrative — two interpretations possible, (2) APAC deep-dive recommended based on last quarter's board interest, (3) slide 9 removable — low engagement history. Review link attached."
This is the #IntelligenceSurface — a living layer that pulls real-time actuals, auto-generates #SignalCards for every KPI, and drafts narrative that updates when the numbers move. The CFO's job: review, choose between the two margin interpretations, approve. Assembly is gone. Judgment remains.
From Six Humans to One Intent — Ambient Awareness
Wednesday, 9:15 AM. The CEO is walking into an all-hands meeting. She texts from iMessage:
"Prep me for the QBR tomorrow. Focus on what the board will actually ask about."
The agent responds: "On it. Pulling actuals, pipeline, churn, and NPS. Checking last 3 board transcripts for recurring question patterns. Mark (RevOps) ran a revenue drill-down yesterday — reusing his APAC segment data. Jake (FP&A) updated the forecast model 2 hours ago — pulling latest. Estimated delivery: your calendar shows a 45-min gap at 2 PM. I'll have the complete QBR brief, talking points, and a 12-slide deck ready by 1:55 PM."
At 1:55 PM, it arrives: a complete deliverable posted to her private Slack channel. Data pulled from 4 sources. Story identified. Narrative drafted. Slides built. Talking points formatted for mobile reading. Three decision items flagged for her input. She reads it during her 2 PM coffee break.
This is #AmbientAwareness. The agent didn't just execute a task — it checked what other people in the org had already produced, reused what was fresh, only recomputed what was stale, estimated delivery based on her actual calendar, and delivered to where she'd actually read it. Six humans, four tools, three days — collapsed into one iMessage and a 1:55 PM notification. The human never sat at a computer.
The Stack Beneath the Lexicon
This is not magic. It has five layers.
The data substrate — Delta Lake, Iceberg, Unity Catalog: the governed, versioned truth. Nothing above it works without lineage and access control here.

The reasoning layer — Nemotron, Llama, Mistral, tuned on your domain data and connected to your systems via OpenClaw / NemoClaw. The model reasons, queries, cross-references, and verifies before it speaks.

No, not everything disappears.
Regulated workflows, audit trails, financial controls, SOX compliance, and human sign-off will preserve visual surfaces for years. Operations centers, trading floors, and production monitoring genuinely benefit from spatial, glanceable, real-time displays. Those aren't going anywhere.
But the role changes. The chart stops being the product. It becomes evidence attached to an agentic conclusion.
The governance layer — lineage, access control, explainability — doesn't go away. It gets harder, because the consumer of the insight is now further from the source than ever. That's not a reason to keep dashboards. That's a reason to build better Layer 2 business memory.
The Post-Query Lexicon
12 terms for a world where English is the only query language.
The vocabulary exists now.
The interaction pattern is already here.
None of this is science fiction. Engineers already type /deploy in Slack and get a notification when the build is done. DevOps already runs background pipelines that notify on failure. The only thing that's new is applying that same pattern — spawn, background process, ETA, deliver — to business intelligence. The models are shipping. The orchestration layer is open source. The enterprise governance is being built by Anthropic, Snowflake, and the Nemotron Coalition in real time.
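The spawn / background-process / ETA / deliver loop described here is an ordinary async pattern. A minimal sketch, where `fake_analysis` and the notification hook are hypothetical stand-ins for a real investigation and a Slack/Teams/iMessage webhook:

```python
import asyncio

async def spawn(task_name, work, deliver):
    """Spawn -> background -> deliver: the /deploy interaction pattern
    applied to an analysis job. `work` is the long-running investigation;
    `deliver` is a stand-in for a chat notification hook."""
    deliver(f"{task_name}: started. I'll notify you when it's done.")
    result = await work()        # runs while the human does something else
    deliver(f"{task_name}: ready. {result}")

async def fake_analysis():
    await asyncio.sleep(0.01)    # placeholder for minutes of querying
    return "Root cause, evidence, and recommended action attached."

inbox = []                       # stand-in for a Slack channel
asyncio.run(spawn("LATAM conversion analysis", fake_analysis, inbox.append))
print(inbox[-1])
```

The human sees only the two `deliver` messages; everything between them happens off-screen, which is the whole argument of this piece.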
The question is not whether people will spend less time in front of dashboards. They will. The question is whether your company is building the agent that replaces the dashboard — or still buying the SaaS license that assumes a human is sitting in front of one.