Hello!
I've started using AI agents to handle data analysis. I'm increasingly excited about the outcomes, and it's giving me lots of ideas for integrating them with Observable notebooks! I want to share a few of those ideas and ask for guidance, as you've probably given this a lot of thought already.
The setup I'm using so far relies on Claude Desktop as the end-user interface, with all data fetching done via custom MCP tools I implemented for the database-agent interface. Claude Desktop has been a quick win, since the more capable models (Opus, Sonnet) can generate expressive analysis artifacts on the fly: executive summaries, interactive React reports, PDF documents, etc. I know this isn't ideal in the long run, because there's significant token waste when agents regenerate similar analyses (e.g., monthly sales reports, store performance breakdowns), and that's the use case where I see BI and Observable notebooks coming in for the win. I already have an Observable Notebook Kit structure set up, but currently I only use it myself for some internal troubleshooting (it's running SQL queries directly against the database). I'm thinking about good ways to extend it into an official BI platform, supported by AI agents.
The first idea that came to mind is being able to build reports in Observable notebooks that reuse the same data-fetching MCP tools. This would mean easier data fetching and more peace of mind, since the data sources would already be battle-tested. I imagine a simple internal JS lib implementing this notebook-to-MCP-server interface would be sufficient to get this running (rough sketch below).
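To make that concrete, here's a minimal sketch of what such an internal lib could look like, assuming the MCP JavaScript SDK (@modelcontextprotocol/sdk) and an MCP server that can be spawned locally; the server command and the tool name sales_by_month are hypothetical placeholders for whatever your existing tools expose.

```js
// mcp-data.js - tiny wrapper a notebook (or its data loader) could import
// to reuse the existing data-fetching MCP tools instead of raw SQL.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

let clientPromise;

// Lazily connect to the MCP server once and reuse the connection.
// The command/args below are assumptions about how your server is started.
function getClient() {
  if (!clientPromise) {
    clientPromise = (async () => {
      const client = new Client({ name: "observable-notebook", version: "0.1.0" }, { capabilities: {} });
      const transport = new StdioClientTransport({
        command: "node",
        args: ["./mcp-servers/database-tools.js"], // hypothetical path
      });
      await client.connect(transport);
      return client;
    })();
  }
  return clientPromise;
}

// Call any existing data-fetching tool and return its text payload parsed as JSON.
export async function callTool(name, args = {}) {
  const client = await getClient();
  const result = await client.callTool({ name, arguments: args });
  const text = result.content?.find((c) => c.type === "text")?.text ?? "[]";
  return JSON.parse(text);
}

// Example usage in a notebook cell (tool name is a placeholder):
// const sales = await callTool("sales_by_month", { month: "2025-01" });
```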
The second idea is to have the AI agents use, generate, modify and publish the BI reports (Observable notebooks) themselves. What I envision here is creating a specific set of skills and subagents so Claude gets good at writing Observable notebooks, plus a new MCP server that gives the agents the ability to see which reports already exist, improve them, or publish new ones. Perhaps also with some human validation before changes are published. A sketch of what that server's tool surface might look like follows this paragraph.
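As a strawman for that second MCP server, here's a minimal sketch using the MCP TypeScript/JavaScript SDK; the tool names (list_reports, publish_report), the directory layout, and the staging-for-review step are all assumptions about how this could be wired up, not an existing API.

```js
// reports-mcp-server.js - hypothetical MCP server exposing the notebook catalog
// to agents: list what already exists, and stage changes for human review.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readdir, writeFile } from "node:fs/promises";
import { z } from "zod";

const REPORTS_DIR = "./reports";          // assumed layout: one notebook file per report
const PENDING_DIR = "./reports-pending";  // staged changes awaiting human validation

const server = new McpServer({ name: "observable-reports", version: "0.1.0" });

// Let the agent discover which reports already exist before regenerating anything.
server.tool("list_reports", "List existing Observable notebook reports", async () => {
  const files = await readdir(REPORTS_DIR);
  return { content: [{ type: "text", text: JSON.stringify(files) }] };
});

// Stage a new or updated report; a human approves it before it goes live.
server.tool(
  "publish_report",
  "Stage a notebook for human review before publishing",
  { name: z.string(), source: z.string() },
  async ({ name, source }) => {
    await writeFile(`${PENDING_DIR}/${name}`, source, "utf8");
    return { content: [{ type: "text", text: `Staged ${name} for review` }] };
  }
);

await server.connect(new StdioServerTransport());
```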
Do you see any architectural opportunities here? What are your general thoughts on this path?
Also, do you already have, or plan to have, artifacts that support AI agents for the Observable stack? Things like skills, subagents, MCP servers, etc.
Happy to collaborate on this if there's interest! :)
PS: In the early days I tested the "Ask AI Space" in Observable Notebooks Desktop a bit, but the token usage felt wildly inefficient for my uses: small adjustments, correcting a few things, and help with debugging. I didn't test generating reports from scratch; perhaps that's more of a target use case. But the general feeling was that using Claude Code was just better.