
Building Interactive Conference Assistants with .NET's Composable AI Stack: A Practical Walkthrough

Last updated: 2026-05-12 03:30:09 · Data Science

Introduction

Integrating artificial intelligence into .NET applications often feels like assembling a puzzle with pieces from different collections—models from one ecosystem, vector databases from another, ingestion pipelines from a third. Each component brings its own patterns, client libraries, and versioning quirks. To streamline this process, we have developed a set of composable, extensible building blocks that provide stable abstractions across these concerns.

Building Interactive Conference Assistants with .NET's Composable AI Stack: A Practical Walkthrough
Source: devblogs.microsoft.com

In this article, we walk through a real-world example: ConferencePulse, an interactive conference assistant built for a session at MVP Summit. The app runs live polls, answers audience questions in real time, generates insights from engagement data, and produces a session summary when the session concludes. ConferencePulse was built with the exact technologies we were there to present: Microsoft.Extensions.AI, Microsoft.Extensions.DataIngestion, Microsoft.Extensions.VectorData, the Model Context Protocol (MCP), and the Microsoft Agent Framework.

We will explore how each building block fits together, from architecture to implementation.

What We Built

ConferencePulse is a Blazor Server application designed for live conference sessions. Attendees scan a QR code to join a session and interact with the presenter through polls and Q&A. Behind the scenes, AI powers several key features:

  • Live polls generated on the fly based on session content. Attendees vote, and results update in real time.
  • Audience Q&A where AI answers questions using a Retrieval-Augmented Generation (RAG) pipeline that pulls from the session knowledge base, Microsoft Learn documentation, and GitHub wiki content.
  • Auto-generated insights that surface patterns in poll results and audience questions as they come in.
  • Session summary that runs when the presenter ends the session. Multiple AI agents analyze polls, questions, and insights concurrently, then merge their findings.

Our goal was an interactive session—no static slide decks. We wanted live polls and audience insights, and we wanted to automate preparation: point the app at a GitHub repository, and it downloads markdown files, processes them through a pipeline, and builds a searchable knowledge base. Polls, talking points, and Q&A answers are all grounded in that content.

Application Architecture

The app runs on .NET 10, Blazor Server, and Aspire. The solution consists of five projects:

src/
├── ConferenceAssistant.Web/          ← Blazor Server (UI + orchestration)
├── ConferenceAssistant.Core/         ← Models, interfaces, session state
├── ConferenceAssistant.Ingestion/    ← Data ingestion pipeline + vector search
├── ConferenceAssistant.Agents/       ← AI agents, workflows, tools
├── ConferenceAssistant.Mcp/          ← MCP server tools + MCP client
└── ConferenceAssistant.AppHost/      ← .NET Aspire (Qdrant, PostgreSQL, Azure OpenAI)

Each project addresses a specific concern, leveraging the composable AI stack.
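The AppHost project wires these pieces together. A minimal sketch of what its `Program.cs` might look like, assuming the standard Aspire hosting APIs (`AddQdrant`, `AddPostgres`, `AddAzureOpenAI`) — the resource names are illustrative, not taken from the actual repository:

```csharp
// AppHost/Program.cs — illustrative sketch; resource names are placeholders.
var builder = DistributedApplication.CreateBuilder(args);

var qdrant   = builder.AddQdrant("vectordb");            // vector store container
var postgres = builder.AddPostgres("postgres")           // session/poll persistence
                      .AddDatabase("conferencepulse");
var openai   = builder.AddAzureOpenAI("openai");         // model endpoint

builder.AddProject<Projects.ConferenceAssistant_Web>("web")
       .WithReference(qdrant)
       .WithReference(postgres)
       .WithReference(openai);

builder.Build().Run();
```

With this in place, the web project receives connection strings for Qdrant, PostgreSQL, and Azure OpenAI through Aspire's service discovery, and the dashboard surfaces logs and traces for all of them.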

Core Building Blocks

Microsoft.Extensions.AI: One Interface, Any Provider

At the heart of our AI calls is IChatClient, a unified abstraction provided by Microsoft.Extensions.AI. This interface works with OpenAI, Azure OpenAI, Ollama, Foundry Local, and other providers. Every AI interaction—whether generating a poll question, answering an attendee query, or summarizing insights—uses the same pattern. This means we can swap providers without rewriting logic. For example, in development we use Ollama for fast local testing; in production we switch to Azure OpenAI with minimal configuration changes.
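The provider swap described above can be sketched roughly as follows — the model names, endpoint, and `useLocal` flag are placeholders, and the Ollama/Azure client types come from their respective `Microsoft.Extensions.AI.*` adapter packages:

```csharp
using Microsoft.Extensions.AI;

// Sketch: the same IChatClient call shape works across providers.
// Model names and endpoint values below are placeholders.
IChatClient client = useLocal
    ? new OllamaChatClient(new Uri("http://localhost:11434"), "llama3")
    : new AzureOpenAIClient(new Uri(azureEndpoint), new DefaultAzureCredential())
          .GetChatClient("gpt-4o-mini")
          .AsIChatClient();

// Every feature in the app funnels through this one call shape.
ChatResponse response = await client.GetResponseAsync(
    "Generate one multiple-choice poll question about .NET Aspire.");
Console.WriteLine(response.Text);
```

Because the rest of the application only ever sees `IChatClient`, the ternary above is the entire extent of the provider-specific code.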

Data Ingestion with Microsoft.Extensions.DataIngestion

To build the knowledge base, we needed to ingest content from GitHub repositories (markdown files) and other sources. Microsoft.Extensions.DataIngestion provides a pipeline framework that handles extraction, transformation, and loading (ETL). We define stages: download markdown files, split them into chunks, generate embeddings, and store them in a vector database. The pipeline is extensible, allowing us to add steps like filtering or deduplication. This component also integrates with Microsoft.Extensions.VectorData for storing and searching embeddings.
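The stages above can be pictured as a fluent pipeline. Note that Microsoft.Extensions.DataIngestion is in preview and its exact API may differ; every type and method name below is a placeholder standing in for the described stages, not the library's real surface:

```csharp
// Illustrative ETL shape only — Microsoft.Extensions.DataIngestion is in
// preview; the type and method names here are hypothetical placeholders
// for the stages described above (download, chunk, embed, store).
var pipeline = new IngestionPipeline()
    .AddSource(new GitHubMarkdownSource(repoUrl))            // download *.md files
    .AddTransform(new MarkdownChunker(maxTokens: 512))       // split into chunks
    .AddTransform(new EmbeddingTransform(embeddingGenerator))// embed each chunk
    .AddSink(new VectorStoreSink(collection));               // write to Qdrant

await pipeline.RunAsync();
```

Extensibility falls out of this shape naturally: a deduplication or filtering step is just one more `AddTransform` call between chunking and embedding.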

Vector Data and RAG

For the Q&A feature, we use Microsoft.Extensions.VectorData to manage vector embeddings. During ingestion, each document chunk is embedded and stored in Qdrant (via Aspire). When an attendee asks a question, the system embeds the query, searches the vector store for relevant content, and returns the top matches. These matches are then sent to the AI model along with the original question to generate a grounded answer. The abstraction lets us switch between vector stores (e.g., Pinecone, Azure AI Search) without changing application code.
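The retrieval step might look like the sketch below. The attribute names follow Microsoft.Extensions.VectorData, but the record shape, embedding dimensions, and the `embedder`/`collection` instances are illustrative assumptions:

```csharp
using Microsoft.Extensions.VectorData;

// Sketch of a chunk record; field names and dimensions are examples.
public sealed class DocChunk
{
    [VectorStoreKey]                      public required string Id { get; set; }
    [VectorStoreData]                     public required string Text { get; set; }
    [VectorStoreVector(Dimensions: 1536)] public ReadOnlyMemory<float> Embedding { get; set; }
}

// Embed the attendee's question, then pull the top matches for grounding.
ReadOnlyMemory<float> query = await embedder.GenerateVectorAsync(question);

var context = new StringBuilder();
await foreach (var hit in collection.SearchAsync(query, top: 3))
    context.AppendLine(hit.Record.Text);
// context + question now go to IChatClient for the grounded answer.
```

Swapping Qdrant for another store means changing only how `collection` is constructed; the search loop stays the same.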


Agent Framework and Model Context Protocol (MCP)

To orchestrate complex workflows—like the session summary or insight generation—we use the Microsoft Agent Framework. Agents are defined with specific roles and tools. For summary creation, we configure three agents: one analyzes poll results, another extracts themes from Q&A, and a third merges those analyses into a final summary. They communicate through the Model Context Protocol (MCP), which allows the agents to call external tools (e.g., a search tool or a database query tool) in a decoupled way. MCP also enables our Blazor front-end to invoke backend agent actions seamlessly.
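A shared tool exposed over MCP might be sketched like this, following the shape of the ModelContextProtocol C# SDK's attribute-based tool registration — the `IKnowledgeBaseSearch` service and its `FindAsync` method are hypothetical stand-ins for the app's search backend:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

// Sketch of an MCP tool that both the Q&A flow and the summary agents
// can invoke; the search service behind it is a hypothetical app service.
[McpServerToolType]
public static class KnowledgeBaseTools
{
    [McpServerTool, Description("Searches the session knowledge base.")]
    public static async Task<string> SearchKnowledgeBase(
        IKnowledgeBaseSearch search,                       // hypothetical service
        [Description("Natural-language query")] string query)
    {
        var results = await search.FindAsync(query, top: 3);
        return string.Join("\n---\n", results);
    }
}
```

Because the tool lives behind MCP rather than being compiled into each agent, any agent (or any MCP-capable client) can call it without taking a dependency on the ingestion project.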

How It All Works Together

The application flow begins when a presenter starts a session. The system ingests the session's associated GitHub repository (using DataIngestion), processes the markdown into a vector store, and generates initial poll questions from the content using IChatClient. Attendees join via QR code and see polls update in real time.

During the session, attendees submit questions. The RAG pipeline fetches relevant context from the vector store and passes it to the AI model. The answer is displayed directly in the chat interface. Meanwhile, the insight agent monitors new questions and poll results, identifying trends (e.g., “Many attendees are asking about authentication”). All updates are pushed to the Blazor UI via SignalR.

When the presenter ends the session, the summary agent workflow kicks off. Each agent runs concurrently, using MCP to invoke the same search tool that powers Q&A. Their outputs are merged into a coherent summary that the presenter can download or share.
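The fan-out/merge step above can be sketched as follows. The `CreateAIAgent` and `RunAsync` calls follow the Microsoft Agent Framework preview API, and the instructions, input variables, and `.Text` accessor are illustrative assumptions:

```csharp
// Sketch of the concurrent summary workflow; prompts are illustrative
// and the API shape follows the Microsoft Agent Framework preview.
var pollAgent  = chatClient.CreateAIAgent(instructions: "Analyze poll results.");
var qaAgent    = chatClient.CreateAIAgent(instructions: "Extract themes from audience Q&A.");
var mergeAgent = chatClient.CreateAIAgent(instructions: "Merge analyses into one session summary.");

// Run the two analysis agents concurrently, then merge their findings.
var pollTask = pollAgent.RunAsync(pollResultsJson);
var qaTask   = qaAgent.RunAsync(questionsJson);
await Task.WhenAll(pollTask, qaTask);

var summary = await mergeAgent.RunAsync(
    $"Poll analysis:\n{(await pollTask).Text}\n\nQ&A analysis:\n{(await qaTask).Text}");
```

The merge agent sees only the two analyses as text, which keeps each agent's context small and makes the workflow easy to extend with additional analyzers.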

Key Benefits of the Composable Stack

Using these building blocks together offers several advantages:

  • Consistency: Every AI call goes through IChatClient, every data pipeline follows the same pattern, and every agent uses the same tool interface.
  • Swappability: We can change AI providers, vector databases, or ingestion sources with minimal code changes, thanks to the abstraction layers.
  • Extensibility: Adding a new feature (e.g., real-time translation) means adding a new pipeline step or a new agent tool, not rewriting the whole app.
  • Built-in observability: The stack integrates with .NET Aspire’s dashboard, giving us logs, traces, and metrics out of the box.

Conclusion

ConferencePulse demonstrates that building sophisticated AI-powered applications in .NET no longer requires stitching together disparate libraries. The composable AI stack—Microsoft.Extensions.AI, Microsoft.Extensions.DataIngestion, Microsoft.Extensions.VectorData, MCP, and the Agent Framework—provides a cohesive foundation. We built a fully interactive conference assistant in a matter of weeks, and the same principles can be applied to any domain that needs intelligent, real-time interactions.

If you are interested in trying it yourself, the complete source code for ConferencePulse is available on GitHub. Start with the building blocks section to understand the core abstractions, then explore the project structure.