Multi-Agent Task Assistant
by ogulcanakca
This project demonstrates a multi-agent system designed for tasks like article drafting and web research. It features a custom Agent-to-Agent (A2A) communication protocol and a Model Context Protocol (MCP)-inspired architecture.
What is Multi-Agent Task Assistant?
The Task Assistant is a multi-agent system that allows users to request an article draft on a specific topic or perform web research. It uses specialized agents, external capabilities (like LLM text generation, web searching, and cloud storage), and a Streamlit application for the user interface.
How to use Multi-Agent Task Assistant?
To use the Task Assistant, clone the repository, configure the .env file with the required API keys and the Google Cloud Storage bucket name, set up Google Cloud credentials, make sure each service directory has its own requirements.txt file, build and run the system with Docker Compose, and then open the UI in your web browser. A minimal sketch of the environment configuration step follows.
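The snippet below is a hedged sketch of what the .env configuration boils down to, assuming the services load their settings with python-dotenv. The variable names (ANTHROPIC_API_KEY, GCS_BUCKET_NAME, GOOGLE_APPLICATION_CREDENTIALS) are illustrative assumptions, not the project's confirmed keys; check the repository's own .env template for the exact names.

```python
# Minimal sketch of the environment variables the services are expected to read.
# Variable names below are assumptions for illustration only.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read the .env file at the project root

ANTHROPIC_API_KEY = os.environ["ANTHROPIC_API_KEY"]    # Claude API access (assumed name)
GCS_BUCKET_NAME = os.environ["GCS_BUCKET_NAME"]        # target Cloud Storage bucket (assumed name)
GOOGLE_APPLICATION_CREDENTIALS = os.environ.get(
    "GOOGLE_APPLICATION_CREDENTIALS"                   # path to the GCP service-account JSON
)
```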
Key features of Multi-Agent Task Assistant
Article Drafting: Generates an article draft for a given topic and style, saves it to Google Cloud Storage, and returns a public URL.
Web Research: Performs simulated web research on a topic and returns a summarized list of findings.
Supervisor LLM Input Validation: User inputs are pre-processed, validated for safety, and translated to English by an LLM before the resulting task is assigned to an agent.
Agent-to-Agent (A2A) Communication: A custom protocol lets agents assign tasks, send status updates, and return results (see the sketch after this list).
Model Context Protocol (MCP)-inspired Tooling: Specialized servers expose tools (LLM generation, Cloud Storage, Web Search) via an MCP-like request/response pattern (also sketched after this list).
Asynchronous Task Processing: Tasks are handled asynchronously, and the UI polls for status updates.
Microservice Architecture: Components are containerized using Docker and orchestrated with Docker Compose.
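As referenced above, here is a hedged illustration, using plain Python dictionaries, of what an A2A task-assignment message and an MCP-like tool request/response might look like. The field names and values are assumptions made for illustration and may differ from the project's actual schemas.

```python
# Hedged sketch of the two message shapes; all field names are illustrative assumptions.
import uuid

# A2A: the supervisor assigns a task to a specialist agent.
a2a_task_assignment = {
    "message_type": "task_assignment",
    "task_id": str(uuid.uuid4()),
    "sender": "supervisor",
    "recipient": "article_draft_agent",
    "payload": {"topic": "edge computing", "style": "informative"},
}

# A2A: the agent later reports progress or results back (the UI polls for this).
a2a_status_update = {
    "message_type": "status_update",
    "task_id": a2a_task_assignment["task_id"],
    "status": "in_progress",  # e.g. in_progress | completed | failed
}

# MCP-like tool call: an agent asks a tool server to run one of its exposed tools.
mcp_request = {
    "tool_name": "generate_text",
    "arguments": {"prompt": "Write a 500-word draft about edge computing."},
}
mcp_response = {
    "status": "success",
    "result": {"text": "...generated article draft..."},
}
```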
Use cases of Multi-Agent Task Assistant
Automated article generation on specific topics.
Simulated web research for gathering information.
Building complex workflows using multiple AI agents.
Integrating LLMs with external tools and services.
FAQ from Multi-Agent Task Assistant
What LLM models are used in this project?
This project leverages several Anthropic Claude models including claude-3-7-sonnet-20250219, claude-3-haiku-20240307, and claude-3-5-sonnet-20241022 for different functionalities.
How does the system handle input validation?
The Supervisor LLM validates and sanitizes user inputs, checks them for harmful content, identifies their language, and translates them to English before assigning the task to an agent.
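A minimal sketch of what such a validation call could look like with the Anthropic Python SDK is shown below. The prompt wording, the JSON output contract, and the choice of claude-3-haiku-20240307 for this step are assumptions for illustration, not the project's confirmed implementation.

```python
# Hedged sketch of a supervisor-style validation call; prompt and model choice are assumptions.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def validate_user_input(user_input: str) -> dict:
    """Ask the model to safety-check the input, detect its language, and translate it to English."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                "Validate the following user request. Respond with JSON only, using the keys "
                "'is_safe' (bool), 'language' (str), and 'english_text' (str):\n\n"
                + user_input
            ),
        }],
    )
    # Assumes the model follows the instruction and returns bare JSON.
    return json.loads(response.content[0].text)
```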
What is the purpose of the MCP Tool Servers?
MCP Tool Servers expose tools (LLM generation, Cloud Storage, Web Search) via an MCP-like request/response pattern.
How does the Article Draft Agent generate articles?
The Article Draft Agent uses the generate_text tool provided by the Creative LLM MCP Server to produce article content.
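The sketch below shows one plausible way the agent could invoke the generate_text tool over HTTP within the Docker Compose network. The service name, port, endpoint path, and payload shape are assumptions for illustration, not the project's confirmed API.

```python
# Hedged sketch of the Article Draft Agent calling the Creative LLM MCP Server's
# generate_text tool; URL, port, and payload shape are illustrative assumptions.
import requests

def draft_article(topic: str, style: str) -> str:
    payload = {
        "tool_name": "generate_text",
        "arguments": {
            "prompt": f"Write an article draft about '{topic}' in a {style} style.",
        },
    }
    # Inside Docker Compose the agent would address the tool server by its service name.
    resp = requests.post("http://creative-llm-mcp:8000/call_tool", json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["result"]["text"]
```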
Is the web search functionality a real web search?
No, the web-search-mcp server currently simulates web search results using an LLM. It does not perform actual live web searches.
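For context, the sketch below shows one way such a simulation could be implemented: the server prompts an LLM to fabricate plausible results instead of querying a search engine. The prompt wording and the use of claude-3-5-sonnet-20241022 here are assumptions made for illustration.

```python
# Hedged sketch of an LLM-simulated web search; prompt and model choice are assumptions.
import json
import anthropic

client = anthropic.Anthropic()

def simulated_web_search(query: str, num_results: int = 5) -> list[dict]:
    """Return LLM-fabricated search results; no live web request is made."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                f"Pretend you are a web search engine. Return a JSON list of {num_results} "
                f"plausible results for the query '{query}', each with 'title', 'url', and "
                "'snippet' fields. Respond with JSON only."
            ),
        }],
    )
    return json.loads(response.content[0].text)
```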