Hi everyone,
Our AI team is currently working on several interesting GenAI projects and we’re looking to connect with a few experienced Senior AI Engineers for flexible collaboration.
Key areas we need strong experience in:
- RAG (Retrieval-Augmented Generation)
- MCP (Model Context Protocol) + Agent-to-Agent communication
- LangChain, LlamaIndex, AutoGen, CrewAI
- Major LLM providers and models (OpenAI/GPT-4, Anthropic/Claude, etc.)
If this matches your background and you’re open to discussing potential collaboration, feel free to add me directly (messaging here is a bit limited).
Looking forward to connecting!
Hi,
I’m James Yarris, an AI & Full Stack Engineer with 9+ years building production LLM systems, RAG pipelines, and multi-agent orchestration.
I’m excited by your work on a multi-agent LLM platform that combines RAG with MCP-style agent-to-agent communication, using LangChain, LlamaIndex, AutoGen, and CrewAI to deliver reliable, grounded answers across OpenAI and Anthropic models. The focus on agent coordination and factual grounding really stood out to me.
One idea I’d bring is a lightweight MCP context broker that stores compact, versioned context summaries and retrieval provenance. It would cache agent exchanges, attach source links and confidence scores, and let agents request either full context or compressed summaries. This reduces repeated retrieval costs, improves traceability for users, and cuts hallucinations by prioritizing high-confidence sources.
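To make the idea concrete, here is a minimal sketch of what such a broker could look like. This is illustrative only: MCP does not define a "context broker" component, and all class and method names here are my own invention, not an existing API.

```python
from dataclasses import dataclass, field


@dataclass
class ContextEntry:
    """One versioned context record shared between agents."""
    agent_id: str
    version: int
    full_context: str
    summary: str  # compact summary, cheaper to re-send than full context
    sources: list = field(default_factory=list)  # (url, confidence) pairs


class ContextBroker:
    """Illustrative broker: caches agent exchanges with retrieval
    provenance and serves either full context or compressed summaries."""

    def __init__(self, min_confidence: float = 0.5):
        self.min_confidence = min_confidence
        self._store: dict[str, list[ContextEntry]] = {}

    def publish(self, agent_id: str, full_context: str, summary: str,
                sources: list) -> ContextEntry:
        # Keep only sources above the confidence floor, so downstream
        # agents preferentially build on high-confidence material.
        trusted = [(url, conf) for url, conf in sources
                   if conf >= self.min_confidence]
        versions = self._store.setdefault(agent_id, [])
        entry = ContextEntry(agent_id, len(versions) + 1,
                             full_context, summary, trusted)
        versions.append(entry)
        return entry

    def request(self, agent_id: str, compressed: bool = True) -> str:
        # Serve the latest version; summaries avoid repeated retrieval cost.
        entry = self._store[agent_id][-1]
        return entry.summary if compressed else entry.full_context

    def provenance(self, agent_id: str) -> list:
        # Source links plus confidence scores, for user-facing traceability.
        return self._store[agent_id][-1].sources
```

In practice the store would live in a shared cache (e.g. Redis or PostgreSQL) rather than in memory, but the interface stays the same: agents publish versioned summaries with provenance and request either form of context.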
At DuploCloud I built multi-agent workflows with LangChain and AutoGen and designed hybrid RAG pipelines using LlamaIndex, Pinecone, and PostgreSQL; that work reduced manual effort and improved factual relevance by ~35–38%. I’ve also fine-tuned models with LoRA/PEFT, lowering hallucinations by 27%, so I’m confident I can implement the broker and integrate it reliably across major LLM providers.
I’d love to chat about bringing this improvement to your platform and helping scale your multi-agent systems.
Best Regards,
James Yarris ([email protected])