If you manage an ERG program at a company that's invested in AI, chances are someone has already suggested - or maybe already built - an internal chatbot for your ERGs. The pitch probably sounded reasonable: take your operating guide, your charter templates, your event planning docs, dump them into an AI tool, and give ERG leaders a bot they can ask questions.
And if you've tried this, you already know the result. The bot can tell a leader what your ERG charter template looks like. It can summarize your operating guide. But when that leader sits down to actually build their annual plan, or prep for a high-stakes meeting with their executive sponsor, or write an impact report that justifies next year's budget - the chatbot falls flat.
This isn't a failure of AI. It's a failure of approach. And if your company is serious about using AI to modernize how ERGs operate, the distinction between what most companies are building and what actually works is worth understanding.
Who wrote this?
Armen Nercesian is the Co-Founder of Verbate, where he works with leading companies to modernize how ERGs operate. If you're rethinking how AI should actually support your ERGs - beyond chatbots and document dumps - you can book time with Armen here.
The "Knowledge Bot" Trap
Most internal ERG bots are built the same way: someone on the AI or IT team collects a folder of ERG documents - operating guides, past annual plans, event playbooks, maybe some FAQ pages - and loads them into a chatbot or a custom GPT. The result is a system that pulls relevant info from those documents and answers questions about what those documents contain - what AI engineers would typically call a retrieval-augmented generation (RAG) system.
This is useful in the same way that a searchable wiki is useful. It helps people find information that already exists. But the fundamental limitation is that ERG leadership isn't just an information retrieval problem. It's an execution problem.
When a first-time ERG co-lead needs to write an annual plan, they don't just need to know what an annual plan looks like. They need to think through whether their group mission still aligns with member needs and company strategy. They need to decide on 3-4 strategic pillars from a menu of options, understanding the tradeoffs. They need to sequence initiatives across quarters, flag resource constraints, and build a budget request that finance will take seriously. Each of those is a decision point, not a lookup.
A knowledge bot can't walk someone through that process. It doesn't know to ask about mission alignment before jumping to tactics. It doesn't flag when a budget request exceeds reasonable thresholds. It doesn't understand that a brand-new ERG needs a fundamentally different plan than one with 2,000 members. It just retrieves what's in the documents and generates a response.
What ERG Leaders Actually Need
Talk to any experienced ERG program manager and they'll describe their job the same way: they spend most of their time walking ERG leaders through processes. Not handing them documents - walking them through the work.
When a leader asks "how do I plan an event?", the PM doesn't send them a template. They ask: What's the goal of this event? Who's your audience? Have you coordinated with the other ERGs to avoid scheduling conflicts? Do you need budget approval, and if so, from whom? Have you thought about how you'll measure whether this was worth the effort?
That sequence of questions - and the logic behind when to ask which question, and what to do with the answers - is the real intellectual property of ERG operations. It's not captured in any document. It lives in the heads of experienced program managers and tenured ERG leaders, and it's what makes the difference between an ERG that executes well and one that spins its wheels.
The right way to use AI for ERGs isn't to build a bot that knows about ERG operations. It's to build AI workflows that can do what a great program manager does: guide leaders through structured decisions, ask the right questions in the right order, and produce usable outputs at the end.
Think in Workflows, Not Documents
The mental model shift for program managers is to stop thinking of AI as a tool you feed documents into and start thinking of it as a tool you encode processes into.
Consider what's involved in executive sponsor alignment - one of the highest-leverage, most politically sensitive things an ERG leader does. A knowledge bot might retrieve your "tips for working with your exec sponsor" page. Helpful, maybe. But what an ERG leader actually needs before a sponsor meeting is a process that:
- Asks what the leader wants from this meeting - budget approval, strategic input, visible attendance at an event, or something else
- Pulls together relevant data points - membership growth, event attendance, any feedback from the last meeting
- Structures a briefing document with talking points tailored to what this specific executive cares about
- Generates recommended asks, grounded in what's realistic given the company's current posture on ERG investment
- Produces a follow-up action plan template so commitments made in the meeting don't evaporate
Each of those steps has branching logic. If the leader's goal is budget approval, the preparation looks different than if the goal is getting the sponsor to keynote an event. If this is a first meeting with a new sponsor, the approach differs from a recurring quarterly check-in.
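To make the branching concrete, here is a minimal sketch of that decision logic in Python. The goal names, checklist items, and function name are all hypothetical illustrations, not a real product API - the point is that the preparation steps change based on the leader's answers, which is exactly what a retrieval-only bot can't do.

```python
# Hypothetical sketch of branching prep logic for a sponsor-meeting workflow.
# Goal names and checklist items are illustrative, not a real API.

def sponsor_meeting_prep(goal: str, first_meeting: bool) -> list[str]:
    """Return an ordered prep checklist tailored to the meeting's goal."""
    steps = ["Clarify what you want from this meeting"]

    if first_meeting:
        # A new sponsor needs context before hearing any ask.
        steps.append("Prepare a one-page ERG overview (mission, membership, wins)")
    else:
        steps.append("Review commitments and feedback from the last meeting")

    if goal == "budget_approval":
        steps += [
            "Pull membership growth and event attendance data",
            "Draft a budget ask with line items and expected impact",
        ]
    elif goal == "event_keynote":
        steps += [
            "Summarize the event's goal, audience, and date",
            "Draft talking points the sponsor could deliver",
        ]
    else:
        steps.append("Prepare 2-3 strategic questions for the sponsor")

    steps.append("Create a follow-up action plan template")
    return steps
```

In a deployed Skill this logic lives in the system prompt rather than in code, but the structure is the same: ask first, branch on the answer, then produce the deliverable.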
This kind of structured, multi-phase workflow - with decision logic, guardrails, and context-aware outputs - is what the AI industry is now calling a "Skill." Not a chatbot answer. Not a template. A complete operational workflow encoded into an AI system prompt that can be deployed into the AI tools a company already uses.
The Building Blocks of an ERG Skill
If you're a program manager thinking about this approach, here's the anatomy of a well-built ERG workflow:
A clear role definition. The AI isn't a generic assistant. It's playing a specific role - an experienced ERG strategy advisor, or an event planning coordinator, or an impact reporting analyst. Defining that role shapes every response the AI gives.
Structured phases. Good workflows don't dump everything at once. They move through stages: first establish context, then make key decisions, then build the deliverable. Just like a good PM conversation.
Decision logic and branching. If the ERG is in its first year, the annual planning process should look different than it does for a mature group. If a budget request exceeds a certain threshold, the workflow should flag it for additional review. Encoding these conditions is what makes AI outputs actually useful instead of generic.
Guardrails. This is critical and consistently overlooked. ERG leaders sometimes surface sensitive issues - interpersonal conflicts, experiences with discrimination, concerns about company policy. The AI needs explicit instructions about what's in scope and what requires immediate escalation to HR or the program manager. Without guardrails, you're one conversation away from an AI giving advice it has no business giving.
Output templates. The workflow should produce a specific deliverable - an annual plan, a meeting brief, an impact report - in a format that's ready to share with stakeholders. Not a wall of text the leader then has to reformat.
Company-specific context. This is where the "one bot for all ERGs" approach breaks down hardest. Your company's governance model, fiscal calendar, executive sponsor structure, budget approval process, and brand voice are all unique. A workflow that works at a tech company with 8 globally distributed ERGs will misfire at a financial services firm with 12 centralized groups reporting to a single VP.
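One way to see how these building blocks fit together is to assemble them into a single system prompt. The sketch below is a simplified illustration - every field value is a placeholder a program manager would replace with their company's specifics, and a production Skill would carry far more detail in each section.

```python
# Hypothetical sketch: composing the building blocks above (role, phases,
# guardrails, output template, company context) into one system prompt.
# All values are placeholders, not real company data.

def build_skill_prompt(role, phases, guardrails, output_template, company_context):
    """Compose a structured system prompt from the workflow's parts."""
    sections = [
        f"ROLE: {role}",
        "PHASES (complete in order, one at a time):",
        *[f"  {i}. {p}" for i, p in enumerate(phases, start=1)],
        "GUARDRAILS:",
        *[f"  - {g}" for g in guardrails],
        f"OUTPUT TEMPLATE: {output_template}",
        f"COMPANY CONTEXT: {company_context}",
    ]
    return "\n".join(sections)

prompt = build_skill_prompt(
    role="Experienced ERG annual-planning advisor",
    phases=[
        "Establish context: group age, size, and last year's plan",
        "Decide: mission check, then 3-4 strategic pillars",
        "Build: quarter-by-quarter plan and budget request",
    ],
    guardrails=[
        "Escalate discrimination or interpersonal-conflict topics to HR",
        "Flag budget requests above the approval threshold for review",
    ],
    output_template="One-page annual plan ready to share with stakeholders",
    company_context="Fiscal year starts Feb 1; ERGs report to a central council",
)
```

Notice that the role comes first and the company context comes last: the role shapes every response, while the context grounds the final deliverable in your specific environment.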
Why This Can't Be a Side Project
Here's where program managers often get stuck: building these workflows well requires deep ERG operational expertise and structured AI prompt engineering, and those two skill sets almost never live in the same person.
Your internal AI team can build the infrastructure. They can set up the custom GPT, configure the Copilot agent, manage the deployment. But they don't know the difference between a first-year ERG's planning needs and a mature group's. They don't know that executive sponsor alignment has a political dimension that can't be templated generically. They don't know which guardrails are essential and which are nice-to-have.
Meanwhile, your ERG leaders and program managers understand all of that intuitively - but most don't have experience translating operational expertise into structured AI system prompts with branching logic and conditional outputs.
This gap is why most internal ERG AI projects plateau at the "knowledge bot" stage. The people who know ERGs can't build the workflows, and the people who can build them don't know ERGs.
Where This Is Heading
The companies getting the most value from AI in their ERG programs aren't the ones with the fanciest chatbots. They're the ones treating AI as an execution layer: a set of purpose-built workflows that encode operational expertise into repeatable, high-quality processes.
This approach solves several problems at once. It reduces the tactical burden on program managers, who can redirect their time from fielding repetitive questions to strategic work. It raises the floor on execution quality, so the newest ERG leader produces work at a level closer to the most experienced one. It creates visible, measurable AI adoption that HR leaders can point to when asked "how is AI being used in your function?" And it scales - once you've built a workflow for annual planning, every ERG benefits, and improvements compound over time.
The shift from "AI as knowledge retrieval" to "AI as structured execution" isn't unique to ERGs. It's happening across every function. But ERGs are a particularly strong fit because the work is highly repeatable across groups and across companies, the volunteer leaders need more structured support than they typically get, and the domain expertise required to build good workflows is concentrated in a small number of experienced practitioners.
If you're a program manager reading this and thinking about your own AI strategy, start with one high-impact workflow - annual planning or event execution are good entry points. Map out the questions you'd ask if you were coaching a leader through it. Note where the process branches based on context. Define what the final output should look like. Then work with your AI team to encode that into a system prompt, not a document dump.
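That mapping exercise can be as simple as a table of questions and branch conditions. Here is a hypothetical sketch for annual planning - the questions and branch rules are illustrative, and the rendering function just shows how such a map becomes numbered instructions inside a system prompt.

```python
# Hypothetical sketch: mapping a coaching process before handing it to an
# AI team. Questions and branch conditions are illustrative examples.

annual_planning_map = [
    {"ask": "How old is the ERG and how many members does it have?",
     "branch": "first-year groups skip pillar selection; start with mission + 2 goals"},
    {"ask": "Does the mission still match member needs and company strategy?",
     "branch": "if no, run a mission refresh before any tactics"},
    {"ask": "Which 3-4 strategic pillars will you commit to?",
     "branch": "more than 4 pillars -> push back and ask what to cut"},
    {"ask": "What budget do you need, by quarter?",
     "branch": "over threshold -> flag for program manager review"},
]

def as_prompt_fragment(step_map):
    """Render the question map as numbered instructions for a system prompt."""
    lines = []
    for i, step in enumerate(step_map, start=1):
        lines.append(f"{i}. Ask: {step['ask']}")
        lines.append(f"   Branch: {step['branch']}")
    return "\n".join(lines)
```

A program manager can draft the map; the AI team only needs to translate it faithfully, which is a far smaller ask than inventing the process themselves.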
And if you'd rather not build it from scratch - that's the exact problem we're building Verbate's ERG Skills Library to solve. We've encoded cross-company ERG best practices into deployable AI workflows and we work with companies to customize and deploy them into their existing AI environments. Let's chat if you'd like to see it in action.