Building a Plugin Marketplace for AI-Native Workflows

Dev.to / 4/11/2026


Key Points

  • The author argues that many AI coding/workflow tools are packaged as monoliths and struggle when teams need role- and domain-specific workflows, since loading everything into each session can dilute context and harm output quality.
  • To address context pollution, onboarding friction, and maintenance coupling, they decomposed a previously monolithic Claude Code setup into nine independently installable, composable plugins by role.
  • The resulting plugin marketplace architecture separates core shared capabilities (persona/auth/storage/rules) from domain plugins such as client research, meeting capture/transcription, SOW drafting, proposal work, and deal/engagement operations.
  • The piece focuses on practical lessons learned from shipping the modular system for a consulting presales environment where different users need different toolchains and independent updates.

Most AI coding tools ship as monoliths. One big system prompt, one set of capabilities, one-size-fits-all. That works fine for general software engineering. It falls apart the moment you need domain-specific workflows that vary by role, by team, and by engagement.

I build presales systems at Presidio — client research, SOW generation, meeting capture, deal operations. The kind of work where a solutions architect needs different tools than a deal desk analyst, and where loading everything into every session wastes tokens and degrades output quality.

So I built a plugin marketplace for Claude Code. Nine modular plugins, independently installable, composable by role. Here's what I learned shipping it to a team.

Why Plugins Instead of One Big Workspace

The original system was a monolith — 56 commands, 14 skills, 36 tools, all loaded into every session. It worked for me as the sole operator. The moment I tried to share it with other consultants, three problems surfaced immediately:

Context pollution. A consultant doing SOW work doesn't need the meeting transcription pipeline, the competitive intel framework, or the deal management commands cluttering their context window. Every irrelevant token degrades the model's attention on the task at hand.

Onboarding friction. "Clone this repo, read 200 lines of docs, configure 18 environment variables, and learn 56 commands" is not an adoption strategy. People need to install what they need and ignore what they don't.

Maintenance coupling. A bug fix in the meeting recorder shouldn't require every user to pull an update that also touches their SOW pipeline. Independent versioning matters when your users are busy consultants, not developers.

The fix was decomposition. Break the monolith into plugins that can be installed, updated, and removed independently.

The Architecture That Emerged

fulcrum-plugins/
├── plugins/
│   ├── core/          # Foundation: persona, auth, OneDrive, shared rules
│   ├── intel/         # Client research pipeline
│   ├── meeting/       # Silent recording + transcription
│   ├── discovery/     # Call prep, qualification, opportunity analysis
│   ├── sow/           # SOW drafting, review, QA, redlines
│   ├── proposal/      # Solution decks and pricing workbooks
│   ├── ops/           # Daily briefing, weekly review, context switching
│   ├── engage/        # Deal management, delivery handoff
│   └── util/          # Freshness audits, triage, workspace maintenance
├── shared/            # Frameworks and templates used across plugins
└── docs/

Each plugin is a self-contained directory with commands, scripts, rules, and hooks. One required dependency — core — provides identity, authentication, and the shared file system. Everything else is optional.

Installation is one command: /plugin install sow@fulcrum-plugins. Uninstall is equally clean. No cross-plugin imports, no shared state beyond what core provides.
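A minimal sketch of what an install step like that might do under the hood. The paths, the copy-based install, and the `core` check are assumptions for illustration, not Claude Code's actual internals:

```python
from pathlib import Path
import shutil

# Hypothetical locations -- the real marketplace layout and install
# destination are assumptions, not documented paths.
MARKETPLACE = Path("fulcrum-plugins/plugins")
INSTALL_DIR = Path.home() / ".claude" / "plugins"

def resolve_install(plugin: str, installed: set[str]) -> list[str]:
    """Resolve an install like `/plugin install sow@fulcrum-plugins`.

    Returns the plugins to copy, prepending the one required
    dependency, `core`, if it is missing. No other inter-plugin
    dependencies exist, so resolution is trivial.
    """
    to_install = []
    if "core" not in installed and plugin != "core":
        to_install.append("core")
    if plugin not in installed:
        to_install.append(plugin)
    return to_install

def copy_plugin(name: str) -> None:
    # Each plugin is a self-contained directory (commands, scripts,
    # rules, hooks), so install is a straight copy and uninstall
    # is a straight delete. No cross-plugin imports to untangle.
    shutil.copytree(MARKETPLACE / name, INSTALL_DIR / name,
                    dirs_exist_ok=True)
```

Because plugins share no state beyond what `core` provides, uninstall really is just removing the directory.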

Recommended Sets Over Required Bundles

Rather than prescribing one configuration, the marketplace offers recommended sets:

  • Minimal (SOW-focused): core + sow — for consultants who only write statements of work
  • Meeting-heavy: core + meeting + discovery — for consultants running client calls all day
  • Full presales: all nine plugins — for people like me who touch every phase

This respects how people actually work. Nobody uses every tool every day. The consultant who installs just core + sow gets a focused, fast experience. The one who installs everything gets the full operating system. Both are first-class citizens.
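One way to encode those recommended sets, sketched in Python. The set names mirror the article; the data structure itself is illustrative, not the marketplace's actual config format:

```python
# Recommended plugin sets keyed by profile. Every set includes the
# required `core` plugin; everything else is opt-in.
RECOMMENDED_SETS = {
    "minimal":       ["core", "sow"],
    "meeting-heavy": ["core", "meeting", "discovery"],
    "full-presales": ["core", "intel", "meeting", "discovery", "sow",
                      "proposal", "ops", "engage", "util"],
}

def install_commands(profile: str) -> list[str]:
    """Expand a profile into the `/plugin install` commands to run."""
    return [f"/plugin install {p}@fulcrum-plugins"
            for p in RECOMMENDED_SETS[profile]]
```

A new consultant picks a profile and runs the resulting commands; nothing stops them from adding or removing individual plugins later.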

Namespace Isolation Matters More Than You Think

When I first decomposed the monolith, every plugin had commands like /draft, /review, /status. Collisions everywhere. The fix was namespace syntax: /sow:draft, /intel:company-intel, /ops:weekly-review.

This felt verbose at first. It turned out to be the single most important design decision. Namespaces do three things:

  1. Eliminate ambiguity. /draft could mean a SOW draft, a proposal draft, or an email draft. /sow:draft is unambiguous.
  2. Enable discovery. A new user can type /sow: and see every SOW command without memorizing a list. The namespace IS the documentation.
  3. Preserve plugin independence. Two plugin authors can independently create a status command without coordinating. /engage:status and /ops:status coexist without conflict.
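The collision and discovery behavior can be modeled as a flat registry keyed by `namespace:command`. This is a toy model, not Claude Code's actual dispatcher:

```python
class CommandRegistry:
    """Toy model of namespaced slash commands."""

    def __init__(self):
        self._commands: dict[str, str] = {}

    def register(self, namespace: str, name: str, description: str) -> None:
        # Keys are `namespace:name`, so two plugins can both ship a
        # `status` command without ever coordinating.
        self._commands[f"{namespace}:{name}"] = description

    def discover(self, namespace: str) -> list[str]:
        # Typing `/sow:` lists every SOW command; the namespace
        # doubles as the documentation.
        prefix = f"{namespace}:"
        return sorted(k for k in self._commands if k.startswith(prefix))

reg = CommandRegistry()
reg.register("sow", "draft", "Draft a statement of work")
reg.register("sow", "review", "Review an existing SOW")
reg.register("engage", "status", "Deal status summary")
reg.register("ops", "status", "Daily operational status")
```

Note that `engage:status` and `ops:status` occupy distinct keys, so neither plugin author had to know the other existed.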

I renamed all plugins in v1.1 specifically to get shorter prefixes — sow-pipeline became sow, research-intel became intel, engagement-lifecycle became engage. Every keystroke matters when you're typing these dozens of times a day.

Shared Lessons: Institutional Memory Without a Database

The feature I'm most proud of has zero lines of application code. It's a folder on OneDrive.

When a consultant learns something the hard way — a Salesforce field that's locked to AMs, a client's preferred meeting format, a compliance requirement that isn't documented anywhere — they drop a markdown file into a shared folder:

shared-lessons/jane-doe/2026-04-09-sfdc-stage-lock.md

At session start, a hook script scans all lesson files from the team, deduplicates by title, and renders them into a cached rule that loads into every session. The team's collective knowledge grows without anyone maintaining a wiki, attending a knowledge-sharing meeting, or filing a ticket.

The constraints are deliberate: one lesson per file, no client-confidential data, attribution required, dates required (stale lessons get pruned). The format is simple enough that non-developers can contribute. The mechanism — a OneDrive folder synced through Microsoft Teams — requires no new tools or logins.

This is the pattern I keep returning to: use infrastructure people already have. OneDrive, git, markdown files. Not a custom database, not a new SaaS tool, not an API integration. The best systems are the ones that disappear into workflows people already follow.

What Didn't Work

Plugin dependencies. The original design had plugins declaring dependencies on each other — sow depends on intel, engage depends on discovery. In practice, this created install-order headaches and made it harder to reason about what was loaded. The v1.3 architecture dropped all inter-plugin dependencies. Each plugin is fully self-contained. If sow needs client context, it reads the client's context file directly — it doesn't import a function from intel.

Automatic plugin updates. The original design auto-pulled updates on session start. In practice, this broke people mid-workflow when a command signature changed. The fix was making updates explicit — /core:update when you're ready, with a statusline indicator showing when updates are available. Users update on their own schedule, not yours.
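The explicit-update model reduces to a version comparison feeding a statusline flag. A sketch, assuming plain SemVer strings (the statusline wiring and version file locations are not specified in the article):

```python
def update_available(local_version: str, remote_version: str) -> bool:
    """True if the marketplace version is newer than the local one.

    A statusline hook can use this to show an indicator without
    ever pulling changes automatically; the user runs /core:update
    on their own schedule.
    """
    def parse(v: str) -> tuple[int, int, int]:
        major, minor, patch = v.strip().split(".")
        return int(major), int(minor), int(patch)
    # Tuple comparison gives correct SemVer ordering for plain
    # major.minor.patch versions (no pre-release tags).
    return parse(remote_version) > parse(local_version)
```

Because all plugins share one marketplace version, a single comparison covers the whole install; there is no per-plugin matrix to reconcile.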

Granular versioning. I initially tried to version each plugin independently. After three days of version coordination across nine plugins, I switched to a single marketplace version: all plugins share the one version recorded in VERSION. SemVer at the marketplace level, not the plugin level. Simpler to reason about, simpler to communicate ("update to 1.3.1"), simpler to tag.

The Adoption Signal

The most telling metric isn't installs — it's which recommended set people choose. When most of your users install the minimal set and gradually add plugins over weeks, you've built something that earns trust incrementally. When they install everything on day one and complain it's overwhelming, you've just shipped a monolith with extra steps.

So far, the pattern is healthy. New consultants start with core + sow (the job requirement), then add discovery after their first client call, then meeting after they see someone else's AI-generated meeting notes. Pull, not push.

Plugin architectures aren't new. What's new is applying them to AI agent context — treating the model's knowledge, rules, and capabilities as composable modules rather than a static system prompt. The tools exist today in Claude Code's plugin system. The hard part isn't the architecture. It's the discipline to decompose.