CoLab IA — An Augmented Workspace for Community Building

When three people share an AI assistant, the assistant needs to know how the team works — not just what it is asked to do. The system that holds the standards is the system that scales the team.


The challenge

Community organisations operate with small teams, limited budgets, and no dedicated technical staff. They need to publish content, promote events, maintain a website, document decisions, and communicate across channels — all tasks that traditionally require either technical expertise or expensive tooling.

The standard answers create dependency. A CMS requires someone who understands it. An agency builds something the team cannot maintain. “No-code” tools still demand a dedicated operator and lock content into a vendor’s interface.

But the deeper problem is alignment. When multiple people create content across multiple channels, quality drifts. Tone shifts. Messaging fragments. The more people contribute, the less coherent the output — unless the team invests heavily in editorial process, which small teams cannot afford.

The question is not “how do we publish a website.” It is: how does a small team produce consistent, high-quality work across channels without technical dependency, editorial bottlenecks, or vendor lock-in?


The approach

The answer is architectural, not procedural. Instead of training people on tools, build a workspace that carries the standards inside it.

CoLab IA runs on what I call an agentic vault — a structured workspace built in Obsidian where the file system itself encodes the team’s workflows, templates, style rules, and decision-making protocols. Any AI assistant (Claude, ChatGPT, Gemini, Cursor, or equivalent) can read the workspace and operate within its rules. The team does not prompt from scratch. They describe what they need, and the system routes to the right workflow.

The routing layer

A single file — AGENTS.md — serves as the entry point for every AI interaction. It contains:

  • Intent recognition — a routing table that maps requests (“create an event,” “publish an article,” “write a LinkedIn post”) to the correct playbook
  • Execution protocol — confirm before acting, work step by step, request information one piece at a time
  • Structural rules — what goes where, what is private, what is public, what must follow the style guide

Any team member opens any AI tool, points it at AGENTS.md, and the assistant knows how to operate inside this workspace. No onboarding. No prompt engineering. No tribal knowledge.
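To make the routing concrete, here is an illustrative sketch of what such a file could look like — the paths below match the vault layout described later, but the table contents and wording are hypothetical, not the actual AGENTS.md:

```markdown
# AGENTS.md — entry point for all AI assistants

## Intent recognition
| Request pattern           | Playbook                        |
|---------------------------|---------------------------------|
| "create an event"         | _ai/playbooks/event.md          |
| "promote an event"        | _ai/playbooks/event-promo.md    |
| "publish an article"      | _ai/playbooks/publish.md        |
| "write a meeting recap"   | _ai/playbooks/meeting-recap.md  |

## Execution protocol
1. Confirm the intended playbook with the user before acting.
2. Work step by step; request information one piece at a time.
3. Apply the rules in _ai/style/ to every output.

## Structural rules
- Drafts and internal notes live under _private/ — never publish them.
- Published content goes under public/ and must follow the style guide.
```

The point of the pattern is that the file is plain markdown: any assistant that can read a file can follow it.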

Templates with embedded frameworks

Every content type starts from a template that enforces a proven structure:

| Content       | Framework  | What it enforces                                   |
|---------------|------------|----------------------------------------------------|
| Event page    | AIDA       | Benefit-led structure, conversion-optimised        |
| LinkedIn post | PAS        | Hook under 200 characters, algorithm-aware format  |
| Newsletter    | BAB        | Before/after narrative, single CTA                 |
| Meeting recap | Structured | Decisions, actions, owners, deadlines              |
| Article       | Editorial  | Title, lead, sections, sources, author attribution |

These are not suggestions. They are starting points with inline guidance — the template tells the AI (and the human) what each section should contain, how long it should be, and what patterns to avoid.
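A template with inline guidance could look like the following — a hypothetical sketch of the LinkedIn template, with the PAS structure and per-section instructions carried as comments (the actual section wording and limits are assumptions):

```markdown
---
type: linkedin-post
framework: PAS
---
<!-- HOOK — max 200 characters. Name the problem the reader feels. -->

<!-- AGITATE — 2–3 sentences. What the problem costs them today. -->

<!-- SOLVE — what the event or article offers. One concrete benefit. -->

<!-- CTA — single link, single action. No hashtag walls. -->
```

Because the guidance travels inside the file, both the AI and a human editor see the same instructions at the moment of writing.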

Multi-channel synchronisation

The most powerful workflow is event promotion. One event brief generates three channel-specific outputs:

Event brief (single input)
    |
    +-- Event page (AIDA) -> conversion
    +-- LinkedIn post (PAS) -> reach
    +-- Newsletter (BAB) -> action

Same message. Same social proof. Same CTA. Tone adapted to each channel’s physics. This runs from a single playbook — event-promo.md — that any AI assistant can follow.
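A playbook of this kind could read roughly as follows — an illustrative sketch only; the brief path and step wording are assumptions, though the template names match the vault layout:

```markdown
# Playbook: event-promo

Input: one event brief (e.g. a brief.md under _private/03_Evenements/)

Steps:
1. Read the brief; confirm title, date, venue, and CTA with the user.
2. Generate the event page from _ai/templates/event-public.md (AIDA).
3. Generate the LinkedIn post from _ai/templates/linkedin.md (PAS).
4. Generate the newsletter section from _ai/templates/newsletter.md (BAB).
5. Check all three outputs against _ai/style/ before presenting for review.

Invariant: same core message, same social proof, same CTA in every output.
```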

Shared standards

A centralised style guide governs voice, tone, naming conventions, formatting, and content length across all outputs. When three people create content through three different AI tools, the style guide is what keeps the output coherent.

Design tokens, file naming conventions, and storage rules are encoded in the workspace itself — not in a document someone has to remember to read, but in the structure the AI consults on every interaction.
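For a sense of what "encoded in the workspace" means in practice, a style file could look like this — every rule below is a hypothetical example, not the community's actual guide:

```markdown
# _ai/style/voice.md (illustrative excerpt)

- Voice: direct, warm, first-person plural.
- Tone: confident, never hype; no exclamation marks in headings.
- Naming: kebab-case filenames; YYYY-MM-DD date prefixes for events.
- Length: respect each template's stated limits; shorter beats longer.
```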


The build

The workspace: An Obsidian vault with a full agentic architecture:

colab-ia/
+-- AGENTS.md              <- routing for all AI interactions
+-- _ai/
|   +-- playbooks/         <- event, event-promo, publish,
|   |                         meeting-recap
|   +-- templates/         <- article, event, event-public,
|   |                         linkedin, newsletter, recap
|   +-- style/             <- voice, tone, naming, formatting
|   +-- tools/             <- deployment procedures
+-- _private/              <- strategy, governance, meetings,
|                             events, design assets, drafts
|   +-- 00_Strategie/
|   +-- 01_Gouvernance/
|   +-- 02_Pilotage/
|   +-- 03_Evenements/
|   +-- ...
+-- public/                <- published content (Quartz source)
|   +-- articles/
|   +-- communaute/
|   +-- ...
+-- .github/workflows/     <- automated build + FTP deploy

The publishing pipeline:

  • Obsidian for authoring (markdown files)
  • Git and GitHub for version control and collaboration
  • GitHub Actions triggers on push to main
  • Quartz builds the static HTML/CSS
  • FTP deploys to Infomaniak (Swiss sovereign hosting)
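The pipeline above can be sketched as a single GitHub Actions workflow — this is an illustrative reconstruction under stated assumptions (a Node-based Quartz build, the `SamKirkland/FTP-Deploy-Action` action, and secret names of my choosing), not the project's actual file:

```yaml
# .github/workflows/deploy.yml — illustrative sketch
name: Build and deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx quartz build          # renders the markdown source to static HTML
      - name: Deploy over FTP to Infomaniak
        uses: SamKirkland/FTP-Deploy-Action@v4.3.5
        with:
          server: ${{ secrets.FTP_SERVER }}       # assumed secret names
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          local-dir: ./public/
```

The design choice worth noting: every credential lives in repository secrets, so the non-technical founder never touches hosting configuration — pushing to main is the entire deployment interface.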

The team model: Three co-founders. One non-technical founder uses Cursor as a visual editing interface for the markdown files. When they modify content and push to GitHub, the site rebuilds and deploys automatically. Effectively a WYSIWYG experience — edit text, push, the site reflects the change. No CMS login. No build step to understand.

The community: CoLab IA — a francophone community for collaborative AI learning, based in Lausanne. 80 members and growing with each event. Fifth event scheduled.

The site: colabia.ch — live for one month, updated weekly.

LLM-agnostic by design: The workspace works with Claude Code, Cursor, ChatGPT, Gemini, or any LLM. Adapter files (.cursorrules, CLAUDE.md) point to AGENTS.md. No vendor lock-in at any layer.
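An adapter file can be as small as a pointer — a hypothetical sketch of what a `.cursorrules` or CLAUDE.md adapter might contain (the real files may differ):

```markdown
<!-- CLAUDE.md / .cursorrules — adapter (illustrative) -->
Read AGENTS.md at the repository root before any task.
Follow its routing table, execution protocol, and structural rules.
```

Because each tool's native entry point only delegates, the standards live in exactly one place.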


The impact

Operational:

  • Three co-founders maintain the site and produce content independently — no external dependency
  • A non-technical co-founder publishes and updates the site without technical intervention
  • One event brief generates a complete multi-channel promotion (event page, LinkedIn post, newsletter) in a single workflow
  • Site has been live for one month with weekly updates

Community:

  • 80 members, growing with each event
  • Five events produced in four months

The broader point: This workspace is not a website solution. It is a model for how small teams will work with AI — not by prompting tools one task at a time, but by encoding their standards, workflows, and institutional knowledge into a structure that any AI can read and operate within. The team scales without the output degrading.