Community Action Initiative

A Community Initiative for Fair MCP Discovery

We're building transparency into MCP server discovery: consolidating industry best practices into quality signals, then independently tracking which platforms actually use them. Community-driven, platform-neutral, creating accountability through visibility.

The Future of MCP Discovery: Push Recommendations

MCP agents will soon recommend servers proactively based on your intent—not just let you browse a catalog. When you say "help me track that bug," the agent will suggest the right server automatically.

This is already starting. But without standards, it's a black box.

Read: Pull vs Push Discovery: Why Transparency Matters Now

Understanding this shift is critical to why we're building accountability measures today.

We're at a Crossroads

The MCP Registry tells us what servers exist. But who decides which servers get recommended?

Without transparency, we risk:

  1. Black-box algorithms developers can't understand
  2. SEO-style gaming and manipulation
  3. Different optimization rules for every platform
  4. Quality taking a backseat to tricks
  5. Small developers locked out

Developers deserve a predictable path to discovery across all agents.

The industry already knows how to measure quality. Someone just needs to consolidate those practices for MCP—and hold platforms accountable for using them.


Two Parts: Best Practices + Accountability

We're not inventing new standards. We're consolidating what works, then doing the hard work of measuring who's actually being transparent.

Part 1

Quality Signals Specification

Consolidating Industry Best Practices

What we're doing:

The industry already knows how to measure quality—SaaS platforms track uptime, APIs measure performance, security teams scan vulnerabilities. We're consolidating these proven practices into a unified specification for MCP servers.

Drawing from existing standards:

  • Uptime & reliability: SLA frameworks (AWS 99.95%, Google Cloud, Azure)
  • Performance: API response time standards, Web Vitals (p50, p95, p99)
  • Security: CVE scoring, OWASP guidelines, SOC2/ISO frameworks
  • Stability: Error budgets, SLO best practices from SRE
  • Intent classification: Schema.org, OpenAPI, existing ontologies

Our contribution:

Not reinventing the wheel—packaging existing best practices into a format that works for MCP sub-registries.

The specification includes:

Quality Metrics (How good is it?)
  • Reliability: Uptime %, MTBF, incident count
  • Performance: Response time (p50/p95/p99), throughput
  • Stability: Error rates, timeout rates, retry success
  • Security: Vulnerability scans, compliance certifications
  • Adoption: Active users, ratings, maintenance activity

Intent Classification (What is it FOR?)
  • Primary/secondary categories (e.g., development.version_control)
  • User intents it handles (e.g., "track bugs", "create branches")
  • Specific capabilities (read/write/search operations)
  • Keywords for semantic matching

Format: Fits in existing sub-registry metadata. Optional but encouraged. Open spec via RFC process.
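
To make the format concrete, here is a minimal sketch of what such metadata could look like, expressed as TypeScript types. The field names (quality, intent, uptimePercent, and so on) are illustrative assumptions, not part of any published MCP specification:

// Hypothetical shape for the proposed quality-signal metadata.
// Field names are illustrative; the actual schema would be defined through the RFC process.
interface QualitySignals {
  reliability: { uptimePercent: number; incidentsLast90Days: number };
  performance: { responseTimeMs: { p50: number; p95: number; p99: number } };
  stability: { errorRate: number; timeoutRate: number };
  security: { lastVulnScan: string; certifications: string[] };
  adoption: { activeUsers: number; lastCommit: string };
}

interface IntentClassification {
  categories: string[];                            // e.g., "development.version_control"
  intents: string[];                               // user intents the server handles
  capabilities: ("read" | "write" | "search")[];   // supported operation types
  keywords: string[];                              // terms for semantic matching
}

// Example block a server maintainer might publish alongside its registry entry.
const exampleServer: { quality: QualitySignals; intent: IntentClassification } = {
  quality: {
    reliability: { uptimePercent: 99.95, incidentsLast90Days: 1 },
    performance: { responseTimeMs: { p50: 120, p95: 450, p99: 900 } },
    stability: { errorRate: 0.002, timeoutRate: 0.001 },
    security: { lastVulnScan: "2026-01-15", certifications: ["SOC2"] },
    adoption: { activeUsers: 3200, lastCommit: "2026-01-28" },
  },
  intent: {
    categories: ["development.version_control"],
    intents: ["track bugs", "create branches"],
    capabilities: ["read", "write", "search"],
    keywords: ["git", "issues", "pull requests"],
  },
};

Because this is plain structured data, it could sit alongside existing sub-registry metadata without changing how servers are registered.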

Part 2

Transparency Tracker

Our Unique Contribution

What we're building:

The real work: independently tracking and reporting on platform transparency.

This is what's missing:

Anyone can write a spec. No one is doing the persistent accountability work of:

  • Monitoring which platforms adopt quality signals
  • Measuring transparency of their processes
  • Documenting observable behavior
  • Publishing public scorecards
  • Updating quarterly
  • Maintaining independence

What we track (Letter grades A-F):

Standards Adoption (30 points)

  • Uses community-proposed quality signals
  • Publishes servers' quality scores
  • Enables fair comparison across platforms

Process Transparency (70 points)

  • Documents weighting philosophy
  • Explains inclusion/exclusion criteria
  • Communicates algorithm changes in advance
  • Provides developer feedback mechanisms
  • Has appeals process

Our methodology:

Based on publicly observable information and direct platform engagement. Transparent scoring rubric. Appeals process. Updated quarterly.
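
As a rough illustration of how the rubric above could roll up into a letter grade, here is a short sketch. The grade cutoffs and field names are assumptions made for this example, not the published methodology:

// Illustrative scoring roll-up for the transparency tracker.
// Point weights (30 + 70) follow the rubric above; grade cutoffs are assumed.
interface PlatformAssessment {
  standardsAdoption: number;    // 0-30 points
  processTransparency: number;  // 0-70 points
}

function letterGrade(a: PlatformAssessment): string {
  const total = a.standardsAdoption + a.processTransparency; // max 100
  if (total >= 90) return "A";
  if (total >= 80) return "B";
  if (total >= 70) return "C";
  if (total >= 60) return "D";
  return "F";
}

// A platform that adopts the quality signals but lacks an appeals process might score:
console.log(letterGrade({ standardsAdoption: 25, processTransparency: 50 })); // "C"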

This is our unique value:

Not the specification (that's meant to be widely adopted). The ongoing accountability work—the measurement, the credibility, the sustained commitment.


Consolidation + Accountability

Part 1 is about consensus:

"Here's what quality measurement already looks like. Let's use it for MCP."

  • Lower barrier: Not asking platforms to invent new metrics
  • More credible: Standing on proven industry practices
  • Easier adoption: Platforms already understand these concepts

Part 2 is about persistence:

"Someone needs to measure and report. We're doing that work."

  • Higher value: Ongoing commitment, not one-time effort
  • Harder to replicate: Requires sustained independence
  • Creates pressure: Visibility drives behavior change

Together they create accountability:

  • The specification makes "fair play" concrete
  • The tracker shows who's playing fair
  • Developers choose transparent platforms
  • Users trust transparent recommendations
  • Platforms compete on openness, not opacity

Three principles:

  1. Consolidate measurement, don't mandate decisions
     Like nutrition labels—everyone measures calories the same way, but you choose your diet.

  2. Transparency over uniformity
     Platforms keep their algorithms. They just explain their priorities.

  3. Accountability through visibility
     We're measuring transparency either way. Platforms can help us get it right, or we'll assess what we observe.


What We're Building

Q1 2026

The Specification

  • Draft RFC consolidating best practices
  • Quality metrics + intent taxonomy
  • Reference implementation (open source)
  • Community feedback period
  • Testing with early adopters

Q2 2026

The Tracker

  • Launch transparency scorecard
  • Rate all major MCP platforms
  • Publish first scores publicly
  • Establish quarterly update cadence

Q3 2026

Drive Adoption

  • Platforms respond to scores
  • Developers optimize for known signals
  • Users choose transparent platforms
  • Ecosystem culture shifts

Q4 2026

New Normal

  • Quarterly updates continue
  • Specification becomes widely adopted
  • Transparent discovery = baseline expectation
  • We avoided the SEO nightmare

Open Source, Community-Driven

For Platforms

Re: The Specification

  • Review our consolidation of best practices
  • Share your quality measurement approaches
  • Help refine the proposal
  • Implement it if it makes sense

Re: The Tracker

  • Share documentation about your processes
  • Help us understand your transparency efforts
  • Get rated accurately rather than assessed from the outside
  • Improve your score through openness

For Developers

Re: The Specification

  • Give feedback on quality metrics
  • Help refine intent taxonomy
  • Test reference implementations
  • Suggest additional best practices

Re: The Tracker

  • Check platform transparency scores
  • Ask platforms: "Why aren't you more transparent?"
  • Choose platforms that respect openness
  • Vote with your MCP server deployments

For the Community

  • Review the RFC draft
  • Spread the word
  • Contribute to discussions
  • Help refine standards

Stay Up to Date

Get notified about RFC updates, specification releases, and transparency tracker launches.


