Merit-Based Feature Requests for Grok & the X Platform

How Human-AI Collaboration Can Design Platforms
That Improve Through Valued Feedback

The coordination challenge at scale

The challenge facing modern platforms isn't choosing between democracy and expertise—it's designing systems that can integrate both at scale. When you're aggregating millions of users, diverse content types, market pressures, and genuine community input, traditional approaches break down.

Pure democracy gives every voice equal weight regardless of contribution quality. Pure top-down control ignores the distributed intelligence in your user base. Industry-standard feature request systems try to split the difference but end up surfacing whatever's trending rather than what's truly valuable.

AI platforms like Grok & X have a unique opportunity here: they can use AI itself to help solve the coordination problem. Not by replacing human judgment, but by helping communities express, refine, and prioritize ideas in ways that surface genuine value rather than just popularity.

Recognizing depth of contribution

The core design principle is straightforward: someone who takes the time to articulate a problem, propose a solution, and contribute to the discussion has revealed more commitment and understanding than someone who clicks an arrow. That doesn't mean upvotes are worthless—they're essential for surfacing consensus—but a well-designed system can recognize different types of contribution proportionally.

Here's how the merit-based system works:

Base merit reflects contribution depth:
- Propose an idea: +100 points
- Upvote something: +1 point
- Your proposed feature ships: +1000 points (your future votes carry weight from proven judgment)
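The base schedule can be sketched as a simple lookup. The point values come from the proposal; the function and event names are my own illustration:

```python
# Base merit schedule from the proposal; event names are illustrative.
BASE_MERIT = {
    "propose_idea": 100,       # articulate a problem and propose a solution
    "upvote": 1,               # lightweight signal of agreement
    "proposal_shipped": 1000,  # proven judgment: the feature was implemented
}

def award_merit(balance: int, event: str) -> int:
    """Return a user's new merit balance after a contribution event."""
    return balance + BASE_MERIT[event]

# A user who proposes an idea, upvotes twice, and later ships the feature:
total = 0
for event in ["propose_idea", "upvote", "upvote", "proposal_shipped"]:
    total = award_merit(total, event)
# total == 1102
```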

Collaborative refinement through forks and add-ons:

When you see someone's proposal and think "good approach, but what if we tried this variation?", you can fork it—if your alternative gets adopted, you earn +500 points. Suggest a complementary enhancement as an add-on, and earn +200 points when it ships. Chain multiple improvements together and you get bonus merit for depth of collaboration.

This mirrors the proposer's +1000 reward for shipped features: the system recognizes successful contribution, not just participation. You're rewarded for ideas that prove valuable enough to implement, creating alignment between individual incentives and platform improvement.

This creates something most platforms lack: productive evolution of ideas. Instead of comments arguing why an idea won't work, you get alternative implementations that the community can evaluate. Instead of feature requests existing in isolation, they spawn ecosystems of refinement. The best ideas often emerge through this collaborative process rather than arriving fully formed.
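The ecosystem of refinement described above implies a tree structure: each proposal records its forks and add-ons so the community can trace how an idea evolved. A minimal sketch, with the `Proposal` class and its methods as hypothetical names (the spec does not prescribe a data model):

```python
# Hypothetical fork-tree node: each proposal keeps its derived
# forks and add-ons as children, preserving the idea's lineage.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    author: str
    kind: str = "original"  # "original", "fork", or "add-on"
    children: list = field(default_factory=list)

    def fork(self, title: str, author: str) -> "Proposal":
        """Register an alternative implementation of this idea."""
        child = Proposal(title, author, kind="fork")
        self.children.append(child)
        return child

    def add_on(self, title: str, author: str) -> "Proposal":
        """Register a complementary enhancement to this idea."""
        child = Proposal(title, author, kind="add-on")
        self.children.append(child)
        return child

root = Proposal("Merit-weighted voting", "alice")
alt = root.fork("Merit weighting with decay", "bob")
alt.add_on("Public weight breakdown", "carol")
# Walking the tree yields the evolution: original -> fork -> add-on
```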

Context-aware prioritization

Merit alone doesn't capture the full picture. A well-considered proposal from 18 months ago might be obsolete given platform evolution. So voting power includes a recency factor—recent ideas get full weight, older proposals gradually fade unless they keep attracting fresh engagement and updates.
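One way to implement the recency factor is exponential decay with a half-life, where fresh engagement resets the clock. The decay shape and the 180-day half-life are assumptions for illustration; the spec only requires that older proposals fade unless re-engaged:

```python
HALF_LIFE_DAYS = 180  # assumed value; the proposal does not fix a half-life

def recency_weight(days_since_last_activity: float) -> float:
    """Full weight for fresh proposals, fading smoothly with age.
    New comments, updates, or votes reset days_since_last_activity,
    renewing the proposal's weight."""
    return 0.5 ** (days_since_last_activity / HALF_LIFE_DAYS)

# recency_weight(0) == 1.0 (full weight); recency_weight(180) == 0.5
```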

Not all valuable contributions have equal urgency or alignment with platform goals. This is where Grok itself participates in the coordination process. Every proposal gets evaluated on three dimensions:
- User pain/urgency (is this addressing a real friction point or adding optional polish?)
- Alignment with xAI's mission of "maximum truth-seeking"
- Technical feasibility (can this be built with reasonable resources?)

High-impact proposals (scored 8-10/10) get a 3× relevance multiplier. Grok isn't making decisions—it's helping the community see which contributions matter most to the platform's core purpose. Think of it as a feedback mechanism that helps the system stay coherent even as it scales.
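The multiplier rule above is easy to sketch. The 3× boost for scores of 8-10 comes from the proposal; the 1× default for lower scores and the equal-weight averaging of the three dimensions are my assumptions:

```python
def impact_score(pain: float, alignment: float, feasibility: float) -> float:
    """Combine the three evaluation dimensions into one 0-10 score.
    Equal weighting is an assumption; the spec may weight them differently."""
    return (pain + alignment + feasibility) / 3

def relevance_multiplier(score: float) -> float:
    """3x boost for high-impact proposals (scored 8-10/10), per the
    proposal; 1x for everything else is an illustrative default."""
    return 3.0 if score >= 8 else 1.0

# A proposal scoring 9 on pain, 8 on alignment, 10 on feasibility:
# impact_score(9, 8, 10) == 9.0, so it earns the 3x multiplier.
```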

Transparency as design principle

Every feature that advances through the system shows exactly why it's gaining traction. Users can see the weighted vote totals, the multipliers in effect, the fork tree showing how ideas evolved. When something ships, there's a public summary explaining the decision.

This transparency serves multiple purposes: it prevents the system from feeling arbitrary, it educates the community about what makes proposals effective, and it creates accountability for both the platform team and the community. Over time, users learn to write better feature requests because they can see what patterns actually lead to implementation.

Aligning incentives: Feature Influencer rewards

Here's where the design explicitly embraces xAI/X culture. X already compensates creators for generating valuable engagement. The same principle can extend to product development.

The system creates three recognition tiers:
- Bronze: Ship one adopted fork or add-on → Badge + $50 X credit
- Silver: Three shipped contributions with significant community support → $200/month
- Gold: Five+ shipped features or major UX improvements → 0.1% revenue share from feature usage + beta access
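As a simplified sketch, the tiers reduce to thresholds on shipped contributions. Note this ignores the qualitative conditions ("significant community support" for Silver, "major UX improvements" for Gold), which would need human or Grok review in practice:

```python
# Recognition tiers from the proposal, highest first.
# Reducing each tier to a shipped-contribution count is a simplification.
TIERS = [
    ("Gold",   5, "0.1% revenue share + beta access"),
    ("Silver", 3, "$200/month"),
    ("Bronze", 1, "badge + $50 X credit"),
]

def tier_for(shipped_contributions: int):
    """Return the highest tier a contributor qualifies for, or None."""
    for name, threshold, reward in TIERS:
        if shipped_contributions >= threshold:
            return name, reward
    return None
```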

This creates a new category of contributor between "user" and "employee." People who are genuinely skilled at identifying problems and designing solutions can build reputation and sustain their participation without needing formal employment. It's a practical implementation of collaborative intelligence at platform scale.

Collaborative specification process

This design didn't emerge from a whiteboard session. I prompted Grok with the core concepts—merit-weighted contribution, fork-based collaboration, alignment with xAI's development velocity. Grok then developed the detailed mechanics: analyzing edge cases, modeling the scoring formula, designing implementation phases, identifying potential gaming vectors and proposing mitigations.

The result demonstrates genuine human-AI collaboration. I contributed systems thinking and two decades of experience with reputation mechanics and community coordination. Grok contributed computational modeling and deep knowledge of xAI product context. The specification that emerged is more robust than either of us could have created independently.

There's a meta-lesson here: AI platforms benefit from being designed with AI participation, not just designed for users. The tool helps improve itself through structured collaboration.

Maintaining system integrity

Any system rewarding contribution will attract attempts to game it. The specification includes several safeguards:
- Voting power caps (maximum 10× base vote) prevent concentration of influence
- Grok filters spam proposals scored below viability thresholds
- Merit gradually decays so early participants don't permanently dominate
- Transparent evaluation criteria with structured appeals process for edge cases
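The voting-power cap can be sketched as follows. The 10× ceiling comes from the safeguards above; the logarithmic scaling of merit into vote weight is an assumption (the spec fixes only the cap, not the curve):

```python
import math

BASE_VOTE = 1.0
MAX_MULTIPLIER = 10.0  # cap from the safeguards list: max 10x base vote

def voting_power(merit: int) -> float:
    """Merit-weighted vote, hard-capped so no account can dominate.
    Log scaling gives diminishing returns on accumulated merit;
    the exact curve is illustrative, not from the spec."""
    weight = 1.0 + math.log10(1 + merit / 100)
    return min(BASE_VOTE * weight, MAX_MULTIPLIER)

# voting_power(0) == 1.0; voting_power(900) == 2.0; very large merit
# balances hit the 10x ceiling rather than growing without bound.
```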

These mechanisms aren't perfect—no coordination system is—but they make genuine contribution more efficient than manipulation attempts.

Realistic implementation path

The proposal includes a three-phase rollout that respects the reality of platform development:

1. Q1 2026: Launch with straightforward voting plus manual relevance tagging (validate core assumptions)
2. Q2 2026: Introduce merit weighting and automated relevance scoring
3. Q3 2026: Full system with fork mechanics and Feature Influencer recognition

This staged approach allows xAI to test components incrementally, gather real usage data, and refine the system based on how the community actually engages with it rather than theoretical models.

Why share this publicly?

I'm not positioning this as proprietary consulting. I'm sharing it because:

1. The coordination challenge is universal (every platform struggles with prioritizing community input)
2. The approach might prove useful beyond Grok (released under CC-BY-SA license for anyone to adapt)
3. Demonstrating human-AI collaborative design matters (this is a template for how we should be building systems)

If xAI implements some version of this, excellent. If other platforms iterate on these concepts and improve them, equally valuable. The goal is advancing collective intelligence infrastructure, not protecting intellectual property.

Complete specification

The full technical documentation with scoring formulas, UI flow descriptions, risk analysis, and implementation timeline is available here:
Merit-Based Feature Request System v3.0 [PDF]

Invitation for feedback

I'm particularly interested in perspectives on:
- Whether the multiplier ranges and caps seem appropriately calibrated
- Whether fork-based collaboration would generate valuable iterations or just create noise
- Where the Feature Influencer compensation model might need adjustment

This is shared as a working proposal, not a finished product. If you see ways to improve it, that's exactly the kind of collaborative refinement the system is designed to enable.


Jason Lee Lasky (trading as hubway) operates as an independent systems designer focused on community coordination and reputation mechanics in the Northern Rivers region of NSW, Australia. He's been exploring how technology can enhance collective decision-making since the early 2000s.

Find Jason on X: @hubwayfractal or via email

Posted in Humans In The Loop.