Platform · 0→1 Systems · Trust Architecture

PlayFirst

A trust and signal architecture for indie game discovery, player support, and early publisher evaluation.

PlayFirst is a platform concept designed around a structural failure in indie game discovery: trust signals are absent at the moments that matter. Players cannot evaluate which games are worth their time. Developers cannot build visible early credibility. Publishers cannot access structured early-traction data for pre-commitment evaluation. The product is a trust architecture. Every surface, from the game page to the developer dashboard to the review layer, is designed to make credibility legible, to accelerate support decisions, and to structure early audience signals for eventual evaluation. The screens shown reflect an earlier design state; the underlying product direction continues to evolve.

Role: Lead Product Designer
Timeline: Ongoing
Tools: Figma, ProtoPie, FigJam
Team: Solo
Status: Concept / Portfolio project
Primary Users: Indie developers, players, publishers
Core Challenge: Build structured early-signal infrastructure that creates independent value for all three audience types before network effects exist.
WHY THIS PROBLEM MATTERED

Discovery fails not because games are undiscoverable, but because trust signals are absent.

Steam's algorithm optimizes for velocity. Established titles absorb new traffic; small studios without existing audiences stay invisible. Players get volume, not decision-useful signals. Publishers operate blind: identifying promising titles before committing to deals requires engagement data that is either locked inside developer accounts or nonexistent.

Over 10,000 games are released on Steam annually. Discovery failure is the primary commercial failure mode for small studios, outweighing poor game quality as a cause of low sales. The gap is not traffic. It is that no surface makes early credibility visible to the right audience at the right time. PlayFirst is designed around that gap.

10K+
Games released on Steam per year
Volume without trust infrastructure means discovery defaults to algorithm sorting, not evidence of quality.
3
Audience types with no shared trust surface
Players, developers, and publishers each need different signals from the same game. No platform currently surfaces all three.
#1
Commercial failure mode for small studios
Discovery failure outweighs poor game quality as the primary cause of low sales for indie titles.
What each audience needs and what is currently missing
Current state: what platforms provide
Players get algorithm-ranked feeds with no visible trust or credibility signals
Developers get traffic data with no early-credibility formation path
Publishers get no structured early-traction data before deal commitment
The three audiences have no shared surface, so signals cannot compound
PlayFirst: what the architecture addresses
Players see visible hype scores, funding progress, saves, and community traction
Developers build credibility through structured game pages before they have an audience
Publishers access structured early-signal data on the same shared game page
One canonical page concentrates activity so signals compound across all three roles
MY MANDATE

Define the architecture before designing the interface.

This is an ongoing concept project. Screens reflect an earlier design state. The structural constraints were real: 0-to-1 architecture without a user base or live data meant every decision was a structural bet without empirical confirmation. The chicken-and-egg bootstrapping problem required an architectural solution, not a UI-layer fix.

Current iteration priorities: before-login and after-login discovery states as intentional product architecture; structured game page creation as trust formation infrastructure; developer analytics as early-readiness tooling, not vanity reporting; review logic producing decision-useful signals rather than aggregate sentiment.

WHAT SHAPED THE STRATEGY

Key inputs.

Competitive analysis of Steam, itch.io, and Epic Games Store discovery and trust-signal patterns. Benchmarking of publisher evaluation processes at indie-focused publishers through GDC talks and public fund documentation. Analysis of crowd-funding-adjacent trust dynamics from Kickstarter and Fig. Secondary research on community-driven discovery and its relationship to early commercial outcomes.

Structural friction by audience type
Players: algorithm sorting over decision-useful trust signals
High
Developers: no visible early credibility path without an existing audience
High
Publishers: engagement data locked in developer accounts or absent
High
Platform architecture: four structural layers
Layer 01
Shared Game Page
One canonical surface serving all three audiences. Trust signals compound when activity concentrates in one place.
Layer 02
Discovery States
Before-login and after-login states are product architecture. Before-login must convert on trust signals alone.
Layer 03
Developer Analytics
Readiness-oriented metrics: hype score, funding progress, sentiment mix, completion rate. Not vanity reporting.
Layer 04
Review Logic
Structured label selection produces decision-useful signals for players and iteration data for developers.
  • 01 Distribution leverage is asymmetric, not absent. Developers with existing audiences consistently reach visibility thresholds. The architecture must create a path to equivalent reach for studios starting without an audience, or it does not solve the core problem.
  • 02 Publishers evaluate on structured early signal, not polished demos. Time played, save rates, hype trajectory, and early funding momentum predict commercial viability better than build quality at the pitch stage. Structured signal visibility is a product feature, not a reporting tool.
  • 03 Trust converts players more reliably than algorithm rank. Visible community traction, clear roadmap commitment, and transparent funding use create stronger support decisions than recommendation-weighted feeds alone. Trust is not a soft outcome; it is a conversion mechanism.
KEY PRODUCT DECISIONS

Four decisions that shaped the architecture.

The updated product rationale organizes around four structural decisions: one canonical game page as a shared trust surface for all three audience types; structured upload fields that make game pages credible and supportable by design; developer analytics positioned as audience-readiness infrastructure rather than vanity metrics; and review logic designed to produce constructive, decision-useful signals. The before-login and after-login discovery states are intended product architecture, not a session management detail. Before-login discovery must convert on trust signals alone.

One canonical game page as the shared trust surface
Why
Fragmented surfaces cannot generate shared trust. A player evaluating a game, a publisher researching early traction, and a developer monitoring engagement all benefit from the same content object. Trust signals compound when activity concentrates in one place.
Alternative considered
Role-gated separate surfaces per audience type
Tradeoff
Higher IA complexity in a single page versus cleaner but network-fragmenting separate role experiences. The fragmentation cost outweighs the complexity cost structurally.
UI consequence
Primary player-facing discovery content always visible. Developer analytics conditionally accessible by account role. Publisher evaluation supported by visible early-traction evidence on the shared page. Role-switching must be invisible to non-matching users, not just hidden.
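One way to read the "invisible, not just hidden" consequence is at the data layer: sections a role cannot see are never delivered to that role at all. The sketch below is illustrative only; the role names follow the three audiences above, but the section list and function names are hypothetical.

```typescript
// Illustrative sketch: role-gated sections on the shared game page.
// Section ids are invented for this example.
type Role = "player" | "developer" | "publisher";

const SECTIONS: { id: string; visibleTo: Role[] }[] = [
  { id: "discovery", visibleTo: ["player", "developer", "publisher"] },
  { id: "early-traction", visibleTo: ["player", "developer", "publisher"] },
  { id: "developer-analytics", visibleTo: ["developer"] },
];

// Non-matching roles never receive a gated section, rather than
// receiving it in a hidden state: invisible, not just hidden.
function sectionsFor(role: Role): string[] {
  return SECTIONS.filter((s) => s.visibleTo.includes(role)).map((s) => s.id);
}
```

Filtering server-side (rather than hiding client-side) is what makes role-switching invisible to non-matching users while the discovery content stays shared.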
Structured upload fields as credibility infrastructure
Why
An empty or incomplete game page cannot convert trust regardless of audience traffic. Structured fields including game info, thumbnail, summary, wallpaper, roadmap, milestones, reward items, goal amount, and payment setup create the conditions for a page to become supportable. The upload structure is a trust formation system, not a content entry form.
Alternative considered
Freeform text entry with minimal structure requirements
Tradeoff
Higher developer effort at setup in exchange for higher page credibility at launch. Incomplete pages do not reach conversion threshold. The effort cost is real and must be addressed in onboarding rather than eliminated.
UI consequence
Onboarding must make upload completeness legible and progress-visible. Partial page states must signal that incomplete fields reduce support conversion, not just visual quality.
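The "completeness legible and progress-visible" requirement can be sketched as a small helper. The field names mirror the structured upload fields listed above; the type and function names are hypothetical, not a real implementation.

```typescript
// Illustrative sketch: upload completeness as trust-formation progress.
// Field names follow the structured upload fields described in the case
// study; everything else is invented for illustration.
type GamePageDraft = {
  gameInfo?: string;
  thumbnail?: string;
  summary?: string;
  wallpaper?: string;
  roadmap?: string[];
  milestones?: string[];
  rewardItems?: string[];
  goalAmount?: number;
  paymentSetup?: boolean;
};

const REQUIRED_FIELDS: (keyof GamePageDraft)[] = [
  "gameInfo", "thumbnail", "summary", "wallpaper",
  "roadmap", "milestones", "rewardItems", "goalAmount", "paymentSetup",
];

// Returns the missing fields plus a 0-100 progress value that onboarding
// can surface, so partial pages signal reduced support conversion.
function uploadCompleteness(draft: GamePageDraft) {
  const missing = REQUIRED_FIELDS.filter((f) => {
    const v = draft[f];
    return v === undefined || (Array.isArray(v) && v.length === 0);
  });
  const percent = Math.round(
    ((REQUIRED_FIELDS.length - missing.length) / REQUIRED_FIELDS.length) * 100,
  );
  return { missing, percent };
}
```

Surfacing `missing` by name, not just a percentage, is what makes completeness legible rather than merely measured.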
Developer analytics as audience-readiness infrastructure
Why
Vanity metrics create noise and false confidence. Readiness-oriented signals including hype score, funding raised, completion rate, funding progress, sentiment mix, and phase visibility give developers actionable intelligence and give eventual publisher evaluation a structured data layer.
Alternative considered
Simple traffic dashboard showing total visits and saves
Tradeoff
More complex data model and analytics surface versus simpler but strategically weaker metrics. The simpler model gives developers information without insight.
UI consequence
Developer dashboard leads with readiness-oriented metrics. Hype score and funding progress are primary. Sentiment mix is visible without being dominant. The analytics experience should feel like readiness tracking, not performance reporting.
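The readiness-oriented data layer can be sketched as a snapshot type plus a single readiness reduction. The metric names come from the decision above; the thresholds and weights are placeholders for illustration, not a validated model.

```typescript
// Illustrative sketch: readiness-oriented metrics, not vanity reporting.
// Thresholds are invented placeholders.
type ReadinessSnapshot = {
  hypeScore: number;        // 0-100 composite signal
  fundingProgress: number;  // fundingRaised / goalAmount, clamped 0-1
  completionRate: number;   // share of sessions reaching completion, 0-1
  sentimentMix: { positive: number; neutral: number; negative: number };
};

// A dashboard might reduce the snapshot to an audience-readiness flag
// so developers get insight, not just information.
function isAudienceReady(s: ReadinessSnapshot): boolean {
  return (
    s.hypeScore >= 60 &&
    s.fundingProgress >= 0.5 &&
    s.completionRate >= 0.4 &&
    s.sentimentMix.positive >= 0.6
  );
}
```

The point of the reduction is framing: the same underlying data reads as readiness tracking when it answers "am I ready for an audience?" rather than "how much traffic did I get?".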
Review design that produces decision-useful signals
Why
Aggregate star ratings communicate preference, not decision information. Review labels that distinguish what works, what needs work, and what surprised the reviewer produce signals useful for player decisions and developer iteration. This is an intended product direction within an evolving architecture.
Alternative considered
Standard star-rating system with freeform text entry
Tradeoff
Structured review formats impose constraints on reviewers. Some users will find the structure friction. The tradeoff is accepted because unstructured reviews generate noise rather than signal at the platform level.
UI consequence
Review input uses guided label selection before freeform text. Review display surfaces label breakdown alongside sentiment. The review layer must feel constructive in its visual framing, not evaluative.
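The guided-label review model can be sketched as a schema plus an aggregation. The three label axes mirror the review design above (what works, what needs work, what surprised the reviewer); the tag values and function names are hypothetical.

```typescript
// Illustrative sketch: structured review labels aggregated into a
// breakdown, instead of a single star average.
type Review = {
  works: string[];       // e.g. "combat feel"
  needsWork: string[];   // e.g. "onboarding"
  surprised: string[];   // e.g. "soundtrack"
  text?: string;         // freeform comes after guided label selection
};

// Counts each label across all reviews so the game page can surface
// a label breakdown alongside sentiment.
function labelBreakdown(reviews: Review[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const r of reviews) {
    for (const label of [...r.works, ...r.needsWork, ...r.surprised]) {
      counts[label] = (counts[label] ?? 0) + 1;
    }
  }
  return counts;
}
```

Because labels are counted per axis, the same breakdown serves both audiences: players read it as decision information, developers read it as iteration data.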
SYSTEM WALKTHROUGH

Selected interfaces.

Discovery Homepage: Before Login. An entry-state discovery surface that frames PlayFirst as a trust-aware system for evaluating promising indie games before account creation.
[Image: PlayFirst discovery homepage before login]
Discovery Feed: Player View. A signal-rich discovery feed where hype, saves, tags, funding cues, and visible traction help players evaluate games faster.
[Image: PlayFirst discovery feed player view]
Developer Dashboard: Analytics and Audience. An early-readiness dashboard that turns player attention into structured signal visibility through visits, hype, funding progress, completion, and sentiment.
[Image: PlayFirst developer dashboard early signal analytics]
Game Detail: Unified Three-Audience Page. A long-scroll game page that consolidates discovery, support, roadmap visibility, rewards, updates, and review signals into one shared trust surface.
[Image: PlayFirst game detail unified trust surface long scroll]
VALIDATION, RISKS, AND WHAT REMAINS UNPROVEN

What this proves and what it doesn't.

Validated
  • The structural argument for shared trust surfaces outperforming fragmented role-separated products is supported by platform economics research. Engagement concentration creates compounding value; fragmentation destroys it before network formation.
  • Structured early-signal evaluation is already used by publisher-adjacent systems at scale. App Store editorial selection and Devolver Digital's public discovery process validate the pull-based evaluation model structurally.
  • Crowd-funding-adjacent trust patterns validate that visible funding progress, roadmap commitment, and milestone transparency convert support decisions. Kickstarter and Fig demonstrate this at meaningful scale.
Still unproven
  • Whether publishers would invest in active evaluation before developer volume reaches a meaningful threshold. This is the riskiest activation assumption in the model and cannot be validated without live publisher behavior.
  • Whether structured upload fields reduce or increase developer churn at onboarding. Higher setup effort is a real activation risk even when the payoff is higher page credibility.
REFLECTION

The structural decisions are the work.

The most important design decisions in this project are not visible in the screens. They are in the architecture: one page serving three audiences, structured fields that create credibility rather than entry forms that collect data, analytics framed as readiness rather than vanity, review logic that produces signal rather than sentiment. Each of these required defending against simpler alternatives that would have been faster to design but would have failed the core problem.

The ongoing evolution of this project has clarified something the earlier version did not make explicit: trust is the product. Discovery, analytics, reviews, and upload structure are all trust formation systems. If the product does not build trust at each interaction point, the network cannot form. Getting that framing right earlier would have changed the sequencing of every design decision that followed.
