Roadmap

Where ClassiBoxAI is going

We’re building calm, privacy-first automation for messy documents: upload your files, let ClassiBoxAI organize them, then ask questions and get sourced answers.

This roadmap is a living document. Priorities may change based on real user feedback.

Product + Engineering Roadmap

Phases (foundation → enterprise scale)

We focus on reliability, clear answers with citations, and an offline-first workflow for teams with real file chaos.

Phase 1 — Public MVP (launch blockers)

Highest priority

Make the product usable end-to-end: upload → index → ask with citations, plus a practical Windows desktop sync MVP.

  • Indexing + RAG: per-tenant corpora, chunking, embeddings, and sourced answers.
  • Core ask endpoint: /api/v1/ask with citations.
  • Offline sync MVP: folder pick → scan → local SQLite mirror → incremental upload.
  • Minimal UI: login, upload, file list, indexing status, ask box.
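
As a sketch of the retrieval step behind sourced answers, assuming an in-memory list of (chunk, embedding) pairs per tenant (a real index would live in a vector store, and the helper name is ours):

```python
import math

def top_k_chunks(query_vec, index, k=3):
    """Rank (chunk, embedding) pairs by cosine similarity to the query.

    `index` is a list of (chunk_text, vector) tuples for one tenant; the
    top-k chunks become the citations attached to the answer."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [chunk for chunk, _vec in scored[:k]]
```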

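The scan step of the sync MVP can be sketched as a walk that diffs file size and mtime against the local SQLite mirror; the `files` schema and `scan_folder` helper here are illustrative, not the actual client code:

```python
import os
import sqlite3

def scan_folder(conn: sqlite3.Connection, root: str) -> list[str]:
    """Walk `root` and return paths whose size or mtime changed since the
    last scan, updating the local SQLite mirror as we go. The returned
    paths are the candidates for incremental upload."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, size INTEGER, mtime REAL)"
    )
    changed = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            row = conn.execute(
                "SELECT size, mtime FROM files WHERE path = ?", (path,)
            ).fetchone()
            if row is None or row != (st.st_size, st.st_mtime):
                changed.append(path)
                conn.execute(
                    "INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                    (path, st.st_size, st.st_mtime),
                )
    conn.commit()
    return changed
```

Comparing size and mtime keeps rescans cheap; a content hash would catch edits that preserve both, at the cost of reading every file.
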
Phase 2 — Stabilization

Stability

Improve resilience and trust: better telemetry, retries, and predictable behavior under real traffic.

  • Observability: structured logs + usage metrics (index/search/storage).
  • Reliability: idempotent jobs and safer retry strategy across uploads and indexing.
  • Sync quality: smarter resume of interrupted uploads and conflict handling.
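
One way to combine idempotency with a safer retry strategy is to reuse a single idempotency key across backoff attempts, so the server can deduplicate an upload or indexing job whose earlier attempt actually succeeded. A minimal sketch (the helper name and key scheme are assumptions):

```python
import random
import time
import uuid

def with_retries(operation, max_attempts: int = 5, base_delay: float = 0.5):
    """Run `operation(idempotency_key)` with exponential backoff.

    The same key is sent on every attempt, so a retry of a request that
    already succeeded server-side becomes a no-op instead of duplicate work."""
    key = str(uuid.uuid4())
    for attempt in range(max_attempts):
        try:
            return operation(key)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Full jitter keeps a burst of failing clients from retrying in lockstep.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```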

Phase 3 — Expansion

Growth

Unlock richer workflows for teams: sharing, versioning, and more structure across knowledge bases.

  • Knowledge bases: multiple KBs per tenant with improved organization.
  • Collaboration: internal sharing and team workflows.
  • Version history: track updates and metadata changes over time.
  • Usage dashboard: indexing, ask/search, and storage aligned with entitlements.

Phase 4 — Enterprise readiness

Enterprise

Prepare for larger organizations with strict requirements and auditability.

  • SSO: Google Workspace / Microsoft Entra.
  • Audit & governance: stronger logs, permission models, and compliance-friendly options.
  • Scale: tenant sharding and high-volume readiness.

Phase 5 — Advanced optimizations

R&D

Explore new paradigms to reduce cost and increase speed, especially for offline and automation-heavy workflows.

  • Local/offline AI: experiments for faster private workflows.
  • Automation: event-driven workflows and external integrations.
  • Cost optimization: smarter processing and limits at scale.

Go-To-Market

Launch strategy

We grow through trust: clear positioning, simple onboarding, and proof via real before/after results.

Phase A — Closed beta

Validate

Small group testing to refine onboarding, limits, and answer quality.

  • Target heavy-document workflows (law, accounting, clinics, agencies).
  • Improve UX + AI outcomes based on real usage.

Phase B — Open beta

Growth

Public landing page + waitlist to ramp demand and collect feedback.

  • Waitlist + referral loop (earn extra storage).
  • Organic content: blog + short demos + social proof.

Phase C — Global launch

Scale

Broader campaigns, partnerships, and stronger distribution.

  • Before/after storytelling and case studies.
  • Targeted paid campaigns for high-value niches.

Business model

Plans + usage control

Subscriptions scale by storage, indexing volume, and team controls — enforced server-side for reliability.

Principle

The free tier must stay limited to keep AI costs predictable. Upgrades should be clear and fair.

Architecture

Backend is the authority: quotas, billing state, and permissions never depend on the client.
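
For example, a storage quota check belongs on the server, never in the client; a hypothetical sketch:

```python
def enforce_quota(used_bytes: int, plan_limit_bytes: int, upload_bytes: int) -> None:
    """Reject an upload that would exceed the tenant's plan limit.

    Runs server-side against billing state; the client only sees the result."""
    if used_bytes + upload_bytes > plan_limit_bytes:
        raise PermissionError("storage quota exceeded; upgrade required")
```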

Stripe as source of truth

Billing is synchronized via webhooks. No billing logic in the client.
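
Verifying webhook signatures is what keeps Stripe the source of truth. Stripe's documented scheme signs "{timestamp}.{payload}" with HMAC-SHA256 and sends it in the Stripe-Signature header; a minimal sketch of the check (in production, the official SDK's verification helper is preferable):

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Check a Stripe-Signature header of the form "t=<ts>,v1=<hex>"."""
    parts = dict(p.split("=", 1) for p in header.split(","))
    timestamp, candidate = parts["t"], parts["v1"]
    if abs(time.time() - int(timestamp)) > tolerance:
        return False  # replayed or badly delayed event
    signed = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, candidate)
```
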
R2 for storage

Files live in Cloudflare R2. The backend uses signed URLs to stay stateless.
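
The signed-URL idea can be illustrated with a simple HMAC scheme; this is only a sketch of the concept, since real R2 presigned URLs are generated with an S3-compatible SDK:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_download_url(base: str, key: str, secret: str, ttl: int = 3600) -> str:
    """Build a time-limited download URL the backend can hand out without
    holding any per-download state; the storage edge re-derives the HMAC
    to validate it."""
    expires = int(time.time()) + ttl
    msg = f"{key}:{expires}".encode()
    sig = hmac.new(secret.encode(), msg, hashlib.sha256).hexdigest()
    return f"{base}/{key}?" + urlencode({"expires": expires, "sig": sig})
```
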

Want to influence the roadmap?

Tell us what your workflow needs

If your team is drowning in documents, tell us what you need most. Your feedback directly shapes what we build next.