Available now: public samples + assessment

Shared team context for AI-assisted software work.

dovetell helps teams turn scattered decisions, prompts, and standards into repo-owned context their coding agents can actually use.

Repo-owned context starter
project-context/manifest.md (source)
project-context/tasks.md (active)
project-context/decisions.md (trace)
project-context/session-handoff.md (resume)

The first proof surface is intentionally simple: readable markdown near the work, with enough structure for a human or coding agent to resume safely.
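As an illustration of that structure, a minimal manifest.md for a starter like this might look as follows. The headings and fields here are an assumption for the sketch, not the published file:

```markdown
# project-context/manifest.md (illustrative sketch)

## Files
- tasks.md: active work, smallest useful next slice first
- decisions.md: decisions with source, status, and date
- session-handoff.md: compact resume point for the next session

## Conventions
- Keep entries short enough to read before starting work.
- Record drift when direction changes; do not rewrite history.
```

The point of the manifest is that a human or agent can orient in one read, without opening a proprietary tool.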

Product state

Clear about what exists now. Clear about what comes later.

The current public product is a practical framework: samples, a team assessment, and guidance for making context durable. The platform workflow is the direction, not a claim that every automated layer is live today.

Available now

Minimal repo context starter and public sample files.
Team AI maturity assessment with versioned scoring config.
Framework guidance for handoffs, decisions, and reusable prompts.
Privacy-friendly analytics and lightweight public intake.

Coming later

Persisted assessment history and account claim flow.
Authenticated dashboard for saved recommendations and prior runs.
Review workflow for proposed context changes.
Deeper source integrations after the core loop proves useful.

Workflow

From scattered working memory to reusable team context.

The centerline is deliberately small: capture the things teams keep re-explaining, make them reviewable, and keep the durable version somewhere the team already owns.

01
Scattered inputs
Chats, PR comments, meeting notes, prompt fragments, and decisions in flight.
02
Proposed context
Useful material is shaped into a candidate update, not treated as truth by default.
03
Human review
Someone accountable accepts, rejects, edits, or parks the change.
04
Repo-owned context
The durable version lives as readable markdown near the work.
05
Agents reuse it
Future sessions start with less re-explaining and fewer fragile assumptions.

Start with the assessment.

Find the weakest part of your team's AI context loop, then route to the sample that fits the problem.

Take the assessment
Pain

The problem is not that teams lack tools. It is that context keeps escaping.

Repeated explanation

Every new chat starts with the same background. The best version of the explanation lives wherever someone last typed it.

Decision drift

Teams change direction during real work, but the durable record lags behind. Later, nobody can tell which decision still governs.

Handoff fragility

Work pauses, roles switch, or sessions reset. Without a compact handoff, useful context disappears at the exact moment it is needed.

Public proof

Inspect the sample before you believe the category.

The first public artifact is intentionally concrete: a small context starter you can read, copy, and adapt. No platform claim required.

From the session-handoff.md sample (tasks.md ships alongside it):

centerline: preserve recoverable intent

current task: define the next smallest useful slice

drift: allowed, but record it

next: resume with context, not folklore

Trust model

Built around repo-owned truth, not another private memory silo.

dovetell's product direction is to render, coordinate, review, and write back context. The team still owns the governed context.

source

Readable markdown

Context remains understandable without a proprietary app.

lineage

Traceable changes

Decisions and tasks keep source, status, and handoff history.
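As a sketch of what traceable means in practice, a tasks.md entry carrying source, status, and handoff history might look like the following. The format is an assumption, not the published schema:

```markdown
## Task: wire assessment scoring to versioned config
- status: in progress
- source: team assessment run, 2025-01
- handoff: see session-handoff.md, "current task"
```

Because lineage lives in the file itself, the trail survives a tool change or a session reset.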

service

Bounded storage

Assessment and account state can live in Postgres without owning team truth.

guardrail

No magic claims

Useful governance beats dramatic promises about autonomous memory.

Early access

Tell it once. Let it travel.

Join the waitlist for the account and persistence layer: saved assessment runs, account claim flow, and the first product dashboard when it earns its keep.
