Scope sign-off: contract vs delivered functionality (and every change in between)
Goal: make the delivered product comparable to what was sold, and make all post-signature changes traceable.
What to collect and verify:
Scope-to-delivery mapping. A clear mapping from contract requirements (or backlog items) to delivered features:
Requirement ID → user-facing behavior → where it lives (module/service) → acceptance evidence (test, demo, screenshot, environment).
Change log after signature. A list of all scope changes with:
date, decision owner, rationale, impact (cost/time/risk), and what was traded off.
Definition of “done” per feature. Not “implemented”, but “works end-to-end”, including edge cases (e.g., pagination, filtering, permissions, exports, email notifications).
Known deviations. Anything delivered differently than originally specified must be documented as a conscious decision, not “how it ended up”.
Decision rule:
If you cannot trace a delivered feature back to an agreed requirement (or change request), then you cannot reliably assess completeness or future liability.
Architecture sign-off: documented, declared, and observable
Goal: verify that the implemented architecture matches what you were told you were buying (monolith, modular monolith, microservices) and that the key decisions are documented.
Minimum package:
High-level architecture diagram (components, boundaries, data stores, third-party services).
Deployment topology (environments, networks, ingress, queues, scheduled jobs).
Data flows for critical business processes (from UI/API to persistence and integrations).
Architecture Decision Records (ADRs) for non-trivial choices (auth approach, multi-tenancy model, async processing, search, caching, data retention assumptions).
Verification questions that catch mismatches early:
Is the system actually split the way it was described (separate deployables, independent scaling, separate data ownership), or is it a single deployment with internal folders?
Are responsibilities and boundaries enforceable (APIs/contracts), or dependent on tribal knowledge?
Can you identify where a business-critical process lives (code locations, services, queues) without asking the original team?
Decision rule:
If architecture is undocumented or contradicts declared assumptions, then treat it as risk (cost uncertainty) even if features appear to work today.
Reproducibility from zero: the “new developer / new environment” test
Goal: ensure the system can be started by your team without the contractor’s interactive help.
A practical acceptance test:
Assign a developer who did not work on the project.
Give them only the handover package and access credentials.
Ask them to:
set up local dev,
deploy to a fresh environment,
run tests,
execute one key business flow end-to-end.
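The rehearsal above is easy to automate once the bootstrapping guide exists. A sketch of a step runner, assuming each documented step can be expressed as a command (the STEPS list is a placeholder to be replaced with the contractor's actual commands):

```python
import subprocess
import sys

# Placeholder steps; substitute the commands from the contractor's
# bootstrapping guide (install, migrate, deploy, run tests, smoke test).
STEPS = [
    ("check toolchain", [sys.executable, "-c", "import sys; print(sys.version)"]),
    ("run tests",       [sys.executable, "-c", "print('tests ok')"]),
]

def run_handover_rehearsal(steps):
    """Run each documented step in order; stop at the first failure, like a new hire would."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "PASS" if result.returncode == 0 else "FAIL"
        print(f"{status}: {name}")
        if result.returncode != 0:
            print(result.stderr)
            return False
    return True

ok = run_handover_rehearsal(STEPS)
```

The point of scripting it is that the rehearsal becomes repeatable: you can re-run it against every revision of the handover package, not just once.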
Required artifacts:
Bootstrapping guide: prerequisites, build steps, commands, expected outputs, common failure modes.
Environment setup: how to provision databases, object storage, message brokers, search, and any required cloud resources.
Configuration reference: all config keys, defaults, and what happens if missing.
Secrets management: where secrets live, rotation expectations, and how secrets are injected (never “shared in chat”).
Decision rule:
If the system cannot be set up from zero without the contractor, then you do not own operability—you are renting it.
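A configuration reference only earns its keep if missing keys fail loudly rather than silently. One way to sketch that, assuming the reference is kept as data (keys, defaults, and descriptions here are illustrative):

```python
import os

# Illustrative config reference: key -> (required, default, description).
CONFIG_SPEC = {
    "DATABASE_URL":    (True,  None,        "primary datastore connection string"),
    "SMTP_HOST":       (False, "localhost", "outbound email relay"),
    "FEATURE_EXPORTS": (False, "false",     "enables CSV export endpoints"),
}

def load_config(env=os.environ):
    """Resolve config against the spec; a missing required key fails loudly."""
    missing = [k for k, (required, _, _) in CONFIG_SPEC.items()
               if required and k not in env]
    if missing:
        raise RuntimeError(f"missing required config: {missing}")
    return {k: env.get(k, default) for k, (_, default, _) in CONFIG_SPEC.items()}

print(load_config({"DATABASE_URL": "postgres://example"}))
```

This turns "what happens if a key is missing" from a documentation question into an observable behavior a new developer hits on day one.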
Documentation that must remain in the company after handover
Goal: minimize hidden maintenance costs and prevent vendor lock-in by knowledge scarcity (a classic driver of technical debt).
Minimum “sensible” set (what stays with you):
Product-facing documentation
Key business processes: what “done” means in business terms, main workflows, exceptions, and roles/permissions.
Domain glossary: shared meaning of terms (customer/account/tenant/order, etc.).
Operational runbook for business users (if applicable): how to handle common support situations.
Technical documentation
API documentation (OpenAPI/Swagger where possible) plus examples for critical endpoints.
Integration documentation: all third-party dependencies, contracts, failure handling, timeouts/retries, and data mapping.
Data model overview: major entities, relationships, migration strategy, and how reporting/analytics is expected to work.
“How to change X safely” notes for fragile parts (billing, identity, permissions, imports).
Decision rule:
If documentation is missing, treat it as deferred work you will pay for later—either via internal ramp-up time or ongoing vendor dependence. (This is exactly how technical debt becomes a business cost, not just an engineering complaint.)
Environments, CI/CD, and operational readiness
Goal: make deployments repeatable, auditable, and accessible to the client team.
Sign-off items:
Full access transfer:
repositories (including history),
CI/CD pipelines,
artifact registries,
cloud accounts/projects,
monitoring/logging dashboards,
incident channels (if used).
CI/CD definition:
build, test, security checks (as available),
deploy steps per environment,
rollback strategy,
who can release and how approvals work.
Environment parity:
dev/staging/prod differences documented and justified,
configuration drift controlled, not accidental.
Operational baseline (inspired by pragmatic ops principles such as “config in environment” and reproducible deployments):
Can you deploy without manual SSH sessions?
Are configs externalized rather than hard-coded?
Is there a clear separation between build and run?
Decision rule:
If deployments require manual, undocumented steps, then every future release carries avoidable operational risk.
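"Configuration drift controlled, not accidental" can also be checked rather than asserted. A sketch of a drift report, assuming per-environment config can be exported as key/value maps (the environment names and keys are illustrative):

```python
def config_drift(envs: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Report keys that exist in some environments but are missing in others."""
    all_keys = set().union(*envs.values())
    return {name: all_keys - set(cfg)
            for name, cfg in envs.items()
            if all_keys - set(cfg)}

environments = {
    "staging": {"DATABASE_URL": "…", "QUEUE_URL": "…"},
    "prod":    {"DATABASE_URL": "…"},  # QUEUE_URL absent: drift, not a decision
}
print(config_drift(environments))  # → {'prod': {'QUEUE_URL'}}
```

Every key the report flags should map to a documented, justified difference; anything else is accidental drift.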
Testing, stability, and a “known issues register”
Goal: make quality measurable and make compromises explicit.
What you should receive:
Test inventory:
what test types exist (unit/integration/e2e),
where they run (CI/local),
what they cover (critical flows).
Scenario coverage statement:
“happy path” and edge cases for core flows,
performance-sensitive paths (imports, search, reporting),
error-handling behavior (timeouts, retries, idempotency).
Known issues register:
bug list with severity and workaround,
technical limitations,
deferred refactors,
consciously accepted trade-offs.
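The "timeouts, retries, idempotency" item deserves a concrete shape, because retries without idempotency can silently duplicate business actions. A minimal sketch of the pattern, with a fake transport standing in for a real integration:

```python
import time
import uuid

def call_with_retry(send, payload, attempts=3, backoff=0.1):
    """Retry a flaky call safely: one idempotency key across all attempts,
    so the remote side can de-duplicate if a 'failed' request actually landed."""
    key = str(uuid.uuid4())
    last_error = None
    for attempt in range(attempts):
        try:
            return send(payload, idempotency_key=key)
        except TimeoutError as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error

# Fake transport: times out twice, then accepts.
calls = []
def flaky_send(payload, idempotency_key):
    calls.append(idempotency_key)
    if len(calls) < 3:
        raise TimeoutError("upstream timeout")
    return "accepted"

print(call_with_retry(flaky_send, {"order": 42}))  # → accepted
```

The handover question to ask is whether the delivered integrations follow this shape, and where the answer is documented.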
Acceptance practice that prevents later disputes:
For each known issue: accepted by whom, when, and why, plus what “fix” would likely involve (time/cost uncertainty is fine; silence is not).
Decision rule:
If you don’t have a known-issues list, you still have known issues—you just won’t see them until production.
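A known-issues register can be as simple as structured records with the acceptance fields from the list above. A sketch, with illustrative field names and a made-up example entry:

```python
from dataclasses import dataclass

@dataclass
class KnownIssue:
    """One register entry; field names are illustrative, not a standard."""
    issue: str
    severity: str     # e.g. "critical" / "major" / "minor"
    workaround: str
    accepted_by: str  # who signed off on shipping with this issue
    accepted_on: str
    likely_fix: str   # rough shape of a fix; uncertainty is fine, silence is not

register = [
    KnownIssue(
        issue="Export times out above 50k rows",
        severity="major",
        workaround="split the export by month",
        accepted_by="COO",
        accepted_on="2024-03-01",
        likely_fix="move export to an async job with progress reporting",
    ),
]

unaccepted = [i.issue for i in register if not i.accepted_by]
assert not unaccepted, f"issues without an explicit owner: {unaccepted}"
```

Keeping the register as data rather than prose makes the acceptance gap checkable: any entry without an owner blocks sign-off.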
Security and compliance checkpoints (practical, not performative)
Goal: reduce common application risks and clarify what must be verified contractually.
Minimum checks aligned with common web risks:
OWASP-style vulnerability review: authentication, authorization, input validation, session management, and sensitive data handling.
Secrets and credentials:
no secrets in code,
access scoped by least privilege where feasible,
rotation capability.
Auditability:
logs for security-relevant events (login, permission changes, admin actions),
traceability for critical business actions.
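What "logs for security-relevant events" should look like is worth pinning down at handover: one structured, machine-parseable line per event. A sketch, assuming JSON lines (the event and field names are illustrative):

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("audit")

def audit(event: str, actor: str, **details) -> str:
    """Emit one structured line per security-relevant event (login,
    permission change, admin action) so it can be searched and correlated."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "actor": actor,
        **details,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line

print(audit("permission_change", actor="admin@example.com",
            target="jane@example.com", role="billing_admin"))
```

At sign-off, the question is not whether this exact helper exists, but whether the delivered system emits events of this shape for the actions listed above, and where they are stored.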
Compliance-adjacent but often overlooked in handovers:
Third-party inventory:
libraries, SDKs, and external services,
versions,
licenses (and where obligations apply).
License review approach:
identify copyleft triggers,
ensure attribution requirements are known,
document any “dual-licensed” components and chosen license.
Decision rule:
If you cannot list external dependencies and licenses, then you cannot reliably manage legal and operational exposure.
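A first pass over the license inventory can be automated with a crude heuristic. A sketch, with a hand-written illustrative inventory (in practice you would generate it from lockfiles or a tool such as pip-licenses); note this flags candidates for legal review, it is not legal advice:

```python
# Illustrative inventory; generate the real one from lockfiles or tooling.
INVENTORY = [
    {"name": "requests", "version": "2.31.0", "license": "Apache-2.0"},
    {"name": "somelib",  "version": "1.4.2",  "license": "GPL-3.0-only"},
]

# Rough heuristic for copyleft families; a lawyer decides what actually applies.
COPYLEFT_PREFIXES = ("GPL", "AGPL", "LGPL")

def copyleft_triggers(inventory):
    """Flag dependencies whose license may impose copyleft obligations."""
    return [d["name"] for d in inventory
            if d["license"].upper().startswith(COPYLEFT_PREFIXES)]

print(copyleft_triggers(INVENTORY))  # → ['somelib']
```

Even this crude check forces the inventory to exist, which is the real sign-off requirement.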
IP, ownership, and access: make it unambiguous
Goal: ensure you can legally and practically maintain the system.
Sign-off items to validate (with your legal counsel where needed):
Source code transfer: full repository access, including build scripts and infrastructure code.
Rights to use, modify, and distribute internally (as applicable).
Third-party accounts ownership: domains, certificates, email providers, app stores (if any), analytics, error tracking.
Credential handover: service accounts, API keys, signing keys—transferred securely and rotated after handover.
Decision rule:
If ownership of key accounts or rights to the code is unclear, then your ability to operate the system is fragile regardless of technical quality.
Trade-offs you may accept—if they are documented
Trade-offs are normal; untracked trade-offs become surprise costs.
Examples of acceptable trade-offs (when documented and consciously accepted):
Time-to-market vs completeness: a feature shipped with a narrower edge-case envelope.
Operational simplicity vs scalability: a modular monolith instead of microservices, with clear boundaries and future extraction options.
Short-term manual ops vs automation: temporary manual steps only if they are scripted, documented, and time-boxed with an owner.
What “good documentation” looks like:
What is compromised, where it shows up, how it fails, and how to validate it.
What metrics would indicate the trade-off has become too expensive (latency, error rate, on-call load, support tickets).
Decision rule:
If a trade-off is not written down, it will be rediscovered later—during an incident or a costly rewrite.
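The "metrics that would indicate the trade-off has become too expensive" point can be written down as an agreed budget and checked automatically. A sketch; the metric names and numbers are placeholders the team would negotiate, not recommendations:

```python
# Hypothetical budget tied to one documented trade-off.
TRADE_OFF_BUDGET = {
    "p95_latency_ms": 800,
    "error_rate_pct": 1.0,
    "oncall_pages_per_week": 3,
}

def trade_off_too_expensive(metrics: dict[str, float]) -> list[str]:
    """Return the metrics that have crossed the agreed budget."""
    return [k for k, limit in TRADE_OFF_BUDGET.items()
            if metrics.get(k, 0) > limit]

current = {"p95_latency_ms": 950, "error_rate_pct": 0.4, "oncall_pages_per_week": 2}
print(trade_off_too_expensive(current))  # → ['p95_latency_ms']
```

When a budget line is crossed, the written trade-off record tells you what to revisit, instead of the incident doing it for you.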
Anti-patterns that create post-handover risk
These patterns are common reasons why systems become hard to operate after the contractor exits:
“Works on my machine” deployments: no reproducible environment setup, no parity between staging and prod.
Hidden configuration and tribal secrets: critical behavior controlled by undocumented flags or values.
No dependency inventory: external services used ad hoc with unclear ownership, costs, or failure behavior.
Undocumented architecture drift: the diagram shows one thing, the code and deploy topology show another.
Acceptance without known-issues register: compromises exist but are not recorded, so support inherits uncertainty.
CI/CD owned by vendor: the client cannot release independently, so every change becomes a negotiation.
Decision rule:
If you see two or more of these anti-patterns, then treat handover as incomplete even if feature demos look good.
Sign-off blockers: conditions to refuse acceptance
These are reasonable “stop” conditions that should block sign-off until resolved or formally waived:
Cannot provision or deploy without contractor involvement (fresh environment test fails).
No full access to repositories and CI/CD, or access depends on vendor-controlled accounts.
Undocumented critical architecture (no diagrams/ADRs for essential flows and boundaries).
Missing technical documentation required for maintenance (bootstrap/runbook/config/integrations).
Security-critical gaps (e.g., unclear auth model, secrets in code, no audit trail for admin actions).
Unresolved ambiguity on IP transfer or usage rights.
No inventory of external dependencies and licenses (legal/operational unknowns).
Critical defects without explicit acceptance (severity not agreed, no workaround).
Decision rule:
If a blocker exists, then signing off transfers risk to you without giving you control to manage it.
Decision checklists (sign-off ready)
Vendor checklist (what you should ask the contractor to deliver)
Scope-to-delivery mapping + post-signature change log.
Architecture diagrams + ADRs for key decisions.
“From zero” setup guide (local + fresh environment).
Config + secrets management documentation (secure transfer, rotation plan).
API documentation + integration contracts and failure behavior.
CI/CD pipelines accessible to client + deploy/rollback procedure.
Test inventory + scenario coverage statement + how to run tests.
Known issues register + accepted trade-offs and their owners.
Security review notes (at least top-risk areas) + audit-relevant logging description.
Dependency + license inventory (libraries and external services).
IP and account ownership transfer documented (repo, cloud, domains, certs, tooling).
Internal team checklist (what the client must be able to do)
Clone repos, build, and run locally without help.
Deploy to a fresh environment using documented steps.
Run test suite and interpret results.
Identify where key business flows live (code/services/data).
Operate incident basics: where logs are, who is paged, how to rollback.
Change a small feature safely (a “maintenance rehearsal”).
Validate access and ownership for all critical systems and accounts.
Summary
A handover for custom web application development is successful when you can verify delivered scope, reproduce environments, release independently, and maintain the system without hidden knowledge. The practical standard is simple: control plus clarity. Control means access, ownership, and deployability; clarity means documentation, known issues, and explicit trade-offs. If either side is missing, you are not signing off a product—you are accepting uncertainty.
