Contract Enforcement Rules for Collaboration, v1.2

Scope Statement

This rule set applies only to weak-tie online collaboration, cross-background teams, and software–hardware or otherwise high-uncertainty projects, where the goal is an engineering contract that enables parallel work and provides a stop-loss mechanism. It exists to reduce communication overhead, shift risk management earlier, and keep execution traceable and verifiable.

Non-Applicable Scenarios

This rule set does not apply to casual social interaction, non-project communication, personal relationships, or any context unrelated to engineering delivery. I will not invoke or enforce it in those scenarios, nor use it as a basis to judge or evaluate others.

Non-Bias and Adjudication Basis

This rule set carries no bias against any individual, group, or background. All collaboration decisions are based only on engineering-verifiable information: artifact delivery, interface contracts, acceptance definitions, versioned changes, and explicit time windows. It does not engage in motive debates, attitude attribution, or personality judgments.

Transparency and Misunderstanding Prevention

This rule set serves as an explicit pre-collaboration contract. It is meant to disclose my working expectations, boundaries, and default actions in full and in advance, so both parties share a consistent understanding of bottom lines and shared rules before collaboration begins—reducing misunderstandings and preventing disputes from escalating at any stage.


Purpose: filter collaborators at minimal cost, establish contracts that enable parallel work, and cut losses quickly when risk becomes significant.
Principles: judge only by verifiable deliverables; do not debate motives; drive progress with time windows.


0. Scope and Objectives

0.1 Scope

  • Weak-tie online collaboration (met in group chats / cross-school / cross-domain)
  • Software-hardware or high-uncertainty projects (competitions, coursework, open source, PoCs, prototypes)

0.2 Engineering Objectives

  • Low-cost screening: convert “interest” into “artifact commitments”
  • Parallelizable collaboration: contract-first; versioned interfaces and acceptance criteria
  • Fast stop-loss: use default actions to end investment and prevent sunk cost

1. System Model

1.1 Collaboration State Machine

| State | Name | Definition | Allowed Discussion | Exit Conditions |
| --- | --- | --- | --- | --- |
| S0 | Probing | Idea exchange only; no commitment | Context/value/rough direction | Send onboarding request → S1 |
| S1 | Entry Evaluation | Request MIP/MVW with a deadline | Only materials and deliveries | Pass → S2; overdue → S4 |
| S2 | Kickoff | Contract/interface/acceptance in place; parallel work starts | Artifact-driven execution | Red flags trigger → S3 |
| S3 | Risk Watch | Red flags observed; timeboxed remediation | Only remediation checklist items | Remediate → S2; overdue → S4 |
| S4 | Stop-Loss Freeze | Stop project discussion | Optional non-work relationship | Only “artifact-based re-eval” → S1 |

Engineering meaning: states change only based on whether artifacts exist and meet spec—not based on attitude narratives.
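A minimal sketch of this state machine in TypeScript. The state names come from the table above; the event names are illustrative, not part of the rule set:

```typescript
// Collaboration states from the table in 1.1.
type State =
  | "S0_Probing"
  | "S1_EntryEvaluation"
  | "S2_Kickoff"
  | "S3_RiskWatch"
  | "S4_StopLossFreeze";

// Events that can move the machine (names are illustrative).
type Event =
  | "onboardingRequestSent" // S0 → S1
  | "mipAndMvwAccepted"     // S1 → S2
  | "redFlagsTriggered"     // S2 → S3 (two or more red flags; see 5.1)
  | "remediationDelivered"  // S3 → S2
  | "deadlineMissed"        // S1 → S4, S3 → S4
  | "artifactBasedReeval";  // S4 → S1 (see 7.1)

const transitions: Record<State, Partial<Record<Event, State>>> = {
  S0_Probing:         { onboardingRequestSent: "S1_EntryEvaluation" },
  S1_EntryEvaluation: { mipAndMvwAccepted: "S2_Kickoff", deadlineMissed: "S4_StopLossFreeze" },
  S2_Kickoff:         { redFlagsTriggered: "S3_RiskWatch" },
  S3_RiskWatch:       { remediationDelivered: "S2_Kickoff", deadlineMissed: "S4_StopLossFreeze" },
  S4_StopLossFreeze:  { artifactBasedReeval: "S1_EntryEvaluation" },
};

// A transition fires only if the table allows it; anything else holds the state.
function next(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}
```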


2. Terms and Artifact System

2.1 MIP: Minimum Input Package

The “start-work inputs” a collaborator must deliver before you invest.

For software-hardware projects, MIP must include:

  • MIP-A Tech Stack Declaration
    • Hardware: MCU/controller model, sensor/actuator list, power method
    • Software: host/mobile platform, language/framework, versions/constraints
  • MIP-B Interface Draft
    • Transport (BLE/serial/Wi-Fi/etc.)
    • Data direction: hardware→software, software→hardware
    • Draft payload schema (fields, types, units, frequency; TODO allowed but structure must exist; see the sketch after this list)
  • MIP-C Variables and Ranges (assumptions allowed)
    • Key variables, expected ranges, measurement/acquisition method
  • MIP-D Acceptance Definition (minimum verifiable)
    • What the demo must prove, pass/fail criteria, how to reproduce
  • MIP-E Risks and Unknowns
    • TOP3 unknowns + validation plan (how to test, when conclusions will be produced)
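As an illustration, an MIP-B draft might look like the sketch below. The device, fields, units, and rates are hypothetical (a BLE-connected IMU wearable), and TODOs are left in place as the spec allows:

```typescript
// Hypothetical MIP-B interface draft: a BLE-connected IMU wearable.
// TODOs are allowed at this stage; only the structure is mandatory.
interface ImuSamplePayload {
  timestampMs: number; // unit: ms since boot
  accelX: number;      // unit: m/s^2; expected range TODO (belongs in MIP-C)
  accelY: number;      // unit: m/s^2
  accelZ: number;      // unit: m/s^2
  batteryPct?: number; // unit: percent; TODO: confirm reporting interval
}

const interfaceDraft = {
  transport: "BLE",                          // MIP-B: transport
  direction: "hardware->software",           // MIP-B: data direction
  frequencyHz: 50,                           // TODO: confirm after power test
  payloadSchema: "ImuSamplePayload (draft)", // versioned once MVW-1 exists
};
```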

2.2 MVW: Minimum Verifiable Work

Prevents “keep discussing” from standing in for actual output.

Within 48 hours, deliver any one of:

  • MVW-1: Interface Contract v0.1
    • fields/types/units/frequency/error handling/version (see the sketch after this list)
  • MVW-2: Reproducible Validation Plan
    • TOP3 unknowns → how to test → what records/logs → when conclusions are delivered
  • MVW-3: Minimal Prototype Evidence
    • logs/serial output/BLE capture/demo video + reproduction steps
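Continuing the hypothetical IMU example from 2.1, an MVW-1 contract tightens that draft into a versioned artifact with explicit error handling. All concrete values are illustrative:

```typescript
// MVW-1: Interface Contract v0.1 (hypothetical values, continuing the IMU example).
const interfaceContractV01 = {
  version: "0.1",
  transport: "BLE",
  direction: "hardware->software",
  frequencyHz: 50,
  fields: [
    { name: "timestampMs", type: "uint32",  unit: "ms" },
    { name: "accelX",      type: "float32", unit: "m/s^2" },
    { name: "accelY",      type: "float32", unit: "m/s^2" },
    { name: "accelZ",      type: "float32", unit: "m/s^2" },
  ],
  errorHandling: {
    droppedPacket: "detect via timestampMs gap; log and continue",
    outOfRange:    "clamp, flag the sample as suspect, count it in the test log",
  },
} as const;
```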

3. Onboarding (Entry Protocol)

3.1 Entry Gates (must all hold)

  • You have explicitly stated your role and boundaries (what you will not do)
  • The other party is the initiator or clearly the accountable driver (Owner/Accountable)
  • They agree to communicate via documents/artifacts (not purely verbal)

3.2 Onboarding Request (Template)

Goal: convert “willingness” into “deliverable commitments.”

I’m willing to evaluate joining, but I need you to provide by T+48h:

  1. MIP-A/B (placeholder allowed): tech stack + transport + interface direction + payload skeleton (TODO allowed)
  2. Any one MVW (1/2/3): Interface Contract v0.1 OR Reproducible Validation Plan OR Minimal Prototype Evidence

Default action: if unmet by the deadline, I assume the project is not in an execution state. I won’t invest time or discuss project details.

3.3 Deadline Rules (Default SLA)

  • Weak-tie/stranger collaboration: within 48 hours there must be MIP-A/B + any one MVW
  • A longer cycle may be requested, but a placeholder submission is still due at 48 hours (at least stack + transport + interface-direction skeleton)

3.4 Placeholder Acceptance Criteria (Placeholder Spec)

A placeholder can be incomplete but must include:

  • Hardware model or candidate range + transport (at least “current plan”)
  • Explicit data direction + payload skeleton (TODO permitted)
  • A promised update (timestamp + which fields/items will be completed)

4. Kickoff Contract (Execution Rules)

4.1 Contract Elements (Written, not verbal)

  • Roles: who drives (Owner/A), who implements (R), who accepts/tests
  • Boundaries: a “Not Responsible” list for you
  • Artifacts: weekly/phase deliverables (docs, prototypes, test logs, captures)
  • Cadence: one short sync (e.g., weekly 30 min); everything else async
  • Change control: direction changes require updating docs and acceptance criteria first

4.2 Parallel Development Constraints

  • All integration follows the interface contract
  • Interface changes must be versioned (v0.1 → v0.2) and include (a sample record follows this list):
    1. rationale
    2. compatibility strategy (compatible vs breaking)
    3. impact scope (which modules change)
  • Every discussion must produce an artifact update (even five lines)
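One way to record such a change, continuing the hypothetical IMU example; the shape below is a sketch, not a mandated format:

```typescript
// Hypothetical change record for a v0.1 → v0.2 bump, carrying the
// three required items: rationale, compatibility strategy, impact scope.
const changeRecord = {
  from: "0.1",
  to: "0.2",
  rationale: "50 Hz sampling overruns the BLE connection interval; drop to 25 Hz",
  compatibility: "breaking", // "compatible" | "breaking"
  impactScope: ["firmware sampler", "host-side decoder", "acceptance test for rate"],
} as const;
```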

5. Red Flags and Triggers (Risk Control)

5.1 Red Flags (any two → enter S3)

  • R1 Discussion replaces delivery: ongoing chat but no artifact updates
  • R2 Responsibility inversion: “you’re busy / you didn’t push” used to justify non-delivery
  • R3 Labeling instead of answering: ignores artifact questions; shifts to personal/method attacks
  • R4 Repeated drift: direction changes without updating docs/acceptance
  • R5 Unequal terms: demands high input from you while returns/credit/boundaries are unclear
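The trigger itself is mechanical, which is the point: no judgment call is needed. A minimal sketch, with flag IDs as above:

```typescript
// Red-flag trigger: any two active flags move the collaboration to S3 (Risk Watch).
type RedFlag = "R1" | "R2" | "R3" | "R4" | "R5";

function shouldEnterRiskWatch(activeFlags: Set<RedFlag>): boolean {
  return activeFlags.size >= 2;
}

// Example: discussion replacing delivery (R1) plus undocumented drift (R4).
const enterS3 = shouldEnterRiskWatch(new Set<RedFlag>(["R1", "R4"])); // true
```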

5.2 S3 Remediation (One remediation notice, 48h)

Remediation notice contains only: gap, deadline, default action.

Remediation Template

By T+48h, please complete: MIP-B (fields + frequency) and MIP-D (acceptance + reproduction steps).
If still missing at the deadline, I will stop project discussion and investment and move to freeze.
If you later complete the materials, I can re-evaluate.


6. Stop-Loss Freeze (S4)

6.1 Stop-Loss Conditions (any one is sufficient)

  • Two consecutive rounds of dodging core artifacts (interface/stack/acceptance) and shifting to labeling or a tug-of-war
  • Missing a valid MIP-A/B + MVW after the deadline
  • Clear responsibility inversion and refusal to remediate

6.2 Stop-Loss Script (Shortest)

I understand your perspective, but I won’t continue participating in this project, and I won’t discuss project content further.
General/non-work conversation is fine. (optional)

6.3 Post-Freeze Behavior Rules

  • Do not re-enter project debates (including “clarifying misunderstandings”)
  • Do not provide reviews or advice (including repeated review cycles)
  • If they keep pulling: go silent / close the work channel entry point

7. Composite Mechanisms (Optional)

7.1 Re-evaluation Entry (Only “artifact-based re-eval”)

Only re-enter S1 if they submit:

  • Complete MIP-A/B/C/D
  • A reproducible demo or test log
  • A change record (what changed since last time)

7.2 “No Motive Debate” Rule

If the discussion drifts from “artifacts/interfaces/acceptance” to motives or character:

  • Reply once and end:

I only discuss deliverables and interfaces. Without deliverables, we don’t proceed.


8. Lead/Architect Rules (TL/Architect/Owner)

8.1 Roles (Declare your primary hat)

  • Owner (A): defines goals/scope, owns cadence/resources, accountable for outcomes
  • Architect: defines system boundaries/interfaces, key decisions, quality gates, risk shift-left
  • TL: breaks down work, tracks progress, reviews, integration and release quality

8.2 Minimum Visible Outputs

  • Project Charter (goal/non-goals/scope/deliverables/cadence/top risks)
  • Interface Contract (contract-first, versioned)
  • Milestones + DoD (acceptance per milestone)
  • Decision Log (stops repeated arguments)
  • Risk Register (risks + validation plans)

8.3 Project Charter (within 24–48h)

Include:

  • Goal (with measurable targets)
  • Non-goals (explicit “not doing”)
  • Scope (module boundaries)
  • Deliverables (phase + final)
  • RACI (every module must have an A)
  • Cadence (sync frequency + async systems)
  • TOP3 Risks + Validation Plan

8.4 Contract-First Interface Governance

Any cross-module work defines the contract first:

  • Inputs/outputs (fields/types/units/frequency)
  • Constraints (latency/accuracy/throughput/error codes)
  • Versioning (v0.1/v0.2)
  • Acceptance examples (payload/scenario; see the sketch below)
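For example, an acceptance entry attached to the hypothetical IMU contract from earlier sections could pair a concrete payload with the scenario it must satisfy (all values illustrative):

```typescript
// One acceptance example for the hypothetical IMU contract:
// a concrete payload plus the scenario and pass/fail criteria.
const acceptanceExample = {
  scenario: "device streams at 25 Hz for 60 s with no sequence gap",
  samplePayload: { timestampMs: 120040, accelX: 0.12, accelY: -9.79, accelZ: 0.03 },
  pass: "decoder ingests every sample; no out-of-range flags raised",
  fail: "any dropped packet not logged, or any field outside its declared range",
} as const;
```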

Interface changes must:

  1. bump contract version
  2. record rationale
  3. state compatibility strategy

8.5 Two-track Planning: Research vs Delivery

  • Research track: unknown validation, PoCs, experiment logs
  • Delivery track: implementation, integration, stability, release

Rule: research outputs must be documented and converted into delivery inputs, or they don’t count as progress.

8.6 Definition of Done (DoD)

Per module, at minimum:

  • Feature complete
  • Minimal reproduction steps + logs
  • Performance/stability metric (e.g., latency/crash rate)
  • Docs updated (contract/usage)
  • Integration passes

8.7 Conflict Handling (Avoid “Judge Mode”)

Classify first:

  • Fact conflicts: resolve via experiments/records
  • Preference conflicts: decide via target metrics or user value
  • Value conflicts: return to charter and RACI

9. Quick-Use Checklists (Operational)

9.1 48-hour Entry Checklist (Must Provide)

  • Hardware controller model (or candidate range)
  • Transport method
  • Data direction (HW→SW / SW→HW)
  • Payload skeleton (TODO allowed)
  • Any one MVW (contract / validation plan / prototype evidence)

9.2 Weekly Minimum Cadence After Kickoff

  • 1 × 30-min sync
  • ≥1 artifact update (contract/logs/DoD/tests)
  • Any interface change: version bump + impact scope

9.3 Minimal Risk Loop

  • ≥2 red flags → S3
  • One remediation notice, 48h
  • Deadline missed → S4

10. Retrospective Template (48h System)

  • Did my first message clearly state the 48h deliverables and default action?
  • Did they deliver MIP-A/B + any one MVW within 48h?
  • When red flags first appeared, did I issue remediation within 48h?
  • Did I get pulled into motive debate? What triggered it?
  • How much measurable time did I invest (minutes/hours)?
  • What default action could I trigger earlier next time (freeze/downgrade/replace)?

Appendix: Optional Hardening Clause

  • If they ask to “keep discussing” but provide no artifacts, treat as invalid input and freeze immediately (S4).