PI5 — Target Acquisition Satellite (Level 5: Judgement & Responsibility)
Mission Goal
Design and justify a “target acquisition” system that detects a defined target using sensors (typically vision) and produces trustworthy evidence and decision logic. You must address ambiguity, false positives, and ethics.
Why This Matters
Targeting systems are high-responsibility engineering. Even in a classroom-safe version (objects only), the professional skill is the same: define targets, quantify uncertainty, prove performance, and set ethical boundaries.
What Data You Collect
- Image frames or detections (bounding boxes, centroid coordinates)
- Confidence scores / thresholds used
- False positive / false negative counts from test set
- Decision logs (why the system declared “target found”)
Hardware / Software Needed
- Raspberry Pi + camera (USB webcam or Pi camera)
- Python 3
- Optional: OpenCV
- Stable environment for tests (consistent lighting/background)
Inputs From Other Teams
- Command & Control: Define rules of engagement (what counts as a valid target, stop/go criteria).
- Risk & Safety: Define ethics boundary: objects only; no faces/people.
- Data: Define evaluation protocol and confusion matrix reporting.
- Comms: Define how detections are reported to mission control (format + rate).
What You Must Produce (Deliverables)
- A Target Definition Document (what the target is and is not).
- A working detector for a classroom-safe target (e.g., a coloured card, a QR code, a specific shape).
- An evaluation report with false positives/negatives and threshold justification.
- An ethics & safety statement for your design.
Step-by-Step Build
- Choose a safe target:
- Coloured object (e.g., bright red square)
- QR code / fiducial marker
- Simple shape (circle/triangle) against a controlled background
- Write the Target Definition Document:
- What qualifies
- What doesn’t qualify
- Known confusing cases (e.g., similar colours)
- Implement detection (example approaches):
- Colour thresholding + contour detection
- QR detection library
- Template matching (careful with lighting)
- Log every detection:
- timestamp, detected(true/false), confidence/score, reason, image filename
- Build a small evaluation set (at least 30 trials):
- 15 with target present
- 15 without target (hard negatives)
- Calculate performance:
- False positives, false negatives
- Precision/recall (optional but encouraged)
- Choose a threshold and justify it based on your results and mission risk (what’s worse: false alarm or miss?).
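The colour-thresholding step above can be sketched in a few lines. This version uses NumPy only, so it runs without OpenCV (with `cv2` you would more typically convert to HSV and combine `cv2.inRange` with `cv2.findContours`); the channel thresholds and the `min_pixels` value are illustrative assumptions, not tuned numbers.

```python
import numpy as np

def detect_red_square(frame, min_pixels=200):
    """Toy colour-threshold detector (thresholds are illustrative, not tuned).

    frame: H x W x 3 uint8 RGB array.
    Returns (detected, score, centroid) where score is the matching pixel count.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # "Red" = strong red channel that clearly dominates green and blue.
    mask = (r > 150) & (r - g > 60) & (r - b > 60)
    score = int(mask.sum())
    if score < min_pixels:
        return False, score, None
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean()))  # (x, y) in pixels
    return True, score, centroid

# Synthetic test frame: black background with one bright red patch.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:70, 60:100] = (220, 30, 30)
detected, score, centroid = detect_red_square(frame)
print(detected, score, centroid)
```

Note how the score (matching pixel count) doubles as the confidence value you log and later sweep when choosing a threshold.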
Data Format / Output
- detections.csv columns: ts_iso, target_present, detected, score, threshold, reason, image_file
- Saved evidence images: evidence/img_*.jpg (objects only)
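The detections.csv rows can be written with Python's standard `csv` module. A minimal sketch follows; the helper name and the example reason string are illustrative, and the file is written to an in-memory buffer here so the snippet is self-contained.

```python
import csv
import datetime
import io

# Column order matches the detections.csv specification.
FIELDS = ["ts_iso", "target_present", "detected", "score",
          "threshold", "reason", "image_file"]

def log_detection(writer, *, target_present, detected, score,
                  threshold, reason, image_file):
    """Append one decision record, timestamped in UTC ISO format."""
    writer.writerow({
        "ts_iso": datetime.datetime.now(datetime.timezone.utc)
                  .isoformat(timespec="seconds"),
        "target_present": target_present,
        "detected": detected,
        "score": score,
        "threshold": threshold,
        "reason": reason,
        "image_file": image_file,
    })

buf = io.StringIO()  # in a real run: open("detections.csv", "a", newline="")
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_detection(writer, target_present=True, detected=True, score=1200,
              threshold=200, reason="red area above threshold",
              image_file="evidence/img_0001.jpg")
print(buf.getvalue())
```

Logging the reason alongside the score is what makes the decision log auditable later.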
Analysis Ideas
- Confusion matrix (TP/FP/TN/FN).
- Threshold sweep: how FP/FN changes as threshold changes.
- Bias analysis: lighting changes, background changes, distance changes.
- Decide mission policy: when to request a second confirmation before declaring “found”.
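The confusion matrix and threshold sweep above can be computed directly from the logged (target_present, score) pairs. A minimal sketch, with made-up scores for illustration:

```python
def confusion_counts(records, threshold):
    """records: list of (target_present, score) pairs.

    Returns (TP, FP, TN, FN) when declaring 'detected' at score >= threshold.
    """
    tp = fp = tn = fn = 0
    for present, score in records:
        detected = score >= threshold
        if present and detected:
            tp += 1
        elif present and not detected:
            fn += 1
        elif not present and detected:
            fp += 1
        else:
            tn += 1
    return tp, fp, tn, fn

# Illustrative scores only; use your own detections.csv records.
records = [(True, 0.9), (True, 0.7), (True, 0.4),
           (False, 0.6), (False, 0.2), (False, 0.1)]

for thr in (0.3, 0.5, 0.8):
    tp, fp, tn, fn = confusion_counts(records, thr)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"thr={thr}: TP={tp} FP={fp} TN={tn} FN={fn} "
          f"P={precision:.2f} R={recall:.2f}")
```

Sweeping the threshold like this makes the false-alarm vs. miss trade-off visible, which is exactly the evidence the threshold-justification note needs.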
Success Criteria
- Detector works on the defined target and logs decisions transparently.
- Evaluation report includes FP/FN and threshold justification.
- Ethics boundary is explicit and followed (objects only, no faces).
- Team demonstrates professional judgement: acknowledges limitations and sets safe policies.
Evidence Checklist
- Target Definition Document
- detections.csv + summary counts
- Sample evidence images (objects only)
- Threshold justification note
- Ethics & safety statement
Safety & Privacy
- Do not target people or faces. Use objects only.
- Camera frame must avoid private screens and personal documents.
- Store and share evidence responsibly; get permission before sharing anything externally.
- Be honest about uncertainty; don’t claim capability you didn’t measure.
Common Failure Modes
- Target definition vague → detector “works” only by luck.
- No evaluation set → performance unknown.
- Cherry-picking only successful examples.
- Ignoring false positives (often the most dangerous failure type).
Stretch Goals
- Add “two-step confirmation” (detect twice in a row before declaring).
- Report uncertainty: “likely/possible/unlikely target” bands.
- Integrate with comms: send structured detection packets to mission control.
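The two-step confirmation idea can be sketched as a small state holder that only declares "found" after N consecutive positive detections; the class name and the choice of N are assumptions, not a prescribed design.

```python
class TwoStepConfirmer:
    """Declare 'found' only after `required` consecutive positive frames.

    A single positive frame resets to 'found' only if the streak continues;
    any negative frame resets the streak to zero.
    """

    def __init__(self, required=2):
        self.required = required
        self.streak = 0

    def update(self, detected):
        self.streak = self.streak + 1 if detected else 0
        return self.streak >= self.required

confirmer = TwoStepConfirmer(required=2)
# Per-frame detector outputs: one isolated positive, then a run of positives.
results = [confirmer.update(d) for d in [True, False, True, True, True]]
print(results)
```

The isolated first positive never triggers a declaration, which is the point: single-frame false positives are filtered out at the cost of a one-frame delay.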
Scaffolding Example (optional)
You are allowed to reuse structures and formats from other teams — but not their decisions.
Structure: “Full payload data pack”
- Readme (how to run + what files mean)
- Raw data (CSV)
- Processed summary (text + 3 bullet insights)
- Evidence (photos/screenshots)
Example insight prompts
- What changed over time? What correlated with environment?