Capability target
Publish a reproducibility-ready connectomics package (data + methods + metadata + limitations) that an external group can audit and reuse.
Why this module matters
Connectomics studies are technically dense and often impossible to interpret without exact workflow context. FAIR and reproducibility are not paperwork; they are infrastructure for scientific validity.
Concept set
1) FAIR as implementation checklist
Technical: findable identifiers, accessible storage, interoperable formats, and reusable metadata each require concrete engineering choices (a minimal sketch follows this concept).
Plain language: “FAIR” only counts if someone else can actually find, open, and use your work.
Misconception guardrail: posting files online is not enough.
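As a concrete example, the sketch below writes a machine-readable metadata sidecar for a shared dataset, with one field per FAIR letter. Every name and value here (the DOI, URL, format entry, contact, and description) is a placeholder, not a prescribed schema; map them onto whatever standard your repository actually uses (DataCite is one common target).

```python
# Minimal sketch of a metadata sidecar for a shared connectomics dataset.
# All field names and values are illustrative placeholders, not a standard.
import json
from pathlib import Path

metadata = {
    # Findable: a persistent identifier, not just a filename.
    "identifier": "doi:10.XXXX/placeholder",
    # Accessible: where and how the bytes can actually be retrieved.
    "access_url": "https://example.org/datasets/em-volume-01",
    # Interoperable: an open, documented format with a version.
    "format": {"name": "HDF5", "version": "1.12"},
    # Reusable: license, contact, and enough context to reuse safely.
    "license": "CC-BY-4.0",
    "contact": "lab@example.org",
    "description": "Placeholder: aligned serial-section EM volume.",
}

Path("dataset.metadata.json").write_text(json.dumps(metadata, indent=2))
```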
2) Reproducibility is layered
Technical: computational reproducibility (same code/data => same result) differs from inferential reproducibility (same conclusion under reasonable variation); a fingerprinting sketch follows this concept.
Plain language: rerunning code and trusting conclusions are related but not identical.
Misconception guardrail: a notebook that runs once does not guarantee robust science.
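One way to make the computational layer checkable, rather than taken on trust, is to record a run fingerprint: hashes and versions that any later rerun can be compared against. This is a minimal sketch under stated assumptions: the analysis lives in a git checkout, and segmentation.h5 is a hypothetical input file standing in for your real data.

```python
# Minimal sketch of a run "fingerprint": enough to check later whether a
# rerun used the same code, data, and environment. File layout and field
# names are assumptions for illustration, not a standard.
import hashlib
import json
import platform
import subprocess
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a data file so 'same data' is checkable, not assumed."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

fingerprint = {
    # Requires a git checkout; records the exact code revision.
    "code_version": subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip(),
    "data_sha256": sha256(Path("segmentation.h5")),  # hypothetical input
    "python": sys.version.split()[0],
    "platform": platform.platform(),
}

Path("run_fingerprint.json").write_text(json.dumps(fingerprint, indent=2))
```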
3) Hidden curriculum in reproducibility
Technical: unwritten norms include naming conventions, release etiquette, assumption disclosure, and reviewer-ready method transparency.
Plain language: many expectations are “known by insiders” unless we teach them directly.
Misconception guardrail: learners should not be penalized for norms that were never made explicit.
Hidden curriculum scaffold
What senior reviewers expect but rarely state:
Dataset and code version IDs in figure legends/methods (see the stamping sketch after this list).
Explicit handling of failed runs and excluded samples.
A short “known limitations” section with concrete failure modes.
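The first expectation can be enforced by construction rather than by memory. The sketch below stamps dataset and code version IDs directly onto a figure at save time; it assumes matplotlib and a git checkout, and the dataset tag, title, and plotted values are placeholders.

```python
# Minimal sketch: stamp provenance onto a figure so the version IDs
# travel with the image. Assumes matplotlib and a git checkout; the
# dataset tag, title, and data are illustrative placeholders.
import subprocess
import matplotlib.pyplot as plt

code_id = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"], text=True
).strip()
data_id = "ds-em-volume-01 v2.1"  # hypothetical dataset release tag

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [3, 1, 2])  # placeholder analysis result
ax.set_title("Placeholder analysis figure")
# Small provenance footer: readable in the file, easy to copy into legends.
fig.text(0.01, 0.01, f"data: {data_id} | code: {code_id}", fontsize=7, ha="left")
fig.savefig("figure1.png", dpi=300)
```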
How to make these norms visible to trainees:
Provide reproducibility checklists before assignments (a machine-checkable sketch follows this list).
Share annotated examples of strong/weak method reporting.
Require mentorship feedback on documentation, not only results.
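A checklist is easiest to act on when it is machine-checkable, so a trainee can self-audit before submitting. The sketch below validates a package directory against a hypothetical required-file list; the names (methods.md, limitations.md, and so on) are assumptions to adapt to local conventions, and run_fingerprint.json matches the fingerprint sketch above.

```python
# Minimal sketch of a machine-checkable reproducibility checklist.
# The required filenames are assumptions; adapt them to your lab's
# actual conventions and hand the script out with the assignment.
from pathlib import Path

REQUIRED = {
    "README.md": "entry point: what this is and how to run it",
    "LICENSE": "reuse terms",
    "environment.yml": "pinned software environment",
    "methods.md": "step-by-step workflow, including failed runs",
    "limitations.md": "known failure modes and excluded samples",
    "run_fingerprint.json": "code/data/environment versions",
}

def check_package(root: str) -> bool:
    """Report which checklist items a submission is missing."""
    ok = True
    for name, why in REQUIRED.items():
        if not (Path(root) / name).exists():
            print(f"MISSING {name}  ({why})")
            ok = False
    return ok

if __name__ == "__main__":
    check_package(".")
```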