Capability target
Publish a reproducibility-ready connectomics package (data + methods + metadata + limitations) that an external group can audit and reuse.
Why this module matters
Connectomics studies are technically dense and often impossible to interpret without exact workflow context. FAIR and reproducibility are not paperwork; they are infrastructure for scientific validity.
Concept set
1) FAIR as implementation checklist
Technical: findable identifiers, accessible storage, interoperable formats, and reusable metadata each require concrete engineering choices.
Plain language: “FAIR” only counts if someone else can actually find, open, and use your work.
Misconception guardrail: posting files online is not enough.
2) Reproducibility is layered
Technical: computational reproducibility (same code/data => same result) differs from inferential reproducibility (same conclusion under reasonable variation).
Plain language: rerunning code and trusting conclusions are related but not identical.
Misconception guardrail: a notebook that runs once does not guarantee robust science.
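The distinction above can be made concrete with a minimal sketch of a computational reproducibility check: run the same analysis twice on the same input and confirm the outputs are bit-identical. The analysis function and synapse counts here are hypothetical stand-ins, not part of any real pipeline.

```python
import hashlib
import json
import random

def summarize(synapse_counts, seed):
    """Toy analysis step: bootstrap a mean synapse count with a fixed seed."""
    rng = random.Random(seed)
    samples = [rng.choice(synapse_counts) for _ in range(1000)]
    return {"mean": sum(samples) / len(samples)}

def result_hash(result):
    """Hash a result dict deterministically (sorted keys, canonical JSON)."""
    blob = json.dumps(result, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

counts = [12, 7, 30, 4, 19]
run1 = result_hash(summarize(counts, seed=42))
run2 = result_hash(summarize(counts, seed=42))
assert run1 == run2  # same code + data + seed => same result
```

Passing this check establishes only computational reproducibility; inferential reproducibility would require showing the conclusion holds under reasonable variation, for example across seeds or sample subsets.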
3) Hidden curriculum in reproducibility
Technical: unwritten norms include naming conventions, release etiquette, assumption disclosure, and reviewer-ready method transparency.
Plain language: many expectations are “known by insiders” unless we teach them directly.
Misconception guardrail: learners should not be penalized for norms that were never made explicit.
4) FAIR applied to connectomics
Each FAIR principle maps to concrete connectomics infrastructure:
Findable: assign DOIs to datasets and provide stable CAVE endpoints that resolve to specific data versions.
Accessible: offer open APIs and tools such as CloudVolume that allow programmatic data retrieval without manual download.
Interoperable: use standard formats (SWC for neuron morphologies, Zarr for volumetric data, NWB for neurophysiology) so that tools across labs can ingest each other's outputs.
Reusable: rely on materialization versioning in CAVE, which lets any researcher retrieve the exact state of the segmentation and annotations at a given point in time.
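One way to make this mapping auditable is a machine-readable metadata record that pairs each FAIR principle with the concrete choices named above. This is a hedged sketch: the field names, DOI, datastack name, and cloud path are illustrative assumptions, not a real standard or a real dataset.

```python
import json

# Illustrative FAIR metadata record; all identifiers below are placeholders.
record = {
    "findable": {
        "doi": "10.0000/example-dataset",       # placeholder DOI
        "cave_datastack": "example_datastack",  # hypothetical datastack name
    },
    "accessible": {
        "volumetric_api": "CloudVolume",
        "cloudpath": "gs://example-bucket/segmentation",  # illustrative path
    },
    "interoperable": {
        "morphology_format": "SWC",
        "volumetric_format": "Zarr",
        "physiology_format": "NWB",
    },
    "reusable": {
        "materialization_version": 117,  # pins segmentation + annotations
    },
}

# Serialize with sorted keys so the record itself is diff- and hash-friendly.
print(json.dumps(record, sort_keys=True, indent=2))
```

Shipping a record like this alongside the data gives an external auditor a single file to check each principle against, instead of reconstructing the choices from prose methods.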
A practical reproducibility checklist for any connectomics analysis release:
The dataset version or release identifier.
The CAVE materialization number (if applicable).
The code commit hash for all analysis scripts.
The environment specification (e.g., conda environment file or Docker image).
The full parameter configuration used.
Without all five elements, a third party cannot reliably reproduce the analysis, even with access to the same underlying data.
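The five-element checklist can be enforced mechanically at release time. Below is a minimal sketch, assuming a lab-specific record type; the field and function names are illustrative, not an existing tool's API.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ReleaseRecord:
    """One release's reproducibility metadata (illustrative schema)."""
    dataset_version: str            # dataset version / release identifier
    materialization: Optional[int]  # CAVE materialization number, if applicable
    code_commit: str                # commit hash of all analysis scripts
    environment: str                # e.g. path to environment.yml or a Docker tag
    parameters: dict                # full parameter configuration used

def missing_elements(rec: ReleaseRecord) -> list:
    """Name every unset checklist element; materialization alone may be None."""
    out = []
    for f in fields(rec):
        value = getattr(rec, f.name)
        if f.name != "materialization" and not value:
            out.append(f.name)
    return out

rec = ReleaseRecord(
    dataset_version="v1.2",        # placeholder values throughout
    materialization=343,
    code_commit="9f2c1ab",
    environment="environment.yml",
    parameters={"threshold": 0.7},
)
assert missing_elements(rec) == []  # release is complete; safe to publish
```

Running `missing_elements` in a pre-release script turns the checklist from a norm reviewers silently expect into a gate the pipeline enforces.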
Hidden curriculum scaffold
What senior reviewers expect but rarely state:
Dataset and code version IDs in figure legends/methods.
Explicit handling of failed runs and excluded samples.
A short “known limitations” section with concrete failure modes.
How to make these norms visible to trainees:
Provide reproducibility checklists before assignments.
Share annotated examples of strong/weak method reporting.
Require mentorship feedback on documentation, not only results.