Interactive Lab
Practice in short loops: checkpoint quiz, microtask decision, and competency progress tracking.
Capability target
Design and execute a connectomics inference plan that includes null-model choice, multiplicity control, uncertainty reporting, and explicit claim boundaries.
Why this module matters
Connectomics analyses can produce thousands of statistically testable patterns. Without disciplined inference, teams risk publishing artifacts from preprocessing bias, multiple comparisons, or misaligned null assumptions.
Concept set
1) Null models encode scientific assumptions
Technical: null models should preserve relevant graph constraints (degree sequence, spatial limits, cell-class composition) while randomizing the tested structure.
Plain language: your “chance baseline” must reflect biology and data collection realities.
Misconception guardrail: a generic random graph is rarely an adequate connectomics null.
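As a minimal sketch of the "chance baseline" idea, the following degree-preserving rewiring null randomizes which nodes connect to which while keeping every node's degree fixed. The edge-list representation and swap count are illustrative; a real connectomics null would typically layer additional constraints (spatial limits, cell-class composition) on top of this.

```python
import random

def degree_preserving_null(edges, n_swaps=1000, seed=0):
    """Randomize an undirected edge list while preserving every node's degree.

    Repeatedly picks two edges (a, b) and (c, d) and rewires them to
    (a, d) and (c, b), rejecting any swap that would create a self-loop
    or a duplicate edge. This is the classic degree-preserving rewiring
    null; it destroys higher-order structure (e.g. motifs) but keeps the
    degree sequence as a confounder under control.
    """
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    edge_set = set(edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        # Reject swaps producing self-loops, multi-edges, or a collapsed pair.
        if a == d or c == b or new1 == new2 or new1 in edge_set or new2 in edge_set:
            continue
        edge_set.discard(edges[i]); edge_set.discard(edges[j])
        edge_set.add(new1); edge_set.add(new2)
        edges[i], edges[j] = new1, new2
    return edges
```

Because each swap preserves the degree of all four endpoints, sampling many rewired graphs yields a null ensemble matched on degree sequence but randomized in the tested structure.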
2) Multiplicity is structural, not optional
Technical: motif families and subgroup analyses require correction strategies and predeclared test hierarchies.
Plain language: if you test many patterns, some will look significant by accident.
Misconception guardrail: reporting only p-values without multiplicity context is incomplete.
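To make the multiplicity point concrete, here is a minimal stdlib implementation of the Benjamini-Hochberg step-up procedure for controlling the false discovery rate across a family of motif tests. This is one standard choice, not the only valid correction; hierarchical or family-wise strategies may fit a predeclared test hierarchy better.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Sorts the m p-values, finds the largest rank k such that
    p_(k) <= (k / m) * alpha, and rejects the k smallest hypotheses.
    Returns a boolean reject decision for each input p-value, in the
    original order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank  # largest rank passing the step-up threshold
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

For example, with p-values [0.001, 0.02, 0.04, 0.2] at alpha = 0.05, only the first two survive: the third p-value (0.04) exceeds its threshold of 3/4 × 0.05 = 0.0375.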
3) Exploratory and confirmatory analyses must be separated
Technical: hypothesis generation and hypothesis testing should have different reporting labels and evidence standards.
Plain language: be clear about what you discovered versus what you validated.
Misconception guardrail: post-hoc storytelling is not confirmatory inference.
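One mechanical way to enforce the exploratory/confirmatory boundary is to split analysis units before any pattern search. The sketch below (unit IDs and split fraction are illustrative) partitions units such as reconstructed neurons into a discovery set for hypothesis generation and a held-out validation set for confirmatory testing.

```python
import random

def split_discovery_validation(unit_ids, frac_discovery=0.5, seed=0):
    """Partition analysis units into discovery and validation sets.

    The split must be made (and its seed recorded) before any pattern
    search, so that hypotheses found in the discovery set can be tested
    on validation data that exploration never touched.
    """
    rng = random.Random(seed)
    ids = list(unit_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * frac_discovery)
    return ids[:cut], ids[cut:]
```

Recording the seed alongside the preregistered analysis plan makes the split auditable: anyone can reproduce exactly which units were eligible for confirmatory claims.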
Core workflow: connectomics inference protocol
Question-to-test mapping
Convert the biological question into estimand(s), a test set, and an effect-size target.
Null-model design
Define null constraints and why they preserve key confounders.
Inference execution
Run model/tests with preregistered thresholds and multiplicity controls.
Robustness checks
Test sensitivity to preprocessing variants, sampling regions, and parameter choices.
Claim calibration
Report supported, uncertain, and unsupported claims in separate blocks.
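The inference-execution step of this protocol can be sketched as an empirical permutation test: count a motif in the observed graph, count it in an ensemble of null graphs, and report the fraction of null draws at least as extreme. Triangles stand in for a motif of interest, and `null_sampler` is a user-supplied function (for instance, a degree-preserving rewiring) that returns one null edge list per seed; both are assumptions for illustration.

```python
import random

def count_triangles(edges):
    """Count triangles (the simplest nontrivial motif) in an undirected graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    # Each triangle is counted once per participating edge, hence // 3.
    return sum(len(adj[a] & adj[b]) for a, b in edges) // 3

def motif_enrichment_pvalue(edges, null_sampler, n_null=200, seed=0):
    """One-sided empirical p-value for motif enrichment.

    Draws n_null graphs from null_sampler(seed) and returns the fraction
    with a motif count >= observed, with the (hits + 1) / (n + 1)
    correction that keeps the estimate away from exactly zero.
    """
    observed = count_triangles(edges)
    rng = random.Random(seed)
    hits = sum(
        count_triangles(null_sampler(rng.randrange(10**9))) >= observed
        for _ in range(n_null)
    )
    return (hits + 1) / (n_null + 1)
```

The correction term also makes the reported uncertainty honest: with 200 null draws, no p-value smaller than about 0.005 can be claimed, which is exactly the kind of boundary the claim-calibration step should state explicitly.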
Studio activity: motif inference challenge
Scenario: A team reports motif enrichment in one dataset and asks whether the claim generalizes.
Tasks
Propose at least two candidate null models and justify each.
Run or outline a multiplicity-aware testing strategy across the motif set.
Draft a results summary separating exploratory and confirmatory findings.
Add one robustness check for cross-dataset comparability.