Open the teaching deck, worksheet, and editable slide source.
Interactive Lab
Practice in short loops: a checkpoint quiz, a microtask decision, and competency progress tracking.
Checkpoint Quiz
Microtask Decision
Choose the action that best improves scientific reliability.
Progress Tracker
State is saved locally in your browser for this module.
Annotation Challenge
Click the hotspot with the strongest evidence for the requested feature.
Capability target
Design and execute a connectomics inference plan that includes null-model choice, multiplicity control, uncertainty reporting, and explicit claim boundaries.
Why this module matters
Connectomics analyses can produce thousands of statistically testable patterns. Without disciplined inference, teams risk publishing artifacts from preprocessing bias, multiple comparisons, or misaligned null assumptions.
Concept set
1) Null models encode scientific assumptions
Technical: null models should preserve relevant graph constraints (degree sequence, spatial limits, cell-class composition) while randomizing the tested structure.
Plain language: your “chance baseline” must reflect biology and data collection realities.
Misconception guardrail: a generic random graph is rarely an adequate connectomics null.
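As a concrete illustration, here is a minimal sketch of a degree-preserving null built with networkx's configuration model. The graph G, the reciprocity statistic, and the 1,000-sample count are illustrative assumptions, not prescriptions; note that collapsing parallel edges and dropping self-loops perturbs degrees slightly, which is itself worth reporting.

```python
# Minimal sketch: a degree-preserving null for a directed connectome.
# Assumes a networkx DiGraph `G`; statistic and sample count are illustrative.
import networkx as nx

def degree_preserving_null(G: nx.DiGraph, seed: int) -> nx.DiGraph:
    din = [d for _, d in G.in_degree()]
    dout = [d for _, d in G.out_degree()]
    null = nx.directed_configuration_model(din, dout, seed=seed)
    null = nx.DiGraph(null)                          # collapse parallel edges
    null.remove_edges_from(nx.selfloop_edges(null))  # drop self-loops
    return null

# Hypothetical usage: compare observed reciprocity to 1,000 degree-matched nulls.
# null_recip = [nx.reciprocity(degree_preserving_null(G, s)) for s in range(1000)]
```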
2) Multiplicity is structural, not optional
Technical: motif families and subgroup analyses require correction strategies and predeclared test hierarchies.
Plain language: if you test many patterns, some will look significant by accident.
Misconception guardrail: reporting only p-values without multiplicity context is incomplete.
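A minimal sketch of one defensible correction strategy, Benjamini-Hochberg FDR control via statsmodels, applied across a motif family; the motif names and p-values are hypothetical placeholders.

```python
# Minimal sketch: FDR control over a motif family with statsmodels.
from statsmodels.stats.multitest import multipletests

motifs = ["feedforward", "reciprocal", "convergent", "divergent"]  # placeholders
pvals = [0.001, 0.040, 0.030, 0.200]                               # hypothetical raw p-values

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for motif, raw, adj, sig in zip(motifs, pvals, p_adj, reject):
    print(f"{motif}: raw p={raw:.3f}, BH-adjusted p={adj:.3f}, significant={sig}")
```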
3) Exploratory and confirmatory analyses must be separated
Technical: hypothesis generation and hypothesis testing should have different reporting labels and evidence standards.
Plain language: be clear about what you discovered versus what you validated.
Misconception guardrail: post-hoc storytelling is not confirmatory inference.
4) Statistical challenges unique to connectomics
Connectomics datasets present several statistical difficulties that are uncommon in other fields. Massive multiple comparisons arise when testing thousands of motifs, cell-type pairs, or connection patterns simultaneously. Spatial autocorrelation is pervasive because nearby neurons share arbor overlap, creating non-independent edges that violate standard test assumptions. The threshold problem is particularly acute: choosing a minimum synapse count (e.g., 3 vs. 5 synapses to define a “real” connection) changes the resulting graph and all downstream statistics, yet no universally accepted threshold exists.
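To make the threshold problem concrete, here is a minimal sketch that binarizes a synapse-count matrix at several minimum counts and tracks two simple graph statistics. The Poisson stand-in data and the threshold set are assumptions for illustration only.

```python
# Minimal sketch: sensitivity of graph statistics to the synapse-count threshold.
import numpy as np

rng = np.random.default_rng(0)
syn_counts = rng.poisson(1.5, size=(200, 200))  # stand-in for a real count matrix
np.fill_diagonal(syn_counts, 0)                 # no self-synapses

for t in (1, 2, 3, 5):
    A = (syn_counts >= t).astype(int)           # binarize at threshold t
    density = A.mean()
    reciprocity = np.logical_and(A, A.T).sum() / max(A.sum(), 1)
    print(f"threshold >= {t}: density={density:.3f}, reciprocity={reciprocity:.3f}")
```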
Researcher degrees of freedom in null model selection further compound these issues. Different null models that preserve different graph properties (degree sequence, spatial distance distribution, cell-type composition) can yield contradictory conclusions from the same data. Best practices include using permutation tests over parametric alternatives when distributional assumptions are uncertain, reporting effect sizes alongside p-values to distinguish statistical significance from biological relevance, and performing sensitivity analyses across multiple thresholds and null model variants to confirm that findings are robust rather than artifacts of a single analytical choice.
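In the same spirit, a minimal permutation-test summary: an empirical p-value with the standard +1 correction plus a z-score effect size, so significance and magnitude are reported together. The inputs would come from the observed graph and a null generator such as the configuration-model sketch above.

```python
# Minimal sketch: empirical p-value and effect size from a permutation null.
import numpy as np

def permutation_summary(observed: float, null_samples: np.ndarray) -> dict:
    n = len(null_samples)
    # +1 correction avoids reporting p = 0 from a finite null sample
    p_emp = (1 + np.sum(null_samples >= observed)) / (n + 1)
    z = (observed - null_samples.mean()) / null_samples.std(ddof=1)  # effect size
    return {"p_empirical": p_emp, "z_score": z, "n_null": n}

# Hypothetical usage with the reciprocity nulls from the earlier sketch:
# summary = permutation_summary(obs_reciprocity, np.array(null_recip))
```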
Core workflow: connectomics inference protocol
Question-to-test mapping
Convert the biological question into estimand(s), a test set, and an effect-size target (a minimal planning record is sketched below).
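One way to make this step concrete, sketched under the assumption that the plan is written down before any data are touched; the field names are illustrative, not a standard schema.

```python
# Minimal sketch: a pre-declared question-to-test mapping record.
from dataclasses import dataclass

@dataclass
class TestPlan:
    question: str                 # biological question in plain language
    estimands: list[str]          # quantities the analysis will estimate
    tests: list[str]              # the pre-declared test set
    min_effect: float             # smallest effect size worth claiming
    multiplicity: str = "fdr_bh"  # correction applied across `tests`

plan = TestPlan(
    question="Are reciprocal connections enriched among inhibitory pairs?",
    estimands=["reciprocity(inhibitory -> inhibitory)"],
    tests=["permutation test vs degree-preserving null"],
    min_effect=0.05,
)
```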
Null-model design
Define the null's constraints and explain why they preserve key confounders (a spatially constrained example follows).
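A minimal sketch of one spatially constrained design, assuming a binary adjacency A and a pairwise soma-distance matrix dist: connections are shuffled only within distance bins, so the null preserves the distance distribution of connections. The bin edges and the undirected simplification are illustrative assumptions.

```python
# Minimal sketch: a distance-binned permutation null (undirected, for simplicity).
import numpy as np

def distance_binned_null(A: np.ndarray, dist: np.ndarray,
                         bins: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    null = np.zeros_like(A)
    i, j = np.triu_indices_from(A, k=1)    # all unordered node pairs
    bin_ids = np.digitize(dist[i, j], bins)
    for b in np.unique(bin_ids):
        mask = bin_ids == b
        labels = A[i[mask], j[mask]].copy()
        rng.shuffle(labels)                # permute connections within the bin
        null[i[mask], j[mask]] = labels
    return np.maximum(null, null.T)        # symmetrize

# Hypothetical usage:
# rng = np.random.default_rng(1)
# null_A = distance_binned_null(A, dist, bins=np.array([25.0, 50.0, 100.0]), rng=rng)
```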
Inference execution
Run models and tests with preregistered thresholds and multiplicity controls.
Robustness checks
Test sensitivity to preprocessing variants, sampling regions, and parameter choices.
Claim calibration
Report supported, uncertain, and unsupported claims in separate blocks (a minimal report structure follows).
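A minimal sketch of what the three blocks might look like in practice; the claims shown are hypothetical examples, not results.

```python
# Minimal sketch: claim calibration in three separate, labeled blocks.
report = {
    "supported": [
        "Reciprocity exceeds degree-matched null (BH-adjusted p < 0.01, robust to thresholds 2-5).",
    ],
    "uncertain": [
        "Convergent-motif enrichment appears only under the spatial null; sensitive to synapse threshold.",
    ],
    "unsupported": [
        "No evidence for divergent-motif enrichment after multiplicity control.",
    ],
}
for level, claims in report.items():
    print(f"[{level.upper()}]")
    for claim in claims:
        print(f"  - {claim}")
```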
Studio activity: motif inference challenge
Scenario: A team reports motif enrichment in one dataset and asks whether the claim generalizes.
Tasks
Propose at least two candidate null models and justify each.
Run or outline a multiplicity-aware testing strategy across the motif set.
Draft a results summary separating exploratory and confirmatory findings.
Add one robustness check for cross-dataset comparability.