Research in Action

Apply concepts to practical workflows: quality control, tooling, and reproducible research operations.

This track focuses on applying connectomics knowledge to real research workflows: running analyses on petascale datasets, applying machine learning and computer vision to EM data, maintaining reproducibility, and producing publication-ready outputs. Resources connect directly to the MouseConnects dataset, the Connectome Quality tool, and the workflow pipeline from acquisition through circuit interpretation. Learners should complete the Core Concepts & Methods foundation before starting this track.

Fadel alignment: Skills, Meta-learning

Modules in This Track

Question

06. Segmentation 101

Core segmentation error taxonomy—merges, splits, boundary errors—and a practical correction workflow with documented quality impact.
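The merge/split part of the taxonomy can be made concrete in a few lines. This is a minimal sketch (not the module's own tooling, and the function name is my own): given ground-truth and predicted label arrays, the set of overlapping label pairs exposes candidate merge errors (one predicted segment touching multiple ground-truth segments) and split errors (one ground-truth segment covered by multiple predicted segments).

```python
import numpy as np

def count_merge_split_errors(gt, pred):
    """Count candidate merge and split errors from the overlap
    (contingency) pairs of two label volumes.

    A predicted segment overlapping >1 ground-truth segment is a
    candidate merge; a ground-truth segment covered by >1 predicted
    segment is a candidate split. Label 0 is treated as background.
    """
    gt = np.asarray(gt).ravel()
    pred = np.asarray(pred).ravel()
    mask = (gt > 0) & (pred > 0)
    pairs = set(zip(gt[mask], pred[mask]))

    merges = {}   # pred label -> set of gt labels it touches
    splits = {}   # gt label  -> set of pred labels covering it
    for g, p in pairs:
        merges.setdefault(p, set()).add(g)
        splits.setdefault(g, set()).add(p)

    n_merges = sum(1 for gts in merges.values() if len(gts) > 1)
    n_splits = sum(1 for preds in splits.values() if len(preds) > 1)
    return n_merges, n_splits

# Toy 1-D example: pred segment 7 merges gt 1 and 2; gt 3 is split into 8 and 9.
gt   = np.array([1, 1, 2, 2, 3, 3, 3, 3])
pred = np.array([7, 7, 7, 7, 8, 8, 9, 9])
print(count_merge_split_errors(gt, pred))  # -> (1, 1)
```

Counting errors this way also gives a natural triage order: each candidate pair can be weighted by overlap volume so the largest-impact corrections come first.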

Analysis

12. Big Data in Connectomics

Scalable data architecture, query planning, and provenance tracking for petascale connectomics datasets like MICrONS and H01.

Analysis

13. Machine Learning in Neuroscience

ML workflows for connectomics with controls for data leakage, spatial correlation bias, and biologically meaningful evaluation metrics.
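One of the leakage controls the blurb mentions, splitting by spatial block rather than at random, can be sketched briefly. This is an illustrative helper under my own naming and assumptions, not the module's code: samples whose coordinates fall in the same block always share a fold, so spatially correlated neighbors cannot straddle the train/test boundary.

```python
import numpy as np

def spatial_block_split(coords, block_size, test_fraction=0.2, seed=0):
    """Assign samples to train/test by spatial block rather than at random.

    `coords` is an (n, d) array of sample positions. Samples within the
    same `block_size` cell land in the same block, so correlated
    neighbors cannot leak across the train/test boundary.
    """
    coords = np.asarray(coords)
    blocks = np.floor_divide(coords, block_size)
    # One integer id per unique spatial block
    _, block_ids = np.unique(blocks, axis=0, return_inverse=True)
    unique_ids = np.unique(block_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(unique_ids)
    n_test = max(1, int(round(test_fraction * len(unique_ids))))
    test_blocks = set(unique_ids[:n_test].tolist())
    is_test = np.array([b in test_blocks for b in block_ids])
    return ~is_test, is_test

# Three well-separated pairs of neighboring samples
coords = np.array([[0, 0], [1, 1], [50, 50], [51, 51], [100, 0], [101, 1]])
train, test = spatial_block_split(coords, block_size=25, test_fraction=0.34)
```

Randomly splitting the same data would routinely put one member of each neighboring pair in train and the other in test, inflating evaluation scores.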

Analysis

14. Computer Vision for EM

Computer vision methods—from classical filters to deep learning—applied to EM imagery for segmentation support, morphology extraction, and quality diagnostics.
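As a small illustration of the classical-filter end of that spectrum (a sketch in pure NumPy, not the module's pipeline): a 3x3 Sobel gradient magnitude is a traditional membrane-boundary cue for EM imagery, responding strongly at intensity steps.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters, a classical
    membrane-boundary cue for EM imagery. Pure NumPy; image borders
    are handled by edge-replication padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            window = padded[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * window
            gy += ky[dy, dx] * window
    return np.hypot(gx, gy)

# Dark-to-bright step edge: the response concentrates at the boundary columns
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

In practice such filter responses feed downstream stages (boundary maps for segmentation support, or quick quality diagnostics on raw sections) rather than standing alone.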

Analysis

15. LLMs for Patch Analysis

LLM-assisted patch triage and annotation support with human-in-the-loop verification gates to prevent hallucination and unsupported scientific inference.

Dissemination

16. Scientific Visualization for Connectomics

Principled visualization of connectomics structures and analysis results: encoding uncertainty, avoiding misleading representations, and producing publication-ready figures.

Analysis

18. Data Cleaning and Preprocessing

Reproducible preprocessing workflows from raw connectomics data through analysis-ready releases with integrity checks, QC metrics, and full provenance.
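The integrity-check and provenance idea can be sketched with standard-library tools (a minimal illustration under my own naming, not the course's release machinery): a release record pairs a SHA-256 checksum per file with the pipeline version and parameters that produced it, so re-running the check later verifies that nothing drifted after QC sign-off.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def release_record(paths, pipeline_version, params):
    """Build a minimal provenance record for an analysis-ready release:
    a SHA-256 checksum per file plus the pipeline version and the
    parameters used to produce it."""
    checksums = {}
    for p in map(Path, paths):
        h = hashlib.sha256()
        with open(p, "rb") as f:
            # Hash in 1 MiB chunks so large volumes never load whole
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        checksums[p.name] = h.hexdigest()
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "pipeline_version": pipeline_version,
        "parameters": params,
        "sha256": checksums,
    }

# The record serializes cleanly, e.g. json.dumps(record, indent=2),
# and ships alongside the release it describes.
```

Storing the record next to the data, rather than in a lab notebook, keeps checksum, parameters, and output together through every copy of the release.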

Analysis

20. Statistical Models and Inference

Defensible statistical inference for connectomics: choosing null models, controlling multiplicity in high-dimensional tests, and reporting with explicit assumptions.
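The multiplicity-control piece has a compact standard answer worth sketching: the Benjamini-Hochberg procedure controls the false discovery rate across many simultaneous tests, which suits high-dimensional connectomics screens better than the far more conservative Bonferroni bound. This sketch is my own NumPy implementation of the textbook procedure, not code from the module.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR control: reject the k smallest p-values,
    where k is the largest rank i with p_(i) <= (i / m) * alpha.
    Returns a boolean rejection mask in the original order."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals))
```

Note that BH assumes independent or positively dependent tests; under arbitrary dependence the Benjamini-Yekutieli correction applies instead, which is exactly the kind of explicit-assumption reporting the module calls for.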

Resources

Ask-an-Expert

Structured support for technical troubleshooting from Dr. Jeff Lichtman.

Frameworks

Research and teaching frameworks to operationalize practice.

Concepts in This Track

Filter concepts by immediate need to surface practical research resources quickly.

Proofreading and QC

Track: research-in-action

User needs: prioritizing corrections, reporting quality rigorously

Classify error modes, apply correction workflows, and tie decisions to quantitative quality metrics.

How to learn it: Triage corrections by scientific impact and report QC metrics that directly drive release decisions.


Open full Concept Explorer for this track