Interactive Lab
Practice in short loops: checkpoint quiz, microtask decision, and competency progress tracking.
Checkpoint Quiz
Microtask Decision
Choose the action that best improves scientific reliability.
Annotation Challenge
Click the hotspot with the strongest evidence for the requested feature.
Capability target
Implement an LLM-assisted patch-analysis workflow with verification gates, confidence labeling, and explicit human override policies.
Why this module matters
LLMs can accelerate triage and documentation, but unverified outputs can propagate errors quickly. Connectomics requires careful human-in-the-loop governance.
Concept set
1) Assistive, not autonomous
Technical: LLMs should support prioritization, summarization, and protocol guidance, not final biological adjudication.
Plain language: use LLMs to help, not to decide alone.
Create prompt templates and an expected output schema.
Add verification gates and human adjudication rules.
Pilot on a small patch set and log failure patterns.
Refine prompts and policies before wider use.
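The first two steps above can be sketched in code: a prompt template, an expected output schema, and a verification gate that rejects any response missing a valid confidence label. This is a minimal illustration; the template text, field names (`priority`, `rationale`, `confidence`), and value ranges are all assumptions, not a prescribed format.

```python
import json

# Hypothetical prompt template for patch triage; wording and fields are illustrative.
TRIAGE_PROMPT = (
    "You are assisting with proofreading triage.\n"
    "Patch context: {context}\n"
    'Respond with JSON: {{"priority": "low|medium|high", '
    '"rationale": "...", "confidence": 0.0}}'
)

# Expected output schema: required keys with simple type/range checks.
SCHEMA = {
    "priority": lambda v: v in {"low", "medium", "high"},
    "rationale": lambda v: isinstance(v, str) and len(v) > 0,
    "confidence": lambda v: isinstance(v, (int, float)) and 0.0 <= v <= 1.0,
}

def validate_output(raw: str):
    """Verification gate: parse and schema-check an LLM response.

    Returns the parsed dict on success, or None on any failure,
    so malformed outputs never enter the downstream queue.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict):
        return None
    if not all(k in data and check(data[k]) for k, check in SCHEMA.items()):
        return None
    return data

# A well-formed response passes the gate; a malformed one is rejected.
ok = validate_output(
    '{"priority": "high", "rationale": "possible merge error", "confidence": 0.7}'
)
bad = validate_output('{"priority": "urgent"}')
```

Rejected outputs (`None`) would be routed to human adjudication rather than silently dropped, in line with the override rules defined later in the workflow.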
60-minute tutorial run-of-show
00:00-08:00: scope boundaries and failure examples.
08:00-20:00: prompt template design.
20:00-34:00: run sample outputs and score reliability.
34:00-46:00: define verification and override rules.
46:00-56:00: produce governance checklist.
56:00-60:00: competency check.
Studio activity
Scenario: Build an LLM-assisted triage helper for proofreading queues.
Outputs
prompt + schema pack,
verification rubric,
risk register and override policy.
Assessment rubric
Minimum pass: clear task boundaries, verification logic, and logging fields.
Strong performance: robust failure-mode handling and actionable governance plan.
Failure modes: unbounded scope, no confidence policy, missing audit trail.
Practical LLM use cases in connectomics
Where LLMs add value today
| Use case | Example prompt | Verification method |
|----------|----------------|---------------------|
| Literature summarization | “Summarize the key findings of Dorkenwald et al. 2024 regarding cell-type diversity” | Cross-check against paper abstract and figures |
| Code assistance | “Write a CAVEclient query to find all synapses onto neuron X at materialization version 943” | Run the code and verify output matches manual check |
| EM patch description | “Describe the ultrastructural features visible in this EM image” (multimodal) | Expert annotator review of description accuracy |
| Hypothesis brainstorming | “Given that reciprocal connections are 4× enriched, what functional hypotheses could explain this?” | Evaluate against literature; treat as starting points, not conclusions |
| Protocol drafting | “Draft a proofreading SOP for merge error correction” | Expert review and team calibration before adoption |
Where LLMs fail or mislead
Quantitative claims: LLMs may confidently state incorrect numbers (synapse counts, cell counts, metric values). Always verify against the actual data.
Visual interpretation: Current vision-language models can describe EM images but may misidentify structures (e.g., calling an astrocytic process an axon). Expert verification is mandatory.
Citation accuracy: LLMs may fabricate references or misattribute findings. Always check cited papers exist and say what the LLM claims.
Novel biological claims: LLMs cannot generate new biological knowledge; they can only recombine and rephrase existing knowledge.
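The quantitative-claims failure mode above can be handled with a simple numeric gate: recompute the value from the actual data and compare it to the LLM's stated number within an acceptance threshold. A minimal sketch, where the 1% tolerance and the example values are illustrative assumptions:

```python
def check_numeric_claim(claimed: float, measured: float, rel_tol: float = 0.01) -> bool:
    """Return True if an LLM-stated number agrees with the recomputed value.

    rel_tol is the relative acceptance threshold (1% here, an illustrative
    choice; each output category should define its own threshold).
    """
    if measured == 0:
        return claimed == 0
    return abs(claimed - measured) / abs(measured) <= rel_tol

# Example: the LLM claims 4x reciprocal enrichment, but recomputing from
# the connectome table gives a (hypothetical) 3.2x. The claim fails the gate
# and is flagged for human review.
llm_claim = 4.0
recomputed = 3.2
passed = check_numeric_claim(llm_claim, recomputed)
```

Failing the gate should never auto-correct the text; it should flag the claim for a human to re-derive from source data.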
Governance framework
For any LLM-assisted workflow in a connectomics project:
Define scope: Which tasks are LLM-assisted? Which require human-only decisions?
Version control: Log the model name/version, prompt text, and output for every LLM interaction used in analysis.
Verification gates: Every LLM output category has a defined verification method and acceptance threshold.
Human override: Any LLM suggestion can be overridden by a human annotator without justification. The human decision is authoritative.
Transparency: In publications, disclose any LLM assistance in the methods section.
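The version-control and override requirements above can be sketched as a per-interaction audit record: one append-only log line capturing the model version, exact prompt, raw output, verification result, and whether a human overrode the suggestion. Field names, the JSONL file layout, and the example model string are assumptions for illustration:

```python
import dataclasses
import datetime
import json

@dataclasses.dataclass
class LLMAuditRecord:
    """One log entry per LLM interaction used in analysis (fields illustrative)."""
    model: str            # model name and version string
    prompt: str           # exact prompt text sent
    output: str           # raw model output received
    verified: bool        # passed its category's verification gate?
    human_override: bool  # did a human annotator overrule the suggestion?
    timestamp: str = dataclasses.field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat()
    )

def log_interaction(record: LLMAuditRecord, path: str = "llm_audit.jsonl") -> str:
    """Append the record as one JSON line (audit trail); return the line."""
    line = json.dumps(dataclasses.asdict(record))
    with open(path, "a") as f:
        f.write(line + "\n")
    return line
```

Because the human decision is authoritative, `human_override` records the disagreement without blocking it; the log exists so overrides can be audited and failure patterns reviewed later.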
Content library references
NeuroAI bridge — AI tools for neuroscience and vice versa