Researchers have adapted techniques from functional neuroimaging to map internal "functional networks" in large language models. By presenting the model with task-based prompts in an on/off block pattern, analogous to a block-design fMRI experiment, they identified distinct networks preferentially activated for domains such as political science, radiology, and paleontology.

They found the greatest overlap between the radiology and pathology networks, suggesting shared semantic representations. The activated connections were fairly consistent across runs of similar tasks, and in held-out experiments they strongly predicted which task had been prompted.

This offers a new way to open up the black box of large models and understand their hidden organizational logic. It could help explain failures, selectively target parts of a model for fine-tuning, or monitor "alignment" during inference. The techniques parallel those used for decades to map functional networks in the brain.
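As a rough illustration of the on/off probing idea, the sketch below records hidden-unit activations for a block of domain prompts and a block of neutral control prompts, then thresholds the contrast to get a candidate "network" of units. This is a minimal sketch using Hugging Face Transformers; the model name, prompt sets, and z-score threshold are assumptions chosen for illustration, not details taken from the study.

```python
# Minimal sketch of on/off block-design probing of a language model.
# Assumptions: "gpt2" as a stand-in model, toy prompt sets, and a z > 2 cutoff.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any model that exposes hidden states works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

def mean_activation(prompts):
    """Average per-unit activation (layer x dim) over a block of prompts."""
    acts = []
    with torch.no_grad():
        for p in prompts:
            inputs = tokenizer(p, return_tensors="pt")
            hidden = model(**inputs).hidden_states       # tuple of (1, seq, dim)
            stacked = torch.stack(hidden).squeeze(1)     # (layers+1, seq, dim)
            acts.append(stacked.mean(dim=1))             # average over tokens
    return torch.stack(acts).mean(dim=0)                 # average over prompts

# "On" block: prompts from one domain; "off" block: neutral control prompts.
radiology_prompts = [
    "Describe the findings on this chest X-ray report.",
    "What does ground-glass opacity on a CT scan suggest?",
]
control_prompts = [
    "Write a sentence about the weather.",
    "List three common colors.",
]

on = mean_activation(radiology_prompts)
off = mean_activation(control_prompts)

# Units whose on-vs-off contrast stands out form the domain's "functional
# network"; thresholding the z-scored contrast is an illustrative choice.
contrast = on - off
z = (contrast - contrast.mean()) / contrast.std()
network_mask = z > 2.0   # boolean mask over (layer, unit)

print(f"Units in the radiology network: {int(network_mask.sum())}")
```

Repeating this for a second domain (say pathology) and comparing the two boolean masks, for example with a Jaccard index, gives the kind of between-network overlap measure the summary describes; feeding the contrast vectors to a simple classifier would be one way to test whether they predict the prompted task on held-out runs.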