We probe how attention heads activate specialized "next-token" neurons in large language models (LLMs). By prompting GPT-4, we identify attention heads that recognize contexts tied to predicting a particular token; these heads activate the corresponding next-token neurons by writing into the residual stream. This elucidates context-dependent specialization in LLMs.
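As a rough illustration of the head-to-neuron pathway described above (not the actual analysis pipeline), the sketch below uses synthetic tensors in place of activations that would be extracted from a real model. Because the residual stream is a sum of component outputs, an attention head's direct contribution to a downstream neuron's pre-activation is the dot product between the head's output vector and the neuron's input weight (LayerNorm is ignored for simplicity). All names, shapes, and values are illustrative assumptions.

```python
# Illustrative sketch only: synthetic tensors stand in for quantities that would be
# extracted from a real model with hooks; names/shapes are assumptions.
import torch

torch.manual_seed(0)
d_model = 768  # assumed residual-stream width (GPT-2-small-like)

# Vector written into the residual stream by one attention head at one position.
head_output = torch.randn(d_model)

# Input weight of one downstream MLP ("next-token") neuron, which reads from the
# residual stream, plus its bias.
neuron_w_in = torch.randn(d_model)
neuron_bias = 0.1

# The residual stream is a sum of component outputs, so the head's direct
# contribution to the neuron's pre-activation is a dot product (LayerNorm omitted).
direct_contribution = (head_output @ neuron_w_in).item()

# Contribution of everything else in the stream (embeddings, other heads, MLPs).
rest_of_stream = torch.randn(d_model)
pre_activation = ((head_output + rest_of_stream) @ neuron_w_in).item() + neuron_bias

print(f"head -> neuron direct contribution: {direct_contribution:.3f}")
print(f"total neuron pre-activation:        {pre_activation:.3f}")
```

In an actual experiment, `head_output` and `neuron_w_in` would come from the model itself (e.g., via activation hooks), and the contribution would be aggregated over many contexts; the snippet shows only the arithmetic core of attributing a neuron's activation to a specific head through the residual stream.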