{Reference Type}: Journal Article
{Title}: Time-scale invariant contingency yields one-shot reinforcement learning despite extremely long delays to reinforcement.
{Author}: Gallistel CR; Shahan TA
{Journal}: Proc Natl Acad Sci U S A
{Volume}: 121
{Issue}: 30
{Year}: 2024 Jul 23
{Factor}: 12.779
{DOI}: 10.1073/pnas.2405451121
{Abstract}: Reinforcement learning inspires much theorizing in neuroscience, cognitive science, machine learning, and AI. A central question concerns the conditions that produce the perception of a contingency between an action and reinforcement: the assignment-of-credit problem. Contemporary models of associative and reinforcement learning do not leverage temporal metrics (measured intervals). Our information-theoretic approach formalizes contingency by time-scale invariant temporal mutual information. It predicts that learning may proceed rapidly even with extremely long action-reinforcer delays. We show that rats can learn an action after a single reinforcement, even with a 16-min delay between the action and reinforcement (15-fold longer than any delay previously shown to support such learning). By leveraging metric temporal information, our solution obviates the need for windows of associability, exponentially decaying eligibility traces, microstimuli, or distributions over Bayesian belief states. Its three equations have no free parameters; they predict one-shot learning without iterative simulation.
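{Note}: The three equations mentioned in the abstract are not reproduced in this record. As a minimal sketch of what a time-scale invariant contingency measure looks like (an assumption drawn from Gallistel's earlier informativeness analyses, not necessarily the paper's own formulation): let C be the expected background interval between reinforcements and T the action-to-reinforcement delay; the information an action conveys about reinforcement timing can then be written as

  I = \log_2\!\left(\frac{C}{T}\right) \ \text{bits}, \qquad \log_2\!\left(\frac{kC}{kT}\right) = \log_2\!\left(\frac{C}{T}\right) \ \text{for all } k > 0,

so the measure depends only on the ratio of the intervals, not their absolute durations. On this reading, a 16-min delay supports learning whenever the background reinforcement interval is proportionally long, which is consistent with the abstract's claim that no associability window or decaying eligibility trace is required.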