The representational architecture of fast learning through abstraction
Grant PID2023-149428NB-I00 funded by MICIU/AEI/10.13039/501100011033 and ERDF/EU
Part of the perceptual knowledge we acquire in everyday life does not rely on repeated exposure or training: a single significant event can induce robust changes in brain activity and behavior. Such one-shot perceptual learning emerges during development in parallel with incremental learning and plays a crucial role when evidence is scarce or ambiguous. However, while most cognitive neuroscientists agree on its relevance to our ability to adapt, the neural and cognitive computations driving one-shot learning remain largely unknown.
Predictive processing accounts have been highly influential in the study of one-shot perceptual learning. From this perspective, significant perceptual episodes create lingering traces in the brain that reflect internal models of the external world, or priors. These priors are then proposed to inform downstream brain regions about the causes of sensory input. Although this framework offers an intuitive account of the procedures that might support one-shot perceptual learning, we currently lack a precise description of the nature of priors and the information they contain.
The overall aim of FLARE is to provide fundamental insights into how internal models of single perceptual events are instantiated in patterns of brain activity. FLARE constitutes a novel approach that combines the PI’s theoretical background with expertise in cutting-edge neuroimaging methods, allowing us to pursue this overall objective across two experimental series.
The project will tackle two major open questions: 1) What is the content of priors of single perceptual events across the brain? And 2) To what extent does one-shot perceptual learning rely on sensory-specific vs. abstract priors of the episode? For both goals, we will employ a combination of tailored behavioral tasks, computational modeling, and neuroimaging methods.
Get involved!
We do not have any open positions at the moment. If you want to be informed about future postdoctoral or predoctoral openings, please send me an email at cgonzalez@ugr.es.
-
Suddenly finding the solution to a problem after a period of impasse often comes with a feeling of insight. This subjective experience has been proposed to arise as a consequence of prediction errors, and previous studies have accordingly revealed that more incorrect initial predictions result in more intense insights. Crucially, however, prominent models of Bayesian inference suggest that computationally defined surprise is not a simple function of the distance between predictions and inputs, but also depends on their precision, or certainty. How these two factors interact to give rise to insight experiences remains unknown. In this pre-registered study, participants were exposed to ambiguous images while they tried to guess the correct label of each image (to derive prediction accuracy) and rated their confidence in that label (to derive prediction uncertainty). We then measured the intensity of their insight when the solution was given. As predicted, we found that the intensity of insight was jointly determined by prediction accuracy and the uncertainty assigned to the prediction. More specifically, when initial predictions were far from the true label, those made with lower confidence induced weaker insights, whereas the opposite pattern was observed when predictions were closer to reality. Trial-by-trial estimates of prediction errors derived from participants’ responses closely mirrored insight ratings. Finally, we analysed two additional independent datasets with different modalities and setups and replicated the interaction between prediction accuracy and uncertainty on the intensity of insight. Altogether, these findings suggest that insight experiences are read out from prediction errors and highlight the key role of uncertainty in characterising this relationship.
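One way to make this interaction concrete (our illustrative formalization, not necessarily the model used in the study) is to define surprise as the negative log-likelihood of an input $x$ under a Gaussian prediction with mean $\mu$ and uncertainty $\sigma$:

$$
\mathcal{S}(x) \;=\; -\log p(x \mid \mu, \sigma) \;=\; \frac{(x - \mu)^2}{2\sigma^2} \;+\; \log \sigma \;+\; \tfrac{1}{2}\log 2\pi .
$$

For distant predictions (large $|x - \mu|$), the quadratic term dominates, so lower confidence (larger $\sigma$) reduces surprise; for near-correct predictions, the $\log \sigma$ term dominates, so lower confidence increases surprise. This crossover mirrors the interaction between prediction accuracy and uncertainty described above.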
-
Visual inputs during natural perception are highly ambiguous: objects are frequently occluded, lighting conditions vary, and object identification depends strongly on prior experience. Why, then, do certain images remain unidentifiable while others are recognized immediately, and which visual features drive subjective clarification? To address these questions, we developed a unique dataset of 1854 ambiguous images and collected more than 100,000 ratings (from a total of 947 participants) evaluating their identifiability before and after participants saw undistorted versions of the images. By relating the representations that a brain-inspired neural network model produced in response to our images to human ratings, we show that subjective identification depends largely on the extent to which higher-level visual features of the original images are preserved in their ambiguous counterparts. In line with these results, an image-level regression analysis showed that the subjective identification of ambiguous images was best explained by high-level visual dimensions. Notably, the predominance of higher-level over lower-level features softens after participants disambiguate the images, suggesting that the visual system flexibly shifts from top-down guessing to bottom-up matching after disambiguation. Moreover, we found that ambiguity resolution was accompanied by a notable decrease in semantic distance and greater consistency in object naming across participants. However, the relationship between the information gained after disambiguation and subjective identification was non-linear, indicating that acquiring more information does not necessarily enhance subjective clarity. Instead, we observed a U-shaped relationship, suggesting that subjective identification improves when the acquired information either strongly matches or strongly mismatches prior predictions. Collectively, these findings advance our understanding of how we resolve ambiguity and extract meaning from incomplete visual information.
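As an illustration of the kind of analysis described above (a minimal sketch with placeholder data and layer names, not the study's actual pipeline), one could quantify, layer by layer, how well network features of each ambiguous image are preserved relative to its undistorted original, and correlate that preservation with identifiability ratings:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical inputs (shapes and layer names assumed for illustration):
#   feats_ambig[layer] -> (n_images, n_units) activations for ambiguous images
#   feats_orig[layer]  -> (n_images, n_units) activations for the originals
#   ratings            -> (n_images,) mean pre-disambiguation identifiability
rng = np.random.default_rng(0)
n_images = 200
layers = ["conv1", "conv3", "fc7"]  # placeholder layer names
feats_ambig = {l: rng.normal(size=(n_images, 512)) for l in layers}
feats_orig = {l: rng.normal(size=(n_images, 512)) for l in layers}
ratings = rng.uniform(1, 7, size=n_images)

def feature_preservation(a, b):
    """Per-image cosine similarity between ambiguous and original features."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

for layer in layers:
    preserved = feature_preservation(feats_ambig[layer], feats_orig[layer])
    rho, p = spearmanr(preserved, ratings)
    print(f"{layer}: Spearman rho = {rho:.2f} (p = {p:.3g})")
```

On this logic, the finding reported above would correspond to stronger correlations in later (higher-level) layers before disambiguation, with the advantage of those layers shrinking once participants have seen the undistorted images.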
-
Völler, J., Linde-Domingo, J., Ortiz-Tudela, J. & González-García, C. (2026). From sudden perceptual learning to enduring engrams: A representational perspective. Philosophical Transactions B. https://doi.org/10.31234/osf.io/39npk_v3
Sudden perceptual learning refers to the abrupt disambiguation of an initially ambiguous stimulus. Despite the limited encoding time, these brief perceptual experiences transform into enduring memory engrams. Still, we lack a comprehensive framework explaining why these memories are so long-lasting and what their neural underpinnings are, in particular the role of the hippocampus. Here, we build on the apparent connection between sudden perceptual learning and long-term memory to outline how a representational perspective may help overcome current limitations, as it has in other areas of memory research. In short, we claim that sudden perceptual learning triggers initial prediction errors across the processing hierarchy, including strong higher-level (semantic) ones. Once a solution is found, the formerly ambiguous image can be connected to previous schemas, and a disambiguation cascade of prediction error minimization is triggered, starting with semantic prediction errors. As a consequence, these events lead to low-dimensional representations that show signs of early semanticization, thereby favouring semantics over perceptual details. In sum, we propose that multiple memory traces are encoded simultaneously during sudden perceptual learning: while low-dimensional representations engage neocortical structures, high-dimensional, episodic representations require the hippocampus. This perspective offers a promising framework to advance research on sudden perceptual learning and related phenomena.