TRACX2: a RAAM-like autoencoder modeling graded chunking in infant visual-sequence learning

Abstract: Even newborn infants are able to extract structure from a stream of sensory inputs; yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights, and recognizing these chunks when they re-occur in the input stream. Chunks are graded rather than all-or-none in nature, and during learning their component parts become ever more tightly bound together. TRACX2 successfully models data from four experiments in the infant visual statistical-learning literature, including tasks involving low-salience embedded chunk items, part-sequences, and illusory items. The model captures performance differences across ages by tuning a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning.
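
As a rough illustration of the mechanism described in the abstract (not the authors' published implementation), the Python sketch below trains a small RAAM-like autoencoder to compress adjacent item pairs into a hidden "chunk" code. The layer sizes, learning rate, and toy training stream are assumptions made for this example, and the sketch omits TRACX2's reuse of chunk codes as subsequent inputs; it only shows how a frequently repeated pair comes to be reconstructed more accurately over training, i.e. a graded chunk.

# Minimal sketch (illustrative only, not the published TRACX2 code): an
# autoencoder whose hidden layer compresses a pair of adjacent items into a
# chunk code. Sizes, learning rate, and the toy stream are assumptions.
import numpy as np

rng = np.random.default_rng(0)

ITEM_DIM = 8      # assumed size of each item's input encoding
HIDDEN_DIM = 8    # hidden layer doubles as the chunk representation

W_in = rng.normal(0.0, 0.1, (2 * ITEM_DIM, HIDDEN_DIM))
W_out = rng.normal(0.0, 0.1, (HIDDEN_DIM, 2 * ITEM_DIM))

def forward(pair):
    """Encode a concatenated item pair; return (chunk code, reconstruction)."""
    hidden = np.tanh(pair @ W_in)   # chunk code for the item pair
    output = hidden @ W_out         # linear reconstruction of the pair
    return hidden, output

def train_step(pair, lr=0.01):
    """One backprop step on the reconstruction error for a single item pair."""
    global W_in, W_out
    hidden, output = forward(pair)
    error = output - pair
    d_hid = (error @ W_out.T) * (1.0 - hidden ** 2)   # backprop through tanh
    W_out -= lr * np.outer(hidden, error)
    W_in -= lr * np.outer(pair, d_hid)
    return float(np.mean(error ** 2))

# Toy "statistical learning" stream: the pair (A, B) recurs far more often
# than (A, C), so its reconstruction error drops faster -- a graded chunk.
A, B, C = (rng.uniform(-1.0, 1.0, ITEM_DIM) for _ in range(3))
for _ in range(3000):
    train_step(np.concatenate([A, B]))       # frequent pair -> strong chunk
    if rng.random() < 0.1:
        train_step(np.concatenate([A, C]))   # rare foil pair -> weak chunk

for name, pair in (("AB", np.concatenate([A, B])), ("AC", np.concatenate([A, C]))):
    _, recon = forward(pair)
    print(name, "reconstruction error:", round(float(np.mean((recon - pair) ** 2)), 4))

Run end to end, the frequent pair's reconstruction error ends up well below the rare pair's, a graded analogue of the chunk recognition described in the abstract.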
Document type:
Conference paper

https://hal-univ-bourgogne.archives-ouvertes.fr/hal-01798667
Contributor: Lead - Université de Bourgogne
Submitted on: Wednesday, May 23, 2018 - 17:13:00
Last modified on: Friday, June 8, 2018 - 14:50:07

Identifiers

  • HAL Id: hal-01798667, version 1

Citation

Robert M. French, Denis Mareschal. TRACX2: a RAAM-like autoencoder modeling graded chunking in infant visual-sequence learning. Proceedings of the 39th Annual Conference of the Cognitive Science Society, Cognitive Science Society, Jul 2017, London, United Kingdom. pp. 2031-2036. ⟨hal-01798667⟩
