You can attend an upcoming online seminar or see the details of previous ones (including slides and/or video recordings when available). The details are given in the description box. In addition, you can subscribe to the seminars’ agenda to stay updated in real time: CoCoDev’s Google agenda.
Evaluation of computational models of language development using cumulative empirical data
Okko Räsänen (Tampere University)
Abstract: Computational models of child language development are algorithms that try to mimic infant language learning. Traditionally, such models have focused on individual language capabilities, such as phonetic category learning or word segmentation. However, recent advances in machine learning are enabling increasingly powerful models that can gradually start to address multiple aspects of language learning within a single learning architecture. Having this type of integrated model of language development would have a significant impact on child language research, as it is still unclear how the different bits and pieces of empirical findings, earlier capability-specific models, and high-level theories of language learning can be put together to obtain the big picture of the language learning process. However, in order to develop more accurate, holistic, and hence impactful models of infant language learning, we also need evaluation practices that compare model behavior to robust empirical data from infants across a range of language capabilities. Moreover, we need practices that can compare the developmental trajectories of infants to the learning trajectories of models as these models are trained with increasing amounts of language input. In this talk, we will describe our recent work in attempting to address these needs. More specifically, we will introduce the idea of comparing models to large-scale and cumulative empirical data from infants, as quantified by meta-analyses conducted across a large number of individual behavioral studies, and as applicable to a range of language phenomena in parallel. We will present a basic conceptual framework for meta-analytic evaluation of computational models, and discuss the advantages, challenges, and limitations of the approach as a basis for future discussion and work in this direction.
When: December 03, 2021 at 12:00 | Where: Zoom link https://univ-amu-fr.zoom.us/j/2515421853
Pros and cons of zoom-based conversational research and unmoderated online studies
Open discussion
When: December 10, 2021 at 12:00 | Where: Zoom link https://univ-amu-fr.zoom.us/j/2515421853
Jonathan Ginzburg (Université de Paris)
When: December 17, 2021 at 12:00 | Where: Zoom
What multimodal approaches reveal about language acquisition
Asli Ozyurek (Donders Institute for Brain, Cognition and Behavior)
When: January 28, 2022 at 12:00 | Where: Zoom
Testing sound symbolism in human and non-human primates
Konstantina Margiotoudi (Aix-Marseille University)
Abstract: As opposed to the classic Saussurean view on the arbitrariness of the linguistic sign, iconicity is a pervasive feature of human language. Iconicity in vocal communication is known as sound symbolism – the intrinsic relationship between meaningless speech sounds and visual shapes. The most popular demonstration of sound symbolism is the 'maluma-takete' effect, in which a 'round' sounding pseudoword such as 'maluma' fits better to describe a curved visual shape, whereas a 'sharp' sounding pseudoword, such as 'takete', fits better to describe a spiky abstract shape. Although sound symbolic effects have been reported across cultures and early in human development, it remains unclear whether this effect is an ability unique to humans or whether it is present in other primate species. Here we tested the classic 'maluma-takete' effect in a group of touch-screen trained chimpanzees and gorillas, but also in a touch-screen trained and language-competent bonobo. The results revealed no significant sound symbolic matching performance under either an implicit or an explicit task. Based on these findings, we suggest that the 'maluma-takete' mapping is plausibly an ability unique to humans. These results might be explained by neurobiological differences found between human and nonhuman great apes that are relevant to the mechanism supporting the 'maluma-takete' mapping.
When: November 26, 2021 at 15:00 | Where: Zoom link https://univ-amu-fr.zoom.us/j/2515421853 | Watch video
Computational study of active and interactive word learning
Lieke Gelderloos (Tilburg University)
Abstract: Models of cross-situational word learning typically characterize the learner as a passive observer. However, a language learning child can actively participate in verbal and non-verbal communication. We present a computational model that learns to map words to objects in images through word comprehension and production. The productive and receptive parts of the model can operate independently, but can also feed into each other. This introspective quality enables the model to learn through self-supervision, and also to estimate its own word knowledge, select optimal input, and thereby alter its own learning trajectory. The modular set-up is also suitable for testing effects of communicative feedback. In this talk, I will cover our findings regarding active selection of input, and present preliminary results on tests with communicative feedback.
When: November 19, 2021 at 12:00 | Where: Zoom link https://univ-amu-fr.zoom.us/j/2515421853
BabyBERTa: Learning More Grammar With Small-Scale Child-Directed Language
Philip Huebner (University of Illinois, Urbana-Champaign)
Abstract: Transformer-based language models have taken the NLP world by storm. However, their potential for addressing important questions in language acquisition research has been largely ignored. In this work, we examined the grammatical knowledge of RoBERTa (Liu et al., 2019) when trained on a 5M word corpus of language acquisition data to simulate the input available to children between the ages of 1 and 6. Using the behavioral probing paradigm, we found that a smaller version of RoBERTa-base that never predicts unmasked tokens, which we term BabyBERTa, acquires grammatical knowledge comparable to that of pre-trained RoBERTa-base, and does so with approximately 15X fewer parameters and 6,000X fewer words. We discuss implications for building more efficient models and the learnability of grammar from input available to children. Lastly, to support research on this front, we release our novel grammar test suite that is compatible with the small vocabulary of child-directed input.
When: November 12, 2021 at 16:00 | Where: Zoom (send us an email to receive the link) | Watch video
Production practice is more effective than comprehension for second language learning
Elise Hopman (University of Wisconsin-Madison)
Abstract: Whereas most classroom-based language instruction traditionally emphasizes comprehension-based learning, memory research suggests that language production activities may provide a stronger learning experience than comprehension practice, due to the meaningfully different task demands involved in producing versus comprehending language. Using both artificial and natural language learning experiments with adults, I show that production exercises are more effective than comprehension exercises for learning the vocabulary and grammar of a foreign language. I will discuss these findings in the broader context of research implying that production and production-like activities might play a privileged role during learning more generally.
When: November 05, 2021 at 15:00 | Where: Zoom | Watch video
Perceptual development in infants and unsupervised representation learning in machines
Thomas Schatz (Aix-Marseille University)
Abstract: I will present my work at the interface between cognitive science and artificial intelligence, with a focus on ongoing research projects that I would like to develop at AMU. Through case studies involving early phonetic learning, probabilistic generative models, high-level auditory perception, spiking reservoir neural nets and auditory memory, I will argue that recent developments in unsupervised representation learning in machines open new avenues for understanding human perceptual development and, conversely, that the study of human perceptual development can inspire new developments in unsupervised representation learning in machines.
When: October 29, 2021 at 11:00 | Where: Zoom (send us an email to receive the link)
Smiles and Laughs in Human-Agent Interaction
Kevin El Haddad (University of Mons)
Abstract: Smiles and laughs (S&L) are among the most frequent and informative non-verbal expressions used in our daily interactions. Incorporating them into machines' communication skills is therefore a must in order to improve the quality of human-agent interaction (HAI) applications (among other aspects), whether on the detection/perception side or on the generation/production side. This presentation will focus on our efforts to provide a better understanding of S&L conversational dynamics and to implement them in HAI modules. We will present our contributions and ongoing work in synthesis, recognition, and prediction technologies, as well as resources we offer to the community, with the hope that this same community will help us improve them through collaboration or other contributions. I strongly believe that, with the limited resources available in the scientific communities, the more people get involved, the faster we can integrate S&L, and by extension nonverbal expressions in general, into HAI applications. So I look forward to meeting you during this talk.
When: October 22, 2021 at 10:30 | Where: Zoom (send us an email to receive the link)
Language development as a joint process: Why the simultaneous learning of Form, Content, and Use is more a help than a hindrance
Abdellah Fourtassi (Aix-Marseille University & INRIA Paris)
Abstract: To acquire language, children need to learn form (e.g., phonology), content (e.g., word meaning), and use (e.g., finding the right words to convey a communicative intent). Research in language development has traditionally studied these dimensions separately. Indeed, one could imagine that children first acquire the form, then associate form with content, and only then learn how to use form and content adequately in a communicative context. In reality, however, children have to deal with aspects of form, content, and use simultaneously, and experimental studies suggest that the timelines of acquisition of these dimensions largely overlap, indicating that children learn them in parallel, not one at a time. While this fact makes language acquisition seem even harder than previously thought, here I argue that the joint learning of form, content, and use may be more a help than a hindrance: these dimensions are interdependent in many ways and can therefore constrain/disambiguate each other. I will illustrate this idea with my previous and current research, combining experimental methods and computational modeling.
When: October 08, 2021 at 12:00 | Where: Zoom (send us an email to receive the link)
Computational modeling as a tool to study cognitive development and evolution
Manuel Bohn (Max Planck Institute for Evolutionary Anthropology)
Abstract: In this talk, I will present a series of studies on information integration during word learning in young children. We were interested in how children balance different (sometimes conflicting) information sources when making pragmatic inferences in context. An integral part of this work is the use of computational cognitive models as a tool to formalise theories about information integration and developmental change. Based on this work, I will present some ideas (and data) for how the same modeling framework could be used to study a) the communicative abilities of great apes and b) individual differences in children’s cognitive development.
When: September 24, 2021 at 12:00 | Where: Zoom (send us an email to receive the link)
Linguistic alignment in parent-child verbal communication and gesture
Ruthe Foushee (University of Chicago)
When: September 10, 2021 at 12:00 | Where: Zoom (send us an email to receive the link)
How looking at tasks can tell us more about language development
Christina Bergmann (Max Planck Institute for Psycholinguistics)
Abstract: Work that focuses on how we measure children's knowledge may seem like a hurdle on the way to discovery. In this talk, I will argue that inspecting the methods we use can tell us a great deal about the underlying mechanisms that generate measurable behavior, and highlight how these insights are key for theory building and computational modelling.
When: July 02, 2021 at 12:00 | Where: Zoom
Exploring language development in autistic and TD children
Riccardo Fusaroli (Aarhus University)
Abstract: Language development is traditionally explored in terms of individual differences and/or the linguistic environment. In this talk I will present a more comprehensive framework, in which children actively engage with and potentially shape their linguistic environment, and, analogously, adult speakers adapt to and engage with the child's production. I will also present initial investigations of a longitudinal corpus involving 32 autistic and 35 typically developing children followed for over 2 years, between 2 and 5 years of age. The focus will be on predicting language development relying on individual differences (e.g., verbal IQ, socialization skills), linguistic environment (amount of language, lexical richness, syntactic complexity), and conversational dynamics (linguistic alignment).
When: June 18, 2021 at 12:00 | Where: Zoom | Watch video
Linking language evolution, language acquisition, and language diversity
Limor Raviv (Max Planck Institute for Psycholinguistics)
Abstract: What are the social, environmental, and cognitive pressures that shape the evolution of language in our species? Why are there so many different languages in the world? And how did this astonishing linguistic diversity come about? These are some of the most interesting questions in the fields of cognitive science and linguistics, and represent the range of topics discussed in my research so far. My work focuses on linking core aspects of language acquisition, language evolution, and language diversity using a range of novel behavioral paradigms and computational models. My goal is to shed light on the communicative pressures and cognitive constraints (e.g., memory limitations, efficiency) that shape social interaction and language use in our species, and to identify the social, environmental, and cross-cultural factors (e.g., population size) that lead to language diversity and cross-linguistic variation. In this talk, I will provide an overview of my research over the past six years (including methods and results from selected projects), as well as present future directions and ongoing work.
When: June 11, 2021 at 12:00 | Where: Zoom | Watch video
The transition from prelinguistic communication to word use in typically hearing and deaf infants
Danielle Matthews (University of Sheffield)
Abstract: Around the end of the first year, infants make the transition from prelinguistic communication (babble, gesture, eye contact) to word use. I will present a series of studies that have 1) measured individual differences that predict this transition, 2) tested experimentally whether it is possible to promote learning, and 3) compared deaf and hearing infants. Together, these studies reveal the important role of the social environment in learning to talk.
When: May 21, 2021 at 12:00 | Where: Zoom | Watch video
Linking Social Language Acquisition with Artificial Intelligence
Sho Tsuji (University of Tokyo)
Abstract: Theories and data on language acquisition suggest that learners use a range of cues, from information on structure found in the linguistic signal itself to information gleaned from the environmental context or through social interaction. We propose a blueprint for computational models of the early language learner (SCALa, for Socio-Computational Architecture of Language Acquisition) that makes explicit the connection between the kinds of information available to the social learner and the computational mechanisms required to extract language-relevant information and learn from it. SCALa integrates a range of views on language acquisition, further allowing us to make precise recommendations for future large-scale empirical research.
When: May 07, 2021 at 12:00 | Where: Zoom | Watch video