You can attend an upcoming online seminar or browse the details of previous ones, including video recordings when available, in the description boxes below. You can also subscribe to the seminars' schedule to stay updated in real time: CoCoDev's Google agenda.



Marlou Rasenberg (Radboud University)

Abstract: TBA

When: May 30, 2022 at 2pm | Where: Zoom link


Deniz Tahiroğlu (Boğaziçi University)

Abstract: TBA

When: June 13, 2022 at 2pm | Where: Zoom link


Patrice Bellot (Aix-Marseille University)

Abstract: TBA

When: June 27, 2022 at 2pm | Where: Zoom link


Núria Esteve Gibert (Open University of Catalonia)

Abstract: TBA

When: July 04, 2022 at 12pm | Where: Zoom link


Rory T. Devine (University of Birmingham)

Abstract: TBA

When: July 11, 2022 at 2pm | Where: Zoom link


Teodora Gliga (University of East Anglia)

Abstract: TBA

When: July 18, 2022 at 2pm | Where: Zoom link


How universal are the acoustic properties of infant-directed speech (IDS)? Insights from a large-scale meta-analysis & an acoustic analysis of Danish IDS

Christopher Cox (Aarhus University)

Abstract: When speaking with infants, adults often produce speech that differs systematically from that directed to other adults. The acoustic properties of this speech style have been widely documented, and some clear patterns have emerged across languages. However, is IDS truly a universal phenomenon, and what function could it serve during infant development?

When: May 16, 2022 at 2pm | Where: Zoom link | Watch video

Learning topoi in L1 acquisition

Ellen Breitholtz (University of Gothenburg)

Abstract: Children's acquisition of language requires learning not just words, concepts, and linguistic structure, but also how these interact in dialogue with knowledge about the world, our interlocutors, the shared environment, and social norms. We are interested in how children acquire the rhetorical resources -- topoi -- that they need in dialogue, and in how these support further learning and interaction. On our account, topoi are the underpinning warrants for incomplete enthymematic arguments conveyed in dialogue. We illustrate our account with examples from dialogues with children that demonstrate the topoi they have learned -- particularly in cases where these topoi are unexpected from the adult language user's perspective.

When: May 09, 2022 at 12:00 | Where: Zoom link

In search of an early mother-infant dyadic coordination

Marianne Jover (Aix-Marseille University)

Abstract: I will present work in progress that focuses on the motor analysis of dyadic interaction. The purpose of this project was to examine early mother-infant dyadic interaction and to trace the contribution of motor activity during the first months after birth. We expected advanced recording and analysis methods (motion capture and time-series analysis) to provide a relevant way of analyzing the early temporal organization of the interaction at the motor level, and its changes over the first semester after birth. Using a motion capture system, we recorded a mother's and her infant's motor activity during interactive sequences when the child was 1, 2, 3, and 6 months old. With this pilot study, we explored the bodily coordination between a mother and her child from the first to the sixth month, and the time lags of the infant's bursts/drops of activity relative to those of the mother, and vice versa.

When: May 02, 2022 at 2pm | Where: Zoom link | Watch video

Motor-vocal coordination in early language development

Eva Murillo (Universidad Autónoma de Madrid)

Abstract: The tight coupling between gesture and speech seen in adult language is already observed in the communicative gestures and first words produced by children during early language development. The use of communicative behaviors combining gestures, vocalizations, and the social use of gaze at the end of the first year of life is a good predictor of lexical development during the second year. In fact, there is growing evidence suggesting that children coordinate gestures and vocal elements even before they start producing their first words, and that this synchrony is related to subsequent lexical development. The question that arises, then, is how vocal and motor components relate to each other in the transition from combinations of canonical babbling and rhythmic hand movements to early gesture-verbal productions. In this talk I will present some results from our current project, focused on the production of multimodal behaviors including rhythmic movements, object use, and vocal elements, and their relationship with early deictic gesture production.

When: April 25, 2022 at 2pm | Where: Zoom link | Watch video

Communicative pressure on caregivers scaffolds children's language learning

Dan Yurovsky (Carnegie Mellon University)

Abstract: By the time toddlers are able to run down the street, they are already producing over a thousand words of their native language. How do they get so much learning done in so little time? A key piece of the puzzle is that children do not learn language on their own, but in interactions with caregivers motivated to communicate with them. Using a combination of corpus analyses, experimental data, and computational models, I will argue that this communicative pressure both structures the input children learn from, and magnifies the power of children's developing capacities.

When: April 11, 2022 at 4pm | Where: Zoom link | Watch video

Predicting individual differences in language learning across populations

Patrick Wong (The Chinese University of Hong Kong)

Abstract: The rank-order stability of children's language development provides an opportunity to predict future language development from data collected in earlier years of life. In our research, we capitalize on this opportunity to evaluate hypotheses concerning language development across different typical and atypical populations. Direct biological measurements from young children as well as their health and family information are used to construct predictive models for individual-child predictions. These models provide the basis to address different questions about neural processing and language. For example, in typically developing children, we ask whether cortical and subcortical development interacts with native and non-native speech processing in infancy, and whether this interaction provides a basis for prediction of future development. In children who are hearing impaired, we examine whether brain regions that are most resilient to reduced auditory/spoken language input, measured via MRI before cochlear implantation (CI), enable a compensatory pathway to support better language development after CI. In both typical and atypical populations, we are in the process of testing whether individual-child predictions can inform the design and prescription of different types of early intervention and enhancement strategies in order to optimize language development for all children.

When: March 28, 2022 at 12:00 | Where: Zoom link | Watch video

Dyslexia in children across languages

FatimaEzzahra Benmarrakchi (UM6P - School of Collective Intelligence)

Abstract: If you are dyslexic when reading English, are you dyslexic when reading Arabic? Dyslexia is one of the most common specific learning disabilities: a neurological, language-based learning disability manifested by difficulty in learning to read despite conventional instruction, adequate intelligence, and sociocultural opportunity. Moreover, the characteristics of the language and cultural factors play important roles in the difficulties associated with this learning disability. In the first part of this talk, I will explore the manifestation of dyslexia across different language orthographies (e.g., Arabic, French, English). In the second part, I will present the reading and writing experiences of Moroccan children with dyslexia (Arabic native speakers).

When: March 14, 2022 at 12:00 | Where: Zoom link | Watch video

Extending the language architecture: Evidence from multimodal language use, processing and acquisition

Asli Ozyurek (Donders Institute for Brain, Cognition and Behavior)

Abstract: One of the unique aspects of human language is that in face-to-face communication it is universally multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). All hearing and deaf communities around the world use vocal and/or visual modalities (e.g., hands, body, face) with different affordances for semiotic and linguistic expression (e.g., Goldin-Meadow and Brentari, 2015; Vigliocco et al., 2014; Özyürek and Woll, 2019). What is crucial is that visual articulators in both co-speech gesture and sign, unlike in speech, have unique affordances for visible, dynamic, iconic (i.e., motivated form-meaning mappings), indexical (e.g., pointing), and simultaneous expressions, which are recruited frequently in both adult and child language. However, such representational formats have been considered in traditional linguistics as "external" to the main architecture of the language system, as they do not fit the so-called fundamental arbitrary, discrete, categorical, and sequential design features of language (mostly characteristics of speech or text). I will, however, argue for and show evidence that both spoken and sign languages combine and integrate visible modality-specific expressions with arbitrary, categorical, and sequential expressions in their structures. Furthermore, these expressions modulate language use, (neuro-cognitive) language processing, and acquisition, suggesting that they are an integral part of a unified multimodal language (e.g., Özyürek, 2014, 2021; Ortega et al., 2017). In this light, I will argue that proposals for the independence and parallelism of form and meaning as constituting the fundamental architecture of language need to be reconsidered. I will end my talk with a discussion of how a multimodal view of language (rather than a unimodal one based on speech or text only) is needed to explain the dynamic, adaptive, and flexible aspects of our language system, and how it can optimally bridge human biological, cognitive, and learning constraints to the interactive, culturally varying communicative requirements of the face-to-face context.

When: February 28, 2022 at 12:00 | Where: Zoom link | Watch video

A holistic measure of inter-annotation agreement with continuous data

Rachid Riad (École Normale Supérieure - Inria - Inserm)

Abstract: Inter-rater reliability/agreement measures the degree of agreement among raters describing, coding, or assessing the same phenomenon. Most coefficients measuring these agreements in psychology and the natural sciences (e.g., α, κ) focus on the categorization of events. Yet annotations of speech, and especially of conversational spontaneous speech, represent a complex continuous phenomenon: annotators are asked not only to categorize events but also to localize them, a task referred to as unitizing. In this presentation, we will describe the gamma agreement γ introduced by Mathet et al. (2015) and our work extending this measure with the Python package 'pygamma-agreement'. We illustrate the use of this measure with corpora coming from (1) daylong recordings used to study language acquisition, and (2) hospital interviews used to study speech pathologies.

When: February 21, 2022 at 12:00 | Where: Zoom link
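To make the unitizing problem concrete, here is a toy, stdlib-only sketch of agreement over time-anchored annotations, where each annotator produces (start, end, label) units. It only illustrates why temporal overlap matters; it is not the chance-corrected, alignment-based γ of Mathet et al. (2015), for which the 'pygamma-agreement' package should be used. All names and numbers below are hypothetical.

```python
def segment_overlap(s1, e1, s2, e2):
    """Length of temporal overlap between intervals [s1, e1] and [s2, e2]."""
    return max(0.0, min(e1, e2) - max(s1, s2))

def toy_agreement(ann_a, ann_b):
    """Crude stand-in for unitized agreement: the fraction of annotated time
    where both annotators placed a unit carrying the same label.
    Each annotation is a list of (start, end, label) tuples.
    Unlike gamma, this does no unit alignment and no chance correction."""
    matched = sum(
        segment_overlap(sa, ea, sb, eb)
        for (sa, ea, la) in ann_a
        for (sb, eb, lb) in ann_b
        if la == lb  # only count overlap when the category also matches
    )
    total_a = sum(e - s for s, e, _ in ann_a)
    total_b = sum(e - s for s, e, _ in ann_b)
    return 2 * matched / (total_a + total_b) if (total_a + total_b) else 1.0

# Two annotators coding the same (hypothetical) child recording:
ann_a = [(0.0, 1.0, "mama"), (2.0, 3.5, "ball")]
ann_b = [(0.1, 1.0, "mama"), (2.0, 3.0, "ball")]
score = toy_agreement(ann_a, ann_b)
```

Here the annotators agree on categories but disagree slightly on boundaries, so the score falls below 1 even though a purely categorical coefficient would report perfect agreement; that boundary sensitivity is exactly what unitizing measures must capture.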

Acquiring grammar interactively: from natal crying to 2 words

Jonathan Ginzburg (Université de Paris)

Abstract: TBA

When: December 17, 2021 at 10:30 | Where: Zoom link | Watch video

Pros and cons of zoom-based conversational research and unmoderated online studies

Open discussion

Abstract: TBA

When: December 10, 2021 at 12:00 | Where: Zoom link

Evaluation of computational models of language development using cumulative empirical data

Okko Räsänen (Tampere University)

Abstract: Computational models of child language development are algorithms that try to mimic infant language learning. Traditionally, such models have focused on individual language capabilities, such as phonetic category learning or word segmentation. However, recent advances in machine learning are enabling increasingly powerful models — models that can gradually start to address multiple aspects of language learning within a single learning architecture. Such integrated models of language development would have a significant impact on child language research, as it is still unclear how different bits and pieces of empirical findings, earlier capability-specific models, and high-level theories of language learning can be put together to obtain the big picture of the language learning process. However, in order to develop more accurate, holistic, and hence impactful models of infant language learning, we also need evaluation practices that compare model behavior to robust empirical data from infants across a range of language capabilities. Moreover, we need practices that can compare developmental trajectories of infants to learning trajectories of models, when these models are trained with increasing amounts of language input. In this talk, we will describe our recent work in attempting to address these needs. More specifically, we will introduce the idea of comparing models to large-scale and cumulative empirical data from infants, as quantified by meta-analyses conducted across a large number of individual behavioral studies, and as applicable to a range of language phenomena in parallel. We will present a basic conceptual framework for meta-analytic evaluation of computational models, and discuss the advantages, challenges, and limitations of the approach as a basis for future discussion and work in this direction.

When: December 03, 2021 at 12:00 | Where: Zoom link
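The core of the meta-analytic evaluation idea can be sketched in a few lines: gather a meta-analytic effect size per phenomenon, measure the model's effect size on the same paradigms, and compare the two profiles. The phenomena and effect sizes below are hypothetical placeholders, and a single correlation is of course a far cruder summary than the framework discussed in the talk.

```python
from statistics import mean

# Hypothetical meta-analytic effect sizes (e.g., Cohen's d) per phenomenon,
# and effect sizes measured from a model tested on the same paradigms.
meta_d = {"ids_preference": 0.70, "vowel_discrimination": 0.55, "word_segmentation": 0.20}
model_d = {"ids_preference": 0.62, "vowel_discrimination": 0.40, "word_segmentation": 0.25}

def pearson(xs, ys):
    """Plain Pearson correlation, stdlib only."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# How well does the model's profile of effects track the infant profile?
phenomena = sorted(meta_d)
fit = pearson([meta_d[p] for p in phenomena], [model_d[p] for p in phenomena])
```

In this sketch a high correlation means the model reproduces the relative ordering of effects across phenomena, not their absolute sizes; both aspects (and their change with increasing input) matter in the evaluation framework the abstract describes.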

Testing sound symbolism in human and non-human primates

Konstantina Margiotoudi (Aix-Marseille University)

Abstract: As opposed to the classic Saussurean view of the arbitrariness of the linguistic sign, iconicity is a pervasive feature of human language. Iconicity in vocal communication is known as sound symbolism: the intrinsic relationship between meaningless speech sounds and visual shapes. The most popular demonstration of sound symbolism is the 'maluma-takete' effect, in which a 'round'-sounding pseudoword such as 'maluma' fits better to describe a curved visual shape, whereas a 'sharp'-sounding pseudoword such as 'takete' fits better to describe a spiky abstract shape. Although sound symbolic effects have been reported across cultures and early in human development, it remains unclear whether this effect is an ability unique to humans or is present in other primate species. Here we tested the classic 'maluma-takete' effect in a group of touch-screen-trained chimpanzees and gorillas, as well as in a touch-screen-trained, language-competent bonobo. The results revealed no significant sound symbolic matching performance under either an implicit or an explicit task. Based on these findings, we suggest that the 'maluma-takete' mapping is plausibly an ability unique to humans. These results might be explained by neurobiological differences between human and nonhuman great apes that are relevant to the mechanism supporting the 'maluma-takete' mapping.

When: November 26, 2021 at 15:00 | Where: Zoom link | Watch video

Computational study of active and interactive word learning

Lieke Gelderloos (Tilburg University)

Abstract: Models of cross-situational word learning typically characterize the learner as a passive observer. However, a language learning child can actively participate in verbal and non-verbal communication. We present a computational model that learns to map words to objects in images through word comprehension and production. The productive and receptive parts of the model can operate independently, but can also feed into each other. This introspective quality enables the model to learn through self-supervision, and also to estimate its own word knowledge, select optimal input, and thereby alter its own learning trajectory. The modular set-up is also suitable for testing effects of communicative feedback. In this talk, I will cover our findings regarding active selection of input, and present preliminary results on tests with communicative feedback.

When: November 19, 2021 at 12:00 | Where: Zoom link

BabyBERTa: Learning More Grammar With Small-Scale Child-Directed Language

Philip Huebner (University of Illinois, Urbana-Champaign)

Abstract: Transformer-based language models have taken the NLP world by storm. However, their potential for addressing important questions in language acquisition research has been largely ignored. In this work, we examined the grammatical knowledge of RoBERTa (Liu et al., 2019) when trained on a 5M-word corpus of language acquisition data to simulate the input available to children between the ages of 1 and 6. Using the behavioral probing paradigm, we found that a smaller version of RoBERTa-base that never predicts unmasked tokens, which we term BabyBERTa, acquires grammatical knowledge comparable to that of pre-trained RoBERTa-base, and does so with approximately 15X fewer parameters and 6,000X fewer words. We discuss implications for building more efficient models and the learnability of grammar from input available to children. Lastly, to support research on this front, we release our novel grammar test suite that is compatible with the small vocabulary of child-directed input.

When: November 12, 2021 at 16:00 | Where: Zoom (send us an email to receive the link) | Watch video

Production practice is more effective than comprehension for second language learning

Elise Hopman (University of Wisconsin-Madison)

Abstract: Whereas most classroom-based language instruction traditionally emphasizes comprehension-based learning, memory research suggests that language production activities may provide a stronger learning experience than comprehension practice, due to the meaningfully different task demands involved in producing versus comprehending language. Using both artificial and natural language learning experiments with adults, I show that production exercises are more effective than comprehension exercises for learning the vocabulary and grammar of a foreign language. I will discuss these findings in the broader context of research implying that production and production-like activities might play a privileged role during learning more generally.

When: November 05, 2021 at 15:00 | Where: Zoom | Watch video

Perceptual development in infants and unsupervised representation learning in machines

Thomas Schatz (Aix-Marseille University)

Abstract: I will present my work at the interface between cognitive science and artificial intelligence, with a focus on ongoing research projects that I would like to develop at AMU. Through case studies involving early phonetic learning, probabilistic generative models, high-level auditory perception, spiking reservoir neural nets and auditory memory, I will argue that recent developments in unsupervised representation learning in machines open new avenues for understanding human perceptual development and, conversely, that the study of human perceptual development can inspire new developments in unsupervised representation learning in machines.

When: October 29, 2021 at 11:00 | Where: Zoom (send us mail to receive the link)

Smiles and Laughs in Human-Agent Interaction

Kevin El Haddad (University of Mons)

Abstract: Smiles and laughs (S&L) are among the most frequent and informative non-verbal expressions used in our daily interactions. Incorporating them into machines' communication skills, whether on the detection/perception side or on the generation/production side, is therefore a must in order to improve the quality of human-agent interaction (HAI) applications, among other aspects. This presentation will focus on our efforts to provide a better understanding of S&L conversational dynamics and to implement them in HAI modules. We will present our contributions and ongoing work in synthesis, recognition, and prediction technologies, as well as resources we offer to the community, in the hope that this same community will help us improve them through collaboration or other contributions. I strongly believe that, given the limited resources available in the scientific communities, the more people get involved, the faster we can integrate S&L, and by extension nonverbal expressions in general, into HAI applications. So I look forward to meeting you during this talk.

When: October 22, 2021 at 10:30 | Where: Zoom (send us mail to receive the link)

Language development as a joint process: Why the simultaneous learning of Form, Content, and Use is more a help than a hindrance

Abdellah Fourtassi (Aix-Marseille University & INRIA Paris)

Abstract: To acquire language, children need to learn form (e.g., phonology), content (e.g., word meaning), and use (e.g., finding the right words to convey a communicative intent). Research in language development has traditionally studied these dimensions separately. Indeed, one could imagine that children first acquire the form, then associate form with content, and only then learn how to use form and content adequately in a communicative context. In reality, children have to deal with aspects of form, content, and use simultaneously, and experimental studies suggest that the timelines of acquisition of these dimensions largely overlap, indicating that children learn them in parallel, not one at a time. While this fact makes language acquisition seem even harder than we previously thought, here I argue that the joint learning of form, content, and use may be more a help than a hindrance: these dimensions are interdependent in many ways and can therefore constrain/disambiguate each other. I will illustrate this idea based on my previous and current research combining experimental methods and computational modeling.

When: October 08, 2021 at 12:00 | Where: Zoom (send us an email to receive the link) | Watch video

Computational modeling as a tool to study cognitive development and evolution

Manuel Bohn (Max Planck Institute for Evolutionary Anthropology)

Abstract: In this talk, I will present a series of studies on information integration during word learning in young children. We were interested in how children balance different (sometimes conflicting) information sources when making pragmatic inferences in context. An integral part of this work is the use of computational cognitive models as a tool to formalise theories about information integration and developmental change. Based on this work, I will present some ideas (and data) for how the same modeling framework could be used to study a) the communicative abilities of great apes and b) individual differences in children’s cognitive development.

When: September 24, 2021 at 12:00 | Where: Zoom (send us an email to receive the link)

Linguistic alignment in parent-child verbal communication and gesture

Ruthe Foushee (University of Chicago)

Abstract: TBA

When: September 10, 2021 at 12:00 | Where: Zoom (send us an email to receive the link)

How looking at tasks can tell us more about language development

Christina Bergmann (Max Planck Institute for Psycholinguistics)

Abstract: Work that focuses on how we measure children's knowledge may seem a hurdle towards discovery. In this talk, I will argue that inspecting the methods we use can tell us a great deal about the underlying mechanisms that generate measurable behavior, and highlight how these insights are key for theory building and computational modelling.

When: July 02, 2021 at 12:00 | Where: Zoom

Exploring language development in autistic and TD children

Riccardo Fusaroli (Aarhus University)

Abstract: Language development is traditionally explored in terms of individual differences and/or the linguistic environment. In this talk I will present a more comprehensive framework, in which children actively engage with, and potentially shape, the linguistic environment, while adult speakers analogously adapt to and engage with the child's production. I will also present initial investigations of a longitudinal corpus involving 32 autistic and 35 typically developing children followed for over 2 years, between the ages of 2 and 5. The focus will be on predicting language development from individual differences (e.g., verbal IQ, socialization skills), the linguistic environment (amount of language, lexical richness, syntactic complexity), and conversational dynamics (linguistic alignment).

When: June 18, 2021 at 12:00 | Where: Zoom | Watch video

Linking Language evolution, language acquisition, and language diversity

Limor Raviv (Max Planck Institute for Psycholinguistics)

Abstract: What are the social, environmental, and cognitive pressures that shape the evolution of language in our species? Why are there so many different languages in the world? And how did this astonishing linguistic diversity come about? These are some of the most interesting questions in the fields of cognitive science and linguistics, and they represent the range of topics discussed in my research so far. My work focuses on linking core aspects of language acquisition, language evolution, and language diversity using a range of novel behavioral paradigms and computational models. My goal is to shed light on the communicative pressures and cognitive constraints (e.g., memory limitations, efficiency) that shape social interaction and language use in our species, and to identify the social, environmental, and cross-cultural factors (e.g., population size) that lead to language diversity and cross-linguistic variation. In this talk, I will provide an overview of my research over the past six years (including methods and results from selected projects), as well as present future directions and ongoing work.

When: June 11, 2021 at 12:00 | Where: Zoom | Watch video

The transition from prelinguistic communication to word use in typically hearing and deaf infants

Danielle Matthews (University of Sheffield)

Abstract: Around the end of the first year, infants make the transition from prelinguistic communication (babble, gesture, eye contact) to word use. I will present a series of studies that have 1) measured individual differences that predict this transition, 2) tested experimentally whether it is possible to promote learning, and 3) compared deaf and hearing infants. Together, these studies reveal the important role of the social environment in learning to talk.

When: May 21, 2021 at 12:00 | Where: Zoom | Watch video

Linking Social Language Acquisition with Artificial Intelligence

Sho Tsuji (University of Tokyo)

Abstract: Theories and data on language acquisition suggest that a range of cues is used, from information on structure found in the linguistic signal itself to information gleaned from the environmental context or through social interaction. We propose a blueprint for computational models of the early language learner (SCALa, for Socio-Computational Architecture of Language Acquisition) that makes explicit the connection between the kinds of information available to the social learner and the computational mechanisms required to extract language-relevant information and learn from it. SCALa integrates a range of views on language acquisition, further allowing us to make precise recommendations for future large-scale empirical research.

When: May 07, 2021 at 12:00 | Where: Zoom | Watch video

Human becoming in and through social interaction

Dimitris Bolis (Max Planck Institute of Psychiatry)

Abstract: Human becoming in and through social interaction

When: April 23, 2021 at 12:00 | Where: Zoom | Watch video