Conference

The ECVP is an annual international conference that aims to provide a forum for the presentation and discussion of new developments in the scientific study of visual perception. Empirical, theoretical, and applied perspectives from the disciplines of psychology, neuroscience, and cognitive science are all welcome and encouraged. Since 1978, ECVP has been one of the largest international conferences in the field, attracting researchers from all over the world. Please note that this year’s ECVP in Mainz will be an in-person meeting only.

ECVP 2025 starts on Sunday, August 24th, with a series of hands-on workshops and more informal warm-ups on science-related topics, and ends on Thursday, August 28th, with the Farewell Party.


On Sunday, August 24th, ECVP 2025 in Mainz will kick off with a series of workshops and warm-ups that will take place on the University Campus in the morning and early afternoon. On Sunday evening, the Perception Keynote Lecture will be held by Astrid Kappers (Eindhoven University of Technology) at the State Theater Mainz, followed by the Welcome Reception.

From August 25th to August 28th, all symposia, talk sessions, poster sessions and keynote lectures will take place on the University Campus, just a short tram or bus ride from the main train station.

On Monday evening (August 25th), there will be an informal networking event for everybody (young at heart <3) at the Old Mail Depot next to the main train station.

On Tuesday afternoon (August 26th), we will continue the Spotlight Lecture series, which showcases recent innovative and influential findings or methods in vision science. In 2025 the Spotlight in Vision Lecture will be given by Roland Fleming (Justus Liebig University of Giessen). In the evening, the popular Illusion Night will take place at the Culture Center KUZ, conveniently located in the heart of Mainz and within walking distance of numerous pubs and restaurants.

On Wednesday (August 27th), the traditional Rank Prize Lecture, this year given by William Warren (Brown University), will take place in the afternoon, and the Conference Dinner will be held in the evening at Heiliggeist, a venue with a historic ambience on the banks of the Rhine, a short walk from the main train station.

Finally, the conference will conclude on Thursday (August 28th) with talks and poster sessions running into the late afternoon. In the evening we will celebrate the end of the conference with the Farewell Party at the Old University Forum on the University Campus.

Free lunch will be provided during lunch breaks from August 25th to 28th. Free coffee, refreshments, and snacks will be available during all poster sessions. In addition, free drinks and finger food will be served during the Welcome Reception, the Illusion Night, and the Farewell Party.

The main conference venue is the campus of Johannes Gutenberg University in Mainz. It offers modern conference rooms and picturesque greenery for a walk between sessions.

All photos: Organizing Committee ECVP 2025

For ECVP 2025 in Mainz, our logo pays tribute to Johannes Gutenberg, the city’s most famous figure and the namesake of our university. Gutenberg is celebrated as the inventor of modern letterpress printing with movable types, a groundbreaking innovation that transformed communication. The design of our logo features movable types colored in different shades of the color wheel. This reflects not only the diversity within vision science, but also the ongoing evolution of how we communicate visual information. To enhance its visual appeal, gloss and shading effects have been added for depth.

In this way, the logo captures the essence of our conference theme and symbolizes the intersection of history and innovation in visual perception research.


Overview of our preliminary* conference schedule:

*Note: The days of the events are fixed, time slots may still change slightly.

Click on the images below to open a map of all conference venues on OpenStreetMap.

  • Workshops & warm-ups @ Law Building II
  • Spotlight in Vision Lecture & Rank Prize Lecture @ Audimax
    (Law Building I)
  • Talks & symposia @ Old Refectory or Audimax (Law Building I)
  • Poster sessions & coffee breaks @ Philosophicum
  • Lunch @ Mensa
  • Business Meeting @ Audimax (Law Building I)
  • Farewell Party @ Old University Forum
Data by OpenStreetMap
  • Perception Lecture & Welcome Reception (Aug 24th) @ State Theater (Staatstheater)
  • PerceptioNite (Aug 25th) @ Old Mail Depot (Altes Postlager)
  • Illusion & Demo Night (Aug 26th) @ KUZ culture center
    (Kulturzentrum KUZ)
  • Conference Dinner (Aug 27th) @ Heiliggeist
Data by OpenStreetMap

We are thrilled to have secured three exceptional scientists as keynote speakers for ECVP 2025:

Astrid Kappers
Eindhoven University of Technology


Aug 24th @ State Theater | 18:00

Title: Exploring haptic perception

Photo: Astrid Kappers

Laudatio: Jan Koenderink

Roland Fleming
Justus Liebig University Giessen


Title: Visual Perception: past, present and future

Photo: Lina Klein

Laudatio: Karl Gegenfurtner

William H. Warren
Brown University, Providence


Title: Perception, Action, and Information: Vision Outside-In

Photo: William Warren

Laudatio: James Todd

Rank Prize Lecture

Our symposia offer a diverse selection of research topics, featuring speakers from various career stages, research backgrounds, and global locations to encourage lively discussion and exchange of ideas. A symposium should provide a broad overview of a research area of interest to the ECVP audience: organizers should strive to include speakers from different career stages, research groups, and geographic locations who represent a wide range of views and ideas under the symposium’s overarching theme. Presentations should be related to each other and stimulate insightful discussion. The symposium schedule will be announced closer to the conference.

Please find more information about our submission guidelines on the Submissions page.

The symposium program is nearly complete. For a detailed list of all symposia, including the preliminary schedule of speakers, please expand the item below.

LIST OF SYMPOSIA:
1) Examining vision and visual dysfunction with advanced neuroimaging

Chairs: Antony Morland & Shahin Nasr

  • Netta Levin: Cortical visual field representation and data integration following optic neuritis
  • Michael Hoffmann: Consequences of congenital optic chiasm malformations on the visual brain
  • Shahin Nasr: Using high-resolution fMRI to investigate the effects of amblyopia on the mesoscale functional organization of the human visual cortex
  • Hinke Halbertsema: Fixel-based analysis of diffusion-weighted imaging data to assess neurodegeneration in homonymous hemianopia
  • Holly Bridge: Using magnetic resonance spectroscopy to investigate the role of neurochemistry in human visual system dysfunction
2) Dealing with the visual consequences of eye and head movements: Recent findings and implications

Chairs: David Souto & Alexander Schütz

  • Antonella Pomè & Eckart Zimmermann: Sensory Census: how efference copies from eye movements determine the number of objects we see in dynamic environments
  • Ziad Hafed: Dark contrasts are immune to saccadic suppression in the primary visual cortex
  • David Souto, Omar Bachtoula, & Ignacio Serrano Pedraza: Multiple mechanisms of response suppression to self-induced sensation during pursuit eye movements
  • Rozana Ovsepian, David Souto, & Alexander C. Schütz: Robust generalization of tuning to self-induced sensation
  • Paul MacNeilage: The role of oculomotor signals in stationarity perception
3) Perceiving visual actions: eye movement awareness and sensorimotor control in active vision

Chairs: Jan-Nikolas Klanke & Wiebke Nörenberg

  • Amit Rawal & Rosanne L. Rademaker: People are sensitive to their uniquely patterned retinal input
  • Jan-Nikolas Klanke, Sven Ohl, Almila Naz Esen, & Martin Rolfs: Eyes on target, awareness off course: Limited control and detection of catch-up saccades during pursuit eye movements
  • Alexander Goettker, Jolande Fooken, Shannon Locke, Karl Gegenfurtner, & Pascal Mamassian: Limited metacognitive awareness of eye movement accuracy: Insights from saccade and tracking tasks
  • Anne Hoffmann, Ilana Nisky, & Frédéric Crevecoeur: Task and control-demands influence the use of visual feedback during arm movement control
  • Wiebke Nörenberg, Pascal Mamassian, & Martin Rolfs: In the eye of the timer: Rising up to the challenges of subjective saccade timing
4) From understanding low-level visual processes to tackling key societal challenges – the changing role of vision research

Chairs: Olivier Penacchio & Ute Leonards

  • Branka Spehar: Statistical regularities in nature
  • Claudia Menzel: Visual processing and beneficial effects of nature
  • Tadeáš Dvořák: Biomarkers of exposure to nature and urban environments
  • Jan Mikuni: Visual aesthetic considerations on urban landscapes
  • Jay Davies, Jasmina Stevanov, & Ute Leonards: Questioning the dichotomous research approach to nature versus urban
5) Self-motion without actual motion: trends in visual vection research from basic neuro-cognitive processing to clinical applications

Chairs: Stefan Berti & Behrang Keshavarz

  • Robert S. Allison, Laurie M. Wilcox, Hongyi Guo, & Xue Teng: Object motion while experiencing vection
  • Paweł Stróżak, Tomasz Jankowski, Marcin Wojtasiński, & Paweł Augustynowicz: Individual-difference factors modulating the experience of vection. The role of field dependence, anomalous perceptual experiences, and tolerance of ambiguity
  • Stefan Berti, Brandy Murovec, Susan Yahya, Julia Spaniol, & Behrang Keshavarz: Early cortical processing of vection during coherent vs. non-coherent motion stimuli in younger and older adults: An event-related potential (ERP) study
  • Michaela McAssey, Lena Padovan, Geraldine Tauber, Valerie Kirsch, Thomas Brandt, & Marianne Dieterich: Combining EEG and vection to investigate visual-vestibular interactions in healthy and clinical populations
  • Grace Gabriel, Jennifer Campos, Meaghan Adams, Lauren Sergio, & Behrang Keshavarz: Vection in Individuals with and without Concussion: Associations with Postural Responses and Visual Dependence
6) Individual differences in perceptual and sensorimotor processing: A look into real-world expertise

Chairs: Jolande Fooken & Alexander Goettker

  • Lynn Schmittwilken, Anna L. Haverkamp, & Marianne Maertens: Quantifying human edge sensitivity in real-world tasks
  • Dominik Straub, Lukas Maninger, & Constantin Rothkopf: Estimating individual differences in perceptual, cognitive, and motor processes from behavior in tracking tasks
  • Ashima Keshava & Peter König: Adaptive actions and frugal memory: How gaze supports natural behavior
  • Jolande Fooken, Renato Moraes, & Randy Flanagan: Old hands move slower but eye-hand coordination is preserved in ageing
  • Roy S. Hessels, Toshiki Iwabuchi, Diederick C. Niehorster, Ren Funawatari, Jeroen S. Benjamins, Sayaka Kawakami, Marcus Nyström, Momoka Suda, Ignace T. C. Hooge, Motofumi Sumiya, Julie I. P. Heijnen, Martin K. Teunisse, & Atsushi Senju: Gaze behavior in face-to-face interaction: A cross-cultural investigation between Japan and the Netherlands
7) Is one test sufficient?

Chair: Michael Herzog

  • Michael Herzog: About noise and inter-participant variability
  • Anna-Lena Schubert: Individual differences in the speed of visual processing are stable across time but only moderately consistent across tasks
  • Amelia R. Hunt, Alasdair DF. Clarke, & Anna Nowakowska: Individual differences in visual search and the fallacy of misplaced concreteness
  • Jeny Bosten, Patrick Goodbourn, Gary Bargary, Adam Lawrance-Owen, & John Mollon: Correlated and uncorrelated individual differences in performance on a diverse set of psychophysical and oculomotor tasks
8) Temporal dependence in visual perception: Quo vadis?

Chairs: Mauro Manassi & David Pascucci

  • Mauro Manassi: Time, space and feature similarity determine repulsive and attractive serial biases in trustworthiness impressions
  • Emma Stewart: Cognitive and retinal components of serial dependence in oculomotor control
  • Merav Ahissar: Can we manipulate context effects by task instructions?
  • Koulla Mikellidou: Serial dependence in continuous and interrupted motion perception in an immersive virtual environment
  • David Pascucci: Context effects in perceptual decision-making: for better or worse?
9) Out of sight, but not out of mind: How the human brain represents images that are not directly seen

Chair: Rosanne Rademaker

  • Rosanne L. Rademaker: The question of representational formats in working memory (symposium introduction)
  • Maria V. Servetnik & Rosanne L. Rademaker: Mental representations are reinstated in the early visual cortex when used for visual comparisons
  • Joana Pereira Seabra, Andreea-Maria Gui, Carsten Allefeld, Vivien Chopurian, Alessandra S. Souza, & Thomas B. Christophel: Multiple formats of visuo-spatial working memory
  • Bradley Postle: Representational formats to encode context and priority in visual working memory
  • Clayton Curtis, Ziyi Duan, & Nathan Tardiff: Behavioral demands shape the format of visual working memory
10) Perception of non-rigid motions

Chairs: Krischan Koerfer & Markus Lappe

  • Takahiro Kawabe: Nonrigid motion perception underlying material and animacy impressions
  • Merve Erdogan & Brian Scholl: Rich non-rigid percepts, beyond biology: Perceiving point-light cloths waving in the wind
  • Jiayi (Jenny) Pang & William H. Warren: Spatial and temporal integration of nonrigid motion in human crowds
  • Krischan Koerfer & Markus Lappe: Nonrigid motion perception and eye movements
  • Roland Fleming: Motion cues and the perception of materials
11) Specificity and generalization of learning

Chair: Giorgio Manenti

  • Caspar M. Schwiedrzik: Stimulus variability enables generalization in visual perceptual learning through invariant representations
  • Rosanne L. Rademaker: Effects of statistical regularities on representation and behavior
  • Can Demircan, Tankred Saanum, Leonardo Pettini, Marcel Binz, Blazej M Baczkowski, Christian F. Doeller, Mona M. Garvert, & Eric Schulz: Evaluating alignment between humans and neural network representations in image-based learning tasks
12) The social symphony of gaze: New perspectives on eye contact behaviour

Chairs: Naiqi G. Xiao & Nikolaus Troje

  • Lauren Fink & Shreshth Saxena: Scaling mobile eye-tracking to multi-person social settings
  • Nikolaus Troje, Kristen Lott, & Nicholas Logan: Social gaze in video conferencing
  • Florence Mayrand & Jelena Ristic: Who Looks, When, and Why? Linking gaze behaviors in natural interactions with group and individual social function
  • Prasetia Putra & Fumihiro Kano: Decoding joint action success through eye movements: A data-driven approach
  • Sara Ripley, Jiaye Cai, Wei Fang, Xiaoqing Gao, Laurel Trainor, & Naiqi G. Xiao: Early-emerging sensitivity to social signals and gaze in mother-infant interactions
13) Understanding gaze

Chair: Anke Huckauf

  • Gernot Horstmann: Active and passive perception of direct gaze
  • Mehtap Cakir & Anke Huckauf: Recognition of mental processes of others based on gaze characteristics
  • Enkelejda Kasneci: Towards imperceptible gaze guidance in extended reality
14) Active vision in embodied interaction

Chair: Vasiliki Kondyli

  • Árni Kristjánsson: How does the visual system pick up information in the environment? Exploring active foraging in 3D space
  • Marcin Leszczyński: The role of neuronal oscillations in visual active sensing
  • Vasiliki Kondyli: Adaptive gaze behavior and active predictions. Multimodal behavioural studies in dynamic environments
  • Mehul Bhatt: Towards responsible AI foundations for neurocognitive analytics of active vision
15) Visual representations of bodies: Neural and computational mechanisms of action and social perception

Chairs: Martin A. Giese & Beatrice de Gelder

  • Rufin Vogels: Representations of static and dynamic bodies in macaque visual cortex
  • Alexander Lappe, Anna Bognar, Rufin Vogels, & Martin A. Giese: Shared-Feature Visualization by parallel backpropagation for body-selective neurons in the STS
  • Marius Zimmermann & Angelika Lingnau: Time course of neural midlevel representations underlying action recognition
  • Beatrice de Gelder: What body perception contributes to social interaction
16) Rethinking the role of brain rhythms in vision: Predictive dynamics, temporal sampling, and individual differences

Chairs: David Melcher, Daniel Kaiser, & Gianluca Marsicano

  • Lu-Chun Yeh, Max Bardelang, & Daniel Kaiser: Alpha rhythms track occluded motion in natural scene perception
  • Michele Deodato & David Melcher: The relevance of alpha phase for visual processing
  • Maëlan Q. Menétrey, Michael H. Herzog, & David Pascucci: Beyond the alpha cycle: how alpha activity shapes stable traits and transient dynamics in visual temporal integration
  • Giuseppe Di Dona, Alessia Santoni, Sara Stottmeier, Klara Hemmerich, & Luca Ronconi: Oscillatory dynamics and individual differences underlying predictive coding in visual perception
  • Gianluca Marsicano & David Melcher: Weighting of perceptual priors and sensory evidence in visual causality perception along the ASD-SCZ continuum
17) Where and when? Modeling motion prediction

Chair: Daniel Oberfeld-Twistel

  • Joan López-Moliner, David Aguilar-Lleyda, & Cristina de la Malla: How scene variability affects time-to-contact estimation and its use in decision-making
  • Daniel Oberfeld-Twistel & Tim Niewalda: Simple Bayesian observer models explain important characteristics of visual TTC estimation in a street-crossing scenario
  • Borja Aguado & Loes C.J. van Dam: Explaining the angle-of-approach and curveball effects in interception with an LQG model that combines trajectory prediction and implicit goal costs
  • Oh-Sang Kwon, Hyunjun Jeon, & Duje Tadin: Discrete percepts of continuously moving objects
  • Constantin Rothkopf, Dominik Straub, & Tobias Niehues: Intercepting moving targets: from optimal control to TTC
18) Using interocular suppression in consciousness research: Current state and future directions

Chairs: Renzo Lanfranco, Tommaso Ciorli, & Timo Stein

  • Surya Gayet: Perceptual precedence for expected and dreaded visual events – evidence from ‘bias-free’ breaking continuous flash suppression
  • Tommaso Ciorli: Disentangling conscious and unconscious processing in interocular suppression: the Rev-bCFS paradigm
  • Cordula Hunt, Florian Kobylka, & Guenter Meinhardt: Temporal summation reveals different levels of feature integration under interocular suppression
  • Renzo Lanfranco: Beyond interocular suppression: Unmasked sub-millisecond presentations reveal visual processing priorities in perception and awareness
  • Timo Stein: Unpredictability accelerates conscious access during natural scene perception: Evidence from breaking CFS

19) Sensing the future: Multisensory, aesthetics and sustainable insights in material perception

Chair: Marella Campagna

  • Claus-Christian Carbon: Beyond vision: The multisensory nature of aesthetics
  • Marella Campagna, Alexander (Sasha) Pastukhov, & Claus-Christian Carbon: Multisensory aesthetic perception: A quantitative-qualitative study on visuo-tactile interactions with material textures
  • Lotta Straube, Alexander (Sasha) Pastukhov, & Claus-Christian Carbon: Sustainable product and material perception: A multisensory exploration of denim jeans

20) The perception of the visual world – 75 years later

Chair: Klaus Landwehr

  • James T. Todd: Optical gradients as sources of visual information
  • Brian Rogers: Has Gibson’s (1950) characterisation of “The Stimulus Variables for Visual Depth and Distance” stood the test of time?
  • Klaus Landwehr, Heiko Hecht, & Christoph von Castell: Texture gradients are alive and well
  • Jan J. Koenderink: The focus of expansion

On Sunday, August 24th, a series of hands-on science skill workshops and warm-ups on science-related issues will be held on the University Campus in the Law Building II. We will have a total of 10 sessions, running concurrently in the morning from 10:00-12:30 and in the afternoon from 13:30-16:00. Because sessions run in parallel, you may register for only one session in the morning and one in the afternoon. All tutorials are listed below with a short abstract, and free registration for the workshops is now open via our booking platform Converia.
All workshop slots are now filled with interesting sessions – many thanks to all our workshop organizers!

LIST OF WORKSHOPS:

Workshops in the morning (from 10:00-12:30)

1) Unity game engine for beginners: Creating virtual reality applications and serious games

Organizer: Alessandro Forgiarini (University of Udine)

This beginner-friendly workshop introduces the Unity Game Engine as a powerful tool for creating Virtual Reality (VR) applications and Serious Games. These technologies are widely used in research on education, therapy, and training.
The workshop is designed for individuals new to game development and offers a step-by-step approach to building an engaging and immersive project. Participants will learn the fundamentals of Unity, including navigating the interface, creating 3D scenes, managing assets, and using basic C# scripting to add interactivity. The workshop will emphasize how VR technology can enhance user engagement and enable the design of meaningful and intuitive experiences.
After a brief introduction to VR hardware, attendees will create a simple VR Serious Game through guided activities. No prior experience with programming or Unity is required, making this session accessible to everyone.

Please note: This workshop consists of two sessions that build on one another (Part I & Part II in the afternoon)!

2) Open and FAIR stimulus creation with stimupy

Organizer: Lynn Schmittwilken & Joris Vincent (Technische Universität Berlin)

Stimuli are at the heart of vision science, yet are not always openly accessible. Stimupy (Schmittwilken, Maertens, & Vincent, 2023) tackles this problem, and makes stimulus creation findable, accessible, interoperable, and reusable (FAIR). Stimupy is an open-source Python package for creating two-dimensional stimuli to test and/or control aspects of early/mid-level vision, including shapes, gratings, visual illusions, and noises. In this tutorial, we introduce you to FAIR in the context of stimulus creation, and show you how you can use stimupy for a wide range of research purposes, such as experimentation, modeling, replication, and the exploration of stimulus parameter spaces.

Attendees should bring a laptop to fully participate in the workshop. They will get the most out of it if they already have Python installed (any recent version, 3.9+, suffices), as well as stimupy and its dependencies – the installation instructions from stimupy.readthedocs.io should cover the usual cases.
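
The kind of parameterized stimulus stimupy produces can be sketched in plain NumPy. The function below is an illustrative stand-in only (its name and arguments are not stimupy's API) showing the usual parameterization by visual size in degrees, pixels per degree (ppd), and spatial frequency:

```python
import numpy as np

def sinewave_grating(visual_size_deg=4.0, ppd=32, frequency_cpd=2.0,
                     orientation_deg=0.0, contrast=1.0, mean_lum=0.5):
    """Sine-wave grating as a 2D luminance array in [0, 1].
    Illustrative only; not stimupy's actual API."""
    n = int(visual_size_deg * ppd)              # image size in pixels
    deg = (np.arange(n) - n / 2) / ppd          # pixel coordinates in degrees
    x, y = np.meshgrid(deg, deg)
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * frequency_cpd * (x * np.cos(theta) + y * np.sin(theta))
    return mean_lum * (1 + contrast * np.sin(phase))

img = sinewave_grating()
print(img.shape)  # (128, 128)
```

Stimupy itself offers many such generators (gratings, illusions, noises) behind a consistent, documented interface, which is what makes the stimuli FAIR.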

3) A practical introduction to evidence accumulation models in visual perception research

Organizer: Margherita Calderan & Carolina Maria Oletto (University of Padua)

This tutorial is designed to provide the foundations for using evidence accumulation models on experiments involving perceptual judgements. In visual perception, researchers frequently rely on participants’ choices alone. This approach is valuable, but it overlooks the temporal information embedded within response times, which can reflect underlying perceptual processes (Corbett & Smith, 2020; Hellman et al., 2024; Rushton et al., 2024). Evidence accumulation models offer a more comprehensive approach by jointly considering participants’ choices and response times (Ratcliff & McKoon, 2008). Moreover, these models can also handle multiple-choice scenarios (two or more; Heathcote & Matzke, 2022), allowing the analysis of responses in a broader range of visual perception tasks. This tutorial will introduce the rationale for using evidence accumulation models in visual perception research. Real research data will be presented and analysed through Bayesian multilevel models. A basic knowledge of Bayesian statistics is useful for a deeper understanding of the tutorial, but is not mandatory. Please bring your laptop with R and RStudio already installed.
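
To make the "choices plus response times" idea concrete, here is a minimal, hypothetical simulation of a two-choice drift-diffusion model (the simplest evidence accumulation model). The tutorial itself works in R with Bayesian multilevel fits; this Python sketch and its parameter values are purely illustrative:

```python
import numpy as np

def simulate_ddm(drift=1.0, bound=1.0, noise=1.0, dt=0.001,
                 n_trials=1000, non_decision=0.3, rng=None):
    """Simulate a two-choice drift-diffusion model.

    Evidence starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it reaches +bound (choice 1) or -bound (choice 0).
    Returns choices (0/1) and response times in seconds.
    """
    rng = np.random.default_rng(rng)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    sqrt_dt = np.sqrt(dt)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * sqrt_dt * rng.standard_normal()
            t += dt
        choices[i] = int(x > 0)
        rts[i] = t + non_decision
    return choices, rts

choices, rts = simulate_ddm(rng=1)
```

With these settings, roughly 88% of simulated choices hit the upper bound and mean RTs sit near one second; fitting such a model recovers drift rate, boundary, and non-decision time jointly from both behavioral measures.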

4) R you serious? Teaching R programming to psych students

Organizer: Meike Steinhilber (Johannes Gutenberg University Mainz)

Teaching R in psychology courses often presents unique challenges: students may lack prior programming experience, time is limited, and motivation to learn coding can be low. Yet, mastering R is essential for conducting statistical analyses effectively. How can we best teach R in a way that is engaging, accessible, and pedagogically sound?

This workshop will explore both technical and instructional strategies for teaching R. We will introduce learnr, an interactive learning package, and showcase OtteR, a tool designed to support R education. Beyond these resources, we will discuss best practices for structuring R courses, balancing statistical concepts with programming fundamentals, and collaboratively developing ideas to enhance long-term student engagement.

Whether you are an experienced instructor looking to refine your teaching approach or new to teaching R, this session welcomes anyone interested in improving R education in psychology. We also encourage seasoned educators to share their experiences, challenges, and successful strategies, fostering a collaborative exchange of insights and best practices.

To get the most out of this session, please bring:

  • A tablet or laptop to explore OtteR in real time.
  • Your current R curriculum to serve as a foundation for discussion.
  • A list of questions, challenges, or key topics related to teaching R that you’d like to explore and troubleshoot together.
5) You should write an R package. It is easier than you think, and I’ll show you how.

Organizer: Alexander (Sasha) Pastukhov (University of Bamberg)

In our research, we generate innovative and useful analysis methods. Yet, reusing these methods across projects or sharing them as plain R scripts or notebooks can often be challenging or awkward. Packaging your code as an open-source library available on GitHub and CRAN not only facilitates reuse in new projects but also simplifies collaboration by ensuring your tools are accessible to the broader scientific community.

The task may initially seem daunting, and you might feel that package-writing is best left to trained programmers. However, R and RStudio offer an excellent suite of tools that make transforming your code into a well-functioning, well-documented, and easy-to-install package surprisingly straightforward (or, at least, easier than you might think). Moreover, converting your code into a package – especially one that meets CRAN standards – encourages you to address aspects that might not be part of your usual workflow. This process will push you to write clear documentation complete with practical examples, prepare illustrative datasets, and test your code not only to confirm that it works but also to ensure it fails gracefully when expected.

This workshop aims to provide a comprehensive overview of the entire package creation process. We will cover everything from creating an empty project and adding functions or classes to writing comprehensive documentation and practical examples (since poorly structured documentation is often the primary barrier to using your methods), as well as including and documenting example data, creating vignettes, and testing your package. Additionally, you will learn how to make your package installable from GitHub, publish documentation via GitHub Pages, prepare it for CRAN submission, and ensure that your package is properly cited. I will introduce you to the tools R and RStudio provide for each step and how to automate tasks to streamline package creation.
Together, we will build a simple yet feature-complete library that you can use as a stepping stone for your future packages.

Prerequisite Knowledge: Familiarity with R and, optionally, Git

Prerequisite Materials:

  • A laptop with R and RStudio installed
  • The following R packages:
    • devtools
    • testthat
    • roxygen2
    • pkgdown
  • A GitHub account (optional)

Workshops in the afternoon (from 13:30-16:00)

6) Power analyses through simulations: Annoying and time-consuming, but probably our only real shot. Here’s how you do it!

Organizer: Björn Jörges (York University)

In response to the replication crisis, calls have been made to increase statistical power in psychological studies. Traditional tools for power analyses such as G*Power can accommodate simpler statistical tests like t-tests or ANOVAs but fall short for more complex study designs and data structures. And, for better or for worse, such complexities are the norm in Cognitive Psychology, where many participants tend to repeat many trials in many different conditions. The only principled way to approach power analyses in this context is through simulations. This two-hour, hands-on workshop will give a short introduction to the motivation behind power analyses and then walk participants through the process of setting up one such power analysis in R. This includes specifying numerical predictions, simulating the expected data including variability, specifying the statistical analysis based on Linear Mixed Modelling, and running an appropriate number of repetitions of this simulation. We will use two examples (a reaction-time task and a psychophysical two-alternative forced-choice task) to cover two common use cases with complex data structures. Participants will need a recent installation of R (preferably 4.4.2) and RStudio, and they will receive access to an open repository with all code and data used in this workshop.
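
The simulate-analyze-repeat loop at the core of this approach is language-agnostic. As a hypothetical sketch (in Python rather than the workshop's R, with a paired t-test on subject means standing in for a linear mixed model, and all effect sizes invented for illustration): power is simply the fraction of simulated experiments in which the analysis comes out significant.

```python
import numpy as np
from scipy import stats

def simulated_power(n_subjects=24, n_trials=50, effect_ms=15.0,
                    sd_subject=40.0, sd_trial=120.0,
                    alpha=0.05, n_sims=1000, rng=0):
    """Estimate power by repeatedly simulating a within-subject RT study.

    Each subject contributes `n_trials` RTs per condition; the true
    condition effect is `effect_ms`, with between-subject intercepts and
    trial-level noise. Significance via paired t-test on subject means.
    (Illustrative numbers; a real analysis would fit a mixed model.)
    """
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(n_sims):
        subj = rng.normal(0, sd_subject, n_subjects)          # random intercepts
        a = subj[:, None] + rng.normal(0, sd_trial, (n_subjects, n_trials))
        b = subj[:, None] + effect_ms + rng.normal(0, sd_trial, (n_subjects, n_trials))
        _, p = stats.ttest_rel(a.mean(axis=1), b.mean(axis=1))
        hits += p < alpha
    return hits / n_sims

power = simulated_power()
```

Rerunning with different sample sizes or trial counts then maps out the design space, which is exactly what canned tools cannot do for nested data.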

7) MLDS and MLCM: Two scaling methods to study stimulus appearance

Organizer: Guillermo Aguilar (Technische Universität Berlin)

Maximum Likelihood Difference Scaling (MLDS) and Maximum Likelihood Conjoint Measurement (MLCM) are two methods used to estimate perceptual scales (Knoblauch & Maloney, 2012). These scales reflect stimulus appearance and characterize the mapping of stimulus dimensions to a perceptual dimension of interest. They can also serve as a basis for comparing computational models of the visual system. In this hands-on tutorial, you will learn how to design a typical MLDS/MLCM experiment and estimate scales using the collected data (in the R programming language). We will also cover the underlying assumptions of the method, how and when these assumptions can be experimentally tested, and provide general recommendations to avoid common pitfalls encountered in practice.

Knowledge of R programming is not strictly required, but attendees should have basic programming skills.
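
As a minimal sketch of how difference-scaling data arise, consider a hypothetical observer with a compressive perceptual scale: on each trial, two stimulus pairs are shown and the observer reports which pair looks more different, with internal noise on the perceived differences. (Function names and parameters below are illustrative, not taken from the MLDS software.)

```python
import numpy as np

def simulate_mlds_trials(scale, n_trials=500, noise=0.1, rng=0):
    """Simulate responses in a difference-scaling (MLDS) experiment.

    `scale` maps each stimulus level to a perceptual value. On each trial
    two non-overlapping pairs (a,b) and (c,d), a<b<c<d, are compared; the
    observer picks the pair with the larger (noisy) perceptual difference.
    Returns quadruples and binary responses (1 = second pair chosen).
    """
    rng = np.random.default_rng(rng)
    n = len(scale)
    quads, resps = [], []
    for _ in range(n_trials):
        a, b, c, d = np.sort(rng.choice(n, size=4, replace=False))
        delta = abs(scale[d] - scale[c]) - abs(scale[b] - scale[a])
        resps.append(int(delta + rng.normal(0, noise) > 0))
        quads.append((a, b, c, d))
    return np.array(quads), np.array(resps)

# Hypothetical compressive scale over 10 stimulus levels
scale = np.sqrt(np.linspace(0, 1, 10))
quads, resps = simulate_mlds_trials(scale)
```

MLDS then inverts this process: from the quadruples and binary responses alone, maximum-likelihood fitting recovers the perceptual scale values up to affine transformation.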

8) WaveSpace: A modular python tool for simulating and analyzing cortical traveling waves

Organizer: Kirsten Petras (Université Paris Cité)

Oscillatory cortical activity has been found to propagate smoothly within and across cortical regions. Finding and characterizing these spatio-temporally consistent traveling waves in multi-channel data, recorded with invasive as well as non-invasive techniques, requires multiple consecutive but sometimes interchangeable processing and analysis steps. The current literature on cortical traveling waves lacks a clear consensus on which methods can and should be applied in specific situations.

In this tutorial, we introduce WaveSpace (https://github.com/DugueLab/WaveSpace), a modular simulation and analysis tool in Python implementing a range of different pipelines to find, describe, and statistically evaluate traveling waves, bundled with a simulation module to generate synthetic data with diverse wave dynamics in realistic background activity. These features allow researchers to generate benchmark analyses tailored to their experimental paradigm and model experimental outcomes in silico.

Attendees will learn to simulate and analyze oscillatory traveling waves and optimize analytical pipelines.

Simulations and example data will be provided, but attendees are also welcome to bring their own data (anything that can be converted into a numpy array of channels x timepoints and has spatial descriptors of sensor positions in either 2D or 3D space will do).
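
As an illustration of the channels x timepoints format such data take, here is a toy NumPy generator (not WaveSpace's own simulation module; all parameters are invented) that synthesizes a plane-wave-like dataset with a fixed phase lag per channel:

```python
import numpy as np

def traveling_wave(n_channels=16, n_times=1000, fs=500.0, freq=10.0,
                   phase_step=np.pi / 8, snr=2.0, rng=0):
    """Toy synthetic (channels x timepoints) data: a 10 Hz oscillation
    whose phase lags by `phase_step` radians per channel, plus white
    noise. Illustrative only, not WaveSpace's simulation module."""
    rng = np.random.default_rng(rng)
    t = np.arange(n_times) / fs                    # time axis in seconds
    ch = np.arange(n_channels)[:, None]            # channel index column
    signal = np.sin(2 * np.pi * freq * t - ch * phase_step)
    noise = rng.standard_normal((n_channels, n_times)) / snr
    return signal + noise

data = traveling_wave()
```

With a phase step of pi/8, channels eight steps apart are in anti-phase and correlate strongly negatively; that systematic phase gradient across space is the signature that wave-detection pipelines quantify properly.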

9) Principles of working in interdisciplinary teams

Organizer: Robin Welsch (Aalto University)

This workshop introduces the foundational principles of effective interdisciplinary collaboration, specifically at the intersection of visual perception and computer science. In today’s research landscape, breakthroughs often arise when experts from diverse fields come together. This session will focus on collaborating across disciplines to integrate computational techniques—such as machine learning, data analytics, mixed reality, and computer vision—with experimental approaches in visual perception research, while also building bridges to other sciences.

The workshop is designed for researchers who want to learn how to navigate the challenges of interdisciplinary teamwork. We will explore fundamental strategies for establishing a common language between disciplines, aligning different methodologies, and overcoming communication barriers that impede collaboration. 

During the practical exercise, small groups will simulate the process of forming an interdisciplinary team, identifying complementary skills, and outlining a project that integrates computational methods with perceptual science. This hands-on approach highlights potential obstacles and demonstrates practical solutions for fostering innovative research partnerships.

No prior experience in interdisciplinary projects is required. Participants should bring a laptop to engage fully in the collaborative exercise. This workshop is an ideal opportunity for anyone looking to build or enhance research teams from an interdisciplinary angle to drive novel insights in visual perception.