
Category Laboratory at Vanderbilt

supported by NSF, NEI, and Vanderbilt University

In the CatLab, we study visual cognition, including visual categorization, visual memory, and visual decision making: how objects are perceived and represented by the visual system, how visual knowledge is represented and learned, and how visual decisions are made. We approach these questions with a combination of behavioral experiments, cognitive neuroscience techniques, and computational and neural modeling. One line of work, funded by the National Science Foundation, investigates the temporal dynamics of visual object categorization and perceptual expertise for objects and faces. Another, funded by the National Eye Institute, uses computational modeling of visual decision making to predict behavioral and neural dynamics.
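To give a flavor of the kind of computational modeling of visual decision making described above, here is a minimal illustrative sketch of a race between two noisy evidence accumulators. This is not the lab's actual model; the function name, parameter values, and simplifications are hypothetical, chosen only to show how such models predict both the choice made and the time it takes.

```python
import random

def simulate_race(drift_a=0.20, drift_b=0.05, threshold=10.0,
                  noise=0.5, max_steps=10000, seed=None):
    """Race two noisy evidence accumulators to a common threshold.

    Each time step, each accumulator gains its drift rate plus
    Gaussian noise (and is floored at zero). The first accumulator
    to reach the threshold determines the choice; the step count
    stands in for response time. All values are illustrative.
    """
    rng = random.Random(seed)
    a = b = 0.0
    for step in range(1, max_steps + 1):
        a = max(a + drift_a + rng.gauss(0.0, noise), 0.0)
        b = max(b + drift_b + rng.gauss(0.0, noise), 0.0)
        if a >= threshold or b >= threshold:
            return ("A" if a >= b else "B"), step
    return None, max_steps

# The higher-drift accumulator should win more often and faster on average.
results = [simulate_race(seed=s) for s in range(200)]
a_wins = sum(1 for choice, _ in results if choice == "A")
```

In models of this family, experimental manipulations such as stimulus exposure or level of expertise can be expressed as changes in drift rates or thresholds, yielding quantitative predictions for both choice probabilities and response-time distributions.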


Jennifer Richler becomes Senior Editor at Nature Publishing

Posted on Aug 23, 2016

Jenn Richler has accepted a new position as Senior Editor at Nature Publishing Group. In her new role, she will cover psychology and the social sciences for interdisciplinary Nature titles including Nature Climate Change, Nature Energy, and Nature Nanotechnology, among others. The role covers all aspects of the editorial process, including manuscript selection, commissioning and editing of Reviews and News & Views, and writing for the journals.

Jenn received her PhD in Psychology at Vanderbilt in 2010 working with Palmeri and Gauthier. Since then, she has continued at Vanderbilt as a postdoctoral fellow, worked as Editorial Associate and Associate Editor for the Journal of Experimental Psychology: General, and created and curated the APA PeePs (Particularly Exciting Experiments in Psychology).

Congratulations Jenn!

New NSF Grant on Measuring, Mapping, and Modeling Perceptual Expertise

Posted on Aug 11, 2016

Our research group has been awarded a new three-year grant from the National Science Foundation on Measuring, Mapping, and Modeling Perceptual Expertise; PI is Isabel Gauthier, co-PI is Thomas Palmeri, and Senior Investigators are Sun-Joo Cho from Vanderbilt, Gary Cottrell from UCSD, and Mike Tarr and Deva Ramanan from Carnegie Mellon.

Our project investigates how and why people differ in their ability to recognize, remember, and categorize faces and objects. Many important real-world problems, such as forensics, medical imaging, and homeland security, demand precise visual judgments from human experts, yet individual differences in high-level visual cognition have received little attention compared to other aspects of human performance. Recent studies indicate that there is likely far greater variability than commonly acknowledged and that the ability to learn high-level visual skills is poorly predicted by general intelligence: not everyone who receives training in a visual domain like matching fingerprints or detecting tumors in chest x-rays will reach expert levels. Visual object recognition is thus a new domain in which understanding and characterizing individual differences can have real-world predictive power, adding to the contributions psychology has made in other areas, such as clinical psychology, personality, and general intelligence.

This project supports a collaborative interdisciplinary research network that aims to develop measures of individual differences in visual recognition, relate behavioral and neural markers of those differences, develop models that explain them, and relate those models to neural data. Because outcomes in many real-world domains depend on decisions based on visual information, developing measures, markers, and models of individual differences in high-level visual cognition can substantially improve how real-world visual talent is identified and how real-world visual performance is trained. Identifying individuals with talent for visual recognition and learning will help guide them into fields that demand high levels of precision, and understanding how people vary in visual recognition can inform individualized training at all levels of learning, not just for experts. For example, recognizing cases of disability in high-level vision and learning can inform rehabilitation and remediation. The collaborative team of scientists working on this project will capitalize on their individual successes and continue training female scientists and under-represented minorities. All students conducting research as part of this collaborative network will be mentored by scientists from multiple disciplines, providing them with an understanding far deeper than that achievable through any one discipline or method.

The project will support the activities of a collaborative research network studying individual differences in visual recognition. The scientists involved in this interdisciplinary effort include experts in ultra-high-field brain imaging, in cutting-edge methods for developing psychological tests, and in “deep” convolutional neural network models, which are powerful, biologically inspired computer models. The project will investigate how brain activity and brain structure, such as the thickness of the cortex in visual areas, can predict the quality and time course of visual learning. The team will develop and validate tests of visual ability that can be used to make precise predictions about brain activity and behavioral performance, and these brain measures and behavioral tests will in turn inform deep convolutional neural network models of vision. Such models are the most successful computer models of vision to date, and the higher layers of these hierarchical networks provide outstanding models of the brain areas critical to object recognition, but so far they have not been used to understand individual differences. Instead of the typical approach of seeking the best possible performance, the team will seek models that mirror human variability: making errors when people make errors, being slow when people are slow, and displaying the range of visual abilities and learning observed in humans. These models will help bridge variability across people in both behavior and the brain.

Symposium on Model-based Cognitive Neuroscience at Psychonomics this Fall

Posted on Jul 18, 2016

Thomas Palmeri and Brandon Turner from The Ohio State University will chair a symposium on Model-based Cognitive Neuroscience at the 2016 Annual Meeting of the Psychonomic Society. Following an introduction to model-based cognitive neuroscience, the talks will be:
Thomas Palmeri (Vanderbilt), Approaches to Model-Based Cognitive Neuroscience: Bridging Levels of Understanding of Perceptual Decision Making
Brandon Turner (Ohio State), Joint Models of Neural and Behavioral Data
Birte Forstmann (University of Amsterdam), Decision Threshold Dynamics in the Human Subcortex Measured with Ultra-high Resolution Magnetic Resonance Imaging
John Anderson (Carnegie Mellon), Combining Space and Time in the Mind
Michael Mack (University of Toronto), Tracking the Neural Dynamics of Conceptual Knowledge During Category Learning with Computational Model-based Neuroimaging
Sean Polyn (Vanderbilt University), The Neurocognitive Dynamics of Memory Search

Full abstracts can be found on the Psychonomic Society web site:


Recent papers from the CatLab

Posted on Jul 6, 2016

Purcell, B.A., & Palmeri, T.J. (in press). Relating accumulator model parameters and neural dynamics. Journal of Mathematical Psychology.

Turner, B.M., Forstmann, B.U., Love, B., Palmeri, T.J., & Van Maanen, L. (in press). Approaches to analysis in model-based cognitive neuroscience. Journal of Mathematical Psychology. [PDF]

Annis, J., Miller, B.J., & Palmeri, T.J. (in press). Bayesian inference with Stan: A tutorial on adding custom distributions. Behavior Research Methods. [PDF]

Ross, D.A., & Palmeri, T.J. (in press). The importance of formalizing computational models of face adaptation aftereffects. Frontiers in Psychology. [PDF]

Mike Mack accepts faculty position at the University of Toronto

Posted on Jun 5, 2016

Mike will begin this fall as an assistant professor in the Department of Psychology at the University of Toronto, one of the oldest and most distinguished psychology departments in the world.

Mike earned his PhD from our lab and has been a postdoctoral fellow at the University of Texas for the past several years. At Vanderbilt, Mike won the Jum Nunnally Dissertation Award, a Vanderbilt Dissertation Enhancement Grant, the Pat Burns Memorial Student Research Award, and the William F. Hodges Teaching Assistant Award, and was a Learning Sciences Institute Fellow. During his postdoctoral fellowship, he has been funded by an NIH NRSA grant and has served as an organizer for the OPAM conference and the Memory Disorders Research Society. Mike has published papers in JEP:General, JEP:HPP, Current Biology, Psychonomic Bulletin & Review, Journal of Vision, Vision Research, and several other outlets. His research combines behavioral experiments, functional brain imaging, and computational modeling to study human learning, memory, and categorization.

We all wish Mike the best of success in his new faculty position.

May Shen wins Lisa M. Quesenberry Foundation Award

Posted on May 30, 2016

We congratulate Jianhong (May) Shen, the 2016 winner of The Lisa M. Quesenberry Foundation Award. The award was established by Irvin and Mary Ann Quesenberry and Kathryn Quesenberry to memorialize the accomplishments of their daughter and sister, Lisa M. Quesenberry. It provides research or study awards to motivated graduate students, preferably female graduate students who are studying psychology and who have overcome significant personal challenges to pursue their education. Congratulations May!

Julie Schnur awarded Founder’s Medal for First Honors in Engineering

Posted on May 13, 2016

Today, Julie Schnur received her Bachelor of Engineering in Biomedical Engineering with a minor in Scientific Computing. Julie has worked for the past year as an undergraduate research assistant in the lab under the direct supervision of postdoctoral fellow Brent Miller. At today’s graduation ceremonies, Julie was honored with the Founder’s Medal for First Honors in Engineering, the highest honor bestowed on a graduate of Vanderbilt. A link to the details can be found at

This is the second year in a row that an undergraduate researcher in the lab has been so honored; last year’s winner of the Founder’s Medal in Engineering was our own Akash Umakantha.

May Shen and Jeff Annis win Young Scientist Travel Awards

Posted on Apr 12, 2016

Congratulations to May and Jeff on each being awarded a 2016 Young Scientist Travel Award to the Annual Meeting of the Society for Mathematical Psychology. The award provides a $1000 travel allowance for the society’s meeting this summer at Rutgers University.

Research Experience for Undergraduates (REU Summer 2016)

Posted on Oct 6, 2015

We are looking for outstanding students interested in a Research Experience for Undergraduates (REU) at the CatLab at Vanderbilt University this summer 2016. Our REU is part of an NSF-funded project entitled Perceptual Categorization in Real-World Expertise. This project uses online behavioral experiments to understand the temporal dynamics of perceptual expertise, measuring and manipulating the dynamics of object recognition and categorization at different levels of abstraction, assessing how those dynamics vary over measured levels of expertise, and using computational models to test hypotheses about expertise mechanisms. Students have opportunities to work on projects ranging from developing online experiments to developing analysis routines to developing and testing computational models. This REU is especially appropriate for students interested in applying to graduate programs in psychology, vision science, cognitive science, or neuroscience. The REU provides a $5000 summer stipend ($500 per week for ten weeks); an additional $150 per week helps offset the cost of housing and meals; a $250 travel allowance is also provided. The REU is restricted to undergraduate students who are currently enrolled in a degree program and who are U.S. citizens, U.S. nationals, or permanent residents of the United States.

PEN XXX Reunion Meeting

Posted on May 20, 2015

The Perceptual Expertise Network celebrated its 30th workshop by inviting current PEN members, previous PEN members, and PEN friends to a day of talks as well as a reunion dinner following the talks. This was held on May 14th, 2015 at the TradeWinds Island Grand Resort in St. Pete Beach, Florida, as a satellite to the annual VSS conference.

Speakers at PEN XXX were:
Thomas Palmeri (Vanderbilt), Opening remarks
Marlene Behrmann (Carnegie Mellon), Never the twain shall meet
Kim Curby (Macquarie University), Are faces super objects? Object-based benefits support holistic perception
Lisa Scott (UMass Amherst), How learning during infancy enhances and constrains brain and behavioral development
Bruno Rossion (University of Louvain), Understanding expertise in face perception with fast periodic visual stimulation
Suzy Scherf (Penn State), Puberty makes us different kinds of face experts
Jim Tanaka (Victoria), Bridging the expertise gap: From the laboratory to the real world
Mike Mack (UT Austin), The evolution of category knowledge: Linking learning models to the dynamics of neural representations
Jennifer Richler (Vanderbilt), Measuring individual differences in high-level vision in a latent-variables framework
Ben Cipollini (UCSD), Exploring anatomy and genetics of cortical asymmetries
Isabel Gauthier (Vanderbilt), 15 years of fMRI studies of expertise
Michael Tarr (Carnegie Mellon), Closing talk

Photos from the workshop can be found here:

Akash Umakantha wins Founder’s Medal

Posted on May 8, 2015

The Founder’s Medal is the top honor given to a graduate of Vanderbilt University. Cornelius Vanderbilt’s gifts to the university included endowment of this award, given since 1877 for first honors in each graduating class. Akash Umakantha, who has been working in the CatLab since the end of his freshman year, is this year’s recipient from the School of Engineering at Vanderbilt. Congratulations Akash!

Akash Umakantha, from West Chester, Ohio, is Founder’s Medalist for the School of Engineering. He graduated with a bachelor of engineering in electrical and biomedical engineering. Umakantha has been a researcher at Cold Spring Harbor Laboratory, the Vanderbilt Radiation Effects and Reliability Group and the Vanderbilt Department of Psychology. He has presented his research on models of decision-making at two international conferences. Working in Professor of Psychology Tom Palmeri’s laboratory after freshman year, he learned about mathematical models of decision-making and how to simulate complex mathematical equations on a computer. That experience inspired him to pursue an academic career in neuroscience and scientific computing. Umakantha served as an engineering tutor, and he volunteered in a 6th grade classroom through Vanderbilt Student Volunteers for Science. He is a member of engineering honor society Tau Beta Pi. Next fall, Umakantha will attend Carnegie Mellon University to pursue doctoral studies in neural computation and machine learning.

Jennifer Richler co-winner of the Bob Fox Award of Excellence in Post-Doctoral Research

Posted on Apr 21, 2015

The Bob Fox Award of Excellence in Post-Doctoral Research is granted to post-doctoral fellows in the Department of Psychology at Vanderbilt who have demonstrated outstanding achievement in research; it is named in honor of Robert “Bob” Fox for his essential role in guiding the evolution of Vanderbilt’s Psychology Department over a five-decade period starting in the mid-60s.

Jenn earned her PhD at Vanderbilt with Isabel Gauthier and Thomas Palmeri and has continued on at Vanderbilt as a post-doctoral fellow. She previously won the Nunnally Dissertation Award and the Pat Burns Graduate Student Research Award from the department. She has 30 peer-reviewed publications and is an Associate Editor at JEP:General. She also spearheaded the PeePs (Particularly Exciting Experiments in Psychology) newsletter that highlights research published in the six experimental psychology journals from APA.

Congratulations, Jenn!

May Shen co-winner of The William F. Hodges Teaching Assistant Award

Posted on Apr 21, 2015

The William F. Hodges Teaching Assistant Award recognizes outstanding achievement as a teaching assistant by a graduate student in the Department of Psychology at Vanderbilt. William Hodges was an undergraduate and a graduate student at Vanderbilt in the 1960s. After his untimely death in 1992, family and friends established the William F. Hodges Teaching Assistant Award at Vanderbilt to honor outstanding teaching assistants in the department.

May is a third-year graduate student in the CatLab. She has completed a Certificate in College Teaching from the Center for Teaching and has TAed for a wide array of courses in the department, including PSY208 (Principles of Experimental Design), PSY209 (Quantitative Methods), and PSY225 (Cognitive Psychology); this semester, she is TAing for a statistics course in Psychology and Human Development. Adriane Seiffert, for whom May TAed in PSY208 and PSY209, noted that her work was “exemplary in both courses” and that students commented that she “was responsive and helpful”, “gave exact answers to questions”, and “conveyed the material in a way she knew would be effective, logical and memorable”. Geoff Woodman, for whom May TAed in PSY225, noted that she “jumped on a week’s worth of lectures when given the opportunity.”

Congratulations, May!

New paper in Journal of Experimental Psychology: General

Posted on Mar 24, 2015

We have a new paper in press:

Mack, M.L., & Palmeri, T.J. (in press). The dynamics of categorization: Unraveling rapid categorization. Journal of Experimental Psychology: General. [PDF]

We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of object categorization, yet no study has previously investigated them together. Over five experiments, we systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. But this superordinate advantage was modulated significantly by target category trial context. With randomized target categories, the superordinate advantage was eliminated; and with “blocks” of only four repetitions of superordinate categorization within an otherwise randomized context, the advantage for the basic-level was eliminated. Contrary to some theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context.
