CATLAB

category laboratory

Data Science Visions TIPs Funded

Posted on Aug 10, 2017

Vanderbilt has recently funded our TransInstitutional Programs (TIPs) proposal, Data Science Visions:
https://www.vanderbilt.edu/strategicplan/trans-institutional-programs/tips-2017/data-science-visions.php
Modern society, medicine, business, science, engineering, and even the humanities are awash in data. The amount of data being produced is growing so fast that a new interdisciplinary field called data science has emerged to process, analyze, visualize, and ultimately extract knowledge from those data. This initiative seeks to take the first steps in positioning Vanderbilt to be a leader in this critical new field. The initiative will identify and connect the disparate islands of data science activity at Vanderbilt to create a unified data science community and spark cross-campus research collaborations. It will also support new educational tracks and establish active partnerships with on-campus research groups and off-campus industry to provide immersive, real-world training for students. More ambitiously, this TIPs award hopes to seed a sustainable, visible, and internationally impactful effort, culminating in the creation of a trans-institutional data science institute at Vanderbilt.

Palmeri will also be part of the Provost’s Working Group on Data Science:
https://news.vanderbilt.edu/2017/08/17/new-working-group-launches-for-big-data-and-data-science-initiatives
The Data Science Visions Working Group consists of 20 faculty members from a broad set of disciplines who will be engaged in this project over the next year and will report to the provost and to all of the school and college deans, including Jeff Balser, president and CEO of Vanderbilt University Medical Center and dean of the School of Medicine.


New NSF Grant on Measuring, Mapping, and Modeling Perceptual Expertise

Posted on Aug 11, 2016

Our research group has been awarded a new three-year grant from the National Science Foundation on Measuring, Mapping, and Modeling Perceptual Expertise. The PI is Isabel Gauthier, the co-PI is Thomas Palmeri, and the Senior Investigators are Sun-Joo Cho from Vanderbilt, Gary Cottrell from UCSD, and Mike Tarr and Deva Ramanan from Carnegie Mellon.

Our project investigates how and why people differ in their ability to recognize, remember, and categorize faces and objects. Many important real-world problems, such as forensics, medical imaging, and homeland security, demand precise visual judgments from human experts. Yet individual differences in high-level visual cognition have received little attention compared to other aspects of human performance. Recent studies indicate that there is likely far greater variability than commonly acknowledged and that the ability to learn high-level visual skills is poorly predicted by general intelligence. Not everyone who receives training in a visual domain, such as matching fingerprints or detecting tumors in chest x-rays, will be able to reach expert levels. Visual object recognition is thus a new domain in which understanding and characterizing individual differences can have real-world predictive power, adding to the contributions psychology has made in areas such as clinical psychology, personality, and general intelligence. This project supports a collaborative interdisciplinary research network that aims to develop measures of individual differences in visual recognition, relate behavioral and neural markers of those differences, develop models that explain them, and relate those models to neural data. Because outcomes in many real-world domains depend on decisions based on visual information, developing measures, markers, and models of individual differences in high-level visual cognition can lead to substantial improvements in identifying real-world visual talent and in improving real-world visual performance and training. Moreover, identifying individuals with talent for visual recognition and learning will help guide them into fields that demand high levels of precision. Finally, understanding how people vary in visual recognition can inform individualized training at all levels of learning, not just among experts.
For example, recognizing cases of disability in high-level vision and learning can inform rehabilitation and remediation. The collaborative team of scientists working on this project will capitalize on their individual successes and continue training female scientists and members of under-represented minorities. All students conducting research as part of this collaborative network, including female scientists and minorities, will be mentored by scientists from multiple disciplines, providing them with an understanding far deeper than that achievable through any one discipline or method.

The project will support the activities of a collaborative research network studying individual differences in visual recognition. The scientists involved in these interdisciplinary efforts include experts in brain imaging at ultra-high field, in cutting-edge methods for developing psychological tests, and in “deep” convolutional neural network models, which are powerful, biologically inspired computer models. The project will investigate how brain activity and brain structure, such as the thickness of the cortex in visual areas, can predict the quality and time course of visual learning. The team will develop and validate tests of visual ability that can be used to make precise predictions about brain activity and behavioral performance. These brain measures and behavioral tests will in turn inform deep convolutional neural network models of vision. Such networks are the most successful computer models of vision to date, and the higher layers of these hierarchical networks provide outstanding models of the brain areas critical to object recognition, but so far they have not been used to understand individual differences. Instead of the typical approach of seeking the best performance possible, the team will seek models that mirror human variability: making errors when people make errors, being slow when people are slow, and displaying the range of visual abilities and learning observed in humans. These models will help bridge variability across people in both behavior and the brain.
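
The idea of judging a model by how well it mirrors human variability, rather than by raw accuracy alone, can be sketched with a toy example. All numbers and model names below are hypothetical illustrations, not data or models from the project:

```python
import numpy as np

# Hypothetical per-item error rates (fraction of trials wrong) on 8 test
# images: human observers and two candidate recognition models.
human   = np.array([0.05, 0.40, 0.10, 0.55, 0.20, 0.35, 0.08, 0.50])
model_a = np.array([0.03, 0.02, 0.04, 0.01, 0.03, 0.02, 0.05, 0.01])  # very accurate
model_b = np.array([0.10, 0.45, 0.15, 0.60, 0.25, 0.30, 0.10, 0.55])  # human-like

for name, model in [("model_a", model_a), ("model_b", model_b)]:
    overall_error = model.mean()
    # Does the model err on the same items that people err on?
    mirror_r = np.corrcoef(human, model)[0, 1]
    print(f"{name}: overall error = {overall_error:.2f}, "
          f"correlation with human error pattern = {mirror_r:.2f}")
```

On this criterion model_b, despite making more errors overall, is the better account of human behavior, because its error pattern tracks the items that people themselves find hard.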


Special Issue on Model-Based Cognitive Neuroscience

Posted on Dec 5, 2014

Thomas Palmeri from Vanderbilt, Brad Love from University College London, and Brandon Turner from The Ohio State University are co-editing a special issue of the Journal of Mathematical Psychology on Model-Based Cognitive Neuroscience. The special issue explores the growing intersection between cognitive modeling and cognitive neuroscience. Cognitive modeling has a rich history of formalizing and testing hypotheses about cognitive mechanisms in a mathematical and computational language, making exquisite predictions of how people perceive, learn, remember, and decide. Cognitive neuroscience aims to identify the neural mechanisms associated with key aspects of cognition, using techniques such as neurophysiology, electrophysiology, and structural and functional brain imaging. The two come together in a powerful new approach called model-based cognitive neuroscience, which can both inform model selection and help interpret neural measures. Cognitive models decompose complex behavior into representations and processes, and these latent model states are used to explain the modulation of brain states under different experimental conditions. Reciprocally, neural measures provide data that help constrain cognitive models and adjudicate between competing models that make similar predictions about behavior. For example, brain measures can be related to cognitive model parameters fitted to individual participant data, measures of brain dynamics can be related to measures of model dynamics, model parameters can be constrained by neural measures, model parameters can be used in statistical analyses of neural data, or neural and behavioral data can be analyzed jointly within a hierarchical modeling framework.
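
The first linking strategy mentioned above, relating brain measures to model parameters fitted per participant, can be illustrated with a minimal sketch. Everything here is simulated and the variable names (drift_rate, neural_beta) are hypothetical, standing in for a real fitting pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants = 50  # hypothetical sample size

# Pretend these drift rates were fitted to each participant's behavior
# by an evidence-accumulation model (the fitting step is omitted here).
drift_rate = rng.normal(1.0, 0.3, n_participants)

# Simulated per-participant neural measure (e.g., an fMRI beta) that
# partly reflects the same latent process, plus measurement noise.
neural_beta = 0.8 * drift_rate + rng.normal(0.0, 0.2, n_participants)

# The linking step: correlate fitted model parameters with the neural
# measure across participants.
r = np.corrcoef(drift_rate, neural_beta)[0, 1]
print(f"brain-behavior correlation across participants: r = {r:.2f}")
```

In practice the fitted parameters would come from models estimated on real choice and response-time data, and the linking analysis is often done hierarchically rather than as a simple correlation, but the logic is the same: latent model states serve as the bridge between behavior and brain.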
