Algorithms for Seeing
VisionLab

The overarching goal of the Vision Sciences Lab is to understand how the mind and brain construct perceptual representations, how the format of those representations impacts visual cognition (e.g., recognition, comparison, search, tracking, attention, memory), and how perceptual representations interface with higher-level cognition (e.g., judgment, decision-making, and reasoning).

To this end, ongoing projects in the lab leverage advances in deep learning and computer vision, aiming to understand how humans & machines encode visual information at an algorithmic level, and how different formats of representation impact visual perception and cognition. In practice, we import algorithmic and technical insights from machine vision to build models of human vision, and we apply theories of human vision and the "experimental scalpel" of human vision science to probe the inner workings of deep neural networks and to build more robust and human-like machine vision systems. Ultimately, we hope to contribute to the virtuous cycle between the fields of human vision science, cognitive neuroscience, and machine vision.


My early research focused on characterizing and understanding limits on our ability to attend to, keep track of, and remember visual information — our visual cognitive capacities. In many cases, deeper understanding of these limits seemed to demand a deeper understanding of visual representation formats, but these ideas were not easily testable, because our field had not yet developed scalable, performant models of visual encoding beyond relatively early visual processing stages.

However, since 2012, we have seen a veritable explosion in the availability of highly performant vision models from the fields of deep learning, machine vision, and artificial intelligence (or is that one field?). On a quarterly basis, new models with new algorithms and new abilities are released, each presenting intriguing hypotheses for the nature of visual representation in humans, and opportunities for a deeper understanding of visual cognition in both humans and machines.

Thus, ongoing work in the lab focuses primarily on this intersection between human and machine vision. See the current research projects below for details about this work.

CURRENT RESEARCH PROJECTS

Self-supervised Learning of Visual Representations

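One widely used family of self-supervised methods is contrastive learning, which learns visual representations without labels by pulling together the embeddings of two augmented views of the same image while pushing apart embeddings of different images. As a flavor of the general technique, here is a minimal sketch of a SimCLR-style NT-Xent loss in PyTorch; it illustrates one common self-supervised objective, not the lab's actual training code.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1, z2: [N, D] embeddings of two augmented views of the same N images.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)          # [2N, D]
        sim = z @ z.t() / temperature           # pairwise cosine similarities
        sim.fill_diagonal_(float('-inf'))       # exclude self-similarity
        n = z1.shape[0]
        # The positive for view i is the other augmentation of the same image.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Usage: embed two augmentations of the same batch with one encoder, e.g.
    # loss = nt_xent_loss(encoder(augment(x)), encoder(augment(x)))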

Effects of Task + Architecture on Brain/Deepnet Alignment

  • What can 5.17 billion regression fits tell us about artificial models of the human visual system? Conwell, Prince, Alvarez, & Konkle. preprint
  • Neural regression, representational similarity, model zoology & neural taskonomy at scale in rodent visual cortex. Conwell et al. preprint code
  • A signature of orientation invariance in human fMRI & (some) deep neural networks. Conwell, Alvarez, & Konkle. video
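At a high level, the neural regression approach referenced above maps deep network features to brain responses: fit a regularized linear regression from a model's features to measured responses, then ask how well it predicts held-out data. Here is a minimal sketch of that idea using ridge regression on random stand-in data; the papers above use far larger feature spaces and more careful cross-validation than this toy example.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 512))  # stand-in: deepnet features per image
    voxels = rng.normal(size=(1000, 100))    # stand-in: brain responses per image

    X_train, X_test, y_train, y_test = train_test_split(
        features, voxels, test_size=0.2, random_state=0)

    # Fit one regularized linear map from model features to all voxels at once.
    model = RidgeCV(alphas=np.logspace(-2, 5, 8)).fit(X_train, y_train)
    pred = model.predict(X_test)

    # Score each voxel by correlating predicted and held-out responses.
    scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1]
              for v in range(y_test.shape[1])]
    print(f"mean held-out prediction r = {np.mean(scores):.3f}")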

Intuitive Physics Without Intuition or Physics

  • Shared representations of stability in humans, supervised, & unsupervised neural networks. Conwell, Doshi, & Alvarez. preprint
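One simple way to ask whether a network "represents" physical stability is to freeze a pretrained encoder and fit a linear probe that classifies scenes as stable vs. unstable from its features. The sketch below illustrates that general approach with an ImageNet-pretrained ResNet-50 and random stand-in data; it is not the exact analysis from the paper.

    import torch
    import torchvision.models as models
    from sklearn.linear_model import LogisticRegression

    encoder = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    encoder.fc = torch.nn.Identity()  # expose the 2048-d penultimate features
    encoder.eval()

    @torch.no_grad()
    def embed(images):                # images: [N, 3, 224, 224]
        return encoder(images).numpy()

    # Stand-ins for rendered scenes labeled stable (1) vs. unstable (0).
    X_train, X_test = torch.randn(64, 3, 224, 224), torch.randn(16, 3, 224, 224)
    y_train, y_test = torch.randint(0, 2, (64,)), torch.randint(0, 2, (16,))

    probe = LogisticRegression(max_iter=1000).fit(embed(X_train), y_train.numpy())
    print("probe accuracy:", probe.score(embed(X_test), y_test.numpy()))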

Algorithmic Insights via Subgraph Visualization

  • VISCNN: A tool for Visualizing Interpretable Subgraphs in CNNs. Hamblin & Alvarez. video preprint
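The general idea behind subgraph analysis is to identify the small set of units that actually carry a computation of interest. As a generic illustration (not the VISCNN method itself), the sketch below scores each channel in one convolutional layer by how much ablating it changes a target class score, then keeps the top-k channels as a candidate subgraph.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
    x = torch.randn(1, 3, 224, 224)   # stand-in image (use a real photo)
    target_class = 207                # an arbitrary ImageNet class index

    layer = model.layer4[1].conv2     # any convolutional layer of interest
    with torch.no_grad():
        base = model(x)[0, target_class].item()

    scores = []
    for c in range(layer.out_channels):
        def ablate(module, inputs, output, c=c):
            output = output.clone()
            output[:, c] = 0.0        # zero out one channel's activation map
            return output
        handle = layer.register_forward_hook(ablate)
        with torch.no_grad():
            scores.append(base - model(x)[0, target_class].item())
        handle.remove()

    # Channels whose ablation hurts the class score most form a candidate subgraph.
    topk = sorted(range(layer.out_channels), key=lambda c: scores[c], reverse=True)[:10]
    print("candidate subgraph channels:", topk)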

KEY FINDINGS IN HUMAN VISUAL COGNITION


RECENT PUBLICATIONS & PREPRINTS

RECENT CONFERENCE PAPERS/ABSTRACTS

PAPERS


DEMO: Speed Limit on Attentional Tracking

Overview:

In many working memory and attention tasks, we observe a tradeoff between "quantity and quality." Here you can experience that tradeoff for yourself: the more objects you track, the slower the maximum speed at which you can track them (which we propose reflects a tradeoff between the number of attended items and their spatial resolution).

What you will see:

You will see 8 black circles moving on a gray background.

To do.

First, you will find the fastest speed at which you can track a single target. The numbers 1-14 correspond to different speeds. Just click on a number to try tracking an item at that speed. You will see 8 black circles on the screen. At the beginning, one circle will blink, and that's the one you should track. Then the items move for several seconds, and finally they stop. The target will turn red so you can check your accuracy. If you get it right, try a faster speed. Keep going until you find the maximum speed at which you can keep track of the target.

Important note.

Make sure to keep your eyes on the central "+" sign, and "mentally track" the target in your peripheral vision. We are testing how fast you can track things with your attention (rather than with your eyes).
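If you're curious about the structure of a trial, here is a minimal sketch of the logic behind a multiple-object tracking display: several circles move at a fixed speed and bounce off the display edges, a subset are flashed as targets at the start, and the target identities are revealed at the end. All parameter values are illustrative, and this is not the lab's actual demo code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_objects, n_targets = 8, 4
    speed = 6.0                              # pixels per frame: the key variable
    width, height, n_frames = 800, 600, 300  # ~5 seconds at 60 frames per second

    pos = rng.uniform([0, 0], [width, height], size=(n_objects, 2))
    angle = rng.uniform(0, 2 * np.pi, n_objects)
    vel = speed * np.stack([np.cos(angle), np.sin(angle)], axis=1)
    targets = rng.choice(n_objects, n_targets, replace=False)  # flashed at start

    for _ in range(n_frames):
        pos += vel
        for axis, limit in ((0, width), (1, height)):
            out = (pos[:, axis] < 0) | (pos[:, axis] > limit)
            vel[out, axis] *= -1             # bounce off the display edges
        pos[:, 0] = pos[:, 0].clip(0, width)
        pos[:, 1] = pos[:, 1].clip(0, height)

    print("the targets were:", sorted(targets.tolist()))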

Track 1 Target

Find the fastest speed at which you can track 1 target (keeping your eyes on the "+").

click to show video (numbers correspond to speed):

Track 4 Targets

After you determine the fastest speed at which you can track 1 target, try to keep track of 4 targets at that speed. This time, 4 items will blink at the beginning, and you want to try your best to keep track of ALL 4 of them.

click to show video (numbers correspond to speed):

To notice.

You were probably able to keep track of 1 item quite fast, even without moving your eyes. However, when you divided your attention and tried to keep track of 4 items at that same speed, you probably experienced the items "scattering" immediately. If you tried to hang on to all 4 items, you likely lost them all. You might think this is because you could never keep track of four things at once, but if you try slower speeds, you will likely find one at which you can keep track of all four items perfectly.

Super-trackers.

The effect is strongest if you actually reached your limit for 1 target. Some people can easily track 1 target at the maximum speed tested here (we're limited by the video resolution for this demo, but in the lab we can test much faster speeds). Those individuals would likely be able to hang onto many (possibly all) of the 4 targets, but might find it "more effortful" to do so. Nobody we've tested in the lab can track multiple objects at the same maximum speed at which they can track 1 target, even with extensive cognitive training.

RECENT COURSES

ABOUT ME

George Alvarez

I'm George A. Alvarez, Professor of Psychology at Harvard University, and co-director of the Vision Sciences Laboratory. You can follow the links above to learn more about the lab and our research.

Here I thought I would briefly share a little bit of personal information. I was born in Honolulu, Hawaii, and raised in Watsonville, California (go Wildcats!), where I attended public schools through high school. I'm a first-generation college student, and was fortunate to attend Princeton University (go Tigers!) as an undergraduate, Harvard University for graduate school, and MIT for my postdoctoral work. If for some reason you would like to learn more about my path to professorship, you can read this American Psychological Society writeup.

In my earlier days my hobbies included sports (baseball, basketball, American football), movies, and studying film-making. These days my time is divided between running the VisionLab, teaching, and raising 2 kiddos, so my hobbies have gravitated towards typical dad stuff. I'm told I make a killer grilled cheese sandwich.

I'm also chair of Harvard Psychology's Diversity, Inclusion, and Belonging Committee, where we are working to increase representation and well-being for all members of the Psychology community.

contact

George Alvarez

alvarez@wjh.harvard.edu | CV | google scholar | @grez72

William James Hall, Room 760

33 Kirkland St

Cambridge, MA

Opportunities to Join the Harvard VisionLab

Whether you are interested in human perception & cognition, or machine vision, deep learning, and artificial intelligence — or the intersection between these fields — you're invited to apply to work in the VisionLab. We invite applications at all levels, including undergraduate students, graduate students, and post-docs.

The VisionLab is a joint lab run by me (George Alvarez) and Professor Talia Konkle. We are located on the 7th floor of William James Hall, in the Department of Psychology at Harvard University, where we share an integrated lab space. We endeavor to make the lab an inclusive and fun place to work and socialize, and to support our students in pursuing their own ideas and interests (of course, keeping things close enough to home for us to provide support!).

Undergraduates.

Undergraduate RAs assist current lab members (graduate students, postdocs, PIs), and have the opportunity to learn about cognitive psychology, cognitive neuroscience, and deep learning, and how to conduct experiments in these fields to gain deeper insights into how human and machine visual systems work. Depending on your level of involvement, you may have opportunities to co-author manuscripts and/or to present lab projects at international conferences. Advanced students (e.g., senior honors thesis students) have the opportunity to develop their own research ideas and run experiments of their own design. Opportunities are available for students at Harvard as well as other institutions, both during the academic year and over the summer. If you are interested, please e-mail me directly (alvarez@wjh.harvard.edu) with the subject heading "interested in an RA position" and let me know whether you are interested in volunteering, working for course credit, or a paid position (paid positions aren't always available, as they depend on external funding).

Graduate students.

Relevant research experience in human vision, machine vision, or both is a plus, but if you are interested in these topics, please give us a chance to consider your application regardless of your direct experience in these areas. My goal as an advisor is to train students to do this kind of work. If you are interested in joining the lab as a PhD student, send me an e-mail (alvarez@wjh.harvard.edu) with:
  • A brief (1-2 sentence) introduction to who you are, and why you think the VisionLab is a good match for you.
  • A few sentences describing specific research topics you are particularly interested in, and why they interest you (just a few words on a couple of topics will suffice).
  • A description of your prior research experience (who you worked with, what your role was, what skills you gained).
  • A copy of your CV would be helpful if you have one, but if you don't have one already, don't delay your e-mail just to create one.

Postdocs.

We're looking for candidates with expertise in human vision and/or deep learning and machine vision. Competitive candidates will have a strong publication record; solid programming, analytical, and experimental design skills; and a focus on theory from a cognitive, neural, and/or deep learning perspective. If you are interested in joining the lab as a postdoc, send me an e-mail (alvarez@wjh.harvard.edu) with:
  • A brief (1-2 sentence) introduction to who you are and why you think the VisionLab is a good match for you.
  • A description of your prior research experience (who you worked with, what ideas you developed, what skills you gained), and what topics you are interested in at the intersection of human and machine vision.
  • A copy of your CV and one or two first-authored papers exemplifying your work.