Neural Network Models of Cognition

PSYCH 209: Neural Network Models of Cognition: Principles and Applications

Winter, 2020-2021


Course Description

Neural Network models of cognition and development and the neural basis of these processes, including contemporary deep learning models. Students learn about fundamental computational principles and classical as well as contemporary applications and carry out exercises in the first six weeks, then undertake projects during the last four weeks of the quarter. Recommended: computer programming ability, familiarity with differential equations, linear algebra, and probability theory, and one or more courses in cognition, cognitive development or cognitive/systems neuroscience.

Terms: Win | Units: 4 | Grading: Letter or Credit/No Credit

Tue, Thu 10:30 AM - 11:50 AM on Zoom

Instructor: Jay McClelland, jlmcc@stanford.edu
Teaching Assistant: Effie Li, liyuxuan@stanford.edu

Office Hours:

Effie: Friday, 10:00-11:30, or by appointment (join on Zoom, pwd: 209209).
Jay: Monday, 4:30-6:00, or by appointment (join on Zoom, pwd: 816183).
For Jay's Monday hours, book a half-hour slot here.

Course Overview

The goals of the course are:

  1. To familiarize students with:
    • the principles that govern learning and processing in neural networks
    • the implications and applications of neural networks for human cognition and its neural basis.
  2. To provide students with an entry point for exploring neural network models and applying them to understanding human cognition, including
    • software tools and programming practices
    • approaches to testing and analyzing deep neural networks for cognitive and neural modeling
    • experience formulating, implementing, and reporting a project applying neural networks to some aspect of cognition or its neural basis

This course will examine a set of central ideas in the theory of neural networks and their applications to cognition and cognitive neuroscience.  As a general rule, each lecture will introduce one central idea or set of related ideas and an application.  The applications are drawn from human cognitive science, systems and cognitive neuroscience, and deep learning applications that address human cognitive and perceptual abilities.  Homework will stress basic principles, understanding how they relate to the behavior of neural networks, and appreciating the relevance of the ideas for cognitive modeling.  Students will use software written in Python and packages such as PyTorch and TensorFlow, and will need to learn at least some scripting for these systems so that they can use them to complete the assignments and the class project.  Prior experience with neural networks is not assumed, but mathematical, computer programming, and neural network background will be helpful for understanding the material and for carrying out an interesting project.
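
To give a concrete (and purely illustrative) sense of the kind of scripting involved, the sketch below defines a tiny feed-forward network in PyTorch and trains it by gradient descent on made-up data; the network size, data, and settings here are arbitrary and are not part of any assignment.

    import torch
    import torch.nn as nn

    # A tiny two-layer network: 4 input units, 8 hidden units, 1 output unit.
    model = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid(), nn.Linear(8, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    # Made-up training patterns: 16 random input-target pairs.
    inputs, targets = torch.randn(16, 4), torch.randn(16, 1)

    for epoch in range(200):
        optimizer.zero_grad()                     # clear old gradients
        loss = loss_fn(model(inputs), targets)    # forward pass and error
        loss.backward()                           # back-propagate the error
        optimizer.step()                          # gradient descent step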

Among the themes that will run throughout the course are a consideration of neural networks as models that mediate the divide between the statement of a computational problem and its biological implementation; the role of simplification in creating models that shed light on observed phenomena and in allowing for analysis and insight; and the biological plausibility and computational sufficiency of neural network computations.  We will also stress that neural network models are part of an ongoing exploration in the microstructure of cognition, with many challenges and opportunities ahead.

Readings and Responses to Prompts, Homework Assignments, and Final Project

An average of about 20 pages of reading is assigned for each session.  Except where otherwise noted, the readings listed below are required.  For each class session, students will be asked to prepare a brief statement (up to 250 words) commenting on the assigned reading, based on a prompt we will post in advance, as preparation for in-class discussion.  These statements are due at 9 pm on the evening before each class meeting.  Try to submit on time, but late responses are better than none.  Preparation statements and contributions to class discussion will be the basis for 20% of the course grade.

There will be two homework assignments and a final project; the final project will include a project proposal, an interim implementation report, a brief in-class presentation, and a final project report.  The main homeworks will require performing computing exercises with neural networks and then answering questions that probe technical and conceptual understanding of the networks as well as aspects of their application.  All assignments are due at 11:59 pm on the date listed in bold in the syllabus and in the Course Summary.  For the first homework, there are also preliminary steps you need to take, as listed in the syllabus.  Each homework assignment will be posted through a hyperlink attached to the syllabus entry for its due date.  We expect each main assignment to require about 10 hours of work beyond the readings it builds on.  Homeworks should be submitted in PDF format through the Canvas assignments system.  Each homework will be the basis for 20% of the course grade.

The final project is expected to be an independent project of your own devising, addressing a topic related to cognition or neuroscience.  The project proposal should be about 1 page long and should be formulated after discussion with Jay or Effie.  An interim implementation report is a required element of the project.  The final project report should describe the background and rationale for your project, the details of what you did and why, your results, and a discussion.  Group projects are possible after discussion with Jay or Effie.  For group projects, each student must provide a separate written proposal, interim implementation report, and final report.  All members of a group should participate in the project presentation.  The final report should be about 3,000 words plus figures, tables, and references.  More details will be provided.  The project will count for 40% of the overall grade in the class (proposal 5%; interim implementation report 5%; presentation 10%; final report 20%).

Computer Usage

The homeworks will require computer use and will be done in Google Colab, tied to your Stanford or other Google Drive filespace.  This may also be sufficient for the class project, but other options are possible, including using your own computer or a Stanford server.
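
As one illustration of the Colab setup (the folder name below is just an example, not a required location), a notebook can be connected to your Google Drive so that homework files persist across sessions:

    # Run inside a Colab notebook to make your Drive visible to the notebook.
    from google.colab import drive
    drive.mount('/content/drive')

    # Files saved under this path remain in your Drive after the session ends.
    work_dir = '/content/drive/MyDrive/psych209'   # example folder name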

Class Lecture Recordings

All class sessions will be recorded.  In cases where assignment-critical material is presented in slides, the slides will also be made available, linked to the class session title.

The PDP Handbook

One resource that we will rely on is the PDP Handbook, by McClelland and Rumelhart, which presents models, programs, and exercises; it was originally published in the late 1980s and has subsequently been updated into its current on-line form.  The particular software described in the current edition of the handbook is out of date, but the descriptions of models and ideas are still useful, and several sections are assigned below.  The link given takes you to the front page of the handbook; use the navigation on the side to find the assigned chapters and sections.

Course Schedule

Introduction

Jan 12: Past and future of neural network models for cognition and cognitive neuroscience

Readings: Rogers & McClelland (2014). Parallel distributed processing at 25:
(Read Sections 1-3, and your choice of at least one of Sections 4, 5, and 6)
[Optional: LeCun et al. (2015).  Deep learning]

I. Using gradient descent to get knowledge into the connections of a neural network

Jan 14:  Learning in a one-layer network and introduction to distributed representations

Applications: Credit assignment in contingency learning and the Past Tense Model
Reading:  Chapter 4 through Sections 4.1 and 4.2 of the PDP Handbook
Optional reading: Seidenberg & Plaut (2014), Past Tense Debate.

Jan 19: Multi-layer, feed-forward networks: gradient propagation

Reading: Chapter 5 through Section 5.1 of the PDP Handbook
Also read and start in on Homework 1, due Jan 24.  You should try Exercise 1 prior to Jan 21.

Optional Reading on how the brain might do back propagation:

Richards, B.A. & Lillicrap, T. P. (2019) Dendritic solutions to the credit assignment problem.  Current Opinion in Neurobiology, 54:28-36.

Jan 21:  Deep learning methods: initialization, optimization, regularization, and cross-validation

Ruder, S. An overview of gradient descent optimization algorithms.

[Background: Karpathy, Hacker’s guide to neural networks]

Jan 24:  HW1 (PDF) on Backpropagation is Due

Jan 26:  Developmental implications of learning in multi-layer networks

Application: Learned knowledge-dependent representations
Reading: McClelland & Rogers (2003), PDP approach to semantic cognition

Jan 28: Analytic understanding of dynamics of learning

Saxe et al. (2013), Learning category structure in deep networks
Saxe et al. (2019), A mathematical theory of semantic development in deep neural networks

Feb 2:  Convolutional neural networks

Application: Neural responses in monkey cortex
Reading: Yamins et al. (2014), Hierarchical models in visual cortex

II. Beyond Feed-Forward Architectures and Algorithms

Feb 4:  How a population of neurons can perform a global perceptual inference

Application: Context effects in perception
McClelland, J. L. et al. (2014). Interactive activation and mutual constraint satisfaction in perception and cognition.  Read Sections 1-3, plus one additional section (see prompt).

Feb 7:  HW2 (PDF) on Analysis of Semantic Cognition Network Due

Feb 9: Learning in hierarchical generative networks

Application: Unsupervised learning in vision
Reading: Stoianov & Zorzi (2012).  Emergence of visual number sense, and supplement
Optional: Hinton, Osindero, & Teh (2006). A fast learning algorithm for deep belief networks.

Feb 11: Recurrent Neural Networks

Application: Prediction and representation in language processing
Reading: Chapter 7 through Section 7.1 of the PDP Handbook
Karpathy, Unreasonable effectiveness of RNNs

[Optional Readings on LSTMs: Colah's blog: Understanding LSTM networks
Zaremba et al, Recurrent neural network regularization, ICLR 2015   
PyTorch On-line Tutorial: https://pytorch.org/tutorials/ ]

Feb 16: Complementary Learning Systems

Readings: Kumaran et al. (2016). What learning systems do intelligent agents need?

Feb 16:  Project proposals due. Final Project Guidance. Folder with Example Projects

Feb 18: Toward an Integrated Understanding System

McClelland, J. L., Hill, F., Rudolph, M., Baldridge, J., & Schuetze, H. (2020).  Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models. Proceedings of the National Academy of Sciences, 117(42), 25966-25974. DOI: 10.1073/pnas.1910416117. [PDF]

Uszkoreit, J. (2017). Transformer: A novel neural network architecture for language understanding.  Google AI Blog.

III. Reinforcement learning and planning

Feb 23: Reinforcement Learning: Deep Q-Learning and Policy Gradient Methods

Karpathy, Deep Reinforcement Learning
Mnih et al. (2015). Human level control through deep reinforcement learning
[Optional: Sutton & Barto (Basics of RL: Sections 1.1 - 1.3, 3.1 - 3.8, 6.1 - 6.2, 6.5; Neural and Behavioral Support for the 'Reward Prediction Error' hypothesis: Sections 15.1 - 15.6)]

Feb 25:  Model-based RL and the RL framework for human behavior [Session led by Effie Li]

Required:

Sutton & Barto (Sections 14.7 & 15.13, chapter summaries) and

One of: 1) Collins & Cockburn (2020), Beyond dichotomies in reinforcement learning; 2) Miller et al. (2018), Realigning models of habitual and goal-directed decision-making; or 3) Juechems & Summerfield (2019), Where does value come from?

[Further optional readings on deep RL as potential models of human cognition: Hamrick (2019). Analogues of mental simulation and imagination in deep learning, Botvinick et al. (2019). Reinforcement learning, fast and slow]

IV. Challenges and Frontiers

Mar 2: External Memory Based Architectures

Readings: Graves, Wayne, et al. (2016).  Hybrid computing with dynamic external memory
Santoro et al. (2016). One-shot learning with memory-augmented neural networks

March 2:  Interim implementation report due

Mar 4:  Limitations of neural network models

Lake et al. (2017). Building machines that learn and think like people. [See prompt for which parts to focus on]

Mar 9:  Metacognition, Explanation, and Education

Readings TBD

Mar 11:  Open Questions, Open Challenges, and Future Directions

No new reading: See Discussion Prompt

V. Project presentations by students

Mar 16: Session 1

Mar 18: Session 2

Mar 19 (Friday): Project papers due 

Course Summary:
