The timing of perception and the timing of neural signals
A gun is used to start sprinters -- instead of a flash -- because you can react faster to a bang than to a flash. This behavioral fact has been known since the 1880s, and in recent decades has been corroborated by physiology: the cells in your auditory cortex can change their firing rate more quickly in response to a bang than your visual cortex cells can in response to a flash. The story seems like it should be wrapped up here. Yet when we move outside the realm of motor reactions and into the realm of perception (what you report that you saw and heard), the plot thickens. When it comes to awareness, your brain goes to a good deal of trouble to synchronize incoming signals that were simultaneous in the outside world. So a firing gun (at least, when it is within 30 meters) will seem to you to have banged and flashed at the same time.
Try it: snap your fingers in front of you. Although your auditory and visual systems process information about the snap at different speeds, the sight of your fingers and the sound of the snap appear simultaneous. Your brain employs some fancy editing tricks to make simultaneous events in the world seem simultaneous to you, even when the different sensory modalities processing the information would individually swear otherwise.
As we look deeper into timing, we face the question of volition. Your decision to act – and then the action itself – seems simultaneous with the sight and sound of the snap. But weren’t these volitional and motor signals generated some time earlier, so that the impulses could travel down your spinal cord and peripheral nerves to move your fingers? Why does everything seem to happen at one moment in time? The answer is not that we lack the temporal resolution to detect such small time windows – we have it. So how does the brain determine which properties and events belong together in time? This is a tricky problem, because different features of a stimulus are processed in different areas – areas with different architectures and different processing speeds. Yet somehow perception retains very exact temporal information about events in the world. This is what we term the temporal binding problem.
My lab examines the temporal binding problem from multiple angles. Our research combines psychophysical, behavioral, and computational approaches to address the mapping between the activity of neurons and the timing of awareness.
By developing and exploring several illusions, we have arrived at some surprising results. For example, in experiments on the flash-lag illusion (wherein a flash aligned with a moving object appears to be misaligned), we ruled out two previously popular explanations: that the visual system extrapolates into the future, and that different types of stimuli are consciously perceived at different speeds. We proposed instead that visual awareness is neither predictive nor on-line but postdictive: the percept attributed to the time of an event is a function of what happens in the ~80 msec following the event.
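The postdictive account can be made concrete with a toy calculation. The sketch below (hypothetical illustration, not the lab's actual model) attributes to the moving object, at the moment of the flash, the average of its real positions over the ~80 msec following the flash; the function name and parameters are invented for this example.

```python
# Toy illustration of a postdictive account of the flash-lag effect.
# Assumption (for illustration only): the position attributed to the
# moving object at the moment of the flash is the mean of its actual
# positions over the ~80 msec that FOLLOW the flash -- postdiction,
# not extrapolation into the future.

WINDOW = 0.080  # seconds of post-event integration

def perceived_offset(speed_deg_per_s: float, n_samples: int = 1000) -> float:
    """Offset (deg) between the flash and the moving object's percept."""
    dt = WINDOW / n_samples
    # Sample the object's true positions at midpoints of each time slice.
    positions = [speed_deg_per_s * (i + 0.5) * dt for i in range(n_samples)]
    return sum(positions) / n_samples  # averages to speed * WINDOW / 2

print(f"{perceived_offset(10.0):.3f} deg")  # a 10 deg/s target is displaced ~0.4 deg
```

Averaging a linearly moving position over the window yields a displacement of speed × WINDOW / 2, so the illusory offset grows with the object's speed, as the flash-lag effect does.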
With the psychophysical data in hand, we have marshaled physiological data from thalamus and cortex to argue that the system must contend with latency differences by collecting the slowest information before committing to a perception. This contrasts importantly with ‘latency difference’ models, which assume that neuronal latency differences will translate directly into perceptual latency differences. Although the latency difference model is commonly appealed to in the physiology literature, we have experimentally ruled it out from several angles. Our current model of motion integration and postdiction is consistent with a large body of experimental data, and converges quantitatively with psychophysical findings.
This sort of research sheds light on questions of neural computation – knowing when awareness happens will help us know which features of spike trains to attend to.
In the coming months and years we will continue deciphering the strange mapping between the timing of awareness and the timing of neural signals. Several interwoven questions are being pursued:
How plastic is the perception of time? I have recently discovered that the perception of time is distorted during slow-motion sequences in movies (such as the time-warped scenes in The Matrix). This appears to result from the brain’s constant attempt to use its models of Newtonian physics to predict where and when, say, a leaping person will land. If the visual stimulus is slowed mid-leap, the brain’s ‘when’ prediction becomes incorrect. However, the prediction can be rescued if the brain simply changes its estimate of how fast time is running. As predicted by this framework, I have demonstrated that duration judgments (say, of a flash) made during slow-motion sequences are dramatically distorted.
How plastic are temporal order judgments when your motor acts are involved? In another line of experiments, we have subjects play various forms of simple video games. Unbeknownst to the subjects, we inject variable delays between their movement of the computer mouse and its effect on the game. For example, a subject might move the mouse to the right, and 150 msec later the movement registers on screen. Our findings show two striking results: first, participants playing the game quickly come to feel as though there is less delay between their mouse movement and the sensory feedback; and second, when we suddenly remove the delay, it feels as though the effect on the screen happened just before they commanded it. This work addresses how the perceived timing of effects is modulated by expectations, and the extent to which such predictions are quickly modifiable. Details here: Stetson, Cui, Montague, Eagleman (2006). Motor-sensory recalibration leads to an illusory reversal of action and sensation. Neuron. 51(5): 651-9. We will soon combine our paradigms with electrode recordings from behaving non-human primates, guiding a search for correlates of perceived time.
Can time run in slow motion? Many people report that time appears to run in slow motion when they find themselves in an impending car accident -- for example, sliding toward a bad situation. Crudely speaking, are neural ‘snapshots’ clicking faster during a high-adrenaline situation? To bring this into the realm of scientific study, we have measured time perception during free fall: we strap palm-top computers to participants’ wrists and have them perform psychophysical experiments as they fall. By measuring their speed of information intake, we have concluded that participants do not obtain increased temporal resolution during the fall -- instead, because memories are laid down more richly during a frightening situation, the event seems to have taken longer in retrospect. Details can be found in Stetson, Fiesta, Eagleman (2007). Does time really slow down during a frightening event? PLoS One.
How are perceptions synchronized across modalities? Why do the sight and sound of a slamming car door suddenly appear unsynchronized if you view it from more than 30 meters away? This seems to occur because the system perceptually synchronizes signals that arrive less than 80 msec apart (beyond 30 meters, the arrival-time difference between light and sound exceeds this window). But little is known regarding timing conflicts across other modalities, e.g., vision and somatosensation. To build a proper understanding of cross-modal perception, we will employ virtual reality to examine disparities that can only be induced virtually – for example, a grasped object changes size in the fingertips, and only later changes size visually. This will shed more light on one of the main questions we are asking: is awareness forced to wait for the slowest modality before a unified percept is achieved?
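The 30 meter / 80 msec relationship is simple arithmetic. The back-of-envelope check below assumes a speed of sound of roughly 343 m/s (dry air at about 20 °C) and treats light arrival as effectively instantaneous at these distances; the function and constant names are ours for the example.

```python
# Back-of-envelope check of the 30 m / 80 ms figures. Assumes the
# speed of sound is ~343 m/s and that light arrives effectively
# instantaneously at everyday distances.

SPEED_OF_SOUND = 343.0   # m/s, assumed (dry air, ~20 C)
WINDOW = 0.080           # perceptual synchrony window, seconds

def audio_visual_lag(distance_m: float) -> float:
    """Seconds by which the sound trails the sight of a distant event."""
    return distance_m / SPEED_OF_SOUND

# Distance at which the sound's lag just fills the 80 ms window:
threshold_m = WINDOW * SPEED_OF_SOUND

print(f"lag at 30 m: {audio_visual_lag(30.0) * 1000:.0f} ms")   # ~87 ms
print(f"window exceeded beyond: {threshold_m:.1f} m")           # ~27.4 m
```

So at roughly 30 meters the sound falls about 80–90 msec behind the sight, which is where the door's bang and slam stop being perceptually fused.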
Is perception continuous, discrete, or triggered on a need-to-know basis? It has been suggested since the time of William James that perception may be composed of fast ‘snapshots’ of the world, in the same way that a movie is made of fast sequences of still pictures. Most investigators agree that if this is true, the size of that window is somewhere around 80 msec. Does this 80 msec window simply reflect the period of the cortical alpha rhythm? We are using EEG in conjunction with simple psychophysical tasks to answer this question. A related question is whether these 80 msec computations are made cyclically, or are instead generated only on a need-to-know basis. Several lines of evidence from psychology suggest the latter – that we are unaware of certain stimulus properties until we ask ourselves the question. Experiments with fMRI will be used to study how different computations are generated by the system in the face of different questions. If computations happen only when needed, that tells us when in the course of spike trains to look for the neural correlates of awareness.
What is the temporal spread of information at different stages of the visual system? My continued collaboration with physiologists takes two lines of attack: (1) after developing and characterizing psychophysical stimuli and measuring human performance, I ‘show’ these same stimuli to retinas laid out on microelectrode arrays. This reveals what information the retina tells the rest of the brain, and how it is arranged in time. (2) These same stimuli, shown to monkeys while recording from the visual cortex, will bring to light whether the observed spread of latencies is used for coding, or is instead an idiosyncrasy of retinal evolution that the cortex must work around.
Taken together, these experimental lines will provide us with a deeper understanding of time perception and its neural mechanisms, and hopefully guide us to the right neuronal populations – and the right moments in their spike trains – to seek correlates of awareness.