Malcolm Udeozor, Winner of the 2021 NIMH Three-Minute Talks Competition
Transcript
MALCOLM UDEOZOR: Hello everyone. My name is Malcolm Udeozor, and welcome to my talk about decoding objects via motor replay.
So, imagine you're at the beach and you're taking in this scene of your friends playing with a ball. While they're tossing the ball around and you're following along, it seems like everything is happening at once. But actually, your experience of this scene is very segmented, and you can think of it as a jigsaw puzzle where you need to use your eyes in order to capture each piece. So then the question becomes, how do we put these pieces of the jigsaw puzzle back together during recall?
Well, quite a bit of evidence suggests that our eye movements are very important in this process, meaning we actually tend to move our eyes in a similar pattern when recalling a scene as when we first saw it. But what about non-visual domains, like our hands?
Well, since our hands, our eyes, and other voluntary parts of our body are governed by the same part of our brain, we predict that motor movements in general guide our recall of perceptual experiences.
So, to test this phenomenon, we'll be looking at the tactile domain, or the hand domain. We will be using a 64-channel electromyography board, or EMG pad, to record motor neuron activity while individuals explore objects and later recall them. They'll explore a total of nine nonsense objects, meaning objects that are very hard to identify. Importantly, they'll also be blocked from seeing their hands by an opaque shield. This will ensure that they're relying on their tactile experience.
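To make the recording setup concrete, here is a minimal Python sketch of how one exploration trial's 64-channel EMG signal might be represented and reduced to a per-channel activation pattern. The sampling rate, filter settings, and simulated data are illustrative assumptions, not details from the talk.

```python
# A minimal sketch (not the actual pipeline) of one exploration trial's
# 64-channel EMG recording. Sampling rate, filter bands, and the simulated
# data are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt

N_CHANNELS = 64        # channels on the EMG pad
FS = 2000              # assumed sampling rate in Hz
TRIAL_SECONDS = 10     # assumed length of one object-exploration trial

# Placeholder for a real recording: here we just simulate noise.
raw = np.random.randn(N_CHANNELS, FS * TRIAL_SECONDS)

# Band-pass to a typical surface-EMG band (assumed 20-450 Hz),
# then rectify and low-pass to get a smooth activation envelope per channel.
b, a = butter(4, [20, 450], btype="bandpass", fs=FS)
filtered = filtfilt(b, a, raw, axis=1)

b_env, a_env = butter(4, 5, btype="lowpass", fs=FS)   # assumed 5 Hz envelope
envelope = filtfilt(b_env, a_env, np.abs(filtered), axis=1)

# One compact "pattern" per trial: the mean envelope on each channel,
# giving a 64-dimensional vector we can compare across trials.
pattern = envelope.mean(axis=1)
print(pattern.shape)   # (64,)
```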
First, the participants will be given an object and told to explore it with their hands and, at the same time, remember it by a pseudo-name: Ash, for example. They'll also have to generate some physical descriptors for the object. For example, Ash is soft.
Next, to see if motor replay is guiding the participants' recall, we'll have them perform the following tasks. The first is a descriptor recall task, where they'll be told to remember the object and then pick which of the descriptors does not match that object. For example, Ash was not metallic. This allows us to ensure that the participants are actually thinking of the correct object. Then they'll do a simple free recall task to get a more natural feel for their behavior.
I want to stress that at no point are we asking the participants to explicitly move their hands. Instead, we're just going to ask them to do the tasks at hand. Because we ask these participants to explore the objects only with their hands, we expect that recalling a given object will elicit very specific hand movements and therefore a very specific EMG pattern for each object.
Let's imagine we're only looking at channels 1, 2, and 64 on our EMG pad while the participant recalls Ash. We'd expect to see a very specific response, and a very similar response when the participant recalls Ash again.
However, we'd expect to see a very different response while the participant recalls a different object, Tammy for example. This would mean that the hand movements the participants are making are not at all random but are actually reflecting their memory of the specific object.
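Here is a minimal sketch, again with simulated data, of the comparison this prediction implies: correlating channel-wise EMG patterns within the same object (Ash recalled twice) versus between different objects (Ash versus Tammy). The variable names and numbers are hypothetical; a real analysis would use the recorded recall-trial patterns.

```python
# A minimal sketch, under assumed names and simulated data, of the prediction
# described above: EMG patterns should be more similar when the same object
# ("Ash") is recalled twice than when different objects ("Ash" vs. "Tammy")
# are recalled.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 64

# Hypothetical 64-channel recall patterns (one vector per recall trial).
ash_recall_1 = rng.normal(size=N_CHANNELS)
ash_recall_2 = ash_recall_1 + 0.3 * rng.normal(size=N_CHANNELS)  # similar
tammy_recall = rng.normal(size=N_CHANNELS)                        # different

def pattern_similarity(x, y):
    """Pearson correlation between two channel-wise EMG patterns."""
    return np.corrcoef(x, y)[0, 1]

within = pattern_similarity(ash_recall_1, ash_recall_2)
between = pattern_similarity(ash_recall_1, tammy_recall)

# Object-specific motor replay predicts within-object similarity
# reliably exceeding between-object similarity.
print(f"within-object r = {within:.2f}, between-object r = {between:.2f}")
```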
If these expected results hold, that would suggest that spontaneous hand movements are actually an inherent part of our memory, not just a way of gesticulating or communicating with our hands. It's more than that, and this would be very cool because it could allow us to examine dementia patients, Alzheimer's patients, and stroke patients just by looking at their motor activity, and easily gauge their memory strength.
Thank you so much for listening.