I've spent the last six weeks reading and responding to a tremendous number of studies, news stories, and Web sites relating to technology's effect on dual task performance. That being done, I need to get ready to write a composition to demonstrate the expertise I've gained. In this post, I will begin to organize and synthesize my ideas, so that when I begin writing my final essay, it will read more logically and fluently. Think of this post as a very rough, very incomplete framework into which I will insert examples, illustrations, and references when I convert it into my final paper.
From all the research and writing I've done, I have learned many things. Yet despite all my new knowledge, I have even more questions. Moreover, I have a new-found appreciation for the difference between people who possess expert-level knowledge and people who don't.
One example of my new knowledge relates to my definition of multitasking. Originally I defined it very broadly to include all sorts of activities--perceiving one's environment, thinking about those perceptions, and performing actions based on those thoughts--being performed simultaneously. Based on that crude definition, I didn't think multitasking was possible. Moreover, I had a vague idea that "task switching" meant people perform tasks sequentially, even though they appear to perform them concurrently. My thinking was based on my personal experience and one or two news stories I had read, and it showed few nuances. Now I understand that people can do certain things simultaneously. Our minds are capable of handling multiple, concurrent sensory inputs. Our bodies are capable of performing multiple motor tasks at the same time, even while our senses are gathering more information and our minds are making decisions. However, there are certain things that I believe people cannot do; specifically, our brains cannot make multiple decisions simultaneously.
Although I believe we cannot make multiple decisions concurrently, I think there are ways of getting around that limitation. One way is to automatize behaviors so that perceiving stimuli and performing actions can proceed without the need for a decision. This can be accomplished through intense training. Another way is to "chunk" multiple decisions together so the brain treats them as one decision. Combining automatization with chunking would be better still. Impossible? No. It just takes a lot of practice. Experts do it in a variety of fields. One thing that most people can relate to is reading aloud, a perfect example of automaticity and chunking. When reading aloud, we must think about the meaning of the words within the context of the sentences, the meaning of the sentences within the context of the surrounding paragraphs, and the meaning of the paragraphs within the context of the entire work. On top of all that, our brains must also tell our mouths how to pronounce the words, which words to stress, and so on. And that's still not all. Many components of this process occur simultaneously and automatically--but it wasn't always that way. When we first learn to read, we look at tiny parts of letters--arcs and lines and dots; eventually, we see whole letters. Then we associate those letters with sounds, and we learn to blend those sounds together. And so on. With practice, we chunk many of those thought processes together and eventually automatize them. Reading--especially reading aloud--is an incredibly complicated act, but we can do it. That gives me hope that people can be trained to do other, equally complicated tasks with the right combination of automatization and chunking.
Another thing I've learned is that technology has the potential to be a great ally in our efforts to measure and improve our cognitive function. In fact, all of the studies I read used technology to measure subjects' cognitive abilities, to train them, or both. Using technology to measure one's cognitive abilities is handy because it is much more sensitive and objective than human observers are. Its efficacy is uncontroversial. Using technology as a cognitive training tool, on the other hand, is not so clear cut. There is much research suggesting it is very effective at increasing one's working memory, task-switching efficiency, automaticity, and other components of multitasking ability. But there is also research that suggests otherwise. One of the biggest challenges is skill transfer. Subjects invariably get better at performing the tasks on which they are trained. What's less clear is whether they can use those skills in different contexts. It is also unclear whether those skills translate to real-life success, even if we find consistent evidence that they can be transferred to different contexts. (I really need to think about these issues a lot more.)
As is often the case, the more I learn, the more questions pop into my head. One thing I wonder is whether people can automatize and chunk enough skills to meet the demands of an unpredictable environment. Another is whether the dual processing model of working memory asserts that the visuospatial sketchpad and the phonological loop each have their own decision-making centers, whether all decisions get funneled through the central executive, or whether some kind of hybrid process is at work. I also wonder whether learning efficiency can be increased by training one to switch tasks more rapidly, or whether that kind of training might harm one's concentration and analytical abilities. The questions listed here are only a small portion of the things I would like to know. So many questions, so little time.
The last thing I mentioned in my introduction is that I have a new appreciation for the knowledge experts possess. I mention it because, now that I've done quite a bit of research, I can see how little I know about multitasking compared to all there is to know. Although it sounds like a small area of study, understanding it requires knowledge of a wide array of interconnected disciplines. Moreover, I can see how much effort and time it takes to develop extensive expertise in any topic, so my hat is off to anyone who has done so.
Thursday, April 16, 2009
Wednesday, April 15, 2009
The Jury's Still Out: Gaming's Effect on Working Memory Capacity and Multitasking Ability
In yesterday's post, I discussed a study by Daphne Bavelier and C. Shawn Green that suggested playing action video games improves one's working memory and multitasking ability. Today, I will discuss a study that contradicts Green and Bavelier's work.
In the October 2008 issue of the journal Acta Psychologica, researchers Walter R. Boot, Arthur F. Kramer, Daniel J. Simons, Monica Fabiani, and Gabriele Gratton from the University of Illinois at Urbana-Champaign published a study indicating that video games--action, strategy, and puzzle--do not produce significant gains in a variety of measures of cognitive performance. They did find that expert gamers performed significantly better on most tasks, but they suggest the difference might be due to preexisting cognitive advantages unrelated to video game experience. The major way that Boot et al.'s results contradict those of Green and Bavelier is that amateurs did not improve their performance on the tests of cognitive performance (more than non-video-game-playing control subjects) despite hours and hours of video game practice.
One task that Boot et al.'s study had in common with Green and Bavelier's study was the "enumeration" (a.k.a. counting) task. Green and Bavelier's results showed a statistically significant difference between the performance of subjects with video game practice and those without. In Boot et al.'s study, subjects who were expert video game players performed only slightly better than video game amateurs on the counting task, and with a p-value of 0.17, the difference was not statistically significant (Boot et al., 2008, p. 392). More surprisingly, Boot et al.'s results showed no advantage on the counting task for amateurs after 21 hours of video game practice (relative to amateurs who didn't practice).
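A quick way to build intuition for what that p-value means is a permutation test. The sketch below is mine, not Boot et al.'s actual analysis, and the scores are invented for illustration: it shuffles the pooled scores repeatedly and asks how often chance alone produces a group difference at least as large as the one observed.

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly shuffles the pooled scores into two random groups and
    counts how often the shuffled difference is at least as large as
    the observed one. The returned fraction is the p-value.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        shuffled = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if shuffled >= observed:
            hits += 1
    return hits / n_perm

# Invented enumeration scores (items counted correctly); the real
# data appear in Boot et al. (2008) and are not reproduced here.
experts = [9, 8, 10, 9, 8, 9, 10, 8]
amateurs = [8, 9, 8, 8, 9, 8, 9, 9]
p = permutation_p_value(experts, amateurs)
```

Read this way, Boot et al.'s p-value of 0.17 says a gap as large as the one they observed would arise by chance roughly one time in six, which is why it falls short of the conventional 0.05 threshold for significance.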
The other task I discussed in yesterday's post was multiple object tracking. Green and Bavelier found that practice playing action video games had a significant effect on subjects' ability to track multiple objects. Boot et al., however, found mixed results: Experts significantly outperformed amateurs, but practice playing video games did not cause amateurs to improve any more than subjects who didn't practice playing video games at all.
It is important to note that comparing Green and Bavelier's study to Boot et al.'s study is not "apples to apples." Boot et al.'s experimental design differed significantly from Green and Bavelier's. For example, the subjects in Green and Bavelier's study were all male. In Boot et al.'s study, the expert gamers were all male, while the vast majority (75 out of 92) of the amateurs were female. Another important difference was that Green and Bavelier only tested their subjects' enumeration and object tracking abilities twice--a pre-test and a post-test. Boot et al. tested their subjects three times--a pre-test, a progress test, and a post-test. A third important difference is that Green and Bavelier measured only the quantity of objects that could be tracked accurately, while Boot et al. measured the speed and quantity of objects that subjects could track accurately. Regardless of experimental design differences, I am still surprised that the results weren't more similar.
This is the second study I've come across showing that playing video games doesn't cause people to improve their cognitive abilities. On the other hand, I've read nearly ten studies that suggest the opposite. At this point, I don't feel I have enough expertise to explain why different studies produce different results. Does it make a big difference whether subjects are male or female? Maybe, but I'm not familiar with enough research to say. Does it make a big difference when subjects take a cognitive test three times rather than twice? Maybe, but again, I'm not familiar enough with research on the effects of repeated testing to say. In any case, I am really baffled by the fact that amateurs who practiced playing video games didn't show more improvement on the cognitive tests than amateurs who had no video game practice. My gut tells me video game practice should have made a difference. Then again, my gut also tells me to overeat, so it's not always trustworthy!
Reference
Boot, W. R., Kramer, A. F., Simons, D. J., Fabiani, M., & Gratton, G. (2008). The effects of video game playing on attention, memory, and executive control. Acta Psychologica, 129(3), 387-398. doi:10.1016/j.actpsy.2008.09.005
Tuesday, April 14, 2009
Gaming's Effect on Working Memory Capacity and Multitasking Ability
In the August 2006 issue of the journal Cognition, researchers Daphne Bavelier and C. Shawn Green published a study suggesting that playing video games is correlated with an increase in one's working memory capacity and multitasking ability. For the study, Green and Bavelier conducted a series of five experiments: two showed that video game players (VGPs) outperform non-video game players (NVGPs) on measures of rapid counting and multiple object tracking; one showed that the improved performance was not related to after-image effects or an increased subitizing range; and (most importantly) two showed that NVGPs' ability to count rapidly and track multiple objects improves after they practice action video games.
The main measurement obtained in the first three experiments was each subject's ability to rapidly count arrays of squares as they flashed on a screen. As subjects' experience with action video games increased, their ability to accurately recall the number of squares flashed onscreen also increased. Green and Bavelier attribute this finding to an improvement in the subjects' working memory (caused by playing action video games). One reason they believe working memory improved is that VGPs' response times grew longer as the number of squares increased. It sounds counter-intuitive, but it makes sense: NVGPs answered faster, but with greater error, because they couldn't remember what had been flashed on the screen. VGPs took longer--but answered more accurately--because their memories retained the flashed images with greater fidelity, rendering the squares countable.
The main measurement obtained in the fourth and fifth experiments was each subject's capacity to keep track of multiple objects (up to 7 specific circles out of 16) bouncing randomly across a screen. This test "requires subjects to dynamically allocate attention to multiple objects and sustain that attention for several seconds" (Green & Bavelier, 2006, p. 235). Green and Bavelier found that as each subject's experience with action video games increased, the number of objects he could accurately track also increased--i.e., technology caused their multitasking ability to improve. Green and Bavelier did not conclude whether subjects processed the locations of the objects simultaneously or whether they rapidly switched their attention from one object to the next cyclically. Regardless, the observable performance improved.
Although working memory is not synonymous with dual-task performance, it is a likely candidate to facilitate it. Indeed, the results of this study are consistent with the hypothesis that working memory helps people multitask--action video games improved subjects' working memory capacity, enabling them to keep track of multiple items (simultaneously or cyclically).
The specific results of Green and Bavelier's study indicate that VGPs are able to accurately count two more items flashed onscreen than NVGPs. The VGPs' advantage disappears at a ceiling around 10-12 items. Green and Bavelier's results also show that VGPs are able to track up to 6 items with 6% greater accuracy than NVGPs. When tracking seven or more items, VGPs perform no better than NVGPs. Those differences seem small, and I wonder whether they would have a noticeable impact on one's success in real life. In the game of life, I suppose the difference between "winning" and "losing" is only one point, so maybe small advantages are important. Can those differences be enlarged, and can the ceilings be raised, with more training?
It looks like I have more research to do.
Reference:
Green, C. S., & Bavelier, D. (2006). Enumeration versus multiple object tracking: The case of action video game players. Cognition, 101(1), 217-245. doi:10.1016/j.cognition.2005.10.004
Thursday, April 9, 2009
Multitasking's Dark Side
In the last six weeks, I've written a lot about whether multitasking is possible, who can do it, the conditions under which it can be accomplished, the degree to which it can be improved, and so on, but I haven't explicitly written much about the negative effects of trying to multitask. There are obvious consequences, such as the increased probability that some information will not be processed and mistakes--even deadly ones--will result. If I wrote a post about those, it wouldn't be very informative. However, there are other negative consequences--especially in how the brain functions--that aren't so obvious. That will be the topic of this post.
In the January 2005 issue of the Harvard Business Review, psychiatrist and author Edward Hallowell wrote an article titled "Overloaded Circuits: Why Smart People Underperform." In it he described a condition that many harried business executives suffer, which he calls "Attention Deficit Trait," or ADT, an environmentally caused cousin of attention deficit hyperactivity disorder. He argues that "as our minds fill with noise -- feckless synaptic events signifying nothing -- the brain gradually loses its capacity to attend fully and thoroughly to anything" (Hallowell, 2005, p. 56). In other words, people develop a case of ADT.
Hallowell explains that the causes of ADT are rooted in the way the brain has evolved to deal with danger. In non-threatening situations, the frontal lobe of the brain functions as it should, enabling people to hold several things in working memory simultaneously, to focus for extended periods of time, and to make creative, thoughtful, accurate, and efficient decisions. However, when a person is deluged with tasks that need to be completed quasi-simultaneously (or he'll lose his job), fear kicks in. Hallowell explains: "The deep regions [of the brain] interpret the messages of overload they receive from the frontal lobes in the same way they interpret everything: primitively. They furiously fire signals of fear, anxiety, impatience, irritability, anger, or panic. These alarm signals shanghai the attention of the frontal lobes, forcing them to forfeit much of their power" (p. 58). Hallowell doesn't explicitly say it, but readers can infer that if the frontal lobe is continually subjected to an overwhelmingly demanding environment, over time it will lose power (simply out of habit) while the lower, primitive regions will gain power.
I can accept Hallowell's argument for people who worry that bad things will happen if they fail to complete their tasks. However, I wonder whether it can be applied to joyful multitasking. The element of fear seems to be important in activating the amygdala and other lower regions of the brain. Is the condition of being deluged with tasks sufficient to trigger the fear response? Or must negative consequences be associated with failure to complete the tasks? I think those questions are testable, and I wonder if any studies have been done. I suppose I need to do some more research!
Wednesday, April 8, 2009
Shoot First and MultitAsk Questions Later
According to an article published on PhysOrg.com, researchers Heather Kleider, Dominic Parrott, and Tricia King in the psychology department at Georgia State University conducted a study showing that police officers who score lower on measurements of working memory and who experience higher heart rates and more perspiration in stressful situations are more likely to shoot unarmed subjects. The study is to be published in an upcoming issue of Applied Cognitive Psychology.
In the study, subjects were police officers employed in urban districts. Their first task was to complete a working memory test. Then they viewed a video of a stressful situation (a police officer being killed in action) while researchers measured their heart and perspiration rates as well as other factors indicative of stress. Afterward, the officers participated in a simulation in which they viewed pictures of armed and unarmed assailants, making rapid decisions whether to shoot or not. If the assailant was armed, officers were instructed to press a "shoot" button; if unarmed, then they were to press a "don't shoot" button. Kleider et al. found that officers who exhibited poor working memory capacity and physiological indicators of stress tended to erroneously press the "shoot" button more often.
Kleider explained that "working memory is an overarching cognitive mechanism that indicates the ability to multitask" (PhysOrg, 2009, para. 4). Kleider believes that the stress of controlling impulses (such as the impulse to pull the trigger of a gun) monopolizes a large portion of one's working memory. Therefore, she and her colleagues reason, officers with low working memory capacity don't have enough cognitive resources left over to simultaneously analyze a situation before pulling the trigger.
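Kleider's resource argument can be caricatured in a few lines of code. To be clear, this is my own toy model, not the researchers' model: the capacity units, the `ANALYSIS_DEMAND` constant, and the 50/50 guessing rule are all invented, purely to show how a fixed working memory budget, partly consumed by stress, could produce more erroneous "shoot" responses.

```python
import random

def simulate_shoot_errors(capacity, stress_load, trials=1000, seed=1):
    """Toy model: stress consumes working memory, and whatever is
    left is used to analyze the scene. Every simulated trial shows
    an unarmed suspect, so the correct response is "don't shoot."

    If the remaining capacity covers the demand of a full analysis,
    the officer responds correctly; otherwise the response is a
    coin-flip guess, half of which are erroneous "shoot" presses.
    Returns the fraction of erroneous trials.
    """
    ANALYSIS_DEMAND = 3  # arbitrary working memory units needed to assess the scene
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        if capacity - stress_load >= ANALYSIS_DEMAND:
            continue  # full analysis -> correct "don't shoot"
        if rng.random() < 0.5:  # guess wrong half the time
            errors += 1
    return errors / trials

low_capacity = simulate_shoot_errors(capacity=4, stress_load=2)
high_capacity = simulate_shoot_errors(capacity=7, stress_load=2)
# In this toy model the low-capacity officer must guess on every
# trial, while the high-capacity officer never has to.
```

Crude as it is, the model mirrors the study's pattern: the stress load is identical, but only the officer whose budget is exhausted produces false "shoot" responses.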
Kleider et al.'s findings seem reasonable. I would have predicted that officers' ability to rapidly process multiple aspects of a given situation correlates with the accuracy of their decisions. One thing I wonder, however, is the degree to which training can improve the performance of the officers who shot unarmed assailants in the experiment. I suspect that it can, at least a little. I think there are at least two kinds of training that might help: First, the officers could improve their working memory performance (as I described in this post). Second, they can automatize their ability to recognize guns (as I described in this post).
Speaking of automaticity, I have a friend who is a police officer. One thing I've heard him repeat several times is that, in stressful situations that require rapid action, officers revert to their training. Based on the research I've read during the last six weeks, I believe he was saying that police training is designed to produce automaticity--being so thoroughly trained that the need to select a response is bypassed. Although the situations with which police officers must deal are highly complex and unpredictable (making it nearly impossible to automatize), I think it is reasonable to think that specific aspects--such as recognizing various guns--have the potential to be automatized.
In fact, I am reminded of a story about the training of gunners in World War II. One problem they faced was deciding whether far-off planes were enemies or not. Through tachistoscopic training, the gunners learned to accurately identify tiny silhouettes of planes in as little as one-tenth of a second. After training, gunners could make as many as 2,000 correct identifications consecutively based on photos that would look like amorphous specks to untrained observers. Since the situations that WWII gunners faced closely parallel the situations police officers must face (in terms of the stress, the threat to be identified, and the decisions to be rendered), I am inclined to be optimistic that police officers can be similarly trained.
In the study, subjects were police officers employed in urban districts. Their first task was to complete a working memory test. Then they viewed a video of a stressful situation (a police officer being killed in action) while researchers measured their heart and perspiration rates as well as other factors indicative of stress. Afterward, the officers participated in a simulation in which they viewed pictures of armed and unarmed assailants, making rapid decisions whether to shoot or not. If the assailant was armed, officers were instructed to press a "shoot" button; if unarmed, then they were to press a "don't shoot" button. Kleider et al. found that officers who exhibited poor working memory capacity and physiological indicators of stress tended to erroneously press the "shoot" button more often.
Kleider explained that "working memory is an overarching cognitive mechanism that indicates the ability to multitask" (PhysOrg, 2009, para. 4). Kleider believes that the stress of controlling impulses (such as the impulse to pull the trigger of a gun) monopolizes a large portion of one's working memory. Therefore, she and her colleagues reason, officers with low working memory capacity don't have enough cognitive resources left over to analyze a situation before pulling the trigger of their firearm.
Kleider et al.'s findings seem reasonable. I would have predicted that officers' ability to rapidly process multiple aspects of a given situation correlates with the accuracy of their decisions. One thing I wonder, however, is the degree to which training can improve the performance of the officers who shot unarmed assailants in the experiment. I suspect that it can, at least a little. I think there are at least two kinds of training that might help: First, the officers could improve their working memory performance (as I described in this post). Second, they could automatize their ability to recognize guns (as I described in this post).
Tuesday, April 7, 2009
Test Your Multitasking Ability at Hal Pashler's DualTask.org
Hal Pashler, a professor of psychology at the University of California, San Diego, created a Web site in 2005 called DualTask.org. At this site, visitors can challenge their dual task performance abilities with some Java- and Shockwave-based demonstration applications Dr. Pashler posts there. If you decide to try them out, be forewarned that the applications load slowly and may need to be reloaded to function properly. Also be forewarned that Dr. Pashler is not a software designer, so the design of the applications is not stellar.
One of the demonstrations is titled Visual Bottleneck I. Like some of the experiments I've described in earlier posts, it tests each user's ability to perceive and respond to multiple stimuli, including audible pitches and alphabetic characters. Unlike the real experiments I've reviewed recently, all responses are motor-based; there are no verbal options. When I initially tried the demonstration, I performed (by my judgment) poorly. After a few practice attempts I improved slightly--but not much. I wonder how much better I could do if I practiced it a few thousand times, and whether my performance on those tasks would transfer to other tasks.
As I played Visual Bottleneck I, I noticed that I was able to perceive each stimulus with no difficulty. Mentally, I knew what I saw or heard and understood what I needed to do. However, when I tried to match my perceptions and understandings with appropriate motor responses, I struggled. As soon as I had responded to one stimulus, a new one would be presented. It seemed to me that my execution of the motor responses was making it difficult for me to queue my next response. I'm not sure whether my experience is a manifestation of the response selection bottleneck or not. Maybe it will become clearer to me if I try the demonstration again and monitor my thought processes a little more closely.
There is another concept I've come across in my research called "attentional blink," and I wonder if my difficulties were related to that rather than the response selection bottleneck. I also wonder how attentional blink relates to dual task performance.
The due date for our final paper is fast approaching, and though I feel like I've learned a great deal, I still don't feel like I know enough to adequately synthesize the research, predict where it's heading, and make an argument for any particular position. I'm hoping two more weeks of reading and reflecting will do the trick. Any advice for rapidly becoming an expert?
Thursday, April 2, 2009
Response Selection Multitasking Bottleneck: Can it be bypassed? Part 3
This post is the final installment in my discussion of a study by researchers François Maquestiaux, Maude Laguë-Beauvais, Eric Ruthruff, and Louis Bherer titled "Bypassing the central bottleneck after single-task practice in the psychological refractory period paradigm: Evidence for task automatization and greedy resource recruitment," published in the October 2008 issue of the journal Memory & Cognition. Having explained in my first post the central bottleneck model of response selection, and having reviewed in my second post the experimental design refinements that researchers made to more accurately investigate whether the bottleneck can be bypassed, I am now ready to share the results of Maquestiaux et al.'s study.
Maquestiaux et al. performed two experiments. The first one was designed to detect the bypass of the response selection bottleneck. They set up this experiment to control for all of the known confounding factors relevant to detecting whether the bottleneck has been bypassed or not (some of which I discussed in yesterday's post). Among other important factors, they ensured that subjects received sufficient training with the task they wanted subjects to automatize (over 4,000 practice repetitions). They also ensured that the experiment included verbal and manual response components, and that the first task was sufficiently difficult that it would produce a measurable delay in the subjects' processing of the second task.
In the first experiment, subjects were instructed to verbally identify whether an audible tone was high-pitched or low-pitched. They practiced discriminating the two tones several thousand times until the response became automatic. After the training for the tone task, the subjects were instructed to identify one of four visually displayed letters by manually pressing corresponding buttons on a keyboard. At the same time, they were instructed to continue discriminating between the high and low tones verbally. The letters were always displayed first and the tones played second, so subjects were expected to respond to the letter task first and the tone task second. Maquestiaux et al. manipulated the delay between the letter and the tone to see its effect on how long it took subjects to respond to the tone. If bypassing were possible, Maquestiaux et al. expected that the timing of the tone would have no effect on the length of time it took subjects to respond to it.
The results of the first experiment provide strong evidence that it is possible to bypass the response selection bottleneck. Eleven out of 20 subjects demonstrated the ability to respond to the tone while the letter response was still being processed, and six subjects even responded to the tone before the letter--even though the letter stimulus was always presented first. Additionally, when Maquestiaux et al. increased the difficulty of the letter task to lengthen its processing time by a specific number of milliseconds (they had measured the increased response time), it did not increase the time it took subjects to respond to the tone, contrary to the bottleneck model's predictions. From these results, Maquestiaux et al. concluded that their subjects had bypassed the response selection bottleneck.
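The contrasting predictions of the two models can be captured in a back-of-the-envelope calculation. In the psychological refractory period paradigm, the bottleneck model says Task 2's response-selection stage must wait for Task 1's to finish, so Task 2 gets slower as the gap between the two stimuli shrinks; if the bottleneck is bypassed, Task 2 never waits and its response time is flat across gaps. The stage durations below are made-up numbers of my own, chosen only to show the shape of each prediction.

```python
def rt2(soa, bypass, pre=100, central1=200, central2=150, post=100):
    """Predicted Task 2 reaction time (ms) at a given stimulus onset asynchrony.

    All times run on Task 1's clock: its stimulus appears at t=0,
    Task 2's stimulus appears at t=soa. Stage durations are illustrative."""
    t1_central_ends = pre + central1      # when Task 1 frees the bottleneck
    t2_central_ready = soa + pre          # when Task 2's perception finishes
    if bypass:
        start = t2_central_ready          # automatized task: no waiting
    else:
        start = max(t2_central_ready, t1_central_ends)  # queue at the bottleneck
    # Reaction time is measured from Task 2's own stimulus onset.
    return start + central2 + post - soa
```

With these toy numbers, the bottleneck model predicts RT2 of 550 ms at a 0 ms gap falling to 350 ms at a 400 ms gap, while the bypass model predicts a constant 350 ms--which is why a tone response time unaffected by timing, as Maquestiaux et al. observed, counts as evidence of bypassing.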
For the second experiment, Maquestiaux et al. retested the five fastest subjects to find out whether Ruthruff, Van Selst, Johnston, and Remington's "greedy resource recruitment" hypothesis was accurate. If it were, then when the tone task was presented first, even though it had been automatized, it would monopolize the brain's response selection resources and cause a measurable delay in the subjects' responses to the letter task, which was to be presented second. Sure enough, that is exactly what they found.
There were a number of things about this study that struck me, but the one that impacted me most was Maquestiaux et al.'s careful explanation of the design of their experiment. Every aspect of it was based on the findings of previous studies. Because of the care they exercised in justifying their experimental design, I was able to take their results and conclusions seriously.
This study has furnished fairly reliable evidence for the trend observed so far in this blog: multitasking is possible, and technology can help train people to do it better. This study also helped me to better define multitasking and task switching. At this point, I think multitasking involves the automatic performance of one or more tasks concurrently with a task that requires response selection--i.e. only one response can be selected at a time. If tasks are not automatized, then one must switch between them.
Questions: Do tasks performed concurrently "count" as multitasking even if the responses underlying them are performed consecutively? Is it possible for people to learn to select two (or more) responses simultaneously? How complicated can a task become before it is un-automatizable? How might "complicated" be defined? How does this research relate to the dual-processing model of working memory? With all these questions, it looks like I've got my work cut out for me.