A subset of the data from these same participants has been previously reported in Morelli et al. Procedure. Participants completed a functional magnetic resonance imaging (fMRI) empathy task using naturalistic stimuli, specifically photos of individuals in happy, sad, anxious, and neutral situations. After exiting the MRI scanner, participants rated their empathic concern for the targets in the empathy task.
Empathy Task in MRI Scanner. Conditions. In the neutral condition, participants viewed blocks of photos of people performing everyday, non-emotional actions. For all other conditions, participants completed an empathy task crossing three emotions—happiness, sadness, and anxiety—with three types of instructions—watch, empathize, and memorize.
Each block consisted of a contextual sentence describing a situation followed by six photos depicting different individuals in that situation (Figure 1). Happy situations included events like being hired for one's dream job or being the first person in the family to graduate from college. Examples of sad situations were attending a loved one's funeral or being fired from a job.
Anxiety situations described events such as potentially not graduating due to a bad grade or being medically examined for a serious illness. Participants thus viewed naturalistic stimuli under three types of instructions—(A) watch, (B) empathize, and (C) memorize—combined with three different emotions, yielding nine different block types. Photo stimuli. For the neutral condition, the photo stimuli were adapted from Jackson et al.
For all other conditions, the photo sets were developed by the authors. Within each block, half of the targets were male and half female. An arrow indicated the target individual if a photo depicted several people. Images were equated across conditions on arousal, valence, luminance, and complexity, and sentences were equated on length. Images were selected from a larger pool in order to equate them on a number of features.
Blocks were equated across instruction type on arousal, luminance, complexity, and the number of letters in each contextual sentence preceding that block.
Subjective ratings of valence and arousal were made by 16 (8 male) undergraduate pilot judges. Raters judged the valence of each photo on a scale from 1 (very negative) to 7 (very positive), and arousal on a scale from 1 (very weak) to 7 (very strong). Luminance was measured using Adobe Photoshop CS. Complexity was determined using the size of each image in JPEG-compressed format (Calvo and Lang). In previous research, compressed image file sizes have been shown to be highly correlated with subjective measures of complexity (Donderi; Tuch et al.).
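The compression-based complexity measure described above is straightforward to compute. The sketch below illustrates the principle with the standard-library zlib compressor standing in for JPEG (the study used JPEG file sizes from Photoshop; zlib is an illustrative substitution): images with more unpredictable detail compress less well, so compressed size tracks visual complexity.

```python
# Sketch of a compression-based complexity measure. The study used
# JPEG file sizes; zlib is an illustrative stand-in, since the
# principle is the same: complex images resist compression.
import random
import zlib

def compressed_size(pixel_bytes):
    """Size in bytes after compression; larger => more complex image."""
    return len(zlib.compress(pixel_bytes, level=9))

# A flat gray image compresses far better than random noise:
flat = bytes([128]) * (128 * 128)
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(128 * 128))
assert compressed_size(flat) < compressed_size(noisy)
```

In the same spirit, the study's file-size measure lets complexity be equated across photo sets without hand-scoring each image.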
Task instructions For all conditions, participants were told photos depicted real events drawn from news stories, documentaries, and blogs. For the neutral condition, participants were simply asked to look at the photos for the whole time they were on the screen. For the watch condition, participants were instructed to respond to the photos naturally, as if they were at home and had come across the images in a magazine.
These instructions have previously been shown to induce empathic concern (Toi and Batson). For the memorize condition, participants were told to keep an 8-digit number in memory while looking at the images.
Task timing and display order. The neutral condition consisted of four blocks; each block displayed 16 neutral photos for 2 s each. For the empathy task, each emotion had a total of nine blocks, divided evenly among the three instruction types. For the watch blocks, the contextual sentence was displayed for 4 s, followed by 6 photos presented for 4 s each.
Participants chose between the correct number and a number that was identical except for one digit. For all conditions, blocks were separated by a rest period.
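The timing figures above can be cross-checked with simple arithmetic; the helper below is purely illustrative, but the durations it confirms are the ones the first-level model relies on later.

```python
# Arithmetic check of the block timings described above
# (values taken from the text; the helper is illustrative).
def block_duration(n_photos, seconds_per_photo):
    return n_photos * seconds_per_photo

# Neutral blocks: 16 photos at 2 s each -> 32 s of image presentation.
assert block_duration(16, 2) == 32
# Empathy-task blocks: 6 photos at 4 s each -> the invariant 24 s of
# image presentation shared by watch, empathize, and memorize blocks.
assert block_duration(6, 4) == 24
```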
The first run consisted exclusively of three watch blocks for each emotion, as this instruction type was meant to capture unprimed, spontaneous reactions. Three empathize blocks and three memorize blocks were included for each emotion, intermixed across the two runs. Lastly, the third run included the four neutral blocks. In SPM8, functional images were realigned within and between runs to correct for residual head motion and coregistered to the matched-bandwidth structural scan using a 6-parameter rigid-body transformation.
The coregistered structural scan was then normalized into Montreal Neurological Institute (MNI) standard stereotactic space using the scalped ICBM template, and the resulting parameters were applied to all functional images. Finally, the normalized functional images were resliced into voxels of 3 mm³ and smoothed using an 8 mm full-width-at-half-maximum Gaussian kernel. All single-subject and group analyses were performed in SPM8.
First-level effects were estimated using the general linear model, employing a canonical hemodynamic response function convolved with the experimental design. Low-frequency noise was removed using a high-pass filter. Group analyses were conducted using random-effects models to enable population inferences (Nichols et al.). To keep all instruction types as well-constrained and equivalent as possible, empathize, watch, and memorize trials were modeled using only the 24 s of image presentation that was invariant across instruction types.
The remaining trial elements (the instruction prompts, the contextual sentences, and the 8-digit number presentation and memory test for memorize blocks) were modeled separately and were not included in the baseline condition. In addition, the neutral condition was modeled using only the 32 s of image presentation in each neutral block.
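The first-level model described above convolves a boxcar covering each image-presentation period with a canonical hemodynamic response function. A minimal sketch of building one such regressor follows; the double-gamma parameter values and timings are conventional defaults stated as assumptions, not the study's exact settings.

```python
# Minimal sketch of a GLM regressor: a boxcar for a 24 s
# image-presentation block convolved with a canonical double-gamma
# HRF (SPM-style default parameters assumed for illustration).
import math

def hrf(t, peak=6.0, under=16.0, ratio=6.0):
    """Canonical double-gamma haemodynamic response at time t (s)."""
    if t < 0:
        return 0.0
    g = lambda t, a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return g(t, peak) - g(t, under) / ratio

dt = 0.5                                   # sampling step (s)
hrf_kernel = [hrf(i * dt) for i in range(int(32 / dt))]

# Boxcar: one 24 s block of image presentation starting at t = 10 s.
n = int(120 / dt)
boxcar = [1.0 if 10 <= i * dt < 34 else 0.0 for i in range(n)]

# Discrete convolution, truncated to the run length.
regressor = [
    sum(boxcar[i - k] * hrf_kernel[k]
        for k in range(len(hrf_kernel)) if 0 <= i - k < n)
    for i in range(n)
]

# The predicted response is zero before block onset and peaks a few
# seconds after the block begins, reflecting haemodynamic lag.
peak_time = max(range(n), key=lambda i: regressor[i]) * dt
```

The resulting column, one per condition, enters the design matrix against which first-level effects are estimated.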
For visualization of results, group contrasts were overlaid on a surface representation of the MNI canonical brain using the SPM surfrend toolbox and NeuroLens. An overall mask for all cortical ROIs was submitted to Monte Carlo simulations to determine the appropriate uncorrected p-value threshold. Because subcortical regions tend to be substantially smaller, individual masks were created for SA and amygdala.
Monte Carlo simulations were likewise used to determine the uncorrected p-value threshold for these smaller regions. Post-Scanner Empathy Ratings. Immediately post-scan, participants rated their empathic reaction to each block in the empathy task. Participants viewed the original task again, but with shorter presentation times (1 s per image) and without the neutral condition.
Participants were told to remember how they felt when they first saw the images. For happy blocks, participants rated how happy they were for the targets on a scale from 1 (not at all) to 7 (very much).
For sad and anxiety blocks, participants rated how concerned they felt for the targets on a scale from 1 (not at all) to 7 (very much). Results. Post-Scanner Empathy Ratings. Due to technical difficulties, post-scan ratings for three participants were not collected. However, the interaction between emotion type and instruction type was not significant. Empathize and watch blocks did not differ significantly on reported empathy.
Self-reported empathy did not differ significantly for happiness and sadness. Self-reported empathy showed a main effect of instruction type with participants reporting less empathy during memorize instructions than during empathize or watch instructions.
Turning to the neural data, we looked for effects in the eight ROIs for each of the nine conditions compared to the neutral condition. Table 1 summarizes the regions that produced significant activations for each of the nine cells of our design and reveals a number of interesting patterns. Somewhat surprisingly, the amygdala showed the same pattern.
In contrast, dACC was reliably present during memorize instructions, but only appeared in two of the six remaining non-memorize blocks. Finally, SA activations were present during all nine trial types, and AI activations were present during eight of the nine trial types.
Patterns of neural activity for each instruction type compared to viewing neutral photos within anatomically-defined regions of interest previously associated with empathy, emotion, and mentalizing.
Common activations during empathy for happiness, sadness, and anxiety Our first goal was to identify core neural regions that were activated across different kinds of empathic experiences.
To determine whether any neural regions were commonly recruited when trying to empathize with each of the three different emotions, we used a conjunction analysis (Nichols et al.). This method yields only clusters that were significantly active in each of the three contributing contrasts. First, a contrast image was created for each emotion type that compared empathize instructions to the neutral condition (i.e., empathize > neutral).
Then, a conjunction analysis of all three contrast images was used to identify neural regions that were commonly recruited when empathizing with the three emotions. Neural regions were commonly activated during happiness, sadness, and anxiety for empathize compared to neutral, watch compared to neutral, and memorize compared to neutral. Similarly, the conjunction analysis across emotion types when watching others' emotional experiences (i.e., watch > neutral) yielded commonly activated regions.
In contrast, when participants viewed the same kinds of emotional scenes but were focused on memorizing an 8-digit number, mentalizing-related regions were not commonly activated across emotion types. Taken together, these results suggest that regions related to mentalizing and emotion may be critical for generating empathic responses.
However, cognitive load may disrupt activity in these core regions and reduce empathic responding. Neural similarities and differences between empathizing and watching. To determine whether reacting naturally (i.e., the watch condition) recruits the same neural regions as deliberately trying to empathize, we conducted a second conjunction analysis.
For these analyses, we collapsed all empathize blocks into one condition and all watch blocks into one condition, regardless of emotion. We then created one contrast image comparing empathize instructions to the neutral condition (i.e., empathize > neutral) and another comparing watch instructions to the neutral condition (i.e., watch > neutral).
A conjunction analysis of these two contrast images was then used to identify neural regions that were commonly recruited when trying to empathize or simply watching. Neural regions were commonly activated during the empathize and watch conditions (collapsed across happiness, sadness, and anxiety) compared to neutral.
To identify differences between the empathize and watch instructions, we compared the empathize and watch conditions (Table 4). We did not find a large number of neural differences between the two instruction types, which is consistent with our finding that self-reported empathy was at similar levels for each instruction type. Thus, trying to empathize and watching naturally may have more neural similarities than differences. Some neural regions were more active for empathize compared to watch (collapsing across emotions), and others were more active for watch compared to empathize (collapsing across emotions).
Cognitive load effects. Next, we wanted to test more directly whether cognitive load (i.e., the memorize condition) disrupts empathic processing. Because we were primarily interested in the effect of cognitive load, the following analyses collapse all empathize blocks into one condition, all watch blocks into a second condition, and all memorize blocks into a third condition.
To identify which regions were less active under load than during active empathizing, we compared empathize blocks (all emotion types) to memorize blocks (all emotion types; see Table 5). Neural regions were less active under cognitive load compared to empathize (collapsed across emotions) and compared to watch (collapsed across emotions). In sum, putting people under cognitive load while they look at emotional stimuli may reduce activity in regions associated with social cognition and emotional arousal and increase activity in regions associated with attention and effort (Table 7).
Neural regions were more active under cognitive load compared to empathize (collapsed across emotions) and compared to watch (collapsed across emotions). A summary of cognitive load effects illustrates the relative increases and decreases in activation during empathize and watch compared to memorize (collapsed across emotions). Automaticity. Lastly, we examined which neural regions may be automatically engaged during empathy and remain active regardless of the attentional condition.
Similar to previous analyses, we collapsed all empathize blocks into one condition, all watch blocks into one condition, and all memorize blocks into one condition. In studies of chess expertise, experts reproduced briefly presented game positions far better than novices; but if the pieces were placed at random, the novices and experts were just the same.
It appears that experts perceive board positions in much larger "chunks" than novices. An expert sees the pieces in relational groups whereas the novice sees each piece individually. In terms of production systems the expert has acquired a whole set of productions in which patterns of pieces on the board specify the conditions for making particular moves, which allows information that matches previous experience to be grouped into a coherent whole.
Random patterns of pieces do not fit with previous experience and are no easier for the expert than for the novice. Gopher's experiments on training attentional strategies can be considered in terms of production rules which have aggregated into complex "macro operators". Since productions run off automatically, skill learning can be viewed as procedure learning.
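The production-system account above can be sketched as condition-action rules: a familiar pattern of pieces (the condition) directly specifies a move (the action). The board patterns and moves below are invented purely for illustration.

```python
# Toy condition-action sketch of the production-system account:
# a pattern of pieces triggers a move. Patterns/moves are invented.
productions = [
    (frozenset({"white Bc4", "white Qh5", "black Nc6"}), "Qxf7#"),
    (frozenset({"white Nf3", "black e5"}), "Nxe5"),
]

def match_move(board):
    """Fire the first production whose condition pattern is on the board."""
    for pattern, move in productions:
        if pattern <= board:          # condition: pattern subset of board
            return move
    return None                       # no stored pattern matches

# A familiar pattern fires automatically; a random board matches nothing,
# which is why random positions give the expert no advantage.
assert match_move({"white Bc4", "white Qh5", "black Nc6", "black a6"}) == "Qxf7#"
assert match_move({"white Ra3", "black Kh1"}) is None
```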
As more and more declarative knowledge becomes proceduralised, there is less and less demand on the conscious, strategic processing that is said to be attention demanding. Gopher also examines how people can learn to improve their attentional skills through training. One of the tasks used for this training, called Space Fortress, was designed to present the subject with a complex, dynamic environment within the confines of a well-specified computer game.
The players have to control the movements of a spaceship as if they were flying it, while firing missiles to try to destroy the fortress, and at the same time they must avoid being destroyed themselves. The rules of the game are quite complex and the main aim is to score points. When players first tried the game, their usual response was panic: they felt that the demands of the situation were simply too high. This sounds very like our feeling when we first attempt any complex skill, like driving a car.
After considerable practice the players began to work out a strategy and performance improved.
Without specific training, people would not necessarily work out or adopt an optimal strategy, but Gopher found that if subjects were led through a sequence of emphasis changes for subcomponents of the game, similar to the variable priority method used in POC studies, performance could be improved. Subjects were advised to concentrate on one subcomponent at a time, and respond to the other components only if they could do so without neglecting the component they were to concentrate on.
The game remained exactly the same, apart from the introduction of a reward element: the selected game component received more points, which gave subjects positive feedback on their success. Otherwise, only the allocation of attentional priorities was altered. Four groups of subjects were studied; the control group was given practice but no specific emphasis training. The results showed that the group who had received the double manipulation outperformed all the other groups, which did not differ from each other.
An interesting finding was that although special training finished after six sessions, performance continued to improve over the next four sessions, to the end of the experiment. This result suggests, as Gopher reports, that after six sessions the double manipulation group "had already internalised their specialised knowledge and gained sufficient control to continue to improve on their own". The application of this kind of training is demonstrated in another study reported by Gopher, in which Israeli airforce cadets were given training on a modification of the Space Fortress game.
Cadets often drop out because they have difficulty coping with the load of a flight task and with dividing and controlling attention. Game-trained cadets showed an advantage in subsequent flight performance, and this advantage was largest in the manoeuvres requiring integration of several elements.
After 18 months there were twice as many graduates in the experimental group as the control group. Gopher points out that the advantage of game training is not because it is similar to actual flying, because real flying is very much more demanding than the game, and the game is not very realistic. What the game does is train people in the kinds of attentional skills needed in complex situations.
Given direct experience with different attentional strategies, performance improves, and these skills transfer to new situations and different task demands. Long-term working memory and skill. Although productions are stored in long-term memory, they can be run off automatically without any demand on working memory. Ericsson and Kintsch have recently argued that the traditional view of the use of memory in skilled activity needs to include a long-term working memory.
They say that current models of memory [10, 11] cannot account for the massively increased demand for information required by skilled task performance. They outline a theory of long-term working memory (LT-WM) which is an extension of skilled memory theory. Figure 2 contrasts ordinary short-term working memory (ST-WM), which carries the increased demand for information during initial learning, with long-term working memory, which provides rapid access to relevant information in long-term memory during skilled performance.
The proposal is that skilled performance, say by chess players, requires rapid access to relevant information in long-term memory. A retrieval structure is a stable organisation made up of many retrieval cues. Load on ST-WM is reduced because, rather than all the retrieval cues having to be held there, only the node allowing access to the whole structure needs to be available in ST-WM. Indirect evidence for LT-WM was found in a series of experiments by Ericsson and Kintsch, in which a concurrent memory task produced virtually no interference with the working memory of experts.
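The retrieval-structure idea above can be made concrete with a toy data structure: ST-WM holds only a single access node, while the full set of cue-content pairings lives in long-term memory. The chess content below is invented for illustration.

```python
# Toy illustration of a retrieval structure: ST-WM holds one access
# node; the many retrieval cues live in long-term memory. The chess
# content is invented for illustration.
long_term_memory = {
    "game_42": {                    # one retrieval structure per game
        "e4": "king pawn opening",
        "Nf3": "knight develops",
        "Bb5": "Ruy Lopez bishop",
    },
}

st_wm = ["game_42"]                 # ST-WM load: one node, not N cues

def recall(cue):
    """Cued recall via the single access node held in ST-WM."""
    structure = long_term_memory[st_wm[0]]
    return structure.get(cue)

assert recall("Bb5") == "Ruy Lopez bishop"
assert len(st_wm) == 1              # load is independent of structure size
```

Because ST-WM load stays constant however large the structure grows, a concurrent memory task leaves expert performance largely untouched, as the experiments above found.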
Ericsson and Oliver and Ericsson and Staszewski studied the ability of an expert chess player to mentally represent a chess game without the presence of a chess board. Over 40 moves were presented, and the chess player's representation of the resulting game position was tested in a form of cued-recall task.
His responses were found to be fast and accurate, suggesting a very efficient memory representation despite the number of moves made, which far exceeded the capacity of ST-WM. The results suggest that the expert chess player uses this additional LT-WM to maintain and access chess positions.
The ability to perform tasks automatically, therefore, depends on a variety of factors, and as we become more expert, what we have learnt modifies the way tasks are controlled. In the following part of this chapter we try to understand better how practice acts. Does practice reduce the duration of component stages, or does it allow central operations to be carried out in parallel? Recent studies show that practice can dramatically reduce dual-task interference [15, 16, 17, 18].
Thus, the very large interference effects observed with novel tasks — msec might overestimate the amount of interference observed between pairs of highly practiced tasks in the real world. In many cases, however, it appears that practice merely reduces stage-durations, without allowing subjects to bypass the processing bottleneck [16, 18]. At the same time, some new evidence suggests that under some conditions practice can eliminate the processing bottleneck entirely [15, 18]. Further work is needed to better define the boundary conditions for these two outcomes.
Different authors have offered answers to this question. One account assumes a learning mechanism that produces a gradual transition from algorithmic processing to memory-based processing.
The key question that the author examines is whether practice merely reduced the duration of component stages of the central bottleneck or allowed subjects to carry out more central operations in parallel on two tasks.
Recent evidence suggests that the great bulk of the improvement with practice comes from reducing the duration of central stages, whereas relatively little of the improvement reflects being able to perform central operations on both tasks in parallel. The question of how practice reduces RTs has been intensively studied in a separate but closely related research area, that of skill learning.
Over the past two decades much interest has focussed on the so-called power law of learning. Logan's instance theory attributes this speed-up to a race between an algorithmic process and the retrieval of stored instances, so assessing the validity of this race model is relevant to the goal of providing a unified theory of information processing limitations. Rickard, however, reanalyzed Palmeri's data set; although the data showed the usual pattern of a decline in RT over time with a positive second derivative,
Rickard argued that the data more closely fit predictions from his component power law theory. According to this theory, performance during the course of practice reflects a mixture of instance-based and rule-based performance.
Palmeri, in a reply to Rickard, argued that more complex versions of the race model can fit the data, but the complexities needed to achieve such fits seem rather uninviting. They include an assumption that the algorithmic process can speed up with practice; avoiding this assumption was an important motivation for the Logan model.
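The race model at issue in this exchange can be illustrated with a small simulation. Each trial's response time is the minimum of an algorithmic completion time and the retrieval times of all stored instances; the exponential retrieval distribution and every parameter value below are simplifying assumptions for illustration, not the published model.

```python
# Illustrative simulation of an instance-based race model:
# RT = min(algorithm time, fastest of N instance retrievals).
# The exponential retrieval distribution and all parameters are
# simplifying assumptions for illustration only.
import random
import statistics

def race_trial(n_instances, rng, algorithm_ms=800.0, retrieval_mean=1000.0):
    times = [algorithm_ms]
    times += [rng.expovariate(1.0 / retrieval_mean) for _ in range(n_instances)]
    return min(times)

rng = random.Random(42)

def mean_rt(n_instances, trials=2000):
    return statistics.mean(race_trial(n_instances, rng) for _ in range(trials))

# As instances accumulate with practice, the expected minimum falls,
# so mean RT decreases without the algorithm itself speeding up.
novice, practiced = mean_rt(1), mean_rt(50)
assert practiced < novice
```

The point of contention above is whether real practice curves require only this race (instances accumulating) or also an algorithm that itself speeds up.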
The matter is presently unresolved, and it will be very interesting to see whether results from future skill-learning studies support or overturn the idea that multiple memory retrieval processes generally cannot, or at least do not, operate in parallel. In the following part of the chapter we try to understand what changes with practice. Gopher's review has demonstrated that people can exercise attentional control and improve with training, but it still seems that it is always "the subject" that is in control, rather than a well-specified cognitive mechanism.
Practice and Speed of Processing. Several studies have shown that practice produces gradual, continuous increases in processing speed [23, 24, 25, 26] that follow a power law [10, 27, 28, 29].
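The power law of practice mentioned above has a simple closed form: reaction time approaches an asymptote as a negative power of the number of practice trials, RT(N) = a + b·N^(−c). The parameter values in the sketch below are purely illustrative assumptions.

```python
# Sketch of the power law of practice: RT(N) = a + b * N**(-c),
# where N is the number of practice trials. Parameter values are
# purely illustrative assumptions.
def power_law_rt(n, asymptote=300.0, gain=700.0, rate=0.5):
    """Predicted reaction time (ms) after n practice trials."""
    return asymptote + gain * n ** (-rate)

# Improvement is steep early and increasingly shallow later:
early = power_law_rt(1) - power_law_rt(10)      # large gain
late = power_law_rt(100) - power_law_rt(1000)   # small gain
assert early > late
assert power_law_rt(10**6) - 300.0 < 1.0        # approaches the asymptote
```

The signature of this function, rapid early gains flattening toward an irreducible asymptote, is exactly the pattern the studies cited above report.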
MacLeod and Dunbar also examined this variable in their study. They continued to train subjects on the shape-naming task with daily trials per stimulus for 20 days.
Reaction times showed gradual, progressive improvement with practice. Speed of processing was also observed through the pattern of interference effects in the MacLeod and Dunbar study: the interference effect changed over the course of training on the shape-naming tasks. After 1 day of training, shapes produced no interference with color naming; after 5 days, however, shapes produced some interference, and after 20 days, there was a large effect.
That is, presenting a shape with a name that conflicted with its ink color produced strong interference with the color-naming response. The reverse pattern of results occurred for the shape-naming task. After 1 session of practice, conflicting ink color interfered with naming the shape, whereas after 20 sessions this no longer occurred. These data suggest that speed of processing and interference effects are continuous in nature and that they are closely related to practice.
Furthermore, they indicate that neither speed of processing nor interference effects alone can reliably identify processes as controlled or automatic. These observations raise several important questions, which MacLeod and Dunbar try to answer: What is the relationship between processes such as word reading, color naming, and shape naming, and how do their interactions result in the pattern of effects observed?
In particular, what kinds of mechanisms can account for continuous changes in both speed of processing and interference effects as a function of practice? Finally, and perhaps most important, how does attention relate to these phenomena? The purpose of their article is to provide a theoretical framework within which to address these questions.
Using the principles of parallel distributed processing (PDP), they describe a model of the Stroop effect in which both speed of processing and interference effects are related to a common underlying variable called strength of processing. The model provides a mechanism for three attributes of automaticity: first, it shows how strength varies continuously as a function of practice; second, it shows how the relative strength of two competing processes determines the pattern of interference effects observed; and third, it shows how the strength of a process determines the extent to which it is governed by attention.
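The strength-of-processing idea can be caricatured in a few lines: a pathway's speed grows with its strength, and the interference it suffers on conflict trials grows with the competing pathway's strength. The linear rules and all numbers below are illustrative assumptions, not the published PDP model.

```python
# Toy sketch of "strength of processing": speed rises with a
# pathway's own strength; interference rises with the competing
# pathway's strength. Rules and numbers are illustrative assumptions.
def response_time(strength, competing_strength, conflict):
    base = 1000.0 / strength                     # stronger => faster
    interference = 200.0 * competing_strength * conflict
    return base + interference

word_strength, color_strength = 5.0, 2.0         # word reading is overlearned

# Words interfere with color naming more than colors with word reading:
stroop_interf = (response_time(color_strength, word_strength, 1)
                 - response_time(color_strength, word_strength, 0))
reverse_interf = (response_time(word_strength, color_strength, 1)
                  - response_time(word_strength, color_strength, 0))
assert stroop_interf > reverse_interf

# Practice (greater strength) speeds a pathway, continuously shifting
# both speed and the interference pattern, as MacLeod & Dunbar found.
assert response_time(6.0, 2.0, 0) < response_time(2.0, 2.0, 0)
```

Because one continuous variable drives both effects, the model reproduces the training data above without a discrete controlled/automatic boundary.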
The model has direct implications for the standard method by which controlled and automatic processes are distinguished. Practice and the ability to combine two tasks. People have trouble when they try to combine two tasks.
Some tasks can be combined without much difficulty; other tasks are impossible to do together. One explanation for this is that tasks can be combined provided that the mappings between the input and output systems of one task are independent of the mappings between input and output of the other task.
If there is crossover between the input and output systems required for both tasks, there will be interference. Examples like this were evident in the studies by McLeod and Posner and Shaffer. When tasks can be combined successfully, they seem to be controlled automatically and independently. However, when the mappings between the stimuli and their responses are not direct, the tasks interfere with each other and a different kind of control is required, one which requires conscious attention and appears to be of limited capacity.
Some tasks which interfere when first combined become independent with enough practice. Spelke, Hirst, and Neisser examined the effect of extended practice on people's ability to combine tasks. They gave two students 85 hours of practice spread over 17 weeks and monitored the ways in which dual-task performance changed over that period. To begin with, when the students were asked to read stories at the same time as writing to dictation, they found the task combination extremely difficult.
Reading rate was very slow and their handwriting was poorly formed. Initially, tests of memory for the dictated words showed that the students were rarely able to recall any of the words they had written down.
As training proceeds, performance requires less vigilance, becomes faster, and errors decrease. This is defined as automatization. Automatization can apply to perceptual and motor skills as well as to cognitive processes. Automatized processes can be accomplished simultaneously with other cognitive processes without interference, and task efficiency is optimal. Job argues that the association of automatic and controlled processes can be understood with reference to the context in which they are activated.
Neuman suggests that practice leads to the development of a skill, which "includes a sensory and, at least during practice, a motor response. After practice the response may remain covert, but is still an attentional response connected to the particular target stimuli."
However, even well-practised tasks will display interference if the responses are similar. Tasks may also interfere, according to Neuman, if the initiation of a new response is required; only when there is a continuous stream of information guiding action, as in the Spelke et al. studies, can tasks be combined without such interference. Practice, automatization and access to complex thinking. Before presenting another effect of automatization, it is important to clarify that automaticity and skill are closely related but not identical.
Automatic processes are components of skill, but skill is more than the sum of the automatic components. Automaticity and skill are similar in that both are learned through practice. When first learning any task, we use controlled processes of attention, so performance is slow, awkward, and prone to errors.
We can say that working memory is fully loaded; in other words, all our cognitive resources are engaged by the new learning. For example, think of a child who is learning to add two numbers.
Initially it is very difficult for the child to bear in mind the first number, memorize the second number, recall the first, and add them together. So, when the teacher asks him to combine Mary's toys with Marc's toys, he thinks hard, works slowly, and then reaches the result.
During his problem solving, if someone asks him something else, he makes mistakes in the calculation and forgets the result. With learning, the attentional strategies that once needed control become automatic. Returning to the child: as learning proceeds, he becomes able to think of the plus sign faster and reaches the result easily.
He also becomes able to reply to someone who asks him something else. In other words, the child has automatized his learning of the plus sign. In the model below, this corresponds to the A level, where automatization appears and the cognitive load of the A level is discharged. If he were totally or even partially engaged at the A level, it would be difficult to access more complex tasks.
He can now access the execution of subtraction and addition together (B level) thanks to the fact that the A level has become an automatized subroutine. Again, to solve the B level he initially needs controlled processes of attention, and again performance is slow, awkward, and prone to errors. His memory is fully loaded and all his cognitive resources are engaged by the new learning. As training proceeds, performance requires less vigilance, becomes faster, and errors decrease: again we see automatization.
In other words, with learning, the attentional strategies that once needed control become automatic. He becomes able to solve problems that need both subtraction and addition. Later he has to learn more complex problem solving which requires three math operations (C level). The B level, which contains the A level, in turn becomes a single automatized subroutine thanks to the discharge of cognitive load, so the child can now solve these more complex problems. The infinity symbol at the top of the figure means that there is no limit to the possibility of accessing increasingly complex stages of thinking.
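The levels model above can be sketched as nested subroutines: once level A (addition) is automatized, level B calls it as a single chunk, and level C builds on B in turn. The particular decomposition is illustrative.

```python
# Toy sketch of the levels model: each automatized level becomes a
# single subroutine that the next level calls as one chunk. The
# decomposition into these operations is illustrative.
def level_a(x, y):            # automatized subroutine: addition
    return x + y

def level_b(x, y, z):         # calls A as one chunk, adds subtraction
    return level_a(x, y) - z

def level_c(x, y, z, w):      # three operations, built on B as one chunk
    return level_b(x, y, z) * w

assert level_a(2, 3) == 5
assert level_b(2, 3, 1) == 4
assert level_c(2, 3, 1, 2) == 8
```

Just as a caller need not re-derive what a subroutine does, the child solving a C-level problem no longer spends attentional resources on the addition buried inside it.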
Indeed, if we are able to execute complex problem solving, we can go on to still more complex problems; if we are able to execute only simple basic discriminations, we can learn more complex ones based on the first. Two other questions are related to this model. Addressing the first, Flor and Dooley explain the learning process that happens inside each level with reference to schemata.
Schemata are the outcome of learning. They can be seen as an internal map which facilitates recognition through tags and smaller building-block schemata, and which facilitates action by linking triggers and actions. Physiologically, schemata are distributed electro-chemical networks embedded in the connections between neurons. As learning proceeds, one or more schemata may evolve; indeed, it is possible that schemata compete during the initial stages of learning, with a single dominant schema eventually taking over.
As seen above, in the first chapter, Logan characterizes the process as a progression from a search-algorithm schema to a direct-retrieval schema. The brain is able to chunk several distinct but interrelated ideas into a single idea, and this reduces access and processing time. Chunking is a very important process and may lead to significant gains in task performance. In the model presented above, assuming the A level as an automatized subroutine amounts to performing a chunking operation.
There are two important explanations of the dynamics of learning to automaticity. The first lies at a neurological level and the second at a meta-cognitive level; from the first viewpoint, Mackay et al. provide a neurological account. In order to analyse how the brain can operate under varying conditions, Flor and Dooley claim that consistent and changing tasks can be performed in a learning environment, and that measures of task performance and their subsequent analysis should lead to hypotheses concerning which factors account for performance gains.
Some mathematical models are used to quantify the relationship between the learning environment and task performance. The simplest is the log-linear model. As Flor and Dooley point out, the log-linear model assumes rapid initial improvement followed by slow, incremental improvement, with learning occurring through accumulated exposure to the stimulus. If one assumes instead that performance improves only gradually at first, the S-shaped learning curve is appropriate. In either case the learning process converges to a point where there is little or no further schema development.
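The two curve shapes can be sketched with standard textbook forms. These are illustrative parameterizations, not necessarily the exact equations Flor and Dooley use: a power-law form for the log-linear model (time per trial falls quickly, then flattens) and a logistic form for the S-shaped curve; the constants `a`, `b`, `k`, `n0`, and `r` are assumptions.

```python
import math

def log_linear(n, a=10.0, b=0.5):
    """Power-law learning curve: T(n) = a * n**(-b).
    Rapid early gains, then slow incremental improvement."""
    return a * n ** (-b)

def s_shaped(n, k=1.0, n0=10.0, r=0.5):
    """Logistic (S-shaped) curve: slow start, acceleration, plateau."""
    return k / (1.0 + math.exp(-r * (n - n0)))

print(log_linear(1))    # 10.0  -- slow first trial
print(log_linear(100))  # ~1.0  -- much faster after practice
print(s_shaped(10))     # 0.5   -- mid-point of the S
```

In both models, successive trials change performance less and less, which is the convergence to a point of little or no further schema development described above.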
Dynamically, as Flor and Dooley state, there is a point attractor in the learning process; on this basis the authors state their first research proposition.
In statistics, convergence is indicated by weak stationarity: the mean and variance of the series remain roughly constant over time. Convergence is also indicated, in part, by the absence of divergent or chaotic dynamics, since the presence of divergent dynamics is an indication of chaos. The output of a chaotic system is point-by-point unpredictable, but forms a recognizable pattern over time if observed properly.
The discovery of chaos leads to a rejection of the random hypothesis. If the early stages of the learning process are characterized by a chaotic dynamic, it may be because the brain is searching for optimal chunking patterns.
This type of chaotic search has also been found in numerous studies of neuron-level activity, in which sensitivity to initial conditions is known to allow the amplification of small fluctuations; initial conditions may both create and destroy information. As performance on the learned task converges, a state of mastery is achieved, and further exposure to the learning task results in chunking rather than in improvement in performance.
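Sensitivity to initial conditions is easy to demonstrate with the logistic map, a standard toy example of a chaotic system (it is used here purely as an illustration of the dynamical property, not as a model of neurons): two orbits that start a millionth apart diverge to macroscopically different trajectories.

```python
def logistic_orbit(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r*x*(1-x); r=4 is the chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)  # perturbed by 1e-6
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # orders of magnitude larger than the initial 1e-6 difference
```

The tiny initial fluctuation is amplified exponentially, which is exactly the "amplification of small fluctuations" the neuron-level studies describe.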
Less and less effort is required, and process and action become automatic. Thus chunking may explain why tasks can become automatic. Chunking may tend to follow a model of punctuated equilibrium, in which long periods of stasis are interrupted by short periods of rapid change.
This is similar to the evolutionary dynamic seen in genetic systems and also observed in human societal development, and to the dynamics of the catastrophe model, which the authors define mathematically. This leads them to assert the third research proposition: a catastrophe model can be used to model the discontinuous dynamics of learning to automaticity. The second question, related to the model of access to complex thinking, is, as Johnson points out, that many skills are too complex to be learned all at once.
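A standard cusp catastrophe can be sketched to show where the discontinuity comes from; this is the textbook cusp form, not necessarily the exact equation the authors use. Equilibria of the potential V(x) = x⁴/4 + a·x²/2 + b·x are roots of x³ + a·x + b = 0, and the number of real roots jumps from one to three as the control parameters (a, b) cross the fold set, producing a sudden qualitative change.

```python
def n_equilibria(a, b):
    """Number of real roots of x**3 + a*x + b = 0, via the cubic
    discriminant -4a**3 - 27b**2 (ignoring the degenerate boundary)."""
    disc = -4.0 * a**3 - 27.0 * b**2
    return 3 if disc > 0 else 1

print(n_equilibria(-3.0, 0.0))  # 3 -- bistable region: two states coexist
print(n_equilibria(1.0, 0.0))   # 1 -- single stable state
```

Smoothly varying the control parameters can therefore flip the system abruptly between regimes, which is the kind of discontinuous shift the punctuated-equilibrium account of automatization requires.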
Skill acquisition depends on paying attention to the right things at the right time.