Wednesday, January 04, 2006

Measuring Clinical Competency

Peter Johnson is both a certified nurse midwife and an educational psychologist.

Dr. Johnson has expertise in the following areas:
- Women's health care
- Clinical midwifery practice
- Family planning
- Graduate and undergraduate nursing, midwifery and women's health education
- Educational administration
- Educational measurement
- Test construction
- Test item analysis
- Computer mediated learning strategies

Those who develop educational strategies and tests to measure what has been learned are often frustrated by at least some members of the class and their apparent difficulty learning what the instructor is attempting to teach.

Remember that any classroom, whether brick and mortar or electronic, is filled with students who vary in intelligence, personal motivation, and numerous other ways. The instructor may be tempted to focus on those students who appear to be doing well while ignoring those students who frustrate her or him by their apparent inability to “get it.”

The sage instructor realizes the need to turn her or his attention in just the opposite direction, focusing on those students who appear to be having difficulties. While the intelligent student will certainly learn faster, with assistance and ample time the remainder of the class will catch up.

The instructor may have to evaluate students having difficulty to determine which teaching strategies work best for them. Perhaps some students have difficulty attending to lectures. A move to small group discussions, pairing students who are doing well with students having difficulty, may essentially level the playing field.

Finally, remember to focus on the target competencies and not just measurement of where students fall on the bell curve. If an instructor has been truly effective, it is possible for the entire class of students to obtain perfect scores on a posttest.

Health outcomes are dependent on numerous human, environmental, political and other factors. It is important for those interested in developing and measuring the success of educational strategies to understand the impact that knowledge, skills and clinical reasoning have on competent human behavior and the impact that that behavior has on the health outcome of interest.

It is often unrealistic and potentially demoralizing for both student and teacher to judge educational effectiveness by changes in a specific health outcome when that outcome depends on an interrelationship of human and nonhuman factors, many of which may be beyond human control.

Those interested in influencing health outcomes through education should consider those outcomes first. Consider convening a broad panel of stakeholders or conducting interviews with a variety of experts before turning attention to the development of educational strategies.

The next step in developing an educational strategy is the clear explication of the competencies that are logically related to the health outcome. In its simplest terms, ask what it is that the individual needs to be able to do as the result of a realistic and practical educational intervention.

These abilities (competencies) need to be stated clearly and in measurable terms. While this process is often time consuming, it will pay dividends when the instructor turns her or his attention to instructional design and to student and course evaluation.

For each objective, state in behavioral terms what the student should be able to do as the result of the instruction, under what specific circumstances, and to what specific degree of effectiveness. For example, if the competency is a kindergartner’s effectiveness in tying her or his shoes, an appropriate objective would be:

With the assistance of the teacher or parent, the child will be able to tie both shoes correctly on four out of five attempts.
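To make the three components of such an objective concrete, here is a minimal sketch (in Python, with hypothetical field and function names that are not part of any standard instrument) of how an objective's condition, behavior, and degree of effectiveness might be recorded and checked against observed attempts.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """A behavioral objective: condition, behavior, and degree of effectiveness."""
    condition: str           # under what specific circumstances
    behavior: str            # what the student should be able to do
    required_successes: int  # degree of effectiveness: successes needed...
    total_attempts: int      # ...out of this many attempts

    def is_met(self, attempts: list[bool]) -> bool:
        """Return True if the observed attempts satisfy the stated criterion."""
        observed = attempts[:self.total_attempts]
        return len(observed) == self.total_attempts and sum(observed) >= self.required_successes

# The shoe-tying objective from the text, expressed in this structure.
shoe_tying = Objective(
    condition="with the assistance of the teacher or parent",
    behavior="tie both shoes correctly",
    required_successes=4,
    total_attempts=5,
)

print(shoe_tying.is_met([True, True, False, True, True]))  # True: 4 of 5 attempts succeeded
```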

Clinical skills are often essential components of healthcare competencies, yet they can be among the most challenging to grasp and the most difficult to teach. As a test, consider teaching a friend to tie a necktie. Fortunately, once learned these skills are forgotten very slowly and relearned almost instantaneously within their appropriate context.

These skills may be curiously context dependent, which may cause the student a great deal of anxiety. For example, the midwife who has gone several years without attending a birth may have great difficulty explaining the hand skills that she or he will need for competent midwifery care. This midwife may be astonished to find that, in the presence of a woman giving birth, her or his hands “know” exactly what to do.

An individual cannot be competent unless she or he knows what is required in order to do what is needed. While clinical knowledge is obviously essential to any competency, it is by no means equivalent to competence. Many know how to “talk the talk” without any ability to “walk the walk.” In healthcare, it would be dangerous to consider these individuals competent.

Unfortunately, because knowledge is comparatively easy to measure, it is often the focus of educational testing. Testing knowledge alone does not provide a valid measure of competence and may very well discriminate against competent individuals once they have left the immediate confines of the classroom.

Knowledge is transient, naturally forgotten and replaced with alternative and less measurable “ways of knowing.” Fortunately, we live in a technological world where information is readily available to those who know how to retrieve it.

The instructor is therefore encouraged to measure knowledge only as a partial and immediate means of measuring competency.

Both instructors and students commonly assume that information, once learned, is retained forever. While it is tempting to conceptualize the human brain as you would a computer hard drive, the metaphor is inadequate. Information received by the human brain may well be retained there for a long period of time, possibly forever, but unlike a computer, humans are not as readily able to recall stored information once it has been placed in memory.

Students without any previous exposure to the information and concepts presented in the classroom can be expected to learn them very slowly, and once they have learned them, to quickly forget most of what they have learned. Fortunately, the next time such students are exposed to this content they will relearn it more quickly and gain a higher level of understanding of the subject. Following a second round of instruction, these students will forget more slowly and retain a greater amount of baseline knowledge. This cycle continues indefinitely with repeated exposure to a subject. While even experts in a particular subject will forget content, these individuals are able to relearn it with very minimal cues.
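This cycle of learning, forgetting, and faster relearning is often modeled with an exponential forgetting curve in which each new exposure slows the rate of decay. The sketch below (Python, with purely illustrative parameters that are not drawn from the text) shows how retention might be compared across three rounds of instruction under that assumption.

```python
import math

def retention(days_since_exposure: float, stability: float) -> float:
    """Exponential forgetting curve: fraction of material still recallable."""
    return math.exp(-days_since_exposure / stability)

# Illustrative assumption: each round of instruction makes the material more
# durable (a larger 'stability' in days), so forgetting slows with exposure.
stabilities = {"first exposure": 5.0, "second exposure": 15.0, "third exposure": 45.0}

for label, s in stabilities.items():
    retained = [retention(days, s) for days in (7, 30, 90)]
    print(label, [f"{r:.0%}" for r in retained])  # retention at 7, 30 and 90 days
```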

Clinical reasoning is often defined as the ability to use knowledge and skills within a structured decision making process that draws on higher order thinking skills. This process includes the ability to transfer or apply concepts to new situations, to analyze the component parts of a complex construct, to synthesize or build constructs using different sources of learned information, and to make critical evaluations. Healthcare providers use these higher order thinking skills within a management process that involves the appropriate collection and analysis of complex datasets, diagnosis using these data, care planning and evaluation.

Clinical reasoning is a developmental process with profound novice-expert differences. Novices can be expected to make decisions using a very structured framework that can be easily identified by an instructor or mentor. The expert, however, utilizes a decision making process that is very individualized and apparently unstructured. These developmental differences pose challenges for both clinical instruction and continuing evaluation of clinicians. Experts will frequently state that they know that they are making the correct decision, but are unable to explain why they are making it. The expert instructor may have difficulty breaking down decisions for logical presentation to her or his students. Likewise, the individual interested in assessing the quality of the decisions made by experts may find them difficult to evaluate because of their unstructured nature.

Competency, or the ability to behave productively in a manner requiring a complex array of knowledge, skills and reasoning, is difficult to measure in a manner considered reliable and valid. Blueprinting is a logical process of selecting and appropriately distributing measurements of the specific knowledge, skills and reasoning related to a particular competency. In this manner, the instructor is able to evaluate the competency by evaluating its measurable component parts.

The instructor interested in reducing the transmission of malaria, having already consulted with public health officials, engineers, political leaders and other stakeholders in the target community, can use a blueprint to provide valid measures of achievement of the human competencies logically related to this outcome.

In this example the instructor is interested in measuring the competency “Ability to correctly identify and treat infected individuals.” The individual must acquire a wide range of knowledge related to malaria, its transmission, its symptoms and its treatment in order to be competent. In this case the instructor believes that clinical skills and clinical reasoning are at least as important to the competency as knowledge, and makes an active decision to distribute only 25 percent of the evaluation to measures of knowledge of these concepts.

Multiple-choice, true-false, matching and fill-in-the-blank test items are examples of the wide selection of tools available to the instructor for measuring knowledge of these concepts related to malaria. These items, because they are easily developed and widely understood by educators, are often given an inappropriate amount of weight. The blueprinting process helps the instructor avoid the common mistake of insufficiently measuring skills and reasoning.

Finally, because of the transient nature of knowledge, the instructor must measure it immediately following the period of instruction. Delayed measurement of knowledge will provide an invalid measure of an individual's competency.

The instructor, having adequately measured the knowledge related to the competency, can now turn her or his attention to measurement of its associated skills. In this case, the instructor makes an active decision to distribute an additional 25 percent of the evaluation to skills measurement.

Clinical skills are typically more challenging to measure than related knowledge. In this case, the instructor has decided to use pictures of infected patients to assess the student’s ability to correctly identify associated symptoms. The instructor can assess the student’s ability to measure patient vital signs and to draw blood for hemoglobin assessment by directly observing the student demonstrate these skills.

The instructor believes that the measurement of the student’s ability to identify and appropriately treat individuals with malaria is highly dependent on her or his ability to make logical decisions using higher order thinking skills. She or he therefore makes the active decision to reserve 50 percent of the evaluation of this competency for measures of clinical reasoning. This poses a challenge, as clinical reasoning is comparatively difficult to measure in a reliable and valid manner.
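To make the arithmetic of this blueprint concrete, the sketch below (Python; the student scores, helper names and 0–100 scale are hypothetical illustrations, not part of the text) applies the 25/25/50 distribution as weights when combining component scores into a single competency result.

```python
# Hypothetical blueprint for the competency "Ability to correctly identify
# and treat infected individuals", using the weights chosen in the text.
blueprint = {
    "knowledge": 0.25,   # e.g. multiple-choice items on malaria, its transmission and treatment
    "skills":    0.25,   # e.g. photo identification, observed vital signs and blood draw
    "reasoning": 0.50,   # e.g. case management exercises, restricted essays
}

def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weight each component score (0-100) by its share of the evaluation."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "blueprint weights should cover 100% of the evaluation"
    return sum(scores[domain] * weight for domain, weight in weights.items())

# Illustrative student scores in each component (0-100).
student = {"knowledge": 90.0, "skills": 80.0, "reasoning": 70.0}
print(composite_score(student, blueprint))  # 0.25*90 + 0.25*80 + 0.50*70 = 77.5
```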

The instructor may attempt to develop multiple-choice items that measure the application of knowledge or the analysis or evaluation of concepts. For example, a question could be designed to test the student’s ability to choose the best of multiple treatment options. Multiple-choice items can also be developed that test the student’s ability to make appropriate assessments given a set of related data. Multiple-choice items that measure clinical reasoning are very difficult to develop. The individual developing these items is encouraged to have them reviewed by clinical experts in the competency being measured, to determine whether they concur that the items truly measure reasoning. Content experts should also be polled to determine whether a high degree of consensus exists that the correct answer to the question is indeed the BEST decision.
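One informal way to quantify that expert poll is simply to tally how many reviewers independently select the keyed answer for each item. The minimal sketch below uses hypothetical data and an arbitrary illustrative threshold; neither is prescribed by the text.

```python
def keyed_answer_agreement(expert_choices: list[str], keyed_answer: str) -> float:
    """Fraction of expert reviewers whose 'best decision' matches the keyed answer."""
    return sum(choice == keyed_answer for choice in expert_choices) / len(expert_choices)

# Hypothetical review of one clinical-reasoning item: seven experts each
# independently pick the option they consider the BEST decision.
choices = ["B", "B", "B", "C", "B", "B", "B"]
agreement = keyed_answer_agreement(choices, keyed_answer="B")

# Items falling below the (illustrative) consensus threshold may need revision
# before they can be trusted as measures of reasoning.
print(f"{agreement:.0%} agreement", "- review item" if agreement < 0.80 else "- acceptable")
```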

Simulated case management is another means of measuring clinical reasoning. The instructor can develop and present a realistic case containing relevant subjective and objective data and ask the student to make appropriate assessments or management plans. Students can also be given two similar cases and asked to describe the differences between them or asked to determine which contained a more appropriate management plan.

The instructor can also use essays that ask the student to demonstrate the ability to analyze, evaluate or clinically manage a patient presenting with symptoms of malaria. In this case, the instructor is encouraged to use restricted rather than unrestricted essays. Unrestricted essays place no boundaries on the student’s explanations, while restricted essays provide a framework for the response. Restricted essays can be scored more reliably than their unrestricted counterparts.

The completed blueprint provides the instructor with a framework or recipe for the comprehensive measurement of a competency, which often consists of a complex arrangement of integrated behaviors. The blueprint is also an important source of evidence supporting the construct validity of the complete measure of the competency.


Consider individually the knowledge, skills and clinical reasoning that are related to active management of the third stage of labor.

Decide how much emphasis you would place on measuring each. Hint: this is a competency where one might wish to focus on measurement of clinical skills.

Select appropriate evaluation strategies. Why would each be best? When would the evaluation be best applied?