Professional Practice Analysis: Validity Evidence for the Continued Professional Certification Examination for Nurse Anesthetists

Timothy J. Muckle, PhD; Karen Plaus, PhD, CRNA, FAAN, CAE; and Steve Wooden, DNP, CRNA, NSPM-C

Introduction: This article presents the results of the 2015 professional practice analysis (PPA) conducted by the National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA). The goal of the PPA was to establish and validate a content outline and test specifications for NBCRNA's examination for the continued professional certification (CPC) program. In professional certification test development, a PPA establishes the content validity of an examination and serves as the key evidentiary link between the test and clinical practice.

Methods: The PPA used survey and rating scale methodologies to collect data on the relative emphasis of various aspects of the nurse anesthesia knowledge domain and competencies. Overall, 726 survey responses were analyzed by a panel with expertise in clinical anesthesia and testing methodology, using conventional statistics and the Rasch rating scale model. Descriptions of how the survey results were used to develop test specifications are also provided.

Results: The results of the analysis provided strong validity evidence for the content outline and test specifications.

Conclusion: To a great extent, the responses to the PPA survey exhibited a high degree of endorsement for the knowledge statements included on the outline and thus serve as a basis of content validation for the CPC examination.

Keywords: Certification, nurse anesthetists, practice analysis, testing

The National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA), an autonomous body with multidisciplinary and public representation, is responsible for specifying the requirements for earning and maintaining the certified registered nurse anesthetist (CRNA) credential. Through its credentialing processes, NBCRNA seeks to maintain a high level of knowledge, skill, and professionalism among nurse anesthetists and to foster high-quality anesthesia care for patients. Consistent with its mission, NBCRNA has developed a continued professional certification (CPC) program for certification maintenance. The program, following a continued competency model, consists of two 4-year cycles (a total of 8 years) and four major components (see Table 1). The objective of the CPC examination (CPCE) is to assess core knowledge common to the practice of all nurse anesthetists, irrespective of practice setting. The core clinical knowledge areas are represented by the four core knowledge domains of the examination. For the first CPC cycle, the CPCE is not a pass/fail examination, and certification status will not be contingent on meeting a passing standard (i.e., a passing score). Rather, examinees in the first CPC cycle will be subject to a performance standard examination. Its purpose is simply to identify potential areas where a CRNA may need additional education. For the first CPCE, CRNAs will maintain their certification even if they do not meet the performance standard. Additional continuing education (CE) (e.g., an additional core module) will be required in any area of weakness. For the second and subsequent CPC cycles, examinees will be subject to a passing standard, and passing the CPCE (within four attempts) will be required to maintain certification. The first CPC cycle began August 1, 2016, and the first CPCE must be available in the second 4-year cycle (beginning 2020).

To form a foundation for the content development for the CPCE, NBCRNA conducted a national professional practice analysis (PPA) study of the responsibilities and duties of nurse anesthetists at the recertification level, that is, a level higher than entry level. The purpose of the PPA was to define logical, practice-related, research-based content to support various elements of the CPC program.

Practice Analysis Study

To provide leadership and oversight for the project, NBCRNA appointed a practice analysis panel of 16 subject matter experts (SMEs) who represent the profession demographically and reflect the nurse anesthetists who will participate in the CPC program.


TABLE 1

NBCRNA's CPC Program Components
⦁ 60 credits of assessed continuing education (CE) every 4 years ("Class A"[a] CE)
⦁ 40 credits of professional development every 4 years ("Class B"[b] CE)
⦁ Four core modules,[c] one in each core area, every 4 years
⦁ One CPC examination every 8 years

[a] In order to qualify for Class A credits, the learning activities must be prior approved by an accredited CE provider and include an assessment of some type to show that learning has occurred and is related to nurse anesthesia practice.
[b] Class B requirements for professional development are CEs earned for activities that enhance knowledge of anesthesia practice, support patient safety, or foster an understanding of the health care environment. A few examples include: grand rounds, morbidity and mortality conferences, precepting, teaching, infection prevention courses, data collection, mission trips, and public education.
[c] Core modules provide a synopsis of the current literature and evidence-based knowledge in the four areas of anesthesia practice that apply to all CRNAs, regardless of practice focus: Airway Management, Applied Clinical Pharmacology, Human Physiology and Pathophysiology, and Anesthesia Equipment and Technology. Core modules also feature an assessment component.

Note. NBCRNA = National Board of Certification and Recertification for Nurse Anesthetists; CPC = continued professional certification.

The panel also included representatives from the Continuing Education and Practice Committees of the American Association of Nurse Anesthetists (AANA), the membership and advocacy body for the nurse anesthesia profession, and two former chairs of the certification examination committee. The panel was charged with analyzing the practice of nurse anesthesia at the recertification level and reviewing, evaluating, and revising the knowledge elements included in the proposed CPCE content outline. The panel convened regularly from February to June 2015 via conference calls and at a one-and-a-half-day meeting in June 2015.

NBCRNA desired to adhere to the standards in the professional certification community for the conduct of PPA studies. These guidelines have their foundation in logically sound and legally defensible procedures drawn from the psychometric literature and case law. The principles and procedures are outlined in federal regulation (Uniform Guidelines on Employee Selection Procedures) and manuals, such as the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], 2014). NBCRNA testing staff employed these standards and those of the National Commission for Certifying Agencies (NCCA, 2015) in all phases of the study.

As the primary process for identifying the competency areas and knowledge needed for proficient performance in a profession, PPA studies offer a clear, useful basis for defining the essential components of credentialing programs, especially assessments. Validation through systematic PPA studies helps to document that the proficiency inferred when a candidate achieves a credential has a sound link to the significant elements of practice that characterize the profession.

The 2015 PPA study is an integral part of ensuring that the examination component of the CPC program has practice-related validity and that the aspects of nurse anesthesia addressed by the program reflect the requirements of practice settings, patient groups, and conditions. The study identified the criticality and frequency of essential knowledge and skills relevant to the demonstration of continued competency in nurse anesthesia. These ratings play an important role in determining the content of the CPCE.

According to national testing standards, credentialing agencies should repeat their validation studies on a periodic basis commensurate with the degree of change in the profession (AERA, 2014; National Commission for Certifying Agencies [NCCA], 2015). NBCRNA had previously performed content validation studies in 1996 (Zaglaniczny & Healey, 1998), 2001 (McShane & Fagerlund, 2004), 2007 (Muckle, Apatov, & Plaus, 2009), and 2011 (Plaus, Muckle, & Henderson, 2011). This article describes the 2015 PPA study, including summaries of the survey methodology used to validate the domains and subdomains of the CPCE content outline, the demographic profile of survey respondents, the survey results, and the decisions made by NBCRNA regarding revisions to the test blueprint.

Although this article deals chiefly with the nurse anesthesia profession, the process outlined and the results presented have implications for all nurses and regulators. Recertification requirements and practices in health care credentialing are changing. More and more credentialing programs are embracing principles of lifelong learning and continued competence and integrating them into certification maintenance requirements.
For instance, the advanced practice registered nursing communities have jointly incorporated some of these themes in their recent consensus model (National Council of State Boards of Nursing, 2008). As the components of these credentialing programs evolve over time, change must be transparent, guided by thoughtful deliberation, and driven by data and evidence-based practices. This article demonstrates that the foundation of the testing component for NBCRNA’s ground-breaking CPC program was established using a process that involves guidance and expertise from qualified practitioners, empirical data derived from standard research techniques, and input from a substantial segment of the nurse anesthetist constituency.

Survey Design

After a review of educational materials related to PPA and other psychometric principles of assessment design and an orientation to the PPA project, the panel conducted a preliminary review of the proposed CPCE content outline, which was based on a 2009 PPA of recertification requirements for nurse anesthetists (Plaus et al., 2011). This review resulted in several revisions to some knowledge elements in the CPCE content framework. The revisions did not result in a major restructuring of the outline. The knowledge statements were organized in a hierarchical structure consisting of four primary domains (airway management, applied clinical pharmacology, human physiology and pathophysiology, and anesthesia equipment and technology), a category for common topics, and 21 secondary domains. (See Table 2.)

Based on the work of the panel, NBCRNA developed an electronic validation survey in which each survey item was a knowledge element. For each knowledge statement, three ratings were elicited:
1. Required: Is the knowledge domain or element required of all nurse anesthetists regardless of their practice focus?
2. Criticality: To what degree would the inability of a nurse anesthetist to exhibit knowledge in the knowledge domain or element be perceived as causing harm to stakeholders?
⦁ No harm
⦁ Minimal harm
⦁ Moderate harm
⦁ Substantial harm
⦁ Extreme harm
3. Frequency: How often does a nurse anesthetist perform duties that require proficiency in each knowledge domain and element?
⦁ Not at all
⦁ One or more times per year
⦁ One or more times per month
⦁ One or more times per week
⦁ One or more times per day

The survey also asked respondents to provide their opinions regarding the ideal content weightings (%) of the four clinical domains. To verify the functionality and design of the survey, the panel distributed a pilot survey to 65 qualified nurse anesthetists for review and comment. This process informed revisions and led to a large-scale survey. NBCRNA developed the survey using an online survey design and delivery tool (Qualtrics Research Suite) to be completed by a representative sample of nurse anesthetists for the purpose of collecting data on the performance domains, tasks, knowledge, and skills in the CPCE content outline.
The survey phase of the PPA study was important because nurse anesthetists holding the CRNA credential should have input into the definition of their practice and the design of an assessment intended to measure core competencies. This input is critical because the panel, though highly qualified and representative in many ways, constituted only a small portion of the nurse anesthetist population. Evaluation by the eligible community in the specialty is essential for making generalizations about the performance domains, tasks, knowledge, and skills. The survey was also designed to solicit demographic information from the respondents. The demographic data were collected and analyzed to ensure that a representative response from practicing nurse anesthetists was achieved.

TABLE 2

Knowledge Elements in the CPCE Content Framework

Domain I: Airway Management
⦁ Anatomy
⦁ Physiologic and pathophysiologic concepts
⦁ Pharmacology
⦁ Interpret laboratory and diagnostic study results
⦁ Airway equipment
⦁ Complications
⦁ Management concepts

Domain II: Applied Clinical Pharmacology
⦁ Pharmacokinetics and pharmacodynamics
⦁ Factors influencing medication selection
⦁ Interpret laboratory and diagnostic study results
⦁ Infection prevention principles

Domain III: Human Physiology and Pathophysiology
⦁ Major organ systems (e.g., cardiovascular, respiratory, neurologic)
⦁ Interpret laboratory and diagnostic study results
⦁ Factors influencing anesthetic approach, technique, and management

Domain IV: Anesthesia Equipment and Technology
⦁ Proper function, malfunction, and troubleshooting complications
⦁ Safety and infection prevention protocols
⦁ Anesthetic delivery and clinical monitoring devices
⦁ Assess, analyze, interpret, and use perioperative data

Common Topics
⦁ Informed consent principles
⦁ Communication strategies
⦁ Documentation of care

Note. CPCE = continued professional certification examination.

Sample and Response Rate

NBCRNA surveyed 12,000 randomly selected nurse anesthetists currently holding the CRNA credential. Randomization was achieved by sorting the database of potential respondents by a random number in a Microsoft Excel spreadsheet. The sample represented about 26.7% of the approximately 45,000 current CRNA credential holders. NBCRNA sent an e-mail signed by the NBCRNA president and chief executive officer, inviting the selected nurse anesthetists to participate in the study. The e-mail explained the purpose of the survey and its role in the PPA study. The e-mail also included a person-specific URL link to the online survey. Throughout the 3-week survey administration period, NBCRNA monitored responses and sent three e-mail reminder notices and two automated voice messages to those who had not completed the survey.
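The sort-by-random-number procedure described above has a direct programmatic equivalent: a simple random sample without replacement. The sketch below (Python; the roster of credential-holder IDs is hypothetical, and the seed is arbitrary) illustrates the idea:

```python
import random

# Hypothetical roster of current CRNA credential holders (IDs only).
roster = [f"CRNA-{i:05d}" for i in range(45000)]

# Equivalent of sorting the roster by a random-number column in Excel
# and taking the top 12,000 rows: a simple random sample, no replacement.
rng = random.Random(2015)  # fixed seed so the draw is reproducible
sample = rng.sample(roster, k=12000)

print(len(sample))       # 12000
print(len(set(sample)))  # 12000 (no credential holder selected twice)
```

Either mechanism gives every credential holder the same probability of selection, which is what justifies generalizing the survey responses to the full population.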


TABLE 3

Summaries of Responses for Core Domains and Common Topics

Knowledge Statement | "Required" | Criticality Mean (SD) | Frequency Mean (SD)
Airway management | 99.7% | 4.5 (0.8) | 4.7 (0.7)
Applied clinical pharmacology | 98.4% | 4.1 (0.9) | 4.7 (0.8)
Human physiology and pathophysiology | 97.4% | 3.9 (1.0) | 4.6 (0.9)
Anesthesia equipment and technology | 97.8% | 3.9 (1.0) | 4.5 (0.9)
Common topics | 88.7% | 2.8 (1.2) | 4.1 (1.3)

Note. SD = standard deviation.

TABLE 4

Rasch Measures: Criticality and Frequency

Knowledge Statement | Rasch Criticality Measure (SE) | Rasch Frequency Measure (SE)
Airway management | 5.41 (0.07) | 5.01 (0.07)
Applied clinical pharmacology | 4.32 (0.06) | 4.58 (0.06)
Human physiology and pathophysiology | 3.71 (0.06) | 4.14 (0.06)
Anesthesia equipment and technology | 3.70 (0.06) | 4.13 (0.06)
Common topics | 1.00 (0.06) | 2.79 (0.06)

Note. SE = standard error.

Of the 12,000 e-mail recipients, 6,048 (50.4%) opened the e-mail, and 2,184 clicked the link to the survey. Of these, 726 completed at least 50% of the survey, and only those respondents were included in the subsequent analysis. Among the 6,048 people who opened the e-mail invitation, the response rate was 12%. The raw sample size accumulated from the survey administration satisfied the minimum requirements for PPA survey samples discussed in the testing literature (Raymond, 2001, 2015). The response rate is considered satisfactory for a PPA survey, especially given that the survey was somewhat lengthy (30 to 45 minutes) and complex, requiring approximately 300 individual mouse clicks and keystrokes.

Data were collected on key demographic variables, including sex, ethnicity, age, practice setting, clinical responsibility, educational background, and geographic region. When analyzed, the demographic data were consistent with previous NBCRNA studies and with demographic analyses conducted by the AANA. The respondents were generally representative of the CRNA profession, and the response provides a sound basis for validation decision making.
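The response figures reported above are internally consistent, as a quick arithmetic check confirms:

```python
# Figures as reported in the Sample and Response Rate section.
credential_holders = 45_000  # approximate current CRNA credential holders
invited = 12_000             # randomly selected survey recipients
opened = 6_048               # recipients who opened the e-mail
analyzed = 726               # respondents completing >= 50% of the survey

print(round(100 * invited / credential_holders, 1))  # 26.7 (sampling fraction)
print(round(100 * opened / invited, 1))              # 50.4 (e-mail open rate)
print(round(100 * analyzed / opened))                # 12 (rate among openers)
```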

Results

A binary scale (Yes/No) was used for the "required of all nurse anesthetists" rating. Five-point scales were used for the criticality and frequency ratings, with 5 being the highest rating. The five-point scales are ordinal measures used to evaluate the construct of endorsement for the criticality and frequency of the knowledge statements.

Table 3 shows the summaries of responses for the four core knowledge domains and common topics. All four core domains and the common topics were highly endorsed. For the four core domains, the classical descriptive statistics for criticality and frequency indicate that the mean scale values range from 3.9 (well above the scale midpoint of moderate endorsement) to 4.7 (close to maximum endorsement). The standard deviation statistics describe the spread of the response distributions, with small estimates indicating relatively tight groupings and large estimates indicating relative diversity of opinion. The standard deviations for the criticality and frequency ratings indicate a fair degree of variation; the highest observed standard deviation is 1.3, on common topics. Although a standard deviation of 1.3 still reflects a reasonable level of agreement, it represents enough variability that a 95% confidence interval around a criticality or frequency point estimate could include the rating above or below it.

For common topics, the average criticality rating (2.79) was below the scale midpoint and well below the average criticality ratings for the four core domains. The average frequency ratings for this section were moderate but lower than the average frequency ratings for the four core domains. The ratings on common topics also featured more variation than the ratings on the four core domains.
Though ordinal measures and classical descriptive statistics are useful, the five points on the scale do not possess the qualities of an interval scale; that is, they do not represent equal differences between units and thus are not ideal for the mathematical transformations performed to calculate the content emphasis percentages for the domains. Rasch measures, by contrast, are expressed on an interval scale and factor in respondents' overall relative likelihood to endorse, while preserving the same relative magnitude and ordering of knowledge statements with respect to criticality and frequency. They therefore convey the same information as the descriptive summaries but support the subsequent arithmetic used to calculate domain percentages. Table 4 presents the interval-level Rasch measures of criticality and frequency. The effective range for the Rasch measures is 1 to 6; high values represent strong endorsement, and low values represent low endorsement.
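For readers unfamiliar with the Rasch rating scale model behind these calibrations, it expresses the probability of each rating category as a function of a respondent's endorsement tendency and a statement's calibration. The sketch below is purely illustrative (Python; the threshold and parameter values are made up, not the study's estimates):

```python
import math

def category_probabilities(theta, delta, taus):
    """Andrich rating scale model: return P(rating = k) for k = 0..m,
    given respondent endorsement tendency `theta`, a knowledge-statement
    calibration `delta`, and shared category thresholds `taus`."""
    # Cumulative logits: category 0 contributes 0; each higher category k
    # adds (theta - delta - tau_j) for each threshold j up to k.
    cumulative = [0.0]
    for tau in taus:
        cumulative.append(cumulative[-1] + (theta - delta - tau))
    denom = sum(math.exp(c) for c in cumulative)
    return [math.exp(c) / denom for c in cumulative]

# Made-up values for a five-category scale (e.g., "no harm" .. "extreme
# harm"): four thresholds separate the five categories.
taus = [-2.0, -0.5, 0.5, 2.0]
low = category_probabilities(theta=0.0, delta=0.0, taus=taus)
high = category_probabilities(theta=2.0, delta=0.0, taus=taus)
print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

A respondent with a higher endorsement tendency (larger theta) places more probability on the top categories, which is how the model converts ordinal ratings into interval-scale calibrations.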

TABLE 5

Summaries of Responses for Secondary Knowledge Statements

Secondary Knowledge Statement | "Required" | Criticality Avg. Rating | Criticality Rasch Measure | Frequency Avg. Rating | Frequency Rasch Measure

Airway management
Anatomy | 98.9% | 4.2 | 4.4 | 4.6 | 4.5
Physiologic concepts | 98.6% | 4.1 | 4.1 | 4.6 | 4.2
Pathophysiologic concepts | 98.0% | 4.0 | 3.8 | 4.1 | 3.0
Pharmacology | 99.4% | 4.3 | 4.7 | 4.6 | 4.4
Interpret laboratory and diagnostic study results | 88.5% | 3.5 | 2.5 | 3.8 | 2.2
Airway equipment | 99.6% | 4.5 | 5.6 | 4.7 | 4.6
Management concepts | 98.6% | 4.4 | 5.2 | 4.7 | 4.7

Applied clinical pharmacology
Pharmacokinetics and pharmacodynamics | 95.1% | 3.9 | 3.7 | 4.5 | 3.9
Factors influencing medication selection | 98.3% | 3.9 | 3.7 | 4.6 | 4.2
Interpret laboratory and diagnostic study results | 92.4% | 3.6 | 3.0 | 4.3 | 3.4
Infection prevention principles | 94.4% | 3.5 | 2.7 | 4.5 | 3.8
Adverse pharmacologic reactions | 98.1% | 4.0 | 4.0 | 3.6 | 1.9

Human physiology and pathophysiology
Cardiovascular system | 98.5% | 3.8 | 3.4 | 4.4 | 3.6
Respiratory system | 99.2% | 3.9 | 3.6 | 4.5 | 4.0
Neurologic system | 94.9% | 3.6 | 2.9 | 4.0 | 2.7
Renal system | 96.4% | 3.4 | 2.4 | 4.0 | 2.7
Gastrointestinal system | 92.6% | 3.1 | 1.6 | 3.9 | 2.5
Hematologic system | 93.4% | 3.3 | 2.0 | 3.9 | 2.3
Endocrine system | 93.8% | 3.2 | 1.9 | 3.8 | 2.3
Musculoskeletal system | 94.5% | 3.1 | 1.6 | 3.9 | 2.5
Interpret laboratory and diagnostic study results | 93.3% | 3.5 | 2.7 | 4.2 | 3.1
Factors influencing anesthetic approach, technique, and management | 98.9% | 4.0 | 3.8 | 4.6 | 4.3

Anesthesia equipment and technology
Proper function, malfunction, and troubleshooting complications | 97.7% | 4.0 | 4.0 | 4.3 | 3.4
Safety and infection prevention protocols | 96.3% | 3.5 | 2.7 | 4.4 | 3.5
Anesthetic delivery and clinical monitoring devices | 98.6% | 3.7 | 3.1 | 4.5 | 4.1
Assess, analyze, interpret, and use perioperative data | 98.0% | 3.9 | 3.7 | 4.6 | 4.2

Common topics
Informed consent principles | 96.1% | 3.1 | 1.5 | 4.6 | 4.2
Communication strategies | 89.9% | 2.9 | 1.2 | 4.4 | 3.7
Documentation of care | 95.2% | 2.9 | 1.1 | 4.6 | 4.3

Note. Effective ranges for the average ratings and Rasch measures were 1 to 5 and 1 to 6, respectively.

The Rasch measures for the four core domains were all above the scale midpoint (3.5). The Rasch calibrations for common topics registered in the lower end of both scales.

Table 5 shows ratings on the three scales for the secondary knowledge statements. These data served as a basis for the SMEs' decisions on whether to include secondary knowledge statements on the final outline. The generally high levels of approval for most elements of the outline provide validation evidence for the clinical applicability of most knowledge statements. There were some exceptions, such as the criticality ratings under common topics. Though these quantitative data helped in the decision making regarding which knowledge statements should be retained, the SMEs also used their collective judgment to decide the organization of the final content outline.

Evaluation of Survey Results

Finalizing CPCE Knowledge Statements

On June 18 and 19, 2015, the panel met in Chicago to review the results of the PPA survey and make recommendations for the CPCE content outline. The meeting began with an overview of the meeting's purpose and goals. Then, the panel reviewed an overview of the demographic characteristics of the respondents to confirm that the sample reflected the nurse anesthesia profession. The majority of the meeting was devoted to reviewing summaries of the survey ratings and deciding which knowledge statements to include in the CPCE content outline. Because of the generally high levels of endorsement on almost all elements of the outline, only minor changes were recommended:
⦁ Under the airway management domain, the element Interpret laboratory and diagnostic study results as well as the subdomain Clinical indications, uses, interpretations, limitations of airway laboratory or diagnostic studies related to airway management were deleted.
⦁ Under the applied clinical pharmacology domain, the element Interpret laboratory and diagnostic study results was deleted, and the subdomain Intraoperative monitoring techniques was moved up to the element level.
⦁ The common topics domain and all of its elements were deleted.

Calculation of Domain Weights

The final step was to use a principled approach to establish weights, in the form of percentages, for each primary domain. The domain percentages, which reflect the relative importance of each domain, determine how many test questions will come from each domain. The Rasch calibrations at the domain and subdomain levels served as the basis for computation of the domain weightings. First, for each domain and subdomain, the criticality calibration (C) was multiplied by the frequency calibration (F). After these products were computed, three different methods (Kane, Kingsbury, Colton, & Estes, 1989; Lunz, Stahl, & James, 1989; Stahl, Wang, & Muckle, 2003) were used to calculate the proposed domain weight percentages for the CPCE test blueprint. (See Table 6.)

Method 1. The percentage for each domain was calculated by dividing the Frequency × Criticality product for a domain by the sum of these products across all domains:

q_d = p_d / Σ_d p_d

where q_d is the content percentage for domain d, and p_d is the Frequency × Criticality product for domain d.

Method 2. The percentage for each domain was calculated by dividing the sum of the Frequency × Criticality products for the subdomains within a given domain by the sum of the products for all subdomains:

q_d = (Σ_{s∈d} p_s) / (Σ_s p_s)

where q_d is the content percentage for domain d, and p_s is the Frequency × Criticality product for subdomain s; the numerator sums over the subdomains within domain d, and the denominator sums over all subdomains.

Method 3. The percentage for each domain was calculated by taking the average of the domain weight allocations indicated by respondents on the survey.

TABLE 6

Proposed Domain Weights (%) Using Three Methods

Domain | Method 1 (Domain) | Method 2 (Subdomain) | Method 3 (Survey) | Average
Airway management | 34.9 | 37.9 | 33.0 | 35.3
Applied clinical pharmacology | 25.5 | 19.3 | 25.9 | 23.6
Human physiology and pathophysiology | 19.8 | 26.5 | 23.9 | 23.4
Anesthesia equipment and technology | 19.7 | 16.3 | 17.2 | 17.7

Note. Domain weight percentages were not calculated for common topics because this element was deleted as a result of earlier deliberations.

The three methods produced convergent results regarding the relative importance of the domains. Methods 1 and 3 yielded similar percentages for each domain and the same ordering of the four domains in terms of relative importance. Method 2 demonstrated consistency with Methods 1 and 3 for domains I and IV; the only difference was the relative ordering of domains II and III. To forge a consensus recommendation for the domain percentages, the panel focused on the averages of the three methods, rounded the averages to whole numbers, and adopted the following domain weight percentages:
⦁ Airway management: 34% (51 items)
⦁ Applied clinical pharmacology: 24% (36 items)
⦁ Human physiology and pathophysiology: 24% (36 items)
⦁ Anesthesia equipment and technology: 18% (27 items)

In September 2015, the NBCRNA board reviewed the work of the panel and approved the modifications to the CPCE content outline and the domain percentages.
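As a check on Method 1, the domain-level Rasch calibrations in Table 4 can be multiplied and normalized directly. The sketch below (Python; criticality and frequency values transcribed from Table 4, with common topics excluded because that domain was dropped) reproduces the Method 1 column of Table 6 to within rounding:

```python
# Domain-level Rasch calibrations (criticality, frequency) from Table 4.
calibrations = {
    "Airway management": (5.41, 5.01),
    "Applied clinical pharmacology": (4.32, 4.58),
    "Human physiology and pathophysiology": (3.71, 4.14),
    "Anesthesia equipment and technology": (3.70, 4.13),
}

# Method 1: q_d = p_d / sum(p_d), where p_d = criticality x frequency.
products = {d: c * f for d, (c, f) in calibrations.items()}
total = sum(products.values())
weights = {d: 100 * p / total for d, p in products.items()}

for domain, pct in weights.items():
    print(f"{domain}: {pct:.1f}%")
```

The resulting percentages (about 35.0, 25.5, 19.8, and 19.7) match Table 6's Method 1 values of 34.9, 25.5, 19.8, and 19.7; the small difference in the first figure presumably reflects rounding of the published calibrations.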



Limitations

The limitations of this study are those inherent in research based on survey methodology:
⦁ Response sets: In longer surveys in which the same rating scales are presented repeatedly, respondents may be inclined to provide the same response repeatedly without considering the differences among questions.
⦁ Focus: Respondents may not be fully aware of their reasons for any given answer because of lack of memory on the subject or boredom.
⦁ Topic inclusion: Only topics identified by the panel were included and validated on the survey. Additional topics may have been appropriate, but they were not surveyed because they were not considered or were omitted by the panel. To a certain extent, this was accounted for by including open-response questions on the survey, asking respondents whether any pertinent topics had been unintentionally left off the outline.
⦁ Nonresponse bias: Although the respondent sample was demographically representative, sampling error resulting from question nonresponses may exist. Respondents who choose to respond to a survey may be qualitatively different from those who choose not to respond, thus creating bias. The somewhat controversial nature of the CPCE requirement may have been a contributing factor to nonresponses in this study.
⦁ Interpretation of rating scales: Survey answer options can sometimes lead to unclear data because certain options may be interpreted differently by respondents. For example, the answer option "somewhat agree" may have different meanings to different respondents. In this case, however, the frequency and criticality rating categories were defined as objectively as possible, with little latitude for interpretation.

Conclusion

Credentialing examinations for certification must be demonstrated to be job related (Raymond, 2015). Therefore, professional PPA methodologies have played and continue to play a role in examination content validation, especially in nursing and nursing specialties (Kane et al., 1986). This article has summarized the methods and results of such a study, in which overall respondent data provided strong evidence of validity for the four core domains of the content outline as well as for nearly all knowledge statements included in the content outline. The authors hope that the process and results can serve as a model for establishing the content scope of other licensure and certification examinations and help assure stakeholders and the public that decisions related to the content scope of such examinations are based on sound procedures and empirical evidence.

References

American Educational Research Association. (2014). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
Kane, M. T., Kingsbury, C., Colton, D., & Estes, C. (1986). A study of nursing practice and role delineation and job analysis of entry-level practice of registered nurses. Chicago, IL: National Council of State Boards of Nursing.
Kane, M. T., Kingsbury, G., Colton, D., & Estes, C. (1989). Combining data on criticality and frequency in developing test plans for licensure and certification examinations. Journal of Educational Measurement, 26, 17–27.
Lunz, M. E., Stahl, J. A., & James, K. (1989). Content validity revisited: Transforming job analysis data into test specifications. Evaluation and the Health Professions, 12, 192–206.
McShane, F., & Fagerlund, K. A. (2004). A report on the Council on Certification of Nurse Anesthetists 2001 Professional Practice Analysis. AANA Journal, 72(1), 31–52.
Muckle, T. J., Apatov, N. M., & Plaus, K. (2009). A report on the Council on Certification of Nurse Anesthetists 2007 Professional Practice Analysis. AANA Journal, 77(3), 1–9.
National Commission for Certifying Agencies. (2015). Standards for the accreditation of certification programs. Retrieved from www.credentialingexcellence.org/p/cm/ld/fid=66
National Council of State Boards of Nursing. (2008). APRN consensus model: The consensus model for APRN regulation, licensure, accreditation, certification & education. Retrieved from www.ncsbn.org/736.htm
Plaus, K., Muckle, T. J., & Henderson, J. P. (2011). Advancing recertification for nurse anesthetists in an environment of increased accountability. AANA Journal, 79, 413–418.
Raymond, M. R. (2001). Job analysis and the specification of content for licensure and certification examinations. Applied Measurement in Education, 14, 369–415.
Raymond, M. R. (2015). Job analysis, practice analysis, and the content of credentialing examinations. In S. Lane, M. R. Raymond, & T. M. Haladyna (Eds.), Handbook of test development (2nd ed., pp. 144–164). Mahwah, NJ: Lawrence Erlbaum Associates.
Stahl, J. A., Wang, N., & Muckle, T. J. (2003). A comparison of methods to analyze multi-scale job task analysis data. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
Zaglaniczny, K., & Healey, T. (1998). A report on the Council on Certification of Nurse Anesthetists 1996 Professional Practice Analysis. AANA Journal, 66(1), 43–62.

Volume 7/Issue 3 October 2016

Timothy J. Muckle, PhD, is Senior Director of Testing Programs, National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA), Chicago, Illinois. Karen Plaus, PhD, CRNA, FAAN, CAE, is Chief Executive Officer, NBCRNA. Steve Wooden, DNP, CRNA, NSPM-C, is President, NBCRNA.

www.journalofnursingregulation.com
