THE DEVELOPMENT AND TESTING OF A QUESTIONNAIRE TO MEASURE COMPLEXITY OF NURSING WORK PERFORMED IN NURSING HOMES: NCCQ-NH

Donna Velasquez, PhD, RN, FNP-BC

The quality of care in nursing homes has improved over the last 2 decades; however, serious problems persist. Although staffing levels are a primary concern, studies show that ineffective management structures may be a contributing factor to poor quality care. Evidence suggests that the complexity of work performed within the organization is an important consideration in developing effective management structures. The purpose of this article is to describe the development and initial testing of an instrument to measure the complexity of nursing work in nursing homes. A sample of 168 nursing personnel (RNs, LPNs, CNAs) from 7 nursing homes participated in the study. The results of reliability and validity testing were generally acceptable for a new scale. A modified version of the original scale can be used to provide scientific evidence on which to base the design of management structures in nursing homes. (Geriatr Nurs 2007;28:90-98)

The quality of care in nursing homes has generally improved since the implementation of the Omnibus Budget Reconciliation Act of 1987,1 yet reports of serious problems such as inadequate pain management,2 pressure sores,3 malnutrition,4,5 and urinary incontinence6 persist. Although staffing levels are consistently linked to quality of care, the issue is more complex than a simple “body count.”7 Results of a recent study8 indicated that even in nursing homes with greater numbers of caregivers, there were deficiencies in some care processes. The authors suggested that a lack of effective management structures and mechanisms to ensure quality care may be contributing factors.
Management structure refers to how people are governed and arranged into work groups to achieve the work of an organization.9,10 In most organizations, management structure is based on tradition, organizational culture (the unique beliefs, routines, and rituals characteristic of some types of organizations),11 or the particular preference of the organization’s administrators rather than on scientific evidence.12 However, research has demonstrated that to improve organizational and patient outcomes, management structures should be based on the “technology” or the “complexity” of work performed by the organization.9

The “technology” or complexity of work in nursing has been defined as the acts performed by nursing personnel to change the status of a patient to a discharged person13 and from a client requiring assistance to one who is self-reliant.14 These definitions are too narrow for nursing home populations, where discharge or self-reliance may not be realistic or obtainable for many residents. The term “nursing care complexity” is used in place of “technology” to describe the complexity of work performed by nursing personnel (RNs, LPNs, and CNAs). It includes the knowledge and processes used by nursing personnel to transform the resident to a higher level of biological, emotional, social, and/or spiritual health. Because dying is often an expected outcome in nursing homes, actions that contribute to a comfortable and dignified death are included in the definition. It is hypothesized that as nursing care complexity increases, management structures must become less hierarchical and formalized and become increasingly flexible, with team members having greater autonomy, enhanced communication, and input into decisions about patient care. To use work complexity to guide the design of management structures, it must first be measured. Although several instruments have been used to measure work complexity in a number of work settings,15,16 few instruments have been found that are specific to nursing, and no instruments have been found that are appropriate for use in nursing homes. The purpose of this article is to describe the development and pilot testing of a questionnaire (instrument) to measure the complexity of work in nursing homes and to make recommendations for its application.
Background
Management structures in most nursing homes are based on the hierarchical biomedical models traditionally found in acute care hospitals.17 Hierarchical models are highly formalized; power is concentrated at the top and flows down a strict “chain of command” to lower organizational levels. Communication flows in a vertical, top-down fashion characterized by limited information exchange among team members.18 Care is typically delivered using a standardized “one-size-fits-all” approach with little attention paid to individual resident needs. Highly formalized management structures are thought to be most effective when the work of an organization is routine and predictable, or “low complexity.” Although the work in nursing homes is recognized as being physically and mentally stressful, it is viewed as low complexity, and, as such, hierarchical, formalized management structures with routine approaches to care are thought to be the most effective for achieving desired outcomes.19 However, a number of factors, such as increased resident age, shorter hospital stays, and greater consumer expectations, have contributed to a change in the complexity of work in nursing homes. In 1999, 47% of residents were aged 85 or older compared with just 35% in 1977, and this number is expected to double to 8.9 million by 2030.20,21 As residents age, they are more likely to have increased levels of functional disability, cognitive impairment, frailty, and medication use.1,19,22
With increased medical and psychological needs, residents are more likely to have variable and less predictable responses to medical and nursing interventions.23 Additionally, as a result of greater consumer expectations, rules and regulations affecting nursing homes have increased. It is no longer sufficient to provide solely routine or custodial care. Because of legislation such as the Omnibus Budget Reconciliation Act of 1987, the purpose of nursing homes has been redefined, and it is now mandated that unless residents have specific medical limitations, their ability to perform activities of daily living must be maintained or improved.24 To achieve these goals, it is necessary for nursing homes to adopt social-behavioral models of care that emphasize individualized care strategies focused on rehabilitation and maintenance of residents’ abilities. Studies of interventions to improve dressing independence,25 “abilities-focused” care,26 and individually tailored bathing plans27 have demonstrated the effectiveness of these models among nursing home residents. To implement these programs, nursing home personnel must have knowledge of alternative strategies for caring for increasingly complex resident problems and sufficient autonomy to allow for more flexible approaches. These approaches require higher-level problem-solving and decision-making skills, greatly increasing the complexity of work in nursing homes.
Conceptual Model
The Nursing Care Complexity Questionnaire–Nursing Home (NCCQ-NH) was designed to measure the complexity of work related to patient care performed in nursing homes. Contingency theory provided the theoretical framework for this study. Classical management theory, typical of most nursing homes, proposes that there is only 1 right way to manage an organization.9,28 In contrast, contingency theories hypothesize that effective management structures are based on characteristics of the organization, including the complexity of work performed by the organization.9,29 The dimensions of nursing care complexity are adapted from work by Perrow9 and Hickson, Pugh, and Phesey30 and from later works by Verran31 and Verran and Reid,32 who adapted the concepts for nursing.
Table 1. Nursing Home Characteristics (N = 7)

Nursing Home | Total Number of Medicare-Certified Beds | Profit Status | Multihome or “Chain” Ownership? | Type of Units | Total Number of Direct-Care Staff | Number of Respondents, N (%)
1 | 312 | For profit | Yes | Behavioral/Dementia, LTC, Subacute, Skilled/Medicare | 221 | 62 (28)
2 | 80 | Nonprofit | Yes | Combined LTC, Skilled/Medicare | 43 | 19 (44)
3 | 112 | For profit | Yes | Behavioral/Dementia, LTC, Skilled/Medicare | 55 | 17 (31)
4 | 102 | For profit | No | Behavioral/Dementia, Combined LTC, Skilled/Medicare | 40 | 13 (33)
5 | 93 | For profit | No | Combined LTC, Skilled/Medicare | 50 | 29 (58)
6 | 157 | Nonprofit | Yes | Behavioral/Dementia, LTC, Skilled/Medicare | 83 | 10 (12)
7 | 60 | Nonprofit | Yes | Combined LTC, Skilled/Medicare | 60 | 18 (30)

LTC = long-term care.
The dimensions are as follows: client technology, knowledge technology, and operations technology. Client technology has to do with how well the nursing personnel “know” and “understand” the resident.9,32 Knowledge technology is defined as the number of resident problems or events related to patient care that occur throughout the workday that are not perceived as routine by nursing personnel and the degree of analytical thought, knowledge, or search processes necessary to deal with them.9,32 Operations technology is the amount and sophistication of equipment used while caring for the patient. It also includes the extent to which procedures must be carried out in a specific manner and the degree to which there are poor outcomes if procedures are not performed in the predetermined sequence.30
Study Design and Procedures
Sample and Setting
A convenience sample of 168 nursing personnel (RNs, LPNs, and CNAs) from 7 nursing homes located in the southwestern United States participated in the study. Six of the nursing homes were located in large urban areas, and 1 facility was located in a rural area (Table 1). Inclusion criteria for subjects were as follows: the subject had to provide direct patient care, be 18 years of age or older, and have worked on the same nursing unit for at least 6 weeks. Sample characteristics are reported in Table 2.
Table 2. Demographic Description of Subjects (N = 168) and Retest Subsample (N = 34)

Variable | Category | Initial N (%) | Retest N (%)
Type of unit | Behavioral/dementia | 8 (4.8) | 4 (11.8)
 | LTC | 31 (18.7) | 6 (17.6)
 | Skilled | 22 (13.3) | 4 (11.8)
 | Subacute | 17 (10.2) | 2 (5.9)
 | Combined Skilled/LTC | 70 (42.2) | 11 (32.4)
 | Float pool | 18 (10.7) | 7 (20.6)
 | Missing data | 2 (1.1) | 0
Length on unit | 6 weeks to 3 months | 35 (18.4) | 5 (14.7)
 | 4 months to 1 year | 37 (19.5) | 4 (11.8)
 | >1 year and <5 years | 69 (36) | 11 (32.4)
 | >5 years | 24 (12.6) | 13 (38.2)
 | Missing data | 3 (1.6) | 1 (2.9)
Job title | CNA/NA | 99 (52.1) | 20 (58.8)
 | Healthcare Aide | 1 (.5) | 0
 | Licensed Practical Nurse | 41 (21) | 8 (23.5)
 | Registered Nurse | 21 (11.1) | 6 (17.6)
 | Other | 3 (1.6) | 0
 | Missing | 3 (1.6) | 0
Education level | Some high school | 3 (16.3) | 2 (5.9)
 | GED | 14 (7.4) | 3 (8.8)
 | High school graduate | 31 (16.3) | 4 (11.8)
 | Some college | 63 (33.2) | 12 (35.3)
 | Associate degree | 31 (16.3) | 6 (17.6)
 | Bachelor’s degree | 10 (5.3) | 3 (8.8)
 | Master’s degree | 2 (1.1) | 1 (2.9)
 | Other | 7 (3.7) | 2 (5.9)
 | Missing data | 2 (1.1) | 1 (2.9)
Age group (years) | 18-25 | 30 (15.8) | 6 (17.6)
 | 26-35 | 38 (20) | 4 (11.8)
 | 36-45 | 36 (18.9) | 8 (23.5)
 | 46-55 | 38 (20) | 10 (29.4)
 | 56-65 | 26 (13.7) | 6 (17.6)
 | Missing data | 22 (11.6) | 0
Work status | Full time | 147 (77.4) | 28 (82.4)
 | Part time | 18 (9.5) | 5 (14.7)
 | Missing | 3 (1.6) | 1 (2.9)
Shift worked | Days | 103 (54.2) | 20 (58.8)
 | Evenings | 34 (17.9) | 5 (14.7)
 | Nights | 9 (4.7) | 3 (8.8)
 | Rotating | 18 (9.5) | 5 (14.7)
 | Missing | 4 (2.1) | 1 (2.9)

CNA = certified nurse’s assistant; GED = general education diploma; LTC = long-term care; NA = nurse’s assistant.
Data Collection
After approval by the university Human Subjects Protection Committee, the investigator met with nursing home administrators and nursing directors to develop a recruitment plan suitable for each nursing home. Data were collected over a 2-month period, from mid-July to mid-September 2005.
Data Analysis and Results
Content validity, the extent to which items (individual questions) contained within the instrument adequately represent the construct of interest (nursing care complexity),33-35 was determined by a 2-stage procedure.36
Table 3. Item Analysis and Alpha Coefficients for NCCQ-NH Total Scale and Subscales (N = 168)

Scale | Item Means | Item SD | IC Mean | IC Range | Alpha | Test-Retest (N = 34)
Total scale | 1.70–3.34 | .73–.89 | .16 | –.22 to .66 | .78 | .67
Client technology | 1.70–1.98 | .76–.85 | .32 | .20–.43 | .65 | .05
Operations technology | 2.40–3.34 | .80–.89 | .38 | .22–.51 | .78 | .31
Knowledge technology | 2.45–3.06 | .73–.87 | .29 | .13–.66 | .79 | .70

IC = interitem correlations; NCCQ-NH = Nursing Care Complexity Questionnaire–Nursing Home.
First, domains of nursing care complexity (client technology, operations technology, and knowledge technology) were defined based on a comprehensive review of the literature and the researcher’s clinical experience. New items were generated by the researcher, with additional items adapted with permission from existing instruments. The second stage consisted of a judgment-qualification process. Based on a detailed description of nursing care complexity and its domains, 6 experts were asked to rate the relevance of each item on a 4-point scale, from not relevant (1) to very relevant (4). A content validity index (CVI) was calculated for each item as the number of experts rating the item 3 (relevant with minor alterations) or 4 (very relevant) divided by the total number of experts. At least 5 of the 6 experts had to rate an item as a 3 or 4 (CVI = .83) for the item to meet the recommended minimum level of agreement.34,36 Of 60 items submitted to the expert panel, 52 were rated as relevant. An additional 7 items were deleted because the researcher judged them to be redundant. The total number of items retained for this study was 45.
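For readers who wish to verify the item-level calculation, the short sketch below (Python) computes a CVI from a set of expert ratings. The ratings shown are hypothetical; only the 6-expert panel, the 4-point relevance scale, and the .83 retention criterion come from the procedure described above.

```python
# Illustrative content validity index (CVI) calculation.
# The expert ratings below are hypothetical; in the study, 6 experts rated
# each item from 1 (not relevant) to 4 (very relevant).

def item_cvi(ratings):
    """Proportion of experts rating an item 3 or 4 (relevant)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

expert_ratings = {
    "item_01": [4, 3, 4, 4, 3, 4],   # CVI = 1.00 -> retained
    "item_02": [4, 3, 3, 4, 4, 2],   # CVI = 0.83 -> meets the 5-of-6 criterion
    "item_03": [2, 3, 4, 2, 3, 2],   # CVI = 0.50 -> dropped
}

for item, ratings in expert_ratings.items():
    cvi = item_cvi(ratings)
    decision = "retain" if cvi >= 5 / 6 else "drop"   # 5 of 6 experts = .83
    print(f"{item}: CVI = {cvi:.2f} ({decision})")
```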
Scoring of Items
Responses to items on the NCCQ-NH were scored using a 4-point Likert scale with possible answers ranging from 1 = never to 4 = frequently. The 4-point scale was selected to avoid the use of a neutral position (the midposition of odd-numbered scales, which indicates neither agreement nor disagreement with a statement). Additionally, only the “1” and “4” positions were labeled (end-anchored) to avoid end-aversion bias, which is the tendency of respondents to avoid the stronger statements usually located at the ends of rating scales. This is done to produce greater variability in responses.36

Reliability
Reliability refers to how consistently the instrument measures a particular attribute and is a major criterion for assessing the quality of an instrument.37,38 Two commonly reported measures of reliability, internal consistency and stability, were used in this study. Internal consistency is the extent to which items measure the same characteristic; stability is the extent to which similar scores are produced by the same people on different occasions.38,39 Cronbach’s alpha is frequently used to assess internal consistency, and test-retest procedures are typically used to assess stability.38,39 Reliability results for the total scale and subscales are reported in Table 3. A Cronbach’s alpha of .70 has been suggested as an acceptable lower bound for the early stages of scale development.40 Cronbach’s alpha for the total scale and 2 of the 3 subscales met this criterion (total scale = .78, operations technology = .78, knowledge technology = .79). Alpha for the 4-item client technology subscale was .65. Stability was estimated using a 2-week test-retest procedure. The test-retest correlation was .67 for the total scale and .70 for the knowledge technology subscale. Test-retest correlations for the client technology and operations technology subscales were low (.05 and .31, respectively).
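The reliability estimates above can be reproduced from raw item responses with standard formulas. The sketch below is a minimal illustration, assuming responses are stored as a respondents-by-items array on the 1–4 metric; the data shown are invented, and the 2-week stability estimate is computed here as a Pearson correlation of scale scores across the two administrations.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents answering a 4-item subscale on the 1-4 metric.
time1 = np.array([
    [2, 3, 2, 3],
    [1, 2, 2, 2],
    [3, 4, 3, 4],
    [2, 2, 3, 3],
    [4, 4, 3, 4],
    [1, 1, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(time1):.2f}")

# Hypothetical 2-week retest on the same respondents; stability is estimated
# as the Pearson correlation between the two sets of scale (mean item) scores.
rng = np.random.default_rng(0)
time2 = np.clip(time1 + rng.integers(-1, 2, time1.shape), 1, 4)
stability = np.corrcoef(time1.mean(axis=1), time2.mean(axis=1))[0, 1]
print(f"2-week test-retest r = {stability:.2f}")
```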
Construct Validity
Validity is the extent to “which an instrument measures what it is supposed to be measuring” (p. 308) and is the second major criterion for evaluating the quality of an instrument.38 Construct validity attempts to determine which construct the instrument is actually measuring.38 Three methods were used in this study to evaluate construct validity: factor analysis, convergence, and the ability of the instrument to distinguish between types of nursing care units found within the nursing home.

Factor analysis is a mathematical procedure that identifies clusters of related items within an instrument; the resulting clusters are called factors. Items must load significantly (>|.30|)41-43 on a factor to be considered as belonging to that factor. Although the procedure and loadings are objective, interpretation of the meaning and labeling of factors relies on researcher experience and judgment.38 Factor analysis was performed using principal axis factoring (PAF) with varimax rotation. Factor loadings ranged from .33 to .69. Only 1 item loaded significantly on more than 1 factor, indicating that the item could be measuring more than 1 concept; however, it loaded slightly higher on the factor to which it was theorized as belonging and was ultimately retained on the subscale developed to measure that concept. The total amount of variance explained by the 3 factors was 36.19%. Variance is a measure of variability, or how much scores vary from each other and from the mean (average score).44 “Total variance explained” is the amount of variability in all items explained by the underlying factor structure of the entire questionnaire. A generally acceptable level of explained variance for a new scale is approximately 50% (Joyce Verran, January 10, 2007, personal communication). However, although the goal is to maximize total explained variance, there are a number of considerations, and thus there are no “magic” numbers (p. 483).43
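A factor-analytic check of this kind can be sketched with the third-party factor_analyzer package (an assumption here; the study does not name its software). The sketch below uses that package’s “principal” extraction option as the closest available analogue to principal axis factoring, applies a varimax rotation, flags loadings above |.30|, and reports cumulative explained variance; the response matrix is simulated purely for illustration.

```python
# Sketch of the factor-analytic check, assuming the third-party
# factor_analyzer package (pip install factor_analyzer); the response
# matrix is simulated, and method="principal" is used as the closest
# available analogue to the principal axis factoring reported above.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(42)
items = pd.DataFrame(
    rng.integers(1, 5, size=(168, 15)),               # hypothetical 168 x 15 response matrix
    columns=[f"item_{i:02d}" for i in range(1, 16)],
)

fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(
    fa.loadings_, index=items.columns, columns=["factor_1", "factor_2", "factor_3"]
)
print(loadings.round(2))
print(loadings.abs() > 0.30)          # items "load" on a factor when |loading| > .30

# Cumulative proportion of item variance explained by the 3 rotated factors.
_, _, cumulative = fa.get_factor_variance()
print(f"total variance explained: {cumulative[-1]:.1%}")
```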
Two instruments, the Work Unit Technology Scale14 and the Modified Magnitude Estimate of Nursing Care Complexity,31 were selected to test convergence, the extent to which an instrument correlates with other instruments developed to measure the same construct.39 Although these instruments were developed to measure aspects of work complexity, neither was developed to measure all of the dimensions of nursing care complexity as conceptualized here. Therefore, although it was hypothesized that there would be a positive correlation between the NCCQ-NH total scale and the established instruments, the correlations were not expected to be high. The NCCQ-NH total scale correlated significantly and positively with both the Work Unit Technology Scale and the Modified Magnitude Estimate of Nursing Care Complexity. However, as expected, the correlations were low (r = .21, P = .008 and r = .31, P < .001, respectively).

A 1-way analysis of variance (ANOVA) was performed to determine to what extent the instrument was able to discriminate among nursing subunits within the nursing home. Three subunits were selected for this analysis: skilled (n = 22), long-term care (LTC; n = 31), and subacute (n = 17). There were no significant differences among the 3 units on the client technology and operations technology subscales. However, there was a significant difference between LTC (mean = 2.56, SD = .49) and subacute units (mean = 2.93, SD = .48) on the knowledge technology subscale.
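The convergence and discrimination analyses correspond to a Pearson correlation and a 1-way ANOVA, which the hedged sketch below illustrates with SciPy on simulated scores. The LTC and subacute group parameters loosely follow the values reported above; the remaining numbers are invented to make the example concrete.

```python
# Convergent validity (Pearson r) and unit discrimination (1-way ANOVA)
# with SciPy; all scores below are simulated stand-ins for the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
nccq_total = rng.normal(2.7, 0.45, 168)                        # hypothetical NCCQ-NH totals
comparison_scale = 0.2 * nccq_total + rng.normal(0, 0.4, 168)  # weakly related comparison measure

r, p = stats.pearsonr(nccq_total, comparison_scale)            # expect a modest positive r
print(f"convergence: r = {r:.2f}, P = {p:.3f}")

# Simulated knowledge technology scores by unit type; LTC and subacute
# parameters roughly match the reported means/SDs, skilled is invented.
skilled = rng.normal(2.85, 0.45, 22)
ltc = rng.normal(2.56, 0.49, 31)
subacute = rng.normal(2.93, 0.48, 17)
f_stat, p_val = stats.f_oneway(skilled, ltc, subacute)
print(f"ANOVA across units: F = {f_stat:.2f}, P = {p_val:.3f}")
```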
Modified Scale
Because of the low reliability scores for some of the subscales, analyses were performed to determine whether the instrument functioned better as a single scale. Using factor analysis, all items were forced onto a single factor. Not all items loaded significantly onto the single factor, indicating that the items were not measuring a single attribute. However, when items from the client technology subscale were deleted, all remaining items loaded significantly onto a single factor. Cronbach’s alpha for the newly created scale was .82, and test-retest reliability was .72. Total explained variance for the single scale was 25%. Correlation with the Modified Magnitude Estimate of Nursing Care Complexity was .30 (P < .001). There was no significant correlation with the Work Unit Technology Scale. Discrimination among subunits was not significant, but the pattern was what would be expected with regard to nursing care complexity among the units (subacute mean = 2.97, SD = .32; skilled mean = 2.89, SD = .42; LTC mean = 2.74, SD = .48).
Discussion
This study describes the development and initial psychometric analysis of the NCCQ-NH, an instrument to measure the complexity of nursing care delivered in nursing homes. The reliability and construct validity of a modified instrument that combines 2 subscales (operations technology and knowledge technology; Table 4) have been established to the extent that it can provide a measure of nursing care complexity in nursing homes. Previous studies of work complexity in nursing agencies have had somewhat different results than studies of nonnursing organizations (academic libraries,16 government agencies,15 and human service agencies45). It may be that the instruments used in nursing studies measure work complexity differently than the instruments used to study nonnursing agencies. One instrument, developed to measure nursing tasks in acute care settings,46 has been used frequently in nursing studies.13,47-50 That instrument was based on the concepts described by Perrow9 (client and knowledge technology) and a third concept, “task interdependence,” described by Hickson et al.30 However, factor analysis revealed considerable overlap of items across the conceptualized groups or factors, and the resulting factors were conceptualized as uncertainty, instability, and variability. Although the instrument was later revised, these concepts were retained.51 In contrast, the NCCQ-NH is more consistent with the concepts originally described by Perrow.9

The strengths of the NCCQ-NH are its ease of use and its appropriateness for nursing homes. Study limitations include the small sample size (N = 168) and the low alpha for the client technology subscale. Although the lower number of items on the client subscale (n = 5) is the most likely reason for the low alpha, the small sample size may have also contributed to this finding.
Table 4. Nursing Care Complexity Questionnaire–Nursing Home (Shortened Version)

1. How often is it necessary that patient care be given in a certain order?
2. How often would there be a bad outcome if patient care was not given in a certain order?
3. How often do you perform patient care procedures that must be completed in a certain order?
4. How often do your patients require that you care for them in a certain way?
5. How often do patient care procedures have better results if you do them in a certain order?
6. How often is it necessary for you to follow patient care procedures step-by-step?
7. How often do you come across new or different kinds of problems while giving patient care?
8. How often does your work change because of a patient’s condition or mood?
9. How often do you encounter unfamiliar or unexpected events while caring for patients?
10. How often do things happen on your unit that make it necessary to change the way you give patient care?
11. How often is there something “new” happening on your job that affects how you give patient care?
12. How often do you have to think about how to solve problems that happen while you are giving patient care?
13. How often do your patient care actions require extra thought rather than just being able to rely on standard procedures or guidelines?
14. When there is more than one way to perform a patient care procedure (feeding, bathing, dressing, etc.), how often can you choose the method or way you think is best for the patient?
15. How often does the patient care you give rely on intuition (your “gut feeling”) rather than on set procedures or routines?
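Table 4 lists the 15 retained items. The article reports results on the 1 (never) to 4 (frequently) item metric, so the sketch below assumes, for illustration only, that a respondent-level complexity score is the mean of the 15 item responses; this scoring rule is an assumption, not one specified by the instrument.

```python
# Scoring sketch for the shortened NCCQ-NH in Table 4. Items are answered on
# the 1 (never) to 4 (frequently) metric; averaging the 15 responses into one
# complexity score is an illustrative assumption, not a rule from the article.
from statistics import mean

def nccq_nh_score(responses):
    """Mean of the 15 item responses; higher values suggest more complex nursing care."""
    if len(responses) != 15 or not all(1 <= r <= 4 for r in responses):
        raise ValueError("Expected 15 responses, each between 1 and 4.")
    return mean(responses)

# One hypothetical respondent.
answers = [3, 2, 3, 4, 3, 3, 4, 3, 3, 2, 3, 4, 3, 2, 3]
print(f"NCCQ-NH complexity score: {nccq_nh_score(answers):.2f}")
```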
Additionally, stability reliability (2-week test-retest) for the client technology and operations technology subscales was low. However, when LTC units were eliminated from the analysis, the test-retest reliability for the operations technology scale was acceptable (.72) and much improved for the client technology scale (.51).
Long-term care units (compared with subacute and skilled units) are generally thought to be stable over time with regard to work complexity, and therefore the low stability reliabilities were unexpected. More research is needed to understand the nature of work in these units and to improve the reliability of the client technology scale. Until the client technology subscale is refined, it is recommended that the modified single scale be used to measure nursing care complexity.

Implications for Nursing Practice
Nurses have the responsibility for determining how best to manage the delivery of care in nursing homes. Currently, the management structure in nursing homes is based on hierarchical biomedical models best suited for low-complexity care. However, a number of changes have occurred in the past 2 decades, increasing the complexity of care delivered in nursing homes and indicating a need to move from traditional management structures to more flexible structures that enhance the autonomy, communication, and decision-making power of the nursing personnel who care for residents. Although management structures in nursing homes are typically based on administrative preferences and organizational culture rather than scientific evidence, measuring nursing care complexity may provide the information needed to guide the evidence-based development of effective management structures.

REFERENCES
1. Institute of Medicine. Improving the quality of long-term care. Washington, DC: National Academy Press; 1996.
2. Jones KR, Fink R, Vojir C, et al. Translation research in long-term care: improving pain management in nursing homes. Worldviews Evid Based Nurs 2004;Suppl 1:S13-20.
3. Ooi WL, Morris JN, Brandeis GH, et al. Nursing home characteristics and the development of pressure sores and disruptive behavior. Age Ageing 1999;28:45-52.
4. Clarke D, Wahlqvist M, Strauss B. Undereating and undernutrition in old age: integrating bio-psychosocial aspects. Age Ageing 1998;27:527-34.
5. Kayser-Jones J, Schell E. The mealtime experience of cognitively impaired elder: ineffective and effective strategies. J Gerontol Nurs 1997;23(7):33-9.
6. Schnelle JF, Leung FW. Urinary and fecal incontinence in nursing homes. Gastroenterology 2004;126:S41-7.
7. Kane RL. Commentary: nursing home staffing-more is necessary but not necessarily sufficient. Health Serv Res 2004;39:251-6.
8. Schnelle JF, Simmons SF, Harrington C, et al. Relationship of nursing home staffing to quality of care. Health Serv Res 2004;39:225-50.
9. Perrow C. A framework for the comparative analysis of organizations. Am Sociol Rev 1967;32:194-208.
10. Van de Ven AH. A framework for organizational assessment. Acad Manage Rev 1976;64-78.
11. Morgan G. Images of organization. London: Sage Publications; 1997.
12. Langfred CW, Moye NA. Effects of task autonomy on performance: an extended model considering motivational, informational, and structural mechanisms. J Appl Psychol 2004;89:934-45.
13. Alexander JW, Mark B. Technology and structure of nursing organizations. Nurs Healthcare 1990;11:195-9.
14. Alexander JW, Bauerschmidt AD. Implications for nursing administration of the relationship of technology and structure to quality of care. Nurs Adm Q 1987;11:1-10.
15. Withey M, Daft RL, Cooper WH. Measures of Perrow’s work unit technology: an empirical assessment and a new scale. Acad Manage J 1983;26:45-63.
16. Lynch BP. An empirical assessment of Perrow’s technology construct. Adm Sci Q 1974;35:338-56.
17. Deutschman M. Interventions to nurture excellence in the nursing home. J Gerontol Nurs 2001;37-43.
18. Colon-Emeric CS, Ammarell N, Bailey D, et al. Patterns of medical and nursing staff communication in nursing homes: implications and insights from complexity science. Qual Health Res 2006;16:173-88.
19. Maas M, Buckwalter K, Specht J. Nursing staff and quality of care in nursing homes. In: Nursing staff in hospitals and nursing homes: is it adequate? Washington, DC: National Academy Press; 1996. p. 361-425.
20. Centers for Disease Control and Prevention/National Center for Health Statistics. National Nursing Home Survey 1999. Available at: www.cdc.gov/nchs/nnhs.htm. Cited January 10, 2007.
21. Eaton SC. Frontline caregivers in nursing facilities: can policy help in recruitment and retention crisis? Public Policy Aging Rep 2003;13(2):8-11.
22. Ray W. Improving quality of long-term care. Med Care 2000;38:1151.
23. Banaszak-Holl J, Hines MA. Factors associated with nursing home staff turnover. Gerontologist 1996;36:512-17.
24. Turnham H. Federal Nursing Home Reform Act from the Omnibus Budget Reconciliation Act of 1987. Available at: www.ltcombudsman.org/uploads/OBRA87summary.pdf. Cited January 30, 2005.
25. Beck C, Heacock P, Mercer SO, et al. Improving dressing behavior in cognitively impaired nursing home residents. Nurs Res 1997;46:126-32.
26. Wells L, Dawson P, Sidani S, et al. Effects of an abilities-focused program of morning care on residents who have dementia and on caregivers. J Am Geriatr Soc 2000;48:442-9.
27. Barrick AL, Rader J, Hoeffer B, et al. Bathing without a battle: personal care of individuals with dementia. New York: Springer Publishing Company; 2002.
28. Mark BA, Salyer J, Smith CS. A theoretical model for nursing systems outcomes research. Nurs Adm Q 1996;20(4):12-27.
29. Mark BA. Structural contingency theory. In: Henry B, Arndt C, DiVincenti M, et al., editors. Dimensions of nursing administration. Boston: Blackwell Scientific Publications; 1998. p. 175-84.
30. Hickson DJ, Pugh DS, Phesey DC. Operations technology and organization structure: an empirical reappraisal. Adm Sci Q 1969;14:378-97.
31. Verran JA. Testing a classification instrument for the ambulatory care setting. Res Nurs Health 1986;9:279-87.
32. Verran JA, Reid PJ. Replicated testing of the nursing technology model. Nurs Res 1987;36:190-4.
33. Polit DF, Beck CT. Nursing research: principles and methods. 7th ed. Philadelphia: Lippincott Williams & Wilkins; 2004.
34. Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health 2006;29:489-97.
35. Waltz CF, Strickland OL, Lenz ER. Measurement in nursing and health research. 3rd ed. New York: Springer Publishing Company; 2005.
36. Lynn MR. Determination and quantification of content validity. Nurs Res 1986;35:382-5.
37. Streiner DL, Norman GR. Health measurement scales. 2nd ed. New York: Oxford University Press; 1995.
38. Polit DF, Beck CT, Hungler BP. Essentials of nursing research: methods, appraisal, and utilization. 5th ed. Philadelphia: Lippincott Williams & Wilkins; 2001.
39. van Herk R, van Dijk M, Barr FPM, et al. Observation scales for pain assessment in older adults with cognitive impairments or communication difficulties. Nurs Res 2007;56:34-43.
40. Nunnally JC. Psychometric theory. 2nd ed. New York: McGraw-Hill; 1978.
41. Field A. Discovering statistics using SPSS for Windows. London: Sage; 2000.
42. Pett MA, Lackey NR, Sullivan JJ. Making sense of factor analysis. Thousand Oaks, CA: Sage; 2003.
43. Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.
44. Salkind NJ. Statistics for people who think they hate statistics. London: Sage; 2004.
45. Glisson CA. Dependence of technological routinization on structural variables in human services organizations. Adm Sci Q 1978;23:383-95.
46. Overton P, Schneck R, Hazlett CB. An empirical study of the technology of nursing subunits. Adm Sci Q 1977;22:203-19.
47. Alexander J, Kroposki M. Using a management perspective to define and measure changes in nursing technology. J Adv Nurs 2001;35:776-83.
48. Alexander JW, Randolph WA. The fit between technology and structure as a predictor of performance in nursing subunits. Acad Manage J 1985;28:844-59.
49. Cumbey DA, Alexander JW. The relationship of job satisfaction with organizational variables in public health nursing. J Nurs Adm 28(5):39-46.
50. Loveridge CE. Contingency theory: explaining staff nurse retention. J Nurs Adm 1988;18(6):22-5.
51. Leatt P, Schneck R. Nursing subunit technology: a replication. Adm Sci Q 1981;26:225-36.

DONNA VELASQUEZ, PhD, RN, FNP-BC, is a clinical associate professor at the University of Arizona College of Nursing in Tucson, Arizona.

0197-4572/07/$ - see front matter
© 2007 Mosby, Inc. All rights reserved.
doi:10.1016/j.gerinurse.2007.01.009