a preoperative CT scan within 90 days of surgery. Trunk muscle size (total psoas area, or TPA) at the L4 vertebral level was calculated using analytic morphomic techniques and normalized by gender using standard deviation units. We used univariate logistic regression and ordered logistic regression to assess the relationship between TPA and activities of daily living (ADL), a main component of VESPA. Results: Increased trunk muscle size was associated with a decreased likelihood of impairment in any instrumental ADL (IADL), a sensitive marker of early functional decline (OR=0.53 per one unit SD increase, 95% C.I. 0.34-0.81; P=0.004). As shown in Figure 1, larger trunk muscle size was inversely correlated with loss of function and/or reliance on others to perform six of the eight individual IADLs (driving, finances, grocery shopping, housekeeping, laundry, and meal preparation). Concerning overall burden of disability, increases in TPA were related to reduced IADL dependency count (OR=0.48 per additional impairment, 95% C.I. 0.31-0.74; P=0.001). Because basic ADL impairments were rare, we did not find a significant relationship between TPA and any individual ADL besides transferring (OR=0.22, 95% C.I. 0.12-0.85; P=0.022). However, increased TPA was still related to the count of basic ADLs (OR=0.35 per additional impairment, 95% C.I. 0.14-0.87; P=0.022). Notably, age alone did not significantly predict IADL (P=0.085) or basic ADL impairment (P=0.69). Conclusions: Core muscle size is closely related to older patients’ preoperative functional impairment, with the greatest correlation to instrumental activities of daily living. These results suggest morphomic measures may be useful as early predictors of future functional decline. Further work is needed to compare the efficacy of analytic morphomics and comprehensive geriatric assessments in terms of surgical risk assessment.
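As a rough illustration of the modeling described above (not the authors' code), the sketch below fits a univariate logistic regression for any IADL impairment and an ordered logistic regression for the IADL dependency count against gender-standardized TPA using statsmodels. The dataframe, the column names (tpa_sd, iadl_impaired, iadl_count), and the synthetic data are assumptions for illustration only.

# Minimal sketch, assuming hypothetical column names and synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tpa_sd": rng.normal(size=200),                    # TPA in SD units, normalized within gender
    "iadl_impaired": rng.binomial(1, 0.3, 200),        # any instrumental ADL impairment
    "iadl_count": rng.integers(0, 9, 200),             # number of impaired IADLs (0-8)
})

# Univariate logistic regression: odds of any IADL impairment per 1-SD increase in TPA.
logit = sm.Logit(df["iadl_impaired"], sm.add_constant(df["tpa_sd"])).fit(disp=False)
or_per_sd = np.exp(logit.params["tpa_sd"])
ci_low, ci_high = np.exp(logit.conf_int().loc["tpa_sd"])
print(f"OR per 1-SD TPA: {or_per_sd:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Ordered logistic regression: IADL dependency count treated as an ordinal outcome.
ordered = OrderedModel(df["iadl_count"], df[["tpa_sd"]], distr="logit").fit(
    method="bfgs", disp=False)
print(f"OR per 1-SD TPA (ordinal model): {np.exp(ordered.params['tpa_sd']):.2f}")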
37.4. Impact of Hospital Characteristics on Failure to Rescue Following Major Surgery. K. H. Sheetz,1 J. B. Dimick,1 J. D. Birkmeyer,1 A. A. Ghaferi1; 1University Of Michigan Surgery, Ann Arbor, MI, USA Introduction: An increasing body of literature supports failure to rescue (mortality following a major complication) as an important driver of variation in surgical mortality after high-risk operations. However, little is known about which hospital characteristics distinguish high-performing hospitals from hospitals with low rescue rates. Methods: We identified 2,180,789 patients using the Medicare MEDPAR file (2007-2010) with surgical admissions for abdominal aortic aneurysm repair, lower-extremity amputation, lower-extremity revascularization, colectomy, or hip fracture. Using multilevel mixed-effects logistic regression modeling, we evaluated how failure to rescue rates were influenced by 7 hospital characteristics previously shown to be associated with postsurgical outcomes (nurse-patient ratio, Council of
Teaching Hospital status, bed size, hospital occupancy, critical access status, ICU bed size, and high technology, defined as hospitals that perform cardiac surgery or transplantation). We used variance partitioning to determine the relative influence of patient and hospital characteristics on the between-hospital variability in failure to rescue rates. Results: Failure to rescue rates varied up to 2.6-fold between very high and very low mortality hospitals. We observed that teaching status (range: OR 0.84-0.92), high hospital technology (range: OR 0.75-0.95), hospital size greater than 200 beds (range: OR 0.71-0.85), and presence of an ICU with greater than 20 beds (range: OR 0.82-0.93) significantly reduced failure to rescue rates for all procedures. Other factors were also important to individual operations. For example, following colectomy, average hospital occupancy greater than 50% (OR 0.86, 95% CI 0.80-0.92) was associated with lower failure to rescue rates while critical access status (OR 2.01, 95% CI 1.43-2.75) had a negative influence. When taken together, these hospital characteristics accounted for 2.5% (hip fracture) to 11.5% (abdominal aortic aneurysm repair) of the observed variation in risk-adjusted failure to rescue rates across hospitals. Conclusions: While several hospital characteristics are associated with improved rescue of patients from major complications, these macro-system factors explain a small proportion of the variability between hospitals. These findings suggest other micro-system level characteristics, such as hospital culture and safety climate, may play a larger role in improving a hospital’s ability to recognize and effectively manage postoperative complications.
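One common way to operationalize the variance partitioning described above is to compare the hospital-level (random-intercept) variance of a mixed-effects logistic model before and after adding hospital-level covariates, taking pi^2/3 as the latent residual variance of the logit link. The sketch below is illustrative only, not the study's method in detail: the function names are hypothetical and the variance values are placeholders chosen to reproduce the 11.5% figure reported for abdominal aortic aneurysm repair.

# Minimal sketch of latent-scale variance partitioning for a random-intercept logistic model.
import math

RESIDUAL_VAR_LOGIT = math.pi ** 2 / 3  # latent residual variance under a logit link


def icc(hospital_var: float) -> float:
    """Share of total latent variance attributable to between-hospital differences."""
    return hospital_var / (hospital_var + RESIDUAL_VAR_LOGIT)


def variance_explained(var_patient_only: float, var_with_hospital_covs: float) -> float:
    """Proportion of between-hospital variance explained by hospital characteristics."""
    return (var_patient_only - var_with_hospital_covs) / var_patient_only


# Hypothetical example: random-intercept variance 0.20 with patient covariates only,
# falling to 0.177 once the 7 hospital characteristics are added (about 11.5%).
print(f"ICC: {icc(0.20):.3f}")
print(f"Between-hospital variance explained: {variance_explained(0.20, 0.177):.1%}")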
37.5. Patient Satisfaction In Real-Time: Inpatient Experience Informs Providers and Satisfaction Scores. M. Gupta,1 L. Fleisher,2 N. Fishman,2 S. Raper,1,2 J. S. Myers,2 R. R. Kelz1; 1University Of Pennsylvania - Surgery, Philadelphia, PA, USA; 2University Of Pennsylvania Clinical Effectiveness And Quality Improvement, Philadelphia, PA, USA Introduction: The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is distributed to patients >30 days from hospital discharge and now contributes to hospital reimbursements. The poor overall response rate (20%) and slow survey response time (>90 days) delay communication between patients and healthcare providers regarding patient satisfaction with hospital care, suggestions for change, and opportunities for healthcare/quality improvement. Our group sought to understand the experience of hospitalized patients in real time. Methods: We conducted a 3-month pilot study using a novel mobile communication platform that captured patient-generated comments and ratings during the inpatient stay. Responses were electronically communicated to providers via a secured streaming dashboard. Providers were encouraged to use this information to identify and address actionable items, document their actions, and obtain patient follow-up. HCAHPS results were collected for the study period over the course of 6 months. Success metrics set forth included a raw 2.0 increase in each corresponding HCAHPS measure, 30% overall patient engagement, and 50% positive provider response to the intervention. Real-time ratings were calculated using net promoter score methodology, and a paired analysis of time-matched HCAHPS scores focusing on two measures was performed. Qualitative analysis of dashboard comments was also performed, and patients’ healthcare providers were surveyed to identify effective processes and balancing measures. Results: Of 1,828 patients enrolled on 2 medical and 5 surgical units, 485 (26.5%) provided hospital ratings and feedback for improvement (n=1236 comments). Responses were grouped into several categories, both original items and items found in reimbursement-based satisfaction surveys, including: nursing communication (12.9%), physician communication (9.0%), pain management (14.6%), nutrition (10.6%), environmental/noise (8.9%), emotional support (3.4%), etc. Overall patient satisfaction received a net promoter score of +56 (scale -100 to +100). We observed an increase in time-matched HCAHPS scores
for all units in our pilot (Table 1). Provider feedback was extracted weekly, and 65% of providers felt that the feedback from patients was useful. Conclusions: Real-time inpatient feedback regarding hospital care is feasible and yields valuable information on factors that are important to inpatients. It also facilitates bidirectional communication between patients and their providers, uncovers immediately actionable issues to improve inpatient satisfaction, and was found useful by patients and providers.
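The abstract reports an overall net promoter score of +56 on a -100 to +100 scale. As a hedged illustration of standard net promoter score methodology (the abstract does not specify its rating scale or cut-points, so the 0-10 scale with 9-10 as promoters and 0-6 as detractors below is an assumption), a minimal sketch:

# Net promoter score: percentage of promoters minus percentage of detractors.
from typing import Iterable


def net_promoter_score(ratings: Iterable[int]) -> float:
    """Return NPS on the -100 to +100 scale (ratings assumed to be 0-10)."""
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)


# Hypothetical example: 70% promoters, 16% passives, 14% detractors gives +56,
# matching the overall score reported above.
ratings = [10] * 70 + [8] * 16 + [3] * 14
print(net_promoter_score(ratings))  # 56.0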
37.6. Surgical Instruments, Supplies and Efficiency in the Operating Room. J. M. Mhlaba,1 A. Langerman,1 J. Alverdy1; 1University Of Chicago, Chicago, IL, USA Introduction: Because operating rooms generate a large portion of a hospital’s revenue and are intimately linked to patient safety, improving efficiency in the OR is a high priority. Little prior research has investigated the impact of surgical instrumentation and supplies on the OR and the costs associated with them. Our objective was to study and quantify the waste associated with opened but unused instruments and supplies. We also sought to understand supply usage patterns to determine whether inefficiencies exist that disrupt surgical team flow. Methods: Twenty surgical cases were observed. The number and names of instruments and of select common disposable items were recorded, and average costs were calculated. The time and reasons associated with the circulating nurse exiting the OR between incision and closure were also recorded. Results: Overall, instrument utilization was low (19%). Of the five observed cases that utilized the Major Laparotomy tray, average instrument utilization was 29%. Only 7% of instruments were used in all observed cases, 45% were used in some of the observed cases, and 48% were used in none. For the eight cases that utilized the Plastic Soft Tissue tray, average instrument utilization was 14%. Only 1% was used in all, 38% were used in some, and 61% were used in none. Regarding supplies, across the 20 observed cases, the item most commonly opened and unused was the X-Ray Detectable Sponge (71%), resulting in an average of $3.26 of waste per case. The most costly item was surgical suture, 29% of which was left opened and unused on average, resulting in an estimated $29.08 of waste per case. The average total estimated cost of wasted supplies per case for the seven items recorded was $48.43. For 11 cases, the time and reasons associated with the circulating nurse exiting the OR were measured and compared to total operating time. On average, the circulating nurse was absent from the OR for 6% of operating time. The most common reason for exiting the room was retrieval of supplies (45%). Of that time, the items most commonly retrieved were instruments (23%), sutures (23%), and disposable items (16%). Conclusions: Our early data suggest that there are significant areas of inefficiency related to surgical instrumentation and supplies in the OR, with substantial cost implications. Though a large percentage of the instruments in the observed trays were unused (81%) and a substantial portion of opened disposable items were unused (33%), the circulating nurse still spent 6% of operating time out of the room in pursuit of additional instruments and supplies. As the circulator is often the only team member able to complete important tasks in the OR, reducing this time is important for both efficiency and patient safety. Though further study is needed to confirm these early findings, these data strongly suggest a flaw in the design of OR organization and an opportunity for potential restructuring of instruments and supplies.
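A hedged sketch of the per-case waste arithmetic underlying the figures above: multiply each item's unit cost by the average number opened but unused per case and sum across the items tracked. The item names follow the abstract, but the unit costs and per-case counts are hypothetical placeholders chosen only so the totals land near the reported $3.26 and $29.08; they are not the study's data.

# Per-case cost of opened-but-unused supplies, using hypothetical unit costs and counts.
from dataclasses import dataclass


@dataclass
class WastedItem:
    name: str
    unit_cost: float           # cost per item, USD (hypothetical)
    avg_unused_per_case: float  # average number opened but unused per case (hypothetical)


items = [
    WastedItem("x-ray detectable sponge", 0.33, 9.9),
    WastedItem("surgical suture", 9.69, 3.0),
]

per_item = {i.name: i.unit_cost * i.avg_unused_per_case for i in items}
for name, cost in per_item.items():
    print(f"{name}: ${cost:.2f} wasted per case")
print(f"Total estimated waste per case (items tracked here): ${sum(per_item.values()):.2f}")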
37.7. Multi-Faceted Interventions Significantly Improve Checklist Adherence. L. Putnam,1,5,7 S. Levy,1,5,7 M. Sajid,3,5,7 D. Dubuisson,3,5,7 N. Rogers,3,5,7 L. Kao,4,5,7 K. Lally,1,5,7 K. Tsao1,5,6,7; 1University Of Texas Health Science Center At Houston - Pediatric Surgery, Houston, TX, USA; 3University Of Texas Health Science Center At Houston, Houston, TX, USA; 4University Of Texas Health Science Center At Houston - General Surgery, Houston, TX, USA; 5University Of Texas Health Science Center At Houston - Center For Surgical Trials & Evidence-based Practice, Houston, TX, USA; 6University Of Texas Health Science Center At Houston, Houston, TX, USA; 7Children’s Memorial Hermann Hospital, Houston, TX, USA
Introduction: Adherence to the execution of surgical safety checklists remains a challenge for most institutions. Observational data from our institution and several others demonstrated acceptable rates of checklist use but poor adherence to all checkpoints. We instituted multi-faceted interventions in a step-wise manner over the course of two years. We hypothesize that these ongoing educational and team-building efforts have produced significant improvements in checklist adherence. Methods: From 2011 to 2013, adherence to the 14-point pre-incision checklist was directly assessed by trained observers during three distinct time periods separated by one-year intervals (baseline, observation #1, and observation #2), during which time interventions were implemented. Operative cases were selected by convenience sampling. Intervention #1 entailed safety workshops for all operative personnel dedicated to safety culture and high-reliability organizational topics as well as to the customization of a stakeholder-derived checklist. Intervention #2 involved an audit-and-feedback system for the new checklist and safety workshops focusing on error identification and effective communication. Statistical analyses were performed using the chi-square and Kruskal-Wallis tests. Results: The pre-incision checklist phase was observed for 873 cases, with the initial 144 cases considered baseline, 373 cases during observation #1, and 356 cases during observation #2. Overall checkpoint adherence increased with each intervention, from 30% to 78% to 96% (p<0.05). Completion of all checkpoints also increased, from 0% to 19% to 61% of cases (p<0.05). The overall pre-incision median (interquartile range) number of checkpoints completed during each time period improved from 4 (3-5) to 11 (10-12) to 14 (13-14, p<0.05) (Figure). Conclusions: Ongoing checklist implementation strategies, including educational and team-building efforts, have significantly improved checkpoint adherence over the course of two years. Continued