Aspartate Aminotransferase and Alanine Aminotransferase Levels in Hemodialysis Patients With Positive Hepatitis B/Hepatitis C Serology: Are They Good Predictors of Liver Injury?
Shitij Arora, Mohammed Siddiqui, Mohammed Ali, Yousif Barzani, Saima Ali, Karishma Kitchloo, Prachi Aggarwal, Premila Bhatt, Rekha Bhandari, Atul Maini, Frantz Duffoo, Vijaypal Arya
Introduction: Annual United States health care expenditure due to hepatitis B and hepatitis C infection totals more than $1 billion. Hemodialysis (HD) patients constitute a large fraction of the infected population; previous studies report that 1% of HD patients are seropositive for hepatitis B virus (HBV) and 7.8% for hepatitis C virus (HCV). Serum transaminases, including aspartate aminotransferase (AST) and alanine aminotransferase (ALT), have been found to be low in end-stage renal disease (ESRD)/dialysis patients. The exact cause of low serum ALT and AST in HD patients is not known; one possible explanation is subnormal serum levels of pyridoxal-5-phosphate (PLP), the active form of vitamin B6, which serves as a coenzyme for the transaminases. The aim of this study was to investigate the effect of positive HBV/HCV serology on serum AST and ALT levels in HD patients. Materials and Methods: A total of 120 patients were selected for this retrospective case-control study. Cases were stratified using the APACHE II score. All patients had ESRD on HD and were separated into two groups of 63 and 57 patients, classified as ESRD/HD patients with HBV/HCV infection and ESRD/HD patients without infection, respectively. Patients with liver cirrhosis or other possible causes of serum aminotransferase elevation were excluded. Serological testing was used to determine each patient's hepatitis infection status, and patient records were reviewed for clinical data. Results: The 63 patients classified as ESRD/HD with HBV/HCV infection had a mean age of 58.90 ± 14.02 years; BUN was 52.73 ± 28.46 mg/dL, creatinine 6.66 ± 2.76 mg/dL, AST 32.04 ± 20.03 IU/L, and ALT 22.09 ± 16.31 IU/L. The 57 patients classified as ESRD/HD without HBV/HCV infection had a mean age of 56.75 ± 16.83 years; BUN was 60.45 ± 31.10 mg/dL, creatinine 7.31 ± 4.15 mg/dL, AST 28.47 ± 14.95 IU/L, and ALT 19.65 ± 10.20 IU/L. The data were analyzed separately using the paired Student t test; the P values for the between-group differences in AST (P = .27) and ALT (P = .33) were not significant. Conclusion: These results suggest that AST and ALT may not be good indicators of liver injury in ESRD patients undergoing hemodialysis, possibly because pyridoxine deficiency impairs enzyme synthesis and/or because of the effect of uremic serum. Interestingly, in contrast to some other studies, we did not find a negative correlation between the transaminases and the degree of uremia/elevated creatinine. Our study is limited because all patients were selected on the basis of a positive serological result; tissue biopsy was not performed to confirm active hepatitis. Further studies with histologically confirmed patients are warranted.
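As a rough illustration of the between-group comparison described above, the sketch below recomputes the AST and ALT P values from the reported group means, standard deviations, and sample sizes. It uses an independent-samples t test from summary statistics (scipy's ttest_ind_from_stats); this is an assumption on our part, since the abstract describes a paired test, but it yields P values close to those reported.

```python
# Minimal sketch: two-sample t tests from the summary statistics reported above.
# Assumes an independent-samples comparison (the abstract states a paired test).
from scipy.stats import ttest_ind_from_stats

# AST: HBV/HCV-positive group (n=63) vs HBV/HCV-negative group (n=57)
t_ast, p_ast = ttest_ind_from_stats(mean1=32.04, std1=20.03, nobs1=63,
                                    mean2=28.47, std2=14.95, nobs2=57)

# ALT: same two groups
t_alt, p_alt = ttest_ind_from_stats(mean1=22.09, std1=16.31, nobs1=63,
                                    mean2=19.65, std2=10.20, nobs2=57)

print(f"AST: t = {t_ast:.2f}, P = {p_ast:.2f}")  # P close to the reported .27
print(f"ALT: t = {t_alt:.2f}, P = {p_alt:.2f}")  # P close to the reported .33
```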
Accuracy of "Optically Predicted" Versus "Pathology Determined" Surveillance Intervals in Patients With Small Colorectal Polyps
Susan G. Coe, Michael B. Wallace
Background: As our ability to visually discriminate neoplastic from nonneoplastic polyps improves, so should our ability to assign colon cancer surveillance intervals immediately post procedure. In patients with small colorectal polyps, we sought to determine the accuracy of "optically predicted" surveillance intervals, both independent of and dependent on individual patient risk factors, compared with "pathology determined" intervals as the gold standard. Methods: Endoscopists were asked to predict the histopathology of all resected polyps during routine colonoscopy and to assign colon cancer surveillance intervals using consensus guidelines. For all outpatient colonoscopies completed between August 2, 2010, and September 15, 2010, data were prospectively collected on procedure indication, patient demographics and risk factors, polyp description, endoscopist prediction of histopathology, and actual histopathology. Procedures that were incomplete, had poor bowel prep (BBPS <5), or had an indication of post-cancer surveillance, inflammatory bowel disease, hereditary syndrome surveillance, or active gastrointestinal hemorrhage were excluded. Patients with newly discovered cancers were also excluded. Cases in which the endoscopist determined "no recall indicated" were excluded from surveillance interval assignment comparisons. Results: Of 618 colonoscopies between August 2, 2010, and September 15, 2010, 180 had polyps 10 mm or less. "Optically predicted" surveillance intervals based solely on the number of predicted adenomas (independent of risk factors) had an overall accuracy of 68% (123/180; 95% CI, 61%–75%) compared with "pathology assigned" intervals (Table 1). When individual patient risk factors were considered, the accuracy was 77.7% (136/175; 95% CI, 71%–84%) (table not shown). The accuracy of endoscopist prediction was 59% (75/127; 95% CI, 50%–68%) (table not shown). Conclusion: Improvement in the accuracy of in vivo polyp discrimination is needed before surveillance intervals can be accurately assigned immediately post procedure. Knowledge of individual patient risk factors is also important in surveillance interval assignment and improves accuracy. These findings highlight the likely need for further endoscopist training and better visual optics.
Table 1. Accuracy of Optically Predicted vs Pathology Determined Surveillance Interval Based on Number and Size of Adenomas

                                Pathology determined interval
Optically predicted interval    10 years    5 years    3 years
10 years                              37         13          2
5 years                               20         64          4
3 years                               11          7         22

NOTE. 0 adenomas = 10-year interval; 1 or 2 adenomas (all under 10 mm) = 5-year interval; 3 or more adenomas, or any adenoma 10 mm in size = 3-year interval. Overall accuracy: 68% (123/180; 95% CI, 61%–75%).
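The surveillance-interval rule stated in the table NOTE is simple to express in code. The sketch below is a minimal illustration, assuming the input is a list of adenoma sizes in millimetres (a format of our choosing, not specified in the abstract); it also recomputes the overall accuracy from the 3 × 3 table above.

```python
def assign_surveillance_interval(adenoma_sizes_mm):
    """Surveillance interval in years, per the rule in the NOTE of Table 1."""
    n = len(adenoma_sizes_mm)
    if n == 0:
        return 10                                            # no adenomas
    if n >= 3 or any(size >= 10 for size in adenoma_sizes_mm):
        return 3                                             # 3+ adenomas, or any 10 mm adenoma
    return 5                                                 # 1 or 2 adenomas, all under 10 mm

print(assign_surveillance_interval([4, 6]))   # two small adenomas -> 5
print(assign_surveillance_interval([10]))     # single 10 mm adenoma -> 3

# Overall accuracy = agreement (diagonal) over all 180 patients.
# Rows: optically predicted interval (10/5/3 y); columns: pathology determined interval.
table = [[37, 13, 2],
         [20, 64, 4],
         [11, 7, 22]]
agree = sum(table[i][i] for i in range(3))
total = sum(sum(row) for row in table)
print(f"overall accuracy: {agree}/{total} = {agree/total:.0%}")  # 123/180 = 68%
```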
Replacement of Direct Percutaneous Endoscopic Jejunostomy Tubes: How the Experts Do It
Tahmina Haq, Todd H. Baron, Mark H. DeLegge, John Fang, Stephen A. McClave, Mark A. Schattner, Snorri Olafsson
Introduction: Direct percutaneous endoscopic jejunostomy (DPEJ) is performed in only a few centers around the world. Several articles describe initial placement of these tubes, but there are no data on how to replace them. We do not know whether they should be changed like PEG tubes or whether other methods are better. There has been concern that a balloon might obstruct the narrow jejunal lumen, causing leakage of formula and secretions and possibly abscess formation, and that if the jejunum is not reached during replacement the tube might end up in the peritoneum. Methods: Twelve questions were sent to 6 US gastroenterologists who have reviewed DPEJ in journals and at international congresses. The response rate was 100%. Results: The experts had performed DPEJ for a mean of 12 years (median, 14; range, 8–15). The mean number of DPEJs performed per year was 40 (median, 29; range, 10–125). A regular 20 Fr PEG kit was used for initial placement, although one expert always used a 14 Fr kit and another did so sometimes. The tube was eventually changed in 54% of patients (range, 25%–80%) and removed in 28% (range, 15%–50%). Four experts removed the initial tube only by pulling it out with force, one always removed it by enteroscopy using a snare, and one removed it by cutting it close to the skin and pushing it into the jejunum (in the case of a healthy abdomen with little chance of obstruction). A long tube was most commonly used for replacement; five experts also used a low-profile tube at times. The most common type of internal bumper was a non-balloon bolster, but two experts preferred a balloon and one often used a replacement tube with no bumper. The replacement tube was inserted under endoscopic guidance by 3 experts, under fluoroscopy performed by the physician by 1, and blindly with subsequent radiologic confirmation by 2; blind insertion alone was very rare. Most were not always able to reach the DPEJ site by enteroscopy even though it had been placed endoscopically initially, perhaps because of adhesions caused by the tube. If the site could not be reached, fluoroscopy was always used, usually by the physician. Conclusion: DPEJ tubes often need to be changed, and the methods differ considerably from those used for PEGs. They are either pulled out or removed with a snare using an enteroscope. Some form of guidance, either enteroscopy or fluoroscopy, is always used for insertion of replacement tubes. A non-balloon internal bolster is most commonly preferred, although some experts use a balloon. Ideas for future research include comparison of traction versus endoscopic removal, insertion under fluoroscopy versus endoscopy, and balloon versus non-balloon internal bolsters. A registry of commonly performed techniques would also be useful.
Proton Pump Inhibitor Therapy and Anemia: A Retrospective Cohort Study
Madhusudan Grover, Erin Sarzynski, Chethan Puttarajappa, Yan Xie, Heather Laird-Fick
Background: Proton pump inhibitors (PPIs) remain among the most widely prescribed classes of medication, used for indications such as gastroesophageal reflux disease, peptic ulcer disease, and stress ulcer prophylaxis. However, concerns have been raised regarding their short- and long-term use. Gastric acid suppression may decrease iron absorption, and it remains uncertain whether iron-deficiency anemia (IDA) may result from chronic PPI therapy. We aimed to explore the association between chronic PPI use and IDA. Methods: Using an electronic medical record database, we identified a retrospective cohort of adult patients in an academic outpatient setting who received PPI therapy for at least one year between January 1, 2004, and January 1, 2006, and who had hematologic studies before and 1 year after initiation of PPI therapy. Patients with concurrent diagnoses known to cause anemia, such as gastrointestinal bleeding, chronic kidney disease, chronic anticoagulation, hemolysis, vitamin B12 or folate deficiency, or active cancer, were excluded. We compared changes in hematologic indices among patients receiving PPI therapy for at least 1 year with age- and sex-matched controls. Results: Of the 98 patients on chronic PPI therapy who met inclusion criteria, 35% had no documented indication for such therapy. At baseline, demographics and hematologic indices were similar between PPI users and controls. Among patients on PPI therapy, all hematologic indices decreased from baseline, including hemoglobin (−0.19 g/dL, SD ±0.88, P = .03), hematocrit (−0.63%, SD ±2.71, P = .02), and mean corpuscular volume (−0.49 fL, SD ±2.44, P = .05). Compared with matched controls, PPI users had significant decreases in mean hemoglobin and hematocrit (P < .01 for both), which remained significant after controlling for confounders. There was a trend toward a decrease in mean corpuscular volume for chronic PPI users, but the change was not significant after adjusting for confounders. Among PPI users, 21 subjects (21.4%) had a decrease in hemoglobin of >1.0 g/dL while on therapy for more than a year. Overall, the adjusted odds ratio of a decrease in hemoglobin of 1.0 g/dL while on PPI therapy for at least a year was 5.03 (95% CI, 1.71–14.78). Conclusion: Among adult patients receiving chronic PPI therapy, there is a significant decrease in hematologic indices from baseline, and the decreasing mean corpuscular volume is suggestive of iron-deficiency anemia. This effect should be confirmed in larger data sets, and clinicians should be aware of anemia as another potential side effect of chronic acid-suppressive therapy.
Table 1. Evaluation of Faculty by Internal Medicine Residents and GI Fellows

                        Uncorrected mean (SE)    Corrected mean (SE)    Regression estimate (SE)    P value
Resident of Faculty
  Male of male          4.048 (0.005)            4.011 (0.044)          —                           —
  Male of female        3.920 (0.010)            3.799 (0.058)          −0.212 (0.064)              .001
  Female of male        4.120 (0.007)            4.069 (0.048)          0.059 (0.041)               .15
  Female of female      3.953 (0.016)            3.871 (0.063)          −0.140 (0.077)              .07
Fellow of Faculty
  Male of male          4.183 (0.003)            4.215 (0.048)          —                           —
  Male of female        4.049 (0.007)            4.087 (0.067)          −0.128 (0.059)              .03
  Female of male        4.106 (0.004)            4.111 (0.067)          −0.103 (0.074)              .16
  Female of female      3.969 (0.009)            3.987 (0.081)          −0.228 (0.095)              .02
Gender Differences in Trainee Evaluations of Gastroenterology Faculty
Erin W. Thackeray, Andrew J. Halvorsen, Robert D. Ficalora, Gregory J. Engstler, Furman S. McDonald, Amy S. Oxentenko
Purpose: While limited literature exists regarding the effect of gender on the evaluation of medical trainees by faculty, there are no studies examining the effect of gender on the evaluation of medical faculty by trainees. The aim of our study was to assess for potential differences in the evaluation of male and female gastroenterology (GI) faculty by trainees at a single institution. Methods: A validated assessment tool called ISES (Integrated Scheduling and Evaluation System) is used across all departments at our medical center and contains a global assessment form. The tool includes questions that assess core clinical competencies on a 5-point scale. ISES was accessed for aggregated, de-identified scores of GI faculty given by trainees (internal medicine residents and GI fellows) between the 2005–06 and 2009–10 academic years. Mean scores were corrected for multiple pairings, ie, residents and fellows evaluating the same staff more than once. Linear mixed regression was used to compare corrected mean scores across gender pairings. Results: Ninety-eight GI faculty were evaluated by 293 residents, and 84 GI faculty were evaluated by 89 fellows. The corrected and uncorrected mean scores, regression estimates, and P values are listed in Table 1. Scores given by male trainees to male faculty were the highest of any gender pairing among both residents and fellows. Female faculty received significantly lower scores from male residents (P = .001) and from both male and female GI fellows (P = .03 and .02, respectively); there was a trend toward lower scores for female GI faculty from female residents (P = .07). Conclusion: Female GI faculty received lower evaluation scores from both male and female internal medicine residents and GI fellows. Further studies are needed to address the effect of gender on the evaluation of GI faculty.
Prediction of In-Hospital Mortality in Spontaneous Bacterial Peritonitis (SBP) Using the Integrated Model for End-Stage Liver Disease (iMELD) Score
Guru Trikudanathan, Imad Ahmad, Deepika Devuni, Varnel Noel, Ilene Staff, Jonathan Israel
Background and Objectives: SBP is a frequent and severe complication in cirrhotic patients with ascites.
In-hospital mortality still ranges between 20% and 40%, suggesting that further refinements are essential in managing SBP; early recognition of high-risk patients would enable us to reduce short-term mortality. A new prognostic model that incorporates age and serum sodium into the MELD score, known as the iMELD score [iMELD = MELD + (age × 0.3) − (0.7 × Na) + 100], has recently been established as a better predictor of outcome in decompensated cirrhotic patients. We aimed to assess the prognostic ability of the iMELD score and to compare its predictive accuracy with that of the MELD score in determining the in-hospital mortality of SBP patients. Methods: We conducted a retrospective study of all patients hospitalized with SBP (defined as a polymorphonuclear cell count >250 in the ascitic fluid sample) at our institution between 1998 and 2008. Significant prognostic variables were entered into a multivariate logistic regression model. The predictive capacity of MELD and iMELD was determined by the area under the receiver operating characteristic curve (auROC).
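The iMELD formula quoted above is straightforward to compute; the sketch below is a minimal illustration with a made-up patient (the function name and example values are ours, not from the abstract).

```python
def imeld(meld, age_years, sodium_mmol_l):
    """iMELD as quoted in the abstract: MELD + (age x 0.3) - (0.7 x Na) + 100."""
    return meld + 0.3 * age_years - 0.7 * sodium_mmol_l + 100

# Hypothetical patient: MELD 20, age 55 years, serum sodium 130 mmol/L
print(imeld(meld=20, age_years=55, sodium_mmol_l=130))  # 20 + 16.5 - 91 + 100 = 45.5
```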
Results: The in-hospital mortality rate was 49%. In multivariate analysis, the iMELD score (P < .0001) and low serum albumin (P < .05) were independent predictors of mortality. The receiver operating characteristic curve for iMELD revealed excellent discriminatory ability to predict death, with an auROC of 0.799 (95% CI, 0.71–0.87); this was not significantly different from MELD, which had an auROC of 0.732 (95% CI, 0.64–0.81), as shown in Figure 1.
Figure 1. Receiver operating characteristic curves (sensitivity vs 100 − specificity) for MELD and iMELD.
Conclusion: iMELD is a score based on clinical variables that are objective, reliable, and readily available. Although the MELD score has fair predictive accuracy, the addition of serum sodium and age (the iMELD score) was associated with increased accuracy in predicting mortality from SBP, as assessed by the auROC. The availability of albumin is limited to certain indications at many hospitals because of its high cost; iMELD could be used on hospital admission to stratify high-risk patients who would benefit from albumin infusion. Another provocative consideration would be the use of prophylactic antibiotics in cirrhotic patients with a high iMELD score even without a previous episode of SBP. However, randomized trials and cost-effectiveness analyses are required before this can be recommended. This robust predictor of mortality may aid in further improving the quality of care of SBP patients and further reducing their short-term mortality rate.
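For readers unfamiliar with the auROC comparison used above, the sketch below shows one way the discriminatory ability of MELD and iMELD could be computed. The patient scores and outcomes are entirely hypothetical, and scikit-learn's roc_auc_score is used as a convenience; the abstract does not state which software was used.

```python
# Hypothetical illustration of the auROC comparison (all data made up).
from sklearn.metrics import roc_auc_score

meld        = [18, 25, 12, 30, 15, 22, 28, 10]   # MELD score per patient
imeld_score = [42, 55, 34, 66, 44, 46, 61, 30]   # iMELD score per patient
died        = [0, 1, 0, 1, 1, 0, 1, 0]           # in-hospital death (1 = died)

print("MELD  auROC:", roc_auc_score(died, meld))
print("iMELD auROC:", roc_auc_score(died, imeld_score))
```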
Inflammatory Bowel Disease: Resource Utilization During Pediatric to Adult Transfer
Natasha Bollegala, Herbert Brill, John Marshall
The transition from pediatric to adult care in inflammatory bowel disease (IBD) is poorly understood but known to cause significant patient and parent anxiety. Purpose: To compare resource utilization during the last year of pediatric care and the first year of adult care. Methods: Patients transferred between 1999 and 2004 were studied, using data from 1 year before and 1 year after transfer. Primary outcomes included: i) emergency department (ED) visits; ii) hospitalizations; iii) clinic visits; iv) number of surgical interventions; and v) number of endoscopies. The secondary outcome was documented patient non-compliance. Results: Ninety-five individuals were studied (48 female, 47 male); 69 were diagnosed with Crohn's disease and 26 with ulcerative colitis. The average age at diagnosis was 12.9 years. All results compare pediatric with adult care, and primary outcomes are expressed as average rates per year. Clinic visits (3.05 vs 2.56; P = .01) and documented non-compliance (29% vs 43%; P = .01) were significantly different. ED visits (0.18 vs 0.15; P = .71), hospitalizations (0.23 vs 0.13; P = .13), surgical interventions (0.05 vs 0.03; P = .53), and number of endoscopies (0.25 vs 0.37; P = .11) were not significantly different. Conclusion: The only significant difference in resource utilization between pediatric and adult care during the transfer years was the number of clinic visits. Documented non-compliance was significantly worse during early adult care. IBD transition programming should focus on improving compliance, perhaps by increasing the number of clinic visits during adult care.