
1. Introduction

A significant effort is being placed on understanding variations in the quality of surgical care, as evidenced by the growth of pay-for-performance programs and national quality initiatives [1,2]. Efforts to measure quality have expanded to most areas of surgical oncology, with specialty-specific societies endorsing expert-opinion-generated quality indicators (QIs) measuring various structural, process, and outcome elements of health care delivery [3,4]. Despite this, little real-world literature exists validating these indicators as benchmarking tools that can accurately identify hospitals providing poor care [5]. Similarly, data capturing the impact of hospital-level quality variation on patient outcomes are underreported for most QIs [5]. Consequently, concerns have arisen regarding the appropriate use and choice of indicators, given their impact on financial and administrative resource consumption and allocation as well as their effect on patient autonomy and physician reputation [6,7]. Hence, a robust, data-driven approach to validating putative QIs is urgently needed in order to prioritize the most valuable measures.

To date, efforts to measure the quality of renal cell carcinoma (RCC) surgical care remain in their infancy. While putative expert-generated RCC-specific QIs have been proposed, no real-world data exist to validate these metrics as benchmarking tools that can discriminate provider performance in a manner that captures disparate patient outcomes [8]. This is in part due to the challenge of adjusting for the complex case-mix variation between hospitals that must be addressed in order to rigorously benchmark performance [9]. Comprehensive cancer-specific data initiatives, such as the National Cancer Database in the USA, which capture granular patient- and tumor-specific variables across large volumes of hospitals, have greatly facilitated case-mix-adjusted quality benchmarking assessments in surgical oncology, and as such provide a platform to validate putative QIs in urologic oncology [10].

Given the paucity of real-world data benchmarking provider performance in RCC surgical care, our primary objective was to determine whether nationwide variations in quality exist at the hospital level after adjusting for differences in case-mix factors present within the National Cancer Database (NCDB). Further, we investigated structural elements (ie, hospital type, location, surgical volume) associated with hospital-level quality. Lastly, we sought to determine whether benchmarking hospital performance using our case-mix-adjusted QIs could discriminate provider performance in a manner that captures disparate patient outcomes, by assessing associations between hospital-level quality and patient mortality. We hypothesized that variations in RCC surgical quality exist, with poor quality being associated with adverse patient outcomes.

2. Materials and methods

2.1. Data

This cohort study utilized the NCDB, which prospectively collects hospital-level data from Commission on Cancer-accredited facilities in the USA and Puerto Rico. The NCDB captures approximately 70% of newly diagnosed cancer cases, with over 30 million individual records accumulated since its inception across more than 1500 hospitals. Approval was obtained from the University Health Network (Toronto, ON, USA) Research Ethics Board.

2.2. QIs

Hospital-level performance was benchmarked according to five QIs. Three process QIs were identified from a previously published modified Delphi study [8]: the proportion of patients with (1) T1a tumors undergoing partial nephrectomy (PN), (2) T1-2 tumors receiving a minimally invasive (laparoscopic or robotic) approach to radical nephrectomy (MIS), and (3) a positive surgical margin following PN for T1 tumors (PM). Two outcome QIs, (1) length of hospital stay after radical nephrectomy for T1-4 tumors (LOS) and (2) the 30-d unplanned readmission proportion after radical nephrectomy for T1-4 tumors (RP), were additionally chosen given their utility as quality benchmarking tools in other realms of surgical oncology [4,11].
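As a concrete illustration, the unadjusted hospital-level value of a process QI is a simple proportion. The sketch below computes the PN indicator per hospital from a toy patient-level table; the column names are hypothetical stand-ins, not actual NCDB field names.

```python
import pandas as pd

# Toy patient-level extract; column names are hypothetical stand-ins,
# not actual NCDB field names.
patients = pd.DataFrame({
    "hospital_id": [1, 1, 1, 2, 2, 3],
    "t_stage":     ["T1a", "T1a", "T1b", "T1a", "T1a", "T1a"],
    "surgery":     ["PN", "RN", "PN", "PN", "RN", "PN"],
})

# PN indicator: among T1a tumors, the proportion managed with
# partial nephrectomy, computed separately for each hospital.
t1a = patients[patients["t_stage"] == "T1a"]
pn_qi = (t1a["surgery"] == "PN").groupby(t1a["hospital_id"]).mean()
print(pn_qi)  # unadjusted per-hospital PN proportions
```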

2.3. Study cohort

International Classification of Diseases for Oncology (third edition) and site-specific surgery codes were employed to identify RCC patients who underwent PN or radical nephrectomy between 2004 and 2014. Data from before 2004 were excluded due to incomplete patient comorbidity information. For the MIS QI, analysis was restricted to 2010 onward, as laparoscopy was not available prior to 2010. For the LOS and RP QIs, patients with localized and metastatic disease were included, whereas metastatic patients were excluded from analysis of the MIS, PN, and PM QIs. Summaries of all inclusion and exclusion criteria, including International Classification of Diseases and histology codes, are available in Supplementary Table 1.
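A minimal sketch of this selection logic is shown below; the actual ICD-O-3 histology and site-specific surgery code lists live in Supplementary Table 1, so the code sets here are deliberately left as placeholders, and the column names are assumptions for illustration.

```python
import pandas as pd

# Placeholder code sets: the real lists are in Supplementary Table 1.
RCC_HISTOLOGY_CODES: set[str] = set()   # ICD-O-3 histology codes
PN_RN_SURGERY_CODES: set[str] = set()   # site-specific surgery codes

def build_cohort(df: pd.DataFrame, qi: str) -> pd.DataFrame:
    """Apply the stated inclusion/exclusion rules for one QI."""
    cohort = df[
        df["histology_code"].isin(RCC_HISTOLOGY_CODES)
        & df["surgery_code"].isin(PN_RN_SURGERY_CODES)
        & df["year"].between(2004, 2014)  # pre-2004: incomplete comorbidity data
    ]
    if qi == "MIS":
        cohort = cohort[cohort["year"] >= 2010]  # restricted to 2010 onward
    if qi in {"MIS", "PN", "PM"}:
        cohort = cohort[~cohort["metastatic"]]   # exclude metastatic disease
    return cohort
```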

2.4. Statistical analysis

Our statistical approaches closely followed those used in previous analyses of NCDB data for quality comparisons in surgical oncology [10,12]. Interhospital variability in the QIs was investigated using random-intercept generalized linear models: the PN, MIS, PM, and RP QIs were modeled through logistic regression, while LOS, transformed as the natural logarithm of 1 + LOS because of the skewed distribution of this QI, was modeled through linear regression. We estimated the intraclass correlation (ie, the proportion of variance attributable to between-hospital differences) using the latent variable method and calculated p-values for tests of the null hypothesis of no between-hospital variance component [13].
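To make the modeling step concrete, here is a minimal sketch on synthetic data: a hospital-level random-intercept linear model for the transformed LOS fitted with statsmodels, plus the latent-variable ICC formula used for the logistic QIs. The variable names and synthetic data are illustrative assumptions, not NCDB fields or the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the patient-level extract; columns are
# illustrative, not actual NCDB fields.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 20, n),
    "age": rng.normal(62, 10, n),
    "los_days": rng.poisson(5, n),
})

# LOS is right-skewed, so model log(1 + LOS) with a random intercept
# per hospital, mirroring the linear mixed model described above.
df["log_los"] = np.log1p(df["los_days"])
fit = smf.mixedlm("log_los ~ age", df, groups=df["hospital_id"]).fit()

# ICC for the linear model: between-hospital variance / total variance.
var_hosp = fit.cov_re.iloc[0, 0]
icc_linear = var_hosp / (var_hosp + fit.scale)

# Latent-variable ICC for the logistic QIs: the residual variance on
# the latent scale is fixed at pi^2 / 3.
def icc_latent(var_hospital: float) -> float:
    return var_hospital / (var_hospital + np.pi ** 2 / 3)

print(icc_linear)
```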

QIs were adjusted for case mix using indirect standardization, whereby for each hospital we calculated a standardized ratio of observed to expected outcomes, analogous to a standardized mortality ratio [14].
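A sketch of the indirect standardization step, again on synthetic data with hypothetical column names: fit a patient-level model without hospital effects, sum the predicted probabilities per hospital to obtain the expected count, and take the observed-to-expected ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic patient-level data; column names are hypothetical.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 20, n),
    "age": rng.normal(62, 10, n),
})
df["received_pn"] = rng.binomial(1, 0.5, n)

# Expected outcomes come from a logistic model fitted to all patients
# WITHOUT hospital-level random intercepts.
fit = smf.logit("received_pn ~ age", df).fit(disp=False)
df["expected"] = fit.predict(df)

# Indirect standardization: per-hospital observed / expected ratio.
oe = df.groupby("hospital_id").agg(
    observed=("received_pn", "sum"),
    expected=("expected", "sum"),
)
oe["oe_ratio"] = oe["observed"] / oe["expected"]
print(oe.head())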

The expected quality outcomes were calculated from multivariable regression models (logistic for PN, MIS, PM, and RP; linear for the transformed LOS) fitted to the entire patient population without the hospital-level random intercepts, given all relevant patient-level demographic, comorbidity, disease progression, and tumor characteristics recorded in the NCDB, as listed in Supplementary Figures 1A–E. To identify outlier hospitals, we used z-test statistics of the form Z = (O - E)/S,
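As a hedged sketch of this outlier test, assume S is the binomial standard deviation of the expected count, sqrt of the sum of p_i(1 - p_i) over a hospital's model-predicted probabilities p_i; this choice of S is an assumption for illustration rather than the paper's exact definition.

```python
import numpy as np

def outlier_z(observed: float, expected: float, s: float) -> float:
    """Z = (O - E) / S; a large |Z| flags a potential outlier hospital."""
    return (observed - expected) / s

# Assumed S for a binary QI: binomial standard deviation of the
# expected count, sqrt(sum p_i * (1 - p_i)), over the hospital's
# model-predicted probabilities p_i. This is an assumption, not
# necessarily the paper's exact definition of S.
p = np.array([0.42, 0.55, 0.61, 0.38])  # hypothetical predictions
z = outlier_z(observed=3, expected=p.sum(), s=np.sqrt(np.sum(p * (1 - p))))
print(z)
```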
