Publications catalog – journals
Health Services Research
Abstract/Description – provided by the publisher in English
Health Services Research (HSR) is a peer-reviewed scholarly journal that provides researchers and public and private policymakers with the latest research findings, methods, and concepts related to the financing, organization, delivery, evaluation, and outcomes of health services.
Keywords – provided by the publisher
Health Services Research; Health; Services; Research; HSR; policy; care; analysis; clinical; politic
Availability
Detected institution | Period | Browse | Download | Request |
---|---|---|---|---|
Not detected | From Jan. 2002 to Dec. 2023 | Wiley Online Library | | |
Information
Resource type:
journals
Print ISSN
0017-9124
Electronic ISSN
1475-6773
Publisher
John Wiley & Sons, Inc. (WILEY)
Country of publication
United Kingdom
Publication date
1966-
Subject coverage
Table of contents
Identifying low acuity Emergency Department visits with a machine learning approach: The low acuity visit algorithms (LAVA)
Angela T. Chen; Richard S. Kuzma; Ari B. Friedman
Abstract
Objective: To improve the performance of International Classification of Diseases (ICD) code rule-based algorithms for identifying low acuity Emergency Department (ED) visits by using machine learning methods and additional covariates.
Data Sources: We used secondary data on ED visits from the National Hospital Ambulatory Medical Care Survey (NHAMCS), from 2016 to 2020.
Study Design: We established baseline performance metrics with seven published algorithms consisting of International Classification of Diseases, Tenth Revision codes used to identify low acuity ED visits. We then trained logistic regression, random forest, and gradient boosting (XGBoost) models to predict low acuity ED visits. Each model was trained on five different covariate sets of demographic and clinical data. Model performance was compared using a separate validation dataset. The primary performance metric was the probability that a visit identified by an algorithm as low acuity did not experience significant testing, treatment, or disposition (positive predictive value, PPV). Subgroup analyses assessed model performance across age, sex, and race/ethnicity.
Data Collection: We used 2016–2019 NHAMCS data as the training set and 2020 NHAMCS data for validation.
Principal Findings: The training and validation data consisted of 53,074 and 9542 observations, respectively. Among the seven rule-based algorithms, the highest-performing had a PPV of 0.35 (95% CI [0.33, 0.36]). All model-based algorithms outperformed the existing algorithms, with the least effective (random forest using only age and sex) improving PPV by 26% (up to 0.44; 95% CI [0.40, 0.48]). Logistic regression and XGBoost trained on all variables improved PPV by 83% (to 0.64; 95% CI [0.62, 0.66]). Multivariable models also demonstrated higher PPV across all three demographic subgroups.
Conclusions: Machine learning models substantially outperform existing algorithms based on ICD codes in predicting low acuity ED visits. Variations in model performance across demographic groups highlight the need for further research to ensure their applicability and fairness across diverse populations.
A short illustrative sketch of this kind of PPV comparison follows this entry.
Keywords: Health Policy.
Pp. Not available
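The entry above compares rule-based ICD flags with machine learning classifiers, using PPV as the headline metric. The minimal sketch below illustrates that kind of comparison; it is not the authors' NHAMCS pipeline, and the file names, covariate names, outcome label, and 0.5 probability cutoff are assumptions made for illustration.

```python
# Illustrative sketch only -- not the authors' NHAMCS pipeline. File names,
# covariate names, the "low_acuity" label, and the 0.5 cutoff are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression


def ppv(flagged: np.ndarray, truly_low_acuity: np.ndarray) -> float:
    """Share of visits flagged as low acuity that truly had no significant
    testing, treatment, or disposition (positive predictive value)."""
    flagged = flagged.astype(bool)
    return float(truly_low_acuity[flagged].mean()) if flagged.any() else float("nan")


# Train on 2016-2019 visits, validate on 2020 visits, mirroring the study's split.
train = pd.read_csv("nhamcs_2016_2019.csv")  # hypothetical analytic files
valid = pd.read_csv("nhamcs_2020.csv")
features = ["age", "male", "ambulance_arrival", "triage_level"]  # numeric-coded covariates

model = LogisticRegression(max_iter=1000).fit(train[features], train["low_acuity"])

# Compare one published rule-based ICD flag against the model-based flag.
rule_ppv = ppv(valid["icd_rule_flag"].to_numpy(), valid["low_acuity"].to_numpy())
model_flag = model.predict_proba(valid[features])[:, 1] >= 0.5
model_ppv = ppv(model_flag, valid["low_acuity"].to_numpy())
print(f"rule-based PPV: {rule_ppv:.2f}   logistic model PPV: {model_ppv:.2f}")
```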
Association of Hospitals' Experience with Bundled Payment for Care Improvement Model with the Diffusion of Acute Hospital Care at Home
So‐Yeon Kang
Abstract
Objective: To examine whether hospitals' experience in a prior payment model incentivizing care coordination is associated with their decision to adopt a new payment program for a care delivery innovation.
Data Sources: Data were sourced from Medicare fee-for-service claims in 2017, the list of participants in Bundled Payment for Care Improvement initiatives (BPCI and BPCI-Advanced), the list of hospitals approved for Acute Hospital Care at Home (AHCaH) between November 2020 and August 2022, and the American Hospital Association Survey.
Study Design: Retrospective cohort study. Hospitals' adoption of AHCaH was measured as a function of hospitals' BPCI experiences. Hospitals' BPCI experiences were categorized into five mutually exclusive groups: (1) direct BPCI participation, (2) indirect participation through physician group practices (PGPs) after dropout, (3) indirect participation through PGPs only, (4) dropout only, and (5) no BPCI exposure.
Data Collection/Extraction Methods: All data are derived from pre-existing sources. General acute hospitals eligible for both BPCI initiatives and AHCaH are included.
Principal Findings: Of 3248 hospitals included in the sample, 7% adopted AHCaH as of August 2022. Hospitals with direct BPCI experience had the highest adoption rate (17.7%), followed by those with indirect participation through BPCI physicians after dropout (11.8%), while those with no exposure to BPCI were least likely to participate (3.2%). Hospitals that adopted AHCaH were more likely to be located in communities where more peer hospitals participated in the program (median 10.8% vs. 0%). After controlling for covariates, the association of AHCaH adoption with indirect participation through physicians after dropout was as strong as the association with early BPCI adopter hospitals (average marginal effect: 5.9 vs. 6.2 percentage points, p < 0.05), but the associations with the other categories were not significant.
Conclusions: Hospitals that participated in the bundled payment model either directly or indirectly through PGPs were more likely to adopt a care delivery innovation requiring similar competence in the next period.
Keywords: Health Policy.
Pp. Not available
HSR's outstanding reviewers in 2023
Austin Frakt; Chris Tachibana
Keywords: Health Policy.
Pp. Not available
Suicide risk screening and evaluation among patients accessing VHA services and identified as being newly homeless
Ryan Holliday; Trisha Hostetter; Lisa A. Brenner; Nazanin Bahraini; Jack Tsai
Abstract
Objective: To evaluate universal suicide risk screening and evaluation processes among newly homeless Veterans.
Study Setting: Not applicable.
Study Design: Examination of Veterans Health Administration (VHA) data using newly homeless patients' health records from calendar year 2021.
Data Collection: Not applicable.
Data Source: Health record data.
Principal Findings: Most patients received suicide risk screening and/or evaluation in the year prior to and/or following homeless identification (n = 49,505; 87.4%). Smaller percentages of patients were screened and/or evaluated in close proximity to identification (n = 7358; 16.0%), 1–30 days prior to identification (n = 12,840; 39.6%), or 1–30 days following identification (n = 14,263; 34.3%). Common settings for screening included primary care, emergency and urgent care, and mental health services. Of positive screens (i.e., potentially elevated risk for suicide), 72.6% had a Comprehensive Suicide Risk Evaluation (CSRE) completed in a timely manner (i.e., same day or within 24 h). Age, race, and sex were largely unrelated to screening and/or evaluation.
Conclusions: Although many newly identified homeless patients were screened and/or evaluated for suicide risk, approximately 13% were not screened, and 27% of positive screens did not receive a timely CSRE. Continued efforts are warranted to facilitate suicide risk identification to ensure homeless patients have access to evidence-based interventions.
Pp. Not available
Restrictiveness of Medicare Advantage provider networks across physician specialties
Yevgeniy Feyman; Jose Figueroa; Melissa Garrido; Gretchen Jacobson; Michael Adelberg; Austin Frakt
Abstract
Objective: To measure specialty provider networks in Medicare Advantage (MA) and examine associations with market factors.
Data Sources and Study Setting: We relied on traditional Medicare (TM) and MA prescription drug event data from 2011 to 2017 for all Medicare beneficiaries in the United States, as well as data from the Area Health Resources File.
Study Design: Relying on a recently developed and validated prediction model, we calculated the provider network restrictiveness of MA contracts for nine high-prescribing specialties. We characterized network restrictiveness through an observed-to-expected ratio, calculated as the number of unique providers seen by MA beneficiaries divided by the number expected based on the prediction model. We assessed the relationship between network restrictiveness and market factors across specialties with multivariable linear regression.
Data Collection/Extraction Methods: Prescription drug event data for a 20% random sample of beneficiaries enrolled in prescription drug coverage from 2011 to 2017.
Principal Findings: Provider networks in MA varied in restrictiveness. OB-Gynecology was the most restrictive, with enrollees seeing 34.5% (95% CI: 34.3%–34.7%) as many providers as they would absent network restrictions; cardiology was the least restrictive, with enrollees seeing 58.6% (95% CI: 58.4%–58.8%) as many providers as they otherwise would. Factors associated with less restrictive networks included the county-level TM average hierarchical condition category score (0.06; 95% CI: 0.04–0.07), the county-level number of doctors per 1000 population (0.04; 95% CI: 0.02–0.05), the natural log of local median household income (0.03; 95% CI: 0.007–0.05), and the parent company's market share in the county (0.16; 95% CI: 0.13–0.18). Rurality was a major predictor of more restrictive networks (−0.28; 95% CI: −0.32 to −0.24).
Conclusions: Our findings suggest that rural beneficiaries may face disproportionately reduced access in these networks and that efforts to improve access should vary by specialty.
A short sketch of the observed-to-expected ratio calculation follows this entry.
Pp. Not available
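The restrictiveness measure described above is an observed-to-expected ratio: the unique providers actually seen divided by the number a prediction model expects enrollees would see absent network restrictions. A minimal sketch of that calculation follows, assuming a hypothetical claims data frame with a precomputed model expectation; it is not the authors' validated prediction model.

```python
# Minimal sketch of an observed-to-expected (O/E) restrictiveness ratio. The
# data frame layout and the "expected_unique_providers" column are hypothetical
# stand-ins for the output of the authors' prediction model.
import pandas as pd


def network_restrictiveness(df: pd.DataFrame) -> pd.Series:
    """O/E ratio per (contract, specialty): unique providers actually seen by MA
    enrollees divided by the model-expected count absent network restrictions.
    Values near 1 suggest an unrestricted network; lower values, more restriction."""
    grouped = df.groupby(["contract_id", "specialty"])
    observed = grouped["provider_npi"].nunique()
    expected = grouped["expected_unique_providers"].first()  # model-based expectation
    return observed / expected


# Toy usage: 3 unique providers observed against an expectation of 5 -> 0.6.
claims = pd.DataFrame({
    "contract_id": ["H100"] * 4,
    "specialty": ["cardiology"] * 4,
    "provider_npi": [1, 2, 2, 3],
    "expected_unique_providers": [5.0] * 4,
})
print(network_restrictiveness(claims))
```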
The business case for hospital mobility programs in the veterans health care system: Results from multi‐hospital implementation of the STRIDE program
Brystana G. Kaufman; S. Nicole Hastings; Cassie Meyer; Karen M. Stechuchak; Ashley Choate; Kasey Decosimo; Caitlin Sullivan; Virginia Wang; Kelli D. Allen; Courtney H. Van Houtven
Abstract
Objective: To conduct a business case analysis for the Department of Veterans Affairs (VA) program STRIDE (ASsisTed EaRly MobIlization for hospitalizeD older VEterans), which was designed to address immobility for hospitalized older adults.
Data Sources and Study Setting: This was a secondary analysis of primary data from a VA 8-hospital implementation trial conducted by the Function and Independence Quality Enhancement Research Initiative (QUERI). In partnership with VA operational partners, we estimated the resources needed for program delivery in and out of the VA as well as for national implementation facilitation in the VA. A scenario analysis using wage data from the Bureau of Labor Statistics informs implementation decisions outside the VA.
Study Design: This budget impact analysis compared delivery and implementation costs for two implementation strategies (Replicating Effective Programs [REP]+CONNECT and REP-only). To simulate national budget scenarios for implementation, we estimated the number of eligible hospitalizations nationally and varied key parameters (e.g., enrollment rates) to evaluate the impact of uncertainty.
Data Collection: Personnel time and implementation outcomes were collected from hospitals (2017–2019). Hospital average daily census and wage data were estimated as of 2022 to improve relevance to future implementation.
Principal Findings: Average implementation costs were $9450 for REP+CONNECT and $5622 for REP-only; average program delivery costs were less than $30 per participant in both VA and non-VA hospital settings. The number of walks had the most impact on delivery costs and ranged from 1 to 5 walks per participant. In sensitivity analyses, cost increased to $35 per participant if a physical therapist assistant conducts the walks. Among study hospitals, mean enrollment rates were higher among the REP+CONNECT hospitals (12%) than the REP-only hospitals (4%), and VA implementation costs ranged from $66 to $100 per enrolled participant.
Conclusions: STRIDE is a low-cost intervention, and program participation has the biggest impact on the resources needed for delivering STRIDE.
Trial Registration: ClinicalTrials.gov NCT03300336. Prospectively registered on 3 October 2017.
Pp. Not available
New evidence on the impacts of cross‐market hospital mergers on commercial prices and measures of quality
Daniel R. Arnold; Jaime S. King; Brent D. Fulton; Alexandra D. Montague; Katherine L. Gudiksen; Thomas L. Greaney; Richard M. Scheffler
Abstract
Objective: To examine the impact of "cross-market" hospital mergers on prices and quality and the extent to which serial acquisitions contribute to any measured effects.
Data Sources: 2009–2017 commercial claims from the Health Care Cost Institute (HCCI) and quality measures from Hospital Compare.
Study Design: Event study models in which the treated group consisted of hospitals that acquired other hospitals located more than 50 miles away, and the control group consisted of hospitals that were not part of any merger activity (as a target or acquirer) during the study period.
Data Extraction Methods: We extracted data for 214 treated hospitals and 955 control hospitals.
Principal Findings: Six years after acquisition, cross-market hospital mergers had increased acquirer prices by 12.9% (CI: 0.6%–26.6%) relative to control hospitals, but had no discernible impact on mortality and readmission rates for heart failure, heart attacks, and pneumonia. For serial acquirers, the price effect increased to 16.3% (CI: 4.8%–29.1%). Across all acquisitions, the price effect was 21.8% (CI: 4.6%–41.7%) when the target's market share was greater than the acquirer's market share, versus 9.7% (CI: −0.5% to 20.9%) when the opposite was true. The magnitude of the price effect was similar for out-of-state and in-state cross-market mergers.
Conclusions: Additional evidence on the price and quality effects of cross-market mergers is needed at a time when over half of recent hospital mergers have been cross-market. To date, no hospital mergers have been challenged by the Federal Trade Commission on cross-market grounds. Our study is the third to find a positive price effect associated with cross-market mergers and the first to show no quality effect and how serial acquisitions contribute to the price effect. More research is needed to identify the mechanism behind the price effects we observe and to analyze price effect heterogeneity.
Pp. Not available
Association between physician–hospital integration and inpatient care delivery in accountable care organizations: An instrumental variable analysis
Meng‐Yun Lin; Amresh D. Hanchate; Austin B. Frakt; James F. Burgess; Kathleen Carey
Abstract
Objective: To investigate the relationship between physician–hospital integration within accountable care organizations (ACOs) and inpatient care utilization and expenditure.
Data Sources: The primary data were the Massachusetts All-Payer Claims Database (2009–2013).
Study Setting: Fifteen provider organizations that entered a commercial ACO contract with a major private payer in Massachusetts between 2009 and 2013.
Study Design: Using an instrumental variable approach, the study compared inpatient care delivery between patients of ACOs demonstrating high versus low integration. We measured physician–hospital integration within ACOs by the proportion of primary care physicians in an ACO who billed for outpatient services with a place-of-service code indicating employment or practice ownership by a hospital. The study sample comprised non-elderly adults who had continuous insurance coverage and were attributed to one of the 15 ACOs. Outcomes of interest included total medical expenditure during an episode of inpatient care, length of stay (LOS) of the index hospitalization, and 30-day readmission. An inpatient episode was defined as 30, 45, and 60 days from the admission date.
Data Collection/Extraction Methods: Not applicable.
Principal Findings: The study examined 33,535 admissions from patients served by the 15 ACOs. Average medical expenditure within 30 days of admission was $24,601; within 45 days, $26,447; and within 60 days, $28,043. Average LOS was 3.5 days, and 5.4% of patients were readmitted within 30 days. Physician–hospital integration was associated with a 10.6% reduction in 30-day expenditure (95% CI: −15.1% to −5.9%). Corresponding estimates for 45 and 60 days were −9.7% (95% CI: −14.2% to −4.9%) and −9.6% (95% CI: −14.3% to −4.7%). Integration was associated with a 15.7% decrease in LOS (95% CI: −22.6% to −8.2%) but was unrelated to the 30-day readmission rate.
Conclusions: Our instrumental variable analysis shows that physician–hospital integration within ACOs was associated with reduced inpatient spending and LOS, with no evidence of elevated readmission rates.
A generic two-stage least squares sketch follows this entry.
Pp. Not available
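The study above uses an instrumental variable design to compare inpatient spending between more and less integrated ACOs. The sketch below shows a generic two-stage least squares setup under assumed variable names; it is not the authors' estimation code, and the naive second-stage standard errors produced this way are not IV-corrected.

```python
# Generic two-stage least squares sketch in the spirit of the IV design above.
# Variable names (log_episode_cost, integration_share, iv_distance_diff, controls)
# and the file name are hypothetical; second-stage SEs here are not IV-corrected.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("aco_admissions.csv")  # hypothetical analytic file
controls = ["age", "female", "comorbidity_index"]

# First stage: predict the (possibly endogenous) integration measure from the
# instrument plus exogenous controls.
X1 = sm.add_constant(df[["iv_distance_diff"] + controls])
first = sm.OLS(df["integration_share"], X1).fit()
df["integration_hat"] = first.fittedvalues

# Second stage: regress the outcome on the predicted integration measure.
X2 = sm.add_constant(df[["integration_hat"] + controls])
second = sm.OLS(df["log_episode_cost"], X2).fit()
print(second.params["integration_hat"])  # IV estimate of the integration effect
```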
Split‐sample reliability estimation in health care quality measurement: Once is not enough
Kenneth J. Nieser; Alex H. S. Harris
Abstract
Objective: To examine the sensitivity of split-sample reliability estimates to the random split of the data and to propose alternative methods for improving the stability of the split-sample method.
Data Sources and Study Setting: Data were simulated to reflect a variety of real-world quality measure distributions and scenarios. There is no date range to report because the data are simulated.
Study Design: Simulation studies of split-sample reliability estimation were conducted under varying practical scenarios.
Data Collection/Extraction Methods: All data were simulated using functions in R.
Principal Findings: Single split-sample reliability estimates can be very dependent on the random split of the data, especially in settings with low sample sizes and low variability. Averaging split-sample estimates over many splits of the data can yield a more stable reliability estimate.
Conclusions: Measure developers and evaluators using the split-sample reliability method should average a series of reliability estimates calculated from many resamples of the data without replacement to obtain a more stable reliability estimate.
A short sketch of this repeated-split averaging follows this entry.
Pp. Not available
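The recommendation above is to average split-sample reliability estimates over many random splits rather than rely on a single split. A short sketch of that procedure follows, assuming a hypothetical patient-level data frame and using the plain correlation of half-sample provider means as the per-split statistic; the authors' simulations were in R and may use a different reliability statistic (e.g., with a Spearman–Brown adjustment).

```python
# Sketch of repeated split-sample reliability: average the split-half correlation
# of provider scores over many random splits. The data layout ("provider_id",
# numeric "outcome") and the per-split statistic are illustrative assumptions.
import numpy as np
import pandas as pd


def repeated_split_sample_reliability(df: pd.DataFrame, n_splits: int = 200,
                                      seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_splits):
        # Randomly split each provider's patients into two halves (no replacement).
        half = df.groupby("provider_id", group_keys=False).apply(
            lambda g: g.sample(frac=0.5, random_state=int(rng.integers(2**31 - 1))))
        other = df.drop(half.index)
        s1 = half.groupby("provider_id")["outcome"].mean()
        s2 = other.groupby("provider_id")["outcome"].mean()
        estimates.append(s1.corr(s2))  # correlation of half-sample provider scores
    # Averaging over many splits stabilizes the estimate relative to a single split.
    return float(np.mean(estimates))


# Toy usage: 50 providers, 40 patients each, provider-specific true rates.
rng = np.random.default_rng(1)
rates = rng.uniform(0.1, 0.5, 50)
toy = pd.DataFrame({
    "provider_id": np.repeat(np.arange(50), 40),
    "outcome": rng.binomial(1, np.repeat(rates, 40)),
})
print(repeated_split_sample_reliability(toy, n_splits=50))
```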
A more complete measure of vertical integration between physicians and hospitals
Qian (Eric) Luo; Bernard Black; David J. Magid; Frederick A. Masoudi; Vinay Kini; Ali Moghtaderi
Abstract
Objective: To develop an accurate and reproducible measure of vertical integration between physicians and hospitals (defined as hospital or health system employment of physicians), which can be used to assess the impact of integration on healthcare quality and spending.
Data Sources and Study Setting: We use multiple data sources, including data from the Internal Revenue Service, the Centers for Medicare and Medicaid Services, and others, to determine the Tax Identification Numbers (TINs) that hospitals and physicians use to bill Medicare for services, and we link physician billing TINs to hospital-related TINs.
Study Design: We developed a new measure of vertical integration, based on the TINs that hospitals and physicians use to bill Medicare, using a broad set of sources for hospital-related TINs. We considered physicians hospital-employed if they bill Medicare primarily or exclusively using hospital-related TINs. We assessed integration status for all physicians who billed Medicare from 1999 to 2019. We compared this measure with others used in the existing literature. We conducted a simulation study that highlights the importance of accurately identifying integrated physicians when studying the effects of integration.
Data Collection/Extraction Methods: We extracted physician and hospital-related TINs from multiple sources, emphasizing specificity (a small proportion of nonintegrated physicians identified as integrated).
Principal Findings: We identified 12,269 hospital-related TINs used for billing by 546,775 physicians. We estimate that the percentage of integrated physicians rose from 19% in 1999 to 43% in 2019. Our approach identifies many additional physician practices as integrated; a simpler TIN measure, comparable with prior work, identifies only 30% (3877) of the TINs we identify. A service location measure, used in prior work, has both many false positives and many false negatives.
Conclusion: We developed a new measure of hospital–physician integration. This measure is reproducible and identifies many additional physician practices as integrated.
A rough sketch of the TIN-based classification rule follows this entry.
Pp. Not available
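The measure above classifies a physician as integrated when their Medicare billing runs primarily or exclusively through hospital-related TINs. A rough sketch of that classification rule follows; the claims layout, column names, and the majority threshold are illustrative assumptions, not the authors' exact definition.

```python
# Rough sketch of a billing-TIN classification rule: flag a physician as
# hospital-integrated if most of their claims are billed under a TIN that appears
# on a list of hospital-related TINs. Inputs and the >50% threshold are assumptions.
import pandas as pd


def classify_integrated(claims: pd.DataFrame, hospital_tins: set,
                        threshold: float = 0.5) -> pd.Series:
    """claims has one row per claim line with columns "npi" and "billing_tin".
    Returns a boolean Series indexed by NPI: True if the share of the physician's
    claims billed under hospital-related TINs exceeds the threshold."""
    hospital_billed = claims["billing_tin"].isin(hospital_tins)
    share = hospital_billed.groupby(claims["npi"]).mean()
    return share > threshold


# Toy usage: physician 111 bills mostly through a hospital-related TIN.
claims = pd.DataFrame({
    "npi": [111, 111, 111, 222, 222],
    "billing_tin": ["H-01", "H-01", "G-77", "G-88", "G-88"],
})
print(classify_integrated(claims, hospital_tins={"H-01"}))
```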