Publications catalog - books



Probability, Statistics and Modelling in Public Health

Mikhail Nikulin ; Daniel Commenges ; Catherine Huber (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none. Year of publication: 2006. Available on: SpringerLink.

Information

Resource type:

books

Print ISBN

978-0-387-26022-8

Electronic ISBN

978-0-387-26023-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer Science+Business Media, Inc. 2006

Table of contents

Nonparametric Estimation and Testing in Survival Models

Henning Läuter; Hannelore Liero

The aim of this paper is to demonstrate that nonparametric smoothing methods for estimating functions can be a useful tool in the analysis of lifetime data. After stating some basic notation we present a data example. Applying standard parametric methods to these data, we see that this approach fails: basic features of the underlying functions are not reflected by their estimates. Our proposal is to use nonparametric estimation methods, which are explained in Section 2. Nonparametric approaches are better in the sense that they are more flexible, and misspecifications of the model are avoided. Parametric models, however, have the advantage that the parameters can be interpreted. So, finally, we formulate a test procedure to check whether a parametric or a nonparametric model is appropriate.

Keywords: Hazard Rate; Hazard Function; Survival Function; Weibull Distribution; Nonparametric Estimation.

Pp. 319-331
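The smoothing approach the abstract describes can be illustrated with a minimal sketch: a kernel-smoothed Nelson-Aalen estimate of the hazard rate from right-censored data. This is a generic illustration of nonparametric hazard estimation, not the authors' exact estimator; the Epanechnikov kernel, the bandwidth, and the simulated exponential data are assumptions.

```python
import numpy as np

def kernel_hazard(times, events, grid, bandwidth):
    """Kernel-smoothed hazard estimate from right-censored data.

    Smooths the Nelson-Aalen increments d_i / Y(t_i) with an
    Epanechnikov kernel; assumes no tied event times.
    """
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)            # Y(t_i): subjects still at risk
    increments = events / at_risk         # Nelson-Aalen jumps
    u = (grid[:, None] - times[None, :]) / bandwidth
    kern = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov
    return (kern * increments).sum(axis=1) / bandwidth

# constant-hazard data: the estimate should hover near lambda = 1 in the interior
rng = np.random.default_rng(0)
t = rng.exponential(1.0, 2000)
c = rng.exponential(2.0, 2000)
obs = np.minimum(t, c)
ev = (t <= c).astype(int)
grid = np.linspace(0.5, 1.5, 5)
print(kernel_hazard(obs, ev, grid, bandwidth=0.4))
```

Because the simulated hazard is constant, the estimate should be roughly flat; for data with a non-monotone hazard, the same code reveals the shape that a misspecified parametric fit would miss.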

Selecting a semi-parametric estimator by the expected log-likelihood

Benoit Liquet; Daniel Commenges

A criterion for choosing an estimator in a family of semi-parametric estimators from incomplete data is proposed. This criterion is the expected observed log-likelihood (ELL). Adapted versions of this criterion for censored data and in the presence of explanatory variables are exhibited. We show that likelihood cross-validation (LCV) is an estimator of ELL, and we exhibit three bootstrap estimators. A simulation study considering both families of kernel and penalized likelihood estimators of the hazard function (indexed by a smoothing parameter) demonstrates good performance of LCV and of a bootstrap estimator called ELL_bboot. When using penalized likelihood, an approximated version of LCV also performs very well. The use of these estimators of ELL is exemplified on the more complex problem of choosing between stratified and unstratified proportional hazards models. An example is given for modeling the effect of sex and educational level on the risk of developing dementia.

Keywords: bootstrap; cross-validation; Kullback-Leibler information; proportional hazard model; semi-parametric; smoothing.

Pp. 332-349
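The likelihood cross-validation idea, scoring each smoothing parameter by the leave-one-out log-likelihood, can be sketched in the simpler setting of kernel density estimation. The paper works with hazard estimators from incomplete data; the Gaussian kernel and the candidate bandwidth grid below are illustrative assumptions.

```python
import numpy as np

def lcv_score(data, bandwidth):
    """Leave-one-out log-likelihood of a Gaussian kernel density estimate."""
    x = np.asarray(data, float)
    n = len(x)
    d = (x[:, None] - x[None, :]) / bandwidth
    k = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)                     # leave observation i out
    loo = k.sum(axis=1) / ((n - 1) * bandwidth)  # f_{-i}(x_i)
    return np.log(loo).sum()

rng = np.random.default_rng(1)
sample = rng.normal(0, 1, 500)
bandwidths = [0.05, 0.2, 0.5, 1.0, 2.0]
scores = [lcv_score(sample, b) for b in bandwidths]
best = bandwidths[int(np.argmax(scores))]
print(best)
```

The selected bandwidth should fall in the middle of the grid: very small bandwidths are punished by the out-of-sample likelihood, very large ones oversmooth.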

Imputing responses that are not missing

Ursula U. Müller; Anton Schick; Wolfgang Wefelmeyer

We consider estimation of linear functionals of the joint law of regression models in which responses are missing at random. The usual approach is to work with the fully observed data, and to replace unobserved quantities by estimators of appropriate conditional expectations. Another approach is to replace all quantities by such estimators. We show that the second method is usually better than the first.

Keywords: Conditional Distribution; Linear Regression Model; Conditional Expectation; Empirical Likelihood; Nonparametric Estimator.

Pp. 350-363
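The contrast between the two approaches can be sketched for the simplest linear functional, E[Y], under responses missing at random: impute only the missing responses by a regression estimate, or impute every response including the observed ones. The Nadaraya-Watson regression, the uniform design, and the missingness mechanism below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def nw_regression(x_train, y_train, x_eval, h=0.1):
    """Nadaraya-Watson kernel estimate of m(x) = E[Y | X = x]."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def two_estimators(rng, n=800):
    """One replication: estimate E[Y] when responses are missing at random."""
    x = rng.uniform(0, 1, n)
    y = 2 * x + rng.normal(0, 1, n)                  # true E[Y] = 1
    observed = rng.uniform(0, 1, n) < 0.3 + 0.5 * x  # MAR: depends on x only
    mhat = nw_regression(x[observed], y[observed], x)
    partial = np.mean(np.where(observed, y, mhat))   # impute missing Y only
    full = np.mean(mhat)                             # impute every Y
    return partial, full

rng = np.random.default_rng(2)
reps = np.array([two_estimators(rng) for _ in range(200)])
print(reps.mean(axis=0))   # both should be close to E[Y] = 1
print(reps.std(axis=0))    # empirical spread of the two estimators
```

Both estimators are consistent here; the paper's claim is that the fully imputed version is usually at least as good asymptotically, which the printed standard deviations can be used to examine.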

Bivariate Decision Processes

Martin Newby

Models are developed for decision making where a system’s evolution is described by a general stochastic process. The general structure of the problem includes many statistical tests such as treatment comparisons, regression models and likelihood ratio tests. The process is monitored and decisions are made in response to the observed system state. The decision process is simplified by using an associated process as well as the underlying state as decision variables; in many situations a functional of the underlying process defines a statistic. The approach is motivated by the idea of a performance metric based on the system state. The bivariate approach allows a wide class of models to be considered and can incorporate long term memory within a simple probability structure. The decisions in this study are based on an average cost and a life-cycle cost. The approach can deal with decisions that entail restarting the process as new or continuing the process after an intervention which changes the system state. The resulting optimization problem is solved either by a renewal-reward argument or by a dynamic programming formulation.

Keywords: Wiener process; Lévy process; regenerative process; renewal-reward; dynamic programming; statistical testing; health monitoring.

Pp. 364-377
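The renewal-reward argument mentioned in the abstract, long-run average cost equals expected cost per cycle divided by expected cycle length, can be sketched with a toy age-replacement policy. The Weibull lifetime model and the cost constants are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def average_cost(replace_age, c_preventive=1.0, c_failure=5.0,
                 shape=2.0, n=100_000, seed=0):
    """Long-run average cost of an age-replacement policy via renewal-reward.

    Replace preventively at `replace_age`, or correctively on failure,
    whichever comes first. Renewal-reward: E[cost per cycle] / E[cycle length].
    Weibull(shape) lifetimes (increasing hazard for shape > 1); Monte Carlo.
    """
    rng = np.random.default_rng(seed)
    life = rng.weibull(shape, n)
    failed = life < replace_age
    cycle_len = np.where(failed, life, replace_age)
    cycle_cost = np.where(failed, c_failure, c_preventive)
    return cycle_cost.mean() / cycle_len.mean()

# with an increasing hazard, replacing early beats never replacing
print(average_cost(0.5), average_cost(100.0))
```

Sweeping `replace_age` over a grid and taking the minimizer gives the optimal policy, which is exactly the kind of optimization the renewal-reward formulation reduces to.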

Weighted Logrank Tests With Multiple Events

C. Pinçon; O. Pons

We focus on the two-group comparison when subjects of the sample may experience multiple distinct events, possibly censored. Because of the correlation arising between failure times, the sum of the marginal test statistics is not accurate. We propose a multivariate version of weighted logrank tests derived from the marginal logrank statistics, and we study their asymptotic distribution under the null hypothesis; we construct a consistent estimator of their covariance using martingale properties. We present a simulation study and an application of this method to a study aimed at assessing the association between retinopathy and diabetes.

Keywords: Censoring; Correlated failure times; Martingales; Weighted logrank test statistics.

Pp. 378-391
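A single marginal (unweighted) logrank statistic, the building block that the multivariate test combines, can be sketched as follows. The simulated two-group exponential data are an assumption, and the authors' covariance construction for correlated events is not reproduced here.

```python
import numpy as np

def logrank_z(times, events, group):
    """Standardized two-sample logrank statistic (weight = 1).

    Sums, over event times, the observed minus expected deaths in group 1
    and divides by the root of the hypergeometric variance.
    Assumes no tied event times; Z^2 is approx. chi^2(1) under the null.
    """
    order = np.argsort(times)
    t = np.asarray(times)[order]
    d = np.asarray(events)[order]
    g = np.asarray(group)[order]
    n = len(t)
    num = var = 0.0
    for i in range(n):
        if d[i] == 0:
            continue
        y = n - i                 # total number at risk at t_(i)
        y1 = g[i:].sum()          # number at risk in group 1
        num += g[i] - y1 / y      # observed minus expected death in group 1
        var += (y1 / y) * (1 - y1 / y)
    return num / np.sqrt(var)

rng = np.random.default_rng(4)
n = 200
grp = np.repeat([0, 1], n)
lam = np.where(grp == 0, 3.0, 1.0)     # group 1 has a threefold lower hazard
t_true = rng.exponential(1.0 / lam)
cens = rng.exponential(2.0, 2 * n)
obs = np.minimum(t_true, cens)
ev = (t_true <= cens).astype(int)
print(logrank_z(obs, ev, grp))          # clearly non-null: |Z| should be large
```

Replacing the implicit weight of 1 by a function of time (e.g. the pooled survival estimate) gives the weighted family the paper starts from.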

Explained Variation and Predictive Accuracy in General Parametric Statistical Models: The Role of Model Misspecification

Susanne Rosthøj; Niels Keiding

Keywords: Loss Function; Predictive Accuracy; Residual Variation; Failure Time; Consistent Estimator.

Pp. 392-404

Optimization of Breast Cancer Screening Modalities

Yu Shen; Giovanni Parmigiani

Mathematical models and decision analyses based on microsimulations have been shown to be useful in evaluating relative merits of various screening strategies in terms of cost and mortality reduction. Most investigations regarding the balance between mortality reduction and costs have focused on a single modality, mammography. A systematic evaluation of the relative expenses and projected benefit of combining clinical breast examination and mammography is not at present available. The purpose of this report is to provide methodological details, including assumptions and data used in the modeling process for complex decision analyses, when searching for optimal breast cancer screening strategies with multiple screening modalities. To systematically evaluate the relative expenses and projected benefit of screening programmes that combine the two modalities, we build a simulation model incorporating age-specific incidence of the disease, age-specific pre-clinical duration of the disease, age-specific sensitivities of the two screening modalities, and competing causes of mortality. Using decision models, we can integrate information from different sources into the modeling process, and assess the cost-effectiveness of a variety of screening strategies while incorporating uncertainties.

Keywords: Breast Cancer; Natl Cancer Inst; Breast Cancer Screening; Sojourn Time; Screening Strategy.

Pp. 405-420

Sequential Analysis of Quality of Life Rasch Measurements

Veronique Sebille; Mounir Mesbah

Early stopping of clinical trials in case of either a beneficial or a deleterious effect of treatment on quality of life (QoL) is an important issue. QoL is usually evaluated using self-assessment questionnaires, and responses to the items are combined into scores assumed to be normally distributed (which is rarely the case). An alternative is to use item response theory (IRT) models such as the Rasch model for binary items, which takes into account the categorical nature of the items. Sequential analysis and mixed Rasch models (MRM) were combined in the context of phase II non-comparative trials. The statistical properties of the Sequential Probability Ratio Test (SPRT) and of the Triangular Test (TT) were compared using MRM and traditional average scores methods (ASM) by means of simulations. The type I error of the SPRT and TT was correctly maintained for both methods. While remaining a bit underpowered, MRM displayed higher power than the ASM for both sequential tests. Both methods allowed substantial reductions in average sample numbers as compared with fixed sample designs (about 60%). The use of IRT models in sequential analysis of QoL endpoints is promising and should provide a more powerful method to detect therapeutic effects than the traditional ASM.

Keywords: Quality of life; Item Response Theory; Rasch models; Sequential Probability Ratio Test; Triangular Test; Clinical Trials.

Pp. 421-439
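The Wald SPRT itself, independent of the Rasch machinery, reduces to comparing a running log-likelihood ratio with two fixed boundaries. A minimal sketch for binary responses follows; the hypotheses p0 and p1 and the simulated data are assumptions for illustration.

```python
import numpy as np

def sprt_decision(xs, p0, p1, alpha=0.05, beta=0.05):
    """Wald SPRT for Bernoulli data: stop as soon as a boundary is crossed.

    Classic boundaries A = (1-beta)/alpha and B = beta/(1-alpha) on the
    likelihood ratio, applied on the log scale.
    """
    a = np.log((1 - beta) / alpha)   # upper log-boundary -> accept H1
    b = np.log(beta / (1 - alpha))   # lower log-boundary -> accept H0
    llr = 0.0
    for n, x in enumerate(xs, 1):
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
        if llr >= a:
            return "accept H1", n
        if llr <= b:
            return "accept H0", n
    return "no decision yet", len(xs)

# data strongly favouring H1 should cross the upper boundary early
rng = np.random.default_rng(3)
data = (rng.uniform(size=500) < 0.85).astype(int)
decision, n_used = sprt_decision(data, p0=0.4, p1=0.7)
print(decision, n_used)
```

The early stopping visible in `n_used` is the source of the roughly 60% reduction in average sample number that the abstract reports for sequential designs.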

Three Types of Hazard Functions Curves Described

G.I. Sidorovich; S.V. Shamansky; V.P. Pop; O.A. Rukavicin

There is no doubt that measures of short-term treatment effects (remission or response rates) are of great interest in the search for more efficient treatments. However, for all diseases with unfavorable prognosis, among which are the hemoblastoses, life expectancy is the most important feature. The irrevocable decision about the choice between two different treatment options is usually based on a comparison of survival functions. Unfortunately, this analysis is not able to reveal critical periods in the disease course with distinct maximum mortality rates. Clearly, this information is very important for clinicians' efforts to identify time intervals when patients should be monitored especially carefully. A retrospective study of the overall survival function among patients with multiple myeloma (MM), acute nonlymphoblastic leukemia (ANLL) and chronic myeloproliferative disorders (CMPD), treated in our hospital, was performed. These data were complemented with estimates of the hazard function for each form of hemoblastosis. We found different types of hazard function curves, and we expect that treatment results are better evaluated by using survival and hazard function analysis together.

Keywords: Multiple Myeloma; Hazard Function; Survival Function; Polycythemia Vera; Death Probability.

Pp. 440-445

On the Analysis of Fuzzy Life Times and Quality of Life Data

Reinhard Viertl

Life times, health data, and general quality of life data are often not adequately represented by precise numbers or classes. Such data are called non-precise or fuzzy, because their quantitative characterization is possible by so-called non-precise numbers. To analyze such data, a more general concept than the fuzzy numbers of fuzzy set theory is necessary; a suitable concept is that of non-precise numbers. Generalized methods to analyze such data are available, and the basic methods are described in the paper.

Pp. 446-451
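A non-precise number is described by a characterizing function, often summarized through its α-cuts, the intervals where the function is at least α. A minimal sketch for the triangular case follows; the parameterization and function names are assumptions for illustration.

```python
def triangular_alpha_cut(left, peak, right, alpha):
    """alpha-cut [l(alpha), r(alpha)] of a triangular non-precise number.

    The characterizing function rises linearly from 0 at `left` to 1 at
    `peak` and falls back to 0 at `right`; the alpha-cut shrinks linearly
    from the full support (alpha = 0) to the single point `peak` (alpha = 1).
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (left + alpha * (peak - left), right - alpha * (right - peak))

# alpha = 1 collapses to the peak; alpha = 0 recovers the full support
print(triangular_alpha_cut(1.0, 2.0, 4.0, 1.0))   # (2.0, 2.0)
print(triangular_alpha_cut(1.0, 2.0, 4.0, 0.0))   # (1.0, 4.0)
```

Generalized statistical procedures for such data typically propagate each α-cut through the classical method and reassemble the resulting intervals into a non-precise result.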