Publications catalog - journals



American Journal of Evaluation

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: Not detected
Period: Mar. 1999 to Dec. 2023
Source: SAGE Journals

Information

Resource type: journals
Print ISSN: 1098-2140
Electronic ISSN: 1557-0878
Publisher: SAGE Publishing (SAGE)
Country of publication: United States
Publication date:

Table of contents

A Protocol to Assess Contextual Factors During Program Impact Evaluation: A Case Study of a STEM Gender Equity Intervention in Higher Education

Suzanne Nobrega; Kasper Edwards; Mazen El Ghaziri; Lauren Giacobbe; Serena Rice; Laura Punnett

Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. The EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. 109821402311522

Book Review: Leading Change Through Evaluation: Improvement Science in Action by Kristen L. Rohanna

Valerie Marshall

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. 109821402311533

A Human Rights-Based Evaluation Approach for Inclusive Education

Christopher J. Johnstone; Anne Hayes; Elisheva Cohen; Hayley Niad; George Laryea-Adjei; Kathleen Letshabo; Adrian Shikwe; Augustine Agu

This article reports on ways in which United Nations human rights treaties can be used as a normative framework for evaluating program outcomes. We conceptualize a human rights-based approach to program evaluation and locate this approach within the broader evaluation literature. The article describes how a rights-based framework can be used as an aspirational set of indicators for program evaluations to promote activities that align with internationally agreed-upon human rights norms. We then describe a case study of the evaluation through which this method was developed, including its sampling design, methodology, and findings. The United Nations Children's Fund (UNICEF) inclusive education evaluation described here highlighted the need for conceptual clarity around what inclusive education is, and the importance of contextualized innovation toward meeting the educational rights of children with disabilities. Human rights perspectives and evaluation designs can help create such clarity, but should also be used with care.

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. 109821402311538

From the Co-Editors: Honoring the Past to Inform Current and Future Evaluation

Jori N. Hall; Laura R. Peck

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. 172-174

Do Social Programs Help Some Beneficiaries More Than Others? Evaluating the Potential for Comparison Group Designs to Yield Low-Bias Estimates of Differential Impact

Andrew P. Jaciw

In the current socio-political climate, there is extra urgency to evaluate whether program impacts are distributed fairly across important student groups in education. Both experimental and quasi-experimental designs (QEDs) can contribute to answering this question. This work demonstrates that QEDs that compare outcomes across higher-level implementation units, such as schools, are especially well suited to contributing evidence on differential program effects across student groups. By differencing away site-level (macro) effects, such designs on average produce estimates of differential impact that are closer to experimental benchmark results than are estimates of average impact based on the same design. This work argues for the importance of routine evaluation of moderated impacts, describes the differencing procedure, and empirically tests the methodology with seven impact evaluations in education. The hope is to encourage broader use of this design type to more efficiently develop the evidence base for differential program effects, particularly for underserved students.

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. Not available

Gatekeeping's Influence on Equitable Evaluation Practice

Travis R. Moore; Luke Carmichael Valmadrid; Robyn Baragwanath; Nathaniel Haack; Lori Bakken

The ethical guidelines of the American Evaluation Association and the principles of community-based participatory evaluation both emphasize the importance of equitable stakeholder involvement. Regardless of the evaluation approach, however, evaluators are often confronted with gatekeepers, those who control access to stakeholders, information, or resources. Gatekeepers limit both the participation of key community members and, therefore, the exchange of relevant information related to the evaluation, a process called gatekeeping. Little research attention has been paid to gatekeeping, resulting in a dearth of knowledge about its influence on stakeholder-engaged evaluations and about the social-structural dynamics that potentially perpetuate gatekeeping practices. In this article, we propose a gatekeeping influence theory grounded in the findings from 14 interviews. With this constructed theory of gatekeeping, we document the emergent social-structural and relational dynamics involved in stakeholder-engaged evaluation, with a focus on evaluations that include community partners and members.

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. Not available

Special Section Editors’ Note: A Focus on the Evaluation Profession

Susan Tucker; Laura R. Peck; Jori N. Hall

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. Not available

From the Co-Editors: Policy, Politics, Principles, and Participation: Influences on Program Planning, Implementation, and Outcomes Achieved

Jori N. Hall; Laura R. Peck

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. Not available

Book Review: Evaluating and Valuing in Social Research by Thomas A. Schwandt and Emily F. Gates

Melvin E. Hall

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. Not available

Evaluation Paradoxes: Responding to Tensions Between Stability and Change in Social Investment Evaluation

Kettil Nordesjö

The relationship between stability and change is a central paradox of administration that pervades all forms of organizing. Evaluation is no stranger to paradoxical objectives and roles, which can result in tensions for evaluators and stakeholders. In this article, paradoxes between stability and change in the implementation of evaluation, and responses to them, are investigated through the case of social investment funds in Swedish local government. Findings from interviews with staff, managers, and evaluators show how responses to four main paradoxes give priority to top-down summative evaluation that produces instrumental knowledge on outcomes and costs for decision makers. The responses show that the concept of social investment fund evaluation is elastic enough to contain paradoxes and address different audiences. Moreover, paradoxes within the structure of the organization develop into paradoxes concerning the roles and goals of evaluation, raising the question of whether individual actors can deal with them.

Keywords: Strategy and Management; Sociology and Political Science; Education; Health (social science); Social Psychology; Business and International Management.

Pp. Not available