
Evaluating public health strategies for climate adaptation: Challenges and opportunities from the Climate-Ready States and Cities Initiative

  • Heather A. Joseph ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Supervision, Visualization, Writing – original draft

    hbj7@cdc.gov

    Affiliation Climate and Health Program, US Centers for Disease Control and Prevention, Atlanta, Georgia, United States of America

  • Evan Mallen,

    Roles Conceptualization, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Climate and Health Program, US Centers for Disease Control and Prevention, Atlanta, Georgia, United States of America

  • Megan McLaughlin,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Climate and Health Program, US Centers for Disease Control and Prevention, Atlanta, Georgia, United States of America

  • Elena Grossman,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation School of Public Health, University of Illinois, Chicago, Illinois, United States of America

  • Tisha Joseph Holmes,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation Department of Urban and Regional Planning, Florida State University, Tallahassee, Florida, United States of America

  • Autumn Locklear,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation North Carolina Department of Health and Human Services, Raleigh, North Carolina, United States of America

  • Emily Powell,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation Florida Climate Center, Office of the State Climatologist, Tallahassee, Florida, United States of America

  • Lauren Thie,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation North Carolina Department of Health and Human Services, Raleigh, North Carolina, United States of America

  • Christopher K. Uejio,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation Department of Geography, Florida State University, Tallahassee, Florida, United States of America

  • Kristen Vacca,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation New York State Department of Health, Albany, New York, United States of America

  • Courtney Williams,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation North Carolina Department of Health and Human Services, Raleigh, North Carolina, United States of America

  • Tony Bishop,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation Maricopa County Department of Public Health, Phoenix, Arizona, United States of America

  • Carol Jeffers,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation Florida Department of Health, Tallahassee, Florida, United States of America

  • Hannah Siegel,

    Roles Conceptualization, Data curation, Validation, Writing – review & editing

    Affiliation New York City Department of Health and Mental Hygiene, New York City, New York, United States of America

  • Chelsea Austin

    Roles Conceptualization, Data curation, Methodology, Supervision, Validation, Visualization, Writing – original draft

    Affiliation WildBlue Evaluation, Scottdale, Georgia, United States of America

Abstract

Evaluation generates critical evidence about the effectiveness of health-focused climate adaptation, who receives what benefits, and how to improve program quality. However, using evaluation to improve climate adaptation outcomes with timeliness and context-specificity is uniquely challenging. The United States Centers for Disease Control and Prevention supports health departments to implement adaptation initiatives through the Climate-Ready States and Cities Initiative (CRSCI) grant and minimize negative health impacts of climate change via the Building Resilience Against Climate Effects (BRACE) framework, which includes evaluation. In this paper, we present current evaluation practice by describing the health-focused adaptation actions that were evaluated among CRSCI recipients, the evaluation approaches they used, and the barriers and enablers encountered. Using a mixed methods approach, we abstracted annual progress report and standardized performance measure data to characterize evaluation activities across 18 grant recipients using basic quantitative descriptive analysis. Through structured interviews with three grant recipients and standard team-based qualitative coding and analysis techniques, we developed qualitative case studies to explore evaluation barriers and enablers in richer context. Recipients reported 76 evaluations over the reporting period (2018–2021). Evaluated programs commonly addressed extreme weather (50.0%), followed by heat (27.6%). The most common purpose was to monitor implementation or improve delivery (57.9%). Case studies highlighted barriers to successful evaluation such as limited specialized evaluation expertise and staff time. Enablers included staff motivation to justify program expansion, strong relationships with community partners, and use of evaluation plans. Case studies revealed diverse strategies to seek input from stakeholders disproportionately impacted by climate change and to reduce health disparities. The experiences of CDC grant recipients provide an opportunity to better understand the barriers and enablers of climate and health evaluation practice and to identify practical strategies to expand the value of evaluation in this nascent field.

Introduction

Increasingly, United States communities are experiencing negative health outcomes from exposure to extreme heat, wildfire smoke, extreme weather, flooding, and vector-borne disease [1,2]. This trend is providing a glimpse of worsening impacts that require public health agencies to take action and evaluate the most promising pathways forward.

Climate adaptation has been defined in many ways, but often refers to intentional, planned actions by individuals, groups, or institutions to enhance resilience to climate change [3,4]. Resilience has been proposed as the ability of a socio-ecological system across temporal and spatial scales to maintain and rapidly return to desired functions in the face of disturbance, to adapt to change, and to transform systems that limit current or future adaptive capacity [5]. Resilience generally has a positive connotation but is conceptually malleable enough to allow diverse stakeholders operating in unique contexts to work toward a common purpose [6]. Adaptation may take the form of policies, regulations, single-strategy projects, or multi-component programs [7]. Such actions are embedded in a context of demographic, cultural, and economic change as well as transformations in information technologies, governance, social conventions, and globalization [8,9].

Adaptation practice in the public health sector is challenging due to the diversity of exposures and health outcomes affected by climate change [10] unfolding in place-specific pathways [11], the complex spatiotemporal patterns of climate hazards [12,13], uncertainty of future socioeconomic and climatic conditions, and the financial and institutional limits within current public health institutions [7]. Adaptation may also have trade-offs, externalities, and unintended consequences across spatial and temporal scales, which are not always apparent or measurable [9,14–16]. These multiple unfamiliar elements limit the applicability of established environmental health methods and tools [13].

While there are existing metrics to measure the effectiveness of climate change adaptation interventions, there is no universally accepted framework to guide the development of such metrics. For example, Epule et al. propose a climate change policy performance index for all African countries and compare it to a climate change performance index of 57 countries worldwide, each using different metrics to score climate change policy in aggregate rather than by intervention [17]. These differences matter for understanding intervention effectiveness, because how effectiveness is framed significantly shapes adaptation implementation and outcomes [18]. Singh et al. noted eleven different ways that effectiveness has been framed in the literature. Informed by distinct underlying assumptions and values, these range from utilitarian, where “adaptation should minimize costs and maximize benefits,” to transformative, where “adaptation should be a process that fundamentally changes human thinking and practices in the face of climate change and overtly challenges the power structures that generate vulnerability” [19]. Notably, these distinct frames lend themselves to different evaluation methods and metrics.

In response to these challenges, diversity of approaches, and the urgent need for clarity in practice, frameworks and guidance documents have been established to guide the deployment of local-level climate adaptation for health outcomes. Examples include the Intergovernmental Panel on Climate Change (IPCC) Technical Guidelines for Assessing Climate Change Impacts and Adaptations [20], United Nations Environment Programme (UNEP) Handbook on Methods for Climate Change Impact Assessment and Adaptation Strategies [21], United Kingdom Climate Impacts Programme Adaptation Wizard [22,23], United Nations Development Programme-Global Environment Facility Adaptation Policy Framework for Climate Change: Developing Strategies, Policies, and Measures [24], and the World Health Organization-Health Canada Methods of Assessing Human Vulnerability and Public Health Adaptation to Climate Change [25]. To help guide efforts in the United States, the Centers for Disease Control and Prevention (CDC) established the 5-step Building Resilience Against Climate Effects (BRACE) Framework [7]. The BRACE Framework guides local-level practitioners to systematically generate or use climate, environment, population, and health data to understand and project health impacts, review evidence-based interventions, and plan intervention implementation and evaluation [7]. Notably, all the adaptation frameworks cited above include monitoring and evaluation as a core component.

Evaluation challenges for climate adaptation

Evaluation can generate critical evidence about the effectiveness of climate and health adaptation, as well as insights about how to improve program performance [9,18]. Evaluation is a systematic way of asking “Are we doing the right things? And are we doing the things right?” [26,27]. Aptly, there is increasing demand for “evidence-based public health” that develops, implements, and evaluates the effectiveness of programs and policies through scientific reasoning [11,28,29]. There is also growing consensus for explicit consideration of who receives those benefits [30–32]. Standard health monitoring and evaluation practices and indicators, however, are ill-equipped to track and ultimately be used to enhance system-level climate resilience [28]. Despite the urgent need, there is sparse evidence for the health benefits of climate adaptation [11,33]. However, with climate change impacts becoming more frequent and severe, there is increasing demand for evaluations of adaptations to quantify health benefits [34].

The challenges of conducting evaluation in a climate and health context are many and parallel those experienced by adaptation planners in general. Some of the most formidable include dynamic baseline conditions; the need to measure impacts over multiple overlapping time scales, across sectors, and in the context of multiple, interacting, up- and down-stream moderating and mediating factors; inherent uncertainty in the rate, magnitude, and effects of climate change for a given location; and the contingencies posed by current climate mitigation policy decisions [18,27,28,35–37]. Attribution is often difficult in evaluation but is especially so in climate adaptation due to the long-term and multifaceted set of influences outside of a single program or policy. Moser and Ekstrom identified specific organizational barriers to evaluation of climate adaptation that included perceived need and feasibility of evaluation; availability of funding, expertise, data, and methods; and willingness to learn. In addition, they noted a constellation of barriers around the willingness, feasibility, and legal aspects of revisiting prior decisions [38].

Another perennial evaluation challenge is assessing maladaptation. Ultimately, adaptation success depends not only on how well an action meets its intended goals but, crucially, on how it affects the ability of others, present day or in the future, to meet theirs [39,40]. Barnett and O’Neill have identified five manifestations of maladaptation: increase of greenhouse gases, disproportionate burden on those most at risk, high opportunity costs, reduced incentives or capacity to adapt, and decisions that limit future choices and thus increase vulnerability [41]. Fundamentally, maladaptation may be more likely when critical drivers of climate change vulnerability are poorly understood or not considered, or when the project scope is too narrow [36].

Because evaluation in health-focused climate adaptation requires many considerations beyond those typically addressed in traditional public health programs, multiple frameworks have been developed to guide evaluators. Most have been developed in the context of international development programs and were originally intended for middle- and low-income countries [24,27,36,42–51]. While some standard evaluation practices such as use of logic models seem to be commonly incorporated by health-focused adaptation evaluation frameworks, additional tools and approaches responsive to the complexity of the field are often included [24,42–45,47–49,51]. For example, several frameworks are cross-sectoral [24,42–45,47–49] and some contain guidance on adaptation practice [24,42,46,49,50]. To our knowledge these frameworks have not been widely used in the US context, suggesting a missed opportunity.

The Climate-Ready States and Cities Initiative

In the US, CDC’s model for climate and health adaptation, BRACE, has primarily been implemented through the Climate-Ready States and Cities Initiative (CRSCI) [52,53]. This is the largest source of domestic climate and health funding from the federal government. The 2016–2021 funding period awarded annual grants ranging from $100,000 to $250,000 [54] to 16 state and 2 local health departments. All recipients were funded to implement BRACE in their local jurisdictions. CDC encouraged recipients to use the CDC Framework for Program Evaluation to guide their evaluation practice [55]. This framework aims for evaluations to be more contextualized and participatory to encourage use of the findings. The Framework involves six steps and five standards for effective, accurate, useful, feasible, and ethical evaluation. Steps place special emphasis on setting the appropriate evaluation focus, by engaging stakeholders and clearly describing the program to be evaluated. The last step prompts practitioners to ensure that lessons learned are shared and findings used. The framework conceptualizes evaluation as integral to a cycle of continuous program improvement [56].

Given that evaluation in the context of climate and health adaptation is challenging and there is not yet full consensus about the best ways to undertake this work, we can look to the CRSCI program as one example of practitioners tasked, through the CDC grant, with a common purpose. The evaluation experiences of CRSCI recipients provide an opportunity to understand the real-world challenges and opportunities of frontline climate and health evaluation practice. The objectives of this paper are to describe the adaptation actions that were evaluated among CRSCI recipients and the approaches used, present a series of case studies on a subset of evaluations reflecting key challenges and opportunities, identify promising evaluation strategies, and propose how these insights might be applied to strengthening climate and health adaptation evaluation for future recipients and practitioners in the field. Our guiding questions were: 1) How have CRSCI recipients conducted evaluations of climate and health adaptation actions? 2) How have CRSCI recipient evaluation practices been similar or distinct? 3) What evaluation challenges and barriers have recipients encountered? 4) What factors facilitated evaluation?

Methods

We used a mixed-methods design involving a descriptive analysis of evaluations conducted by CRSCI recipients and case studies via content analysis of evaluation reporting documents. The case study method complements the descriptive analysis and provides a more nuanced and contextualized understanding of processes and complexities [57–59]. Our analysis consists of two primary datasets: evaluation data submitted to CDC by CRSCI grant recipients and case studies of three selected evaluations conducted by a subset of CRSCI grant recipients.

Data collection

First, to compile the CRSCI evaluation dataset, we reviewed 54 Annual Performance Reports (APRs) and 72 sets of performance measures (PMs) that CRSCI recipients submitted to CDC as part of the annual grants management process. APRs and PMs are intended to ensure accountability in grant implementation and help CDC staff provide technical assistance to the recipient. They also provide an opportunity for an overall assessment of program accomplishments and impact. APRs follow a prescribed narrative format that includes work plans and a review of accomplishments. PMs include specific quantitative and qualitative metrics across four categories of recipient activities: capacity building; interventions; communications; and evaluation. For this project, we included APRs submitted from 2018 to 2020 and PMs submitted from 2018 to 2021, based on the availability of data at the time of the analysis. The grant cycle considered in this review concluded in August 2021. We reviewed the PM category of evaluation and the APRs to identify any evaluation-related activities. Evaluations did not have to be completed to be included.

The second dataset consists of select case studies of CRSCI evaluations. The principal aims of the case studies were exploratory and descriptive [59]. We selected multiple case studies to enhance external validity of the inquiry [60]. To select the cases, we applied three of Patton’s 16 purposeful sampling principles. We sought evaluation cases that were 1) reflective of evaluations conducted by CRSCI recipients (typical case), 2) completed and included a plan for use of the results (criterion), and 3) diverse in the climate hazards addressed (variation) [61]. The abstracted data on evaluation activities, described above, were used to guide the selection of case studies according to criteria suggested by the purposeful sampling principles. The case studies selected were: New York City’s Be a Buddy Program; Maricopa County Arizona’s Heat Toolkit Distribution; and Sarasota County, Florida’s Emergency Management Building Resilience Against Climate Effects Workshops.

To develop the case studies, we collected additional in-depth information from the project teams leading the evaluations through a standardized template, presented in the supporting materials (see S1 Text). The template consists of a series of prompts, to which teams initially responded in writing or via guided discussions. The prompts covered key elements of the intervention and the evaluation, such as design, methods, evaluation questions, findings, challenges, and facilitators. The template provided examples of facilitators (e.g., “staff, skills, relationships, champions, leadership, or strengths of the program”) and challenges (e.g., “staff, skills, access to data/information/population, resources, timelines, weakness of the program”). We used these terms to enhance inclusivity, as they are more familiar nomenclature to the case study discussants. In our analysis and the remainder of this report, we use the term “enabler,” which is conceptualized similarly to Mallen et al. [62] and defined as a factor that helped the project team conduct the evaluation. Similarly, we use the term “barrier” to mean an obstacle that makes evaluation less efficient or effective or results in additional delays or costs [62,63]. We completed several cycles of feedback and clarification before finalizing the case notes.

Data analysis

For this project, evaluation is defined as “the systematic collection of information about the activities, characteristics, and results of programs, to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming, or increase understanding” [64]. Once evaluations were identified, analysts abstracted and coded key attributes of each project. PM data were abstracted by compiling all the PM data into a single table and searching the text for specific phrases. The attributes abstracted included the health hazard addressed, scope of the evaluation, evaluation purpose, design, and methods (none of these were mutually exclusive categories). Evaluations were not always described in these terms in the PMs and APRs; analysts additionally used descriptive information about the activities provided by the recipient to determine the most appropriate categorization.

The hazards addressed were based on pre-existing PM categories which included: all hazards (defined as extreme weather or climate-related hazard of any kind), heat, vector-borne disease, flooding/extreme precipitation, hurricanes, or wildfire and wildfire smoke. The scope of the evaluation was either action or project level, portfolio (about a set of related activities or subawards to other agencies doing similar projects), or the overall program (typically involving many disparate activities or projects). Evaluations were also categorized as formative, process, and/or outcome (see S1 Table). Each type involves the systematic collection of information. Formative evaluation occurs in the early stages of implementation to guide selecting, developing, tailoring, or improving an activity or program. Process evaluation aims to monitor implementation, often in terms of fidelity and reach, and support mid-course changes and improvements [65]. Outcome evaluation aims to assess the effectiveness, impact, or merit of a program to make recommendations about future program direction or improvement. Evaluation design categories were quasi-experimental, non-experimental (strictly monitoring), or qualitative. For this project, quasi-experimental involved the collection of the same data at multiple time points or use of a comparison group (e.g., pre-post tests without a comparison group, pre-post tests with a nonequivalent comparison group, or post-test only) [66]. Non-experimental involved data collection at a single time point or collection of different indicators over time (e.g., post-test only, cross-sectional, or case studies) [66]. Last, the methods were coded according to the reported data collection strategy in the PM or APR.
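Because the PMs and APRs are free-text documents, abstraction of these attributes amounted to keyword-assisted searching followed by analyst judgment. The sketch below illustrates what a first pass at such categorization could look like; the keyword lists, function name, and example text are illustrative assumptions rather than the actual abstraction instrument used for this study, and categories are intentionally not mutually exclusive.

```python
# Illustrative sketch (not the study's actual instrument): keyword-assisted
# first-pass tagging of free-text performance measure (PM) entries by hazard.
# Analysts would review and correct every assignment, as described above.

HAZARD_KEYWORDS = {
    "all hazards": ["all hazards", "extreme weather"],
    "heat": ["heat"],
    "vector-borne disease": ["vector", "mosquito", "tick"],
    "flooding/extreme precipitation": ["flood", "precipitation"],
    "hurricanes": ["hurricane"],
    "wildfire and wildfire smoke": ["wildfire", "smoke"],
}

def tag_hazards(pm_text: str) -> list[str]:
    """Return every hazard category whose keywords appear in a PM entry."""
    text = pm_text.lower()
    tags = [category for category, terms in HAZARD_KEYWORDS.items()
            if any(term in text for term in terms)]
    return tags or ["needs analyst review"]  # flag entries with no keyword hit

example = ("Evaluated distribution of heat safety toolkits to mobile home "
           "residents ahead of the extreme heat season.")
print(tag_hazards(example))  # ['heat']
```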

In some cases, the APR did not designate an activity as evaluation; however, if a recipient reported a systematic process for collecting, analyzing, or using information to achieve one of the three objectives associated with an evaluation type, the analyst documented it as an evaluation. Attributes of the evaluations were summarized using basic descriptive statistics [59–61]. The qualitative case notes for the three case studies were uploaded into a separate dataset for coding in the qualitative analysis software Dedoose (v9.0.46). Two members of the research team served as coders and a third managed the process for finalizing the codebook and adjudicating coding disagreements. The team used an inductive approach to develop an initial set of codes [67]. Both coders then independently coded samples of data and met regularly to reconcile coded content and update the code list and definitions. This process was repeated until the coders achieved a combined average Cohen’s Kappa above 0.70 for all of the codes, indicating “good agreement” [68]. After this point was reached, both coders finished coding the remaining qualitative data (i.e., double coded). An analyst performed content analysis to determine the themes that emerged most prominently [69]. To enable a better understanding of theme salience, coders quantified the frequency of theme mentions. The synthesis below is based on the presence of the theme within and across the three cases.
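For readers unfamiliar with the agreement statistic referenced above, Cohen’s Kappa compares observed coder agreement with the agreement expected by chance. The minimal sketch below computes it for one code applied by two coders to the same excerpts; it is a generic illustration with made-up assignments, not output from Dedoose or the study data.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders applying one categorical code to the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical presence/absence assignments for one code across eight excerpts.
coder_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
coder_2 = ["yes", "no",  "no", "yes", "no", "no", "yes", "yes"]
print(cohens_kappa(coder_1, coder_2))  # 0.5; the study required > 0.70 averaged across codes
```

Whether computed in a script like this or reported by a tool such as Dedoose, the statistic serves the same purpose: a check that the codebook is being applied consistently before analysis proceeds.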

Ethics statement

In accordance with applicable federal law and CDC policy, CDC determined that the data collection was non-research, and no human subjects review was conducted. Data were not collected from human research subjects to complete this project; informed consent was not applicable.

Results

Review of CRSCI evaluations

Recipients reported a total of 76 distinct evaluations (Table 1). The evaluated projects or programs most commonly addressed all hazards or extreme weather (44.7%), followed by heat (27.6%). Most evaluations were conducted at the project level (75%), rather than for an entire portfolio (10.5%) or overall program (10.5%). Many evaluations had multiple purposes (23.7%), and the most common purposes were to monitor implementation (59.2%) and to assess whether the program was effective (42.1%). The most common type of design was non-experimental (44.7%), followed by quasi-experimental (27.6%). Surveys were the most common method used to collect data (56.6%).

Table 1. Attributes of evaluations conducted by CRSCI recipients, 2016–2021.

https://doi.org/10.1371/journal.pclm.0000102.t001

Case studies

Case studies are summarized in Table 2 and more fully described below.

Table 2. Summary of evaluation case studies, CRSCI recipients, 2016–2021.

https://doi.org/10.1371/journal.pclm.0000102.t002

  1. Case Study 1: New York City’s Be a Buddy program for expanding social support during weather emergencies

Program description and context.

The New York City Department of Health and Mental Hygiene (NYC DOHMH) launched the “Be a Buddy (BaB)” pilot project in July 2017 with the objective of strengthening relationships and connections among community members to promote social cohesion and, in the long-term, community capacity to prepare for, withstand, and recover from extreme weather. BaB was one of several projects associated with Cool Neighborhoods NYC, coordinated by the NYC Mayor’s Office of Resiliency. In May 2018, BaB partnered with three community-based organizations (CBOs) in three NYC communities: Brownsville in Brooklyn, East Harlem in Manhattan, and Hunts Point in the Bronx. In the first two years of the program, BaB CBO partners identified and enrolled 1,311 BaB participants living in their communities who were at increased risk of adverse health impacts of extreme heat and other extreme weather (e.g., older adults with multiple chronic illnesses without access to air conditioning at home). In addition, the program trained seven staff and 66 locally-based volunteers on risks of heat waves, winter storms, and other weather emergencies and ways to prepare for these events.

During non-emergency times the CBO partners hosted over 540 community engagement events to create social connections between volunteers and participants, many of whom were neighbors living in the same block or building, as well as between community members and CBOs. During extreme weather emergencies (e.g., heat waves, flooding, winter storms), CBOs activated their trained volunteer “buddies” to check on participants, provide social and emotional support, and refer these priority community members to city services, resources, and programs, when appropriate. There were ten weather-related activations from May 2018 to March 2020 during which CBOs conducted 7,081 emergency buddy checks and made 883 referrals to services, such as home energy assistance, food assistance, and home health aide services. Starting in March 2020, the BaB CBO networks expanded their scope in response to the COVID-19 public health emergency, for which the characteristics of those most at risk overlapped with BaB participants, activating at first weekly, then monthly, then based on client need through June 30, 2022. During this time there were gaps in funding; however, the BaB networks continued to conduct check-ins because the established volunteer-participant relationships continued naturally.

Evaluation approach.

The evaluation team was composed of staff from the NYC Health Department’s Bureau of Environmental Surveillance and Policy, the Director of New Initiatives from the NYC Health Department’s South Bronx Neighborhood Health Action Center, and a senior policy advisor from the Mayor’s Office of Climate Resiliency and Environmental Justice. Since the start of the program, CBOs provided process evaluation data (e.g., number of volunteers, participant demographic data) via quarterly reports. Although evaluation was considered a priority by the program implementers from the beginning, funding and in-kind support for an outcome evaluation were not available until nearly two and a half years into the pilot, when the COVID-19 response highlighted the acute need for increased support for community-engaged programs that address the inequitable burden of morbidity and mortality on low-income communities of color.

BaB implementers convened an Evaluation Advisory Committee with community partners, academic experts, BaB volunteers and community members to help design and guide evaluation implementation. The primary outcome evaluation questions are listed in Table 2. The non-experimental evaluation design involved formative-, process-, and outcome-focused components. The evaluation team established a monitoring system to track program implementation, conducted key informant interviews with CBO staff, and administered surveys and focus groups with volunteers and participants. In addition, CBO staff were trained to lead and support buddies and participants in a process of generating and documenting narrative reflection via digital recordings. The team used these narratives as a form of evaluation data, as well as a communications tool.

Findings from the formative and process evaluation have been used to make refinements. For example, CBO reports indicated that participants preferred tech-enabled check ins over “door knocking,” the original outreach strategy. This approach has shown high acceptability; in one of the first COVID-19-related activations, 92% of attempted check-ins (calls, texts, or emails from buddies) were picked up or responded to by participants. When available, the evaluation team will disseminate outcome results to program stakeholders and decision-makers in a process informed by academic and community experts from the Evaluation Advisory Committee.

  2. Case Study 2: Maricopa County’s Heat Toolkit Distribution to mobile home residents

Program description and context.

In 2020, the Arizona Department of Health Services supported Maricopa County Department of Public Health (MCDPH) to partner with a CBO, Salud en Balance, to pilot a heat health awareness campaign in a Maricopa County mobile home community. The objectives were to increase knowledge of heat-associated risk factors, awareness of resources for reducing heat-associated risk factors, protective health behaviors related to heat, and use of resources for reducing heat-associated risk factors among campaign participants. The mobile home community was selected because it was in a zip code with higher rates of heat deaths compared to the county (4.5 per 100,000 vs. 4.0 per 100,000, according to Maricopa County’s Heat Death Surveillance Reports) and a higher proportion of renters (75% vs. 37%) in 2018 [70]. Additionally, a high percentage (30%) of heat deaths from high indoor temperatures is associated with mobile homes [71].

The campaign involved distribution of the Heat Toolkit to the community, which contained information on heat illness, heat safety tips, and community resources. Toolkits were distributed to 156 households by six Salud en Balance community health workers (CHWs). Specifically, the toolkit contained information about heat deaths and elevated risk in mobile homes in Maricopa County; tips for staying safe in extreme heat; signs and symptoms of heat illness, and how to respond; a list of cooling center locations; and information on utility assistance, rent assistance, weatherization, and eviction prevention programs. Most information was available in English and Spanish. Three workshops for CHWs provided training on the toolkit, engaging in conversations with residents, and administering the evaluation surveys.

Evaluation approach.

The evaluation team consisted of a Health Equity Epidemiologist, a Climate and Health Senior Epidemiologist, an Epidemiology Data Analyst, a Climate and Health Senior Epidemiologist, an Active Living Specialist, and a Community Health Worker. The evaluation featured formative, process, and outcome components. The formative component served as a needs assessment that included continuous feedback from the CHWs on the overall project to support the potential development of additional interventions. The process evaluation component was meant to monitor implementation progress and fidelity. The outcome evaluation was intended to assess whether receiving the toolkit, discussing it with the CHW, and using it throughout the heat season had a positive effect. Key evaluation questions are presented in Table 2.

The primary method was a pre-post survey of the mobile home community residents (conducted in July and October 2020). Simple quantitative analysis without inferential statistics indicated an increase in knowledge of heat and heat illness, understanding that heat can pose a risk to health, willingness to leave home to go to an air-conditioned place to cool off, awareness of programs to help with the cost of utility bills, and awareness of programs to help with cooling system repairs. Applications to utility assistance programs did not increase. Residents reported they would not apply to or use assistance programs due to lack of Spanish-language staff and materials, fear of showing identification, lack of computer access, complicated application processes, and assumptions that they would not qualify for these programs/services.

Based on the findings, the toolkits were modified to include more detailed information on how to apply to assistance programs. Additionally, more components of the toolkit will be available in both Spanish and English. Salud en Balance, in partnership with MCDPH and Foundation for Senior Living, began to coordinate an A/C repair and home weatherization workshop for the residents and presented information to residents about resource assistance programs. The toolkit program was expanded to six additional mobile home communities.

  3. Case Study 3: Sarasota County’s Emergency Management Building Resilience Against Climate Effects (EMBRACE) workshops

Program description and context.

Florida State University (acting as a bona fide agent of the Florida Department of Health) provided grant support to the Florida Department of Health in Sarasota County (DOH-Sarasota) to design, develop, and conduct a formative evaluation to generate insights for improving emergency management responses, policies, and procedures for access and functional needs (AFN) populations in the event of major hurricanes, storms, and flooding events.

Sarasota County is a Southwest Florida coastal community of approximately 430,000 residents, of whom 37% are elderly, 8% are under age 65 and have disabilities, 9% live in poverty, and many live alone [72]. During a review of disaster planning for vulnerable populations, DOH-Sarasota staff reviewed the 2010 state-funded hurricane evacuation studies to determine the state of readiness and effectiveness of existing evacuation routes. It was noted that the assessment did not address the capacity of the county’s vulnerable population (VP) residents to self-evacuate or the effectiveness of emergency messaging to this population. DOH-Sarasota decided that having answers to those questions was important for informing future VP emergency preparedness planning.

To collect this information, DOH-Sarasota implemented a series of Emergency Management Building Resilience Against Climate Effects (EMBRACE) Workshops in 2014 to gather insights from at-risk AFN residents about their level of preparedness, emergency communication capacity and transportation barriers to assess potential ways that emergency plans could be amended to better meet this population’s needs. The project objectives were to identify the current levels of disaster preparedness and recovery capacity of AFN groups, assess barriers and challenges at-risk AFN residents must overcome to access emergency communications and transportation during evacuations, and use feedback from key risk groups to inform assumptions of AFN emergency needs to develop functional capability and capacity. The feedback gathered would be used to update FL DOH-Sarasota Emergency Operations Plans and build stronger collaborative partnerships across key stakeholders.

Evaluation approach.

Key evaluation questions are presented in Table 2. DOH-Sarasota selected a workshop format, rather than surveys, to solicit information, due to the concern that surveys would result in low response from this population. The evaluation team used climate vulnerability maps overlaid with medical, social, and community resilience data indices to map the location of AFN populations relevant to storm surge vulnerability. EMBRACE community workshop locations were chosen to best serve the identified vulnerable communities. Ultimately, DOH-Sarasota facilitated three inquiry-focused workshops in North, South, and Central Sarasota County and one collaborative problem-solving workshop in Central Sarasota. Participants were invited through email invitations and presentations at agencies and community meetings to attend the 6-hour workshops; ultimately there were 45 unique participants across the four workshops, which were facilitated by DOH-Sarasota staff. The format involved seminar-style and small roundtable discussions, using open-ended questions with directed probing to elicit detailed explanations and actionable feedback from workshop participants. Additionally, DOH-Sarasota met with residents and their caretakers at the Sarasota Center for Independent Living facility to conduct one-on-one interviews for those who were unable to attend the workshops and to gain additional insight into daily challenges that could impact their ability to safely navigate an emergency event.

The fourth problem-solving workshop was conducted to present the insights gathered from the preceding workshops to decision makers, community champions and additional emergency planning partners. Also, information was gathered about the extent to which participants found the workshops an acceptable method for generating input from AFN populations.

Key evaluation insights confirmed assumptions practitioners had made about the additional limitations, beyond road capacity, that AFN populations faced and about the decisions they must make under physical and resource constraints. Food deserts were also flagged as a key concern. Workshop findings were used to inform a disaster preparedness-focused Health Impact Assessment, develop an All-Hazards Survival and Active Bystander Training, and compile survival kits to help at-risk groups be better prepared for an emergency. Lessons from the workshops set the foundation for future assessment of progress and identification of resources and strategies to continue outreach to AFN and other culturally diverse groups who were not engaged in the workshops.

Cross-cutting practices and themes from case studies

We noted several similarities and contrasts between the three case studies (Fig 1). These span several domains, including prioritization of health equity goals, inclusion of stakeholders in the evaluation process, diversity and local relevance of evaluation aims, and inconsistencies in using an evaluation framework.

Fig 1. Count of practices and cross-cutting themes from evaluation case studies, CRSCI recipients, 2016–2021.

https://doi.org/10.1371/journal.pclm.0000102.g001

Common practices across case studies.

Health equity was considered a priority by all three evaluation teams. This was reflected both in the aims of the adaptation intervention to benefit specific populations disproportionately burdened by climate change and in the evaluation process itself. In all cases, the evaluation teams sought close collaborative relationships with implementing agencies and partners to include perspectives from populations intended to benefit from the intervention. Evaluation teams also sought to include stakeholders to ensure that evaluation findings would ultimately meet the needs of the intended users. While considered a high priority by all three evaluation teams, the mechanisms for stakeholder engagement varied across the cases. In the New York City evaluation, a formal, multi-disciplinary advisory panel with representatives from CBOs, academics, and evaluators was established early and convened regularly. In Maricopa County, the evaluation team met regularly with the implementing CBO, which possessed a deep understanding of the community’s needs and thus could advise on developing culturally and linguistically appropriate survey instruments. In Sarasota, the evaluation team prioritized accessibility for participants through careful selection and vetting of workshop venues.

The aims of the evaluations were all determined locally, rather than by CDC, and ranged from needs assessment to establishing the effectiveness of the adaptation intervention. Each evaluation entailed specific, explicit evaluation questions. In all cases, formative evaluation questions were posed. Two evaluations set out to establish effectiveness, but only one team used a quasi-experimental design. In this case, the lack of a comparison group would inhibit answering that question with a high degree of certainty. Each evaluation used multiple methods to answer a suite of evaluation questions. In Sarasota, workshops as well as interviews were conducted to include those who could not travel. In NYC, the evaluation team conducted key informant interviews with CBO staff and surveys and focus groups with buddies and participants.

The evaluation teams did not consistently report seeking and using a published evaluation framework to identify and sequence steps or make decisions. To varying degrees, all teams used CDC guidance provided via a reporting template called the Implementation and Monitoring Strategy, which seemed to reinforce use of best practices found in the CDC Evaluation Framework. For example, each evaluation engaged individuals and organizations with an interest in the program and the evaluation. Additionally, Maricopa County developed a logic model and used reflective practice (i.e., a practice of asking and answering questions intended to steer implementation decisions toward best practices). In New York City, a comprehensive evaluation plan based on evaluation questions was established, documented, and shared with evaluation partners. All recipients reported that the CRSCI grant-reporting requirements informed the design of their evaluation. Maricopa County reported that these encouraged the team to capture both process and outcome measures. However, New York City commented that flexibility and responsiveness to community needs were higher priorities in both the design of the intervention and the evaluation than following a framework, which was considered potentially rigid and academic.

Barriers and enablers to evaluation.

We also explored barriers and enablers to conducting evaluations. In terms of barriers, themes that emerged across more than one case included inadequate staffing, the need to shift plans, and lack of specific tools, especially related to information technology or analytic software.

In two of three case studies, teams reported that the demands of the evaluation sometimes exceeded the capacity of the staff in terms of skills and time. No evaluations were led by full-time professional evaluators. Instead, evaluation teams were generally composed of epidemiologists, academic advisors, public health generalists, and analysts. Another important barrier was insufficient communication with key stakeholder groups. Even though all evaluation teams sought stakeholder input, at times communication was not as frequent or comprehensive as needed. For example, in New York City, meetings were scheduled according to community partner availability, but after implementation challenges surfaced, increased communication about the actual preferences of community members helped resolve these issues.

The COVID-19 pandemic was also a key factor that in two cases inhibited evaluation progress by shifting attention and resources away from climate and health programming. However, the pandemic response also provided opportunities. For example, DOH-Sarasota was able to co-present on climate and health topics during COVID-19 trainings with emergency response partners, and Maricopa County bundled COVID-19 resources with heat resources, expanding the range of participant needs that were met. In New York City, support for the program and evaluation increased during the pandemic response, as the Be a Buddy program was increasingly seen as a way to meet the needs of a population disproportionately at risk from COVID-19 and findings to substantiate this assumption were needed. This team also observed that limiting methods to quantitative surveys was not meeting the need for a richer, contextual understanding about why certain program elements were more or less successful. Additional resources allowed the team to expand methods to include qualitative approaches, such as focus groups, key informant interviews, and digital storytelling, that could deliver these insights.

The most pronounced enabler for evaluation, found across the three cases, was the engagement of those who implement, are affected by, or make decisions about the program. In the case of DOH-Sarasota, engagement of affected community members to understand their needs was the primary goal of the evaluation itself. In New York City, the Community Evaluation Advisory Board helped with the management of partnership challenges when they arose and helped set explicit expectations for decision making and timeline management. This team also leveraged their relationship with CBOs formed around the evaluation to redefine “at risk” groups and enhance program practices that were not meeting participants’ needs. In Maricopa County, the strong relationship with Salud en Balance facilitated knowledge sharing about actual community needs with the evaluation team. This relationship also benefited the evaluation by leveraging Salud en Balance’s relationships with other partners to maintain consistent communication. Additional enablers reported in at least two case studies were leadership support, trust with the community, establishment of an evaluation plan, flexibility in implementing the evaluation, and adequate evaluation staffing, which varied over time.

Discussion

We share insights from CRSCI, the nation’s first and largest initiative to implement health-focused climate adaptation at the state and local levels. Our analysis benefits from the mixed methods approach, which facilitated a composite picture of evaluation activities across CRSCI, as well as a deeper exploration of evaluation implementation by three recipients. Results highlight several opportunities for enhancing evaluation within CRSCI and for the broader practice context of health-focused climate adaptation.

Improving evaluation for CDC’s climate ready states and cities initiative and beyond

The review of evaluation activities across CRSCI indicates that evaluations were conducted by all recipients and focused on a range of adaptation initiatives. The majority of evaluations intended to monitor program implementation or improvement, and as expected based on the aims of the evaluations, non-experimental designs were most common. While some recipients also intended to assess the effectiveness of the adaptation intervention (42.1%), not all of those evaluations were designed to be able to answer outcome questions (none were experimental and only 27.6% were quasi-experimental, which could have involved a single group pre-post test without a comparison).

The case studies provided more insight as to why this might be the case. The grant stipulated recipients use the BRACE framework to plan, implement, and evaluate their adaptation interventions. However, in practice, recipients had difficulty stretching the funding award amount to cover the costs of robust evaluations in addition to planning and implementation activities, often translating into the inability to hire evaluation staff with the knowledge and skills required to mount more sophisticated evaluation designs.

In response to these challenges, evaluation was further emphasized and prioritized during the 2021–2026 cycle of CRSCI; recipients were required to conduct outcome evaluations of at least two adaptation interventions and to measure outcomes in terms of health equity. In this cycle, a modest increase in funding and a de-emphasis of other BRACE activities were intended to help facilitate this effort. Within the first year of the grant, two-thirds of recipients had hired or planned to hire staff with evaluation training who will focus on these activities (internal administrative data, 2022). In addition, CDC’s Climate and Health Program (CHP) has provided monthly evaluation training via a community of practice, evaluation resources, and templates to recipients. CHP also partnered with the American Public Health Association to publish a practical guide for justice- and equity-focused climate adaptation and evaluation [73,74]. Beyond CRSCI, we recommend that adaptation funders provide adequate evaluation funding, supply training resources, use evaluation frameworks, and set clear expectations for locally driven evaluations that consider impacts among the most vulnerable, as well as conduct portfolio-level monitoring and evaluation that can reflect progress and, ultimately, the value of the overall program.

We noted that all three case studies included formative evaluation questions. In each instance, evaluation teams systematically sought input from populations intended to benefit from the intervention that would help inform the design and implementation of the present intervention or the next iteration of interventions seeking the same outcome. Seeking to understand how a proposed intervention meaningfully responds to the community’s needs and context aligns with the stated aims to improve health outcomes among marginalized populations acutely vulnerable to climate hazards due to low adaptive capacity and high sensitivity. The prevalence of formative evaluations also likely reflects that program planners could draw upon only a limited evidence base, largely unable to speak to the nuances of specific climate-hazard contexts. This highlights the vast information needs of programs such as these and suggests that evaluation will likely be pulling “double duty” for the foreseeable future. Double-loop learning is another way of conceptualizing learning for and from adaptation [75], in which evaluators routinely need to ask not only “How effective are we?” but also “What else could we be doing?”.

The case studies revealed a few additional barriers and enablers that correspond to those identified by prior research. Moser and Ekstrom [38] and Mallen et al. [62] similarly identified limitations in resources and funding, challenges with leadership, and lack of climate-specific expertise to be substantial barriers to successful climate adaptation. These findings imply that several barriers to successful adaptation also apply to evaluation. With two of three case studies noting lack of evaluation expertise as a significant barrier, lack of expertise may continue to present substantial challenges in this nascent field. The COVID-19 pandemic also presented significant barriers to evaluation progress for two of the three cases. This supports similar findings by Mallen et al. that the pandemic was one of the most common barriers to climate and health adaptation, impacting 10 out of 11 climate adaptation actions analyzed by the authors. We noted that enabling factors included the perceived need of the evaluation by the local team, especially among stakeholders in a position to provide resources, and the availability of both evaluation expertise and time that could be dedicated among existing staff. In all three cases, the evaluations were instigated and led by local teams who shared the perspective that learning about the program was of value. This aligns with a key principle of adaptive management, upon which the BRACE model for adaptation is built [7].

Use of evaluation frameworks

No evaluation teams featured in the case studies described seeking and using a published evaluation framework to guide planning and implementation. To varying degrees, all recipients followed a grant-required planning and reporting template, which was based on the CDC Evaluation Framework. We echo the calls of others that using a framework can be of great value [27,36], especially for evaluation teams without a professional evaluator. Multiple frameworks are available and most offer step-by-step guidance along with training guides, reflective questions, and case studies [24,42–51]. Notably, few frameworks were created for practitioners in high-income countries [24,42,44,49] and fewer still explicitly address equity concerns [24,46,49]. There may also be a need to socialize the use of frameworks as standard practice and help practitioners become more familiar and comfortable with using them. Health-focused climate adaptation in the US domestic setting could benefit from the development and promotion of a comprehensive yet flexible evaluation model.

There is growing evidence of maladaptation across all sectors, which can increase vulnerability and exacerbate existing inequalities [22,39]. We found that CRSCI evaluations did not generally address the risks of maladaptation. Evaluation should routinely prioritize conceptualizing and measuring these risks and could start with using theory of change or logic models to articulate how the adaptation will intervene on the fundamental drivers of climate vulnerability and health inequity [26,36,76,77]. Logic models can also prompt planners to consider the implementer’s sphere of control or authority and the range of potential outcomes outside of that sphere.

More specifically, it may prove useful for CRSCI and other practitioners to use rubrics or guides on the practical steps to make these considerations explicit in the planning process. Magnan et al. [39] recommend three frameworks to preemptively and objectively assess maladaptation risks; however, none are specific to health-focused climate adaptation. Additional research and development of practical guidance will be crucial for crafting adaptation interventions and assessing their impacts, intended and unintended, positive and negative.

Limitations and future directions

There are several limitations to note in this report. First, this study was based on grant recipient reporting via PMs and APRs. Our findings assume that all recipients reported relevant data accurately, comprehensively, and in a standardized fashion. We recognize that this may not have been the case for all recipients, whether in PM and APR reporting or in the case studies. In some cases, reporting may be incomplete or may not describe the extent to which an action or intervention was successful, as the PMs did not directly collect information about the effectiveness of an intervention or evaluation activities. In the APRs and PMs, grant recipients may have only reported what they deemed to be worthy or required of them to report to CDC and may not have reported all evaluation activities. As a result, our findings may underestimate the total evaluation activities across all grant recipients.

Furthermore, in the case studies, we did not require that the evaluation itself be considered a “success” by pre-defined metrics or by the local team. This study seeks to identify methods to improve evaluation practice rather than to identify successful interventions or evaluations, so defining success or success metrics was outside the scope of the current study. Second, our recommendations assume that the experiences of these recipients are generalizable to other practitioners, which may not hold given differences in funding and access to federally provided technical assistance.

Third, qualitative findings were based on a limited set of case studies. Rather than statistical generalization, case studies rely on analytical generalization, which allows the user to apply a particular set of results to a broader theory [59]. The selection of multiple case studies aimed to meet a standard of replication and thereby expand the possible interpretation and application of the results [78]. There were also challenges during the grant cycle that affected the data reporting strategy: approval for performance measure collection was delayed, resulting in a one-year gap in reporting, and in practice, APRs were not entirely standardized across recipients, resulting in variability in the depth and detail provided.

Evaluation, in the context of adaptation and health, is likely to grow in quality, scope, and scale over time in response to funders’ demands for accountability and evidence of effectiveness, along with the growing availability of data reflecting both climate vulnerability and resilience. With climate impacts on health increasing in frequency and severity, there is a growing need for rigorous evaluation of adaptation interventions to improve public health and justify public investment [34]. In tandem, evaluators may have more choices in the evaluation questions, designs, and methods they employ. In this context, it should become routine practice to explicitly examine and state the assumptions reflected in the choice of evaluation questions, design, and methods, as well as in the evaluation stakeholders invited to the table. There are multiple ways of understanding effectiveness [18]. The evaluation questions highlighted in the case studies appear to use two implicit frames for understanding and assessing effectiveness, namely “improved wellbeing” and “reduced vulnerability or increased adaptive capacity.” Adaptation success should be defined on a case-by-case basis, keeping local context in mind and informed by the intervention stakeholders. For future iterations of CRSCI, however, evaluation teams may be encouraged to articulate which effectiveness frame is being applied and why. Further, CHP could encourage the use of additional frames suggested by Singh et al. [19], such as “enhanced resilience,” “sustainable adaptation,” “avoiding maladaptation,” “just and equitable adaptation,” or “transformative adaptation.”

Conclusion

In this paper, we aim to contribute to the growing discussion that recognizes the urgent need for accountability, an expanded evidence base for effective health-focused climate adaptation, and iterative learning to improve program delivery. By examining evaluation practices among all CRSCI recipients and delving deeper into the experiences of three recipients, we identified evaluation practices, barriers, and enablers that support the following recommendations.

We encourage adaptation funders to include dedicated resources to support evaluation, set clear expectations for locally driven evaluations, and implement portfolio-level monitoring and evaluation that reflects the progress and value of sponsored activities. In addition, adaptation practitioners and evaluators should recognize the outsized need to conduct formative evaluations that can illuminate dynamic needs and perspectives critical to the adaptation strategy’s ultimate success, with intentional focus on those who will be impacted by the adaptation intervention, those who will implement it, and those in a position to make decisions. We recommend that funders and practitioners cultivate an organizational culture of learning via systematic inquiry, a core principle of adaptive management.

We also encourage the development and use of justice- and equity-driven evaluation frameworks specific to the needs of health-focused climate adaptation. These could be especially useful for teams that do not have the benefit of a professional evaluator. Maladaptation is a currently under-examined aspect of climate adaptation that threatens to increase climate vulnerability for some people and places; the use of frameworks and logic models can help bring this issue to the forefront. Last, frameworks that include reflective practice can help practitioners and evaluation teams articulate which effectiveness frame is being applied and why. As the health effects of climate change become more widespread and severe, greater attention to our adaptation approaches is sure to follow. Clarity about what we expect interventions to achieve reflects how far our sights are set. Do we aim to reduce narrowly defined risks and minimize costs, or to fundamentally change the societal structures that cause, exacerbate, and create unequal climate vulnerability? These kinds of decisions are always inherent in evaluation practice but have pronounced urgency and salience given the scale and scope of today’s climate crisis. Long-standing evaluation traditions, such as reflective practice, making values explicit, and understanding the influence of stakeholder power dynamics, suggest how robustly evaluation can meet the moment.

Society currently has the scientific understanding, technology, and financial means to keep climate change within a range that allows for human adaptation [22]. We have seemingly endless options for climate adaptation strategies with the potential to directly promote health, though complexity and uncertainty make action and decision-making daunting. Evaluation as routine practice generates knowledge and learning, providing opportunities to question assumptions, test theories, and improve practices that can move communities toward improved health and climate resilience.

Supporting information

S1 Text. Case study data collection instrument and responses.

https://doi.org/10.1371/journal.pclm.0000102.s002

(DOCX)

Acknowledgments

We would like to thank the following agencies and individuals for their support of this work. First, we thank the entire cohort of CRSCI awardees (grant CDC-EH16-1602). See https://www.cdc.gov/climateandhealth/crsci_grantees.htm for listing of all 18. We also thank Madhumita Govindu, Robyn Borgman, and Maureen Wilce from CDC’s Division of Environmental Health Science and Prevention. From Florida, we thank Sarasota County Emergency Management, Edward J. McCrane Jr., Sarasota Community Organizations Active in Disaster, Suncoast Disaster Healthcare Coalition, DOH-Sarasota Community Health Action Teams, Kristian Blessington (DOH-Sarasota Environmental Health), and Sophee Payne (DOH-Sarasota Environmental Health). From Arizona, we thank Salud en Balance, Vjollca Berisha (Maricopa County Department of Public Health), Aaron Gettel (Maricopa County Department of Public Health), Jessica Whitney (Maricopa County Environmental Services) and Gail LaGrander (Maricopa County Department of Public Health). From New York City, we thank Union Settlement Senior Services, The POINT Community Development Corporation, Brooklyn Community Services at Seth Low, The Fund for Public Health in NYC and our colleagues at the Mayor’s Office of Climate Resiliency and the NYC Health Department Centers for Health Equity and Community Wellness and the Bureau of Environmental Sustainability and Policy.

References

  1. Ebi KL, Balbus JM, Luber G, Bole A, Crimmins A, Glass GE, et al. Human health. In: Impacts, risks, and adaptation in the United States: Fourth National Climate Assessment, Vol. II; 2018.
  2. Crimmins A, Balbus JM, Gamble JL, Beard CB, Bell JE, Dodgen D, et al. Executive summary. In: The impacts of climate change on human health in the United States: a scientific assessment; 2016.
  3. McCarthy JJ, Canziani OF, Leary NA, Dokken DJ, White KS, editors. Climate change 2001: impacts, adaptation, and vulnerability. Intergovernmental Panel on Climate Change: Cambridge University Press; 2001.
  4. Smit B, Burton I, Klein RJT, Street R. The science of adaptation: a framework for assessment. Mitig Adapt Strateg Glob Change. 1999;4(3):199–213.
  5. Meerow S, Newell JP, Stults M. Defining urban resilience: a review. Landscape and Urban Planning. 2016;147:38–49. https://doi.org/10.1016/j.landurbplan.2015.11.011
  6. Brand F, Jax K. Focusing the meaning(s) of resilience: resilience as a descriptive concept and a boundary object. Ecology and Society. 2007;12:23.
  7. Marinucci GD, Luber G, Uejio CK, Saha S, Hess JJ. Building resilience against climate effects—a novel framework to facilitate climate readiness in public health agencies. Int J Environ Res Public Health. 2014;11(6):6433–58. pmid:24991665; PubMed Central PMCID: PMC4078588.
  8. O’Brien KL, Leichenko R. Double exposure: assessing the impacts of climate change within the context of economic globalization. Glob Environ Change. 2000;10:221–32.
  9. Adger WN, Arnell NW, Tompkins EL. Successful adaptation to climate change across scales. Glob Environ Change. 2005;15:77–86.
  10. Portier CJ, Tart KT, Carter SR, Dilworth CH, Grambsch AE, Gohlke J, et al. A human health perspective on climate change: a report outlining the research needs on the health effects of climate change. Environmental Health Perspectives and the National Institute of Environmental Health Sciences; 2010.
  11. Hess JJ, Malilay JN, Parkinson AJ. Climate change: the importance of place. Am J Prev Med. 2008;35(5):468–78. pmid:18929973
  12. Kovats RS, Bouma MJ, Hajat S, Worrall E, Haines A. El Niño and health. Lancet. 2003;362(9394):1481–9. pmid:14602445.
  13. Füssel H-M. Assessing adaptation to the health risks of climate change: what guidance can existing frameworks provide? Int J Environ Health Res. 2008;18(1):37–63. pmid:18231945
  14. Dilling L, Prakash A, Zommers Z, Ahmad F, Singh N, Wit S, et al. Is adaptation success a flawed concept? Nat Clim Chang. 2019;9:1.
  15. Gajjar SP, Singh C, Deshpande TN. Tracing back to move ahead: a review of development pathways that constrain adaptation futures. Clim Dev. 2019;11:223–37.
  16. Magnan AK, Schipper ELF, Duvat VKE. Frontiers in climate change adaptation science: advancing guidelines to design adaptation pathways. Curr Clim Change Rep. 2020;6(4):166–77.
  17. Epule TE, Chehbouni A, Dhiba D, Moto MW, Peng C. African climate change policy performance index. Environmental and Sustainability Indicators. 2021;12:100163. https://doi.org/10.1016/j.indic.2021.100163
  18. Owen G. What makes climate change adaptation effective? A systematic review of the literature. Glob Environ Change. 2020;62:102071. https://doi.org/10.1016/j.gloenvcha.2020.102071
  19. Singh C, Iyer S, New MG, Few R, Kuchimanchi BR, Segnon AC, et al. Interrogating ‘effectiveness’ in climate change adaptation: 11 guiding principles for adaptation research and practice. Clim Dev. 2021.
  20. Carter TR, Parry ML, Harasawa H, Nishioka S. IPCC technical guidelines for assessing climate change impacts and adaptations. IPCC Special Report to the First Session of the Conference of the Parties to the UN Framework Convention on Climate Change; 1994.
  21. Feenstra JF, Burton I, Smith JB, Tol RSJ, editors. Handbook on methods for climate change impact assessment and adaptation strategies. United Nations Environment Programme; 1998.
  22. Portner HO, Roberts DC, Adams H, Adler C, Aldunce P, Ali E, et al. Climate change 2022: impacts, adaptation and vulnerability. Netherlands: IPCC; 2022.
  23. The UKCIP Adaptation Wizard v 4.0 [Internet]. UKCIP; 2013.
  24. Lim B, Spanger-Siegfried E, Burton I, Malone EL, Huq S. Adaptation policy frameworks for climate change: developing strategies, policies and measures. United Nations Development Programme; 2005.
  25. Kovats RS, Ebi KL, Menne B, Campbell-Lendrum DH, Canziani O, Githeko AK, et al. Methods of assessing human health vulnerability and public health adaptation to climate change. 2003.
  26. Pringle P. AdaptME: adaptation monitoring and evaluation. UKCIP; 2011.
  27. Bours D, McGinn C, Pringle P. Monitoring & evaluation for climate change adaptation and resilience: a synthesis of tools, frameworks and approaches. SEA Change CoP, Phnom Penh and UKCIP; 2014.
  28. Ebi KL, Boyer C, Bowen KJ, Frumkin H, Hess J. Monitoring and evaluation indicators for climate change-related health impacts, risks, adaptation, and resilience. Int J Environ Res Public Health. 2018;15(9):1943. pmid:30200609
  29. Baker EA, Brownson RC, Dreisinger M, McIntosh LD, Karamehic-Muratovic A. Examining the role of training in evidence-based public health: a qualitative study. Health Promot Pract. 2009;10(3):342–8. pmid:19574586.
  30. Krieger N. Measures of racism, sexism, heterosexism, and gender binarism for health equity research: from structural injustice to embodied harm—an ecosocial analysis. Annu Rev Public Health. 2020;41:37–62. Epub 20191125. pmid:31765272.
  31. Egede LE, Walker RJ. Structural racism, social risk factors, and COVID-19—a dangerous convergence for Black Americans. N Engl J Med. 2020;383(12):e77. Epub 20200722. pmid:32706952; PubMed Central PMCID: PMC7747672.
  32. Castillo EG, Harris C. Directing research toward health equity: a health equity research impact assessment. J Gen Intern Med. 2021;36(9):2803–8. Epub 20210504. pmid:33948804; PubMed Central PMCID: PMC8096150.
  33. Scheelbeek PFD, Dangour AD, Jarmul S, Turner G, Sietsma AJ, Minx JC, et al. The effects on public health of climate change adaptation responses: a systematic review of evidence from low- and middle-income countries. Environ Res Lett. 2021;16(7):073001. Epub 20210713. pmid:34267795; PubMed Central PMCID: PMC8276060.
  34. Bedi NS, Adams QH, Hess JJ, Wellenius GA. The role of cooling centers in protecting vulnerable individuals from extreme heat. Epidemiology. 2022;33(5):611–5. Epub 20220616. pmid:35706096; PubMed Central PMCID: PMC9378433.
  35. Turner SP, Moloney S, Glover A, Fuenfgeld H, editors. A review of the monitoring and evaluation literature for climate change adaptation. 2014.
  36. Jupp V. Good practice study on principles for indicator development, selection, and use in climate change adaptation monitoring and evaluation. Climate-Eval Community of Practice; 2015.
  37. Conlon KC, Austin CM. Climate change and public health interventions. In: Pinkerton KE, Rom WN, editors. Climate change and global public health. Springer International Publishing; 2021. p. 549–64.
  38. Moser SC, Ekstrom JA. A framework to diagnose barriers to climate change adaptation. Proc Natl Acad Sci USA. 2010;107(51):22026–31. pmid:21135232
  39. Magnan AK, Schipper ELF, Burkett M, Bharwani S, Burton I, Eriksen S, et al. Addressing the risk of maladaptation to climate change. Wiley Interdiscip Rev Clim Change. 2016;7(5):646–65. https://doi.org/10.1002/wcc.409
  40. Schipper ELF. Maladaptation: when adaptation to climate change goes very wrong. One Earth. 2020;3(4):409–14. https://doi.org/10.1016/j.oneear.2020.09.014
  41. Barnett J, O’Neill S. Maladaptation. Global Environmental Change. 2010;20(2):211–3. https://doi.org/10.1016/j.gloenvcha.2009.11.004
  42. Villanueva P. Learning to ADAPT: monitoring and evaluation approaches in climate change adaptation and disaster risk reduction: challenges, gaps and ways forward. Strengthening Climate Resilience Discussion Paper 9; 2011.
  43. Spearman M, McGray H. Making adaptation count: concepts and options for monitoring and evaluation of adaptation to climate change. 2011.
  44. Pringle P, editor. AdaptME toolkit: adaptation monitoring and evaluation. 2011.
  45. Olivier J, Leiter T, Linke J. Adaptation made to measure: a guidebook to the design and results-based monitoring of climate change adaptation projects. 2012.
  46. Ayers J, Anderson S, Pradhan S, Rossing T. Participatory monitoring, evaluation, reflection and learning (PMERL) manual. CARE International’s Poverty, Environment, and Climate Change Network; 2012.
  47. Updated results-based management framework for adaptation to climate change under the Least Developed Countries Fund and the Special Climate Change Fund. Global Environment Facility; 2014.
  48. Brooks N, Fisher S, Rai N, Anderson S, Karani I, Levine T, et al. Tracking adaptation and measuring development: a step-by-step guide. 2014.
  49. PROVIA. PROVIA guidance on assessing vulnerability, impacts and adaptation to climate change. United Nations Environment Programme; 2013.
  50. Community based resilience assessment (CoBRA) conceptual framework and methodology. United Nations Development Programme Drylands Development Center; 2013.
  51. Results framework and baseline guidance: project level. Adaptation Fund; 2011.
  52. Sheehan MC, Fox MA, Kaye C, Resnick B. Integrating health into local climate response: lessons from the U.S. CDC Climate-Ready States and Cities Initiative. Environ Health Perspect. 2017;125(9):094501. Epub 20170920. pmid:28934724; PubMed Central PMCID: PMC5915203.
  53. CDC’s Climate-Ready States & Cities Initiative. Centers for Disease Control and Prevention; 2022 [cited June 2022]. Available from: https://www.cdc.gov/climateandhealth/climate_ready.htm
  54. Climate-Ready States & Cities Initiative grant recipients. Centers for Disease Control and Prevention; 2020 [cited 2020]. Available from: https://www.cdc.gov/climateandhealth/crsci_grantees.htm
  55. Framework for program evaluation in public health. MMWR. 1999;48(RR-11).
  56. Kidder DP, Chapel TJ. CDC’s program evaluation journey: 1999 to present. Public Health Rep. 2018;133(4):356–9. pmid:29928844.
  57. Benbasat I, Goldstein DK, Mead M. The case research strategy in studies of information systems. MIS Q. 1987;11:369–86.
  58. Darke P, Shanks GG, Broadbent M. Successfully completing case study research: combining rigour, relevance and pragmatism. Inf Syst J. 1998;8.
  59. Yin R. Discovering the future of the case study method in evaluation research. Eval Pract. 1994;15(3):283–90.
  60. Creswell JW. Research design: qualitative and quantitative approaches. SAGE Publications; 1994.
  61. Patton M. Qualitative evaluation and research methods. Sage; 1990.
  62. Mallen E, Joseph HA, McLaughlin M, English DQ, Olmedo C, Roach M, et al. Overcoming barriers to successful climate and health adaptation practice: notes from the field. Int J Environ Res Public Health. 2022;19(12). Epub 20220611. pmid:35742418; PubMed Central PMCID: PMC9222828.
  63. Eisenack K, Moser SC, Hoffmann E, Klein RJT, Oberlack C, Pechan A, et al. Explaining and overcoming barriers to climate change adaptation. Nature Climate Change. 2014;4(10):867–72.
  64. Patton MQ. Utilization-focused evaluation. SAGE Publications; 2008.
  65. Issel M. Health program planning and evaluation: a practical, systematic approach for community health. Jones & Bartlett Learning; 2009.
  66. Trochim WMK. Research methods knowledge base. 2020. Available from: https://conjointly.com/kb/
  67. MacQueen KM, McLellan E, Kay K, Milstein B. Codebook development for team-based qualitative analysis. CAM Journal. 1998;10(2):31–6.
  68. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess. 1994;6(4):284–90.
  69. Ryan GW. Measuring the typicality of text: using multiple coders for more than just reliability and validity checks. Hum Organ. 1999;58(3):313–22.
  70. Explore census data [Internet]. United States Census Bureau; 2022 [cited 2018]. Available from: https://data.census.gov/cedsci/
  71. Heat reports. Maricopa County; 2022 [cited 2018]. Available from: https://www.maricopa.gov/1858/Heat-Surveillance
  72. QuickFacts: Sarasota County, Florida. U.S. Census Bureau; 2022. Available from: https://www.census.gov/quickfacts/sarasotacountyflorida
  73. CDC’s Building Resilience Against Climate Effects (BRACE) framework. Centers for Disease Control and Prevention; 2022 [cited June 2022]. Available from: https://www.cdc.gov/climateandhealth/BRACE.htm
  74. Climate change and health playbook. American Public Health Association; 2022 [cited June 2022]. Available from: https://www.apha.org/Topics-and-Issues/Climate-Change/JEDI/
  75. Argyris C. On organizational learning. Wiley; 1999.
  76. Dinshaw A. Monitoring and evaluating mainstreamed adaptation to climate change: a synthesis study on climate change in development cooperation. IOB Evaluation. 2018;(426).
  77. Best practices in monitoring and evaluation of urban climate adaptation. USAID; 2019.
  78. Shakir M. The selection of case studies: strategies and their applications to IS implementation case studies. Res Lett Inf Math Sci. 2002;3:191–8.