Evaluation Policy

I. Purpose

The City of Tempe established the Evaluation Program to affirm Tempe’s commitment to using evaluations to improve programs, policies, and services and to provide better outcomes for the community. This policy establishes Tempe’s guidelines and guiding principles for high-quality evaluation, creating a culture in which evaluations are rigorous, results inform decisions, and findings are shared with internal and external partners.

The Tempe Evaluation Policy is established in accordance with the City Council Resolution executed on X, 2022. This policy defines the principles governing Tempe’s Evaluation Program and describes expectations for governance and departmental participation in the Evaluation Program.

II. Applicability

This policy applies to programs, policies, or services that are created, managed, or supervised by any City department, office, or employee on behalf of the City. The policy is not intended to be applied to personnel or used in evaluating the job performance of specific City employees. It is also not intended to apply to the evaluation of vendor proposals, which will continue to be evaluated under their own specific and unique criteria and in accordance with any applicable law.

III. Policy Statement

The City of Tempe conducts meaningful, novel, and actionable evaluations to more fully understand the ways the city uses resources to achieve its strategic priorities and related performance measures. Tempe generates and uses evidence from evaluations to inform decisions about programs, better enabling city departments to achieve performance outcomes, increase efficiency and provide greater accountability to the community. Evaluations support:

  1. Organizational learning. Evaluations that are well designed and implemented can systematically generate knowledge that increases understanding of the effectiveness, relevance, and efficacy of programs. Learning takes place when individuals engage in discussion of evaluation results with a focus on understanding how or why various elements of a program are or are not progressing, in order to look for opportunities to make positive changes or replicate successful programs, and not as an opportunity to place blame or impose negative consequences.

  2. Program and performance improvement. Evaluations identify when and how a department has met its goals, providing leaders with the evidence they need to make decisions about changes to strategies that maintain or increase progress toward those goals.

  3. Resource priority determinations. Evaluations help to inform decisions about resource requests made through the city’s performance led budgeting process, highlighting where resources are needed to achieve performance outcomes. This includes decisions about the future of programs, such as whether to continue as is, enhance/scale up through additional resource requests, or shift resources.

  4. Stakeholder engagement. Evaluations share valuable information internally and externally, from scoping of the project, through sharing of results, to final decisions. Sharing throughout the evaluation lifecycle promotes transparency and accountability for stewardship of public funds and leads to advances in research, policy, and practice in and beyond the department leading the evaluation.

IV. Definitions

  1. Actionable. Provides information that can be used to improve programs.

  2. Counterfactual. An estimate of what would have happened in the absence of the intervention.

  3. Evaluation. A systematic method for collecting, analyzing, and using data to examine the impact, effectiveness, and efficiency of a program. Evaluations require (1) asking a specific question, (2) making a plan to answer the question, (3) collecting data, and (4) using that data to answer the question.

  4. Evaluation Agenda. A one-year plan that summarizes the city’s evaluation needs and identifies priority evaluations.

  5. Evaluation Associates. Staff within city departments whose role is to advocate for evaluations and identify opportunities for evaluation within the city and their department. They have training in evaluation methods and the skills to support evaluation projects and translate evaluation findings into actionable recommendations. They may also collaborate with the Evaluation Team on projects or training.

  6. Evaluation Community of Practice (ECP). A group of individuals responsible for identifying, scoping, designing, and conducting evaluations within the city. They maintain their training in evaluation and share resources with others. Members of the ECP include the Evaluation Team, Evaluation Associates, the Evaluation Steering Committee, and other interested individuals.

  7. Evaluation Lead. Responsible for oversight of the Evaluation Program, including leading the Evaluation Team, chairing the Evaluation Steering Committee, and fostering the Evaluation Community of Practice.

  8. Evaluation Steering Committee (ESC). Focuses on developing the evaluation agenda, prioritizing evaluations for performance led budgeting and identifying strategies to expand city evaluation capabilities and capacity. Members may include Directors, Deputies, or others with knowledge of city priorities and the budget process.

  9. Evaluation Team. City employees who collaborate with Evaluation Associates and departments implementing an evaluation to provide technical and planning support. The team is part of the IT Enterprise GIS & Analytics team or another group identified by the Chief Innovation Officer.

  10. Evidence. Research, statistical data, qualitative data, and evaluation results obtained through rigorous methods.

  11. Experimental Design. Evaluations that offer a means for establishing a causal connection between a policy or program and its effects (e.g., randomized control trials (RCTs)).

  12. External Evaluation Partner. Provider of expertise or additional evaluation capacity through city partnerships.

  13. Impacts. The positive or negative, direct or indirect, intended or unintended, primary or secondary effects produced by an intervention.

  14. Intervention. A new or changed program designed to bring about a desired change.

  15. Meaningful. Answers a question the organization cares about, aligns with the city’s strategic priorities, and provides a clear benefit to residents.

  16. Novel. Provides new information that the organization would not otherwise have.

  17. Performance Led Budgeting. A component of the city’s annual operating and capital budget process in which additional budget appropriation (resource) requests for the next fiscal year are linked to one or more of the strategic priorities and support one or more related performance measure strategies that advance achievement of the performance measure(s).

  18. Program. When related to evaluations, program refers to any program, policy, service, strategy, or process implemented by or for the city.

  19. Quasi-Experimental. An evaluation design that uses comparison groups to draw inferences about causal relationships but does not use randomization to create the groups.

  20. Randomized Control Trial (RCT). A study design that compares the effect of an intervention to what would have happened without the intervention by randomly assigning participants to two groups: one that receives the intervention and one that does not (a minimal illustrative sketch follows these definitions).

  21. Theory of Change. Sets out how and why a desired change is expected to happen because of a new or changed program. It documents why a planned activity will lead to the intended results and explains the underlying reasoning. It serves as a way to check whether the strategies being used align with the goals we are trying to achieve.
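To make the relationships among these terms concrete, the following minimal Python sketch illustrates how random assignment creates a control group that approximates the counterfactual, so that the difference in group means estimates the intervention's impact. The participants and outcome scores are entirely hypothetical; this is purely illustrative and not a prescribed method for city evaluations.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical participants in a program evaluation.
participants = [f"participant_{i}" for i in range(200)]
random.shuffle(participants)  # random assignment removes selection bias

treatment = set(participants[:100])  # group that receives the intervention
control = set(participants[100:])    # group that approximates the counterfactual

# Hypothetical outcome scores observed after the intervention period.
outcomes = {
    p: random.gauss(55 if p in treatment else 50, 10) for p in participants
}

treatment_mean = statistics.mean(outcomes[p] for p in treatment)
control_mean = statistics.mean(outcomes[p] for p in control)

# The difference in group means estimates the intervention's impact
# relative to the counterfactual (what would have happened without it).
print(f"Estimated impact: {treatment_mean - control_mean:.1f} points")
```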

V. Program Principles

Evidence should inform strategic and operational decisions to improve outcomes across the organization. The intent of the Evaluation Program is to integrate the use of evidence into critical decisions about city programs. Rigorous evaluation provides an opportunity to gather impactful evidence to learn and adapt, providing feedback that allows for adjustments in strategies or program refinements.

Evaluations take time and resources to design, implement, and analyze. In order to efficiently use city resources and to generate evidence with high impact potential, evaluations:

  1. Have a clear purpose. Actions and decisions drive evaluation design. Clearly articulating the purpose, including how and when the results will be used, helps select meaningful evaluations. Identifying information needs up front supports designing evaluations that are useful for understanding the impacts and effectiveness of a new or changing program. Without a clear purpose and a commitment to use the results, evaluation findings go unimplemented and city resources are ultimately wasted.

  2. Are fundamentally a learning process. We learn and adapt throughout the evaluation lifecycle, including planning, implementing, and leveraging evaluation results. Evaluations provide insights into the impact and implementation of strategies, and opportunities to adjust strategies when they are not performing as expected.

  3. Are chosen strategically. City Council Strategic Goals and key city initiatives drive decisions about where to focus evaluation efforts. Considerations include performance led budgeting, situations where strategies need to adapt to urgent or dynamic conditions, opportunities for learning, and areas of high risk to the organization.

  4. Match methods to the question. We identify evaluation methods up front to reduce bias and strengthen evaluation design. Methods are matched to the research question instead of using the same approach every time. Evaluation design should clearly explain the methods and their limitations prior to implementation. Decisions about methods should balance evaluation robustness with attainable skills and resources.

  5. Are shared with internal and external stakeholders, from proposed evaluation agendas to results. As we plan the evaluation agenda and evaluation projects, we consider and identify audiences for sharing the agenda and results. We communicate our intention to evaluate early by sharing our Evaluation Agenda, and we proactively share evaluation results and the decisions made based on those results. We will make exceptions on a case-by-case basis with consideration of confidentiality.

  6. Provide actionable results that are used in decision making. We consider ways to integrate the results into decision making during evaluation design. We take the time to consider the results, identify implications for policy and practice and adapt where appropriate. We combine evaluation results with knowledge gained from experience to inform decisions.

VI. Evaluation Program

  1. Core Evaluation Principles

    1. Rigor. All evaluations will use methods that generate high-quality, credible evidence that reflects the research question(s). All evaluations, regardless of type (impact, process, outcome) and methods (qualitative or quantitative), must adhere to widely accepted scientific principles, employing methods that are most appropriate for the evaluation’s objectives and consistent with feasibility and available resources. Tempe evaluations will:

      1. Ensure that any inferences about cause and effect are well founded.

      2. Use measures that accurately and consistently capture intended information.

      3. Have clarity about the populations, settings, or conditions to which the results can be generalized.

      4. Seek to understand and adjust for biases that reflect unconscious attitudes towards people or associated stereotypes (implicit bias).

    2. Relevance. Evaluations must address questions that are high impact, align with Tempe’s strategic priorities and initiatives, and serve the information needs of stakeholders. Results should be actionable and timely and should inform actions such as budgeting, program improvement, process design, policy development, and accountability.

    3. Transparency. Evaluation design and findings should be made public by default and withheld only for legal, ethical, or security reasons. Decisions about an evaluation’s purpose and objectives, whether it will be internal or public, the stakeholders that will have access to details and results, the design and methods, and the timeline and approach for releasing findings should be documented before the evaluation is conducted. Findings should provide enough detail that others can review, interpret, or replicate/reproduce the work.

    4. Independence. Evaluations must be objective. Stakeholders should be engaged, but evaluation scoping, design, implementation, and the interpretation and sharing of results should avoid bias, conflicts of interest, and other sources of partiality.

    5. Equity. Equity should be considered in design, conduct, and interpretation to incorporate the needs of our diverse stakeholders. We will aim to understand the community and its systems so that all stakeholders are better represented in the creation of evaluation questions, designs, and implementation. We will include evaluation questions that are relevant to all stakeholders and leverage both qualitative and quantitative approaches as part of our design. We will seek to reduce bias in data collection approaches and analysis.

    6. Ethical Practices. The city will conduct evaluation activities in an ethical manner, safeguarding the dignity, rights, safety and privacy of participants, stakeholders and affected entities. The city will align with the Federal Data Strategy Data Ethics Framework.

  2. Types of Evaluations

    1. Impact. This type of evaluation assesses the causal impact of a program or policy, or portions of them, on outcomes relative to a counterfactual. In other words, impact evaluations estimate and compare results with and without the program or policy. These evaluations include experimental (i.e., randomized control trial) and quasi-experimental designs. Impact evaluations can help answer questions such as “Does it work?”, “Is this program better than the status quo?”, or “What impact does the intervention make?”

    2. Process. Assesses how a program is implemented relative to its intended theory of change. Often includes information on program content, quality, quantity, or structure of services provided. These evaluations can answer questions such as “Was the program implemented as intended?” or “How is the program operating in practice?”

    3. Outcome. Measures the extent to which a program has achieved its intended outcome(s), assessing effectiveness through outputs and outcomes. Outcome evaluations cannot typically determine causal relationships between an intervention and any observed impacts. Importantly, outcome evaluation is complementary to, but distinct from, performance measurement. Outcome evaluations help answer questions such as “Were the intended outcomes of the program achieved?” Example analysis approaches include pre/post testing and comparison (a minimal sketch follows this list).
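As an illustration of the pre/post approach mentioned above, the following minimal Python sketch compares hypothetical participant scores before and after a program. The scores are entirely made up; a real analysis would follow the pre-specified plan in the evaluation design, and without a comparison group no causal claim can be made.

```python
import statistics

# Hypothetical participant scores before and after a program.
pre_scores = [62, 70, 58, 75, 66, 71, 60, 68]
post_scores = [68, 74, 61, 80, 70, 75, 63, 71]

# Per-participant change and simple summary statistics.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = statistics.mean(changes)
improved = sum(1 for change in changes if change > 0)

print(f"Mean pre/post change: {mean_change:.1f} points")
print(f"Participants who improved: {improved} of {len(changes)}")
# Note: this shows whether outcomes moved, not whether the program
# caused the movement (no counterfactual, so no causal claim).
```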

  3. Institutionalizing the Evaluation Program

    1. As part of the city’s annual budget process, the Performance Led Budgeting process will identify three to five evaluation projects each fiscal year, with the following considerations:

      1. Evaluations will be required for a selection of approved operating budget supplemental requests for the upcoming fiscal year.

      2. Approved supplemental funding requests will be reviewed and ranked by the Evaluation Steering Committee using criteria initially developed by the Municipal Budget Director and approved by the Evaluation Steering Committee (a minimal illustrative sketch of criteria-based ranking follows this list).

      3. The final ranking of approved supplementals that should be considered for evaluation will be provided to the Evaluation Steering Committee, which will select five requests for incorporation into the Evaluation Agenda.

      4. Evaluation design should be completed prior to the start of the fiscal year for each included supplemental funding request. The design should include, at a minimum, the evaluation question, stakeholder list, data collection strategy/approach, analysis methods, and plan for implementing and sharing results.
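To make the ranking step concrete, here is a minimal Python sketch of scoring requests against weighted criteria. The criteria, weights, and requests are entirely hypothetical; the actual criteria are developed by the Municipal Budget Director and approved by the Evaluation Steering Committee.

```python
# Hypothetical weights for illustrative criteria (sum to 1.0 here).
WEIGHTS = {"strategic_alignment": 0.4, "evaluability": 0.3, "community_impact": 0.3}

# Hypothetical 1-5 reviewer scores for each approved supplemental request.
requests = {
    "Request A": {"strategic_alignment": 5, "evaluability": 3, "community_impact": 4},
    "Request B": {"strategic_alignment": 3, "evaluability": 5, "community_impact": 3},
    "Request C": {"strategic_alignment": 4, "evaluability": 4, "community_impact": 5},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine criterion scores into a single ranking score."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank requests from highest to lowest combined score.
ranked = sorted(requests, key=lambda name: weighted_score(requests[name]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {weighted_score(requests[name]):.2f}")
```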

    2. The Evaluation Agenda is a one-year plan that summarizes the city’s evaluation needs and identifies priority evaluations. The Evaluation Steering Committee will leverage the ranked approved supplemental requests from the performance led budgeting process as well as knowledge of current city initiatives and pressing problems. The agenda will be created at the beginning of each fiscal year and reviewed, at a minimum, at the start of the third quarter. The Evaluation Agenda will be set by the Evaluation Steering Committee and shall:

      1. Incorporate results from the community, employee, and business survey

      2. Integrate the final ranking of approved supplementals

      3. Incorporate knowledge of current and upcoming priority initiatives

      4. Be reviewed and revised when there are significant changes in city priorities or new issues arise that would benefit from evaluation.

  4. Administration and Operation

    1. Share evaluation agenda with internal and external stakeholders. The evaluation agenda will be shared internally and externally for transparency.

      1. Internally, this encourages transparency and agreement related to the purpose and use of evaluations for the upcoming year.

      2. Externally, this provides an opportunity to be transparent about and accountable for key evaluation activities including sharing and implementation of results.

    2. Design evaluations ahead of time.

      1. An evaluation’s design and methods should be documented in sufficient detail before the evaluation is conducted to achieve rigor, transparency, and credibility. This reduces the risk of adopting inappropriate methods or of selective reporting of findings.

    3. Perform analysis with data obtained from the evaluation.

      1. Evaluation designs will include a pre-specified plan for how the data will be analyzed after the evaluation is completed. Evaluations are chosen strategically, so the importance of the results and intended use of the data in decisions should be clear prior to the evaluation being conducted.

    4. Use the evaluation results for decision making.

      1. Ways to integrate evaluation results into decision making should be identified prior to the start of the evaluation. Adjustments may need to be made based on new information gained through the evaluation process, but the ultimate aim is to combine evaluation results with knowledge gained from experience to inform decisions.

    5. Share evaluation results internally and externally.

      1. We will share the evaluation results with internal and external stakeholders, including explanations of the research question, purpose, results, and the decisions made using those results.

      2. We will follow applicable privacy and security standards to determine whether there are restrictions on sharing evaluations outside of the department.

  5. Roles and Responsibilities

    1. Evaluation Lead

      1. Responsible for the oversight of the Evaluation Program, including leading the Evaluation Team.

      2. Chairs the Evaluation Steering Committee, including convening the quarterly meetings and ensuring that the committee completes the Evaluation Agenda annually.

      3. Fosters the Evaluation Community of Practice, including identifying Evaluation Associates and supporting training efforts for those working on evaluations in the city.

    2. Municipal Budget Director

      1. Chairs the performance led budgeting review of the upcoming fiscal year supplementals and identifies those that align with current city initiatives.

      2. Develops initial criteria for identifying requests that will be required to have an evaluation.

      3. Accountable for the identification and completion of required evaluations for supplementals identified by the sub-committee.

    3. Department Directors

      1. Conduct evaluations to examine performance of their programs, projects, and processes at a rate that aligns with their work, breadth of programs, and resources.

      2. Each department should strive to complete at least one meaningful evaluation each fiscal year. Evaluations of pilot programs should be considered prior to fully implementing the program.

      3. Departments may conduct internal, external, and collaborative evaluations. If conducting internal evaluations without collaboration with outside experts, department leadership should ensure that staff have relevant evaluation training or experience, or consult with the Evaluation Associates in their department, the Evaluation Team, or the Community of Practice.

      4. Performance measure analysis is not considered evaluation for the purposes of this policy, but departments are encouraged to leverage evaluations to measure the extent to which a program is achieving its intended results.

    4. Evaluation Community of Practice (ECP)

      1. A group of individuals responsible for identifying, scoping, designing, and conducting evaluations within the city.

      2. Maintains training in evaluation and shares resources with others across departments.

      3. Members of the ECP include the Evaluation Team, Evaluation Associates, the Evaluation Steering Committee, and other interested individuals.

      4. Engages with external evaluation experts, including participating in evaluations run by external partners.

      5. Individuals may join the ECP by participating in evaluation projects or evaluation training opportunities.

    5. Evaluation Team

      1. Collaborates with Evaluation Associates and departments implementing an evaluation to provide technical and planning support.

      2. Members of the Community of Practice.

      3. Supports identification of training opportunities in evaluation for city staff and ECP members.

      4. Supports or leads evaluations for departments without Evaluation Associates and for cross-departmental efforts.

    6. Evaluation Associates

      1. Advocate for evaluations and identify opportunities for evaluation within the city and their department.

      2. Have training in evaluation methods and the skills to support evaluation projects and translate evaluation findings into actionable recommendations.

      3. Support or run departmental evaluations.

      4. Collaborate with the Evaluation Team on projects or training.

      5. Members of the Community of Practice.

    7. Evaluation Steering Committee (ESC)

      1. Develops the Evaluation Agenda annually and reviews it at least once per year.

      2. Prioritizes evaluations for performance led budgeting.

      3. Identifies strategies to expand city evaluation capabilities and capacity.
