We suggest that you copy and save the URL from your browser’s toolbar. This will allow you to return to this page of personalised guidance and share the information with colleagues in future.
Before you start
Before starting work:
- Make sure you have a defined purpose for your evaluation.
- Ensure your key stakeholders understand what is to be produced (e.g. a business case or a report to a funder).
- Draft a logic model for your evaluation. This is like a road map for your initiative that shows its different parts and how they combine to achieve your goal.
- For help, check out the logic model template from Evaluation Support Scotland and the guidance for filling it in.
- View stage one of our step-by-step guide for more tips on how to get started.
Once you have completed these steps, you can get advice from an external expert, who can potentially commission additional support. Head here for more details.
Defining your service
Your expert will help you determine what design is most appropriate. However, before you meet them, start by answering some basic questions using the PICO framework:
- What is your Population? (The group of people who receive your initiative)
- What is your Intervention/initiative, and what parts of it need evaluating (especially with a complex initiative)?
- What is your control or Comparator group? (If there is no group of people not receiving your initiative, can you at least measure changes in the same group of people before and after they receive your initiative?)
- What are your key Outcomes? (See next section)
- For more info on the PICO framework itself, check out the PICO explainer here.
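To make these questions concrete, here is an entirely hypothetical worked example of PICO answers for an imaginary initiative, sketched in Python only to show the level of detail to aim for; every value below is an assumption, not a recommendation.

```python
# Entirely hypothetical PICO answers for an imaginary falls-prevention initiative,
# included only to illustrate the level of detail to aim for.
pico = {
    "Population": "Adults aged 65+ referred to a community falls service",
    "Intervention": "A 12-week group exercise and home-safety programme",
    "Comparator": "Similar patients in a neighbouring area without the programme",
    "Outcomes": "Falls per 1,000 patients, A&E attendances, patient confidence",
}
for element, answer in pico.items():
    print(f"{element}: {answer}")
```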
Your expert may suggest more advanced evaluation designs that they have expertise in, such as:
Difference-in-Differences (DiD), Regression-based
This compares the changes in outcomes over time between a group that received your initiative and a similar group that did not, which helps to account for external factors.
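As a rough illustration only, the sketch below shows how a basic difference-in-differences estimate can be obtained with a regression in Python (statsmodels). The file and column names are hypothetical, and your expert would tailor the model to your data.

```python
# Minimal difference-in-differences sketch (hypothetical file and column names).
# Assumes one row per person or site per period, with:
#   outcome - the measured outcome
#   treated - 1 if in the group receiving the initiative, else 0
#   post    - 1 if measured after the initiative started, else 0
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluation_data.csv")  # hypothetical data file

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
print(model.summary())
```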
Interrupted Time Series
This examines the trends in outcomes over time, identifying any interruptions or changes that coincide with your initiative’s implementation.
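Again purely as an illustration, a common way to analyse an interrupted time series is a segmented regression; the sketch below assumes monthly outcome data and hypothetical column names.

```python
# Minimal interrupted time series (segmented regression) sketch.
# Assumes one row per month, with:
#   outcome    - the outcome measured that month
#   time       - months elapsed since the start of the series (0, 1, 2, ...)
#   after      - 1 for months after the initiative started, else 0
#   time_after - months since the initiative started (0 for earlier months)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_outcomes.csv")  # hypothetical data file

# 'after' captures an immediate level change; 'time_after' a change in trend.
model = smf.ols("outcome ~ time + after + time_after", data=df).fit()
print(model.summary())
```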
Cost-Benefit or Cost-Utility Analysis
This helps you assess the financial and practical benefits of the initiative relative to its costs, providing a comprehensive view of its economic impact.
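To show the basic arithmetic only, here is a sketch with made-up figures; a real cost-benefit or cost-utility analysis would also consider discounting, uncertainty and which benefits can credibly be monetised.

```python
# Illustrative cost-benefit arithmetic with made-up figures.
total_costs = 50_000         # e.g. staff time, equipment, training
monetised_benefits = 80_000  # e.g. estimated value of avoided appointments

net_benefit = monetised_benefits - total_costs
benefit_cost_ratio = monetised_benefits / total_costs

print(f"Net benefit: £{net_benefit:,}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```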
Data explained: Qualitative data
Qualitative data is non-numerical data that is observed and described. It can be collected by conducting interviews at the start and end of your initiative:
- Your expert may recommend using more complex qualitative methods including deliberation, which is designed to involve the public or patients in a meaningful way.
- Your expert may also wish to use more complex methods for analysing qualitative results such as content analysis or thematic analysis.
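As a very loose illustration of content analysis, the sketch below counts how often pre-defined codes appear across interview transcripts. The folder, file names and codes are hypothetical, and real qualitative analysis involves far more interpretive work than simple counting.

```python
# Very simplified content-analysis sketch: count pre-defined codes in transcripts.
# Folder, file names and codes are hypothetical.
from collections import Counter
from pathlib import Path

codes = ["waiting time", "staff attitude", "accessibility", "cost"]
counts = Counter()

for path in Path("transcripts").glob("*.txt"):
    text = path.read_text(encoding="utf-8").lower()
    for code in codes:
        counts[code] += text.count(code)

for code, n in counts.most_common():
    print(f"{code}: {n} mentions")
```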
Data explained: Quantitative data
Discuss your quantitative methods with the expert to ensure any data collection is robust and well-designed. Here are some approaches the expert may recommend:
Custom surveys with power calculations
This helps you to collect quantitative data on key outcomes, using power calculations to determine the sample size needed to detect meaningful differences or changes. Use these online calculation tools to get started (a minimal example is sketched after this list).
- Surveys should ideally include both quantitative questions (for example, rating scales) and qualitative open-text questions (such as suggestions for improvement)
- To gather comprehensive feedback and create detailed summary metrics, design a survey to be conducted at multiple time points. This approach allows you to track changes over time.
- When designing your survey, also consider how it will be distributed (either paper or online) and the time required for data analysis.
- Online tools like Google Forms or Microsoft Forms offer built-in data summarisation tools that can save time and resources.
- Head here for more details.
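Here is the minimal power-calculation sketch referred to above, assuming you plan to compare mean scores between two groups; the effect size is an assumption you would justify from prior data or published literature.

```python
# Minimal power calculation sketch: sample size per group for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed standardised difference (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # probability of detecting the effect if it exists
)
print(f"Approximate sample size per group: {n_per_group:.0f}")
```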
Virtual control groups
This method helps estimate what would have happened if your initiative hadn’t occurred, providing you with a ‘what-if’ scenario for comparison – even without a real-world control group.
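One simple way to build such a modelled comparator, sketched below, is to fit a model to data from before your initiative and use it to predict what would have happened afterwards without it. This is only one illustrative approach, and the file and column names are hypothetical.

```python
# Sketch of a modelled ('virtual') comparator: fit on pre-initiative data,
# predict the post-initiative period, and compare with what was observed.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("service_data.csv")
pre = df[df["post"] == 0]    # before the initiative
post = df[df["post"] == 1]   # after the initiative

# Model the outcome from routinely available predictors in the pre period.
model = smf.ols("outcome ~ time + age_band + deprivation_quintile", data=pre).fit()

predicted = model.predict(post)           # expected outcomes without the initiative
difference = post["outcome"] - predicted  # gap cautiously attributed to the initiative
print(f"Mean observed minus expected outcome: {difference.mean():.2f}")
```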
Predictive modelling
This uses statistical techniques to forecast future outcomes based on current and historical data, helping to identify potential impacts and areas for improvement and strengthening your overall evaluation.
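As a minimal sketch, and assuming monthly activity data with hypothetical column names, a simple linear trend can be extrapolated as shown below; in practice your expert may use richer models.

```python
# Minimal predictive-modelling sketch: extrapolate a linear trend in monthly activity.
# Data file and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("monthly_activity.csv")   # columns: month_index, referrals
X = df[["month_index"]].to_numpy()
y = df["referrals"].to_numpy()

model = LinearRegression().fit(X, y)

# Forecast the next six months by extending the month index.
future_months = np.arange(X.max() + 1, X.max() + 7).reshape(-1, 1)
forecast = model.predict(future_months)
print("Forecast referrals for the next 6 months:", np.round(forecast, 1))
```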
Metrics to use
Which metrics you collect will depend on the key questions you are trying to answer and on the output your evaluation needs to produce:
- When using the evaluation design tool, select your initiative type and size to source the right list of metrics.
- Initiative size relates to the budget for your initiative, ie the total spend listed in your business case.
- Initiative type relates to the type of initiative you are evaluating. Select the type that best matches your initiative (if yours has no match to any available types, select ‘Other’).
Make sure to capture the following minimum set of metrics:
- How many people (staff or patients/members of the local population) you are reaching with your initiative.
- Satisfaction with the initiative.
- Both total costs and costs per user.
- A basic set of demographic metrics (user age/gender/ethnicity) to understand inequalities in the delivery of your initiative.
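As a rough illustration of how this minimum set could be pulled together from user-level records, here is a sketch with a hypothetical data file, column names and cost figure.

```python
# Sketch of the minimum metric set from hypothetical user-level records.
import pandas as pd

users = pd.read_csv("initiative_users.csv")  # one row per user reached (hypothetical)
total_costs = 50_000                         # example figure from a business case

reach = len(users)
cost_per_user = total_costs / reach
satisfaction = users["satisfaction_score"].mean()  # e.g. a 1-5 rating question
demographics = users.groupby(["age_band", "gender", "ethnicity"]).size()

print(f"Reach: {reach} users")
print(f"Cost per user: £{cost_per_user:.2f}")
print(f"Mean satisfaction: {satisfaction:.1f}")
print(demographics)
```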
For more information, view our page on determining what you want to measure.
Key takeaways
Here’s a quick overview of the different tasks that you’ll need to carry out to ensure evaluation success:
Planning and design
- Define the purpose of your evaluation and ensure all stakeholders are aligned with your goals.
- Use a logic model to visualise all the elements that make up your initiative and how they contribute to your goals.
- Answer the questions in the PICO framework (Population, Intervention, Control, Outcomes) to guide your evaluation design.
- Once this info has been gathered, book a session with your expert for personalised advice on the complexity of your potential evaluation design and more.
Evaluation designs
- Select a suitable design (eg Difference-in-Differences, Interrupted Time Series).
- Use both quantitative and qualitative data to create a more comprehensive understanding of your initiative’s effectiveness.
Data collection
- For qualitative data, consider more complex methods such as deliberation.
- For quantitative data, consider surveys for larger-scale data collection.
- Also consider using virtual control groups and predictive modelling.
Metrics
- Choose metrics based on your evaluation questions and project size.
- Include metrics for reach, satisfaction, costs and demographics.