We suggest that you copy and save the URL from your browser’s toolbar. This will allow you to return to this page of personalised guidance and share the information with colleagues in future.
Before you start
Before starting work:
- Make sure you have a defined purpose for your evaluation.
- Ensure your key stakeholders understand what is to be produced (eg a business case or a report to a funder).
- Draft a logic model for your evaluation. This is like a road map that shows the different parts of your project and how they combine to achieve your goal.
- For help, check out the logic model template from Evaluation Support Scotland and the guidance for filling it in.
- View stage one in the step-by-step guide for more tips on how to get started.
Defining your initiative
Before you choose how you are going to design your evaluation, you first need to understand the core aspects of your initiative. To do this, use the PICO framework to help you answer the following questions:
- What is your Population? (The group of people who receive your service)
- What is your Intervention/initiative and what parts need evaluating (especially with a complex service)?
- What is your control or Comparator group? (If there is no group of people not receiving your initiative, can you at least measure changes in the same group of people before and after they receive your initiative?)
- What are your key Outcomes? (See next section)
- For more info on the PICO framework itself, check out the PICO explainer here. A worked example follows this list.
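To make this concrete, here is a minimal sketch of one way to write down your PICO answers, using a hypothetical falls-prevention service as the example (all names and answers below are illustrative, not drawn from this guidance):

```python
# A minimal, illustrative way to record PICO answers before designing an
# evaluation. The service and every answer below are hypothetical examples.
pico = {
    "Population": "Adults aged 65+ registered with practices in the borough",
    "Intervention": "Weekly falls-prevention exercise classes (evaluate both "
                    "class attendance and the home-exercise component)",
    "Comparator": "Similar patients in a neighbouring borough without the "
                  "classes, or the same group before the classes started",
    "Outcomes": "Number of falls, fear-of-falling score, A&E attendances",
}

for element, answer in pico.items():
    print(f"{element}: {answer}")
```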
Choosing the right design
Producing evidence that shows whether your initiative is effective is crucial to your evaluation’s success. To help you achieve this, you need to select an evaluation design model. This is a practical plan for working out the real-world effectiveness of your initiative. For example:
- The simple ‘before and after’ model sees you collecting information about the people who received your service, both before and after it is delivered, to judge whether their outcomes improved.
- However, on its own this is considered a weak design because there is no control group, so you cannot compare people who received the service with similar people who did not.
- To strengthen it, consider a ‘before and after’ model that uses both quantitative data and qualitative data (see below).
Ideally, you should also try to set up a control group:
- For example, if you are using routinely collected data, look at a similar group of people (eg in a different hospital or borough) that did not use your service.
- This will enable you to see how outcomes changed for the group that received your initiative compared to a group that did not; the sketch after this list illustrates this comparison with made-up numbers.
- Do ensure that both groups are comparable, eg in their demographics.
- Check out stage two in the step-by-step guide for more resources on selecting an evaluation design.
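To illustrate why the control group matters, here is a minimal sketch, with entirely made-up numbers, of a simple ‘difference in differences’ comparison: the change seen in the control group approximates what would have happened anyway, so subtracting it from the change in your group gives a fairer estimate of your initiative’s effect:

```python
# Illustrative before/after comparison with a control group.
# All figures below are invented for the example.
intervention_before = 62.0  # average outcome score before the initiative
intervention_after = 74.0   # same group, after the initiative
control_before = 60.0       # comparable group that did not receive it
control_after = 65.0

change_intervention = intervention_after - intervention_before
change_control = control_after - control_before

# The control group's change reflects what would likely have happened
# anyway; subtracting it isolates the initiative's contribution.
estimated_effect = change_intervention - change_control

print(f"Change in group receiving the initiative: {change_intervention:+.1f}")
print(f"Change in control group: {change_control:+.1f}")
print(f"Estimated effect of the initiative: {estimated_effect:+.1f}")
```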
Data explained: Qualitative data
Qualitative data is non-numerical data that is observed and described. It can be sourced by conducting interviews at the start and end of your initiative.
This helps you ascertain how people felt about your initiative as they progressed through it. For a light-touch evaluation, keep your qualitative data collection methods simple:
- For example, hold a limited number of interviews with selected staff delivering your initiative or with the users who received it.
- If you are interested in sourcing comparable views from your interviewees, use structured or semi-structured interviews (ie keep questions consistent).
- To uncover more complex insights, including any challenges people faced, conduct in-depth interviews instead to get richer feedback.
Data explained: Quantitative data
Quantitative data is numerical data that is measured, counted or compared. It allows you to monitor how key numerical metrics have changed because of your initiative.
If you are interested in collecting feedback from a large number of people, design and conduct a simple survey:
- Before proceeding, consider how you will share the survey with people and the time it will take to both collate and analyse their responses.
- For instance, paper-based surveys require significant time and effort to collate and results are more error-prone. However, some demographics are more comfortable answering paper-based surveys.
- Using online surveys gives you access to tools such as Google Forms or Microsoft Forms that can summarise data findings, saving you time and resources.
- Surveys can feature both quantitative questions (eg ‘How would you rate this service from 1 to 5?’) and qualitative questions (eg ‘What would you improve about this service?’); the sketch after this list shows a simple way to summarise both.
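As a sketch of how such survey responses might be summarised, assuming a simple list of 1–5 ratings plus free-text answers (all responses below are invented):

```python
# Summarising a simple survey: quantitative 1-5 ratings plus answers to a
# qualitative free-text question. All responses below are invented.
from collections import Counter

ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
comments = [
    "Shorter waiting times would help",
    "More evening appointments",
    "Shorter waiting times would help",
]

print(f"Responses: {len(ratings)}")
print(f"Average rating: {sum(ratings) / len(ratings):.1f} out of 5")

# Quick distribution check: how many respondents gave each score.
for score, count in sorted(Counter(ratings).items()):
    print(f"  Rated {score}: {count} respondent(s)")

# Free-text answers still need reading and theming by hand; a frequency
# count only hints at recurring suggestions.
for comment, count in Counter(comments).most_common():
    print(f"  '{comment}': {count} mention(s)")
```

Note that tools such as Google Forms or Microsoft Forms will produce this kind of summary for you; the sketch simply shows what is being calculated.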
Alternative solutions
If custom data collection is not practical, consider analysing routinely collected data instead:
- This is data that your organisation already collects regularly, such as electronic health record data or submissions to national datasets.
- For example, look at how results (such as answers to specific questions featured in an NHS staff survey) have changed since your initiative was rolled out compared to before it was introduced; the sketch after this list shows the basic idea.
- For more information, check out this bite-size guide to patient insight.
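If you go down this route, the basic analysis is to split your routinely collected records at the date your initiative went live and compare the same metric in each period. A minimal sketch, with an invented metric, records and rollout date:

```python
# Comparing a routinely collected metric before and after a rollout date.
# The rollout date, records and values below are invented for illustration.
from datetime import date

rollout = date(2023, 4, 1)  # hypothetical date the initiative went live
records = [
    (date(2023, 1, 15), 3.2),  # (record date, metric value)
    (date(2023, 2, 20), 3.0),
    (date(2023, 5, 10), 3.9),
    (date(2023, 6, 25), 4.1),
]

before = [value for when, value in records if when < rollout]
after = [value for when, value in records if when >= rollout]

print(f"Average before rollout: {sum(before) / len(before):.2f}")
print(f"Average after rollout: {sum(after) / len(after):.2f}")
```

Remember the earlier caveat: without a comparator group, a before/after difference like this may reflect wider trends rather than your initiative.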
Metrics to use
What metrics you should collect will depend on the key questions you are trying to answer and what the output of your evaluation is. When filling in the evaluation design tool questionnaire, select your service type and size to source the right list of metrics:
- Initiative size relates to the budget for your initiative ie the total spend listed in your business case.
- Initiative type relates to the type of initiative you are evaluating. Select the type that best matches your initiative (if none of the available types match, select ‘Other’).
- For more information, view our page about determining what outcomes to measure.
Minimum set of metrics
If you are conducting a light-touch evaluation, keep the number of metrics relatively small, but do capture the following minimum set:
- How many people (staff or patients/members of the local population) you are reaching with your service.
- Satisfaction with the service.
- Both total costs and cost per user (total spend divided by the number of users); see the sketch after this list.
- Basic demographic metrics (user age/gender/ethnicity) to allow you to understand inequalities in your initiative and/or its delivery.
- If you need further assistance, check out the resource library.
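As a sketch of how this minimum set might be computed from simple user records (every record, field name and cost below is hypothetical):

```python
# Computing the minimum set of metrics from simple user records.
# All records, field names and costs below are invented for illustration.
from collections import Counter

users = [
    {"age": 34, "gender": "F", "ethnicity": "White British", "satisfied": True},
    {"age": 71, "gender": "M", "ethnicity": "Asian British", "satisfied": True},
    {"age": 52, "gender": "F", "ethnicity": "Black British", "satisfied": False},
]
total_cost = 12_000.00  # hypothetical total spend from the business case

reach = len(users)  # how many people the service reached
satisfaction = sum(u["satisfied"] for u in users) / reach
cost_per_user = total_cost / reach

print(f"Reach: {reach} users")
print(f"Satisfaction: {satisfaction:.0%}")
print(f"Cost per user: £{cost_per_user:,.2f}")

# A simple demographic breakdown helps surface inequalities in uptake.
print(Counter(u["gender"] for u in users))
```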
Key takeaways
Here’s a quick overview of the different tasks that you’ll need to carry out to ensure evaluation success.
Planning and Design
- Define the purpose of your evaluation and ensure all stakeholders are aligned with your goals.
- Use a logic model to visualise all the elements that make up your initiative and how they contribute to your goals.
- Answer the questions in the PICO framework (Population, Intervention, Comparator, Outcomes) to guide your evaluation design.
Evaluation Designs
- Collect data before and after your service – but do consider the limitations of this design without a control group.
- Consider using both quantitative and qualitative data to create a more comprehensive understanding of your initiative’s effectiveness.
- If possible, establish a control group for a stronger comparison.
Data Collection
- For qualitative data, use interviews to gather insights and feedback.
- For quantitative data, consider surveys for larger-scale data collection.
- Use existing data sources like electronic health records.
Metrics
- Choose metrics based on your evaluation questions and project size.
- Include metrics for reach, satisfaction, costs and demographics.