Evaluating Government Communication Activity - Standards and Guidance
Contents

Introduction
PROOF: five guiding principles for evaluation
The big IDIA: the four-stage evaluation process
Stage 1: Identify
Stage 2: Develop
Stage 3: Implement
Stage 4: Analyse and report
Conclusions
Appendix: Recommended metrics
Introduction

As government communicators, we're all aware of the need to make every piece of work that we produce as effective and efficient as possible. To do this, we need to understand what's already working well and where there's room for improvement. This in turn requires us to evaluate our work and apply the learning from that evaluation to future activity.

We're also increasingly required to demonstrate how we're applying evaluation in our day-to-day jobs, through the plans and reports that we submit to the Efficiency and Reform Group (ERG) for activities with a spend of £100,000 or more, and the information that we supply to feed into the annual communication plan. The revised Government Communication Network (GCN) Core Skills for Government Communicators will set out clear evaluation standards that we should all follow, based on our grade and the discipline that we work in.

To help us evaluate our activity effectively, in line with expected standards, we need clarity on what good evaluation practice looks like. This guide sets out an approach to evaluation that should be followed as a minimum for all government communication activity, regardless of size, discipline or budget. The approach is pragmatic and focuses on helping you to produce the best possible evaluation given the scope of your activity and the time, resource and budget that you have available. By following the guide, you can be confident that you will produce an evaluation that meets the required standards for your activity and for your role.

If you're new to evaluation, use the guide to help you get started, recognising that it will take time to build up your approach and learn what's working. Remember, a partial evaluation is almost always better than no evaluation at all. If you're already evaluating your activity effectively, ensure that you're following the standards required of you, making modifications as necessary.
Think about how you can share what you've learned with others to help them evaluate more effectively.

By evaluating the activity that we carry out, we will be able to improve the effectiveness and efficiency of our work over time. We will also be able to demonstrate the contribution that well-planned and well-executed communication activity makes to government overall, and hence justify further investment in our work.
PROOF: five guiding principles for evaluation

Whatever the size or scope of your communication activity, following these guiding principles will help to ensure that your evaluation is as effective as possible.

P – Pragmatic: best available within budget, not best ever
A pre-planned, but partial, evaluation is better than no evaluation at all. Be transparent: acknowledge the gaps in your evaluation and the implications of these gaps. Even if you are not able to fully quantify the effect of your communication activity, you will still be able to draw valuable learning from the evidence that you have obtained.

R – Realistic: prove what you can, acknowledge what you can't
Don't worry if you can only collect a small amount of data in the short term. By establishing a robust evaluation framework that is linked to a clear set of communication objectives, you will be able to interpret and analyse whatever information you gather. Over time, you will be able to build on this knowledge, increasing the amount of data that you collect from each subsequent activity.

O – Objective: approach your evaluation with an open mind
Be honest and constructive about what was achieved, so that we can all learn for the future. Learn from your successes and from things that didn't work as well as you'd hoped. Use this to refine your future strategy.

O – Open: record and share as much as possible
Share your learning as widely as possible so that colleagues can also benefit from your experience. Work with GCN to develop a detailed case study.

F – Fully integrated: integrate evaluation into activity planning and delivery
Plan ahead. Start thinking about how to evaluate your activity as soon as possible, ideally well before it begins. This will help you to put the right mechanisms in place for subsequent measurement and data collection.
Retrospective evaluation is often less effective because the right data may not have been collected or objectives may not be measurable.
The big IDIA: the four-stage evaluation process

You can evaluate all kinds of communication activity, including press and media management, marketing and internal communication. Follow the four-stage process set out in Figure 1 below and you will carry out an effective evaluation in line with the five guiding principles for evaluation.

Figure 1: The four-stage evaluation process
1. Identify the scope of your project
2. Develop your evaluation plan
3. Implement – source data to measure performance
4. Analyse and report performance against the plan
Stage 1: Identify the scope of your project

Checklist
Task: to define what you need to evaluate by asking:
• What activity am I evaluating?
• What do I already know?
• What is my evaluation expected to achieve?
Output: summary of your proposed evaluation approach

What activity are you evaluating?
Begin by establishing exactly what you are going to evaluate. What activities do you need to include? Are these activities part of a wider communication strategy?

Identify the time period over which you will evaluate the activity. For a one-off piece of activity with clear start and end dates, this is generally straightforward. For ongoing activity, you need to identify and agree appropriate time periods for evaluation, i.e. how often you will assess performance against objectives.

Examples of types of activity
Press: For ongoing reputation management work, it's often most effective to track the effect of your work over time. Identify the key messages that you want to track and monitor coverage on a regular basis, providing monthly or quarterly updates rather than reporting on individual activities.
Marketing: If the activity that you are evaluating runs across a range of channels (which might include, for example, paid-for activity, partnerships, leaflets, a website or social media), check that you've included all of them in your plan.
Internal communication: If you are evaluating the effect of a change management communication programme, ensure that you include all the elements of the programme in your evaluation plan (e.g. briefings to senior staff, communication via email and the intranet, events, training etc).
What do you already know?
Review similar previous activity that your team or others have run to see what you can learn. This will give you a benchmark to measure performance against and may help to refine your objectives and your approach to the activity that you are about to run.

Useful tip – gathering evidence for ERG exemption requests
If your activity is subject to ERG approval, use your previous evaluation results as evidence for why you believe it will meet its objectives in sections 3 and 10 of the exemption request form.

What is your evaluation expected to achieve?
Ask the following questions to help identify what is expected from your evaluation:
• What are the key questions that your evaluation report needs to answer?
• What level of detail is expected from your report and what format does it need to be in?
• Is any budget available for your evaluation?
• Who will do the work? This might include you and others on your team; research, analysis and evaluation specialists from within your hub or the Shared Communications Service; an external agency; or a combination of all of these.

Useful tip – ERG evaluation standards
If your activity is subject to ERG approval, there is a standard format that you must use for your evaluation report. A report template can be found here. Ensure that your evaluation plans are designed with this in mind.

Summarising your approach to evaluation
Before beginning work, set out your proposed approach to evaluation, basing this on the information that you have gathered so far. Depending on the scope of the activity that you're evaluating and the person for whom you are carrying it out, this may be a detailed proposal to stakeholders in your department, an email to your manager or a note to yourself. Whatever format you use, you can find a useful template here that sets out the questions you should consider.
Useful tip – managing expectations
Sometimes, there may be a gap between the time, resource and budget that you have available and others' expectations of what your evaluation can achieve. Discuss whether it's better to increase the available resource, budget or time, or to lower expectations, and decide this early in the process.
Stage 2: Develop your evaluation plan

Checklist
Task: to define how you'll measure success by:
• Setting out activity objectives
• Defining target audiences
• Mapping out how your activity will work
• Setting performance metrics
• Agreeing metrics and targets
Output: draft evaluation plan

Setting out activity objectives
Clear and measurable communication objectives are the cornerstone of any evaluation plan. They set out what your activity aims to achieve and the overall goals against which you should judge success. Your objectives should already be identified in the communication strategy for the activity you are evaluating. It may be useful to summarise them as shown in Figure 2, clearly demonstrating how each sub-objective links to the overall communication and departmental objectives.
• Start by identifying the overall departmental objective and the issue it is designed to address. Your activity will be part of a set of interventions which link back to this objective.
• Next, identify the communication objective. This is the overall role that communication is expected to play in achieving the policy objective.
• Different channels or activities may have distinct roles to play in achieving the overall communication objective. Each of these should be set out as a separate communication sub-objective. You may only be evaluating the effectiveness of one communication sub-objective, but it is important to understand how this is expected to contribute towards the overall communication objective.

Ensure that all communication objectives are clear and set out what each activity was put in place to achieve, together with a measure of success. Always consider whether or not you will be able to prove a communication objective has been met. If not, it will need to be revised. You may find it useful to map out your objectives as shown in Figure 2.
Figure 2: Hierarchy of objectives

Departmental objective: Put in place to address a specific issue. Includes policy development, policy delivery and reputation management.
Communication objective: The role that communication will contribute to achieving the departmental objective.
Communication sub-objectives: The role that individual activities or channels will play in meeting the overall communication objective.

Sample objectives might include:

Press
Departmental objective: To ensure that compliance with a new tax regulation is above 80%.
Communication objective: To ensure that the majority of the general public understands the reasons why complying with the regulation benefits the economy.
Press-specific sub-objective: To ensure that the public is given a fair and balanced view of the policy, via the media.

Marketing
Departmental objective: To get 10,000 more people working as community service volunteers in your area.
Communication objective: To get 40,000 people in the area to register as potential volunteers on your community website.
Sub-objective 1: To increase the proportion of the public who recognise the value of volunteering from 20% to 40%.
Sub-objective 2: To get 80,000 people to visit the website and find out more about how to volunteer.
Sub-objective 3: To secure 40,000 incremental registrations.

Internal communication
Departmental objective: To ensure that unauthorised staff absences fall by 50%.
Communication objective: To ensure that all staff are able to follow the correct processes for reporting absences from work.
Sub-objective 1: To ensure that all staff recognise that there is a policy for reporting absences from work and that they must follow it.
Sub-objective 2: To ensure that all staff understand how to access the guidance on how to report absences.
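The sample marketing objectives above imply a simple funnel from website visits through registrations to active volunteers. As an illustration only (the figures come from the sample objectives; the conversion rates are derived from them, not stated in the guidance), the arithmetic can be sketched as:

```python
# Funnel implied by the sample marketing objectives above:
# 80,000 website visits -> 40,000 registrations -> 10,000 new volunteers.
# Figures are taken from the sample objectives; conversion rates are derived.

visits = 80_000         # sub-objective 2
registrations = 40_000  # sub-objective 3
volunteers = 10_000     # departmental objective

visit_to_registration = registrations / visits          # 0.5
registration_to_volunteer = volunteers / registrations  # 0.25

print(f"{visit_to_registration:.0%} of visitors register")          # 50%
print(f"{registration_to_volunteer:.0%} of registrants volunteer")  # 25%
```

Mapping the implied conversion rates in this way makes it easier to see which level of the hierarchy underperformed if the final outcome is missed.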
Defining target audiences

All communication activity has an end audience – the people at whom it is ultimately targeted. But an activity may also have an intermediary audience – a group of people targeted so that they will deliver the message to the end audience on your behalf. Typical intermediaries include the media; stakeholders, such as non-governmental organisations and charities involved in delivering a policy objective; and commercial partners working with you to deliver a piece of marketing activity.

If your activity includes an intermediary audience, you should evaluate:
• how effectively the intermediary was engaged by the activity
• how effectively the intermediary communicated the message to the end audience.

Audiences might include:

Press – where you are using a media engagement or PR campaign to try to raise volunteering levels among the public overall:
Intermediary (the press): How did the media react to the activity targeted at them? Did they feature your messages? What volume and quality of coverage did you get for the story?
End audience (the general public): How many people volunteered as a result?

Marketing – where you are trying to engage a partner to run events promoting volunteering on your behalf:
Intermediary (partner): How effectively did you engage the partner, and how many events did they run as a result?
End audience (the general public): How many people volunteered as a result?

Internal communication – where you are training staff to communicate better with members of the public so that customer satisfaction improves:
Intermediary (staff): How effectively did you train them? Did they put their skills into practice in their interactions with the public?
End audience (the general public): Did they notice that they received a better service from trained staff? Were they more satisfied?
Useful tip – intermediary audiences
If your activity includes an intermediary audience, make sure you include performance metrics that enable you to: (1) evaluate the effect of your activity on the intermediary; and (2) evaluate the effect of their activity on your end audience.
Mapping out how your activity will work

Spend some time thinking about how your activity is expected to achieve its objectives. If it is successful, what messages will the target audience(s) see, what will they think or feel, and what will they do? Mapping out the steps to success will help you identify the right performance metrics to evaluate your activity. Draw on any behavioural insight[1] modelling or customer journey work that has already been done.

Setting performance metrics

Having identified the objectives and target audiences for your activity and mapped out how you expect it to work, you need to build a set of performance metrics. These are the measures you will use to assess the activity's performance against its objectives and to identify which elements of the activity were most and least successful. Make sure that your evaluation plan includes a range of performance metrics from the following five categories:

Figure 3: The five types of performance metrics for evaluation
1. Inputs – the activity carried out
2. Outputs – how many people had the opportunity to see or hear your activity?
3. Out-takes – what was its immediate effect on them?
4. Intermediate outcomes – did they do anything as a result of your activity?
5. Outcomes – did you achieve your overall objective?

Using these five categories as a guide will help you to pick the right performance metrics for your activity. The categories can be applied equally to the full range of press, marketing and internal communication activity.

[1] For more on behavioural insight, see MINDSPACE: Influencing behaviour through public policy (Institute for Government/Cabinet Office – www.instituteforgovernment.org.uk/our-work/better-policy-making/mindspace-behavioural-economics)
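One lightweight way to keep the five categories front of mind is to structure the draft evaluation plan around them. A minimal sketch (the metric names are illustrative examples, not a prescribed set):

```python
# An illustrative evaluation plan keyed by the five metric categories.
# Metric names are examples only, not a prescribed or complete set.

evaluation_plan = {
    "inputs": ["press releases issued", "total cost (external + staff time)"],
    "outputs": ["reach of media coverage", "web page impressions"],
    "out-takes": ["% aware of key message", "% understanding key message"],
    "intermediate outcomes": ["helpline calls", "website registrations"],
    "outcomes": ["incremental volunteers recruited"],
}

# Every metric should sit under exactly one of the five categories.
for category, metrics in evaluation_plan.items():
    print(f"{category}: {', '.join(metrics)}")
```

Laying the plan out this way makes it obvious at a glance when a category has been left empty and the evaluation will have a gap.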
Inputs

Inputs include details of the actual activity that has been undertaken, including the channels that you used to communicate. The channels to include will have been identified at Stage 1.

Examples of input metrics
Press: Number of press releases sent out or engagement work carried out
Marketing: Paid-for media plan; website or digital space created; number of partners contacted, types of message shared or requests made of partners (intermediary audience)
Internal communication: Number of staff events organised; number of briefings or training sessions organised; web content created and put on to the intranet

Include the costs of carrying out the activity in your input metrics. Include all external and internal costs and time spent (including staff time). When comparing results for more than one piece of activity, use a consistent methodology to record the costs and time spent against each one. This will be essential for calculating return on marketing investment.[2]

Useful tip – choosing the right performance metrics
At this stage, don't worry about whether you can get data for the performance metrics you choose. Pick those that you would need for an effective evaluation. Stage 3 looks at how to secure data and deal with gaps.

[2] For more information refer to Evaluating the financial impact of public sector marketing communication: An Introduction to Payback, Return on Marketing Investment (ROMI) and Cost per Result (https://gcn.civilservice.gov.uk/wp-content/uploads/2012/12/intro-to-payback-romi-and-cpr.pdf)
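The payback guidance referenced in the footnote defines measures such as cost per result and return on marketing investment (ROMI). As a rough sketch of the underlying arithmetic only (the spend, result and per-registration value figures are hypothetical, not drawn from the guidance):

```python
# Hypothetical figures illustrating cost-per-result and ROMI arithmetic.
# This is a sketch of the idea, not the official payback methodology.

def cost_per_result(total_cost, results):
    """Total spend (external costs plus staff time) per result achieved."""
    return total_cost / results

def romi(incremental_value, total_cost):
    """Return on marketing investment: net value generated per pound spent."""
    return (incremental_value - total_cost) / total_cost

spend = 250_000              # all external and internal costs, consistently recorded
registrations = 40_000       # incremental registrations attributed to the activity
value_per_registration = 10  # hypothetical value placed on each registration

print(cost_per_result(spend, registrations))                # 6.25 per registration
print(romi(registrations * value_per_registration, spend))  # 0.6
```

Recording costs on a consistent basis, as the guidance recommends, is what makes figures like these comparable across activities.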
Outputs

Output metrics measure the number of people who had the opportunity to see or hear your activity, regardless of whether they recall or recognise it. Try to include:
• Reach – the total number of people or organisations in your target audience who were exposed to your activity.
• Frequency – the number of times they saw or heard the activity.

Examples of output metrics
Press: Number of pieces of coverage achieved; frequency of exposure to coverage by the end audience
Marketing: Proportion of the target audience reached by media activity; number of impressions (one impression = one page view) on the web page; number of stakeholders you contacted and number of contacts made
Internal communication: Number of staff attending events and training sessions; number of impressions on the intranet

Useful tip – activity mapping
Use the activity map that you created earlier to help you identify the right out-take, intermediate outcome and outcome performance metrics for your activity.

Out-takes

Out-take metrics look at the impact that the activity had on your target audience's awareness, understanding and attitudes. Think about what you wanted people to recall, think or feel about your activity and include performance metrics that allow you to measure this.

Useful tip – other metrics
Performance metrics for out-takes, intermediate outcomes and final outcomes cannot be standardised in the same way as those for inputs and outputs. They will need to be tailored to reflect how you expect your activity to work and what it is trying to achieve.
Examples of out-take metrics
Recall: How many people are aware of your activity or the message(s) that it is promoting?
Think: How many people understand the key messages that your activity is trying to get across?
Feel: What effect has your activity had on people's attitudes? Do they intend to behave differently as a result of your communication?

Intermediate outcomes

Intermediate outcome metrics capture any action taken by the target audience as a result of your activity which may lead to the eventual end behaviour. Think about including performance metrics that allow you to measure the following:

Examples of intermediate outcome metrics
Talk: How many people discussed the activity or its message with peers, friends and family? How many partners or stakeholders discussed it with their work colleagues?
Direct response: How many people responded to or otherwise interacted directly with you as a result of the activity? This could include visiting a website, attending training, ringing a phone line or having a face-to-face discussion.
Indirect response: How many people responded to or otherwise interacted with third parties as a result of your activity? This could include people interacting with stakeholders or partners, or with other local and national services.
Other actions: How many people took (or claim to have taken) any other action as a result of your activity? For example, in a stop-smoking campaign this might include people buying books, patches or other similar products.

Refer back to your activity map and make sure that you include measures to cover the full range of actions that people could have taken.
Outcomes

Outcome metrics look at the effect that an activity has had on the overall communication and policy objectives that it was put in place to address. Include metrics that enable you to measure whether your communication objective was met and the effect your activity had on the wider policy objective. This could relate to changing behaviour, adopting a service, improving reputation, increasing understanding and awareness, or increasing participation.

The appendix to this document provides a list of recommended metrics for each category. You can use this to help identify the appropriate metrics for your evaluation plan. This will enable you to compare your activity with other cross-hub work and will also ensure that your plan meets the standards required by ERG where applicable.

Agreeing KPIs and targets

You may decide to set a small number of key performance indicators (KPIs) based on the performance metrics that you have chosen, or KPIs may already have been set by your team, department or hub. KPIs are measures of success that can help you track how well an activity is progressing towards its end objective or contributing to a broader communication strategy.

KPIs are particularly useful where your activity won't achieve its end objective for some time. They will enable you to track how effectively the activity is progressing towards this end objective and may provide timely information for making changes to the existing communication plan, if necessary. KPIs are easier to set when the activity has run a number of times before, as you will have a better feel for which metrics most accurately predict how it will ultimately perform.

KPIs may be single performance metrics or may combine several metrics:
• You may choose to bring together responses by phone, web or face-to-face to give a total number of responses.
• Where people's overall satisfaction is driven by four or five different factors, you could bring these together in one composite satisfaction measure.

How to set KPIs
• You don't need KPIs to evaluate effectively – only set them if they are useful in helping you understand whether an activity is progressing as expected or contributing effectively to overall goals.
• Ensure that your KPIs are measurable.
• Select no more than five KPIs for your end audience (and the same for any intermediary audiences).
• Set targets for each KPI and specify the time-frame in which you expect to achieve them.
• Make targets as meaningful and as realistic as possible, drawing on previous results where applicable. The less historical data you have available, the broader your targets should be.
• As far as possible, benchmark targets against other activity carried out by you and others within your team, department or hub. This will give you a broader context for what success looks like.

Creating the evaluation plan

Your evaluation plan should bring together:
• the objectives and target audiences that you will evaluate performance against
• the performance metrics (and KPIs and targets, if you're using them) that you will use in your evaluation.

Your team or department may already have a standard template for evaluation plans. If not, you may find these templates for activities with and without an intermediary audience useful.

Useful tip – multi-channel activity
For more complex activities, include a separate set of performance metrics for each activity/channel and audience.
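Where a KPI combines several metrics, as in the composite satisfaction measure mentioned earlier, a weighted average is one common way to build it. A minimal sketch, with illustrative factor names, scores and weights (none of these are prescribed by this guidance):

```python
# A composite satisfaction KPI built as a weighted average of driver scores.
# Factor names, 0-100 scores and weights are illustrative assumptions.

def composite_kpi(scores, weights):
    """Combine several factor scores into a single weighted measure."""
    total_weight = sum(weights.values())
    return sum(scores[factor] * weights[factor] for factor in scores) / total_weight

scores = {"speed": 70, "clarity": 80, "helpfulness": 60, "outcome": 90}
weights = {"speed": 1, "clarity": 1, "helpfulness": 2, "outcome": 2}

print(composite_kpi(scores, weights))  # 75.0
```

Fixing the weights before the activity runs keeps the composite KPI objective; adjusting them afterwards to flatter the result would undermine the measure.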
Stage 3: Implement – source data to measure performance

Checklist
Task: to identify and gather evaluation data by:
• Identifying available data and evidence
• Creating proxies and assumptions
• Using monitoring, market research and feedback
• Reviewing any remaining gaps
• Agreeing who will collect data
• Completing the evaluation plan
Output: completed evaluation plan

At this stage, you need to source data and evidence for the performance metrics you identified in Stage 2. Budget, time and resource may restrict the amount of data you are able to gather, but this should not stop you evaluating your activity. It's better to produce an evaluation report with gaps in it than to produce nothing at all, provided that you are clear about what is missing.

Identifying available data and evidence

Begin by identifying the performance metrics for which data is immediately available. This will come from three main sources:
• data gathered from your activity
• data gathered by stakeholders and partners
• existing data sources (e.g. government data and wider media and lifestyle data).

Data gathered from your activity
Gather as much data as possible directly while your communication activity is running. Identify all the ways in which people can respond to it and ensure that data is being gathered for each one. Data might include web visits, telephone calls or face-to-face interactions. Also look for ways to gather additional data from these responses – for example, asking for people's personal details (while remaining mindful of the requirements of the Data Protection Act 1998) or asking for permission to contact them again for follow-up research.

Useful tip – plan ahead
Always try to identify the data sources that you plan to use in your evaluation before the activity actually runs. This will give you time to ensure that the right data is being gathered in the right format while the activity is live.
Data gathered by stakeholders and partners
Look at what data or evidence could be collected by an agency, stakeholder, partner or colleague working with you to deliver the activity. This might include the number of responses to an event or helpline run by a partner; any feedback received by stakeholders; data from surveys, competitions or promotions; and website statistics.

Useful tip – getting data from agencies
When you're procuring an agency to work on planning, running or evaluating an activity for you, always ask at the procurement stage what data they can provide to feed into the evaluation. This can then be built into their contract. Ask for data well in advance and, where appropriate, agree to share the results of your evaluation.

Existing data sources
As well as fresh data gathered from current activity, you may find it useful to look at data sources that already exist, to help you put your results in context and back them up with supporting evidence. For example:
• Media consumption surveys such as NRS (newspapers), BARB (TV) or comScore (digital media) provide information on the number of people reached by different media channels. These are particularly useful for input and output measures.
• Syndicated consumer lifestyle and media surveys such as TGI, TouchPoints or ACORN can be used to build up lifestyle and behavioural profiles for a range of audiences. These are particularly useful for tracking longer-term outcome measures and optimising future campaign planning.
• Existing government demographic and research data gathered centrally or by individual departments can be used for measuring longer-term outcomes. The Office for National Statistics publishes a number of surveys across all sectors of public interest.
For more information on how you might be able to use and access these and other surveys, talk to colleagues in your department's or hub's research and analytics teams, the Shared Communications Service, or external media and research agencies.

Creating proxies and assumptions

If you can't get the exact data that you need for a performance metric, look at whether you can source a similar piece of evidence as an alternative. This is known as a proxy measure. For example:
• if an activity is asking people to check their smoke alarms more regularly, a good proxy would be the number of nine-volt batteries being sold (a type of battery almost exclusively used in smoke alarms).
Is your activity similar to other activity run in the past for which you've got accurate results? If so, you might create an assumption that this activity will perform in the same way. For example:
• if your activity encourages people to sign up for a stop-smoking service, you may not have the budget to carry out research to see how many people have given up smoking as a result of using the service. However, all things being equal, you could assume that cessation rates were the same as in previous research studies.

Using monitoring, market research and feedback

If gaps still remain in your evaluation plan once you have exhausted all the available data sources, consider filling them using bespoke monitoring, market research and feedback.

Monitoring

You can use monitoring to track how many people your communication is reaching. The main monitoring techniques and tools include:

PR and media monitoring: If your activity is designed to generate media coverage, you could use an agency to track how many people it reached, how many times the key messages were mentioned and how favourable the coverage was. If you don't have the budget for this, consider whether you could monitor coverage yourself, either by subscribing to a PR-monitoring service such as Gorkana or by looking at what's being said across a representative selection of media channels.

Web monitoring: The Government Digital Service (GDS) measures a range of standard digital metrics for gov.uk and other government and partner sites. Topline data is available in the regular reports provided by GDS, and more detailed analysis will be available on request. If you are responsible for monitoring the performance of a stand-alone website, there is a range of free and paid-for tools that you can use, including Google Analytics.

Social media monitoring: GDS will also provide guidance on social media monitoring.
There are various tools that you can use to monitor performance yourself:

• Where your activity is hosted on a social media site such as Facebook, LinkedIn or YouTube, you can set up standard user reports to monitor interactions with your content. Look both at how many people interact with you and at the quality of these interactions.
• If you share information via Twitter, monitor how many people follow you and how many retweet your messages. Tweetdeck enables you to analyse interactions more effectively.
• Buzz-monitoring tools enable you to see whether people are commenting online about your message, and whether the comments are positive and from credible sources. Effective buzz-monitoring relies on you defining the terms that you want to monitor in advance; the more specific you can be, the more accurate the results. There is a wide range of free and paid-for buzz-monitoring tools. Google Alerts is one example of a free tool.
• There is a range of free and paid-for tools that can be used to monitor what terms people are searching for online. Google Trends provides a useful free snapshot.
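The core of buzz monitoring can be sketched in a few lines: define the terms you want to track in advance, then count mentions across a set of posts. The posts, terms and counting logic below are purely illustrative and not tied to any particular tool:

```python
# Minimal buzz-monitoring sketch: count mentions of pre-defined terms.
# The posts and terms here are hypothetical examples.
posts = [
    "Just checked my smoke alarm thanks to the new fire safety campaign",
    "Fire safety week starts on Monday",
    "Lovely weather in the park today",
]

# Define the terms to monitor in advance; the more specific they are,
# the more accurate the results.
terms = ["smoke alarm", "fire safety"]

mentions = {
    term: sum(term in post.lower() for post in posts)
    for term in terms
}
print(mentions)  # {'smoke alarm': 1, 'fire safety': 2}
```

A real tool would add sentiment and source-credibility checks on top of this counting step, but the principle of fixing the search terms before monitoring begins is the same.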
Market research

If you have gaps in the data for out-take, intermediate outcome and outcome metrics, then paid-for market research is generally the most effective way of filling them, provided that budget is available. Market research for evaluation doesn’t necessarily have to involve large-scale face-to-face quantitative surveys. Lower-cost research methodologies can be a useful source of insight. These include:

Omnibus studies: An omnibus study is a quantitative survey of a representative sample of an audience (usually, the general public). The questionnaire is made up of groups of questions placed by different clients, which means the overall costs are shared, making it a relatively cheap option. Omnibus surveys may not be appropriate if your activity is localised, your audience is niche, or if you want to ask many and/or in-depth questions about a particular topic.

Commissioned online surveys: Standard online surveys recruit respondents from large panels of people who have agreed to take part in research. They can be very cost-effective if your target audience is digitally engaged. However, check the quality of the panel and how it is managed in advance. Online panels are not always suitable for tracking long-term activities, as you may not wish to survey the same panel members repeatedly.

Qualitative research: Qualitative research, including discussion groups or interviews, can be a useful alternative to quantitative research in some circumstances. It may be appropriate for small-scale activities; when audiences are hard to reach; or when activity is only running for a short period with no requirement to track its impact over time.

When commissioning paid-for research, you may be able to lower your costs by reducing the size or specification of your sample, shortening the length of your questionnaire or simplifying your reporting requirements.
Research and evaluation specialists in your marketing hub or the Shared Communications Service (SCS) can provide advice and help with commissioning paid-for research or conducting your own online research.

Feedback

Feedback is informal comment and opinion that you gather yourself. It can be a valuable alternative to robust research when there are limitations in time, resource and budget. It may take the form of a few informal telephone or face-to-face interviews with the primary target audience or with those involved in delivering the activity; alternatively, feedback can be obtained through direct engagement with the audience, an online survey that you run yourself, or a blog, email or text.

If possible, seek input from colleagues in research or insight roles within your department or SCS before gathering feedback yourself. They will be able to give you advice on questionnaire design, data protection and propriety issues. They will also be able to advise you on how to use online survey tools such as SurveyMonkey when seeking feedback.
Examples of feedback include:

Press
• Speaking to a few journalists
• Monitoring the type of enquiries that you get when you run a story

Marketing
• Counting the number of people attending an event
• Informally asking stakeholders or partners for their observations on how an activity went
• Running a survey on your website or social media space
• Adding a question about your service to a call-centre script

Internal communication
• Informal interviews with frontline staff
• Feedback from training sessions

Feedback is not scientific. It will give you anecdotal evidence, rather than statistically robust measures. Use as many different sources as possible and ensure that your analysis reflects the limitations of using such informal methods (see Stage 4 for more detail).

Useful tip – data protection
If you’re asking members of the public to provide you with personal information, this will need to be gathered and stored in line with government and industry standards. Check your department’s information risk policy and talk to market research specialists in your marketing hub if you need more advice.

Reviewing any remaining gaps

After gathering the data, review your evaluation plan to identify any performance metrics that have no data source. If gaps still exist, consider how important it is to measure that particular performance metric or KPI. If it is not central to the evaluation, you may choose not to measure it at all but point out this limitation in the final report. If a performance metric or KPI is crucial to your evaluation, consider asking for additional resource or budget to measure it, and make clear the implications of not obtaining it.

Agreeing who will collect data

Before the activity runs, agree who will gather the data for each source that you have identified, when and in what format. Collecting each piece of data in the same format using consistent time periods and target audiences will help with the analysis later on.
Completing the evaluation plan

You should now complete your evaluation plan by setting out the data sources that you will use for each of your performance metrics and by noting any gaps that still exist. Your completed plan will now include:

• objectives and target audiences
• performance metrics and KPIs, and the data sources that you will use to measure each one
• any limitations in your evaluation
• agreed budget and resource needed to gather the data (signed off by the budget-holder if needed).

The three Cs – principles for good data collection

Whatever you’re measuring, make the data that you collect as continuous, consistent and comparable as possible.

Continuous
Most communication activity aims to get people to start, stop or continue a particular attitude or behaviour. To quantify its effect, try to measure people’s attitudes or behaviour before, during and after the activity runs. Benchmarking before it runs is particularly important to demonstrate the effect of your activity. For ongoing activity, measure performance regularly enough to show the effect it is having on attitudes or behaviour. The more data points you can capture, the more obvious the trends will be.

Consistent
Use consistent measures and methodology to assess activity that you repeat or run continuously over time. Using the same wording for questions, tracking against the same audience and collecting data in the same way every time will enable you to measure longer-term trends accurately.

Comparable
As far as you can, try to use the same measures for every piece of activity that you run. Try to make your measures as similar as possible to the ones used by other communicators in your team and across government. This will make it easier to compare results in the future.

The appendix gives you guidance on the types of measures that you might want to use for your activity to help with standardisation.
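The ‘continuous’ principle – benchmark before launch, then express each later measurement as a shift against that benchmark – amounts to very simple arithmetic. The figures below are hypothetical awareness percentages from a tracking survey:

```python
# Hypothetical awareness scores (%) from a consistent tracking survey.
tracking = {
    "before": 42.0,  # benchmark wave, captured before the activity launched
    "during": 48.5,
    "after": 55.0,
}

# Express each wave as a percentage-point shift against the benchmark.
baseline = tracking["before"]
shifts = {wave: round(score - baseline, 1) for wave, score in tracking.items()}
print(shifts)  # {'before': 0.0, 'during': 6.5, 'after': 13.0}
```

Without the ‘before’ wave there is nothing to subtract from, which is why benchmarking before the activity runs matters so much.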
Stage 4: Analyse and report performance against the plan

Checklist

Task: to assess the success of your activity by:
• Analysing effectiveness
• Demonstrating efficiency and value for money

Output: final evaluation report

Analysing effectiveness

Once your activity has run, you will need to measure how effectively it met its objectives. Begin by gathering data and evidence from all your sources and bringing it together in a centralised database or folder. Check to see whether it looks correct before beginning the analysis. Also, check that the activity ran as planned and whether anything unexpected is likely to have affected its performance.

Useful tip – activity diary
You will find it useful to keep a diary while your activity is running, noting down any external or operational factors that might affect performance as they happen (e.g. bad weather affecting event attendance; negative news stories affecting public perceptions; or downtime on a website affecting visitor numbers). This will make subsequent analysis easier.

How to approach analysis

Did your activity work as you expected?
At Stage 2, you mapped out how you expected your activity to work. Use this as the basis for your analysis. Create some key hypotheses – results or outcomes that you might expect to see based on your objectives, activity map and past performance.

Check performance against your KPIs and targets
Have these been met? If so, what is driving success? If not, consider whether your targets were realistic and whether your KPIs are accurate measures of success. For example, if you have set KPIs for a new activity, you may need to consider revising these in the future.
Be objective
Analyse the full range of potential outcomes and results for your activity. Do not ignore results or trends that don’t fit with your hypotheses or with the general patterns in the data. Instead, look for the reasons behind these. If something didn’t work in the way that you expected or failed to meet its objectives, look at the reasons why. This will help you to amend future activity and so improve future performance.

Try to isolate any operational or external issues
Did any unforeseen operational issues affect your activity’s performance? For example:
• Were there enough contact centre staff to respond to the calls which your activity generated?
• Did the website to which you were directing people stop working?
• Did external factors (e.g. negative PR, bad weather, the economic situation) have an impact on your activity’s success?
An activity diary can be very useful in helping you to identify these factors.

Analyse data on a bottom-up basis
If you are analysing the effect of more than one activity or channel, analyse data on a bottom-up basis against your evaluation plan. First, analyse the effectiveness of each individual activity against its sub-objective. Then, look at the effect that each activity had on the wider communication objective, considering which one had the greatest effect.

Assess the effect of communication on the policy objective
Consider the effect that communication as a whole has had on the wider policy objective, bearing in mind the effect of other interventions designed to meet that objective. You may not have sufficient data to fully isolate the effect of communication on the policy objective, but try to draw conclusions based on the available evidence.

Review progress
When evaluating activity that will not achieve its communication objectives for some time, look for evidence of progress towards these based on the KPIs and performance metrics that you have identified.
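Bottom-up analysis can be sketched as follows: check each activity against its own sub-objective target first, before considering its contribution to the wider objective. The activity names and figures below are invented for illustration:

```python
# Each activity is checked against its own sub-objective target first.
# All names and numbers below are hypothetical.
results = {
    "press launch": {"target": 20, "actual": 26},      # pieces of coverage
    "partner events": {"target": 500, "actual": 430},  # event attendees
}

for activity, r in results.items():
    status = "met" if r["actual"] >= r["target"] else "missed"
    print(f"{activity}: {status} target ({r['actual']} vs {r['target']})")
```

Only once each sub-objective has been assessed like this would you move up a level and ask which activity contributed most to the overall communication objective.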
Cross-check your conclusions
Where you have data from a range of different sources, check to see if the results from each one are pointing towards the same conclusions. This is particularly important where you are relying on less robust data sources such as informal feedback. One set of results may not always give a conclusive answer, but several pieces of evidence all pointing in the same direction may allow you to be more confident in your conclusions.
Signpost gaps and limitations
Your evaluation plan will identify the gaps in available data and the implications of those gaps. Be aware of the limitations that this puts on the evaluation. Where possible, try to draw assumptions about missing results from the available data. For example, if you don’t know how many people have stopped smoking as a result of your stop-smoking event, can you make an assumption about possible cessation rates based on the number of people who tell you that they’re going to quit at the end of the event? If you can’t make such assumptions, signpost missing data within the evaluation report.

Get specialist support where necessary
Some evaluation will require more sophisticated analysis, for example econometric modelling techniques to predict future performance or isolate the effect of different factors on performance. This type of analysis will generally be carried out by specialists, either within your department or externally. Only seek external support where you are unable to carry out the analysis within your marketing hub.

Demonstrating efficiency and value for money

When you evaluate performance, always ask yourself whether the results that you achieved justify the time, resource and money that you spent. Where you are able to quantify the actual effect that the activity had on the overall policy objective and put a financial value on this, you should be able to calculate the overall return on marketing investment. Otherwise, calculate the cost per result – the amount of time or money that was invested for each person who carried out a specified action. Compare your results with other activities that you or others have run to see which ones are most cost-effective and time-efficient and use your learning to optimise future activity.
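The stop-smoking example above amounts to a short chain of arithmetic. Every figure below is hypothetical, and the cessation rate is explicitly an assumption borrowed from earlier research, which you would need to state in the final report:

```python
# Estimating a missing outcome from a stated assumption.
# All figures are hypothetical.
attendees = 400                # people who attended the stop-smoking event
stated_intent_rate = 0.30      # share who said at the event they would quit
assumed_cessation_rate = 0.15  # ASSUMPTION: rate taken from earlier research

estimated_quitters = attendees * stated_intent_rate * assumed_cessation_rate
print(round(estimated_quitters))  # 18
```

Whatever estimate you produce this way, record the assumption and its source alongside the figure so that readers of the report can judge how much weight to place on it.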
Examples of return on marketing investment and valid cost per result figures include:

Return on marketing investment: Where you are able to demonstrate how many lives have been saved as a result of a campaign to reduce speeding on residential streets, and to calculate the financial value of each life saved, the return on marketing investment is the total number of lives saved multiplied by the financial value of each life saved, minus the cost of running the campaign.

Cost per result: Where you are not able to demonstrate how many lives have been saved as a result of the campaign, you may instead choose to calculate the cost per result, basing this on any action that people took after seeing the activity. This might include the cost per web visit (total campaign cost divided by total web visits) if the activity sends people to a website for more information, or the cost per event attendee (total campaign cost divided by total event attendees) if the activity involves running events on road safety.

Further guidance on demonstrating financial return is available in a separate GCN publication.3

3 Evaluating the financial impact of public sector marketing communication: An Introduction to Payback, Return on Marketing Investment (ROMI) and Cost Per Result (https://gcn.civilservice.gov.uk/wp-content/uploads/2012/12/intro-to-payback-romi-and-cpr.pdf)
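The two calculations can be sketched directly. All figures below are hypothetical; in practice the financial value placed on a life saved would come from your department’s own economic guidance. Note that cost per result always divides the total campaign cost by the number of results:

```python
# Hypothetical road-safety campaign figures.
campaign_cost = 500_000.0     # total campaign spend (£)
lives_saved = 3               # lives saved attributable to the campaign
value_per_life = 1_800_000.0  # assumed financial value per life saved (£)

# Return on marketing investment: value generated minus campaign cost.
romi = lives_saved * value_per_life - campaign_cost
print(f"ROMI: £{romi:,.0f}")  # ROMI: £4,900,000

# Cost per result: total campaign cost divided by the number of actions taken.
web_visits = 250_000
cost_per_visit = campaign_cost / web_visits
print(f"Cost per web visit: £{cost_per_visit:.2f}")  # Cost per web visit: £2.00
```

Calculated consistently like this, cost per result figures from different activities can be compared to see which delivered actions most efficiently.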
The evaluation report

You will have agreed the format for your report at Stage 1. Check that this is still correct. Whatever format your report is in, always try to follow these principles:

• be objective – and include all results, positive and negative
• always provide clear conclusions and recommendations – what should be done as a result of this report and by whom?
• separate fact from opinion and recommendations so that others can review the evidence on which you have based your conclusions
• acknowledge gaps in the data – and their implications
• state any assumptions that you have made and what you based these on
• include references for all data sources used in your report
• include appropriate graphs and images to aid interpretation.

A template for your evaluation report is included here. If you are submitting an evaluation report to ERG, you must use the following template.
Conclusions

Adopting the principles, processes and practical examples set out in this guide will enable you to design and implement evaluation plans for the full range of government communication activity to the standards required by ERG and the Communication Delivery Board. By following the recommended approach, you can also be confident that your work will meet the revised GCN competencies for government communicators.

Adopting good evaluation standards will also enable you both to demonstrate the contribution that your communication activity makes towards achieving overall policy objectives and to make a well-informed business case to invest further resource and budget in your work. Sharing your learning with other government communicators in your team, department or arm’s-length body and hub, and across GCN more widely, will enable others to benefit from your knowledge and insight as well.

Finally, remember that it can take time to fully implement the approach set out in this guide and to gather the data needed to understand what is driving success. Begin gradually, recognising that it is always better to produce a partial evaluation than nothing at all, provided that you are clear on the limitations of your end report.

This guide and the examples that it contains reflect knowledge and experience drawn from a wide range of colleagues from across the government communication network. Further comment, feedback and examples are always welcome. Please contact evaluation@co.gsi.gov.uk.

Helpful contacts and resources

If you want more help or support in applying the content in this guide, talk to your hub lead or to the evaluation specialists within your hub. The GCN website includes their contact details.
There are also a number of teams within the Government Communication Centre who may be able to help:

• The Campaigns and Strategy Team works with the communication hubs to define good practice evaluation standards across government and to ensure that these are reflected in all annual communication plans and ERG submissions.
• The Evaluation Team in the Shared Communications Service can provide further advice on how to apply these evaluation standards to specific projects. They can offer practical support in planning and conducting your evaluation and advice on the procurement of external agencies, and should be your first point of contact for general evaluation queries.
• The Government Procurement Service will provide access to specialist external agencies, if required.
• The Government Communication Network runs training courses on evaluation, and the GCN website contains a wealth of information and access to groups and individuals to help you with your evaluation. Go to https://gcn.civilservice.gov.uk/ for more information.
Appendix: Recommended metrics

Introduction

This appendix gives examples of the types of performance metric that you should consider including in your evaluation plan. They are split by discipline:

1. Press
2. Marketing
3. Internal communication

Within each discipline, performance metrics are further split by channel and objective type. Choose the set that is most relevant to your activity, adapt it as appropriate and include it in your evaluation plan. If you are using more than one channel, you will need to pick separate sets for each channel that your activity uses.

If your activity reaches the end audience via an intermediary (e.g. the media or a partner), use separate sets of metrics to measure:

• how effectively your activity engaged the intermediary;
• how effectively your intermediary engaged the end audience on your behalf.

Templates for activities with and without an intermediary audience are available on the GCN website.

Useful tip – adapting performance metrics to your activity
Ensure that you are clear on your activity’s objectives, the channels that you are using and the key messages that you are promoting. This will enable you to adapt the example performance metrics to fit your activity.
1. Press activity

Press activity includes proactive publicity (activity which proactively promotes a policy or your organisation), reactive media handling (activity put in place to respond to a specific issue or event, or to rebut inaccurate coverage of it) and media briefing and handling sessions that you organise to support ministers and senior policy officials in their work.

All types of press activity have an intermediary audience (e.g. the media, or another spokesperson such as a minister or senior official) and an end audience (generally members of the public). Choose the performance metrics that are most relevant for your activity from the list below.

Useful tip – evaluating intermediary and end audiences
The final outcome for your intermediary audience should always form the input for your end audience – use this evaluation plan template to help you.

Inputs for intermediary audience

The number and nature of press or media activities that you carry out. This might include:

• Proactive publicity – the number of PR activities, media briefings and packages issued to the media
• Reactive media handling – the number of corrections, reactive statements and rebuttals issued, and the number of interviews arranged
• Media briefing and handling – the number of media briefings and media handling sessions that you organise for ministers and officials
• Any costs incurred in running the activity, and the time and internal resources used.
Outputs for intermediary audience

• Proactive publicity – the number of media contacts that you reach with your activity, the number of times that you contact them, the messages that you pass on
• Reactive media handling – the number of media contacts that you reach with your corrections, reactive statements and rebuttals, the number of interviews that take place
• Media briefing and handling – the number of briefings and training sessions that you organise, the information and skills that you pass on.
Out-takes for intermediary audience

• Proactive publicity – the attitude of the media overall (and key media contacts where applicable) towards the message that you are promoting or towards your organisation more generally
• Reactive media handling – the attitude of the media towards the issue that you are working on and your handling of it. Have you changed their knowledge or attitude?
• Media briefing and handling – what do ministers and senior officials think about the briefing and training that you provide? Do they find it useful? Do they intend to put it into practice?

Intermediate outcomes for intermediary audience

• Proactive publicity – the number of media contacts that you reach with your activity, the number of times that you contact them, the messages that you pass on
• Reactive media handling – the number of media contacts that you reach with your corrections, reactive statements and rebuttals, the number of interviews that take place
• Media briefing and handling – knowledge and skills that ministers and officials gain as a result of your briefing or training sessions.

Final outcomes for intermediary audience / Inputs for end audience (these performance metrics are the same)

The volume and quality of media coverage achieved by your activity. This might include:

• Proactive publicity – number of pieces of coverage achieved, accuracy of coverage, favourability of coverage, key message penetration, quotes and interviews used
• Reactive media handling – number of pieces of coverage or interviews containing your responses, amendments or corrections, overall accuracy of coverage, favourability of coverage, the amount of negative coverage that has been prevented as a result of your work

Useful tip – measuring a reduction in negative coverage
It is often far harder to measure the negative coverage that has been avoided as a result of the corrections and rebuttals that you issue.
By keeping a record of the contacts that you have with the media for each issue that you work on, over time it becomes easier to demonstrate how your interventions are affecting coverage in the longer term.

• Media briefing and handling – the number of times that your contacts use the knowledge and skills that you have passed on in their contact with the media, the volume of coverage achieved as a result of these contacts, accuracy and favourability of coverage, key message penetration.