Five Es For Campaign Design and Management

Campaign Strategy Blog 24 January 2021. Chris Rose chris@campaignstrategy.co.uk

Here’s one for campaign planners, funders and managers.  This blog argues for considering five Es in campaign design, adding Evidence and Ethics to the usual Economy, Efficiency and Effectiveness.  Because campaigning typically requires a bespoke design, acquiring the right evidence to test the effectiveness of any proposed critical path is of great importance.  And while most cause groups are ‘ethical’, identifying and conserving the primary ethical purpose is another test that should be applied, to prevent a debilitating accumulation of objectives added simply because they are ethically desirable.  Honing campaign tools, and strategies in the limited case of campaigns that are essentially ‘repeat business’, are the main cases where optimising economy and efficiency is a worthwhile use of resources.


Introduction

Anyone who’s ever ventured into a conversation with managers versed in ‘value for money’ thinking will probably have come across the ‘3-Es’: economy, efficiency and effectiveness. These useful distinctions apply to campaigning as much as to anything else, and particularly to making design and investment choices across a programme of campaigns, or between campaigning and other activities.

The OECD’s explanation suggests ‘value for money’ can be found in balancing the 3-Es, viz:

Economy: Reducing the cost of resources used for an activity, with a regard for maintaining quality.

Efficiency: Increasing output for a given input, or minimising input for a given output, with a regard for maintaining quality.

Effectiveness: Successfully achieving the intended outcomes from an activity.

Seeing as almost any campaign has proponents who think it is of supreme importance, they will always want to prioritise effectiveness: throw everything at it. That’s not very helpful if you have a suite of organisational commitments.

Inadequate Resources

On the other hand it’s very common for campaign resources to be spread too thinly for any of them to have much chance of working, especially in organisations with weak leadership (no effective prioritisation, so everything is a priority) or where there is no practice or culture of finding evidence that something will or won’t work before committing to campaign design and execution. That may sound obvious but it’s a widespread problem.  Such evidence needs to be real, verifiable and independent of the aspirations or preconceptions of the campaigners.

The same issue of under-resourced campaigns arises when organisations fail to distinguish between advocacy and campaigning.  This happens most often in organisations which don’t just campaign but which do a lot of policy-advocacy work. In these groups ‘campaigning’ may just mean mobilising signs of public support for advocacy positions, and the policy/advocacy units or staff are often the de facto gatekeepers of target choices, priorities and resources. This may work if, for some reason, a bit of mobilisation is all that’s needed to tip the balance. In my experience, in many more cases such ‘campaigns’ fail because that isn’t enough to achieve an objective.  Yet consciously or unconsciously, the organisation prefers running such enhanced advocacy to the alternative of an instrumental campaign which changes outcomes by making changes in the real world.  Such changes are of course often less popular and more controversial than simply advocating change.

Façade Campaigns

Façade villages created by Potemkin to impress Catherine the Great en route to Crimea are a legend or myth that has become a byword or metaphor for fakery (image: Wikimedia Commons)

In other cases ‘campaigns’ are presented as such but are in reality adjuncts to fundraising or membership, for instance list-building or prospect-acquisition exercises.  These are ‘Potemkin’ campaigns: modern, usually digital, equivalents of cardboard façades built to create an impression of substance. Such campaigns can achieve the 3-Es, but not for the ostensible purpose presented to the public.

Evidence

So as a rule instrumental campaign planning also requires a fourth E – evidence.

Use the issue mapping exercise (aka dialogue mapping) to identify possible interventions and the need for evidence.

For example, if you want a thing to be stopped, how might it be stopped?  Don’t know?  Then find out how it works: what things, steps and processes does it need in order to happen and to continue?  Each of those is a potential way to stop it, if you can take one away or block it.  How do those things function?  Ask questions of the answers to previous questions (Horst Rittel) until you have a big enough network of potential causes and effects mapped out to start to see possible routes to change – the start of a candidate critical path.

Things put forward as evidence also need to be questioned. Possible types of evidence of what will make a difference might include:

  • Observation – we’ve seen it happen, or fail to happen, or someone else has (but was it cause and effect?)
  • Claim – they say so (who? what’s their evidence, or is it just a belief?)
  • Inference (whose? needs more testing against empirical evidence if possible)
  • Independent analysis of how the system in question works (ie not ours, preferably from a source which is neither for nor against us on ‘the issue’)
  • Experimental proof – someone has run an experiment, or a de facto experiment, whether intended or not
  • One or more of the above that we know to be accepted by the target decision-maker as likely to lead to the result we want (usually from intelligence about the thinking and preconceptions of others)

In the commercial communications world, where planners are concerned with specific audiences, such evidence is often called ‘insight’.  It’s why qualitative research is used to test assumptions made from polling: testing what the data actually means.

Asking and trying to answer questions about evidence also reveals the knowns and unknowns.  When you do a mapping process to generate a possible critical path, make a list of things that need researching in order to validate assumptions – assumptions are not facts until validated. You or your team may not know something – for instance, does D really lead to E, and if so how? – but someone else might.  The most cost-effective step may be to find that person rather than trying to generate the knowledge from primary research.

Knowns and Unknowns

Many strategists, risk analysts and project managers like to use a known/unknown grid.  In 2002 this emerged into the popular media when US Secretary of Defense Donald Rumsfeld said at a press briefing about Iraq:

“there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know”

Rumsfeld missed out unknown knowns – things your organisation actually knows but you don’t – which is why it’s a good idea to ‘empty the pockets’ of your organisation and allies before making your plan, as this is free but unutilised knowledge.  It may be, for example, that your head campaigner knows more about the topic than anyone else in your team, but that does not mean s/he knows everything your network knows about it.

On the basis that knowledge you have accessed and used is ‘tapped’, blogger Management Yogi produced a version of the grid in which the things your network knows but you don’t are the ‘hidden facts’.

Uncovering this reality is one reason why any ‘mapping’ should not become a one-step decision-making process (even more so if it uses a closed, pre-formed selection of factors such as PEST). A desire for speed can lead you to make a decision based just on what you know for sure, and to wrongly assume that this can’t be improved on with a bit of research.  As Bill Fournet of the Persimmon Group wrote, this known/unknown technique:

‘provides a quick and simple approach to identify and determine which assumptions you need to focus on first. Sometimes, all it takes is a phone call or an email to get an answer. Yet, so many teams fail to take that step’

Here’s his version of the grid:

Known/unknown grid (Bill Fournet, Persimmon Group)

Resources

The question obviously arises: how much effort do you invest in trying to shift things into the known-known fact box?  The answer partly depends on how important ‘getting it right’ is to you.  A priority campaign with a large investment is presumably more important than one with a small investment of resources, and even more so if the opportunity is rare or, so far as you know, unique.

Yet because ‘important’ often gets transposed into ‘urgent’, campaigns get launched despite a very weak evidence base.  This may also happen simply because the group concerned does not research change mechanisms at all, and only looks at the mission-level importance of an objective, finds it huge, and assumes that ‘we must do something’ > ‘this is something’ > ‘so we’ll do this’.

It’s clear though that validating the known unknowns (the unknown facts) and the unknown knowns (untapped knowledge) ought to have first call on your research resource, as these are the most resolvable categories. The unknown unknowns are harder to investigate and may need to be set aside – triaged out – if there is a deadline for deciding action.

The unknown unknowns are better dealt with in horizon-scanning exercises and include ‘black swans’, unpredictable catastrophic events or those assumed to be impossible.

In practice the divisions between the categories are not always completely impermeable.  Some campaigns are largely or wholly about issues with a high degree of ignorance but where existing knowledge means you can infer there may be a big problem. For example a high potential impact from a hazard might be inferred from a known unknown, such as ‘once released, we don’t know how to get this back’, coupled with some known facts, for example ‘things like this have caused serious problems’, even where the probability of occurrence and the specific consequences may not be knowable at present. Some new technologies and chemicals are perhaps the best known examples.

Andy Stirling at Sussex University has distinguished strict uncertainty, ambiguity and ignorance, covered together by the term ‘incertitude’.  Where there is no evidence base for assigning probabilities to risks and outcomes, a precautionary approach is the appropriate response.  He says: ‘dilemmas of incertitude typically mean that no particular policy can be uniquely validated by the available evidence. The idea of a single ‘evidence based policy’ is an oxymoron’.  Although he was writing about policy, Stirling’s point also applies to looking at evidence for campaigns.

Efficiency and Economy

The over-riding importance of effectiveness does not mean there is no place for improving economy or efficiency in campaigning, but the case is strongest where a campaign is repeat business.  Once an effective model has been devised, and so long as you can reasonably expect to do much the same thing in the same circumstances, it’s worth investing time and effort in doing it in a cheaper, more efficient manner.

This however is more likely to apply to the tools, logistical assets or tactics used in a campaign, for instance means of communication, than to the strategy itself. It’s also more likely to apply to non-campaign work, such as service delivery.  For example, a nature conservation organisation may need to campaign as well as acquire and run protected areas, but each campaign is likely to have particular circumstances not predictable in advance, and to require a bespoke strategy.  The land acquisition and management work is more routine and predictable, not least because it is largely governed by accepted and regulated frameworks, whereas campaigning may be necessary for the very reason that the established political and social systems have failed, or need changing.

This needs to be understood by the Management and Governance functions of an organisation.  You cannot apply evaluation metrics that place a lot of emphasis on economy and efficiency (or productivity) to campaigns in the same way you can to routine repeat business.

Ambition

As described in How to Win Campaigns: Communications for Change (Ch. 11) each organisation needs to develop its own campaign style, including the tone and organisational role played by campaigning, so it feels comfortable within the brand and is understood and accepted in the community of the organisation.  Some organisations typically run campaigns that are much more strategically ambitious than others (eg in the nature case, restricted site-defence campaigns at one end of the ambition dimension and changes to the prevailing social and economic model and how it affects nature, at the other end).  One way of looking at this is the ambition box.

Ambition Box from How to Win Campaigns: Communications for Change

Finally, it may well be worth looking at the efficiency, economy and effectiveness of the campaign planning, strategy and programming system itself.  If that’s not adequate then evaluating the downstream campaigns is a bit of a waste of time, as their failings may be symptoms of the upstream problem.

Ethics

In the case of cause organisations a fifth E often comes into play: Ethics.  If morals are rules given by authority and ethics are self-adopted principles governing our lives, the default campaign design problem is not too little ethics but too much – or rather too many objectives added for, or justified by, ethical purposes.

That’s because most change-campaigners and their organisations are Pioneers, with a psychological commitment to act ethically. Coupled with the Pioneer tendency to think that the more ideas and consultation thrown into the decision-making the better, plus a love of doing things differently, campaign plans and execution can become encrusted with ethical barnacles.   This is why I suggest Ethics as the fifth E for campaign planners: so that effectiveness does not fall foul of trying to serve too many ethical purposes at once.

To be clear, this is not an argument about being ethical per se. The very act of deciding to develop, run, support or finance a campaign is, in many cases, ethical at root.

It’s a design question. Each campaign needs to have a single clear ultimate change-objective.  That objective might serve several ethical purposes but if those would best be served by making a set of different changes, then they should be pursued with different campaigns. Failing because you attempted to do too many ethical things at once is not a very ethical use of time and money.

The same applies if a set of possible changes all serve the same ethical purpose.  For pursuing the mission of an organisation set on that purpose, they might all be equally valid, but if they involve different targets in different systems (eg social, cultural, temporal, economic or geographic), they will require different critical paths.

This is a simple reality of design, not confined to campaigns.  The screwdriver attachment on a multi-purpose tool is unlikely to be as good at the screwdriver job as a set of screwdrivers made with the same amount of metal and effort.  The meal-ready-to-eat nutrition bar designed as survival food is never going to give the sophisticated flavours of a meal in a five-star restaurant.  And the family saloon car may be fairly good at lots of things, but it’s never going to be as good at high-speed travel as a racing car, or as good at sustained off-roading as a purpose-made 4×4.  As form follows intended function, effective design can involve zero-sum choices.  That in turn means that achieving the objective dictates the design, and the design can’t take on unlimited ethical tasks along the way. This is easily lost sight of during internal consultation.

It’s tempting for cause organisations to try and add extra ethical functions to a campaign because they all have internal advocates, and decision-makers might like the campaign to deliver on them all. If this is an issue, candidate designs should be tested against real world evidence.

Ethical overload can also lead to wider unintended consequences. If we signal that we would like others to change behaviours or practices for ethical reasons A, B and C when audiences are far from ready to do so, we may create a values-bombing effect of resentment and opposition (as I have argued Political Correctness did in the case of pre-Brexit developments, particularly but not only with some Settlers). If the campaign also fails to achieve its objectives, we look like failures (especially unattractive to Prospectors) and the overall impact is negative.

Criticisms to Ignore

My advice to groups faced with arguments over ethical objectives is to bear in mind the core mission of your organisation and why you want to run a campaign on a particular issue.  There is an almost unlimited universe of ethical causes which could become imperatives, and they are unlikely to be effectively optimised in one campaign.

This risk is mitigated by picking a strategic objective.  If you have picked the thing to change because it’s the biggest available and achievable change you can make on subject A, then the fact that your campaign could also have targeted topic B, or B through to F, is not a criticism you need to accept.  The critics really need to go away and find an organisation whose primary task is to change B or C or D or E or F, or accept that you will run a campaign on those another day.

Plus, even in an organisation which has a policy on, or advocates for, change on a, b, c, d, e and f, running a change campaign is a much heavier-duty, more resource- and opportunity-focused exercise than advocacy, so the same applies.  For the practical purpose of producing campaigns that may actually make gains, rather than simply drawing attention to the case for making changes, the ethical profile of a campaign often needs to be limited by its primary purpose in order to produce an ethical gain.

The Limitations of Campaigning

This is also one reason why campaigning is a limited tool.  It has to focus attention and engagement on a single change, and often on seizing a single moment of opportunity or, more onerously, creating one.  Similar limitations mean campaigning cannot be a good way of doing education (education increases awareness of possibilities, whereas each campaign step necessarily focuses on supporting a specific call to action), and it cannot properly substitute for politics and government, which involve ongoing negotiated trade-offs.

Finally, ethics, fashions and moral norms are not fixed, so at an organisational level campaign groups face similar follower-supporter and wider social expectations to companies and public bodies in moving with the times.

What count as ‘hygiene factors’, expectations that would apply to anything an organisation or brand does, will change over time, but not all of these will be motivational factors determining whether a particular campaign or organisation is supported.  For instance, being low-carbon is becoming an expectation of businesses, whereas it used to be a distinguishing exception. If not already, this will be expected of all cause groups as well as corporations.  Right now, however, speeding up the elimination of carbon emissions is not the primary purpose of every campaign by every campaign group.

Summary

The campaigning five Es:

  • Economy – reducing the cost of resources used for an activity, while maintaining quality
  • Efficiency – increasing output for a given input, or minimising input for a given output, while maintaining quality (worth optimising mainly for tools and repeat-business campaigns)
  • Effectiveness – successfully achieving the intended outcomes of the campaign
  • Evidence – real, verifiable knowledge, independent of the campaigners’ aspirations or preconceptions, that a proposed critical path can work
  • Ethics – identifying and conserving the primary ethical purpose, rather than accumulating objectives simply because they are ethically desirable

See also

The UK National Audit Office on Value For Money and the 3-Es in commissioning

The Australian Government on 3-Es with ethical procurement as a 4th

Some research-impact researchers on Effectiveness and Efficiency, having jettisoned economy for Equity

Brand hygiene ideas evolved from Herzberg’s ‘two factor’ motivational theory, originating in studies of employee satisfaction https://en.wikipedia.org/wiki/Two-factor_theory

 
