Risk/Crisis Communication

Sunday, February 18, 2007

Ronnie's thoughts on evaluation

Every risk communication effort should be evaluated. Naturally. The chapter opened with that premise, and it's a no-brainer. If you don't evaluate what you did this time, how will you know how to improve upon it, or whether you need to make changes, the next time around?

That being said, it takes time and effort to make a proper evaluation. And sometimes you may not have the resources in terms of time and/or money to make a full-scale effort. Can you shoot from the hip and make a reasonable assessment of what you did right and what you did wrong? I would say yes.

I think most of us engaged in the business of communication can make a fairly good assessment about when something works – or when it does not. I’ll start with the premise that most of us are reasonably intelligent and reasonably fair in judging ourselves and in judging the efforts of others.

It wouldn’t take a rocket scientist to figure out that FEMA did a really poor job of interacting with people in New Orleans and attending to risk communication there and with the rest of the nation in the aftermath of Hurricane Katrina.

Or that Jet Blue will take a hit now – after years of favorable public and consumer relations – after hundreds of its passengers were left for up to 11 hours in parked jets at New York's John F. Kennedy Airport when the planes couldn't take off because of bad weather. I can only imagine the kind of evaluation and measurement that will be going on at Jet Blue now. (As an FYI, there is currently a movement to establish a federal airline passengers' bill of rights.)

In both cases, tens or even hundreds of thousands of dollars can be spent assessing and evaluating the actions and inactions of the risk communicators. But the final conclusions will probably not be much different from what the communicator can assess right from the start – FEMA and Jet Blue mishandled and mismanaged their communications. (There's actually a more common vernacular I might have used here to describe FEMA and Jet Blue's efforts, but in the interest of verbal delicacy, I'll refrain!)

This “eyeball” assessment is not unlike a journalist's crowd count. Obviously, when a reporter covers a particular event and provides a crowd tally, the journalist does not count every person there. They will eyeball the crowd to determine whether there are dozens, hundreds, thousands or tens of thousands of people present. (Of course, the journalist will also ask for a number from the event organizer – but eyeballs can still work and are a good counter to inflated attendance claims.)

Organizations like FEMA and Jet Blue, however, can afford to conduct full evaluations. Smaller community-based organizations often cannot, so common sense is relied upon even more. For example, here in town, St. Francis House may not have the resources to assess its success in communicating the risk to the homeless in Gainesville and the risk to the rest of the community if the homeless are not taken into account. But they can make a fairly accurate and quick assessment (I think) of how successful they are by looking at the news coverage they get, the attention paid to homelessness by the city and county governments, and the kind of response they get to requests for volunteers and donations from members of the community.

In undertaking an evaluation of a risk communication effort, common sense can count for a lot, especially when time and money are issues. Evaluation is not dissimilar to measurement, but I think care has to be taken not to spend so much time evaluating and measuring success (or failure) that it prevents getting done the very job one is trying to measure or evaluate. In other words, let's not get caught up so excessively in proving our worth that we find ourselves with nothing worth evaluating or measuring.

As far as legal matters are concerned – as raised in the chapter reading – you can superficially show you adhered to the letter of the law when risk communication involves a legal technicality. But if you are working for an organization whose life is conducted in the public eye – whether a government agency, an advocacy group or a non-profit – it stands to reason that your organization would be operating legally. If not, you have a much bigger risk assessment to make than simple communication!

The set of factors developed by Kasperson and Palmlund is useful and should be taken into account. It is important to make the right choice in determining who will conduct the evaluation. Does the evaluator understand the issues and topics at hand? The example given about the project manager who had his risk communication messages evaluated by technical experts made sense, as did his efforts to provide them with guidelines and a frame of reference about what he was trying to do. But again, those guidelines presuppose that funds are available for this kind of effort.

Evaluation can show management that your risk communication effort was successful. But it seems a little too easy to simply use whether objectives were met as the standard of success. The question might be whether the objectives that were laid out were the correct ones, and whether they are truly measurable in a way that is meaningful to you and your organization.
