Why most speaker evaluations are useless
Most organizers never bother to collect feedback from the attendees, and among those who do, the feedback often isn’t passed on to the speakers. It’s a shame, because the organizers are in the best position to share feedback with the speakers; after all, they invited them to speak, so technically the speakers work for the hosts. But being as busy as they are, the organizers don’t always communicate the gathered data back to the speakers. They ask the good speakers to come back and leave the rest to figure out life for themselves.
Some do provide feedback, and Figure 8-1 shows a typical report for a speaker at an event. This is real data from a real event, and the speaker was me.
Figure 8-1. My scorecard from a recent speaking engagement.
At first glance, this looks good. Apparently 58 of the 129 people who responded were “Somewhat satisfied.” That doesn’t sound too bad. I even managed to score a “Very satisfied” from 38 additional people. But a rating of “Neutral” from 29 people is worthless. I’d rather they were forced to decide—if they’re not sure where they stand, I’d consider them dissatisfied. Or perhaps they dozed off. That would actually be fascinating data to know: how many people fell asleep during the lecture? (That’s a stat I’d love to see at all lectures, especially in universities.)
But the single most valuable data point is how my scores ...