This past week, I had the pleasure of attending the Public Relations Society of America (PRSA) Yankee Chapter’s event, “AI and PR: Sparking Curiosity, Removing Fear,” hosted at the Nackey S. Loeb School of Communications at St. Anselm College in Manchester, New Hampshire.
Yeah, everybody is writing about AI, pushing AI products, or bragging about how they have integrated AI into existing products, but I put aside my usual distaste for things everyone else is into (I call it the “Grateful Dead Effect”) to dive into a very informative and necessary session.
Rather than recap everything said (why be boring?), I’ll drop a few of my personal impressions here, but first, a nod to the speakers:
- Mark McClennan, APR, Fellow PRSA, a former colleague from my early days in public relations, spoke about AI strategies and ethics (listen to his podcast, Ethical Voices).
- Rebecca Emery, APR, gave a great overview of AI tools and strategies for using them (she has more at SeacoastAI.com).
- Jim Schachter of New Hampshire Public Radio gave the media perspective, joining the above and moderator Todd Van Hoosear in a panel to cap off the event.
Anyway, on to my thoughts:
- The highlight of Mark’s presentation was a list of “rules” (not commandments, for the non-religious) for AI in PR. A pervasive theme was not to put confidential or proprietary information into AI chat prompts (for example, if you want AI help writing a press release, don’t put the unannounced news in there!). It seems simple, but fully half of his rules covered some variation of that point, so it bears repeating. And repeating.
- One of the rules was not to rely on AI for translations. I agree with that, but it brought a question to mind: what if AI tools get so good at translation that people universally deem them reliable? Can we change the rule then? Could this apply to other guidelines and guardrails we put around AI? (For the record, Mark was open to changing rules as circumstances change.)
- One last thought on ethics and AI. In public relations, do we have “ethical use” properly defined?
Rebecca Emery focused on strategies for using AI as well; we need to be prepared for employers and clients to ask us to use these tools in campaigns. In particular, she reminded us to shape our prompts with our precise audiences in mind, and to have the AI ask us questions back to shape better outcomes.
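To make those two tips concrete, here is a minimal sketch in Python using OpenAI’s official client. The audience description, prompt wording, and model choice are my own hypothetical examples, not anything Rebecca presented:

```python
# A minimal sketch of two prompt strategies from the talk, using the
# OpenAI Python client (pip install openai). All wording below is a
# hypothetical example, not from the presentation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Help me draft a media pitch. "
    # Tip 1: name the precise audience, not just "the general public."
    "The audience is local business reporters at New England daily "
    "papers who cover small manufacturers. "
    # Tip 2: have the AI ask you questions before it writes anything.
    "Before drafting, ask me up to five questions whose answers would "
    "make the pitch stronger, then wait for my replies."
    # Per Mark's rules: no confidential or unannounced news goes here.
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name; swap in whatever you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same pattern works in any of the chat tools listed below; the point is the shape of the prompt, not the plumbing.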
Helpfully, her presentation focused more on the available tools and their differences than on the dos and don’ts. Unsaid was that most of us probably dabble in ChatGPT, and are exposed to tools like Microsoft Copilot or Google’s Gemini because they are pushed on us via their flagship products, but there is a whole roster of tools with different features, such as:
- ChatGPT
- Anthropic Claude (interesting because it does not retain your data and does not access the internet, making it a better choice for privacy)
- Perplexity AI (a “conversational search engine”)
- Microsoft Copilot (I believe this was once known as Bing Chat, but it’s probably for the best that the Bing branding is gone)
- Google Gemini (once known as Bard)
- Pi AI (billed as “emotionally intelligent” AI)
Rather than trying to recreate Rebecca’s talk, I will just encourage everyone to experiment with these different tools for different use cases. That’s what I will be doing.
As a very useful aside, my former employer, Eric Enge (author of “The Art of SEO”), conducted a study comparing four of these (ChatGPT, Google, Microsoft, and Claude). The results, published in January (and, I suspect, likely to be updated), are worth reading.