(revision 14th Feb 2023)
This article is constantly updated. If you know someone who might like to read it, feel free to share the link below so they can see the most up-to-date version. If they sign up for the weekly newsletter, they’ll be sent the link automatically.
This article: https://thepublicrelationspodcast.com/courses/how-to-use-ai-for-your-pr-today/
Newsletter: https://thepublicrelationspodcast.com/subscribe/
How to access ChatGPT yourself
- Head to https://openai.com/blog/chatgpt/
- Click “Try ChatGPT”
- Sign up for a free account and start typing.
- Try any question you like. You can even ask it to write in the style of Shakespeare.
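If you prefer to work programmatically, the same prompts can be sent through OpenAI’s API rather than the chat window. The sketch below is only an illustration: it builds the kind of prompt discussed in this article, and the actual API call is shown in comments because it needs an account and API key to run (the model name will also change over time).

```python
# A minimal sketch of prompting ChatGPT programmatically via OpenAI's API.
# Assumes you have signed up for an account and have an API key.

def build_prompt(topic: str) -> str:
    """Compose the kind of instruction you would type into the ChatGPT box."""
    return f"Write a press release, with quotes, about {topic}."

prompt = build_prompt("ChatGPT")
print(prompt)

# To actually send the prompt you would use the openai package, e.g.:
#   import openai
#   openai.api_key = "YOUR_API_KEY"
#   response = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}],
#   )
#   print(response.choices[0].message["content"])
```

The point is that anything you can type into the chat box can also be automated, which matters once you start generating variations of the same release.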
Can A.I. write a press release or article for you?
In theory yes. It all depends on how much data it has on the topic.
If you type “Write a press release about ChatGPT” into ChatGPT it will write a press release with quotes in a format that is pretty much ready to go.
If you write “Write a press release on the outbreak of COVID” it’ll do that.
If you want a short article on COVID it will do that too.
If you are writing some content and are stuck and need inspiration, A.I. can help finish it off. It will even help you plan a book, telling you what to include and suggesting questions for interviewees.
If you need 3 case studies to fill out your article today, you may well be able to find them by simply asking it as well.
If you need to do research to find the top experts on “Cordyceps” (the fungus in the HBO zombie show The Last of Us) then type in “Who are the top experts on the brain infection Cordyceps?” It will give you a list of them and tell you about them.
Can it suggest a good title for an article? Try the phrase, “What is the best SEO title for an article on A.I.?”
Could it write the “perfect” press release for every journalist?
ChatGPT can already answer a question in the style of Shakespeare. It turns its normal answer into poetry to do it.
So yes, in theory, it is possible but in practice, not quite yet.
A.I. would need a lot of data on the journalists or influencers being targeted to be able to assess their style and favourite angles on topics. That means it needs lots of words to analyse.
This is complicated by the fact that journalists often move from job to job and editorial guidelines change from one outlet to another. It’s something that might actually be easier with influencer bloggers.
As A.I. gets better, it will inevitably get better at this too. Based on the Shakespeare example, it’s not far-fetched to think that within the next few years, A.I. could write an article for your website and rewrite it into press releases targeting individuals’ unique styles, all based on a fact sheet you give it.
Watch out for more!
Can you repurpose content?
Yes. There are already tools such as https://www.summarizer.org/ which try to summarise articles for you. You enter an article, it pulls out the key points and sums them up. This “summary” could be repurposed as a blog post or other content.
You can also ask it to pull out the main bullet points in an article. These could be made into lists to share on social media, or turned into lead magnets and fact sheets.
The quality certainly varies but once again this is just the start.
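Under the hood, simple summarisers often work extractively: score each sentence by how frequent its words are across the article and keep the highest-scoring ones. The toy sketch below illustrates that idea only; it is not summarizer.org’s actual method, which isn’t public.

```python
# Toy extractive summariser: keep the sentences whose words are most
# frequent across the whole text. Real tools are far more sophisticated.
import re
from collections import Counter

def summarise(text: str, keep: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Score each sentence by the total frequency of its words.
    score = lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower()))
    top = set(sorted(sentences, key=score, reverse=True)[:keep])
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

article = ("AI helps PR teams. AI drafts press releases fast. "
           "Cats sleep a lot.")
print(summarise(article))  # the two on-topic sentences survive
```

You can see from the sketch why quality varies: frequency is a crude proxy for importance, which is why a human still needs to check the result.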
Can I write my press release in one language and send it out in hundreds of languages around the world?
Yes. While it might not seem like A.I., it is. So much of language translation is not about direct word-for-word translation but about translating context and meaning. One platform currently in alpha is Notion AI, but there are numerous translation services and they are getting a lot better all the time. That said, it may be worth putting caveats on your press release because the translation won’t be perfect, and that could seriously put off journalists who may be less than impressed.
My finance director recorded a video/audio announcement and gave the wrong figures. Could A.I. fix it?
Tools like Descript.com (a video and audio editor) already have a feature called “Overdub” which uses a voice model which you train.
For example, if I said “We made a 20% improvement in sales this year” but actually it was 22%, I could tell it to say 22% instead and it would use my voice to do that.
While it can’t change the video (yet) and a lipreader may see the words differently, it could save a huge amount on costly reshoots.
Can I describe an image and A.I. will create an image for me?
Yes, you can.
DALL-E for example generates images based on a description.
There is also a system called Midjourney. It’s quite fiddly to use but produces some fascinating images.
Midjourney generated the following images when I asked it for “William Shakespeare as a robot at the Globe theatre”.

Can I create music for my creative work?
Yes. SoundDraw allows you to do that already. Google is also working on a more advanced system.
Google published a paper in January 2023 outlining a system where you type a text description of a piece of music and it generates that music for you.
A system was around a few years ago that allowed you to hum a tune and it would try to convert it into a style of music.
It would literally turn your humming into a piece of classical music played by a fully synthesized orchestra. Although my humming wasn’t exactly Mozart!
Can I hold a Zoom meeting and read my notes without ever losing eye contact with the camera?
Eye contact is a key part of human relations as any public relations person knows. When we look down at notes, we lose credibility. We all know the “Zoom look” where everyone in the meeting is looking down.
Nvidia Broadcast fixes that by making your eyes look at the camera even if “you” are not looking at it. It’s creepy but fascinating too.
Can I monitor the exact mood of my customers minute by minute?
Again, yes but with caveats. One of the hardest things for A.I. to do is to think like a human.
To draw conclusions, it has to be told what counts as negative. What seems negative for one product may be a benefit for another.
So it comes down to the programmer’s ability to define that based on the avatar your organisation has selected.
Customer service and marketing software have provided “sentiment” scores for years but left the final assessment to humans. A.I. certainly has the potential to do a lot of the heavy lifting here, saving you time trudging through that data, but accurate assessments will need a lot of sophistication, which translates as: they’ll be expensive.
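The basic mechanics of a sentiment score are simple and cheap; it’s the context that is expensive. The toy lexicon-based sketch below (the lexicon is invented purely for illustration) shows why: someone has to decide the weight of each word, and the right weights differ from product to product.

```python
# Toy lexicon-based sentiment scoring: sum the weights of known words.
# The lexicon is invented for illustration. Note that a word like
# "unpredictable" might need a negative weight for a car brand but a
# positive one for a thriller publisher - that judgement is the hard part.
def sentiment(text: str, lexicon: dict) -> int:
    return sum(lexicon.get(word, 0) for word in text.lower().split())

lexicon = {"love": 2, "great": 1, "slow": -1, "broken": -2}
print(sentiment("Love the great design but delivery slow", lexicon))  # 2
```

A real system replaces the hand-built lexicon with a trained model, but the same question remains: who decides what counts as negative for *your* organisation’s avatar?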
Is it a fad that will pass?
Try it!
That’s probably the best way to answer that.
Will it speed up my work, and make my PR efforts more efficient?
Yes, potentially by quite a bit but maybe not as much as it will in the future.
It will certainly help you find ideas and prompt your thought process and do a lot of the “grunt” work but you’ll still need to check and fine-tune it.
A.I. expert, Dr Mark van Rijmenam, speaking on The Public Relations Podcast, wrote a book in 5 days with A.I. but told the show that the quality was nowhere near as high as his human-written books, lacking the insight and storytelling that only humans can deliver at the moment.
Will search engines start using A.I. and what will it mean for me?
A.I. is simply not good enough (yet) to provide us with 100% accurate, up-to-date information but it’s not going to stop people from trying.
As discussed in this article, accuracy remains a significant problem for A.I. currently. So the traditional approach of listing results for you as a human to sift through isn’t going away just yet but it may not be far off.
Google and Baidu (the Chinese search engine) are both looking to implement systems like ChatGPT.
The goal will be to add a “chatbot”-style interface to the search engines whereby you ask a question and it delivers a complete answer on what you want in the way you want it. Google have pointed out that you could ask it to explain astrophysics to a six-year-old and it would.
Perhaps the biggest question is how much people will choose to use an inaccurate answer simply because it’s easier. Wikipedia is a flawed model but so useful we still use it and just hope the errors aren’t too serious.
So Chatbot search is coming.
As Search engines move to a more chatbot focus what will it mean for SEO and PR generated content?
As accuracy improves it will no longer be good enough to just be on page 1 of a search engine’s results.
Search engines and chatbots like ChatGPT will no longer provide hundreds of possible answers unless you ask them to. Instead, they will look at multiple sources and compile their own answer. There is a real danger you won’t even be mentioned, even if your content was a primary source.
If you are mentioned, you’ll probably need to make sure you are in the top 2-3 results. The goal of chatbot search engines will be to give one answer to the person asking the question not a list of results.
On the plus side, when someone asks an A.I. bot about the experts in a field, if you are considered a good source, you may be recommended with much more authority than in a current search results list.
For this reason alone, “Inbound PR” will remain important in order to ensure when A.I. does mention you and people ask where to find such a service, your name is in there.
To appear in the AI results, public relations people will need to “feed the bot”.
The impact of A.I. on online quality and search results
A.I. is likely to lead to a flood of “average”-quality content.
Some marketers are already trying to pump out SEO-optimised content before search engines get better at spotting it and everyone else gets in on the A.I. gold rush.
But there are a few factors at play here.
A.I. content will certainly be better than the spammy, keyword-stuffed content of the old days. It writes naturally, with clear answers, generally good grammar and spelling, and sound structure and keyword use. That gives it a head start over what we saw in the past.
Search engines are not going to ignore A.I.-generated content either. Google has said in tweets that it is not against A.I. in principle. That’s probably partly because they want a slice of the action, but also because they’ve said their goal is to serve the best answers to their users’ questions, whether generated by A.I. or not.
But the problem is this. A.I. can indeed generate very good answers fast BUT what it can’t do (yet) is provide genuinely original insight. It can only “compile” answers from the information that is already out there. That means it’s not going to give the “best” answers and that means they are not going to stand out at the top of search results or appear in chat bot search results.
Plus this average-level content will also be competing against a huge amount of other average-level content. As A.I. content appears at a faster and faster rate across the internet, there will be a lot of “average” answers. A.I. in itself won’t be enough to stand out from the noise because the noise will grow at a whole new level to what we have seen in the past.
Search engines are also likely to get better at identifying “bot”-generated content. They’ll be looking for patterns in the writing, common themes and styles that inevitably appear because bots have to work from a pattern to generate their answers. They simply don’t have that human ability to find the new angle or the quirky approach. Google have hinted they’ll be able to spot this.
Website authority will be just as important, as search engines look to see which sources people reference above the A.I. noise. They’ll also be checking that those backlinks are authentic and reflect human linking patterns, not A.I.-generated ones.
It may also be more important than ever to niche in a topic so as to be the one search engines and A.I. bots reference too.
So unique research and unique insight, i.e. even higher-quality content that only you can put out, will be needed if you want the search engines to take you seriously and the bots to reference you in their answers.
Its limitations
A.I. can only be as accurate as its analysis skills and the amount of data it has access to.
In 2023 ChatGPT version 3 is based on a limited data set that only contains information up to 2021. Anything that happened in the world after that, in effect, never happened.
The content that A.I. can produce also currently lacks any unique insight. All it is doing is pooling information from other places and combining them in a way that reflects what you are asking. There is no real “analysis” or “intelligence”.
That means the content it produces is not original, lacks insight, and provides superficial answers. That doesn’t mean it’s not useful, but it’s never going to be the unique content that makes someone a thought leader.
Another problem it faces is its ability to know what is “correct”.
It has to make a judgement based on what it has found and the signals that point to the most credible sources. A.I. expert Dr Mark van Rijmenam, talking on the show, said ChatGPT cited him as the author of three academic papers he didn’t write.
A.I. for now, is best seen as a platform to help humans do what they do better and faster.
As Mark said: “A.I. is better than a human, but a human combined with A.I. will beat A.I. every day.”
The Copyright Problem
ChatGPT (and all A.I.) gets its vast knowledge and power by reading what is out there on the internet already. In other words, it takes work created by other people and uses it to give answers which appear to be its own, often without attributing the original source.
There have already been a number of lawsuits over A.I. image generation (early 2023) claiming that image creators’ work was used to train the models without permission. Image houses such as Getty (which face a real threat to their business model), as well as a number of other firms, are taking action, so there is strong motivation for them to pursue this. In the UK, a planned exception in copyright law that would have allowed A.I. to use images has been paused.
A.I. can’t draw an image itself. It doesn’t know what an owl is or who Shakespeare was. It needs to gather that information from original sources made by people in order to combine it into an image. It is very possible therefore that recognisable elements will appear even if they are adjusted with image filters.
Some image creators give permission for “derivative” works for free (for example on Pexels.com). This is essentially what A.I. image generation does, but almost all those creators require attribution, which A.I. currently isn’t providing.
Will this be a problem for text-based content?
By the nature of text, it will be a lot harder to prove a copyright breach when answers are rephrased, as they are.
Unless the answer is unique, and so can be attributed to your work, it will be hard to prove.
As mentioned before, if the answer is unique, such as content taken from academic research or specialist publications, then the bots will need to start attributing it to the creator, which could be you. There will be no pressure to do this for low-quality content.
Will law firms target you if you use A.I.? Well, as always, check with your lawyer; this is not legal advice. If we look at past copyright issues, though, we can see how legal firms first go after the provider of services and then the people who used those services.
For example, people who used Google Images many years ago are now targeted by bots (ironically) which scan the web looking for people to pursue.
As Mark said in the episode, A.I. is perhaps best used as the inspiration for content, rather than content in itself.
With all those warnings there, a lot of people are already using A.I. to write content, especially for websites and SEO. The temptation is too much when an article (albeit low quality) can be written in minutes not hours.
As always, you’ll need to check with your legal team if you have concerns about the future as it may come back to bite.
EFFECTS ON THE INDUSTRY
Will journalists need public relations people anymore?
Just as with PR, quality media outlets are unlikely to turn to A.I. fully. Quality outlets exist because of their insight, contacts, and ability to provide unique content.
This could be anything from national media outlets to trade industry outlets where there simply isn’t the data for A.I. to deliver what people pay these journalists for.
Quality media outlets will still need PR people to give them access to things they wouldn’t get through an automated system.
However, free and low-cost media outlets are likely to turn more and more to A.I.-generated, clickbait content.
Fully automated newsrooms with no real journalists are a real possibility. For example, after Microsoft bought a stake in OpenAI, it replaced around 70 of its human journalists with A.I.-generated content.
These low-cost outlets may still accept high-quality content if it gives them a way to run unique content they won’t get from bots. Writing content which is ready to go and follows their guidelines will be crucial. When you pitch the story, you may be talking to the only person in the newsroom and they won’t have time to re-write the whole thing.
An area of opportunity that is unlikely to go anywhere is opinion media that relies on talking directly with viewers: news-talk, live guests and interaction. These outlets rely on having a “human” to host those chats. Watching people argue over the colour of ice cream will never be quite as exciting if bots are doing the arguing.
Will public relations officers be needed anymore?
Yes, and for some time, but the role may change to reflect its title better.
The transition from “press officer” to a person who manages “relationships” with the “public” is likely to continue.
If anyone is likely to be axed, it’ll be lower-skilled general marketing people. Adverts and marketing are likely to be generated more and more by A.I. that analyses the mood of the target audience and adapts in real time. Infographics and design will be generated by bots in seconds and be running live as soon as a skilled human gives the go-ahead.
Skills such as video editing will increasingly be replaced. “Quik” from GoPro was one of the early automatic video editors. It no longer takes an experienced editor to produce something that is “ok”; a TikToker with an iPhone can beat million-dollar advertising campaigns.
But the thing A.I. can’t do yet is the human element.
Business is meant to run on data but we all know businesses and organisations run on people and people don’t always act rationally. A.I. will always struggle to work this out (just like we do often!).
So highly skilled humans who understand the organisation’s avatar and the implications of what the bot has come up with will still be needed for years to come.
Over the next few years, A.I. will get very good at monitoring, suggesting and assessing based on data. It’s going to be a powerful tool. But would you let it write a joke to send out to your customers completely on its own, without checking? If not, why not?
Another point: there is a strong phenomenon showing that when humans talk to bots, they are a lot more rude and aggressive. In one survey, 50% of all commands given to a virtual assistant contained aggressive words. Part of that is frustration at bots’ inability to answer questions, especially in customer service, but we still do it even when the bot gets things right.
And finally, on a practical level, who is going to staff the PR stunt you organise? Who is going to chaperone influencers and journalists?
For the same reason, bots will never be able to fully manage long-term emotional relationships with the public. They’ll get very good at it but it’ll always need a skilled human to make the final call.
What will it mean for crisis PR?
Deepfakes, which can replace a person with someone else’s face, body and voice, or generate an entirely new one, will get better.
Fully life-like humans, or “close enough to human” to be able to fool us, are coming. It’s not there yet and it is going to take time but it’s coming.
An example of the kind of impact this could have on your organisation can be seen recently with Eli Lilly, a pharmaceutical company.
The firm saw $15 billion wiped off its market value after someone tweeted, from an account that appeared to be the company’s, that the firm would give insulin away for free in future. That was caused by a single tweet; imagine the effect of video.
It may be that society grows to distrust all such content, but if enough people are pulled into the ruse, or it’s simply shared for “fun” even though people know it’s not true, mud sticks.
A.I. will certainly get better at detecting A.I. fakes. You can envisage YouTube adding a warning that content may be fake but the people doing it will always be one step ahead.
Just like Eli Lilly, attacks could come from activists looking to change an organisation by force or from criminals.
It will be possible to threaten brands with bad press just as DDoS attacks can currently take down websites.
In the case of DDoS attacks, hackers bring down websites by swamping them with visits from bots. The same could be true for fake content.
If a central fake piece of content is sent out that is backed up by multiple other pieces of content that appear to validate the original content, the damage could be done.
PR people are already on call 24/7, but will it mean they are now on call every second of the day? Well, A.I. (if you can afford it) should in future be able to monitor growing mentions and trends around the brand you look after faster, even before they take off.
And, as always, you should have a crisis plan ready to kick in anyway.
Can you afford AI?
A.I. expert Dr Mark van Rijmenam, speaking on The Public Relations Podcast, said it’s not a question of whether you can use A.I. but what quality level you can afford.
The more you pay, the higher quality results you will get and the more useful it will be.
For example, small PR operations could use simple open-source A.I. to help generate draft press releases and articles.
Larger outlets could be analysing trends in their customer base before they even emerge and analysing where those trends are likely to go next, allowing PR people to steer the human side of things such as events, stunts and real human interactions to reflect the data for the day.
Will the results you get from A.I. tools be limited or restricted on ethical grounds?
Yes, but it depends.
ChatGPT, for example, already won’t provide a “press release on the death of Diana Princess of Wales”. It tells you the topic is not suitable.
It’s inevitable that governments, law enforcement, companies and activists will try to influence the results.
This is likely to have the most effect on open-source A.I. systems with higher quality (more expensive) systems providing less affected data.
What is the best way to approach AI for now?
A.I. expert Dr Mark van Rijmenam, speaking on The Public Relations Podcast, summed it up….
“AI beats humans, but humans augmented with AI beats AI. So, it’s the combination of human intelligence and AI that is the most powerful.”
What do you think?
What do you think will happen with A.I. and PR?