If you haven’t already heard about ChatGPT – OpenAI’s chatbot – it’s been dubbed a game changer in the world of artificial intelligence (AI), and rightfully so. ChatGPT can mimic a therapist and provide seemingly adequate mental health advice. Like a journalist, content writer, or screenwriter, it can spit out long-form prose and stories in seconds. It is even capable of emulating an engineer and writing code.
The discourse around this technology has been all-consuming lately. Scroll through Twitter, and you’ll surely see at least a couple of tweets on your timeline mentioning it. Dozens of articles are published on a daily basis about its capabilities, limitations, and the ethical concerns around it.
I’ve been doing my best to ignore all of this chatter because – to be quite honest – it scared me. As someone who identifies first and foremost as a writer, I couldn’t help but become defensive. This feeling – that, in the foreseeable future, I could be replaced by a machine, and that all of the time, energy, money, and emotion I’ve poured into developing my skill set as a writer could go out the door – has not been easy to grapple with.
But, I’m finding that I can’t keep ignoring this discomfort. After all, Buffer is currently working to embed AI technology into our product, and as a content writer here, I’ve had to finally confront the elephant in the room.
I don’t normally write personal pieces for the Buffer blog, but this essay is my attempt to tackle my complicated feelings about AI head-on and potentially find a middle ground.
To better understand my unwillingness to adapt to this technology, here’s some context. Ever since I can remember, storytelling has been an integral part of who I am. I have distinct memories of watching Bollywood films as a little girl and being in awe of the sentiments, the dancing, and the stories. These films inspired me to write my first script at ten years old (I didn’t get very far).
I went on to major in communication studies and creative writing in undergrad and received my master’s degree in cinema and media studies. Writing has always been at the core of all of my educational and work experiences. I’ve written academic papers, journalism pieces, short stories, screenplays – you name it – about representation and diversity in the media.
Some of the most impactful classes I took were small seminars in grad school where, for three hours every week, my classmates and I workshopped our screenplays. These were stories that we came up with. Scripts filled with dialogue we painstakingly wrote, characters we crafted who – to us – existed as fully formed individuals, and lessons and themes we hoped others could relate to. We did it because we cared about the story. We did it because, to us, storytelling is the way we connect with others and make sense of the world.
My peers and I didn’t get these advanced degrees for an ego boost or because the job market highly values creatives – but because we’re truly passionate about the craft.
Even when I switched over from my creative endeavors to journalism and content writing, the work has remained personal for me. When I write blog posts for Buffer, I talk to real people and use examples from actual small businesses and content creators. I’m always inserting myself and a human connection into every single piece of writing I produce. And that’s what makes it good.
You’re telling me some computer chatbot can replicate that? Give me a break.
But to my dismay, AI is already replacing writers. CNET just made headlines for quietly publishing several articles entirely written by AI. Not only were these articles filled with errors that needed to be corrected by real people – The Washington Post even dubbed it “a journalistic disaster” – but the AI also seemed to be plagiarizing several sentences from other pieces. A Futurism investigation found “extensive evidence that the CNET AI’s work has demonstrated deep structural and phrasing similarities to articles previously published elsewhere, without giving credit.”
While OpenAI hasn’t shared exactly how it trained ChatGPT, according to this CNBC article, the chatbot was fed information from the web, archived books, and Wikipedia, and learned text patterns to produce similar writing of its own. While this may not be outright copying, it still feels like this technology is unethically pulling from other writers without proper approval or citation. (See how I credit my sources?)
Sure, maybe there are still tweaks that need to be made to this technology. Maybe with ongoing updates these robot journalists will make fewer factual errors, and maybe they will learn to remix others’ work well enough that the plagiarism will no longer be obvious. However, these ethical concerns will always be issues in my eyes.
Now that I’ve laid out my stance on AI, I think it’s only fair to tell you how Buffer is approaching this space. We value transparency here at Buffer, which is why I can be so frank about my dislike of this technology on our blog ahead of us launching AI on our own platform.
I spoke to two of my colleagues who are currently working on Buffer’s AI assistant – Diego and Ismail – about my personal hesitations. They both assured me Buffer’s main goal with AI is to help our users – mostly made up of creators and small and medium-sized businesses – who are running most or all of the operations by themselves and have limited resources.
Diego said his vision for the tool is that it “never replaces human creativity but be a sidekick that assists you and that can actually – if done correctly – unlock a lot of potential.”
Specifically, Ismail believes that these AI writing tools can help with writer’s block, making it easier for our users to write social media captions or generate text for their blog post. He also pointed out that what the AI spits out will not always be the final version – just a jumping-off point – and someone will need to reshape and edit the words a bit.
While I have my personal qualms with AI writing tools, I’ve interviewed many small business owners while working at Buffer, and know firsthand how swamped they can be with their day-to-day work. In fact, many of these businesses are one- to three-person teams, and social media marketing is understandably not their first priority. These are the very individuals who our Design and Product teams are hoping Buffer’s AI can assist.
In our conversation, Ismail also suggested I be more flexible in my mindset. Rather than view AI as threatening my very livelihood, he believes I can use it to my advantage. And he’s not alone. While these tools have been met with hesitancy by a lot of writers, many have chosen to embrace them.
A VICE article looks at how a university in Australia is supportive of its students using tools such as ChatGPT. Instead of viewing it as cheating, they believe this type of technology can usher in a new standard of learning. Similarly, an Atlantic article titled “How ChatGPT Will Destabilize White-Collar Work” discusses how, while some jobs will inevitably be lost, writers can utilize this technology to advance their skills. MIT professor David Autor is quoted in the piece saying, “AI will help people use expertise more. It means that we’ll specialize more.”
There is so much speculation around AI and its impact, but only time will tell. For Diego and the rest of our Product and Design team, now is the perfect opportunity for Buffer to explore this technology.
“It is important for us to play in this space,” Diego said. “To understand the disruptive potential that it has and how much value it can unlock for our customers.”
Up until now, my resistance to this technology has been strong. Not only have I been avoiding all of the AI writing and photo tools on the market, but I’ve even turned off Google Docs smart suggestions in protest.
But, after some reflection, I have decided to push past my reluctance and start utilizing these tools. It’s becoming quite clear that AI is here to stay, and I know that my stubbornness to adapt could hurt me down the line.
To be clear, I am planning to use these tools for outlining and brainstorming purposes only, never to supplant my own writing.
I am also starting to come around and see the potential value this could add to Buffer users – real people who just need a little bit of help when it comes to creating social media copy for their businesses.
There is an uneasy feeling that I still can’t shake, however, and this example from writer Arnesa Buljušmić-Kustura encapsulates my concerns. She tweeted about being replaced by ChatGPT only to have her former employer ask her to edit the AI’s subpar copy for free.
While the example clearly demonstrates that this technology is not more capable than humans, it also highlights the worrying fact that writers are being undermined and undervalued because of these very tools.
In a perfect world, ChatGPT and other writing software would be used in a limited capacity, as mere assistants for writers, like my colleagues envision. Instead, many employers are already choosing to go all in on these tools hoping to cut costs, rank for SEO, and bring in more traffic, regardless of the quality or integrity of the work.
The one thing that gives me peace of mind is that I don’t think anything could ever replace human ingenuity. After all, these tools are being fed content from real writers. There are also certain skills, including interviewing and original reporting, that AI just can’t do yet.
Still, it’s becoming obvious that AI-written content will become more and more common. But I believe there will come a point when all of these computer-generated words will begin to stand out for all of the wrong reasons – the fact that most of it is rudimentary and dry, devoid of empathy, humanity, and wit.
So despite my genuine concerns, I am convinced that ChatGPT and the slate of tools like it are no match for human inventiveness. Writers and their dedication and commitment to the craft will always win out at the end of the day.