"The Ghost Writer in the Machine: The Evolution of AI, Ghostwriting and the Quest for Authenticity" demystifies the dance between the artful craft of ghostwriting and the mechanical precision of AI.
Ghostwriting has been around since the dawn of writing itself, and as a profession its fundamentals have changed very little. But with the recent arrival of ChatGPT and other AI writing tools, the content creation landscape has shifted suddenly and dramatically. Now, theoretically at least, a capable ghostwriter lives inside every machine connected to the internet. Does this momentous paradigm shift mark the beginning of the end for human writers, or merely the dawn of a new era in human-AI collaboration? This article seeks to demystify the dance between the artful craft of ghostwriting and the mechanical precision of AI.
Ghostwriting - the act of writing on behalf of someone else without being directly credited for the work - is an occupation as old as the hills. Down through the millennia, ghostwriters have been the silent voices behind powerful monarchs, political orators, and - in more recent times - celebrities and bestselling authors.
In the popular imagination, ghostwriting may seem at times like an occult, shadowy practice – deliberately shielded from view, and therefore somehow shrouded in mystery. These unsung literary heroes, much like their supernatural counterparts, aren’t really meant to be seen. They inhabit a ghostly, quasi-mythical realm – hence the name.
The earliest recorded forms of ghostwriting date back to ancient Greece and Rome, where scribes were hired to write speeches and other forms of official communication for public figures – a practice that continues uninterrupted to this day. But post-Gutenberg, with the steady mechanization and industrialization of print publishing, ghostwriting evolved to meet the growing demand for what we in the modern era often refer to as “content”.
The advent of the digital age has introduced new tools, mediums, platforms, and contexts – expanding the demand for ghostwriters exponentially, and propelling them into the mainstream. And while ghostwriting may have once seemed like a mysterious practice, in the 21st-century attention economy it is an unremarkable, widely distributed global industry, estimated to reach $43B by 2030¹ – with perhaps tens of millions of freelance writers offering their services, typically remotely, in every niche imaginable. These days, however, in the world of ‘digital-first’ communications, ghostwriters are much more likely simply to be called “content creators” or “writers” – offering their services via online marketplaces such as Fiverr and Upwork.
But irrespective of all the changes and developments that have taken place through the years, at its core, ghostwriting hasn’t really changed all that much. In the modern age, as in ancient times, the art of ghostwriting (of writing to order, without attribution) is based on the same fundamentals – involving human agency, diplomacy, wordcraft, intellect, and ingenuity.
That is until now. Because things have recently been seriously shaken up in the world of ghostwriting – with the arrival of a new form of AI. Suddenly, there’s a ghostwriter in every machine…
In November 2022, OpenAI’s ChatGPT (short for "Chat Generative Pre-trained Transformer") exploded onto the scene – like a new class of technology arriving from an alien planet. Born out of layers upon layers of neural networks, and built on top of a powerful, multi-purpose Large Language Model (LLM), the new AI-powered chatbot went from zero to 100 million monthly active users in just two short months – making it the fastest-growing consumer application in history.
And now, in 2024, as a result of ChatGPT and many other LLMs that have since launched, AI is rapidly reshaping the media landscape. No longer embedded within the confines of an invisible, algorithmic layer - deployed by the social media giants to commoditize our eyeballs - this is a radically different type of AI, one that we are interacting with directly as end users in ways that have an immediate, demonstrable impact on our personal and professional lives.
Nowhere has this rapid technological paradigm shift been felt more than in the domain of content creation. From drafting articles and speeches, to summarizing existing texts, to creating conversational agents, the content creation capabilities of LLMs are both profound and varied. And suddenly, the art of ghostwriting isn't just about human writers working behind the scenes without taking the credit. It's about increasingly sophisticated forms of AI, based on powerful natural language models – learning to mimic human language, and rapidly assimilating the art of weaving words into stories.
The impact of all this - less than 18 months after ChatGPT launched - already feels profound. At the very least, we are witnessing a wholesale reshaping of the boundaries between human and machine-generated content, sparking heated debates around authenticity, originality and copyright, the ownership and use of data, content overload, the potential perils of AI centralization, the threat of new forms of social engineering and political propaganda, concerns over censorship and misinformation, discussions around the importance of human agency and sovereignty, and questions concerning the very essence of creativity itself. That’s a lot!
This article seeks to explore and unpack this new domain, as we immerse ourselves in the depths of this seemingly supernatural dance between humans and AI. We aim to dispel the myths, shine a light on some of the hard truths, and suggest how we might chart a course for the future of content creation – a future where humans and machines collaborate harmoniously, rather than pitting one against the other in a ‘winner-takes-all' scenario.
Join us as we navigate this thrilling new frontier, in a journey to understand the balance between human creativity and machine-driven efficiency…
AI tools such as ChatGPT undeniably offer immense potential for accelerating and streamlining the content creation process. Their primary strengths lie in their incredible speed – and the concomitant ability to produce content at vast scale – alongside their powerful data analysis and information-gathering capabilities.
ChatGPT can generate in-depth articles, reports, stories, media packs, sales literature, and pretty much any other kind of content you can imagine, at a pace that no human writer can match. LLMs can tailor material for publishing on multiple platforms simultaneously – ensuring both greater reach and consistent, high-volume output. Tools like ChatGPT’s Advanced Data Analysis feature can analyze user data, including feedback and engagement metrics, in order to continually refine content strategies and optimize for different audience preferences.
AI also excels in some aspects of research, content structuring, and summarizing. By quickly parsing vast datasets, it can aggregate relevant information and structure it to serve as a solid foundation for human-driven content creation. Furthermore, by recognizing patterns from existing high-quality work, AI can also suggest optimal content structures that enhance logical flow and readability – which in turn increases audience engagement. Perplexity is a fantastic example of an AI model that is positioning itself as a research tool rather than just a content generation tool².
Content editing itself, which is often a multi-stage process, can be significantly enhanced with AI's ability to identify inconsistencies, grammatical errors, structural anomalies and areas that may require further refinement. While human oversight remains essential, AI can substantially reduce the editing time – ensuring much cleaner drafts in the initial stages, and ultimately more polished output.
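As a rough illustration of the kinds of mechanical checks that can clean up early drafts before human editing, here is a minimal sketch (the function name and thresholds are our own, purely illustrative – real AI-powered editors go far beyond simple pattern matching) that flags immediately repeated words and overlong sentences for human review:

```python
import re

def basic_draft_checks(text: str, max_sentence_words: int = 35) -> list[str]:
    """Flag immediately repeated words and overlong sentences for human review."""
    issues = []
    # Immediate word repetition, e.g. "the the"
    for match in re.finditer(r"\b(\w+)\s+\1\b", text, flags=re.IGNORECASE):
        issues.append(f"repeated word: '{match.group(1)}'")
    # Sentences above the length threshold
    for sentence in re.split(r"[.!?]+", text):
        words = sentence.split()
        if len(words) > max_sentence_words:
            issues.append(f"long sentence ({len(words)} words)")
    return issues
```

The point of such a pass is not to replace the editor, but to hand them a cleaner draft – exactly the division of labor described above.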
However, as we explore below, it's important to distinguish between AI-assisted content generation and fully AI-generated content. The former involves AI tools aiding human writers with research, structuring, and editing, while the core creative process remains human-driven. The latter refers to content created entirely by algorithms, without substantive human input – and it may lack nuance and emotional resonance.
The ease with which virtually unlimited amounts of AI-generated content can now be created - ostensibly with the push of a button - represents a significant paradigm shift in the world of online media. We call it “push-button GPT” content. And for some of the Web 2.0 giants, such as Google and Amazon, this is a serious bone of contention – and something they are in the very early stages of grappling with.
Amazon, for example, after seeing an influx of AI-generated ebooks, recently changed its policies on AI content. Now, authors must declare whether their work is human-made, AI-generated, or AI-assisted. The company specifies that AI-generated material such as "text, visuals, or translations produced by an AI-driven tool" is still considered AI-generated even if significant alterations are applied subsequently. On the other hand, AI-assisted content is identified as that which is crafted by authors and sellers independently – albeit with the utilization of AI tools to "polish, rectify, scrutinize for errors, or enhance in other ways"³.
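Amazon's distinction, as described above, boils down to a simple decision rule: it is the origin of the material, not the extent of later editing, that determines the category. A minimal sketch of that rule (the class and field names here are our own illustration, not Amazon's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Work:
    """Simplified model of a submitted work (illustrative, not Amazon's schema)."""
    created_by_ai: bool       # was the core text/visuals/translation produced by an AI tool?
    ai_used_for_polish: bool  # was AI used only to polish, correct, or check for errors?

def classify(work: Work) -> str:
    """Classify per the policy described above: origin, not later editing, decides."""
    if work.created_by_ai:
        # Even substantial human edits afterwards do not change the category.
        return "AI-Generated"
    if work.ai_used_for_polish:
        return "AI-Assisted"
    return "Human-Made"
```

Note that under this rule a heavily human-edited AI draft still lands in "AI-Generated" – the asymmetry that makes the policy consequential for push-button workflows.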
In terms of Google search rankings and the company’s SEO policy, early adopters of ChatGPT for content generation initially feared that Google would staunchly oppose any AI-generated content, but the company has since clarified its position with the announcement of a core update⁴. The focus appears to remain on the content's adherence to the historical principles of "experience, expertise, authoritativeness, and trustworthiness" – collectively known as E-E-A-T.
Understandably, Google states that: “Using automation - including AI - to generate content with the primary purpose of manipulating ranking in search results is a violation of our spam policies”.
This is part of a long-standing game of ‘cat and mouse’ between Google and parties trying to ‘game the algorithm’ for commercial gain (with little or no interest in providing valuable content). Google does, however, clarify that, “not all use of automation, including AI generation, is spam”; and that “AI has the ability to power new levels of expression and creativity”, as well as “to serve as a critical tool to help people create great content for the web”.
Google’s recommendation for creators considering deploying AI-generated content online is that - irrespective of how the content is produced - “those seeking success in Google Search should be looking to produce original, high-quality, people-first content,” which prioritizes the aforementioned principles of E-E-A-T.
Of course, it’s perfectly natural that the search engine folks would want to take a qualitative approach, and to dissuade certain ‘spammy’ practices by enforcing harsh penalties – they are, after all, in the business of “organizing the world’s information and making it universally accessible and useful”.
But perhaps more importantly, the entire purpose of content marketing is to drive not only engagement but conversion – turning the ideal customer profile, or "audience", you desire into sales. Put simply, content marketing is more a question of quality than quantity – so don't penalize yourself by creating a mountain of content. At best, you might 100x your engagement but still not attract the right audience.
Pete Crowley, Head of Growth at RB2B, speaks from direct experience of using AI to create 40,000 articles at a previous company: “AI content may drive a ton of engagement, but if it does not drive the right inbound audience, there is a limited chance of conversion.”
In conclusion, contrary to the initial post-GPT narrative, Google does not specifically single out or penalize AI-generated content. Amazon appears to take a slightly more aggressive approach with its policy on ebooks, which may frequently be published purely for commercial gain rather than to provide meaningful value to readers. In both instances, the quality of the content is clearly paramount – as is the importance of human oversight and involvement. Businesses using AI for content generation should take heed of these moves by the web leviathans to categorize the different types of AI-generated content, and should ensure that their content is well-researched and grounded in user intent – with a deliberate focus on quality over quantity, in order to avoid negative consequences for their SEO rankings or brand reputation.
Following on from this, it’s worthwhile to specifically underline the differences between using these new systems as supplementary tools, versus using them in a “push-button” context to generate content.
AI-assisted content refers to material where AI tools aid human writers in the research, structuring, or editing of material – but where the core content originates from human thought and creativity. This synergy harnesses the strengths of both AI – technical and data-driven – and human intellect, the “spirit” of human ingenuity. This approach will typically yield results that are both high-quality and efficient.
Conversely, AI-generated content is exclusively crafted by algorithms, without any human agency in the creation process – except, of course, the initial prompt or “push-button” activation. While impressive in its coherence and blistering speed, this content typically lacks the nuance, depth, original thinking, and emotional resonance that human writers bring to the table.
The allure of push-button, AI-generated content is undeniable. We are seemingly presented with the promise of a world where, with little more than a single click, entire pitch decks, white papers, articles, stories, and essays literally spring into existence, like a magician pulling a rabbit out of a hat. For many early adopters exploring the capabilities of these new AI models, this is the ultimate promise: the creation of a fully-automated, SEO-friendly, content generation flywheel – churning out unimaginable quantities of material that is indistinguishable from handcrafted human content.
However, as AI-generated content begins to propagate around the internet, we are becoming increasingly attuned to the subtle (and not so subtle) differences between the real and the synthetic – between human and AI. And it’s already clear that this utopian vision of push-button writing, at the speed of thought - however enticing that vision may be - is, at its core, both a fallacy and a dystopia.
In greater detail, here's why:
Every human-written piece is complex and rich in its own way – a tapestry woven with threads of experience, emotion, culture, and context. While LLMs such as ChatGPT can simulate human-like writing, they lack the lived experiences that give human writing its uniqueness. And though one could argue that with the right training, coupled with the right prompt, in the right context, such a system could overcome many of these situational nuances, ultimately the process still requires human oversight and agency – and an advanced understanding of psychology, AI, and writing.
Language is not just a tool for communication – it's a complex, multifaceted reflection of culture, history, and societal values. The idioms, metaphors, and colloquialisms we use are deeply rooted in our cultural contexts. LLMs, despite their prowess, often miss these nuances – leading to content that may sound technically correct but feels out of place or inauthentic.
Great writing often lies in what is left unsaid: the pauses, the spaces between words, the humorous analogies, and the subtle implications and inferences. It's about reading between the lines, evoking emotions, and letting the reader's imagination fill in the blanks. Much like Star Trek’s Mr. Spock, machines operate on logic and data – and the intricate art of subtlety and suggestion is often beyond their grasp.
People write not just to convey information but to communicate with other human beings on a deeper, more abstract level – to share their dreams, to explore memories, to connect or resonate, to evoke emotional responses. And empathy - the ability to understand and share the feelings or perspectives of another person - remains a uniquely human trait. While AI can mimic styles and patterns, it cannot genuinely "feel" or understand human emotions in the same way that we do.
In summary, while LLMs like OpenAI's ChatGPT offer tremendous potential in aiding the content creation process, the notion that they can replace the intricate and nuanced nature of human writing is misguided. Push-button AI writing, as a wholesale replacement for human creativity, remains a myth. What we need instead is a blend – where AI assists, enhances, and amplifies, but humans remain in control and accountable.
Human expertise remains unmatched in its capacity to discern factuality, nuance, emotion, and the cultural implications of text. Whereas AI inhabits an ocean of algorithms and data, human beings are immersed in the very culture that produces that data. And while AI can, with astonishing speed, process vast amounts of data and identify patterns that we wouldn’t even begin to recognize, it lacks the innate human ability to understand nuances of context and sentiment.
Consequently, it's vitally important for human experts to oversee AI outputs, ensuring that content not only meets technical standards but also resonates authentically with its intended audience. This is what distinguishes what is purely AI-generated from that which is a combination of human and AI input and expertise.
In other words, as humans, we need to exercise a clear degree of overall accountability, responsibility and control when using AI as a tool. It is child’s play to create something with the push of a button and consider it done – but that is simply not the right way to work. As business leaders and users of such a powerful technology, we must hold ourselves accountable to the processes that we put in place.
Accountability - taking responsibility for everything you do and being able to give a satisfactory reason for doing it - is the first pillar of the ARC framework (Accountability, Responsibility, Control), and needs to be a firm commitment wherever humans and AI systems work together. It may seem obvious, but it’s currently a bit of a blind spot: there’s a tendency to absolve ourselves of overall accountability because of the novelty and sophistication of the technology and its output. AI is clearly not accountable for the work it produces – humans are responsible for the outcome.

This implies that, in human-led work using AI as part of our process, we must understand what the work is and how to do it – using clear systems to uplevel, explore, and gain efficiency where it matters, as well as implementing feedback loops for continuous improvement. In the context of marketing, particularly content marketing, measurable KPIs for brand growth or pipeline growth should arguably be the same whether or not AI is part of the process. In either scenario, after all, poor output has negative consequences for the brand and the business.
Responsibility, in the context of this framework, is the ethical dimension of our work. AI is a reflection of us and our data up to this point in history, for better or worse – which means societal biases, misinformation, and ugly truths are either present in the data or conspicuous by their omission. We have already witnessed AI models inadvertently absorbing and propagating biases related to race, gender, ethnicity and other factors – contributing to the entrenchment of stereotypes and misinformation.
A model only knows what it's trained on. Accordingly, if we are acting responsibly in storytelling and content creation, we must be aware of the AI’s limitations with regard to depth of writing, the human dimension – and, ultimately, truth.
It is here that the human gift for storytelling, and a firm adherence to the facts of the subject matter, become essential to producing consistent, high-quality work – where brand trust and loyalty can be won or lost in the words said and unsaid.
Ethical considerations around transparency, originality and authenticity also demand attention. As AI-generated content becomes harder to distinguish from human work, protocols mandating disclosure of AI involvement are likely to proliferate further. There are also complex questions around data rights, potential misinformation, and content manipulation that require nuanced approaches.
Control in the ARC framework emphasizes the importance of maintaining human oversight and steering AI towards desired outcomes. Humans must decide when and how to use AI, establishing clear boundaries and methodologies for its application. In the context of marketing, where “content is king” as a means of driving brand and business outcomes, it is important for a business to decide where AI should and should not be used.
As a nascent and increasingly powerful capability, it is essential at this juncture to establish policy without stifling innovation. While many large companies are wrestling with how to do more with less, staying focused on the desired outcomes of the work can keep things in check, by helping to distinguish good work from work not meeting those standards. Working with these AI systems is a nuanced skill, which we are only just beginning to learn as a society – where anyone who is a master of human language can, as we say, conjure the model.
When contemplating the potential implementation of an AI-native content marketing strategy across an organization, there are a handful of key considerations that should not go overlooked. Adopting AI as a supporting contributor, specifically in content creation workflows, requires a well-defined strategy that aligns with the overarching understanding that humans must maintain oversight of the work and responsibility for the quality of the output.
The very first stage of developing a strategy involves thoroughly understanding your current content creation process and the methods that underpin it. This comprehension then enables you to identify areas where AI can add value without undermining the human element – ensuring that the technology enhances creativity rather than replaces it.
The next step is to develop clear policies around the use of AI, as well as guidelines and techniques to achieve consistent output. These policies and guidelines will help maintain a steady brand voice as well as a benchmark for quality that’s repeatable – allowing AI to assist in a controlled and predictable manner. This framework should also include standards for fact-checking and editing, so as to always ensure accuracy and reliability – in order to preserve the credibility of the human authors and the brands they represent.
As previously mentioned, the importance of remaining aware of, and vigilant about, the echo chambers and biases that AI can inadvertently introduce into content cannot be stressed enough. It is essential to use diverse datasets and continually reevaluate the AI’s output in order to prevent the reinforcement of narrow and incomplete perspectives. Implementing a system of checks and balances to identify and mitigate bias is key to fostering a broad, inclusive, and effective content strategy.
Consistency in delivery reigns supreme. Sure, AI can help streamline the content creation process and ensure timely publication, but it should not dictate the pace or the quality of output. Regular content audits that analyze tone, style, readability and the like will help maintain consistency, ensuring that AI remains a tool for the craftsman – rather than becoming the craftsman itself.
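One concrete, automatable piece of such an audit is a readability check. The sketch below computes the standard Flesch Reading Ease score (higher scores mean easier reading; roughly 60–70 corresponds to plain English). The syllable counter is a deliberately crude vowel-group approximation – a production audit would use a proper linguistic library:

```python
import re

def estimate_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a minimum of one."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(estimate_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Tracked over time, a metric like this can flag drift in tone or complexity across human- and AI-assisted drafts alike – one small check among the many a content audit should include.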
The rise of AI ghostwriting tools ushers in a new era, one that holds both immense potential and extraordinarily complex challenges. As we stand on this technological precipice, one truth remains constant: the indispensable value of the human element in authentic content creation. Accomplished writers, alongside other subject experts traditionally involved in the process of content creation, are as indispensable as they ever were – and given the nature of the transformations underway, perhaps even more so.
While AI can certainly be a highly potent accelerant for both efficiency and scale, its true power manifests through seamless integration with human expertise, discernment and emotional intelligence. It is this symbiotic interplay that will forge the highest caliber content – considerably improved and streamlined by the involvement of AI, while still being resonant, nuanced, unique, and imbued with the depth of meaning that only human agency and creativity can bring.
As content creators, we must embrace AI responsibly – as an enabler, complementing our creative capacities, not replacing them. We must know and own our processes and procedures – to be accountable and responsible in order to fully control this new collaborative dynamic between humans and AI. We must develop policies, styles, methodologies, and techniques for content creation in order to maintain consistency in content delivery. And we must be mindful of the inherent biases and echo chambers that may be reflected in the training data these new models use.
We are the ghosts in the machine, the ethereal guides providing soul and context, as stewards of these formidable technologies. With judicious human oversight and the recognition that AI is a tool and not a panacea or an overwrite of the human operating system, we can harness AI’s capabilities to scale new heights of creative expression – while upholding the transcendent ideals of authenticity, diversity and ethical integrity.
¹ Coherent Market Insights: “Copywriting Services Market Analysis”, cited in May 2024 (Source)
² PCguide: “What is Perplexity AI and what are its uses?” cited in May 2024 (Source)
³ The Guardian: “Self-publishers must declare if content sold on Amazon’s site is AI-generated” cited in May 2024 (Source)
⁴ Google Search Central: “What web creators should know about our March 2024 core update and new spam policies”, cited in May 2024 (Source)