
Recently, at the When Words Collide writers’ conference in Calgary, I was on a panel that confronted the ethics of Artificial Intelligence applications in writing. With the rise of Large Language Models like ChatGPT and the capacity for machines to generate stories that are difficult to distinguish from those of human authors, artists are feeling a wide range of emotions.
On the positive side, there is excitement and curiosity: AI could boost productivity, help to generate income, and make literature more accessible.
But on the negative side there are fears that AI will take away jobs, and anger that these models have been trained on large bodies of work without fair compensation to those who produced the work in the first place. Even more, writers are feeling threatened because the art of writing has long been seen as a great bastion of creativity, impenetrable by our best machines. But now, it turns out that creative writing is not as computationally complex a process as we once thought.
So, is the use of artificial intelligence to generate fiction cheating? Is it stealing? Can one still maintain ethical integrity as a writer while using AI-based tools?
Before diving into these questions, it’s important to acknowledge the various forms of artificial intelligence all around us. Internet search engines use it. Marketing algorithms use it. Social media platforms use it to determine which posts/pictures/videos we see. However, the specific context I’m looking at here is the use of Large Language Models (LLMs) such as ChatGPT for the generation of fiction (short stories, novels, scripts, etc.).
Even this context is murky though.
On one hand you can use an LLM to generate an entire manuscript. But you can also use it for prompt generation, brainstorming ideas, research, manuscript organization, and translation. You can ask it for ideas on how to finish a scene that’s not working. You can use it to write a paragraph describing a beach you’ve never been to, or how a car works, or what a fashionable gentleman in 1890s Boston might wear. You can also take a short story produced by an LLM based on your own detailed prompts and edit it until the original draft is unrecognizable.
The line between human and AI-generated work can get blurry in a hurry.
With all of this said, a fellow science fiction author, Ron S Friedman, has been working on some guidelines to help writers navigate the ethical murk of an AI world. Ron was kind enough to share his thoughts with me, and I have built on them. I present my own version of these guidelines as a work in progress and invite constructive feedback and discussion from my fellow authors, editors, and other creatives in this field.
Writers’ Guidelines for the Ethical Use of AI
1. Disclosure
Those who publish work generated by an artificial intelligence have an obligation to disclose this. Authors have a right to credit for work they have created, but shall not claim credit for work they did not create. In cases where content was collaboratively produced (such as when an artificial intelligence wrote the first draft and a human edited the manuscript, or when an LLM was used to write the final chapter of a novel), the relative proportion generated by the artificial intelligence needs to be fairly and accurately disclosed.
2. Malicious Use
Artificial intelligence must not be used to intentionally deceive consumers. This includes, but is not limited to, deep fakes (misrepresenting the source of the content), the intentional generation or propagation of misinformation, manipulation, coercion, hate speech, or other deceitful practices. Fiction must be presented as fiction. Parody must be presented as parody.
3. Informed Consent
Those who use artificial intelligence to produce content need to understand what that technology is doing, consent to the use of the technology, and be in a position to do so freely. No one should be forced, coerced, or misled into using artificial intelligence technology against their will. People have a right to unplug.
Publishers have a right to refuse to publish content generated by artificial intelligence. And they have a right to define what constitutes such content.
Further, artificial intelligence requires data for training. LLMs use large bodies of text for this training. Authors and content creators must give informed consent, and be in a position to do so freely, for their work to be used to train computer models.
And finally, mechanisms need to be in place so that consent can be revoked where reasonable.
4. Fair Compensation
Revenue generated from the work of content creators, in particular but not limited to those whose work is used for model training, testing, and validation, needs to be distributed fairly among those who produced the work.
5. Obligation and the Right to Understand
Users of artificial intelligence have both an obligation and a right to understand how the technology they are using works, what it is capable of, and the reasonable downstream effects of the material it produces.
Concluding Thoughts
At the conference there were some people who believed we were on the edge of something big: that artificial intelligence is going to bring about profound changes not just in the writing world, but in many dimensions of our lives, from the media we consume and the news we hear to our spending habits and voting patterns. Indeed, perhaps on a lesser scale, it is already doing these things.
But I don’t think it’s something necessarily to be afraid of.
Ethics are rarely simple. And whatever “rules” you come up with will almost always have exceptions. In my own experience, if you start with the intention of treating people fairly, with dignity, honesty, and respect, the vast majority of ethical decisions are reasonably straightforward. I think what we really need to be aware of are the pressures that will drive people to turn a blind eye to those things they really should be paying attention to.