Distorting Reality: The Implications of Content Generated by Artificial Intelligence.

Daniel Hammocks

Science fiction has long anticipated the arrival of artificial intelligence (AI), with popular television shows and films depicting a world in which we live amongst sentient technology. Whilst that paradigm remains distant, the capabilities of AI are advancing at a phenomenal pace.

Applications of AI are ubiquitous in everyday life, whether users are aware of its pervasion or not. AI is used by email providers to help detect spam[21], by social media platforms to identify objects within images, and by music streaming services to provide personalised song recommendations, to name but a few. Though these may seem like mundane examples, recent state-of-the-art research in AI has unearthed a new way to enable computers to generate content that is deceptively realistic, or could be perceived as created by a human — all devised in a way which could be described as imagined by machine.

In this report we look at the potential implications of generated content for crime, who has the power to prevent such a crime harvest, and the solutions, drawn from parallels and precedents, that can be adapted to mitigate any foreseen problems.


Note: I apologise for using the AI buzzword, as it is not something I believe we have reached yet, BUT the intended audience for this article is non-technical individuals, who will relate better to AI than to ML/DL.


File contains the report, a source rating spreadsheet, and the associated poster.
