
The question of how to make the most of Artificial Intelligence in the Creative, Digital & Marketing sector while managing its risks is so existential, so ubiquitous and developing so quickly that it can’t be ignored, least of all by any professional looking to support the growth of the businesses within it.
Amidst a very public debate over how the Government should be working to protect copyright and the UK’s creative industries via the Make it Fair campaign, statistics suggesting that professionals up and down the country, not just digital natives, are embracing the potential of generative AI while training it with content that’s protected by intellectual property rights, and a number of legal claims against AI platforms making their way through the Courts, it would be easy to feel that you simply can’t keep up with the pace of change. It would be just as easy to do everything you possibly can to stop your teams feeling they need to run particularly innovative content past a lawyer, all against a background of clients who continually demand better work for lower fees using the best available tech.
Moreover, most clients will make any of these risks your problem, usually by asking you to confirm that anything you create on their behalf is wholly original, that their use of it won’t get them sued, and that you’ll meet their legal and other costs if an issue is raised against them.
In the absence of many platforms setting out their own stall on how to deal with legal risk to themselves and their users, other than in court filings, maybe the best thing to do is set out your own. If you don’t, the risk of a claim or regulatory action is only going to increase, whether from IP infringement based on inputs or outputs, misuse of personal data, or the compromise of confidential information given that many platforms require anything uploaded to their systems to be made available to all other users.
So, where to start? In the best traditions of Beetlejuice and the Hitch-Hikers’ Guide to the Galaxy, with a manual. In this case, your Employment Handbook (if you have one; if you don’t, you need one) or, alternatively, a standalone AI Policy. AI adoption across your business will almost certainly be wider than you expect. By setting out how it should happen, after a frank and inclusive discussion with your teams over what they use, who for and why, and by aligning your approach with the terms of your client contracts and wider best practice, you’ll at the very least be managing expectations and helping to make yourself investable, insurable and adaptable.
What goes in the policy depends upon what your business does, but at the very least it should identify what support you’ll roll out, which use cases you approve, which platforms you authorise and how they may be used, and… what to do if something goes wrong. We’re here to help you if and when that happens, and if we can do so with a policy already in place, then our response, and yours, will be as effective as it can be.
Risk is inherent in any creative endeavour, especially where that endeavour is underpinned by bleeding-edge tech. We’re not saying you should never take it, but we are saying that it needs to be calculated and mitigated. Maybe there are jobs for the right kinds of lawyers with the right kinds of approach in the new world after all…
Are you a creative or marketing professional wanting to know more about how to embrace generative AI while managing legal and reputational risks? Get in touch with our Head of Creative, Digital & Marketing, Steve Kuncewicz, on 0161 832 4666 or email: