October 6, 2017

Creative Social, AI & The Future Of Creativity.

We sent Lara Baxter & Edie Gill-Holder to this week's Creative Social. Get their lowdown on all things AI here:

Creative Social’s Production Social event, AI & The Future of Creativity, hosted in the very shiny and very spacious Twitter buildings, featured six diverse talks that, though concerning completely different uses of AI, seemed thematically united in ridding us of two ideas: (a) that the future success of AI will mean we’re all out of a job, and (b) that computers don’t have the potential to be creative.

Jeremy Waite, evangelist at IBM Watson, who presented in that TED Talk-esque, lyrical and unapologetic manner so common in the tech industry, summed it all up nicely in his opening slide.

First up, if you’re unfamiliar with the concept of AI, watch this handy 90-second infographic video from a straight-talking American dude with too much time on his hands.

Excited? Yeah, thought so.

We were taken through examples of existing AI in everything from art to advertising: Luba Elliott introduced us to visual artists Memo Akten and Gene Kogan, the AI-written short film Sunspring and the brilliantly strange image-to-image tool pix2pix, and R/GA’s Paola Colombo talked about the San Francisco advertising agency’s award-winning BotBot, a bot that creates bots (totally meta), all via a video link that was free of glitches and iPhone-X-facial-recognition gaffes.
We were then taken deep into the not-so-underworld of uber-clever machine learning: IBM Watson, the world’s largest AI, created a ‘highlights’ video of Wimbledon 2017 entirely on its own, meaning the machine understood what made a snippet of a match worth including in a highlights reel, whether that was the speed of a shot or the audience’s reaction. Goodbye long hours in the editing room...
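For the technically curious, here's a toy sketch of the basic idea (very much not Watson's actual pipeline): score each clip on a few signals and keep the highest-scoring ones. The clip fields, weights and function names are all invented for illustration.

```python
# Toy sketch of auto-generated highlights: score clips, keep the best.
from dataclasses import dataclass

@dataclass
class Clip:
    start: float           # seconds into the match
    shot_speed: float      # e.g. peak ball speed, normalised 0-1
    crowd_reaction: float  # e.g. audio energy spike, normalised 0-1

def highlight_score(clip: Clip) -> float:
    # Hypothetical weighting; a real system would learn this from data.
    return 0.6 * clip.shot_speed + 0.4 * clip.crowd_reaction

def pick_highlights(clips: list[Clip], top_n: int = 10) -> list[Clip]:
    # Keep the top_n most highlight-worthy clips, best first.
    return sorted(clips, key=highlight_score, reverse=True)[:top_n]

reel = pick_highlights(
    [Clip(120, 0.9, 0.8), Clip(300, 0.4, 0.2), Clip(560, 0.7, 0.95)],
    top_n=2,
)
```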

Saeema Ahmed-Kristensen, Professor at Imperial, touched on the fascinating field of generative design, which is about developing AI that could design products that are not only functional and efficient, but that humans would actually enjoy using.

Her team managed to teach a computer what a vase was, along with the rules on what constituted an ‘ugly’, ‘boring’ or ‘elegant’ vase. The computer then generated hundreds of designs, before - this being the crux of the power of AI - using its cognitive ability to understand and suggest which designs would appeal most to humans.

By having a computer do the legwork of drawing up accurate vase designs and modelling whether they could physically do their job properly, more time was left for the designer to address the fundamental design questions of what a user really wants from a product, and to creatively stretch the boundaries of what a vase could be.
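If you're wondering what that generate-then-rank loop might look like in code, here's a minimal, purely hypothetical Python sketch: churn out candidate vase shapes, throw away the ones that couldn't stand up, and rank the rest with a stand-in 'appeal' score. In the actual research a trained model would provide that score, rather than the hand-written heuristic used here.

```python
# Hypothetical generate-then-rank loop for generative vase design.
import random

def generate_vase() -> dict:
    # Sample a random candidate shape from a simple parametric space.
    return {
        "height_cm": random.uniform(10, 50),
        "base_diameter_cm": random.uniform(5, 20),
        "neck_ratio": random.uniform(0.2, 1.0),  # neck width relative to body
    }

def is_feasible(vase: dict) -> bool:
    # Crude stability check: base must be wide enough relative to height.
    return vase["base_diameter_cm"] >= vase["height_cm"] * 0.2

def appeal(vase: dict) -> float:
    # Placeholder 'elegance' heuristic: favour slender necks and moderate height.
    return (1 - vase["neck_ratio"]) + (1 - abs(vase["height_cm"] - 30) / 30)

candidates = [generate_vase() for _ in range(500)]
shortlist = sorted(
    (v for v in candidates if is_feasible(v)), key=appeal, reverse=True
)[:5]
```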

A similar argument was made by Patrick Stobbs, co-founder of Jukedeck – a company that uses AI to compose original music – who said that AI could reduce the time and effort musicians spend churning out version after version of tracks for adverts and the like; an extensive production process that often ends up hindering the development of the initial creative ‘vision’ or idea.

Stobbs also got everyone’s brain-cogs turning with his ideas on the future of AI and music, particularly the notion of ‘real-time composition’, which would entail a computer analysing your body’s data while you were, say, running or going to sleep, so that it could tailor the perfect soundtrack to get you revved up at crucial points during exercise, or guide you sonically into those sweet zzz’s. This would give rise to a kind of creativity reserved only for computers but that would be beneficial, not detrimental, to us.
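As a rough illustration of what ‘real-time composition’ could mean in practice, here's a tiny hypothetical Python sketch that maps a live body signal (heart rate) to a tempo for the music. The mapping and the function name are invented for the example, not Jukedeck's.

```python
# Toy 'real-time composition' loop: map a body signal to a musical parameter.
def tempo_for(heart_rate_bpm: float, activity: str) -> int:
    if activity == "running":
        # Nudge the tempo slightly ahead of the heart rate to push the pace.
        return int(min(heart_rate_bpm + 10, 180))
    if activity == "sleeping":
        # Ease the tempo gradually down toward a resting pulse.
        return int(max(heart_rate_bpm - 10, 50))
    return 100  # neutral default

for hr in (150, 160, 170):           # simulated readings during a run
    print(tempo_for(hr, "running"))  # -> 160, 170, 180
```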

Both Stobbs and designer Caroline Sinders touched upon some interesting issues on the ethics of designing how, and what, AI learns.

Stobbs mentioned potential music copyright issues, since AIs taught to create music are fed existing tunes to learn from… would this mean we’ll be hearing the infamous similarities of “four-chord” melodies forever more? Will a legal definition of android rights need to be established so that intellectual property rights extend to the machine? What next? Freedom from slavery and forced labour? Back to square one.

Sinders gave us the quite sinister examples of facial recognition cameras that didn’t recognise black people, and the Google search “unprofessional hair” bringing up images of common African hairstyles, raising questions about the engineers feeding these AIs their data sets, and about how we will ensure ethical design practice in the future.

The lesson of the evening was that AI still has a long way to go. For now, we are still very reliant on our fellow humans to iron out some problems. But if all goes to plan, the future is looking very interesting. Cue Vangelis ‘Love Theme’. Fade out.
