5 Thoughts About Artificial Intelligence and Design
Artificial Intelligence. Such a buzzword, and so much curiosity and mixed emotion comes with this topic as it relates to design and creative fields. I’m sharing this writeup with the caveat that I currently feel torn about AI — there are some super cool things about it, and, equally important, I believe some potentially shady or horrific things could happen if it’s used unethically.
We are at a point where it’s hard to tell ‘real’ from ‘fake’. Especially in the world of social media, where everyone aims to feature the ‘top hits’ of their life, with perfect filters, angles, lighting, and the opportunity for unlimited retakes before posting (I’m somewhat guilty here too, as much as I’d like not to admit it). We make the time to curate and show a version of ourselves that might not at all align with who we really are, or where we live, or what we do. Brands have the ability to do the same. Heck, if I told you I was writing this from an alpine beach on Lake Tahoe, would you believe me? Perhaps I am… ;-)
As a designer who began her career when Flash and Dreamweaver were the two leading tools to design for web, I will attest that tech has come a long way and continues to evolve faster all the time.
Will AI take over my design career? How does it work? Where are images sourced and how are they created? There are so many apprehensions, excitements, and questions the design world is working through as we scratch the surface of a technology that will change how we work — likely for better and worse. Similarly, I’m sure there were major apprehensions and curiosities around the invention of desktop publishing… would mechanical paste-ups be a thing of the past? Sure, but other jobs came about and designers figured out other skills and career tracks.
My curiosity led me down the path of attempting to find understanding through learning, and “doing the thing” myself. So, a few months back, I took on the challenge to create 100 images in 100 days that were generated by AI. In this particular case, I chose the tool Midjourney to experiment with. At the time, version 4.0 was out and 5.0 had just launched. I stuck with it and learned a lot over the 100 days, and still find myself logging into Discord and playing with the Midjourney Bot about 2–3 sessions a month now to help with idea generation, and sometimes to create images, textures, and assets for clients. The basics are easy to pick up, and there are more nuanced prompts and tweaks you can make to achieve what you’re looking for. It takes about an afternoon to understand, and you’ll inevitably fall into a rabbit hole of entering whacky prompts to get a laugh at all the weird things the tool can spit out.
With what limited things I know (I am certainly no prompt engineer at this point), I want to share 5 things I’ve thought about so that they might help you if you’re beginning to dabble or question the strange world of artificial intelligence.
Why wasn’t the result what I expected?
What about that sixth finger and weird word?
Where do all the images come from?
How should designers leverage artificial intelligence?
Should we be scared?
1. Why did I get a totally different result than I expected from my prompt, and how do I get closer to what I’m looking for?
Entering prompts is a learning experience. A prompt is what you type in to generate an image. It usually begins with “/imagine”. Prompts that are built well can yield really interesting results. Based on my experience, there is always a little element of surprise, and you have to try not to picture in your head what something is going to look like or you might be disappointed. With the newer version of Midjourney, there are more possibilities for remixing and refining images toward what you’re envisioning, but it is sometimes tricky to get exactly what you’re looking for. I’d also note that unless you specify a particular medium or style, there is sort of an ‘image gen’ vibe that the images give off. They feel hyperrealistic and dramatic, usually with high contrast and a bit of graininess. I have a feeling the ‘look’ will go down in the design history books as the zeitgeist for early AI image generation tools.
A few basic tips I found useful as I explored prompting:
Avoid super long, descriptive prompts. The tool seems to get confused and generates odd results.
You can refer to other images to reference styles in your prompt.
You can adjust the aspect ratio of your prompt. By default, images are square. However, you can add “--ar 4:3” (or similar) to the end of your prompt to change the proportions.
If you’re looking for a specific style, consider choosing a medium to avoid the standard ‘Midjourney’ look.
Referencing emotions, colors, and time periods also helps you get closer to what you’re looking for, so you don’t end up with an unexpected result.
But… what if you want something unexpected? Add the ‘chaos’ parameter to the end of your prompt: “--c 0” keeps results similar to one another, while “--c 100” yields a wide, unexpected range of results.
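Putting a few of these tips together, a full prompt might look something like the one below. The subject, style, and parameter values here are just an illustrative sketch I made up for this post, not a recipe:

```text
/imagine a quiet alpine beach at golden hour, 35mm film photograph,
muted earthy tones, nostalgic 1970s mood --ar 4:3 --c 25
```

Notice it names a medium (35mm film), colors, an emotion, and a time period, then tacks the aspect ratio and chaos parameters onto the end.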
Still curious? Here is a bit more about basic prompt construction on the Midjourney site.
2. What about that sixth finger and weird word?
Humans and AI… some results are truly fantastic. And some are truly horrifying. In this case, it was 9 toes on one foot. See for yourself below…
I don’t think we will get close to emulating perfect human proportions, movements, or authentic emotions tomorrow, although I’m sure it will happen soon. For now, I’ve run into a lot of oddities with feet, hands, and human expression. In addition, I have not yet been able to render words appropriately; even individual letters are hard to get right when called out specifically. Avoiding focus on these items seems to be a smart idea, although I’d love to see if you are able to crack the formula for words and extremities! Every once in a while, it works out.
3. Where do artificially generated images come from?
I’ve noticed odd remnants in many of my images… almost as if they’re being pulled and remixed.
But at what point does a previously created image have too much influence on an artificially generated one? This is where my biggest apprehension lies in terms of image generation. It would be great to figure out a way to move forward ethically with image generation so that artists, designers, photographers, and creators don’t get their work ripped off. On the flip side, I’ve heard the argument that everything is appropriated or referenced or inspired by something, so isn’t AI kind of doing the same thing and re-interpreting it? I guess so. But where’s the line? We see this come up with IP/copyright in art all of the time, and there are no hard and fast rules.
In addition, I believe that as I write this (and according to this Aug 2023 article), any image generated artificially cannot be copyrighted—so anyone may use what you’ve created for anything. Only art with human authors can be copyrighted. Again, it will be interesting to see how much an artificially generated image needs to be further manipulated to be considered someone’s ‘original’ work.
To further complicate things, or support the argument that AI images aren’t ripping anyone off, I found several resources that noted that AI image generators use neural networks to create images from scratch, which have the capacity to create original, realistic visuals based on textual input.
Also, this article from Hypotenuse AI explains in layman’s terms the different types of text-to-image AI models: GANs (generative adversarial networks) and diffusion models. Diffusion models are trained to learn relationships between text and images (sort of like how a toddler learns about the world). Once trained, they don’t modify existing images—they generate everything from scratch, without referencing images on the internet.
To keep it simple, I always disclose when something has been generated via artificial intelligence, and share that it is essentially an open source asset. I avoid copying or referencing other work in my prompts when using it for things beyond personal work.
4. How should designers leverage AI?
If you can get past some of the apprehensions around AI, I think you’ll smartly position yourself for the future. You’ll build skills to leverage AI, which will take away some of the mundane grunt work, speed up workflow, and help us focus on ideation and the things that add value to the design process.
I primarily appreciate that AI increasingly offers a lot of ways to speed up our process (from removing logos with Photoshop’s generative AI features, to asking ChatGPT for a list of ideas related to a topic or brand, to playing with Midjourney for textures, assets, and backgrounds).
A few ways that you can get started and begin exploring if you haven’t yet:
Sign up for Midjourney here. There is a ‘free’ plan that gives you enough prompts to get a basic understanding of how things work.
Try to use Photoshop generative AI. Generative Fill and Generative Expand are simple, but super helpful places to begin.
Join ChatGPT and throw in some ridiculous prompts. “Give me a list of 20 names for a luxury condo company in Aspen, Colorado”. It might have some whacky answers but sure does get the creative ideas rolling.
What other artificial intelligence tools have you been using and what tips can you share? Email casey@limbic.studio and I’ll continue to collect resources and post to my resource section!
5. Should we be scared of artificial intelligence?
It’s hard to say. Forbes has shared articles that go both ways—and far beyond the impact AI has on design (here’s the good, here’s the bad).
The potentially good stuff:
It’s fun to play with, and opens us to more ideas and explorations.
Image prompting yields nearly immediate results.
It may better society (speed up workflow; take on mundane tasks; help with health and drug development; combat human trafficking; serve society for the better).
The potentially scary stuff:
Lack of transparency and ethical issues, including privacy and security concerns.
Bias and discrimination based on past inputs and learnings.
Job changes or elimination.
Manipulation, lack of human connection.
Ultimately, robots taking over the world. Beep. Boop.
Also—very apt timing—I listened to a wonderful conversation this morning with Chris Do and LA motion designer Kevin Lau on The Futur podcast (check out Episode 265 “Accelerating Creativity — with Kevin Lau”). It’s definitely worth a listen if you’re a designer, design educator, or creative professional and are interested in the AI conversation.
And, don’t forget to check out my 100 days of artificial experimentation project here—there were some pretty whacky results.