A couple of months ago the world was introduced to DALL·E 2, an artificial intelligence program that takes text-based inputs and turns them into fully illustrated images. I was instantly blown away by what this program could do. Not only did it accurately depict the phrases passed to it, it applied style and creativity to the output as well. Here's one quick example:
Since then I have had a number of conversations about the future of artists and designers. The big question on everybody's mind is what that future will look like. Will technology replace us? Are we going to have jobs at all? The good news is that we still have some time.
What's left out of all these articles and discussions are the bizarre images in DALL·E's output. DALL·E is still just a program at the end of the day and, as a result, is influenced by the biases of its creators. This means that for the foreseeable future, humans are still going to be a necessary part of the equation: not only to provide the inputs, but also to curate the outputs. I experienced this first-hand while working with a beta version of remx's AI-powered creator tool.
The creator tool is part of remx's web3 platform for artists and designers. We created it to lower the barrier to entry for creating and selling NFT collections of digital wearables for the metaverse. The creator tool is powered by an AI program that takes human inputs, such as colours and images, and integrates them in countless combinations onto a 3D model. Sounds cool, right?!
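remx hasn't published how the tool works internally, so the sketch below is purely illustrative and not our actual implementation; the file names, colours, and the single rectangular panel region are all made up. Conceptually, though, "integrating colours and images onto a 3D model" amounts to compositing them onto the model's UV texture, which you can mock up in a few lines of Pillow:

```python
from PIL import Image, ImageOps

# Hypothetical inputs: a user-supplied pattern and the base UV texture
# of a 3D model. Neither file name comes from remx's actual tool.
pattern = Image.open("pattern.png").convert("RGB")
texture = Image.open("sneaker_uv_texture.png").convert("RGB")

# Tile the pattern so it covers the whole texture, whatever its size.
tiled = Image.new("RGB", texture.size)
for x in range(0, texture.size[0], pattern.size[0]):
    for y in range(0, texture.size[1], pattern.size[1]):
        tiled.paste(pattern, (x, y))

# Apply a user-chosen colour by remapping the pattern's tones
# between a dark and a light shade of that colour.
tinted = ImageOps.colorize(
    ImageOps.grayscale(tiled), black="#102040", white="#80c0ff"
)

# Composite the result onto one (hypothetical) panel of the UV layout.
panel = (100, 100, 400, 300)  # left, upper, right, lower
texture.paste(tinted.crop(panel), panel)
texture.save("remixed_texture.png")
```

A real pipeline would handle many panels, blend modes, and the model itself, but the essence is the same: a human picks the inputs, the machine works out the combinations.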
As we get ready to launch our Genesis token, we've partnered with Upper Echelon Studios to ramp up our marketing efforts and build awareness of our company and platform. To kick things off, the fine folks at Upper Echelon Studios came up with a cool campaign called #remxoftheday. As part of this campaign, we take NFTs we own and remx them on our Genesis sneaker. Being a small company, I signed up to help generate the initial set of marketing assets.
Since we're actively working on the creator tool, the idea of using it in a more practical context was exciting to me. My initial impression was that this was going to be easy: pick some colours, upload some images, and voilà! Done. I couldn't have been more wrong, as I found out when our CEO Naz shared his Smilesss NFT for the first asset:
The image itself isn't bad in any way; I actually quite like it. The problem was that it didn't translate well to our shoe right out of the gate. It's a dark image with lots of intricate details, and the initial composition didn't map very nicely onto the different parts of our 3D model:
After the first couple of tries, it became apparent that I was going to need to do a little more work to make this look good. My first thought was to crop the image into smaller sections, since it contains some nice details on the pants and the skateboard. The problem was that I didn't have a high-resolution version of the image, so the output turned out really blurry:
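That blurriness makes sense in hindsight: upscaling can interpolate between the pixels an image already has, but it can't recover detail that was never captured. A quick Pillow sketch shows the effect; the file name and crop coordinates here are hypothetical:

```python
from PIL import Image

# Hypothetical low-resolution source (file name and numbers are made up).
nft = Image.open("smilesss.png")          # e.g. a 640x640 preview

# Crop a small detail, like the pants or the skateboard deck...
detail = nft.crop((200, 350, 360, 510))   # a 160x160 region

# ...then scale it up to texture resolution. Resampling can only
# interpolate between existing pixels, so fine detail comes out soft.
blurry = detail.resize((1024, 1024), resample=Image.Resampling.BICUBIC)
blurry.save("blurry_detail.png")
```

Going from 160 pixels across to 1024 means inventing roughly 40 new pixels for every real one, and no resampling filter can conjure detail out of that.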
After that, I started playing with the placement of the character within the image frame to see if I could control the output better. There was a lot of trial and error with this approach, but the results kept improving, so I stuck with it. I eventually found the right placement, with the character's face appearing right where I wanted it:
I felt like I was almost there, but something was still missing. As I continued to play with the colours and images, I had a realization: nobody said I could only use the NFT image itself, so I generated some patterns of my own. After a couple more iterations, I finally settled on a pattern built from the smiley faces in the NFT. Here's the end result that we tweeted out for the inaugural #remxoftheday post:
All in all, I was really happy with how this turned out, but more importantly, I learned a couple of things in the process. For one, as we continue to build this tool out in the open, I'm glad I got the chance to use the product in such a practical way. Comparing notes with other artists who have already used it, it was good to see that we shared similar thoughts about the changes we need to make.
Secondly, I got a good glimpse of what the future of design looks like when humans and machines co-create together. I'm not worried about AI programs taking over our jobs just yet. As I experienced for myself, there's still a need for a human to make the crucial decisions that shape the final outcome. And not just any human: in our case, someone with creative skills and the taste to judge what's good and what's not. I'm sure the tools will evolve over time, but I feel some reassurance that they will continue to be just that: tools for humans to use.