In this article, I present the generative AI works I created for a course in AI and Design: Concepts and Methods, which focused on generative AI tools for image generation, 3D generation, and video generation, on using LLMs as a design method, and on ethics. The tools we explored were:
- AI tools in Adobe Photoshop and Illustrator
- MidJourney
- Leonardo AI
- Blender AI Render (Stable Diffusion in Blender)
- Pika Labs
- Runway
- Eleven Labs
- Using LLMs (ChatGPT) as a part of the design process
I explored AI image generation, AI video generation, AI 3D generation, and ways of using ChatGPT for UX/UI design. Some projects were generated from scratch, while others used reference images as a starting point. A large part of my explorations involved prompt engineering: using the prompt parameters specific to each tool, and using ChatGPT as a prompt assistant to develop more complex camera movements for my video generations.
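To give a sense of what these tool-specific prompt parameters look like, here is an illustrative example (not one of the exact prompts used in the works below) of how descriptive text and parameters are typically combined in a MidJourney prompt: "a misty rose garden at dusk, cinematic lighting, shallow depth of field --ar 16:9 --no people". The descriptive phrases set the content and mood, while trailing parameters such as the aspect-ratio flag steer the technical qualities of the output.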
The biggest lesson from this course was how important prompt engineering is for achieving the results I want, along with learning how to use reference images to steer the outcome. However, my explorations also showed that these image generation tools and LLMs are nowhere near being able to replace UX researchers or product designers, though they can serve as helpful tools for exploring and generating ideas.
The biggest weakness of these generative AI tools for design purposes is their lack of contextual awareness. If you give them precise directions, they may execute them fairly well, but without understanding the context of use. This was especially true for UX and UI design, where ChatGPT could assist in generating a color palette or font set but showed weaknesses when it came to implementation: it doesn't understand where to apply its own guidelines. This leads me to the opinion that generative AI can be used to generate things such as personas, user stories, and color palettes, but it is still up to the designer to implement these parts in a bigger context.
Work 1 – 3D & Blender Generative AI
Workshop: 3D and AI


Reflective statement
This exploration aimed to see whether I could realize my creative vision by combining my minimal 3D skills with the power of generative AI. I wanted to see how well generative AI performs today when it is given an example image as a base.
I encountered many challenges in this work. First, I am not well versed in Blender, so just creating basic shapes that resemble an octopus and setting up a proper scene with camera angles and lighting was a technical challenge in itself. Choosing an octopus to model in 3D may have been a bigger challenge than I needed, but I learned a lot from it.
Another challenge was installing the AI render engine in Blender, and once I succeeded, the generated images were not what I wanted. It wasn't until I exported the rendered image from Blender and used it in the DreamStudio web application that I felt the AI generation was truly working from my rendered scene.
In future developments, I could learn to use Blender better and set up my 3D scene in more detail, for example by adding more objects surrounding the octopus. I could also feed this image into a video generation AI to animate the octopus and bring it to life.
Work 2 – Video & Audio Generative AI
Workshop: Video and Audio Generators
Reflective statement
I wanted to explore how generative video AI can transform still photography into a cinematic scene. The first scene in the video is the original picture; from there, I wanted to see whether I could make the sky shift dramatically into a darker mood and learn how to generate camera movements using generative AI.
I faced a lot of challenges here because there were multiple elements I wanted to generate. In earlier tries, for example, I managed to generate the sky turning darker, but there was no camera movement; in others, I got the camera movement, but the sky stayed still. I solved this by using ChatGPT to help me write better prompts, and after many tries, I successfully created this version.
In future work, I'd like to combine this generated scene with a few other videos into a small story. It could also be interesting to add a storyteller's voice-over giving context to the scene.
Work 3 – Video & Audio Generative AI
Workshop: Video and Audio Generators
Reflective statement:
In this work, the aim was to create a scene similar to that of 'Work 2' (above), but without using a real photograph as a reference. I wanted to generate a video of a rose garden where the sky turns from bright and cheerful into dark, moody, and foreboding, creating a dramatic story scene. I also wanted the camera to pan from the details of the flowers up to the looming darkness above.
I first tried many times without ChatGPT's help, but it was truly a challenge to generate all the parts as I saw them in my vision. I encountered similar challenges as in Work 2: depending on the word order in my prompt, some parts were generated successfully while others seemed to be ignored. Even with ChatGPT's help, the prompt became too long because of the complexity of the scene, so I then had to shorten it manually and test which phrases were essential and which were not.
For future development, I'd like to add a voice-over to the video. I'd also like an even more dramatic effect in which the rose petals dry up and fall off, which I didn't manage to fit into the same prompt this time.
Work 4 – Video & Audio Generative AI
Workshop: Video and Audio Generators
Reflective statement:
The aim of this exploration was to create a futuristic-looking landscape with an overall red, foggy vibe: a city in chaos, with a large fire burning at its center. I wanted the camera to move at a slow, cinematic pace, capturing the skyline from above as if it were news coverage footage.
One challenge was getting the camera angle and movement the way I wanted them, which I solved by trial and error. Another was that the red color didn't look right until I specified that the atmosphere should be foggy, which made the red much softer and more diffused, as I had hoped. I also had to generate several variations to make the one big fire stand out more.
For future development, it would be fun to add characters in the form of humans or animals, ideally ones that are more customizable or can be placed manually into the scene.
Work 5 – Video & Audio Generative AI
Workshop: Video and Audio Generators
Reflection:
In this exploration, I wanted to continue the storyline from 'Work 4', but make the scene even more cinematic, with more detail, depth, and feeling. I wanted it to look more like a high-production movie scene.
This piece was challenging to produce overall. I tried many times, using up a lot of credits, but incorporating ChatGPT into my workflow helped me achieve this more detailed look, with the water reflecting the fire and the depth of field. I also noticed that it is hard to regenerate specific parts of a video in Pika Labs; often, if I changed one part of the prompt, the entire scene changed dramatically.
Potential future developments or improvements could include more dynamic camera movements, such as moving along the side of a building, turning upward, and continuing from there.
Work 6 – Video & Audio Generative AI
Workshop: Video and Audio Generators
Reflection:
In this exploration, I wanted to create a cinematic scene of a picturesque pastel blue house in Copenhagen, with large old windows and roses growing outside. I also wanted to combine this image with a subtly changing sky and lighting to portray the passing of a day.
One challenge was generating apartment buildings that didn't look too cartoonish but had a more real-world look, which took a few rounds of trial and error. Another was combining a subtle camera movement with the changing lighting; for some reason, the two sometimes canceled each other out when prompted together. In some tries, the lighting change was also too drastic and not subtle at all.
A potential future improvement would be to have the sky change color a bit more, or to add some moving clouds to portray a changing environment.
Work 7 – Using AI for UX/UI design
Workshop: Design Methods

Reflection:
In this exploration, I wanted ChatGPT to design a mobile mockup of an AI podcast-generation app for me. I wanted it to give me all the details and instructions so that I wouldn't need to make any decisions on my own. The mockup on the left shows how the design looked when I followed its instructions literally, without adding any of my own thinking. The mockup on the right includes some adjustments based on my own taste to improve the UI.
I think it's a challenge to get ChatGPT to provide good step-by-step design solutions for UX and UI design, because it lacks the bigger context. For example, it knows to use accent colors, but it does not fully understand when, where, or why to apply them. It also knows to create color palettes and typography sets, but does not follow them throughout its design, which I again read as a lack of understanding of why these assets are created and how they should be used.
A potential future development of this work could be to create a prompt template with more contextual description and see how the instructions change. For example, I could give it clearer directives to use accent colors only on CTA buttons and to reuse only elements from its own color palette and typography set, along the lines of the sketch below.
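As a rough sketch of what such a template could look like (a hypothetical example I have not yet tested in this work), the added context might be phrased like this: "You are helping design a mobile podcast app in Figma. Define one color palette and one typography set and reuse only those throughout. Use the accent color exclusively on CTA buttons. For each screen, list the components in order, with sizes, spacing, and which palette color and text style each one uses."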
Work 8 – Video & Audio Generative AI
Workshop: Video and Audio Generators
Reflection:
In this exploration, I wanted to push the cinematic effects and camera movements that can be generated with AI, while at the same time creating a beautiful, harmonious zen garden with a notebook and pens floating around. The goal was to create something new, interesting, dynamic, and 3D-feeling.
It was not easy to achieve more playful camera movements, which I think was the hardest part. I tried many times before finding inspiration in the 'Explore' tab in Pika Labs, where I came across the phrase '360 camera orbiting around', which finally produced something interesting. Before that, I had tried prompting for fast, slow, and dynamic camera movements, which did not lead to very interesting results.
For future developments, it would be interesting to see if one can combine this with another generation that makes the camera orbit upwards toward the sky and fly away.
Work 9 – Video & Audio Generative AI
Workshop: Video and Audio Generation
Reflection:
The aim of this exploration was to capture the feeling of living in the moment in a nightclub, right when the music is great and the crowd is amped up and celebrating. I wanted one main character in the center and a camera moving in an engaging way, taking in the crowd and environment around them.
My main challenge was generating the camera movements. In a couple of tries, the camera just panned slightly from left to right without showing much of the scene around the main character. I eventually found that adding "hyper speed" solved this, for some reason; I assume this camera behavior is specific to the model and is triggered in combination with the "hyper speed" wording.
Potential improvements could be to add generated music to make the scene come alive more, or to use a reference image from a real celebratory nightclub moment and generate from that.
Work 10 – Using ChatGPT for UX/UI Design
Workshop: Design Methods

Reflection:
The aim here was to see how ChatGPT can be used as a co-creator for UI design. I wanted to create a mockup of a mobile bookstore app and see what result I could get by blindly following ChatGPT's instructions from only a very basic prompt.
One challenge was formatting my prompt so that it suited Figma and UI design rather than producing HTML and CSS code. Another was translating ChatGPT's very minimal directions (see the mockup on the left) into something more high-fidelity without designing too much on my own, since for this exploration I wanted to see how far ChatGPT's directions alone could take me.
A potential future development would be to continue prompting it for the following pages, such as the category page, the cart, and the profile.
Work 11 – Video & Audio Generative AI Using Pika Labs
Workshop: Video and Audio Generation
Reflection:
This exploration aimed to create a short, cozy Christmas video scene filled with lit Christmas trees, a snowy atmosphere, and an interesting, dynamic camera movement. I wanted to see if I could create both the atmosphere and the content I wanted, combined with a camera effect.
It was challenging to achieve the camera movement I wanted, possibly because I didn't know the names of the different effects. I needed to do some research and try a few different camera-movement prompts to achieve this result.
Potential future improvements could be to slow down the camera movement a bit, and make the sky a bit lighter so that the atmosphere looks warmer and cozier.
Work 12 – Video & Audio Generative AI Using Pika Labs
Workshop: Video and Audio Generation
Reflection:
The aim of this exploration was to create a short video where the camera flies through a futuristic dystopian landscape, with a big explosion happening. I wanted it to look like a movie scene.
It was very challenging to set up the right prompt for this one because it had three parts: the futuristic landscape, the explosion, and the flying camera movement to capture it all. I had to try many times with the placement of the camera-movement wording to succeed. In some tries the camera was still but I got the cityscape and explosion; in others, I got the cityscape and camera movement but not the explosion, and so forth.
A potential improvement would be to have the explosion happen within the city itself; right now it looks like it is happening somewhat in the distance. But once again, it was already difficult to prompt all three parts simultaneously.