One of our favorite stories from this year, originally published October 27, 2022.
I’ve been playing around a lot with the AI art tool Stable Diffusion ever since the Automatic1111 web UI version first launched. I’m not a command line guy, so a simple mouseable interface is much more my thing. And it’s a fun toy for a man with no visual artistic bones in his body. I’ve conjured up The Hitchhiker’s Guide to the Galaxy, a Monet painting of Boris Johnson sitting on a toilet in the middle of a pond, and Donald Trump reading my beloved PC Format.
But nothing has impacted me quite like hammering the Nvidia RTX 4090 for eight and a half hours straight, teaching it to paint like my great-uncle Hermann.
You won’t know the name Hermann Kahn. I’d also be incredibly surprised if you recognized him by the name he was actually more commonly known by: Aharon Kahana. To be honest, I didn’t know him either; sadly, he died long before I was born.
But growing up I heard so many stories about Uncle Hermann, from both my mother and my late grandmother, that I feel like I kind of know him. At least part of him.
The family bond is strong, even more so since I traveled to Tel Aviv just before the birth of my son, now three years old. It was the place my grandmother Inge and great-grandmother Rosa Kahn fled to from Germany in the mid-1930s, ahead of Kristallnacht. And the place where Hermann Kahn settled after meeting his wife while studying art in Berlin.
I walked the streets they walked, passed my grandmother’s apartment, rode the Haifa road that Rosa took to work every morning, and visited Hermann’s house in Ramat Gan.
The house he shared with his wife Mideh has become a museum for his art, and although it was closed when I visited, and apparently had been for some time, it has since reopened and is hosting exhibitions again.
Kahana’s art style is distinctive, and a distinctive feature of my childhood. In the homes of my parents and grandparents, I was surrounded by his early and late style ceramics and paintings. Even as a child I felt drawn to them. There’s a certain vase I could never not see as the Starship Enterprise, thanks to its Trek-like saucer section.
A totally abstract geometric picture of what I’d always assumed was a pair of lovers adorned our chimney breast, while a picture of Parisian rooftops and a stormy-looking beach scene in thick oil paint ran up our stairs.
But inevitably, this early 20th-century German-Israeli painter and ceramist was not included in Stable Diffusion’s list of artists. And although I experimented with detailed prompts, toying with X/Y plots to find levers to pull to get an accurate approximation of the abstract paintings he was producing, I never really got there.
The Stable Diffusion checkpoint file just doesn’t have the necessary reference points. But there are ways to encourage the AI to understand, and build specifically on, different, related sets of images. They’re called embeddings, and people have used them to train the tool to recognize their own faces, so you can drop yourself into all the wild, furry, AI-painted fantasies you could ever want.
But I wanted to teach it to recognize and understand the art of Aharon Kahana – as well as a relatively simple AI could, anyway. It’s a surprisingly powerful tool, especially given the caveat in the embeddings’ documentation that “the feature is very raw, use at your own risk”. Thanks to the latest version of the web UI app on GitHub, however, this can all be done from a browser.
You need Stable Diffusion, and therefore Python, already up and running on your computer, but you can then put together a folder of images under a specific name and it will load your GPU to 100%, and half your CPU, for hours, churning through the reference points to create an embedding that Stable Diffusion can use whenever it’s prompted with the embedding’s exact name.
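The core idea behind that training run can be sketched in miniature. This is not the actual Stable Diffusion or web UI code, just a toy illustration of what textual inversion does: the model itself stays frozen, and only a single new token’s embedding vector is nudged by gradient descent until it lands near the vectors derived from the training images. The reference vectors below are made up for illustration.

```python
# Toy sketch of textual inversion: optimize one new embedding vector so
# it sits close to "reference" vectors derived from the training images.
# Everything else in the model would stay frozen.

def train_embedding(reference_vectors, dims, steps=2000, lr=0.05):
    """Nudge a fresh embedding toward the references by gradient descent
    on a mean-squared-distance loss."""
    embedding = [0.0] * dims
    n = len(reference_vectors)
    for _ in range(steps):
        # Gradient of the mean squared distance to all reference vectors.
        grad = [
            sum(2 * (embedding[d] - ref[d]) for ref in reference_vectors) / n
            for d in range(dims)
        ]
        embedding = [e - lr * g for e, g in zip(embedding, grad)]
    return embedding

# Pretend these are text-encoder vectors for captions of three paintings.
refs = [[1.0, 0.0], [0.8, 0.4], [1.2, -0.1]]
learned = train_embedding(refs, dims=2)
# The learned vector settles near the mean of the references, giving a
# prompt token like "aharon_kahana" a meaning the model can condition on.
```

In the real thing the vectors have hundreds of dimensions and the loss comes from the diffusion model’s denoising error rather than a simple distance, which is why it takes a 4090 hours rather than a laptop milliseconds.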
Sounds relatively easy, but it certainly took some trial and error on my part. Not least when I realized that, having downloaded around 70 images of my great-uncle’s work from various auction sites around the world, I actually needed to annotate them with something vaguely detailed for the training to be effective.
That took a lot of time, figuring out the medium and subject of each of the pieces I’d downloaded and then hand-renaming each file. And when you’re working with sometimes seriously abstract imagery, it’s not always that easy.
Then I pointed the RTX 4090 and my Core i9 10900K at the appropriate folder, created the embedding, and left it for over eight and a half hours to come to terms with what I’d done to it. All 16,384 CUDA cores and a decent chunk of the 24GB of memory on the new Nvidia card, as well as half of my 10th Gen Core i9, were put to the task.
I won’t pretend to be smart enough to really understand what I’d commissioned the world’s most powerful consumer GPU to do, but as I checked in on it over the evening I could see it taking the input images and making its own approximations of them.
It was like a lesson from beyond the grave, as if my PC had spent the night learning from Hermann, doodling away in homage to his style, trying to figure out how to do it without the artist’s help.
In the morning the embedding was complete, and I could boot the web UI back up – with the textual inversion embedding now listed – append the text “from aharon_kahana” to the end of each prompt, and see what the AI had learned overnight.
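That prompting step amounts to simple string assembly: the web UI treats the embedding’s filename as a token it recognizes anywhere in the prompt. A trivial sketch, with a helper name of my own invention:

```python
# Sketch: the embedding token is just appended to whatever you ask for.
# The function name is hypothetical; the suffix mirrors the article's.

def kahana_prompt(subject, embed_name="aharon_kahana"):
    """Append the embedding token so the AI applies the learned style."""
    return f"{subject}, from {embed_name}"

print(kahana_prompt("a stormy beach scene in thick oil paint"))
# -> a stormy beach scene in thick oil paint, from aharon_kahana
```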
And it was remarkable. My computer created homage after homage to my great-uncle, all the more fascinating when it painted things Kahana could never have encountered. I’m a complete novice at the mystical art of prompting, but even my simple requests yielded images that evoked memories of the artist.
Where it lacked pure soul and an understanding of what it was actually doing, it made up for it with weird digital creativity and GPU-assisted effort. And all of it was recognizably, inseparably of his art style.
I know that many modern artists are railing against AI art development, frustrated by the flood of images of fantasy women created by people without artistic talent – along with said furry fantasies – and I don’t pretend to know exactly how Aharon Kahana would have felt, but I can’t help but think he would have embraced this new tool.
And that’s what it is, a tool. As much as I was impressed by how close Stable Diffusion came to emulating his art style, that’s all it really does: emulate. It won’t really evolve the style on its own; it will still take a human artist to advance the art. And it still needs detailed human input to give it enough substance to build on.
Rather than replacing artists, it’s just another tool — as high-resolution SLRs and Photoshop have become for landscape painters — that will add itself to the arsenal of artists interested in taking the technology to new, interesting places.
At its current level, AI art feels more like a starting point than something that can really create the finished product. But that probably won’t stop me from filling my PC with a million colorful, endlessly abstract images. All inspired by a part of my family that I’ve never really known but still hope to embrace.