Please tell me a bit about your background, especially how you went from design to data visualisation and academic research. That's quite a change.
So, I first started out in graphic design and digital media, which is what my BFA was in. I graduated in 2015, and started working for an advertising agency. We built banner ads and pop-ups for car dealerships using automated templates that connected to the dealership servers to pull in the car images and prices available in inventory.
It was a great learning experience for getting me skilled up quickly in front-end development and interacting with data, but from the perspective of wanting to do something meaningful and beneficial to society, I didn’t feel like pressuring people into buying cars they couldn’t afford, using data they probably hadn’t knowingly consented to share, was what I wanted to spend my time doing.
In 2016 I went back to school, into a (now sunsetted, pending reconfiguration) Master of Fine Arts (MFA) program in Emergent Media at Champlain College. The curriculum was divided between understanding how humans respond to new technologies, and learning some of the cutting-edge tech of the time, including VR, physical computing with Arduino and Raspberry Pi, 3D printing and laser cutting and welding and whatever else the local maker space had to offer, and, for me, data visualization.
The nice thing about that program was the ability for us to build our own curriculum a bit. So once I started learning Java using Processing, I decided to take the dive into Python. I petitioned the data science program at my school to let me take some analytics courses in Python, and that’s how I really got interested in data visualization.
Then, in one of my Emergent Media courses, we had a guest lecture from the University of Vermont Complex Systems Center — specifically, the Computational Story Lab, which built the Hedonometer for measuring happiness on Twitter. I was fascinated by the idea of complex systems as a research field, which is an inherently interdisciplinary area studying high-dimensional, temporally dynamic systems with emergent behaviors.
I hadn’t thought much about networks as a data structure before that, but it was a whole new way of thinking about analytics beyond tabular data. I joined the Complex Systems Center in 2018 as a Data Visualization Artist in Residence, concurrent to finishing my Master’s thesis on the similarities and symbiotic relationship between arts and science.
At CSC, I split my time across two main projects: the first was building a complement to the Hedonometer called Storywrangler, which allowed for exploration of linguistic dynamics from a database we built of tokenized n-grams (words, phrases, hashtags, handles, emojis) across over 150 different languages on Twitter. Here’s a thread on that project.
Thanks to everyone who tuned in to my keynote at @dnds_ceu Datastories 2021 today! It was an honor to share the work of our @compstorylab and colleagues with @storywrangling - A journey in visualizing 100 billion tweets in 100 languages
— Jane Adams (@janelydiaadams) May 25, 2021
Slides: https://t.co/7daHIMs4Ot
Thread: pic.twitter.com/zibK5QEP6g
That research led me to work on a number of papers related to mental health discourse, networked protest, political conversation, natural disasters… just so many super cool projects. You can see some of those projects on my Google Scholar page. Through that work, I learned React and really got my hands dirty with full stack development, building and working with a Mongo database and thinking in terms of templates and non-relational database structures.
The other project I worked on was a collaboration with MassMutual Life Insurance, where we worked with PII on human health outcomes and built a data visualization dashboard for understanding comorbidities, using a network representation of the mutual information between mixed data types to improve mortality risk models. I loved working on that project because it taught me a lot about statistical methods and high-dimensional data analysis, and it set me on a path of fascination with feature extraction and dimensionality reduction.
What was your introduction to AI art, and what was your 'aha' moment where it began to make sense?
That’s, I guess, how I really started to take a deep dive into AI. I started reading all these great resources like Distill.pub and the VISxAI proceedings to better understand machine learning and the underlying transformations happening in some of these methods for classification and generation.
Concurrent to this research, I was still maintaining an art practice — in 2019, I was building aquaponic sculptures and colored music instruments and really thinking large-scale physical stuff. But we all know what happened in 2020… so when Covid hit, I couldn’t go to the maker space any more, and the art galleries all shut down, and I really needed to find a creative outlet that was more mobile, that I could fit into my tiny Vermont apartment for the indefinite future.
So I guess that’s really what drove me to consider AI as a creative tool. And I had seen the work of Helena Sarin the previous year at EyeO Festival, and Jenn Karson at UVM, and both of them had been using RunwayML to train their own StyleGAN models. So that’s where I got started, as I think a lot of people back then did — with a WYSIWYG sort of code-less system, which helped me to get some of my first models built, after many, many hours of collecting my own training data from all kinds of creative commons sources like historical image databases and royalty-free stock photo websites.
But then I started to run into the limits of RunwayML — it was expensive, for one, and there were changes I wanted to make to the networks that I needed a coding interface to be able to handle. Also, I had started doing a lot of daisy-chaining models together, passing HiDT transforms into StyleGAN, upscaling outputs with ESRGAN, passing training data through DeepDream, etc. And I had all this knowledge of Python and databases from my work in Complex Systems, so I took the leap into coding with Google Colab.
A big breakthrough for me was setting up my own server and realizing that instead of uploading to my drive using the UI, I could just FTP training data to my server and then wget the files temporarily to drive, which was much, much faster. I was also doing a lot of scripting for image scraping — I was writing scripts for Photoshop batch processing actions (hello again, graphic design education!); I was filtering image repositories using k-means clustering to select a subset of images with a specific palette of dominant colors; I was applying chroma-keying to make green-screened latent walks.
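The kind of k-means palette filter described above can be sketched in a few lines. This is a minimal, self-contained illustration rather than the actual script from the interview: the function names are made up for this example, and a real pipeline would decode pixels from image files (e.g. with Pillow) and would likely use a library implementation such as scikit-learn's KMeans.

```python
import random

def kmeans_colors(pixels, k=3, iters=20, seed=0):
    """Cluster RGB pixel tuples into k dominant colors with a tiny k-means."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        # Assign each pixel to its nearest center (squared Euclidean in RGB).
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its cluster (skip empty clusters).
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
    return centers

def matches_palette(pixels, target, tol=100, k=3):
    """True if any dominant color is within a loose RGB distance of the target."""
    centers = kmeans_colors(pixels, k)
    return any(sum((a - b) ** 2 for a, b in zip(c, target)) ** 0.5 <= tol
               for c in centers)
```

An image whose dominant clusters land near the target color passes the filter; everything else gets skipped during scraping, which is how a repository can be narrowed down to images sharing a specific palette.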
I started a Google Doc to keep track of all this stuff I was doing ('Resources for GAN Artists'), and then ended up co-moderating a Discord channel set up by Jeremy Torman for fellow GAN artists.
You've been fairly involved in the NFT and crypto space. Was that something you were interested in already or did it happen due to your AI artwork?
I was creating all this AI art, and I just had this insatiable hunger now for training custom StyleGAN models, and I needed to scale up my compute capacity. So selling NFTs gave me some pocket change to pay for my Colab subscription and my server costs and even a new laptop so I could render videos at higher resolution.
Also, as cringe as talk about 'the Metaverse' can be, I really found that at the height of pandemic isolation, the NFT world that erupted onto Twitter and Instagram and VR worlds like CryptoVoxels and Decentraland really helped me maintain some semblance of normalcy. There’s something special about walking into a space and seeing a piece of art that is 20ft. taller than you, or wandering a museum district and happening across installations by accident; that’s a really wonderful affordance of 3D virtual environments that we didn’t have before. It makes museum experiences more accessible, and it allows you to attend an art opening with someone literally on the other side of the globe. I think that’s special, and it’s an affordance that shouldn’t get lost under the kind of icky capitalistic sheen that has been pasted across crypto assets and the metaverse in the last year or so.
There are a lot of concerns with NFTs (like wash trading, money laundering, and the carbon impact of PoW currencies), and with the metaverse (like online harassment, and corporatization of yet another social realm), and with AI art as well (like companies scraping platforms like Deviantart and Artstation, or the environmental impact of training some of these very large models, or b.s. carbon offset credits that don’t actually aid carbon recapture efforts, or biased models, or the labor concerns of HIT workers). I’m not oblivious to all these issues, and we have a responsibility as artists and developers and voters to take action on these issues, but I also see some really wonderful things on the horizon because of these technologies.
So as with any emergent tech, there are challenges, but there are also some tremendous affordances, and we can appreciate new technologies without committing ourselves to always wearing rose-colored glasses.
How would you describe your artistic style and the work you do? Your work is very varied, but do you see any threads that run through it?
I started thinking about how to bring these digital works back into the physical world, by printing fabrics and getting prints made and building sculptures; I talked about this at the NVIDIA Global Technology Conference this past spring — apologies that the slides from that talk aren’t annotated, but there are some fun gifs in there.
I’ve actually got a new sculpture I’m working on based on the idea of a 'latent walk cube' — treating time as the z-axis to build a 3D illuminated sculpture of a latent walk from a model I trained on landscapes.
I made a set of AI music videos for my friend and colleague Alexa Woodward’s new pandemic album too, and that was super cool to see projected on a building in Burlington’s City Hall Park at like 30ft. high. There’s a thread on that here.
I’m not sure that I have a favorite project, but some that I really liked were that music video album; these 'surreal cinemagraphs' like 'Where Did I Get Here?', 'The Trees Whisper', 'What Will Be Left?', and 'Hiraeth Greenhouse', which taught me a lot about using Adobe After Effects for video editing, and especially infinite loops; and, one that got a lot of interest, the Biophilia Hypothesis collection. There’s a whole thread on that story here, where you can see some of the sheer time that went into things like manually potting digital plants in a plant-pot adjacency matrix.
It has taken many weeks of long days, but I'm so excited today to announce my new GAN art collection:
— Jane (@artistjaneadams) August 22, 2021
Biophilia Hypothesis https://t.co/Fa0yn9Zi9h
Here's a thread of my process: pic.twitter.com/86llSnyDcx
I guess I would characterize my style as one that is born out of the desire to surround myself with images that bring me joy, pretty simply. My interest in the sciences has always come from my fascination with natural phenomena, like mycorrhizal networks, or nebulae, and especially with how we make sense of ourselves as humans in those natural environments.
Growing up, I lived near this old abandoned military complex — massive utilitarian buildings that had succumbed to rust and were being pulled back into the earth by vines, trees erupting out of concrete; and that was one of my favorite places to ride my bike, explore, and make art. There’s something scary about the passage of time, sure, but I think there’s also something beautiful about the propensity for things to return to entropy, just as it’s delightful to see the tenacity of the human spirit in continuing to get up and fight off that chaos, whether it’s by brushing our teeth or building wayfinding systems or organizing books by the Dewey decimal system. It’s a beautiful dance we do, with chaos.
What are you working on now – and what's next?
Right now, I’m beginning my second year as a PhD student in Computer Science at Northeastern University, which is something that I never could have foreseen five years ago, but it feels so right now. I’m working in the data visualization lab, and my daytime research is primarily in human health — medical imaging, genomics and proteomics research, etc. I guess I define my research area specifically as 'exploratory analysis of high-dimensional data'. And I’d like to do more with AI from a research perspective; right now, for example, I’m working on a paper about using word2vec on the prompts from Reddit posts of Dall-e 2 pictures, to project into lower-dimensional space some of the ‘families’ of images that the community is generating.
But honestly, from an arts perspective? The explosion of text-to-image generators has kind of diluted AI art for me. The challenge and part that I loved most was the grind of long processing times, and waiting for models to like… 'cure in the computational kiln'. So the introduction and widespread adoption of tools like Dall-e and Midjourney and Stable Diffusion have been cool to see from the perspective of democratization of this medium, and I love seeing what other people create with them, but the excitement isn’t there for me the way it was with StyleGAN 2 and then later, SG3.
I did go beyond just prompt-to-image by building a tarot card deck and trying my hand at some video in-painting, but it’s all just so easy.
Video version with some pixel motion frame blending; definitely still a lil janky though (butterfly with wings down is easier for #dalle2 to in-paint, so there are more frames I saved from wings-down than wings-up) @openai #aiart pic.twitter.com/ZAQER2dIIl
— Jane (@artistjaneadams) July 25, 2022
So I’m actually taking a step back now from AI art, because I don’t want to see that become my entire identity as an artist. We shouldn’t be defined by our medium. And I need a new challenge, so I’m looking to the world of 3D. I know that Blender supports Python scripting, so I’m super curious about what I can do with that software and tools like Geometry Nodes (I’ve always been a big fan of Aristid Lindenmayer, so stay tuned maybe for some L-systems in the future!).
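For readers unfamiliar with L-systems: they generate complex, plant-like structures by repeatedly rewriting a string according to simple rules. A minimal sketch in plain Python (this is Lindenmayer's classic 'algae' system, not anything from Blender's bpy API):

```python
def expand(axiom, rules, n):
    """Apply L-system rewrite rules to the axiom string n times."""
    s = axiom
    for _ in range(n):
        # Rewrite every symbol in parallel; unmatched symbols pass through.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
algae = {"A": "AB", "B": "A"}
print(expand("A", algae, 3))  # ABAAB
```

In a 3D context, each symbol of the expanded string is then interpreted as a geometry instruction (e.g. 'F' for 'move forward', '+' and '-' for turns), which is how L-systems grow branching, organic structures in tools like Blender.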
So it’s kind of bittersweet, because I do feel like a chapter has closed for me. I’m pivoting somewhat, and I’m glad that I used 'Nodradek' as my artist name for this one period of time — about 18 months that actually correspond pretty neatly to the pandemic and our slow cultural recovery from that global tragedy. It feels like a real phase shift for me as an artist, and it feels important to make that leap into feeling challenged again.
I learned so much during that period, and also, for collectors of my art, it makes the artifacts they own that much more valuable, because there won’t be more in that same style. It was pretty incredible, when I decided to make this shift, to collect this thread of all the stuff I’d done since 2020, and really see how much ground I’d covered.
Tweet archiving the era of Nodradek - #AIart spanning from the era of #StyleGAN2, all the way to the explosion of text-to-image models. Link archive: https://t.co/TdQLtbcwcE
— Jane (@artistjaneadams) August 26, 2022
Roughly spanning February 2021 - August 2022, in no particular order: #generativeart
I think I trained well over 100 different StyleGAN models, and I filled the hard drive of my computer in that time — nearly a full terabyte of data. Just last week, I backed it all up to some cloud and physical storage locations and factory reset my computer, which feels very symbolic. At some point, I’d really like to do some more synthesized writing on what this journey has been for me, but right now it’s just kind of this mess of Twitter threads and talks here and there, so thank you for giving me the opportunity to put some thoughts and reflections down in a bit more of a cohesive way here.
I’m always happy to talk with fellow artists about what it means to be an artist, or even just a human, in this era of constantly evolving technology.
Please do visit Jane's website or follow on Twitter.
Thanks for reading! If you have any questions, comments or suggestions, I’d love to hear from you. Give me a shout on Twitter, or send an email.