Being an LLM, there’s plenty it doesn’t really understand; it’s also quite possible it was trained on a set that included the patterns we’ve contributed in the library or on GitHub. But to give you an idea of how I developed this prompt, here are the relevant snippets from my first chat ever with it. It was on my phone, so I apologize for the many skinny screenshots.
Sorry I didn’t screenshot the full final output here, but it correctly incorporated all these changes, often making the right adjustments in multiple places both inside and outside the render function.
I think this gives some interesting examples of where it’s surprisingly smart while still being just a clever, refined synthesis trick. Const is back, as is a 360-degree hue; there’s no implicit understanding of random or intervals.
That this works at all is astounding. LLM things are (a) marvelous, (b) slightly terrifying, and (c) still reassuringly stupid. I’ve played with Copilot a bit; for the stuff I’m doing, it’s fascinating that it works at all, but it hasn’t yet been actually helpful.
I wonder if there’s anybody out there doing this for song lyric transcription/comprehension. My dream, for both Pixelblaze and Giant Art Car: A chain of AIs that:
- listens to the currently playing song and understands the lyrics
- generates a sentence or two describing what’s going on in the song and what emotions it evokes
- asks DALL-E, Midjourney, etc. to create an image based on the descriptive text (or to just cough out a palette and pattern style reflecting what’s going on in the song)
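Just for fun, the chain above could be sketched as three composed stages. Everything here is hypothetical: the function names are mine, and each stage is a stub where a real build would call a speech-to-text model, an LLM, and an image/palette generator respectively.

```python
# Hypothetical sketch of the "chain of AIs" dream pipeline.
# Each stage is stubbed out; none of these are real APIs.

def transcribe_lyrics(audio_chunk: bytes) -> str:
    # Stage 1: speech-to-text on the currently playing song (stub).
    return "lyrics about driving into a desert sunrise"

def describe_mood(lyrics: str) -> str:
    # Stage 2: an LLM summarizes the song's story and emotions
    # in a sentence or two (stub).
    return f"A hopeful, wide-open feeling: {lyrics}."

def image_prompt(description: str) -> str:
    # Stage 3: build the prompt handed to an image model, or to
    # whatever coughs out a palette and pattern style for the LEDs.
    return f"Generate a color palette and pattern style evoking: {description}"

def run_chain(audio_chunk: bytes) -> str:
    # Compose the three stages end to end.
    return image_prompt(describe_mood(transcribe_lyrics(audio_chunk)))

print(run_chain(b""))
```

The interesting design question is really just the middle stage: whether the mood summary is good enough that the final prompt produces something that actually matches the song.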
I’ve had to modify the way I use it. It has produced copyrighted code verbatim, sometimes (often?) just wrong code, and, hilariously, sometimes author names. But it has been a powerful muse when I get code-writer’s block and am looking for different ways of doing things.