Findings #9

From The Web

This week brought an interesting look at how DALLE-2 handles internal representations. This thread from Giannis Daras explored what he called DALLE-2's secret language.

Asking DALLE-2 to create images of farmers talking about vegetables, the author discovered something odd. The images looked reasonable, but the text in them was incoherent. This prompted some experimentation using the generated text as a prompt, and lo and behold, the resulting images were of vegetables. Taking this a few steps further, the author found other seemingly nonsensical prompts that produced consistent images.

The results were undoubtedly fascinating, but not everyone was convinced that something deeper was going on. This thread from Benjamin Hilton looked at counter-examples, pointing out cases where these prompts generated images unrelated to the original context.

Overall, I think Benjamin's criticism is a little too strong. There is clearly something going on behind the scenes, and even if there are counter-examples, there seems to be enough of a pattern here to be worth digging deeper.

This also raises the deeper question of whether it matters what happens inside an AI. Some would argue that it does, and that not knowing the internals opens us up to risk. Others would argue that as long as the results hold up, some mystery is acceptable.

Looking at our human counterparts, I would say that few of us are fully explainable. Whatever your stance, I think we are fast reaching a point where AI models are complex enough to generate their own meta-language. Prompt shaping, and learning how best to interact with these models to produce fruitful results, is an interesting topic all on its own.

I am excited to see where this goes next.

From Me

Last week I published a longer post looking back on the last two decades of programming.

A look back at twenty years of writing code
In one way or another, I have been writing code for the last two decades. If I am being honest with you, dear reader, I can’t pinpoint the exact time I wrote my first software, but it is close enough to twenty years to take some poetic license with this.

Tom and I also put out the eighth episode of our podcast. You can check that out on YouTube.

Bizarre Signs of Recession and CRISPR Tomatoes - WASSAP Ep 008
Episode 8 of We Absolutely Should Start A Podcast. Tom and Elliot discuss some of the more questionable signs that we might be in a recession and new work incr...

Final Thoughts

What is the minimum bound on consciousness? With AI progress moving as it is, we will eventually need an answer to this question. Will it be an ever-moving goalpost?

I hope you enjoyed this week's post. If you did, consider subscribing to get a copy of everything in your inbox. Unsubscribe at any time.

Until next time.
