2025 - 03 March
A simple reminder to not “sit in a room alone and write a bunch of code”, as one colleague put it this year at AGU. The reference to “You and Your Research” is a good analogy, as writing about and reflecting on the importance of your work is a good and necessary part of research.
Welcome to the semantic apocalypse and a Natalie Wolchover article on causal emergence ^4bc6a2
I don’t remember following Erik Hoel on Substack; I must have gotten linked to some article, or maybe just a Note he posted. This month I saw he wrote the Semantic Apocalypse article after the Ghibli ChatGPT craze started online. At first I did not like the article as I read it, since it felt too much like the general arguments I’ve seen against LLMs’ capabilities based on normative statements about AI and art.
His article is, in fact, more nuanced, even though I believe it might be a bit dramatic. A commenter pointed out that there was a similar revulsion to American fast food, and that this led to a backlash rather than a complete takeover by fast food.
After poking around more of his writing, I found that much of his research seems right up my alley - his “causal emergence” research is a much fuller exploration of hierarchies of meaning, and a better argument against the idea that “LLMs are just next-token predictors”. My immediate reaction on hearing that idea was “well, humans are not just gene spreaders”, an argument that rests on the feeling that “obviously that’s absurd”. His work tries to show mathematically why the reductive framing is wrong, by quantifying how much new causal power exists at the macroscopic scale. He starts more from the idea that reductionists are wrong to say “my atoms made me do it, we are just atoms”, but he brings in an appealing amount of information theory and rigor. I might place this on my list of “when I get more free time, this would be great to dig into”.
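As a rough flavor of what that quantification looks like (a minimal sketch based on my understanding of the effective-information measure from Hoel's papers, not his exact formulation - the specific transition matrices below are a standard toy example, not taken from any one article of his): effective information is the mutual information between a uniform intervention over states and the resulting next-state distribution, and in some systems a coarse-grained macro description scores higher than the noisy micro one.

```python
import math

def effective_information(tpm):
    """EI = average KL divergence between each row of a transition
    probability matrix and the mean effect distribution, i.e. the
    mutual information under a uniform intervention over states."""
    n = len(tpm)
    # Effect distribution: average over all rows (uniform intervention).
    effect = [sum(row[j] for row in tpm) / n for j in range(n)]
    ei = 0.0
    for row in tpm:
        for p, q in zip(row, effect):
            if p > 0:
                ei += (p / n) * math.log2(p / q)
    return ei

# Micro scale (hypothetical 4-state toy system): states 0-2 wander
# uniformly among themselves; state 3 stays put.
micro = [
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

# Macro scale: group {0,1,2} into state A and {3} into state B.
# The coarse-grained dynamics become deterministic: A -> A, B -> B.
macro = [
    [1.0, 0.0],
    [0.0, 1.0],
]

print(round(effective_information(micro), 3))  # noisy micro: ~0.811 bits
print(round(effective_information(macro), 3))  # deterministic macro: 1.0 bit
```

The point of the toy example is just that the macro description carries more causal information (1 bit) than the micro one (~0.81 bits), which is the kind of result that lets the macro scale claim genuinely new causal power rather than being a lossy summary.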
I take it as a positive sign that I can still be shocked by the incompetence