I just found this very interesting research paper by Anthropic on how LLMs “think”: “Tracing the thoughts of a large language model”.