Brandon (@bvalosek)
Replying to @EigenGender

@EigenGender naive thought: would it be possible to have the LLM continually summarize the conversation thus far into a terser outline, and use that to keep a longer-running history that still fits within the model's 4096-token limit?

12/1/2022