As is often the case with technology, we have to navigate between catastrophic outlooks and the exaggerated optimism surrounding the tools we use. Artificial intelligence is no exception.
For some time now, tools like ChatGPT have offered ways to give large language models a form of memory, through custom instructions and similar features. ChatGPT, in particular, offers three options:
Custom instructions: These are general pieces of information you can provide to the model. For instance, my ChatGPT "knows" that I studied biomedical engineering up to its final year and that I'm now studying intercultural communication. It "knows" what work I do, and that when it writes for me it should avoid overly male-centric language, catastrophic viewpoints, and hyper-enthusiastic tones.
Memory: This is a relatively recent feature. If you explicitly ask the model to remember something (for example, by starting a prompt with "Remember that..."), that specific piece of information is stored in memory.
Custom GPTs: These represent a more complex form of memory and personalization.
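One way to picture the first two options is as layers of context quietly prepended to each prompt. Here is a minimal Python sketch of that idea, assuming a toy `PersonalizedAssistant` class; all names are hypothetical, and this is not how ChatGPT is implemented internally:

```python
class PersonalizedAssistant:
    """Toy model of how standing instructions and explicit memories
    might be layered onto each user prompt. Purely illustrative."""

    def __init__(self, custom_instructions):
        # Standing preferences, applied to every conversation.
        self.custom_instructions = custom_instructions
        # Facts the user explicitly asked the assistant to remember.
        self.memories = []

    def handle(self, prompt):
        # A "Remember that ..." prompt stores the fact instead of answering.
        if prompt.lower().startswith("remember that"):
            fact = prompt[len("remember that"):].strip(" .")
            self.memories.append(fact)
            return f"Noted: {fact}"
        # Any other prompt gets the full personalized context.
        return self.build_context(prompt)

    def build_context(self, prompt):
        # Instructions first, then memories, then the actual question.
        lines = [f"[instructions] {self.custom_instructions}"]
        lines += [f"[memory] {m}" for m in self.memories]
        lines.append(f"[user] {prompt}")
        return "\n".join(lines)
```

For example, after `handle("Remember that I study intercultural communication.")`, a later call to `build_context("Draft a short bio for me.")` would include that stored fact alongside the standing instructions, which is roughly the effect the features above aim for.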
Anthropocentric terms like these can be confusing. Unfortunately, inventing new ones would likely be just as unhelpful at this stage.
On Reddit and other social media (and sometimes in mainstream outlets that pick them up), conversations with ChatGPT circulate in which users ask the model, "Tell me something you know about me that I don't yet know, based on our previous conversations." Another popular prompt is, "Roast me based on our previous interactions."
While it's fun to try this and then turn it into posts or articles (spoiler alert: the catastrophic and alarming ones tend to get more views), it's important to remember how ChatGPT "roasts" us or tells us something we didn't know: it extrapolates patterns from our conversations and suggests things that are statistically likely to be true.
For instance, it pointed out to me that I juggle too many tasks while claiming the "slow" label a little too proudly. It didn't take a machine to see this, but it still surprised me.
Beyond the entertainment value of these scenarios (which, unfortunately, too often dominate discussions around AI), it's more interesting to use these tools deliberately: selectively populating their memory with what matters to us, so they can function as true personal assistants.