Over the past several years, I’ve given considerable attention to the idea of having a personal "context window", wondering whether other creators conceptualize their attention similarly. I mostly find myself reflecting on this when trying to optimize my productivity and focus: because I’m either feeling a distraction coming on, am in a flow state, or am somewhere in between, wondering if I’m exploring enough to make a good decision.
When distractions infiltrate my window, they seem to displace elements I did not intend to lose, often not just the immediate thought but a cascade of related ideas and threads. Most of us have probably experienced this; it’s the moment our train of thought derails or we are abruptly pulled out of a productive flow state.
By examining and hypothesizing about how human context windows function, we might gain deeper insight into how to interact more effectively with models like Claude, ChatGPT, and others. I should preface everything I’m saying here with a disclaimer: I have zero formal training in neurocognition, artificial intelligence, the biological workings of human memory, or, for that matter, getting stuff done. What follows are merely thoughts at this moment in time.
It’s my sense that the way humans approach a project isn’t just about what’s in front of us at that moment; there may even be aspects we don’t consciously bring to the table. A human context window is both a mental and temporal space, shaped not only by what we know and feel right now but also by everything we’ve experienced and everything we anticipate in the future. Even when we don’t think we have a view of the future, we actually do, because survival. This isn’t just a cognitive trick; best I can figure, it’s an evolutionary advantage. Our brains evolved to draw on memories, past experiences, and long-term goals. We don’t just react to the present moment; we integrate it with everything from our childhood to yesterday’s mistakes to our hopes for next year.
It feels like an LLM’s context window is more limited (though IDK). These models process a fixed amount of text at once, perhaps a few paragraphs or a few thousand words. A model doesn’t ‘remember’ anything until it’s loaded into its context window. Okay, maybe loading a model’s context window is, in essence, giving it a memory. IDK. It doesn’t have a ‘sense’ of the future, which limits its view. Maybe that’s a good thing. It can’t draw from a personal history or temporal continuity. Its creativity doesn’t come from past mistakes or future aspirations, but from patterns in its training data.
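To make that limitation concrete, here’s a minimal sketch in Python of how a fixed context window might behave. The tokenizer stand-in, the token budget, and the function names are all made up for illustration, not any real model’s API; the point is simply that whatever falls outside the budget isn’t there for the model at all:

```python
# Illustrative toy only: a fixed "context window" with a token budget.
# Nothing here reflects a real model's internals or API.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def fit_to_window(messages: list[str], budget: int = 50) -> list[str]:
    """Keep the most recent messages that fit inside the token budget.

    Anything that falls outside the window is simply gone; the model
    has no memory of it unless someone loads it back in.
    """
    kept: list[str] = []
    used = 0
    for message in reversed(messages):   # walk from newest to oldest
        cost = count_tokens(message)
        if used + cost > budget:
            break                        # older context falls off the edge
        kept.append(message)
        used += cost
    return list(reversed(kept))          # restore original order

conversation = [
    "Here is my childhood story...",     # long-ago context
    "Here is yesterday's mistake...",
    "Here is the task at hand.",         # the present moment
]
# With a budget of 10 toy tokens, the childhood story gets dropped:
print(fit_to_window(conversation, budget=10))
```

A human attention span doesn’t truncate this cleanly, of course, which is part of what the rest of this piece is getting at.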
Okay, and here’s where things get interesting. Our human context window is far more fluid and dynamic. We bring together what we’ve done in the past, what we’re doing now, and what we hope to achieve. We have the ability to connect seemingly unrelated things, like an emotion felt last week while dining with a friend and discussing a project idea, or whatever. These temporal shifts in focus are what differentiate human creativity from what LLMs spit out. We’re not just reacting to the present; we’re in dialogue with our own lives. When we chat with other humans, our context windows join, though they don’t merely overlap. It’s more dimensional than that.
The power of collaboration between human and machine also lies in joining our context windows. In writing a novel or a business plan, an LLM can instantly bring up historical contexts and linguistic structures, or even suggest plot twists from patterns it has learned across a vast corpus of text, sparked by our thoughts. We, on the other hand, bring in the long term: our previous works, our emotional states and visceral reactions, and our personal historical knowledge. We, more than the model, shape the direction, while the LLM provides a kind of real-time, data-driven support. It’s dimensionally similar to chatting with other humans, though the dimensions are limited in one sense and accretive in another.
Together with an LLM we create a hybrid context window, something we didn’t have access to before: a realm where both the immediate and the temporal come together. It’s something like a Vulcan mind meld; just imagine Spock saying, “My mind to your mind, my thoughts to your thoughts.”
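If you wanted to act that meld out in code, a toy sketch might look like the following. The notes, the prompt shape, and the function name are entirely hypothetical; it’s just one way of prepending a slice of long-term human context to the immediate task before a model ever sees it:

```python
# Hypothetical sketch of a "hybrid context window": the human's
# long-term, temporal context joined with the task at hand.

def build_hybrid_prompt(personal_context: list[str], task: str) -> str:
    """Join the human's past, reactions, and goals with the present task."""
    history = "\n".join(f"- {note}" for note in personal_context)
    return (
        "What I bring from my own past and goals:\n"
        f"{history}\n\n"
        f"What I'm working on right now:\n{task}"
    )

prompt = build_hybrid_prompt(
    [
        "A plot idea from dinner with a friend last week",
        "Lessons from my previous novel's pacing problems",
        "Where I want the story to land next year",
    ],
    "Draft chapter three's opening scene.",
)
print(prompt)
```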
Whether we’re designing a building, developing a video game, or creating a new recipe, the LLM doesn’t replace our unique human temporal contexts and intuitions but instead engages with them. We bring our past experiences, memories, views, and goals, while the LLM, and its successors (cue François Chollet and Mike Knoop), bring vast computational power.
The human mind, with its memories and future dreams, will always hold the advantage in some areas; however, in collaboration with LLMs and what’s coming down the road, we can now begin pushing the boundaries of what’s possible. The era we’re entering has been compared to the Industrial Revolution, but in my view that’s not the most significant bit.