@chrismessina Great question! We don't rely on a fixed max context window. Instead, we dynamically break your codebase into smaller, vectorized chunks stored in a vector DB. The underlying LLM still has a token limit (e.g., 200K tokens for Claude 3.5 Sonnet), but because each request retrieves only the most relevant chunks, this approach effectively lets you work with context spanning 100K+ lines of code.
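For anyone curious, here's a minimal sketch of the general chunk-and-retrieve pattern described above. This is not our actual implementation: the `embed()` stub, chunk size, and similarity search are all simplified placeholders standing in for a real embedding model and vector DB.

```python
# Illustrative sketch of chunking a codebase, embedding the chunks,
# and retrieving only the most relevant ones at query time.
# embed() is a placeholder; a real system calls an embedding model.

import numpy as np

CHUNK_LINES = 40  # hypothetical chunk size in lines of code

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model: deterministic random unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

def chunk(source: str, n: int = CHUNK_LINES) -> list[str]:
    """Split a file into fixed-size line chunks (real systems chunk smarter)."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + n]) for i in range(0, len(lines), n)]

def build_index(files: dict[str, str]) -> list[tuple[str, np.ndarray]]:
    """Embed every chunk of every file -- the toy 'vector DB'."""
    return [(c, embed(c)) for src in files.values() for c in chunk(src)]

def retrieve(query: str, db: list[tuple[str, np.ndarray]], k: int = 5) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query."""
    q = embed(query)
    scored = sorted(db, key=lambda pair: float(q @ pair[1]), reverse=True)
    return [c for c, _ in scored[:k]]

# Only the retrieved chunks (not the whole repo) go into the prompt,
# so the codebase can far exceed the model's 200K-token context window.
```

The key idea is that the token limit bounds the prompt, not the codebase: retrieval narrows the repo down to a handful of relevant chunks before anything reaches the model.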