In that case, yes -- for now.
Currently, the local AI model resides in memory, and we dynamically adjust its size based on your available RAM and device specs. We load different model variants so it doesn't consume excessive memory.
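To give a rough idea of the selection logic, here's a minimal sketch (not our actual code): it picks the largest model tier that fits in currently available RAM. The library (psutil), tier thresholds, and model names are all illustrative assumptions.

```python
import psutil

# Hypothetical model tiers, largest first: (min free RAM in GB, model name)
MODEL_TIERS = [
    (16, "local-model-large"),
    (8, "local-model-medium"),
    (4, "local-model-small"),
]

def pick_model() -> str:
    """Return the largest model tier that fits in available RAM."""
    available_gb = psutil.virtual_memory().available / (1024 ** 3)
    for min_gb, name in MODEL_TIERS:
        if available_gb >= min_gb:
            return name
    # Very constrained device: fall back to the smallest tier.
    return MODEL_TIERS[-1][1]
```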
We're working on a new approach for future versions that'll free up the memory when the app is idle.
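For the curious, an idle-unload scheme often looks something like the sketch below: a timer is reset on every request, and when it fires the model reference is dropped so the memory can be reclaimed. This is purely illustrative; `load_model` is a hypothetical loader, the timeout is made up, and the actual approach in future versions may differ.

```python
import threading

IDLE_TIMEOUT_SECONDS = 300  # hypothetical: unload after 5 min of inactivity

class IdleUnloader:
    def __init__(self):
        self._model = None
        self._timer = None
        self._lock = threading.Lock()

    def get_model(self):
        """Lazily load the model and reset the idle timer on each use."""
        with self._lock:
            if self._model is None:
                self._model = load_model(pick_model())  # hypothetical loader
            self._reset_timer()
            return self._model

    def _reset_timer(self):
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(IDLE_TIMEOUT_SECONDS, self._unload)
        self._timer.daemon = True
        self._timer.start()

    def _unload(self):
        with self._lock:
            self._model = None  # drop the reference so RAM can be reclaimed
```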