Wow, I feel shamed.
damn, wth did you say? 🤣
Thanks for posting! We have encountered this bug as well and are investigating. Have you seen this before or has it only been the one time?
Messina I've been thinking about this some more... The reason we handle it this way is that we know the models sometimes refuse to respond, which is obviously frustrating. We want to be transparent and communicate clearly to users that this isn't a TalkTastic issue - it's coming from Claude/Anthropic. While ideally these refusals wouldn't happen at all, the reality is they do, and we currently can't detect them in real time. Being upfront about it seems like our best option for graceful failure handling.

Would love to hear your thoughts on this approach. It's not ideal that we're getting refusals, but do you think this is an acceptable way to handle them? Or do you see a better solution given the underlying limitations we're working with?
When you talk to Chris at DS you'll see their system allows graceful fallback to backup models (from the default). My recommendation here is to alert the user that the current model was unable to complete the request and that we're moving to [model], as a simple UI addendum. The user ultimately doesn't care, but TT can make the fallbacks more graceful by explaining any lag (or less performant responses).
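Rough sketch of what that fallback flow could look like - purely illustrative, the function names (rewriteWith, looksLikeRefusal, notifyUser) are made up and not TT's actual code:

```typescript
// Illustrative only: rewriteWith, looksLikeRefusal, and notifyUser are
// hypothetical names, not TalkTastic's actual API.
async function rewriteWithFallback(
  text: string,
  models: string[],
  rewriteWith: (model: string, text: string) => Promise<string>,
  notifyUser: (message: string) => void,
): Promise<string> {
  for (const model of models) {
    const result = await rewriteWith(model, text);
    // Crude heuristic; real refusal detection would need to be more robust.
    if (!looksLikeRefusal(result)) {
      return result;
    }
    // Simple UI addendum: tell the user we're falling back, then keep going.
    notifyUser(`${model} was unable to complete this rewrite; trying the next model.`);
  }
  throw new Error("All models refused to rewrite this content.");
}

function looksLikeRefusal(output: string): boolean {
  // Placeholder: refusals often open with an apology or "I can't".
  return /^(i['’]?m sorry|i am sorry|i can['’]?t|i cannot)/i.test(output.trim());
}
```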
Matt Mireles that's fair, but the formatting and content of the message are confusing... Saying something like "Anthropic refused to rewrite this content" is more informative than "[CENSORED BY ANTHROPIC]", which is ambiguous and leaves me feeling like I have no recourse or that I did something wrong. Furthermore, you could offer a fallback option to "Try again with a different model" if a model refuses to rewrite, or surface context from the model about why it refused (sometimes it'll explain its behavior). See the sketch below for the kind of thing I mean.
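For what it's worth, a minimal sketch of that "clearer message + try again" idea - RefusalInfo and the exact wording are just illustrative, not an actual TalkTastic component:

```typescript
// Hypothetical UI-side handling; RefusalInfo and the wording are illustrative,
// not an actual TalkTastic component.
interface RefusalInfo {
  model: string;
  explanation?: string; // the model's own stated reason, when it gives one
}

function refusalMessage(info: RefusalInfo): string {
  const base = `${info.model} declined to rewrite this content.`;
  const why = info.explanation ? ` The model said: "${info.explanation}".` : "";
  // Clear wording plus an explicit recourse, instead of "[CENSORED BY ANTHROPIC]".
  return `${base}${why} You can try again with a different model.`;
}

// Example:
// refusalMessage({ model: "Claude", explanation: "the text looks like personal data" })
// → 'Claude declined to rewrite this content. The model said: "the text looks like
//    personal data". You can try again with a different model.'
```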