TalkTastic Insiders
  • Niket Desai

    damn, wth did you say? 🤣

  • Devo at Talktastic

    Thanks for posting! We have encountered this bug as well and are investigating. Have you seen this before or has it only been the one time?

  • Messina

    This is the first time, but I'm aware that Claude is as spicy as a glass of tofu-water, so it was probably offended by the topic I was talking about... All the more reason to let us connect with Ollama or LM Studio models locally!

  • Matt Mireles

    Messina Yes. Agree 100%! That is the direction we are taking things.

  • Matt Mireles

    Messina I've been thinking about this some more... The reason we handle it this way is that we know the models sometimes refuse responses, which is obviously frustrating. We want to be transparent and communicate clearly to users that this isn't a TalkTastic issue - it's coming from Claude/Anthropic. While ideally these refusals wouldn't happen at all, the reality is they do, and we currently can't detect them in real-time. This approach of being upfront about it seems like our best option for graceful failure handling. Would love to hear your thoughts on this approach. While it's not ideal that we're getting refusals, do you think this is an acceptable way to handle it? Or do you see a better solution given the underlying limitations we're working with?
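    Matt mentions that refusals can't currently be detected in real time. One rough heuristic (purely a sketch, not TalkTastic's actual code) is to match a response against common refusal phrasings before showing it to the user. The function name and phrase list below are illustrative assumptions:

    ```python
    # Hypothetical sketch: flag a likely model refusal by substring-matching
    # common refusal phrasings. The marker list and function name are
    # illustrative only; this will miss novel refusals and can false-positive
    # when a response quotes refusal-like text.
    REFUSAL_MARKERS = (
        "i can't assist with",
        "i cannot assist with",
        "i'm not able to help with",
        "i apologize, but i can't",
    )

    def looks_like_refusal(response: str) -> bool:
        """Return True if the response resembles a refusal rather than a rewrite."""
        lowered = response.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)
    ```

    A check like this wouldn't be reliable enough to hide refusals silently, but it could be enough to trigger a fallback or a clearer UI message instead of showing the raw refusal.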

  • Niket Desai

    When you talk to Chris at DS you'll see their system allows graceful fallback from the default to backup models. My recommendation here is to alert the user that the current model was unable to complete and that you're switching to [model], as a simple UI addendum. The user ultimately doesn't care which model runs, but TT can make the fallbacks more graceful by explaining any lag (or less performant responses).
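    Niket's fallback idea could be sketched as trying models in order and returning both the result and a short UI addendum naming the model that completed. Everything here is an assumption for illustration: `call_model` stands in for whatever API the app actually uses, and `RefusalError` is a hypothetical exception type.

    ```python
    # Illustrative sketch of a fallback chain: try the default model first,
    # then fall back through backups. Returns the rewritten text plus a
    # one-line UI addendum when a backup model was used. `call_model` and
    # `RefusalError` are stand-ins, not TalkTastic's real API.
    class RefusalError(Exception):
        """Raised when a model declines to produce a rewrite."""

    def rewrite_with_fallback(text, models, call_model):
        for i, model in enumerate(models):
            try:
                result = call_model(model, text)
            except RefusalError:
                continue  # this model refused; try the next one in the chain
            # Only show an addendum if we had to move off the default model.
            addendum = "" if i == 0 else (
                f"{models[0]} was unable to complete; used {model} instead."
            )
            return result, addendum
        raise RefusalError("All models refused this request.")
    ```

    The addendum string is where the "explanation on lag or less performant responses" would go, so the user learns why the output took longer or reads differently.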

  • Messina

    Matt Mireles that's fair, but the formatting and content of the message are confusing... Saying something like "Anthropic refused to rewrite this content" is more informative than "[CENSORED BY ANTHROPIC]", which is ambiguous and leaves me feeling like I have no recourse or that I did something wrong. Furthermore, you could offer a fallback option to "Try again with a different model" when a model refuses to rewrite, or surface context from the model about why it refused (sometimes it'll explain its behavior).