hey fron, i have a generic question that i'm not sure how to phrase well.
so, say i want to build a chatbot interface that accepts many lm token providers: think openai, anthropic, replicate, that kind of stuff. each of these apis may have a different specification for both returning all tokens at once and for streaming tokens.
my main confusions are:
- do good engineers typically start abstracting the common patterns and sorting out the differences at this point, or do they hold off?
- how else do they work the differences out? do they usually decide on one api specification to standardize on internally to reduce complexity, or do people typically just throw switch cases at it anyway?
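for concreteness, here's a rough sketch of the "abstract the common patterns" option i have in mind: a common interface that the chat ui talks to, with one adapter per provider behind it. everything here is hypothetical (the interface shape, the `EchoProvider` stand-in), just to illustrate the structure, not any real vendor sdk.

```python
from abc import ABC, abstractmethod
from typing import Iterator


class LMProvider(ABC):
    """Common interface; each provider adapter maps its vendor api onto this."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the full completion at once."""

    @abstractmethod
    def stream(self, prompt: str) -> Iterator[str]:
        """Yield tokens incrementally."""


class EchoProvider(LMProvider):
    """Hypothetical stand-in for a real adapter (one wrapping openai,
    anthropic, etc. would translate its request/response shapes here)."""

    def complete(self, prompt: str) -> str:
        # A real adapter might call the vendor's blocking endpoint instead.
        return "".join(self.stream(prompt))

    def stream(self, prompt: str) -> Iterator[str]:
        # Fake token stream: one "token" per word.
        for word in prompt.split():
            yield word + " "


def chat(provider: LMProvider, prompt: str, streaming: bool = False) -> str:
    # The chatbot ui only ever talks to LMProvider, never a vendor api,
    # so adding a provider means adding one adapter, not new switch cases.
    if streaming:
        return "".join(provider.stream(prompt))
    return provider.complete(prompt)


print(chat(EchoProvider(), "hello there", streaming=True))
```

the switch-case alternative would instead branch on a provider name inside `chat` itself, which is what i'm wondering whether people actually do in practice.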