* Increase min width of prompt variant
* Increase width of custom instructions input
* Start recording API docs
* Provide better instructions for converting to/from Claude
* Fix Prettier formatting
* Stream completions selectively (see the sketch after this list):
  - Always stream the visible scenarios, if the modelProvider supports it
  - Never stream the invisible scenarios
  - Also actually run our query tasks in a background worker, which we weren't quite doing before
* Add descriptions of fields in the Llama 2 input schema (see the schema sketch after this list)
* Let GPT-4 know when the provider stays the same
* Allow refetching in the event of any errors
* Define refinement actions in model providers
* Fix Prettier formatting
* Make CompareFunctions more configurable
* Change RefinePromptModal styles
* Accept newModel in getModifiedPromptFn
* Show prompt comparison in SelectModelModal
* Pass variant to SelectModelModal
* Update instructions
* Properly use isDisabled
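
A minimal sketch of the selective-streaming logic mentioned above, assuming hypothetical `ModelProvider` and `Scenario` shapes; none of these names are the repo's actual identifiers:

```ts
interface ModelProvider {
  supportsStreaming: boolean;
  streamCompletion(input: string): AsyncIterable<string>;
  getCompletion(input: string): Promise<string>;
}

interface Scenario {
  id: string;
  visible: boolean; // whether the scenario is currently on screen
  input: string;
}

async function runScenario(
  scenario: Scenario,
  provider: ModelProvider,
  onOutput: (id: string, text: string) => void,
): Promise<void> {
  if (scenario.visible && provider.supportsStreaming) {
    // Visible scenario + streaming-capable provider: emit chunks as they arrive.
    for await (const chunk of provider.streamCompletion(scenario.input)) {
      onOutput(scenario.id, chunk);
    }
  } else {
    // Invisible scenarios (or non-streaming providers) get a single response.
    onOutput(scenario.id, await provider.getCompletion(scenario.input));
  }
}
```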
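And for the Llama 2 schema descriptions mentioned above, the idea is roughly this (the field names and shape are assumptions for illustration, not the actual schema):

```ts
// Illustrative only: a JSON-Schema-style input definition with a
// `description` on each field. The real Llama 2 schema may differ.
const llama2InputSchema = {
  type: "object",
  properties: {
    prompt: {
      type: "string",
      description: "The full prompt text sent to the model.",
    },
    temperature: {
      type: "number",
      description: "Sampling temperature; higher values produce more varied output.",
    },
    max_new_tokens: {
      type: "integer",
      description: "Maximum number of tokens to generate in the completion.",
    },
  },
  required: ["prompt"],
} as const;
```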
Adds a `modelProvider` field to `promptVariants`; for now it's just set to "openai/ChatCompletion" for all variants.
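Roughly, on the variant side (a sketch; every field except `modelProvider` is a placeholder):

```ts
// Sketch of the new field on a prompt variant. Only `modelProvider` is
// from this change; the other fields are placeholders for illustration.
interface PromptVariant {
  id: string;
  label: string;
  // Identifies which pluggable provider executes this variant.
  // Currently "openai/ChatCompletion" for every variant.
  modelProvider: string;
}
```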
Adds a `modelProviders/` directory where we can define and store pluggable model providers. Currently just OpenAI. Not everything is pluggable yet -- notably, the code that actually generates completions hasn't been migrated to this setup.
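A provider definition might slot in along these lines (the interface below is a guess at the shape, not the actual code; completion generation is deliberately absent since it hasn't been migrated):

```ts
// Hypothetical shape of a pluggable provider definition.
interface ModelProviderDefinition<TConfig> {
  name: string; // e.g. "openai/ChatCompletion"
  inputSchema: Record<string, unknown>; // JSON Schema for the provider's config
  canStream: boolean;
  // Narrows an arbitrary prompt config to this provider's config type.
  validateConfig(config: unknown): config is TConfig;
}
```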
Does a lot of work to get the types working. Prompts are now defined with a function `definePrompt(modelProvider, config)` instead of `prompt = config`. Adds a script to migrate old prompt definitions.
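For example (the config shown is an assumed OpenAI chat-completion payload; the `declare` stub exists only so the snippet stands alone):

```ts
// Stub so this snippet typechecks on its own; in the repo, `definePrompt`
// comes from the new modelProviders scaffold.
declare function definePrompt(modelProvider: string, config: unknown): unknown;

// Old style:
//   prompt = { model: "gpt-4", messages: [...] };
// New style: the provider is named explicitly alongside its config.
const prompt = definePrompt("openai/ChatCompletion", {
  model: "gpt-4",
  messages: [{ role: "user", content: "Summarize the scenario input." }],
});
```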
This is still partial work, but the diff is large enough that I want to get it in. I don't think anything is broken, but I haven't tested thoroughly.