- Always stream the visible scenarios if the modelProvider supports it
- Never stream the invisible scenarios
Also runs our query tasks in an actual background worker, which we were only partially doing before.
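A minimal sketch of what the worker wiring might look like with graphile-worker (the `queryLLM` task name matches the commits below; the payload shape and runner options are assumptions):

```ts
// worker.ts -- minimal graphile-worker runner (sketch; payload shape is assumed)
import { run, type Task } from "graphile-worker";

interface QueryLLMPayload {
  cellId: string; // hypothetical: which cell to generate output for
  stream: boolean; // only visible scenarios stream, and only if the provider supports it
}

const queryLLM: Task = async (payload, helpers) => {
  const { cellId, stream } = payload as QueryLLMPayload;
  helpers.logger.info(`querying LLM for cell ${cellId} (stream=${stream})`);
  // ...load the cell, build the prompt, call the model provider, store the output...
};

run({
  connectionString: process.env.DATABASE_URL,
  concurrency: 5,
  taskList: { queryLLM },
}).catch((err) => {
  console.error(err);
  process.exit(1);
});
```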
* Move DeleteButton into a separate file
* Rename plural relations
* Add ability to fork
* Fork automatically upon returning from auth
* Add experiment card skeleton
* Create HeaderButtons component
* Return no header buttons while experiment is loading
* Fix prettier
* Remove unused variable
* Remove newline
* Default json values to undefined
* Change header styles
* Fix prettier
* Give AddScenario icon less width
* Move useEffect
* Skip invalidating experiments list after forking
* Require user to be able to view experiment to fork it
* Move experiment creation into same transaction
* Only return the forked experiment id
* Put delete button in experiment settings drawer
* Move useEffect hook
Adds a `modelProvider` field to `promptVariants`, currently set to `"openai/ChatCompletion"` for all variants.
Adds a `modelProviders/` directory where we can define and store pluggable model providers. Currently just OpenAI. Not everything is pluggable yet; notably, the code that actually generates completions hasn't been migrated to this setup.
Does a lot of work to get the types working. Prompts are now defined with a function `definePrompt(modelProvider, config)` instead of `prompt = config`. Adds a script to migrate old prompt definitions.
This is still partial work, but the diff is large enough that I want to get it in. I don't think anything is broken, but I haven't tested thoroughly.
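A rough sketch of the new shape (the provider interface and config fields here are illustrative, not the actual schema):

```ts
// modelProviders/types.ts (sketch)
interface ModelProvider<TConfig> {
  id: string; // e.g. "openai/ChatCompletion"
  // ...completion logic will live here once it's migrated to this setup
}

// Before: a variant stored `prompt = config`. Now the provider travels with
// the config, so downstream code knows how to interpret it.
function definePrompt<TConfig>(modelProvider: ModelProvider<TConfig>, config: TConfig) {
  return { modelProvider, config };
}

// Hypothetical OpenAI provider and usage:
interface ChatCompletionConfig {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

const openaiChatCompletion: ModelProvider<ChatCompletionConfig> = {
  id: "openai/ChatCompletion",
};

const myPrompt = definePrompt(openaiChatCompletion, {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});
```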
* Add dropdown header for model switching
* Allow variant duplication
* Fix prettier
* Use env variable to restrict Prisma logs
* Fix env.mjs
* Remove unnecessary scroll bar from function call output
* Properly record when 404 error occurs in queryLLM task
* Add SelectedModelInfo in SelectModelModal
* Add react-select
* Calculate new prompt after switching model
* Send newly selected model with creation request
* Get new prompt construction function back from GPT-4
* Fix prettier
* Fix prettier
* Rename tables, add graphile workers, update types
* Add dev:worker command
* Update pnpm-lock.yaml
* Remove sentry config import from worker.ts
* Stop generating new cells in cell router get query
* Generate new cells for new scenarios, variants, and experiments
* Remove most error throwing from queryLLM.task.ts
* Remove promptVariantId and testScenarioId from ModelOutput
* Remove duplicate index from ModelOutput
* Move inputHash from cell to output
* Add TODO
* Add todo
* Show cost and time for each cell
* Always show output stats if there is output
* Trigger LLM outputs when scenario variables are updated
* Add newlines to ends of files
* Add another newline
* Cascade ModelOutput deletion
* Fix linting and prettier
* Return instead of throwing for non-pending cell
* Remove pnpm dev:worker from pnpm dev
* Update pnpm-lock.yaml
We want Monaco to treat the prompt constructor as TypeScript so we get type checks, but we actually want to save the prompt constructor as JavaScript so we can run it directly without transpiling.
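A sketch of the trick (the snippet content is illustrative): create the Monaco model as TypeScript while keeping the buffer plain JavaScript, since any valid JS is valid TS and the type checker still runs over it.

```ts
import * as monaco from "monaco-editor";

// The constructor is authored as plain JavaScript...
const source = `definePrompt("openai/ChatCompletion", { model: "gpt-3.5-turbo" })`;

// ...but the editor treats it as TypeScript, so Monaco's TS worker type-checks it.
const editor = monaco.editor.create(document.getElementById("editor")!, {
  value: source,
  language: "typescript",
});

// On save, persist the raw text: since no TS-only syntax was added, it is
// already runnable JavaScript and needs no transpile step.
const promptConstructorJs = editor.getValue();
```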
* List number of scenarios
* Retry requests after 429 (see the retry sketch after this list)
* Rename requestCallback
* Add sleep function
* Allow manual retry on frontend
* Remove unused utility functions
* Auto refetch
* Display wait time with Math.ceil
* Take one second modulo into account
* Add pluralize
* Default to streaming in config
* Add tokens to database
* Add NEXT_PUBLIC_SOCKET_URL to .env.example
* Disable streaming for functions
* Add newline to types
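A minimal sketch of the 429 retry and sleep helper described above (backoff strategy and helper names are assumptions; the real code may read a Retry-After header instead):

```ts
// Sleep helper used between retries.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry a request after 429 responses with exponential backoff.
async function fetchWithRetry(url: string, init: RequestInit, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429 || attempt >= maxRetries) return response;
    const waitMs = 1000 * 2 ** attempt;
    // Round the displayed wait time up to whole seconds, as in the commits above.
    console.log(`Rate limited; retrying in ${Math.ceil(waitMs / 1000)}s`);
    await sleep(waitMs);
  }
}
```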