Adds a `modelProvider` field to `promptVariants`; for now it's set to "openai/ChatCompletion" for all variants.
Adds a `modelProviders/` directory where we can define and store pluggable model providers; currently just OpenAI. Not everything is pluggable yet -- notably, the code that actually generates completions hasn't been migrated to this setup.
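As a rough sketch, a provider entry might look something like the interface below. The names and fields here are illustrative assumptions, not the repo's actual types:

```typescript
// Hypothetical shape for entries in `modelProviders/` -- names are
// illustrative, not the actual code.
interface ModelProvider<TInput, TOutput> {
  // e.g. "openai/ChatCompletion"
  id: string;
  // Validate/normalize a prompt config for this provider.
  parseConfig(config: unknown): TInput;
  // Not yet pluggable per the note above: completion generation.
  generateCompletion?(input: TInput): Promise<TOutput>;
  // Per-provider pricing; see the `cost` note below.
  calculateCost?(output: TOutput): number;
}
```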
Does a lot of work to get the types working: prompts are now defined with a function call, `definePrompt(modelProvider, config)`, instead of `prompt = config`. Adds a script to migrate old prompt definitions.
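For reference, a minimal sketch of the new definition style. The `definePrompt(modelProvider, config)` signature is from this PR; the import path and config contents are assumptions:

```typescript
// Sketch of a migrated prompt definition. The import path is a guess,
// and the config shown is just an example OpenAI chat config.
import { definePrompt } from "~/modelProviders";

export const prompt = definePrompt("openai/ChatCompletion", {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Summarize: {{scenario.text}}" }],
});
```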
This is still partial work, but the diff is large enough that I want to get it in. I don't think anything is broken, but I haven't tested thoroughly.
Storing the model on promptVariant is problematic because it isn't always in sync with the actual prompt definition. I'm removing it for now to see if we can get away with that -- might have to add it back in later if this causes trouble.
Also adds a `cost` field to `modelOutput` so we can cache it, which matters because cost calculations differ between API providers.
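Something like the sketch below is the idea: compute cost with the variant's provider and persist it on the output record so cached results keep their original cost. Field names, the pricing numbers, and the helper are all assumptions:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Illustrative only: persist provider-specific cost on ModelOutput so it
// can be served from cache. The rates shown are example OpenAI-style
// per-1K-token prices, not the real provider definitions.
async function recordCost(
  inputHash: string,
  promptTokens: number,
  completionTokens: number
) {
  const cost = (promptTokens * 0.0015 + completionTokens * 0.002) / 1000;
  await prisma.modelOutput.updateMany({
    where: { inputHash },
    data: { cost },
  });
}
```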
* Rename tables, add graphile workers, update types
* Add dev:worker command
* Update pnpm-lock.yaml
* Remove sentry config import from worker.ts
* Stop generating new cells in cell router get query
* Generate new cells for new scenarios, variants, and experiments
* Remove most error throwing from queryLLM.task.ts (see the task sketch after this list)
* Remove promptVariantId and testScenarioId from ModelOutput
* Remove duplicate index from ModelOutput
* Move inputHash from cell to output
* Add TODO
* Add todo
* Show cost and time for each cell
* Always show output stats if there is output
* Trigger LLM outputs when scenario variables are updated
* Add newlines to ends of files
* Add another newline
* Cascade ModelOutput deletion
* Fix linting and prettier
* Return instead of throwing for non-pending cell
* Remove pnpm dev:worker from pnpm:dev
* Update pnpm-lock.yaml
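To tie a few of the commits above together (graphile workers, the `dev:worker` command, and returning instead of throwing for non-pending cells), here's a rough sketch of how the queryLLM task might be wired up. Only the graphile-worker API is as documented; the payload shape, cell lookup, and status values are assumptions:

```typescript
import { run, type Task } from "graphile-worker";

// Stub standing in for the real database lookup.
async function getCell(id: string): Promise<{ status: string }> {
  return { status: "PENDING" };
}

const queryLLM: Task = async (payload, helpers) => {
  const { cellId } = payload as { cellId: string };
  const cell = await getCell(cellId);
  // Per the commit above: return instead of throwing for a non-pending
  // cell, so graphile-worker doesn't keep retrying a job that's moot.
  if (cell.status !== "PENDING") return;
  helpers.logger.info(`Querying LLM for cell ${cellId}`);
  // ...generate the completion and store it as a ModelOutput
};

// Roughly what the new `dev:worker` command would start.
async function main() {
  await run({
    connectionString: process.env.DATABASE_URL,
    taskList: { queryLLM },
  });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```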