* Continue polling stats until all evals complete
* Return evaluation changes early, before the eval has run
* Add task for running new eval
* Requeue rate-limited tasks
* Fix prettier
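The requeue behavior in the list above might look roughly like the sketch below. The queue shape, `RateLimitError` class, and delay are illustrative assumptions, not the project's actual task runner:

```typescript
// Illustrative sketch of requeueing rate-limited tasks; the real queue
// library and error shape will differ.
type Task = () => Promise<void>;

class RateLimitError extends Error {}

async function drainQueue(queue: Task[], retryDelayMs = 1000): Promise<void> {
  while (queue.length > 0) {
    const task = queue.shift()!;
    try {
      await task();
    } catch (err) {
      if (err instanceof RateLimitError) {
        // Push the task back onto the end of the queue and wait
        // before trying again, instead of failing the whole run.
        queue.push(task);
        await new Promise((r) => setTimeout(r, retryDelayMs));
      } else {
        throw err; // non-rate-limit failures are not retried
      }
    }
  }
}
```

Requeueing at the back (rather than retrying in place) lets other pending tasks make progress while the rate-limited one waits.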
Storing the model on `promptVariant` is problematic because it isn't always in sync with the actual prompt definition. I'm removing it for now to see if we can get away with that; we may have to add it back later if this causes trouble.
Added `cost` to `modelOutput` as well so we can cache it, which is important because cost calculations differ between API providers.
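A minimal sketch of that idea, computing the cost once at generation time and caching it on the output record. The `ModelOutput` shape, `PRICING` table, and function names are hypothetical stand-ins, not the real schema, and the prices are made up:

```typescript
// Hypothetical output shape -- illustrative only, not the real schema.
interface ModelOutput {
  output: string;
  inputTokens: number;
  outputTokens: number;
  cost: number; // cached at write time, since pricing differs per provider
}

// Per-provider pricing in USD per 1K tokens; numbers are invented.
const PRICING: Record<string, { input: number; output: number }> = {
  providerA: { input: 0.0015, output: 0.002 },
  providerB: { input: 0.008, output: 0.024 },
};

function computeCost(
  provider: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICING[provider];
  if (!p) throw new Error(`Unknown provider: ${provider}`);
  return (inputTokens / 1000) * p.input + (outputTokens / 1000) * p.output;
}

// Store cost alongside the output so later reads don't need to know
// which provider (or which price table) produced it.
function recordOutput(
  provider: string,
  output: string,
  inputTokens: number,
  outputTokens: number,
): ModelOutput {
  return {
    output,
    inputTokens,
    outputTokens,
    cost: computeCost(provider, inputTokens, outputTokens),
  };
}
```

Caching the number rather than recomputing it also keeps historical rows stable if a provider's prices change later.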
* Prevent zoom in on iOS
* Expand function return code background to fill cell
* Keep OutputStats on far right of cells
* Continue polling prompt stats while cells are retrieving from LLM
* Add comment to _document.tsx
* Fix prettier
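The "continue polling" items above amount to a loop that refetches stats until every eval reports complete. The `fetchStats` callback and `Stats` shape below are assumptions for illustration, not the project's actual API:

```typescript
// Hypothetical stats shape -- the real endpoint's fields will differ.
interface Stats {
  complete: number;
  total: number;
}

// Poll until all evals report complete, then resolve with the final stats.
async function pollUntilComplete(
  fetchStats: () => Promise<Stats>,
  intervalMs = 500,
): Promise<Stats> {
  while (true) {
    const stats = await fetchStats();
    if (stats.complete >= stats.total) return stats;
    // Not done yet: wait before the next poll to avoid hammering the server.
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```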