OpenPipe-llm/Dockerfile
Kyle Corbitt · ded6678e97 · Prep for more model providers
Adds a `modelProvider` field to `promptVariants`; for now it's set to "openai/ChatCompletion" for every variant.
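
Roughly, the new variant shape looks like this (a TypeScript sketch; the type and field names besides `modelProvider` are illustrative, not copied from the schema):

// The only provider id in use so far; more will be added later.
type ModelProviderId = "openai/ChatCompletion";

interface PromptVariant {
  id: string;
  modelProvider: ModelProviderId; // new column; all existing rows get this value
  // ...the other existing columns are unchanged
}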

Adds a `modelProviders/` directory where we can define and store pluggable model providers. Currently just OpenAI. Not everything is pluggable yet; notably, the code that actually generates completions hasn't been migrated to this setup.
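
A minimal sketch of what a module in `modelProviders/` might export; the interface and method names here are assumptions for illustration, not the actual API:

// Hypothetical shape of a pluggable provider: each one declares an id and
// knows how to validate configs written against it. Completion generation
// stays optional because that code hasn't been migrated yet.
interface ModelProvider<TConfig, TOutput> {
  id: string; // e.g. "openai/ChatCompletion"
  parseConfig(raw: unknown): TConfig;
  getCompletion?(config: TConfig): Promise<TOutput>;
}

type OpenAIChatConfig = {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
};

const openaiChatCompletion: ModelProvider<OpenAIChatConfig, string> = {
  id: "openai/ChatCompletion",
  // A real implementation would validate the shape, e.g. with zod.
  parseConfig: (raw) => raw as OpenAIChatConfig,
};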

Does substantial work to get the types working. Prompts are now defined with a function `definePrompt(modelProvider, config)` instead of `prompt = config`, and a script is included to migrate old prompt definitions.
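
For illustration, a sketch of the before/after shape of a prompt definition; the exact `definePrompt` implementation is inferred from the description above, not the actual source:

// Before: a prompt file simply assigned a bare config object.
// prompt = { model: "gpt-3.5-turbo", messages: [...] };

// After: the provider is named explicitly, which lets the config be
// type-checked against that provider's expected shape.
function definePrompt<TConfig>(modelProvider: string, config: TConfig) {
  return { modelProvider, config };
}

const prompt = definePrompt("openai/ChatCompletion", {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});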

This is still partial work, but the diff is large enough that I want to get it in. I don't think anything is broken, but I haven't tested thoroughly.
2023-07-20 14:49:22 -07:00

# Adapted from https://create.t3.gg/en/deployment/docker#3-create-dockerfile
FROM node:20.1.0-bullseye AS base
RUN yarn global add pnpm

# DEPS: install node_modules in a layer that is only invalidated when the
# lockfile (or Prisma schema) changes.
FROM base AS deps
WORKDIR /app
# Copy the Prisma schema before install so install-time generation
# (e.g. a postinstall `prisma generate`) can see it.
COPY prisma ./
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile

# BUILDER: compile the Next.js app.
FROM base AS builder
# Include all NEXT_PUBLIC_* env vars here; Next.js inlines them at build time.
ARG NEXT_PUBLIC_POSTHOG_KEY
ARG NEXT_PUBLIC_SOCKET_URL
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN SKIP_ENV_VALIDATION=1 pnpm build

# RUNNER: final image that serves the built app.
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
COPY --from=builder /app/ ./
EXPOSE 3000
ENV PORT=3000
# Start the app via the production entrypoint script (exec form, so the
# script receives signals directly).
CMD ["/app/run-prod.sh"]