diff --git a/.env.example b/.env.example
index 739d930..940a45c 100644
--- a/.env.example
+++ b/.env.example
@@ -11,7 +11,8 @@
# Prisma
# https://www.prisma.io/docs/reference/database-reference/connection-urls#env
-DATABASE_URL="postgresql://username:password@localhost:5432/prompt-lab?schema=public"
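+# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE?schema=SCHEMA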
+DATABASE_URL="postgresql://postgres:postgres@localhost:5432/prompt-lab?schema=public"
-# OpenAI
+# OpenAI API key. Instructions on generating a key can be found here:
+# https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key
OPENAI_API_KEY=""
\ No newline at end of file
diff --git a/README.md b/README.md
index 18c5349..d01bd70 100644
--- a/README.md
+++ b/README.md
@@ -2,27 +2,33 @@
Prompt Lab is a flexible playground for comparing and optimizing LLM prompts. It lets you quickly generate, test and compare candidate prompts with realistic sample data.
-
+
Currently there's a public playground available at [https://promptlab.corbt.com/](https://promptlab.corbt.com/), but the recommended approach is to [run locally](#running-locally).
## High-Level Features
- - **Configure Multiple Prompts** - Set up multiple prompt configurations and compare their output side-by-side. Each configuration can be configured independently.
+**Configure Multiple Prompts**
+Set up multiple prompt configurations and compare their output side-by-side. Each configuration can be adjusted independently.
- - **Visualize Responses** - Inspect prompt completions side-by-side.
+**Visualize Responses**
+Inspect prompt completions side-by-side.
- - **Test Many Inputs** - Prompt Lab lets you *template* a prompt. Use the templating feature to run the prompts you're testing against many potential inputs for broader coverage of your problem space than you'd get with manual testing.
+**Test Many Inputs**
+Prompt Lab lets you *template* a prompt. Use the templating feature to run the prompts you're testing against many potential inputs, giving you broader coverage of your problem space than you'd get with manual testing.
- - **🪄 Auto-generate Test Scenarios** - Prompt Lab includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!
+**🪄 Auto-generate Test Scenarios**
+Prompt Lab includes a tool to generate new test scenarios based on your existing prompts and scenarios. Just click "Autogenerate Scenario" to try it out!
- - **Prompt Validation and Typeahead** - We use OpenAI's OpenAPI spec to automatically provide typeahead and validate prompts.
+**Prompt Validation and Typeahead**
+We use OpenAI's OpenAPI spec to provide typeahead suggestions and validate prompts automatically.
-
+
- - **Function Call Support** - Natively supports [OpenAI function calls](https://openai.com/blog/function-calling-and-other-api-updates) on supported models.
+**Function Call Support**
+Natively supports [OpenAI function calls](https://openai.com/blog/function-calling-and-other-api-updates) on supported models.
-
+
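+For reference, a function-calling request to OpenAI's chat completions API looks roughly like this (the `get_current_weather` function is purely illustrative):
+
+```bash
+# Example request with a single illustrative function definition
+curl https://api.openai.com/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $OPENAI_API_KEY" \
+  -d '{
+    "model": "gpt-3.5-turbo-0613",
+    "messages": [{"role": "user", "content": "What is the weather in Boston?"}],
+    "functions": [{
+      "name": "get_current_weather",
+      "description": "Get the current weather for a given city",
+      "parameters": {
+        "type": "object",
+        "properties": {"city": {"type": "string"}},
+        "required": ["city"]
+      }
+    }]
+  }'
+```
+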
## Supported Models
Prompt Lab currently supports GPT-3.5 and GPT-4. Wider model support is planned.
@@ -35,6 +41,6 @@ Prompt Lab currently supports GPT-3.5 and GPT-4. Wider model support is planned.
4. Clone this repository: `git clone https://github.com/prompt-lab/prompt-lab`
5. Install the dependencies: `cd prompt-lab && pnpm install`
6. Create a `.env` file (`cp .env.example .env`) and enter your `OPENAI_API_KEY`.
-7. Update `DATABASE_URL` and run `pnpm prisma db push` to create the database.
-8. Start the app: `pnpm dev`
+7. If necessary, update `DATABASE_URL` to point to your Postgres instance (see the example below for starting one locally), then run `pnpm prisma db push` to create the database.
+8. Start the app: `pnpm dev`.
9. Navigate to [http://localhost:3000](http://localhost:3000)
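+
+If you don't already have a local Postgres instance, you can start one with Docker using credentials that match the default `DATABASE_URL` (adjust as needed):
+
+```bash
+# Start a local Postgres with the user/password expected by the default DATABASE_URL
+docker run -d --name prompt-lab-postgres \
+  -e POSTGRES_USER=postgres \
+  -e POSTGRES_PASSWORD=postgres \
+  -p 5432:5432 \
+  postgres:15
+```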