Improve docs on self-hosting and metrics. Fix formatting of markdown files.

Jeremy Dorn
2021-07-22 13:34:46 -05:00
parent a8dfb17d07
commit 4a0dfe451b
23 changed files with 636 additions and 503 deletions


@@ -17,23 +17,23 @@ diverse, inclusive, and healthy community.
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
@@ -106,7 +106,7 @@ Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within


@@ -17,7 +17,7 @@ docker-compose up -d
Then visit http://localhost:3000
[![Growth Book Screenshot](https://user-images.githubusercontent.com/1087514/124157227-26f05e00-da5e-11eb-9f73-3ceabc6ecf9e.png)](https://www.growthbook.io)
## Our Philosophy
@@ -28,7 +28,7 @@ Growth Book gives you the flexibility and power of a fully-featured in-house A/B
## Major Features
- ❄️ Pull data from Snowflake, Redshift, BigQuery, Mixpanel, Google Analytics, [and more](https://docs.growthbook.io/app/datasources)
- 🆎 Bayesian statistics engine with support for binomial, count, duration, and revenue metrics
- ⬇️ Drill down into A/B test results by browser, country, or any other attribute
- 💻 Client libraries for [React](https://github.com/growthbook/growthbook-react), [Javascript](https://github.com/growthbook/growthbook-js), [PHP](https://github.com/growthbook/growthbook-php), [Ruby](https://github.com/growthbook/growthbook-ruby), and [Python](https://github.com/growthbook/growthbook-python) with more coming soon
@@ -45,8 +45,6 @@ Create a free [Growth Book Cloud](https://app.growthbook.io) account to get star
### Open Source
Growth Book is built with React, NodeJS, and Python, bundled together in a single [Docker Image](https://hub.docker.com/r/growthbook/growthbook).
The included [docker-compose.yml](https://github.com/growthbook/growthbook/blob/main/docker-compose.yml) file contains the Growth Book App and a MongoDB instance (for storing cached experiment results and metadata):
```sh
@@ -57,7 +55,7 @@ docker-compose up -d
Then visit http://localhost:3000 to view the app.
Check out the full [Self-Hosting Instructions](https://docs.growthbook.io/self-host) for more details.
## Documentation and Support
@@ -71,7 +69,7 @@ We're here to help - and to make Growth Book even better!
## Contributors
We ❤️ all contributions, big and small!
Read [CONTRIBUTING.md](/CONTRIBUTING.md) for how to setup your local development environment.


@@ -21,4 +21,13 @@ module.exports = withMDX({
future: {
webpack5: true,
},
redirects: async () => {
return [
{
source: '/api-docs',
destination: '/app/api',
permanent: true,
},
]
}
});


@@ -11,6 +11,7 @@ import {
FaChevronRight,
FaGithub,
FaCloud,
FaHome,
} from "react-icons/fa";
type ModAppProps = AppProps & {
@@ -22,9 +23,13 @@ const navLinks = [
href: "/",
name: "Docs Home",
},
{
href: "/self-host",
name: "Self-Host",
},
{
href: "/app",
name: "Growth Book App",
name: "User Guide",
links: [
{
href: "/app/datasources",
@@ -44,7 +49,7 @@ const navLinks = [
beta: true,
},
{
href: "/api-docs",
href: "/app/api",
name: "API",
},
{
@@ -214,7 +219,7 @@ function App({
}`}
>
<Link href={link.href}>
<a className="block">{link.name}</a>
<a className="block whitespace-nowrap">{link.name}</a>
</Link>
</div>
@@ -231,7 +236,7 @@ function App({
key={j}
>
<Link href={sublink.href}>
<a className="block">
<a className="block whitespace-nowrap">
{sublink.name}
{sublink.beta ? (
<span className="bg-yellow-400 dark:bg-yellow-600 p-1 rounded text-xs ml-1">
@@ -255,7 +260,7 @@ function App({
<div className="flex max-w-3xl">
<div className="hidden md:block text-lg text-gray-600 dark:text-gray-400">
<a href="https://www.growthbook.io" className="mr-6">
<FaHome className="inline" /> Home
</a>
<a
href="https://github.com/growthbook/growthbook"
@@ -313,6 +318,22 @@ function App({
</div>
</nav>
<main className="p-5 flex-grow overflow-y-auto w-full">
{!currentIndex && (
<div className="md:hidden flex justify-center border-b border-gray-100 dark:border-gray-700 mb-4 pb-4 text-gray-600 dark:text-gray-400">
<a href="https://www.growthbook.io" className="mr-6">
<FaHome className="inline" /> Home
</a>
<a
href="https://github.com/growthbook/growthbook"
className="mr-6"
>
<FaGithub className="inline" /> GitHub
</a>
<a href="https://app.growthbook.io">
<FaCloud className="inline" /> Try for free
</a>
</div>
)}
<div className="prose prose-purple lg:prose-lg dark:prose-dark max-w-3xl w-full">
<div className="float-right ml-4 mb-4 hidden lg:block">
<a


@@ -45,21 +45,20 @@ The `status` field just mirrors the HTTP status code.
The `overrides` field has one entry per experiment with overrides that should take precedence over hard-coded values in your code.
- **status** - Either "draft", "running", or "stopped". Stopped experiments are only included in the response if a non-control variation won.
- **weights** - How traffic should be weighted between variations. Will add up to 1.
- **coverage** - A float from 0 to 1 (inclusive) which specifies what percent of users to include in the experiment.
- **groups** - An array of user groups who are eligible for the experiment
- **url** - A regex for which URLs the experiment should run on
- **force** - Force all users to see the specified variation index (`0` = control, `1` = first variation, etc.).
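For illustration, a response built only from the fields documented above might look like the following sketch (the experiment key and values are made up; real responses may contain additional fields):
```js
// Hypothetical response shape using only the fields documented above;
// the experiment key and values are illustrative, not real data.
const exampleResponse = {
  status: 200, // mirrors the HTTP status code
  overrides: {
    "my-experiment": {
      status: "running",
      weights: [0.5, 0.5],
      coverage: 0.8,
      groups: ["beta-testers"],
      url: "^/checkout",
    },
  },
};
```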
## Official Client Libraries
We offer official client libraries that work with these data structures in a few popular languages with more coming soon.
- [Javascript/Typescript](/lib/js)
- [React](/lib/react)
- [PHP](/lib/php)
- [Ruby](/lib/ruby)
- [Python](/lib/python)
- [Build your own](/lib/build-your-own)


@@ -6,15 +6,15 @@ You can use Growth Book without a Data Source, but the user experience is not as
Below are the currently supported Data Sources:
- Redshift
- Snowflake
- BigQuery
- ClickHouse
- AWS Athena
- Postgres
- PrestoDB (and Trino)
- Mixpanel
- Google Analytics
## Configuration Settings
@@ -22,15 +22,15 @@ To effectively use Growth Book, you'll need to tell us a little about the shape
### SQL Sources
To query SQL data warehouses, we need 3 core SELECT statements.
The default values assume you are using Segment to populate your data warehouse, although you can customize the SQL at any time.
Growth Book combines these queries as needed and adds filters/grouping on top of them using Common Table Expressions (CTEs).
Any time a query is run, you should see a `View Queries` link in the app to view the raw SQL sent to the data warehouse.
#### 1. Experiments Query
This SELECT statement is used to pull experiment variation assignments. There should be one row every time a user is put into an experiment along with the variation they were assigned.
Default value:
@@ -81,25 +81,25 @@ FROM
### Mixpanel
We query Mixpanel using JQL. We have sensible defaults for the event and property names, but you can change them if you need to.
- Experiments
- **View Experiments Event** - The name of the event you are firing when a user is put into a variation
- **Experiment Id Property** - The property name that stores the experiment tracking key
- **Variation Id Property** - The property name that stores the variation the user was assigned
- **Variation Id Format** - What format the variation id is stored in.
1. Numeric (0 = control, 1 = variation 1, etc.)
2. Unique String Keys (e.g. "blue", "random-uuid", etc.)
- Page Views
- **Page Views Event** - the name of the event you are firing for every page view on your site
- **URL Path Property** - in the event, the property name that stores the URL path for the pageview
- **User Agent Property** - In the event, the property name that stores the user agent for the pageview
## Connection Info
Connection info is encrypted twice - once within the app and again by the database when persisting to disk.
Growth Book only runs `SELECT` queries (or the equivalent for non-SQL data sources). We still always recommend creating read-only users with as few permissions as possible.
If you are using Growth Book Cloud (https://app.growthbook.io), make sure to whitelist the ip address `52.70.79.40` if applicable.
@@ -107,7 +107,7 @@ If you are using Growth Book Cloud (https://app.growthbook.io), make sure to whi
Unlike other database engines with their own user management system, Athena uses IAM for authentication.
We recommend creating a new IAM user with readonly permissions for Growth Book. The managed [Quick Sight Policy](https://docs.aws.amazon.com/athena/latest/ug/awsquicksightathenaaccess-managed-policy.html) is a good starting point.
For the S3 results url, we recommend naming your bucket with the prefix `aws-athena-query-results-`
@@ -121,6 +121,7 @@ It should contain the project_id, client_email, and private_key.
You must first create a Service Account in Mixpanel under your [Project Settings](https://mixpanel.com/settings/project#serviceaccounts).
To add the datasource in Growth Book, you will need:
1. The service account username
2. The service account secret
3. Your project id (found on the Project Settings Overview page)
@@ -130,8 +131,9 @@ To add the datasource in Growth Book, you will need:
Because of Google Analytics tracking limitations, a user can only be in a single experiment at a time. We highly recommend using a more full-featured data source for serious A/B testing.
We require 3 things to query the Google Analytics API:
1. OAuth Authorization
2. View ID (found in Admin -> View Settings)
3. Custom Dimension Index
When tracking experiment views, the custom dimension value must be formatted as `experiment-key:variation-index`. For example: `my-test:0` for the control and `my-test:1` for the 1st variation.
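For example, with the legacy analytics.js snippet you might set the dimension like this sketch (it assumes your experiment custom dimension is at index 1; adjust for your property and tracking setup):
```js
// Hypothetical analytics.js calls; "dimension1" depends on your Custom Dimension Index
ga("set", "dimension1", "my-test:0"); // experiment-key:variation-index
ga("send", "pageview");
```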


@@ -1,6 +1,6 @@
# Experiments
Experiments are the core of Growth Book. This page covers several topics:
1. Creating
2. Targeting
@@ -12,7 +12,7 @@ Experiments are the core of Growth Book. This page covers several topics:
When you create a new experiment, it starts out as a draft and remains fully editable until you start it.
Experiment drafts are a great place to collaborate between PMs, designers, and engineers.
PMs can spec out the requirements, designers can upload mockups and screenshots, and engineers can work on implementation.
@@ -31,7 +31,7 @@ All of these rules are evaluated locally in your app at runtime and no HTTP requ
## Implementation
There are generally two ways to implement experiments.
The first is using our [Visual Editor](/app/visual) and the second is using one of our [Client Libraries](/lib) (for Javascript, React, PHP, Ruby, or Python).
It's also possible to use a completely custom implementation (or another library like PlanOut).
@@ -39,19 +39,19 @@ The only requirement is that you track in your datasource when users are put int
## Starting and Stopping
When you start an experiment, you will be prompted for how you want to split traffic between the variations and who you want to roll the test out to.
When stopping an experiment, you'll be prompted to enter which variation (if any) won and why you are stopping the test.
### Client Library Integration
If you are using the Client Libraries to implement experiments, there are some additional steps you must take.
The Client Libraries never communicate with the Growth Book servers. That means as soon as you deploy the A/B test code to production, people will start getting put into the experiment immediately and the experiment will continue until you remove the code and do another deploy.
This separation has huge performance and reliability benefits (if Growth Book goes down, it has no effect on your app), but it can be a bit unintuitive when you press the "Stop" button in the UI and people continue to be put into the experiment.
To get the best of both worlds, you can store a cached copy of experiments in Redis (or similar) and keep it up-to-date either by periodically hitting the [Growth Book API](/app/api) or setting up a [Webhook Endpoint](/app/webhooks). Then your app can query the cache at runtime to get the latest experiment statuses.
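One simplified sketch of this approach is shown below: poll the API on an interval and keep the latest overrides in memory (swap in Redis or similar for multi-process apps). The endpoint URL and API key are placeholders, not real values; the real endpoint and auth come from the API docs.
```js
// Simplified caching sketch; URL and key below are placeholders only.
const fetch = require("node-fetch");

let cachedOverrides = {};

async function refreshOverrides() {
  const res = await fetch("https://example.com/your-growthbook-api-endpoint", {
    headers: { Authorization: "Bearer YOUR_API_KEY" },
  });
  const data = await res.json();
  cachedOverrides = data.overrides || {};
}

refreshOverrides(); // prime the cache at startup
setInterval(refreshOverrides, 60 * 1000); // then refresh every minute

// At request time, pass cachedOverrides into your client library's context
```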
## Results
@@ -63,18 +63,18 @@ Each row of this table is a different metric.
**Value** is the conversion rate or average value per user. In small print you can see the raw numbers used to calculate this.
**Chance to Beat Control** tells you the probability that the variation is better. Anything above 95% is highlighted green indicating a very clear winner. Anything below 5% is highlighted red, indicating a very clear loser. Anything in between is grayed out indicating it's inconclusive. If that's the case, there's either no measurable difference or you haven't gathered enough data yet.
**Percent Change** shows how much better/worse the variation is compared to the control. It is a probability density graph and the thicker the area, the more likely the true percent change will be there.
As you collect more data, the tails of the graphs will shorten, indicating more certainty around the estimates.
### Sample Ratio Mismatch (SRM)
Every experiment automatically checks for a Sample Ratio Mismatch and will warn you if found. This happens when you expect a certain traffic split (e.g. 50/50) but you see something significantly different (e.g. 46/54). We only show this warning if the p-value is less than `0.001`, which means it's extremely unlikely to occur by chance.
![SRM Warning](/images/srm.png)
Like the warning says, you shouldn't trust the results since they are likely misleading. Instead, find and fix the source of the bug and restart the experiment.
### Guardrails
@@ -83,7 +83,7 @@ For example, if you are trying to improve page load times, you may add revenue a
to inadvertently harm it.
Guardrail results show up beneath the main table of metrics and you can click on one to expand it and show more info. They are colored based on "Chance of Being Worse", which is just the complement of "Chance to Beat Control". If there are more than 2 variations, the max value is used to determine the overall color.
A "Chance of Being Worse" less than 65% is green and of no concern. Between 65% and 90% is yellow and should be watched as more data comes in. Above 90% is red and you may consider stopping the experiment. If we don't have enough data to accurately predict the "Chance of Being Worse", we will color the metric grey.
A "Chance of Being Worse" less than 65% is green and of no concern. Between 65% and 90% is yellow and should be watched as more data comes in. Above 90% is red and you may consider stopping the experiment. If we don't have enough data to accurately predict the "Chance of Being Worse", we will color the metric grey.
![Guardrails](/images/guardrails.png)
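The color thresholds described above can be summarized in a small sketch (a hypothetical helper, not code from the app):
```js
// Maps "Chance of Being Worse" to the colors described above
function guardrailColor(chanceOfBeingWorse) {
  if (chanceOfBeingWorse == null) return "grey"; // not enough data yet
  if (chanceOfBeingWorse < 0.65) return "green"; // no concern
  if (chanceOfBeingWorse <= 0.9) return "yellow"; // watch as more data comes in
  return "red"; // consider stopping the experiment
}
```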
@@ -92,5 +92,5 @@ A "Chance of Being Worse" less than 65% is green and of no concern. Between 65%
If you have defined dimensions for your users, you can use the **Dimension** dropdown to drill down into your results.
This is very useful for debugging (e.g. if Safari is down, but the other browsers are fine, you may have an implementation bug).
Be careful. The more metrics and dimensions you look at, the more likely you are to see a false positive. If you find something that looks
surprising, it's often worth a dedicated follow-up experiment to verify that it's real.


@@ -1,126 +1,9 @@
# User Guide
The Growth Book App is a web application to manage your A/B tests and analyze results.
The following sections will walk you through the various parts of the Growth Book app.
## Installation
<div className="bg-blue-200 dark:bg-blue-900 py-2 px-4 rounded flex">
<div className="text-yellow-500 pt-1 mr-3">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" style={{fill:"currentColor"}}><path d="M12 .587l3.668 7.568 8.332 1.151-6.064 5.828 1.48 8.279-7.416-3.967-7.417 3.967 1.481-8.279-6.064-5.828 8.332-1.151z"/></svg>
</div>
<div>Don't want to install or host the app yourself? <a href="https://app.growthbook.io">Growth Book Cloud</a> is a fully managed version that's free to get started.</div>
<div className="text-yellow-500 pt-1 ml-3">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" style={{fill:"currentColor"}}><path d="M12 .587l3.668 7.568 8.332 1.151-6.064 5.828 1.48 8.279-7.416-3.967-7.417 3.967 1.481-8.279-6.064-5.828 8.332-1.151z"/></svg>
</div>
</div>
Growth Book consists of a NextJS front-end, an ExpressJS API, and a Python stats engine. Everything is bundled together in a single [Docker Image](https://hub.docker.com/r/growthbook/growthbook).
In addition to the app itself, you will also need a MongoDB instance to store login credentials, cached experiment results, and metadata.
You can use **docker-compose** to get started quickly:
```yml
# docker-compose.yml
version: "3"
services:
mongo:
image: "mongo:latest"
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
growthbook:
image: "growthbook/growthbook:latest"
ports:
- "3000:3000"
- "3100:3100"
depends_on:
- mongo
environment:
- MONGODB_URI=mongodb://root:password@mongo:27017/
```
Then, just run `docker-compose up -d` to start everything and view the app at [http://localhost:3000](http://localhost:3000)
## Configuration
The Growth Book App is configured via environment variables. Below are all of the configuration options:
- **NODE_ENV** - Set to "production" to turn on additional optimizations and API request logging
- **JWT_SECRET** - Auth signing key (use a long random string)
- **ENCRYPTION_KEY** - Data source credential encryption key (use a long random string)
- **APP_ORIGIN** - Used for CORS (default set to http://localhost:3000)
- **MONGODB_URI** - The MongoDB connection string
- **DISABLE_TELEMETRY** - We collect anonymous telemetry data to help us improve Growth Book. Set to "true" to disable.
- **API_HOST** - (default set to http://localhost:3100)
- Email SMTP Settings:
- **EMAIL_ENABLED** ("true" or "false")
- **EMAIL_HOST**
- **EMAIL_PORT**
- **EMAIL_HOST_USER**
- **EMAIL_HOST_PASSWORD**
- **EMAIL_USE_TLS** ("true" or "false")
- Google OAuth Settings (only if using Google Analytics as a data source)
- **GOOGLE_OAUTH_CLIENT_ID**
- **GOOGLE_OAUTH_CLIENT_SECRET**
### Changing the Ports
The Docker image exposes 2 ports: `3000` for the front-end and `3100` for the API.
If you need to change these, you can use Docker port mappings. You'll also need to set the environment variables **API_HOST** and **APP_ORIGIN** to include your new ports.
Here's an example of switching to ports `4000` and `4100` in `docker-compose.yml`:
```yml
growthbook:
image: "growthbook/growthbook:latest"
ports:
- "4000:3000"
- "4100:3100"
depends_on:
- mongo
environment:
- APP_ORIGIN=http://localhost:4000
- API_HOST=http://localhost:4100
...
```
Now your app would be available on [http://localhost:4000](http://localhost:4000)
### Volumes
Images uploaded in the Growth Book app are stored in `/usr/local/src/app/packages/back-end/uploads`. We recommend mounting a volume there so images can be persisted.
Also, if you are running MongoDB through Docker, you will need to mount a volume to `/data/db` to persist data between container restarts. In production, we highly suggest just using a hosted solution like [MongoDB Atlas](https://www.mongodb.com/cloud/atlas) instead.
### Commands
These are the possible commands you can use:
- `["yarn", "start"]` - default, start both front-end and back-end in parallel
- `["yarn", "workspace", "front-end", "run"]` - run only the front-end
- `["yarn", "workspace", "back-end", "run"]` - run only the back-end
## Docker Tags
Builds are published automatically from the [GitHub repo](https://github.com/growthbook/growthbook) main branch.
The most recent commit is tagged with `latest`. GitHub Releases are also tagged (e.g. `0.2.1`).
If you need to reference the image for a specific git commit for any reason, you can use the git shorthash tag (e.g. `git-41278e9`).
### Updating to Latest
If you are using docker-compose, you can update with:
```bash
docker-compose pull growthbook
docker-compose stop growthbook
docker-compose up -d --no-deps growthbook
```
## Using the App
The Growth Book App consists of [Data Sources](/app/datasources), [Metrics](/app/metrics), and [Experiments](/app/experiments). Read on to learn more about these and how to use them.
- Connect to your [Data Sources](/app/datasources) so we can pull experiment results automatically.
- Define your [Metrics](/app/metrics) that you can use as goals and guardrails for experiments.
- Create an [Experiment](/app/experiments) and view results.
- Enable our [Visual Editor](/app/visual) so non-technical teammates can launch experiments without writing code.
- Integrate with our [API](/app/api) or [Webhooks](/app/webhooks) for a more seamless experience.


@@ -1,19 +1,19 @@
# Metrics
Metrics are what your experiments are trying to improve (or at least not hurt). Growth Book has a very flexible and powerful way to define metrics.
## Conversion Types
Metrics can have different units and statistical distributions. Below are the ones Growth Book supports:
| Conversion Type | Description | Example |
| --------------- | ---------------------------------------- | ---------------- |
| binomial | A simple yes/no conversion | Created Account |
| count | Counts multiple conversions per user | Pages per Visit |
| duration | How much time something takes on average | Time on Site |
| revenue | The revenue gained/lost on average | Revenue per User |
Need a metric type we don't support yet? Let us know!
## Query settings
@@ -26,6 +26,7 @@ If your data source supports SQL, this is the preferred way to define metrics. Y
Your SELECT statement should return one row per "conversion event". This may be a page view, a purchase, a session, or something else.
Example:
```sql
SELECT
user_id as user_id,
@@ -54,4 +55,53 @@ FROM
The query builder prompts you for things such as table/column names and constructs a query behind the scenes.
For non-SQL data sources (e.g. Google Analytics, Mixpanel), this is the only option. Otherwise, if your data source supports it, inputting raw SQL is easier and more flexible.
## Behavior
The behavior tab lets you tweak how the metric is used in experiments. Depending on the metric type and datasource you chose, some or all of the following will be available:
### What is the Goal?
For the vast majority of metrics, the goal is to increase the value. But for some metrics like "Bounce Rate" and "Page Load Time", lower is actually better.
Setting this to "decrease" basically inverts the "Chance to Beat Control" value in experiment results so that "beating" the control means decreasing the value. This will also reverse the red and green coloring on graphs.
### Capped Value
Large outliers can have an outsized effect on experiment results. For example, if your normal order size is $10 and someone happens to make a $5000 order, whatever variation that person is in will automatically "win" any experiment even if it had no effect on their behavior.
If set above zero, all values will be capped at this value. So in the above example, if you set the cap to $100, the $5000 purchase will still be counted, but only as $100 and will have a much smaller effect on the results. It will still give a boost to whatever variation the person is in, but it won't completely dominate all of the other orders and is unlikely to make a winner just on its own.
We recommend setting this at the 99th percentile in most cases.
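A toy example with made-up order values shows why capping helps:
```js
const orders = [10, 12, 9, 5000]; // one extreme outlier
const capped = orders.map((v) => Math.min(v, 100)); // capped at $100

const mean = (arr) => arr.reduce((sum, v) => sum + v, 0) / arr.length;
console.log(mean(orders)); // 1257.75 - dominated by the single $5000 order
console.log(mean(capped)); // 32.75 - the outlier still counts, but no longer dominates
```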
### Converted Users Only
This setting controls the denominator for the metric.
When set to `No` (the default), the average metric value is calculated as `(total value) / (number of users)`.
When set to `Yes`, the denominator only includes users who have a non-null value for this metric. So the average value is calculated as `(total value) / (number of users with non-null value)`.
The most common use case is with Revenue. Setting to "No" gives you `Revenue per User`. Setting to "Yes" gives you `Average Order Value`.
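A toy example with made-up numbers illustrates the two denominators:
```js
const revenuePerUser = [50, null, 30, null]; // null = user never purchased
const total = revenuePerUser.reduce((sum, v) => sum + (v || 0), 0); // 80
const allUsers = revenuePerUser.length; // 4
const convertedUsers = revenuePerUser.filter((v) => v !== null).length; // 2

console.log(total / allUsers); // 20 -> "No": Revenue per User
console.log(total / convertedUsers); // 40 -> "Yes": Average Order Value
```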
### In an Experiment, start counting...
This controls when metrics start counting for a user.
The vast majority of the time, you only want to count metrics **after** the user is assigned a variation (the default). So if someone clicks a button and then later views your experiment, that previous click doesn't count.
If you instead pick `At the start of the user's session`, we will also include conversions that happen up to 30 minutes **before** being put into the experiment.
Why would you ever want to do this?
Imagine the average person stays on your site for 60 seconds and your experiment can trigger at any time.
If you just look at the average time spent after the experiment, the numbers will lose a lot of meaning. A value of `20 seconds` might be horrible if it happened to someone after only 5 seconds on your site since they are staying a lot less time than average. But, that same `20 seconds` might be great if it happened to someone after 55 seconds since their visit is a lot longer than usual.
Over time, these things will average out and you can eventually see patterns, but you need an enormous amount of data to get to that point.
If instead, you consider the entire session duration, you can reduce the amount of data you need to see patterns. For example, you may see your average go from 60 seconds to 65 seconds.
Keep in mind, these two things are answering slightly different questions.
`How much longer do people stay after viewing the experiment?` vs `How much longer is an average session that includes the experiment?`.
The first question is more direct and often a more strict test of your hypothesis, but it may not be worth the extra running time.


@@ -1,6 +1,6 @@
# Visual Editor (beta)
The Visual Editor enables non-technical users to create A/B tests without writing code.
You can decide on a test-by-test basis whether you want to write code using the Client Libraries or use the Visual Editor.
@@ -12,17 +12,17 @@ Once enabled, you will see the `<script>` tags that must be added to the site yo
```html
<script>
window.GROWTHBOOK_CONFIG = {
// Optional logged-in user id
userId: "123",
// Impression tracking callback (e.g. Segment, Mixpanel, GA)
track: function (experimentId, variationId) {
analytics.track("Experiment Viewed", {
experimentId,
variationId,
});
},
};
</script>
<script async src="http://localhost:3100/js/key_abc123.js"></script>
```
@@ -31,17 +31,17 @@ window.GROWTHBOOK_CONFIG = {
The following config options are allowed:
- **userId** - The logged-in user id for the current user
- **anonId** - An anonymous identifier for the user (deviceId, cookieId, sessionId, etc.)
- **groups** - An array of groups the user belongs to. Used for targeting. (e.g. `internal`, `qa`, `premium`, etc.)
- **enabled** - If set to `false`, no experiments will be run
- **track** - A function that will be called every time the user is put into an experiment. Use this to track the impression in your analytics tool (GA, Segment, Mixpanel, or something else).
#### User/Anon Ids
If no `anonId` is set in the config options, a new one will be generated and persisted in localStorage.
Experiments can target based on a login state of `Anonymous` or `User`.
If a user is missing the specified id type, they will be excluded from the experiment.
This login state also determines which id will be used for assigning users to variations.
@@ -50,6 +50,7 @@ This login state also determines which id will be used for assinging users to va
If you have a Single Page Application (SPA) that does client-side routing, you'll need to refresh experiments on a route change.
For example, with Next.js you can add the following to your `pages/_app.js` component:
```ts
const router = useRouter();
@@ -68,7 +69,7 @@ When creating an experiment, you can choose to create a `Code` experiment (using
![Experiment Form](/images/experiment-type-form.png)
After creating a Visual experiment, you can open the editor. Here you can inject CSS styles to the page and specify DOM mutations (e.g. changing innerHTML or adding a class to an element).
![Visual Editor](/images/visual-editor.png)
@@ -78,7 +79,7 @@ There is even a built-in screenshot tool if you want an easy way to capture what
Once you have a draft Visual experiment, you can preview variations by adding a querystring to your site.
For example, `http://example.com/?my-experiment=1`. The querystring key is the experiment "tracking key" and the value is the variation number (`0` is the control, `1` is the first variation, etc.)
Until an experiment is moved out of the "draft" phase and started, this is the only way to view it on your site.
@@ -87,5 +88,5 @@ Until an experiment is moved out of the "draft" phase and started, this is the o
When an experiment is started, users are immediately assigned variations.
If an experiment is stopped and a winner is declared, the winning variation will be rolled out to 100% of users.
If you later go back and implement the winning variation in code, you can **archive** the experiment in Growth Book
and it will stop being included in the javascript code.


@@ -2,7 +2,7 @@
Webhooks are one way to keep cached experiment configuration up-to-date on your servers.
With the [API](/app/api), your servers pull experiment overrides from Growth Book in a cronjob (or similar).
With **Webhooks**, Growth Book pushes experiment overrides to your servers as soon as they change.
@@ -41,12 +41,12 @@ Here's an example payload:
The `overrides` field has one entry per experiment with overrides that should take precedence over hard-coded values in your code.
- **status** - Either "draft", "running", or "stopped". Stopped experiments are only included in the response if a non-control variation won.
- **weights** - How traffic should be weighted between variations. Will add up to 1.
- **coverage** - A float from 0 to 1 (inclusive) which specifies what percent of users to include in the experiment.
- **groups** - An array of user groups who are eligible for the experiment
- **url** - A regex for which URLs the experiment should run on
- **force** - Force all users to see the specified variation index (`0` = control, `1` = first variation, etc.).
## VPCs and Firewalls
@@ -60,22 +60,22 @@ Here is example code in NodeJS for verifying the signature. Other languages shou
```js
const crypto = require("crypto");
const express = require("express");
const bodyParser = require("body-parser");
// Retrieve from Growth Book settings
const GROWTHBOOK_WEBHOOK_SECRET = "abc123";
const app = express();
app.post("/webhook", bodyParser.raw(), (req, res) => {
const payload = req.body;
const sig = req.get("X-GrowthBook-Signature");
const computed = crypto
.createHmac("sha256", GROWTHBOOK_WEBHOOK_SECRET)
.update(req.body)
.digest("hex");
if (!crypto.timingSafeEqual(Buffer.from(computed), Buffer.from(sig))) {
throw new Error("Signatures do not match!");


@@ -11,14 +11,13 @@ Growth Book gives you the flexibility and power of a fully-featured in-house A/B
We believe A/B testing should sit on top of your **existing data and metrics**, wherever they live and however they are defined.
We believe in **data transparency**. Growth Book
tells you exactly how your data is being queried and our stats engine is developed out in the open on [GitHub](https://github.com/growthbook/growthbook/tree/main/packages/back-end/src/python/bayesian).
We are fanatical about **performance**. Our [Client Libraries](/lib) are crazy fast and light-weight and evaluate everything locally with no network requests.
We believe good ideas come from everywhere. Growth Book gives you **access controls** and guardrails to safely open up A/B testing to your entire organization.
## Documentation
Use the menu or the Previous/Next links at the bottom to navigate these docs.


@@ -1,4 +1,4 @@
# Build Your Own Client Library
This guide is meant for library authors looking to build a Growth Book client library in a currently unsupported language.
@@ -18,14 +18,14 @@ Defines the experimental context (attributes used for variation assignment and t
At a minimum, the context should support the following optional properties:
- **enabled** (`boolean`) - Switch to globally disable all experiments. Default true.
- **user** (`Map`) - Map of user attributes that are used to assign variations
- **groups** (`Map`) - A map of which groups the user belongs to (key is the group name, value is boolean)
- **url** (`string`) - The URL of the current page
- **overrides** (`Map`) - Override properties of specific experiments (used for Remote Config)
- **forcedVariations** (`Map`) - Force specific experiments to always assign a specific variation (used for QA)
- **qaMode** (`boolean`) - If true, random assignment is disabled and only explicitly forced variations are used.
- **trackingCallback** (`function`) - A function that takes `experiment` and `result` as arguments.
An example of a `user`:
@@ -43,7 +43,7 @@ An example of `trackingCallback` in javascript:
function track(experiment, result) {
analytics.track("Experiment Viewed", {
experimentId: experiment.key,
variationId: result.variationId,
});
}
```
@@ -52,16 +52,16 @@ function track(experiment, result) {
Defines a single experiment:
- **key** (`string`) - The globally unique tracking key for the experiment
- **variations** (`any[]`) - The different variations to choose between
- **weights** (`number[]`) - How to weight traffic between variations. Must add to 1.
- **status** (`string`) - "running" is the default and always active. "draft" is only active during QA and development. "stopped" is only active when forcing a winning variation to 100% of users.
- **coverage** (`number`) - What percent of users should be included in the experiment (between 0 and 1, inclusive)
- **url** (`RegExp`) - Users can only be included in this experiment if the current URL matches this regex
- **include** (`() => boolean`) - A callback that returns true if the user should be part of the experiment and false if they should not be
- **groups** (`string[]`) - Limits the experiment to specific user groups
- **force** (`number`) - All users included in the experiment will be forced into the specific variation index
- **hashAttribute** (`string`) - What user attribute should be used to assign variations (defaults to `id`)
The only required properties are `key` and `variations`. Everything else is optional.
@@ -69,11 +69,11 @@ The only required properties are `key` and `variations`. Everything else is opti
The result of running an Experiment given a specific Context
- **inExperiment** (`boolean`) - Whether or not the user is part of the experiment
- **variationId** (`string`) - The array index of the assigned variation
- **value** (`any`) - The array value of the assigned variation
- **hashAttribute** (`string`) - The user attribute used to assign a variation
- **hashValue** (`string`) - The value of that attribute
The `variationId` and `value` should always be set, even when `inExperiment` is false.
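For illustration, a result for a user who was excluded from the experiment might look like this sketch (values are made up):
```js
const result = {
  inExperiment: false,
  variationId: 0, // the array index of the default (control) variation
  value: "A", // the corresponding entry from experiment.variations
  hashAttribute: "id",
  hashValue: "123",
};
```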
@@ -101,11 +101,16 @@ There are a bunch of ordered steps to run an experiment:
11. If `experiment.force` is set, return immediately (not in experiment, variationId `experiment.force`)
12. If `experiment.status` is "stopped", return immediately (not in experiment, variationId `0`)
13. If `context.qaMode` is true, return immediately (not in experiment, variationId `0`)
14. Compute a hash using the [FowlerNollVo](https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function) algorithm (specifically fnv32-1a); a minimal sketch of this hash follows the list
```js
n = (fnv32_1a(id + experiment.key) % 1000) / 1000
```
15. Apply coverage to weights:
```js
// Default weights to an even split
weights = experiment.weights
@@ -117,7 +122,10 @@ There are a bunch of ordered steps to run an experiment:
// Multiply each weight by the coverage (or 1 if not set)
weights = weights.map(w => w*(experiment.coverage || 1));
```
16. Loop through `weights` until you reach the hash value
```js
cumulative = 0
assigned = -1
@@ -129,9 +137,10 @@ There are a bunch of ordered steps to run an experiment:
}
}
```
17. If not assigned a variation (`assigned === -1`), return immediately (not in experiment, variationId `0`)
18. Fire `context.trackingCallback` if set and the combination of hashAttribute, hashValue, experiment.key, and variationId has not been tracked before
19. Return (**in experiment**, assigned variation)
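Here is a minimal JavaScript sketch of the 32-bit FNV-1a hash used in step 14; official libraries may differ in the details:
```js
// fnv32-1a: XOR each character code into the hash, then multiply by the FNV prime
function fnv32_1a(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept in 32-bit range
  }
  return hash;
}

// Step 14: map the user id + experiment key to a number in [0, 1)
const n = (fnv32_1a("123" + "my-experiment") % 1000) / 1000;
```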
## Remote Config
@@ -152,12 +161,13 @@ The overrides are typically stored in a database or cache so there should be an
```
The full list of supported override properties is:
- weights
- status
- force
- coverage
- groups
- url
Note that the above list specifically does not include `variations`. The only way to change the variations is to change the code.
This restriction makes testing and maintaining code much much easier.
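For illustration, an imported overrides map keyed by experiment tracking key might look like this sketch (keys and values are made up):
```js
const overrides = {
  "my-experiment": {
    status: "running",
    weights: [0.5, 0.5],
    coverage: 1,
    url: "^/checkout",
  },
};
```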
@@ -173,10 +183,10 @@ It should be very simple to run a basic A/B test:
```js
result = growthbook.run({
key: "my-experiment",
variations: ["A", "B"]
})
variations: ["A", "B"],
});
print(result.value) // "A" or "B"
print(result.value); // "A" or "B"
```
And it should feel natural to scale up to more complex use cases:
@@ -186,22 +196,22 @@ And it should feel natural to scale up to more complex use cases:
result = growthbook.run({
key: "complex-experiment",
variations: [
{color: "blue", size: "small"},
{color: "green", size: "large"}
{ color: "blue", size: "small" },
{ color: "green", size: "large" },
],
weights: [0.8, 0.2],
coverage: 0.5,
groups: ["beta-testers"]
})
groups: ["beta-testers"],
});
print(result.value.color, result.value.size); // "blue,small" OR "green,large"
```
### Type Hinting
Most languages have some sort of strong typing support, whether in the language itself or via annotations. This helps to reduce errors and is highly encouraged for client libraries.
If possible, use generics to type the return value. If `experiment.variations` is type `T[]`, then `result.value` should be type `T`.
If your type system supports specifying a minimum array length, it's best to type `experiment.variations` as requiring at least 2 elements.
@@ -213,7 +223,7 @@ If your language has support for a native regex type, you should use that instea
However, in all languages, `context.overrides` needs to remain serializable to JSON, so strings must be used there. When importing overrides from JSON, you would convert the strings to actual regex objects.
Since the regex deals with URLs, make sure you are escaping `/` if needed. The string value `"^/post/[0-9]+"` should work as expected and not throw an error.
### Handling Errors
@@ -222,29 +232,31 @@ The general rule is to be strict in development and lenient in production.
You can throw exceptions in development, but someone's production app should never crash because of a call to `growthbook.run`.
For the below edge cases in production, just act as if the problematic property didn't exist and ignore errors:
- `experiment.weights` is a different length from `experiment.variations`
- `experiment.weights` adds up to something other than 1
- `experiment.coverage` is greater than 1
- `context.trackingCallback` throws an error
- URL querystring specifies an invalid variation index
For the below edge cases in production, the experiment should be disabled (everyone gets assigned variation `0`):
- `experiment.url` is an invalid regex
- `experiment.coverage` is less than 0
- `experiment.force` specifies an invalid variation index
- `context.forcedVariations` specifies an invalid variation index
- `experiment.include` throws an error
- `experiment.status` is set to an unknown value
- `experiment.hashAttribute` is an empty string
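For example, the `context.trackingCallback` rule in the first list above could be handled with a guard like this sketch (the environment check is just one possible way to distinguish development from production):
```js
function safeTrack(context, experiment, result) {
  try {
    context.trackingCallback(experiment, result);
  } catch (e) {
    // Strict in development, lenient in production
    if (process.env.NODE_ENV !== "production") throw e;
  }
}
```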
### Subscriptions
Sometimes it's useful to be able to "subscribe" to a GrowthBook instance and be alerted every time `growthbook.run` is called. This is different from the tracking callback since it also fires when a user is _not_ included in an experiment.
```js
growthbook.subscribe(function (experiment, result) {
// do something
});
```
It's best to only re-fire the callbacks for an experiment if the result has changed. That means either the `inExperiment` flag has changed or the `variationId` has changed.
@@ -278,4 +290,4 @@ Join our [Slack community](https://join.slack.com/t/growthbookusers/shared_invit
## Attribution
Open a [GitHub issue](https://github.com/growthbook/growthbook/issues) with a link to your project and we'll make sure we add it to our docs and give you proper credit for your hard work.


@@ -2,12 +2,12 @@
We offer official client libraries in a few popular languages:
- [Javascript/Typescript](/lib/js)
- [React](/lib/react)
- [PHP](/lib/php)
- [Ruby](/lib/ruby)
- [Python](/lib/python)
There is also a guide if you want to [build your own](/lib/build-your-own).
It's not required to use any of these libraries. The only requirement is that you track in your datasource when users are put into an experiment and which variation they received.


@@ -4,9 +4,9 @@ View the full documentation on [GitHub](https://github.com/growthbook/growthbook
## Installation
`yarn add @growthbook/growthbook`
or
`npm install --save @growthbook/growthbook`
@@ -14,15 +14,15 @@ or use directly in your HTML without installing first:
```html
<script type="module">
import { GrowthBook } from "https://unpkg.com/@growthbook/growthbook/dist/growthbook.esm.js";
//...
</script>
```
## Quick Usage
```ts
import { GrowthBook } from "@growthbook/growthbook";
// Define the experimental context
const growthbook = new GrowthBook({
@@ -32,48 +32,47 @@ const growthbook = new GrowthBook({
  trackingCallback: (experiment, result) => {
    analytics.track("Experiment Viewed", {
      experimentId: experiment.key,
      variationId: result.variationId,
    });
  },
});

// Run an experiment
const { value } = growthbook.run({
  key: "my-experiment",
  variations: ["A", "B"],
});

console.log(value); // "A" or "B"
```
## GrowthBook class
The GrowthBook constructor takes a `Context` object. Below are all of the possible Context properties:
- **enabled** (`boolean`) - Switch to globally disable all experiments. Default true.
- **user** (`{}`) - Map of user attributes that are used to assign variations
- **groups** (`{}`) - A map of which groups the user belongs to (key is the group name, value is boolean)
- **url** (`string`) - The URL of the current page (defaults to `window.location.href` when in a browser environment)
- **overrides** (`{}`) - Override properties of specific experiments (used for Remote Config)
- **forcedVariations** (`{}`) - Force specific experiments to always assign a specific variation (used for QA)
- **qaMode** (`boolean`) - If true, random assignment is disabled and only explicitly forced variations are used.
- **trackingCallback** (`function`) - A function that takes `experiment` and `result` as arguments.
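Putting several of these together, a minimal sketch of a fully configured context (the attribute names, group names, and forced variation are placeholders, not required values):

```ts
import { GrowthBook } from "@growthbook/growthbook";

const growthbook = new GrowthBook({
  enabled: true,
  user: { id: "123", country: "US" }, // placeholder attributes
  groups: { beta: true, qa: false },
  url: typeof window !== "undefined" ? window.location.href : "",
  forcedVariations: { "my-experiment": 1 }, // pin variation 1 while QA-ing
  qaMode: false,
  trackingCallback: (experiment, result) => {
    // Forward to your analytics tool of choice
    console.log("Experiment Viewed", experiment.key, result.variationId);
  },
});
```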
## Experiments
Below are all of the possible properties you can set for an Experiment:
- **key** (`string`) - The globally unique tracking key for the experiment
- **variations** (`any[]`) - The different variations to choose between
- **weights** (`number[]`) - How to weight traffic between variations. Must add to 1.
- **status** (`string`) - "running" is the default and always active. "draft" is only active during QA and development. "stopped" is only active when forcing a winning variation to 100% of users.
- **coverage** (`number`) - What percent of users should be included in the experiment (between 0 and 1, inclusive)
- **url** (`RegExp`) - Users can only be included in this experiment if the current URL matches this regex
- **include** (`() => boolean`) - A callback that returns true if the user should be part of the experiment and false if they should not be
- **groups** (`string[]`) - Limits the experiment to specific user groups
- **force** (`number`) - All users included in the experiment will be forced into the specific variation index
- **hashAttribute** (`string`) - What user attribute should be used to assign variations (defaults to "id")
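As a quick illustration, here is one experiment definition that touches most of these properties (the key, URL regex, groups, and include logic are made up for the example):

```ts
import type { Experiment } from "@growthbook/growthbook";

const experiment: Experiment<string> = {
  key: "checkout-button-color",
  variations: ["blue", "green"],
  weights: [0.6, 0.4], // must add to 1
  status: "running",
  coverage: 0.5, // enroll only half of eligible users
  url: /\/checkout/, // only match checkout pages
  include: () => !navigator.userAgent.includes("bot"), // example targeting rule
  groups: ["beta"],
  hashAttribute: "id",
};

// Run it with the GrowthBook instance created earlier
const result = growthbook.run(experiment);
```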
## Running Experiments
@@ -81,14 +80,14 @@ Run experiments by calling `growthbook.run(experiment)` which returns an object
```ts
const {
  inExperiment,
  variationId,
  value,
  hashAttribute,
  hashValue,
} = growthbook.run({
  key: "my-experiment",
  variations: ["A", "B"],
});
// If user is part of the experiment
@@ -107,39 +106,42 @@ console.log(hashAttribute); // "id"
console.log(hashValue); // e.g. "123"
```
The `inExperiment` flag is only set to true if the user was randomly assigned a variation. If the user failed any targeting rules or was forced into a specific variation, this flag will be false.
### Example Experiments
3-way experiment with uneven variation weights:
```ts
growthbook.run({
key: "3-way-uneven",
variations: ["A","B","C"],
weights: [0.5, 0.25, 0.25]
})
variations: ["A", "B", "C"],
weights: [0.5, 0.25, 0.25],
});
```
Slow rollout (10% of users who opted into "beta" features):
```ts
// User is in the "qa" and "beta" groups
const growthbook = new GrowthBook({
user: {id: "123"},
user: { id: "123" },
groups: {
qa: isQATester(),
beta: betaFeaturesEnabled()
}
})
beta: betaFeaturesEnabled(),
},
});
growthbook.run({
key: "slow-rollout",
variations: ["A", "B"],
coverage: 0.1,
groups: ["beta"]
})
groups: ["beta"],
});
```
Complex variations and custom targeting
```ts
const {value} = growthbook.run({
key: "complex-variations",
@@ -155,56 +157,59 @@ console.log(value.color, value.size); // blue,large OR green,small
```
Assign variations based on something other than user id
```ts
const growthbook = new GrowthBook({
  user: {
    id: "123",
    company: "growthbook",
  },
});

growthbook.run({
  key: "by-company-id",
  variations: ["A", "B"],
  hashAttribute: "company",
});
// Users in the same company will always get the same variation
```
### Overriding Experiment Configuration
It's common practice to adjust experiment settings after a test is live. For example, slowly ramping up traffic, stopping a test automatically if guardrail metrics go down, or rolling out a winning variation to 100% of users.
For example, to roll out a winning variation to 100% of users:
```ts
const growthbook = new GrowthBook({
user: {id: "123"},
user: { id: "123" },
overrides: {
"experiment-key": {
status: "stopped",
force: 1
}
}
})
force: 1,
},
},
});
const {value} = growthbook.run({
const { value } = growthbook.run({
key: "experiment-key",
variations: ["A", "B"]
variations: ["A", "B"],
});
console.log(value); // Always "B"
```
The full list of experiment properties you can override is:
- status
- force
- weights
- coverage
- groups
- url (can use string instead of regex if serializing in a database)
If you use the Growth Book App (https://github.com/growthbook/growthbook) to manage experiments, there's a built-in API endpoint you can hit that returns overrides in this exact format. It's a great way to make sure your experiments are always up-to-date.
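If you fetch that payload when your app boots, you can pass it straight into the constructor. A rough sketch: the URL below is a placeholder for your own endpoint, and the `{ overrides: ... }` response shape is an assumption:

```ts
// Placeholder URL; copy the real overrides endpoint from your Growth Book settings
const res = await fetch("https://example.com/growthbook/overrides");
const { overrides } = await res.json(); // assumed shape: { overrides: { "experiment-key": {...} } }

const growthbook = new GrowthBook({
  user: { id: "123" },
  overrides,
});
```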
## Typescript
@@ -214,62 +219,65 @@ This is especially useful if experiments are defined as a variable before being
```ts
import type {
  Context,
  Experiment,
  Result,
  ExperimentOverride,
} from "@growthbook/growthbook";

// The "number" part refers to the variation type
const exp: Experiment<number> = {
  key: "my-test",
  variations: [0, 1],
  status: "stoped", // Type error! (should be "stopped")
};
```
## Event Tracking and Analyzing Results
This library only handles assigning variations to users. The 2 other parts required for an A/B testing platform are Tracking and Analysis.
### Tracking
It's likely you already have some event tracking on your site with the metrics you want to optimize (Google Analytics, Segment, Mixpanel, etc.).
For A/B tests, you just need to track one additional event - when someone views a variation.
```ts
// Specify a tracking callback when instantiating the client
const growthbook = new GrowthBook({
user: {id: "123"},
user: { id: "123" },
trackingCallback: (experiment, result) => {
// ...
}
})
},
});
```
Below are examples for a few popular event tracking tools:
#### Google Analytics
```ts
ga("send", "event", "experiment", experiment.key, result.variationId, {
  // Custom dimension for easier analysis
  dimension1: `${experiment.key}::${result.variationId}`,
});
```
#### Segment
```ts
analytics.track("Experiment Viewed", {
  experimentId: experiment.key,
  variationId: result.variationId,
});
```
#### Mixpanel
```ts
mixpanel.track("$experiment_started", {
  "Experiment name": experiment.key,
  "Variant name": result.variationId,
});
```

View File

@@ -48,13 +48,13 @@ As shown above, the simplest experiment you can define has an id and an array of
There is an optional 3rd argument, which is an associative array of additional options:
- **weights** (`float[]`) - How to weight traffic between variations. Must add to 1 and be the same length as the number of variations.
- **status** (`string`) - "running" is the default and always active. "draft" is only active during QA and development. "stopped" is only active when forcing a winning variation to 100% of users.
- **coverage** (`float`) - What percent of users should be included in the experiment (between 0 and 1, inclusive)
- **url** (`string`) - Users can only be included in this experiment if the current URL matches this regex
- **groups** (`string[]`) - User groups that should be included in the experiment (e.g. internal employees, qa testers)
- **force** (`int`) - All users included in the experiment will be forced into the specific variation index
- **randomizationUnit** (`string`) - The type of user id to use for variation assignment. Defaults to `id`.
## Running Experiments
@@ -75,7 +75,7 @@ echo $result->variationId; // 0 or 1
echo $result->value; // "A" or "B"
```
The `inExperiment` flag can be false if the experiment defines any sort of targeting rules which the user does not pass. In this case, the user is always assigned variation index `0` and the first variation value.
## Client Configuration
@@ -90,10 +90,10 @@ $client = new Growthbook\Client($config);
The `Growthbook\Config` constructor takes an associative array of options. Below are all of the available options currently:
- **enabled** - Default true. Set to false to completely disable all experiments.
- **logger** - An optional PSR-3 logger instance
- **url** - The url of the page (defaults to `$_SERVER['REQUEST_URL']` if not set)
- **enableQueryStringOverride** - Default false. If true, enables forcing variations via the URL. Very useful for QA. https://example.com/?my-experiment=1
You can change configuration options at any time by setting properties directly:
@@ -152,9 +152,9 @@ In the above example, if the user is not in either `beta` or `qa`, then `$result
## Overriding Weights and Targeting
It's common practice to adjust experiment settings after a test is live. For example, slowly ramping up traffic, stopping a test automatically if guardrail metrics go down, or rolling out a winning variation to 100% of users.
Instead of constantly changing your code, you can use client overrides. For example, to roll out a winning variation to 100% of users:
```php
$client->overrides->set("experiment-key", [
@@ -165,14 +165,15 @@ $client->overrides->set("experiment-key", [
```
The full list of experiment properties you can override is:
- status
- force
- weights
- coverage
- groups
- url
This data structure can be easily serialized and stored in a database or returned from an API. There is a small helper function if you have all of your overrides in a single JSON object:
```php
$json = '{
@@ -195,7 +196,7 @@ $client->importOverrides($overrides);
It's likely you already have some event tracking on your site with the metrics you want to optimize (Google Analytics, Segment, Mixpanel, etc.).
For A/B tests, you just need to track one additional event - when someone views a variation.
You can call `$client->getViewedExperiments()` at the end of a request to forward to your analytics tool of choice.
@@ -215,12 +216,12 @@ foreach($impressions as $impression) {
```
Each impression object has the following properties:
- experiment (the full experiment object)
- result (the result of the $user->experiment call)
- userId (the id used to randomize the experiment result)
Oftentimes you'll want to do the event tracking from the front-end with JavaScript. To do this, simply add a block to your template (shown here in plain PHP, but the same idea applies to Twig, Blade, etc.).
```php
<script>
@@ -233,31 +234,34 @@ Often times you'll want to do the event tracking from the front-end with javascr
Below are examples for a few popular front-end tracking libraries:
#### Google Analytics
```php
ga('send', 'event', 'experiment',
"<?= $impression->experiment->key ?>",
"<?= $impression->result->variationId ?>",
{
// Custom dimension for easier analysis
'dimension1': "<?=
$impression->experiment->key.':'.$impression->result->variationId
'dimension1': "<?=
$impression->experiment->key.':'.$impression->result->variationId
?>"
}
);
```
#### Segment
```php
analytics.track("Experiment Viewed", <?=json_encode([
"experimentId" => $impression->experiment->key,
"variationId" => $impression->result->variationId
"variationId" => $impression->result->variationId
])?>);
```
#### Mixpanel
```php
mixpanel.track("Experiment Viewed", <?=json_encode([
'Experiment name' => $impression->experiment->key,
'Variant name' => $impression->result->variationId
])?>);
```

View File

@@ -37,30 +37,29 @@ print(result.value) # "A" or "B"
The GrowthBook constructor has the following parameters:
- **enabled** (`boolean`) - Flag to globally disable all experiments. Default true.
- **user** (`dict`) - Dictionary of user attributes that are used to assign variations
- **groups** (`dict`) - A dictionary of which groups the user belongs to (key is the group name, value is boolean)
- **url** (`string`) - The URL of the current request (if applicable)
- **overrides** (`dict`) - Nested dictionary of experiment property overrides (used for Remote Config)
- **forcedVariations** (`dict`) - Dictionary of forced experiment variations (used for QA)
- **qaMode** (`boolean`) - If true, random assignment is disabled and only explicitly forced variations are used.
- **trackingCallback** (`callable`) - A function that takes `experiment` and `result` as arguments.
## Experiment class
Below are all of the possible properties you can set for an Experiment:
- **key** (`string`) - The globally unique tracking key for the experiment
- **variations** (`any[]`) - The different variations to choose between
- **weights** (`float[]`) - How to weight traffic between variations. Must add to 1.
- **status** (`string`) - "running" is the default and always active. "draft" is only active during QA and development. "stopped" is only active when forcing a winning variation to 100% of users.
- **coverage** (`float`) - What percent of users should be included in the experiment (between 0 and 1, inclusive)
- **url** (`string`) - Users can only be included in this experiment if the current URL matches this regex
- **include** (`callable`) - A function that returns true if the user should be part of the experiment and false if they should not be
- **groups** (`string[]`) - Limits the experiment to specific user groups
- **force** (`int`) - All users included in the experiment will be forced into the specified variation index
- **hashAttribute** (`string`) - What user attribute should be used to assign variations (defaults to "id")
## Running Experiments
@@ -88,11 +87,12 @@ print(result.hashAttribute) # "id"
print(result.hashValue) # e.g. "123"
```
The `inExperiment` flag is only set to true if the user was randomly assigned a variation. If the user failed any targeting rules or was forced into a specific variation, this flag will be false.
### Example Experiments
3-way experiment with uneven variation weights:
```python
gb.run(Experiment(
key = "3-way-uneven",
@@ -102,6 +102,7 @@ gb.run(Experiment(
```
Slow rollout (10% of users who opted into "beta" features):
```python
# User is in the "qa" and "beta" groups
gb = GrowthBook(
@@ -121,6 +122,7 @@ gb.run(Experiment(
```
Complex variations and custom targeting
```python
result = gb.run(Experiment(
key = "complex-variations",
@@ -137,6 +139,7 @@ print(result.value["color"] + "," + result.value["size"])
```
Assign variations based on something other than user id
```python
gb = GrowthBook(
user = {
@@ -156,9 +159,10 @@ gb.run(Experiment(
### Overriding Experiment Configuration
It's common practice to adjust experiment settings after a test is live. For example, slowly ramping up traffic, stopping a test automatically if guardrail metrics go down, or rolling out a winning variation to 100% of users.
For example, to roll out a winning variation to 100% of users:
```python
gb = GrowthBook(
user = {"id": "123"},
@@ -179,14 +183,15 @@ print(result.value) # Always "B"
```
The full list of experiment properties you can override is:
- status
- force
- weights
- coverage
- groups
- url
If you use the Growth Book App (https://github.com/growthbook/growthbook) to manage experiments, there's a built-in API endpoint you can hit that returns overrides in this exact format. It's a great way to make sure your experiments are always up-to-date.
### Django
@@ -222,13 +227,13 @@ def index(request):
## Event Tracking and Analyzing Results
This library only handles assigning variations to users. The 2 other parts required for an A/B testing platform are Tracking and Analysis.
### Tracking
It's likely you already have some event tracking on your site with the metrics you want to optimize (Segment, Mixpanel, etc.).
For A/B tests, you just need to track one additional event - when someone views a variation.
```python
# Specify a tracking callback when instantiating the client
@@ -241,6 +246,7 @@ gb = GrowthBook(
Below are examples for a few popular event tracking tools:
#### Segment
```python
def on_experiment_viewed(experiment, result):
analytics.track(userId, "Experiment Viewed", {
@@ -250,10 +256,11 @@ def on_experiment_viewed(experiment, result):
```
#### Mixpanel
```python
def on_experiment_viewed(experiment, result):
mp.track(userId, "$experiment_started", {
'Experiment name': experiment.key,
'Variant name': result.variationId
})
```

View File

@@ -4,42 +4,41 @@ View the full documentation on [GitHub](https://github.com/growthbook/growthbook
## Installation
`yarn add @growthbook/growthbook-react`
or
`npm install --save @growthbook/growthbook-react`
## Quick Start
### Step 1: Configure your app
```tsx
import { GrowthBook, GrowthBookProvider } from "@growthbook/growthbook-react";

// Create a GrowthBook instance
const growthbook = new GrowthBook({
  // The attributes you want to use to assign variations
  user: {
    id: "123",
  },
  // Called every time the user is put into an experiment
  trackingCallback: (experiment, result) => {
    // Mixpanel, Segment, GA, or custom tracking
    mixpanel.track("Experiment Viewed", {
      experiment: experiment.key,
      variation: result.variationId,
    });
  },
});

export default function App() {
  return (
    <GrowthBookProvider growthbook={growthbook}>
      <OtherComponent />
    </GrowthBookProvider>
  );
}
```
@@ -48,15 +47,15 @@ export default function App() {
#### Hooks (recommended)
```tsx
import { useExperiment } from "@growthbook/growthbook-react";

export default function OtherComponent() {
  const { value } = useExperiment({
    key: "new-headline",
    variations: ["Hello", "Hi", "Good Day"],
  });

  return <h1>{value}</h1>;
}
```
@@ -65,16 +64,16 @@ export default function OtherComponent() {
**Note:** This library uses hooks internally, so it still requires React 16.8 or above.
```tsx
import { withRunExperiment } from "@growthbook/growthbook-react";

class MyComponent extends Component {
  render() {
    // The `runExperiment` prop is identical to the `useExperiment` hook
    const { value } = this.props.runExperiment({
      key: "headline-test",
      variations: ["Hello World", "Hola Mundo"],
    });
    return <h1>{value}</h1>;
  }
}
// Wrap your component in `withRunExperiment`
@@ -91,7 +90,7 @@ The easiest way to accomplish this is with the Growth Book App (https://github.c
If `process.env.NODE_ENV !== "production"` AND you are in a browser environment, dev mode is enabled by default. You can override this behavior by explicitly passing in the `disableDevMode` prop to `GrowthBookProvider`.
Dev Mode adds a variation switcher UI that floats on the bottom left of pages. Use this to easily test out all the experiment combinations. It also includes a screenshot tool to download images of all your variations.
[View Live Demo](https://growthbook.github.io/growthbook-react/)
@@ -99,4 +98,4 @@ Dev Mode adds a variation switcher UI that floats on the bottom left of pages.
## Configuration and Usage
This package is a small React wrapper around the [javascript client library](/lib/js). Look at those docs for more info on how to configure your GrowthBook instance and define Experiments.

View File

@@ -57,9 +57,9 @@ client.enabled=true
The `client.user` method takes a single hash with a few possible keys:
- `id` - The logged-in user id
- `anonId` - An anonymous identifier for the user (session id, cookie, ip, etc.)
- `attributes` - A hash with user attributes. These are never sent across the network and are only used to locally evaluate experiment targeting rules.
Although all of these are technically optional, at least 1 type of id must be set or the user will be excluded from all experiments.
@@ -96,7 +96,7 @@ user.attributes={
## Experiment Configuration
The default test is a 50/50 split with no targeting or customization. There are a few ways to configure this on a test-by-test basis.
### Option 1: Global Configuration
@@ -109,9 +109,9 @@ client.experiments=[
# 3-way test with reduced coverage and unequal weights
Growthbook::Experiment.new(
"my-other-test",
3,
:coverage => 0.4,
"my-other-test",
3,
:coverage => 0.4,
:weights => [0.5, 0.25, 0.25]
)
]
@@ -148,6 +148,7 @@ client.importExperimentsHash(parsed["experiments"])
As shown in the quick start above, you can use a `Growthbook::Experiment` object directly to run an experiment.
The below example shows all of the possible experiment options you can set:
```ruby
# 1st argument is the experiment id
# 2nd argument is the number of variations
@@ -207,7 +208,7 @@ end
With this approach, you parameterize the variations by associating them with data.
```ruby
experiment = Growthbook::Experiment.new("experiment-id", 2,
experiment = Growthbook::Experiment.new("experiment-id", 2,
:data => {
"color" => ["blue", "green"]
}
@@ -224,7 +225,7 @@ result.data["unknown"] == nil # true
### Approach 3: Configuration System
If you already have an existing configuration or feature flag system, you can do a deeper integration that
avoids `experiment` calls throughout your code base entirely.
All you need to do is modify your existing config system to get experiment overrides before falling back to your normal lookup process:
@@ -232,7 +233,7 @@ All you need to do is modify your existing config system to get experiment overr
```ruby
# Your existing function
def getConfig(key)
# Look for a valid matching experiment.
# If found, choose a variation and return the value for the requested key
result = user.lookupByDataKey(key)
if result != nil
@@ -247,6 +248,7 @@ end
Instead of generic keys like `color`, you probably want to be more descriptive with this approach (e.g. `homepage.cta.color`).
With the following experiment data:
```ruby
{
:data => {
@@ -261,7 +263,7 @@ You can now do:
buttonColor = getConfig("homepage.cta.color")
```
Your code now no longer cares where the value comes from. It could be a hard-coded config value or part of an experiment. This is the cleanest approach of the 3, but it can be difficult to debug if things go wrong.
## Tracking
@@ -280,4 +282,4 @@ For example, if you are using Segment on the front-end, you can add something li
})
</script>
<% end %>
```

View File

@@ -0,0 +1,139 @@
# Self Hosting Growth Book
Growth Book consists of a NextJS front-end, an ExpressJS API, and a Python stats engine. Everything is bundled together in a single [Docker Image](https://hub.docker.com/r/growthbook/growthbook).
In addition to the app itself, you will also need a MongoDB instance to store login credentials, cached experiment results, and metadata.
<div className="bg-blue-200 dark:bg-blue-900 py-2 px-4 rounded flex">
<div className="text-yellow-500 pt-1 mr-3">
<svg
xmlns="http://www.w3.org/2000/svg"
width="24"
height="24"
viewBox="0 0 24 24"
style={{ fill: "currentColor" }}
>
<path d="M12 .587l3.668 7.568 8.332 1.151-6.064 5.828 1.48 8.279-7.416-3.967-7.417 3.967 1.481-8.279-6.064-5.828 8.332-1.151z" />
</svg>
</div>
<div>
Don't want to install or host the app yourself?{" "}
<a href="https://app.growthbook.io">Growth Book Cloud</a> is a fully managed
version that's free to get started.
</div>
<div className="text-yellow-500 pt-1 ml-3">
<svg
xmlns="http://www.w3.org/2000/svg"
width="24"
height="24"
viewBox="0 0 24 24"
style={{ fill: "currentColor" }}
>
<path d="M12 .587l3.668 7.568 8.332 1.151-6.064 5.828 1.48 8.279-7.416-3.967-7.417 3.967 1.481-8.279-6.064-5.828 8.332-1.151z" />
</svg>
</div>
</div>
## Installation
You can use **docker-compose** to get started quickly:
```yml
# docker-compose.yml
version: "3"
services:
mongo:
image: "mongo:latest"
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
growthbook:
image: "growthbook/growthbook:latest"
ports:
- "3000:3000"
- "3100:3100"
depends_on:
- mongo
environment:
- MONGODB_URI=mongodb://root:password@mongo:27017/
```
Then, just run `docker-compose up -d` to start everything and view the app at [http://localhost:3000](http://localhost:3000)
## Configuration
The Growth Book App is configured via environment variables. Below are all of the configuration options:
- **NODE_ENV** - Set to "production" to turn on additional optimizations and API request logging
- **JWT_SECRET** - Auth signing key (use a long random string)
- **ENCRYPTION_KEY** - Data source credential encryption key (use a long random string)
- **APP_ORIGIN** - Used for CORS (default set to http://localhost:3000)
- **MONGODB_URI** - The MongoDB connection string
- **DISABLE_TELEMETRY** - We collect anonymous telemetry data to help us improve Growth Book. Set to "true" to disable.
- **API_HOST** - (default set to http://localhost:3100)
- Email SMTP Settings:
- **EMAIL_ENABLED** ("true" or "false")
- **EMAIL_HOST**
- **EMAIL_PORT**
- **EMAIL_HOST_USER**
- **EMAIL_HOST_PASSWORD**
- **EMAIL_USE_TLS** ("true" or "false")
- Google OAuth Settings (only if using Google Analytics as a data source)
- **GOOGLE_OAUTH_CLIENT_ID**
- **GOOGLE_OAUTH_CLIENT_SECRET**
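For example, in `docker-compose.yml` you might set a handful of these on the growthbook service (a sketch; the secrets and connection string are placeholders you should replace):

```yml
growthbook:
  image: "growthbook/growthbook:latest"
  environment:
    - NODE_ENV=production
    - JWT_SECRET=change-me-to-a-long-random-string
    - ENCRYPTION_KEY=change-me-to-another-long-random-string
    - MONGODB_URI=mongodb://root:password@mongo:27017/
    - DISABLE_TELEMETRY=true
```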
### Changing the Ports
The Docker image exposes 2 ports: `3000` for the front-end and `3100` for the API.
If you need to change these, you can use Docker port mappings. You'll also need to set the environment variables **API_HOST** and **APP_ORIGIN** to include your new ports.
Here's an example of switching to ports `4000` and `4100` in `docker-compose.yml`:
```yml
growthbook:
image: "growthbook/growthbook:latest"
ports:
- "4000:3000"
- "4100:3100"
depends_on:
- mongo
environment:
- APP_ORIGIN=http://localhost:4000
- API_HOST=http://localhost:4100
...
```
Now your app would be available on [http://localhost:4000](http://localhost:4000)
### Volumes
Images uploaded in the Growth Book app are stored in `/usr/local/src/app/packages/back-end/uploads`. We recommend mounting a volume there so images can be persisted.
Also, if you are running MongoDB through Docker, you will need to mount a volume to `/data/db` to persist data between container restarts. In production, we highly suggest just using a hosted solution like [MongoDB Atlas](https://www.mongodb.com/cloud/atlas) instead.
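For example, extending the docker-compose file above (a sketch that maps both paths to local host directories):

```yml
services:
  mongo:
    image: "mongo:latest"
    volumes:
      - ./db-data:/data/db
  growthbook:
    image: "growthbook/growthbook:latest"
    volumes:
      - ./uploads:/usr/local/src/app/packages/back-end/uploads
```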
### Commands
These are the possible commands you can use:
- `["yarn", "start"]` - default, start both front-end and back-end in parallel
- `["yarn", "workspace", "front-end", "run"]` - run only the front-end
- `["yarn", "workspace", "back-end", "run"]` - run only the back-end
## Docker Tags
Builds are published automatically from the [GitHub repo](https://github.com/growthbook/growthbook) main branch.
The most recent commit is tagged with `latest`. GitHub Releases are also tagged (e.g. `0.2.1`).
If you need to reference the image for a specific git commit for any reason, you can use the git shorthash tag (e.g. `git-41278e9`).
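For example, to pin a specific release instead of `latest` (a sketch using the release tag format above):

```yml
growthbook:
  image: "growthbook/growthbook:0.2.1"
```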
### Updating to Latest
If you are using docker-compose, you can update with:
```bash
docker-compose pull growthbook
docker-compose stop growthbook
docker-compose up -d --no-deps growthbook
```

View File

@@ -12,7 +12,7 @@ module.exports = {
purge: {
content: [path.join(__dirname, "pages", "**", "*.{tsx,mdx}")],
options: {
safelist: ["border"],
safelist: ["border", "justify-content", "pb-4"],
},
},
theme: {

View File

@@ -334,7 +334,7 @@ const GetStarted = ({
rel="noreferrer"
href="https://docs.growthbook.io/app"
>
Read our <strong>User Guide</strong>
</a>
</div>
</div>
@@ -348,7 +348,7 @@ const GetStarted = ({
rel="noreferrer"
href="https://docs.growthbook.io/lib"
>
View docs for our <strong>Client Libraries</strong>
</a>
</div>
</div>

View File

@@ -26,7 +26,7 @@ const ApiKeys: FC = () => {
API keys can be used with our Client Libraries (Javascript, React, PHP,
Ruby, or Python) or the Visual Editor.{" "}
<a
href="https://docs.growthbook.io/api-docs"
href="https://docs.growthbook.io/app/api"
target="_blank"
rel="noreferrer"
>