Docs related to running in production. (#174)

* Fixed up api.md, removed Titan references.

* Adding more documentation on running in production.

* Update deps for ironmq.
Travis Reeder
2016-10-17 11:31:58 -07:00
committed by GitHub
parent 42efb2ed6b
commit 41c06644d9
19 changed files with 365 additions and 107 deletions

10  docs/README.md Normal file

@@ -0,0 +1,10 @@
# IronFunctions Documentation
* [Running IronFunctions](api.md)
* [Databases](databases/README.md)
* [Message Queues](mqs/README.md)
* [Running in Production](production.md)
* [Triggers](triggers.md)
* [Metrics](metrics.md)
* [Extending IronFunctions](extending.md)


@@ -1,7 +1,14 @@
## API Options
#### Env Variables
## IronFunctions Config Options
When starting IronFunctions, you can pass in the following configuration variables as environment variables. Use `-e VAR_NAME=VALUE` in
docker run. For example:
```
docker run -e VAR_NAME=VALUE ...
```
<table>
<tr>
@@ -13,10 +20,6 @@
<td>The database URL to use in URL format. See Databases below for more information. Default: BoltDB in current working directory `bolt.db`.</td>
</tr>
<tr>
<td>PORT</td>
<td>Default (8080), sets the port to run on.</td>
</tr>
<tr>
<td>MQ</td>
<td>The message queue to use in URL format. See Message Queues below for more information. Default: BoltDB in current working directory `queue.db`.</td>
</tr>
@@ -25,79 +28,16 @@
<td>The primary functions api URL to pull tasks from (the address is that of another running functions process).</td>
</tr>
<tr>
<td>PORT</td>
<td>Default (8080), sets the port to run on.</td>
</tr>
<tr>
<td>NUM_ASYNC</td>
<td>The number of async runners in the functions process (default 1).</td>
</tr>
<tr>
<td>LOG_LEVEL</td>
<td>Set to `DEBUG` to enable debugging. Default is INFO.</td>
</tr>
</table>
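For example, a single instance configured with several of these variables might be started like this (the connection strings below are illustrative):
```sh
docker run -d -p 8080:8080 \
  -e "DB=postgres://user:pass@db-host:6212/mydb" \
  -e "MQ=redis://mq-host:6379/" \
  -e "LOG_LEVEL=DEBUG" \
  iron/functions
```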
## Databases
We currently support the following databases and they are passed in via the `DB` environment variable. For example:
```sh
docker run -v /titan/data:/titan/data -e "DB=postgres://user:pass@localhost:6212/mydb" ...
```
### Memory
URL: `memory:///`
Stores all data in memory. Fast and easy, but you'll lose all your data when it stops! NEVER use this in production.
### [Bolt](https://github.com/boltdb/bolt)
URL: `bolt:///titan/data/bolt.db`
Bolt is an embedded database which stores to disk. If you want to use this, be sure you don't lose the data directory by mounting
the directory on your host. eg: `docker run -v $PWD/data:/titan/data -e DB=bolt:///titan/data/bolt.db ...`
### [Redis](http://redis.io/)
URL: `redis://localhost:6379/`
Uses any Redis instance. Be sure to enable [peristence](http://redis.io/topics/persistence).
### [PostgreSQL](http://www.postgresql.org/)
URL: `postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398`
If you're using Titan in production, you should probably start here.
### What about database X?
We're happy to add more and we love pull requests, so feel free to add one! Copy one of the implementations above as a starting point.
## Message Queues
A message queue is used to coordinate the jobs that run through Titan.
We currently support the following message queues and they are passed in via the `MQ` environment variable. For example:
```sh
docker run -v /titan/data:/titan/data -e "MQ=redis://localhost:6379/" ...
```
### Memory
See memory in databases above.
### Bolt
URL: `bolt:///titan/data/bolt-mq.db`
See Bolt in databases above. The Bolt database is locked at the file level, so
the file cannot be the same as the one used for the Bolt Datastore.
### Redis
See Redis in databases above.
### What about message queue X?
We're happy to add more and we love pull requests, so feel free to add one! Copy one of the implementations above as a starting point.
## Troubleshooting
Enable debugging by passing in the `LOG_LEVEL` env var with DEBUG level.

BIN  docs/architecture.png Normal file (49 KiB)

4  docs/architecture.svg Normal file (384 KiB; diff suppressed because one or more lines are too long)

38  docs/databases/README.md Normal file

@@ -0,0 +1,38 @@
# Databases
We currently support the following databases and they are passed in via the `DB` environment variable. For example:
```sh
docker run -e "DB=postgres://user:pass@localhost:6212/mydb" ...
```
## [Bolt](https://github.com/boltdb/bolt) (default)
URL: `bolt:///functions/data/functions.db`
Bolt is an embedded database that stores to disk. If you use it, mount the data directory from your host so you don't lose your data,
e.g.: `docker run -v $PWD/data:/functions/data -e DB=bolt:///functions/data/bolt.db ...`
[More on BoltDB](boltdb.md)
## [Redis](http://redis.io/)
URL: `redis://localhost:6379/`
Use a Redis instance as your database. Be sure to enable [persistence](http://redis.io/topics/persistence).
[More on Redis](redis.md)
## [PostgreSQL](http://www.postgresql.org/)
URL: `postgres://user123:pass456@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398`
Use a PostgreSQL database. If you're using IronFunctions in production, you should probably start here.
[More on Postgres](postgres.md)
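For instance (hostname, credentials, and database name below are placeholders), pointing IronFunctions at an external Postgres instance looks much like the earlier example:
```sh
docker run -d -p 8080:8080 \
  -e "DB=postgres://user123:pass456@my-postgres-host:6212/mydb" \
  iron/functions
```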
## What about database X?
We're happy to add more and we love pull requests, so feel free to add one! Copy one of the implementations above as a starting point.


@@ -4,7 +4,7 @@ BoltDB is the default database, you just need to run the API.
## Persistent
To keep it persistent you add a volume flag to the command:
To keep it persistent, add a volume flag to the command:
```
docker run --rm -it --privileged -v $PWD/bolt.db:/app/bolt.db -p 8080:8080 iron/functions


@@ -1,8 +1,13 @@
IronFunctions is extensible so you can add custom functionality and extend the project without needing to modify the core.
# Extending IronFunctions
IronFunctions is extensible so you can add custom functionality and extend the project without needing to modify the core.
## Listeners
This is the main way to do it. To add listeners, copy main.go and use one of the following functions on the Server.
Listeners are the main way to extend IronFunctions.
To add listeners, copy `main.go` into your own repo and add your own listener implementations. When ready,
compile your main package to create your extended version of IronFunctions.
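As a rough sketch of that workflow (the repository layout and binary name here are hypothetical):
```sh
# Hypothetical workflow: start from the upstream main.go and build your own binary
git clone https://github.com/iron-io/functions.git
mkdir myfunctions && cp functions/main.go myfunctions/
# edit myfunctions/main.go to register your listeners, then:
cd myfunctions && go build -o functions-extended .
```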
### AppListener

10  docs/metrics.md Normal file

@@ -0,0 +1,10 @@
# Metrics
TODO: Explain metrics format. Log Metric Format - LMF(AO)
```
metric=someevent value=1 type=count
metric=somegauge value=50 type=gauge
```
TODO: List all metrics we emit to logs.

37  docs/mqs/README.md Normal file

@@ -0,0 +1,37 @@
# Message Queues
A message queue is used to coordinate asynchronous function calls that run through IronFunctions.
We currently support the following message queues and they are passed in via the `MQ` environment variable. For example:
```sh
docker run -e "MQ=redis://localhost:6379/" ...
```
## [Bolt](https://github.com/boltdb/bolt) (default)
URL: `bolt:///titan/data/functions-mq.db`
See Bolt in the databases documentation. The Bolt database is locked at the file level, so
the file cannot be the same as the one used for the Bolt datastore.
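For example (the paths below follow the convention used in the databases docs and are only a suggestion), you can keep the datastore and the MQ in two different Bolt files on the same mounted volume:
```sh
docker run -d -p 8080:8080 -v $PWD/data:/functions/data \
  -e "DB=bolt:///functions/data/functions.db" \
  -e "MQ=bolt:///functions/data/functions-mq.db" \
  iron/functions
```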
## [Redis](http://redis.io/)
See Redis in the databases documentation.
## [IronMQ](https://www.iron.io/platform/ironmq/)
URL: `ironmq://project_id:token@mq-aws-us-east-1.iron.io/queue_prefix`
IronMQ is a hosted message queue service provided by [Iron.io](http://iron.io). If you're using IronFunctions in production and don't
want to manage a message queue, you should start here.
The IronMQ connector uses HTTPS by default. To use HTTP set the scheme to
`ironmq+http`. You can also use a custom port. An example URL is:
`ironmq+http://project_id:token@localhost:8090/queue_prefix`.
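For example (project ID, token, and queue prefix are placeholders), starting IronFunctions with IronMQ as the message queue might look like:
```sh
docker run -d -p 8080:8080 \
  -e "MQ=ironmq://project_id:token@mq-aws-us-east-1.iron.io/myprefix" \
  iron/functions
```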
## What about message queue X?
We're happy to add more and we love pull requests, so feel free to add one! Copy one of the implementations above as a starting point.

46  docs/production.md Normal file

@@ -0,0 +1,46 @@
# Running IronFunctions in Production
The [QuickStart guide](/README.md) is intended to get you started quickly and kick the tires. To run in production and be ready to scale, you need
to use more production-ready components:
* Put the IronFunctions API behind a load balancer and launch several instances (the more the merrier).
* Run a database that can scale.
* Asynchronous functions require a message queue (preferably one that scales).
Here's a rough diagram of what a production deployment looks like:
![IronFunctions Architecture Diagram](architecture.svg)
## Load Balancer
Any load balancer will work; put every instance of IronFunctions that you run behind it.
**Note**: We plan to build a smarter load balancer that can direct traffic more intelligently. See [#151](https://github.com/iron-io/functions/issues/151).
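As a rough sketch (the database and message queue endpoints are placeholders, and any load balancer works), you might start several instances on different host ports and register them as backends:
```sh
# Start three IronFunctions instances on one host (ports are arbitrary),
# all pointing at the same database and message queue, then put your
# load balancer in front of ports 8081-8083.
for port in 8081 8082 8083; do
  docker run -d -p ${port}:8080 \
    -e "DB=postgres://user:pass@db-host:6212/funcs" \
    -e "MQ=redis://mq-host:6379/" \
    iron/functions
done
```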
## Database
We've done our best to keep database usage to a minimum. There are no database writes during the request/response cycle, which is where most of the load will be.
The database is pluggable and we currently support a few options that can be [found here](/docs/databases/). We welcome pull requests for more!
## Message Queue
The message queue is an important part of asynchronous functions, essentially buffering requests for processing when resources are available. The reliability and scale of the message queue
play a big part in how well IronFunctions runs, particularly if you make a lot of asynchronous function calls.
The message queue is pluggable and we currently support a few options that can be [found here](/docs/mqs/). We welcome pull requests for more!
## Logging, Metrics and Monitoring
Logging is a particularly important part of IronFunctions: it not only emits logs, it also emits metrics to the logs. Ops teams can then decide how they want
to use the logs and metrics without us prescribing a particular technology. For instance, you can use [logspout-statsd](https://github.com/iron-io/logspout-statsd) to capture metrics
from the logs and forward them to statsd.
[More about Metrics](metrics.md)
## Scaling
Metrics emitted to the logs can be used to tell you when to scale. The most important are the `wait_time` metrics for both
synchronous and asynchronous functions. If `wait_time` increases, you'll want to start more IronFunctions instances.
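For example (assuming the instance runs in a container named `functions` and that metrics show up in its logs in the format described in [metrics.md](metrics.md)), you could watch `wait_time` like this:
```sh
# Follow the container logs and pull out wait_time metric lines
docker logs -f functions 2>&1 | grep "wait_time"
```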


@@ -1,15 +1,2 @@
# Scaling IronFunctions
The QuickStart guide is intended just to quickly get started and kick the tires. To run in production and be ready to scale, there are a few more steps.
* Run a database that can scale, such as Postgres.
* Put the iron/functions API behind a load balancer and launch more than one machine.
* For asynchronous functions:
* Start a separate message queue (preferably one that scales)
* Start multiple iron/functions-runner containers, the more the merrier
There are metrics emitted to the logs that can be used to notify you when to scale. The most important being the `wait_time` metrics for both the
synchronous and asynchronous functions. If `wait_time` increases, you'll want to start more servers with either the `iron/functions` image or the `iron/functions-runner` image.

5  docs/triggers.md Normal file

@@ -0,0 +1,5 @@
# Triggers
Triggers are integrations that other systems can use to fire off functions in IronFunctions.
TODO: