docs: moving documentation around to be more clear and easier to browse (#236)

* moving documentation around to be more clear and easier to browse

- moved assets into their own directory and updated links to them
- moved operating docs into their own directory
- consolidated kubernetes docs
- added docker-swarm folder for docs
- updated docs layout in docs/README.md to reflect the changes and make it easier to read

* docs: s/Operating Functions/Operating IronFunctions/

* docs: removing duplicate database link

* docs: moving databases into general docs

* docs: moving databases/mqs back

* docs: removing memory.md (duplicate of operating/routes.md)

* docs: converting to markdown bullets
Benji Visser
2016-11-09 12:39:53 -05:00
committed by Travis Reeder
parent 4e32aeda26
commit a32ca3d90a
28 changed files with 42 additions and 37 deletions

docs/operating/docker.md Normal file

@@ -0,0 +1,21 @@
# Docker Configuration
To get the best performance, you'll want to ensure that Docker is configured properly. These are the environments known to produce the best results:
1. Linux kernel 4.7 or newer with the aufs or overlay2 module.
2. Ubuntu 16.04 LTS or newer with the aufs or overlay2 module.
3. Docker 1.12 or newer.
It is important to configure the host's Docker daemon with one of these storage drivers. To do so, pass the storage driver flag in your Docker start scripts:
```
docker daemon [...] --storage-driver=overlay2
```
If you are using Ubuntu, you can reconfigure Docker by updating `/etc/docker/daemon.json` and restarting Docker:
```json
{
"storage-driver": "overlay2"
}
```
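After restarting Docker, you can confirm which storage driver is actually in use. For example (output formatting may vary slightly by Docker version):
```sh
# Print the active storage driver; it should report overlay2 (or aufs)
docker info --format '{{.Driver}}'
```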


@@ -0,0 +1,18 @@
# Extending IronFunctions
IronFunctions is extensible so you can add custom functionality and extend the project without needing to modify the core.
## Listeners
Listeners are the main way to extend IronFunctions.
To add listeners, copy `main.go` into your own repo and add your own listener implementations. When ready,
compile your main package to create your extended version of IronFunctions.
### AppListener
Implement the `ifaces/AppListener` interface, then add it using:
```go
server.AddAppListener(myAppListener)
```

docs/operating/logging.md Normal file

@@ -0,0 +1,32 @@
# Logging
There are a few things to note about what IronFunctions logs.
## Logspout
We recommend using [logspout](https://github.com/gliderlabs/logspout) to forward your logs to a log aggregator of your choice.
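As a minimal sketch, logspout attaches to the local Docker daemon and forwards container logs to a destination of your choice (the syslog address below is just a placeholder):
```sh
# Forward logs from all local containers to a remote syslog endpoint
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logs.example.com:514
```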
## Format
All logs are emitted in [logfmt](https://godoc.org/github.com/kr/logfmt) format for easy parsing.
## Call ID
Every function call/request is assigned a `call_id`. If you search your logs, you can track all the activity
for each function call and find errors on a call-by-call basis. For example, these are the log lines for an asynchronous
function call:
![async logs](/docs/assets/async-log-full.png)
Note the easily searchable `call_id=x` format.
```sh
call_id=477949e2-922c-5da9-8633-0b2887b79f6e
```
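For example, if IronFunctions is running in a container named `functions` (as in the run commands used in these docs), you can pull every log line for that call:
```sh
# Show all log lines for a single function call
docker logs functions 2>&1 | grep 'call_id=477949e2-922c-5da9-8633-0b2887b79f6e'
```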
## Metrics
Metrics are emitted via the logs.
See the [Metrics](metrics.md) doc for more information.

docs/operating/metrics.md Normal file

@@ -0,0 +1,25 @@
# Metrics
Metrics are emitted via the logs for a couple of reasons:
1. Everything supports STDERR.
2. Users can optionally consume them; if not, they just end up in the logs.
3. No particular metrics system is required; in other words, any metrics system can be used via adapters (see below).
## Metrics
The metrics follow the logfmt format and look like this:
```
metric=someevent value=1 type=count
metric=somegauge value=50 type=gauge
```
It's a very simple format that can be easily parsed by any logfmt parser and passed on to another stats service.
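For instance, here is a rough sketch of pulling the metric name, value, and type out of the logs with standard shell tools (the `functions.log` filename is a placeholder for wherever your logs land, and the sketch assumes whitespace-separated logfmt fields):
```sh
# Pull out metric name, value, and type from logfmt metric lines
grep 'metric=' functions.log | awk '{
  split("", fields)                 # reset parsed fields for each line
  for (i = 1; i <= NF; i++) {
    split($i, kv, "=")
    fields[kv[1]] = kv[2]
  }
  print fields["metric"], fields["value"], fields["type"]
}'
```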
TODO: List all metrics we emit to logs.
## Statsd
The [Logspout Statsd Adapter](https://github.com/iron-io/logspout-statsd) can parse the log metrics and forward
them to any statsd server.


@@ -0,0 +1,45 @@
# Running IronFunctions in Production
The [QuickStart guide](/README.md) is intended to help you get started quickly and kick the tires. To run in production and be ready to scale, you need
to use more production-ready components:
* Put the IronFunctions API behind a load balancer and run several instances of it (the more the merrier).
* Run a database that can scale.
* Asynchronous functions require a message queue (preferably one that scales).
Here's a rough diagram of what a production deployment looks like:
![IronFunctions Architecture Diagram](/docs/assets/architecture.svg)
## Load Balancer
Any load balancer will work; put every instance of IronFunctions that you run behind it.
**Note**: We will be working on a smart load balancer that can direct traffic more intelligently. See [#151](https://github.com/iron-io/functions/issues/151).
## Database
We've done our best to keep database usage to a minimum. There are no writes during the request/response cycle, which is where most of the load will be.
The database is pluggable and we currently support a few options that can be [found here](/docs/databases/). We welcome pull requests for more!
## Message Queue
The message queue is an important part of asynchronous functions, essentially buffering requests for processing when resources are available. The reliability and scale of the message queue will play an important part
in how well IronFunctions runs, in particular if you use a lot of asynchronous function calls.
The message queue is pluggable and we currently support a few options that can be [found here](/docs/mqs/). We welcome pull requests for more!
## Logging, Metrics and Monitoring
Logging is a particularly important part of IronFunctions. It not only emits logs, but also emits metrics to the logs. Ops teams can then decide how they want
to use the logs and metrics without us prescribing a particular technology. For instance, you can use [logspout-statsd](https://github.com/iron-io/logspout-statsd) to capture metrics
from the logs and forward them to statsd.
[More about Metrics](metrics.md)
## Scaling
There are metrics emitted to the logs that can be used to tell you when to scale. The most important is the `wait_time` metric for both
synchronous and asynchronous functions. If `wait_time` increases, you'll want to start more IronFunctions instances.
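As a rough sketch of a simple check, assuming your logs are written to a file (here `functions.log`, a placeholder) and that the wait time appears as a `metric=wait_time` logfmt line as described in the [Metrics](metrics.md) doc, you could watch the most recent values with:
```sh
# Print the last 20 wait_time metric values from the logs
grep 'metric=wait_time' functions.log | tail -n 20 | \
  sed 's/.*value=\([0-9.]*\).*/\1/'
```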

docs/operating/routes.md Normal file

@@ -0,0 +1,90 @@
# IronFunctions Routes
Routes have a many-to-one mapping to an [app](apps.md).
A good practice for getting the best performance out of your IronFunctions API is to define
the required memory for each function.
## Route level configuration
When creating a route, you can configure it to tweak its behavior; the available
options are `memory`, `type`, and `config`.
`memory` is the number of MiB available to this function. If the function exceeds this
threshold during execution, it is halted and an error is returned in the logs. It
must be an integer. Default: `128`.
`type` is the type of the function: either `sync`, in which the client waits
until the request is successfully completed, or `async`, in which the client
dispatches a new request, gets a task ID back, and closes the HTTP connection.
Default: `sync`.
`config` is a map of values passed to the route runtime in the form of
environment variables prefixed with `CONFIG_`.
Note: Route level configuration overrides app level configuration.
TODO: link to swagger doc on swaggerhub after it's updated.
## Understanding IronFunctions memory management
When IronFunctions starts, it registers the total available memory in your system
so that at runtime it knows whether the system has enough free memory to run each
function. Every time a function starts, the runner subtracts the amount of memory
used by that function from the available memory register. When the function
finishes, the runner returns the used memory to the available memory register.
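For example, if the machine registers 1024 MiB at startup and a route configured with `memory: 256` is running, 768 MiB remains in the register for other functions until that call completes.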
### Creating function
```
curl -H "Content-Type: application/json" -X POST -d '{
"route": {
"path":"<route name>",
"image":"<route image>",
"memory": <memory mb number>,
"type": "<route type>",
"config": {"<unique key>": <value>}
}
}' http://localhost:8080/v1/apps/<app name>/routes
```
E.g. creating `/myapp/hello` with 100 MiB of required memory, type `sync`, and
some container configuration values.
```
curl -H "Content-Type: application/json" -X POST -d '{
"route": {
"path":"/hello",
"image":"iron/hello",
"memory": 100,
"type": "sync",
"config": {"APPLOG": "stderr"}
}
}' http://localhost:8080/v1/apps/myapp/routes
```
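To confirm the route was created, you can list the routes registered under the app (assuming the corresponding `GET` endpoint of the routes API):
```sh
# List all routes registered under myapp
curl http://localhost:8080/v1/apps/myapp/routes
```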
### Updating function
```
curl -H "Content-Type: application/json" -X POST -d '{
"route": {
"memory": <memory mb number>,
"type": "<route type>",
"config": {"<unique key>": <value>}
}
}' http://localhost:8080/v1/apps/<app name>/routes/<route name>
```
E.g. updating `/myapp/hello` to require 100 MiB of memory, change its type to `async`, and change its
container configuration values.
```
curl -H "Content-Type: application/json" -X POST -d '{
"route": {
"memory": 100,
"type": "async",
"config": {"APPLOG": "stdout"}
}
}' http://localhost:8080/v1/apps/myapp/routes/hello
```
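Once a route exists, it can be invoked through the API host. Assuming the `/r/` invocation path shown in the QuickStart guide, calling the example route above looks like:
```sh
# Invoke the hello route under the myapp application
curl http://localhost:8080/r/myapp/hello
```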


@@ -0,0 +1,2 @@
# Scaling IronFunctions


@@ -0,0 +1,5 @@
# Triggers
Triggers are integrations you can use from other systems to fire off functions in IronFunctions.
TODO:


@@ -0,0 +1,9 @@
# Running on Windows
Windows doesn't support Docker-in-Docker, so you'll need to change the run command to the following:
```sh
docker run --rm --name functions -it -v /var/run/docker.sock:/var/run/docker.sock -v $PWD/data:/app/data -p 8080:8080 iron/functions
```
Then everything should work as normal.