doc: hot containers (#411)

* doc: hot containers

* doc: update with code review
C Cirello
2016-12-11 10:58:58 +01:00
committed by Seif Lotfy سيف لطفي
parent e132ce4217
commit 8a24cc432e
2 changed files with 105 additions and 0 deletions


@@ -14,6 +14,8 @@ If you are a developer using IronFunctions through the API, this section is for
* [Packaging functions](packaging.md)
* [Open Function Format](function-format.md)
* [API Reference](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/iron-io/functions/master/docs/swagger.yml)
* [Hot containers](hot-containers.md)

## For Operators

docs/hot-containers.md (new file, 103 lines)

@@ -0,0 +1,103 @@
# Hot containers
IronFunctions is built on top of container technologies: for each incoming
workload, it spins up a new container, feeds it the payload, and sends the
answer back to the caller. You can expect an average start time of 300ms per
container. You may refer to [this blog post](https://medium.com/travis-on-docker/the-overhead-of-docker-run-f2f06d47c9f3#.96tj75ugb) to understand the details better.

If you need faster start times for your function, you may use a hot
container instead.

Hot containers are started once and kept alive while there is incoming workload.
This means that once you decide to use a hot container, you must be able to
tell the moment it should stop reading from standard input and start writing to
standard output.

Currently, IronFunctions implements an HTTP-like protocol to operate hot
containers, but instead of communicating through a TCP/IP port, it uses standard
input/output.
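To make that concrete, a single workload could arrive on the container's
standard input framed like the hypothetical request below (the exact request
line and headers the daemon sends may differ; the body is always delimited by
`Content-Length`):

```
GET / HTTP/1.1
Content-Length: 5

world
```

The container answers by writing an HTTP response, also carrying a
`Content-Length`, back to standard output.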
## Implementing a hot container
In the [examples directory](https://github.com/iron-io/functions/blob/master/examples/hotcontainers/http/func.go), there is a simple implementation of a hot container
which we are going to walk through in detail here.

The basic cycle comprises three steps: read standard input up to a previously
known point, process the work, then write the output to stdout along with some
information telling the functions daemon when to stop reading from stdout.

In the case at hand, we run a loop whose first part plugs stdin into an
HTTP request parser:
```go
r := bufio.NewReader(os.Stdin)
req, err := http.ReadRequest(r)
if err != nil {
    // ... handle the parse error ...
} else {
    // Content-Length announces how many bytes of body follow the headers.
    l, _ := strconv.Atoi(req.Header.Get("Content-Length"))
    p := make([]byte, l)
    r.Read(p)
}
```
Note how `Content-Length` is used to determine how far into standard input
we must read.
The next step in the cycle is to do some processing:
```go
// ...
var buf bytes.Buffer
fmt.Fprintf(&buf, "Hello %s\n", p)
for k, vs := range req.Header {
    fmt.Fprintf(&buf, "ENV: %s %#v\n", k, vs)
}
// ...
```
And finally, we return the result with a `Content-Length` header, so the
IronFunctions daemon knows when to stop reading the response.
```go
res := http.Response{
    Proto:      "HTTP/1.1",
    ProtoMajor: 1,
    ProtoMinor: 1,
    StatusCode: 200,
    Status:     "OK",
}
res.Body = ioutil.NopCloser(&buf)
res.ContentLength = int64(buf.Len())
res.Write(os.Stdout)
```
Rinse and repeat for each incoming workload.
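Putting the three steps together, a complete hot container might look roughly
like the sketch below. It is a simplified variant of the linked example, not
the exact code from the repository: the header echoing and most error handling
are omitted.

```go
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
    "os"
    "strconv"
)

func main() {
    r := bufio.NewReader(os.Stdin)
    for {
        // Step 1: read stdin up to the point announced by Content-Length.
        req, err := http.ReadRequest(r)
        if err != nil {
            return // stdin is closed: the daemon is done with this container
        }
        l, _ := strconv.Atoi(req.Header.Get("Content-Length"))
        p := make([]byte, l)
        io.ReadFull(r, p)

        // Step 2: process the work.
        var buf bytes.Buffer
        fmt.Fprintf(&buf, "Hello %s\n", p)

        // Step 3: write an HTTP-framed answer to stdout; Content-Length tells
        // the daemon where this response ends.
        res := http.Response{
            Proto:         "HTTP/1.1",
            ProtoMajor:    1,
            ProtoMinor:    1,
            StatusCode:    200,
            Status:        "OK",
            Body:          ioutil.NopCloser(&buf),
            ContentLength: int64(buf.Len()),
        }
        res.Write(os.Stdout)
    }
}
```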
## Deploying a hot container
Once your function is adapted to run as a hot container, you must tell the
IronFunctions daemon that this function is now ready to be reused across
requests:
```json
{
    "route": {
        "app_name": "myapp",
        "path": "/hot",
        "image": "USERNAME/hchttp",
        "memory": 64,
        "type": "sync",
        "config": null,
        "format": "http",
        "max_concurrency": "1"
    }
}
```
* `format` (mandatory): either `"default"` or `"http"`. If `"http"`, the
function runs as a hot container.
* `max_concurrency` (optional): the number of simultaneous hot containers for
this function. This is a per-node configuration option. Default: 1.
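As a usage illustration, the route above can be created through the daemon's
REST API. The sketch below is an assumption-laden example: it presumes a local
daemon on `localhost:8080` and the routes endpoint described in the
[API Reference](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/iron-io/functions/master/docs/swagger.yml);
adjust the host, app name, and image to your setup.

```go
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    // The same route definition shown above, as a JSON payload.
    route := []byte(`{"route": {"app_name": "myapp", "path": "/hot",
        "image": "USERNAME/hchttp", "memory": 64, "type": "sync",
        "format": "http", "max_concurrency": "1"}}`)

    // Assumes a local IronFunctions daemon on port 8080; the endpoint comes
    // from the API Reference linked in this documentation.
    resp, err := http.Post("http://localhost:8080/v1/apps/myapp/routes",
        "application/json", bytes.NewReader(route))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("route created:", resp.Status)
}
```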