From 021b706f5d586fa0ec79369f05ecd06e9cb247d5 Mon Sep 17 00:00:00 2001
From: Travis Reeder
Date: Tue, 13 Sep 2016 11:36:30 -0700
Subject: [PATCH 1/2] Updating API to take into account having async tasks and sync. WIP.

---
 README.md | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 05af69a93..9c8a7d178 100644
--- a/README.md
+++ b/README.md
@@ -2,16 +2,15 @@
 
 ## Quick Start
 
-
 ### Start the IronFunctions API
 
 First let's start our IronFunctions API
 
 ```sh
-docker run --rm --privileged -it -e "DB=bolt:///app/data/bolt.db" -v $PWD/data:/app/data -p 8080:8080 iron/functions
+docker run --rm --name functions --privileged -it -e "DB=bolt:///app/data/bolt.db" -v $PWD/data:/app/data -p 8080:8080 iron/functions
 ```
 
-This command will quickly start IronFunctions using the default database `Bolt` running on `:8080`.
+This command will quickly start IronFunctions using an embedded `Bolt` database running on `:8080`.
 
 ### Create an Application
 
@@ -54,6 +53,28 @@ curl -H "Content-Type: application/json" -X POST -d '{
 }' http://localhost:8080/r/myapp/hello
 ```
 
+
+## Adding Asynchronous Data Processing Support
+
+Asynchronous data processing is for functions that run in the background. It is a good fit for functions that are CPU heavy or take more than a few seconds to complete.
+Architecturally, the main difference between the synchronous functions you tried above and asynchronous functions is that requests
+to asynchronous functions are put in a queue and executed on separate `runner` machines, so they do not interfere with the fast synchronous responses an API requires. Because
+everything goes through a queue, you can submit millions of jobs without worrying about capacity; they will simply wait in the queue and run at some point in the future.
+
+TODO: Add a link to the differences in the README.io docs here.
+
+### Start Runner(s)
+
+Start a runner:
+
+```sh
+docker run --rm -it --link functions --privileged -e "API_URL=http://functions:8080" iron/functions-runner
+```
+
+You can start as many runners as you want. The more the merrier.
+
+For runner configuration, see the [Runner README](runner/README.md).
+
 ## Using IronFunctions Hosted by Iron.io
 
 Simply point to https://functions.iron.io instead of localhost and add your Iron.io Authentication header (TODO: link), like this:
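For illustration only (not part of the patch above): a minimal sketch of starting several runners at once, reusing the `docker run` command the new README section introduces. The container names (`runner1`, `runner2`, ...) are made up, and `-d` replaces `--rm -it` so the runners stay in the background.

```sh
# Start three background runners linked to the "functions" container from the QuickStart.
# Names are arbitrary; add or remove iterations to scale the pool.
for i in 1 2 3; do
  docker run -d --name "runner$i" --link functions --privileged \
    -e "API_URL=http://functions:8080" iron/functions-runner
done
```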
From a48c0e38aacdc0f1e770c0ee8229f7472a76ae9a Mon Sep 17 00:00:00 2001
From: Travis Reeder
Date: Tue, 13 Sep 2016 11:53:53 -0700
Subject: [PATCH 2/2] Added info on scaling.

---
 docs/scaling.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
 create mode 100644 docs/scaling.md

diff --git a/docs/scaling.md b/docs/scaling.md
new file mode 100644
index 000000000..9d1540e72
--- /dev/null
+++ b/docs/scaling.md
@@ -0,0 +1,15 @@
+
+# Scaling IronFunctions
+
+The QuickStart guide is meant to get you started quickly and let you kick the tires. To run in production and be ready to scale, there are a few more steps.
+
+* Run a database that can scale, such as Postgres.
+* Put the iron/functions API behind a load balancer and launch more than one machine.
+* For asynchronous functions:
+  * Start a separate message queue (preferably one that scales)
+  * Start multiple iron/functions-runner containers, the more the merrier
+
+Metrics emitted to the logs can tell you when to scale. The most important are the `wait_time` metrics for both the
+synchronous and asynchronous functions.
+If `wait_time` increases, you'll want to start more servers with either the `iron/functions` image or the `iron/functions-runner` image.
+
\ No newline at end of file
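For illustration only (not part of the patches above): one way the production layout described in `docs/scaling.md` could look. It assumes the `DB` environment variable from the QuickStart also accepts a Postgres connection string and that a separate variable (here called `MQ`) points at the message queue; those names, the URL formats, and the hostnames are assumptions, so check the project's configuration docs before relying on them.

```sh
# Hypothetical scaled-out setup; DB/MQ values and hostnames are assumptions, not confirmed by these patches.

# API node: run one of these per machine and put them behind a load balancer.
docker run -d --name functions --privileged -p 8080:8080 \
  -e "DB=postgres://funcs:secret@db.internal:5432/funcs" \
  -e "MQ=redis://queue.internal:6379" \
  iron/functions

# Runners for asynchronous functions: point them at the load-balanced API and add more as wait_time grows.
docker run -d --privileged \
  -e "API_URL=http://lb.internal:8080" \
  iron/functions-runner
```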