Commit Graph

75 Commits

Author SHA1 Message Date
Tolga Ceylan
74a51f3f88 fn: reorg agent config (#853)
* fn: reorg agent config

*) Moving constants in agent to agent config, which helps
with testing and tuning.
*) Added max total cpu & memory for testing, and clamping of max
mem & cpu usage if needed.

* fn: adjust PipeIO time
* fn: for hot, cannot reliably test EndOfLogs in TestRouteRunnerExecution
2018-03-13 18:38:47 -07:00
Tolga Ceylan
e80a06937b fn: timeouts and container exits should stop slot queuing (#843)
1) In theory it may be possible for an exited container to
requeue a slot; close this gap by always setting a fatal error
on a slot if its container has exited.
2) When a client request times out or is cancelled (client
disconnect, etc.), the slot should not be allowed to be
requeued, and the container should terminate to avoid accidentally
mixing a previous response into the next.
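
A minimal sketch of the requeue guard this describes (the slot and queue types here are hypothetical, not the actual agent code):

```go
// Hypothetical types; a sketch of the guard described above.
type slot struct {
	fatalErr error // set when the container exits or the client request is cancelled
}

type hotQueue struct {
	slots chan *slot
}

// requeue returns a slot for reuse unless a fatal error was recorded,
// in which case the container is torn down instead of serving the next call.
func (q *hotQueue) requeue(s *slot, shutdown func()) {
	if s.fatalErr != nil {
		shutdown() // never hand a dead or poisoned container to the next request
		return
	}
	select {
	case q.slots <- s:
	default:
		shutdown() // queue full; drop the container
	}
}
```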
2018-03-12 11:18:55 -07:00
Tolga Ceylan
ea2b3f214c fn: enable log checks in runner test (#838) 2018-03-12 10:18:55 -07:00
Tolga Ceylan
afeb8e6f6a fn: json excess data check should ignore whitespace (#830)
* fn: json excess data check should ignore whitespace

* fn: adjustments and test case
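
The check boils down to draining the decoder's buffer and tolerating only whitespace. A sketch using encoding/json's Decoder.Buffered (the helper name is illustrative):

```go
import (
	"bytes"
	"encoding/json"
	"errors"
	"io"
)

// checkExcess reports an error if anything besides whitespace remains in
// the decoder's buffer after a successful Decode. Illustrative sketch,
// not fn's exact implementation.
func checkExcess(dec *json.Decoder) error {
	rest, err := io.ReadAll(dec.Buffered())
	if err != nil {
		return err
	}
	if len(bytes.TrimSpace(rest)) > 0 {
		return errors.New("excess data after JSON body")
	}
	return nil
}
```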
2018-03-09 11:59:30 -08:00
Tolga Ceylan
7177bf3923 fn: re-enable failing test (#826)
* fn: re-enable failing test

* fn: fortifying the stderr output

Modified limitWriter to discard excess data instead
of returning an error; this keeps the stderr/stdout
pipes flowing and avoids head-of-line blocking or
data corruption in the container's stdout/stderr output stream.
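
The described behavior is roughly this sketch (not fn's exact limitWriter):

```go
import "io"

// limitWriter writes at most n bytes to w; once the cap is hit it silently
// discards the excess instead of erroring, so the producer's pipe never stalls.
type limitWriter struct {
	w io.Writer
	n int64 // bytes remaining before we start discarding
}

func (l *limitWriter) Write(p []byte) (int, error) {
	if l.n <= 0 {
		return len(p), nil // discard, but report success to keep the pipe flowing
	}
	keep := p
	if int64(len(p)) > l.n {
		keep = p[:l.n]
	}
	written, err := l.w.Write(keep)
	l.n -= int64(written)
	if err != nil {
		return written, err
	}
	return len(p), nil // claim the full write so callers never block on the tail
}
```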
2018-03-09 09:57:28 -08:00
Gerardo Viedma
1c49b3e38e Removes flaky runner test TestRouteRunnerIOPipes until #822 is resolved (#823)
* Removes flaky runner test TestRouteRunnerIOPipes until #822 is resolved

* removes flaky log test from TestRouteRunnerExecution
2018-03-08 13:18:42 +00:00
Tolga Ceylan
7677aad450 fn: I/O related improvements (#809)
*) I/O protocol parse issues should shut down the container, as the container
goes into an inconsistent state between calls (e.g. the next call may receive
the previous call's leftovers).
*) Move ghost read/write code into io_utils in common.
*) Clean unused error from docker Wait()
*) We can catch one case in JSON: if there's remaining unparsed data in the
decoder buffer, we can shut down the container.
*) stdout/stderr when the container is not handling a request are now blocked if freezer is also enabled.
*) if a fatal err is set for a slot, we do not requeue it and proceed to shutdown
*) added a test function for a few cases with freezer strict behavior
2018-03-07 15:09:24 -08:00
Tolga Ceylan
89a1fc7c72 Response size clamp (#786)
*) Limit the response http body or json response size to FN_MAX_RESPONSE_SIZE (default unlimited)
*) If the limit is exceeded, a 502 is returned with 'body too large' in the error message
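
A sketch of the clamp (names are hypothetical; note the contrast with the limitWriter above, where log overflow is silently discarded, whereas exceeding a response limit is fatal for the call):

```go
import (
	"errors"
	"io"
)

// errBodyTooLarge is what surfaces to the client as a 502 per the commit above.
var errBodyTooLarge = errors.New("body too large")

// clampWriter fails once more than max bytes have been written.
type clampWriter struct {
	w   io.Writer
	max int64 // remaining budget; negative means unlimited (the default)
}

func (c *clampWriter) Write(p []byte) (int, error) {
	if c.max < 0 {
		return c.w.Write(p)
	}
	if int64(len(p)) > c.max {
		return 0, errBodyTooLarge
	}
	n, err := c.w.Write(p)
	c.max -= int64(n)
	return n, err
}
```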
2018-03-01 17:14:50 -08:00
Tolga Ceylan
37ee5f6823 fn: runner tests and test-utils enhancements (#807)
This is prep-work for more tests to come.

*) remove http response -1, as this will break in go 1.10
*) add docker id & hostname to fn-test-utils (will be useful
   to check/test which instance a request landed on.)
*) add container start/stop logs in fn-test-utils. To detect
   if/how we miss logs during container start & end.
2018-03-01 12:49:17 -08:00
Tolga Ceylan
820baf36dc fn: clean api tests: removed multi log (#801)
fn-test-utils covers this, with sleep in between.
2018-02-27 21:03:03 -08:00
Reed Allman
a56d204450 fix up response headers (#788)
* fix up response headers

* stops defaulting to application/json. this was something awful; go stdlib has
a func to detect content type. sadly, it doesn't cover json, but we can do a
pretty good job by checking for an opening '{' (sketched at the end of this
entry)... there are other fish in the sea, and now we handle them nicely
instead of saying it's json [when it's not]. a test confirms this; there
should be no breakage for any routes returning a json blob that were relying
on us defaulting to this format (granted that they start with a '{').
* output is now buffered for all protocol types (default is no longer
left out in the cold). use a little response writer so that we can still let
users write headers from their functions. this is useful for content type
detection instead of having to do it in multiple places.
* plumbs the little content type bit into fn-test-util just so we can test it;
we don't want to put this in the fdk since it's redundant.

I am totally in favor of getting rid of content type from the top level json
blurb. it's redundant, at best, and can have confusing behaviors if a user
uses both the headers and the content_type field (we override with the latter,
now). it's client protocol specific to http to a certain degree, other
protocols may use this concept but have their own way to set it (like http
does in headers..). I realize that it mostly exists because it's somewhat gross
to have to index a list from the headers in certain languages more than
others, but with the ^ behavior, is it really worth it?

closes #782

* reset idle timeouts back

* move json prefix to stack / next to use
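
The detection described above amounts to the stdlib sniffer plus a '{' check; a sketch (helper name is illustrative):

```go
import (
	"bytes"
	"net/http"
)

// detectContentType guesses a response's content type: the go stdlib
// sniffer covers html/text/images but not json, so peek for '{' first.
func detectContentType(body []byte) string {
	trimmed := bytes.TrimSpace(body)
	if len(trimmed) > 0 && trimmed[0] == '{' {
		return "application/json"
	}
	return http.DetectContentType(body)
}
```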
2018-02-27 10:30:33 -08:00
Tolga Ceylan
95d64f3aa9 fn: minor test improvements (#794) 2018-02-26 16:10:40 -07:00
Reed Allman
cbfd659e7e cap docker retries to fixed number (#762)
previously we would retry infinitely up to the context deadline with some backoff in
between. for hot functions, since we don't set any deadline on pulling or
creating the image, this means it would retry forever without making any
progress if e.g. the registry is inaccessible or there's any other 'temporary'
error that isn't actually temporary. this adds a hard cap of 10 retries
(sketched at the end of this entry), which gives approximately 13s if the ops
take no time, still respecting the enclosing context deadline.

the case where this was coming up is now tested for and was otherwise
confusing for users to debug; now it spits out an ECONNREFUSED with the
address of the registry, which should help users debug without having to poke
around fn logs (not that I like that as an excuse; not all users will be
operators in the near future, and this error makes sense to surface)

closes #727
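
The retry cap is essentially this sketch (the base delay and helper name are illustrative, not fn's actual values):

```go
import (
	"context"
	"time"
)

// retry runs op up to maxRetries times with exponential backoff between
// attempts, bailing out early if ctx is done.
func retry(ctx context.Context, maxRetries int, op func() error) error {
	backoff := 25 * time.Millisecond
	var err error
	for i := 0; i < maxRetries; i++ {
		if err = op(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
	return err // hard cap reached; give up instead of retrying forever
}
```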
2018-02-12 18:45:30 -08:00
Reed Allman
97194b3d8b return bad function http resp error (#728)
* return bad function http resp error

this was being thrown into the fn server logs, but it's relatively easy to get
this to crop up if a function user forgets that they left a `println` lying
around that gets written to stdout; it garbles the http (or json, as the case
may be) output and they just see 'internal server error'. for certain clients
i could see that we really do want to keep this as 'internal server error',
but for things like e.g. docker image not authorized we're showing that in the
response, so this seems apt.

json likely needs the same treatment, will file a bug.

as always, my error messages are rarely helpful enough, help me please :)

closes #355

* add formatting directive

* fix up http error

* output bad jasons to user

closes #729

woo
2018-02-12 17:51:45 -08:00
Tolga Ceylan
b2c95410f4 fn: test case additions (#755)
1) oom test
2) invalid http resp code test
3) check for error string contents in various error cases
2018-02-12 10:34:35 -08:00
Tolga Ceylan
fdf5a67f6f fn: error image is now deprecated (#737)
Please use fn-test-utils instead for testing.
2018-02-05 11:12:27 -08:00
Dario Domizioli
e2dad00a83 Add simple test for calling several hot functions in parallel (#675)
* Add test for calling several hot functions in parallel
2018-01-31 12:08:05 +00:00
Tolga Ceylan
6f1f5e365d fn: URL parsing updates to fix json request_url (#657)
*) Updated fn-test-utils to latest fdk-go
*) Added hot-json to runner tests
*) Removed an anon function in FromRequest which had
a side effect of setting req.URL.Host. This is now more
explicit and eliminates some corresponding logic in
protocol http.
*) in gin, the http request's RequestURI is not set; removed
code that references it (use Call.URL instead)
2018-01-08 10:28:50 -08:00
Travis Reeder
5cdee5579d Fixes 404 responses from functions that go through NoRoute path. (#651)
* Fixes 404 responses from functions that go through NoRoute path.

* cleanup

* cleanup

* fix link

* Rollback a bad change.
2018-01-08 10:03:33 -08:00
Tolga Ceylan
fe80b50e30 fn: tests: revert a hack for agent shutdown (#632)
Reverting due to fix for #623
2018-01-02 15:17:44 -06:00
Tolga Ceylan
feeeca3321 fn: agent shutdown improvements (#622) 2017-12-22 12:52:31 -08:00
Tolga Ceylan
7290579e7d fn: tests: adding hot container timeout and huge memory cases (#611)
* fn: adding hot container timeout and huge memory cases

*) switching TestRouteRunnerTimeout to fn-test-utils to handle
    both hot and cold.
*) in server_test, added content-length handling, as protocol http
    does not create a content-length header if one is not present.
2017-12-20 10:11:57 -08:00
Tolga Ceylan
419298e1c0 Async hot hdr fix (#604)
* fn: for async hot requests ensure/fix content-length/type

* fn: added tests for FromModel for content type/length

* fn: restrict the content-length fix to async in FromModel()
2017-12-15 14:32:25 -08:00
Reed Allman
bb92547b95 Hybrid plumby (#585)
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent

* fixes up the tests, turns off /r/ on api nodes

* fix up defaults for runner nodes

* shove the runner async push code down into agent land to use client

* plumb up async-age

* return full call from async dequeue endpoint; since we're storing a whole
call in the MQ, we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/, or other paths
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now

* update tests

* Add caching to hybrid client

* Fix HTTP error handling in hybrid client.

The type switch was on the value rather than a pointer.

* Gofmt.

* Better caching with a nice caching wrapper

* Remove datastore cache which is now unused

* Don't need to manually wrap interface methods

* Go fmt
2017-12-12 15:54:55 -08:00
Reed Allman
2ebc9c7480 hybrid mergy (#581)
* so it begins

* add clarification to /dequeue, change response to list to future proof

* Specify that runner endpoints are also under /v1

* Add a flag to choose operation mode (node type); see the sketch at the end of this entry.

This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).

The additional modes are:
* API - the full API is available, but no functions are executed by the
  node. Async calls are placed into a message queue, and synchronous
  calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
  synchronous invocation requests are supported, but asynchronous
  requests are placed onto the message queue, so might be handled by
  another runner.

* Add agent type and checks on Submit

* Sketch of a factored out data access abstraction for api/runner agents

* Fix tests, adding node/agent types to constructors

* Add tests for full, API, and runner server modes.

* Added atomic UpdateCall to datastore

* adds in server side endpoints

* Made ServerNodeType public because tests use it

* Made ServerNodeType public because tests use it

* fix test build

* add hybrid runner client

pretty simple go api client that covers the surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so put it in `/clients/hybrid`, but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next, and then wrap this
up in the DataAccessLayer stuff.

* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debuggo action

* minor fixes

* meh
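
A sketch of the FN_NODE_TYPE selection described in this entry (the accepted string values and type names here are assumptions, not necessarily fn's exact ones):

```go
import (
	"fmt"
	"os"
)

// NodeType selects which surfaces a server exposes.
type NodeType int

const (
	NodeFull   NodeType = iota // full API plus sync/async runners (default)
	NodeAPI                    // full API; async enqueued, sync rejected
	NodeRunner                 // invocation routes only
)

func nodeTypeFromEnv() (NodeType, error) {
	switch v := os.Getenv("FN_NODE_TYPE"); v {
	case "", "full":
		return NodeFull, nil
	case "api":
		return NodeAPI, nil
	case "runner":
		return NodeRunner, nil
	default:
		return NodeFull, fmt.Errorf("unknown FN_NODE_TYPE %q", v)
	}
}
```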
2017-12-11 10:43:19 -08:00
Travis Reeder
a67d5a6290 Drop viper dependency (#550)
* Removed viper dependency.

* removed from glide files
2017-11-28 15:46:17 -08:00
Tolga Ceylan
2551be446a fn: introducing 503 responses for out of capacity case (#518)
* fn: introducing 503 responses for out of capacity case

*) Adding a 503 response with a Retry-After header if the request failed
while waiting for slots.
*) TODO: return 503 without Retry-After if the request can
never be satisfied by this fn server.
*) fn: runner test docker pull fixup
*) fn: MaxMemory for routes is now a variable to allow
testing and adjusting it according to fleet memory sizes.
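
The 503 path is roughly this sketch (the handler shape and retry delay are illustrative):

```go
import (
	"net/http"
	"strconv"
	"time"
)

// respondOutOfCapacity tells the client to come back later when no slot
// freed up within the request's wait window.
func respondOutOfCapacity(w http.ResponseWriter, retryAfter time.Duration) {
	w.Header().Set("Retry-After", strconv.Itoa(int(retryAfter.Seconds())))
	http.Error(w, "server too busy", http.StatusServiceUnavailable)
}
```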
2017-11-21 12:42:02 -08:00
Reed Allman
2d8c528b48 S3 loggyloo (#511)
* add minio-go dep, update deps

* add minio s3 client

minio has an s3 compatible api, is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hairball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local use.

* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs; I have another branch lined up to make a plain
text log API, and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs

some notes: notably, we aren't cleaning up these logs. there is a ticket
already to make a Mr. Clean who wakes up periodically and nukes old stuff, so
am punting on any api design around some kind of TTL deletion of logs. there
are a lot of options really for Mr. Clean; we can notably defer to him when
apps are deleted, too, so that app deletion is fast and then Mr. Clean will
just clean them up later (seems like a good option).

have not tested against BMC object store, which has an s3 compatible API, but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.

closes #481
closes #473

* add log not found error to minio land
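
For flavor, storing and fetching a call log against an s3-compatible store; this sketch uses the current minio-go v7 API (the commit predates v7, and the endpoint, credentials, and bucket name are illustrative; the bucket is assumed to exist):

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds: credentials.NewStaticV4("minioadmin", "minioadmin", ""),
	})
	if err != nil {
		log.Fatal(err)
	}
	callID, body := "call-123", "line 1\nline 2\n"
	// Store the call's log under its call ID.
	if _, err := client.PutObject(ctx, "fn-logs", callID,
		strings.NewReader(body), int64(len(body)),
		minio.PutObjectOptions{ContentType: "text/plain"}); err != nil {
		log.Fatal(err)
	}
	// Stream it back as an io.Reader, matching the GetLog change above.
	obj, err := client.GetObject(ctx, "fn-logs", callID, minio.GetObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer obj.Close()
}
```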
2017-11-20 17:39:45 -08:00
Reed Allman
caba9e0ec6 more strict configuration of routes
* idle_timeout max of 1h
* timeout max of 120s for sync, 1h for async
* max memory of 8GB
* do full route validation before call invocation
* ensure that idle_timeout >= timeout

we are now doing validation of route updates inside of the database
transaction, which is what we should have been doing all along, really.
we need this behavior to ensure that the idle timeout is longer than the
timeout, among other benefits (like not updating the most recent version of
the existing struct and overwriting previous updates, yay). since we have
this, we can get rid of the weird skipZero behavior on validate too and
validate the real deal holyfield (limits sketched at the end of this entry).

validating the route before making the call is handy so that we don't do weird
things like run a func that wants to use 300GB of RAM and run for 3 weeks.

closes #192
closes #344
closes #162
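
The limits above boil down to a validation like this sketch (field and constant names are hypothetical):

```go
import (
	"errors"
	"time"
)

const (
	maxIdleTimeout  = time.Hour
	maxSyncTimeout  = 120 * time.Second
	maxAsyncTimeout = time.Hour
	maxMemoryMB     = 8 * 1024
)

type route struct {
	Type        string // "sync" or "async"
	Timeout     time.Duration
	IdleTimeout time.Duration
	MemoryMB    uint64
}

func (r *route) validate() error {
	maxTimeout := maxSyncTimeout
	if r.Type == "async" {
		maxTimeout = maxAsyncTimeout
	}
	switch {
	case r.Timeout <= 0 || r.Timeout > maxTimeout:
		return errors.New("timeout out of range")
	case r.IdleTimeout > maxIdleTimeout:
		return errors.New("idle_timeout out of range")
	case r.IdleTimeout < r.Timeout:
		return errors.New("idle_timeout must be >= timeout")
	case r.MemoryMB > maxMemoryMB:
		return errors.New("memory must be <= 8GB")
	}
	return nil
}
```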
2017-09-21 04:04:34 -07:00
Reed Allman
700078ccb9 bubble up some docker errors to user
currently:

* container ran out of memory (code 137)
* container exited with other code != 0
* unable to pull image (auth/404)

there may be others, but this is a good start (the most common). notably, for
both hot and cold these should bubble up (if deterministic, which hub isn't
always), and these are useful for users in debugging why things aren't
working (see the sketch at the end of this entry).

added tests to make sure that these behaviors are working.

also changed the behavior such that when the container exits we return a 502
instead of a 503, just to be able to distinguish the fact that fn is working
as expected but the container is acting funky (400 is weird here, so idk).

removed references to old IsUserVisible crap and slightly changed the
interface for RunResult for plumbing reasons (to get the error type,
specifically).

fixed an issue where, if ~/.docker/config.json exists, sometimes pulling images
wouldn't work deterministically (should be more in line w/ expectations now)

closes #275
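
A sketch of the exit-code-to-error mapping described above (fn's real error types carry more than this):

```go
import (
	"fmt"
	"net/http"
)

// userError turns a container exit code into something actionable for the
// user rather than a generic 500.
func userError(exitCode int) (int, error) {
	switch {
	case exitCode == 137:
		// SIGKILL, typically the OOM killer.
		return http.StatusBadGateway, fmt.Errorf("container ran out of memory (exit 137)")
	case exitCode != 0:
		// fn is fine; the function's container misbehaved, hence 502 not 503.
		return http.StatusBadGateway, fmt.Errorf("container exited with code %d", exitCode)
	}
	return http.StatusOK, nil
}
```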
2017-09-07 11:55:50 -07:00
Reed Allman
71a88a991c hang the runner, agent=new sheriff (#270)
* fix docker build

this is trivially incorrect since glide doesn't actually provide reproducible
builds. the idea is to build with the deps that we have checked into git, so
that we actually know what code is executing so that we might debug it...

all for multi stage build instead of what we had, but adding the glide step is
wrong. i added a loud warning so as to discourage this behavior in the future.

* hang the runner, agent=new sheriff

tl;dr agent is now runner, with a hopefully saner api

the general idea is to get rid of all the various 'task' structs, change our
terminology to only be 'calls', push a lot of the http construction of a
call into the agent, allow calls to easily mutate their state around their
execution, and simplify the number of code paths, channels and context
timeouts into something [hopefully] easy to understand.

this introduces the idea of 'slots' which are either hot or cold and are
separate from reserving memory (memory is denominated in 'tokens' now).
a 'slot' is essentially a container that is ready for execution of a call, be
it hot or cold (it just means different things based on hotness). taking a
look into Submit should make these relatively easy to grok.

sorry, things were pretty broken especially wrt timings. I tried to keep good
notes (maybe too good), to highlight stuff so that we don't make the same
mistakes again (history repeating itself blah blah quote). even now, there is
lots of work to do :)

I encourage just reading the agent.go code, Submit is really simple and
there's a description of how the whole thing works at the head of the file
(after TODOs). call.go contains code for constructing calls, as well as Start
/ End (small atm). I did some amount of code massaging to try to make things
simple / straightforward / fit reasonable mental model, but as always am open
to critique (the more negative the better) as I'm just one guy and wth do i
know...

-----------------------------------------------------------------------------

below enumerates a number of changes as briefly as possible (heh..):

models.Call all the things

removes models.Task as models.Call is now what it previously was.
models.FnCall is gone in favor of models.Call, despite the datastore
only storing a few fields of it [for now]. we should probably store entire
calls in the db; since app & route configurations can change at any given
moment, it would be nice to see the parameters of each call (costs db space,
obviously).

this removes the endpoints for getting & deleting messages; we were just
looping back to localhost to call the MQ (wtf? this was for iron integration i
think), so now we just call the MQ directly.

renames FnLog to LogStore; confusing because there's also a
`FuncLogger` which uses the LogStore (punting). removes other `Fn` prefixed
structs (redundant naming convention).

removes some unused and/or weird structs (IDStatus, CompleteTime)

updates the swagger

makes the db methods consistent to use 'Call' nomenclature.

remove runner nuisances:

* push down registry stuff to docker driver
* remove Environment / Stats stuff of yore
* remove unused writers (now in FuncLogger)
* remove 2 of the task types, old hot stuff, runner, etc

fixes ram available calculation on startup to not always be 300GB (helps a lot
on a laptop!)

the format for the DOCKER_AUTH env var is now a map rather than a list (there
are no docs; would prefer to get rid of this altogether anyway). the expected
~/.docker/cfg format is unchanged.

removes the arbitrary task queue; if a machine is out of ram we can probably
just time out without queueing... (can open a separate discussion). in any
case, the old one didn't really account well for hot tasks: it just lined
everyone up in the task queue if there wasn't a place to run hot and then
timed them out [even if a slot became free].

removes HEADER_ prefixing on any headers in the request to invoke a call.
(this was inconsistent with the cli for test anyway)

removes the TASK_ID header that was sent to hot containers only (this is a
dupe of FN_CALL_ID, which has not been removed)

now user functions can reply directly to the client. this means that for
cold containers if they write to stdout it will send a 200 + headers. for
hot containers, the user can reply directly to the client from the container,
i.e. with its preferred status code / headers (vs. always getting a 200).
the dispatch itself is a little http specific atm; i think we can add an
interchange format, but the current version is easily extended to add json for
now (separate discussion). this eliminates a lot of the request/response
rewriting and buffering we were doing (yey). now Dispatch ONLY does input and
output, vs. managing the call timeout and having access to a call's fields.

the cache is pushed down into the agent now instead of the front end; would
like to push it down to the datastore actually, but it's here for now anyway.
cache delete functions removed (b/c fn is distributed anyway?). added app
caching, which should help with latency.

in general, a lot of server/runner.go got pushed down into the agent. i think
it will be useful in testing to be able to construct calls without having to
invoke http handlers; plus, async also needs to construct calls without a handler.

safe shutdown actually works now for everything (leaked / didn't wait on
certain things before)

now we're waiting for hot slots to open up while we're attempting to get ram
to launch a container if we didn't find any hot slots to run the call in
immediately. we can change this policy really easily now (no more channel
jungle; still some channels). also looking for somewhere else to go while the
container is launching now. slots now get sent _out_ of a container, vs.
a container receiving calls, which makes this kind of policy easier to
implement. this fixes a number of bugs around things like trying to execute
calls against containers that have not and may never start and trying to
launch a bazillion containers when there are no free containers. the driver api
underwent some changes to make this possible (relatively minimal, added Wait).
the easiest way to think about this is that allocating ram has moved 'up'
instead of just wrapping launching containers, so that we can select on a
channel trying to find ram.

not dispatching hot calls to containers that died anymore either...

the timeout is now started at the beginning of Submit, rather than Dispatch or
the container itself having to manage the call timeout, which was an
inaccurate way of doing things since finding a slot / allocating ram / pulling
an image can all take a non-trivial (even timeout-sized!) amount of time. this
makes for much more reasonable response times from fn under load; there's
still a little TODO about handling cold+timeout container removal response
times, but it's much improved.

if call.Start is called with < call.timeout/2 time left, then the call will
not be executed and a timeout is returned (sketched below). we can discuss;
this makes async play _a lot_ nicer, specifically. for large timeouts, /2
makes less sense.
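
That check is essentially this sketch (names are illustrative):

```go
import (
	"context"
	"errors"
	"time"
)

// startCall refuses to run when less than half the configured timeout
// remains, returning a timeout instead of starting doomed work.
func startCall(ctx context.Context, timeout time.Duration) error {
	if deadline, ok := ctx.Deadline(); ok && time.Until(deadline) < timeout/2 {
		return errors.New("insufficient time remaining; treating as timeout")
	}
	// ... proceed with executing the call
	return nil
}
```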

env is no longer getting upper cased (admittedly, this can look a little weird
now). our whole route.Config/app.Config/env/headers stuff probably deserves a
whole discussion...

sync output no longer has the call id in json if there's an error / timeout.
we could add this back to signify that it's _us_ writing these but this was
out of place. FN_CALL_ID is still shipped out to get the id for sync calls,
and async [server] output remains unchanged.

async logs are now an entire raw http request (so that a user can write a 400
or something from their hot async container)

async hot now 'just works'

cold sync calls can now reply to the client before container removal, which
shaves a lot of latency off of those (they still eat the container start).
still need to figure out async removal on timeout or something.

-----------------------------------------------------------------------------

i've located a number of bugs that were generally inherited, and also added
a number of TODOs at the head of the agent.go file covering robustness work we
probably need to do. this is at least at parity with the previous
implementation, to my knowledge (hopefully/likely a good bit ahead). I can
memorialize these in github quickly enough, not that anybody searches before
filing bugs anyway (sigh).

the big thing to work on next imo is async being a lot more robust,
specifically to survive fn server failures / network issues.

thanks for review (gulp)
2017-09-05 20:32:51 +03:00
Travis Reeder
f559acd7ed Renamed a bunch of images to use fnproject org. (#239)
* Renamed a bunch of images to use fnproject org.

* Multi-stage build for Docker.

* Added tmp vendor dirs to gitignore.

* Run docker-build at beginning of test.
2017-08-23 22:43:53 +03:00
Travis Reeder
48e3781d5e Rename to GitHub (#3)
* circle

* Rename to github and fn->cli

2017-07-26 10:50:19 -07:00
Denis Makogon
97b0b97bd8 Reject async requests if the MQ is not reachable 2017-07-25 10:03:04 -07:00
Reed Allman
dc5e67b6d2 add opentracing spans for metrics 2017-07-25 08:55:22 -07:00
Travis Reeder
c3630eaa41 Expiring cache 2017-07-20 08:44:56 -07:00
Travis Reeder
0f83355f0a Merge branch 'fix-root-route-invocation' into 'master'
Allow calling root route on app

Closes #64

See merge request !91
2017-07-11 09:44:29 -07:00
James Jeffrey
81e39b210d Add go fmt 2017-07-07 10:14:08 -07:00
Will Price
02d6349cf8 Allow calling root route on app
Fixes #64

Previously, calling a root registered route would result in an error
message "Not Found", suggesting the route hadn't been registered, yet when
listing the routes with `fn routes list myapp` you could see the `/` route.

You can now successfully call a root registered route with `fn call
myapp /`.
2017-07-06 10:40:34 +01:00
James
8a3edb8309 All of the changes for func logs 2017-06-19 11:38:11 -07:00
Reed Allman
636af2f7ea fix up the tests 2017-06-06 05:04:22 -07:00
Reed Allman
9edacae928 clean up hotf(x) concurrency, rm max c
this patch gets rid of max concurrency for functions altogether, as discussed,
since it will be challenging to support across functions nodes. as a result of
doing so, the previous version of functions would fall over when offered 1000
functions, so there was some work needed in order to push this through.
further work is necessary as docker basically falls over when trying to start
enough containers at the same time, and with this patch essentially every
function can scale infinitely. it seems like we could add some kind of
adaptive restrictions based on task run length and configured wait time so
that fast running functions will line up to run in a hot container instead of
them all creating new hot containers.

this patch takes a first cut at whacking out some of the insanity that was the
previous concurrency model, which was problematic in that it limited
concurrency significantly across all functions, since every task went through
the same unbuffered channel, which could create blocking issues for all
functions if the channel wasn't picked off fast enough (it's not apparent that
this was impossible in the previous implementation). in any event, each
request has a goroutine already; there's no reason not to use it. it's not too
hard to wrap a map in a lock; not sure what the benefits were (added
insanity?). in effect this is marginally easier to understand and less insane
(marginally). after getting rid of max c, this adds a blocking mechanism for
the first invocation of any function so that all other hot functions will wait
on the first one to finish, to avoid a herd issue (it was making docker
die...) -- this could be slightly improved, but works in a pinch (see the
sketch at the end of this entry). reduced some memory usage by removing
redundant maps of htfnsvr's and task.Requests (by a factor of 2!). cleaned up
some of the protocol stuff; need to clean this up further. anyway, it's a
first cut. have another patch that rewrites all of it, but that was getting
into rabbit hole territory; would be happy to oblige if anybody else has
problems understanding this rat's nest of channels. there is a good bit of
work left to make this prod ready (regardless of removing max c).

a warning that this will break the db schemas; didn't put the effort in to add
migration stuff since this isn't deployed anywhere in prod...

TODO need to clean out the htfnmgr bucket with LRU
TODO need to clean up runner interface
TODO need to unify the task running paths across protocols
TODO need to move the ram checking stuff into worker for noted reasons
TODO need better elasticity of hot f(x) containers
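
The first-invocation gate mentioned above could look like this sketch (hypothetical types; the real htfn code differs):

```go
import "sync"

// firstGate makes all callers for a given function key wait until the
// first invocation finishes, avoiding a container-start herd.
type firstGate struct {
	mu   sync.Mutex
	done map[string]chan struct{}
}

// wait returns first=true for the initial caller, who must invoke release()
// when done; everyone else blocks until then.
func (g *firstGate) wait(key string) (first bool, release func()) {
	g.mu.Lock()
	ch, ok := g.done[key]
	if !ok {
		if g.done == nil {
			g.done = make(map[string]chan struct{})
		}
		ch = make(chan struct{})
		g.done[key] = ch
		g.mu.Unlock()
		return true, func() { close(ch) }
	}
	g.mu.Unlock()
	<-ch
	return false, func() {}
}
```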
2017-06-05 20:04:13 -07:00
Denis Makogon
3f065ce6bf [Feature] Function status 2017-06-06 14:12:50 -07:00
James Jeffrey
c7a5bae587 Merge branch 'chad-gitlab-url-change' into 'master'
Chad gitlab url change

See merge request !28
2017-05-30 11:34:22 -07:00
Denis Makogon
31b4ac4516 Address broken tests 2017-05-30 08:50:53 -07:00
Chad Arimura
49d397293b global url replace 2017-05-29 17:10:47 -07:00
Travis Reeder
9cc12b4b12 Remove iron... 2017-05-18 18:59:34 +00:00
James
e4bb04887e Rewrite imports to use forked files on gitlab instead of github's. 2017-05-16 11:06:32 -07:00
Travis Reeder
4b9bba352d Rename location. 2017-05-15 11:00:15 -07:00
Travis Reeder
615ae5c36f Mass s&r: iron-io -> kumokit 2017-04-19 09:49:12 -06:00