Commit Graph

20 Commits

Author SHA1 Message Date
Tolga Ceylan
222e06f5f5 fn: add fetch API error code helper function (#800) 2018-02-27 14:30:58 -08:00
Reed Allman
235cbc2d67 Fix default setting (#740)
* push validate/defaults into datastore

we weren't setting a timestamp in route insert when we needed to create an app
there. that whole thing isn't atomic, but this fixes the timestamp issue.

closes #738

seems like we should do similar with the FireBeforeX stuff too.

* fix tests

* app name validation was buggy: an upper-cased letter failed validation. it
doesn't anymore; validation is unicode-aware now (see the sketch below).
* removes duplicate errors for datastore and models validation that were used
interchangeably but weren't actually equivalent.
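A sketch of the shape of that fix (illustrative only; `validAppName` and its
exact rules are assumptions, not the repo's actual code):

```go
package models

import "unicode"

// validAppName accepts letters (any script, any case), digits, and a couple
// of separators. The point of the fix: the old ASCII-only check rejected
// upper-case letters entirely.
func validAppName(name string) bool {
	if name == "" {
		return false
	}
	for _, r := range name {
		switch {
		case unicode.IsLetter(r) || unicode.IsDigit(r):
		case r == '-' || r == '_':
		default:
			return false
		}
	}
	return true
}
```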
2018-02-05 11:54:09 -08:00
Tolga Ceylan
39b2cb2d9b Cpu resources (#642)
* fn: cpu quota implementation
2018-01-12 11:38:28 -08:00
Reed Allman
bb92547b95 Hybrid plumby (#585)
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent

* fixes up the tests, turns off /r/ on api nodes

* fix up defaults for runner nodes

* shove the runner async push code down into agent land to use client

* plumb up async-age

* return the full call from the async dequeue endpoint; since we're storing a
whole call in the MQ, we don't need to worry about caching of app/route [for now]
* fast, safe shutdown of the dequeue looper in the runner / tidying of agent
* nice errors for paths not found against /r/, /v1/, or any other path
* removed some stale TODOs in agent
* mq backends are only loud mouths in debug mode now

* update tests

* Add caching to hybrid client

* Fix HTTP error handling in hybrid client.

The type switch was on the value rather than a pointer (a minimal
illustration follows this list).

* Gofmt.

* Better caching with a nice caching wrapper

* Remove datastore cache which is now unused

* Don't need to manually wrap interface methods

* Go fmt
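The type-switch pitfall from the HTTP error handling fix above is easy to
reproduce in miniature; the types here are invented for illustration, not the
hybrid client's:

```go
package main

import "fmt"

// apiError implements error with a value receiver, so both apiError and
// *apiError satisfy the error interface -- which is how the mismatch hides.
type apiError struct{ code int }

func (e apiError) Error() string { return fmt.Sprintf("api error %d", e.code) }

func classify(err error) string {
	switch err.(type) {
	case apiError: // does not match: the dynamic type stored in err is *apiError
		return "matched value"
	case *apiError: // the fix: switch on the pointer type actually stored
		return "matched pointer"
	}
	return "unmatched"
}

func main() {
	var err error = &apiError{code: 500} // callers hand back pointers
	fmt.Println(classify(err))           // prints "matched pointer"
}
```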
2017-12-12 15:54:55 -08:00
Reed Allman
2ebc9c7480 hybrid mergy (#581)
* so it begins

* add clarification to /dequeue, change response to list to future proof

* Specify that runner endpoints are also under /v1

* Add a flag to choose operation mode (node type).

This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).

The additional modes are:
* API - the full API is available, but no functions are executed by the
  node. Async calls are placed into a message queue, and synchronous
  calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
  synchronous invocation requests are supported, but asynchronous
  requests are placed onto the message queue, so might be handled by
  another runner.
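A sketch of how such a mode flag might be read; the constant names here are
illustrative, not necessarily those in the code:

```go
package main

import (
	"fmt"
	"os"
)

type nodeType int

const (
	nodeFull   nodeType = iota // full API + sync/async runners (default)
	nodeAPI                    // API only; async enqueued, sync rejected
	nodeRunner                 // invocation/route endpoints only
)

// nodeTypeFromEnv maps FN_NODE_TYPE onto an operation mode, defaulting
// to the existing all-in-one behaviour.
func nodeTypeFromEnv() nodeType {
	switch os.Getenv("FN_NODE_TYPE") {
	case "api":
		return nodeAPI
	case "runner":
		return nodeRunner
	default:
		return nodeFull
	}
}

func main() {
	fmt.Println("node type:", nodeTypeFromEnv())
}
```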

* Add agent type and checks on Submit

* Sketch of a factored out data access abstraction for api/runner agents

* Fix tests, adding node/agent types to constructors

* Add tests for full, API, and runner server modes.

* Added atomic UpdateCall to datastore

* adds in server side endpoints

* Made ServerNodeType public because tests use it

* Made ServerNodeType public because tests use it

* fix test build

* add hybrid runner client

pretty simple go api client that covers the surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so it's in `/clients/hybrid`, but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next, and then wrap this
up in the DataAccessLayer stuff.

* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debugging

* minor fixes

* meh
2017-12-11 10:43:19 -08:00
jan grant
e05afebed1 Nitfix for #548 (#571) 2017-12-06 09:15:16 -08:00
Tolga Ceylan
2551be446a fn: introducing 503 responses for out of capacity case (#518)
* fn: introducing 503 responses for out of capacity case

* adds a 503 with a Retry-After header if a request fails while waiting for
slots (see the sketch below).
* TODO: return a 503 without Retry-After if the request can never be
satisfied by this fn server.
* fn: runner test docker pull fixup
* fn: MaxMemory for routes is now a variable, to allow testing and adjusting
it according to fleet memory sizes.
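A minimal sketch of that response shape, assuming a plain net/http handler
(the handler and the retry value are illustrative):

```go
package main

import (
	"net/http"
	"strconv"
)

// respondOutOfCapacity tells the client the server is briefly out of slots.
// A 503 with Retry-After means "try again soon"; per the TODO above, a 503
// without the header would mean this server can never satisfy the request.
func respondOutOfCapacity(w http.ResponseWriter, retryAfterSec int) {
	w.Header().Set("Retry-After", strconv.Itoa(retryAfterSec))
	http.Error(w, "server too busy", http.StatusServiceUnavailable)
}

func main() {
	http.HandleFunc("/r/", func(w http.ResponseWriter, r *http.Request) {
		respondOutOfCapacity(w, 15) // e.g. no slot freed up within the wait window
	})
	http.ListenAndServe(":8080", nil)
}
```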
2017-11-21 12:42:02 -08:00
Reed Allman
84239b4a14 remove idle_timeout > timeout check. d'oh 2017-09-21 04:32:56 -07:00
Reed Allman
caba9e0ec6 more strict configuration of routes
* idle_timeout max of 1h
* timeout max of 120s for sync, 1h for async
* max memory of 8GB
* do full route validation before call invocation
* ensure that idle_timeout >= timeout

we now validate route updates inside the database transaction, which is what
we should have been doing all along, really. we need this behavior to ensure
that the idle timeout is longer than the timeout, among other benefits (like
not updating the most recent version of the existing struct and overwriting
previous updates, yay). since we have this, we can get rid of the weird
skipZero behavior on validate too and validate the real deal holyfield.

validating the route before making the call is handy, so that we don't do
weird things like run a func that wants to use 300GB of RAM and run for 3
weeks. the checks are sketched below.
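The limits above translate into checks roughly like these (a hedged sketch;
the field names and error messages are assumptions, the limits are the
commit's):

```go
package models

import "errors"

const (
	maxIdleTimeout  = 3600     // 1h, in seconds
	maxSyncTimeout  = 120      // 120s for sync routes
	maxAsyncTimeout = 3600     // 1h for async routes
	maxMemoryMB     = 8 * 1024 // 8GB
)

type route struct {
	Type        string // "sync" or "async"
	Timeout     int    // seconds
	IdleTimeout int    // seconds
	MemoryMB    int
}

// validate enforces the route limits before a call is invoked and inside
// the update transaction, per the commit message.
func (r *route) validate() error {
	maxTimeout := maxSyncTimeout
	if r.Type == "async" {
		maxTimeout = maxAsyncTimeout
	}
	switch {
	case r.Timeout <= 0 || r.Timeout > maxTimeout:
		return errors.New("timeout out of range")
	case r.IdleTimeout > maxIdleTimeout:
		return errors.New("idle_timeout out of range")
	case r.IdleTimeout < r.Timeout:
		return errors.New("idle_timeout must be >= timeout")
	case r.MemoryMB <= 0 || r.MemoryMB > maxMemoryMB:
		return errors.New("memory out of range")
	}
	return nil
}
```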

closes #192
closes #344
closes #162
2017-09-21 04:04:34 -07:00
Reed Allman
337e962416 add pagination to all list endpoints
calls, apps, and routes listing were previously returning the entire data set,
which just won't scale. this adds pagination with cursoring forward to each of
these endpoints (see the [docs](docs/definitions.md)).

the patch is really mostly tests, shouldn't be that bad to pick through.

some blarble about implementation is in order:

calls are sorted by id but allow searching within certain `created_at` ranges
(finally). this is because sorting by `created_at` isn't feasible when
combined with paging, as `created_at` is not guaranteed to be unique -- ids
are (eliding theoreticals). i.e. on a page boundary, if there are 200 calls
with the same `created_at`, providing a `cursor` of that `created_at` would
skip over the remaining N calls with that `created_at`. using id is also
better on the index anyway (well, fewer of them). yay for sortable ids! I
can't discern any issues doing this: even if 200 calls have the same
created_at, they will have different ids, and the sort allows paginating them
just fine. ids are also url safe, so the id works as the cursor value just
fine.

apps and routes are sorted in alphabetical order. as their names aren't
guaranteed to be url safe, we base64 them in the front end to a url-safe
format before returning them, and base64-decode them when we get them back.
this does mean that cursors can be relatively large if the path/app is long,
but if we don't want to add ids they were going to be pretty big anyway. a
bonus is that this kind of obscures them. if somebody has a better idea on
formatting, by all means.

notably, we are not using the sql paging facilities; we are rolling our own
based on cursors, which ends up being much more efficient for querying longer
lists of resources. this should also be easy to implement in other, non-sql
dbs, and we can change the cursoring formats on the fly since we expose them
as opaque strings. the front end deals with the base64 / formatting, etc.,
and the back end takes raw values (strfmt.DateTime, or the id for calls).
the cursor passed to/by the user is simply the last resource on the previous
page, so in theory we don't even need to return it, but it does make things a
little easier to use. also, whether the cursor is blank on the last page
depends on page fullness, so sometimes users will get a cursor when there are
no results on the next page (1/N chance, and it's not really the end of the
world -- actually searching for the next thing would make things more
complex). there are ample tests for this behavior.
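Mechanically, a cursor page boils down to a keyset query plus an opaque
encoding. A hedged sketch for the apps listing (the real schema and helpers
differ):

```go
package datastore

import (
	"database/sql"
	"encoding/base64"
)

// listApps pages by name: WHERE name > cursor ORDER BY name LIMIT n.
// The cursor handed to the user is just the last name on the previous page,
// base64'd so arbitrary app names stay URL-safe.
func listApps(db *sql.DB, cursor string, n int) (names []string, next string, err error) {
	after := ""
	if cursor != "" {
		b, err := base64.RawURLEncoding.DecodeString(cursor)
		if err != nil {
			return nil, "", err
		}
		after = string(b)
	}
	rows, err := db.Query(
		`SELECT name FROM apps WHERE name > ? ORDER BY name ASC LIMIT ?`, after, n)
	if err != nil {
		return nil, "", err
	}
	defer rows.Close()
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, "", err
		}
		names = append(names, name)
	}
	if len(names) == n { // full page: hand back a cursor (may still be the last page)
		next = base64.RawURLEncoding.EncodeToString([]byte(names[n-1]))
	}
	return names, next, rows.Err()
}
```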

I've turned off all query parameters allowing `LIKE` queries on certain
listing endpoints, as we should not expose sql behavior through our API in
the event that we end up not using a sql db down the road. I think we should
only allow prefix matching, which sql as well as other types of databases can
support relatively cheaply, but this is not hooked up here as it didn't 'just
work' when I was fiddling with it (can add later; these are unnecessary and
weren't wired into the front end before).

* remove route listing across apps (unused)
* fix panic when doing `/app//`. this is probably possible for other types of
endpoints, out of scope here. added a guard in front of all endpoints for this
* adds `from_time` and `to_time` query parameters to calls, so you can e.g.
list the last hour of tasks. these are not required and default to
oldest/newest.
* hooked the datastore tests back up to the sql db. they only run with sqlite
atm, but they are useful; added a lot to them too.
* added a bunch of tests to the front end, so pretty sure this all works now.
* added to swagger, we'll need to re-gen. also wrote some words about how
pagination works; I'm not sure how best to link to these, feedback welcome.
* not sure how we want to manage indexes, but we may need to add some
(looking at created_at, mostly)
* `?route` changed to `?path` in routes listing, to keep consistency with
everything else
* don't 404 when searching for calls where the route doesn't exist, just
return an empty list (it's a query param ffs)

closes #141
2017-09-20 06:50:49 -07:00
Travis Reeder
75e2051169 Example app structure. round 1. 2017-09-18 17:16:59 -07:00
Reed Allman
700078ccb9 bubble up some docker errors to user
currently:

* container ran out of memory (code 137)
* container exited with other code != 0
* unable to pull image (auth/404)

there may be others, but this is a good start (the most common). notably,
for both hot and cold these should bubble up (if deterministic, which hub
isn't always), and they are useful for users when debugging why things aren't
working.

added tests to make sure that these behaviors are working.

also changed the behavior such that when the container exits we return a 502
instead of a 503, just to be able to distinguish the fact that fn is working
as expected but the container is acting funky (400 is weird here, so idk).
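A sketch of the exit-code mapping described above (the statuses follow the
commit; `apiStatusForExit` and the error text are illustrative):

```go
package agent

import "fmt"

// apiStatusForExit maps a container exit code onto the statuses described
// above: 137 usually means the kernel OOM-killed the container; any other
// non-zero exit becomes a 502, so clients can tell "fn is fine, the
// container misbehaved" apart from fn's own 5xx errors.
func apiStatusForExit(exitCode int) (status int, err error) {
	switch {
	case exitCode == 0:
		return 200, nil
	case exitCode == 137:
		return 502, fmt.Errorf("container out of memory (exit 137)")
	default:
		return 502, fmt.Errorf("container exited with code %d", exitCode)
	}
}
```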

removed references to old IsUserVisible crap and slightly changed the
interface for RunResult for plumbing reasons (to get the error type,
specifically).

fixed an issue where, if ~/.docker/config.json exists, pulling images
sometimes wouldn't work deterministically (should be more in line w/
expectations now)

closes #275
2017-09-07 11:55:50 -07:00
Reed Allman
71a88a991c hang the runner, agent=new sheriff (#270)
* fix docker build

this was trivially incorrect, since glide doesn't actually provide
reproducible builds. the idea is to build with the deps that we have checked
into git, so that we actually know what code is executing and can debug it...

all for a multi-stage build instead of what we had, but adding the glide step
is wrong. i added a loud warning so as to discourage this behavior in the
future.

* hang the runner, agent=new sheriff

tl;dr agent is now runner, with a hopefully saner api

the general idea is to get rid of all the various 'task' structs, change our
terminology to be only 'calls' now, push a lot of the http construction of a
call into the agent, allow calls to easily mutate their state around their
execution, and reduce the number of code paths, channels and context timeouts
into something [hopefully] easy to understand.

this introduces the idea of 'slots' which are either hot or cold and are
separate from reserving memory (memory is denominated in 'tokens' now).
a 'slot' is essentially a container that is ready for execution of a call, be
it hot or cold (it just means different things based on hotness). taking a
look into Submit should make these relatively easy to grok.
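In miniature, the split might look like this: a token reserves memory before
a launch, and a slot is a container ready to run one call. A heavily
simplified sketch, not the agent's actual types:

```go
package agent

// ramTokens is a counting semaphore denominated in MB: acquiring tokens
// reserves memory before a container is launched, independently of slots.
type ramTokens chan struct{}

func newRAMTokens(totalMB int) ramTokens {
	t := make(ramTokens, totalMB)
	for i := 0; i < totalMB; i++ {
		t <- struct{}{}
	}
	return t
}

// acquire blocks until mb MB are free. (A real implementation would be
// smarter than one channel op per MB; this is only the shape of the idea.)
func (t ramTokens) acquire(mb int) {
	for i := 0; i < mb; i++ {
		<-t
	}
}

func (t ramTokens) release(mb int) {
	for i := 0; i < mb; i++ {
		t <- struct{}{}
	}
}

// A slot is a container ready to run exactly one call, hot or cold; the
// agent takes whichever slot becomes available first.
type slot interface {
	exec(callID string) error
}
```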

sorry, things were pretty broken especially wrt timings. I tried to keep good
notes (maybe too good), to highlight stuff so that we don't make the same
mistakes again (history repeating itself blah blah quote). even now, there is
lots of work to do :)

I encourage just reading the agent.go code, Submit is really simple and
there's a description of how the whole thing works at the head of the file
(after TODOs). call.go contains code for constructing calls, as well as Start
/ End (small atm). I did some amount of code massaging to try to make things
simple / straightforward / fit a reasonable mental model, but as always am
open to critique (the more negative the better) as I'm just one guy and wth
do i know...

-----------------------------------------------------------------------------

below enumerates a number of changes as briefly as possible (heh..):

models.Call all the things

removes models.Task; models.Call is now what models.Task previously was.
models.FnCall is gone in favor of models.Call, even though the datastore only
stores a few of its fields [for now]. we should probably store entire calls
in the db: since app & route configurations can change at any given moment,
it would be nice to see the parameters of each call (costs db space,
obviously).

this removes the endpoints for getting & deleting messages; we were just
looping back to localhost to call the MQ (wtf? this was for iron integration
i think). now we just call the MQ directly.

changes the name of FnLog to LogStore. confusing, because there's also a
`FuncLogger` which uses the LogStore (punting). removes other `Fn`-prefixed
structs (a redundant naming convention).

removes some unused and/or weird structs (IDStatus, CompleteTime)

updates the swagger

makes the db methods consistent to use 'Call' nomenclature.

remove runner nuisances:

* push down registry stuff to docker driver
* remove Environment / Stats stuff of yore
* remove unused writers (now in FuncLogger)
* remove 2 of the task types, old hot stuff, runner, etc

fixes ram available calculation on startup to not always be 300GB (helps a lot
on a laptop!)

format for DOCKER_AUTH env now is not a list but a map (there are no docs,
would prefer to get rid of this altogether anyway). the ~/.docker/cfg expected
format is unchanged.

removes the arbitrary task queue; if a machine is out of ram we can probably
just time out without queueing... (can open a separate discussion). in any
case, the old one didn't really account well for hot tasks: it just lined
everyone up in the task queue if there wasn't a place to run hot, and then
timed them out [even if a slot became free].

removes HEADER_ prefixing on any headers in the request to invoke a call.
(this was inconsistent with the cli for test anyway)

removes TASK_ID header sent in to hot only (this is a dupe of FN_CALL_ID,
which has not been removed)

now user functions can reply directly to the client. this means that for
cold containers, if they write to stdout, we will send a 200 + headers. for
hot containers, the user can reply directly to the client from the container,
i.e. with their preferred status code / headers (vs. always getting a 200).
the dispatch itself is a little http-specific atm; i think we can add an
interchange format, but the current version is easily extended to add json,
for now a separate discussion. this eliminates a lot of the request/response
rewriting and buffering we were doing (yay). now Dispatch ONLY does input and
output, vs. managing the call timeout and having access to a call's fields.

cache is pushed down into agent now instead of in the front end, would like to
push it down to the datastore actually but it's here for now anyway. cache
delete functions removed (b/c fn is distributed anyway?). added app caching,
should help with latency.

in general, a lot of server/runner.go got pushed down into the agent. i
think it will be useful in testing to be able to construct calls without
having to invoke http handlers; async also needs to construct calls without a
handler.

safe shutdown actually works now for everything (leaked / didn't wait on
certain things before)

now, if we didn't find any hot slots to run the call in immediately, we wait
for hot slots to open up while also attempting to get ram to launch a
container. we can change this policy really easily now (no more channel
jungle; still some channels). we also look for somewhere else to go while the
container is launching. slots now get sent _out_ of a container, vs. a
container receiving calls, which makes this kind of policy easier to
implement. this fixes a number of bugs around things like trying to execute
calls against containers that have not and may never start, and trying to
launch a bazillion containers when there are no free containers. the driver
api underwent some changes to make this possible (relatively minimal, added
Wait). the easiest way to think about this is that allocating ram has moved
'up' instead of just wrapping launching containers, so that we can select on
a channel while trying to find ram. a sketch of that select follows.
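That policy reads naturally as a select; a sketch reusing the toy `slot` type
from the earlier sketch (real channels carry richer types):

```go
package agent

import "context"

// getSlot races a hot slot freeing up against a cold container becoming
// ready (its ram tokens acquired and its launch finished), all under the
// call's own deadline.
func getSlot(ctx context.Context, hot, coldReady <-chan slot) (slot, error) {
	select {
	case s := <-hot: // an existing hot container freed up first
		return s, nil
	case s := <-coldReady: // ram was found and a new container launched
		return s, nil
	case <-ctx.Done(): // the Submit-scoped timeout covers slot-finding too
		return nil, ctx.Err()
	}
}
```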

not dispatching hot calls to containers that died anymore either...

the timeout now starts at the beginning of Submit, rather than Dispatch or
the container itself having to manage the call timeout, which was an
inaccurate way of doing things since finding a slot / allocating ram /
pulling an image can all take a non-trivial (timeout-sized, even!) amount of
time. this makes for much more reasonable response times from fn under load.
there's still a little TODO about handling cold+timeout container removal
response times, but it's much improved.

if call.Start is called with less than call.timeout/2 time left, the call
will not be executed and returns a timeout. we can discuss. this makes async
play _a lot_ nicer, specifically. (for large timeouts, the /2 heuristic makes
less sense.)
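Concretely, the guard might look like this (a hedged sketch; `startGuard` is
an invented name):

```go
package agent

import (
	"context"
	"errors"
	"time"
)

// startGuard refuses to begin executing a call whose deadline is already
// more than half spent -- for async especially, better to fail fast than
// start work that likely can't finish.
func startGuard(ctx context.Context, timeout time.Duration) error {
	if deadline, ok := ctx.Deadline(); ok && time.Until(deadline) < timeout/2 {
		return errors.New("not enough time left to start call")
	}
	return nil
}
```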

env is no longer getting upper cased (admittedly, this can look a little weird
now). our whole route.Config/app.Config/env/headers stuff probably deserves a
whole discussion...

sync output no longer has the call id in json if there's an error / timeout.
we could add this back to signify that it's _us_ writing these but this was
out of place. FN_CALL_ID is still shipped out to get the id for sync calls,
and async [server] output remains unchanged.

async logs are now an entire raw http request (so that a user can write a 400
or something from their hot async container)

async hot now 'just works'

cold sync calls can now reply to the client before container removal, which
shaves a lot of latency off of those (they still eat the start). still need
to figure out async removal on timeout or something.

-----------------------------------------------------------------------------

i've located a number of bugs that were generally inherited, and also added
a number of TODOs at the head of the agent.go file for robustness work we
probably need to do. this is at least at parity with the previous
implementation, to my knowledge (hopefully/likely a good bit ahead). I can
memorialize these in github quickly enough, not that anybody searches before
adding bugs anyway (sigh).

the big thing to work on next imo is async being a lot more robust,
specifically to survive fn server failures / network issues.

thanks for review (gulp)
2017-09-05 20:32:51 +03:00
Reed Allman
53cbe2d5a4 stop riding the short bus, no clue why this stuff is here. only adds confusion, removing (#1)
server exposes Router field
2017-07-30 16:31:31 -07:00
Travis Reeder
c3630eaa41 Expiring cache 2017-07-20 08:44:56 -07:00
Reed Allman
c0aed2fbb0 mask errors in api response, log real error
we had this _almost_ right, in that we were trying, but we weren't masking
errors from the user response for any error we don't intend to show. this
also adds a stack trace for any internal server error, so that we might be
able to track them down in the future (looking at you, 'context deadline
exceeded'). in addition, this adds a new `models.APIError` interface, which
all of the errors in `models` now implement, and which can be caught / added
to easily.

the front end now does no status rewriting based on api errors: when we get
a non-nil error we can call `handleResponse(c, err)` with it, and if it's a
proper api error, return it to the user with the right status code; otherwise
log a stack trace and return `internal server error`. this cleans up a lot of
the front end code. a sketch of the pattern follows.
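The shape of the interface and the handler glue, sketched below; the `c` in
`handleResponse(c, err)` suggests gin, and `handleErrorResponse` plus the
exact JSON shape are assumptions:

```go
package server

import (
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
)

// APIError is any error that knows its own HTTP status code; the errors in
// models implement it. Anything else is masked from the user.
type APIError interface {
	Code() int
	error
}

// handleErrorResponse: proper api errors go out with their status code,
// unknown errors are logged and masked as a 500.
func handleErrorResponse(c *gin.Context, err error) {
	if apiErr, ok := err.(APIError); ok {
		c.JSON(apiErr.Code(), gin.H{"error": gin.H{"message": apiErr.Error()}})
		return
	}
	log.Printf("internal error: %+v", err) // the real code logs a stack trace here
	c.JSON(http.StatusInternalServerError,
		gin.H{"error": gin.H{"message": "internal server error"}})
}
```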

also rewrites the start-task 'ctx deadline exceeded' as a timeout. with iw
we had async tasks, so we could start the clock later and it didn't matter,
but now with sync, tasks sometimes time out just making docker calls, and we
want the task status to show up as timed out. we may want to catch all of
this above in addition to this, but this seems like the right thing to do.

removed the squishing-together of errors. this was weird; now we return the
first error, for the purposes of using the new err interface.

removed a lot of 5xx errors that really should have been 4xx errors. changed
some of the 400 errors to 409 errors, since they are from sending in
conflicting info and not a malformed request.

removed unused errors / useless errors (many were used for logging, and didn't
provide any context. now with stack traces we don't need context as much in
the logs).
2017-07-14 03:44:16 -07:00
Carlos C
d5fb1afda7 Revert "Assert License (#224)"
This reverts commit a61c4dab78.
2016-11-06 09:25:12 -08:00
C Cirello
a61c4dab78 Assert License (#224)
* license: assert license for Go code
* license: add in shell scripts
* license: assert license for Ruby code
* license: assert license to individual cases
* license: assert license to Dockerfile
2016-11-05 23:33:07 +01:00
Pedro Nasser
2489fd851f added wrapper on models; changed handlers; fixes 2016-07-26 00:10:45 -03:00
Pedro Nasser
66fa3d4035 refactoring API and added dbs: postgres, bolt 2016-07-21 16:04:58 -03:00