Commit Graph

38 Commits

Author SHA1 Message Date
Reed Allman
bb92547b95 Hybrid plumby (#585)
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent

* fixes up the tests, turns off /r/ on api nodes

* fix up defaults for runner nodes

* shove the runner async push code down into agent land to use client

* plumb up async-age

* return full call from async dequeue endpoint, since we're storing a whole
call in the MQ we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/ or other path not found
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now

* update tests

* Add caching to hybrid client

* Fix HTTP error handling in hybrid client.

The type switch was on the value rather than a pointer.
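For context, a minimal sketch of that bug class (the types here are hypothetical, not the repo's models): a type switch on the value form never matches when the error is actually constructed as a pointer.

```go
package main

import "fmt"

// apiError stands in for the error type the hybrid client gets back (hypothetical).
type apiError struct{ code int }

func (e apiError) Error() string { return fmt.Sprintf("api error %d", e.code) }

func classify(err error) string {
	switch err.(type) {
	case apiError: // value case: never matches when err holds *apiError
		return "matched value"
	case *apiError: // pointer case: this is what the fix switches on
		return "matched pointer"
	default:
		return "unknown error"
	}
}

func main() {
	// the client builds the error as a pointer, so only the pointer case hits
	fmt.Println(classify(&apiError{code: 500})) // prints "matched pointer"
}
```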

* Gofmt.

* Better caching with a nice caching wrapper

* Remove datastore cache which is now unused

* Don't need to manually wrap interface methods

* Go fmt
2017-12-12 15:54:55 -08:00
Reed Allman
2ebc9c7480 hybrid mergy (#581)
* so it begins

* add clarification to /dequeue, change response to list to future proof

* Specify that runner endpoints are also under /v1

* Add a flag to choose operation mode (node type).

This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).

The additional modes are:
* API - the full API is available, but no functions are executed by the
  node. Async calls are placed into a message queue, and synchronous
  calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
  synchronous invocation requests are supported, but asynchronous
  requests are placed onto the message queue, so might be handled by
  another runner.
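A rough sketch of how a server might pick its mode from `FN_NODE_TYPE`; the accepted string values and constant names below are assumptions for illustration, not necessarily what the server uses.

```go
package main

import (
	"log"
	"os"
)

type nodeType int

const (
	nodeFull   nodeType = iota // default: full API plus sync/async runners
	nodeAPI                    // API only: async goes to the MQ, sync is rejected
	nodeRunner                 // invocation/route API only
)

// nodeTypeFromEnv picks the operation mode; "api" and "runner" are assumed values.
func nodeTypeFromEnv() nodeType {
	switch os.Getenv("FN_NODE_TYPE") {
	case "api":
		return nodeAPI
	case "runner":
		return nodeRunner
	default:
		return nodeFull
	}
}

func main() {
	log.Printf("starting as node type %d", nodeTypeFromEnv())
}
```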

* Add agent type and checks on Submit

* Sketch of a factored out data access abstraction for api/runner agents

* Fix tests, adding node/agent types to constructors

* Add tests for full, API, and runner server modes.

* Added atomic UpdateCall to datastore

* adds in server side endpoints

* Made ServerNodeType public because tests use it

* Made ServerNodeType public because tests use it

* fix test build

* add hybrid runner client

pretty simple go api client that covers surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so put it in `/clients/hybrid` but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next and then wrap this up
in the DataAccessLayer stuff.

* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debuggo action
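A minimal sketch of the retry-on-500 behaviour mentioned above, assuming a plain net/http client; the attempt cap and backoff values are illustrative, not the hybrid client's actual policy.

```go
package hybrid

import (
	"fmt"
	"net/http"
	"time"
)

// doWithRetry retries a request when the server answers with a 5xx, backing
// off exponentially between attempts (works as-is for requests without bodies).
func doWithRetry(c *http.Client, req *http.Request) (*http.Response, error) {
	backoff := 100 * time.Millisecond
	for attempt := 1; ; attempt++ {
		resp, err := c.Do(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success or a non-retryable client error
		}
		if resp != nil {
			resp.Body.Close() // drop the failed response before retrying
		}
		if attempt >= 5 {
			return nil, fmt.Errorf("giving up after %d attempts: %v", attempt, err)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}
```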

* minor fixes

* meh
2017-12-11 10:43:19 -08:00
Tolga Ceylan
9481f811b7 fn: fail count should include timeouts (#577)
* fn: fail count should include timeouts
2017-12-06 16:11:59 -08:00
Nigel Deakin
96f27070be More metrics (#561)
* Add new spans to agent.submit

* Add new spans to agent.submit

* Add new spans to agent.submit

* Add new spans to agent.submit
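Roughly what adding a span around one phase of Submit looks like with opentracing-go (assuming that is the tracing library in play here); the operation name is made up.

```go
package agent

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

// withSpan runs one phase of submit under its own span so the time spent in
// that phase shows up in the tracer.
func withSpan(ctx context.Context, name string, phase func(context.Context) error) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, name)
	defer span.Finish()
	return phase(ctx)
}
```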
2017-12-05 10:26:28 -08:00
Travis Reeder
0798f9fac8 Middleware upgrade (#554)
* Adds root level middleware

* Added todo

* Better way for extensions to be added.

* Bad conflict merge?
2017-12-05 08:22:03 -08:00
Tolga Ceylan
25f6706642 Container memory tracking related changes (#541)
* squash: This is a combination of 10 commits

fn: get available memory related changes

*) getAvailableMemory() improvements
*) early fail if requested memory too large to meet
*) tracking async and sync pools individually. Sync pool
is reserved for sync jobs only, while async pool can be
used by all jobs.
*) head room estimation for available memory in Linux.
2017-12-01 11:21:16 -08:00
Reed Allman
892c843d87 add error to call model (#539)
* add error to call model

closes #331

previously, for async this error was being masked completely even if it was
something useful like the image not existing. for sync, the error was returned
in the http request but now it's also being stored. this error itself can
cover a lot of landscape, it could be an error in getting a slot, pulling an
image, running a container, among other things. anyway, no longer being
masked. we can likely improve it for certain cases we run into in the future,
but it's open ended at the moment and no longer masked the way some errors in
sync http responses (503 non-models.APIError) were, for now.

* tucks in callTrigger stuff to keep api clean
* adds swagger
* adds migration
* adds tests for datastore and agent to ensure behavior

* pull images before tests are run

* gofmt migrations file
2017-11-28 11:21:39 -06:00
Nigel Deakin
954f69e74a Add appname to basic metrics (#547)
* Add app labels to queued/running/completed/failed metrics

* Add app labels to queued/running/completed/failed metrics

* Add app labels to queued/running/completed/failed metrics
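For illustration, a counter with an app label using the Prometheus Go client; the metric and label names below are assumptions, not necessarily the ones the server registers.

```go
package stats

import "github.com/prometheus/client_golang/prometheus"

// completedCalls counts completed calls per app (names are illustrative).
var completedCalls = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "fn_completed",
		Help: "Completed function calls, labelled by app.",
	},
	[]string{"app"},
)

func init() {
	prometheus.MustRegister(completedCalls)
}

// recordCompleted bumps the per-app counter after a call finishes.
func recordCompleted(appName string) {
	completedCalls.WithLabelValues(appName).Inc()
}
```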
2017-11-28 10:17:24 -06:00
Reed Allman
c9198b8525 add per call stats field as histogram (#528)
* add per call stats field as histogram

this will add a histogram of up to 240 data points of call data, produced
every second, stored at the end of a call invocation in the db. the same
metrics are also still shipped to prometheus (prometheus has the
not-potentially-reduced version). for the API reference, see the updates to
the swagger spec, this is just added onto the get call endpoint.

this does not add any extra db calls and the field for stats in call is a json
blob, which is easily modified to add / omit future fields. this is just
tacked on to the call we're making to InsertCall, and expect this to add very
little overhead; we are bounding the set to be relatively small, planning to
clean out the db of calls periodically, functions will generally be short, and
the same code used at a previous firm did not cause a notable db size increase
under a production workload that was worse, wrt histogram size (I checked). the
code changes are really small aside from changing to strfmt.DateTime,
adding a migration and implementing sql.Valuer; needed to slightly modify the
swap function so that we can safely read `call.Stats` field to upload at end.

with the full histogram in hand, we can compute max/min/average/median/growth
rate/bernoulli distributions/whatever very easily in a UI or tooling. in
particular, this data is easily chartable [for a UI], which is beneficial.
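A sketch of the "stats as a JSON blob" idea with hypothetical field names; marshalling through driver.Valuer / sql.Scanner is one way to tack the blob onto the existing InsertCall without extra queries.

```go
package models

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
)

// Stat is one per-second sample on a call; the fields here are assumptions.
type Stat struct {
	Timestamp string            `json:"timestamp"`
	Metrics   map[string]uint64 `json:"metrics"`
}

// Stats is the bounded histogram stored on the call.
type Stats []Stat

// Value lets the SQL datastore write the field directly as a JSON blob.
func (s Stats) Value() (driver.Value, error) {
	return json.Marshal(s)
}

// Scan reads the blob back out when fetching a call.
func (s *Stats) Scan(value interface{}) error {
	b, ok := value.([]byte)
	if !ok {
		return errors.New("stats: expected []byte from the database")
	}
	return json.Unmarshal(b, s)
}
```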

* adds swagger spec of api update to calls endpoint
* adds migration for call.stats field
* adds call.stats field to sql queries
* change swapping of hot logger to exec, so we know that call.Stats is no
longer being modified after `exec` [in call.End]
* throws out docker stats between function invocations in hot functions (no
call to store them on, we could change this later for debug; they're in prom)
* tested in tests and API

closes #19

* add format of ints to swag
2017-11-27 08:52:53 -06:00
Tolga Ceylan
2551be446a fn: introducing 503 responses for out of capacity case (#518)
* fn: introducing 503 responses for out of capacity case

*) Adding 503 with Retry-After header case if request failed
during waiting for slots.
*) TODO: return 503 without Retry-After if the request can
never be met by this fn server.
*) fn: runner test docker pull fixup
*) fn: MaxMemory for routes is now a variable to allow
testing and adjusting it according to fleet memory sizes.
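A minimal sketch of the out-of-capacity response, using plain net/http for illustration; the retry window is an arbitrary example value.

```go
package server

import (
	"net/http"
	"strconv"
	"time"
)

// respondOutOfCapacity answers 503 with a Retry-After hint when no slot frees
// up in time for the request.
func respondOutOfCapacity(w http.ResponseWriter, retryAfter time.Duration) {
	w.Header().Set("Retry-After", strconv.Itoa(int(retryAfter/time.Second)))
	w.WriteHeader(http.StatusServiceUnavailable)
}
```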
2017-11-21 12:42:02 -08:00
Reed Allman
2d8c528b48 S3 loggyloo (#511)
* add minio-go dep, update deps

* add minio s3 client

minio has an s3 compatible api and is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hair ball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local.

* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs, I have another branch lined up to make a plain
text log API and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs
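The reader-based shape described above might look roughly like this; method names and arguments are assumptions, not the repo's exact interface.

```go
package logs

import (
	"context"
	"io"
)

// LogStore is a sketch of an s3-compatible (or sql) backed store for call logs.
type LogStore interface {
	// InsertLog streams a call's log into the backing store.
	InsertLog(ctx context.Context, appName, callID string, log io.Reader) error
	// GetLog returns a reader over the stored log so it can be streamed to
	// clients instead of being materialized as one big string.
	GetLog(ctx context.Context, appName, callID string) (io.Reader, error)
}
```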

some notes: notably we aren't cleaning up these logs. there is a ticket
already to make a Mr. Clean who wakes up periodically and nukes old stuff, so
am punting any api design around some kind of TTL deletion of logs. there are
a lot of options really for Mr. Clean, we can notably defer to him when apps
are deleted, too, so that app deletion is fast and then Mr. Clean will just
clean them up later (seems like a good option).

have not tested against BMC object store, which has an s3 compatible API. but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.

closes #481
closes #473

* add log not found error to minio land
2017-11-20 17:39:45 -08:00
Tolga Ceylan
17d4271ffb fn: move memory/token code into resource (#512)
*) bugfix: fix nil ptr access in docker registry RoundTrip
*) move async and ram token related code into resource.go
2017-11-17 15:25:53 -08:00
Nigel Deakin
910612d0b1 Docker stats to Prometheus (#486)
* Docker stats to Prometheus

* Fix compilation error in docker_test

* Refactor docker driver Run function to wait for the container to have stopped before stopping the collection of statistics

* Fix go fmt errors

* Updates to sending docker stats to Prometheus

* remove new test TestWritResultImpl because the changes to support multiple waiters have been removed

* Update docker.Run to use channels, not contexts, to shut down the stats collector
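A sketch of the channel-based shutdown mentioned in the last item: the collector samples on a ticker and exits when the driver closes a done channel, instead of being tied to a context. Names are illustrative; Run would close(done) after it has waited for the container to stop.

```go
package docker

import "time"

// collectStats samples container stats once a second until done is closed.
func collectStats(done <-chan struct{}, sample func()) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return // Run has observed the container stop; quit collecting
		case <-ticker.C:
			sample()
		}
	}
}
```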
2017-11-16 11:02:33 -08:00
Travis Reeder
96cfc9f5c1 Update json (#463)
* wip

* wip

* Added more fields to JSON and added blank line between objects.

* Update tests.

* wip

* Updated to represent recent discussions.

* Fixed up the json test

* More docs

* Changed from blank line to bracket, newline, open bracket.

* Blank line added back, easier for delimiting.
2017-11-16 09:59:13 -08:00
Tolga Ceylan
a530cd9be3 Minor naming and control flow changes to satisfy golint 2017-11-02 15:36:55 -07:00
Reed Allman
ce252d0448 Merge pull request #424 from fnproject/call-listener
CallListener - replaces RunnerListener
2017-10-26 10:36:14 -07:00
Travis Reeder
de04562b8e Pushed triggers into start() and end() 2017-10-25 14:14:31 +02:00
Travis Reeder
d080c23981 First draft of modifying RunnerListener to CallListener to get it closer to the action (and named better). 2017-10-25 14:13:25 +02:00
Nigel Deakin
39feaf8b69 Send tracing spans to Prometheus 2017-10-20 16:30:19 +01:00
Nigel Deakin
ae31944224 Add Prometheus statistics and an example to showcase them using Grafana 2017-10-05 16:21:31 +01:00
Reed Allman
6b7b1e3c63 Merge pull request #354 from fnproject/stats
Extend stats to report Failed calls
2017-09-22 10:50:59 -07:00
Nigel Deakin
54407f7b74 Extend stats to report Failed calls 2017-09-22 17:36:43 +01:00
Reed Allman
22a1b296e3 fix slot races
I'd be pretty surprised if these were happening but meh, a computer running at
capacity can make the runtime scheduler do all kinds of weird shit, so this
locks down the behavior around slot launching.

I didn't load test much as there are cries of 'wolf' running amok, and it's
late, so this could be off a little -- but I think it's about this easy.  cold
is the only one launching slots for itself, so it should always receive its
own slot (provided within time bounds). for hot we just need a way to tell the
ram token allocator that we aren't there anymore, so that somebody can close
the token (important).

If the bug still persists then it seems likely that there is another bug
around timing I'm not aware of (possible, but unlikely) or the more likely
case that it's actually taking up to the timeout to launch a container / find
a ram slot / find a free container. Otherwise, it's not related to the agent,
and the http server timeouts may need fiddling with (read / write timeout). If
the ruby client is failing to connect, though, I'm guessing it's just that
nobody is reading the body (i.e. no function runs) and the error handling
isn't very well done, as we reply with 504 if we hit a timeout (but if
nobody is listening, they won't get it).
2017-09-20 10:43:12 -07:00
Nigel Deakin
ae69bb37e3 Update global stats charts to show breakdown by function 2017-09-19 15:05:37 +01:00
Reed Allman
53ff665d69 not ready for spans yet in hot land 2017-09-08 05:06:35 -07:00
Reed Allman
4ce9163d99 nuke some TODO yey 2017-09-07 20:15:39 -07:00
Reed Allman
1811b4e230 make fn logger more reasonable
something still feels off with this, but i tinkered with it for a day-ish and
didn't come up with anything a whole lot better. doing a lot of the
maneuvering in the caller seemed better but it was just bloating up GetCall so
went back to having it basically like it was, but returning the limited
underlying buffer to read from so we can ship to the db.

some small changes to the LogStore interface, swapped it to take an
io.Reader instead of a string for more flexibility in the future while
essentially maintaining the same level of performance that we have now.
i'm guessing in the not so distant future we'll ship these to some s3 like
service and it would be better to stream them in than carry around a giant
string anyway. also, carrying around up to 1MB buffers in memory isn't great,
we may want to switch to file backed logs for calls, too. using io.Reader for
logs should make #279 more reasonable if/once we move to some s3-like thing,
we can stream from the log storage service direct to clients.

this fixes the span being out of whack and allows the 'right' context to be
used to upload logs (next to inserting the call). deletes the dbWriter we had,
and we just do this in call.End now (which makes sense to me at least).
removes the dupe code for making an stderr for hot / cold and simplifies the
way to get a func logger (no more 7 param methods yay).

closes #298
2017-09-07 20:15:39 -07:00
Reed Allman
1d0a63ca99 add id to all call invocation logs 2017-09-07 18:37:22 -07:00
Reed Allman
700078ccb9 bubble up some docker errors to user
currently:

* container ran out of memory (code 137)
* container exited with other code != 0
* unable to pull image (auth/404)

there may be others but this is a good start (the most common). notably, for
both hot and cold these should bubble up (if deterministic, which hub isn't
always), and these are useful for users to use in debugging why things aren't
working.

added tests to make sure that these behaviors are working.

also changed the behavior such that when the container exits we return a 502
instead of a 503, just to be able to distinguish the fact that fn is working
as expected but the container is acting funky (400 is weird here, so idk).
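A rough sketch of the exit-code-to-error mapping described above; the error type and messages are illustrative, not the repo's models.

```go
package agent

import (
	"fmt"
	"net/http"
)

// apiError is a stand-in for an error type that carries an HTTP status.
type apiError struct {
	code int
	msg  string
}

func (e apiError) Error() string { return e.msg }
func (e apiError) Code() int     { return e.code }

// errorForExit turns a container exit code into something a user can act on.
func errorForExit(exitCode int) error {
	switch {
	case exitCode == 0:
		return nil
	case exitCode == 137:
		return apiError{http.StatusBadGateway, "container ran out of memory (exit 137)"}
	default:
		return apiError{http.StatusBadGateway, fmt.Sprintf("container exited with code %d", exitCode)}
	}
}
```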

removed references to old IsUserVisible crap and slightly changed the
interface for RunResult for plumbing reasons (to get the error type,
specifically).

fixed an issue where if ~/.docker/config.json exists sometimes pulling images
wouldn't work deterministically (should be more in line w/ expectations now)

closes #275
2017-09-07 11:55:50 -07:00
Reed Allman
2341456334 FN_ prefix env vars
this adds `FN_` in front of env vars that we are injecting into calls, for
namespacing reasons. this will break code relying on the current variables but
if we want to do this, the chance is now, really. alternatively, we could
maintain both the old and new for a short period of time to ease the
adjustment (speak now...). updated the docs, as well.

this also adds tests for the notoriously finicky configuration of the env vars
and headers when setting up a call. this won't test the container / request
for the call is actually receiving them, but it's a decent start and will yell
loudly enough upon formatting breakage.

added back FXLB_WAIT to a couple places so the lb can ride again

one thing for feedback:

headers are a bit confusing at the moment (not from this change, but that
behavior is kept here for now), we've a chance to fix them. currently, headers
in the request __are not__ prefixed with `FN_HEADER_`, i.e. 'hot'+sync containers
will receive `Content-Length` in the http request headers, yet a 'cold'
container from the same request would receive `FN_HEADER_Content-Length` in
its environment. This is additionally confusing because if this function were
hot+async, it would receive `FN_HEADER_Content-Length` in the headers, where
just changing it to sync goes back to `Content-Length`. If that was confusing,
then point made ;)
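For illustration, roughly what the cold-container behaviour described above amounts to; the exact key munging is an assumption.

```go
package agent

import (
	"net/http"
	"strings"
)

// envFromHeaders flattens request headers into a cold container's environment
// with the FN_HEADER_ prefix (hot+sync containers see the plain headers).
func envFromHeaders(h http.Header) map[string]string {
	env := make(map[string]string, len(h))
	for k, vs := range h {
		env["FN_HEADER_"+k] = strings.Join(vs, ", ")
	}
	return env
}
```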

I propose to remove the `FN_HEADER_` prefix for request headers in the
environment, so that the request headers and env will match, as request
headers already are of this format (not prefixed). please lmk thoughts here

Would be fine with going back to the 'plain' vars too, then this patch will
mostly just be adding tests and changing `FN_FORMAT` to `FORMAT`. obviously,
from the examples, it's a bit ingrained now. anyway, entirely up to y'all.
2017-09-06 07:24:50 -07:00
Reed Allman
59d95d660a push app/route cache down to datastore (#303)
cache now implements models.Datastore by just embedding one and then changing
GetApp and GetRoute to have the cache inside. this makes it really flexible
for things like testing: the agent no longer automagically does caching; it
must be passed a datastore that was wrapped with a cache datastore.
the datastore in the server can remain separate and not use the cache still,
and then now the agent when running fn 'for real' is configured with the cache
baked in. this seems a lot cleaner than what we had and gets the cache out of
the way and it's easier to swap in / out / extend.
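A minimal sketch of the wrapping described here, with stand-in types (only GetApp shown; GetRoute would follow the same pattern). The uncached methods are promoted from the embedded datastore as-is.

```go
package datastore

import (
	"context"
	"sync"
)

// App and Datastore stand in for the real models types; only the methods
// being cached are overridden, everything else falls through.
type App struct{ Name string }

type Datastore interface {
	GetApp(ctx context.Context, name string) (*App, error)
	// ... the rest of the datastore methods ...
}

type cachedDatastore struct {
	Datastore // embed: uncached methods are promoted unchanged
	mu        sync.Mutex
	apps      map[string]*App
}

// Wrap returns a datastore with app caching baked in.
func Wrap(ds Datastore) Datastore {
	return &cachedDatastore{Datastore: ds, apps: make(map[string]*App)}
}

func (c *cachedDatastore) GetApp(ctx context.Context, name string) (*App, error) {
	c.mu.Lock()
	if app, ok := c.apps[name]; ok {
		c.mu.Unlock()
		return app, nil
	}
	c.mu.Unlock()

	app, err := c.Datastore.GetApp(ctx, name)
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	c.apps[name] = app
	c.mu.Unlock()
	return app, nil
}
```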
2017-09-08 09:18:36 -07:00
Denis Makogon
8a337e744b Addressing new comments 2017-09-07 15:17:39 +03:00
Denis Makogon
57a577dfc9 Wiring new context with initial span 2017-09-06 21:55:52 +03:00
Denis Makogon
9a89366d1b Addressing review comments
reverting query string caching in favour of Go 1.9 sqlx features
moving context definition out of call.End to an upper level
2017-09-06 21:48:29 +03:00
Denis Makogon
6a541139a9 Make call.End more solid 2017-09-06 21:48:29 +03:00
Reed Allman
4569b7fd69 stop forcing GET bodies through ?payload
it's unclear why we had this behavior in the first place, but alas, no more.

closes #264
2017-09-05 14:26:03 -07:00
Reed Allman
27e43c5d94 remove ccirrelo/supervisor, update
everything seems to work even though sirupsen is upper case?

:cyfap:
2017-09-05 11:36:47 -07:00
Reed Allman
71a88a991c hang the runner, agent=new sheriff (#270)
* fix docker build

this is trivially incorrect since glide doesn't actually provide reproducible
builds. the idea is to build with the deps that we have checked into git, so
that we actually know what code is executing so that we might debug it...

all for multi stage build instead of what we had, but adding the glide step is
wrong. i added a loud warning so as to discourage this behavior in the future.

* hang the runner, agent=new sheriff

tl;dr agent is now runner, with a hopefully saner api

the general idea is get rid of all the various 'task' structs now, change our
terminology to only be 'calls' now, push a lot of the http construction of a
call into the agent, allow calls to mutate their state around their execution
easily and to simplify the number of code paths, channels and context timeouts
in something [hopefully] easy to understand.

this introduces the idea of 'slots' which are either hot or cold and are
separate from reserving memory (memory is denominated in 'tokens' now).
a 'slot' is essentially a container that is ready for execution of a call, be
it hot or cold (it just means different things based on hotness). taking a
look into Submit should make these relatively easy to grok.

sorry, things were pretty broken especially wrt timings. I tried to keep good
notes (maybe too good), to highlight stuff so that we don't make the same
mistakes again (history repeating itself blah blah quote). even now, there is
lots of work to do :)

I encourage just reading the agent.go code, Submit is really simple and
there's a description of how the whole thing works at the head of the file
(after TODOs). call.go contains code for constructing calls, as well as Start
/ End (small atm). I did some amount of code massaging to try to make things
simple / straightforward / fit reasonable mental model, but as always am open
to critique (the more negative the better) as I'm just one guy and wth do i
know...

-----------------------------------------------------------------------------

below enumerates a number of changes as briefly as possible (heh..):

models.Call all the things

removes models.Task as models.Call is now what it previously was.
models.FnCall is now gone in favor of models.Call, despite the datastore
only storing a few fields of it [for now]. we should probably store entire
calls in the db, since app & route configurations can change at any given
moment, it would be nice to see the parameters of each call (costs db space,
obviously).

this removes the endpoints for getting & deleting messages; we were just
looping back to localhost to call the MQ (wtf? this was for iron integration i
think), and now the code just calls the MQ directly.

changes the name of FnLog to LogStore; confusing, because there's also a
`FuncLogger` which uses the LogStore (punting). removes other `Fn` prefixed
structs (redundant naming convention).

removes some unused and/or weird structs (IDStatus, CompleteTime)

updates the swagger

makes the db methods consistent to use 'Call' nomenclature.

remove runner nuisances:

* push down registry stuff to docker driver
* remove Environment / Stats stuff of yore
* remove unused writers (now in FuncLogger)
* remove 2 of the task types, old hot stuff, runner, etc

fixes ram available calculation on startup to not always be 300GB (helps a lot
on a laptop!)

format for DOCKER_AUTH env now is not a list but a map (there are no docs,
would prefer to get rid of this altogether anyway). the ~/.docker/cfg expected
format is unchanged.

removes the arbitrary task queue; if a machine is out of ram we can probably
just time out without queueing... (can open separate discussion). in any case
the old one didn't really account well for hot tasks: it just lined everyone up
in the task queue if there wasn't a place to run hot and then timed them out
[even if a slot became free].

removes HEADER_ prefixing on any headers in the request to invoke a call.
(this was inconsistent with cli for test anyway)

removes TASK_ID header sent in to hot only (this is a dupe of FN_CALL_ID,
which has not been removed)

now user functions can reply directly to the client. this means that for
cold containers if they write to stdout it will send a 200 + headers. for
hot containers, the user can reply directly to the client from the container,
i.e. with its preferred status code / headers (vs. always getting a 200).
the dispatch itself is a little http specific atm, i think we can add an
interchange format but the current version is easily extended to add json for
now, separate discussion. this eliminates a lot of the request/response
rewriting and buffering we were doing (yey). now Dispatch ONLY does input and
output, vs. managing the call timeout and having access to a call's fields.

cache is pushed down into agent now instead of in the front end, would like to
push it down to the datastore actually but it's here for now anyway. cache
delete functions removed (b/c fn is distributed anyway?). added app caching,
should help with latency.

in general, a lot of server/runner.go got pushed down into the agent. i think
it will be useful in testing to be able to construct calls without having to
invoke http handlers + async also needs to construct calls without a handler.

safe shutdown actually works now for everything (leaked / didn't wait on
certain things before)

now we're waiting for hot slots to open up while we're attempting to get ram
to launch a container if we didn't find any hot slots to run the call in
immediately. we can change this policy really easily now (no more channel
jungle; still some channels). also looking for somewhere else to go while the
container is launching now. slots now get sent _out_ of a container, vs.
a container receiving calls, which makes this kind of policy easier to
implement. this fixes a number of bugs around things like trying to execute
calls against containers that have not and may never start and trying to
launch a bazillion containers when there are no free containers. the driver api
underwent some changes to make this possible (relatively minimal, added Wait).
the easiest way to think about this is that allocating ram has moved 'up'
instead of just wrapping launching containers, so that we can select on a
channel trying to find ram.
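A sketch of the slot/token selection described above; the types and channel shapes are illustrative, not the agent's exact ones.

```go
package agent

import (
	"context"
	"errors"
)

// Slot is a container ready to run a call; RAMToken is reserved memory that
// would let us launch a new one. Both are stand-ins for illustration.
type Slot struct{}
type RAMToken struct{}

// waitForSlot waits for whichever shows up first: an existing hot slot, or a
// RAM token that lets us launch a container, or the call's deadline.
func waitForSlot(ctx context.Context, hot <-chan *Slot, ram <-chan *RAMToken,
	launch func(*RAMToken) (*Slot, error)) (*Slot, error) {

	select {
	case s := <-hot:
		// a warm container freed up first, use it
		return s, nil
	case tok := <-ram:
		// we got memory first: start a new container (a hot slot could still
		// win the race while it boots; that path is elided here)
		return launch(tok)
	case <-ctx.Done():
		return nil, errors.New("timed out waiting for a slot")
	}
}
```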

not dispatching hot calls to containers that died anymore either...

the timeout is now started at the beginning of Submit, rather than Dispatch or
the container itself having to manage the call timeout, which was an
inaccurate way of doing things since finding a slot / allocating ram / pulling
image can all take a non-trivial (timeout amount, even!) amount of time. this
makes for much more reasonable response times from fn under load, there's
still a little TODO about handling cold+timeout container removal response
times but it's much improved.

if call.Start is called with < call.timeout/2 time left, then the call will
not be executed and return a timeout. we can discuss. this makes async play
_a lot_ nicer, specifically. for large timeouts / 2 makes less sense.
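The guard described here is roughly this (names are illustrative):

```go
package agent

import (
	"context"
	"time"
)

// enoughTimeLeft reports whether at least half of the call's timeout remains
// by the time Start runs, so work isn't started only to be killed moments later.
func enoughTimeLeft(ctx context.Context, timeout time.Duration) bool {
	deadline, ok := ctx.Deadline()
	if !ok {
		return true // no deadline set
	}
	return time.Until(deadline) >= timeout/2
}
```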

env is no longer getting upper cased (admittedly, this can look a little weird
now). our whole route.Config/app.Config/env/headers stuff probably deserves a
whole discussion...

sync output no longer has the call id in json if there's an error / timeout.
we could add this back to signify that it's _us_ writing these but this was
out of place. FN_CALL_ID is still shipped out to get the id for sync calls,
and async [server] output remains unchanged.

async logs are now an entire raw http request (so that a user can write a 400
or something from their hot async container)

async hot now 'just works'

cold sync calls can now reply to the client before container removal, which
shaves a lot of latency off of those (still eat start). still need to figure
out async removal if timeout or something.

-----------------------------------------------------------------------------

i've located a number of bugs that were generally inherited, and also added
a number of TODOs in the head of the agent.go file according to robustness we
probably need to add. this is at least at parity with the previous
implementation, to my knowledge (hopefully/likely a good bit ahead). I can
memorialize these to github quickly enough, not that anybody searches before
adding bugs anyway (sigh).

the big thing to work on next imo is async being a lot more robust,
specifically to survive fn server failures / network issues.

thanks for review (gulp)
2017-09-05 20:32:51 +03:00