Commit Graph

169 Commits

Author SHA1 Message Date
Reed Allman
bae13d6c29 fix the http protocol dumper (#705)
we were using httputil.DumpRequest when there is a perfectly good
req.Write method hanging out in the stdlib, which even does the chunked thing
that a few people ran into when they don't provide a content length:
https://golang.org/pkg/net/http/#Request.Write -- so we shouldn't run into
that issue again. I hit this in testing and it was not very fun to debug, so
this adds a test that repro'd it on master and fixes it here. of course, adding
a content length works too. tested this and it appears to work pretty well;
also cleaned up the control flow a little bit in http protocol.
2018-01-22 11:41:04 -08:00
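
A minimal, runnable sketch of the difference the commit describes (the URL and body here are illustrative): req.Write emits a valid wire-format request and switches to chunked framing when the length is unknown, which is the transformation httputil.DumpRequest does not do.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader("hello")
	req, err := http.NewRequest("POST", "http://fn.example/r/app/route", body)
	if err != nil {
		panic(err)
	}
	// simulate a caller that did not provide a Content-Length
	req.ContentLength = -1

	var buf bytes.Buffer
	// req.Write produces a valid wire-format request, switching to
	// Transfer-Encoding: chunked when the length is unknown
	if err := req.Write(&buf); err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}
```
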
Nigel Deakin
e1df053de9 Change timedout to timeouts (#709) 2018-01-22 16:55:30 +00:00
Tolga Ceylan
8c31e47c01 fn: agent slot improvements (#704)
*) Stopped using latency previous/current stats; this
was not working as expected. Fresh starts usually have
these stats at zero for a long time, and initial samples
are high due to downloads, caches, etc.

*) New state to track: containers that are idle. In other
words, containers that have an unused token in the slot
queue.

*) Removed latency counts since these are not used in
container start decision anymore. Simplifies logs.

*) isNewContainerNeeded() simplified to use idle count
to estimate effective waiters. Removed speculative
latency based logic and progress check comparison.
In the agent, waitHot()'s delayed signalling compensates
for these changes. The estimation may fail, but this
should correct itself at the next 200 msec signal.
2018-01-19 12:35:52 -08:00
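
A hedged sketch of the idle-count heuristic described above; all names are hypothetical, not the actual identifiers in the agent.

```go
package agent

// slotQueueStats is a hypothetical snapshot of a slot queue (field names
// are illustrative, not the actual ones in the codebase).
type slotQueueStats struct {
	waiters  int // calls blocked waiting for a slot
	idle     int // containers holding an unused token in the slot queue
	starting int // containers currently launching
}

// isNewContainerNeeded sketches the idle-count heuristic: idle containers
// already cover some waiters, and containers still starting will cover
// more, so only launch when effective demand remains.
func isNewContainerNeeded(s slotQueueStats) bool {
	effectiveWaiters := s.waiters - s.idle
	if effectiveWaiters <= 0 {
		return false // idle containers can absorb the demand
	}
	return effectiveWaiters > s.starting
}
```
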
Tolga Ceylan
2f0de2b574 fn: resource and slot cancel and broadcast improvements (#696)
* fn: resource and slot cancel and broadcast improvements

*) Context argument does not wake up the waiters correctly upon
cancellation/timeout.
*) Avoid unnecessary broadcasts in slot and resource.

* fn: limit scope of context in resource/slot calls in agent
2018-01-18 13:43:56 -08:00
Reed Allman
c9e995292c if a slot is available, don't launch more (#701)
since we were sending a signal before checking if a slot was available, even
in the case of serial calls locally I was seeing 2 containers launch. if we
only send a signal after first checking if a slot is available, this goes
away. 1 usec should not be too offensive of an additional wait, all things
considered here.
2018-01-18 13:19:25 -08:00
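
A hedged sketch of the reordering the commit describes (types and names hypothetical): check for a free slot first, and only signal the launcher when nothing was immediately available.

```go
package agent

import (
	"context"
	"time"
)

type slot struct{}

type slotQueue struct {
	slots        chan *slot
	launchSignal chan struct{}
}

// tryDequeue returns an idle slot if one is immediately available.
func (q *slotQueue) tryDequeue() (*slot, bool) {
	select {
	case s := <-q.slots:
		return s, true
	default:
		return nil, false
	}
}

// acquireSlot checks for a free slot *before* signalling the hot-container
// launcher; signalling first is what caused two containers to start for
// back-to-back serial calls.
func acquireSlot(ctx context.Context, q *slotQueue) (*slot, error) {
	for {
		if s, ok := q.tryDequeue(); ok {
			return s, nil
		}
		select {
		case q.launchSignal <- struct{}{}: // wake the launcher
		default: // launcher already signalled
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(time.Microsecond): // the ~1 usec extra wait
		}
	}
}
```
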
Tolga Ceylan
5a7778a656 fn: cancellations in WaitAsyncResource (#694)
* fn: cancellations in WaitAsyncResource

Added a go context with cancel to WaitAsyncResource. Although
today the only case for cancellation is shutdown, this cleans
up agent shutdown a little bit.

* fn: locked broadcast to avoid missed wake-ups

* fn: removed ctx arg to WaitAsyncResource and startDequeuer

This is confusing and unnecessary.
2018-01-17 16:08:54 -08:00
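
A hedged sketch (not the agent's actual code) of the locked-broadcast pattern these two commits circle around: a sync.Cond waiter cannot see context cancellation by itself, so a helper goroutine broadcasts on ctx.Done(), and it takes the mutex first so the broadcast cannot land between a waiter's condition check and its Wait call -- the missed wake-up case. The cond is assumed to be built over mu with sync.NewCond.

```go
package agent

import (
	"context"
	"sync"
)

// waitResource blocks until a resource token is available or ctx ends.
// cond must have been created as sync.NewCond(mu).
func waitResource(ctx context.Context, mu *sync.Mutex, cond *sync.Cond, avail *int) error {
	stop := make(chan struct{})
	defer close(stop)
	go func() {
		select {
		case <-ctx.Done():
			mu.Lock() // locked broadcast: a waiter can't be between check and Wait
			cond.Broadcast()
			mu.Unlock()
		case <-stop:
		}
	}()

	mu.Lock()
	defer mu.Unlock()
	for *avail == 0 {
		if err := ctx.Err(); err != nil {
			return err // cancellation/shutdown observed by the waiter
		}
		cond.Wait()
	}
	*avail--
	return nil
}
```
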
Nigel Deakin
8bf26efa29 Add new Prom metrics fn_timeout and fn_errors (#679)
* Add new Prom metric fn_timedout

* Add new Prometheus metric fn_errors

* Tidy up variable name

* Add new Prometheus metric fn_errors

* gofmt
2018-01-15 14:49:33 +00:00
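
A hedged sketch of what counters like these look like with the Prometheus Go client (metric names taken from the bullets above; the registration and classification details are assumptions).

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	timedoutCounter = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "fn_timedout",
		Help: "Count of calls that timed out.",
	})
	errorsCounter = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "fn_errors",
		Help: "Count of calls that failed with an error.",
	})
)

func init() {
	prometheus.MustRegister(timedoutCounter, errorsCounter)
}

// record classifies a completed call; timeouts and other errors are
// counted separately (a sketch, not the actual call-site logic).
func record(timedOut bool, err error) {
	switch {
	case timedOut:
		timedoutCounter.Inc()
	case err != nil:
		errorsCounter.Inc()
	}
}
```
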
Reed Allman
0bde666395 clean up agent.Submit (#681)
this was getting bloated with various contexts and spans and stats
administrivia that obfuscated what was going on a lot. this makes some helper
methods to shove most of that stuff into, and simplifies the context handling
around getting a slot by moving it inside of slot acquisition code. also
removed most uses of `call.Model()` -- I'll kill this thing some day, but if a
reason is needed: the overhead of dynamic dispatch is unnecessary, since we're
inside the implementation of the agent we don't want to use the interface
methods there.
2018-01-12 13:56:17 -08:00
Tolga Ceylan
39b2cb2d9b Cpu resources (#642)
* fn: cpu quota implementation
2018-01-12 11:38:28 -08:00
Tolga Ceylan
1c8029e4f1 fn: more tests for hot container launch logic (#678) 2018-01-11 16:00:37 -08:00
Tolga Ceylan
db159e595f fn: new container launch adjustments (#677)
*) revert executor wait queue size comparison. This is too
   aggressive and, with the stall check below, now unnecessary.
*) new container logic now checks if stats are constant; if
   this is the case, we assume the system is stalled (e.g.
   running functions that take a long time), which means we
   need to make progress and spin up a new container.
2018-01-11 14:09:21 -08:00
Nigel Deakin
ac2bfd3462 Change basic stats to use opentracing rather than Prometheus API (#671)
* Change basic stats to use opentracing rather than Prometheus API directly

* Just ran gofmt

* Extract opentracing access for metrics to common/metrics.go

* Replace quotes strings with constants where possible
2018-01-11 17:34:51 +00:00
Tolga Ceylan
7c91b98a72 fn: hot container launcher adjustment (#673)
Latency stats are not always updated in real time, and
if calls are stuck in the waiting state, isNewContainerNeeded()
needs to be a bit more aggressive as the wait queue grows.
2018-01-10 14:14:19 -08:00
Tolga Ceylan
23ae1fe723 fn: removed dead code (#672) 2018-01-10 12:32:19 -08:00
Travis Reeder
3b9818bc58 Switch to dep from glide (#664) 2018-01-09 14:11:08 -08:00
Reed Allman
20089c4e83 make headers quasi-consistent (#660)
possible breakages:

* `FN_HEADER` vars on cold no longer get `s/-/_/` -- this is so that cold functions
can rebuild the headers as they were when they came in on the request (fdks,
specifically); there's no guarantee that a reversal `s/_/-/` yields the original
header on the request.
* app and route config no longer get `s/-/_/` -- it seemed really weird to rewrite
the user's config vars on these. should just pass them exactly as-is to the env.
* headers no longer contain the environment vars (previously, base config; app
config, route config, `FN_PATH`, etc.), these are still available in the
environment.

this gets rid of a lot of the code around headers, specifically the stuff that
shoved everything into headers when constructing a call to begin with. now we
just store the headers separately and add a few things, like FN_CALL_ID, to
them, and build a separate 'config' to store on the call. I thought 'config'
was more aptly named; 'env' was confusing, though now 'config' is exactly what
'base_vars' was, which is only the things being put into the env. we weren't
storing this field in the db, so this doesn't break unless there are messages
in a queue from another version; anyway, don't think we're there and don't
expect any breakage for anybody from the field name changes.

this makes the configuration stuff pretty straightforward: there are just two
separate buckets of things, and cold just needs to mash them together into the
env, while hot containers just need to put 'config' in the env, and then the
hot format can shove 'headers' in however it'd like. this seems better than
my last idea about making this easier but worse (RIP).

this means:

* headers no longer contain all vars, the set of base vars can only be found
in the environment.
* headers is only the headers from request + call_id, deadline, method, url
* for cold, we simply add the headers to the environment, prepending
`FN_HEADER_` to them, BUT NOT upper casing or `s/-/_/`
* fixes issue where async hot functions would end up with `Fn_header_`
prefixed headers
* removes idea of 'base' vars and 'env'. this was a strange concept. now we just have
'config' which was base vars, and headers, which was base_env+headers; i.e.
they are disjoint now.
* casing for all headers will lean to be `My-Header` style, which should help
with consistency. notable exceptions for cold only are FN_CALL_ID, FN_METHOD,
and FN_REQUEST_URL -- this is simply to avoid breakage, in either hot format
they appear as `Fn_call_id` still.
* removes FN_PARAM stuff
* updated doc with behavior

weird things left:

`Fn_call_id` e.g. isn't a correctly formatted http header; it should likely be
`Fn-Call-Id`, but I wanted to live to fight another day on this one, as it
would add some breakage.

examples to be posted of each format below

closes #329
2018-01-09 10:08:30 -08:00
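
A hedged sketch (helper name hypothetical) of the cold env construction the commit describes: config passes through verbatim, and request headers gain an FN_HEADER_ prefix with no upper-casing and no s/-/_/.

```go
package protocol

import (
	"net/http"
	"strings"
)

// coldEnv builds the env var list for a cold container: config vars pass
// through exactly as-is, and headers are prefixed so FDKs can reconstruct
// the original request headers.
func coldEnv(config map[string]string, headers http.Header) []string {
	env := make([]string, 0, len(config)+len(headers))
	for k, v := range config {
		env = append(env, k+"="+v) // config passed exactly as-is
	}
	for k, vs := range headers {
		// "My-Header: x" becomes FN_HEADER_My-Header=x -- NOT upper-cased,
		// dashes NOT rewritten to underscores
		env = append(env, "FN_HEADER_"+k+"="+strings.Join(vs, ", "))
	}
	return env
}
```
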
Travis Reeder
580dd3e5cb Removes FN_PARAM_xxx (#661) 2018-01-09 16:42:25 +00:00
Tolga Ceylan
18716911b9 fn: agent slot and execution wait correction (#658)
Since by policy we require timeout/2 remaining time
before we can execute the request, we should also
bound the slot wait time by timeout/2 to avoid
waiting for the full timeout in the slot wait phase.
2018-01-08 12:33:37 -08:00
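
The bound is a one-liner in spirit; a hedged sketch (function name hypothetical):

```go
package agent

import (
	"context"
	"time"
)

// slotWaitContext caps the slot-wait phase at timeout/2. Waiting longer is
// pointless: policy requires timeout/2 of the call's budget to remain for
// execution, so a slot arriving later than that would be unusable anyway.
func slotWaitContext(ctx context.Context, callTimeout time.Duration) (context.Context, context.CancelFunc) {
	return context.WithTimeout(ctx, callTimeout/2)
}
```
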
Tolga Ceylan
6f1f5e365d fn: URL parsing updates to fix json request_url (#657)
*) Updated fn-test-utils to latest fdk-go
*) Added hot-json to runner tests
*) Removed anon function in FromRequest which had
a side effect of setting req.URL.Host. This is now more
explicit and eliminates some corresponding logic in
protocol http.
*) in gin, the http request's RequestURI is not set; removed
code that references it. (use Call.URL instead)
2018-01-08 10:28:50 -08:00
Tolga Ceylan
7185f8306a fn: async wait bugfix and low mem warnings (#649)
*) fix async high water mark logic
*) warning on 256 mem for async and async+sync pools
2018-01-05 10:27:26 -08:00
Tolga Ceylan
14789aba41 Slot mgr fixes (#613)
*) during shutdown, errors should be 503
*) new inactivity timeout for hot queues; we previously kept hot queues in memory forever.
*) each hot queue now has a hot launcher to monitor and launch hot containers
*) consumers now create a consumer channel with startDequeuer() that can be cancelled via context
*) consumers now ping (signal) hot launcher every 200 msecs until they get a slot
*) tests for slot queue & mgr
2018-01-04 11:34:43 -08:00
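
A hedged sketch (types and names hypothetical) of the consumer side described above: a cancellable wait on the slot channel that pings the hot launcher every 200 msec.

```go
package agent

import (
	"context"
	"time"
)

type slot struct{}

// waitForSlot blocks on a cancellable dequeue, nudging the hot launcher
// every 200 msec until a slot shows up or the context ends.
func waitForSlot(ctx context.Context, slots <-chan *slot, ping chan<- struct{}) (*slot, error) {
	t := time.NewTicker(200 * time.Millisecond)
	defer t.Stop()
	for {
		select {
		case s := <-slots:
			return s, nil
		case <-t.C:
			select {
			case ping <- struct{}{}: // signal the hot launcher
			default: // launcher already has a pending signal
			}
		case <-ctx.Done():
			return nil, ctx.Err() // e.g. shutdown, surfaced as a 503
		}
	}
}
```
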
Nigel Deakin
c0dec594ef Add calls metric (#637)
* Add new Prometheus metric fn_api_calls

* Change fn_api_calls to a counter
2018-01-03 10:42:37 -06:00
Reed Allman
f51792ae5e Timestamps on apps / routes (#614)
* route updated_at

* add app created at, fix some route updated_at bugs

* add app updated_at

TODO need to add tests through front end
TODO for validation we don't really want to use the validate wrapper since
it's a programmer error and not a user error, hopefully tests block this.

* add tests for timestamps to exist / change on apps&routes

* route equals at done, fix tests wit dis

* fix up the equals sugar

* add swagger

* fix rebase

* precisely allocate maps in clone

* vetted

* meh

* fix api tests
2017-12-23 09:57:36 -06:00
Tolga Ceylan
feeeca3321 fn: agent shutdown improvements (#622) 2017-12-22 12:52:31 -08:00
Tolga Ceylan
25a72146f5 slot tracking improvements (#562)
* fn: remove 100 msec sleep for hot containers

*) moved slot management to its own file
*) slots are now implemented with LIFO semantics, this is important since we do
   not want to round robin hot containers. Idle hot containers should timeout properly.
*) each slot queue now stores a few basic stats such as avg time a call spent in a given
   state and number of running/launching containers, number of waiting calls in those states.
*) first metrics in these basic stats are discarded to avoid initial docker pull/start spikes.
*) agent now records/updates slot queue state and how much time a call stayed in that state.
*) waitHotSlot() replaces the previous wait-100-msec logic; it sends a msg to
   the hot slot goroutine launchHot() and waits for a slot
*) launchHot() is now a goroutine for tracking containers in hot slots; it determines
   if a new container is needed based on slot queue stats.
2017-12-15 15:50:07 -08:00
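
A hedged sketch of the LIFO semantics (types hypothetical): a simple stack so the most recently used container is reused first and idle ones sink toward their timeout.

```go
package agent

import "sync"

type slot struct{}

// slotStack holds returned slots in LIFO order: the most recently used
// container stays warm and is reused first, while truly idle containers
// sink to the bottom and hit their idle timeout instead of being
// round-robined back into service.
type slotStack struct {
	mu    sync.Mutex
	slots []*slot
}

func (s *slotStack) push(x *slot) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.slots = append(s.slots, x)
}

func (s *slotStack) pop() (*slot, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	n := len(s.slots)
	if n == 0 {
		return nil, false
	}
	x := s.slots[n-1] // LIFO: take the newest slot first
	s.slots = s.slots[:n-1]
	return x, true
}
```
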
Tolga Ceylan
419298e1c0 Async hot hdr fix (#604)
* fn: for async hot requests ensure/fix content-length/type

* fn: added tests for FromModel for content type/length

* fn: restrict the content-length fix to async in FromModel()
2017-12-15 14:32:25 -08:00
Nigel Deakin
f1fc040948 Fix spans for prometheus (#606) 2017-12-15 10:31:57 -08:00
Tolga Ceylan
3b12f3fa3d Fn deadline (#591)
* fn: added fn_deadline as RFC3339
2017-12-14 19:25:36 -08:00
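
A minimal example of serializing such a deadline as RFC3339 (the exact env var name is assumed to be FN_DEADLINE):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// deadline = now + the call's timeout, serialized as RFC3339
	deadline := time.Now().Add(30 * time.Second).UTC()
	fmt.Println("FN_DEADLINE=" + deadline.Format(time.RFC3339))
}
```
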
Tolga Ceylan
eccce881a6 fn: exclude timeouts from failed error count (#590)
* fn: exclude timeouts from failed error count
2017-12-14 13:10:07 -08:00
Reed Allman
bb92547b95 Hybrid plumby (#585)
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent

* fixes up the tests, turns off /r/ on api nodes

* fix up defaults for runner nodes

* shove the runner async push code down into agent land to use client

* plumb up async-age

* return full call from async dequeue endpoint, since we're storing a whole
call in the MQ we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/ or other path not found
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now

* update tests

* Add caching to hybrid client

* Fix HTTP error handling in hybrid client.

The type switch was on the value rather than a pointer.

* Gofmt.

* Better caching with a nice caching wrapper

* Remove datastore cache which is now unused

* Don't need to manually wrap interface methods

* Go fmt
2017-12-12 15:54:55 -08:00
Tolga Ceylan
b0937f236f fn: headroom error case to clarify OOM (#589) 2017-12-12 12:02:12 -08:00
Reed Allman
2ebc9c7480 hybrid mergy (#581)
* so it begins

* add clarification to /dequeue, change response to list to future proof

* Specify that runner endpoints are also under /v1

* Add a flag to choose operation mode (node type).

This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).

The additional modes are:
* API - the full API is available, but no functions are executed by the
  node. Async calls are placed into a message queue, and synchronous
  calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
  synchronous invocation requests are supported, but asynchronous
  requests are placed onto the message queue, so might be handled by
  another runner.

* Add agent type and checks on Submit

* Sketch of a factored out data access abstraction for api/runner agents

* Fix tests, adding node/agent types to constructors

* Add tests for full, API, and runner server modes.

* Added atomic UpdateCall to datastore

* adds in server side endpoints

* Made ServerNodeType public because tests use it

* Made ServerNodeType public because tests use it

* fix test build

* add hybrid runner client

pretty simple go api client that covers surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so put it in `/clients/hybrid` but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next and then wrap this up
in the DataAccessLayer stuff.

* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debuggo action

* minor fixes

* meh
2017-12-11 10:43:19 -08:00
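
A hedged sketch of dispatching on FN_NODE_TYPE as described in the list above (the constant names and accepted string values here are assumptions; the commit text doesn't spell them out).

```go
package server

import "os"

type nodeType int

const (
	nodeFull   nodeType = iota // full API plus sync and async runners (default)
	nodeAPI                    // full API; async enqueued, sync rejected
	nodeRunner                 // invocation/route API only
)

// nodeTypeFromEnv reads the operation mode from the environment, falling
// back to the existing full-server behaviour.
func nodeTypeFromEnv() nodeType {
	switch os.Getenv("FN_NODE_TYPE") {
	case "api":
		return nodeAPI
	case "runner":
		return nodeRunner
	default:
		return nodeFull
	}
}
```
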
Tolga Ceylan
9481f811b7 fn: fail count should include timeouts (#577)
* fn: fail count should include timeouts
2017-12-06 16:11:59 -08:00
Nigel Deakin
96f27070be More metrics (#561)
* Add new spans to agent.submit

* Add new spans to agent.submit

* Add new spans to agent.submit

* Add new spans to agent.submit
2017-12-05 10:26:28 -08:00
Travis Reeder
0798f9fac8 Middleware upgrade (#554)
* Adds root level middleware

* Added todo

* Better way for extensions to be added.

* Bad conflict merge?
2017-12-05 08:22:03 -08:00
Tolga Ceylan
25f6706642 Container memory tracking related changes (#541)
* squash: this is a combination of 10 commits

fn: get available memory related changes

*) getAvailableMemory() improvements
*) early fail if requested memory is too large to meet
*) tracking async and sync pools individually. The sync pool
is reserved for sync jobs only, while the async pool can be
used by all jobs.
*) headroom estimation for available memory on Linux.
2017-12-01 11:21:16 -08:00
Travis Reeder
a67d5a6290 Drop viper dependency (#550)
* Removed viper dependency.

* removed from glide files
2017-11-28 15:46:17 -08:00
Reed Allman
892c843d87 add error to call model (#539)
* add error to call model

closes #331

previously, for async this error was being masked completely even if it was
something useful like the image not existing. for sync, the error was returned
in the http response, but now it's also being stored. this error itself can
cover a lot of landscape: it could be an error in getting a slot, pulling an
image, or running a container, among other things. anyway, it's no longer
being masked. we can likely improve it in certain cases we run into in the
future, but it's open ended at the moment and, unlike some errors in sync
http responses (503 non-models.APIError), it's not being masked for now.

* tucks in callTrigger stuff to keep api clean
* adds swagger
* adds migration
* adds tests for datastore and agent to ensure behavior

* pull images before tests are ran

* gofmt migrations file
2017-11-28 11:21:39 -06:00
Nigel Deakin
954f69e74a Add appname to basic metrics (#547)
* Add app labels to queued/running/completed/failed metrics

* Add app labels to queued/running/completed/failed metrics

* Add app labels to queued/running/completed/failed metrics
2017-11-28 10:17:24 -06:00
Reed Allman
c9198b8525 add per call stats field as histogram (#528)
* add per call stats field as histogram

this will add a histogram of up to 240 data points of call data, produced
every second, stored at the end of a call invocation in the db. the same
metrics are also still shipped to prometheus (prometheus has the
not-potentially-reduced version). for the API reference, see the updates to
the swagger spec, this is just added onto the get call endpoint.

this does not add any extra db calls and the field for stats in call is a json
blob, which is easily modified to add / omit future fields. this is just
tacked on to the call we're making to InsertCall, and expect this to add very
little overhead; we are bounding the set to be relatively small, planning to
clean out the db of calls periodically, functions will generally be short, and
the same code used at a previous firm did not cause a notable db size increase
with a production workload that is worse wrt histogram size (I checked). the
code changes are really small aside from changing to strfmt.DateTime,
adding a migration and implementing sql.Valuer; needed to slightly modify the
swap function so that we can safely read the `call.Stats` field to upload at the end.

with the full histogram in hand, we can compute max/min/average/median/growth
rate/bernoulli distributions/whatever very easily in a UI or tooling. in
particular, this data is easily chartable [for a UI], which is beneficial.

* adds swagger spec of api update to calls endpoint
* adds migration for call.stats field
* adds call.stats field to sql queries
* change swapping of hot logger to exec, so we know that call.Stats is no
longer being modified after `exec` [in call.End]
* throws out docker stats between function invocations in hot functions (no
call to store them on, we could change this later for debug; they're in prom)
* tested in tests and API

closes #19

* add format of ints to swag
2017-11-27 08:52:53 -06:00
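
A hedged sketch of the sql.Valuer approach mentioned above, storing the stats histogram as a JSON blob; the struct shape is assumed, and the real code uses strfmt.DateTime rather than time.Time.

```go
package models

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"time"
)

// Stat is one per-second sample of a running call (shape assumed).
type Stat struct {
	Timestamp time.Time         `json:"timestamp"`
	Metrics   map[string]uint64 `json:"metrics"`
}

// Stats is the bounded (~240 point) histogram stored on the call.
type Stats []Stat

// Value implements driver.Valuer: the histogram is marshaled into a JSON
// blob column, so fields can be added or omitted without a schema change.
func (s Stats) Value() (driver.Value, error) {
	return json.Marshal(s)
}

// Scan implements sql.Scanner for reading the blob back out.
func (s *Stats) Scan(v interface{}) error {
	b, ok := v.([]byte)
	if !ok {
		return errors.New("stats: expected []byte from driver")
	}
	return json.Unmarshal(b, s)
}
```
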
Denis Makogon
347edea56e Use valid call type instead in protocol (#534) 2017-11-24 10:32:17 -06:00
Tolga Ceylan
89dc79f0b0 fn: remove redundant httprouter code (#532)
*) the tree from https://github.com/julienschmidt/httprouter
is already in Gin, and this code only seems to be parsing
parameters from the URI.
2017-11-22 13:58:10 -06:00
Tolga Ceylan
2551be446a fn: introducing 503 responses for out of capacity case (#518)
* fn: introducing 503 responses for out of capacity case

*) Adding 503 with Retry-After header case if request failed
during waiting for slots.
*) TODO: return 503 without Retry-After if the request can
never be met by this fn server.
*) fn: runner test docker pull fixup
*) fn: MaxMemory for routes is now a variable to allow
testing and adjusting it according to fleet memory sizes.
2017-11-21 12:42:02 -08:00
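
A hedged sketch of the out-of-capacity response; the Retry-After value is illustrative since the commit doesn't specify the delay.

```go
package server

import "net/http"

// writeOutOfCapacity answers a request that failed while waiting for a
// slot: 503 plus a Retry-After hint so clients back off and retry.
func writeOutOfCapacity(w http.ResponseWriter) {
	w.Header().Set("Retry-After", "15") // seconds; value illustrative
	http.Error(w, "server too busy", http.StatusServiceUnavailable)
}
```
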
Reed Allman
2d8c528b48 S3 loggyloo (#511)
* add minio-go dep, update deps

* add minio s3 client

minio has an s3 compatible api and is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hair ball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local.

* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs, I have another branch lined up to make a plain
text log API and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs

some notes: notably we aren't cleaning up these logs. there is a ticket
already to make a Mr. Clean who wakes up periodically and nukes old stuff, so
am punting any api design around some kind of TTL deletion of logs. there are
a lot of options really for Mr. Clean, we can notably defer to him when apps
are deleted, too, so that app deletion is fast and then Mr. Clean will just
clean them up later (seems like a good option).

have not tested against BMC object store, which has an s3 compatible API. but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.

closes #481
closes #473

* add log not found error to minio land
2017-11-20 17:39:45 -08:00
Reed Allman
f08ea57bc0 add docker health check waiter to start (#434)
before returning the cookie in the driver, wait for health checks
https://docs.docker.com/engine/reference/builder/#healthcheck if provided.
for images that don't have health checks, this will have no effect (an added
call to inspect the container; for hot it's small potatoes).

this will be useful for containers so that they can pull large files or do
setup that takes a while before accepting tasks. since this is before start,
it won't run into the idle timeout. we could likely use these for hot
containers in general and check between runs or something, but didn't do that
here.

one nascent concern is that for hot, if the containers never become healthy,
I don't think we will ever kill them and the slot will 'leak'. this is true
for this and for other cases (pulling image) I think; we should probably
recycle hot containers every hour or something, which would also close this.
anyway, not a huge blocker I don't think; there will likely be 1 user of this
feature for a bit, and it's not documented since we're not sure we want to
support it.

closes #336
2017-11-17 20:31:33 -08:00
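
A hedged sketch of the health-check wait, written against the official Docker engine API client for illustration (the repo's driver uses its own docker client library); images without a HEALTHCHECK have a nil Health state, so the extra cost is one inspect call.

```go
package driver

import (
	"context"
	"time"

	"github.com/docker/docker/client"
)

// waitHealthy polls the engine until the container's HEALTHCHECK reports
// healthy. Containers without a health check return immediately after a
// single inspect.
func waitHealthy(ctx context.Context, cli *client.Client, id string) error {
	for {
		info, err := cli.ContainerInspect(ctx, id)
		if err != nil {
			return err
		}
		if info.State.Health == nil || info.State.Health.Status == "healthy" {
			return nil // no healthcheck defined, or it passed
		}
		select {
		case <-ctx.Done():
			// note: a never-healthy hot container could 'leak' its slot,
			// the concern raised in the commit above
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```
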
Tolga Ceylan
17d4271ffb fn: move memory/token code into resource (#512)
*) bugfix: fix nil ptr access in docker registry RoundTrip
*) move async and ram token related code into resource.go
2017-11-17 15:25:53 -08:00
Nigel Deakin
910612d0b1 Docker stats to Prometheus (#486)
* Docker stats to Prometheus

* Fix compilation error in docker_test

* Refactor docker driver Run function to wait for the container to have stopped before stopping the collection of statistics

* Fix go fmt errors

* Updates to sending docker stats to Prometheus

* remove new test TestWritResultImpl because the changes to support multiple waiters have been removed

* Update docker.Run to use channels, not contexts, to shut down the stats collector
2017-11-16 11:02:33 -08:00
Travis Reeder
96cfc9f5c1 Update json (#463)
* wip

* wip

* Added more fields to JSON and added blank line between objects.

* Update tests.

* wip

* Updated to represent recent discussions.

* Fixed up the json test

* More docs

* Changed from blank line to bracket, newline, open bracket.

* Blank line added back, easier for delimiting.
2017-11-16 09:59:13 -08:00
Tolga Ceylan
a530cd9be3 Minor naming and control flow changes to satisfy golint 2017-11-02 15:36:55 -07:00
Reed Allman
ce252d0448 Merge pull request #424 from fnproject/call-listener
CallListener - replaces RunnerListener
2017-10-26 10:36:14 -07:00