Commit Graph

239 Commits

Author SHA1 Message Date
Reed Allman
9cbe4ea536 add pprof endpoints, additional spans (#770)
i would split this commit in two if i were a good dev.

the pprof stuff is really useful and this only samples when called. this is
pretty standard go service stuff. expvar is cool, too.

the additional spannos have turned up some interesting tidbits... gonna slide
em in
2018-02-13 20:01:41 -08:00
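For reference, the standard-library wiring for this sort of thing is tiny; a sketch (not the actual fn code, and the port is made up):

```go
package main

import (
	"expvar" // importing registers /debug/vars on the default mux
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers
)

// requests is an example expvar counter, visible at /debug/vars.
var requests = expvar.NewInt("requests")

func main() {
	// pprof handlers only sample when a client actually hits them,
	// e.g. go tool pprof http://localhost:6060/debug/pprof/profile
	http.ListenAndServe("localhost:6060", nil)
}
```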
Reed Allman
1a1250e5ea disable fail whale logs (#768)
we have been getting these from attach all this time and never needed them
anyway.

I ran cpu profiles of dockerd and this was 90% of docker cpu usage (json
logs). woot. this will reduce i/o quite a bit, and we don't have to worry
about them taking up any disk space either.

from tests i get about 50% speedup with these off. the hunt continues...
2018-02-13 17:45:11 -08:00
Reed Allman
f287ad274e support deeper / nesting of image names (#765)
closes #764
2018-02-13 11:26:28 -08:00
Reed Allman
cbfd659e7e cap docker retries to fixed number (#762)
previously we would retry indefinitely up to the context deadline, with some
backoff in between. for hot functions, since we don't set any deadline on
pulling or creating the image, this meant it would retry forever without
making any progress if e.g. the registry is inaccessible or some other
'temporary' error that isn't actually temporary occurs. this adds a hard cap
of 10 retries, which gives approximately 13s if the ops take no time, still
respecting the context deadline enclosed.

the case where this was coming up is now tested for and was otherwise
confusing for users to debug. now it spits out an ECONNREFUSED with the
address of the registry, which should help users debug without having to poke
around fn logs (though I don't like leaning on fn logs as an excuse; not all
users will be operators in the near future, and this error makes sense on its
own)

closes #727
2018-02-12 18:45:30 -08:00
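A sketch of the capped-retry shape described here (the backoff values are illustrative, not the driver's actual numbers; the ~13s figure assumes ops return immediately):

```go
package driver

import (
	"context"
	"time"
)

// retryCapped sketches the hard cap: at most 10 attempts with capped
// backoff, still honoring the enclosing context deadline.
// With this backoff the sleeps total roughly 13s when ops return fast.
func retryCapped(ctx context.Context, op func() error) error {
	var err error
	backoff := 100 * time.Millisecond
	for i := 0; i < 10; i++ {
		if err = op(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // the context deadline still wins
		case <-time.After(backoff):
		}
		if backoff < 2*time.Second {
			backoff *= 2 // exponential, capped at 2s
		}
	}
	return err // e.g. an ECONNREFUSED naming the registry address
}
```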
Reed Allman
97194b3d8b return bad function http resp error (#728)
* return bad function http resp error

this was being thrown into the fn server logs, but it's relatively easy to
get this to crop up: if a function user forgets that they left a `println`
lying around that gets written to stdout, it garbles the http (or json, as
the case may be) output and they just see 'internal server error'. for
certain clients I could see that we really do want to keep this as 'internal
server error', but for things like e.g. docker image not authorized we're
showing that in the response, so this seems apt.

json likely needs the same treatment, will file a bug.

as always, my error messages are rarely helpful enough, help me please :)

closes #355

* add formatting directive

* fix up http error

* output bad jasons to user

closes #729

woo
2018-02-12 17:51:45 -08:00
Tolga Ceylan
567136cb5e fn: required docker version fix (#759) 2018-02-12 15:53:05 -08:00
Tolga Ceylan
c848fc6181 fn: hot container timer improvements (#751)
* fn: hot container timer improvements

With this change, we now allocate the timers
when the container starts and manage them via
stop/clear as needed, which should not only be more
efficient, but also easier to follow.

For example, previously, if the eject timeout was
set to 10 secs, this could have delayed the idle timeout
by up to 10 secs as well. It is also no longer necessary
to do any math for elapsed time.

Now consumers avoid any requeuing when startDequeuer() is cancelled.
This was triggering additional dequeue/requeue cycles, causing
containers to wake up spuriously. Also, in startDequeuer(),
we no longer remove the item from the actual queue and instead
leave this to acquire/eject, which sidesteps issues such as an
item landing in the channel but never being consumed.
2018-02-12 14:12:03 -08:00
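The allocate-once, stop/reset-as-needed pattern looks roughly like this in Go (a sketch; the channel and names are hypothetical, not the actual hot-container code):

```go
package agent

import "time"

// containerLoop sketches the allocate-once timer management: one idle
// timer per container, created at start and stopped/reset as calls
// arrive (work and idleTimeout are hypothetical names).
func containerLoop(work <-chan func(), idleTimeout time.Duration) {
	idle := time.NewTimer(idleTimeout)
	defer idle.Stop()
	for {
		select {
		case job := <-work:
			// stop and drain before Reset so a stale tick can't
			// fire immediately after we re-arm the timer
			if !idle.Stop() {
				select {
				case <-idle.C:
				default:
				}
			}
			job()
			idle.Reset(idleTimeout)
		case <-idle.C:
			return // idle timeout: tear the container down
		}
	}
}
```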
Reed Allman
27179ddf54 plumb ctx for container removal spanno (#750)
these were just dangling off on the side; took some plumbing work, but not so
bad
2018-02-08 22:48:23 -08:00
Reed Allman
3ab49d4701 limit log size in containers (#748)
closes #317

we could fiddle with this, but we need to at least bound these, and this
accomplishes that. 1m is picked since that's our default max log size per
call for the time being; it also typically takes a little time to generate
that many bytes through logs (i.e. without trying to). I tested with 0, which
spiked the i/o rate on my machine because it's constantly deleting the json
log file. I also tested with 1k and it was similar (for a task that generated
about 1k in logs quickly) -- in testing, this halved my throughput, whereas
using 1m did not change the throughput at all. trying the 'none' and 'syslog'
drivers wasn't great: 'none' turns off all stderr and 'syslog' blocks on
every log line (boo). anyway, this option seems to have no effect on the
output we get in 'attach', which is what we really care about (i.e. docker is
not logically capping this, just swapping out the log file).

even using 1m for this, if we have 500 hot containers on a machine we
potentially have half a gig of worthless logs lying around. we don't need the
docker logs at all, really, but short of writing a storage driver ourselves
there don't seem to be too many better options. open to idears, but this is
likely to hold us over for some time.
2018-02-08 17:16:26 -08:00
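In docker terms the bound is a json-file log-driver option; a sketch using the official Go client's types (fn's driver may wire it up differently):

```go
package driver

import "github.com/docker/docker/api/types/container"

// logCap sketches bounding the json-file log at 1m; docker rotates
// (swaps) the file rather than capping what 'attach' streams back.
var logCap = container.LogConfig{
	Type:   "json-file", // keep json-file so 'attach' output is unaffected
	Config: map[string]string{"max-size": "1m"},
}
```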
Tolga Ceylan
f27d47f2dd Idle Hot Container Freeze/Preempt Support (#733)
* fn: freeze/unfreeze and eject idle under resource contention
2018-02-07 17:21:53 -08:00
Tolga Ceylan
dc4d90432b fn: memory limit adjustments (#746)
1) limit kernel memory which was previously unlimited, using
   same limits as user memory for a unified approach.
2) disable swap memory for containers
2018-02-07 16:48:52 -08:00
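At the HostConfig level those two adjustments look roughly like this (a sketch with official client types; fn's driver may differ):

```go
package driver

import "github.com/docker/docker/api/types/container"

// memLimits sketches the two adjustments: kernel memory capped at the
// same value as user memory, and swap disabled by setting the
// memory+swap total equal to the memory limit.
func memLimits(memBytes int64) container.Resources {
	return container.Resources{
		Memory:       memBytes, // user memory limit
		MemorySwap:   memBytes, // memory+swap == memory => swap disabled
		KernelMemory: memBytes, // previously unlimited; unified with user limit
	}
}
```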
Tolga Ceylan
ebc6657071 fn: docker version check2 (#744)
1) required docker version is now 17.06
2) enable circle ci latest docker install
3) docker driver & agent check minimum version before start
2018-02-06 16:16:40 -08:00
Tolga Ceylan
fdf5a67f6f fn: error image is now deprecated (#737)
Please use fn-test-utils instead for testing.
2018-02-05 11:12:27 -08:00
Tolga Ceylan
6b5486c699 fn: sleeper image is now deprecated (#736)
Please use fn-test-utils instead for testing.
2018-02-05 10:01:11 -08:00
Nigel Deakin
5089dd6119 Extend /stats API to handle two routes with the same path in different apps (#735)
* Extend deprecated /stats API to handle apps and paths correctly

* More changes (bugfixes) to the JSON structure returned by the stats API call
2018-02-05 15:51:53 +00:00
Reed Allman
3b261fc144 pipe swapparoo each slot (#721)
* pipe swapparoo each slot

previously, we made a pair of pipes for stdin and stdout for each container,
and then handed them out to each call (slot) to use. this meant that multiple
calls could have a handle on the same stdin and stdout pipes to read/write
to/from from fn's perspective, and could mix input/output and get garbage. it
also meant that each call was blocked on the previous call's reads.

now we make a new pipe every time we get a slot, and swap it out with the
previous ones. calls are no longer blocked from fn's perspective, and we don't
have to worry about timing out dispatch for any hot format. there is still the
issue that if a function does not finish reading the input from the previous
task, from its perspective, and reads the next call's input, it can error out
the second call. with fn deadline we provide the necessary tools to skirt
this, but without some additional coordination I am not sure this is a
closable hole with our current protocols, since terminating a previous call's
input requires some protocol specific bytes to go in (json in particular is
tricky). anyway, from fn's side fixing pipes was definitely a hole, but this
client hole is still hanging out. there was an attempt to send an io.EOF, but
the issue is that will shut down docker's read on the stdin pipe (and the
container). poop.

this adds a test for this behavior, and makes sure 2 containers don't get
launched.

this also closes the response writer header race a little, but not entirely;
I think there's still a chance that we read a full response from a function
and get a timeout while we're changing the headers. I guess we need a thread
safe header bucket, otherwise we have to rely on timings (racy). thinking on
it.

* fix stats mu race
2018-01-31 17:25:24 -08:00
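The per-slot pipe swap comes out to something like this sketch (all names hypothetical): fresh pipes per call, with the container-facing ends swapped in.

```go
package agent

import "io"

// swapPipes sketches the per-slot swap: every dispatched call gets a
// brand new stdin/stdout pipe pair, and the container-facing ends
// replace the previous call's, so two calls can never share pipe ends
// and interleave bytes.
func swapPipes(attach func(stdin io.Reader, stdout io.Writer)) (*io.PipeWriter, *io.PipeReader) {
	stdinR, stdinW := io.Pipe()
	stdoutR, stdoutW := io.Pipe()
	attach(stdinR, stdoutW) // hand the fresh container-facing ends over
	return stdinW, stdoutR  // fn writes the call's input / reads its output
}
```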
Dario Domizioli
e753732bd8 Hot protocols improvements (for 662) (#724)
* Improve deadline handling in streaming protocols

* Move special headers handling down to the protocols

* Adding function format documentation for JSON changes

* Add tests for request url and method in JSON protocol

* Fix protocol missing fn-specific info

* Fix import

* Add panic for something that should never happen
2018-01-31 12:26:43 +00:00
Tolga Ceylan
97d78c584b fn: better slot/container/request state tracking (#719)
* fn: better slot/container/request state tracking
2018-01-26 12:21:11 -08:00
Reed Allman
bbd50a0e02 additional ctx spans / maid service (#716)
* add spans to async

* clean up / add spans to agent

* there were a few methods which had multiple contexts which existed in the same
scope (this doesn't end well, usually), flattened those out.
* loop bound context cancels now rely on defer (also was brittle)
* runHot had a lot of ctx shuffling, flattened that.
* added some additional spans in certain paths for added granularity
* linked up the hot launcher / run hot / wait hot to _a_ root span, the first
2 are follows-from spans, but at least we can see the source of these and also
can see containers launched over a hot launcher's lifetime

I left a TODO around the FollowsFrom because OpenCensus doesn't, at least at
the moment, appear to have any notion of FollowsFrom, and it was an extra
OpenTracing method (we have to get the span out, start a new span with the
option, then add it to the context... some shuffling required). anyway, I was
on the fence about adding it at all.

* resource waiters need to manage their own goroutine lifecycle

* if we get an impossible memory request, bail instead of infinite loop

* handle timeout slippery case

* still sucks, but hotLauncher doesn't leak anything. even the time.After timer goroutines

* simplify GetResourceToken

GetCall can guard against impossible-to-allocate resource tasks entering
the system by erroring instead of doling them out. this makes GetResourceToken
logic more straightforward for callers, who now simply have the contract that
they won't ever get a token if they let tasks into the agent that can't run
(but GetCall guards against this, and there's a test for it).

sorry, I was going to make this only do that, but when I went to fix up the
tests, my last patch went haywire, so I fixed that too. this also at least
tries to simplify the hotLaunch loop, which will now no longer leak time.After
timers (which were long, and with the signaller, they were many -- I got a
stack trace :) -- this breaks out the bottom half of the logic that checks
whether we need to launch into its own function, and handles the cleaning
duties only in the caller instead of in 2 different select statements. played
with this a bit; no doubt further cleaning could be done, but this _seems_
better.

* fix vet

* add units to exported method contract docs

* oops
2018-01-23 19:52:22 -08:00
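The time.After leak mentioned above is the usual one: each time.After in a loop allocates a timer that lingers until it fires. A sketch of the reusable-timer fix (names hypothetical):

```go
package agent

import (
	"context"
	"time"
)

// waitSignal sketches the leak fix: one reusable timer instead of a
// fresh time.After (and its lingering timer) per loop iteration.
func waitSignal(ctx context.Context, signal <-chan struct{}, every time.Duration) {
	t := time.NewTimer(every)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-signal:
			// got pinged: fall through and re-arm below
		case <-t.C:
			// periodic check
		}
		if !t.Stop() { // stop and drain before Reset
			select {
			case <-t.C:
			default:
			}
		}
		t.Reset(every)
	}
}
```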
Tolga Ceylan
ee59361bda fn: added server too busy stats (#717) 2018-01-23 19:30:01 -08:00
Reed Allman
bae13d6c29 fix the http protocol dumper (#705)
we were using httputil.DumpRequest when there is a perfectly good req.Write
method hanging out in the stdlib, one that even does the chunked encoding
that a few people ran into when they didn't provide a content length:
https://golang.org/pkg/net/http/#Request.Write -- so we shouldn't run into
that issue again. I hit this in testing and it was not very fun to debug, so
I added a test that repro'd it on master and fixed it here. of course, adding
a content length works too. tested this and it appears to work pretty well;
also cleaned up the control flow a little bit in the http protocol.
2018-01-22 11:41:04 -08:00
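The fix amounts to letting the stdlib do the wire framing; a sketch (the writer here stands in for the container's stdin):

```go
package protocol

import (
	"io"
	"net/http"
)

// writeRequest sketches the fix: (*http.Request).Write emits a valid
// wire-format request, adding Transfer-Encoding: chunked itself when
// no Content-Length is set, unlike a raw httputil.DumpRequest dump.
func writeRequest(stdin io.Writer, req *http.Request) error {
	return req.Write(stdin)
}
```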
Nigel Deakin
e1df053de9 Change timedout to timeouts (#709) 2018-01-22 16:55:30 +00:00
Tolga Ceylan
8c31e47c01 fn: agent slot improvements (#704)
*) Stopped using latency previous/current stats; this
was not working as expected. Fresh starts usually have
these stats at zero for a long time, and initial samples
are high due to downloads, caches, etc.

*) New state to track: containers that are idle. In other
words, containers that have an unused token in the slot
queue.

*) Removed latency counts since these are not used in
container start decision anymore. Simplifies logs.

*) isNewContainerNeeded() simplified to use idle count
to estimate effective waiters. Removed speculative
latency-based logic and progress check comparison.
In agent, waitHot()'s delayed signalling compensates
for these changes. The estimation may fail, but
this should correct itself with the next 200 msec
signal.
2018-01-19 12:35:52 -08:00
Tolga Ceylan
2f0de2b574 fn: resource and slot cancel and broadcast improvements (#696)
* fn: resource and slot cancel and broadcast improvements

*) Context argument did not wake up the waiters correctly upon
cancellation/timeout.
*) Avoid unnecessary broadcasts in slot and resource.

* fn: limit scope of context in resource/slot calls in agent
2018-01-18 13:43:56 -08:00
Reed Allman
c9e995292c if a slot is available, don't launch more (#701)
since we were sending a signal before checking if a slot was available, even
in the case of serial calls locally I was seeing 2 containers launch. if we
only send a signal after first checking if a slot is available, this goes
away. 1 usec should not be too offensive of an additional wait, all things
considered here.
2018-01-18 13:19:25 -08:00
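The reordering amounts to a non-blocking check of the slot queue before waking the launcher; a sketch with hypothetical names:

```go
package agent

import "time"

type slot struct{} // stand-in for the real slot type

// acquireSlot sketches the ordering fix: check the slot queue before
// signalling the launcher, so a free slot never triggers a new container.
func acquireSlot(slots <-chan *slot, launch chan<- struct{}) *slot {
	select {
	case s := <-slots:
		return s // slot available: no signal, no extra container
	case <-time.After(1 * time.Microsecond):
		// tiny grace period before signalling; 1 usec is cheap
	}
	select {
	case launch <- struct{}{}: // wake the hot launcher
	default: // launcher already signalled
	}
	return <-slots
}
```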
Tolga Ceylan
5a7778a656 fn: cancellations in WaitAsyncResource (#694)
* fn: cancellations in WaitAsyncResource

Added a go context with cancel to wait async resource. Although
today the only case for cancellation is shutdown, this cleans
up agent shutdown a little bit.

* fn: locked broadcast to avoid missed wake-ups

* fn: removed ctx arg to WaitAsyncResource and startDequeuer

This is confusing and unnecessary.
2018-01-17 16:08:54 -08:00
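The locked broadcast is the standard sync.Cond discipline: broadcast while holding the lock so a waiter can't check its condition, miss the wake-up, and sleep forever. A sketch (types and names hypothetical):

```go
package agent

import "sync"

// asyncResource sketches the locked-broadcast pattern.
type asyncResource struct {
	mu       sync.Mutex
	cond     *sync.Cond
	shutdown bool
}

func newAsyncResource() *asyncResource {
	r := &asyncResource{}
	r.cond = sync.NewCond(&r.mu)
	return r
}

// signalShutdown flips the flag and broadcasts while holding the lock,
// so a waiter between its condition check and Wait() can't miss it.
func (r *asyncResource) signalShutdown() {
	r.mu.Lock()
	r.shutdown = true
	r.cond.Broadcast()
	r.mu.Unlock()
}

// wait blocks until shutdown; Wait atomically unlocks mu and sleeps.
func (r *asyncResource) wait() {
	r.mu.Lock()
	for !r.shutdown {
		r.cond.Wait()
	}
	r.mu.Unlock()
}
```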
Nigel Deakin
8bf26efa29 Add new Prom metrics fn_timeout and fn_errors (#679)
* Add new Prom metric fn_timedout

* Add new Prometheus metric fn_errors

* Tidy up variable name

* Add new Prometheus metric fn_errors

* gofmt
2018-01-15 14:49:33 +00:00
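Registering counters like these with client_golang is straightforward; a sketch (metric names from the commit, help strings assumed):

```go
package server

import "github.com/prometheus/client_golang/prometheus"

var (
	fnTimedout = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "fn_timedout",
		Help: "Count of function calls that timed out.", // help text assumed
	})
	fnErrors = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "fn_errors",
		Help: "Count of function calls that errored.", // help text assumed
	})
)

func init() {
	prometheus.MustRegister(fnTimedout, fnErrors)
}

// callers then just bump the relevant counter, e.g. fnErrors.Inc()
```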
Reed Allman
0bde666395 clean up agent.Submit (#681)
this was getting bloated with various contexts and spans and stats
administrivia that obfuscated what was going on a lot. this makes some helper
methods to shove most of that stuff into, and simplifies the context handling
around getting a slot by moving it inside of slot acquisition code. also
removed most uses of `call.Model()` -- I'll kill this thing some day, but if a
reason is needed, the overhead of dynamic dispatch is unnecessary: we're
inside of the implementation for the agent, and we don't want to use the
interface methods from inside of that.
2018-01-12 13:56:17 -08:00
Tolga Ceylan
39b2cb2d9b Cpu resources (#642)
* fn: cpu quota implementation
2018-01-12 11:38:28 -08:00
Tolga Ceylan
1c8029e4f1 fn: more tests for hot container launch logic (#678) 2018-01-11 16:00:37 -08:00
Tolga Ceylan
db159e595f fn: new container launch adjustments (#677)
*) revert executor wait queue size comparison. This was too
   aggressive and, with the stall check below, is now unnecessary.
*) new container logic now checks if stats are constant; if
   this is the case, we assume the system is stalled (e.g.
   running functions that take a long time), which means we need
   to make progress and spin up a new container.
2018-01-11 14:09:21 -08:00
Nigel Deakin
ac2bfd3462 Change basic stats to use opentracing rather than Prometheus API (#671)
* Change basic stats to use opentracing rather than Prometheus API directly

* Just ran gofmt

* Extract opentracing access for metrics to common/metrics.go

* Replace quotes strings with constants where possible
2018-01-11 17:34:51 +00:00
Tolga Ceylan
7c91b98a72 fn: hot container launcher adjustment (#673)
Latency stats are not always updated in real time, and
if calls are stuck in the waiting state, isNewContainerNeeded()
needs to be a bit more aggressive when the wait queue grows.
2018-01-10 14:14:19 -08:00
Tolga Ceylan
23ae1fe723 fn: removed dead code (#672) 2018-01-10 12:32:19 -08:00
Travis Reeder
3b9818bc58 Switch to dep from glide (#664) 2018-01-09 14:11:08 -08:00
Reed Allman
20089c4e83 make headers quasi-consistent (#660)
possible breakages:

* `FN_HEADER` on cold are no longer `s/-/_/` -- this is so that cold functions
can rebuild the headers as they were when they came in on the request (fdks,
specifically); there's no guarantee that reversing with `s/_/-/` yields the
original header on the request.
* app and route config no longer `s/-/_/` -- it seemed really weird to rewrite
the users config vars on these. should just pass them exactly as is to env.
* headers no longer contain the environment vars (previously, base config; app
config, route config, `FN_PATH`, etc.), these are still available in the
environment.

this gets rid of a lot of the code around headers, specifically the stuff that
shoved everything into headers when constructing a call to begin with. now we
just store the headers separately, add a few things like FN_CALL_ID to
them, and build a separate 'config' to store on the call. I thought
'config' was more aptly named ('env' was confusing), though now 'config' is
exactly what 'base_vars' was, which is only the things being put into the env.
we weren't storing this field in the db, so this doesn't break unless there
are messages in a queue from another version; anyway, I don't think we're
there and don't expect any breakage for anybody from the field name changes.

this makes the configuration stuff pretty straightforward: there are just two
separate buckets of things. cold just needs to mash them together into the
env; hot containers just need to put 'config' in the env, and then the
hot format can shove 'headers' in however it'd like. this seems better than
my last idea about making this easier but worse (RIP).

this means:

* headers no longer contain all vars, the set of base vars can only be found
in the environment.
* headers is only the headers from request + call_id, deadline, method, url
* for cold, we simply add the headers to the environment, prepending
`FN_HEADER_` to them, BUT NOT upper casing or `s/-/_/`
* fixes issue where async hot functions would end up with `Fn_header_`
prefixed headers
* removes idea of 'base' vars and 'env'. this was a strange concept. now we just have
'config' which was base vars, and headers, which was base_env+headers; i.e.
they are disjoint now.
* casing for all headers will lean to be `My-Header` style, which should help
with consistency. notable exceptions for cold only are FN_CALL_ID, FN_METHOD,
and FN_REQUEST_URL -- this is simply to avoid breakage; in either hot format
they appear as `Fn_call_id` still.
* removes FN_PARAM stuff
* updated doc with behavior

weird things left:

`Fn_call_id` e.g. isn't a correctly formatted http header; it should likely be
`Fn-Call-Id`, but I wanted to live to fight another day on this one, as it
would add some breakage.

examples to be posted of each format below

closes #329
2018-01-09 10:08:30 -08:00
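The cold-header rule above (prefix, but no upper-casing or `s/-/_/`) comes out to roughly this (illustrative, not the actual code):

```go
package agent

import (
	"net/http"
	"strings"
)

// coldEnv sketches the two disjoint buckets: 'config' passed through
// as-is, and request headers prefixed with FN_HEADER_ keeping their
// original casing and dashes (e.g. FN_HEADER_My-Header).
func coldEnv(config map[string]string, h http.Header) []string {
	env := make([]string, 0, len(config)+len(h))
	for k, v := range config {
		env = append(env, k+"="+v)
	}
	for k, vs := range h {
		// no upper-casing, no s/-/_/ -- fdks can rebuild the original
		env = append(env, "FN_HEADER_"+k+"="+strings.Join(vs, ", "))
	}
	return env
}
```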
Travis Reeder
580dd3e5cb Removes FN_PARAM_xxx (#661) 2018-01-09 16:42:25 +00:00
Tolga Ceylan
18716911b9 fn: agent slot and execution wait correction (#658)
Since by policy we require timeout/2 of remaining time
before we can execute the request, we should also
bound the slot wait time by timeout/2 to avoid
waiting for the full timeout in the slot wait phase.
2018-01-08 12:33:37 -08:00
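In code the bound is essentially one line (a sketch):

```go
package agent

import (
	"context"
	"time"
)

// slotWaitCtx sketches the bound: since policy requires timeout/2 left
// for execution, cap the slot wait phase at timeout/2 as well.
func slotWaitCtx(ctx context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
	return context.WithTimeout(ctx, timeout/2)
}
```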
Tolga Ceylan
6f1f5e365d fn: URL parsing updates to fix json request_url (#657)
*) Updated fn-test-utils to latest fdk-go
*) Added hot-json to runner tests
*) Removed anon function in FromRequest which had
a side effect of setting req.URL.Host. This is now more
explicit and eliminates some corresponding logic in
protocol http.
*) in gin, the http request's RequestURI is not set; removed
code that references it (use Call.URL instead)
2018-01-08 10:28:50 -08:00
Tolga Ceylan
7185f8306a fn: async wait bugfix and low mem warnings (#649)
*) fix async high water mark logic
*) warning on 256 mem for async and async+sync pools
2018-01-05 10:27:26 -08:00
Tolga Ceylan
14789aba41 Slot mgr fixes (#613)
*) during shutdown, errors should be 503
*) new inactivity timeout for hot queues; we previously kept hot queues in memory forever.
*) each hot queue now has a hot launcher to monitor and launch hot containers
*) consumers now create a consumer channel with startDequeuer() that can be cancelled via context
*) consumers now ping (signal) hot launcher every 200 msecs until they get a slot
*) tests for slot queue & mgr
2018-01-04 11:34:43 -08:00
Nigel Deakin
c0dec594ef Add calls metric (#637)
* Add new Prometheus metric fn_api_calls

* Change fn_api_calls to a counter
2018-01-03 10:42:37 -06:00
Reed Allman
f51792ae5e Timestamps on apps / routes (#614)
* route updated_at

* add app created at, fix some route updated_at bugs

* add app updated_at

TODO need to add tests through front end
TODO for validation we don't really want to use the validate wrapper since
it's a programmer error and not a user error, hopefully tests block this.

* add tests for timestamps to exist / change on apps&routes

* route equals at done, fix tests wit dis

* fix up the equals sugar

* add swagger

* fix rebase

* precisely allocate maps in clone

* vetted

* meh

* fix api tests
2017-12-23 09:57:36 -06:00
Tolga Ceylan
feeeca3321 fn: agent shutdown improvements (#622) 2017-12-22 12:52:31 -08:00
Tolga Ceylan
25a72146f5 slot tracking improvements (#562)
* fn: remove 100 msec sleep for hot containers

*) moved slot management to its own file
*) slots are now implemented with LIFO semantics; this is important since we do
   not want to round robin hot containers. Idle hot containers should time out properly.
*) each slot queue now stores a few basic stats, such as avg time a call spent in a given
   state and the number of running/launching containers and waiting calls in those states.
*) first metrics in these basic stats are discarded to avoid initial docker pull/start spikes.
*) agent now records/updates slot queue state and how much time a call stayed in that state.
*) waitHotSlot() replaces the previous wait-100-msec logic: it sends a msg to the
   hot slot goroutine launchHot() and waits for a slot
*) launchHot() is now a goroutine for tracking containers in hot slots; it determines
   if a new container is needed based on slot queue stats.
2017-12-15 15:50:07 -08:00
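A sketch of the LIFO semantics (pop the most recently pushed token, so cold containers sit idle long enough to hit their timeout; names hypothetical):

```go
package agent

import "sync"

type token struct{} // stand-in for a hot-container slot token

// lifoQueue sketches LIFO slot semantics: the hottest container's token
// comes back first, instead of round-robining across all of them.
type lifoQueue struct {
	mu    sync.Mutex
	slots []*token
}

func (q *lifoQueue) push(t *token) {
	q.mu.Lock()
	q.slots = append(q.slots, t)
	q.mu.Unlock()
}

func (q *lifoQueue) pop() (*token, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if n := len(q.slots); n > 0 {
		t := q.slots[n-1]
		q.slots = q.slots[:n-1]
		return t, true
	}
	return nil, false
}
```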
Tolga Ceylan
419298e1c0 Async hot hdr fix (#604)
* fn: for async hot requests ensure/fix content-length/type

* fn: added tests for FromModel for content type/length

* fn: restrict the content-length fix to async in FromModel()
2017-12-15 14:32:25 -08:00
Nigel Deakin
f1fc040948 Fix spans for prometheus (#606) 2017-12-15 10:31:57 -08:00
Tolga Ceylan
3b12f3fa3d Fn deadline (#591)
* fn: added fn_deadline as RFC3339
2017-12-14 19:25:36 -08:00
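Rendering the deadline is a one-liner in Go (a sketch; the function and variable names are made up):

```go
package protocol

import "time"

// fnDeadline sketches the value: the call's absolute deadline rendered
// as RFC3339, e.g. "2017-12-14T19:25:36-08:00".
func fnDeadline(start time.Time, timeout time.Duration) string {
	return start.Add(timeout).Format(time.RFC3339)
}
```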
Tolga Ceylan
eccce881a6 fn: exclude timeouts from failed error count (#590)
* fn: exclude timeouts from failed error count
2017-12-14 13:10:07 -08:00
Reed Allman
bb92547b95 Hybrid plumby (#585)
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent

* fixes up the tests, turns off /r/ on api nodes

* fix up defaults for runner nodes

* shove the runner async push code down into agent land to use client

* plumb up async-age

* return full call from async dequeue endpoint; since we're storing a whole
call in the MQ we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/ or other path not found
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now

* update tests

* Add caching to hybrid client

* Fix HTTP error handling in hybrid client.

The type switch was on the value rather than a pointer.

* Gofmt.

* Better caching with a nice caching wrapper

* Remove datastore cache which is now unused

* Don't need to manually wrap interface methods

* Go fmt
2017-12-12 15:54:55 -08:00