* fn: logging/context improvements for runner status calls
Avoid blocking calls to runStatusCall() so that the gRPC
context can be cancelled or timed out. This is unlikely to be
an issue in practice, but a blocked runStatusCall() while the
gRPC context is cancelled is a hard case to follow mentally.
The new flow is a bit easier to follow.
Log all error cases in the Status() gRPC entry point, including
client-side cancellations.
There is no need to propagate CapacityFull to waitHot(). Instead,
waitHot() can receive a 503, which is easier to follow (and less
error-prone) in handleCallEnd().
* docker-pull timeout is now a 504, which classifies it as a
service error. Avoid using 503 to make sure the LB does not retry.
* Applicable only to detached mode: a timeout on the LB is
now an ErrServiceReservationFailure (500). In detached mode,
this is unlikely to make it back to a client; it is mostly
for documentation/metrics purposes.
* For Triggers, avoid scrubbing the service error code.
* fn: uds init wait latency metric in prometheus
Adding a tracker for UDS initialization wait during container start.
This complements our existing container-state latency trackers and
docker-api latency trackers.
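Roughly, the shape of such a tracker (a sketch with illustrative names, not this change's actual code) using opencensus, which the prometheus exporter then picks up:

```go
package agent

import (
	"context"
	"time"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

var udsInitLatency = stats.Float64(
	"uds_init_wait_latency", "time spent waiting for UDS readiness", "ms")

func init() {
	// distribution buckets are an assumption, chosen for illustration
	view.Register(&view.View{
		Name:        "uds_init_wait_latency",
		Measure:     udsInitLatency,
		Aggregation: view.Distribution(1, 10, 100, 1000, 10000),
	})
}

// waitUDS blocks until the container's UDS is ready (readiness signal
// elided here) and records how long the wait took.
func waitUDS(ctx context.Context, ready <-chan struct{}) {
	start := time.Now()
	select {
	case <-ready:
	case <-ctx.Done():
	}
	stats.Record(ctx, udsInitLatency.M(float64(time.Since(start))/float64(time.Millisecond)))
}
```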
* fn: latency metrics for various call states
This complements the API latency metrics available
on the LB agent. In this case, we would like to measure
calls that have finished with the following statuses:
"completed"
"canceled"
"timeouts"
"errors"
"server_busy"
and while measuring this latency, we subtract the
amount of time the actual function execution took. This
is not precise, but an approximation mostly suitable
for trending.
Going forward, we could also subtract UDS wait time and/or
docker pull latency from this latency as an enhancement
to this PR.
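A hedged sketch of the idea, with illustrative metric/tag names (not the agent's actual identifiers): record wall-clock call latency minus the time the function itself ran, tagged by final state.

```go
package agent

import (
	"context"
	"time"

	"go.opencensus.io/stats"
	"go.opencensus.io/tag"
)

var (
	callLatency = stats.Float64(
		"call_state_latency", "call latency excluding function execution", "ms")
	stateKey = tag.MustNewKey("state") // "completed", "canceled", "timeouts", ...
)

func recordCallLatency(ctx context.Context, state string, start, end time.Time, exec time.Duration) {
	overhead := end.Sub(start) - exec // approximate: good enough for trending
	if overhead < 0 {
		overhead = 0
	}
	ctx, _ = tag.New(ctx, tag.Upsert(stateKey, state))
	stats.Record(ctx, callLatency.M(float64(overhead)/float64(time.Millisecond)))
}
```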
* actually disable stdout/stderr. stdout>stderr
* for pure runner this turns it off for real this time.
* this also just makes the agent container type send stdout to stderr;
since we're not using stdout for function output anymore, this is
hopefully pretty straightforward.
* I added a panic and some type-checking printlns to ensure this is true for
pure_runner: both stdout and stderr are off. Also added a unit test in agent
to ensure this behavior from its container type, which pure_runner utilizes
(no integration test, though)
* tests ensure that logs still work if not trying to disable them (full agent)
* handle non-ghost swapping
* disable pure runner logging
there's a racy bug where the logger is written to while it's closing,
but this led to figuring out that we don't need the logger at all in pure
runner: the syslog thing isn't an in-process fn thing, and we don't need
the logs from attach for anything further in pure runner. so this disables the
logger at the docker level, to save sending the bytes back over the wire; this
could be a nice little performance bump too. of course, with this, it means
agents can be configured to not log debug or to have no logs to store at all,
and not a lot of guards have been put on this for 'full' agent mode while it
hangs on a cross feeling the breeze awaiting its demise - the default
configuration remains the same, and no behavior changes in 'full' agent are here.
it was a lot smoother to make the noop than to try to plumb in 'nil' for
stdout/stderr; this has a much lower risk of nil panic issues for the same
effect, though relying on type casting isn't perfect. plumbing in an
interface to check has the same issues (loss of interface adherence for any
decorator), so this seems ok. defaulting to not having a logger was similarly
painful, and ended up with this. but open to ideas.
* replace usage of old null reader writer impl
* make Read return io.EOF for io.Copy usage
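The replacement amounts to something like this (names illustrative); returning io.EOF from Read is what lets io.Copy terminate cleanly instead of spinning on (0, nil):

```go
package common

import "io"

// NoopReadWriteCloser discards writes and reports end-of-stream on reads.
type NoopReadWriteCloser struct{}

// Read returns io.EOF immediately so io.Copy and friends see a normal end
// of stream rather than an empty read they must retry.
func (NoopReadWriteCloser) Read(p []byte) (int, error)  { return 0, io.EOF }
func (NoopReadWriteCloser) Write(p []byte) (int, error) { return len(p), nil }
func (NoopReadWriteCloser) Close() error                { return nil }
```

e.g. `io.Copy(dst, NoopReadWriteCloser{})` returns immediately with a nil error, since io.EOF is io.Copy's normal termination signal.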
In runHot(), it's safer to use a separate channel between the
monitoring goroutine and the processing goroutine to handle
cancellations triggered by the monitoring goroutine.
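The pattern, sketched with illustrative names (not the actual runHot() code):

```go
package agent

import "context"

// runHotSketch shows the shape only: the monitor goroutine owns a dedicated
// shutdown channel, and the processing loop observes it in a single select
// instead of sharing the monitor's context directly.
func runHotSketch(ctx context.Context, work <-chan func()) {
	shutdown := make(chan struct{})

	// monitor goroutine: watches for cancellation (eviction, timeout,
	// agent shutdown) and signals only by closing its own channel.
	go func() {
		<-ctx.Done()
		close(shutdown)
	}()

	// processing loop: shutdown is observed in exactly one place.
	for {
		select {
		case <-shutdown:
			return
		case fn, ok := <-work:
			if !ok {
				return
			}
			fn()
		}
	}
}
```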
The container initialization phase consumes resource tracker
resources (a token) during lengthy operations.
For agent stability/liveness, this phase has
to be evictable/cancelable and time-bounded.
With this change, we introduce a new system-wide environment setting
to bound the time spent in the container initialization phase. This phase
includes the docker-pull, docker-create, docker-attach, docker-start
and UDS wait operations. The initialization period is also now
considered evictable.
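A minimal sketch of the mechanics, assuming a hypothetical FN_MAX_INIT_TIMEOUT_MSECS variable (the actual setting name and default may differ):

```go
package agent

import (
	"context"
	"os"
	"strconv"
	"time"
)

func initTimeout() time.Duration {
	if v, err := strconv.Atoi(os.Getenv("FN_MAX_INIT_TIMEOUT_MSECS")); err == nil && v > 0 {
		return time.Duration(v) * time.Millisecond
	}
	return 10 * time.Minute // illustrative default
}

// initContainer runs the init steps (pull, create, attach, start, UDS wait)
// under one context that is both time-bounded and cancelable by the evictor.
func initContainer(ctx context.Context, evict <-chan struct{}, steps ...func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, initTimeout())
	defer cancel()
	go func() { // the init phase is evictable: an eviction cancels it
		select {
		case <-evict:
			cancel()
		case <-ctx.Done():
		}
	}()
	for _, step := range steps {
		if err := step(ctx); err != nil {
			return err
		}
	}
	return nil
}
```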
The now-obsolete driver.PrepareCookie() call handled both image and
container creation. In agent, going forward we will need finer-grained
control over the timeouts implied by the contexts.
For this reason, this change splits PrepareCookie()
into Validate/Pull/Create calls under the Cookie interface.
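The split looks roughly like this (a sketch; the real interface in the repo may differ in method signatures):

```go
package drivers

import "context"

type Cookie interface {
	// Validate checks image/config without side effects.
	Validate(ctx context.Context) error
	// Pull fetches the image; callers can give it its own (long) deadline.
	Pull(ctx context.Context) error
	// Create creates the container, typically under a much shorter deadline.
	Create(ctx context.Context) error
}
```

Callers can now wrap each step in its own context.WithTimeout, which is exactly the finer-grained control this change is after.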
This implements a "detached" mechanism to get an ack from the runner
once it actually starts to run a function. In this scenario, the response
returned is just a 202 if we placed the function within a specific
time frame. If we hit errors or fail to place the fn in time, we
return different errors.
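A sketch of the detached flow with illustrative names (the actual placement code differs):

```go
package agent

import (
	"context"
	"errors"
	"time"
)

// maps to a 500, per the LB timeout note above
var ErrServiceReservationFailure = errors.New("reservation failed")

// detachedPlace waits a bounded time for the runner's "started" ack;
// a 202 means "placed and running", anything else maps to an error.
func detachedPlace(ctx context.Context, started <-chan struct{}, window time.Duration) (int, error) {
	t := time.NewTimer(window)
	defer t.Stop()
	select {
	case <-started:
		return 202, nil // ack: the function is running, respond immediately
	case <-t.C:
		return 500, ErrServiceReservationFailure // placement window expired
	case <-ctx.Done():
		return 500, ctx.Err()
	}
}
```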
Moving the timeout management of various docker operations
into agent. This allows finer control over what timeout each
operation should use. For instance, for pause/unpause our tolerance
is very low, to avoid resource issues. For docker remove,
the consequences of failure could lead to agent
failure, and therefore we wait up to 10 minutes.
For cookie create/prepare (which includes docker-pull)
we cap this at 10 minutes by default.
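Illustratively (the actual constants/config names, and the pause/unpause values, are assumptions here; only the 10-minute caps come from the text above):

```go
package agent

import (
	"context"
	"time"
)

var dockerOpTimeouts = map[string]time.Duration{
	"pause":   2 * time.Second,  // assumed value: tolerance is very low
	"unpause": 2 * time.Second,  // assumed value: a stuck unpause ties up resources
	"remove":  10 * time.Minute, // failure here risks agent health, so wait long
	"prepare": 10 * time.Minute, // includes docker-pull; capped by default
}

// withOpTimeout runs one docker operation under its own deadline instead of
// a single context shared across all operations.
func withOpTimeout(ctx context.Context, op string, f func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, dockerOpTimeouts[op])
	defer cancel()
	return f(ctx)
}
```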
With the new UDS/FDK contract, the health check is now obsolete,
as containers advertise health via UDS availability.
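The check reduces to "does the socket accept connections yet"; a minimal sketch (socket path handling and backoff are illustrative):

```go
package agent

import (
	"context"
	"net"
	"time"
)

// awaitUDS polls the container's unix socket until it accepts a connection,
// which under the UDS/FDK contract means the FDK is up and healthy.
func awaitUDS(ctx context.Context, sockPath string) error {
	d := net.Dialer{}
	for {
		conn, err := d.DialContext(ctx, "unix", sockPath)
		if err == nil {
			conn.Close()
			return nil // the FDK is listening on the socket: healthy
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(20 * time.Millisecond): // illustrative backoff
		}
	}
}
```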
* get rid of old format stuff, utils usage, fix up for fdk2.0 interface
* pure agent format removal, TODO remove format field, fix up all tests
* shitter's clogged
* fix agent tests
* start rolling through server tests
* tests compile, some failures
* remove json / content type detection on invoke/httptrigger, fix up tests
* remove hello, fixup system tests
the status checker test just hangs - it's testing that something doesn't
work, so the test "passes" in spirit but never completes. not worth it
* fix migration
* meh
* make dbhelper shut up about dbhelpers not being used
* move fail status at least into main thread, jfc
* fix status call to have FN_LISTENER
also turns off the stdout/stderr blocking between calls, because it's
impossible to debug without that (without syslog); now that stdout and stderr
go to the same place (either host stderr or nowhere) and aren't used for
function output, this shouldn't be a big fuss really
* remove stdin
* cleanup/remind: fixed a bug where the watcher would leak if the container died first
* silence system-test logs until failure, fix datastore tests
postgres does weird things with constraints when renaming tables, so I took
the easy way out
system-tests were loud as fuck and made you download a circleci text file of
the logs; now they only yell when they goof
* fix fdk-go dep for test image. fun
* fix swagger and remove test about format
* update all the gopkg files
* add back FN_FORMAT for fdks that assert things. pfft
* add useful error for functions that exit
this error is really confounding because containers can exit for all manner of
reasons; we're just guessing that this is the most likely cause for now, and
this error message should very likely change or be removed from the client
path anyway (context.Canceled wasn't all that useful either, but anyway, I'd
been hunting for this... so found it). added a test to avoid being publicly
shamed for 1-line commits (beware...).
Previously, the evictor did not perform an eviction
if the total cpu/mem of evictable containers was less
than the requested cpu/mem. With this change, we
try to perform evictions based on the actual cpu & mem
needed, as reported by the resource tracker.
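A hedged sketch of the new policy with illustrative types (not the actual evictor code): evict only until the shortfall reported by the resource tracker is covered, rather than requiring evictable totals to exceed the full request.

```go
package agent

type evictable struct {
	mem, cpu uint64 // resources this container would free
	evict    func() // triggers teardown
}

// evictNeeded frees at least (needMem, needCPU) if possible, returning
// whether enough was reclaimed.
func evictNeeded(candidates []evictable, needMem, needCPU uint64) bool {
	var gotMem, gotCPU uint64
	for _, c := range candidates {
		if gotMem >= needMem && gotCPU >= needCPU {
			break // stop as soon as the shortfall is covered
		}
		c.evict()
		gotMem += c.mem
		gotCPU += c.cpu
	}
	return gotMem >= needMem && gotCPU >= needCPU
}
```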
* the dispatch span now actually encloses dispatch and gives an accurate span
* turning a call into an http request can't fail unless it's our fault; if
tests don't catch this, we don't deserve money
* moved http req creation inside of dispatch goroutine
there's further work to do cleaning up dispatch... removing the old formats
will make this slightly clearer; waiting for that. this was bugging me
anyway after seeing something else and was easy to fix up.
Streaming docker events is useful, as we can record/capture some
asynchronous container events such as out-of-memory. For now,
we record these in opencensus/prometheus stats.
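A minimal sketch of the idea, assuming the official docker engine API client (not necessarily the client the agent uses):

```go
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	// only container events; docker emits an "oom" action when a
	// container hits its memory limit
	f := filters.NewArgs()
	f.Add("type", "container")
	msgs, errs := cli.Events(context.Background(), types.EventsOptions{Filters: f})
	for {
		select {
		case m := <-msgs:
			if m.Action == "oom" {
				log.Printf("container %s hit OOM", m.Actor.ID)
				// here the agent would bump an opencensus/prometheus counter
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```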
If checkLaunch triggers evictions, it must wait
for these evictions to complete before returning.
Returning prematurely from checkLaunch causes it
to be called again by the hot launcher, which then
receives an out-of-capacity error and results in
a 503.
The evictor is also improved with this PR: it
provides a slice of channels to wait on if evictions
are taking place (see the sketch below).
Eviction token deletion is performed *after* the
resource token close, to ensure that once an
eviction is done, the resource token is also free.
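The contract, sketched with illustrative names: the evictor hands back one channel per in-flight eviction, and checkLaunch waits for all of them before returning, so the next capacity check sees the freed resources instead of producing a spurious out-of-capacity 503.

```go
package agent

import "context"

// performEvictions kicks off n evictions and returns one channel per
// eviction, closed when that eviction has fully completed.
func performEvictions(n int) []chan struct{} {
	waiters := make([]chan struct{}, 0, n)
	for i := 0; i < n; i++ {
		done := make(chan struct{})
		waiters = append(waiters, done)
		go func() {
			// ... tear down one container; the eviction token is deleted
			// *after* the resource token is closed, so freed capacity is
			// visible by the time done is closed ...
			close(done)
		}()
	}
	return waiters
}

// waitEvictions blocks until every eviction has finished (or ctx ends).
func waitEvictions(ctx context.Context, waiters []chan struct{}) error {
	for _, w := range waiters {
		select {
		case <-w:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return nil
}
```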
This simplifies the resource tracker. Originally, we had
logically split the cpu/mem into two pools, where 20%
was kept specifically for sync calls to avoid
async calls dominating the system. However, the resource
tracker should not handle such call prioritization.
Given the improvements to the evictor, I think
we can get rid of this code in the resource tracker
for the time being.
* Initial Refactor
Removing the repeated logic exposed some problems with the response
writers.
Previously, the trigger writer was overlaid on part of the header
writing, and the main invoke logic was writing into the different
levels of the overlays at different points in the logic.
Instead, by extending the types and embedded structs, the writer is
more transparent. At the end of the flow it goes over all the
headers available and removes our prefixes. This lets the invoke logic
just write to the top level.
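A sketch of the shape (not the repo's actual types), assuming an "Fn-Http-H-"-style internal prefix:

```go
package server

import (
	"net/http"
	"strings"
)

type prefixStrippingWriter struct {
	http.ResponseWriter        // embedding keeps the writer transparent to callers
	prefix      string         // assumed prefix, e.g. "Fn-Http-H-", for illustration
	wroteHeader bool
}

// WriteHeader removes our internal prefix from all headers in one final
// pass, then delegates; invoke logic can write at the top level freely.
func (w *prefixStrippingWriter) WriteHeader(code int) {
	if !w.wroteHeader {
		w.wroteHeader = true
		h := w.Header()
		var prefixed []string
		for k := range h {
			if strings.HasPrefix(k, w.prefix) && len(k) > len(w.prefix) {
				prefixed = append(prefixed, k)
			}
		}
		for _, k := range prefixed {
			vs := h[k]
			h.Del(k)
			for _, v := range vs {
				h.Add(strings.TrimPrefix(k, w.prefix), v)
			}
		}
	}
	w.ResponseWriter.WriteHeader(code)
}

func (w *prefixStrippingWriter) Write(b []byte) (int, error) {
	if !w.wroteHeader {
		w.WriteHeader(http.StatusOK)
	}
	return w.ResponseWriter.Write(b)
}
```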
Going to continue after lunch to try and remove some of the layers and
param passing.
* Try and repeat concurrency failure
* Nested FromHTTPFnRequest inside FromHTTPTriggerRequest
* Consolidate buffer pooling logic
* go fmt yourself
* fix import
*) removed the faulty Idle state setter in runHot(): with the
UDS wait, we need to wait until we can determine whether a container
is idle. This is now moved to runHotReq().
*) the evictor is now more aggressive and no longer tied to the pause
timer/configuration.
*) removed the unnecessary optimization of the timer=0 case for immediate
pause.
* adds parity-level testing for http-stream invoke
the other formats had a gamut of tests; now http-stream does too. this makes
some of its behaviors obvious. some things changed / can change now that we
don't have pipes to worry about, the main one being that when containers blow
up the uds client will now get an EOF/ECONNREFUSED instead of the pipe getting
wedged up (previously, the wedged pipe let us get the container error easily).
I made my best 50% effort to produce a reasonable error when this happens
(similar to when http/json received garbage errors); open to ideas on
verbiage / policy there.
should be pretty straightforward. one thing to notice is that
http/json/default don't return our fancy new Fn-Http-Status or Fn-Http-H
headers... it's relatively easy to go add this to fdk-go just to test it,
but for invoke I'm really not sure we care (?), and for the gateway the
output will be identical, with the old formats bypassing the header decap.
if anybody has any feelings, feel free to express them.
* fix oomer up for new error
* Adding http header stripping to agent
Adding the header stripping into the agent; this should be low enough
in the stack that all routes to fns get treated the same.
* initial invoke testing
this assures that Content-Type and Fn-Http-Status are set for an http-stream
function. it took some fixing up of the test-utils code for the plumbing to
work; looking forward to deleting most of the stuff in the fn-test-utils.go
file around each format -- had to update fdk-go to latest for http-stream
support. this only adds 1 test, since there's some machinery here, and I
would like to unblock working on the http gateway simultaneously while adding
a full suite of invoke tests (this work can be parallelized)...
i added debug logs back to the debugging output. turns out this is useful,
but it can get noisy (only when things fail, hopefully).
* fix oom tests?
* clean up hardcoded lsnr.sock refs
because what drivers.ContainerTask needs is another method, and we all know it.
atoning for my sins the first time around. and yes, i refuse to use a
cross-package exported constant (just think of the dep graphs)
* fix tests