Commit Graph

1319 Commits

Author SHA1 Message Date
Reed Allman
27179ddf54 plumb ctx for container removal spanno (#750)
these were just dangling off to the side; took some plumbing work, but not
too bad
2018-02-08 22:48:23 -08:00
CI
aea3bab95e fnserver: 0.3.328 release [skip ci] 2018-02-09 01:23:22 +00:00
Reed Allman
3ab49d4701 limit log size in containers (#748)
closes #317

we could fiddle with this, but we need to at least bound these, and this
accomplishes that. 1m is picked since that's our current default max log size
per call; it also takes a little time to generate that many bytes
through logs, typically (i.e. without trying to). I tested with 0, which
spiked the i/o rate on my machine because it's constantly deleting the json
log file. I also tested with 1k and it was similar (for a task that generated
about 1k in logs quickly) -- in testing, this halved my throughput, whereas
using 1m did not change the throughput at all. the 'none' and 'syslog'
drivers weren't great either: 'none' turns off all stderr and 'syslog' blocks on
every log line (boo). anyway, this option seems to have no effect on the
output we get in 'attach', which is what we really care about (i.e. docker is
not logically capping this, just swapping out the log file).

using 1m for this, e.g. if we have 500 hot containers on a machine we have
potentially half a gig of worthless logs lying around. we don't need the
docker logs at all, really, but short of writing a storage driver
ourselves there don't seem to be many better options. open to ideas, but
this is likely to hold us over for some time.
2018-02-08 17:16:26 -08:00
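The 1m bound described above corresponds to options on docker's json-file log driver. A minimal sketch of what that configuration implies (`logOpts` is a hypothetical helper for illustration, not fn's actual code; the `max-file` setting is an assumption):

```go
package main

import "fmt"

// logOpts returns json-file log-driver options that bound the container's
// log at 1 MB, matching the commit's reasoning: 0 or 1k made docker thrash
// rotating the log file and halved throughput, while 1m cost nothing.
// (Hypothetical helper; max-file=1 is an assumed companion setting.)
func logOpts() map[string]string {
	return map[string]string{
		"max-size": "1m", // same as the default per-call max log size
		"max-file": "1",  // keep one rotated file so stale logs stay bounded
	}
}

func main() {
	fmt.Println(logOpts())
}
```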
CI
6c62bdb18a fnserver: 0.3.327 release [skip ci] 2018-02-08 01:29:10 +00:00
Tolga Ceylan
f27d47f2dd Idle Hot Container Freeze/Preempt Support (#733)
* fn: freeze/unfreeze and eject idle under resource contention
2018-02-07 17:21:53 -08:00
CI
105947d031 fnserver: 0.3.326 release [skip ci] 2018-02-08 00:56:39 +00:00
Tolga Ceylan
dc4d90432b fn: memory limit adjustments (#746)
1) limit kernel memory, which was previously unlimited, using the
   same limits as user memory for a unified approach.
2) disable swap memory for containers
2018-02-07 16:48:52 -08:00
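In docker's HostConfig terms, the two adjustments above amount to setting three fields from one number: in docker, a memory+swap total equal to the memory limit means zero swap. A sketch under that assumption (`memLimits`/`limitsFor` are illustrative names, not fn's types):

```go
package main

import "fmt"

// memLimits mirrors the relevant docker HostConfig fields; this is a sketch
// of the commit's approach, not fn's actual code.
type memLimits struct {
	Memory       int64 // user memory limit, bytes
	MemorySwap   int64 // memory+swap total; equal to Memory => swap disabled
	KernelMemory int64 // kernel memory, unified with the user limit
}

// limitsFor applies one byte limit uniformly: kernel memory capped at the
// same value as user memory, and swap turned off.
func limitsFor(bytes int64) memLimits {
	return memLimits{Memory: bytes, MemorySwap: bytes, KernelMemory: bytes}
}

func main() {
	fmt.Printf("%+v\n", limitsFor(256<<20)) // a 256 MB container
}
```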
CI
8f70b622cc fnserver: 0.3.325 release [skip ci] 2018-02-07 00:24:19 +00:00
Tolga Ceylan
ebc6657071 fn: docker version check2 (#744)
1) the required docker version is now 17.06
2) enable circle ci latest docker install
3) docker driver & agent check the minimum version before starting
2018-02-06 16:16:40 -08:00
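The startup check in item 3 boils down to comparing dot-separated numeric version fields against the 17.06 floor. A sketch of that kind of comparison (`atLeast` is a made-up helper, not fn's implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// atLeast reports whether version v (e.g. "17.09.1") is >= min ("17.06.0"),
// comparing numeric fields left to right; missing fields count as zero.
// Illustrative only -- real docker version strings can carry suffixes
// (e.g. "-ce") that a production check would need to strip first.
func atLeast(v, min string) bool {
	vp := strings.Split(v, ".")
	mp := strings.Split(min, ".")
	for i := 0; i < len(mp); i++ {
		var a, b int
		if i < len(vp) {
			a, _ = strconv.Atoi(vp[i])
		}
		b, _ = strconv.Atoi(mp[i])
		if a != b {
			return a > b
		}
	}
	return true // all compared fields equal
}

func main() {
	fmt.Println(atLeast("17.09.1", "17.06.0")) // true
	fmt.Println(atLeast("17.05.0", "17.06.0")) // false
}
```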
CI
640a47fe55 fnserver: 0.3.324 release [skip ci] 2018-02-06 00:23:02 +00:00
CI
4d802acc83 fnserver: 0.3.323 release [skip ci] 2018-02-05 20:00:05 +00:00
Reed Allman
235cbc2d67 Fix default setting (#740)
* push validate/defaults into datastore

we weren't setting a timestamp in route insert when we needed to create an app
there. that whole thing isn't atomic, but this fixes the timestamp issue.

closes #738

seems like we should do similar with the FireBeforeX stuff too.

* fix tests

* app name validation was buggy: an upper-cased letter failed validation. it
now uses unicode character classes, so it doesn't.
* removes duplicate errors for datastore and models validation that were used
interchangeably but weren't actually the same.
2018-02-05 11:54:09 -08:00
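The validation fix above hinges on using unicode character classes rather than a lowercase-only check. A sketch of that approach (`validName` and its allowed set of letters, digits, '_' and '-' are assumptions for illustration, not fn's actual rule):

```go
package main

import (
	"fmt"
	"unicode"
)

// validName accepts names via unicode.IsLetter/IsDigit, so upper-case
// letters pass -- the bug the commit fixes. The exact allowed set here
// is a guess for demonstration.
func validName(s string) bool {
	if s == "" {
		return false
	}
	for _, r := range s {
		if !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '_' && r != '-' {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validName("MyApp"))    // true: upper case is fine now
	fmt.Println(validName("bad name")) // false: space rejected
}
```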
CI
b49f332e01 fnserver: 0.3.322 release [skip ci] 2018-02-05 19:18:17 +00:00
Tolga Ceylan
fdf5a67f6f fn: error image is now deprecated (#737)
Please use fn-test-utils instead for testing.
2018-02-05 11:12:27 -08:00
CI
18cbf440bd fnserver: 0.3.321 release [skip ci] 2018-02-05 18:07:23 +00:00
Tolga Ceylan
6b5486c699 fn: sleeper image is now deprecated (#736)
Please use fn-test-utils instead for testing.
2018-02-05 10:01:11 -08:00
CI
f76648a18a fnserver: 0.3.320 release [skip ci] 2018-02-05 16:14:17 +00:00
CI
606813a35f fnserver: 0.3.319 release [skip ci] 2018-02-05 15:57:15 +00:00
Nigel Deakin
5089dd6119 Extend /stats API to handle two routes with the same path in different apps (#735)
* Extend deprecated /stats API to handle apps and paths correctly

* More changes (bugfixes) to the JSON structure returned by the stats API call
2018-02-05 15:51:53 +00:00
CI
ac4dfa6077 fnserver: 0.3.318 release [skip ci] 2018-02-01 15:31:13 +00:00
CI
45675bc36d fnserver: 0.3.317 release [skip ci] 2018-02-01 01:30:56 +00:00
Reed Allman
3b261fc144 pipe swapparoo each slot (#721)
* pipe swapparoo each slot

previously, we made a pair of pipes for stdin and stdout for each container,
and then handed them out to each call (slot) to use. this meant that multiple
calls could have a handle on the same stdin and stdout pipes to read from and
write to, from fn's perspective, and could mix input/output and get garbage. this
also meant that each call was blocked on the previous call's reads.

now we make a new pipe every time we get a slot, and swap it out with the
previous ones. calls are no longer blocked from fn's perspective, and we don't
have to worry about timing out dispatch for any hot format. there is still the
issue that if a function does not finish reading the previous call's input,
from its perspective, and reads into the next call's, it can error out the
second call. with fn deadline we provide the necessary tools to skirt this,
but without some additional coordination I am not sure this is a closable hole
with our current protocols, since terminating a previous call's input requires
some protocol-specific bytes to go in (json in particular is tricky). anyway,
from fn's side fixing pipes was definitely a hole, but this client hole is
still hanging out. there was an attempt to send an io.EOF, but the issue is
that it will shut down docker's read on the stdin pipe (and the container). poop.

this adds a test for this behavior, and makes sure 2 containers don't get
launched.

this also closes the response writer header race a little, but not entirely, I
think there's still a chance that we read a full response from a function and
get a timeout while we're changing the headers. I guess we need a thread safe
header bucket, otherwise we have to rely on timings (racy). thinking on it.

* fix stats mu race
2018-01-31 17:25:24 -08:00
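The fix's core move is allocating fresh `io.Pipe` pairs per slot rather than sharing one pair per container. A minimal sketch of that idea (the `slot` type and names are illustrative, not fn's actual types):

```go
package main

import (
	"fmt"
	"io"
)

// slot holds a fresh pair of pipes for one call, so concurrent calls can't
// read each other's bytes or block on each other's reads. Sketch only.
type slot struct {
	stdinR  *io.PipeReader // the container-facing side reads input here
	stdinW  *io.PipeWriter // the call writes its input here
	stdoutR *io.PipeReader // the call reads output here
	stdoutW *io.PipeWriter // the container-facing side writes output here
}

// newSlot makes brand-new pipes for each call, swapped in for the old ones,
// instead of one shared pair for the container's lifetime.
func newSlot() *slot {
	ir, iw := io.Pipe()
	or, ow := io.Pipe()
	return &slot{stdinR: ir, stdinW: iw, stdoutR: or, stdoutW: ow}
}

func main() {
	a, b := newSlot(), newSlot()
	// distinct pipes per slot: writes into a cannot bleed into b
	fmt.Println(a.stdinR != b.stdinR && a.stdoutR != b.stdoutR)
}
```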
CI
4e989b0789 fnserver: 0.3.316 release [skip ci] 2018-01-31 12:32:20 +00:00
Dario Domizioli
e753732bd8 Hot protocols improvements (for 662) (#724)
* Improve deadline handling in streaming protocols

* Move special headers handling down to the protocols

* Adding function format documentation for JSON changes

* Add tests for request url and method in JSON protocol

* Fix protocol missing fn-specific info

* Fix import

* Add panic for something that should never happen
2018-01-31 12:26:43 +00:00
CI
02c8aa1998 fnserver: 0.3.315 release [skip ci] 2018-01-31 12:13:33 +00:00
Dario Domizioli
e2dad00a83 Add simple test for calling several hot functions in parallel (#675)
* Add test for calling several hot functions in parallel
2018-01-31 12:08:05 +00:00
CI
5f8736500d fnserver: 0.3.314 release [skip ci] 2018-01-26 20:26:48 +00:00
Tolga Ceylan
97d78c584b fn: better slot/container/request state tracking (#719)
* fn: better slot/container/request state tracking
2018-01-26 12:21:11 -08:00
CI
a7223437df fnserver: 0.3.313 release [skip ci] 2018-01-24 17:36:52 +00:00
CI
cc4c157d56 fnserver: 0.3.312 release [skip ci] 2018-01-24 15:43:41 +00:00
CI
665997078e fnserver: 0.3.311 release [skip ci] 2018-01-24 03:57:41 +00:00
Reed Allman
bbd50a0e02 additional ctx spans / maid service (#716)
* add spans to async

* clean up / add spans to agent

* a few methods had multiple contexts in the same
scope (this usually doesn't end well); flattened those out.
* loop-bound context cancels now rely on defer (the old way was also brittle)
* runHot had a lot of ctx shuffling; flattened that.
* added some additional spans in certain paths for added granularity
* linked up the hot launcher / run hot / wait hot to _a_ root span; the first
2 are follows-from spans, but at least we can see the source of these and can
also see containers launched over a hot launcher's lifetime

I left TODO around the FollowsFrom because OpenCensus doesn't, at least at the
moment, appear to have any idea of FollowsFrom and it was an extra OpenTracing
method (we have to get the span out, start a new span with the option, then
add it to the context... some shuffling required). anyway, I was on the fence
about adding it at all.

* resource waiters need to manage their own goroutine lifecycle

* if we get an impossible memory request, bail instead of infinite loop

* handle timeout slippery case

* still sucks, but hotLauncher doesn't leak anything. even the time.After timer goroutines

* simplify GetResourceToken

GetCall can guard against impossible-to-allocate resource tasks entering
the system by erroring instead of doling them out. this makes the GetResourceToken
logic more straightforward for callers, who now have a simple contract: the
only way they won't get a token is if they let tasks into the agent that can't
run (but GetCall guards against this, and there's a test for it).

sorry, I was going to make this only do that, but when I went to fix up the
tests, my last patch went haywire so I fixed that too. this also at least
tries to simplify the hotLaunch loop, which will now no longer leak time.After
timers (which were long, and with signaller, they were many -- I got a stack
trace :) -- this breaks out the bottom half of the logic to check to see if we
need to launch into its own function, and handles the cleaning duties only in
the caller instead of in 2 different select statements. played with this a
bit, no doubt further cleaning could be done, but this _seems_ better.

* fix vet

* add units to exported method contract docs

* oops
2018-01-23 19:52:22 -08:00
CI
ccd95b6f72 fnserver: 0.3.310 release [skip ci] 2018-01-24 03:35:45 +00:00
Tolga Ceylan
ee59361bda fn: added server too busy stats (#717) 2018-01-23 19:30:01 -08:00
CI
6873ed1fc1 fnserver: 0.3.309 release [skip ci] 2018-01-23 21:20:57 +00:00
CI
5fc01d6974 fnserver: 0.3.308 release [skip ci] 2018-01-22 22:23:01 +00:00
CI
2fa754ff48 fnserver: 0.3.307 release [skip ci] 2018-01-22 20:09:10 +00:00
CI
81e015d008 fnserver: 0.3.306 release [skip ci] 2018-01-22 19:48:28 +00:00
Reed Allman
bae13d6c29 fix the http protocol dumper (#705)
we were using httputil.DumpRequest when there is a perfectly good
req.Write method hanging out in the stdlib, which even handles the chunked thing
that a few people ran into when they didn't provide a content length:
https://golang.org/pkg/net/http/#Request.Write -- so we shouldn't run into
that issue again. I hit this in testing and it was not very fun to debug, so
I added a test that repro'd it on master and fixed it here. of course, adding a
content length works too. tested this and it appears to work pretty well; also
cleaned up the control flow a little bit in the http protocol.
2018-01-22 11:41:04 -08:00
CI
4d7c951f76 fnserver: 0.3.305 release [skip ci] 2018-01-22 17:02:09 +00:00
Nigel Deakin
e1df053de9 Change timedout to timeouts (#709) 2018-01-22 16:55:30 +00:00
CI
5c7a21b59e fnserver: 0.3.304 release [skip ci] 2018-01-19 20:43:29 +00:00
Tolga Ceylan
8c31e47c01 fn: agent slot improvements (#704)
*) Stopped using the previous/current latency stats; this
was not working as expected. Fresh starts usually have
these stats at zero for a long time, and initial samples
are high due to downloads, caches, etc.

*) New state to track: containers that are idle. In other
words, containers that have an unused token in the slot
queue.

*) Removed latency counts since these are no longer used in
the container start decision. Simplifies logs.

*) isNewContainerNeeded() simplified to use the idle count
to estimate effective waiters. Removed the speculative
latency-based logic and progress check comparison.
In the agent, waitHot()'s delayed signalling compensates
for these changes. The estimation may fail, but
it should correct itself on the next 200 msec
signal.
2018-01-19 12:35:52 -08:00
CI
31fd1276fb fnserver: 0.3.303 release [skip ci] 2018-01-19 18:09:40 +00:00
CI
1549534c3c fnserver: 0.3.302 release [skip ci] 2018-01-19 03:45:21 +00:00
Tolga Ceylan
2f0de2b574 fn: resource and slot cancel and broadcast improvements (#696)
* fn: resource and slot cancel and broadcast improvements

*) The context argument did not wake up the waiters correctly upon
cancellation/timeout.
*) Avoid unnecessary broadcasts in slot and resource.

* fn: limit scope of context in resource/slot calls in agent
2018-01-18 13:43:56 -08:00
Reed Allman
c9e995292c if a slot is available, don't launch more (#701)
since we were sending a signal before checking whether a slot was available,
I was seeing 2 containers launch even for serial calls locally. if we
only send a signal after first checking whether a slot is available, this goes
away. 1 usec should not be too offensive an additional wait, all things
considered here.
2018-01-18 13:19:25 -08:00
CI
3e2debae07 fnserver: 0.3.301 release [skip ci] 2018-01-18 00:16:12 +00:00
Tolga Ceylan
5a7778a656 fn: cancellations in WaitAsyncResource (#694)
* fn: cancellations in WaitAsyncResource

Added a go context with cancel to WaitAsyncResource. Although
today the only case for cancellation is shutdown, this cleans
up agent shutdown a little bit.

* fn: locked broadcast to avoid missed wake-ups

* fn: removed ctx arg to WaitAsyncResource and startDequeuer

This is confusing and unnecessary.
2018-01-17 16:08:54 -08:00
CI
65592c9d26 fnserver: 0.3.300 release [skip ci] 2018-01-17 15:23:53 +00:00