* fn: introducing 503 responses for the out-of-capacity case
*) Adding a 503 with a Retry-After header if a request fails
while waiting for slots (a minimal sketch follows this list).
*) TODO: return 503 without Retry-After if the request can
never be satisfied by this fn server.
*) fn: runner test docker pull fixup
*) fn: MaxMemory for routes is now a variable to allow
testing and adjusting it according to fleet memory sizes.
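the out-of-capacity path, roughly (a sketch only; the handler and the slot-wait helper here are made up, not the runner's actual code):

```go
package server

import (
	"net/http"
	"strconv"
	"time"
)

// waitForSlot stands in for however the runner acquires capacity; it returns
// false if no slot frees up before the timeout.
func waitForSlot(timeout time.Duration) bool {
	// ... wait on a capacity pool / semaphore ...
	return false
}

func serveCall(w http.ResponseWriter, r *http.Request) {
	const retryAfter = 15 * time.Second // hypothetical backoff hint

	if !waitForSlot(30 * time.Second) {
		// out of capacity right now, but the request could succeed later:
		// tell the client when to come back
		w.Header().Set("Retry-After", strconv.Itoa(int(retryAfter.Seconds())))
		http.Error(w, "server too busy", http.StatusServiceUnavailable)
		return
	}

	// TODO (per the item above): if the route's requirements can never be met
	// by this server (e.g. it asks for more memory than the machine has),
	// return 503 without a Retry-After header instead

	// ... run the function ...
	w.WriteHeader(http.StatusOK)
}
```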
* add minio-go dep, update deps
* add minio s3 client
minio has an s3-compatible api and is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hairball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod, for demos, and for local use.
* adds an 's3' package for an s3-compatible log storage api, used for storing
logs from calls and retrieving them (rough sketch below).
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs; I have another branch lined up to add a plain-text
log API, which will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add docs on how to run minio and how to point fn at it
some notes: notably, we aren't cleaning up these logs. there is already a ticket
to make a Mr. Clean who wakes up periodically and nukes old stuff, so I'm
punting on any api design around TTL deletion of logs. there are a lot of
options for Mr. Clean; we can also defer to him when apps are deleted, so that
app deletion is fast and Mr. Clean just cleans the logs up later (seems like a
good option).
have not tested against the BMC object store, which has an s3-compatible API,
but in theory it 'just works' (the reason for doing this). in any event, that's
part of service land to figure out.
closes #481, closes #473
* add log not found error to minio land
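for a feel of what the s3 log store boils down to, here's a rough sketch against minio-go (this uses the current minio-go/v7 api and made-up bucket/key names, not necessarily what is vendored here):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	// point at a local minio (see the "how to run minio" docs mentioned above);
	// creds here are minio's defaults for a local dev install
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
		Secure: false,
	})
	if err != nil {
		panic(err)
	}

	const bucket = "fn-logs" // hypothetical bucket; MakeBucket elided
	key := "app/myapp/call/12345"

	// store a call's log
	logBody := "hello from my func\n"
	_, err = client.PutObject(ctx, bucket, key,
		strings.NewReader(logBody), int64(len(logBody)),
		minio.PutObjectOptions{ContentType: "text/plain"})
	if err != nil {
		panic(err)
	}

	// GetLog now hands back an io.Reader; *minio.Object satisfies that, so
	// retrieval can stream instead of buffering the whole log
	obj, err := client.GetObject(ctx, bucket, key, minio.GetObjectOptions{})
	if err != nil {
		panic(err)
	}
	defer obj.Close()
	if _, err := io.Copy(os.Stdout, obj); err != nil {
		fmt.Println("read error:", err)
	}
}
```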
before returning the cookie in the driver, wait for health checks
(https://docs.docker.com/engine/reference/builder/#healthcheck) if provided
(sketched below). for images that don't have health checks, this will have no
effect (just an added call to inspect the container; for hot it's small potatoes).
this will be useful for containers so that they can pull large files or do
setup that takes a while before accepting tasks. since this is before start,
it won't run into the idle timeout. we could likely use these for hot
containers in general and check between runs or something, but didn't do that
here.
one nascent concern is that, for hot, if the containers never become healthy
I don't think we will ever kill them and the slot will 'leak'. this is true
for this case and for others (pulling the image) I think; we should probably
recycle hot containers every hour or something, which would also close this.
anyway, not a huge blocker I don't think; there will likely be 1 user of this
feature for a bit, and it's not documented since we're not sure we want to
support it.
closes #336
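roughly what the wait looks like (a sketch only, not the driver's actual code; the polling interval and the docker SDK usage here are assumptions):

```go
package driver

import (
	"context"
	"fmt"
	"time"

	"github.com/docker/docker/client"
)

// waitHealthy blocks until the container reports healthy, or returns right
// away if the image defines no HEALTHCHECK (just the one extra inspect call).
func waitHealthy(ctx context.Context, cli *client.Client, containerID string) error {
	for {
		info, err := cli.ContainerInspect(ctx, containerID)
		if err != nil {
			return err
		}
		if info.State == nil || info.State.Health == nil {
			// no HEALTHCHECK in the image: nothing to wait for
			return nil
		}
		switch info.State.Health.Status {
		case "healthy":
			return nil
		case "unhealthy":
			return fmt.Errorf("container %s reported unhealthy", containerID)
		}
		// still "starting": poll again shortly, bail if the caller gives up
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```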
* Docker stats to Prometheus
* Fix compilation error in docker_test
* Refactor docker driver Run function to wait for the container to have stopped before stopping the collection of statistics
* Fix go fmt errors
* Updates to sending docker stats to Prometheus
* remove new test TestWritResultImpl because the changes to support multiple waiters have been removed
* Update docker.Run to use channels, not contexts, to shut down the stats collector (sketched after this list)
* wip
* wip
* Added more fields to JSON and added blank line between objects.
* Update tests.
* wip
* Updated to represent recent discussions.
* Fixed up the json test
* More docs
* Changed from blank line to bracket, newline, open bracket.
* Blank line added back, easier for delimiting.
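for flavor, a sketch of the collector shape: a goroutine that streams docker stats, exports memory usage to a prometheus gauge, and is shut down via a channel. the metric name, the fields decoded, and the wiring are assumptions, not necessarily what this lands as:

```go
package stats

import (
	"context"
	"encoding/json"

	"github.com/docker/docker/client"
	"github.com/prometheus/client_golang/prometheus"
)

var memUsage = prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "fn_docker_mem_usage_bytes", // hypothetical metric name
	Help: "Current memory usage of a function container.",
}, []string{"container"})

func init() { prometheus.MustRegister(memUsage) }

// only the fields we care about out of docker's stats JSON stream
type statsLine struct {
	MemoryStats struct {
		Usage uint64 `json:"usage"`
	} `json:"memory_stats"`
}

// collectStats streams stats for one container and updates the gauge until
// the stop channel is closed (channels, not contexts, per the note above).
func collectStats(ctx context.Context, cli *client.Client, id string, stop <-chan struct{}) error {
	resp, err := cli.ContainerStats(ctx, id, true /* stream */)
	if err != nil {
		return err
	}
	// closing the body is what unblocks the decoder below when we're told to
	// stop (real code would also tear this goroutine down on normal exit)
	go func() {
		<-stop
		resp.Body.Close()
	}()

	dec := json.NewDecoder(resp.Body)
	for {
		var s statsLine
		if err := dec.Decode(&s); err != nil {
			// stream closed: container exited or we were stopped
			return nil
		}
		memUsage.WithLabelValues(id).Set(float64(s.MemoryStats.Usage))
	}
}
```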
our dear friend mr. funclogger was bypassing calls to our multi writer: since
we were embedding a *bytes.Buffer, callers were using ReadFrom and WriteString,
which would never call the stderr logger's Write method (or, as I learned, anything
else trying to wrap that buffer's Write method...).
the tl;dr is that many times DEBUG lines don't get spat out, especially from
async tasks (few people using this).
I think the final solution is probably to make funclogger a 'more robust'
interface that we understand, instead of trying to minimize it to an
io.ReadWriteCloser; much like how bytes.Buffer has all kinds of
methods implemented on it, we can most likely implement things like ReadFrom and
WriteString ourselves. not a big fan of how things are now (and it's my own
doing) with the read-write-closer coming from multiple places, but meh,
will get to it some day soon; the log stuff will be a pretty hot path.
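a tiny demo of the trap, with made-up names standing in for funclogger:

```go
// embedding *bytes.Buffer promotes ReadFrom and WriteString, so io.Copy and
// io.WriteString sidestep the Write override (and, in fn's case, the stderr
// logger wired into the multi writer). names here are illustrative only.
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

type loggingBuffer struct {
	*bytes.Buffer // the promoted ReadFrom/WriteString are the trap
	writes        int
}

func (l *loggingBuffer) Write(p []byte) (int, error) {
	l.writes++ // stand-in for "also send this line to the stderr logger"
	return l.Buffer.Write(p)
}

func main() {
	lb := &loggingBuffer{Buffer: new(bytes.Buffer)}

	io.Copy(lb, strings.NewReader("hello ")) // uses promoted ReadFrom, skips Write
	io.WriteString(lb, "world ")             // uses promoted WriteString, skips Write
	fmt.Fprint(lb, "!")                      // plain Write, this one is counted

	fmt.Printf("buffer=%q, writes seen by wrapper=%d\n", lb.String(), lb.writes)
	// prints: buffer="hello world !", writes seen by wrapper=1
}
```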
this change makes Dispatch write the request body and
http headers directly to the pipe one by one
when the request body is non-empty;
if it is empty, it just writes the headers and closes/finalizes the JSON
(rough illustration below).
What's new?
- better error handling
- still need to decode JSON from the function because we need the status code and body
- prevent the request body from being a problem by deferring its close
- moving examples around: putting the http and json samples into one folder
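very roughly, the shape of it (the envelope here is illustrative only, not fn's actual json format; writeRequest and its field names are made up):

```go
package protocol

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// writeRequest writes the headers, then the body only if there is one, then
// finalizes the JSON object on the container's stdin pipe.
func writeRequest(pipe io.Writer, req *http.Request) error {
	if _, err := io.WriteString(pipe, `{"headers":`); err != nil {
		return err
	}
	if err := json.NewEncoder(pipe).Encode(req.Header); err != nil {
		return err
	}

	// deferring the close keeps a misbehaving body from becoming a problem,
	// per the note above
	if req.Body != nil && req.ContentLength != 0 {
		defer req.Body.Close()
		b, err := io.ReadAll(req.Body)
		if err != nil {
			return fmt.Errorf("reading request body: %w", err)
		}
		if _, err := io.WriteString(pipe, `,"body":`); err != nil {
			return err
		}
		if err := json.NewEncoder(pipe).Encode(string(b)); err != nil {
			return err
		}
	}

	// close out / finalize the JSON
	_, err := io.WriteString(pipe, "}\n")
	return err
}
```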
go vet caught some nifty bugs, so those are fixed here, and we now vet
everything from now on, since the robots seem to do a better job of
vetting than we have managed to.
also adds a gofmt check to circle. could move this to the test.sh script (didn't
want a script calling a script, because $reasons) and it's nice and isolated
in its own little land as it is. side note: changed the script so it runs in
100ms instead of 3s; i think find is a lot faster than go list.
attempted some minor cleanup of various scripts
* idle_timeout max of 1h
* timeout max of 120s for sync, 1h for async
* max memory of 8GB
* do full route validation before call invocation
* ensure that idle_timeout >= timeout
we now do validation of route updates inside of the database
transaction, which is what we should have been doing all along, really.
we need this behavior to ensure that the idle timeout is at least as long as the
timeout, among other benefits (like not updating the most recent version of
the existing struct and overwriting previous updates, yay). since we have
this, we can get rid of the weird skipZero behavior on validate too and
validate the real deal holyfield.
validating the route before making the call is handy so that we don't do weird
things like run a func that wants to use 300GB of RAM and run for 3 weeks
(the limits above are sketched below).
closes #192, closes #344, closes #162
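the limits above, as a validate func for orientation (field and constant names are illustrative, not the actual models code; the real checks run inside the db transaction as described):

```go
package models

import (
	"errors"
	"time"
)

// hypothetical constants mirroring the limits listed above
const (
	maxIdleTimeout  = time.Hour
	maxSyncTimeout  = 120 * time.Second
	maxAsyncTimeout = time.Hour
	maxMemoryMB     = 8 * 1024 // 8GB
)

type Route struct {
	Type        string // "sync" or "async"
	Timeout     time.Duration
	IdleTimeout time.Duration
	MemoryMB    uint64
}

// Validate enforces the full set of limits before a route is written or a
// call is invoked.
func (r *Route) Validate() error {
	maxTimeout := maxSyncTimeout
	if r.Type == "async" {
		maxTimeout = maxAsyncTimeout
	}
	switch {
	case r.Timeout <= 0 || r.Timeout > maxTimeout:
		return errors.New("route timeout out of range")
	case r.IdleTimeout <= 0 || r.IdleTimeout > maxIdleTimeout:
		return errors.New("route idle_timeout out of range")
	case r.IdleTimeout < r.Timeout:
		return errors.New("idle_timeout must be >= timeout")
	case r.MemoryMB == 0 || r.MemoryMB > maxMemoryMB:
		return errors.New("route memory out of range")
	}
	return nil
}
```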