LB agent reports lb placer latency. It should also report
how long it took for the runner to initiate the call, as
well as the execution time inside the container if the runner
has accepted (committed to) the call.
* Don't try to delete an app that wasn't successfully created in the case of failure
* Allow datastore implementations to inject additional annotations on objects
* Allow for datastores transparently adding annotations on apps, fns and triggers. Change NameIn filter to Name for apps.
* Move *List types including JSON annotations for App, Fn and Trigger into models
* Change return types for GetApps, GetFns and GetTriggers on datastore to
be models.*List and move cursor generation into datastore
* Trigger cursor handling moved into the db layer
Also changes the name generation so that it is no longer in the same order
as the id (it is now random), which means we are now testing our name ordering.
* GetFns now respects cursors
* Apps now feeds cursor back
* Mock fixes
* Fixing up api level cursor decoding
* Tidy up treatment of cursors in the db layer
* Adding conditions for non-nil item lists
* fix mock test
Vast commit, includes:
* Introduces the Trigger domain entity.
* Introduces the Fns domain entity.
* V2 of the API for interacting with the new entities in swaggerv2.yml
* Adds v2 end points for Apps to support PUT updates.
* Rewrites the datastore level tests into a new pattern.
* V2 routes use entity ID over name as the path parameter.
* Adding a way to inject a request ID
It is very useful to associate a request ID with each incoming request;
this change allows providing a function to do that via a Server Option.
The change comes with a default function which will generate a new
request ID. The request ID is put in the request context along with a
common logger which always logs the request-id.
We add gRPC interceptors to the server so it can get the request ID out
of the gRPC metadata and put it in the common logger stored in the
context, so that all the log lines using the common logger from the
context will have the request ID logged.
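A minimal sketch of the shape of this, with hypothetical names (`WithRequestIDProvider`, the `fn-request-id` metadata key, the context keys) standing in for whatever the tree actually uses:
```go
// Sketch only: a server option that installs a request-ID provider, an HTTP
// middleware that stores the ID plus a request-scoped logger in the context,
// and a gRPC interceptor that lifts the ID out of incoming metadata.
// All names here are illustrative, not the exact ones in the repo.
package server

import (
	"context"
	"net/http"

	"github.com/google/uuid"
	"github.com/sirupsen/logrus"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

type ridKey struct{}
type loggerKey struct{}

type Server struct {
	ridProvider func(*http.Request) string
}

// WithRequestIDProvider lets callers supply their own request-ID function.
func WithRequestIDProvider(p func(*http.Request) string) func(*Server) {
	return func(s *Server) { s.ridProvider = p }
}

// default provider, used when no provider is supplied: mint a fresh ID
func defaultRequestID(_ *http.Request) string { return uuid.New().String() }

func (s *Server) requestIDMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rid := s.ridProvider(r)
		ctx := context.WithValue(r.Context(), ridKey{}, rid)
		ctx = context.WithValue(ctx, loggerKey{}, logrus.WithField("request_id", rid))
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// gRPC unary interceptor: pull the ID out of metadata into the common logger.
func requestIDInterceptor(ctx context.Context, req interface{},
	info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		if vals := md.Get("fn-request-id"); len(vals) > 0 {
			ctx = context.WithValue(ctx, ridKey{}, vals[0])
			ctx = context.WithValue(ctx, loggerKey{}, logrus.WithField("request_id", vals[0]))
		}
	}
	return handler(ctx, req)
}
```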
* fn: introducing lb placer basic metrics
This change adds basic metrics to naive and consistent
hash LB placers. The stats show how many times we scanned
the full runner list, if runner pool failed to return a
runner list or if runner pool returned an empty list.
Placed and not-placed statuses are also tracked, along with
whether TryExec returned an error. The most common error
code, Too-Busy, is specifically tracked.
If the client cancels or times out, this is also tracked as
a client cancel metric.
For placer latency, we would like to know how much time
the placer spent searching for a runner until it
successfully places a call. This includes round-trip
times for NACK responses from the runners up until a successful
TryExec() call. By excluding the last successful TryExec() latency,
we try to exclude function execution and runner container
startup time from this metric, in an attempt to isolate
placer-only latency.
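A rough sketch of that bookkeeping (the measure name and the Runner/RunnerCall shapes are assumptions, not the real placer code):
```go
// Sketch: accumulate placer latency across NACKed attempts, then subtract the
// final successful TryExec so container startup/execution time is excluded.
// The measure and interfaces below are illustrative only.
package placer

import (
	"context"
	"errors"
	"time"

	"go.opencensus.io/stats"
)

type RunnerCall interface{}

type Runner interface {
	// TryExec returns whether the runner committed to the call.
	TryExec(ctx context.Context, call RunnerCall) (bool, error)
}

var (
	placerLatency         = stats.Int64("lb_placer_latency", "time spent placing a call", "ms")
	ErrorPlacementTimeout = errors.New("timed out placing call")
)

func place(ctx context.Context, runners []Runner, call RunnerCall) error {
	start := time.Now()
	for _, r := range runners {
		attemptStart := time.Now()
		committed, err := r.TryExec(ctx, call)
		if committed {
			// placer latency = total search time minus the committed attempt,
			// which includes container startup + function execution
			elapsed := time.Since(start) - time.Since(attemptStart)
			stats.Record(ctx, placerLatency.M(int64(elapsed/time.Millisecond)))
			return err
		}
		// NACK round-trips stay in the measurement
	}
	return ErrorPlacementTimeout
}
```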
* fn: latency and attempt tracker
Removing the full scan metric. Tracking the number of
runners attempted is a better metric for this
purpose.
Also, if rp.Runners() fails, this is an unrecoverable
error and we should bail out instead of retrying.
* fn: typo fix, ch placer finalize err return
* fn: enable LB placer metrics in WithAgentFromEnv if prometheus is enabled
* datastore no longer implements logstore
the underlying implementation of our sql store implements both the datastore
and the logstore interface. however, going forward we are likely to encounter
datastore implementers that would mock out the logstore interface and not use
its methods - signalling a poor interface. this remedies that: they are now 2
completely separate things, both of which our sqlstore happens to implement.
related to some recent changes around wrapping, this keeps the imposed metrics
and validation wrapping of a server's logstore and datastore, just moving it
into New instead of into the opts. this is so that a user can hand over the
underlying datastore in order to set the logstore to it; wrapping it in a
validator/metrics layer would render it no longer a logstore implementer (i.e.
the validating datastore doesn't implement the logstore interface), so we need
to do this after setting the logstore to the datastore if one wasn't provided
explicitly.
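A sketch of the resulting shape and the ordering constraint in New (interface surfaces abbreviated, helper names made up):
```go
// Sketch only: Datastore and LogStore are now independent interfaces; the sql
// store happens to satisfy both. server.New has to pick the default logstore
// from the raw datastore *before* wrapping it, because the wrapped datastore
// no longer satisfies LogStore. Method sets are abbreviated.
package example

import (
	"context"
	"io"
)

type App struct{ ID, Name string }

type Datastore interface {
	GetAppByID(ctx context.Context, appID string) (*App, error)
	// ... remaining app/fn/trigger CRUD elided
}

type LogStore interface {
	InsertLog(ctx context.Context, appID, callID string, log io.Reader) error
	GetLog(ctx context.Context, appID, callID string) (io.Reader, error)
}

func wireStores(raw Datastore, explicit LogStore) (Datastore, LogStore) {
	ls := explicit
	if ls == nil {
		// the sql store implements both, so the raw datastore can back logs too
		if both, ok := raw.(LogStore); ok {
			ls = both
		}
	}
	// wrap *after* the logstore decision; wrapping first would hide LogStore
	return wrapWithValidationAndMetrics(raw), ls
}

// stand-in for the metrics/validation decorators applied in New
func wrapWithValidationAndMetrics(ds Datastore) Datastore { return ds }
```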
* splits logstore and datastore metrics & validation logic
* `make test` should always be `make full-test`. got rid of that so that
nobody else has to wait for CI to blow up on them after the tests pass locally
ever again.
* fix new tests
* App ID
* Clean-up
* Use ID or name to reference apps
* Can use app by name or ID
* Get rid of AppName for routes API and model
routes API is completely backwards-compatible
routes API accepts both app ID and name
* Get rid of AppName from calls API and model
* Fixing tests
* Get rid of AppName from logs API and model
* Restrict API to work with app names only
* Addressing review comments
* Fix for hybrid mode
* Fix rebase problems
* Addressing review comments
* Addressing review comments pt.2
* Fixing test issue
* Addressing review comments pt.3
* Updated docstring
* Adjust UpdateApp SQL implementation to work with app IDs instead of names
* Fixing tests
* fmt after rebase
* Make tests green again!
* Use GetAppByID wherever it is necessary
- adding new v2 endpoints to keep hybrid api/runner mode working
- extract CallBase from Call object to expose that to a user
(it doesn't include any app reference, as we do for all other API objects)
* Get rid of GetAppByName
* Adjusting server router setup
* Make hybrid work again
* Fix datastore tests
* Fixing tests
* Do not ignore app_id
* Resolve issues after rebase
* Updating test to make it work as it was
* Tabula rasa for migrations
* Adding calls API test
- we need to ensure we give "App not found" for the missing app and the missing call in the first place
- making the previous test work (request a missing call for an existing app)
* Make datastore tests work fine with correctly applied migrations
* Make CallFunction middleware work again
had to adjust its implementation to set app ID before proceeding
* The biggest rebase ever made
* Fix 8's migration
* Fix tests
* Fix hybrid client
* Fix tests problem
* Increment app ID migration version
* Fixing TestAppUpdate
* Fix rebase issues
* Addressing review comments
* Renew vendor
* Updated swagger doc per recommendations
* Move delegated agent creation within NewLBAgent so we can hide the fact we disable docker
* Move delegated agent creation within NewPureRunner for better encapsulation
* Move out node-pool manager and replace it with RunnerPool extension
* adds extension points for runner pools in load-balanced mode
* adds error to return values in RunnerPool and Runner interfaces
* Implements runner pool contract with context-aware shutdown
* fixes issue with range
* fixes tests to use runner abstraction
* adds empty test file as a workaround for build requiring go source files in top-level package
* removes flappy timeout test
* update docs to reflect runner pool setup
* refactors system tests to use runner abstraction
* removes poolmanager
* moves runner interfaces from models to api/runnerpool package
* Adds a second runner to pool docs example
* explicitly check for request spillover to second runner in test
* moves runner pool package name for system tests
* renames runner pool pointer variable for consistency
* pass model json to runner
* automatically cast to http.ResponseWriter in load-balanced call case
* allow overriding of server RunnerPool via a programmatic ServerOption
* fixes return type of ResponseWriter in test
* move Placer interface to runnerpool package
* moves hash-based placer out of open source project
* removes siphash from Gopkg.lock
* add jaeger support, link hot container & req span
* adds jaeger support, now with FN_JAEGER_URL; there's a simple tutorial in the
operating/metrics.md file now and it's pretty easy to get up and running.
* links a hot request span to a hot container span. when we change this to
sample at a lower ratio we'll need to finagle the hot container span to always
sample or something, otherwise we'll hide that info. at least, since we're
sampling at 100% for now if this is flipped on, we can see freeze/unfreeze etc.
if they hit. this is useful for debugging. note that zipkin's exporter does
not follow the link at all, hence jaeger... and they're backed by the Cloud
Empire (CNCF) now, so we'll probably use it anyway.
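What the span link amounts to in opencensus terms, as a sketch (span names here are illustrative):
```go
// Sketch: link the per-request span to the long-lived hot container span so
// jaeger can correlate them; sample at 100% for now. Span names are made up.
package example

import (
	"context"

	"go.opencensus.io/trace"
)

func enableFullSampling() {
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
}

func execInHotContainer(reqCtx, containerCtx context.Context) {
	_, reqSpan := trace.StartSpan(reqCtx, "agent_submit_hot")
	defer reqSpan.End()

	// the container span was started when the hot container launched
	if containerSpan := trace.FromContext(containerCtx); containerSpan != nil {
		sc := containerSpan.SpanContext()
		reqSpan.AddLink(trace.Link{TraceID: sc.TraceID, SpanID: sc.SpanID, Type: trace.LinkTypeChild})
	}
	// ... hand the call to the container here
}
```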
* vendor: add thrift for jaeger
* fn, dockerd pid collector & go collector metrics
the prometheus client we're using has a nice collector for process metrics and
for go metrics. these are things we are very interested in operationally, and
recently the benevolent team at opencensus made this possible again, so this
hooks it up for us with added dockerd sugar.
nannying the dockerd we're using should be super useful since that thing likes
to get carried away; it'll be nice to differentiate memory/cpu usage between
dockerd / the host / fn. this will basically only work in a 'dind'
environment, or on a linux host that is running fn outside of docker and is
configured with the permissions to be able to check this. otherwise, it will
simply fail. we also probably want disk i/o and net i/o information for that
as well, or at least it would be interesting to differentiate from the host,
but this isn't hooked up in the default collectors unfortunately. a rough
sketch of the hookup follows the sample output below.
dockerd:
```
dockerd_process_cpu_seconds_total 520.74
dockerd_process_max_fds 1.048576e+06
dockerd_process_resident_memory_bytes 9.033728e+07
dockerd_process_start_time_seconds 1.52029677322e+09
dockerd_process_virtual_memory_bytes 1.782509568e+09
```
fn:
```
fn_process_cpu_seconds_total 0.14
fn_process_max_fds 1024
fn_process_open_fds 12
fn_process_resident_memory_bytes 2.7348992e+07
fn_process_start_time_seconds 1.52056274238e+09
fn_process_virtual_memory_bytes 7.20068608e+08
```
go:
```
go_gc_duration_seconds{quantile="0"} 4.4194e-05
go_gc_duration_seconds{quantile="0.25"} 9.8118e-05
go_gc_duration_seconds{quantile="0.5"} 0.000105989
go_gc_duration_seconds{quantile="0.75"} 0.000106251
go_gc_duration_seconds{quantile="1"} 0.000157864
go_gc_duration_seconds_sum 0.000512416
go_gc_duration_seconds_count 5
go_goroutines 30
go_memstats_alloc_bytes 3.897696e+06
go_memstats_alloc_bytes_total 1.2916016e+07
go_memstats_buck_hash_sys_bytes 1.45034e+06
go_memstats_frees_total 75399
go_memstats_gc_sys_bytes 450560
go_memstats_heap_alloc_bytes 3.897696e+06
go_memstats_heap_idle_bytes 868352
go_memstats_heap_inuse_bytes 5.750784e+06
go_memstats_heap_objects 29925
go_memstats_heap_released_bytes_total 0
go_memstats_heap_sys_bytes 6.619136e+06
go_memstats_last_gc_time_seconds 1.520562751182639e+09
go_memstats_lookups_total 239
go_memstats_mallocs_total 105324
go_memstats_mcache_inuse_bytes 3472
go_memstats_mcache_sys_bytes 16384
go_memstats_mspan_inuse_bytes 90592
go_memstats_mspan_sys_bytes 98304
go_memstats_next_gc_bytes 6.31304e+06
go_memstats_other_sys_bytes 710548
go_memstats_stack_inuse_bytes 720896
go_memstats_stack_sys_bytes 720896
go_memstats_sys_bytes 1.0066168e+07
```
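The sketch mentioned above, against the prometheus client_golang API; the dockerd pidfile path, namespaces and registry wiring are assumptions (the real hookup goes through the opencensus exporter's registry):
```go
// Sketch: register the go runtime collector, a process collector for fn, and a
// second process collector pointed at dockerd via its pidfile. Paths and
// namespaces are assumptions; outside dind / without permissions this simply fails.
package main

import (
	"io/ioutil"
	"net/http"
	"strconv"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func dockerdPid() (int, error) {
	b, err := ioutil.ReadFile("/var/run/docker.pid") // assumed pidfile location
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(
		prometheus.NewGoCollector(),
		prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{Namespace: "fn"}),
		prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{
			Namespace: "dockerd",
			PidFn:     dockerdPid,
		}),
	)
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	http.ListenAndServe(":8080", nil)
}
```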
* cache pid until it stops working
* Refactor PureRunner as an Agent so that it encapsulates its grpc server
* Maintain a list of extra contexts for the server to select on to handle errors and cancellations
* Initial stab at the protocol
* initial protocol sketch for node pool manager
* Added http header frame as a message
* Force the use of WithAgent variants when creating a server
* adds grpc models for node pool manager plus go deps
* Naming things is really hard
* Merge (and optionally purge) details received by the NPM
* WIP: starting to add the runner-side functionality of the new data plane
* WIP: Basic startup of grpc server for pure runner. Needs proper certs.
* Go fmt
* Initial agent for LB nodes.
* Agent implementation for LB nodes.
* Pass keys and certs to LB node agent.
* Remove accidentally left reference to env var.
* Add env variables for certificate files
* stub out the capacity and group membership server channels
* implement server-side runner manager service
* removes unused variable
* fixes build error
* splits up GetCall and GetLBGroupId
* Change LB node agent to use TLS connection.
* Encode call model as JSON to send to runner node.
* Use hybrid client in LB node agent.
This should provide access to get app and route information for the call
from an API node.
* More error handling on the pure runner side
* Tentative fix for GetCall problem: set deadlines correctly when reserving slot
* Connect loop for LB agent to runner nodes.
* Extract runner connection function in LB agent.
* drops committed capacity counts
* Bugfix - end state tracker only in submit
* Do logs properly
* adds first pass of tracking capacity metrics in agent
* made memory capacity metric uint64
* made memory capacity metric uint64
* removes use of old capacity field
* adds remove capacity call
* merges overwritten reconnect logic
* First pass of a NPM
Provide a service that talks to a (simulated) CP.
- Receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period
* add capacity update on schedule
* Send periodic capacity metrics
Sending capacity metrics to node pool manager
* splits grpc and api interfaces for capacity manager
* failure to advertise capacity shouldn't panic
* Add some instructions for starting DP/CP parts.
* Create the poolmanager server with TLS
* Use logrus
* Get npm compiling with cert fixups.
* Fix: pure runner should not start async processing
* brings runner, nulb and npm together
* Add field to acknowledgment to record slot allocation latency; fix a bug too
* iterating on pool manager locking issue
* raises timeout of placement retry loop
* Fix up NPM
Improve logging
Ensure that channels etc. are actually initialised in the structure
creation!
* Update the docs - runners GRPC port is 9120
* Bugfix: return runner pool accurately.
* Double locking
* Note purges as LBs stop talking to us
* Get the purging of old LBs working.
* Tweak: on restart, load runner set before making scaling decisions.
* more agent synchronization improvements
* Deal with the CP pulling out active hosts from under us.
* lock at lbgroup level
* Send request and receive response from runner.
* Add capacity check right before slot reservation
* Pass the full Call into the receive loop.
* Wait for the data from the runner before finishing
* force runner list refresh every time
* Don't init db and mq for pure runners
* adds shutdown of npm
* fixes broken log line
* Extract an interface for the Predictor used by the NPM
* purge drained connections from npm
* Refactor of the LB agent into the agent package
* removes capacitytest wip
* Fix undefined err issue
* updating README for poolmanager set up
* use retrying dial for lb to npm connections
* Rename lb_calls to lb_agent now that all functionality is there
* Use the right deadline and errors in LBAgent
* Make stream error flag per-call rather than global, otherwise the whole runner is damaged by one call dropping
* abstracting gRPCNodePool
* Make stream error flag per-call rather than global, otherwise the whole runner is damaged by one call dropping
* Add some init checks for LB and pure runner nodes
* adding some useful debug
* Fix default db and mq for lb node
* removes unreachable code, fixes typo
* Use datastore as logstore in API nodes.
This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.
* creates placement abstraction and moves capacity APIs to NodePool
* removed TODO, added logging
* Dial reconnections for LB <-> runners
LB grpc connections to runners are established using a backoff strategy
in the event of reconnections; this keeps the LB up even if one of the
runners goes away, and reconnects to it as soon as it is back.
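A sketch of that dial, using standard grpc-go options (the real code also wires the TLS credentials and its own keepalive policy):
```go
// Sketch: dial a runner with capped exponential backoff so the LB stays up
// while a runner is away and reattaches as soon as it comes back. Options are
// standard grpc-go; WithInsecure is a stand-in for the real TLS credentials.
package example

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func dialRunner(ctx context.Context, addr string) (*grpc.ClientConn, error) {
	return grpc.DialContext(ctx, addr,
		grpc.WithInsecure(),
		grpc.WithBackoffMaxDelay(30*time.Second), // exponential backoff, capped
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                30 * time.Second,
			Timeout:             10 * time.Second,
			PermitWithoutStream: true,
		}),
	)
}
```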
* Add a status call to the Runner protocol
Stub at the moment. To be used for things like draindown, health checks.
* Remove comment.
* makes assign/release capacity lockless
* Fix hanging issue in lb agent when connections drop
* Add the CH hash from fnlb
Select this with FN_PLACER=ch when launching the LB.
* small improvement for locking on reloadLBGmembership
* Stabilise the list of Runners returned by NodePool
The NodePoolManager makes some attempt to keep the list of runner nodes it advertises as
stable as possible. Let's preserve this effort on the client side. The main point of this
is to attempt to keep the same runner at the same index in the []Runner returned by
NodePool.Runners(lbgid); the ch algorithm likes it when this is the case.
* Factor out a generator function for the Runners so that mocks can be injected
* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model
* fixes bug with nil runners
* Initial work for mocking things in tests
* fix for anonymous goroutine error
* fixing lb_test to compile
* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case
* Make GRPC port configurable, fix weird handling of web port too
* unit test reload Members
* check on runner creation failure
* adding nullRunner in case of failure during runner creation
* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.
* make capacityEntry private
* Change the runner gRPC bind address.
This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.
* Add support for multiple fixed runners to pool mgr
* Added harness for dataplane system tests, minor refactors
* Add Dockerfiles for components, along with docs.
* Doc fix: second runner needs a different name.
* Let us have three runners in system tests, why not
* The first system test running a function in API/LB/PureRunner mode
* Add unit test for Advertiser logic
* Fix issue with Pure Runner not sending the last data frame
* use config in models.Call as a temporary mechanism to override lb group ID
* make gofmt happy
* Updates documentation for how to configure lb groups for an app/route
* small refactor unit test
* Factor NodePool into its own package
* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations
* New dataplane with static runnerpool (#813)
Added static node pool as default implementation
* moved nullRunner to grpc package
* remove duplication in README
* fix go vet issues
* Fix server initialisation in api tests
* Tiny logging changes in pool manager.
Using `WithError` instead of `Errorf` when appropriate.
* Change some log levels in the pure runner
* fixing readme
* moves multitenant compute documentation
* adds introduction to multitenant readme
* Proper triggering of system tests in makefile
* Fix instructions about starting up the components
* Change db file for system tests to avoid contention in parallel tests
* fixes revisions from merge
* Fix merge issue with handling of reserved slot
* renaming nulb to lb in the doc and images folder
* better TryExec sleep logic, clean shutdown
In this change we implement a better way to deal with the sleep inside
the for loop during the attempt to place a call.
We also added a clean way to shut down the connections with external
components when we shut down the server.
* System_test mysql port
set the mysql port for system tests to a different value from the one set for
the api tests, to avoid conflicts as they can run in parallel.
* change the container name for system-test
* removes flaky test TestRouteRunnerExecution pending resolution by issue #796
* amend remove_containers to remove newly added containers
* Rework capacity reservation logic at a higher level for now
* LB agent implements Submit rather than delegating.
* Fix go vet linting errors
* Changed a couple of error levels
* Fix formatting
* removes commented out test
* adds snappy to vendor directory
* updates Gopkg and vendor directories, removing snappy and adding siphash
* wait for db containers to come up before starting the tests
* make system tests start API node on 8085 to avoid port conflict with api_tests
* avoid port conflicts with api_test.sh which are run in parallel
* fixes postgres port conflict and issue with removal of old containers
* Remove spurious println
* update vendor directory, add go.opencensus.io
* update imports
* oops
* s/opentracing/opencensus/ & remove prometheus / zipkin stuff & remove old stats
* the dep train rides again
* fix gin build
* deps from last guy
* start in on the agent metrics
* she builds
* remove tags for now, cardinality error is fussing. subscribe instead of register
* update to patched version of opencensus to proceed for now TODO switch to a release
* meh
fix imports
* println debug the bad boys
* lace it with the tags
* update deps again
* fix all inconsistent cardinality errors
* add our own logger
* fix init
* fix oom measure
* remove bugged removal code
* fix s3 measures
* fix prom handler nil
* push down app listeners to a datastore
fnext.NewDatastore returns a datastore that wraps the appropriate methods for
AppListener in a Datastore implementation. this is more future proof than
needing to wrap every call of GetApp/UpdateApp/etc with the listeners; there
are a few places where this can happen, and it seems like the AppListener
behavior is supposed to wrap the datastore, not just the front end methods
surrounding CRUD ops on an app. the hairy case that came up was when fiddling
with the create/update route business.
this changes the FireBeforeApp* ops to be an AppListener implementation itself
rather than having the Server itself expose certain methods to fire off the
app listeners; now they're on the datastore itself, which the server can
return the instance of. a sketch of the wrapping is at the end of this entry.
small change to BeforeAppDelete/AfterAppDelete -- we were passing in a half
baked struct with only the name filled in and not filling in the fields
anywhere. this is mostly just misleading; we could fill in the app, but we
weren't and don't really want to. it's more to notify of an app deletion event
so that an extension can behave accordingly, instead of letting a user inspect
the app. i know of 3 extensions and the changes required to update them are
very small.
cleans up all the front end FireBefore/FireAfter implementations.
this seems potentially less flexible than the previous version if we do want
to allow users some way to call the database methods without using the
extensions, but that's exactly the trade off; as far as the AppListeners are
described, it seems heavily implied that this should be the case.
mostly a feeler, for the above reasons, but this was kind of odorous so just
went for it. we do need to lock in the extension api stuff.
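The sketch referenced above; interfaces are abbreviated and only the app-create path is shown:
```go
// Sketch: wrap a Datastore so AppListener callbacks fire around the CRUD ops,
// instead of every front end handler firing them by hand. Interfaces are
// abbreviated for illustration; the real fnext.NewDatastore covers the full surface.
package example

import "context"

type App struct{ ID, Name string }

type Datastore interface {
	InsertApp(ctx context.Context, app *App) (*App, error)
	RemoveApp(ctx context.Context, appName string) error
	// ... remaining methods elided
}

type AppListener interface {
	BeforeAppCreate(ctx context.Context, app *App) error
	AfterAppCreate(ctx context.Context, app *App) error
	BeforeAppDelete(ctx context.Context, app *App) error
	AfterAppDelete(ctx context.Context, app *App) error
}

type listenerDatastore struct {
	Datastore
	listeners []AppListener
}

func NewDatastore(ds Datastore, l ...AppListener) Datastore {
	return &listenerDatastore{Datastore: ds, listeners: l}
}

func (d *listenerDatastore) InsertApp(ctx context.Context, app *App) (*App, error) {
	for _, l := range d.listeners {
		if err := l.BeforeAppCreate(ctx, app); err != nil {
			return nil, err
		}
	}
	out, err := d.Datastore.InsertApp(ctx, app)
	if err != nil {
		return nil, err
	}
	for _, l := range d.listeners {
		if err := l.AfterAppCreate(ctx, out); err != nil {
			return nil, err
		}
	}
	return out, nil
}
```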
* hand em an app that's been smokin the reefer
i would split this commit in two if i were a good dev.
the pprof stuff is really useful and this only samples when called. this is
pretty standard go service stuff. expvar is cool, too.
the additional spans have turned up some interesting tidbits... gonna slide
em in.
1) in dind, prevent SIGINT from reaching dockerd. this kills
docker and prevents shutdown while the fn server is trying to stop.
2) as init process, always reap child processes.
* add FN_LOG_DEST for logs, fixup init
* FN_LOG_DEST can point to a remote logging place (papertrail, whatever)
* FN_LOG_PREFIX can add a prefix onto each log line sent to FN_LOG_DEST
the default remains stderr with no prefix. users need this to send to various
logging backends; though it could be done operationally, this is somewhat
simpler.
we were doing some configuration stuff inside of init() for some of the global
things. even though they're global, it's nice to keep them all in the normal
server init path.
we have had strange issues with the tracing setup; I tested the last repro of
this repeatedly and didn't have any luck reproducing it, though maybe it comes
back.
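A sketch of the kind of wiring this enables, assuming logrus plus its syslog hook; the URL handling is illustrative, not the exact parser:
```go
// Sketch: route logs to FN_LOG_DEST with an optional FN_LOG_PREFIX, defaulting
// to stderr. The URL handling below is illustrative only.
package example

import (
	"log/syslog"
	"net/url"
	"os"

	"github.com/sirupsen/logrus"
	logrus_syslog "github.com/sirupsen/logrus/hooks/syslog"
)

func setupLogDest() error {
	dest := os.Getenv("FN_LOG_DEST")
	prefix := os.Getenv("FN_LOG_PREFIX")
	if dest == "" {
		logrus.SetOutput(os.Stderr) // default: stderr, no prefix
		return nil
	}
	u, err := url.Parse(dest) // e.g. udp://logs.example.com:514
	if err != nil {
		return err
	}
	hook, err := logrus_syslog.NewSyslogHook(u.Scheme, u.Host, syslog.LOG_INFO, prefix)
	if err != nil {
		return err
	}
	logrus.AddHook(hook)
	return nil
}
```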
* add docs
* Ship call logs to the user as text/plain instead of JSON
* Fixing swagger doc
* c.String instead of c.JSON
* Make Logs API backward compatible
* Loop over accepted MIME types
* Bump swagger API version
* Fix client build script
the previous version was producing the following error: "couldn't find a swagger spec"
* Logs API regression test
* Write response body without buffering
* Switch JSON and text/plain cases
* Handle Accepted content types properly
* More solid response content type handling
* Write HTTP 406 with corresponding error body
* Remove unused import
* Use handleErrorResponse
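The shape of the negotiation as a gin sketch; the handler, helpers, and response struct are illustrative:
```go
// Sketch: honour the Accept header for call logs - text/plain by default,
// application/json for backward compatibility, 406 with an error body
// otherwise. Handler and helper names are made up for illustration.
package example

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

type callLogResponse struct {
	Message string `json:"message"`
	Log     string `json:"log"`
}

// stand-ins for the real helpers
func getCallLog(c *gin.Context) (string, error) { return "log for " + c.Param("call"), nil }
func handleErrorResponse(c *gin.Context, err error) {
	c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
}

func handleCallLogGet(c *gin.Context) {
	log, err := getCallLog(c)
	if err != nil {
		handleErrorResponse(c, err)
		return
	}
	switch c.NegotiateFormat(gin.MIMEPlain, gin.MIMEJSON) {
	case gin.MIMEPlain:
		// plain text body, no JSON envelope
		c.String(http.StatusOK, log)
	case gin.MIMEJSON:
		c.JSON(http.StatusOK, callLogResponse{Message: "Successfully loaded log", Log: log})
	default:
		c.JSON(http.StatusNotAcceptable, gin.H{"error": "unsupported Accept header"})
	}
}
```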
* Use retry func while trying to ping SQL datastore
- implements retry func specifically for SQL datastore ping
- fmt fixes
- using sqlx.Db.PingContext instead of sqlx.Db.Ping
- propagate context to SQL datastore
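A sketch of the retry loop, with made-up interval and attempt count:
```go
// Sketch: retry PingContext with a fixed interval so the server tolerates the
// database coming up after it does. Interval/attempts are illustrative.
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/jmoiron/sqlx"
)

func pingWithRetry(ctx context.Context, db *sqlx.DB, attempts int, interval time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = db.PingContext(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
	return fmt.Errorf("could not reach datastore after %d attempts: %v", attempts, err)
}
```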
* Rely on context from ServerOpt
* Consolidate log instances
* Cleanup
* Fix server usage in API tests
* allow user configured agent in full node
this should keep the old default behavior but allow users to pass in a
configured agent to configure the server themselves, without having to worry
about a russian agent being a british agent.
also closes any agent given to an api node.
closes #623
* don't close agent in runner test
this patch has no behavior changes; the changes are:
* server.Datastore() -> server.datastore
* server.MQ -> server.mq
* server.LogDB -> server.logstore
* server.Agent -> server.agent
these were at a minimum not uniform. further, it's probably better to force
configuration through initialization in `server.New` to ensure thread safety
of referencing if someone does want to modify these, as well as forcing things
into our initialization path and reducing the surface area of the Server
abstraction.
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent
* fixes up the tests, turns off /r/ on api nodes
* fix up defaults for runner nodes
* shove the runner async push code down into agent land to use client
* plumb up async-age
* return full call from async dequeue endpoint; since we're storing a whole
call in the MQ we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/ or other path not found
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now
* update tests
* Add caching to hybrid client
* Fix HTTP error handling in hybrid client.
The type switch was on the value rather than a pointer.
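The bug class in miniature; the real error type lives in the client/models code, the name below is made up:
```go
// Sketch of the fixed type switch: the client returns *apiError, so switching
// on the value type silently never matched. Error type name is illustrative.
package example

import "fmt"

type apiError struct {
	Status int
	Msg    string
}

func (e *apiError) Error() string { return fmt.Sprintf("%d: %s", e.Status, e.Msg) }

func classify(err error) string {
	switch e := err.(type) {
	case *apiError: // must be the pointer type; `case apiError:` never matches
		if e.Status == 404 {
			return "not found"
		}
		return "api error"
	default:
		return "unknown"
	}
}
```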
* Gofmt.
* Better caching with a nice caching wrapper
* Remove datastore cache which is now unused
* Don't need to manually wrap interface methods
* Go fmt
* so it begins
* add clarification to /dequeue, change response to list to future proof
* Specify that runner endpoints are also under /v1
* Add a flag to choose operation mode (node type).
This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).
The additional modes are:
* API - the full API is available, but no functions are executed by the
node. Async calls are placed into a message queue, and synchronous
calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
synchronous invocation requests are supported, but asynchronous
requests are placed onto the message queue, so might be handled by
another runner.
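A sketch of the mode selection described above; the constant and function names are illustrative (the tree exposes its own ServerNodeType):
```go
// Sketch: pick the server node type from FN_NODE_TYPE; the default is the
// full node. Names below are illustrative, not the exported ones.
package example

import (
	"os"
	"strings"
)

type ServerNodeType int

const (
	ServerTypeFull   ServerNodeType = iota // full API + sync/async runner
	ServerTypeAPI                          // API only; async enqueued, sync rejected
	ServerTypeRunner                       // invoke routes only; async re-enqueued
)

func nodeTypeFromEnv() ServerNodeType {
	switch strings.ToLower(os.Getenv("FN_NODE_TYPE")) {
	case "api":
		return ServerTypeAPI
	case "runner":
		return ServerTypeRunner
	default:
		return ServerTypeFull
	}
}
```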
* Add agent type and checks on Submit
* Sketch of a factored out data access abstraction for api/runner agents
* Fix tests, adding node/agent types to constructors
* Add tests for full, API, and runner server modes.
* Added atomic UpdateCall to datastore
* adds in server side endpoints
* Made ServerNodeType public because tests use it
* Made ServerNodeType public because tests use it
* fix test build
* add hybrid runner client
pretty simple go api client that covers surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so put it in `/clients/hybrid` but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next and then wrap this up
in the DataAccessLayer stuff.
* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debuggo action
* minor fixes
* meh
* Extend extension mechanism to support per-route API extensions
* Tidy up comment
* Remove print statement
* Minor improvement to README
* Avoid calling c.Request.Context() twice
* add minio-go dep, update deps
* add minio s3 client
minio has an s3 compatible api and is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hair ball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local use.
* adds an 's3' package for an s3 compatible log storage api, for use with
storing logs from calls and retrieving them.
* removes the DELETE /v1/apps/:app/calls/:call/log endpoint
* removes the internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs; I have another branch lined up to make a plain
text log API and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* adds docs for how to run minio and point fn at it
some notes: notably we aren't cleaning up these logs. there is a ticket
already to make a Mr. Clean who wakes up periodically and nukes old stuff, so
am punting on any api design around some kind of TTL deletion of logs. there
are a lot of options really for Mr. Clean; we can notably defer to him when
apps are deleted, too, so that app deletion is fast and then Mr. Clean will
just clean them up later (seems like a good option).
have not tested against BMC object store, which has an s3 compatible API, but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.
closes #481, closes #473
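A sketch of the storage calls behind the 's3' package, against a minio-go style client; the bucket/key layout and exact client signatures (which vary by minio-go version) are assumptions:
```go
// Sketch: store and fetch call logs from an s3-compatible store via minio-go.
// Bucket/key layout and the exact client signatures are assumptions.
package example

import (
	"context"
	"fmt"
	"io"

	"github.com/minio/minio-go"
)

type s3LogStore struct {
	client *minio.Client
	bucket string
}

func (s *s3LogStore) key(appID, callID string) string {
	return fmt.Sprintf("%s/%s.log", appID, callID)
}

func (s *s3LogStore) InsertLog(ctx context.Context, appID, callID string, log io.Reader) error {
	// size -1 streams the body; PutObjectOptions sets the content type
	_, err := s.client.PutObject(s.bucket, s.key(appID, callID), log, -1,
		minio.PutObjectOptions{ContentType: "text/plain"})
	return err
}

func (s *s3LogStore) GetLog(ctx context.Context, appID, callID string) (io.Reader, error) {
	// *minio.Object is an io.Reader, matching the new GetLog API
	return s.client.GetObject(s.bucket, s.key(appID, callID), minio.GetObjectOptions{})
}
```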
* add log not found error to minio land