1) in theory it may be possible for an exited container to
requeue a slot; close this gap by always setting a fatal error
on the slot once its container has exited.
2) when a client request times out or is cancelled (client
disconnect, etc.) the slot must not be requeued, and the
container should terminate to avoid accidentally mixing the
previous response into the next one. (a sketch of the rule follows)
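A minimal sketch of that requeue rule, with invented names (the real slot machinery is more involved):

```go
import "context"

// hypothetical slot shape; a non-nil fatalErr poisons the slot
type slot struct {
	ctx      context.Context
	fatalErr error
	idle     chan *slot
}

func (s *slot) release(err error, containerExited bool) {
	if err != nil || containerExited || s.ctx.Err() != nil {
		// exited container or timed-out/cancelled request: mark the
		// slot fatal and let teardown run — never requeue, so the next
		// call cannot receive this call's leftovers
		s.fatalErr = err
		return
	}
	s.idle <- s // healthy path: requeue the slot for reuse
}
```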
* fn: re-enable previously failing test
* fn: fortifying the stderr output
Modified limitWriter to discard excess data instead of
returning an error; this keeps the stderr/stdout pipes
flowing, avoiding head-of-line blocking or data corruption
in the container's stdout/stderr output stream. (sketch below)
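Roughly what that limitWriter change looks like (a sketch, not the exact code):

```go
import "io"

// limitWriter forwards up to `remaining` bytes and silently discards the
// rest, reporting success so the producer's pipe never stalls at the limit.
type limitWriter struct {
	w         io.Writer
	remaining int
}

func (l *limitWriter) Write(p []byte) (int, error) {
	n := len(p)
	if l.remaining > 0 {
		k := n
		if k > l.remaining {
			k = l.remaining
		}
		if _, err := l.w.Write(p[:k]); err != nil {
			return 0, err
		}
		l.remaining -= k
	}
	return n, nil // claim the full write; excess bytes are dropped
}
```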
* Initial stab at the protocol
* initial protocol sketch for node pool manager
* Added http header frame as a message
* Force the use of WithAgent variants when creating a server
* adds grpc models for node pool manager plus go deps
* Naming things is really hard
* Merge (and optionally purge) details received by the NPM
* WIP: starting to add the runner-side functionality of the new data plane
* WIP: Basic startup of grpc server for pure runner. Needs proper certs.
* Go fmt
* Initial agent for LB nodes.
* Agent implementation for LB nodes.
* Pass keys and certs to LB node agent.
* Remove accidentally left reference to env var.
* Add env variables for certificate files
* stub out the capacity and group membership server channels
* implement server-side runner manager service
* removes unused variable
* fixes build error
* splits up GetCall and GetLBGroupId
* Change LB node agent to use TLS connection.
* Encode call model as JSON to send to runner node.
* Use hybrid client in LB node agent.
This should provide access to get app and route information for the call
from an API node.
* More error handling on the pure runner side
* Tentative fix for GetCall problem: set deadlines correctly when reserving slot
* Connect loop for LB agent to runner nodes.
* Extract runner connection function in LB agent.
* drops committed capacity counts
* Bugfix - end state tracker only in submit
* Do logs properly
* adds first pass of tracking capacity metrics in agent
* made memory capacity metric uint64
* removes use of old capacity field
* adds remove capacity call
* merges overwritten reconnect logic
* First pass of a NPM
Provide a service that talks to a (simulated) CP.
- receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period
* add capacity update on schedule
* Send periodic capacity metrics
Sending capacity metrics to node pool manager
* splits grpc and api interfaces for capacity manager
* failure to advertise capacity shouldn't panic
* Add some instructions for starting DP/CP parts.
* Create the poolmanager server with TLS
* Use logrus
* Get npm compiling with cert fixups.
* Fix: pure runner should not start async processing
* brings runner, nulb and npm together
* Add field to acknowledgment to record slot allocation latency; fix a bug too
* iterating on pool manager locking issue
* raises timeout of placement retry loop
* Fix up NPM
Improve logging
Ensure that channels etc. are actually initialised in the structure
creation!
* Update the docs - runners GRPC port is 9120
* Bugfix: return runner pool accurately.
* Double locking
* Note purges as LBs stop talking to us
* Get the purging of old LBs working.
* Tweak: on restart, load runner set before making scaling decisions.
* more agent synchronization improvements
* Deal with the CP pulling out active hosts from under us.
* lock at lbgroup level
* Send request and receive response from runner.
* Add capacity check right before slot reservation
* Pass the full Call into the receive loop.
* Wait for the data from the runner before finishing
* force runner list refresh every time
* Don't init db and mq for pure runners
* adds shutdown of npm
* fixes broken log line
* Extract an interface for the Predictor used by the NPM
* purge drained connections from npm
* Refactor of the LB agent into the agent package
* removes capacitytest wip
* Fix undefined err issue
* updating README for poolmanager set up
* use retrying dial for lb to npm connections
* Rename lb_calls to lb_agent now that all functionality is there
* Use the right deadline and errors in LBAgent
* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping
* abstracting gRPCNodePool
* Add some init checks for LB and pure runner nodes
* adding some useful debug
* Fix default db and mq for lb node
* removes unreachable code, fixes typo
* Use datastore as logstore in API nodes.
This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.
* creates placement abstraction and moves capacity APIs to NodePool
* removed TODO, added logging
* Dial reconnections for LB <-> runners
LB gRPC connections to runners are established using a backoff
strategy on reconnection; this keeps the LB up even when one of
the runners goes away, and reconnects to it as soon as it is back.
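Something like the following (helper name hypothetical; `WithBackoffMaxDelay` was the dial option of that era):

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// dialRunner sketches the retrying dial: gRPC redials transparently with
// exponential backoff, so a runner that goes away is picked up again as
// soon as it comes back.
func dialRunner(addr string, creds credentials.TransportCredentials) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(creds),
		grpc.WithBackoffMaxDelay(30*time.Second), // cap the backoff between redials
	)
}
```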
* Add a status call to the Runner protocol
Stub at the moment. To be used for things like draindown, health checks.
* Remove comment.
* makes assign/release capacity lockless
* Fix hanging issue in lb agent when connections drop
* Add the CH hash from fnlb
Select this with FN_PLACER=ch when launching the LB.
* small improvement for locking on reloadLBGmembership
* Stabilise the list of Runners returned by NodePool
The NodePoolManager makes some attempt to keep the list of runner nodes advertised as
stable as possible. Let's preserve this effort on the client side. The main point of this
is to attempt to keep the same runner at the same index in the []Runner returned by
NodePool.Runners(lbgid); the ch algorithm likes it when this is the case. (sketch below)
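One simple way to get that stability (purely illustrative; the real reconciliation differs):

```go
import "sort"

// mergeRunners keeps surviving runners in their previous order and appends
// newcomers at the end, so existing index positions move as little as
// possible for the ch placer.
func mergeRunners(old []string, latest map[string]bool) []string {
	out := make([]string, 0, len(latest))
	seen := make(map[string]bool, len(old))
	for _, r := range old {
		if latest[r] { // still advertised: keep its relative position
			out = append(out, r)
			seen[r] = true
		}
	}
	var added []string
	for r := range latest {
		if !seen[r] {
			added = append(added, r)
		}
	}
	sort.Strings(added) // deterministic order for the newcomers
	return append(out, added...)
}
```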
* Factor out a generator function for the Runners so that mocks can be injected
* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model
* fixes bug with nil runners
* Initial work for mocking things in tests
* fix for anonymous goroutine error
* fixing lb_test to compile
* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case
* Make GRPC port configurable, fix weird handling of web port too
* unit test reload Members
* check on runner creation failure
* adding nullRunner in case of failure during runner creation
* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.
* make capacityEntry private
* Change the runner gRPC bind address.
This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.
* Add support for multiple fixed runners to pool mgr
* Added harness for dataplane system tests, minor refactors
* Add Dockerfiles for components, along with docs.
* Doc fix: second runner needs a different name.
* Let us have three runners in system tests, why not
* The first system test running a function in API/LB/PureRunner mode
* Add unit test for Advertiser logic
* Fix issue with Pure Runner not sending the last data frame
* use config in models.Call as a temporary mechanism to override lb group ID
* make gofmt happy
* Updates documentation for how to configure lb groups for an app/route
* small refactor unit test
* Factor NodePool into its own package
* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations
* New dataplane with static runnerpool (#813)
Added static node pool as default implementation
* moved nullRunner to grpc package
* remove duplication in README
* fix go vet issues
* Fix server initialisation in api tests
* Tiny logging changes in pool manager.
Using `WithError` instead of `Errorf` when appropriate.
* Change some log levels in the pure runner
* fixing readme
* moves multitenant compute documentation
* adds introduction to multitenant readme
* Proper triggering of system tests in makefile
* Fix instructions about starting up the components
* Change db file for system tests to avoid contention in parallel tests
* fixes revisions from merge
* Fix merge issue with handling of reserved slot
* renaming nulb to lb in the doc and images folder
* better TryExec sleep logic, clean shutdown
In this change we implement a better way to handle the sleep inside
the for loop while attempting to place a call.
We also added a clean way to shut down the connections to external
components when the server shuts down.
* System_test mysql port
set the mysql port for system tests to a different value from the one
used by the api tests, to avoid conflicts since they can run in parallel.
* change the container name for system-test
* removes flaky test TestRouteRunnerExecution pending resolution by issue #796
* amend remove_containers to remove newly added containers
* Rework capacity reservation logic at a higher level for now
* LB agent implements Submit rather than delegating.
* Fix go vet linting errors
* Changed a couple of error levels
* Fix formatting
* removes commented out test
* adds snappy to vendor directory
* updates Gopkg and vendor directories, removing snappy and adding siphash
* wait for db containers to come up before starting the tests
* make system tests start API node on 8085 to avoid port conflict with api_tests
* avoid port conflicts with api_test.sh which are run in parallel
* fixes postgres port conflict and issue with removal of old containers
* Remove spurious println
*) I/O protocol parse issues should shut down the container, as the container
goes into an inconsistent state between calls (e.g. the next call may receive
the previous call's leftovers.)
*) Move ghost read/write code into io_utils in common.
*) Clean unused error from docker Wait()
*) We can catch one case in JSON: if there's remaining unparsed data in the
decoder buffer, we can shut the container down. (sketch below)
*) stdout/stderr when the container is not handling a request are now blocked if the freezer is also enabled.
*) if a fatal error is set for a slot, we do not requeue it and proceed to shutdown
*) added a test function for a few cases with freezer strict behavior
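The JSON leftover check might look like this (a sketch; the real error plumbing differs):

```go
import (
	"encoding/json"
	"errors"
	"io"
)

// decodeOne decodes exactly one JSON response from the container's stdout.
// Any value still buffered afterwards means the stream is out of sync, so
// the caller should mark the slot fatal and shut the container down.
func decodeOne(r io.Reader, resp interface{}) error {
	dec := json.NewDecoder(r)
	if err := dec.Decode(resp); err != nil {
		return err // parse issue: container state is suspect
	}
	if dec.More() {
		return errors.New("unparsed data left in container output")
	}
	return nil
}
```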
* update vendor directory, add go.opencensus.io
* update imports
* oops
* s/opentracing/opencensus/ & remove prometheus / zipkin stuff & remove old stats
* the dep train rides again
* fix gin build
* deps from last guy
* start in on the agent metrics
* she builds
* remove tags for now, cardinality error is fussing. subscribe instead of register
* update to patched version of opencensus to proceed for now TODO switch to a release
* meh
fix imports
* println debug the bad boys
* lace it with the tags
* update deps again
* fix all inconsistent cardinality errors
* add our own logger
* fix init
* fix oom measure
* remove bugged removal code
* fix s3 measures
* fix prom handler nil
*) Limit HTTP response body or JSON response size to FN_MAX_RESPONSE_SIZE (default unlimited)
*) If the limit is exceeded, a 502 is returned with 'body too large' in the error message
This is prep-work for more tests to come. (sketch of the limit check below)
*) remove http response -1, as this will break in go 1.10
*) add docker id & hostname to fn-test-utils (will be useful
to check/test which instance a request landed on.)
*) add container start/stop logs in fn-test-utils, to detect
if/how we miss logs during container start & end.
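A plausible shape for the size check (illustrative only):

```go
import (
	"errors"
	"io"
)

// copyLimited copies at most max bytes; if an extra byte shows up, the
// response exceeded FN_MAX_RESPONSE_SIZE and the caller replies 502 with
// "body too large".
func copyLimited(dst io.Writer, src io.Reader, max int64) error {
	n, err := io.CopyN(dst, src, max+1)
	if err != nil && err != io.EOF {
		return err
	}
	if n > max {
		return errors.New("body too large")
	}
	return nil
}
```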
* push down app listeners to a datastore
fnext.NewDatastore returns a datastore that wraps the appropriate methods for
AppListener in a Datastore implementation. this is more future proof than
needing to wrap every call of GetApp/UpdateApp/etc with the listeners, there
are a few places where this can happen and it seems like the AppListener
behavior is supposed to wrap the datastore, not just the front end methods
surrounding CRUD ops on an app. the hairy case that came up was when fiddling
with the create/update route business.
this changes the FireBeforeApp* ops to be an AppListener implementation itself
rather than having the Server itself expose certain methods to fire off the
app listeners, now they're on the datastore itself, which the server can
return the instance of.
small change to BeforeAppDelete/AfterAppDelete -- we were passing in a half
baked struct with only the name filled in and not filling in the fields
anywhere. this is mostly just misleading, we could fill in the app, but we
weren't and don't really want to, it's more to notify of an app deletion event
so that an extension can behave accordingly instead of letting a user inspect
the app. i know of 3 extensions and the changes required to update are very
small.
cleans up all the front end implementations FireBefore/FireAfter.
this seems potentially less flexible than previous version if we do want to
allow users some way to call the database methods without using the
extensions, but that's exactly the trade off, as far as the AppListener's are
described it seems heavily implied that this should be the case.
mostly a feeler, for the above reasons, but this was kind of odorous so just
went for it. we do need to lock in the extension api stuff.
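The wrapper described above probably looks something like this (interfaces trimmed to one method each; real names may differ):

```go
import "context"

type App struct{ Name string }

type AppListener interface {
	BeforeAppCreate(ctx context.Context, app *App) error
	AfterAppCreate(ctx context.Context, app *App) error
}

type Datastore interface {
	InsertApp(ctx context.Context, app *App) (*App, error)
}

// listenerDatastore decorates a Datastore so listener hooks fire on every
// path through it, not just the front-end CRUD handlers.
type listenerDatastore struct {
	Datastore
	listeners []AppListener
}

func (ds *listenerDatastore) InsertApp(ctx context.Context, app *App) (*App, error) {
	for _, l := range ds.listeners {
		if err := l.BeforeAppCreate(ctx, app); err != nil {
			return nil, err // a listener can veto the insert
		}
	}
	out, err := ds.Datastore.InsertApp(ctx, app)
	if err != nil {
		return nil, err
	}
	for _, l := range ds.listeners {
		if err := l.AfterAppCreate(ctx, out); err != nil {
			return nil, err
		}
	}
	return out, nil
}
```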
* hand em an app that's been smokin the reefer
* fix up response headers
* stops defaulting to application/json. this was something awful, go stdlib has
a func to detect content type. sadly, it doesn't contain json, but we can do a
pretty good job by checking for an opening '{'... there are other fish in the
sea, and now we handle them nicely instead of saying it's a json [when it's
not]. a test confirms this, there should be no breakage for any routes
returning a json blob that were relying on us defaulting to this format
(granted that they start with a '{').
* buffers output now to a buffer for all protocol types (default is no longer
left out in the cold). use a little response writer so that we can still let
users write headers from their functions. this is useful for content type
detection instead of having to do it in multiple places.
* plumbs the little content type bit into fn-test-util just so we can test it,
we don't want to put this in the fdk since it's redundant.
I am totally in favor of getting rid of content type from the top level json
blurb. it's redundant, at best, and can have confusing behaviors if a user
uses both the headers and the content_type field (we override with the latter,
now). it's client protocol specific to http to a certain degree, other
protocols may use this concept but have their own way to set it (like http
does in headers..). I realize that it mostly exists because it's somewhat gross
to have to index a list from the headers in certain languages more than
others, but with the ^ behavior, is it really worth it?
closes #782
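The '{'-sniffing fallback described above could be as simple as (illustrative):

```go
import (
	"bytes"
	"net/http"
	"unicode"
)

// detectContentType leans on the stdlib sniffer but special-cases JSON,
// which http.DetectContentType has no rule for.
func detectContentType(buf []byte) string {
	trimmed := bytes.TrimLeftFunc(buf, unicode.IsSpace)
	if len(trimmed) > 0 && trimmed[0] == '{' {
		return "application/json"
	}
	return http.DetectContentType(buf)
}
```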
* reset idle timeouts back
* move json prefix to stack / next to use
i would split this commit in two if i were a good dev.
the pprof stuff is really useful and this only samples when called. this is
pretty standard go service stuff. expvar is cool, too.
the additional spannos have turned up some interesting tidbits... gonna slide
em in
1) in dind, prevent SIGINT from reaching dockerd; it kills
docker and prevents shutdown while the fn server is trying to stop.
2) as init process, always reap child processes.
previously we would retry infinitely up to the context with some backoff in
between. for hot functions, since we don't set any deadline on pulling or
creating the image, this means it would retry forever without making any
progress if e.g. the registry is inaccessible or any other "temporary" error
that isn't actually temporary occurs. this adds a hard cap of 10 retries, which
gives approximately 13s if the ops take no time, still respecting the enclosing
context deadline. (sketch below)
the case where this was coming up is now tested for and was otherwise
confusing for users to debug; now it spits out an ECONNREFUSED with the
address of the registry, which should help users debug without having to poke
around fn logs (though I don't like this as an excuse, not all users will be
operators in the near future, and this one makes sense)
closes #727
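The capped retry loop, roughly (constants illustrative):

```go
import (
	"context"
	"time"
)

// withRetries runs op up to 10 times with doubling backoff, bailing out
// early if the enclosing context deadline expires.
func withRetries(ctx context.Context, op func(context.Context) error) error {
	backoff := 100 * time.Millisecond
	var err error
	for attempt := 0; attempt < 10; attempt++ {
		if err = op(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // respect the context deadline
		case <-time.After(backoff):
			backoff *= 2
		}
	}
	return err
}
```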
* return bad function http resp error
this was being thrown into the fn server logs, but it's relatively easy to get
this to crop up if a function user forgets that they left a `println` lying
around that gets written to stdout: it garbles the http (or json, as the case
may be) output and they just see 'internal server error'. for certain clients i
could see that we really do want to keep this as 'internal server error', but for
things like e.g. docker image not authorized we're showing that in the
response, so this seems apt.
json likely needs the same treatment, will file a bug.
as always, my error messages are rarely helpful enough, help me please :)
closes #355
* add formatting directive
* fix up http error
* output bad jasons to user
closes #729
woo
* push validate/defaults into datastore
we weren't setting a timestamp in route insert when we needed to create an app
there. that whole thing isn't atomic, but this fixes the timestamp issue.
closes #738
seems like we should do similar with the FireBeforeX stuff too.
* fix tests
* app name validation was buggy, an upper-cased letter failed. now it doesn't.
uses unicode now. (see the sketch after this list)
* removes duplicate errors for datastore and models validation that were used
interchangeably but weren't actually identical.
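A unicode-aware validation might look like this (character rules invented for illustration; the real set may differ):

```go
import "unicode"

// validName accepts letters from any script plus digits and a couple of
// separators, instead of an ASCII-lowercase-only pattern.
func validName(s string) bool {
	if s == "" {
		return false
	}
	for _, r := range s {
		if !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '_' && r != '-' {
			return false
		}
	}
	return true
}
```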
* Change basic stats to use opentracing rather than Prometheus API directly
* Just ran gofmt
* Extract opentracing access for metrics to common/metrics.go
* Replace quoted strings with constants where possible
* add FN_LOG_DEST for logs, fixup init
* FN_LOG_DEST can point to a remote logging place (papertrail, whatever)
* FN_LOG_PREFIX can add a prefix onto each log line sent to FN_LOG_DEST
default remains stderr with no prefix. users need this to send logs to various
logging backends; though it could be done operationally, this is somewhat
simpler. (sketch below)
we were doing some configuration stuff inside of init() for some of the global
things. even though they're global, it's nice to keep them all in the normal
server init path.
we have had strange issues with the tracing setup, I tested the last repro of
this repeatedly and didn't have any luck reproducing it, though maybe it comes
back.
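The wiring might look roughly like this (URL-style destinations assumed, e.g. `udp://host:port`; prefix handling elided):

```go
import (
	"net"
	"net/url"
	"os"

	"github.com/sirupsen/logrus"
)

// setupLogDest points logrus at FN_LOG_DEST, keeping stderr when unset or
// unreachable. FN_LOG_PREFIX would be applied by wrapping the writer.
func setupLogDest() {
	dest := os.Getenv("FN_LOG_DEST")
	if dest == "" {
		return // default: stderr, no prefix
	}
	u, err := url.Parse(dest)
	if err != nil {
		logrus.WithError(err).Error("bad FN_LOG_DEST, keeping stderr")
		return
	}
	conn, err := net.Dial(u.Scheme, u.Host)
	if err != nil {
		logrus.WithError(err).Error("cannot reach FN_LOG_DEST, keeping stderr")
		return
	}
	logrus.SetOutput(conn)
}
```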
* add docs
possible breakages:
* `FN_HEADER` on cold are no longer `s/-/_/` -- this is so that cold functions
can rebuild the headers as they were when they came in on the request (fdks,
specifically), there's no guarantee that a reversal `s/_/-/` is the original
header on the request.
* app and route config no longer `s/-/_/` -- it seemed really weird to rewrite
the users config vars on these. should just pass them exactly as is to env.
* headers no longer contain the environment vars (previously, base config; app
config, route config, `FN_PATH`, etc.), these are still available in the
environment.
this gets rid of a lot of the code around headers, specifically the stuff that
shoved everything into headers when constructing a call to begin with. now we
just store the headers separately and add a few things, like FN_CALL_ID to
them, and build a separate 'config' now to store on the call. I thought
'config' was more aptly named, 'env' was confusing, though now 'config' is
exactly what 'base_vars' was, which is only the things being put into the env.
we weren't storing this field in the db, this doesn't break unless there are
messages in a queue from another version, anyway, don't think we're there and
don't expect any breakage for anybody with field name changes.
this makes the configuration stuff pretty straightforward: there are just two
separate buckets of things, and cold just needs to mash them together into the
env, and otherwise hot containers just need to put 'config' in the env, and then
hot format can shove 'headers' in however they'd like. this seems better than
my last idea about making this easier but worse (RIP).
this means:
* headers no longer contain all vars, the set of base vars can only be found
in the environment.
* headers is only the headers from request + call_id, deadline, method, url
* for cold, we simply add the headers to the environment, prepending
`FN_HEADER_` to them, BUT NOT upper casing or `s/-/_/`
* fixes issue where async hot functions would end up with `Fn_header_`
prefixed headers
* removes idea of 'base' vars and 'env'. this was a strange concept. now we just have
'config' which was base vars, and headers, which was base_env+headers; i.e.
they are disjoint now.
* casing for all headers will lean to be `My-Header` style, which should help
with consistency. notable exceptions for cold only are FN_CALL_ID, FN_METHOD,
and FN_REQUEST_URL -- this is simply to avoid breakage, in either hot format
they appear as `Fn_call_id` still.
* removes FN_PARAM stuff
* updated doc with behavior
weird things left:
`Fn_call_id` e.g. isn't a correctly formatted http header, it should likely be
`Fn-Call-Id` but I wanted to live to fight another day on this one, it would
add some breakage.
examples to be posted of each format below
closes #329
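For the cold path above, the header-to-env step is essentially (sketch):

```go
import (
	"net/http"
	"strings"
)

// coldHeaderEnv prepends FN_HEADER_ to each header name, preserving the
// original casing and dashes (no upper-casing, no s/-/_/).
func coldHeaderEnv(headers http.Header) map[string]string {
	env := make(map[string]string, len(headers))
	for k, vs := range headers {
		env["FN_HEADER_"+k] = strings.Join(vs, ", ")
	}
	return env
}
```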
*) Updated fn-test-utils to latest fdk-go
*) Added hot-json to runner tests
*) Removed anon function in FromRequest which had
a side effect of setting req.URL.Host. This is now more
explicit and eliminates some corresponding logic in
protocol http.
*) in gin, the http request's RequestURI is not set; removed
code that references it. (use Call.URL instead)
* Ship call logs to the user as text/plain instead of JSON
* Fixing swagger doc
* c.String instead of c.JSON
* Make Logs API backward compatible
* Loop over accepted MIME types
* Bump swagger API version
* Fix client build script
the previous version was producing the error "couldn't find a swagger spec"
* Logs API regression test
* Write response body without buffering
* Switch JSON and text/plain cases
* Handle Accepted content types properly
* More solid response content type handling
* Write HTTP 406 with corresponding error body
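Taken together, the Accept handling in these commits is roughly (handler shape hypothetical):

```go
import (
	"encoding/json"
	"io"
	"net/http"
	"strings"
)

// writeLogs walks the accepted MIME types in order, writing the first
// supported representation, and falls back to a 406 with an error body.
func writeLogs(w http.ResponseWriter, r *http.Request, log string) {
	for _, accept := range strings.Split(r.Header.Get("Accept"), ",") {
		switch strings.TrimSpace(strings.SplitN(accept, ";", 2)[0]) {
		case "text/plain", "*/*", "":
			w.Header().Set("Content-Type", "text/plain")
			io.WriteString(w, log)
			return
		case "application/json":
			w.Header().Set("Content-Type", "application/json")
			json.NewEncoder(w).Encode(map[string]string{"log": log})
			return
		}
	}
	http.Error(w, `{"error":"content type not supported"}`, http.StatusNotAcceptable)
}
```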
* Remove unused import
* Use handleErrorResponse
* Use retry func while trying to ping SQL datastore
- implements retry func specifically for SQL datastore ping
- fmt fixes
- using sqlx.Db.PingContext instead of sqlx.Db.Ping
- propagate context to SQL datastore
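The ping retry is probably little more than (interval illustrative):

```go
import (
	"context"
	"time"

	"github.com/jmoiron/sqlx"
)

// pingWithRetry keeps pinging the datastore until it answers or the
// context is cancelled, so startup tolerates a slow-to-boot database.
func pingWithRetry(ctx context.Context, db *sqlx.DB) error {
	var err error
	for {
		if err = db.PingContext(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return err
		case <-time.After(time.Second):
		}
	}
}
```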
* Rely on context from ServerOpt
* Consolidate log instances
* Cleanup
* Fix server usage in API tests
* allow user configured agent in full node
this should keep the old default behavior but allow users to pass in a
configured agent to configure the server themselves, without having to worry
about a russian agent being a british agent.
also closes any agent given to an api node.
closes #623
* don't close agent in runner test
* route updated_at
* add app created at, fix some route updated_at bugs
* add app updated_at
TODO need to add tests through front end
TODO for validation we don't really want to use the validate wrapper since
it's a programmer error and not a user error, hopefully tests block this.
* add tests for timestamps to exist / change on apps&routes
* route equals at done, fix tests wit dis
* fix up the equals sugar
* add swagger
* fix rebase
* precisely allocate maps in clone
* vetted
* meh
* fix api tests
this patch has no behavior changes, changes are:
* server.Datastore() -> server.datastore
* server.MQ -> server.mq
* server.LogDB -> server.logstore
* server.Agent -> server.agent
these were at a minimum not uniform. further, it's probably better to force
configuration through initialization in `server.New` to ensure thread safety
of referencing if someone does want to modify these as well as forcing things
into our initialization path and reducing the surface area of the Server
abstraction.
* fn: adding hot container timeout and huge memory cases
*) switching TestRouteRunnerTimeout to fn-test-utils to handle
both hot and cold.
*) in server_test, added content-length handling since protocol http
does not create content-length if it is not present.
* fn: for async hot requests ensure/fix content-length/type
* fn: added tests for FromModel for content type/length
* fn: restrict the content-length fix to async in FromModel()
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent
* fixes up the tests, turns off /r/ on api nodes
* fix up defaults for runner nodes
* shove the runner async push code down into agent land to use client
* plumb up async-age
* return full call from async dequeue endpoint, since we're storing a whole
call in the MQ we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/, or any other path
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now
* update tests
* Add caching to hybrid client
* Fix HTTP error handling in hybrid client.
The type switch was on the value rather than a pointer.
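The bug in miniature (types invented for illustration):

```go
import "net/http"

type apiError struct{ code int }

func (e *apiError) Error() string { return "api error" }

// classify shows the fix: the errors are created as pointers, so the type
// switch must name the pointer type — `case apiError:` never matched.
func classify(err error) int {
	switch e := err.(type) {
	case *apiError:
		return e.code
	default:
		return http.StatusInternalServerError
	}
}
```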
* Gofmt.
* Better caching with a nice caching wrapper
* Remove datastore cache which is now unused
* Don't need to manually wrap interface methods
* Go fmt
* so it begins
* add clarification to /dequeue, change response to list to future proof
* Specify that runner endpoints are also under /v1
* Add a flag to choose operation mode (node type).
This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).
The additional modes are:
* API - the full API is available, but no functions are executed by the
node. Async calls are placed into a message queue, and synchronous
calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
synchronous invocation requests are supported, but asynchronous
requests are placed onto the message queue, so might be handled by
another runner.
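The mode selection reduces to a switch on the env var (string values assumed; the real constants live in the server package):

```go
import "os"

// nodeType returns the operation mode: "api", "runner", or the default
// full node that serves the API plus sync and async runners.
func nodeType() string {
	switch t := os.Getenv("FN_NODE_TYPE"); t {
	case "api", "runner":
		return t
	default:
		return "full"
	}
}
```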
* Add agent type and checks on Submit
* Sketch of a factored out data access abstraction for api/runner agents
* Fix tests, adding node/agent types to constructors
* Add tests for full, API, and runner server modes.
* Added atomic UpdateCall to datastore
* adds in server side endpoints
* Made ServerNodeType public because tests use it
* Made ServerNodeType public because tests use it
* fix test build
* add hybrid runner client
pretty simple go api client that covers surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so put it in `/clients/hybrid` but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next and then wrap this up
in the DataAccessLayer stuff.
* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debuggo action
* minor fixes
* meh