This implements a "detached" mechanism to get an ack from the runner
once it actually starts to run a function. In this scenario, the response
returned is a 202 if we placed the function within a specific
time frame. If we hit errors or fail to place the fn in time, we
return different errors.
* get rid of old format stuff, utils usage, fix up for fdk2.0 interface
* pure agent format removal, TODO remove format field, fix up all tests
* WIP
* fix agent tests
* start rolling through server tests
* tests compile, some failures
* remove json / content type detection on invoke/httptrigger, fix up tests
* remove hello, fixup system tests
the status checker test just hangs; it verifies a failure case, so the
assertion passes, but the test run itself never completes
* fix migration
* meh
* silence dbhelper warnings about dbhelpers not being used
* move fail status into main thread
* fix status call to have FN_LISTENER
also turns off the stdout/stderr blocking between calls, because it is
impossible to debug without that (absent syslog). Now that stdout and stderr
go to the same place (either host stderr or nowhere) and aren't used for
function output, this shouldn't be a big fuss really
* remove stdin
* cleanup/remind: fixed bug where watcher would leak if container dies first
* silence system-test logs until fail, fix datastore tests
postgres does weird things with constraints when renaming tables, so we took
the easy way out
system-tests were very noisy and made you download a CircleCI text file of
the logs; made them only yell when they fail
* fix fdk-go dep for test image. fun
* fix swagger and remove test about format
* update all the gopkg files
* add back FN_FORMAT for fdks that assert things. pfft
* add useful error for functions that exit
this error is really confounding because containers can exit for all manner of
reasons; we're just guessing that this is the most likely cause for now, and
this error message should very likely change or be removed from the client
path anyway (context.Canceled wasn't all that useful either, but anyway, I'd
been hunting for this... so found it). added a test to avoid being publicly
shamed for 1-line commits (beware...).
* Initial refactor
Removing the repeated logic exposed some problems with the response
writers.
Previously, the trigger writer was overlaid on part of the header
writing, with the main invoke logic writing into the different levels of the
overlays at different points in the flow.
Instead, by extending the types and embedding structs, the writer is
more transparent: at the end of the flow it goes over all the
available headers and removes our prefixes. This lets the invoke logic
just write to the top level.
Going to continue after lunch to try and remove some of the layers and
param passing.
* Try and repeat concurrency failure
* Nested FromHTTPFnRequest inside FromHTTPTriggerRequest
* Consolidate buffer pooling logic
* go fmt yourself
* fix import
Largely a removal job; however, many tests, particularly system-level
ones, relied on Routes. These have been migrated to use Fns.
* Add 410 response to swagger
* No app names in log tags
* Adding constraint in GetCall for FnID
* Adding test to check FnID is required on call
* Add fn_id to call selector
* Fix text in docker mem warning
* Correct buildConfig func name
* Test fix up
* Removing CPU setting from Agent test
CPU setting has been deprecated, but the code base is still riddled
with it. This just removes it from this layer. Really we need to
remove it from Call.
* Remove fn id check on calls
* Reintroduce fn id required on call
* Adding fnID to calls for execute test
* Correct setting of app id in middleware
* Removes root middleware's ability to redirect fn invocations
* Add over sized test check
* Removing call fn id check
1) Early call validation and return due to cpu/mem requirements that are
impossible to meet (e.g. requested cpu/mem larger than max-mem or max-cpu
on the server) now emits HTTP Bad Request (400) instead of 503.
This case is most likely due to a client/service configuration
and/or validation issue.
2) 'failed' metric is now removed. 'failed' versus 'errors'
were too confusing. 'errors' is now a catch all error case.
3) new 'canceled' counter for client side cancels.
4) 'server_busy' now covers more cases than it previously did.
Clone of the trigger work to inject invoke urls into the annotations
on a fn when it is returned from the server.
Small changes to triggers code following code review of the fn code.
* fn: stats view/distribution improvements
*) View latency distribution is now an argument
in view creation functions. This allows easier
override to set custom buckets. It is simplistic
and assumes all latency views would use the same
set, but in practice this is already the case.
*) Moved API view creation to main; this should not
be enabled for all node types. This is consistent with
the rest of the system.
* fn: Docker samples of cpu/mem/disk with specific buckets
* fn: New timeout for LB Placer
Previously, LB Placers worked for as long as
client contexts allowed. Adding a Placer
config setting to bound this, 360 seconds by
default.
The new timeout does not count time spent in actual
function execution; it only applies to the amount
of wait time in Placers while the call is not
being executed.
* fn: runner status and docker load images
Introducing a function run for pure runner Status
calls. Previously, Status gRPC calls returned active
inflight request counts with the purpose of a simple
health checker. However, this is not sufficient, since
it does not show whether the agent or docker is healthy. With
this change, if pure runner is configured with a status
image, that image is executed through docker. The
call uses zero memory/cpu/tmpsize settings to ensure
resource tracker does not block it.
However, operators might not always have a docker
repository accessible/available for status image. Or
operators might not want the status to go over the
network. To allow such cases, and in general possibly
caching docker images, added a new environment variable
FN_DOCKER_LOAD_FILE. If this is set, fn-agent during
startup will load these images that were previously
saved with 'docker save' into docker.
* Initial support for invoking triggers
* dupe method
* tighten server constraints
* runner tests not working yet
* basic route tests passing
* post rebase fixes
* add hybrid support for trigger invoke and tests
* consolidate all hybrid evil into one place
* cleanup and make triggers unique by source
* fix oops with Agent
* linting
* review fixes
LB agent reports lb placer latency. It should also report
how long it took for the runner to initiate the call as
well as execution time inside the container, if the runner
has accepted (committed) the call.
Vast commit, includes:
* Introduces the Trigger domain entity.
* Introduces the Fns domain entity.
* V2 of the API for interacting with the new entities in swaggerv2.yml
* Adds v2 end points for Apps to support PUT updates.
* Rewrites the datastore level tests into a new pattern.
* V2 routes use entity ID over name as the path parameter.
* add DateTime sans mgo
* change all uses of strfmt.DateTime to common.DateTime, remove test strfmt usage
* remove api tests, system-test dep on api test
multiple reasons to remove the api tests:
* the awkward dependency on fn_go meant generating bindings on a branched fn
and vendoring those to test new stuff. this is, at a minimum, neither
intuitive, worth it, nor a fun way to spend the finite amount of time we
have to live.
* the api tests only tested a subset of functionality that the server api tests
already test, and we risk having tests where one tests some thing and the
other doesn't. let's not. we have too many test suites as it is, and these
pretty much only test that we updated the fn_go bindings, which is actually a
hassle as noted above and the cli will pretty quickly figure out anyway.
* fn_go relies on openapi, which relies on mgo, which is deprecated and we'd
like to remove as a dependency. openapi is a _huge_ dep built in a NIH
fashion, that cannot simply remove the mgo dep as users may be using it.
we've now stolen their date time and otherwise killed usage of it in fn core,
for fn_go it still exists but that's less of a problem.
* update deps
removals:
* easyjson
* mgo
* go-openapi
* mapstructure
* fn_go
* purell
* go-validator
also, had to lock docker. we shouldn't depend on docker master anyway; they
strongly advise against that. had no luck with the latest version rev, so i
locked it to what we were using before. until next time.
the rest is just playing dep roulette; those changes end up removing a ton though
* fix exec test to work
* account for john le cache
In pure-runner and LB agent, service providers might want to set specific driver options.
For example, to add cpu-shares to functions, LB can add the information as extensions
to the Call and pass this via gRPC to runners. Runners then pick these extensions from
gRPC call and pass it to driver. Using a custom driver implementation, pure-runners can
process these extensions to modify docker.CreateContainerOptions.
To achieve this, LB agents can now be configured using a call overrider.
Pure-runners can be configured using a custom docker driver.
RunnerCall and Call interfaces both expose call extensions.
An example to demonstrate this is implemented in test/fn-system-tests/system_test.go
which registers a call overrider for LB agent as well as a simple custom docker driver.
In this example, LB agent adds a key-value to extensions and runners add this key-value
as an environment variable to the container.
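The overrider shape described above might look roughly like this (types and the "cpu-shares" key are illustrative, not the exact Fn interfaces): the LB-side overrider adds a key-value to the call's extensions, and a custom driver on the runner side reads it back.

```go
package main

import "fmt"

// CallOverrider mirrors the idea above: it inspects a call's extensions and
// returns a modified set to ship over gRPC. A sketch, not the real signature.
type CallOverrider func(extensions map[string]string) (map[string]string, error)

// addCPUShares is a hypothetical overrider an LB agent might register to
// ask runners for specific docker cpu-shares.
func addCPUShares(ext map[string]string) (map[string]string, error) {
	if ext == nil {
		ext = map[string]string{}
	}
	ext["cpu-shares"] = "512"
	return ext, nil
}

func main() {
	var overrider CallOverrider = addCPUShares
	ext, err := overrider(nil)
	if err != nil {
		panic(err)
	}
	// A custom docker driver on the runner could translate this into
	// docker.CreateContainerOptions, or expose it as a container env var.
	fmt.Println(ext["cpu-shares"])
}
```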
* Adding a way to inject a request ID
It is very useful to associate a request ID to each incoming request,
this change allows to provide a function to do that via Server Option.
The change comes with a default function which will generate a new
request ID. The request ID is put in the request context along with a
common logger which always logs the request-id
We add gRPC interceptors to the server so it can get the request ID out
of the gRPC metadata and put it in the common logger stored in the
context so as all the log lines using the common logger from the context
will have the request ID logged
* fn: introducing lb placer basic metrics
This change adds basic metrics to naive and consistent
hash LB placers. The stats show how many times we scanned
the full runner list, if runner pool failed to return a
runner list or if runner pool returned an empty list.
Placed and not placed status are also tracked along with
if TryExec returned an error or not. Most common error
code, Too-Busy is specifically tracked.
If client cancels/times out, this is also tracked as
a client cancel metric.
For placer latency, we would like to know how much time
the placer spent on searching for a runner until it
successfully places a call. This includes round-trip
times for NACK responses from the runners until a successful
TryExec() call. By excluding last successful TryExec() latency,
we try to exclude function execution & runner container
startup time from this metric in an attempt to isolate
Placer only latency.
* fn: latency and attempt tracker
Removing full scan metric. Tracking number of
runners attempted is a better metric for this
purpose.
Also, if rp.Runners() fail, this is an unrecoverable
error and we should bail out instead of retrying.
* fn: typo fix, ch placer finalize err return
* fn: enable LB placer metrics in WithAgentFromEnv if prometheus is enabled
* initial Db helper split - make SQL and datastore packages optional
* abstracting log store
* break out DB, MQ and log drivers as extensions
* cleanup
* fewer deps
* fixing docker test
* hmm dbness
* updating db startup
* Consolidate all your extensions into one convenient package
* cleanup
* clean up dep constraints
* fn: user friendly timeout handling changes
Timeout setting in routes now means "maximum amount
of time a function can run in a container".
Total wait time for a given http request is now expected
to be handled by the client. As long as the client waits,
the LB, runner or agents will search for resources to
schedule it.
* fn: lb and pure-runner with non-blocking agent
*) Removed pure-runner capacity tracking code. This did
not play well with internal agent resource tracker.
*) In LB and runner gRPC comm, removed ACK. Now,
upon TryCall, pure-runner quickly proceeds to call
Submit. This is good since at this stage pure-runner
already has all relevant data to initiate the call.
*) Unless pure-runner emits a NACK, LB immediately
streams http body to runners.
*) For retriable requests added a CachedReader for
http.Request Body.
*) Idempotency/retry is similar to previous code.
After initial success in Engagement, once we attempt
a TryCall, unless we receive a NACK, we cannot retry
that call.
*) ch and naive placers now wrap each TryExec with
a cancellable context to clean up gRPC contexts
quicker.
* fn: err for simpler one-time read GetBody approach
This allows for a more flexible approach, since we let
users define GetBody() to allow repeated http body
reads. In the default LB case, LB executes a one-time io.ReadAll
and sets up GetBody, which is detected by RunnerCall.RequestBody().
* fn: additional check for non-nil req.body
* fn: attempt to override IO errors with ctx for TryExec
* fn: system-tests log dest
* fn: LB: EOF send handling
* fn: logging for partial IO
* fn: use buffer pool for IO storage in lb agent
* fn: pure runner should use chunks for data msgs
* fn: required config validations and pass APIErrors
* fn: additional tests and gRPC proto simplification
*) remove ACK/NACK messages as Finish message type works
OK for this purpose.
*) return resp in api tests for check for status code
*) empty body json test in api tests for lb & pure-runner
* fn: buffer adjustments
*) setRequestBody result handling correction
*) switch to bytes.Reader for read-only safety
*) io.EOF can be returned for non-nil Body in request.
* fn: clarify detection of 503 / Server Too Busy
If the function execution fails, the output returned is empty, and an
empty output satisfies the strings.Contains() checks, as an empty string
is always contained in any string.
The change adds a check on the length of the actual output.
* fn: mutex while waiting I/O considered harmful
*) Removed cases where a mutex was held while waiting on I/O; these
included possible disk I/O and network I/O.
*) Error/Context Close/Shutdown semantics changed since
the context timeout and comments were misleading. Close
always waits for pending gRPC session to complete.
Context usage here was merely 'wait up to x secs to
report an error' which only logs the error anyway.
Instead, the runner can log the error. And context
still can be passed around perhaps for future opencensus
instrumentation.
* fn: move call error/end handling to handleCallEnd
This simplifies submit() function but moves the burden
of retriable-versus-committed request handling and slot.Close()
responsibility to handleCallEnd().
*) removed unused cancel in api-test harness/server
*) removed hard coded port in getServerWithCancel along
with faulty health check code.
*) in SetupHarness() fixed code that skipped server start.
* App ID
* Clean-up
* Use ID or name to reference apps
* Can use app by name or ID
* Get rid of AppName for routes API and model
routes API is completely backwards-compatible
routes API accepts both app ID and name
* Get rid of AppName from calls API and model
* Fixing tests
* Get rid of AppName from logs API and model
* Restrict API to work with app names only
* Addressing review comments
* Fix for hybrid mode
* Fix rebase problems
* Addressing review comments
* Addressing review comments pt.2
* Fixing test issue
* Addressing review comments pt.3
* Updated docstring
* Adjust UpdateApp SQL implementation to work with app IDs instead of names
* Fixing tests
* fmt after rebase
* Make tests green again!
* Use GetAppByID wherever it is necessary
- adding new v2 endpoints to keep hybrid api/runner mode working
- extract CallBase from Call object to expose that to a user
(it doesn't include any app reference, as we do for all other API objects)
* Get rid of GetAppByName
* Adjusting server router setup
* Make hybrid work again
* Fix datastore tests
* Fixing tests
* Do not ignore app_id
* Resolve issues after rebase
* Updating test to make it work as it was
* Tabula rasa for migrations
* Adding calls API test
- we need to ensure we give "App not found" for the missing app and missing call in the first place
- making previous test work (request missing call for the existing app)
* Make datastore tests work fine with correctly applied migrations
* Make CallFunction middleware work again
had to adjust its implementation to set app ID before proceeding
* The biggest rebase ever made
* Fix 8's migration
* Fix tests
* Fix hybrid client
* Fix tests problem
* Increment app ID migration version
* Fixing TestAppUpdate
* Fix rebase issues
* Addressing review comments
* Renew vendor
* Updated swagger doc per recommendations
* Move delegated agent creation within NewLBAgent so we can hide the fact we disable docker
* Move delegated agent creation within NewPureRunner for better encapsulation
* Move out node-pool manager and replace it with RunnerPool extension
* adds extension points for runner pools in load-balanced mode
* adds error to return values in RunnerPool and Runner interfaces
* Implements runner pool contract with context-aware shutdown
* fixes issue with range
* fixes tests to use runner abstraction
* adds empty test file as a workaround for build requiring go source files in top-level package
* removes flappy timeout test
* update docs to reflect runner pool setup
* refactors system tests to use runner abstraction
* removes poolmanager
* moves runner interfaces from models to api/runnerpool package
* Adds a second runner to pool docs example
* explicitly check for request spillover to second runner in test
* moves runner pool package name for system tests
* renames runner pool pointer variable for consistency
* pass model json to runner
* automatically cast to http.ResponseWriter in load-balanced call case
* allow overriding of server RunnerPool via a programmatic ServerOption
* fixes return type of ResponseWriter in test
* move Placer interface to runnerpool package
* moves hash-based placer out of open source project
* removes siphash from Gopkg.lock
* move mattes migrations to migratex
* changes format of migrations to migratex format
* updates test runner to use new interface (double checked this with printlns,
the tests go fully down and then up, and work on pg/mysql)
* remove mattes/migrate
* update tests from deps
* update readme
* fix other file extensions
* Refactor PureRunner as an Agent so that it encapsulates its grpc server
* Maintain a list of extra contexts for the server to select on to handle errors and cancellations
* Initial stab at the protocol
* initial protocol sketch for node pool manager
* Added http header frame as a message
* Force the use of WithAgent variants when creating a server
* adds grpc models for node pool manager plus go deps
* Naming things is really hard
* Merge (and optionally purge) details received by the NPM
* WIP: starting to add the runner-side functionality of the new data plane
* WIP: Basic startup of grpc server for pure runner. Needs proper certs.
* Go fmt
* Initial agent for LB nodes.
* Agent implementation for LB nodes.
* Pass keys and certs to LB node agent.
* Remove accidentally left reference to env var.
* Add env variables for certificate files
* stub out the capacity and group membership server channels
* implement server-side runner manager service
* removes unused variable
* fixes build error
* splits up GetCall and GetLBGroupId
* Change LB node agent to use TLS connection.
* Encode call model as JSON to send to runner node.
* Use hybrid client in LB node agent.
This should provide access to get app and route information for the call
from an API node.
* More error handling on the pure runner side
* Tentative fix for GetCall problem: set deadlines correctly when reserving slot
* Connect loop for LB agent to runner nodes.
* Extract runner connection function in LB agent.
* drops committed capacity counts
* Bugfix - end state tracker only in submit
* Do logs properly
* adds first pass of tracking capacity metrics in agent
* made memory capacity metric uint64
* removes use of old capacity field
* adds remove capacity call
* merges overwritten reconnect logic
* First pass of a NPM
Provide a service that talks to a (simulated) CP.
- Receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period
* add capacity update on schedule
* Send periodic capacity metrics
Sending capacity metrics to the node pool manager
* splits grpc and api interfaces for capacity manager
* failure to advertise capacity shouldn't panic
* Add some instructions for starting DP/CP parts.
* Create the poolmanager server with TLS
* Use logrus
* Get npm compiling with cert fixups.
* Fix: pure runner should not start async processing
* brings runner, nulb and npm together
* Add field to acknowledgment to record slot allocation latency; fix a bug too
* iterating on pool manager locking issue
* raises timeout of placement retry loop
* Fix up NPM
Improve logging
Ensure that channels etc. are actually initialised in the structure
creation!
* Update the docs - runners GRPC port is 9120
* Bugfix: return runner pool accurately.
* Double locking
* Note purges as LBs stop talking to us
* Get the purging of old LBs working.
* Tweak: on restart, load runner set before making scaling decisions.
* more agent synchronization improvements
* Deal with the CP pulling out active hosts from under us.
* lock at lbgroup level
* Send request and receive response from runner.
* Add capacity check right before slot reservation
* Pass the full Call into the receive loop.
* Wait for the data from the runner before finishing
* force runner list refresh every time
* Don't init db and mq for pure runners
* adds shutdown of npm
* fixes broken log line
* Extract an interface for the Predictor used by the NPM
* purge drained connections from npm
* Refactor of the LB agent into the agent package
* removes capacitytest wip
* Fix undefined err issue
* updating README for poolmanager set up
* use retrying dial for lb to npm connections
* Rename lb_calls to lb_agent now that all functionality is there
* Use the right deadline and errors in LBAgent
* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping
* abstracting gRPCNodePool
* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping
* Add some init checks for LB and pure runner nodes
* adding some useful debug
* Fix default db and mq for lb node
* removes unreachable code, fixes typo
* Use datastore as logstore in API nodes.
This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.
* creates placement abstraction and moves capacity APIs to NodePool
* removed TODO, added logging
* Dial reconnections for LB <-> runners
LB grpc connections to runners are established using a backoff strategy
in the event of reconnection; this keeps the LB up even if one of the
runners goes away, and reconnects to it as soon as it is back.
* Add a status call to the Runner protocol
Stub at the moment. To be used for things like draindown, health checks.
* Remove comment.
* makes assign/release capacity lockless
* Fix hanging issue in lb agent when connections drop
* Add the CH hash from fnlb
Select this with FN_PLACER=ch when launching the LB.
* small improvement for locking on reloadLBGmembership
* Stabilise the list of Runners returned by NodePool
The NodePoolManager makes some attempt to keep the list of runner nodes advertised as
stable as possible. Let's preserve this effort on the client side. The main point of this
is to attempt to keep the same runner at the same index in the []Runner returned by
NodePool.Runners(lbgid); the ch algorithm likes it when this is the case.
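The stabilisation can be sketched as a merge that keeps survivors in their previous order and appends newcomers (a hypothetical helper over runner addresses, not the real code):

```go
package main

import "fmt"

// stableMerge keeps each surviving runner in its previous relative position
// and appends new runners at the end, so an index-keyed ch placer sees as
// few moves as possible across membership changes.
func stableMerge(old, latest []string) []string {
	inLatest := make(map[string]bool, len(latest))
	for _, r := range latest {
		inLatest[r] = true
	}
	seen := make(map[string]bool, len(old))
	out := make([]string, 0, len(latest))
	for _, r := range old {
		if inLatest[r] { // survivors keep their old ordering
			out = append(out, r)
			seen[r] = true
		}
	}
	for _, r := range latest { // newcomers go to the end
		if !seen[r] {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	old := []string{"r1", "r2", "r3"}
	latest := []string{"r3", "r4", "r1"}
	fmt.Println(stableMerge(old, latest)) // r2 dropped, r4 appended
}
```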
* Factor out a generator function for the Runners so that mocks can be injected
* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model
* fixes bug with nil runners
* Initial work for mocking things in tests
* fix for anonymous goroutine error
* fixing lb_test to compile
* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case
* Make GRPC port configurable, fix weird handling of web port too
* unit test reload Members
* check on runner creation failure
* adding nullRunner in case of failure during runner creation
* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.
* make capacityEntry private
* Change the runner gRPC bind address.
This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.
* Add support for multiple fixed runners to pool mgr
* Added harness for dataplane system tests, minor refactors
* Add Dockerfiles for components, along with docs.
* Doc fix: second runner needs a different name.
* Let us have three runners in system tests, why not
* The first system test running a function in API/LB/PureRunner mode
* Add unit test for Advertiser logic
* Fix issue with Pure Runner not sending the last data frame
* use config in models.Call as a temporary mechanism to override lb group ID
* make gofmt happy
* Updates documentation for how to configure lb groups for an app/route
* small refactor unit test
* Factor NodePool into its own package
* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations
* New dataplane with static runnerpool (#813)
Added static node pool as default implementation
* moved nullRunner to grpc package
* remove duplication in README
* fix go vet issues
* Fix server initialisation in api tests
* Tiny logging changes in pool manager.
Using `WithError` instead of `Errorf` when appropriate.
* Change some log levels in the pure runner
* fixing readme
* moves multitenant compute documentation
* adds introduction to multitenant readme
* Proper triggering of system tests in makefile
* Fix instructions about starting up the components
* Change db file for system tests to avoid contention in parallel tests
* fixes revisions from merge
* Fix merge issue with handling of reserved slot
* renaming nulb to lb in the doc and images folder
* better TryExec sleep logic, clean shutdown
In this change we implement a better way to deal with the sleep inside
the for loop while attempting to place a call.
We also added a clean way to shut down the connections to external
components when we shut down the server.
* System_test mysql port
set the mysql port for system tests to a different value from the one set for
the api tests, to avoid conflicts as they can run in parallel.
* change the container name for system-test
* removes flaky test TestRouteRunnerExecution pending resolution by issue #796
* amend remove_containers to remove newly added containers
* Rework capacity reservation logic at a higher level for now
* LB agent implements Submit rather than delegating.
* Fix go vet linting errors
* Changed a couple of error levels
* Fix formatting
* removes commented-out test
* adds snappy to vendor directory
* updates Gopkg and vendor directories, removing snappy and adding siphash
* wait for db containers to come up before starting the tests
* make system tests start API node on 8085 to avoid port conflict with api_tests
* avoid port conflicts with api_test.sh which are run in parallel
* fixes postgres port conflict and issue with removal of old containers
* Remove spurious println