* clean up hardcoded lsnr.sock refs
because what drivers.ContainerTask needs is another method, and we all know it.
atoning for my sins the first time around. and yes, i refuse to use a cross-package
exported constant (just think of the dep graphs)
* fix tests
* Define an interface for IOFS handling. Add no-op and temporary directory implementations.
* Move IOFS stuff out into separate file, add basic tmpfs implementation for linux only
* Switch between directory and tmpfs based on platform and config
* Respect FN_IOFS_OPTS
* Make directory iofs default on all platforms
* At least try to clean up a bit on failure
* Add backout if IOFS creation fails
* Add comment about iofs.Close
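Taken together, the IOFS work above amounts to roughly the following Go sketch; the names here (iofsProvider, directoryIOFS) are illustrative, not the agent's actual types:

    package sketch

    import (
        "io/ioutil"
        "os"
    )

    // iofsProvider is a hypothetical shape for the IOFS abstraction: it hands
    // the agent a host path to share with the container and cleans up on Close.
    type iofsProvider interface {
        AgentPath() string // host-side path of the io fs
        Close() error      // best-effort cleanup
    }

    // directoryIOFS is the plain temp-directory implementation (the default on
    // all platforms); a tmpfs-backed variant would mount a tmpfs here on linux.
    type directoryIOFS struct{ dir string }

    func newDirectoryIOFS(base string) (*directoryIOFS, error) {
        dir, err := ioutil.TempDir(base, "iofs")
        if err != nil {
            return nil, err
        }
        return &directoryIOFS{dir: dir}, nil
    }

    func (d *directoryIOFS) AgentPath() string { return d.dir }
    func (d *directoryIOFS) Close() error      { return os.RemoveAll(d.dir) }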
Largely a removal job; however, many tests, particularly system-level
ones, relied on Routes. These have been migrated to use Fns.
* Add 410 response to swagger
* No app names in log tags
* Adding constraint in GetCall for FnID
* Adding test to check FnID is required on call
* Add fn_id to call selector
* Fix text in docker mem warning
* Correct buildConfig func name
* Test fix up
* Removing CPU setting from Agent test
CPU setting has been deprecated, but the code base is still riddled
with it. This just removes it from this layer. Really we need to
remove it from Call.
* Remove fn id check on calls
* Reintroduce fn id required on call
* Adding fnID to calls for execute test
* Correct setting of app id in middleware
* Removes root middleware's ability to redirect function invocations
* Add over sized test check
* Removing call fn id check
1) Early call validation and return due to cpu/mem being impossible
to meet (eg. request cpu/mem larger than max-mem or max-cpu
on the server) now emits HTTP Bad Request (400) instead of 503.
This case is most likely due to a client/service configuration
and/or validation issue.
2) 'failed' metric is now removed. 'failed' versus 'errors'
were too confusing. 'errors' is now a catch all error case.
3) new 'canceled' counter for client side cancels.
4) 'server_busy' now covers more cases than it previously did.
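For point 1, the check amounts to something like the sketch below; names are hypothetical, the real validation lives in call setup / the resource tracker:

    package sketch

    import "fmt"

    // a request that can never fit on this server is a client/config problem
    // (400), not transient server load (503).
    func validateResources(reqMemMB, maxMemMB uint64) error {
        if maxMemMB > 0 && reqMemMB > maxMemMB {
            return fmt.Errorf("requested memory %dMB exceeds server max of %dMB", reqMemMB, maxMemMB)
        }
        return nil
    }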
* update docs with pro tips for fdk http stream people
* fix bug where container could die before uds wait
we used to hang out for an hour. oopsie, thanks Owen
* POC code for inotify UDS-io-socket
* http-stream format
introducing the `http-stream` format support in fn. there are many details for
this, none of which can be linked from github :( -- docs are coming (I could
even try to add some here?). this is kinda MVP-ish level, but does not
implement the remaining spec, ie 'headers' fixing up / invoke fixing up. the
thinking being we can land this to test fdks / cli with and start splitting
work up on top of this. all other formats work the same as previous (no
breakage, only new stuff)
with the cli you can set `format: http-stream` and deploy, and then invoke a
function via the `http-stream` format. this uses unix domain socket (uds) on
the container instead of previous stdin/stdout, and fdks will have to support
this in a new fashion (will see about getting docs on here). fdk-go works,
which is here: https://github.com/fnproject/fdk-go/pull/30 . the output looks
the same as an http format function when invoking a function. wahoo.
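for a feel of the fdk side under http-stream, here's a rough sketch: listen on a unix domain socket inside the container and answer plain HTTP over it. the socket path below is illustrative only; the actual path and handshake are defined by the format doc added later in this changelog.

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // the agent shares the iofs directory with the container; the socket
        // name here is just an example.
        l, err := net.Listen("unix", "/tmp/iofs/lsnr.sock")
        if err != nil {
            panic(err)
        }
        err = http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from a hot function")
        }))
        if err != nil {
            panic(err)
        }
    }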
there's some amount of stuff we can clean up here, enumerated:
* the cleanup of the sock files is iffy, high pri here
* permissions are a pain in the ass and i punted on dealing with them. you can
run `sudo ./fnserver` if running locally, it may/may not work in dind(?) ootb
* no pipe usage at all (yay), still could reduce buffer usage around the pipe
behavior, we could clean this up potentially before removal (and tests)
* my brain can’t figure out if dispatchOldFormats changes pipe behavior, but
tests work
* i marked XXX to do some clean up which will follow soon… need this to test fdk
tho so meh, any thoughts on those marked would be appreciated however (1 less
decision for me). mostly happy w/ general shape/plumbing tho
* there are no tests atm, this is a tricky dance indeed. attempts were made.
need to futz with the permission stuff before committing to adding any tests
here, which I don't like either. also, need to get the fdk-go based test image
updated according to the fdk-go, and there's a dance there too. rumba time..
* delaying the big big cleanup until we have good enough fdk support to kill
all the other formats.
open to ideas on how to maneuver landing stuff...
* fix unmount
* see if the tests work on ci...
* add call id header
* fix up makefile
* add configurable iofs opts
* add format file describing http-stream contract
* rm some cruft
* default iofs to /tmp, remove mounting
out of the box, fn can't mount. /tmp will provide a memory-backed fs for us
on most systems; this will be fine for local development, and it can be
configured to be wherever for anyone who wants to make things more difficult
for themselves.
also removes the mounting, since it has to be done as root. we can't do this in
the oss fn (short of requesting root, but no). in the future, we may want to
have a knob here that can be configured in fn to allow
further configuration. since we don't know what we need in this dept
really, not doing that yet (it may be the case that it could be done
operationally outside of fn, e.g., but not if each directory needs to be
configured itself, which seems likely, anyway...)
* add WIP note just in case...
* fn: paused and evicted container stats
With this change, stats now report the paused state
as well as incidents of container exits due to eviction.
* fn: update/document state transitions in state tracker
There's no case of a transition moving from done to waiting. This
must be deprecated behavior.
agent/lb-agent/runner roles execute call.End() in the background
in some cases to reduce latency. With this change, we simplify this
and switch to non-background execution of call.End(). This fixes
hard-to-detect issues such as non-deterministic calculation of
call.CompletedAt or incomplete Call.Stats in runners.
Downstream projects, if impacted by the now-blocking call.End()
latency, should take steps to handle this according to their requirements.
fixes #1101
additional context:
* this was introduced in docker 1.13 (1/2017), and we require docker 17.10
(10/2017), so this should not have any issues dependency-wise, as `docker-init`
is in the docker install from that point in time. unless explicitly removed,
it should be in the dind container we use as well...
* the PR that introduced this to docker is
https://github.com/moby/moby/pull/26061 for additional context
* it may be wise to put this through some paces, if anybody has any...
interesting... function containers. the tests seem to work fine, however, and
this shouldn't be something users have to think about (?) at all, just
something that we are doing. this isn't the default in docker for
compatibility reasons, which is maybe a yellow flag but I am not sure tbh
* fn: introducing docker-syslog driver as default logger
With this change, fn-agent prefers the RFC 5424 docker-syslog driver
for logging stdout/stderr from containers. The advantage
of this is to offload the work to docker itself instead of
streaming stderr along with stdout, which gets multiplexed
through a single connection via the docker API.
The change will need support from FDKs in order to log the
correct call-id and suppress the '\n' that splits syslog lines.
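For reference, an RFC 5424-framed line looks roughly like the sketch below; which slots fn actually uses for the app name and call id is illustrative here, not a statement of the wire format:

    package sketch

    import (
        "fmt"
        "time"
    )

    // syslogLine formats one RFC 5424-style line carrying a call id.
    // Layout: PRI VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG.
    func syslogLine(hostname, appName, callID, msg string) string {
        return fmt.Sprintf("<%d>1 %s %s %s %s %s - %s\n",
            14, // priority: facility*8 + severity (user.info)
            time.Now().UTC().Format(time.RFC3339),
            hostname, appName, callID, "-", msg)
    }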
* fn: agent eviction revisited
Previously, the hot-container eviction logic used
the number of waiters for cpu/mem resources to decide
whether to evict a container. An eviction ticker woke up
its associated container every 1 sec to reassess system
load based on the waiter count. However, this does not work
for the non-blocking agent, since there are no waiters in
non-blocking mode.
Background on blocking versus non-blocking agent:
*) Blocking agent holds a request until the
request is serviced or the client times out. It assumes
the request can eventually be serviced when idle
containers eject themselves or busy containers finish
their work.
*) Non-blocking mode tries to limit this wait time.
However, the non-blocking agent has never been truly
non-blocking. This simply means that we only
make a request wait if we take some action in
the system. Non-blocking agents are configured with
a much higher hot poll frequency to make the system
more responsive, as well as to handle cases where a
too-busy event is missed by the request. This is because
the communication between hot-launcher and waiting
requests is not 1-1 and is lossy if another request
arrives for the same slot queue and receives a
too-busy response before the original request.
Introducing an evictor where each hot container can
register itself if it has been idle for more than 1 second.
Upon registration, these idle containers become eligible
for eviction.
In the hot container launcher, in non-blocking mode,
before we attempt to emit a too-busy response, we now
attempt an eviction. If this is successful, then
we wait some more. This could result in requests
waiting longer than they used to, but only if a
container was evicted. In blocking mode, the
hot launcher uses the hot-poll period to assess whether
a request has waited too long, and then eviction
is triggered.
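A rough sketch of the evictor idea (the names are illustrative, not the agent's actual API):

    package sketch

    import "sync"

    // idle hot containers register themselves; a launcher that cannot obtain
    // resources may evict one instead of immediately answering too-busy.
    type evictor struct {
        mu   sync.Mutex
        idle map[string]chan struct{} // container id -> shutdown signal
    }

    func newEvictor() *evictor {
        return &evictor{idle: make(map[string]chan struct{})}
    }

    func (e *evictor) register(id string, stop chan struct{}) {
        e.mu.Lock()
        e.idle[id] = stop
        e.mu.Unlock()
    }

    func (e *evictor) unregister(id string) {
        e.mu.Lock()
        delete(e.idle, id)
        e.mu.Unlock()
    }

    // tryEvict asks one idle container to exit; false means nothing was idle.
    func (e *evictor) tryEvict() bool {
        e.mu.Lock()
        defer e.mu.Unlock()
        for id, stop := range e.idle {
            close(stop)
            delete(e.idle, id)
            return true
        }
        return false
    }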
* fn: runner status and docker load images
Introducing a function invocation for pure runner Status
calls. Previously, Status gRPC calls returned active
inflight request counts for the purpose of a simple
health check. However, this is not sufficient, since
it does not show whether the agent or docker is healthy. With
this change, if the pure runner is configured with a status
image, that image is executed through docker. The
call uses zero memory/cpu/tmpsize settings to ensure
the resource tracker does not block it.
However, operators might not always have a docker
repository accessible/available for the status image. Or
operators might not want the status check to go over the
network. To allow such cases, and in general to allow
caching docker images, a new environment variable
FN_DOCKER_LOAD_FILE has been added. If this is set, fn-agent during
startup will load images that were previously
saved with 'docker save' into docker.
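Roughly what the startup hook does, and how an operator would prepare the file. This sketch shells out to the docker CLI just to show the moving parts (the agent would go through the docker API); paths and image names are examples only:

    package sketch

    import (
        "os"
        "os/exec"
    )

    // at agent startup: load pre-saved images if FN_DOCKER_LOAD_FILE is set.
    // The file would have been produced beforehand with something like:
    //   docker save some/status-image:latest -o /var/lib/fn/images.tar
    func loadDockerImages() error {
        file := os.Getenv("FN_DOCKER_LOAD_FILE")
        if file == "" {
            return nil // nothing to preload
        }
        cmd := exec.Command("docker", "load", "-i", file)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }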
* Initial support for invoking triggers
* dupe method
* tighten server constraints
* runner tests not working yet
* basic route tests passing
* post rebase fixes
* add hybrid support for trigger invoke and tests
* consolidate all hybrid evil into one place
* cleanup and make triggers unique by source
* fix oops with Agent
* linting
* review fixes
* add DateTime sans mgo
* change all uses of strfmt.DateTime to common.DateTime, remove test strfmt usage
* remove api tests, system-test dep on api test
multiple reasons to remove the api tests:
* awkward dependency with fn_go meant generating bindings on a branched fn to
vendor those to test new stuff. this is, at a minimum, neither intuitive,
worth it, nor a fun way to spend the finite amount of time we have to live.
* api tests only tested a subset of functionality that the server/ api tests
already test, and we risk having tests where one tests something and the
other doesn't. let's not. we have too many test suites as it is, and these
pretty much only test that we updated the fn_go bindings, which is actually a
hassle as noted above and the cli will pretty quickly figure out anyway.
* fn_go relies on openapi, which relies on mgo, which is deprecated and we'd
like to remove as a dependency. openapi is a _huge_ dep built in a NIH
fashion that cannot simply remove the mgo dep, as users may be using it.
we've now stolen their date time and otherwise killed usage of it in fn core;
it still exists for fn_go, but that's less of a problem.
* update deps
removals:
* easyjson
* mgo
* go-openapi
* mapstructure
* fn_go
* purell
* go-validator
also, had to lock docker. we shouldn't use docker on master anyway; they
strongly advise against that. had no luck with the latest version rev, so i
locked it to what we were using before. until next time.
the rest is just playing dep roulette; those end up removing a ton tho
* fix exec test to work
* account for john le cache
In pure-runner and LB agent, service providers might want to set specific driver options.
For example, to add cpu-shares to functions, the LB can add the information as extensions
to the Call and pass this via gRPC to runners. Runners then pick these extensions off the
gRPC call and pass them to the driver. Using a custom driver implementation, pure-runners can
process these extensions to modify docker.CreateContainerOptions.
To achieve this, LB agents can now be configured using a call overrider.
Pure-runners can be configured using a custom docker driver.
RunnerCall and Call interfaces both expose call extensions.
An example to demonstrate this is implemented in test/fn-system-tests/system_test.go
which registers a call overrider for LB agent as well as a simple custom docker driver.
In this example, LB agent adds a key-value to extensions and runners add this key-value
as an environment variable to the container.
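The flow, very roughly; the actual overrider/driver hook signatures are in the system test referenced above, the ones below are simplified placeholders:

    package sketch

    import docker "github.com/fsouza/go-dockerclient"

    // LB side: a call overrider attaches an extension before the call is
    // shipped to a runner over gRPC (real signature differs; simplified here).
    func overrideCall(ext map[string]string) map[string]string {
        ext["FN_CPU_SHARES"] = "256" // illustrative key/value
        return ext
    }

    // runner side: a custom driver reads the extension off the call and folds
    // it into the container create options; the system test surfaces it to the
    // function as an environment variable, as done here.
    func applyExtensions(ext map[string]string, opts *docker.CreateContainerOptions) {
        if v, ok := ext["FN_CPU_SHARES"]; ok && opts.Config != nil {
            opts.Config.Env = append(opts.Config.Env, "FN_CPU_SHARES="+v)
        }
    }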
* fn: user friendly timeout handling changes
Timeout setting in routes now means "maximum amount
of time a function can run in a container".
Total wait time for a given http request is now expected
to be handled by the client. As long as the client waits,
the LB, runner or agents will search for resources to
schedule it.
* fn: size restricted tmpfs /tmp and read-only / support
*) read-only Root Fs Support
*) removed CPUShares from docker API. This was unused.
*) docker.Prepare() refactoring
*) added docker.configureTmpFs() for size limited tmpfs on /tmp
*) tmpfs size support in routes and resource tracker
*) fix fn-test-utils to handle sparse files better in create file
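The read-only root and tmpfs bullets above boil down to options roughly equivalent to `docker run --read-only --tmpfs /tmp:size=Nm`. A sketch, assuming the docker client's HostConfig fields; values are illustrative:

    package sketch

    import (
        "fmt"

        docker "github.com/fsouza/go-dockerclient"
    )

    func exampleHostConfig(tmpSizeMB uint64) *docker.HostConfig {
        return &docker.HostConfig{
            ReadonlyRootfs: true, // read-only /
            Tmpfs: map[string]string{
                // size-limited tmpfs on /tmp, driven by the route's tmpfs size
                "/tmp": fmt.Sprintf("size=%dm", tmpSizeMB),
            },
        }
    }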
* test typo fix
* Implements graceful shutdown of agent.DataAccess and underlying Datastore/Logstore/MessageQueue
* adds tests for closing agent.DataAccess and Datastore
* add user syslog writers to app
users may now specify syslog url(s) on apps, and all functions under that app
will spew their logs out to it. the docs have more information around the details
there, please review those (swagger and operating/logging.md); tried to
implement to spec in some parts and improve others, open to feedback on
format though, lots of liberty there.
design decision wise, I am looking to the future and ignoring cold containers.
the overhead of the connections there will not be worth it, so this feature
only works for hot functions, since we're killing cold anyway (even if a user
can just straight up exit a hot container).
syslog connections will be opened against a container when it starts up, and
then the call id that is logged gets swapped out for each call that goes
through the container; this cuts down significantly on the cost of
opening/closing connections. there are buffers to accumulate logs until we get
a `\n` to actually write a syslog line, and a buffer to save some bytes when
we're writing the syslog formatting as well. underneath, writers re-use the
line writer in certain scenarios (swapper). we could likely improve the ease
of setting this up, but opening the syslog conns against a container seems
worth it, and is a different path than the other func loggers that we create
when we make a call object. the Close() stuff is a little tricky; not sure how
to make it easier and keep the ^ benefits, open to idears.
this does add another vector of 'limits' to consider for more strict service
operators: one being how many syslog urls a user can add to an app (infinite,
atm), and the other being that, on the order of the number of containers per
host, we could run out of connections in certain scenarios. there may be some
utility in having multiple syslog sinks to send to; it could help with
debugging at times to send to another destination, or if a user is a client w/
someone and both want the function logs, e.g. (have used this for that in the
past, specifically).
this also doesn't work behind a proxy, which is something i'm open to fixing,
but afaict will require a 3rd party dependency (we can pretty much steal what
docker does). this is mostly of utility for those of us that work behind a
proxy all the time, not really for end users.
there are some unit tests. integration tests for this don't sound very fun to
maintain. I did test against papertrail with each protocol and it works (and
even times out if you're behind a proxy!).
closes #337
* add trace to syslog dial
* fn: allow specified docker networks in functions
If FN_DOCKER_NETWORK is specified with a list of
networks, then the agent driver picks the least-used
network to place functions on.
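The selection itself is simple; a sketch with a hypothetical shape: keep a per-network count of running containers and pick the minimum.

    package sketch

    // pickNetwork returns the least-used of the networks listed in
    // FN_DOCKER_NETWORK, given a running-container count per network.
    func pickNetwork(counts map[string]int) string {
        best := ""
        min := -1
        for network, n := range counts {
            if min < 0 || n < min {
                best, min = network, n
            }
        }
        return best
    }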
* add mutex comment
* fn: non-blocking resource tracker and notification
For some types of errors, we might want to notify
the actual caller if the error is directly 1-1 tied
to that request. If hotLauncher is triggered with a
signaller, we now send back an error notification
channel for communication. This is passed to
checkLaunch to send synchronous responses back
to the caller that initiated this hot container
launch.
This is useful if we want to run the agent in
quick-fail mode, where instead of waiting for
CPU/Mem to become available, we prefer to fail
quickly in order not to hold up the caller.
To support this, non-blocking resource tracker
options/functions are now available.
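Conceptually, the non-blocking variant just refuses to wait. A sketch with hypothetical names:

    package sketch

    // blocking acquisition waits on the tracker's token channel; the
    // non-blocking variant returns immediately so the launcher can notify the
    // waiting caller (via the error channel described above) instead of
    // holding it up.
    func getTokenNB(tokens chan struct{}) bool {
        select {
        case <-tokens:
            return true // got cpu/mem headroom
        default:
            return false // would have to wait: fail fast instead
        }
    }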
* fn: test env var rename tweak
* fn: fixup merge
* fn: rebase test fix
* fn: merge fixup
* fn: test tweak down to 70MB for 128MB total
* fn: refactor token creation and use broadcast regardless
* fn: nb description
* fn: bugfix
* CloudEvents I/O format support.
* Updated format doc.
* Remove log lines
* This adds support for CloudEvent ingestion at the http router layer.
* Updated per comments.
* Responds with full CloudEvent message.
* Fixed up per comments
* Fix tests
* Checks for cloudevent content-type
* doesn't error on missing content-type.
* fn: common.WaitGroup improvements
*) Split the API into AddSession/DoneSession
*) Only wake up listeners when session count reaches zero.
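A minimal sketch of the session-counting idea; names and shape are illustrative, the real common.WaitGroup has more to it:

    package sketch

    import "sync"

    // sessionGroup: AddSession is refused once the group is closing, and
    // waiters are only woken when the outstanding session count hits zero.
    type sessionGroup struct {
        cond   *sync.Cond
        count  int
        closed bool
    }

    func newSessionGroup() *sessionGroup {
        return &sessionGroup{cond: sync.NewCond(&sync.Mutex{})}
    }

    func (g *sessionGroup) AddSession(n int) bool {
        g.cond.L.Lock()
        defer g.cond.L.Unlock()
        if g.closed {
            return false // shutting down: refuse new work
        }
        g.count += n
        return true
    }

    func (g *sessionGroup) DoneSession() {
        g.cond.L.Lock()
        defer g.cond.L.Unlock()
        g.count--
        if g.count == 0 {
            g.cond.Broadcast() // wake listeners only at zero
        }
    }

    // CloseGroup marks the group closed and blocks until all sessions finish.
    func (g *sessionGroup) CloseGroup() {
        g.cond.L.Lock()
        defer g.cond.L.Unlock()
        g.closed = true
        for g.count > 0 {
            g.cond.Wait()
        }
    }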
* fn: WaitGroup go-routine blast test
* fn: test fix and rebase fixup
* fn: reduce lbagent and agent dependency
lbagent and agent code is too interdependent. This causes
any change in agent to break lbagent. In reality, for the
LB there should be no delegated agent. Splitting these
two will cause some code duplication, but it reduces
dependency and complexity (eg. agent without docker)
* fn: post rebase fixup
* fn: runner/runnercall should use lbDeadline
* fn: fixup lb agent test
* fn: remove agent create option for common.WaitGroup
* fn: sync.WaitGroup replacement common.WaitGroup
agent/lb_agent/pure_runner have been incorrectly using
sync.WaitGroup semantics. Switching these components to
use the new common.WaitGroup, which provides some handy
functionality for common graceful shutdown cases.
From https://golang.org/pkg/sync/#WaitGroup,
"Note that calls with a positive delta that occur when the counter
is zero must happen before a Wait. Calls with a negative delta,
or calls with a positive delta that start when the counter is
greater than zero, may happen at any time. Typically this means
the calls to Add should execute before the statement creating
the goroutine or other event to be waited for. If a WaitGroup
is reused to wait for several independent sets of events,
new Add calls must happen after all previous Wait calls have
returned."
HandleCallEnd introduces some complexity to the shutdowns, but this
is currently handled by calling AddSession(2) initially and letting
HandleCallEnd() decrement by 1, in addition to the decrement by 1 in
Submit().
lb_agent shutdown sequence and particularly timeouts with runner pool
needs another look/revision, but this is outside of the scope of this
commit.
* fn: lb-agent wg share
* fn: no need to +2 in Submit with defer.
Removed defer since handleCallEnd already has
this responsibility.
* fn: move call error/end handling to handleCallEnd
This simplifies the submit() function but moves the burden
of retriable-versus-committed request handling and slot.Close()
responsibility to handleCallEnd().
* fn: experimental prefork recycle and other improvements
*) Option to recycle and not use the same pool container again.
*) Two state processing: initializing versus ready (start-kill).
*) Ready state is exempt from rate limiter.
* fn: experimental prefork pool multiple network support
In order to exceed the 1023-container (bridge port) limit, add
multiple networks:
    for i in fn-net1 fn-net2 fn-net3 fn-net4
    do
        docker network create $i
    done
to the Docker startup (eg. dind preentry.sh), then provide these
to the prefork pool using:
    export FN_EXPERIMENTAL_PREFORK_NETWORKS="fn-net1 fn-net2 fn-net3 fn-net4"
which should be able to spawn 1023 * 4 containers.
* fn: fixup tests for cfg move
* fn: add ipc and pid namespaces into prefork pooling
* fn: revert ipc and pid namespaces for now
Sharing pid/ipc namespaces opens up the function container to the pause container.
* fn: perform call.End() after request is processed
call.End() performs several tasks in sequence: insert call,
insert log, (todo) remove mq entry, fireAfterCall callback, etc.
These currently add to the request latency, as the return
from agent.Submit() is blocked on them. We also haven't been
able to apply any timeouts on these operations, since they are
handled during request processing and it is hard to come up
with a strategy for that. Also, the error cases
(couldn't insert call or log) are not propagated to the caller.
With this change, call.End() handling becomes asynchronous: we
perform these tasks after the request is done. This improves
latency, and we no longer have to block the call on these operations.
The changes will also free up the agent slot token more quickly,
and we are no longer tied to hiccups in call.End().
A timeout policy is also added to this, which can
be adjusted with an env variable (default 10 minutes).
This accentuates the fact that call/log/fireAfterCall are not
completed when the request is done. So, there's a window where the
call is done, but call/log/fireAfterCall are not yet propagated.
This was already the case, especially for error cases.
There's a slight risk of accumulating call.End() operations in
case of hiccups in these log/call/callback systems.
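The shape of the change, roughly; names are hypothetical, the timeout env var and default are as described above:

    package sketch

    import (
        "context"
        "log"
        "time"
    )

    type ender interface {
        End(ctx context.Context) error // insert call, insert log, callbacks, ...
    }

    // once the response has been written, finish call bookkeeping in the
    // background, bounded by the configurable timeout (default 10 minutes).
    func finishCallAsync(c ender, timeout time.Duration) {
        go func() {
            ctx, cancel := context.WithTimeout(context.Background(), timeout)
            defer cancel()
            if err := c.End(ctx); err != nil {
                log.Println("call.End failed:", err) // caller is long gone; just log
            }
        }()
    }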
* fn: address risk of overstacking of call.End() calls.
* App ID
* Clean-up
* Use ID or name to reference apps
* Can use app by name or ID
* Get rid of AppName for routes API and model
routes API is completely backwards-compatible
routes API accepts both app ID and name
* Get rid of AppName from calls API and model
* Fixing tests
* Get rid of AppName from logs API and model
* Restrict API to work with app names only
* Addressing review comments
* Fix for hybrid mode
* Fix rebase problems
* Addressing review comments
* Addressing review comments pt.2
* Fixing test issue
* Addressing review comments pt.3
* Updated docstring
* Adjust UpdateApp SQL implementation to work with app IDs instead of names
* Fixing tests
* fmt after rebase
* Make tests green again!
* Use GetAppByID wherever it is necessary
- adding new v2 endpoints to keep hybrid api/runner mode working
- extract CallBase from Call object to expose that to a user
(it doesn't include any app reference, as we do for all other API objects)
* Get rid of GetAppByName
* Adjusting server router setup
* Make hybrid work again
* Fix datastore tests
* Fixing tests
* Do not ignore app_id
* Resolve issues after rebase
* Updating test to make it work as it was
* Tabula rasa for migrations
* Adding calls API test
- we need to ensure we give "App not found" for the missing app and missing call in the first place
- making the previous test work (request a missing call for an existing app)
* Make datastore tests work fine with correctly applied migrations
* Make CallFunction middleware work again
had to adjust its implementation to set app ID before proceeding
* The biggest rebase ever made
* Fix 8's migration
* Fix tests
* Fix hybrid client
* Fix tests problem
* Increment app ID migration version
* Fixing TestAppUpdate
* Fix rebase issues
* Addressing review comments
* Renew vendor
* Updated swagger doc per recommendations
* Move delegated agent creation within NewLBAgent so we can hide the fact we disable docker
* Move delegated agent creation within NewPureRunner for better encapsulation
Added env FN_MAX_FS_SIZE_MB, which, if defined and non-zero,
is passed to docker as the storage opt size. We do not currently
validate whether this option is supported by docker. This is
because it's difficult to actually validate: it depends
not only on the storage driver and its backing filesystem,
but also on the mount options used to mount that fs.
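What gets handed to docker is roughly the equivalent of `docker run --storage-opt size=<N>M`. A sketch of the translation; as noted, docker may or may not honor it depending on the storage driver:

    package sketch

    import "os"

    // translate FN_MAX_FS_SIZE_MB into a docker storage opt.
    func storageOpt() map[string]string {
        mb := os.Getenv("FN_MAX_FS_SIZE_MB")
        if mb == "" || mb == "0" {
            return nil // unset or zero: leave the rootfs size unlimited
        }
        return map[string]string{"size": mb + "M"} // e.g. size=512M
    }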
* fn: reorg agent config
*) Moving constants in agent to agent config, which helps
with testing, tuning.
*) Added max total cpu & memory for testing & clamping max
mem & cpu usage if needed.
* fn: adjust PipeIO time
* fn: for hot, cannot reliably test EndOfLogs in TestRouteRunnerExecution
* add jaeger support, link hot container & req span
* adds jaeger support, now with FN_JAEGER_URL; there's a simple tutorial in the
operating/metrics.md file now and it's pretty easy to get up and running.
* links a hot request span to a hot container span. when we change this to
sample at a lower ratio we'll need to finagle the hot container span to always
sample or something, otherwise we'll hide that info. at least, since we're
sampling at 100% for now if this is flipped on, you can see freeze/unfreeze etc.
when they hit. this is useful for debugging. note that zipkin's exporter does
not follow the link at all, hence jaeger... and they're backed by the Cloud
Empire now (CNCF) so we'll probably use it anyway.
* vendor: add thrift for jaeger