this patch gets rid of max concurrency for functions altogether, as discussed,
since it will be challenging to support across functions nodes. as a result,
the previous version of functions would fall over when offered 1000 functions,
so some work was needed to push this through.
further work is necessary because docker basically falls over when asked to
start that many containers at the same time, and with this patch essentially
every function can scale without bound. it seems like we could add some kind
of adaptive restriction based on task run length and configured wait time, so
that fast-running functions line up to run in an existing hot container
instead of each one creating a new hot container.
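a rough sketch of that adaptive idea (nothing like this is in the patch; the helper name and the 2x threshold are made up for illustration):

```go
package runner

import "time"

// shouldWaitForHotContainer is a hypothetical helper: if a function's tasks
// typically finish well within the configured wait time, queue the task for
// an existing hot container instead of spinning up yet another one.
func shouldWaitForHotContainer(avgRunTime, configuredWait time.Duration, idleHot int) bool {
	if idleHot > 0 {
		return true // a hot container is free right now
	}
	// fast tasks: waiting in line is cheaper than a cold start
	return avgRunTime*2 < configuredWait
}
```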
this patch takes a first cut at whacking out some of the insanity that was the
previous concurrency model. it was problematic in that it significantly
limited concurrency across all functions: every task went through the same
unbuffered channel, so all functions could block if that channel wasn't
drained fast enough (and it's not apparent that this was impossible in the
previous implementation). in any event, each request already has its own
goroutine, so there's no reason not to use it; wrapping a map in a lock is not
hard, and it's unclear what the extra channel machinery bought us (added
insanity?). the net effect is marginally easier to understand and marginally
less insane.
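roughly the shape of the locked-map approach (type and field names here are illustrative, not the exact ones in the diff):

```go
package runner

import "sync"

type htfnsvr struct{ /* hot function server state elided */ }

// hotSvrs maps a function key (e.g. app + route) to its hot container server.
// a plain mutex-guarded map replaces the shared unbuffered channel; each
// request's own goroutine does the lookup/insert directly.
type hotSvrs struct {
	mu   sync.Mutex
	svrs map[string]*htfnsvr
}

func (h *hotSvrs) get(key string, newSvr func() *htfnsvr) *htfnsvr {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.svrs == nil {
		h.svrs = make(map[string]*htfnsvr)
	}
	s, ok := h.svrs[key]
	if !ok {
		s = newSvr()
		h.svrs[key] = s
	}
	return s
}
```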
after getting rid of max concurrency, this adds a blocking mechanism for the
first invocation of any function, so that all other hot invocations of that
function wait on the first one to finish; this avoids a thundering herd of
container starts (which was making docker die...) -- it could be slightly
improved, but works in a pinch. memory usage is reduced by removing redundant
maps of htfnsvr's and task.Requests (a factor of 2!). some of the protocol
stuff is cleaned up, and needs cleaning up further. anyway, it's a first cut.
there's another patch that rewrites all of it, but that was getting into
rabbit-hole territory; happy to oblige if anybody else has problems
understanding this rat's nest of channels. there is a good bit of work left to
make this prod ready (regardless of removing max concurrency).
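the first-invocation gate is roughly this shape (illustrative only; the names are not the ones in the diff):

```go
package runner

import "sync"

// firstCallGate makes every caller after the first wait until the first
// invocation of a given function has finished, so a cold function doesn't
// trigger a herd of simultaneous container launches.
type firstCallGate struct {
	mu    sync.Mutex
	gates map[string]chan struct{} // function key -> closed when first call is done
}

func (g *firstCallGate) run(key string, invoke func() error) error {
	g.mu.Lock()
	if g.gates == nil {
		g.gates = make(map[string]chan struct{})
	}
	done, started := g.gates[key]
	if !started {
		done = make(chan struct{})
		g.gates[key] = done
	}
	g.mu.Unlock()

	if !started {
		err := invoke() // first caller launches the container and runs
		close(done)     // release everyone queued behind it
		return err
	}
	<-done          // later callers wait for the first invocation to finish
	return invoke() // then run against the now-warm container
}
```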
a warning: this will break the db schemas. didn't put the effort in to add
migration stuff since this isn't deployed anywhere in prod...
TODO need to clean out the htfnmgr bucket with LRU
TODO need to clean up runner interface
TODO need to unify the task running paths across protocols
TODO need to move the ram checking stuff into worker for noted reasons
TODO need better elasticity of hot f(x) containers
* Solving postgres marshal/unmarshal issue
The Postgres datastore was not marshaling the App config on insert, which later caused failures when fetching the App because the datastore couldn't unmarshal the config.
The same issue was probably happening with the Route's headers in some situations.
The idea of this commit is to always marshal configs and headers when inserting/updating Apps or Routes; in the App and Route get methods, if unmarshaling the config/headers fails, an empty config/headers is returned.
* fix one more unmarshal case
* return an error when unmarshaling non-empty config/headers fails
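Taken together, the read/write behavior described above boils down to something like this (a sketch; the function names and map[string]string shape are assumptions, not the actual columns or queries):

```go
package datastore

import (
	"encoding/json"
	"fmt"
)

// unmarshalConfig mirrors the behavior above: empty stored data yields an
// empty config, while non-empty data that fails to unmarshal is returned as a
// real error instead of being silently swallowed.
func unmarshalConfig(raw []byte) (map[string]string, error) {
	if len(raw) == 0 {
		return map[string]string{}, nil
	}
	var cfg map[string]string
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, fmt.Errorf("unmarshal config: %v", err)
	}
	return cfg, nil
}

// marshalConfig is always called before INSERT/UPDATE so the stored value is
// valid JSON even when the config is nil.
func marshalConfig(cfg map[string]string) ([]byte, error) {
	if cfg == nil {
		cfg = map[string]string{}
	}
	return json.Marshal(cfg)
}
```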
* Fix #418: Added MySQL as DB storage layer.
* Make the mysql stuff work
* small fixes
* Switch to Go 1.8 installation inside CI (#589)
* Switch to Go 1.8 installation inside CI
Partially Addresses: #588
* Use url.Hostname() instead of custom method
* Added PR review changes.
* Added missing check for error.
* Changed SELECT * to explicit columns (name, config)
* Removed unused import.
* Added check for NoRows
* Merged changes with HEAD
* Added documentation to mysql.go
* update mysql to be on par with postgres
* Make datastore tests pass with remote Docker containers
* Make tests consume the DOCKER_HOST IP address as the bind host while constructing the database URI.
This fix makes datastore tests pass against remote Docker (with a host IP
different from 127.0.0.1).
Fixes: #586
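Roughly how the test host gets derived (a sketch; the helper name and the localhost fallback are assumptions):

```go
package datastore

import (
	"net/url"
	"os"
)

// hostFromDockerHost returns the host portion of DOCKER_HOST (e.g.
// "tcp://192.168.99.100:2376" -> "192.168.99.100") so the test database URI
// points at the machine actually running the containers, falling back to
// localhost when DOCKER_HOST is unset or unparsable.
func hostFromDockerHost() string {
	dh := os.Getenv("DOCKER_HOST")
	if dh == "" {
		return "127.0.0.1"
	}
	u, err := url.Parse(dh)
	if err != nil || u.Hostname() == "" {
		return "127.0.0.1"
	}
	return u.Hostname() // url.Hostname() is the Go 1.8 method mentioned above
}
```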
* Make datastore tests pass on Go1.7.1
* add datastore validator; adapt mock and tests
* adapt bolt datastore to common validator
* adapt postgres datastore to validator
* adapt redis datastore to common validator
* Add support for redis as a datastore
Fixes: #388
* Use HEXISTS instead of HGET when checking for apps and routes
* Get rid of SADD SREM and SMEMBERS
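The existence check amounts to something like this (a sketch using the redigo client; the "apps" hash key is an assumption):

```go
package datastore

import "github.com/garyburd/redigo/redis"

// appExists checks for an app without pulling its whole value back: HEXISTS
// only returns 0/1, whereas HGET would ship the stored blob just to test for
// presence.
func appExists(conn redis.Conn, appName string) (bool, error) {
	return redis.Bool(conn.Do("HEXISTS", "apps", appName))
}
```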
* change redis test port
* Add buffer time for redis docker
* redis test ping loop (#552)
* redis test ping loop
* simplify
* Refactor redis_test.go to adapt to @jmank88 new testing code
* tiny fix
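The ping loop from #552 above is roughly this shape (a sketch; the retry count and sleep interval are assumptions):

```go
package datastore

import (
	"time"

	"github.com/garyburd/redigo/redis"
)

// waitForRedis pings the test redis container until it answers or the retry
// budget runs out, instead of sleeping for a fixed buffer time.
func waitForRedis(addr string, retries int) error {
	var err error
	for i := 0; i < retries; i++ {
		var conn redis.Conn
		conn, err = redis.Dial("tcp", addr)
		if err == nil {
			_, err = conn.Do("PING")
			conn.Close()
			if err == nil {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return err
}
```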
* Redis datastore test fixes (#555)
* redis datastore test fixes - UpdateRoute/UpdateApp
* redis datastore fix InsertRoute
* redis datastore fix GetRoutesByApp
* API endpoint extensions working; add extensions example.
* Added server.NewEnv and some docs for the API extensions example, plus an example main.go.
* Uncommented special handler stuff.
* Added section in docs for extending API linking to example main.go.
* Commented out special_handler test
* Changed to NewFromEnv
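The extension entry point ends up being a custom main.go roughly along these lines (only server.NewFromEnv is named in the commits above; the context handling, the Start call, and the commented-out special-handler hook are assumptions about the example's shape):

```go
package main

import (
	"context"

	"github.com/iron-io/functions/api/server"
)

func main() {
	ctx := context.Background()

	// Build a server from the same environment variables the stock binary
	// uses (datastore URL, MQ address, port, ...).
	funcServer := server.NewFromEnv(ctx)

	// Hypothetical extension point: register custom endpoints or a special
	// handler here before starting; the real method names live in the
	// example main.go referenced in the docs.
	// funcServer.AddSpecialHandler(mySpecialHandler{})

	funcServer.Start(ctx)
}
```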
since the db files were being created inside the docker container with
permissions that let only the root user rwx them, and docker needs all of $PWD
to be readable in order to build the container image on the host, `make run-docker`
was broken on any subsequent run. if we create the files with more permissive
permissions (group +rx) then we don't have that issue.
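e.g. when the datastore creates its files, something along these lines (a sketch; the exact mode bits used in the patch are the assumption here):

```go
package datastore

import (
	"os"
	"path/filepath"

	"github.com/boltdb/bolt"
)

// openBoltDB creates the data directory with group rx and the bolt file with
// group read, so a later `docker build` on the host can read everything under
// $PWD even though the files were written as root inside the container.
func openBoltDB(path string) (*bolt.DB, error) {
	if err := os.MkdirAll(filepath.Dir(path), 0775); err != nil {
		return nil, err
	}
	return bolt.Open(path, 0664, nil)
}
```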
* functions: modify datastore to accommodate hot containers support
* functions: protocol between functions and hot containers
* functions: add hot containers clockwork
* fn: add hot containers support
* Reduce test verbosity
* Divert gin's log to the test buffer
* Divert stdlib's log to the test buffer
* Add bolt tests into log buffer
* Add a linebreak to improve log output layout
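In a test helper, the diversion amounts to pointing the writers at a shared buffer (the helper name is an assumption; the gin globals are real):

```go
package server

import (
	"bytes"
	"log"

	"github.com/gin-gonic/gin"
)

// quietLogs sends both stdlib log output and gin's logging into the given
// buffer so test runs stay quiet unless a failure dumps the buffer.
func quietLogs(buf *bytes.Buffer) {
	log.SetOutput(buf)           // stdlib log
	gin.DefaultWriter = buf      // gin's access/debug output
	gin.DefaultErrorWriter = buf // gin's error output
	gin.SetMode(gin.TestMode)    // also silences gin's debug banner
}
```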
By default, BoltDB will hang while waiting to acquire the lock on the
datafile, so users might find themselves waiting without knowing for what. The
added timeout aims to inform the user about what's happening.
Also, this renames MQADR to TASKSRV and refactors configuration to read
environment variables. RunAsyncRunner now fills in the gaps when parsing
TASKSRV.
Fixes #119
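The timeout is the stock bolt option (path and duration here are illustrative):

```go
package datastore

import (
	"log"
	"time"

	"github.com/boltdb/bolt"
)

// openWithTimeout fails fast with a useful error if another process already
// holds the file lock, instead of bolt's default of blocking indefinitely.
func openWithTimeout(path string) *bolt.DB {
	db, err := bolt.Open(path, 0600, &bolt.Options{Timeout: 5 * time.Second})
	if err != nil {
		log.Fatalf("could not open %s (is another process using it?): %v", path, err)
	}
	return db
}
```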