Commit Graph

68 Commits

Tolga Ceylan
09bfa41da5 fn: file system 'size' option documentation (#889) 2018-03-26 21:30:39 -07:00
Gerardo Viedma
73ae77614c Moves the node pool manager out behind an extension using the runner pool abstraction (Part 2) (#862)
* Move out node-pool manager and replace it with RunnerPool extension

* adds extension points for runner pools in load-balanced mode

* adds error to return values in RunnerPool and Runner interfaces

* Implements runner pool contract with context-aware shutdown

* fixes issue with range

* fixes tests to use runner abstraction

* adds empty test file as a workaround for build requiring go source files in top-level package

* removes flappy timeout test

* update docs to reflect runner pool setup

* refactors system tests to use runner abstraction

* removes poolmanager

* moves runner interfaces from models to api/runnerpool package

* Adds a second runner to pool docs example

* explicitly check for request spillover to second runner in test

* moves runner pool package name for system tests

* renames runner pool pointer variable for consistency

* pass model json to runner

* automatically cast to http.ResponseWriter in load-balanced call case

* allow overriding of server RunnerPool via a programmatic ServerOption

* fixes return type of ResponseWriter in test

* move Placer interface to runnerpool package

* moves hash-based placer out of open source project

* removes siphash from Gopkg.lock
2018-03-16 13:46:21 +00:00
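
From the bullets above, the runner pool contract looks roughly like the following. This is a hedged sketch reconstructed from the commit messages alone; the method signatures and the RunnerCall type are assumptions, not the actual api/runnerpool code.

```go
package runnerpool

import (
	"context"
	"net/http"
)

// Runner is a single node that may attempt to execute a call.
// Both interfaces return errors, per the commit above.
type Runner interface {
	// TryExec attempts to place the call; committed=false signals the
	// placer to spill over to another runner.
	TryExec(ctx context.Context, call RunnerCall) (committed bool, err error)
	// Close supports the context-aware shutdown mentioned above.
	Close(ctx context.Context) error
}

// RunnerPool hands out the current set of runners for an lb group.
type RunnerPool interface {
	Runners(ctx context.Context, lbGroupID string) ([]Runner, error)
	Shutdown(ctx context.Context) error
}

// Placer decides which runner in the pool receives a given call
// (moved into the runnerpool package by this PR).
type Placer interface {
	PlaceCall(ctx context.Context, pool RunnerPool, call RunnerCall) error
}

// RunnerCall is an assumed minimal view of a call: the model is passed to
// the runner as JSON, and responses are written back to the caller.
type RunnerCall interface {
	ModelJSON() ([]byte, error)
	ResponseWriter() http.ResponseWriter
}
```
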
Reed Allman
9eaf824398 add jaeger support, link hot container & req span (#840)
* add jaeger support, link hot container & req span

* adds jaeger support now with FN_JAEGER_URL; there's a simple tutorial in the
operating/metrics.md file now and it's pretty easy to get up and running.
* links a hot request span to a hot container span. when we change this to
sample at a lower ratio we'll need to finagle the hot container span to always
sample or something, otherwise we'll hide that info. at least, since we're
sampling at 100% for now when this is flipped on, we can see freeze/unfreeze etc.
if they hit. this is useful for debugging. note that zipkin's exporter does
not follow the link at all, hence jaeger... and they're backed by the Cloud
Empire now (CNCF) so we'll probably use it anyway.

* vendor: add thrift for jaeger
2018-03-13 15:57:12 -07:00
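
A minimal sketch of the env-driven toggle described above; `registerJaegerExporter` is a hypothetical stand-in for the real exporter wiring, and the 100% sampling mirrors the note in the commit:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Tracing is opt-in: only set up the Jaeger exporter when
	// FN_JAEGER_URL points somewhere (per the commit above).
	if url := os.Getenv("FN_JAEGER_URL"); url != "" {
		// Sample at 100% for now so hot container spans (freeze/unfreeze
		// etc.) stay visible; a lower ratio would need the linked hot
		// container span forced to sample.
		if err := registerJaegerExporter(url, 1.0); err != nil {
			log.Fatalf("jaeger setup failed: %v", err)
		}
		log.Printf("jaeger tracing enabled, exporting to %s", url)
	}
}

// registerJaegerExporter is hypothetical; the concrete calls depend on the
// tracing library in use, so they are elided here.
func registerJaegerExporter(collectorURL string, sampleRatio float64) error {
	_ = collectorURL
	_ = sampleRatio
	return nil
}
```
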
Matt Stephenson
924fe2b72b Fix missing instructions for building noop.so (#846) 2018-03-12 14:57:56 -07:00
Matt Stephenson
a787ccac36 Refactor controlplane into a go plugin (#833)
* Refactor controlplane into a go plugin

* Move vbox to controlplane package
2018-03-12 12:50:55 -07:00
Gerardo Viedma
8af57da7b2 Support load-balanced runner groups for multitenant compute isolation (#814)
* Initial stab at the protocol

* initial protocol sketch for node pool manager

* Added http header frame as a message

* Force the use of WithAgent variants when creating a server

* adds grpc models for node pool manager plus go deps

* Naming things is really hard

* Merge (and optionally purge) details received by the NPM

* WIP: starting to add the runner-side functionality of the new data plane

* WIP: Basic startup of grpc server for pure runner. Needs proper certs.

* Go fmt

* Initial agent for LB nodes.

* Agent implementation for LB nodes.

* Pass keys and certs to LB node agent.

* Remove accidentally left reference to env var.

* Add env variables for certificate files

* stub out the capacity and group membership server channels

* implement server-side runner manager service

* removes unused variable

* fixes build error

* splits up GetCall and GetLBGroupId

* Change LB node agent to use TLS connection.

* Encode call model as JSON to send to runner node.

* Use hybrid client in LB node agent.

This should provide access to get app and route information for the call
from an API node.

* More error handling on the pure runner side

* Tentative fix for GetCall problem: set deadlines correctly when reserving slot

* Connect loop for LB agent to runner nodes.

* Extract runner connection function in LB agent.

* drops committed capacity counts

* Bugfix - end state tracker only in submit

* Do logs properly

* adds first pass of tracking capacity metrics in agent

* makes memory capacity metric uint64

* makes memory capacity metric uint64

* removes use of old capacity field

* adds remove capacity call

* merges overwritten reconnect logic

* First pass of a NPM

Provide a service that talks to a (simulated) CP.

- Receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period

* add capacity update on schedule

* Send periodic capacity metrics

Sending capacity metrics to the node pool manager

* splits grpc and api interfaces for capacity manager

* failure to advertise capacity shouldn't panic

* Add some instructions for starting DP/CP parts.

* Create the poolmanager server with TLS

* Use logrus

* Get npm compiling with cert fixups.

* Fix: pure runner should not start async processing

* brings runner, nulb and npm together

* Add field to acknowledgment to record slot allocation latency; fix a bug too

* iterating on pool manager locking issue

* raises timeout of placement retry loop

* Fix up NPM

Improve logging

Ensure that channels etc. are actually initialised in the structure
creation!

* Update the docs - runners GRPC port is 9120

* Bugfix: return runner pool accurately.

* Double locking

* Note purges as LBs stop talking to us

* Get the purging of old LBs working.

* Tweak: on restart, load runner set before making scaling decisions.

* more agent synchronization improvements

* Deal with the CP pulling out active hosts from under us.

* lock at lbgroup level

* Send request and receive response from runner.

* Add capacity check right before slot reservation

* Pass the full Call into the receive loop.

* Wait for the data from the runner before finishing

* force runner list refresh every time

* Don't init db and mq for pure runners

* adds shutdown of npm

* fixes broken log line

* Extract an interface for the Predictor used by the NPM

* purge drained connections from npm

* Refactor of the LB agent into the agent package

* removes capacitytest wip

* Fix undefined err issue

* updating README for poolmanager set up

* uses retrying dial for lb to npm connections

* Rename lb_calls to lb_agent now that all functionality is there

* Use the right deadline and errors in LBAgent

* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping

* abstracting gRPCNodePool

* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping

* Add some init checks for LB and pure runner nodes

* adding some useful debug

* Fix default db and mq for lb node

* removes unreachable code, fixes typo

* Use datastore as logstore in API nodes.

This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.

* creates placement abstraction and moves capacity APIs to NodePool

* removed TODO, added logging

* Dial reconnections for LB <-> runners

LB grpc connections to runners are established using a backoff strategy
in the event of reconnections; this keeps the LB up even if one of
the runners goes away, and reconnects to it as soon as it is back.

* Add a status call to the Runner protocol

Stub at the moment. To be used for things like draindown, health checks.

* Remove comment.

* makes assign/release capacity lockless

* Fix hanging issue in lb agent when connections drop

* Add the CH hash from fnlb

Select this with FN_PLACER=ch when launching the LB.

* small improvement for locking on reloadLBGmembership

* Stabilise the list of Runners returned by NodePool

The NodePoolManager makes some attempt to keep the list of advertised runner nodes as
stable as possible. Let's preserve this effort on the client side. The main point is
to attempt to keep the same runner at the same index in the []Runner returned by
NodePool.Runners(lbgid); the ch algorithm likes it when this is the case.

* Factor out a generator function for the Runners so that mocks can be injected

* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model

* fixes bug with nil runners

* Initial work for mocking things in tests

* fix for anonymous goroutine error

* fixing lb_test to compile

* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case

* Make GRPC port configurable, fix weird handling of web port too

* unit test reload Members

* check on runner creation failure

* adding nullRunner in case of failure during runner creation

* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.

* make capacityEntry private

* Change the runner gRPC bind address.

This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.

* Add support for multiple fixed runners to pool mgr

* Added harness for dataplane system tests, minor refactors

* Add Dockerfiles for components, along with docs.

* Doc fix: second runner needs a different name.

* Let us have three runners in system tests, why not

* The first system test running a function in API/LB/PureRunner mode

* Add unit test for Advertiser logic

* Fix issue with Pure Runner not sending the last data frame

* use config in models.Call as a temporary mechanism to override lb group ID

* make gofmt happy

* Updates documentation for how to configure lb groups for an app/route

* small refactor unit test

* Factor NodePool into its own package

* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations

* New dataplane with static runnerpool (#813)

Added static node pool as default implementation

* moved nullRunner to grpc package

* remove duplication in README

* fix go vet issues

* Fix server initialisation in api tests

* Tiny logging changes in pool manager.

Using `WithError` instead of `Errorf` when appropriate.

* Change some log levels in the pure runner

* fixing readme

* moves multitenant compute documentation

* adds introduction to multitenant readme

* Proper triggering of system tests in makefile

* Fix instructions about starting up the components

* Change db file for system tests to avoid contention in parallel tests

* fixes revisions from merge

* Fix merge issue with handling of reserved slot

* renaming nulb to lb in the doc and images folder

* better TryExec sleep logic, clean shutdown

In this change we implement a better way to deal with the sleep inside
the for loop while attempting to place a call.
We also added a clean way to shut down the connections with external
components when we shut down the server.

* System_test mysql port

set the mysql port for system tests to a different value from the one set for
the api tests, to avoid conflicts as they can run in parallel.

* change the container name for system-test

* removes flaky test TestRouteRunnerExecution pending resolution of issue #796

* amend remove_containers to remove newly added containers

* Rework capacity reservation logic at a higher level for now

* LB agent implements Submit rather than delegating.

* Fix go vet linting errors

* Changed a couple of error levels

* Fix formatting

* removes commented-out test

* adds snappy to vendor directory

* updates Gopkg and vendor directories, removing snappy and adding siphash

* wait for db containers to come up before starting the tests

* make system tests start API node on 8085 to avoid port conflict with api_tests

* avoid port conflicts with api_test.sh which are run in parallel

* fixes postgres port conflict and issue with removal of old containers

* Remove spurious println
2018-03-08 14:45:19 -08:00
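
Among many other things, this PR's bullets describe LB-to-runner connections that dial with backoff so the LB stays up while a runner is away. A stdlib-only sketch of such a retry loop follows; the real code dials gRPC, so this shape is an assumption:

```go
package lb

import (
	"context"
	"net"
	"time"
)

// dialRunnerWithBackoff retries a runner address with capped exponential
// backoff, so a runner that goes away is reconnected to as soon as it is back.
func dialRunnerWithBackoff(ctx context.Context, addr string) (net.Conn, error) {
	backoff := 100 * time.Millisecond
	const maxBackoff = 10 * time.Second
	for {
		conn, err := (&net.Dialer{}).DialContext(ctx, "tcp", addr)
		if err == nil {
			return conn, nil
		}
		// Wait out the backoff interval, unless the caller gives up first.
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```
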
Tolga Ceylan
89a1fc7c72 Response size clamp (#786)
*) Limit the http body or json response size to FN_MAX_RESPONSE_SIZE (default unlimited)
*) If the limit is exceeded, a 502 is returned with 'body too large' in the error message
2018-03-01 17:14:50 -08:00
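
One plausible implementation of such a clamp is a wrapped http.ResponseWriter; the sketch below is an assumption about the shape, not Fn's actual code:

```go
package server

import (
	"errors"
	"net/http"
	"os"
	"strconv"
)

// ErrTooLarge is mapped by the caller to a 502 with 'body too large'.
var ErrTooLarge = errors.New("body too large")

// limitWriter stops writing once the configured byte budget is spent.
type limitWriter struct {
	http.ResponseWriter
	remaining int64
}

func (w *limitWriter) Write(p []byte) (int, error) {
	if int64(len(p)) > w.remaining {
		return 0, ErrTooLarge
	}
	w.remaining -= int64(len(p))
	return w.ResponseWriter.Write(p)
}

// maxResponseSize reads FN_MAX_RESPONSE_SIZE; unset means unlimited,
// per the commit message.
func maxResponseSize() (int64, bool) {
	v := os.Getenv("FN_MAX_RESPONSE_SIZE")
	if v == "" {
		return 0, false
	}
	n, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		return 0, false
	}
	return n, true
}
```

A handler would wrap the real writer in a limitWriter when maxResponseSize reports a limit, and translate ErrTooLarge into the 502 described above.
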
Travis Reeder
5eb534f243 fix bug 2018-02-16 07:38:53 -08:00
Tolga Ceylan
c848fc6181 fn: hot container timer improvements (#751)
* fn: hot container timer improvements

With this change, we now allocate the timers
when the container starts and manage them via
stop/clear as needed, which should be not only more
efficient but also easier to follow.

For example, previously, if the eject timeout was
set to 10 secs, this could have delayed the idle timeout
by up to 10 secs as well. It is also no longer necessary to do
any math for elapsed time.

Now consumers avoid any requeuing when startDequeuer() is cancelled.
This was triggering additional dequeue/requeue cycles, causing
containers to wake up spuriously. Also, in startDequeuer()
we no longer remove the item from the actual queue and instead
leave this to acquire/eject, which sidesteps issues related
to an item landing in the channel but not being consumed, etc.
2018-02-12 14:12:03 -08:00
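
The allocate-once, stop/reset pattern described above, sketched with stdlib timers; the type and method names here are assumptions:

```go
package hot

import "time"

// containerTimers are allocated once at container start and then only
// stopped/reset, so the eject timeout can no longer delay the idle timeout
// and no elapsed-time math is needed.
type containerTimers struct {
	idle  *time.Timer
	eject *time.Timer
}

func newContainerTimers(idleTimeout, ejectTimeout time.Duration) *containerTimers {
	return &containerTimers{
		idle:  time.NewTimer(idleTimeout),
		eject: time.NewTimer(ejectTimeout),
	}
}

// pause stops a timer while the container is busy, draining the channel if
// the timer already fired so a later Reset is safe.
func pause(t *time.Timer) {
	if !t.Stop() {
		select {
		case <-t.C:
		default:
		}
	}
}

func (c *containerTimers) busy() {
	pause(c.idle)
	pause(c.eject)
}

func (c *containerTimers) idleAgain(idleTimeout, ejectTimeout time.Duration) {
	c.idle.Reset(idleTimeout)
	c.eject.Reset(ejectTimeout)
}
```
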
Tolga Ceylan
f27d47f2dd Idle Hot Container Freeze/Preempt Support (#733)
* fn: freeze/unfreeze and eject idle under resource contention
2018-02-07 17:21:53 -08:00
Travis Reeder
5a2602d42e Updated docs, cleaned things up, DRYs up function format stuff. (#688)
* Updated docs, cleaned things up, DRYs up function format stuff.

* deleted files

* updated bad link
2018-01-16 14:45:44 -08:00
Tolga Ceylan
aa17e3b7e0 fn: cpus documentation (#682) 2018-01-12 14:49:03 -08:00
Reed Allman
24aa911609 add FN_LOG_DEST for logs, fixup init (#663)
* add FN_LOG_DEST for logs, fixup init

* FN_LOG_DEST can point to a remote logging place (papertrail, whatever)
* FN_LOG_PREFIX can add a prefix onto each log line sent to FN_LOG_DEST

default remains stderr with no prefix. users need this to send logs to various
logging backends; though it could be done operationally, this is somewhat
simpler.

we were doing some configuration stuff inside of init() for some of the global
things. even though they're global, it's nice to keep them all in the normal
server init path.

we have had strange issues with the tracing setup; I tested the last repro of
this repeatedly and didn't have any luck reproducing it, though maybe it comes
back.

* add docs
2018-01-09 14:27:50 -08:00
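
A hedged sketch of how an FN_LOG_DEST-style destination could be resolved; the supported schemes here are assumptions, and only the stderr default is from the commit:

```go
package logs

import (
	"fmt"
	"io"
	"net"
	"net/url"
	"os"
)

// openLogDest returns a writer for FN_LOG_DEST, defaulting to stderr.
func openLogDest() (io.Writer, error) {
	dest := os.Getenv("FN_LOG_DEST")
	if dest == "" {
		return os.Stderr, nil // default remains stderr, per the commit
	}
	u, err := url.Parse(dest)
	if err != nil {
		return nil, fmt.Errorf("invalid FN_LOG_DEST %q: %v", dest, err)
	}
	switch u.Scheme {
	case "tcp", "udp":
		// a remote logging place (papertrail, whatever)
		return net.Dial(u.Scheme, u.Host)
	case "file", "":
		return os.OpenFile(u.Path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	default:
		return nil, fmt.Errorf("unsupported FN_LOG_DEST scheme %q", u.Scheme)
	}
}
```

FN_LOG_PREFIX could then be applied with, e.g., log.New(w, prefix, 0) from the stdlib.
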
Ben Meier
aac8ac3027 Added private registries documentation (#654) 2018-01-09 10:29:47 -08:00
Alexander Bransby-Sharples
1df4b46c56 [skip ci] Added the API_CORS env var to the documented options (#538) 2017-12-08 13:29:44 +00:00
Avi Miller
251a9b34d6 [skip ci] Fix the name of Oracle Linux to the official product name. (#584)
Signed-off-by: Avi Miller <avi.miller@oracle.com>
2017-12-08 11:04:56 +00:00
Derek Schultz
8de31c6b50 add link to Fn Helm Chart (#573)
Removes old k8s doc and script in favor of linking to the new Helm Chart.
2017-12-06 14:35:11 -07:00
Travis Reeder
0798f9fac8 Middleware upgrade (#554)
* Adds root level middleware

* Added todo

* Better way for extensions to be added.

* Bad conflict merge?
2017-12-05 08:22:03 -08:00
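
Root-level middleware of this kind is usually a handler-wrapping chain; a minimal sketch, assuming a shape like the following (not necessarily Fn's actual Middleware type):

```go
package server

import "net/http"

// Middleware wraps a handler, running logic before and/or after it.
type Middleware func(next http.Handler) http.Handler

// chain applies middleware in order around a base handler, so extensions can
// register behavior at the root of the server without touching route code.
func chain(base http.Handler, mws ...Middleware) http.Handler {
	h := base
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}
```
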
Dario Domizioli
b083390d6b Documentation notes about running Fn on SELinux systems (#507)
Documentation notes about running Fn on SELinux systems
2017-12-01 15:55:00 +00:00
Denis Makogon
5c68a88599 Fn-prefix everything (#545)
* Fn-prefix everything

Closes: #492

* Global replacement

* missed one fn_
2017-11-29 17:50:24 -08:00
Travis Reeder
a67d5a6290 Drop viper dependency (#550)
* Removed viper dependency.

* removed from glide files
2017-11-28 15:46:17 -08:00
Denis Makogon
81e3c4387c Docker compose file for simpler dev env (#526)
* Docker compose file for simpler dev env

* Updating readme, adding UI to compose

* Define dependencies between services

* Prometheus. Grafana

* Link Prometheus to Grafana service

* Addressing review comments

* Linking compose doc to common table of content
2017-11-28 13:31:34 -07:00
Derek Schultz
e27142cb8a use kubectl JSONPath (#544)
* remove extra export

* use kubectl jsonpath

* remove http prefix; it's included in API_URL

* show example of fn call
2017-11-28 11:14:45 -07:00
Reed Allman
2d8c528b48 S3 loggyloo (#511)
* add minio-go dep, update deps

* add minio s3 client

minio has an s3-compatible api, is an open source project, and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hair ball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local.

* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs; I have another branch lined up to make a plain
text log API and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs

some notes: notably we aren't cleaning up these logs. there is a ticket
already to make a Mr. Clean who wakes up periodically and nukes old stuff, so
am punting on any api design around some kind of TTL deletion of logs. there are
a lot of options really for Mr. Clean; we can notably defer to him when apps
are deleted, too, so that app deletion is fast and then Mr. Clean will just
clean them up later (seems like a good option).

have not tested against BMC object store, which has an s3 compatible API. but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.

closes #481
closes #473

* add log not found error to minio land
2017-11-20 17:39:45 -08:00
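
The resulting log store API plausibly looks something like this sketch, an assumption based on the bullets above, with GetLog returning an io.Reader:

```go
package logs

import (
	"context"
	"io"
)

// LogStore is an assumed shape of the s3-backed log storage api: logs are
// streamed in and out, and there is no deletion endpoint anymore (that is
// left to a future cleanup job).
type LogStore interface {
	// InsertLog stores the log for a call, reading it to completion.
	InsertLog(ctx context.Context, appName, callID string, log io.Reader) error
	// GetLog streams a stored log back; the caller reads (and may gzip) it.
	GetLog(ctx context.Context, appName, callID string) (io.Reader, error)
}
```
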
Travis Reeder
ab18e467fa updates functions -> fnserver (#516)
* updates functions -> fn-server and fnlb -> fn-lb

* changed to fnserver and fnlb
2017-11-17 15:53:44 -08:00
Derek Schultz
c281f96486 K8s docs update (#499)
* split fn-ui to its own service

* add fn namespace

* update path

* add namespace flag for kubectl

* simplify grabbing minikube IP and port

* typo: FUNCTIONS -> API_URL
2017-11-16 10:45:25 -07:00
Reed Allman
61b416a9b5 automagic sql db migrations (#461)
* adds migrations

closes #57

migrations only run if the database is not brand new. brand new
databases will contain all the right fields when CREATE TABLE is called;
this is mostly for readability more than efficiency (we do not want to have
to go through all of the database migrations to ascertain what columns a table
has). upon startup of a new database, the migrations will be analyzed and the
highest version set, so that future migrations will be run. this should also
avoid running through all the migrations, which could bork db's easily enough
(if the user just exits from impatience, say).

otherwise, all migrations that a db has not yet seen will be run against it
upon startup; this should be seamless to the user whether they had a db that
had 0 migrations run on it before or N. this means users will not have to
explicitly run any migrations on their dbs nor see any errors when we upgrade
the db (so long as things go well). if migrations do not go so well, users
will have to manually repair dbs (this is the intention of the `migrate`
library and it seems sane). this should be rare, and I'm unsure how
best to resolve it, not having gone through this myself; I would assume it will
require running down migrations and then manually updating the migration
field. in any case, docs once one of us has to go through this.

migrations are written to files and checked into version control, and then
go-bindata generates those files into go code that is compiled in to be consumed
by the migrate library (so that we don't have to put migration files on any
servers) -- this is also in vcs. this seems to work ok. I don't like having to
use the separate go-bindata tool but it wasn't really hard to install and then
go generate takes care of the args. adding migrations should be relatively
rare anyway, but I tried to make it pretty painless.

1 migration to add created_at to the route is done here as an example of how
to do migrations, as well as testing these things ;) -- `created_at` will be
`0001-01-01T00:00:00.000Z` for any existing routes after a user runs this
version. we could spend the extra time adding today's date to any outstanding
records, but that's not really accurate, the main thing is nobody will have to
nuke their db with the migrations in place & we don't have any prod clusters
really to worry about. all future routes will correctly have `created_at` set,
and plan to add other timestamps but wanted to keep this patch as small as
possible so only did routes.created_at.

there are tests that a spankin' new db will work as expected, as well as that a db
after running all down & up migrations works. the latter tests only run on mysql
and postgres, since sqlite3 does not like ALTER TABLE DROP COLUMN; up
migrations will need to be tested manually for sqlite3 only, but in theory if
they are simple and work on postgres and mysql, there is a good likelihood of
success; the new migration from this patch works on sqlite3 fine.

for now, we need to use `github.com/rdallman/migrate` to move forward, as
getting integrated into upstream is proving difficult due to
`github.com/go-sql-driver/mysql` being broken on master (yay dependencies).
Fortunately for us, we vendor a version of the `mysql` bindings that actually
works, thus, we are capable of using the `mattes/migrate` library with success
due to that. this also will require go1.9 to use the new `database/sql.Conn`
type, CI has been updated accordingly.

some doc fixes too from testing.. and of course updated all deps.

anyway, whew. this should let us add fields to the db without busting
everybody's dbs. open to feedback on better ways, but this was overall pretty
simple despite futzing with mysql.

* add migrate pkg to deps, update deps

use rdallman/migrate until we resolve in mattes land

* add README in migrations package

* add ref to mattes lib
2017-11-14 12:54:33 -08:00
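
The bootstrap rule described in the first paragraphs might look like this sketch; the helper names are assumptions, and the real code drives the `migrate` library:

```go
package datastore

// DB is an assumed minimal surface for the migration bootstrap logic.
type DB interface {
	IsEmpty() (bool, error)
	CreateTables() error
	SetMigrationVersion(v int) error
	RunPendingMigrations() error
}

// latestMigrationVersion is illustrative; e.g. the routes.created_at
// migration added by this commit would be version 1.
func latestMigrationVersion() int { return 1 }

// bootstrap implements the rule from the commit: a brand-new database is
// created at the latest schema and stamped with the highest migration
// version, while an existing database runs only migrations it has not seen.
func bootstrap(db DB) error {
	isNew, err := db.IsEmpty()
	if err != nil {
		return err
	}
	if isNew {
		// CREATE TABLE already produces the latest schema; just record the
		// highest known version so future migrations still run.
		if err := db.CreateTables(); err != nil {
			return err
		}
		return db.SetMigrationVersion(latestMigrationVersion())
	}
	// Existing database: apply every migration newer than its current version.
	return db.RunPendingMigrations()
}
```
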
Chad Arimura
cb0ff466a0 small changes 2017-11-13 09:39:24 -08:00
Chad Arimura
5822072bbd few small changes 2017-11-13 09:35:43 -08:00
Travis Reeder
6c21103057 Rollback dockerfile (image too big) and add docker run command to docs. (#488) 2017-11-09 16:12:45 +01:00
Travis Reeder
633b26594e minor 2017-10-25 14:13:26 +02:00
Travis Reeder
d080c23981 First draft of modifying RunnerListener to CallListener to get it closer to the action (and named better). 2017-10-25 14:13:25 +02:00
Aurélien Pupier
f1d82fccaf Fix typo balanacer --> balancer (#384) 2017-10-06 12:29:24 +03:00
Matthew Hartman
d0ea620add Correct header 2017-10-03 06:30:52 -04:00
James Jeffrey
c7f3066c75 Update references remove refs to treeder oracle funcy (#376)
* Remove lots of refs to iron and funcy oracle etc.

* more ref replacements

* Replacing more refs. Treeder

* Use Fn not FN
2017-09-29 16:22:15 -07:00
Derek Schultz
b9bcb70f51 missed a setting 2017-09-28 13:04:08 -06:00
Derek Schultz
232069cbca updates docs for running fn on k8s
- removes docker swarm doc
2017-09-28 12:59:21 -06:00
Derek Schultz
2f090d0aab remove docker swarm howto 2017-09-28 10:44:16 -06:00
Reed Allman
71a88a991c hang the runner, agent=new sheriff (#270)
* fix docker build

this is trivially incorrect since glide doesn't actually provide reproducible
builds. the idea is to build with the deps that we have checked into git, so
that we actually know what code is executing so that we might debug it...

i'm all for the multi-stage build instead of what we had, but adding the glide step is
wrong. i added a loud warning so as to discourage this behavior in the future.

* hang the runner, agent=new sheriff

tl;dr agent is now runner, with a hopefully saner api

the general idea is to get rid of all the various 'task' structs now, change our
terminology to only be 'calls' now, push a lot of the http construction of a
call into the agent, allow calls to mutate their state around their execution
easily and to simplify the number of code paths, channels and context timeouts
in something [hopefully] easy to understand.

this introduces the idea of 'slots' which are either hot or cold and are
separate from reserving memory (memory is denominated in 'tokens' now).
a 'slot' is essentially a container that is ready for execution of a call, be
it hot or cold (it just means different things based on hotness). taking a
look into Submit should make these relatively easy to grok.

sorry, things were pretty broken especially wrt timings. I tried to keep good
notes (maybe too good), to highlight stuff so that we don't make the same
mistakes again (history repeating itself blah blah quote). even now, there is
lots of work to do :)

I encourage just reading the agent.go code; Submit is really simple and
there's a description of how the whole thing works at the head of the file
(after TODOs). call.go contains code for constructing calls, as well as Start
/ End (small atm). I did some amount of code massaging to try to make things
simple / straightforward / fit reasonable mental model, but as always am open
to critique (the more negative the better) as I'm just one guy and wth do i
know...

-----------------------------------------------------------------------------

below enumerates a number of changes as briefly as possible (heh..):

models.Call all the things

removes models.Task as models.Call is now what it previously was.
models.FnCall is gone now in favor of models.Call, despite the datastore
only storing a few fields of it [for now]. we should probably store entire
calls in the db, since app & route configurations can change at any given
moment, it would be nice to see the parameters of each call (costs db space,
obviously).

this removes the endpoints for getting & deleting messages; we were just
looping back to localhost to call the MQ (wtf? this was for iron integration i
think), and now we just call the MQ directly.

changes the name of FnLog to LogStore; confusing because there's also a
`FuncLogger` which uses the LogStore (punting). removes other `Fn`-prefixed
structs (redundant naming convention).

removes some unused and/or weird structs (IDStatus, CompleteTime)

updates the swagger

makes the db methods consistent to use 'Call' nomenclature.

remove runner nuisances:

* push down registry stuff to docker driver
* remove Environment / Stats stuff of yore
* remove unused writers (now in FuncLogger)
* remove 2 of the task types, old hot stuff, runner, etc

fixes ram available calculation on startup to not always be 300GB (helps a lot
on a laptop!)

format for DOCKER_AUTH env now is not a list but a map (there are no docs,
would prefer to get rid of this altogether anyway). the ~/.docker/cfg expected
format is unchanged.

removes the arbitrary task queue; if a machine is out of ram we can probably just
time out without queueing... (can open separate discussion) in any case the
old one didn't really account well for hot tasks; it just lined everyone up in
the task queue if there wasn't a place to run hot and then timed them out
[even if a slot became free].

removes HEADER_ prefixing on any headers in the request to invoke a call.
(this was inconsistent with cli for test anyway)

removes TASK_ID header sent in to hot only (this is a dupe of FN_CALL_ID,
which has not been removed)

now user functions can reply directly to the client. this means that for
cold containers if they write to stdout it will send a 200 + headers. for
hot containers, the user can reply directly to the client from the container,
i.e. with its preferred status code / headers (vs. always getting a 200).
the dispatch itself is a little http specific atm, i think we can add an
interchange format but the current version is easily extended to add json for
now, separate discussion. this eliminates a lot of the request/response
rewriting and buffering we were doing (yey). now Dispatch ONLY does input and
output, vs. managing the call timeout and having access to a call's fields.

cache is pushed down into the agent now instead of the front end; would like to
push it down to the datastore actually, but it's here for now anyway. cache
delete functions removed (b/c fn is distributed anyway?). added app caching,
should help with latency.

in general, a lot of server/runner.go got pushed down into the agent. i think
it will be useful in testing to be able to construct calls without having to
invoke http handlers + async also needs to construct calls without a handler.

safe shutdown actually works now for everything (leaked / didn't wait on
certain things before)

now we're waiting for hot slots to open up while we're attempting to get ram
to launch a container if we didn't find any hot slots to run the call in
immediately. we can change this policy really easily now (no more channel
jungle; still some channels). also looking for somewhere else to go while the
container is launching now. slots now get sent _out_ of a container, vs.
a container receiving calls, which makes this kind of policy easier to
implement. this fixes a number of bugs around things like trying to execute
calls against containers that have not and may never start and trying to
launch a bazillion containers when there are no free containers. the driver api
underwent some changes to make this possible (relatively minimal, added Wait).
the easiest way to think about this is that allocating ram has moved 'up'
instead of just wrapping launching containers, so that we can select on a
channel trying to find ram.

not dispatching hot calls to containers that died anymore either...

the timeout is now started at the beginning of Submit, rather than Dispatch or
the container itself having to manage the call timeout, which was an
inaccurate way of doing things since finding a slot / allocating ram / pulling
image can all take a non-trivial (timeout amount, even!) amount of time. this
makes for much more reasonable response times from fn under load, there's
still a little TODO about handling cold+timeout container removal response
times but it's much improved.

if call.Start is called with < call.timeout/2 time left, then the call will
not be executed and a timeout is returned. we can discuss. this makes async play
_a lot_ nicer, specifically. for large timeouts, /2 makes less sense.

env is no longer getting uppercased (admittedly, this can look a little weird
now). our whole route.Config/app.Config/env/headers stuff probably deserves a
whole discussion...

sync output no longer has the call id in json if there's an error / timeout.
we could add this back to signify that it's _us_ writing these but this was
out of place. FN_CALL_ID is still shipped out to get the id for sync calls,
and async [server] output remains unchanged.

async logs are now an entire raw http request (so that a user can write a 400
or something from their hot async container)

async hot now 'just works'

cold sync calls can now reply to the client before container removal, which
shaves a lot of latency off of those (still eat start). still need to figure
out async removal if timeout or something.

-----------------------------------------------------------------------------

i've located a number of bugs that were generally inherited, and also added
a number of TODOs in the head of the agent.go file regarding robustness we
probably need to add. this is at least at parity with the previous
implementation, to my knowledge (hopefully/likely a good bit ahead). I can
memorialize these to github quickly enough, not that anybody searches before
adding bugs anyway (sigh).

the big thing to work on next imo is async being a lot more robust,
specifically to survive fn server failures / network issues.

thanks for review (gulp)
2017-09-05 20:32:51 +03:00
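
The slot/token mechanics described above ("allocating ram has moved 'up'", "looking for somewhere else to go while the container is launching") suggest a select-based wait along these lines; the types and channel shapes here are assumptions:

```go
package agent

import (
	"context"
	"errors"
)

// slot is a container ready to run a call; token is reserved memory for
// launching one. Both are stand-ins for the real types.
type slot struct{}
type token struct{}

var errTimeout = errors.New("timed out waiting for slot")

// getSlot waits for a hot slot while also trying to reserve ram; if ram
// arrives first we launch a container but keep listening for hot slots,
// since one may free up before the new container is ready.
func getSlot(ctx context.Context, hot <-chan *slot, ram <-chan *token,
	launch func(*token) <-chan *slot) (*slot, error) {
	select {
	case s := <-hot:
		// A hot container freed up first; use it immediately.
		return s, nil
	case t := <-ram:
		launched := launch(t)
		select {
		case s := <-hot: // a hot slot may still open while we launch
			return s, nil
		case s := <-launched:
			return s, nil
		case <-ctx.Done():
			return nil, errTimeout
		}
	case <-ctx.Done():
		// The timeout starts at the beginning of Submit, so slow
		// slot-finding or image pulls count against the call's deadline.
		return nil, errTimeout
	}
}
```
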
Travis Reeder
f559acd7ed Renamed a bunch of images to use fnproject org. (#239)
* Renamed a bunch of images to use fnproject org.

* Multi-stage build for Docker.

* Added tmp vendor dirs to gitignore.

* Run docker-build at beginning of test.
2017-08-23 22:43:53 +03:00
Reed Allman
6a7973e6b6 plumb all config fields into task
the mqs are storing a models.Task, which was not incorporating all the fields
that are in a task.Config. I would very much like to merge these two things,
but expect to do this in a future restructuring as both are used widely and
not cordoned off properly (Config has a channel, stdin, stdout, stderr -- and
isn't just a 'config', so to speak, as Task is).

Since a task.Config is what is used to actually run a container, the result of
the aforementioned deficiency was #193, where tasks were improperly configured
and run (namely, memory was wrong).

async tasks still cannot be hot; they will be reverted to default format. I
would also like to fix this (also part of restructuring). I actually started
doing this, hence the changes to those files (the surface area of the change
is small and discourages improper future use, so I've left what I've done).

this will:

closes #193
closes #195
closes #154

removes many unused fields in models.Task, since we have not implemented
retries. priority & delay are left, even though they are not used either;
the main goal of this is to resolve #193, and both these fields are strongly
plumbed into all the mqs, so punting on those two.
2017-08-03 06:33:30 -07:00
Brandon Philips
c4af465ccc kubernetes: add instructions for private services 2017-08-03 18:33:08 -07:00
Reed Allman
dc5e67b6d2 add opentracing spans for metrics 2017-07-25 08:55:22 -07:00
Reed Allman
4e52c595d2 merge datastores into sqlx package
replace default bolt option with sqlite3 option. the story here is that we
just need a working out of the box solution, and sqlite3 is just fine for that
(actually, likely better than bolt).

with sqlite3 supplanting bolt, we mostly have sql databases. so remove redis
and then we just have one package that has a `sql` implementation of the
`models.Datastore` and lean on sqlx to do query rewriting. this does mean
queries have to be formed a certain way and likely have to be ANSI SQL (no
special features), but we weren't using any anyway; our base api is
basically done, and we can easily extend this api as needed to only implement
certain methods in certain backends if we need to get cute.

* remove bolt & redis datastores (can still use as mqs)
* make sql queries work on all 3 (maybe?)
* remove bolt log store and use sqlite3
* shove the FnLog shit into the datastore shit for now (free pg/mysql logs...
just for demos, etc, not prod)
* fix up the docs to remove bolt references
* add sqlite3, sqlx dep
* fix up tests & mock stuff, make validator less insane
* remove put & get in datastore layer as nobody is using them.

this passes tests which at least seem like they test all the different
backends. if we trust our tests then this seems to work great. (tests `make
docker-test-run-with-*` work now too)
2017-07-07 01:30:02 -07:00
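
A sketch of the sqlx rebinding approach this describes: one query written with `?` placeholders works across sqlite3, MySQL, and Postgres. The table and columns here are illustrative, not Fn's actual schema:

```go
package datastore

import (
	_ "github.com/mattn/go-sqlite3" // or the mysql / postgres drivers

	"github.com/jmoiron/sqlx"
)

// getRoute runs one ANSI-ish query on any of the three backends; Rebind
// rewrites `?` into the driver's placeholder style (e.g. $1 for postgres).
func getRoute(db *sqlx.DB, appName, path string) (map[string]interface{}, error) {
	query := db.Rebind(`SELECT * FROM routes WHERE app_name = ? AND path = ?`)
	row := db.QueryRowx(query, appName, path)
	dest := map[string]interface{}{}
	err := row.MapScan(dest)
	return dest, err
}
```
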
James
8a3edb8309 All of the changes for func logs 2017-06-19 11:38:11 -07:00
Denis Makogon
31b4ac4516 Address broken tests 2017-05-30 08:50:53 -07:00
Travis Reeder
9cc12b4b12 Remove iron... 2017-05-18 18:59:34 +00:00
Travis Reeder
4b9bba352d Rename location. 2017-05-15 11:00:15 -07:00
Travis Reeder
615ae5c36f Mass s&r: iron-io -> kumokit 2017-04-19 09:49:12 -06:00
Travis Reeder
10f3178ae9 Switching to new dep tool (#616)
* making things work

* #506 - Add ability to login to a private docker registry

* Rolling back "make things work" to test them out more.

* Rolling back "make things work" to test them out more.

* credentials from docker/config.json if ENV is missing

* should get docker auth info just in the init

* update glide lock

* update glide

* Switched to new go dep tool; glide is too frikin annoying.

* Updated circle builds to use dep

* Added GOPATH/bin to path.

* Added GOPATH/bin to path.

* Using regular make test, instead of the docker one (not sure why it was using the docker one?).
2017-04-07 11:22:08 -07:00