* fn: datastore dial retry should be configurable
Setting this to a high value (60) in api and system tests.
* fn: netcat connect is not meaningful in wait for DB.
* move calls to logstore, implement s3
closes #482
the basic motivation is that logs and calls will be stored with a very high
write rate, while apps and routes will be updated relatively infrequently; it
follows that we should split up their storage locations and back them with
appropriate storage facilities. s3 is a better candidate than a sql database
for ingesting high write rate data, and it will make that data set easier to
manage. see #482 for more detailed justification.
summary:
* calls api moved from datastore to logstore
* logstore used in front-end to serve calls endpoints
* agent now throws calls into logstore instead of datastore
* s3 implementation of calls api for logstore
* s3 logs key changed (nobody using / nbd?)
* removed UpdateCall api (not in use)
* moved call tests from datastore to logstore tests
* mock logstore now tested (prev. sqlite3 only)
* logstore tests run against every datastore (mysql, pg; prev. only sqlite3)
* simplify NewMock in tests
commentary:
brunt of the work is implementing the listing of calls in GetCalls for the s3
logstore implementation. the GetCalls API requires returning items in
newest-to-oldest order, while the s3 api lists keys in ascending
lexicographic order. an easy thing to do here seemed to be to reverse the
encoding of our id format so that it sorts lexicographically descending,
since ids are time based, already encoded to be lexicographically sortable,
and de-duped (unlike created_at). this seems to work pretty well. it's not
perfect around the boundaries of to_time and from_time, where a small number
of results may be omitted, but getting 6999 results instead of 7000 when
listing calls between 3:00pm and 4:00pm Monday 3 weeks ago doesn't seem like
a deal breaker to me. without to_time and from_time, there are no issues in
listing results at all. we could encode created_at and use that instead, but
then a point lookup (GetCall) would need an additional marker: we would have
to search for a created_at stamp and then scan the ids around it until we
found the matching one. so, the tradeoff here seems worth it. there is an
additional optimization around to_time to seek over newer results (since we
have descending order).
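to make the reversal concrete, here's a minimal sketch of the trick; the
alphabet and function names here are assumptions for illustration, not the
actual id package code:

```go
package main

import "fmt"

// assumed alphabet; the real id package has its own sortable encoding.
const alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"

var mirror [256]byte

func init() {
	for i := 0; i < len(alphabet); i++ {
		mirror[alphabet[i]] = alphabet[len(alphabet)-1-i]
	}
}

// reverse maps an ascending-sortable id to its descending twin; it is its
// own inverse, so reverse(reverse(id)) == id.
func reverse(id string) string {
	b := []byte(id)
	for i := range b {
		b[i] = mirror[b[i]]
	}
	return string(b)
}

func main() {
	older, newer := "0001abc", "0002abc" // time-ordered ids
	// the reversed encodings sort the other way around, so s3's ascending
	// listing hands back the newest call first.
	fmt.Println(reverse(older) > reverse(newer)) // true
}
```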
The other complication in GetCalls is returning the list of calls for a given
path. the keys for point lookups are only app_id + call_id, but we need to
list by path within an app as well; this leads to the 'marker' collection,
which is sorted by app_id + path + call_id to allow quick listing by path.
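a rough picture of the two keyspaces; the key shapes and names here are
hypothetical (the real prefixes live in the code), and reverseID stands in
for the descending encoding sketched above:

```go
package s3 // hypothetical

import "fmt"

// reverseID stands in for the descending id encoding sketched above.
func reverseID(id string) string { return id }

// point lookups only need app_id + call_id.
func callKey(appID, callID string) string {
	return fmt.Sprintf("calls/%s/%s", appID, reverseID(callID))
}

// the 'marker' keyspace exists purely so that listing calls for one path is
// a cheap prefix scan over markers/{app_id}/{path}/.
func markerKey(appID, path, callID string) string {
	return fmt.Sprintf("markers/%s/%s/%s", appID, path, reverseID(callID))
}
```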
All in all, it should be pretty straightforward to follow the implementation.
I tried to be lavish with the comments; please let me know if anything needs
further clarification in the code.
The implementation itself has some glaring inefficiencies, but they're
relatively minor: the json encoding is kinda lazy, but workable; s3 doesn't
offer batch retrieval, so we point look up each call one by one in GetCalls;
buffers aren't re-used. the seeking around the keys should all be relatively
fast, though, and this isn't a hot path for reads, so I'm not too worried
about performance (need to make a cut point and turn this in!).
Interestingly, in testing, minio performs significantly worse than pg for
storing both logs and calls (or just logs; I tested that too). minio seems to
have really high cpu consumption, but in any event we won't be using minio,
we'll be using a cloud object store that implements the s3 api. this is
mostly a knock on using minio for high performance, not really anything to do
with this change; just thought it was interesting.
I think it's safe to remove UpdateCall; admittedly, this made implementing
the s3 api a lot easier. it may also be something we never need: it is unused
at present and was only in the cards for a previous hybrid implementation,
which we've now abandoned. if we need it, we can always resurrect it from
git.
Also not worried about changing the log key. we need to put a prefix on this
thing anyway, and I don't think anybody is using it yet. it simply means old
logs won't show up through the API, which doesn't seem like a deal breaker --
new logs will appear fine.
future:
TODO make logstore implementation optional for datastore, check in front-end
at runtime and offer a nil logstore that errors appropriately
TODO low-hanging-fruit optimizations: json encoding, re-using buffers for
download, getting multiple calls at a time; the id reverse encoding could be
optimized like the normal encoding so it isn't n^2
TODO api for range removal of logs and calls
* address review comments
* push id to_time magic into id package
* add note about s3 key sizes
* fix validation check
* App ID
* Clean-up
* Use ID or name to reference apps
* Can use app by name or ID
* Get rid of AppName for routes API and model
routes API is completely backwards-compatible
routes API accepts both app ID and name
* Get rid of AppName from calls API and model
* Fixing tests
* Get rid of AppName from logs API and model
* Restrict API to work with app names only
* Addressing review comments
* Fix for hybrid mode
* Fix rebase problems
* Addressing review comments
* Addressing review comments pt.2
* Fixing test issue
* Addressing review comments pt.3
* Updated docstring
* Adjust UpdateApp SQL implementation to work with app IDs instead of names
* Fixing tests
* fmt after rebase
* Make tests green again!
* Use GetAppByID wherever it is necessary
- adding new v2 endpoints to keep hybrid api/runner mode working
- extract CallBase from Call object to expose that to a user
(it doesn't include an app reference, as with all our other API objects)
* Get rid of GetAppByName
* Adjusting server router setup
* Make hybrid work again
* Fix datastore tests
* Fixing tests
* Do not ignore app_id
* Resolve issues after rebase
* Updating test to make it work as it was
* Tabula rasa for migrations
* Adding calls API test
- we need to ensure we give "App not found" for a missing app and a missing call in the first place
- making the previous test work (requesting a missing call for an existing app)
* Make datastore tests work fine with correctly applied migrations
* Make CallFunction middleware work again
had to adjust its implementation to set app ID before proceeding
* The biggest rebase ever made
* Fix 8's migration
* Fix tests
* Fix hybrid client
* Fix tests problem
* Increment app ID migration version
* Fixing TestAppUpdate
* Fix rebase issues
* Addressing review comments
* Renew vendor
* Updated swagger doc per recommendations
* move mattes migrations to migratex
* changes format of migrations to migratex format
* updates test runner to use new interface (double checked this with printlns,
the tests go fully down and then up, and work on pg/mysql)
* remove mattes/migrate
* update tests from deps
* update readme
* fix other file extensions
* Use retry func while trying to ping SQL datastore
- implements retry func specifically for SQL datastore ping
- fmt fixes
- using sqlx.Db.PingContext instead of sqlx.Db.Ping
- propagate context to SQL datastore
* Rely on context from ServerOpt
* Consolidate log instances
* Cleanup
* Fix server usage in API tests
* route updated_at
* add app created at, fix some route updated_at bugs
* add app updated_at
TODO need to add tests through the front end
TODO for validation we don't really want to use the validate wrapper, since
it's a programmer error and not a user error; hopefully tests block this.
* add tests for timestamps to exist / change on apps&routes
* route equals done, fix tests with this
* fix up the equals sugar
* add swagger
* fix rebase
* precisely allocate maps in clone
* vetted
* meh
* fix api tests
* so it begins
* add clarification to /dequeue, change response to list to future proof
* Specify that runner endpoints are also under /v1
* Add a flag to choose operation mode (node type).
This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).
The additional modes are:
* API - the full API is available, but no functions are executed by the
node. Async calls are placed into a message queue, and synchronous
calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
synchronous invocation requests are supported, but asynchronous
requests are placed onto the message queue, so might be handled by
another runner.
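a sketch of how the switch might be wired up -- the env var name is from this
patch, but the string values ("api", "runner") and type names here are
assumptions:

```go
package main

import (
	"log"
	"os"
)

type NodeType int

const (
	ServerTypeFull   NodeType = iota // full API + sync/async execution (default)
	ServerTypeAPI                    // API only: async -> MQ, sync -> API error
	ServerTypeRunner                 // invocation/route API only
)

func nodeTypeFromEnv() NodeType {
	switch os.Getenv("FN_NODE_TYPE") {
	case "api":
		return ServerTypeAPI
	case "runner":
		return ServerTypeRunner
	case "":
		return ServerTypeFull // existing behaviour if unset
	default:
		log.Fatalf("unknown FN_NODE_TYPE %q", os.Getenv("FN_NODE_TYPE"))
		return ServerTypeFull // unreachable
	}
}

func main() { log.Println(nodeTypeFromEnv()) }
```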
* Add agent type and checks on Submit
* Sketch of a factored out data access abstraction for api/runner agents
* Fix tests, adding node/agent types to constructors
* Add tests for full, API, and runner server modes.
* Added atomic UpdateCall to datastore
* adds in server side endpoints
* Made ServerNodeType public because tests use it
* fix test build
* add hybrid runner client
pretty simple go api client that covers surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so put it in `/clients/hybrid` but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and use the real endpoints next and then wrap this up
in the DataAccessLayer stuff.
* gracefully handles errors from fn
* handles backoff & retry on 500s (sketched below)
* will add to existing spans for debuggo action
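a minimal sketch of the backoff & retry behaviour mentioned above; the names
are assumed, and the real client wraps each endpoint call in something like
this:

```go
package hybrid // hypothetical package name

import (
	"context"
	"net/http"
	"time"
)

// doWithRetry retries on transport errors and 5xx responses, backing off
// exponentially; 2xx and 4xx responses are returned to the caller as-is.
func doWithRetry(ctx context.Context, do func() (*http.Response, error)) (*http.Response, error) {
	backoff := 100 * time.Millisecond
	for {
		resp, err := do()
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if resp != nil {
			resp.Body.Close() // drain the failed attempt before retrying
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
}
```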
* minor fixes
* meh
* add error to call model
closes #331
previously, for async, this error was masked completely, even if it was
something useful like the image not existing. for sync, the error was
returned in the http response, but now it's also being stored. this error can
cover a lot of landscape: it could be an error getting a slot, pulling an
image, or running a container, among other things. anyway, it's no longer
being masked. we can likely improve it for specific cases we run into in the
future, but it's open ended at the moment and not masked the way some errors
in sync http responses are (503 non-models.APIError), for now.
* tucks in callTrigger stuff to keep api clean
* adds swagger
* adds migration
* adds tests for datastore and agent to ensure behavior
* pull images before tests are run
* gofmt migrations file
* add per call stats field as histogram
this will add a histogram of up to 240 data points of call data, produced
every second, stored at the end of a call invocation in the db. the same
metrics are also still shipped to prometheus (prometheus has the
not-potentially-reduced version). for the API reference, see the updates to
the swagger spec, this is just added onto the get call endpoint.
this does not add any extra db calls and the field for stats in call is a json
blob, which is easily modified to add / omit future fields. this is just
tacked on to the call we're making to InsertCall, and expect this to add very
little overhead; we are bounding the set to be relatively small, planning to
clean out the db of calls periodically, functions will generally be short, and
the same code used at a previous firm did not cause a notable db size increase
with production workload that is worse, wrt histogram size (I checked). the
code changes are really small aside from changing to strfmt.DateTime,
adding a migration and implementing sql.Valuer; needed to slightly modify the
swap function so that we can safely read `call.Stats` field to upload at end.
with the full histogram in hand, we can compute max/min/average/median/growth
rate/bernoulli distributions/whatever very easily in a UI or tooling. in
particular, this data is easily chartable [for a UI], which is beneficial.
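the blob handling might look roughly like this; the Stat shape here is an
assumption (the real field uses strfmt.DateTime), but the Value/Scan pair is
the json-blob approach described above:

```go
package models

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
)

// Stat is one data point, produced once per second during a call.
type Stat struct {
	Timestamp string           `json:"timestamp"`
	Metrics   map[string]int64 `json:"metrics"`
}

// Stats is the whole histogram, bounded to ~240 points.
type Stats []Stat

// Value implements driver.Valuer: the histogram goes into the db as one json
// blob, so adding/omitting fields later needs no schema change.
func (s Stats) Value() (driver.Value, error) {
	return json.Marshal(s)
}

// Scan implements sql.Scanner for reading the blob back out on GetCall.
func (s *Stats) Scan(v interface{}) error {
	b, ok := v.([]byte)
	if !ok {
		return errors.New("stats: unexpected db type")
	}
	return json.Unmarshal(b, s)
}
```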
* adds swagger spec of api update to calls endpoint
* adds migration for call.stats field
* adds call.stats field to sql queries
* change swapping of hot logger to exec, so we know that call.Stats is no
longer being modified after `exec` [in call.End]
* throws out docker stats between function invocations in hot functions (no
call to store them on, we could change this later for debug; they're in prom)
* tested in tests and API
closes #19
* add format of ints to swag
* add minio-go dep, update deps
* add minio s3 client
minio has an s3-compatible api, is an open source project, and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hairball of things we don't need, too). it was pretty easy and seems to
work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local use.
* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs; I have another branch lined up to make a
plain-text log API, and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs
some notes: notably, we aren't cleaning up these logs. there is already a
ticket to make a Mr. Clean who wakes up periodically and nukes old stuff, so
I am punting on any api design around TTL-style deletion of logs. there are a
lot of options for Mr. Clean; notably, we can defer to him when apps are
deleted too, so that app deletion is fast and Mr. Clean just cleans the logs
up later (seems like a good option).
I have not tested against BMC object store, which has an s3-compatible API,
but in theory it 'just works' (the reason for doing this). in any event,
that's for service land to figure out.
closes #481, closes #473
* add log not found error to minio land
* adds migrations
closes #57
migrations only run if the database is not brand new. brand new databases
will contain all the right fields when CREATE TABLE is called; this is mostly
for readability rather than efficiency (we do not want to have to go through
all of the database migrations to ascertain what columns a table has). upon
startup of a new database, the migrations will be analyzed and the highest
version set, so that future migrations will be run. this also avoids running
through all the migrations, which could bork dbs easily enough (if the user
just exits from impatience, say).
otherwise, all migrations that a db has not yet seen will be run against it
upon startup. this should be seamless to the user whether their db had 0
migrations run on it before or N, meaning users will not have to explicitly
run any migrations on their dbs nor see any errors when we upgrade the db (so
long as things go well). if migrations do not go well, users will have to
manually repair their dbs (this is the intention of the `migrate` library and
it seems sane); this should be rare. I'm unsure myself how best to resolve
that case, not having gone through it; I would assume it requires running
down migrations and then manually updating the migration field. in any case,
docs once one of us has to go through this.
migrations are written to files and checked into version control; we then use
go-bindata to generate go code from those files, which is compiled in and
consumed by the migrate library (so that we don't have to put migration files
on any servers) -- the generated code is also in vcs. this seems to work ok.
I don't like having to use the separate go-bindata tool, but it wasn't really
hard to install, and go generate takes care of the args. adding migrations
should be relatively rare anyway, but I tried to make it pretty painless.
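the generate hook is roughly this (go-bindata's standard -pkg/-o flags; the
exact package name and paths in the repo may differ):

```go
//go:generate go-bindata -pkg migrations -o bindata.go ./migrations/...
```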
1 migration, adding created_at to routes, is done here as an example of how
to do migrations, as well as to test these things ;) -- `created_at` will be
`0001-01-01T00:00:00.000Z` for any existing routes after a user runs this
version. we could spend the extra time adding today's date to any outstanding
records, but that's not really accurate; the main thing is that nobody will
have to nuke their db now that the migrations are in place, and we don't
really have any prod clusters to worry about. all future routes will
correctly have `created_at` set. I plan to add the other timestamps too, but
wanted to keep this patch as small as possible, so only did
routes.created_at.
there are tests that a spankin' new db works as expected, as well as a db
after running all down & up migrations. the latter tests only run on mysql
and postgres, since sqlite3 does not like ALTER TABLE DROP COLUMN; up
migrations will need to be tested manually for sqlite3 only, but in theory,
if they are simple and work on postgres and mysql, there is a good likelihood
of success. the new migration from this patch works fine on sqlite3.
for now, we need to use `github.com/rdallman/migrate` to move forward, as
getting this integrated upstream is proving difficult due to
`github.com/go-sql-driver/mysql` being broken on master (yay dependencies).
fortunately for us, we vendor a version of the `mysql` bindings that actually
works, so we are able to use the `mattes/migrate` library successfully. this
also requires go1.9 for the new `database/sql.Conn` type; CI has been updated
accordingly.
some doc fixes too from testing.. and of course updated all deps.
anyway, whew. this should let us add fields to the db without busting
everybody's dbs. open to feedback on better ways, but this was overall pretty
simple despite futzing with mysql.
* add migrate pkg to deps, update deps
use rdallman/migrate until we resolve in mattes land
* add README in migrations package
* add ref to mattes lib
* idle_timeout max of 1h
* timeout max of 120s for sync, 1h for async
* max memory of 8GB
* do full route validation before call invocation
* ensure that idle_timeout >= timeout
we now validate route updates inside of the database transaction, which is
what we should have been doing all along, really. we need this behavior to
ensure that the idle timeout is longer than the timeout, among other benefits
(like not updating a stale version of the existing struct and overwriting
previous updates, yay). since we have this, we can get rid of the weird
skipZero behavior on validate too, and validate the real deal holyfield.
validating the route before making the call is handy so that we don't do
weird things like run a func that wants to use 300GB of RAM and run for 3
weeks.
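a sketch of the checks under the limits listed above; the names, units, and
error values here are assumptions:

```go
package models

import "errors"

const (
	MaxSyncTimeout  = 120  // seconds
	MaxAsyncTimeout = 3600 // seconds (1h)
	MaxIdleTimeout  = 3600 // seconds (1h)
	MaxMemory       = 8192 // MB (8GB)
)

var ErrRoutesInvalidTimeouts = errors.New("idle_timeout must be >= timeout")

// validateRoute runs inside the update transaction against the merged route,
// so a partial update can't sneak past it.
func validateRoute(timeout, idleTimeout, memory int32, async bool) error {
	max := int32(MaxSyncTimeout)
	if async {
		max = MaxAsyncTimeout
	}
	switch {
	case timeout <= 0 || timeout > max:
		return errors.New("timeout out of range")
	case idleTimeout < timeout:
		return ErrRoutesInvalidTimeouts
	case idleTimeout > MaxIdleTimeout:
		return errors.New("idle_timeout out of range")
	case memory <= 0 || memory > MaxMemory:
		return errors.New("memory out of range")
	}
	return nil
}
```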
closes #192, closes #344, closes #162
sqlx has nice facilities for using structs to do queries and use their
fields, so I decided to move us all over to this. now, when you take a look
at models.Call, it's really obvious what's in the db and what's not. also
added omitempty to some json fields that were bleeding through the api.
this deletes a lot of scanning code in the sql package, and some queries now
use the struct-based sqlx methods, which seem easier to read than what we
previously had. it moves all the json stuff into sql.Valuer and sql.Scanner
methods in models/config.go; these are the only 2 types that ever need it.
sadly, sqlx would have done this marshaling for us, but to keep compat I
stuck with json. we can maybe do a migration later for a more efficient
encoding, but I did not want to fuss with it today.
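the struct-tag style looks roughly like this -- an abridged, hypothetical
slice of models.Call, not the full type:

```go
package models

import "github.com/jmoiron/sqlx"

type Call struct {
	ID      string `json:"id" db:"id"`
	AppName string `json:"app_name" db:"app_name"`
	Path    string `json:"path" db:"path"`
	Status  string `json:"status" db:"status"`
	// omitempty keeps unset fields from bleeding through the api;
	// db:"-" makes 'not stored in the db' explicit at a glance.
	Payload string `json:"payload,omitempty" db:"-"`
}

func insertCall(db *sqlx.DB, c *Call) error {
	_, err := db.NamedExec(
		`INSERT INTO calls (id, app_name, path, status)
		 VALUES (:id, :app_name, :path, :status)`, c)
	return err
}
```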
it seems like we should probably aim to keep models.Call as small as possible
in the db, as there will be a lot of them. interestingly, most functions
platforms I looked at do not seem to expose this kind of information at all,
that I could find. so, I think timestamps, status, id, app, path, and maybe
docker stats are really all that should be in here (I agree w/ Denys on #284:
these and the logs will end up taking up most of the db space in prod).
notably, payload, headers, and env vars could be extremely large, and in the
general case they are always a copy of the route's (this breaks apart when
routes are updated, where keeping them would be useful considering we don't
have versioning -- versioning may be cheaper).
removed unused field in apps too
this is lined up behind #349 so that I could use the tests...
closes #345, closes #142, closes #284
calls, apps, and routes listing were previously returning the entire data
set, which just won't scale. this adds pagination with forward cursoring to
each of these endpoints (see the [docs](docs/definitions.md)).
the patch is really mostly tests, shouldn't be that bad to pick through.
some blarble about the implementation is in order:
calls are sorted by id but allow searching within certain `created_at` ranges
(finally). this is because sorting by `created_at` isn't feasible when
combined with paging, as `created_at` is not guaranteed to be unique -- ids
are (eliding theoreticals). i.e. on a page boundary, if there are 200 calls
with the same `created_at`, providing a `cursor` of that `created_at` would
skip over the remaining N calls with that `created_at`. using the id will
also be better on the index anyway (well, fewer of them). yay having sortable
ids! I can't discern any issues doing this: even if 200 calls have the same
created_at, they will have different ids, and the sort paginates them just
fine. ids are also url safe, so the id works as the cursor value directly.
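the calls page query then reduces to something like this (column names
assumed; the empty-cursor first page is handled separately):

```go
package sqlstore // hypothetical

import "github.com/jmoiron/sqlx"

type Call struct {
	ID      string `db:"id"`
	AppName string `db:"app_name"`
}

// getCallsPage: cursor is the id of the last call on the previous page, so
// the next page is everything strictly older than it, newest first.
func getCallsPage(db *sqlx.DB, appName, cursor string, perPage int) ([]Call, error) {
	q := db.Rebind(`SELECT id, app_name FROM calls
		WHERE app_name = ? AND id < ?
		ORDER BY id DESC LIMIT ?`)
	var page []Call
	err := db.Select(&page, q, appName, cursor, perPage)
	return page, err
}
```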
apps and routes are sorted in alphabetical order. as names and paths aren't
guaranteed to be url safe, the front end base64's them to a url-safe format
before returning them, and base64-decodes them when we get them back. this
does mean cursors can be relatively large if the path/app is long, but if we
don't want to add ids, they were going to be pretty big anyway. a bonus is
that this kind of obscures them. if somebody has a better idea on formatting,
by all means.
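the round trip is just the standard library's url-safe base64:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// outgoing: the last route path on the page becomes an opaque cursor.
	cursor := base64.RawURLEncoding.EncodeToString([]byte("/some/route/path"))
	fmt.Println(cursor)
	// incoming: decode the cursor back into the raw path for the datastore.
	raw, err := base64.RawURLEncoding.DecodeString(cursor)
	if err != nil {
		panic(err) // a garbage cursor from the client
	}
	fmt.Println(string(raw))
}
```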
notably, we are not using the sql paging facilities; we are baking our own
based on cursors, which ends up being much more efficient for querying longer
lists of resources. this should also be easy to implement in non-sql dbs, and
we can change the cursor formats on the fly, since we expose them only as
opaque strings: the front end deals with the base64 / formatting, etc., and
the back end takes raw values (strfmt.DateTime, or the id for calls).
the cursor passed to/by the user is simply the last resource on the previous
page, so in theory we don't even need to return it, but it does make things a
little easier to use. also, whether the cursor is blank on the last page
depends on page fullness, so sometimes users will get a cursor when there are
no results on the next page (1/N chance, and it's not really the end of the
world -- actually searching for the next thing would make things more
complex). there are ample tests for this behavior.
I've turned off all query parameters allowing `LIKE` queries on certain
listing endpoints, as we should not expose sql behavior through our API in
the event that we end up not using a sql db down the road. I think we should
only allow prefix matching, which sql as well as other types of databases can
support relatively cheaply, but that is not hooked up here, as it didn't
'just work' when I was fiddling with it (can add later; these were
unnecessary and weren't wired into the front end before).
* remove route listing across apps (unused)
* fix panic when hitting `/app//`. this is probably possible for other types
of endpoints, out of scope here; added a guard in front of all endpoints for
this
* adds `from_time` and `to_time` query parameters to calls, so you can e.g.
list the last hour of tasks. these are not required and default to
oldest/newest.
* hooked the datastore tests back up to the sql db. they only run with
sqlite atm, but they are useful, and I added a lot to them too.
* added a bunch of tests to the front end, so pretty sure this all works now.
* added to swagger, we'll need to re-gen. also wrote some words about how
pagination works; I'm not sure how best to link to these, feedback welcome.
* not sure how we want to manage indexes, but we may need to add some (looking
at created_at, mostly)
* `?route` changed to `?path` in routes listing, to keep consistency with
everything else
* don't 404 when searching for calls where the route doesn't exist, just
return an empty list (it's a query param ffs)
closes #141
We still need to delete the app and check how many rows were affected in the
apps table, but we don't really care about the rows affected in the other
tables: routes, calls, logs.
We need to delete the app outside of the loop to check for an invalid number
of rows affected: zero rows means nothing happened.
Unlike the apps table, zero rows affected is a valid case for routes, calls,
and logs.
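shape-wise, that comes out to roughly the following; the table/column names
and the error value here are assumptions:

```go
package sqlstore // hypothetical

import (
	"context"
	"errors"

	"github.com/jmoiron/sqlx"
)

var ErrAppsNotFound = errors.New("app not found")

func deleteApp(ctx context.Context, tx *sqlx.Tx, appName string) error {
	// apps first, outside the loop: exactly this delete must hit one row.
	res, err := tx.ExecContext(ctx, tx.Rebind(`DELETE FROM apps WHERE name = ?`), appName)
	if err != nil {
		return err
	}
	if n, err := res.RowsAffected(); err != nil {
		return err
	} else if n == 0 {
		return ErrAppsNotFound // zero rows here means nothing happened
	}
	// zero rows affected is a perfectly valid case for the child tables.
	for _, q := range []string{
		`DELETE FROM routes WHERE app_name = ?`,
		`DELETE FROM calls WHERE app_name = ?`,
		`DELETE FROM logs WHERE app_name = ?`,
	} {
		if _, err := tx.ExecContext(ctx, tx.Rebind(q), appName); err != nil {
			return err
		}
	}
	return nil
}
```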
something still feels off with this, but I tinkered with it for a day-ish
and didn't come up with anything a whole lot better. doing a lot of the
maneuvering in the caller seemed better, but it was just bloating up GetCall,
so I went back to having it basically like it was, but returning the limited
underlying buffer to read from so we can ship it to the db.
some small changes to the LogStore interface: swapped it to take an
io.Reader instead of a string, for more flexibility in the future while
essentially maintaining the level of performance we have now.
I'm guessing in the not so distant future we'll ship these to some s3-like
service, and it would be better to stream them in than to carry around a
giant string anyway. also, carrying around up-to-1MB buffers in memory isn't
great; we may want to switch to file-backed logs for calls, too. using
io.Reader for logs should make #279 more reasonable if/once we move to some
s3-like thing: we can stream from the log storage service direct to clients.
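the interface ends up shaped roughly like this (method names assumed):

```go
package models

import (
	"context"
	"io"
)

type LogStore interface {
	// InsertLog takes a reader so the caller can hand over a bounded buffer
	// now, or stream straight from the container later.
	InsertLog(ctx context.Context, appName, callID string, callLog io.Reader) error
	// GetLog returns a reader so an s3-like backend can stream logs directly
	// to clients instead of materializing a giant string.
	GetLog(ctx context.Context, appName, callID string) (io.Reader, error)
}
```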
this fixes the span being out of whack and allows the 'right' context to be
used to upload logs (next to inserting the call). deletes the dbWriter we
had; we just do this in call.End now (which makes sense to me, at least).
removes the duplicated code for making an stderr for hot / cold and
simplifies the way to get a func logger (no more 7-param methods, yay).
closes #298
* fix docker build
this was trivially incorrect, since glide doesn't actually provide
reproducible builds. the idea is to build with the deps that we have checked
into git, so that we actually know what code is executing and can debug it...
I'm all for the multi-stage build instead of what we had, but adding the
glide step is wrong. I added a loud warning so as to discourage this behavior
in the future.
* hang the runner, agent=new sheriff
tl;dr the agent is now the runner, with a hopefully saner api.
the general idea is to get rid of all the various 'task' structs, change our
terminology to only 'calls', push a lot of the http construction of a call
into the agent, allow calls to easily mutate their state around their
execution, and simplify the number of code paths, channels, and context
timeouts into something [hopefully] easy to understand.
this introduces the idea of 'slots', which are either hot or cold and are
separate from reserving memory (memory is denominated in 'tokens' now).
a 'slot' is essentially a container that is ready to execute a call, be it
hot or cold (it just means different things based on hotness). taking a look
into Submit should make these relatively easy to grok.
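a rough sketch of the two shapes, with assumed, abbreviated names:

```go
package agent

import "context"

// Token is reserved memory, nothing more; closing it returns the memory.
type Token interface {
	Close() error
}

// Slot is a container that is ready to run exactly one call, hot or cold.
type Slot interface {
	exec(ctx context.Context, c *call) error
	// Close returns a hot slot to the pool, or removes a cold container.
	Close() error
}

type call struct{} // stand-in for the real call type
```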
sorry, things were pretty broken, especially wrt timings. I tried to keep
good notes (maybe too good) to highlight stuff so that we don't make the same
mistakes again (history repeating itself, blah blah quote). even now, there
is lots of work to do :)
I encourage just reading the agent.go code. Submit is really simple, and
there's a description of how the whole thing works at the head of the file
(after the TODOs). call.go contains the code for constructing calls, as well
as Start / End (small atm). I did some amount of code massaging to try to
make things simple / straightforward / fit a reasonable mental model, but as
always am open to critique (the more negative the better), as I'm just one
guy and wth do I know...
-----------------------------------------------------------------------------
below enumerates a number of changes as briefly as possible (heh..):
models.Call all the things
removes models.Task, as models.Call is now what it previously was.
models.FnCall is gone in favor of models.Call, even though the datastore only
stores a few of its fields [for now]. we should probably store entire calls
in the db: since app & route configurations can change at any given moment,
it would be nice to see the parameters of each call (costs db space,
obviously).
this removes the endpoints for getting & deleting messages; we were just
looping back to localhost to call the MQ (wtf? this was for iron integration,
I think), so now we just call the MQ directly.
renames FnLog to LogStore; confusing because there's also a `FuncLogger`
which uses the LogStore (punting). removes other `Fn`-prefixed structs
(redundant naming convention).
removes some unused and/or weird structs (IDStatus, CompleteTime)
updates the swagger
makes the db methods consistent to use 'Call' nomenclature.
remove runner nuisances:
* push down registry stuff to docker driver
* remove Environment / Stats stuff of yore
* remove unused writers (now in FuncLogger)
* remove 2 of the task types, old hot stuff, runner, etc
fixes the ram-available calculation on startup to not always be 300GB (helps
a lot on a laptop!)
the format for the DOCKER_AUTH env var is now a map, not a list (there are no
docs; I would prefer to get rid of this altogether anyway). the ~/.docker/cfg
expected format is unchanged.
removes the arbitrary task queue. if a machine is out of ram, we can probably
just time out without queueing... (can open a separate discussion). in any
case, the old one didn't really account well for hot tasks: it lined everyone
up in the task queue if there wasn't a place to run hot, and then timed them
out [even if a slot became free].
removes the HEADER_ prefixing on any headers in the request to invoke a call
(this was inconsistent with the cli for test anyway).
removes the TASK_ID header sent in to hot only (it is a dupe of FN_CALL_ID,
which has not been removed).
now user functions can reply directly to the client. this means that for
cold containers, writing to stdout sends a 200 + headers; for hot containers,
the user can reply directly to the client from the container, i.e. with their
preferred status code / headers (vs. always getting a 200).
the dispatch itself is a little http-specific atm. I think we can add an
interchange format, but the current version is easily extended to add json --
separate discussion for now. this eliminates a lot of the request/response
rewriting and buffering we were doing (yey). now Dispatch ONLY does input and
output, vs. managing the call timeout and having access to a call's fields.
the cache is pushed down into the agent now instead of the front end; I would
like to push it down to the datastore, actually, but it's here for now. the
cache delete functions are removed (b/c fn is distributed anyway?). added app
caching, which should help with latency.
in general, a lot of server/runner.go got pushed down into the agent. I think
it will be useful in testing to be able to construct calls without having to
invoke http handlers; async also needs to construct calls without a handler.
safe shutdown actually works now for everything (we leaked / didn't wait on
certain things before).
we now wait for hot slots to open up while we attempt to get ram to launch a
container, if we didn't find a hot slot to run the call in immediately. we
can change this policy really easily now (no more channel jungle; still some
channels). we also keep looking for somewhere else to go while the container
is launching. slots now get sent _out_ of a container, vs. a container
receiving calls, which makes this kind of policy easier to implement; see the
sketch below. this fixes a number of bugs around things like trying to
execute calls against containers that have not and may never start, and
trying to launch a bazillion containers when there are no free ones. the
driver api underwent some changes to make this possible (relatively minimal;
added Wait). the easiest way to think about this is that allocating ram has
moved 'up', instead of just wrapping the launching of containers, so that we
can select on a channel while trying to find ram.
not dispatching hot calls to containers that died anymore either...
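continuing the sketch from above (reusing its Slot/Token/call shapes), the
policy boils down to one select; field and method names are assumed, and the
real code matches slots per route rather than one global channel:

```go
package agent

import "context"

type agent struct {
	hot chan Slot  // slots get sent out of containers into here
	ram chan Token // memory tokens, denominating the machine's ram
}

func (a *agent) getSlot(ctx context.Context, c *call) (Slot, error) {
	for {
		select {
		case s := <-a.hot: // an existing container freed up first
			return s, nil
		case tok := <-a.ram: // got memory: launch a container, keep waiting
			go a.launch(ctx, c, tok)
		case <-ctx.Done(): // the call timeout covers slot hunting too
			return nil, ctx.Err()
		}
	}
}

// launch starts a container against the reserved memory; when ready, the
// container sends a slot into a.hot rather than receiving calls itself.
func (a *agent) launch(ctx context.Context, c *call, tok Token) {}
```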
the timeout now starts at the beginning of Submit, rather than Dispatch or
the container itself having to manage the call timeout, which was an
inaccurate way of doing things, since finding a slot / allocating ram /
pulling an image can all take a non-trivial (timeout amount, even!) amount of
time. this makes for much more reasonable response times from fn under load;
there's still a little TODO about handling cold+timeout container removal
response times, but it's much improved.
if call.Start is called with < call.timeout/2 time left, the call will not be
executed and returns a timeout. we can discuss; this makes async play _a
lot_ nicer, specifically. for large timeouts, /2 makes less sense.
env is no longer getting upper-cased (admittedly, this can look a little
weird now). our whole route.Config/app.Config/env/headers stuff probably
deserves a whole discussion...
sync output no longer has the call id in the json if there's an error /
timeout. we could add this back to signify that it's _us_ writing these, but
it was out of place. FN_CALL_ID is still shipped out to get the id for sync
calls, and async [server] output remains unchanged.
async logs are now an entire raw http request (so that a user can write a
400 or something from their hot async container).
async hot now 'just works'.
cold sync calls can now reply to the client before container removal, which
shaves a lot of latency off of those (they still eat the start). still need
to figure out async removal on timeout or something.
-----------------------------------------------------------------------------
i've located a number of bugs that were generally inherited, and also added
a number of TODOs at the head of the agent.go file for robustness we probably
need to add. this is at least at parity with the previous implementation, to
my knowledge (hopefully/likely a good bit ahead). I can memorialize these to
github quickly enough, not that anybody searches before adding bugs anyway
(sigh).
the big thing to work on next imo is making async a lot more robust,
specifically so it survives fn server failures / network issues.
thanks for review (gulp)
we're routinely doing transactions, which hold up connections for some time;
it was pretty easy to run out of 30 conns from routine function invocations.
the 'right' thing is probably to add a config val to the url that we strip
before passing it into the db, but i'm not sure i want to have our own query
params in db urls, either.
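if we did go the url-param route, it would look something like this
(`max_conns` is a made-up param name, and the default here is an assumption):

```go
package sqlstore // hypothetical

import (
	"net/url"
	"strconv"

	"github.com/jmoiron/sqlx"
)

func openDB(dbURL string) (*sqlx.DB, error) {
	u, err := url.Parse(dbURL)
	if err != nil {
		return nil, err
	}
	maxConns := 100 // assumed default, up from the 30 we were exhausting
	q := u.Query()
	if v := q.Get("max_conns"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			maxConns = n
		}
		q.Del("max_conns") // strip our param before the driver sees it
		u.RawQuery = q.Encode()
	}
	db, err := sqlx.Open(u.Scheme, u.String())
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(maxConns)
	return db, nil
}
```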
we had this _almost_ right, in that we were trying, but we weren't masking
the error from the user response for errors we don't intend to show. this
also adds a stack trace to any internal server error, so that we might be
able to track them down in the future (looking at you, 'context deadline
exceeded'). in addition, this adds a new `models.APIError` interface, which
all of the errors in `models` now implement and which can be caught easily /
added to easily.
the front end now does no status rewriting based on api errors: when we get
a non-nil error, we call `handleResponse(c, err)` with it; if it's a proper
error, we return it to the user with the right status code, otherwise we log
a stack trace and return `internal server error`. this cleans up a lot of the
front end code.
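shape-wise, that looks roughly like this (the front end is gin; the exact
method names and response shape here are assumptions):

```go
package server

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/sirupsen/logrus"
)

// APIError is implemented by every error in models that is safe to show.
type APIError interface {
	Code() int // the http status to return
	error
}

func handleResponse(c *gin.Context, err error) {
	if e, ok := err.(APIError); ok {
		// one of ours: hand the user the real status code and message.
		c.JSON(e.Code(), gin.H{"error": gin.H{"message": e.Error()}})
		return
	}
	// anything else gets a stack trace in the logs and nothing specific back.
	logrus.WithError(err).Errorf("internal server error: %+v", err)
	c.JSON(http.StatusInternalServerError,
		gin.H{"error": gin.H{"message": "internal server error"}})
}
```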
this also rewrites the start task ctx deadline exceeded as a timeout. with
iw we had async tasks, so we could start the clock later and it didn't
matter, but now sync tasks sometimes time out just making docker calls, and
we want the task status to show up as timed out. we may want to catch all of
this above, in addition, but this seems like the right thing to do.
removes squishing errors together. this was weird; now we return the first
error, for the purposes of using the new err interface.
removed a lot of 5xx errors that really should have been 4xx errors. changed
some of the 400 errors to 409s, since they come from sending in conflicting
info, not a malformed request.
removed unused / useless errors (many were used for logging and didn't
provide any context; now, with stack traces, we don't need as much context in
the logs).
replaces the default bolt option with sqlite3. the story here is that we
just need a working out-of-the-box solution, and sqlite3 is just fine for
that (actually, likely better than bolt).
with sqlite3 supplanting bolt, we mostly have sql databases, so remove redis;
then we have one package with a `sql` implementation of `models.Datastore`
that leans on sqlx to do query rewriting. this does mean queries have to be
formed a certain way and likely have to be ANSI-SQL (no special features),
but we weren't using those anyway, and our base api is basically done; we can
easily extend this api as needed to only implement certain methods in certain
backends if we need to get cute.
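the one-query-for-three-backends trick is mostly sqlx's Rebind; an abridged,
hypothetical example:

```go
package sqlstore // hypothetical

import "github.com/jmoiron/sqlx"

type Route struct {
	AppName string `db:"app_name"`
	Path    string `db:"path"`
}

// getRoute: write ANSI-ish SQL once with `?` placeholders; Rebind rewrites
// them per driver ($1, $2 for postgres; ? stays for mysql/sqlite3).
func getRoute(db *sqlx.DB, appName, path string) (*Route, error) {
	q := db.Rebind(`SELECT app_name, path FROM routes WHERE app_name = ? AND path = ?`)
	var r Route
	err := db.Get(&r, q, appName, path)
	return &r, err
}
```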
* remove bolt & redis datastores (can still use as mqs)
* make sql queries work on all 3 (maybe?)
* remove bolt log store and use sqlite3
* shove the FnLog shit into the datastore shit for now (free pg/mysql logs...
just for demos, etc, not prod)
* fix up the docs to remove bolt references
* add sqlite3, sqlx dep
* fix up tests & mock stuff, make validator less insane
* remove put & get in datastore layer as nobody is using.
this passes tests, which at least seem to test all the different backends.
if we trust our tests, then this seems to work great. (tests `make
docker-test-run-with-*` work now too)