* In the case of failure, don't try to delete an app that wasn't successfully created
* Allow datastore implementations to inject additional annotations on objects
* Allow for datastores transparently adding annotations on apps, fns and triggers. Change NameIn filter to Name for apps.
* Move *List types including JSON annotations for App, Fn and Trigger into models
* Change return types for GetApps, GetFns and GetTriggers on datastore to
be models.*List and move cursor generation into datastore
* Trigger cursor handling moved into the db layer
Also changes the name generation so that it is no longer in the same order
as the id (it is now random), which means we are now actually testing our
name ordering.
* GetFns now respects cursors
* Apps now feeds cursor back
* Mock fixes
* Fixing up api level cursor decoding
* Tidy up treatment of cursors in the db layer
* Adding conditions for non-nil item lists
* fix mock test
Vast commit, includes:
* Introduces the Trigger domain entity.
* Introduces the Fns domain entity.
* V2 of the API for interacting with the new entities in swaggerv2.yml
* Adds v2 end points for Apps to support PUT updates.
* Rewrites the datastore level tests into a new pattern.
* V2 routes use entity ID over name as the path parameter.
* add DateTime sans mgo
* change all uses of strfmt.DateTime to common.DateTime, remove test strfmt usage
* remove api tests, system-test dep on api test
multiple reasons to remove the api tests:
* the awkward dependency on fn_go meant generating bindings on a branched fn
and vendoring those to test new stuff. this is, at a minimum, neither
intuitive, worth it, nor a fun way to spend the finite amount of time we have
to live.
* the api tests only tested a subset of functionality that the server/ api
tests already cover, and we risk having tests where one suite tests some
thing and the other doesn't. let's not. we have too many test suites as it
is, and these pretty much only test that we updated the fn_go bindings, which
is a hassle as noted above and which the cli will figure out pretty quickly
anyway.
* fn_go relies on openapi, which relies on mgo, which is deprecated and which
we'd like to remove as a dependency. openapi is a _huge_ dep built in a NIH
fashion, and it cannot simply remove the mgo dep, as users may be relying on
it. we've now stolen their date time type and otherwise killed usage of it in
fn core; it still exists for fn_go, but that's less of a problem.
* update deps
removals:
* easyjson
* mgo
* go-openapi
* mapstructure
* fn_go
* purell
* go-validator
also, had to lock docker. we shouldn't be using docker's master anyway, they
strongly advise against that. had no luck with the latest version rev, so i
locked it to what we were using before. until next time.
the rest is just playing dep roulette, though those updates end up removing a
ton.
* fix exec test to work
* account for john le cache
* initial Db helper split - make SQL and datastore packages optional
* abstracting log store
* break out DB, MQ and log drivers as extensions
* cleanup
* fewer deps
* fixing docker test
* hmm dbness
* updating db startup
* Consolidate all your extensions into one convenient package
* cleanup
* clean up dep constraints
* datastore no longer implements logstore
the underlying implementation of our sql store implements both the datastore
and the logstore interface. going forward, however, we are likely to
encounter datastore implementers that would mock out the logstore interface
and not use its methods -- a signal of a poor interface. this remedies that:
they are now 2 completely separate things, and our sqlstore happens to
implement both.
related to some recent changes around wrapping, this keeps the imposed
metrics and validation wrapping of a server's logstore and datastore, just
moving it into New instead of into the opts. this is so that a user can get
at the underlying datastore in order to set the logstore to it, since
wrapping it in a validator/metrics layer would render it no longer a logstore
implementer (i.e. the validated datastore doesn't implement the logstore
interface); we need to do this after setting the logstore to the datastore if
one wasn't provided explicitly.
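concretely, the split looks something like this (a minimal sketch; method
sets are abridged and signatures are assumptions, the point is that these are
now two separate contracts):

```go
package models

import "context"

// App is a stand-in for the real models.App in this sketch.
type App struct{ ID, Name string }

// Datastore covers apps/routes persistence only.
type Datastore interface {
	GetAppByID(ctx context.Context, appID string) (*App, error)
	// ... remaining app/route CRUD elided
}

// LogStore covers call/log persistence only; it is no longer part of
// Datastore. the sql store just happens to satisfy both interfaces.
type LogStore interface {
	GetLog(ctx context.Context, appID, callID string) (string, error)
	// ... remaining call/log methods elided
}
```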
* splits logstore and datastore metrics & validation logic
* `make test` should always be `make full-test`. got rid of the former so
that nobody else ever again has to wait for CI to blow up on them after the
tests pass locally.
* fix new tests
* fn: size restricted tmpfs /tmp and read-only / support
*) read-only Root Fs Support
*) removed CPUShares from docker API. This was unused.
*) docker.Prepare() refactoring
*) added docker.configureTmpFs() for size limited tmpfs on /tmp
*) tmpfs size support in routes and resource tracker
*) fix fn-test-utils to handle sparse files better in create file
* test typo fix
* Implements graceful shutdown of agent.DataAccess and underlying Datastore/Logstore/MessageQueue
* adds tests for closing agent.DataAccess and Datastore
* fn: agent MaxRequestSize limit
Currently, LimitRequestBody() exists to install an http request body size
limit in the http/gin server. For production environments, this is expected
to be used. However, in agents we may need to verify/enforce these size
limits ourselves, and being able to assert in the case of missing limits is
valuable. With this change, operators can define an agent env variable to
enforce the limit in addition to installing the Gin/Http handler.
http.MaxBytesReader is superior in some cases, as it sets http headers
(Connection: close) to guard against subsequent requests.
However, NewClampReadCloser() is superior in other cases, where it can
cleanly return an API error for this case alone (http.MaxBytesReader() does
not return a clean error type for the overflow case, which makes it
difficult to use without peeking into its implementation.)
For the lb agent, upcoming changes rely on such limits being enabled, and
using the gin/http handler (http.MaxBytesReader) makes such checks/safety
validations difficult.
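a minimal sketch of the clamp idea (only the NewClampReadCloser name comes
from the change; the internals here are an illustrative guess): wrap the
body, count bytes, and return one clean, typed error on overflow that the
agent can map to an API error:

```go
package agent

import (
	"errors"
	"io"
)

// ErrRequestBodyTooBig is the single clean error the agent can map to an
// API error -- unlike http.MaxBytesReader, which returns an opaque error.
var ErrRequestBodyTooBig = errors.New("request body too large")

type clampReadCloser struct {
	rc io.ReadCloser
	n  int64 // bytes remaining before we declare overflow
}

// NewClampReadCloser wraps rc so that reading more than max bytes fails
// with ErrRequestBodyTooBig.
func NewClampReadCloser(rc io.ReadCloser, max int64) io.ReadCloser {
	return &clampReadCloser{rc: rc, n: max}
}

func (c *clampReadCloser) Read(p []byte) (int, error) {
	n, err := c.rc.Read(p)
	c.n -= int64(n)
	if c.n < 0 {
		return n, ErrRequestBodyTooBig // crossed the limit on this read
	}
	return n, err
}

func (c *clampReadCloser) Close() error { return c.rc.Close() }
```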
* fn: read/write clamp code adjustment
In the case of overflows, opt for a simple implementation: a partial write
followed by returning an error.
* add user syslog writers to app
users may now specify syslog url(s) on apps, and all functions under that app
will spew their logs out to them. the docs have more information on the
details; please review those (swagger and operating/logging.md). I tried to
implement to spec in some parts and improve on it in others; open to feedback
on the format though, there's lots of liberty there.
design decision wise, I am looking to the future and ignoring cold containers.
the overhead of the connections there will not be worth it, so this feature
only works for hot functions, since we're killing cold anyway (even if a user
can just straight up exit a hot container).
syslog connections will be opened against a container when it starts up, and
then the call id that is logged gets swapped out for each call that goes
through the container; this cuts down significantly on the cost of
opening/closing connections. there are buffers to accumulate logs until we
get a `\n` to actually write a syslog line, and a buffer to save some bytes
when we're writing the syslog formatting as well. underneath, writers re-use
the line writer in certain scenarios (the swapper). we could likely improve
the ease of setting this up, but opening the syslog conns against a container
seems worth it, and it is a different path than the other func loggers that
we create when we make a call object. the Close() stuff is a little tricky;
not sure how to make it easier and keep the ^ benefits, open to ideas. a
sketch of the line-writer idea follows.
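to illustrate the buffering/swap idea only (this is a sketch, not the real
writer; the actual line format is described in operating/logging.md):

```go
package agent

import (
	"bytes"
	"fmt"
	"io"
	"time"
)

// lineWriter buffers writes until a '\n' arrives, then emits one syslog
// line per buffered line, stamping in whatever call id is current. the
// conn is opened once per container; only callID gets swapped per call.
type lineWriter struct {
	conn   io.Writer // long-lived syslog connection for this container
	callID string    // swapped out for each call through the container
	buf    bytes.Buffer
}

func (w *lineWriter) swap(callID string) { w.callID = callID }

func (w *lineWriter) Write(p []byte) (int, error) {
	w.buf.Write(p)
	for {
		line, err := w.buf.ReadBytes('\n')
		if err != nil { // no complete line yet; stash the partial and wait
			w.buf.Write(line)
			return len(p), nil
		}
		// format is approximate here; see the docs for the real one.
		fmt.Fprintf(w.conn, "<30>1 %s fn %s - - %s",
			time.Now().Format(time.RFC3339), w.callID, line)
	}
}
```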
this does add another vector of 'limits' to consider for stricter service
operators: one being how many syslog urls a user can add to an app (infinite,
atm), the other being that, on the order of the number of containers per
host, we could run out of connections in certain scenarios. there may be some
utility in having multiple syslog sinks to send to; it can help with
debugging at times to send to another destination, or if a user is a client
w/ someone and both want the function logs, e.g. (have used this for exactly
that in the past).
this also doesn't work behind a proxy, which is something i'm open to fixing,
but afaict it will require a 3rd party dependency (we can pretty much steal
what docker does). this is mostly of utility for those of us who work behind
a proxy all the time, not really for end users.
there are some unit tests. integration tests for this don't sound very fun to
maintain. I did test against papertrail with each protocol and it works (and
even times out if you're behind a proxy!).
closes #337
* add trace to syslog dial
* CloudEvents I/O format support.
* Updated format doc.
* Remove log lines
* This adds support for CloudEvent ingestion at the http router layer.
* Updated per comments.
* Responds with full CloudEvent message.
* Fixed up per comments
* Fix tests
* Checks for cloudevent content-type
* doesn't error on missing content-type.
* move calls to logstore, implement s3
closes #482
the basic motivation is that logs and calls will be stored at a very high
write rate, while apps and routes are updated relatively infrequently; it
follows that we should likely split up their storage locations, to back them
with appropriate storage facilities. s3 is a good candidate for ingesting
higher write rate data than a sql database, and it will make it easier to
manage that data set. read #482 for more detailed justification.
summary:
* calls api moved from datastore to logstore
* logstore used in front-end to serve calls endpoints
* agent now throws calls into logstore instead of datastore
* s3 implementation of calls api for logstore
* s3 logs key changed (nobody using / nbd?)
* removed UpdateCall api (not in use)
* moved call tests from datastore to logstore tests
* mock logstore now tested (prev. sqlite3 only)
* logstore tests run against every datastore (mysql, pg; prev. only sqlite3)
* simplify NewMock in tests
commentary:
brunt of the work is implementing the listing of calls in GetCalls for the s3
logstore implementation. the GetCalls API requires returning items in
newest-to-oldest order, while the s3 api lists keys in ascending
lexicographic order. An easy thing to do here seemed to be to reverse the
encoding of our id format so that it sorts in lexicographically descending
order, since ids are time based, reasonably encoded to be lexicographically
sortable, and de-duped (unlike created_at). This seems to work pretty well.
it's not perfect around the boundaries of to_time and from_time, and a small
number of results may be omitted, but to me that doesn't seem like a deal
breaker: getting 6999 results instead of 7000 when trying to get calls
between 3:00pm and 4:00pm Monday 3 weeks ago. Of course, without to_time and
from_time there are no issues in listing results. We could encode created_at
instead, but that would require an additional marker for point lookup
(GetCall), since we would have to search for a created_at stamp and then
search the ids around it until we find the matching one, just to do a point
lookup. So the tradeoff here seems worth it. There is an additional
optimization around to_time to seek over newer results (since we have
descending order).
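the reversal itself can be as simple as mapping every character to its mirror
in the id alphabet, so that s3's ascending listing yields newest-first ids (a
sketch; the alphabet here is an assumption and must match the real id
encoding, and as noted in the TODOs below this naive version is not optimal):

```go
package s3

import "strings"

// hypothetical alphabet; must match the sortable id encoding exactly.
const alphabet = "0123456789abcdefghijklmnopqrstuv"

// reverseID maps each character to its mirror in the alphabet, so a set of
// ids sorted ascending by s3 comes back newest-first. applying it twice
// returns the original id, so point lookups can reuse the same keys.
func reverseID(id string) string {
	out := []byte(id)
	for i := range out {
		if j := strings.IndexByte(alphabet, out[i]); j >= 0 {
			out[i] = alphabet[len(alphabet)-1-j]
		}
	}
	return string(out)
}
```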
The other complication in GetCalls is returning a list of calls for a given
path. Since the keys for point lookups are only app_id + call_id, and we need
listing across an app as well, this leads us to the 'marker' collection,
which is sorted by app_id + path + call_id to allow quick listing by path.
All in all, it should be pretty straightforward to follow the implementation,
and I tried to be lavish with the comments; please let me know if anything
needs further clarification in the code.
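a sketch of the two key spaces just described (prefixes hypothetical, reusing
reverseID from the sketch above; a real implementation would also need to
escape path separators):

```go
package s3

import "fmt"

// point lookups (GetCall) only need app_id + call_id.
func callKey(appID, callID string) string {
	return fmt.Sprintf("calls/%s/%s", appID, reverseID(callID))
}

// listings by path go through the 'marker' key space, sorted by
// app_id + path + call_id.
func markerKey(appID, path, callID string) string {
	return fmt.Sprintf("markers/%s/%s/%s", appID, path, reverseID(callID))
}
```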
The implementation itself has some glaring inefficiencies, but they're
relatively minute: json encoding is kinda lazy, but workable; s3 doesn't
offer batch retrieval, so we point look up each call one by one in GetCalls;
and buffers aren't re-used -- but the seeking around the keys should all be
relatively fast. not too worried about performance really, and this isn't a
hot path for reads (need to make a cut point and turn this in!).
Interestingly, in testing, minio performs significantly worse than pg for
storing both logs and calls (or just logs, I tested that too). minio seems to
have really high cpu consumption, but in any event, we won't be using minio,
we'll be using a cloud object store that implements the s3 api. Anyway, mostly
a knock on using minio for high performance, not really anything to do with
this, just thought it was interesting.
I think it's safe to remove UpdateCall; admittedly, this made implementing
the s3 api a lot easier. This operation may also be something we never need:
it is unused at present and was only in the cards for a previous hybrid
implementation, which we've now abandoned. If we need it, we can always
resurrect it from git.
Also not worried about changing the log key; we need to put a prefix on this
thing anyway, and I don't think anybody is using it yet. in any event, it
simply means old logs won't show up through the API, and aside from nobody
using this yet, that doesn't seem a big deal breaker really -- new logs will
appear fine.
future:
TODO make logstore implementation optional for datastore, check in front-end
at runtime and offer a nil logstore that errors appropriately
TODO low hanging fruit optimizations of json encoding, re-using buffers for
download, get multiple calls at a time, id reverse encoding could be optimized
like normal encoding to not be n^2
TODO api for range removal of logs and calls
* address review comments
* push id to_time magic into id package
* add note about s3 key sizes
* fix validation check
* App ID
* Clean-up
* Use ID or name to reference apps
* Can use app by name or ID
* Get rid of AppName for routes API and model
routes API is completely backwards-compatible
routes API accepts both app ID and name
* Get rid of AppName from calls API and model
* Fixing tests
* Get rid of AppName from logs API and model
* Restrict API to work with app names only
* Addressing review comments
* Fix for hybrid mode
* Fix rebase problems
* Addressing review comments
* Addressing review comments pt.2
* Fixing test issue
* Addressing review comments pt.3
* Updated docstring
* Adjust UpdateApp SQL implementation to work with app IDs instead of names
* Fixing tests
* fmt after rebase
* Make tests green again!
* Use GetAppByID wherever it is necessary
- adding new v2 endpoints to keep hybrid api/runner mode working
- extract CallBase from Call object to expose that to a user
(it doesn't include any app reference, as we do for all other API objects)
* Get rid of GetAppByName
* Adjusting server router setup
* Make hybrid work again
* Fix datastore tests
* Fixing tests
* Do not ignore app_id
* Resolve issues after rebase
* Updating test to make it work as it was
* Tabula rasa for migrations
* Adding calls API test
- we need to ensure we return "App not found" for a missing app and a missing call in the first place
- making the previous test work (request a missing call for an existing app)
* Make datastore tests work fine with correctly applied migrations
* Make CallFunction middleware work again
had to adjust its implementation to set app ID before proceeding
* The biggest rebase ever made
* Fix 8's migration
* Fix tests
* Fix hybrid client
* Fix tests problem
* Increment app ID migration version
* Fixing TestAppUpdate
* Fix rebase issues
* Addressing review comments
* Renew vendor
* Updated swagger doc per recommendations
* Move out node-pool manager and replace it with RunnerPool extension
* adds extension points for runner pools in load-balanced mode
* adds error to return values in RunnerPool and Runner interfaces
* Implements runner pool contract with context-aware shutdown
* fixes issue with range
* fixes tests to use runner abstraction
* adds empty test file as a workaround for build requiring go source files in top-level package
* removes flappy timeout test
* update docs to reflect runner pool setup
* refactors system tests to use runner abstraction
* removes poolmanager
* moves runner interfaces from models to api/runnerpool package
* Adds a second runner to pool docs example
* explicitly check for request spillover to second runner in test
* moves runner pool package name for system tests
* renames runner pool pointer variable for consistency
* pass model json to runner
* automatically cast to http.ResponseWriter in load-balanced call case
* allow overriding of server RunnerPool via a programmatic ServerOption
* fixes return type of ResponseWriter in test
* move Placer interface to runnerpool package
* moves hash-based placer out of open source project
* removes siphash from Gopkg.lock
*) Limit the response http body or json response size to FN_MAX_RESPONSE_SIZE (default unlimited)
*) If the limit is exceeded, a 502 is returned with 'body too large' in the error message
* push validate/defaults into datastore
we weren't setting a timestamp in route insert when we needed to create an app
there. that whole thing isn't atomic, but this fixes the timestamp issue.
closes #738
seems like we should do similar with the FireBeforeX stuff too.
* fix tests
* app name validation was buggy: an upper-cased letter failed. now it
doesn't; it uses the unicode package now.
* removes duplicate errors for datastore and models validation that were
meant to be used interchangeably but weren't.
possible breakages:
* `FN_HEADER` values on cold no longer get `s/-/_/` -- this is so that cold
functions can rebuild the headers as they were when they came in on the
request (fdks, specifically); there's no guarantee that a reversal `s/_/-/`
yields the original header from the request.
* app and route config no longer get `s/-/_/` -- it seemed really weird to
rewrite the user's config vars on these. we should just pass them exactly
as-is to the env.
* headers no longer contain the environment vars (previously, base config; app
config, route config, `FN_PATH`, etc.), these are still available in the
environment.
this gets rid of a lot of the code around headers, specifically the stuff
that shoved everything into headers when constructing a call to begin with.
now we just store the headers separately, add a few things like FN_CALL_ID to
them, and build a separate 'config' to store on the call. I thought 'config'
was more aptly named; 'env' was confusing. 'config' is now exactly what
'base_vars' was, which is only the things being put into the env. we weren't
storing this field in the db, so this doesn't break anything unless there are
messages in a queue from another version; don't think we're there, and don't
expect any breakage for anybody from the field name changes.
this makes the configuration stuff pretty straightforward: there are just two
separate buckets of things. cold just needs to mash them together into the
env, hot containers just need to put 'config' in the env, and then the hot
format can shove 'headers' in however it'd like. this seems better than my
last idea about making this easier but worse (RIP).
this means:
* headers no longer contain all vars, the set of base vars can only be found
in the environment.
* headers is only the headers from request + call_id, deadline, method, url
* for cold, we simply add the headers to the environment, prepending
`FN_HEADER_` to them, BUT NOT upper-casing or `s/-/_/` (see the sketch after
this list)
* fixes issue where async hot functions would end up with `Fn_header_`
prefixed headers
* removes the idea of 'base' vars and 'env'. this was a strange concept. now
we just have 'config' (which was base vars) and headers (which was
base_env+headers); i.e. they are disjoint now.
* casing for all headers will lean to be `My-Header` style, which should help
with consistency. notable exceptions for cold only are FN_CALL_ID, FN_METHOD,
and FN_REQUEST_URL -- this is simply to avoid breakage, in either hot format
they appear as `Fn_call_id` still.
* removes FN_PARAM stuff
* updated doc with behavior
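a sketch of what building the cold env amounts to under these rules (the
function shape is hypothetical; the FN_HEADER_ prefix and the no-rewriting
rules are from the list above):

```go
package agent

import "strings"

// buildColdEnv merges the two disjoint buckets: 'config' goes in verbatim,
// and each request header is prepended with FN_HEADER_ -- with NO
// upper-casing and NO s/-/_/, so fdks can reconstruct the original headers.
func buildColdEnv(config map[string]string, headers map[string][]string) map[string]string {
	env := make(map[string]string, len(config)+len(headers))
	for k, v := range config {
		env[k] = v
	}
	for k, vs := range headers {
		env["FN_HEADER_"+k] = strings.Join(vs, ", ")
	}
	return env
}
```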
weird things left:
`Fn_call_id` e.g. isn't a correctly formatted http header; it should likely
be `Fn-Call-Id`, but I wanted to live to fight another day on this one, as it
would add some breakage.
examples to be posted of each format below
closes #329
* route updated_at
* add app created at, fix some route updated_at bugs
* add app updated_at
TODO need to add tests through front end
TODO for validation we don't really want to use the validate wrapper since
it's a programmer error and not a user error, hopefully tests block this.
* add tests for timestamps to exist / change on apps&routes
* route equals at done, fix tests wit dis
* fix up the equals sugar
* add swagger
* fix rebase
* precisely allocate maps in clone
* vetted
* meh
* fix api tests
* fix configuration of agent and server to be future proof and plumb in the hybrid client agent
* fixes up the tests, turns off /r/ on api nodes
* fix up defaults for runner nodes
* shove the runner async push code down into agent land to use client
* plumb up async-age
* return full call from async dequeue endpoint, since we're storing a whole
call in the MQ we don't need to worry about caching of app/route [for now]
* fast safe shutdown of dequeue looper in runner / tidying of agent
* nice errors for path not found against /r/, /v1/ or other path not found
* removed some stale TODO in agent
* mq backends are only loud mouths in debug mode now
* update tests
* Add caching to hybrid client
* Fix HTTP error handling in hybrid client.
The type switch was on the value rather than a pointer.
* Gofmt.
* Better caching with a nice caching wrapper
* Remove datastore cache which is now unused
* Don't need to manually wrap interface methods
* Go fmt
* so it begins
* add clarification to /dequeue, change response to list to future proof
* Specify that runner endpoints are also under /v1
* Add a flag to choose operation mode (node type).
This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).
The additional modes are as follows (a sketch of the switch follows the list):
* API - the full API is available, but no functions are executed by the
node. Async calls are placed into a message queue, and synchronous
calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
synchronous invocation requests are supported, but asynchronous
requests are placed onto the message queue, so might be handled by
another runner.
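a minimal sketch of how the mode selection plays out (only the FN_NODE_TYPE
variable comes from the change; the names and switch shape are assumptions):

```go
package main

import "os"

type nodeType int

const (
	nodeFull   nodeType = iota // default: full API + sync/async runners
	nodeAPI                    // API only; async enqueued, sync rejected
	nodeRunner                 // invoke API only; async goes to the MQ
)

// nodeTypeFromEnv picks the operation mode from FN_NODE_TYPE, defaulting
// to the existing full behaviour.
func nodeTypeFromEnv() nodeType {
	switch os.Getenv("FN_NODE_TYPE") {
	case "api":
		return nodeAPI
	case "runner":
		return nodeRunner
	default:
		return nodeFull
	}
}
```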
* Add agent type and checks on Submit
* Sketch of a factored out data access abstraction for api/runner agents
* Fix tests, adding node/agent types to constructors
* Add tests for full, API, and runner server modes.
* Added atomic UpdateCall to datastore
* adds in server side endpoints
* Made ServerNodeType public because tests use it
* Made ServerNodeType public because tests use it
* fix test build
* add hybrid runner client
pretty simple go api client that covers the surface area needed for hybrid,
returning structs from models that the agent can use directly. not exactly
sure where to put this, so it's in `/clients/hybrid`, but maybe we should
make `/api/runner/client` or something and shove it in there. want to get
integration tests set up and using the real endpoints next, and then wrap
this up in the DataAccessLayer stuff.
* gracefully handles errors from fn
* handles backoff & retry on 500s (sketched below)
* will add to existing spans for debuggo action
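the backoff-and-retry-on-500s behavior, sketched (constants and shape are
assumptions):

```go
package hybrid

import (
	"context"
	"errors"
	"net/http"
	"time"
)

// doWithRetry retries a request on 5xx responses with exponential backoff,
// giving up after a few attempts or when the context is done.
func doWithRetry(ctx context.Context, do func() (*http.Response, error)) (*http.Response, error) {
	backoff := 25 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := do()
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or a 4xx we should not retry
		}
		if resp != nil {
			resp.Body.Close() // don't leak the connection before retrying
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
	return nil, errors.New("hybrid client: out of retries")
}
```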
* minor fixes
* meh
* add error to call model
closes #331
previously, for async this error was being masked completely, even if it was
something useful like the image not existing. for sync, the error was
returned in the http response, but now it's also being stored. this error
itself can cover a lot of landscape: it could be an error in getting a slot,
pulling an image, or running a container, among other things. anyway, it's no
longer being masked. we can likely improve it in certain cases we run into in
the future, but it's open ended at the moment, and not masked like some
errors in sync http request returns (503 non-models.APIError) for now.
* tucks in callTrigger stuff to keep api clean
* adds swagger
* adds migration
* adds tests for datastore and agent to ensure behavior
* pull images before tests are run
* gofmt migrations file
* add per call stats field as histogram
this will add a histogram of up to 240 data points of call data, produced
every second, stored at the end of a call invocation in the db. the same
metrics are also still shipped to prometheus (prometheus has the
not-potentially-reduced version). for the API reference, see the updates to
the swagger spec, this is just added onto the get call endpoint.
this does not add any extra db calls and the field for stats in call is a json
blob, which is easily modified to add / omit future fields. this is just
tacked on to the call we're making to InsertCall, and expect this to add very
little overhead; we are bounding the set to be relatively small, planning to
clean out the db of calls periodically, functions will generally be short, and
the same code used at a previous firm did not cause a notable db size increase
with production workload that is worse, wrt histogram size (I checked). the
code changes are really small aside from changing to strfmt.DateTime,
adding a migration and implementing sql.Valuer; needed to slightly modify the
swap function so that we can safely read `call.Stats` field to upload at end.
with the full histogram in hand, we can compute max/min/average/median/growth
rate/bernoulli distributions/whatever very easily in a UI or tooling. in
particular, this data is easily chartable [for a UI], which is beneficial.
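the blob's shape, roughly (a sketch; field names are assumptions and the
swagger spec is authoritative, with time.Time standing in for the DateTime
wrapper):

```go
package models

import "time"

// Stat is one per-second sample of a call's metrics; a call carries up to
// ~240 of these, stored as a single json blob via sql.Valuer.
type Stat struct {
	Timestamp time.Time         `json:"timestamp"`
	Metrics   map[string]uint64 `json:"metrics"`
}

// Stats is the bounded histogram stored on the call at end of invocation.
type Stats []Stat
```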
* adds swagger spec of api update to calls endpoint
* adds migration for call.stats field
* adds call.stats field to sql queries
* change swapping of hot logger to exec, so we know that call.Stats is no
longer being modified after `exec` [in call.End]
* throws out docker stats between function invocations in hot functions (no
call to store them on, we could change this later for debug; they're in prom)
* tested in tests and API
closes #19
* add format of ints to swag
* fn: introducing 503 responses for out of capacity case
*) Adding a 503 with a Retry-After header if the request failed
while waiting for slots.
*) TODO: return 503 without Retry-After if the request can
never be met by this fn server.
*) fn: runner test docker pull fixup
*) fn: MaxMemory for routes is now a variable to allow
testing and adjusting it according to fleet memory sizes.
* add minio-go dep, update deps
* add minio s3 client
minio has an s3 compatible api, is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw, the aws-sdk-go is a
giant hairball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local use.
* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs; I have another branch lined up to make a plain
text log API, and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs
some notes: notably, we aren't cleaning up these logs. there is already a
ticket to make a Mr. Clean who wakes up periodically and nukes old stuff, so
I am punting on any api design around some kind of TTL deletion of logs.
there are a lot of options for Mr. Clean really; we can notably defer to him
when apps are deleted, too, so that app deletion is fast and Mr. Clean will
just clean them up later (seems like a good option).
have not tested against BMC object store, which has an s3 compatible API. but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.
closes #481 closes #473
* add log not found error to minio land
* adds migrations
closes #57
migrations only run if the database is not brand new. brand new databases
will contain all the right fields when CREATE TABLE is called; this is mostly
for readability more than efficiency (we do not want to have to go through
all of the database migrations to ascertain what columns a table has). upon
startup of a new database, the migrations will be analyzed and the highest
version set, so that future migrations will be run. this also avoids running
through all the migrations, which could bork dbs easily enough (if the user
just exits from impatience, say).
otherwise, all migrations that a db has not yet seen will be run against it
upon startup; this should be seamless to the user whether their db has had 0
migrations run on it before or N. this means users will not have to
explicitly run any migrations on their dbs nor see any errors when we upgrade
the db (so long as things go well). if migrations do not go so well, users
will have to manually repair their dbs (this is the intention of the
`migrate` library, and it seems sane); this should be rare, and I'm unsure
myself how best to resolve it, not having gone through it. I would assume it
will require running down migrations and then manually updating the migration
field; in any case, docs once one of us has to go through this.
migrations are written to files and checked into version control, and we then
use go-bindata to generate go code from those files, which is compiled in to
be consumed by the migrate library (so that we don't have to put migration
files on any servers) -- the generated code is also in vcs. this seems to
work ok. I don't like having to use the separate go-bindata tool, but it
wasn't really hard to install, and go generate takes care of the args. adding
migrations should be relatively rare anyway, but I tried to make it pretty
painless.
1 migration to add created_at to routes is done here, as an example of how
to do migrations as well as to test these things ;) -- `created_at` will be
`0001-01-01T00:00:00.000Z` for any existing routes after a user runs this
version. we could spend the extra time adding today's date to any outstanding
records, but that's not really accurate; the main thing is that nobody will
have to nuke their db with the migrations in place & we don't really have any
prod clusters to worry about. all future routes will correctly have
`created_at` set, and I plan to add other timestamps, but wanted to keep this
patch as small as possible, so only did routes.created_at.
there are tests that a spankin' new db will work as expected, as well as that
a db works after running all down & up migrations. the latter tests only run
on mysql and postgres, since sqlite3 does not like ALTER TABLE DROP COLUMN;
up migrations will need to be tested manually for sqlite3 only, but in
theory, if they are simple and work on postgres and mysql, there is a good
likelihood of success; the new migration from this patch works fine on
sqlite3.
for now, we need to use `github.com/rdallman/migrate` to move forward, as
getting integrated into upstream is proving difficult due to
`github.com/go-sql-driver/mysql` being broken on master (yay dependencies).
Fortunately for us, we vendor a version of the `mysql` bindings that actually
works; thus, we are capable of using the `mattes/migrate` library with
success. this also requires go1.9 for the new `database/sql.Conn` type; CI
has been updated accordingly.
some doc fixes too from testing.. and of course updated all deps.
anyway, whew. this should let us add fields to the db without busting
everybody's dbs. open to feedback on better ways, but this was overall pretty
simple despite futzing with mysql.
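for flavor, the runtime flow with the migrate library looks roughly like this
(a sketch using the library's file source for brevity; fn actually compiles
the migration files in with go-bindata, and the db url here is made up):

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres" // db driver
	_ "github.com/mattes/migrate/source/file"       // reads ./migrations/*.sql
)

func main() {
	m, err := migrate.New("file://migrations",
		"postgres://localhost:5432/funcs?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	// runs every migration the db hasn't seen yet; no pending changes
	// is not an error.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```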
* add migrate pkg to deps, update deps
use rdallman/migrate until we resolve in mattes land
* add README in migrations package
* add ref to mattes lib
go vet caught some nifty bugs, so those are fixed here, and we now also vet
everything from here on out, since the robots seem to do a better job of
vetting than we have managed to.
also adds a gofmt check to circle. could move this to the test.sh script
(didn't want a script calling a script, because $reasons), and it's nice and
isolated in its own little land as it is. side note: changed the script so it
runs in 100ms instead of 3s; i think find is a lot faster than go list.
attempted some minor cleanup of various scripts
* idle_timeout max of 1h
* timeout max of 120s for sync, 1h for async
* max memory of 8GB
* do full route validation before call invocation
* ensure that idle_timeout >= timeout
we are now doing validation of route updates inside of the database
transaction, which is what we should have been doing all along, really.
we need this behavior to ensure that the idle timeout is longer than the
timeout, among other benefits (like not updating the most recent version of
the existing struct and overwriting previous updates, yay). since we have
this, we can get rid of the weird skipZero behavior on validate too, and
validate the real deal holyfield.
validating the route before making the call is handy so that we don't do
weird things like run a func that wants to use 300GB of RAM and run for 3
weeks. a sketch of the new bounds follows.
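the bounds from the list above, sketched as the kind of check that now runs
inside the transaction (names, units, and error text are assumptions; the
limits themselves are from this change):

```go
package models

import (
	"errors"
	"time"
)

const (
	maxSyncTimeout  = 120 * time.Second
	maxAsyncTimeout = time.Hour
	maxIdleTimeout  = time.Hour
	maxMemoryMB     = 8 * 1024 // 8GB
)

// validateRoute enforces the new bounds; run inside the update transaction
// so it sees the fully merged route, not just the patch.
func validateRoute(typ string, timeout, idleTimeout time.Duration, memoryMB uint64) error {
	switch {
	case typ == "sync" && timeout > maxSyncTimeout:
		return errors.New("timeout must be <= 120s for sync routes")
	case typ == "async" && timeout > maxAsyncTimeout:
		return errors.New("timeout must be <= 1h for async routes")
	case idleTimeout > maxIdleTimeout:
		return errors.New("idle_timeout must be <= 1h")
	case idleTimeout < timeout:
		return errors.New("idle_timeout must be >= timeout")
	case memoryMB > maxMemoryMB:
		return errors.New("memory must be <= 8GB")
	}
	return nil
}
```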
closes #192 closes #344 closes #162
sqlx has nice facilities for using structs to do queries, using their fields,
so I decided to move us all over to it. now when you take a look at
models.Call it's really obvious what's in the db and what's not. added
omitempty to some json fields that were bleeding through the api, too.
this deletes a lot of code in the sql package for scanning, and makes some
queries use struct based sqlx methods, which seem easier to read than what we
previously had. moves all the json stuff into sql.Valuer and sql.Scanner
methods in models/config.go; these are the only 2 types that ever need this.
sadly, sqlx would have done this marshaling for us, but to keep compat, I
added json. we can maybe do migrations later for a more efficient encoding,
but did not want to fuss with it today.
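the Valuer/Scanner pair amounts to this (a sketch of the shape in
models/config.go, keeping the json encoding for compat):

```go
package models

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

type Config map[string]string

// Value marshals the config to json so it can be written as one column.
func (c Config) Value() (driver.Value, error) {
	b, err := json.Marshal(c)
	return b, err
}

// Scan unmarshals the json column back into the map.
func (c *Config) Scan(value interface{}) error {
	switch v := value.(type) {
	case []byte:
		return json.Unmarshal(v, c)
	case string:
		return json.Unmarshal([]byte(v), c)
	default:
		return fmt.Errorf("config: cannot scan %T", value)
	}
}
```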
it seems like we should probably aim to keep models.Call as small as possible
in the db, as there will be a lot of them. interestingly, most functions
platforms I looked at do not seem to expose this kind of information, that I
could find. so, I think timestamps, status, id, app, path and maybe docker
stats is really all that should be in here (agree w/ Denys on 284): these and
logs will end up taking up most db space in prod. notably, payload, headers,
and env vars could be extremely large, and in the general case they are
always a copy of the route's (this breaks apart when routes are updated,
which would be useful to handle considering we don't have versioning --
versioning may be cheaper).
removed unused field in apps too
this is lined up behind #349 so that I could use the tests...
closes #345 closes #142 closes #284
calls, apps, and routes listing were previously returning the entire data set,
which just won't scale. this adds pagination with cursoring forward to each of
these endpoints (see the [docs](docs/definitions.md)).
the patch is really mostly tests, shouldn't be that bad to pick through.
some blarble about implementation is in order:
calls are sorted by ids but allow searching within certain `created_at`
ranges (finally). this is because sorting by `created_at` isn't feasible when
combined with paging, as `created_at` is not guaranteed to be unique -- ids
are (eliding theoreticals). i.e. on a page boundary, if there are 200 calls
with the same `created_at`, providing a `cursor` of that `created_at` will
skip over the remaining N calls with that `created_at`. also, using the id
will be better on the index anyway (well, fewer of them). yay sortable ids! I
can't discern any issues doing this: even if 200 calls have the same
created_at, they will have different ids, and the sort should allow
paginating them just fine. ids are also url safe, so the id works as the
cursor value just fine.
apps and routes are sorted in alphabetical order. as names aren't guaranteed
to be url safe, we are base64'ing them in the front end to a url safe format
before returning them, and base64 decoding them when we get them back. this
does mean that cursors can be relatively large if the path/app is long, but
if we don't want to add ids, then they were going to be pretty big anyway. a
bonus is that this kind of obscures them. if somebody has a better idea on
formatting, by all means.
notably, we are not using the sql paging facilities; we are baking our own
based on cursors, which ends up being much more efficient for querying longer
lists of resources. this should also be easy to implement in other, non-sql
dbs, and we can change the cursoring formats on the fly since we expose them
only as opaque strings. the front end deals with the base64 / formatting,
etc., and the back end takes raw values (strfmt.DateTime, or the id for
calls). the cursor that is passed to/by the user is simply the last resource
on the previous page, so in theory we don't even need to return it, but it
does make it a little easier to use. also, the cursor being blank on the last
page depends on page fullness, so sometimes users will get a cursor when
there are no results on the next page (1/N chance, and it's not really the
end of the world -- actually searching for the next thing would make things
more complex). there are ample tests for this behavior.
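the cursor plumbing is about this simple (a sketch; the back end just gets
the decoded raw value to use in a `WHERE name > ? ORDER BY name LIMIT ?`
style query):

```go
package server

import "encoding/base64"

// the front end base64's the last item on the page into a url-safe opaque
// cursor, and decodes it on the way back in.
func encodeCursor(lastOnPage string) string {
	return base64.RawURLEncoding.EncodeToString([]byte(lastOnPage))
}

func decodeCursor(cursor string) (string, error) {
	b, err := base64.RawURLEncoding.DecodeString(cursor)
	return string(b), err
}
```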
I've turned off all query parameters allowing `LIKE` queries on certain
listing endpoints, as we should not expose sql behavior through our API in
the event that we end up not using a sql db down the road. I think we should
only allow prefix matching, which sql, as well as other types of databases,
can support relatively cheaply, but this is not hooked up here, as it didn't
'just work' when I was fiddling with it (we can add it later; these were
unnecessary and weren't wired in before in the front end).
* remove route listing across apps (unused)
* fix panic when doing `/app//`. this is prob possible for other types of
endpoints, out of scope here. added a guard in front of all endpoints for this
* adds `from_time` and `to_time` query parameters to calls, so you can e.g.
list the last hour of tasks. these are not required and default to
oldest/newest.
* hooked back up the datastore tests to the sql db, only run with sqlite atm,
but these are useful, added a lot to them too.
* added a bunch of tests to the front end, so pretty sure this all works now.
* added to swagger, we'll need to re-gen. also wrote some words about
pagination workings, I'm not sure how best to link to these, feedback welcome.
* not sure how we want to manage indexes, but we may need to add some (looking
at created_at, mostly)
* `?route` changed to `?path` in routes listing, to keep consistency with
everything else
* don't 404 when searching for calls where the route doesn't exist, just
return an empty list (it's a query param ffs)
closes #141
something still feels off with this, but I tinkered with it for a day-ish and
didn't come up with anything a whole lot better. doing a lot of the
maneuvering in the caller seemed better, but it was just bloating up GetCall,
so I went back to having it basically like it was, but returning the limited
underlying buffer to read from so we can ship it to the db.
some small changes to the LogStore interface: swapped it to take an io.Reader
instead of a string, for more flexibility in the future while essentially
maintaining the same level of performance that we have now. I'm guessing in
the not so distant future we'll ship these to some s3-like service, and it
would be better to stream them in than to carry around a giant string anyway.
also, carrying around up to 1MB buffers in memory isn't great; we may want to
switch to file backed logs for calls, too. using io.Reader for logs should
make #279 more reasonable if/once we move to some s3-like thing: we can
stream from the log storage service direct to clients.
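the interface change, roughly (a sketch; method names and the exact
parameters are assumptions, the io.Reader swap is the point):

```go
package models

import (
	"context"
	"io"
)

// LogStore now takes/returns readers instead of strings, so logs can be
// streamed to an s3-like store (or to clients) without holding ~1MB
// strings in memory.
type LogStore interface {
	InsertLog(ctx context.Context, appName, callID string, callLog io.Reader) error
	GetLog(ctx context.Context, appName, callID string) (io.Reader, error)
}
```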
this fixes the span being out of whack and allows the 'right' context to be
used to upload logs (next to inserting the call). deletes the dbWriter we
had; we just do this in call.End now (which makes sense to me, at least).
removes the dupe code for making an stderr for hot / cold and simplifies the
way to get a func logger (no more 7 param methods, yay).
closes #298
currently:
* container ran out of memory (code 137)
* container exited with other code != 0
* unable to pull image (auth/404)
there may be others, but this is a good start (the most common). notably, for
both hot and cold these should bubble up (if deterministic, which hub isn't
always), and they are useful for users in debugging why things aren't
working.
added tests to make sure that these behaviors are working.
also changed the behavior such that when the container exits we return a 502
instead of a 503, just to be able to distinguish the fact that fn is working
as expected but the container is acting funky (400 is weird here, so idk).
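the mapping, roughly (a sketch; error text and shape are assumptions, the
codes and the 502 choice are from this change):

```go
package agent

import (
	"errors"
	"fmt"
)

// errorForExit turns a container's exit state into a user-visible error;
// the server surfaces these as 502s (fn is fine, the container isn't).
func errorForExit(code int) error {
	switch {
	case code == 0:
		return nil
	case code == 137:
		return errors.New("container ran out of memory (exit 137)")
	default:
		return fmt.Errorf("container exited with code %d", code)
	}
}
```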
removed references to the old IsUserVisible crap and slightly changed the
RunResult interface for plumbing reasons (to get the error type,
specifically).
fixed an issue where, if ~/.docker/config.json exists, pulling images
sometimes wouldn't work deterministically (should be more in line w/
expectations now)
closes #275