* add per-call stats field as a histogram. this adds a histogram of up to 240 data points of call data, sampled once per second and stored in the db at the end of a call invocation. the same metrics are still shipped to prometheus (prometheus has the not-potentially-reduced version). for the API reference, see the updates to the swagger spec; this is just added onto the get call endpoint. this does not add any extra db calls: the stats field on call is a json blob, which is easily modified to add / omit future fields, and it is just tacked on to the call we're already making to InsertCall. we expect this to add very little overhead: the set is bounded to be relatively small, we plan to clean calls out of the db periodically, functions will generally be short, and the same code used at a previous firm did not cause a notable db size increase under a production workload that is worse wrt histogram size (I checked). the code changes are really small aside from changing to strfmt.DateTime, adding a migration, and implementing sql.Valuer; the swap function needed a slight modification so that the `call.Stats` field can be read safely for upload at the end. with the full histogram in hand, we can compute max/min/average/median/growth rate/bernoulli distributions/whatever very easily in a UI or tooling; in particular, this data is easily chartable [for a UI], which is beneficial. see the sketch after this list for the json blob shape.
* adds swagger spec of the API update to the calls endpoint
* adds migration for the call.stats field
* adds the call.stats field to sql queries
* changes swapping of the hot logger to exec, so we know that call.Stats is no longer being modified after `exec` [in call.End]
* throws out docker stats between function invocations in hot functions (there is no call to store them on; we could change this later for debug, and they're still in prom)
* tested in tests and via the API
* closes #19
* adds format of ints to swag
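As a minimal sketch of the JSON-blob approach described above, a stats type can implement `driver.Valuer` for writes and `sql.Scanner` for reads, roughly as follows. The type and field names here (`Stat`, `Stats`, the metric map) are illustrative assumptions, not the exact ones in this change; only `strfmt.DateTime` and the `sql.Valuer` requirement come from the notes above.

```go
package models

import (
	"database/sql/driver"
	"encoding/json"
	"errors"

	"github.com/go-openapi/strfmt"
)

// Stat is one per-second sample of call metrics (names are hypothetical).
type Stat struct {
	Timestamp strfmt.DateTime   `json:"timestamp"`
	Metrics   map[string]uint64 `json:"metrics"`
}

// Stats is the bounded histogram (up to ~240 points) stored on a call.
type Stats []Stat

// Value marshals the whole histogram into a single JSON column, so it
// can be handed straight to the existing InsertCall query.
func (s Stats) Value() (driver.Value, error) {
	return json.Marshal(s)
}

// Scan unmarshals the JSON column back into the histogram when a call
// is read, e.g. by the get call endpoint.
func (s *Stats) Scan(value interface{}) error {
	if value == nil {
		return nil
	}
	b, ok := value.([]byte)
	if !ok {
		return errors.New("stats: expected []byte from sql driver")
	}
	return json.Unmarshal(b, s)
}
```

Because the column is a single JSON blob, adding or omitting fields later only changes the struct tags, not the schema.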
Migrations How-To
All migration files should be of the format:

`[0-9]+_[add|remove]_model[_field]*.[up|down].sql`
The number at the beginning of the file name should be monotonically
increasing from the last highest file number in this directory. E.g. if there
is `11_add_foo_bar.up.sql`, your new file should be `12_add_bar_baz.up.sql`.
All `*.up.sql` files must have an accompanying `*.down.sql` file in order to
pass review.
The contents of each file should contain only one ANSI SQL query. For help, you may refer to https://github.com/mattes/migrate/blob/master/MIGRATIONS.md, which illustrates some of the finer points.
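For example, the call.stats migration from the changelog above might look like the following pair. The file number and column type are assumptions for illustration; note that each file holds exactly one query.

`12_add_calls_stats.up.sql`:

```sql
ALTER TABLE calls ADD COLUMN stats text;
```

`12_add_calls_stats.down.sql`:

```sql
ALTER TABLE calls DROP COLUMN stats;
```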
After creating the file you will need to run, in the same directory as this README:
$ go generate
NOTE: You may need to `go get github.com/jteeuwen/go-bindata` before running `go generate` in order for it to work.
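For reference, `go generate` picks the bindata step up from a directive in this package's source; it typically looks something like the line below. The package name, flags, and path here are assumptions; check the actual `//go:generate` line in this package for the real ones.

```go
// Hypothetical directive; go generate runs the command after the marker.
package migrations

//go:generate go-bindata -pkg migrations -o migrations.go .
```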
After running `go generate`, the `migrations.go` file should be updated. Check
the updated version of this, as well as the new `.sql` file, into git.
After adding the migration, be sure to update the fields in the SQL tables in
`sql.go`, one package up. For example, if you added a column `foo` to routes,
add this field to the routes `CREATE TABLE` query, as well as to any queries
where it should be returned.
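A hedged sketch of what that looks like in `sql.go`, continuing the hypothetical `foo` column on routes (the constant names and surrounding columns are invented for illustration):

```go
// CREATE TABLE query for routes, with the new column appended.
const routesTableCreate = `CREATE TABLE IF NOT EXISTS routes (
	app_name varchar(256) NOT NULL,
	path varchar(256) NOT NULL,
	foo varchar(256), -- new column added by the migration
	PRIMARY KEY (app_name, path)
);`

// Any query that should return the new field must name it explicitly.
const routeSelector = `SELECT app_name, path, foo FROM routes`
```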
After doing this, run the test suite to make sure the SQL queries work as
intended, and voila. The test suite will ensure that the up and down migrations
work, as well as a fresh db. The down migrations will not be tested against
SQLite3, as it does not support `ALTER TABLE ... DROP COLUMN`, but they will still be
tested against Postgres and MySQL.
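If you want to exercise a single migration pair by hand rather than through the suite, a minimal sketch looks like this, assuming a reachable Postgres URL and the hypothetical file names from the example above. The real suite drives migrations through the migration library instead; this only works because each file contains exactly one query.

```go
package datastore_test

import (
	"database/sql"
	"io/ioutil"
	"testing"

	_ "github.com/lib/pq" // postgres driver; SQLite3 would reject the down file
)

// TestStatsMigrationUpDown applies an up migration and then its down
// counterpart in order, handing each file's single query to Exec.
func TestStatsMigrationUpDown(t *testing.T) {
	db, err := sql.Open("postgres", "postgres://localhost/fn_test?sslmode=disable")
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	for _, f := range []string{
		"migrations/12_add_calls_stats.up.sql",
		"migrations/12_add_calls_stats.down.sql",
	} {
		q, err := ioutil.ReadFile(f)
		if err != nil {
			t.Fatal(err)
		}
		if _, err := db.Exec(string(q)); err != nil {
			t.Fatalf("%s: %v", f, err)
		}
	}
}
```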