* update fsouza bindings
to get https://github.com/fsouza/go-dockerclient/pull/776
* set memory swappiness to 0
without this, if vm.swappiness is not turned off on the host, containers are
allowed to use unlimited (up to the host max) anonymous swap space on the
host. this behavior is not what we want, particularly for getting tests to
pass in everybody's dev environment, but we don't really want this behavior
in any case.
* fix pids
* start from the top
remove runner configuration mode
* remove async, logs, calls... hybrid still has one use
* add note
* fix tests
* remove all async verbiage / cruft
* fix test
* remove logs and calls from swagger
* fix rebase
* fix system tests
* remove calls/logs from sql db
adds migration and removes datastore methods
* go mod tidy && go mod vendor
* remove unused env vars
* remove stale server options
go mod is nice and trims unused packages from large imports, so this is more
sensible to do now. we had one usage to switch to.
also removed THIRD_PARTY file, this was stale (now all 3 usages noted there
are no longer direct, either)... found this from grepping.
the protobuf generated go files we had in tow were not generated by the
version of protobuf that we had vendored. if one did attempt to delete the
runner.pb.go file and generate from the proto file we had with the protobuf
version we had vendored the build would error with a compile time check for
IsProtobufVersion3 which did not exist in the protobuf version we had
vendored. If one changed the runner proto file to use proto version 2,
generation would error as our proto is in format for 3. there is no tagged
release of golang/protobuf to vendor that contains a protobuf 3 generator,
this vendors tip until that happens.
this also updates the protobuf generated go file to use the version that we
now have in tow, which made some small changes. I have no clue how we had
protobufs that were generated with a proto version 3 set at the top by a
protobuf 2 generator to begin with, but I don't need another existential
question to come to grips with, really.
I did not test, but from the looks of it this shouldn't break anything as the
removed methods can just use the normal marshal path (I think they were just
optimizations?)
There is no need to propagate CapacityFull to waitHot(). Instead
waitHot() can receive 503 which is easier to follow (and less
error prone) in handleCallEnd().
* go modified
fiddling with vendor
got rid of the vendor directory
revendored but with the exact same versions of things
maybe better
added mods for the images
revendored
using `GOFLAGS` instead of repeating myself
vendor everything to the exact same commit hash as before
and fixed ugorji
Delete Deproxy.toml
empty file
cleaned up some files
cleaned up some cruft
get rid of some unused packages and exclude some Microsoft packages
added flags to the variables that get pushed into docker in the makefile
It works I suppose
added noop
excluded what we did not want
even less hacky
reverted to a version that has not been mangled
* get rid of my experiment
* get rid of old format stuff, utils usage, fix up for fdk2.0 interface
* pure agent format removal, TODO remove format field, fix up all tests
* shitter's clogged
* fix agent tests
* start rolling through server tests
* tests compile, some failures
* remove json / content type detection on invoke/httptrigger, fix up tests
* remove hello, fixup system tests
the status checker test just hangs: it's testing that something doesn't
work, so the assertion would pass, but the test itself never finishes.
not worth fighting.
* fix migration
* meh
* make dbhelper shut up about dbhelpers not being used
* move fail status at least into main thread, jfc
* fix status call to have FN_LISTENER
also turns off the stdout/stderr blocking between calls, because it's
impossible to debug otherwise (without syslog). now that stdout and stderr
go to the same place (either to host stderr or nowhere) and aren't used for
function output, this shouldn't be a big fuss really
* remove stdin
* cleanup/remind: fixed bug where watcher would leak if container dies first
* silence system-test logs until fail, fix datastore tests
postgres does weird things with constraints when renaming tables, took the
easy way out
system-tests were extremely loud and made you download a circleci text file
of the logs; made them only yell when they goof
* fix fdk-go dep for test image. fun
* fix swagger and remove test about format
* update all the gopkg files
* add back FN_FORMAT for fdks that assert things. pfft
* add useful error for functions that exit
this error is really confounding because containers can exit for all manner
of reasons; we're just guessing that this is the most likely cause for now,
and this error message should very likely change or be removed from the
client path anyway (context.Canceled wasn't all that useful either, but
anyway, I'd been hunting for this... so found it). added a test to avoid
being publicly shamed for 1 line commits (beware...).
* Update dockerclient for registryToken support
* Update docker/docker for fsouza/go-dockerclient dependency
* Update docker/libnetwork
* Updated docker/docker to v18.09.0-ce-beta1
* POC code for inotify UDS-io-socket
* http-stream format
introducing the `http-stream` format support in fn. there are many details for
this, none of which can be linked from github :( -- docs are coming (I could
even try to add some here?). this is kinda MVP-ish level, but does not
implement the remaining spec, ie 'headers' fixing up / invoke fixing up. the
thinking being we can land this to test fdks / cli with and start splitting
work up on top of this. all other formats work the same as previous (no
breakage, only new stuff)
with the cli you can set `format: http-stream` and deploy, and then invoke a
function via the `http-stream` format. this uses unix domain socket (uds) on
the container instead of previous stdin/stdout, and fdks will have to support
this in a new fashion (will see about getting docs on here). fdk-go works,
which is here: https://github.com/fnproject/fdk-go/pull/30 . the output looks
the same as an http format function when invoking a function. wahoo.
there's some amount of stuff we can clean up here, enumerated:
* the cleanup of the sock files is iffy, high pri here
* permissions are a pain in the ass and i punted on dealing with them. you can
run `sudo ./fnserver` if running locally, it may/may not work in dind(?) ootb
* no pipe usage at all (yay), still could reduce buffer usage around the pipe
behavior, we could clean this up potentially before removal (and tests)
* my brain can’t figure out if dispatchOldFormats changes pipe behavior, but
tests work
* i marked XXX to do some clean up which will follow soon… need this to test fdk
tho so meh, any thoughts on those marked would be appreciated however (1 less
decision for me). mostly happy w/ general shape/plumbing tho
* there are no tests atm, this is a tricky dance indeed. attempts were made.
need to futz with the permission stuff before committing to adding any tests
here, which I don't like either. also, need to get the fdk-go based test image
updated according to the fdk-go, and there's a dance there too. rumba time..
* delaying the big big cleanup until we have good enough fdk support to kill
all the other formats.
open to ideas on how to maneuver landing stuff...
* fix unmount
* see if the tests work on ci...
* add call id header
* fix up makefile
* add configurable iofs opts
* add format file describing http-stream contract
* rm some cruft
* default iofs to /tmp, remove mounting
out of the box, fn can't mount. /tmp will provide a memory backed fs for us
on most systems, this will be fine for local developing and this can be
configured to be wherever for anyone that wants to make things more difficult
for themselves.
also removes the mounting, this has to be done as root. we can't do this in
the oss fn (short of requesting root, but no). in the future, we may want to
have a knob here to have a function that can be configured in fn that allows
further configuration here. since we don't know what we need in this dept
really, not doing that yet (it may be the case that it could be done
operationally outside of fn, eg, but not if each directory needs to be
configured itself, which seems likely, anyway...)
* add WIP note just in case...
* update fsouza to v1.2.0
* unwind timeouts on docker
previously, we were setting our own transport on the docker client, but this
does not work anymore as fsouza now needs to call this:
https://github.com/fsouza/go-dockerclient/blob/master/client_unix.go
which makes a platform dependent client. fsouza now also appears to make a
transport that modifies the default http client with some saner values for
things like max idle conns per host (they get reaped if idle 90s):
https://github.com/fsouza/go-dockerclient/blob/master/client.go#L1059
-- these settings are sane and were why we were doing this to begin with.
additionally, have removed our setting of timeout on the docker client for 2
minutes. this is a leftover relic of a bygone era from a time when we relied
on these timeouts to timeout higher level things, which now we're properly
timing out in the enclosing methods. so, they gone, this makes the docker
client a little less whacky now.
Creates a test that aims to assert that the Equals and Clone functions
for our three entity structs actually work.
The bulk of the code is spent creating gopter generators for the entities.
See information on generative or property-based testing for explanations
of that topic; basically, a generator is an object that is capable of
creating a stream of unique instances of the given struct.
With the generator we then make three assertions:
1) Entities are always equal to themselves.
2) A .Clone() of an entity is Equal to the original entity.
3) A .Clone() of an entity that has a field modified is not equal to the
original.
The third property is the worst to implement: rather than generating the
field to modify, it simply loops over all fields for each generated
entity and checks that Equals always breaks.
Break testing shows that this would have caught earlier bugs in Equals
due to field addition. It will add to the work of adding further fields,
since generators have to be manually specified for each field, but that
seems a worthy cost.
* add DateTime sans mgo
* change all uses of strfmt.DateTime to common.DateTime, remove test strfmt usage
* remove api tests, system-test dep on api test
multiple reasons to remove the api tests:
* awkward dependency with fn_go meant generating bindings on a branched fn and
vendoring those to test new stuff. this is, at a minimum, not intuitive, not
worth it, and not a fun way to spend the finite amount of time we have to live.
* api tests only tested a subset of functionality that the server/ api tests
already test, and we risk having tests where one tests some thing and the
other doesn't. let's not. we have too many test suites as it is, and these
pretty much only test that we updated the fn_go bindings, which is actually a
hassle as noted above and the cli will pretty quickly figure out anyway.
* fn_go relies on openapi, which relies on mgo, which is deprecated and we'd
like to remove as a dependency. openapi is a _huge_ dep built in a NIH
fashion, that cannot simply remove the mgo dep as users may be using it.
we've now stolen their date time and otherwise killed usage of it in fn core,
for fn_go it still exists but that's less of a problem.
* update deps
removals:
* easyjson
* mgo
* go-openapi
* mapstructure
* fn_go
* purell
* go-validator
also, had to lock docker. we shouldn't use docker on master anyway, they
strongly advise against that. had no luck with latest version rev, so i locked
it to what we were using before. until next time.
the rest is just playing dep roulette, those end up removing a ton tho
* fix exec test to work
* account for john le cache
* Adding a way to inject a request ID
It is very useful to associate a request ID with each incoming request;
this change allows providing a function to do that via a Server Option.
The change comes with a default function which will generate a new
request ID. The request ID is put in the request context along with a
common logger which always logs the request-id.
We add gRPC interceptors to the server so it can get the request ID out
of the gRPC metadata and put it in the common logger stored in the
context, so that all the log lines using the common logger from the
context will have the request ID logged.
* initial Db helper split - make SQL and datastore packages optional
* abstracting log store
* break out DB, MQ and log drivers as extensions
* cleanup
* fewer deps
* fixing docker test
* hmm dbness
* updating db startup
* Consolidate all your extensions into one convenient package
* cleanup
* clean up dep constraints
* Move out node-pool manager and replace it with RunnerPool extension
* adds extension points for runner pools in load-balanced mode
* adds error to return values in RunnerPool and Runner interfaces
* Implements runner pool contract with context-aware shutdown
* fixes issue with range
* fixes tests to use runner abstraction
* adds empty test file as a workaround for build requiring go source files in top-level package
* removes flappy timeout test
* update docs to reflect runner pool setup
* refactors system tests to use runner abstraction
* removes poolmanager
* moves runner interfaces from models to api/runnerpool package
* Adds a second runner to pool docs example
* explicitly check for request spillover to second runner in test
* moves runner pool package name for system tests
* renames runner pool pointer variable for consistency
* pass model json to runner
* automatically cast to http.ResponseWriter in load-balanced call case
* allow overriding of server RunnerPool via a programmatic ServerOption
* fixes return type of ResponseWriter in test
* move Placer interface to runnerpool package
* moves hash-based placer out of open source project
* removes siphash from Gopkg.lock
* add jaeger support, link hot container & req span
* adds jaeger support now with FN_JAEGER_URL, there's a simple tutorial in the
operating/metrics.md file now and it's pretty easy to get up and running.
* links a hot request span to a hot container span. when we change this to
sample at a lower ratio we'll need to finagle the hot container span to always
sample or something, otherwise we'll hide that info. at least, since we're
sampling at 100% for now if this is flipped on, can see freeze/unfreeze etc.
if they hit. this is useful for debugging. note that zipkin's exporter does
not follow the link at all, hence jaeger... and they're backed by the Cloud
Empire now (CNCF) so we'll probably use it anyway.
* vendor: add thrift for jaeger
* move mattes migrations to migratex
* changes format of migrations to migratex format
* updates test runner to use new interface (double checked this with printlns,
the tests go fully down and then up, and work on pg/mysql)
* remove mattes/migrate
* update tests from deps
* update readme
* fix other file extensions
* Initial stab at the protocol
* initial protocol sketch for node pool manager
* Added http header frame as a message
* Force the use of WithAgent variants when creating a server
* adds grpc models for node pool manager plus go deps
* Naming things is really hard
* Merge (and optionally purge) details received by the NPM
* WIP: starting to add the runner-side functionality of the new data plane
* WIP: Basic startup of grpc server for pure runner. Needs proper certs.
* Go fmt
* Initial agent for LB nodes.
* Agent implementation for LB nodes.
* Pass keys and certs to LB node agent.
* Remove accidentally left reference to env var.
* Add env variables for certificate files
* stub out the capacity and group membership server channels
* implement server-side runner manager service
* removes unused variable
* fixes build error
* splits up GetCall and GetLBGroupId
* Change LB node agent to use TLS connection.
* Encode call model as JSON to send to runner node.
* Use hybrid client in LB node agent.
This should provide access to get app and route information for the call
from an API node.
* More error handling on the pure runner side
* Tentative fix for GetCall problem: set deadlines correctly when reserving slot
* Connect loop for LB agent to runner nodes.
* Extract runner connection function in LB agent.
* drops committed capacity counts
* Bugfix - end state tracker only in submit
* Do logs properly
* adds first pass of tracking capacity metrics in agent
* make memory capacity metric uint64
* make memory capacity metric uint64
* removes use of old capacity field
* adds remove capacity call
* merges overwritten reconnect logic
* First pass of a NPM
Provide a service that talks to a (simulated) CP.
- Receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period
* add capacity update on schedule
* Send periodic capacity metrics
Sending capacity metrics to node pool manager
* splits grpc and api interfaces for capacity manager
* failure to advertise capacity shouldn't panic
* Add some instructions for starting DP/CP parts.
* Create the poolmanager server with TLS
* Use logrus
* Get npm compiling with cert fixups.
* Fix: pure runner should not start async processing
* brings runner, nulb and npm together
* Add field to acknowledgment to record slot allocation latency; fix a bug too
* iterating on pool manager locking issue
* raises timeout of placement retry loop
* Fix up NPM
Improve logging
Ensure that channels etc. are actually initialised in the structure
creation!
* Update the docs - runners GRPC port is 9120
* Bugfix: return runner pool accurately.
* Double locking
* Note purges as LBs stop talking to us
* Get the purging of old LBs working.
* Tweak: on restart, load runner set before making scaling decisions.
* more agent synchronization improvements
* Deal with the CP pulling out active hosts from under us.
* lock at lbgroup level
* Send request and receive response from runner.
* Add capacity check right before slot reservation
* Pass the full Call into the receive loop.
* Wait for the data from the runner before finishing
* force runner list refresh every time
* Don't init db and mq for pure runners
* adds shutdown of npm
* fixes broken log line
* Extract an interface for the Predictor used by the NPM
* purge drained connections from npm
* Refactor of the LB agent into the agent package
* removes capacitytest wip
* Fix undefined err issue
* updating README for poolmanager set up
* use retrying dial for lb to npm connections
* Rename lb_calls to lb_agent now that all functionality is there
* Use the right deadline and errors in LBAgent
* Make stream error flag per-call rather than global otherwise the whole runner is damaged by one call dropping
* abstracting gRPCNodePool
* Make stream error flag per-call rather than global otherwise the whole runner is damaged by one call dropping
* Add some init checks for LB and pure runner nodes
* adding some useful debug
* Fix default db and mq for lb node
* removes unreachable code, fixes typo
* Use datastore as logstore in API nodes.
This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.
* creates placement abstraction and moves capacity APIs to NodePool
* removed TODO, added logging
* Dial reconnections for LB <-> runners
LB grpc connections to runners are established using a backoff strategy
in the event of reconnection; this keeps the LB up even if one of the
runners goes away, and lets it reconnect as soon as the runner is back.
* Add a status call to the Runner protocol
Stub at the moment. To be used for things like draindown, health checks.
* Remove comment.
* makes assign/release capacity lockless
* Fix hanging issue in lb agent when connections drop
* Add the CH hash from fnlb
Select this with FN_PLACER=ch when launching the LB.
* small improvement for locking on reloadLBGmembership
* Stabilise the list of Runners returned by NodePool
The NodePoolManager makes some attempt to keep the list of advertised runner
nodes as stable as possible. Let's preserve this effort on the client side.
The main point is to try to keep the same runner at the same index in the
[]Runner returned by NodePool.Runners(lbgid); the ch algorithm likes it when
this is the case.
* Factor out a generator function for the Runners so that mocks can be injected
* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model
* fixes bug with nil runners
* Initial work for mocking things in tests
* fix for anonymous go routine error
* fixing lb_test to compile
* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case
* Make GRPC port configurable, fix weird handling of web port too
* unit test reload Members
* check on runner creation failure
* adding nullRunner in case of failure during runner creation
* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.
* make capacityEntry private
* Change the runner gRPC bind address.
This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.
* Add support for multiple fixed runners to pool mgr
* Added harness for dataplane system tests, minor refactors
* Add Dockerfiles for components, along with docs.
* Doc fix: second runner needs a different name.
* Let us have three runners in system tests, why not
* The first system test running a function in API/LB/PureRunner mode
* Add unit test for Advertiser logic
* Fix issue with Pure Runner not sending the last data frame
* use config in models.Call as a temporary mechanism to override lb group ID
* make gofmt happy
* Updates documentation for how to configure lb groups for an app/route
* small refactor unit test
* Factor NodePool into its own package
* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations
* New dataplane with static runnerpool (#813)
Added static node pool as default implementation
* moved nullRunner to grpc package
* remove duplication in README
* fix go vet issues
* Fix server initialisation in api tests
* Tiny logging changes in pool manager.
Using `WithError` instead of `Errorf` when appropriate.
* Change some log levels in the pure runner
* fixing readme
* moves multitenant compute documentation
* adds introduction to multitenant readme
* Proper triggering of system tests in makefile
* Fix instructions about starting up the components
* Change db file for system tests to avoid contention in parallel tests
* fixes revisions from merge
* Fix merge issue with handling of reserved slot
* renaming nulb to lb in the doc and images folder
* better TryExec sleep logic clean shutdown
In this change we implement a better way to deal with the sleep inside
the for loop during the attempt for placing a call.
Plus we added a clean way to shutdown the connections with external
component when we shut down the server.
* System_test mysql port
set the mysql port for system tests to a different value from the one used
by the api tests, to avoid conflicts as they can run in parallel.
* change the container name for system-test
* removes flaky test TestRouteRunnerExecution pending resolution by issue #796
* amend remove_containers to remove new added containers
* Rework capacity reservation logic at a higher level for now
* LB agent implements Submit rather than delegating.
* Fix go vet linting errors
* Changed a couple of error levels
* Fix formatting
* removes commented out test
* adds snappy to vendor directory
* updates Gopkg and vendor directories, removing snappy and adding siphash
* wait for db containers to come up before starting the tests
* make system tests start API node on 8085 to avoid port conflict with api_tests
* avoid port conflicts with api_test.sh which are run in parallel
* fixes postgres port conflict and issue with removal of old containers
* Remove spurious println
* update vendor directory, add go.opencensus.io
* update imports
* oops
* s/opentracing/opencensus/ & remove prometheus / zipkin stuff & remove old stats
* the dep train rides again
* fix gin build
* deps from last guy
* start in on the agent metrics
* she builds
* remove tags for now, cardinality error is fussing. subscribe instead of register
* update to patched version of opencensus to proceed for now TODO switch to a release
* meh
fix imports
* println debug the bad boys
* lace it with the tags
* update deps again
* fix all inconsistent cardinality errors
* add our own logger
* fix init
* fix oom measure
* remove bugged removal code
* fix s3 measures
* fix prom handler nil
* Logs should support specifying region when using S3-compatible object store
* Use aws-sdk-go client for s3 backed logstore
* fixes vendor with aws-sdk-go dependencies
* WIP: add k8s grouper
- This shares a great deal of behaviour with allGrouper. Once it's
tested, refactor that to share as much as possible
- Glide hell. Checked in the yaml and lock files but a glide i -v
will be required to bring vendor/ up-to-date. Will address once this
is ready.
* Update README. Make the watch tracking work.
(To follow: add the junk that was pulled in via the glide update.)
* Vendor updates.
* go fmt
* Use the allGrouper with a k8s-backed DBStore instead.
This is much tidier :-)
* Fix up go vet
* add minio-go dep, update deps
* add minio s3 client
minio has an s3 compatible api and is an open source project and, notably, is
not amazon, so it seems best to use their client (fwiw the aws-sdk-go is a
giant hair ball of things we don't need, too). it was pretty easy and seems
to work, so rolling with it. also, minio is a totally feasible option for fn
installs in prod / for demos / for local.
* adds 's3' package for s3 compatible log storage api, for use with storing
logs from calls and retrieving them.
* removes DELETE /v1/apps/:app/calls/:call/log endpoint
* removes internal log deletion api
* changes the GetLog API to use an io.Reader, which is a backwards step atm
due to the json api for logs, I have another branch lined up to make a plain
text log API and this will be much more efficient (also want to gzip)
* hooked up minio to the test suite and fixed up the test suite
* add how to run minio docs and point fn at it docs
some notes: notably we aren't cleaning up these logs. there is a ticket
already to make a Mr. Clean who wakes up periodically and nukes old stuff, so
am punting any api design around some kind of TTL deletion of logs. there are
a lot of options really for Mr. Clean, we can notably defer to him when apps
are deleted, too, so that app deletion is fast and then Mr. Clean will just
clean them up later (seems like a good option).
have not tested against BMC object store, which has an s3 compatible API. but
in theory it 'just works' (the reason for doing this). in any event, that's
part of the service land to figure out.
closes #481, closes #473
* add log not found error to minio land
The Gin CORS middleware is used if one or more Origins are specified. The default setup for each Origin is as follows:
- GET, POST, PUT, HEAD methods allowed
- Credential sharing disabled
- Preflight requests cached for 12 hours
These are the defaults gin-contrib/cors comes with out of the box.
Gin-cors will return a 403 if it gets a request with an Origin header that isn't on its list. If no Origin header is specified then it will just return the server's response.
Start fn with CORS enabled:
`API_CORS="http://localhost:4000, http://localhost:3000" make run`
* adds migrations
closes#57
migrations only run if the database is not brand new. brand new
databases will contain all the right fields when CREATE TABLE is called,
this is for readability mostly more than efficiency (do not want to have
to go through all of the database migrations to ascertain what columns a table
has). upon startup of a new database, the migrations will be analyzed and the
highest version set, so that future migrations will be run. this should also
avoid running through all the migrations, which could bork db's easily enough
(if the user just exits from impatience, say).
otherwise, all migrations that a db has not yet seen will be run against it
upon startup, this should be seamless to the user whether they had a db that
had 0 migrations run on it before or N. this means users will not have to
explicitly run any migrations on their dbs nor see any errors when we upgrade
the db (so long as things go well). if migrations do not go so well, users
will have to manually repair dbs (this is the intention of the `migrate`
library and it seems sane). this should be rare, and not having gone through
it myself I'm unsure how best to resolve it; I would assume it will require
running the down migrations and then manually updating the migration field.
in any case, docs once one of us has to go through this.
migrations are written to files and checked into version control, and then use
go-bindata to generate those files into go code and compiled in to be consumed
by the migrate library (so that we don't have to put migration files on any
servers) -- this is also in vcs. this seems to work ok. I don't like having to
use the separate go-bindata tool but it wasn't really hard to install and then
go generate takes care of the args. adding migrations should be relatively
rare anyway, but tried to make it pretty painless.
1 migration to add created_at to the route is done here as an example of how
to do migrations, as well as testing these things ;) -- `created_at` will be
`0001-01-01T00:00:00.000Z` for any existing routes after a user runs this
version. could spend the extra time adding today's date to any outstanding
records, but that's not really accurate, the main thing is nobody will have to
nuke their db with the migrations in place & we don't have any prod clusters
really to worry about. all future routes will correctly have `created_at` set,
and plan to add other timestamps but wanted to keep this patch as small as
possible so only did routes.created_at.
there are tests that a spankin new db will work as expected as well as a db
after running all down & up migrations works. the latter tests only run on mysql
and postgres, since sqlite3 does not like ALTER TABLE DROP COLUMN; up
migrations will need to be tested manually for sqlite3 only, but in theory if
they are simple and work on postgres and mysql, there is a good likelihood of
success; the new migration from this patch works on sqlite3 fine.
for now, we need to use `github.com/rdallman/migrate` to move forward, as
getting integrated into upstream is proving difficult due to
`github.com/go-sql-driver/mysql` being broken on master (yay dependencies).
Fortunately for us, we vendor a version of the `mysql` bindings that actually
works, thus, we are capable of using the `mattes/migrate` library with success
due to that. this also will require go1.9 to use the new `database/sql.Conn`
type, CI has been updated accordingly.
some doc fixes too from testing.. and of course updated all deps.
anyway, whew. this should let us add fields to the db without busting
everybody's dbs. open to feedback on better ways, but this was overall pretty
simple despite futzing with mysql.
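for the shape of it, the routes.created_at migration described above would look roughly like this pair of up/down files (filenames and column type illustrative; the down file is the one sqlite3 chokes on, per the ALTER TABLE DROP COLUMN note):

```sql
-- migrations/2_add_routes_created_at.up.sql
ALTER TABLE routes ADD created_at text;

-- migrations/2_add_routes_created_at.down.sql
ALTER TABLE routes DROP COLUMN created_at;
```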
* add migrate pkg to deps, update deps
use rdallman/migrate until we resolve in mattes land
* add README in migrations package
* add ref to mattes lib