The Gin CORS middleware is only used if one or more origins are specified. The default setup for each origin is as follows:
- GET, POST, PUT, HEAD methods allowed
- Credential sharing disabled
- Preflight requests cached for 12 hours
These are the defaults gin-contrib/cors ships with out of the box.
The middleware returns a 403 if it gets a request with an Origin header that isn't on its list. If no Origin header is present, the server's response is returned as usual.
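For reference, a minimal sketch of wiring gin-contrib/cors up for an origin list, mirroring the defaults above (the `withCORS` helper and its arguments are illustrative, not fn's actual server code):

```go
package main

import (
	"strings"
	"time"

	"github.com/gin-contrib/cors"
	"github.com/gin-gonic/gin"
)

// withCORS is hypothetical; it applies the defaults listed above for a
// comma-separated origin list such as the API_CORS value shown below.
func withCORS(r *gin.Engine, originList string) {
	var origins []string
	for _, o := range strings.Split(originList, ",") {
		origins = append(origins, strings.TrimSpace(o))
	}
	r.Use(cors.New(cors.Config{
		AllowOrigins:     origins,
		AllowMethods:     []string{"GET", "POST", "PUT", "HEAD"},
		AllowCredentials: false,          // credential sharing disabled
		MaxAge:           12 * time.Hour, // preflight cache duration
	}))
}

func main() {
	r := gin.Default()
	withCORS(r, "http://localhost:4000, http://localhost:3000")
	r.Run(":8080")
}
```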
Start fn with CORS enabled:
`API_CORS="http://localhost:4000, http://localhost:3000" make run`
* adds migrations
closes #57
migrations only run if the database is not brand new. brand new
databases will contain all the right fields when CREATE TABLE is called;
this is mostly for readability rather than efficiency (we don't want to have
to go through all of the database migrations to ascertain what columns a table
has). upon startup of a new database, the migrations will be analyzed and the
highest version set, so that future migrations will be run. this also avoids
running through all the migrations, which could bork dbs easily enough
(if the user just exits from impatience, say).
otherwise, all migrations that a db has not yet seen will be run against it
upon startup. this should be seamless to the user whether they had a db that
had 0 migrations run on it before or N. this means users will not have to
explicitly run any migrations on their dbs nor see any errors when we upgrade
the db (so long as things go well). if migrations do not go so well, users
will have to manually repair their dbs (this is the intention of the `migrate`
library and it seems sane). this should be rare, and I'm not sure how best to
resolve it, not having gone through it myself; I would assume it will
require running down migrations and then manually updating the migration
field. in any case, we'll write docs once one of us has to go through this.
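roughly, the startup logic amounts to the following sketch (assuming the mattes/migrate-style API; `runMigrations`, its arguments, and how `latest`/`brandNew` are determined are illustrative, not fn's exact code):

```go
package datastore

import (
	"github.com/mattes/migrate"
)

// runMigrations is a hypothetical helper showing the behavior described above:
// a brand new database just records the newest migration version, an existing
// database runs every migration it hasn't seen yet.
func runMigrations(m *migrate.Migrate, latest int, brandNew bool) error {
	if brandNew {
		// CREATE TABLE already produced the full, current schema, so mark the
		// newest migration as applied without actually running any of them.
		return m.Force(latest)
	}
	// existing database: apply all outstanding migrations at startup.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		// a failed migration is left for manual repair, per the migrate library.
		return err
	}
	return nil
}
```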
migrations are written to files and checked into version control, and then
go-bindata generates those files into go code that is compiled in and consumed
by the migrate library (so that we don't have to put migration files on any
servers) -- the generated code is also in vcs. this seems to work ok. I don't
like having to use the separate go-bindata tool, but it wasn't really hard to
install, and go generate takes care of the args. adding migrations should be
relatively rare anyway, but I tried to make it pretty painless.
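the plumbing looks roughly like this (package name, file layout, generate flags, and exact import paths are approximate; `Asset`/`AssetNames` are the functions go-bindata generates into `bindata.go`):

```go
package migrations

//go:generate go-bindata -pkg migrations -o bindata.go -prefix sql/ sql/

import (
	"github.com/mattes/migrate"
	bindata "github.com/mattes/migrate/source/go_bindata"
)

// New builds a migrator that reads the .sql migrations from the compiled-in
// assets rather than from files on disk, so nothing extra ships to servers.
// AssetNames and Asset come from the bindata.go generated above.
func New(databaseURL string) (*migrate.Migrate, error) {
	src, err := bindata.WithInstance(bindata.Resource(AssetNames(), Asset))
	if err != nil {
		return nil, err
	}
	return migrate.NewWithSourceInstance("go-bindata", src, databaseURL)
}
```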
1 migration to add created_at to the route is done here as an example of how
to do migrations, as well as to test these things ;) -- `created_at` will be
`0001-01-01T00:00:00.000Z` for any existing routes after a user runs this
version. we could spend the extra time adding today's date to any outstanding
records, but that's not really accurate; the main thing is nobody will have to
nuke their db with the migrations in place & we don't really have any prod
clusters to worry about. all future routes will correctly have `created_at`
set, and I plan to add other timestamps, but wanted to keep this patch as
small as possible so only did routes.created_at.
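(that `0001-01-01` value is just go's zero `time.Time` for rows that predate the column; the exact output format the api uses is an assumption here, but for illustration:)

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var createdAt time.Time // zero value scanned for routes created before the migration
	fmt.Println(createdAt.UTC().Format("2006-01-02T15:04:05.000Z07:00"))
	// prints: 0001-01-01T00:00:00.000Z
}
```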
there are tests that a spankin' new db works as expected, as well as that a db
still works after running all down & up migrations. the latter tests only run
on mysql and postgres, since sqlite3 does not like ALTER TABLE DROP COLUMN; up
migrations will need to be tested manually for sqlite3 only, but in theory, if
they are simple and work on postgres and mysql, there is a good likelihood of
success; the new migration from this patch works fine on sqlite3.
for now, we need to use `github.com/rdallman/migrate` to move forward, as
getting integrated into upstream is proving difficult due to
`github.com/go-sql-driver/mysql` being broken on master (yay dependencies).
fortunately for us, we vendor a version of the `mysql` bindings that actually
works, so we are able to use the `mattes/migrate` library successfully. this
also requires go1.9 for the new `database/sql.Conn` type; CI has been updated
accordingly.
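for context, the `database/sql.Conn` type added in go1.9 pins a single connection out of the pool; a minimal usage sketch (the sqlite3 driver here is just for the demo):

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()
	conn, err := db.Conn(ctx) // go1.9+: a single, dedicated connection from the pool
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// everything on conn runs on that one underlying connection
	if _, err := conn.ExecContext(ctx, `CREATE TABLE t (id INTEGER)`); err != nil {
		log.Fatal(err)
	}
}
```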
some doc fixes too from testing... and of course updated all deps.
anyway, whew. this should let us add fields to the db without busting
everybody's dbs. open to feedback on better ways, but this was overall pretty
simple despite futzing with mysql.
* add migrate pkg to deps, update deps
use rdallman/migrate until we resolve in mattes land
* add README in migrations package
* add ref to mattes lib
prior to this patch we were allowing 256MB for every function run, just
because that was the default for the docker driver and we were not using the
memory field on any given route configuration. this fixes that: docker
containers now get the correct memory limit passed into the container from
the route. the default is still 128MB.
there is also an env var now, `MEMORY_MB`, that is set on each function call;
see the linked issue below for rationale.
closes #186
I ran the function code given in #186, and now I only see allocations up to
32MB before the function is killed. yay.
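(hypothetical function body, not the exact code from the issue, just to show what the new env var looks like from inside a call:)

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// MEMORY_MB is set on each call to the route's memory limit, e.g. "32"
	fmt.Fprintln(os.Stderr, "memory limit (MB):", os.Getenv("MEMORY_MB"))

	var chunks [][]byte
	for i := 1; ; i++ {
		b := make([]byte, 1<<20) // grab 1MB at a time
		for j := 0; j < len(b); j += 4096 {
			b[j] = 1 // touch each page so it counts against the container limit
		}
		chunks = append(chunks, b)
		fmt.Fprintln(os.Stderr, "allocated MB:", i) // the last line printed ~= the limit
	}
}
```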
notes:
there is no max for memory. for open source fn I'm not sure we want to
cap it, really. in the services repo we probably should add a cap before prod.
since we don't know any given fn server's ram, we can't verify that the
setting on any given route is something that can even be run.
remove envconfig & bytefmt
this updates the glide.yaml file to remove the unused deps, but trying to
install fresh is broken atm so I couldn't remove them from vendor/; going to
fix that separately (next update we just won't get these). also changed the
skip dir to be the cli dir now that its name has changed (related to the
brokenness).
fix how ram slots were being allocated. integer division is significantly
slower than subtraction.
replace the default bolt option with a sqlite3 option. the story here is that
we just need a working out-of-the-box solution, and sqlite3 is just fine for
that (actually, likely better than bolt).
with sqlite3 supplanting bolt, we mostly have sql databases. so we remove
redis, and then we just have one package with a `sql` implementation of
`models.Datastore`, leaning on sqlx to do query rewriting. this does mean
queries have to be formed a certain way and likely have to be ANSI SQL (no
special features), but we weren't using any anyway, and our base api is
basically done; we can easily extend this api as needed to only implement
certain methods in certain backends if we need to get cute.
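the query-rewriting bit boils down to something like this (table and column names here are illustrative, not necessarily the exact schema):

```go
package main

import (
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// the same code path works for "sqlite3", "mysql" or "postgres"
	db, err := sqlx.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// queries are written once with ? placeholders in plain ANSI SQL;
	// Rebind rewrites them for the active driver ($1 for postgres, ? otherwise).
	q := db.Rebind(`SELECT name, path FROM routes WHERE app_name = ?`)
	log.Println(q)
}
```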
* remove bolt & redis datastores (can still use as mqs)
* make sql queries work on all 3 (maybe?)
* remove bolt log store and use sqlite3
* shove the FnLog shit into the datastore shit for now (free pg/mysql logs...
just for demos, etc, not prod)
* fix up the docs to remove bolt references
* add sqlite3, sqlx dep
* fix up tests & mock stuff, make validator less insane
* remove put & get in datastore layer as nobody is using.
this passes the tests, which at least seem to cover all the different
backends. if we trust our tests, then this seems to work great. (the `make
docker-test-run-with-*` tests work now too)
* making things work
* #506 - Add ability to login to a private docker registry
* Rolling back "make things work" to test them out more.
* Rolling back "make things work" to test them out more.
* credentials from docker/config.json if ENV is missing
* should get docker auth info just in the init
* update glide lock
* update glide
* Switched to new go dep tool, glide is too frikin annoying.
* Updated circle builds to use dep
* Added GOPATH/bin to path.
* Added GOPATH/bin to path.
* Using regular make test, instead of docker one (not sure why it was using the docker one?).
* Fix #418 Added MySQL as DB storage layer.
* Make the mysql stuff work
* Make the mysql stuff work
* Make the mysql stuff work
* Make the mysql stuff work
* small fixes
* Switch to Go 1.8 installation inside CI (#589)
* Switch to Go 1.8 installation inside CI
Partially Addresses: #588
* Use url.Hostname() instead of custom method
* Added PR review changes.
* Added missing check for error.
* Changed * with name, config
* Removed unused import.
* Added check for NoRows
* Merged changes with HEAD
* Added documentation to mysql.go
* update mysql to be on par with postgres
* Fixes some route creation and updating bugs.
* Updated README
* Added more info to the quickstart
* Updated based on PR comments.
* Fixed based on comments.
* Updated per comments.
* lb: library for creation of load balancer
* lb: library for creation of load balancer
* Add balance subcommand to fn
* make fnlb its own command
* Update Changelog
* Add Makefile for fnlb
Currently, async workers are started before the HTTP interface is available
to serve their requests. This is fixed by ensuring that async workers are
started only after the HTTP interface is up.
Essentially we are getting rid of an error message during bootstrap:
ERRO[0000] Could not fetch task error=Get http://127.0.0.1:8080/tasks: dial tcp 127.0.0.1:8080: getsockopt: connection refused
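A sketch of the ordering idea (not the actual fn server code; the handler, URL, and poll interval are illustrative): bind the listener first so the port is accepting connections, only then start the async workers, and finally serve.

```go
package main

import (
	"log"
	"net"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/tasks", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"tasks":[]}`)) // placeholder handler
	})

	// Listen first: once this returns, the port accepts connections even
	// though Serve hasn't started yet, so workers can't hit
	// "connection refused" at bootstrap.
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	go pollTasks("http://127.0.0.1:8080/tasks")

	log.Fatal(http.Serve(ln, mux))
}

func pollTasks(url string) {
	for {
		time.Sleep(time.Second) // poll interval, purely illustrative
		resp, err := http.Get(url)
		if err != nil {
			log.Println("could not fetch task:", err)
			continue
		}
		resp.Body.Close()
	}
}
```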