Commit Graph

188 Commits

Author SHA1 Message Date
CI
7015927ea3 fnlb: 0.0.30 release [skip ci] 2017-08-09 17:50:52 +00:00
CI
efb7d1065a fnlb: 0.0.29 release [skip ci] 2017-08-09 16:05:58 +00:00
CI
cad0c046c8 fnlb: 0.0.28 release [skip ci] 2017-08-09 12:33:25 +00:00
CI
71ae84c9e3 fnlb: 0.0.27 release [skip ci] 2017-08-08 18:17:36 +00:00
CI
280416eddf fnlb: 0.0.26 release [skip ci] 2017-08-08 16:17:08 +00:00
CI
a7ed30a55f fnlb: 0.0.25 release [skip ci] 2017-08-08 13:22:42 +00:00
CI
7758e4a76a fnlb: 0.0.24 release [skip ci] 2017-08-07 19:06:57 +00:00
CI
0a8ed020b2 fnlb: 0.0.23 release [skip ci] 2017-08-07 18:09:13 +00:00
CI
15ada7f17f fnlb: 0.0.22 release [skip ci] 2017-08-05 04:49:46 +00:00
CI
0322d2623e fnlb: 0.0.21 release [skip ci] 2017-08-03 22:42:43 +00:00
CI
b1f41f60bb fnlb: 0.0.20 release [skip ci] 2017-08-03 21:15:57 +00:00
CI
e47ab56ff0 fnlb: 0.0.19 release [skip ci] 2017-08-03 18:18:32 +00:00
CI
e40b5a832c fnlb: 0.0.18 release [skip ci] 2017-08-02 22:19:53 +00:00
CI
563c202a94 fnlb: 0.0.17 release [skip ci] 2017-08-02 22:08:22 +00:00
Reed Allman
b533350855 add traces to the lb
also fixed the broken version checking so this works again
2017-08-02 13:51:10 -07:00
CI
58d2c72efb fnlb: 0.0.16 release [skip ci] 2017-08-02 19:46:12 +00:00
Denis Makogon
da0ab23f63 Adding per-node version check
The version check happens at startup and every time a new node is added via the API

Implements: #153
2017-08-01 20:37:54 +03:00
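
For reference, a minimal sketch of what a per-node version check like this could look like in Go. The helper name checkNodeVersion, the minVersion value, and the /version response shape are assumptions for illustration, not fnlb's actual API:

```go
package lb

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// minVersion is a hypothetical minimum supported functions version.
var minVersion = "0.3.0"

type versionResp struct {
	Version string `json:"version"`
}

// checkNodeVersion asks a node for its version and rejects it if it is
// below the minimum. Per the commit message, this would run once per
// node at startup and again whenever a node is added via the API.
func checkNodeVersion(node string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://" + node + "/version")
	if err != nil {
		return fmt.Errorf("node %s unreachable: %v", node, err)
	}
	defer resp.Body.Close()

	var v versionResp
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		return fmt.Errorf("node %s: bad version response: %v", node, err)
	}
	if v.Version < minVersion { // naive lexical compare; real code wants semver
		return fmt.Errorf("node %s runs %s, need >= %s", node, v.Version, minVersion)
	}
	return nil
}
```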
CI
ba77624082 fnlb: 0.0.15 release [skip ci] 2017-08-01 02:40:16 +00:00
CI
84bbfc0003 fnlb: 0.0.14 release [skip ci] 2017-07-31 19:46:02 +00:00
Reed Allman
a7e29a6ace Merge pull request #174 from fnproject/lb-die
don't wait for signals to die
2017-07-31 12:34:53 -07:00
CI
1328388526 fnlb: 0.0.13 release [skip ci] 2017-07-31 19:16:52 +00:00
CI
d3243b3ac9 fnlb: 0.0.12 release [skip ci] 2017-07-30 23:54:20 +00:00
CI
37f9d578c6 fnlb: 0.0.11 release [skip ci] 2017-07-30 23:42:58 +00:00
Reed Allman
2a7d9072d6 don't wait for signals to die 2017-07-28 17:17:16 -07:00
CI
85f7a53cc0 fnlb: 0.0.10 release [skip ci] 2017-07-28 19:09:06 +00:00
CI
dabf1f031d fnlb: 0.0.9 release [skip ci] 2017-07-28 18:57:15 +00:00
CI
90e1db8da2 fnlb: 0.0.8 release [skip ci] 2017-07-28 18:45:38 +00:00
CI
ef2800aa29 fnlb: 0.0.7 release [skip ci] 2017-07-28 18:16:15 +00:00
CI
5f206aa45b fnlb: 0.0.6 release [skip ci] 2017-07-28 14:49:25 +00:00
CI
8ade75b868 fnlb: 0.0.5 release [skip ci] 2017-07-28 01:28:50 +00:00
CI
42fb496036 fnlb: 0.0.4 release [skip ci] 2017-07-27 18:16:18 +00:00
Travis Reeder
b0494cd25d Boom, circle good to go, releases on commits to master too (#7)
* circle

* testing release

* trying release

* c

* functions: 0.3.25 release [skip ci]

* functions: 0.3.26 release [skip ci]

* fn tool: 0.3.19 release [skip ci]

* testing cli release only

* fn tool: 0.3.20 release [skip ci]

* fn tool: 0.3.21 release [skip ci]

* hopefully the last thing

* fn tool: 0.3.22 release [skip ci]

* fn tool: 0.3.23 release [skip ci]

* almost there....

* fn tool: 0.3.24 release [skip ci]

* fnlb: 0.0.2 release [skip ci]

* fn tool: 0.3.25 release [skip ci]

* fnlb: 0.0.3 release [skip ci]

* Added back in commented out lines.

* Fixing middleware example.
2017-07-26 17:38:37 -07:00
Travis Reeder
437616acb7 Fix tests 2017-07-26 11:19:18 -07:00
Travis Reeder
48e3781d5e Rename to GitHub (#3)
* circle

* Rename to GitHub and fn->cli
2017-07-26 10:50:19 -07:00
Reed Allman
e637f9736e back the lb with a db for scale
now we can run multiple lbs in the same 'cluster' and they will all point to
the same nodes. not all lb nodes are guaranteed to have the same set of
functions nodes to route to at any point in time, since each lb node performs
its own health checks independently, but they will at least all be backed by
the same list from the db to health check. in cases where there are more than
a few lbs we can rethink this strategy; mostly we need to back the lbs with a
db so that they persist nodes and remain fault tolerant in that sense.
independent health checks are useful to reduce thrashing the db during
network partitions between lb and fn pairs. it would be nice to have
gossip-based health checking to reduce network traffic, but this works too,
and we'll need to seed any gossip protocol with a list from a db anyway.

db_url is the same format as what functions takes. env vars aren't set up for
fnlb right now (low-hanging fruit); the flag is `-db`, and it defaults to
in-memory sqlite3, so nodes will be forgotten between reboots. used the sqlx
stuff, and decided not to put the lb tables in the datastore package, since
it was easy enough to add here, get the sugar, and avoid bloating the
datastore interface. the tables won't collide, so the lbs can use the same
pg/mysql the fn servers run in prod; db load from the lb is low (1 call every
1s per lb). a rough sketch of this wiring follows this entry.

i still need to add some tests; touch testing worked as expected.
2017-07-07 07:45:17 -07:00
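
For reference, a minimal sketch of the wiring described above: a `-db` flag, sqlx, and an in-memory sqlite3 default. Only the flag name and the default come from the commit message; the DSN format, table name, and helper names are assumptions for illustration:

```go
package lb

import (
	"flag"
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/mattn/go-sqlite3"
)

// dbURL defaults to in-memory sqlite3, so nodes are forgotten between
// reboots; the real flag takes a db url in the same format functions uses.
var dbURL = flag.String("db", ":memory:", "db url; defaults to in-memory sqlite3")

// openNodeStore opens the shared node list. Every lb in the 'cluster'
// reads the same table, but each lb health checks the nodes on its own.
func openNodeStore() *sqlx.DB {
	db, err := sqlx.Open("sqlite3", *dbURL) // a real version would parse pg/mysql urls too
	if err != nil {
		log.Fatal(err)
	}
	// namespaced table, so it won't collide with the fn servers' tables
	db.MustExec(`CREATE TABLE IF NOT EXISTS lb_nodes (address TEXT PRIMARY KEY)`)
	return db
}

// addNode persists a node so it survives lb reboots (with a real db)
// and becomes visible to the other lbs.
func addNode(db *sqlx.DB, address string) error {
	_, err := db.Exec(`INSERT OR IGNORE INTO lb_nodes (address) VALUES (?)`, address)
	return err
}

// listNodes returns the shared list each lb health checks against.
func listNodes(db *sqlx.DB) ([]string, error) {
	var nodes []string
	err := db.Select(&nodes, `SELECT address FROM lb_nodes`)
	return nodes, err
}
```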
Reed Allman
9cc5fb8784 remove traces of iron 2017-06-28 21:25:56 -07:00
Reed Allman
bcd9f1253e adds docker & release stuff for fnlb 2017-06-28 20:41:16 -07:00
Reed Allman
398ecc388e move the lb stuff around in lego form
this structure should let us keep the consistent hash code and just use
consistent hashing on a subset of nodes; then, to satisfy the oracle service
stuff in functions-service, we can implement a different "Grouper" that does
vm allocation and whatever other magic we need to manage nodes and poop out
sets of nodes based on tenant id / func.

for the sugar, see main.go and proxy.go; the rest is basically renaming /
moving stuff (not easy-to-follow changes, nature of the beast).

the only 'issues' i can think of are that down in the ch stuff (or Router) we
will need a back channel to tell the 'Grouper' to add a node (i.e. when all
nodes for that shard are currently loaded), which isn't great, and that the
grouper has no way of knowing when a node in a given set is no longer being
used. still thinking about how to couple those two. basically i don't want to
just copy that consistent hash code, but after munging with it i'm almost at
'fuck it' level and maybe it's worth it to just copy and hack it up in
functions-service for what we need. we'll also need different key funcs for
groupers and routers eventually (grouper wants tenant id, router needs tenant
id + route). anyway, open to any ideas; i haven't come up with anything
great. feedback on the interface would be great; a rough sketch of it follows
this entry.

after this we can plumb the datastore stuff into the allGrouper pretty easily
2017-06-10 15:21:23 -07:00
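
For reference, a rough sketch of the Grouper / Router split described above. The names Grouper, Router, and allGrouper come from the commit message; the method signatures are assumptions for illustration:

```go
package lb

import "net/http"

// Grouper narrows the full node list to the set eligible for a request
// (the back channel for "add a node" is the open question noted above).
// A different Grouper could do vm allocation keyed on tenant id for the
// oracle service stuff in functions-service.
type Grouper interface {
	Group(r *http.Request) []string
}

// Router picks one node from a group, e.g. by consistent hashing on a
// finer key (tenant id + route) than the Grouper uses.
type Router interface {
	Route(nodes []string, r *http.Request) (string, error)
}

// allGrouper is the trivial Grouper: every known node is eligible. The
// datastore-backed node list can be plumbed in here, per the message.
type allGrouper struct {
	nodes []string
}

func (g *allGrouper) Group(r *http.Request) []string { return g.nodes }
```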