mirror of https://github.com/fnproject/fn.git, synced 2022-10-28 21:29:17 +03:00
ffcda9b82384efb308372848b41f3fd7e9ba4143
9 Commits

39b2cb2d9b | Cpu resources (#642)

* fn: cpu quota implementation
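
The commit title gives no detail, so the following is only a generic illustration of how a per-route cpu allotment in millicpus can map onto Docker's CFS quota/period resource settings; it is not taken from the fn driver, and the names here are hypothetical.

```go
// Generic illustration only, not the fn driver: converting a millicpu
// allotment into Docker's CFS quota/period resource settings.
package driver

import "github.com/docker/docker/api/types/container"

const cfsPeriodUs = 100000 // 100ms, the default CFS period

// cpuResources maps millicpus (1000 = one full cpu) to CPUQuota/CPUPeriod.
func cpuResources(milliCPUs int64) container.Resources {
	if milliCPUs <= 0 {
		return container.Resources{} // no limit requested
	}
	return container.Resources{
		CPUPeriod: cfsPeriodUs,
		CPUQuota:  milliCPUs * cfsPeriodUs / 1000,
	}
}
```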

20089c4e83 | make headers quasi-consistent (#660)

possible breakages:

* `FN_HEADER` on cold are no longer `s/-/_/` -- this is so that cold functions can rebuild the headers as they were when they came in on the request (fdks, specifically); there's no guarantee that a reversal `s/_/-/` yields the original header on the request.
* app and route config are no longer `s/-/_/` -- it seemed really weird to rewrite the user's config vars on these. should just pass them exactly as-is to the env.
* headers no longer contain the environment vars (previously: base config, app config, route config, `FN_PATH`, etc.); these are still available in the environment.

this gets rid of a lot of the code around headers, specifically the stuff that shoved everything into headers when constructing a call to begin with. now we just store the headers separately and add a few things, like FN_CALL_ID, to them, and build a separate 'config' to store on the call. I thought 'config' was more aptly named, 'env' was confusing, though now 'config' is exactly what 'base_vars' was, which is only the things being put into the env. we weren't storing this field in the db, so this doesn't break anything unless there are messages in a queue from another version; don't think we're there and don't expect any breakage for anybody from field name changes.

this makes the configuration stuff pretty straightforward: there are just two separate buckets of things. cold just needs to mash them together into the env; hot containers just need to put 'config' in the env, and then the hot format can shove 'headers' in however it would like. this seems better than my last idea about making this easier but worse (RIP).

this means:

* headers no longer contain all vars; the set of base vars can only be found in the environment.
* headers is only the headers from the request + call_id, deadline, method, url.
* for cold, we simply add the headers to the environment, prepending `FN_HEADER_` to them, BUT NOT upper casing or `s/-/_/`.
* fixes issue where async hot functions would end up with `Fn_header_`-prefixed headers.
* removes the idea of 'base' vars and 'env'. this was a strange concept. now we just have 'config', which was base vars, and headers, which was base_env+headers; i.e. they are disjoint now.
* casing for all headers will lean toward the `My-Header` style, which should help with consistency. notable exceptions for cold only are FN_CALL_ID, FN_METHOD, and FN_REQUEST_URL -- this is simply to avoid breakage; in either hot format they still appear as `Fn_call_id`.
* removes FN_PARAM stuff
* updated doc with behavior

weird things left: `Fn_call_id` e.g. isn't a correctly formatted http header, it should likely be `Fn-Call-Id`, but I wanted to live to fight another day on this one; it would add some breakage. examples to be posted of each format below.

closes #329
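
A minimal sketch of the cold-path behavior described above, with hypothetical names (`buildColdEnv` is not from the fn source): config vars pass through untouched and request headers only gain an `FN_HEADER_` prefix, keeping their original casing.

```go
// Hypothetical sketch, not the fn source: merging the two disjoint buckets
// ('config' and request headers) into a cold container's environment.
package agent

import "net/http"

func buildColdEnv(config map[string]string, headers http.Header, callID, method, reqURL string) []string {
	env := make([]string, 0, len(config)+len(headers)+3)
	for k, v := range config {
		env = append(env, k+"="+v) // no s/-/_/ rewriting of user config
	}
	for k, vs := range headers {
		for _, v := range vs {
			env = append(env, "FN_HEADER_"+k+"="+v) // e.g. FN_HEADER_My-Header, not FN_HEADER_MY_HEADER
		}
	}
	// cold-only exceptions kept to avoid breakage
	return append(env, "FN_CALL_ID="+callID, "FN_METHOD="+method, "FN_REQUEST_URL="+reqURL)
}
```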

892c843d87 | add error to call model (#539)

* add error to call model

closes #331

previously, for async this error was being masked completely, even if it was something useful like the image not existing. for sync, the error was returned in the http request, but now it's also being stored. this error itself can cover a lot of landscape: it could be an error in getting a slot, pulling an image, running a container, among other things. anyway, it's no longer being masked. we can likely improve it for certain cases we run into in the future, but it's open ended at the moment and not being masked like some errors in sync http request returns (503 non-models.APIError) for now.

* tucks in callTrigger stuff to keep the api clean
* adds swagger
* adds migration
* adds tests for datastore and agent to ensure behavior
* pull images before tests are run
* gofmt migrations file
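
A minimal sketch of the behavioral point, with assumed names rather than fn's actual agent or datastore types: the failure reason is recorded on the call and persisted instead of being dropped for async invocations.

```go
// Sketch only; type and function names are assumptions, not fn's.
package agent

import "context"

type Call struct {
	ID     string
	Status string
	Error  string // why the call failed: getting a slot, pulling an image, running the container, ...
}

type CallStore interface {
	InsertCall(ctx context.Context, c *Call) error
}

// end stores the outcome, including the error text, when a call finishes.
func end(ctx context.Context, ds CallStore, c *Call, callErr error) error {
	if callErr != nil {
		c.Status = "error"
		c.Error = callErr.Error()
	} else {
		c.Status = "success"
	}
	return ds.InsertCall(ctx, c)
}
```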

c9198b8525 | add per call stats field as histogram (#528)

* add per call stats field as histogram

this adds a histogram of up to 240 data points of call data, produced every second and stored in the db at the end of a call invocation. the same metrics are also still shipped to prometheus (prometheus has the not-potentially-reduced version). for the API reference, see the updates to the swagger spec; this is just added onto the get call endpoint.

this does not add any extra db calls, and the stats field on call is a json blob, which is easily modified to add / omit future fields. this is just tacked onto the call we're making to InsertCall, and we expect this to add very little overhead: we are bounding the set to be relatively small, planning to clean out the db of calls periodically, functions will generally be short, and the same code used at a previous firm did not cause a notable db size increase with a production workload that is worse wrt histogram size (I checked).

the code changes are really small aside from changing to strfmt.DateTime, adding a migration and implementing sql.Valuer; needed to slightly modify the swap function so that we can safely read the `call.Stats` field to upload at the end. with the full histogram in hand, we can compute max/min/average/median/growth rate/bernoulli distributions/whatever very easily in a UI or tooling. in particular, this data is easily chartable [for a UI], which is beneficial.

* adds swagger spec of api update to calls endpoint
* adds migration for call.stats field
* adds call.stats field to sql queries
* change swapping of hot logger to exec, so we know that call.Stats is no longer being modified after `exec` [in call.End]
* throws out docker stats between function invocations in hot functions (no call to store them on; we could change this later for debug; they're in prom)
* tested in tests and API
* add format of ints to swag

closes #19
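
The sql.Valuer approach mentioned above is the usual way to keep a whole histogram in one JSON column. A hedged sketch with assumed type names (`Stat`/`Stats` are illustrative, not fn's actual types):

```go
// Minimal sketch: storing per-call stats as a JSON blob in a single column
// by implementing driver.Valuer and sql.Scanner.
package models

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"time"
)

// Stat is one per-second sample of call metrics.
type Stat struct {
	Timestamp time.Time         `json:"timestamp"`
	Metrics   map[string]uint64 `json:"metrics"`
}

// Stats is the bounded histogram (e.g. up to 240 samples) kept on a call.
type Stats []Stat

// Value marshals the whole histogram to JSON so it fits in one column.
func (s Stats) Value() (driver.Value, error) {
	return json.Marshal(s)
}

// Scan unmarshals the JSON blob read back from the db.
func (s *Stats) Scan(value interface{}) error {
	b, ok := value.([]byte)
	if !ok {
		return errors.New("stats: expected []byte from db")
	}
	return json.Unmarshal(b, s)
}
```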

b6b9b55ca9 | apply Travis's json-format branch prototype and make it work with the latest restructured master; added StatusCode to the JSONOutput server-function contract
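
Purely illustrative of what adding StatusCode to such a contract looks like; the real JSONOutput definition lives in the fn source and swagger, and every field and tag here other than StatusCode is an assumption.

```go
// Illustrative sketch only, not fn's actual JSON contract.
package protocol

type JSONOutput struct {
	// StatusCode lets the function choose the HTTP status returned to the caller.
	StatusCode int               `json:"status_code,omitempty"`
	Headers    map[string]string `json:"headers,omitempty"`
	Body       string            `json:"body"`
}
```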

46dfbd362d | mask models.Call blank fields in api, sqlx

sqlx has nice facilities for using structs to do queries and using their fields, so I decided to move us all over to this. now when you take a look at models.Call it's really obvious what's in the db and what's not. added omitempty to some json fields that were bleeding through the api too. deletes a lot of code in the sql package for scanning, and some queries now use struct-based sqlx methods, which seem easier to read than what we previously had.

moves all json stuff into sql.Valuer and sql.Scanner methods in models/config.go; these are the only 2 types that ever need this. sadly, sqlx would have done this marshaling for us, but to keep compat, I added json. we can do some migrations later maybe for a more efficient encoding, but did not want to fuss with it today.

it seems like we should probably aim to keep models.Call as small as possible in the db as there will be a lot of them. interestingly, most functions platforms I looked at do not seem to expose this kind of information that I could find. so, I think only having timestamps, status, id, app, path and maybe docker stats is really all that should be in here (agree w/ Denys on 284), as these and logs will end up taking up most db space in prod. notably, payload, headers, and env vars could be extremely large, and in the general case they are always a copy of the route's (this breaks apart when routes are updated, which would be useful considering we don't have versioning -- versioning may be cheaper). removed an unused field in apps too.

this is lined up behind #349 so that I could use the tests...

closes #345
closes #142
closes #284
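
A sketch of the sqlx pattern described above, with assumed column names rather than fn's real schema: `db` tags make it obvious which fields are stored, `omitempty` keeps blank fields out of API JSON, and named queries replace hand-written scanning.

```go
// Sketch under assumed names; the field set and schema are illustrative only.
package models

import (
	"context"

	"github.com/jmoiron/sqlx"
)

type Call struct {
	ID        string `json:"id" db:"id"`
	AppName   string `json:"app_name" db:"app_name"`
	Path      string `json:"path" db:"path"`
	Status    string `json:"status" db:"status"`
	Error     string `json:"error,omitempty" db:"error"` // blank fields stay out of API output
	CreatedAt string `json:"created_at" db:"created_at"`
}

// InsertCall shows the struct-based sqlx style replacing manual Scan code.
func InsertCall(ctx context.Context, db *sqlx.DB, c *Call) error {
	_, err := db.NamedExecContext(ctx,
		`INSERT INTO calls (id, app_name, path, status, error, created_at)
		 VALUES (:id, :app_name, :path, :status, :error, :created_at)`, c)
	return err
}
```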

337e962416 | add pagination to all list endpoints

calls, apps, and routes listing were previously returning the entire data set, which just won't scale. this adds pagination with forward cursoring to each of these endpoints (see the [docs](docs/definitions.md)). the patch is really mostly tests, shouldn't be that bad to pick through. some blarble about the implementation is in order:

calls are sorted by id but allow searching within certain `created_at` ranges (finally). this is because sorting by `created_at` isn't feasible when combined with paging, as `created_at` is not guaranteed to be unique -- ids are (eliding theoreticals). i.e. on a page boundary, if there are 200 calls with the same `created_at`, providing a `cursor` of that `created_at` will skip over the remaining N calls with that `created_at`. also, using id will be better on the index anyway (well, fewer of them). yay having sortable ids! I can't discern any issues doing this: even if 200 calls have the same `created_at`, they will have different ids, and the sort should allow paginating them just fine. ids are also url safe, so the id works as the cursor value just fine.

apps and routes are sorted in alphabetical order. as they aren't guaranteed to be url safe, we are base64'ing them in the front end to a url-safe format and returning that, and then base64-decoding them when we get them back. this does mean that they can be relatively large if the path/app is long, but if we don't want to add ids then they were going to be pretty big anyway. a bonus is that this kind of obscures them. if somebody has a better idea on formatting, by all means.

notably, we are not using the sql paging facilities; we are baking our own based on cursors, which ends up being much more efficient for querying longer lists of resources. this also should be easy to implement in other non-sql dbs, and we can change the cursoring formats on the fly since we are just exposing them as opaque strings. the front end deals with the base64 / formatting, etc. and the back end takes raw values (strfmt.DateTime or the id for calls). the cursor that is being passed to/by the user is simply the last resource on the previous page, so in theory we don't even need to return it, but it does make it a little easier to use. also, the cursor being blank on the last page depends on page fullness, so sometimes users will get a cursor when there are no results on the next page (1/N chance, and it's not really the end of the world -- actually searching for the next thing would make things more complex). there are ample tests for this behavior.

I've turned off all query parameters allowing `LIKE` queries on certain listing endpoints, as we should not expose sql behavior through our API in the event that we end up not using a sql db down the road. I think we should only allow prefix matching, which sql can support as well as other types of databases relatively cheaply, but this is not hooked up here as it didn't 'just work' when I was fiddling with it (can add later; they're unnecessary and weren't wired in before in the front end).

* remove route listing across apps (unused)
* fix panic when doing `/app//`. this is prob possible for other types of endpoints, out of scope here. added a guard in front of all endpoints for this
* adds `from_time` and `to_time` query parameters to calls, so you can e.g. list the last hour of tasks. these are not required and default to oldest/newest.
* hooked back up the datastore tests to the sql db, only run with sqlite atm, but these are useful; added a lot to them too.
* added a bunch of tests to the front end, so pretty sure this all works now.
* added to swagger, we'll need to re-gen. also wrote some words about pagination workings, I'm not sure how best to link to these, feedback welcome.
* not sure how we want to manage indexes, but we may need to add some (looking at created_at, mostly)
* `?route` changed to `?path` in routes listing, to keep consistency with everything else
* don't 404 when searching for calls where the route doesn't exist, just return an empty list (it's a query param ffs)

closes #141
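
A sketch of the cursoring idea under assumed table/column names, not the actual fn datastore code: the id of the last row on a full page becomes the opaque cursor, and name-based cursors get wrapped in url-safe base64 by the front end.

```go
// Sketch only: id-keyed cursor pagination plus url-safe cursor encoding.
package datastore

import (
	"context"
	"encoding/base64"

	"github.com/jmoiron/sqlx"
)

// ListCallsPage returns up to perPage call ids after the given id cursor.
// Because ids are sortable and unique, a page boundary never skips rows the
// way a created_at cursor could when timestamps collide.
func ListCallsPage(ctx context.Context, db *sqlx.DB, appName, cursor string, perPage int) ([]string, string, error) {
	var ids []string
	err := db.SelectContext(ctx, &ids,
		`SELECT id FROM calls WHERE app_name = ? AND id > ? ORDER BY id ASC LIMIT ?`,
		appName, cursor, perPage)
	if err != nil {
		return nil, "", err
	}
	next := ""
	if len(ids) == perPage { // full page: hand back the last id as the opaque cursor
		next = ids[len(ids)-1]
	}
	return ids, next, nil
}

// Name cursors (app names, route paths) aren't guaranteed url safe, so the
// front end wraps them in url-safe base64 before returning them.
func encodeCursor(name string) string { return base64.RawURLEncoding.EncodeToString([]byte(name)) }

func decodeCursor(c string) (string, error) {
	b, err := base64.RawURLEncoding.DecodeString(c)
	return string(b), err
}
```

Note the full-page heuristic matches the commit's caveat: when the last page happens to be exactly full, a cursor is still returned even though the next page will be empty.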

3e190342fb | Implementing batch deletes for calls, logs and routes

Partially-Closes: #302
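
The commit message gives no implementation detail, so the following is only a guess at the shape: deleting an app's calls, logs and routes with one statement per table instead of row-by-row deletes. Table and column names are assumptions, not fn's schema.

```go
// Illustrative sketch only.
package datastore

import (
	"context"
	"database/sql"
)

// deleteAppData batch-deletes everything associated with an app.
func deleteAppData(ctx context.Context, db *sql.DB, appName string) error {
	for _, q := range []string{
		`DELETE FROM logs WHERE app_name = ?`,
		`DELETE FROM calls WHERE app_name = ?`,
		`DELETE FROM routes WHERE app_name = ?`,
	} {
		if _, err := db.ExecContext(ctx, q, appName); err != nil {
			return err
		}
	}
	return nil
}
```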

71a88a991c | hang the runner, agent=new sheriff (#270)

* fix docker build

this is trivially incorrect since glide doesn't actually provide reproducible builds. the idea is to build with the deps that we have checked into git, so that we actually know what code is executing so that we might debug it... all for the multi-stage build instead of what we had, but adding the glide step is wrong. I added a loud warning so as to discourage this behavior in the future.

* hang the runner, agent=new sheriff

tl;dr agent is now runner, with a hopefully saner api

the general idea is to get rid of all the various 'task' structs, change our terminology to only be 'calls' now, push a lot of the http construction of a call into the agent, allow calls to mutate their state around their execution easily, and simplify the number of code paths, channels and context timeouts into something [hopefully] easy to understand. this introduces the idea of 'slots', which are either hot or cold and are separate from reserving memory (memory is denominated in 'tokens' now). a 'slot' is essentially a container that is ready for execution of a call, be it hot or cold (it just means different things based on hotness). taking a look into Submit should make these relatively easy to grok. sorry, things were pretty broken, especially wrt timings. I tried to keep good notes (maybe too good), to highlight stuff so that we don't make the same mistakes again (history repeating itself blah blah quote). even now, there is lots of work to do :)

I encourage just reading the agent.go code; Submit is really simple and there's a description of how the whole thing works at the head of the file (after TODOs). call.go contains code for constructing calls, as well as Start / End (small atm). I did some amount of code massaging to try to make things simple / straightforward / fit a reasonable mental model, but as always am open to critique (the more negative the better) as I'm just one guy and wth do i know...

-----------------------------------------------------------------------------

below enumerates a number of changes as briefly as possible (heh..):

* models.Call all the things. removes models.Task as models.Call is now what it previously was. models.FnCall is now rid of in favor of models.Call, despite the datastore only storing a few fields of it [for now]. we should probably store entire calls in the db, since app & route configurations can change at any given moment and it would be nice to see the parameters of each call (costs db space, obviously).
* this removes the endpoints for getting & deleting messages; we were just looping back to localhost to call the MQ (wtf? this was for iron integration i think) and now just call the MQ.
* changes the name of FnLog to LogStore; confusing because there's also a `FuncLogger` which uses the LogStore (punting).
* removes other `Fn`-prefixed structs (redundant naming convention). removes some unused and/or weird structs (IDStatus, CompleteTime). updates the swagger. makes the db methods consistent to use 'Call' nomenclature.
* remove runner nuisances:
  * push down registry stuff to docker driver
  * remove Environment / Stats stuff of yore
  * remove unused writers (now in FuncLogger)
  * remove 2 of the task types, old hot stuff, runner, etc.
* fixes ram available calculation on startup to not always be 300GB (helps a lot on a laptop!)
* format for DOCKER_AUTH env now is not a list but a map (there are no docs, would prefer to get rid of this altogether anyway). the ~/.docker/cfg expected format is unchanged.
* removes the arbitrary task queue; if a machine is out of ram we can probably just time out without queueing... (can open a separate discussion). in any case the old one didn't really account well for hot tasks, it just lined everyone up in the task queue if there wasn't a place to run hot and then timed them out [even if a slot became free].
* removes HEADER_ prefixing on any headers in the request to invoke a call. (this was inconsistent with cli for test anyway)
* removes TASK_ID header sent in to hot only (this is a dupe of FN_CALL_ID, which has not been removed)
* now user functions can reply directly to the client. this means that for cold containers, if they write to stdout it will send a 200 + headers. for hot containers, the user can reply directly to the client from the container, i.e. with its preferred status code / headers (vs. always getting a 200). the dispatch itself is a little http specific atm; i think we can add an interchange format, but the current version is easily extended to add json for now, separate discussion. this eliminates a lot of the request/response rewriting and buffering we were doing (yey). now Dispatch ONLY does input and output, vs. managing the call timeout and having access to a call's fields.
* cache is pushed down into the agent now instead of the front end; would like to push it down to the datastore actually, but it's here for now anyway. cache delete functions removed (b/c fn is distributed anyway?). added app caching, should help with latency.
* in general, a lot of server/runner.go got pushed down into the agent. i think it will be useful in testing to be able to construct calls without having to invoke http handlers + async also needs to construct calls without a handler.
* safe shutdown actually works now for everything (leaked / didn't wait on certain things before)
* now we're waiting for hot slots to open up while we're attempting to get ram to launch a container if we didn't find any hot slots to run the call in immediately. we can change this policy really easily now (no more channel jungle; still some channels). also looking for somewhere else to go while the container is launching now. slots now get sent _out_ of a container, vs. a container receiving calls, which makes this kind of policy easier to implement. this fixes a number of bugs around things like trying to execute calls against containers that have not and may never start, and trying to launch a bazillion containers when there are no free containers. the driver api underwent some changes to make this possible (relatively minimal, added Wait). the easiest way to think about this is that allocating ram has moved 'up' instead of just wrapping launching containers, so that we can select on a channel trying to find ram. not dispatching hot calls to containers that died anymore either...
* the timeout is now started at the beginning of Submit, rather than Dispatch or the container itself having to manage the call timeout, which was an inaccurate way of doing things since finding a slot / allocating ram / pulling an image can all take a non-trivial (timeout amount, even!) amount of time. this makes for much more reasonable response times from fn under load; there's still a little TODO about handling cold+timeout container removal response times, but it's much improved. if call.Start is called with < call.timeout/2 time left, then the call will not be executed and will return a timeout. we can discuss. this makes async play _a lot_ nicer, specifically. for large timeouts, /2 makes less sense.
* env is no longer getting upper cased (admittedly, this can look a little weird now). our whole route.Config/app.Config/env/headers stuff probably deserves a whole discussion...
* sync output no longer has the call id in json if there's an error / timeout. we could add this back to signify that it's _us_ writing these, but this was out of place. FN_CALL_ID is still shipped out to get the id for sync calls, and async [server] output remains unchanged.
* async logs are now an entire raw http request (so that a user can write a 400 or something from their hot async container)
* async hot now 'just works'
* cold sync calls can now reply to the client before container removal, which shaves a lot of latency off of those (still eat the start). still need to figure out async removal if timeout or something.

-----------------------------------------------------------------------------

i've located a number of bugs that were generally inherited, and also added a number of TODOs at the head of the agent.go file for robustness work we probably need to do. this is at least at parity with the previous implementation, to my knowledge (hopefully/likely a good bit ahead). I can memorialize these to github quickly enough, not that anybody searches before adding bugs anyway (sigh). the big thing to work on next imo is async being a lot more robust, specifically to survive fn server failures / network issues.

thanks for review (gulp)
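
A heavily simplified sketch of the slot/token split described above, with invented type names rather than the real agent.go: Submit starts the deadline immediately and then selects between an already-free hot slot and a ram token that permits launching a new container.

```go
// Sketch only; all names are assumptions and the real agent does more
// (e.g. it keeps watching for hot slots while a container is launching).
package agent

import (
	"context"
	"errors"
	"time"
)

// slot is a container ready to run one call (hot or cold).
type slot interface {
	exec(ctx context.Context, c *call) error
}

type call struct {
	Timeout time.Duration
}

type agent struct {
	hotSlots chan slot     // slots are sent OUT of running containers
	ramToken chan struct{} // memory "tokens"; holding one permits launching a container
	launch   func(ctx context.Context) (slot, error)
}

// Submit owns the call timeout: slot acquisition, ram allocation and image
// pull all count against it.
func (a *agent) Submit(ctx context.Context, c *call) error {
	ctx, cancel := context.WithTimeout(ctx, c.Timeout)
	defer cancel()

	select {
	case s := <-a.hotSlots: // a hot container is already free
		return s.exec(ctx, c)
	case <-a.ramToken: // enough ram to launch a new container
		defer func() { a.ramToken <- struct{}{} }() // return the token afterwards
		s, err := a.launch(ctx)
		if err != nil {
			return err
		}
		return s.exec(ctx, c)
	case <-ctx.Done():
		return errors.New("timeout waiting for hot slot or ram")
	}
}
```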