hang the runner, agent=new sheriff (#270)

* fix docker build

running glide during the docker build is trivially incorrect since glide
doesn't actually provide reproducible builds. the idea is to build with the
deps that we have checked into git, so that we actually know what code is
executing and can debug it...

i'm all for a multi-stage build instead of what we had, but adding the glide
step is wrong. i added a loud warning to discourage this behavior in the
future.

* hang the runner, agent=new sheriff

tl;dr the agent is the new runner, with a hopefully saner api

the general idea is to get rid of all the various 'task' structs, change our
terminology to just 'calls', push a lot of the http construction of a
call into the agent, let calls easily mutate their state around their
execution, and reduce the number of code paths, channels and context timeouts
into something [hopefully] easy to understand.

this introduces the idea of 'slots', which are either hot or cold and are
separate from reserving memory (memory is now denominated in 'tokens').
a 'slot' is essentially a container that is ready to execute a call, be
it hot or cold (readiness just means different things based on hotness).
taking a look at Submit should make these relatively easy to grok.
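
to make this concrete, here's a rough sketch of the shapes (my illustration,
not the exact types in agent.go; a few later sketches below reuse these):

    package agent

    import (
        "context"
        "time"
    )

    // call stands in for the real call type in call.go (sketch only).
    type call struct {
        Timeout time.Duration // the budget for the entire Submit
    }

    // Token is a reservation of memory for one call; memory is
    // denominated in 'tokens' and is separate from slots.
    type Token interface {
        Close() error // return the reserved memory to the pool
    }

    // Slot is a container that is ready to execute one call, be it hot
    // or cold ('ready' just means different things based on hotness).
    type Slot interface {
        exec(ctx context.Context, c *call) error // run the call
        Close() error // hot: go wait for another call; cold: remove the container
    }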

sorry, things were pretty broken, especially wrt timings. I tried to keep good
notes (maybe too good) to highlight stuff so that we don't make the same
mistakes again (history repeating itself, blah blah, quote). even now, there is
lots of work to do :)

I encourage just reading the agent.go code: Submit is really simple, and
there's a description of how the whole thing works at the head of the file
(after the TODOs). call.go contains the code for constructing calls, as well
as Start / End (small atm). I did some amount of code massaging to try to make
things simple / straightforward / fit a reasonable mental model, but as always
am open to critique (the more negative the better) as I'm just one guy and wth
do i know...

-----------------------------------------------------------------------------

below enumerates a number of changes as briefly as possible (heh..):

models.Call all the things

removes models.Task; models.Call is now what models.Task previously was.
models.FnCall is gone in favor of models.Call, even though the datastore only
stores a few of its fields [for now]. we should probably store entire calls in
the db: since app & route configurations can change at any given moment, it
would be nice to see the parameters of each call (costs db space, obviously).
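
for flavor, roughly the difference between what we store and what we could
store (the field list here is my illustration, not the actual models.Call):

    package models

    import "time"

    // Call is a sketch; the real field list differs.
    type Call struct {
        // ~what the datastore keeps today
        ID          string    `json:"id"`
        AppName     string    `json:"app_name"`
        Path        string    `json:"path"`
        Status      string    `json:"status"`
        CreatedAt   time.Time `json:"created_at"`
        StartedAt   time.Time `json:"started_at"`
        CompletedAt time.Time `json:"completed_at"`

        // ~the parameters we'd also like to see per call, since app &
        // route config can change at any moment
        Image   string            `json:"image,omitempty"`
        Timeout int32             `json:"timeout,omitempty"`
        Memory  uint64            `json:"memory,omitempty"`
        Config  map[string]string `json:"config,omitempty"`
    }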

this removes the endpoints for getting & deleting messages; we were just
looping back to localhost to call the MQ (wtf? this was for iron integration i
think). now we just call the MQ directly.

renames FnLog to LogStore. still confusing because there's also a
`FuncLogger` which uses the LogStore (punting on that). removes other
`Fn`-prefixed structs (redundant naming convention).

removes some unused and/or weird structs (IDStatus, CompleteTime)

updates the swagger

makes the db methods consistent to use 'Call' nomenclature.

remove runner nuisances:

* push down registry stuff to docker driver
* remove Environment / Stats stuff of yore
* remove unused writers (now in FuncLogger)
* remove 2 of the task types, old hot stuff, runner, etc

fixes the available-ram calculation on startup so it isn't always 300GB (helps
a lot on a laptop!)

the format of the DOCKER_AUTH env var is now a map instead of a list (there
are no docs; i'd prefer to get rid of this altogether anyway). the expected
~/.docker/cfg format is unchanged.
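
i.e. something shaped like this (a sketch; the field names are assumptions
mirroring the docker config format, since there are no docs):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type authConfig struct {
        Username string `json:"username"`
        Password string `json:"password"`
    }

    func main() {
        // DOCKER_AUTH used to be a json list of auth configs; now it's
        // a map keyed by registry host.
        raw := `{"my.registry.example.com": {"username": "u", "password": "p"}}`
        auths := map[string]authConfig{}
        if err := json.Unmarshal([]byte(raw), &auths); err != nil {
            panic(err)
        }
        fmt.Println(auths["my.registry.example.com"].Username)
    }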

removes the arbitrary task queue; if a machine is out of ram we can probably
just time out without queueing... (can open a separate discussion). in any
case, the old one didn't account well for hot tasks: it lined everyone up in
the task queue if there wasn't a place to run hot and then timed them out
[even if a slot became free].

removes the HEADER_ prefix on headers in a request to invoke a call.
(this was inconsistent with the cli's test behavior anyway)

removes the TASK_ID header that was sent in to hot functions only (it was a
dupe of FN_CALL_ID, which has not been removed)

now user functions can reply directly to the client. this means that for
cold containers, writing to stdout sends a 200 + headers; for hot containers,
the user can reply to the client directly from the container, i.e. with their
preferred status code / headers (vs. always getting a 200). the dispatch
itself is a little http specific atm; i think we can add an interchange
format, but the current version is easily extended to add json (separate
discussion). this eliminates a lot of the request/response rewriting and
buffering we were doing (yey). now Dispatch ONLY does input and output, vs.
managing the call timeout and having access to a call's fields.
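
i.e. the dispatch surface shrinks to roughly this (my sketch of the idea, not
the exact signature):

    package protocol

    import "io"

    // Dispatcher only moves bytes now: write the call's input in,
    // stream the container's output (status line, headers and all)
    // back out. the caller owns the timeout and the call's fields.
    type Dispatcher interface {
        Dispatch(stdin io.Reader, stdout io.Writer) error
    }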

the cache is pushed down into the agent now instead of living in the front
end; i'd actually like to push it down into the datastore, but it's here for
now. the cache delete functions are removed (b/c fn is distributed anyway?).
added app caching, which should help with latency.
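
the shape of it, roughly (a sketch using the same go-cache pattern the old
route cache used; singleflight and the real agent wiring elided):

    package agent

    import (
        "context"

        cache "github.com/patrickmn/go-cache"
    )

    // App and Datastore stand in for the real models (sketch only).
    type App struct{ Name string }

    type Datastore interface {
        GetApp(ctx context.Context, name string) (*App, error)
    }

    type cachingAgent struct {
        ds       Datastore
        appCache *cache.Cache // e.g. cache.New(5*time.Second, time.Minute)
    }

    func (a *cachingAgent) getApp(ctx context.Context, name string) (*App, error) {
        if app, ok := a.appCache.Get(name); ok {
            return app.(*App), nil
        }
        app, err := a.ds.GetApp(ctx, name)
        if err != nil {
            return nil, err
        }
        a.appCache.Set(name, app, cache.DefaultExpiration)
        return app, nil
    }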

in general, a lot of server/runner.go got pushed down into the agent. i think
it will be useful in testing to be able to construct calls without having to
invoke http handlers, and async also needs to construct calls without a
handler.
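
e.g. the whole sync path condenses to roughly this (GetCall, WithWriter,
FromRequest and Submit are real names from the diff below; the agent.Agent
interface name is my assumption):

    package server

    import (
        "net/http"

        "github.com/fnproject/fn/api/agent"
    )

    // tests or async can build the same call with a different CallOpt
    // than FromRequest, no gin / http handler required.
    func serveSketch(a agent.Agent, w http.ResponseWriter, r *http.Request, appName, path string) error {
        call, err := a.GetCall(
            agent.WithWriter(w), // the call's output goes straight to the client
            agent.FromRequest(appName, path, r),
        )
        if err != nil {
            return err
        }
        return a.Submit(call)
    }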

safe shutdown actually works now, for everything (we leaked / didn't wait on
certain things before)

we now wait for hot slots to open up while we're attempting to get ram to
launch a container, if we didn't find a hot slot to run the call in
immediately. we can change this policy really easily now (no more channel
jungle; still some channels). we also look for somewhere else to go while a
container is launching. slots are now sent _out_ of a container, vs.
a container receiving calls, which makes this kind of policy easier to
implement. this fixes a number of bugs around things like trying to execute
calls against containers that have not started and may never start, and
trying to launch a bazillion containers when there are no free ones. the
driver api underwent some changes to make this possible (relatively minimal,
added Wait). the easiest way to think about it is that allocating ram has
moved 'up', instead of just wrapping container launch, so that we can select
on a channel while trying to find ram.
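
a sketch of the policy, reusing the Slot/Token shapes from the sketch above
(channel names are invented for illustration):

    // hot containers send ready slots _out_ on hotSlots; ramC yields a
    // token once enough memory is free to launch a new container.
    func waitSlot(ctx context.Context, hotSlots <-chan Slot, ramC <-chan Token,
        launch func(Token) <-chan Slot) (Slot, error) {

        select {
        case s := <-hotSlots: // a hot container freed up first
            return s, nil
        case tok := <-ramC: // we got ram: start launching a container...
            launched := launch(tok)
            select {
            case s := <-hotSlots: // ...but a hot slot can still win the race
                return s, nil // (token / half-launched container cleanup elided)
            case s := <-launched:
                return s, nil
            case <-ctx.Done():
                return nil, ctx.Err()
            }
        case <-ctx.Done(): // the call's deadline covers slot hunting too
            return nil, ctx.Err()
        }
    }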

we don't dispatch hot calls to containers that died anymore, either...

the timeout now starts at the beginning of Submit, rather than Dispatch or
the container itself having to manage the call timeout, which was an
inaccurate way of doing things since finding a slot / allocating ram / pulling
an image can all take a non-trivial (timeout-sized, even!) amount of time.
this makes for much more reasonable response times from fn under load;
there's still a little TODO about handling cold+timeout container removal
response times, but it's much improved.
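
the gist, continuing the sketch types from above (not the real Submit):

    // the deadline is set at the very top of Submit, so finding a slot,
    // reserving ram and pulling the image all burn the call's budget
    // instead of happening off the clock.
    func Submit(c *call, getSlot func(context.Context, *call) (Slot, error)) error {
        ctx, cancel := context.WithTimeout(context.Background(), c.Timeout)
        defer cancel()

        slot, err := getSlot(ctx, c) // see the waitSlot sketch above
        if err != nil {
            return err // context.DeadlineExceeded becomes the 504 upstream
        }
        defer slot.Close()
        return slot.exec(ctx, c) // Start / End + dispatch happen in here
    }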

if call.Start is called with less than call.timeout/2 time left, then the
call will not be executed and a timeout is returned. we can discuss. this
makes async play _a lot_ nicer, specifically. for large timeouts, the /2
heuristic makes less sense.
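
roughly this, again on the sketch types from above (not the real Start):

    // refuse to run a call whose budget is already mostly spent; for
    // async this lets the message get redelivered later instead of
    // burning a container on a doomed call.
    func (c *call) Start(ctx context.Context) error {
        if deadline, ok := ctx.Deadline(); ok && time.Until(deadline) < c.Timeout/2 {
            return context.DeadlineExceeded // surfaces as a timeout to the caller
        }
        // record the start time, etc. (elided)
        return nil
    }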

env vars are no longer getting upper cased (admittedly, this can look a little
weird now). our whole route.Config / app.Config / env / headers situation
probably deserves a discussion of its own...
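
concretely, this is the bit that died (the ToUpper line is copied from the
removed toEnvName in the diff below; the example values are mine):

    package main

    import (
        "fmt"
        "strings"
    )

    func toEnvNameOld(name string) string {
        return strings.ToUpper(strings.Replace(name, "-", "_", -1))
    }

    func main() {
        fmt.Println(toEnvNameOld("minio-secret")) // before: MINIO_SECRET
        fmt.Println("minio-secret")               // now: passed through as-is
    }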

sync output no longer includes the call id in json if there's an error /
timeout. we could add it back to signify that it's _us_ writing these, but it
was out of place. the FN_CALL_ID header is still shipped out so sync callers
can get the id, and async [server] output remains unchanged.

async logs are now an entire raw http request (so that a user can write a 400
or something from their hot async container)

async hot now 'just works'

cold sync calls can now reply to the client before container removal, which
shaves a lot of latency off of those (we still eat the container start).
still need to figure out async removal if there's a timeout or something.

-----------------------------------------------------------------------------

i've located a number of bugs that were generally inherited, and added a
number of TODOs at the head of the agent.go file for robustness we probably
need to add. this is at least at parity with the previous implementation, to
my knowledge (hopefully/likely a good bit ahead). I can memorialize these in
github quickly enough, not that anybody searches before adding bugs anyway
(sigh).

the big thing to work on next imo is async being a lot more robust,
specifically to survive fn server failures / network issues.

thanks for review (gulp)
-----------------------------------------------------------------------------

commit 71a88a991c (parent 1b1b64436f)
Reed Allman, 2017-09-05 10:32:51 -07:00, committed by Denis Makogon
100 changed files with 2151 additions and 4121 deletions

@@ -3,23 +3,15 @@ package server
 import (
     "bytes"
     "context"
-    "fmt"
-    "io"
     "io/ioutil"
     "net/http"
-    "path"
     "strings"
-    "time"

-    "github.com/sirupsen/logrus"
     "github.com/fnproject/fn/api"
-    "github.com/fnproject/fn/api/id"
+    "github.com/fnproject/fn/api/agent"
     "github.com/fnproject/fn/api/models"
-    "github.com/fnproject/fn/api/runner"
-    "github.com/fnproject/fn/api/runner/common"
-    "github.com/fnproject/fn/api/runner/task"
     "github.com/gin-gonic/gin"
-    cache "github.com/patrickmn/go-cache"
 )

 type runnerResponse struct {
@@ -27,15 +19,7 @@ type runnerResponse struct {
     Error     *models.ErrorBody `json:"error,omitempty"`
 }

-func toEnvName(envtype, name string) string {
-    name = strings.ToUpper(strings.Replace(name, "-", "_", -1))
-    if envtype == "" {
-        return name
-    }
-    return fmt.Sprintf("%s_%s", envtype, name)
-}
-
-func (s *Server) handleRequest(c *gin.Context, enqueue models.Enqueue) {
+func (s *Server) handleRequest(c *gin.Context) {
     if strings.HasPrefix(c.Request.URL.Path, "/v1") {
         c.Status(http.StatusNotFound)
         return
@@ -43,22 +27,10 @@ func (s *Server) handleRequest(c *gin.Context, enqueue models.Enqueue) {
     ctx := c.Request.Context()

-    reqID := id.New().String()
-    ctx, log := common.LoggerWithFields(ctx, logrus.Fields{"call_id": reqID})
-
-    var err error
-    var payload io.Reader
-    if c.Request.Method == "POST" {
-        payload = c.Request.Body
-        // Load complete body and close
-        defer func() {
-            io.Copy(ioutil.Discard, c.Request.Body)
-            c.Request.Body.Close()
-        }()
-    } else if c.Request.Method == "GET" {
-        reqPayload := c.Request.URL.Query().Get("payload")
-        payload = strings.NewReader(reqPayload)
+    if c.Request.Method == "GET" {
+        // TODO we _could_ check the normal body, this is still weird
+        // TODO do we need to flush the original body if we do this? (hint: yes)
+        c.Request.Body = ioutil.NopCloser(strings.NewReader(c.Request.URL.Query().Get("payload")))
     }

     r, routeExists := c.Get(api.Path)
@@ -73,228 +45,65 @@ func (s *Server) handleRequest(c *gin.Context, enqueue models.Enqueue) {
     s.FireBeforeDispatch(ctx, reqRoute)

-    appName := reqRoute.AppName
-    path := reqRoute.Path
-
-    app, err := s.Datastore.GetApp(ctx, appName)
-    if err != nil {
-        handleErrorResponse(c, err)
-        return
-    } else if app == nil {
-        handleErrorResponse(c, models.ErrAppsNotFound)
-        return
-    }
-
-    log.WithFields(logrus.Fields{"app": appName, "path": path}).Debug("Finding route on datastore")
-    route, err := s.loadroute(ctx, appName, path)
-    if err != nil {
-        handleErrorResponse(c, err)
-        return
-    }
-
-    if route == nil {
-        handleErrorResponse(c, models.ErrRoutesNotFound)
-        return
-    }
-
-    log = log.WithFields(logrus.Fields{"app": appName, "path": route.Path, "image": route.Image})
-    log.Debug("Got route from datastore")
-
-    if s.serve(ctx, c, appName, route, app, path, reqID, payload, enqueue) {
-        s.FireAfterDispatch(ctx, reqRoute)
-        return
-    }
-
-    handleErrorResponse(c, models.ErrRoutesNotFound)
+    s.serve(c, reqRoute.AppName, reqRoute.Path)
+
+    s.FireAfterDispatch(ctx, reqRoute)
 }

-func (s *Server) loadroute(ctx context.Context, appName, path string) (*models.Route, error) {
-    if route, ok := s.cacheget(appName, path); ok {
-        return route, nil
-    }
-
-    key := routeCacheKey(appName, path)
-    resp, err := s.singleflight.do(
-        key,
-        func() (interface{}, error) {
-            return s.Datastore.GetRoute(ctx, appName, path)
-        },
-    )
-    if err != nil {
-        return nil, err
-    }
-
-    route := resp.(*models.Route)
-    s.routeCache.Set(key, route, cache.DefaultExpiration)
-    return route, nil
-}
-
-// TODO: Should remove *gin.Context from these functions, should use only context.Context
-func (s *Server) serve(ctx context.Context, c *gin.Context, appName string, route *models.Route, app *models.App, path, reqID string, payload io.Reader, enqueue models.Enqueue) (ok bool) {
-    ctx, log := common.LoggerWithFields(ctx, logrus.Fields{"app": appName, "route": route.Path, "image": route.Image})
-
-    params, match := matchRoute(route.Path, path)
-    if !match {
-        return false
-    }
-
-    var stdout bytes.Buffer // TODO: should limit the size of this, error if gets too big. akin to: https://golang.org/pkg/io/#LimitReader
-
-    if route.Format == "" {
-        route.Format = "default"
-    }
-
-    // baseVars are the vars on the route & app, not on this specific request [for hot functions]
-    baseVars := make(map[string]string, len(app.Config)+len(route.Config)+3)
-    baseVars["FN_FORMAT"] = route.Format
-    baseVars["APP_NAME"] = appName
-    baseVars["ROUTE"] = route.Path
-    baseVars["MEMORY_MB"] = fmt.Sprintf("%d", route.Memory)
-
-    // app config
-    for k, v := range app.Config {
-        k = toEnvName("", k)
-        baseVars[k] = v
-    }
-    for k, v := range route.Config {
-        k = toEnvName("", k)
-        baseVars[k] = v
-    }
-
-    // envVars contains the full set of env vars, per request + base
-    envVars := make(map[string]string, len(baseVars)+len(params)+len(c.Request.Header)+3)
-    for k, v := range baseVars {
-        envVars[k] = v
-    }
-
-    envVars["CALL_ID"] = reqID
-    envVars["METHOD"] = c.Request.Method
-    envVars["REQUEST_URL"] = fmt.Sprintf("%v://%v%v", func() string {
-        if c.Request.TLS == nil {
-            return "http"
-        }
-        return "https"
-    }(), c.Request.Host, c.Request.URL.String())
-
-    // params
-    for _, param := range params {
-        envVars[toEnvName("PARAM", param.Key)] = param.Value
-    }
-
-    // headers
-    for header, value := range c.Request.Header {
-        envVars[toEnvName("HEADER", header)] = strings.Join(value, ", ")
-    }
-
-    cfg := &task.Config{
-        AppName:      appName,
-        Path:         route.Path,
-        BaseEnv:      baseVars,
-        Env:          envVars,
-        Format:       route.Format,
-        ID:           reqID,
-        Image:        route.Image,
-        Memory:       route.Memory,
-        Stdin:        payload,
-        Stdout:       &stdout,
-        Timeout:      time.Duration(route.Timeout) * time.Second,
-        IdleTimeout:  time.Duration(route.IdleTimeout) * time.Second,
-        ReceivedTime: time.Now(),
-        Ready:        make(chan struct{}),
-    }
-
-    // ensure valid values
-    if cfg.Timeout <= 0 {
-        cfg.Timeout = runner.DefaultTimeout
-    }
-    if cfg.IdleTimeout <= 0 {
-        cfg.IdleTimeout = runner.DefaultIdleTimeout
-    }
-
-    s.Runner.Enqueue()
-    newTask := task.TaskFromConfig(cfg)
-
-    switch route.Type {
-    case "async":
-        // TODO we should be able to do hot input to async. plumb protocol stuff
-        // TODO enqueue should unravel the payload?
-        // Read payload
-        pl, err := ioutil.ReadAll(cfg.Stdin)
-        if err != nil {
-            handleErrorResponse(c, models.ErrInvalidPayload)
-            return true
-        }
-
-        // Add in payload
-        newTask.Payload = string(pl)
-
-        // Push to queue
-        _, err = enqueue(c, s.MQ, newTask)
-        if err != nil {
-            handleErrorResponse(c, err)
-            return true
-        }
-        log.Info("Added new task to queue")
-
-        c.JSON(http.StatusAccepted, map[string]string{"call_id": cfg.ID})
-    default:
-        result, err := s.Runner.RunTrackedTask(newTask, ctx, cfg)
-        if result != nil {
-            waitTime := result.StartTime().Sub(cfg.ReceivedTime)
-            c.Header("XXX-FXLB-WAIT", waitTime.String())
-        }
-        if err != nil {
-            c.JSON(http.StatusInternalServerError, runnerResponse{
-                RequestID: cfg.ID,
-                Error: &models.ErrorBody{
-                    Message: err.Error(),
-                },
-            })
-            log.WithError(err).Error("Failed to run task")
-            break
-        }
-
-        for k, v := range route.Headers {
-            c.Header(k, v[0])
-        }
-
-        // this will help users to track sync execution in a manner of async
-        // FN_CALL_ID is an equivalent of call_id
-        c.Header("FN_CALL_ID", newTask.ID)
-
-        switch result.Status() {
-        case "success":
-            c.Data(http.StatusOK, "", stdout.Bytes())
-        case "timeout":
-            c.JSON(http.StatusGatewayTimeout, runnerResponse{
-                RequestID: cfg.ID,
-                Error: &models.ErrorBody{
-                    Message: models.ErrRunnerTimeout.Error(),
-                },
-            })
-        default:
-            c.JSON(http.StatusInternalServerError, runnerResponse{
-                RequestID: cfg.ID,
-                Error: &models.ErrorBody{
-                    Message: result.Error(),
-                },
-            })
-        }
-    }
-
-    return true
-}
-
-var fakeHandler = func(http.ResponseWriter, *http.Request, Params) {}
-
-func matchRoute(baseRoute, route string) (Params, bool) {
-    tree := &node{}
-    tree.addRoute(baseRoute, fakeHandler)
-    handler, p, _ := tree.getValue(route)
-    if handler == nil {
-        return nil, false
-    }
-    return p, true
+// TODO it would be nice if we could make this have nothing to do with the gin.Context but meh
+// TODO make async store an *http.Request? would be sexy until we have different api format...
+func (s *Server) serve(c *gin.Context, appName, path string) {
+    // GetCall can mod headers, assign an id, look up the route/app (cached),
+    // strip params, etc.
+    call, err := s.Agent.GetCall(
+        agent.WithWriter(c.Writer), // XXX (reed): order matters [for now]
+        agent.FromRequest(appName, path, c.Request),
+    )
+    if err != nil {
+        handleErrorResponse(c, err)
+        return
+    }
+
+    // TODO we could add FireBeforeDispatch right here with Call in hand
+
+    if model := call.Model(); model.Type == "async" {
+        // TODO we should push this into GetCall somehow (CallOpt maybe) or maybe agent.Queue(Call) ?
+        buf := bytes.NewBuffer(make([]byte, 0, c.Request.ContentLength)) // TODO sync.Pool me
+        _, err := buf.ReadFrom(c.Request.Body)
+        if err != nil {
+            handleErrorResponse(c, models.ErrInvalidPayload)
+            return
+        }
+        model.Payload = buf.String()
+
+        // TODO we should probably add this to the datastore too. consider the plumber!
+        _, err = s.MQ.Push(c.Request.Context(), model)
+        if err != nil {
+            handleErrorResponse(c, err)
+            return
+        }
+
+        c.JSON(http.StatusAccepted, map[string]string{"call_id": model.ID})
+        return
+    }
+
+    err = s.Agent.Submit(call)
+    if err != nil {
+        // NOTE if they cancel the request then it will stop the call (kind of cool),
+        // we could filter that error out here too as right now it yells a little
+        if err == context.DeadlineExceeded {
+            err = models.ErrCallTimeout // 504 w/ friendly note
+        }
+        // NOTE: if the task wrote the headers already then this will fail to write
+        // a 5xx (and log about it to us) -- that's fine (nice, even!)
+        handleErrorResponse(c, err)
+        return
+    }
+
+    // TODO plumb FXLB-WAIT somehow (api?)
+    // TODO we need to watch the response writer and if no bytes written
+    // then write a 200 at this point?
+    // c.Data(http.StatusOK)
 }