Files
fn-serverless/api/agent/async.go
Reed Allman 2ebc9c7480 hybrid mergy (#581)
* so it begins

* add clarification to /dequeue, change response to list to future proof

* Specify that runner endpoints are also under /v1

* Add a flag to choose operation mode (node type).

This is specified using the `FN_NODE_TYPE` environment variable. The
default is the existing behaviour, where the server supports all
operations (full API plus asynchronous and synchronous runners).

The additional modes are:
* API - the full API is available, but no functions are executed by the
  node. Async calls are placed into a message queue, and synchronous
  calls are not supported (invoking them results in an API error).
* Runner - only the invocation/route API is present. Asynchronous and
  synchronous invocation requests are supported, but asynchronous
  requests are placed onto the message queue, so might be handled by
  another runner.

* Add agent type and checks on Submit

* Sketch of a factored out data access abstraction for api/runner agents

* Fix tests, adding node/agent types to constructors

* Add tests for full, API, and runner server modes.

* Added atomic UpdateCall to datastore

* adds in server side endpoints

* Made ServerNodeType public because tests use it

* fix test build

* add hybrid runner client

Pretty simple Go API client that covers the surface area needed for hybrid, returning structs from models that the agent can use directly. Not exactly sure where to put this, so put it in `/clients/hybrid`, but maybe we should make `/api/runner/client` or something and shove it in there. Want to get integration tests set up and use the real endpoints next, and then wrap this up in the DataAccessLayer stuff.

* gracefully handles errors from fn
* handles backoff & retry on 500s
* will add to existing spans for debuggo action

* minor fixes

* meh
2017-12-11 10:43:19 -08:00

64 lines
1.8 KiB
Go

package agent

import (
	"context"
	"time"

	"github.com/sirupsen/logrus"
)

func (a *agent) asyncDequeue() {
	a.wg.Add(1)
	defer a.wg.Done() // we can treat this thread like one big task and get safe shutdown for free

	for {
		select {
		case <-a.shutdown:
			return
		case <-a.resources.WaitAsyncResource():
			// TODO we _could_ return a token here to reserve the RAM so that there's
			// not a race between here and Submit, but we're single threaded
			// dequeueing and retries are handled gracefully inside of Submit if we
			// run out of RAM, so...
		}

		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // TODO ???
		model, err := a.da.Dequeue(ctx)
		cancel()
		if err != nil || model == nil {
			if err != nil {
				logrus.WithError(err).Error("error fetching queued calls")
			}
			time.Sleep(1 * time.Second) // back off a little
			continue
		}

		// TODO output / logger should be here too...

		a.wg.Add(1) // need to add 1 in this thread to ensure safe shutdown
		go func() {
			defer a.wg.Done() // can shed it after this is done, Submit will add 1 too but it's fine

			call, err := a.GetCall(FromModel(model))
			if err != nil {
				logrus.WithError(err).Error("error getting async call")
				return
			}

			// TODO if the task is cold and doesn't require reading STDIN, it could
			// run but we may not listen for output since the task timed out. these
			// are at least once semantics, which is really preferable to at most
			// once, so let's do it for now

			err = a.Submit(call)
			if err != nil {
				// NOTE: these could be errors / timeouts from the call that we're
				// logging here (i.e. not our fault), but it's likely better to log
				// these than suppress them so...
				id := call.Model().ID
				logrus.WithFields(logrus.Fields{"id": id}).WithError(err).Error("error running async call")
			}
		}()
	}
}
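The WaitGroup discipline in `asyncDequeue` (the dispatcher calls `wg.Add(1)` before spawning each worker, rather than letting the worker add itself) can be isolated in a small sketch. `runDispatch` is a hypothetical stand-in for the dispatch loop; the point is that a concurrent `wg.Wait()` can never observe a momentarily zero counter between the `go` statement and the worker's own Add.

```go
package main

import (
	"fmt"
	"sync"
)

// runDispatch spawns n workers the same way asyncDequeue does: the
// dispatcher increments the WaitGroup *before* each `go`, so Wait()
// reliably covers every in-flight worker during shutdown.
func runDispatch(n int) int {
	var wg sync.WaitGroup
	results := make(chan int, n) // buffered so workers never block

	wg.Add(1) // treat the dispatcher loop itself as one big task
	go func() {
		defer wg.Done()
		for i := 0; i < n; i++ {
			wg.Add(1) // add in this goroutine, before `go`, for safe shutdown
			go func(v int) {
				defer wg.Done()
				results <- v
			}(i)
		}
	}()

	wg.Wait() // the counter never dips to zero while work remains
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(runDispatch(3)) // 3 (0+1+2)
}
```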