Mirror of https://github.com/fnproject/fn.git (synced 2022-10-28 21:29:17 +03:00)
Support load-balanced runner groups for multitenant compute isolation (#814)
* Initial stab at the protocol
* Initial protocol sketch for node pool manager
* Added HTTP header frame as a message
* Force the use of WithAgent variants when creating a server
* Adds gRPC models for node pool manager plus Go deps
* Naming things is really hard
* Merge (and optionally purge) details received by the NPM
* WIP: starting to add the runner-side functionality of the new data plane
* WIP: basic startup of gRPC server for pure runner. Needs proper certs.
* Go fmt
* Initial agent for LB nodes.
* Agent implementation for LB nodes.
* Pass keys and certs to LB node agent.
* Remove accidentally leftover reference to env var.
* Add env variables for certificate files
* Stub out the capacity and group membership server channels
* Implement server-side runner manager service
* Removes unused variable
* Fixes build error
* Splits up GetCall and GetLBGroupId
* Change LB node agent to use a TLS connection.
* Encode call model as JSON to send to the runner node.
* Use hybrid client in LB node agent. This should provide access to get app and route information for the call from an API node.
* More error handling on the pure runner side
* Tentative fix for GetCall problem: set deadlines correctly when reserving a slot
* Connect loop for LB agent to runner nodes.
* Extract runner connection function in LB agent.
* Drops committed capacity counts
* Bugfix: end state tracker only in submit
* Do logs properly
* Adds first pass of tracking capacity metrics in agent
* Made memory capacity metric uint64
* Removes use of old capacity field
* Adds remove capacity call
* Merges overwritten reconnect logic
* First pass of an NPM: provide a service that talks to a (simulated) CP.
  - Receive incoming capacity assertions from LBs for LBGs
  - Expire LB requests after a short period
  - Ask the CP to add runners to an LBG
  - Note runner set changes and readvertise
  - Scale down by marking runners as "draining"
  - Shut off draining runners after some cool-down period
* Add capacity update on schedule
* Send periodic capacity metrics to the node pool manager
* Splits gRPC and API interfaces for capacity manager
* Failure to advertise capacity shouldn't panic
* Add some instructions for starting DP/CP parts.
* Create the poolmanager server with TLS
* Use logrus
* Get NPM compiling with cert fixups.
* Fix: pure runner should not start async processing
* Brings runner, nulb and NPM together
* Add field to acknowledgment to record slot allocation latency; fix a bug too
* Iterating on pool manager locking issue
* Raises timeout of placement retry loop
* Fix up NPM: improve logging; ensure that channels etc. are actually initialised in the structure creation!
* Update the docs - runners' gRPC port is 9120
* Bugfix: return runner pool accurately.
* Double locking
* Note purges as LBs stop talking to us
* Get the purging of old LBs working.
* Tweak: on restart, load runner set before making scaling decisions.
* More agent synchronization improvements
* Deal with the CP pulling out active hosts from under us.
* Lock at lbgroup level
* Send request and receive response from runner.
* Add capacity check right before slot reservation
* Pass the full Call into the receive loop.
* Wait for the data from the runner before finishing
* Force runner list refresh every time
* Don't init db and mq for pure runners
* Adds shutdown of NPM
* Fixes broken log line
* Extract an interface for the Predictor used by the NPM
* Purge drained connections from NPM
* Refactor of the LB agent into the agent package
* Removes capacitytest WIP
* Fix undefined err issue
* Updating README for poolmanager setup
* Use retrying dial for LB-to-NPM connections
* Rename lb_calls to lb_agent now that all functionality is there
* Use the right deadline and errors in LBAgent
* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping
* Abstracting gRPCNodePool
* Add some init checks for LB and pure runner nodes
* Adding some useful debug
* Fix default db and mq for LB node
* Removes unreachable code, fixes typo
* Use datastore as logstore in API nodes. This fixes a bug caused by trying to insert logs into a nil logstore; it was nil because it wasn't being set for API nodes.
* Creates placement abstraction and moves capacity APIs to NodePool
* Removed TODO, added logging
* Dial reconnections for LB <-> runners: gRPC connections to runners are established using a backoff strategy on reconnection. This keeps the LB up even if one of the runners goes away, and reconnects to it as soon as it is back.
* Add a status call to the Runner protocol. Stub at the moment; to be used for things like draindown and health checks.
* Remove comment.
* Makes assign/release capacity lockless
* Fix hanging issue in LB agent when connections drop
* Add the CH hash from fnlb. Select this with FN_PLACER=ch when launching the LB.
* Small improvement for locking on reloadLBGmembership
* Stabilise the list of Runners returned by NodePool. The NodePoolManager makes some attempt to keep the list of runner nodes advertised as stable as possible; let's preserve this effort on the client side. The main point is to attempt to keep the same runner at the same index in the []Runner returned by NodePool.Runners(lbgid); the ch algorithm likes it when this is the case.
* Factor out a generator function for the Runners so that mocks can be injected
* Temporarily allow lbgroup to be specified in an HTTP header, while we sort out changes to the model
* Fixes bug with nil runners
* Initial work for mocking things in tests
* Fix for anonymous goroutine error
* Fixing lb_test to compile
* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real-world case
* Make gRPC port configurable, fix weird handling of web port too
* Unit test reloadMembers
* Check on runner creation failure
* Adding nullRunner in case of failure during runner creation
* Refactored capacity advertisements/aggregations. Made gRPC advertisement post asynchronous and non-blocking.
* Make capacityEntry private
* Change the runner gRPC bind address. This uses the existing `whoAmI` function, so that the gRPC server works when the runner is running on a different host.
* Add support for multiple fixed runners to pool manager
* Added harness for dataplane system tests, minor refactors
* Add Dockerfiles for components, along with docs.
* Doc fix: second runner needs a different name.
* Let us have three runners in system tests, why not
* The first system test running a function in API/LB/PureRunner mode
* Add unit test for Advertiser logic
* Fix issue with Pure Runner not sending the last data frame
* Use config in models.Call as a temporary mechanism to override LB group ID
* Make gofmt happy
* Updates documentation for how to configure LB groups for an app/route
* Small refactor of unit test
* Factor NodePool into its own package
* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations
* New dataplane with static runnerpool (#813): added static node pool as default implementation
* Moved nullRunner to grpc package
* Remove duplication in README
* Fix go vet issues
* Fix server initialisation in api tests
* Tiny logging changes in pool manager: using `WithError` instead of `Errorf` when appropriate.
* Change some log levels in the pure runner
* Fixing readme
* Moves multitenant compute documentation
* Adds introduction to multitenant readme
* Proper triggering of system tests in makefile
* Fix instructions about starting up the components
* Change db file for system tests to avoid contention in parallel tests
* Fixes revisions from merge
* Fix merge issue with handling of reserved slot
* Renaming nulb to lb in the doc and images folders
* Better TryExec sleep logic, clean shutdown: implement a better way to deal with the sleep inside the for loop while attempting to place a call, plus a clean way to shut down the connections to external components when we shut down the server.
* System test MySQL port: set the MySQL port for the system test to a different value from the one used by the api tests, to avoid conflicts as they can run in parallel.
* Change the container name for system-test
* Removes flaky test TestRouteRunnerExecution pending resolution by issue #796
* Amend remove_containers to remove newly added containers
* Rework capacity reservation logic at a higher level for now
* LB agent implements Submit rather than delegating.
* Fix go vet linting errors
* Changed a couple of error levels
* Fix formatting
* Removes commented-out test
* Adds snappy to vendor directory
* Updates Gopkg and vendor directories, removing snappy and adding siphash
* Wait for db containers to come up before starting the tests
* Make system tests start API node on 8085 to avoid port conflict with api_tests
* Avoid port conflicts with api_test.sh, which is run in parallel
* Fixes postgres port conflict and issue with removal of old containers
* Remove spurious println
committed by Tolga Ceylan
parent d5da6fd8c5
commit 8af57da7b2
@@ -111,6 +111,13 @@ type agent struct {
}

func New(da DataAccess) Agent {
	a := NewSyncOnly(da).(*agent)
	a.wg.Add(1)
	go a.asyncDequeue() // safe shutdown can nanny this fine
	return a
}

func NewSyncOnly(da DataAccess) Agent {

	cfg, err := NewAgentConfig()
	if err != nil {
@@ -133,9 +140,6 @@ func New(da DataAccess) Agent {
	}

	// TODO assert that agent doesn't get started for API nodes up above ?
	a.wg.Add(1)
	go a.asyncDequeue() // safe shutdown can nanny this fine

	return a
}

@@ -199,7 +203,6 @@ func (a *agent) endStateTrackers(ctx context.Context, call *call) {
func (a *agent) submit(ctx context.Context, call *call) error {
	statsEnqueue(ctx)

	// TODO can we replace state trackers with metrics?
	a.startStateTrackers(ctx, call)
	defer a.endStateTrackers(ctx, call)

@@ -208,7 +211,6 @@ func (a *agent) submit(ctx context.Context, call *call) error {
		handleStatsDequeue(ctx, err)
		return transformTimeout(err, true)
	}

	defer slot.Close(ctx) // notify our slot is free once we're done

	err = call.Start(ctx)

@@ -168,6 +168,22 @@ func FromModel(mCall *models.Call) CallOpt {
	}
}

func FromModelAndInput(mCall *models.Call, in io.ReadCloser) CallOpt {
	return func(a *agent, c *call) error {
		c.Call = mCall

		req, err := http.NewRequest(c.Method, c.URL, in)
		if err != nil {
			return err
		}
		req.Header = c.Headers

		c.req = req
		// TODO anything else really?
		return nil
	}
}

// TODO this should be required
func WithWriter(w io.Writer) CallOpt {
	return func(a *agent, c *call) error {
@@ -183,6 +199,13 @@ func WithContext(ctx context.Context) CallOpt {
	}
}

func WithoutPreemptiveCapacityCheck() CallOpt {
	return func(a *agent, c *call) error {
		c.disablePreemptiveCapacityCheck = true
		return nil
	}
}

// GetCall builds a Call that can be used to submit jobs to the agent.
//
// TODO where to put this? async and sync both call this
@@ -201,9 +224,11 @@ func (a *agent) GetCall(opts ...CallOpt) (Call, error) {
		return nil, errors.New("no model or request provided for call")
	}

	if !a.resources.IsResourcePossible(c.Memory, uint64(c.CPUs), c.Type == models.TypeAsync) {
		// if we're not going to be able to run this call on this machine, bail here.
		return nil, models.ErrCallTimeoutServerBusy
	if !c.disablePreemptiveCapacityCheck {
		if !a.resources.IsResourcePossible(c.Memory, uint64(c.CPUs), c.Type == models.TypeAsync) {
			// if we're not going to be able to run this call on this machine, bail here.
			return nil, models.ErrCallTimeoutServerBusy
		}
	}

	c.da = a.da
@@ -245,6 +270,8 @@ type call struct {
	execDeadline   time.Time
	requestState   RequestState
	containerState ContainerState
	// This can be used to disable the preemptive capacity check in GetCall
	disablePreemptiveCapacityCheck bool
}

func (c *call) Model() *models.Call { return c.Call }
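The `WithoutPreemptiveCapacityCheck` option in the diff above follows Go's functional-options pattern: each `CallOpt` is a closure that mutates the call under construction, and `GetCall(opts ...CallOpt)` applies them in order. A minimal stand-alone sketch of the pattern (the `call` struct and `GetCall` here are simplified stand-ins, not fn's actual types):

```go
package main

import "fmt"

// call is a stand-in for the agent's call struct.
type call struct {
	method                         string
	disablePreemptiveCapacityCheck bool
}

// CallOpt mutates a call during construction, mirroring the agent's
// func(a *agent, c *call) error options (the agent receiver is omitted here).
type CallOpt func(c *call) error

// WithoutPreemptiveCapacityCheck disables the capacity check, as in the diff.
func WithoutPreemptiveCapacityCheck() CallOpt {
	return func(c *call) error {
		c.disablePreemptiveCapacityCheck = true
		return nil
	}
}

// GetCall applies each option in order and returns the built call.
func GetCall(opts ...CallOpt) (*call, error) {
	c := &call{method: "GET"}
	for _, o := range opts {
		if err := o(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, _ := GetCall(WithoutPreemptiveCapacityCheck())
	fmt.Println(c.disablePreemptiveCapacityCheck)
}
```

The pattern keeps `GetCall`'s signature stable while callers opt in to per-call behaviour, which is why the LB agent can skip the preemptive check without a new constructor.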
801
api/agent/grpc/runner.pb.go
Normal file
@@ -0,0 +1,801 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: runner.proto

/*
Package runner is a generated protocol buffer package.

It is generated from these files:
	runner.proto

It has these top-level messages:
	TryCall
	CallAcknowledged
	DataFrame
	HttpHeader
	HttpRespMeta
	CallResultStart
	CallFinished
	ClientMsg
	RunnerMsg
	RunnerStatus
*/
package runner

import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/empty"

import (
	context "golang.org/x/net/context"
	grpc "google.golang.org/grpc"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package

// Request to allocate a slot for a call
type TryCall struct {
	ModelsCallJson string `protobuf:"bytes,1,opt,name=models_call_json,json=modelsCallJson" json:"models_call_json,omitempty"`
}

func (m *TryCall) Reset()                    { *m = TryCall{} }
func (m *TryCall) String() string            { return proto.CompactTextString(m) }
func (*TryCall) ProtoMessage()               {}
func (*TryCall) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }

func (m *TryCall) GetModelsCallJson() string {
	if m != nil {
		return m.ModelsCallJson
	}
	return ""
}

// Call has been accepted and a slot allocated, or it's been rejected
type CallAcknowledged struct {
	Committed             bool   `protobuf:"varint,1,opt,name=committed" json:"committed,omitempty"`
	Details               string `protobuf:"bytes,2,opt,name=details" json:"details,omitempty"`
	SlotAllocationLatency string `protobuf:"bytes,3,opt,name=slot_allocation_latency,json=slotAllocationLatency" json:"slot_allocation_latency,omitempty"`
}

func (m *CallAcknowledged) Reset()                    { *m = CallAcknowledged{} }
func (m *CallAcknowledged) String() string            { return proto.CompactTextString(m) }
func (*CallAcknowledged) ProtoMessage()               {}
func (*CallAcknowledged) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }

func (m *CallAcknowledged) GetCommitted() bool {
	if m != nil {
		return m.Committed
	}
	return false
}

func (m *CallAcknowledged) GetDetails() string {
	if m != nil {
		return m.Details
	}
	return ""
}

func (m *CallAcknowledged) GetSlotAllocationLatency() string {
	if m != nil {
		return m.SlotAllocationLatency
	}
	return ""
}

// Data sent C2S and S2C - as soon as the runner sees the first of these it
// will start running. If empty content, there must be one of these with eof.
// The runner will send these for the body of the response, AFTER it has sent
// a CallEnding message.
type DataFrame struct {
	Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
	Eof  bool   `protobuf:"varint,2,opt,name=eof" json:"eof,omitempty"`
}

func (m *DataFrame) Reset()                    { *m = DataFrame{} }
func (m *DataFrame) String() string            { return proto.CompactTextString(m) }
func (*DataFrame) ProtoMessage()               {}
func (*DataFrame) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }

func (m *DataFrame) GetData() []byte {
	if m != nil {
		return m.Data
	}
	return nil
}

func (m *DataFrame) GetEof() bool {
	if m != nil {
		return m.Eof
	}
	return false
}

type HttpHeader struct {
	Key   string `protobuf:"bytes,1,opt,name=key" json:"key,omitempty"`
	Value string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
}

func (m *HttpHeader) Reset()                    { *m = HttpHeader{} }
func (m *HttpHeader) String() string            { return proto.CompactTextString(m) }
func (*HttpHeader) ProtoMessage()               {}
func (*HttpHeader) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }

func (m *HttpHeader) GetKey() string {
	if m != nil {
		return m.Key
	}
	return ""
}

func (m *HttpHeader) GetValue() string {
	if m != nil {
		return m.Value
	}
	return ""
}

type HttpRespMeta struct {
	StatusCode int32         `protobuf:"varint,1,opt,name=status_code,json=statusCode" json:"status_code,omitempty"`
	Headers    []*HttpHeader `protobuf:"bytes,2,rep,name=headers" json:"headers,omitempty"`
}

func (m *HttpRespMeta) Reset()                    { *m = HttpRespMeta{} }
func (m *HttpRespMeta) String() string            { return proto.CompactTextString(m) }
func (*HttpRespMeta) ProtoMessage()               {}
func (*HttpRespMeta) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }

func (m *HttpRespMeta) GetStatusCode() int32 {
	if m != nil {
		return m.StatusCode
	}
	return 0
}

func (m *HttpRespMeta) GetHeaders() []*HttpHeader {
	if m != nil {
		return m.Headers
	}
	return nil
}

// Call has started to finish - data might not be here yet and it will be sent
// as DataFrames.
type CallResultStart struct {
	// Types that are valid to be assigned to Meta:
	//	*CallResultStart_Http
	Meta isCallResultStart_Meta `protobuf_oneof:"meta"`
}

func (m *CallResultStart) Reset()                    { *m = CallResultStart{} }
func (m *CallResultStart) String() string            { return proto.CompactTextString(m) }
func (*CallResultStart) ProtoMessage()               {}
func (*CallResultStart) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }

type isCallResultStart_Meta interface {
	isCallResultStart_Meta()
}

type CallResultStart_Http struct {
	Http *HttpRespMeta `protobuf:"bytes,100,opt,name=http,oneof"`
}

func (*CallResultStart_Http) isCallResultStart_Meta() {}

func (m *CallResultStart) GetMeta() isCallResultStart_Meta {
	if m != nil {
		return m.Meta
	}
	return nil
}

func (m *CallResultStart) GetHttp() *HttpRespMeta {
	if x, ok := m.GetMeta().(*CallResultStart_Http); ok {
		return x.Http
	}
	return nil
}

// XXX_OneofFuncs is for the internal use of the proto package.
func (*CallResultStart) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
	return _CallResultStart_OneofMarshaler, _CallResultStart_OneofUnmarshaler, _CallResultStart_OneofSizer, []interface{}{
		(*CallResultStart_Http)(nil),
	}
}

func _CallResultStart_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
	m := msg.(*CallResultStart)
	// meta
	switch x := m.Meta.(type) {
	case *CallResultStart_Http:
		b.EncodeVarint(100<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.Http); err != nil {
			return err
		}
	case nil:
	default:
		return fmt.Errorf("CallResultStart.Meta has unexpected type %T", x)
	}
	return nil
}

func _CallResultStart_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
	m := msg.(*CallResultStart)
	switch tag {
	case 100: // meta.http
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(HttpRespMeta)
		err := b.DecodeMessage(msg)
		m.Meta = &CallResultStart_Http{msg}
		return true, err
	default:
		return false, nil
	}
}

func _CallResultStart_OneofSizer(msg proto.Message) (n int) {
	m := msg.(*CallResultStart)
	// meta
	switch x := m.Meta.(type) {
	case *CallResultStart_Http:
		s := proto.Size(x.Http)
		n += proto.SizeVarint(100<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case nil:
	default:
		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
	}
	return n
}

// Call has really finished, it might have completed or crashed
type CallFinished struct {
	Success bool   `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
	Details string `protobuf:"bytes,2,opt,name=details" json:"details,omitempty"`
}

func (m *CallFinished) Reset()                    { *m = CallFinished{} }
func (m *CallFinished) String() string            { return proto.CompactTextString(m) }
func (*CallFinished) ProtoMessage()               {}
func (*CallFinished) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }

func (m *CallFinished) GetSuccess() bool {
	if m != nil {
		return m.Success
	}
	return false
}

func (m *CallFinished) GetDetails() string {
	if m != nil {
		return m.Details
	}
	return ""
}

type ClientMsg struct {
	// Types that are valid to be assigned to Body:
	//	*ClientMsg_Try
	//	*ClientMsg_Data
	Body isClientMsg_Body `protobuf_oneof:"body"`
}

func (m *ClientMsg) Reset()                    { *m = ClientMsg{} }
func (m *ClientMsg) String() string            { return proto.CompactTextString(m) }
func (*ClientMsg) ProtoMessage()               {}
func (*ClientMsg) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }

type isClientMsg_Body interface {
	isClientMsg_Body()
}

type ClientMsg_Try struct {
	Try *TryCall `protobuf:"bytes,1,opt,name=try,oneof"`
}
type ClientMsg_Data struct {
	Data *DataFrame `protobuf:"bytes,2,opt,name=data,oneof"`
}

func (*ClientMsg_Try) isClientMsg_Body()  {}
func (*ClientMsg_Data) isClientMsg_Body() {}

func (m *ClientMsg) GetBody() isClientMsg_Body {
	if m != nil {
		return m.Body
	}
	return nil
}

func (m *ClientMsg) GetTry() *TryCall {
	if x, ok := m.GetBody().(*ClientMsg_Try); ok {
		return x.Try
	}
	return nil
}

func (m *ClientMsg) GetData() *DataFrame {
	if x, ok := m.GetBody().(*ClientMsg_Data); ok {
		return x.Data
	}
	return nil
}

// XXX_OneofFuncs is for the internal use of the proto package.
func (*ClientMsg) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
	return _ClientMsg_OneofMarshaler, _ClientMsg_OneofUnmarshaler, _ClientMsg_OneofSizer, []interface{}{
		(*ClientMsg_Try)(nil),
		(*ClientMsg_Data)(nil),
	}
}

func _ClientMsg_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
	m := msg.(*ClientMsg)
	// body
	switch x := m.Body.(type) {
	case *ClientMsg_Try:
		b.EncodeVarint(1<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.Try); err != nil {
			return err
		}
	case *ClientMsg_Data:
		b.EncodeVarint(2<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.Data); err != nil {
			return err
		}
	case nil:
	default:
		return fmt.Errorf("ClientMsg.Body has unexpected type %T", x)
	}
	return nil
}

func _ClientMsg_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
	m := msg.(*ClientMsg)
	switch tag {
	case 1: // body.try
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(TryCall)
		err := b.DecodeMessage(msg)
		m.Body = &ClientMsg_Try{msg}
		return true, err
	case 2: // body.data
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(DataFrame)
		err := b.DecodeMessage(msg)
		m.Body = &ClientMsg_Data{msg}
		return true, err
	default:
		return false, nil
	}
}

func _ClientMsg_OneofSizer(msg proto.Message) (n int) {
	m := msg.(*ClientMsg)
	// body
	switch x := m.Body.(type) {
	case *ClientMsg_Try:
		s := proto.Size(x.Try)
		n += proto.SizeVarint(1<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case *ClientMsg_Data:
		s := proto.Size(x.Data)
		n += proto.SizeVarint(2<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case nil:
	default:
		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
	}
	return n
}

type RunnerMsg struct {
	// Types that are valid to be assigned to Body:
	//	*RunnerMsg_Acknowledged
	//	*RunnerMsg_ResultStart
	//	*RunnerMsg_Data
	//	*RunnerMsg_Finished
	Body isRunnerMsg_Body `protobuf_oneof:"body"`
}

func (m *RunnerMsg) Reset()                    { *m = RunnerMsg{} }
func (m *RunnerMsg) String() string            { return proto.CompactTextString(m) }
func (*RunnerMsg) ProtoMessage()               {}
func (*RunnerMsg) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }

type isRunnerMsg_Body interface {
	isRunnerMsg_Body()
}

type RunnerMsg_Acknowledged struct {
	Acknowledged *CallAcknowledged `protobuf:"bytes,1,opt,name=acknowledged,oneof"`
}
type RunnerMsg_ResultStart struct {
	ResultStart *CallResultStart `protobuf:"bytes,2,opt,name=result_start,json=resultStart,oneof"`
}
type RunnerMsg_Data struct {
	Data *DataFrame `protobuf:"bytes,3,opt,name=data,oneof"`
}
type RunnerMsg_Finished struct {
	Finished *CallFinished `protobuf:"bytes,4,opt,name=finished,oneof"`
}

func (*RunnerMsg_Acknowledged) isRunnerMsg_Body() {}
func (*RunnerMsg_ResultStart) isRunnerMsg_Body()  {}
func (*RunnerMsg_Data) isRunnerMsg_Body()         {}
func (*RunnerMsg_Finished) isRunnerMsg_Body()     {}

func (m *RunnerMsg) GetBody() isRunnerMsg_Body {
	if m != nil {
		return m.Body
	}
	return nil
}

func (m *RunnerMsg) GetAcknowledged() *CallAcknowledged {
	if x, ok := m.GetBody().(*RunnerMsg_Acknowledged); ok {
		return x.Acknowledged
	}
	return nil
}

func (m *RunnerMsg) GetResultStart() *CallResultStart {
	if x, ok := m.GetBody().(*RunnerMsg_ResultStart); ok {
		return x.ResultStart
	}
	return nil
}

func (m *RunnerMsg) GetData() *DataFrame {
	if x, ok := m.GetBody().(*RunnerMsg_Data); ok {
		return x.Data
	}
	return nil
}

func (m *RunnerMsg) GetFinished() *CallFinished {
	if x, ok := m.GetBody().(*RunnerMsg_Finished); ok {
		return x.Finished
	}
	return nil
}

// XXX_OneofFuncs is for the internal use of the proto package.
func (*RunnerMsg) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
	return _RunnerMsg_OneofMarshaler, _RunnerMsg_OneofUnmarshaler, _RunnerMsg_OneofSizer, []interface{}{
		(*RunnerMsg_Acknowledged)(nil),
		(*RunnerMsg_ResultStart)(nil),
		(*RunnerMsg_Data)(nil),
		(*RunnerMsg_Finished)(nil),
	}
}

func _RunnerMsg_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
	m := msg.(*RunnerMsg)
	// body
	switch x := m.Body.(type) {
	case *RunnerMsg_Acknowledged:
		b.EncodeVarint(1<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.Acknowledged); err != nil {
			return err
		}
	case *RunnerMsg_ResultStart:
		b.EncodeVarint(2<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.ResultStart); err != nil {
			return err
		}
	case *RunnerMsg_Data:
		b.EncodeVarint(3<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.Data); err != nil {
			return err
		}
	case *RunnerMsg_Finished:
		b.EncodeVarint(4<<3 | proto.WireBytes)
		if err := b.EncodeMessage(x.Finished); err != nil {
			return err
		}
	case nil:
	default:
		return fmt.Errorf("RunnerMsg.Body has unexpected type %T", x)
	}
	return nil
}

func _RunnerMsg_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
	m := msg.(*RunnerMsg)
	switch tag {
	case 1: // body.acknowledged
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(CallAcknowledged)
		err := b.DecodeMessage(msg)
		m.Body = &RunnerMsg_Acknowledged{msg}
		return true, err
	case 2: // body.result_start
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(CallResultStart)
		err := b.DecodeMessage(msg)
		m.Body = &RunnerMsg_ResultStart{msg}
		return true, err
	case 3: // body.data
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(DataFrame)
		err := b.DecodeMessage(msg)
		m.Body = &RunnerMsg_Data{msg}
		return true, err
	case 4: // body.finished
		if wire != proto.WireBytes {
			return true, proto.ErrInternalBadWireType
		}
		msg := new(CallFinished)
		err := b.DecodeMessage(msg)
		m.Body = &RunnerMsg_Finished{msg}
		return true, err
	default:
		return false, nil
	}
}

func _RunnerMsg_OneofSizer(msg proto.Message) (n int) {
	m := msg.(*RunnerMsg)
	// body
	switch x := m.Body.(type) {
	case *RunnerMsg_Acknowledged:
		s := proto.Size(x.Acknowledged)
		n += proto.SizeVarint(1<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case *RunnerMsg_ResultStart:
		s := proto.Size(x.ResultStart)
		n += proto.SizeVarint(2<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case *RunnerMsg_Data:
		s := proto.Size(x.Data)
		n += proto.SizeVarint(3<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case *RunnerMsg_Finished:
		s := proto.Size(x.Finished)
		n += proto.SizeVarint(4<<3 | proto.WireBytes)
		n += proto.SizeVarint(uint64(s))
		n += s
	case nil:
	default:
		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
	}
	return n
}

type RunnerStatus struct {
	Active int32 `protobuf:"varint,2,opt,name=active" json:"active,omitempty"`
}

func (m *RunnerStatus) Reset()                    { *m = RunnerStatus{} }
func (m *RunnerStatus) String() string            { return proto.CompactTextString(m) }
func (*RunnerStatus) ProtoMessage()               {}
func (*RunnerStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }

func (m *RunnerStatus) GetActive() int32 {
	if m != nil {
		return m.Active
	}
	return 0
}

func init() {
	proto.RegisterType((*TryCall)(nil), "TryCall")
	proto.RegisterType((*CallAcknowledged)(nil), "CallAcknowledged")
	proto.RegisterType((*DataFrame)(nil), "DataFrame")
	proto.RegisterType((*HttpHeader)(nil), "HttpHeader")
	proto.RegisterType((*HttpRespMeta)(nil), "HttpRespMeta")
	proto.RegisterType((*CallResultStart)(nil), "CallResultStart")
	proto.RegisterType((*CallFinished)(nil), "CallFinished")
	proto.RegisterType((*ClientMsg)(nil), "ClientMsg")
	proto.RegisterType((*RunnerMsg)(nil), "RunnerMsg")
	proto.RegisterType((*RunnerStatus)(nil), "RunnerStatus")
}

// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
|
||||
var _ grpc.ClientConn
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the grpc package it is being compiled against.
|
||||
const _ = grpc.SupportPackageIsVersion4
|
||||
|
||||
// Client API for RunnerProtocol service
|
||||
|
||||
type RunnerProtocolClient interface {
|
||||
Engage(ctx context.Context, opts ...grpc.CallOption) (RunnerProtocol_EngageClient, error)
|
||||
// Rather than rely on Prometheus for this, expose status that's specific to the runner lifecycle through this.
|
||||
Status(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*RunnerStatus, error)
|
||||
}
|
||||
|
||||
type runnerProtocolClient struct {
|
||||
cc *grpc.ClientConn
|
||||
}
|
||||
|
||||
func NewRunnerProtocolClient(cc *grpc.ClientConn) RunnerProtocolClient {
|
||||
return &runnerProtocolClient{cc}
|
||||
}
|
||||
|
||||
func (c *runnerProtocolClient) Engage(ctx context.Context, opts ...grpc.CallOption) (RunnerProtocol_EngageClient, error) {
|
||||
stream, err := grpc.NewClientStream(ctx, &_RunnerProtocol_serviceDesc.Streams[0], c.cc, "/RunnerProtocol/Engage", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &runnerProtocolEngageClient{stream}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type RunnerProtocol_EngageClient interface {
|
||||
Send(*ClientMsg) error
|
||||
Recv() (*RunnerMsg, error)
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
type runnerProtocolEngageClient struct {
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
func (x *runnerProtocolEngageClient) Send(m *ClientMsg) error {
|
||||
return x.ClientStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *runnerProtocolEngageClient) Recv() (*RunnerMsg, error) {
|
||||
m := new(RunnerMsg)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func (c *runnerProtocolClient) Status(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*RunnerStatus, error) {
|
||||
out := new(RunnerStatus)
|
||||
err := grpc.Invoke(ctx, "/RunnerProtocol/Status", in, out, c.cc, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// Server API for RunnerProtocol service
|
||||
|
||||
type RunnerProtocolServer interface {
|
||||
Engage(RunnerProtocol_EngageServer) error
|
||||
// Rather than rely on Prometheus for this, expose status that's specific to the runner lifecycle through this.
|
||||
Status(context.Context, *google_protobuf.Empty) (*RunnerStatus, error)
|
||||
}
|
||||
|
||||
func RegisterRunnerProtocolServer(s *grpc.Server, srv RunnerProtocolServer) {
|
||||
s.RegisterService(&_RunnerProtocol_serviceDesc, srv)
|
||||
}
|
||||
|
||||
func _RunnerProtocol_Engage_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
return srv.(RunnerProtocolServer).Engage(&runnerProtocolEngageServer{stream})
|
||||
}
|
||||
|
||||
type RunnerProtocol_EngageServer interface {
|
||||
Send(*RunnerMsg) error
|
||||
Recv() (*ClientMsg, error)
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
type runnerProtocolEngageServer struct {
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
func (x *runnerProtocolEngageServer) Send(m *RunnerMsg) error {
|
||||
return x.ServerStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *runnerProtocolEngageServer) Recv() (*ClientMsg, error) {
|
||||
m := new(ClientMsg)
|
||||
if err := x.ServerStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func _RunnerProtocol_Status_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(google_protobuf.Empty)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(RunnerProtocolServer).Status(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/RunnerProtocol/Status",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(RunnerProtocolServer).Status(ctx, req.(*google_protobuf.Empty))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
var _RunnerProtocol_serviceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "RunnerProtocol",
|
||||
HandlerType: (*RunnerProtocolServer)(nil),
|
||||
Methods: []grpc.MethodDesc{
|
||||
{
|
||||
MethodName: "Status",
|
||||
Handler: _RunnerProtocol_Status_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{
|
||||
{
|
||||
StreamName: "Engage",
|
||||
Handler: _RunnerProtocol_Engage_Handler,
|
||||
ServerStreams: true,
|
||||
ClientStreams: true,
|
||||
},
|
||||
},
|
||||
Metadata: "runner.proto",
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("runner.proto", fileDescriptor0) }
|
||||
|
||||
var fileDescriptor0 = []byte{
|
||||
// 566 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x93, 0x59, 0x6b, 0xdb, 0x40,
|
||||
0x10, 0xc7, 0xad, 0xd8, 0x71, 0xec, 0xb1, 0x92, 0xba, 0x4b, 0x0f, 0x93, 0x06, 0x1a, 0xd4, 0x03,
|
||||
0x43, 0x61, 0xd3, 0x3a, 0x3d, 0xde, 0x0a, 0x49, 0x9a, 0x20, 0x4a, 0x03, 0x65, 0x53, 0xfa, 0x6a,
|
||||
0x36, 0xd2, 0x44, 0x56, 0xb3, 0xd2, 0x1a, 0xed, 0x28, 0xc5, 0xaf, 0xfd, 0x88, 0xfd, 0x44, 0x65,
|
||||
0x57, 0x47, 0xdc, 0x40, 0xde, 0x34, 0xfb, 0x9f, 0xf3, 0xa7, 0x19, 0xf0, 0x8b, 0x32, 0xcf, 0xb1,
|
||||
0xe0, 0xcb, 0x42, 0x93, 0xde, 0x7d, 0x96, 0x68, 0x9d, 0x28, 0x3c, 0x70, 0xd6, 0x65, 0x79, 0x75,
|
||||
0x80, 0xd9, 0x92, 0x56, 0x95, 0x18, 0x1c, 0xc2, 0xd6, 0x8f, 0x62, 0x75, 0x22, 0x95, 0x62, 0x53,
|
||||
0x18, 0x67, 0x3a, 0x46, 0x65, 0xe6, 0x91, 0x54, 0x6a, 0xfe, 0xcb, 0xe8, 0x7c, 0xe2, 0xed, 0x7b,
|
||||
0xd3, 0xa1, 0xd8, 0xa9, 0xde, 0xad, 0xd7, 0x57, 0xa3, 0xf3, 0xe0, 0x8f, 0x07, 0x63, 0x6b, 0x1c,
|
||||
0x45, 0xd7, 0xb9, 0xfe, 0xad, 0x30, 0x4e, 0x30, 0x66, 0x7b, 0x30, 0x8c, 0x74, 0x96, 0xa5, 0x44,
|
||||
0x18, 0xbb, 0xb8, 0x81, 0xb8, 0x7d, 0x60, 0x13, 0xd8, 0x8a, 0x91, 0x64, 0xaa, 0xcc, 0x64, 0xc3,
|
||||
0xe5, 0x6c, 0x4c, 0xf6, 0x11, 0x9e, 0x1a, 0xa5, 0x69, 0x2e, 0x95, 0xd2, 0x91, 0xa4, 0x54, 0xe7,
|
||||
0x73, 0x25, 0x09, 0xf3, 0x68, 0x35, 0xe9, 0x3a, 0xcf, 0xc7, 0x56, 0x3e, 0x6a, 0xd5, 0x6f, 0x95,
|
||||
0x18, 0xbc, 0x83, 0xe1, 0x17, 0x49, 0xf2, 0xac, 0x90, 0x19, 0x32, 0x06, 0xbd, 0x58, 0x92, 0x74,
|
||||
0x75, 0x7d, 0xe1, 0xbe, 0xd9, 0x18, 0xba, 0xa8, 0xaf, 0x5c, 0xb9, 0x81, 0xb0, 0x9f, 0xc1, 0x7b,
|
||||
0x80, 0x90, 0x68, 0x19, 0xa2, 0x8c, 0xb1, 0xb0, 0xfa, 0x35, 0xae, 0xea, 0x11, 0xed, 0x27, 0x7b,
|
||||
0x04, 0x9b, 0x37, 0x52, 0x95, 0x58, 0xb7, 0x58, 0x19, 0xc1, 0x4f, 0xf0, 0x6d, 0x94, 0x40, 0xb3,
|
||||
0x3c, 0x47, 0x92, 0xec, 0x39, 0x8c, 0x0c, 0x49, 0x2a, 0xcd, 0x3c, 0xd2, 0x31, 0xba, 0xf8, 0x4d,
|
||||
0x01, 0xd5, 0xd3, 0x89, 0x8e, 0x91, 0xbd, 0x82, 0xad, 0x85, 0x2b, 0x61, 0x67, 0xed, 0x4e, 0x47,
|
||||
0xb3, 0x11, 0xbf, 0x2d, 0x2b, 0x1a, 0x2d, 0xf8, 0x0c, 0x0f, 0x2c, 0x44, 0x81, 0xa6, 0x54, 0x74,
|
||||
0x41, 0xb2, 0x20, 0xf6, 0x02, 0x7a, 0x0b, 0xa2, 0xe5, 0x24, 0xde, 0xf7, 0xa6, 0xa3, 0xd9, 0x36,
|
||||
0x5f, 0xaf, 0x1b, 0x76, 0x84, 0x13, 0x8f, 0xfb, 0xd0, 0xcb, 0x90, 0x64, 0x70, 0x0c, 0xbe, 0x8d,
|
||||
0x3f, 0x4b, 0xf3, 0xd4, 0x2c, 0x2a, 0xc4, 0xa6, 0x8c, 0x22, 0x34, 0xa6, 0xc6, 0xdf, 0x98, 0xf7,
|
||||
0xc3, 0x0f, 0x2e, 0x60, 0x78, 0xa2, 0x52, 0xcc, 0xe9, 0xdc, 0x24, 0x6c, 0x0f, 0xba, 0x54, 0x54,
|
||||
0x40, 0x46, 0xb3, 0x01, 0xaf, 0xf7, 0x22, 0xec, 0x08, 0xfb, 0xcc, 0xf6, 0x6b, 0xc4, 0x1b, 0x4e,
|
||||
0x06, 0xde, 0xc2, 0xb7, 0x8d, 0x59, 0xc5, 0x36, 0x76, 0xa9, 0xe3, 0x55, 0xf0, 0xd7, 0x83, 0xa1,
|
||||
0x70, 0x1b, 0x68, 0xb3, 0x7e, 0x02, 0x5f, 0xae, 0xed, 0x49, 0x9d, 0xfe, 0x21, 0xbf, 0xbb, 0x40,
|
||||
0x61, 0x47, 0xfc, 0xe7, 0xc8, 0x3e, 0x80, 0x5f, 0x38, 0x36, 0x73, 0x63, 0xe1, 0xd4, 0x85, 0xc7,
|
||||
0xfc, 0x0e, 0xb4, 0xb0, 0x23, 0x46, 0xc5, 0x1a, 0xc3, 0xa6, 0xcf, 0xee, 0x7d, 0x7d, 0xb2, 0x37,
|
||||
0x30, 0xb8, 0xaa, 0xa1, 0x4d, 0x7a, 0x35, 0xe9, 0x75, 0x92, 0x61, 0x47, 0xb4, 0x0e, 0xed, 0x50,
|
||||
0xaf, 0xc1, 0xaf, 0x66, 0xba, 0x70, 0x3f, 0x9a, 0x3d, 0x81, 0xbe, 0x8c, 0x28, 0xbd, 0xa9, 0x96,
|
||||
0x65, 0x53, 0xd4, 0xd6, 0x2c, 0x81, 0x9d, 0xca, 0xef, 0xbb, 0xbd, 0xaf, 0x48, 0x2b, 0xf6, 0x12,
|
||||
0xfa, 0xa7, 0x79, 0x22, 0x13, 0x64, 0xc0, 0x5b, 0xd8, 0xbb, 0xc0, 0x5b, 0x44, 0x53, 0xef, 0xad,
|
||||
0xc7, 0x0e, 0xa0, 0xdf, 0x64, 0xe6, 0xd5, 0xc1, 0xf2, 0xe6, 0x60, 0xf9, 0xa9, 0x3d, 0xd8, 0xdd,
|
||||
0x6d, 0xbe, 0xde, 0xc0, 0x65, 0xdf, 0xc9, 0x87, 0xff, 0x02, 0x00, 0x00, 0xff, 0xff, 0x05, 0xe4,
|
||||
0x7b, 0x85, 0xed, 0x03, 0x00, 0x00,
|
||||
}
api/agent/grpc/runner.proto (new file, 76 lines)
@@ -0,0 +1,76 @@
syntax = "proto3";

import "google/protobuf/empty.proto";

// Request to allocate a slot for a call
message TryCall {
    string models_call_json = 1;
}

// Call has been accepted and a slot allocated, or it's been rejected
message CallAcknowledged {
    bool committed = 1;
    string details = 2;
    string slot_allocation_latency = 3;
}

// Data sent C2S and S2C - as soon as the runner sees the first of these it
// will start running. If empty content, there must be one of these with eof.
// The runner will send these for the body of the response, AFTER it has sent
// a CallResultStart message.
message DataFrame {
    bytes data = 1;
    bool eof = 2;
}

message HttpHeader {
    string key = 1;
    string value = 2;
}

message HttpRespMeta {
    int32 status_code = 1;
    repeated HttpHeader headers = 2;
}

// Call has started to finish - data might not be here yet and it will be sent
// as DataFrames.
message CallResultStart {
    oneof meta {
        HttpRespMeta http = 100;
    }
}

// Call has really finished, it might have completed or crashed
message CallFinished {
    bool success = 1;
    string details = 2;
}

message ClientMsg {
    oneof body {
        TryCall try = 1;
        DataFrame data = 2;
    }
}

message RunnerMsg {
    oneof body {
        CallAcknowledged acknowledged = 1;
        CallResultStart result_start = 2;
        DataFrame data = 3;
        CallFinished finished = 4;
    }
}

message RunnerStatus {
    int32 active = 2; // Number of currently inflight responses
}

service RunnerProtocol {
    rpc Engage (stream ClientMsg) returns (stream RunnerMsg);

    // Rather than rely on Prometheus for this, expose status that's specific to the runner lifecycle through this.
    rpc Status(google.protobuf.Empty) returns (RunnerStatus);
}
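The oneof messages above imply an ordering on each call carried over the `Engage` stream: the runner replies with an acknowledgement first, then (for a committed call) a `result_start`, any number of `data` frames, and a final `finished`. A minimal sketch of that ordering check, using plain string tags as hypothetical stand-ins for the generated oneof types (`runnerFrame` and `validSequence` are illustrative names, not part of the repo):

```go
package main

import "fmt"

// runnerFrame is an illustrative stand-in for the RunnerMsg oneof cases.
type runnerFrame string

// validSequence reports whether a runner's reply stream follows the shape
// sketched in runner.proto: acknowledged first, then optionally a
// result_start with zero or more data frames, ending with finished.
func validSequence(frames []runnerFrame) bool {
	i := 0
	if i >= len(frames) || frames[i] != "acknowledged" {
		return false
	}
	i++
	if i < len(frames) && frames[i] == "result_start" {
		i++
		for i < len(frames) && frames[i] == "data" {
			i++
		}
	}
	return i == len(frames)-1 && frames[i] == "finished"
}

func main() {
	ok := validSequence([]runnerFrame{"acknowledged", "result_start", "data", "data", "finished"})
	fmt.Println(ok) // true
	fmt.Println(validSequence([]runnerFrame{"data", "finished"})) // false
}
```

A rejected call is also valid under this shape: just `acknowledged` (with `committed == false`) followed by `finished`.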
api/agent/hybrid/nop.go (new file, 54 lines)
@@ -0,0 +1,54 @@
package hybrid

import (
	"context"
	"errors"
	"io"

	"github.com/fnproject/fn/api/agent"
	"github.com/fnproject/fn/api/models"
	"go.opencensus.io/trace"
)

// nopDataStore implements agent.DataAccess
type nopDataStore struct{}

func NewNopDataStore() (agent.DataAccess, error) {
	return &nopDataStore{}, nil
}

func (cl *nopDataStore) Enqueue(ctx context.Context, c *models.Call) error {
	ctx, span := trace.StartSpan(ctx, "nop_datastore_enqueue")
	defer span.End()
	return errors.New("Should not call Enqueue on a NOP data store")
}

func (cl *nopDataStore) Dequeue(ctx context.Context) (*models.Call, error) {
	ctx, span := trace.StartSpan(ctx, "nop_datastore_dequeue")
	defer span.End()
	return nil, errors.New("Should not call Dequeue on a NOP data store")
}

func (cl *nopDataStore) Start(ctx context.Context, c *models.Call) error {
	ctx, span := trace.StartSpan(ctx, "nop_datastore_start")
	defer span.End()
	return nil // It's ok to call this method, and it does no operations
}

func (cl *nopDataStore) Finish(ctx context.Context, c *models.Call, r io.Reader, async bool) error {
	ctx, span := trace.StartSpan(ctx, "nop_datastore_end")
	defer span.End()
	return nil // It's ok to call this method, and it does no operations
}

func (cl *nopDataStore) GetApp(ctx context.Context, appName string) (*models.App, error) {
	ctx, span := trace.StartSpan(ctx, "nop_datastore_get_app")
	defer span.End()
	return nil, errors.New("Should not call GetApp on a NOP data store")
}

func (cl *nopDataStore) GetRoute(ctx context.Context, appName, route string) (*models.Route, error) {
	ctx, span := trace.StartSpan(ctx, "nop_datastore_get_route")
	defer span.End()
	return nil, errors.New("Should not call GetRoute on a NOP data store")
}
api/agent/lb_agent.go (new file, 252 lines)
@@ -0,0 +1,252 @@
package agent

import (
	"context"
	"errors"
	"io"
	"net/http"
	"sync"
	"time"

	"github.com/sirupsen/logrus"
	"go.opencensus.io/trace"

	"github.com/fnproject/fn/api/common"
	"github.com/fnproject/fn/api/models"
	"github.com/fnproject/fn/fnext"
	"github.com/fnproject/fn/poolmanager"
)

// RequestReader takes an agent.Call and returns a ReadCloser for the request body inside it
func RequestReader(c *Call) (io.ReadCloser, error) {
	// Get the call :(((((
	cc, ok := (*c).(*call)

	if !ok {
		return nil, errors.New("Can't cast agent.Call to agent.call")
	}

	if cc.req == nil {
		return nil, errors.New("Call doesn't contain a request")
	}

	logrus.Info(cc.req)

	return cc.req.Body, nil
}

func ResponseWriter(c *Call) (*http.ResponseWriter, error) {
	cc, ok := (*c).(*call)

	if !ok {
		return nil, errors.New("Can't cast agent.Call to agent.call")
	}

	if rw, ok := cc.w.(http.ResponseWriter); ok {
		return &rw, nil
	}

	return nil, errors.New("Unable to get HTTP response writer from the call")
}

type remoteSlot struct {
	lbAgent *lbAgent
}

func (s *remoteSlot) exec(ctx context.Context, call *call) error {
	a := s.lbAgent

	memMb := call.Model().Memory
	lbGroupID := GetGroupID(call.Model())

	capacityRequest := &poolmanager.CapacityRequest{TotalMemoryMb: memMb, LBGroupID: lbGroupID}
	a.np.AssignCapacity(capacityRequest)
	defer a.np.ReleaseCapacity(capacityRequest)

	err := a.placer.PlaceCall(a.np, ctx, call, lbGroupID)
	if err != nil {
		logrus.WithError(err).Error("Failed to place call")
	}
	return err
}
func (s *remoteSlot) Close(ctx context.Context) error {
	return nil
}

func (s *remoteSlot) Error() error {
	return nil
}

type Placer interface {
	PlaceCall(np NodePool, ctx context.Context, call *call, lbGroupID string) error
}

type naivePlacer struct {
}

func NewNaivePlacer() Placer {
	return &naivePlacer{}
}

func minDuration(f, s time.Duration) time.Duration {
	if f < s {
		return f
	}
	return s
}

func (sp *naivePlacer) PlaceCall(np NodePool, ctx context.Context, call *call, lbGroupID string) error {
	timeout := time.After(call.slotDeadline.Sub(time.Now()))

	for {
		select {
		case <-ctx.Done():
			return models.ErrCallTimeoutServerBusy
		case <-timeout:
			return models.ErrCallTimeoutServerBusy
		default:
			for _, r := range np.Runners(lbGroupID) {
				placed, err := r.TryExec(ctx, call)
				if err != nil {
					logrus.WithError(err).Error("Failed during call placement")
				}
				if placed {
					return err
				}
			}
			remaining := call.slotDeadline.Sub(time.Now())
			if remaining <= 0 {
				return models.ErrCallTimeoutServerBusy
			}
			time.Sleep(minDuration(retryWaitInterval, remaining))
		}
	}
}
const (
	runnerReconnectInterval = 5 * time.Second
	// sleep time to attempt placement across all runners before retrying
	retryWaitInterval = 10 * time.Millisecond
	// sleep time when scaling from 0 to 1 runners
	noCapacityWaitInterval = 1 * time.Second
	// amount of time to wait to place a request on a runner
	placementTimeout = 15 * time.Second
)

type lbAgent struct {
	delegatedAgent Agent
	np             NodePool
	placer         Placer

	wg       sync.WaitGroup // Needs a good name
	shutdown chan struct{}
}

func NewLBAgent(agent Agent, np NodePool, p Placer) (Agent, error) {
	a := &lbAgent{
		delegatedAgent: agent,
		np:             np,
		placer:         p,
	}
	return a, nil
}

// GetCall delegates to the wrapped agent but disables the capacity check as
// this agent isn't actually running the call.
func (a *lbAgent) GetCall(opts ...CallOpt) (Call, error) {
	opts = append(opts, WithoutPreemptiveCapacityCheck())
	return a.delegatedAgent.GetCall(opts...)
}

func (a *lbAgent) Close() error {
	a.np.Shutdown()
	err := a.delegatedAgent.Close()
	if err != nil {
		return err
	}
	return nil
}

func GetGroupID(call *models.Call) string {
	// TODO until fn supports metadata, allow LB Group ID to
	// be overridden via configuration.
	// Note that employing this mechanism will expose the value of the
	// LB Group ID to the function as an environment variable!
	lbgID := call.Config["FN_LB_GROUP_ID"]
	if lbgID == "" {
		return "default"
	}
	return lbgID
}
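The lookup-with-fallback behaviour of `GetGroupID` can be checked in isolation; this sketch uses a plain map as a stand-in for `call.Config` (`getGroupID` here is illustrative, not the repo's exported function):

```go
package main

import "fmt"

// getGroupID mirrors the lookup in lb_agent.go: the LB group ID is read
// from the call configuration and falls back to "default" when unset.
func getGroupID(config map[string]string) string {
	if id := config["FN_LB_GROUP_ID"]; id != "" {
		return id
	}
	return "default"
}

func main() {
	fmt.Println(getGroupID(map[string]string{}))                         // default
	fmt.Println(getGroupID(map[string]string{"FN_LB_GROUP_ID": "gold"})) // gold
}
```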
func (a *lbAgent) Submit(callI Call) error {
	a.wg.Add(1)
	defer a.wg.Done()

	select {
	case <-a.shutdown:
		return models.ErrCallTimeoutServerBusy
	default:
	}

	call := callI.(*call)

	ctx, cancel := context.WithDeadline(call.req.Context(), call.execDeadline)
	call.req = call.req.WithContext(ctx)
	defer cancel()

	ctx, span := trace.StartSpan(ctx, "agent_submit")
	defer span.End()

	err := a.submit(ctx, call)
	return err
}

func (a *lbAgent) submit(ctx context.Context, call *call) error {
	statsEnqueue(ctx)

	a.startStateTrackers(ctx, call)
	defer a.endStateTrackers(ctx, call)

	slot := &remoteSlot{lbAgent: a}

	defer slot.Close(ctx) // notify our slot is free once we're done

	err := call.Start(ctx)
	if err != nil {
		handleStatsDequeue(ctx, err)
		return transformTimeout(err, true)
	}

	statsDequeueAndStart(ctx)

	// pass this error (nil or otherwise) to end directly, to store status, etc
	err = slot.exec(ctx, call)
	handleStatsEnd(ctx, err)

	// TODO: we need to allocate more time to store the call + logs in case the call timed out,
	// but this could put us over the timeout if the call did not reply yet (need better policy).
	ctx = common.BackgroundContext(ctx)
	err = call.End(ctx, err)
	return transformTimeout(err, false)
}

func (a *lbAgent) AddCallListener(cl fnext.CallListener) {
	a.delegatedAgent.AddCallListener(cl)
}

func (a *lbAgent) Enqueue(context.Context, *models.Call) error {
	logrus.Fatal("Enqueue not implemented. Panicking.")
	return nil
}

func (a *lbAgent) startStateTrackers(ctx context.Context, call *call) {
	delegatedAgent := a.delegatedAgent.(*agent)
	delegatedAgent.startStateTrackers(ctx, call)
}

func (a *lbAgent) endStateTrackers(ctx context.Context, call *call) {
	delegatedAgent := a.delegatedAgent.(*agent)
	delegatedAgent.endStateTrackers(ctx, call)
}
api/agent/lb_hash.go (new file, 74 lines)
@@ -0,0 +1,74 @@
/* The consistent hash ring from the original fnlb.
The behaviour of this depends on changes to the runner list leaving it relatively stable.
*/

package agent

import (
	"context"
	"time"

	"github.com/dchest/siphash"
	"github.com/fnproject/fn/api/models"
	"github.com/sirupsen/logrus"
)

type chPlacer struct {
}

func NewCHPlacer() Placer {
	return &chPlacer{}
}

// This borrows the CH placement algorithm from the original FNLB.
// Because we ask a runner to accept load (queuing on the LB rather than on the nodes), we don't use
// the LB_WAIT to drive placement decisions: runners only accept work if they have the capacity for it.
func (p *chPlacer) PlaceCall(np NodePool, ctx context.Context, call *call, lbGroupID string) error {
	// The key is just the path in this case
	key := call.Path
	sum64 := siphash.Hash(0, 0x4c617279426f6174, []byte(key))
	timeout := time.After(call.slotDeadline.Sub(time.Now()))
	for {
		select {
		case <-ctx.Done():
			return models.ErrCallTimeoutServerBusy
		case <-timeout:
			return models.ErrCallTimeoutServerBusy
		default:
			runners := np.Runners(lbGroupID)
			i := int(jumpConsistentHash(sum64, int32(len(runners))))
			for j := 0; j < len(runners); j++ {
				r := runners[i]

				placed, err := r.TryExec(ctx, call)
				if err != nil {
					logrus.WithError(err).Error("Failed during call placement")
				}
				if placed {
					return err
				}

				i = (i + 1) % len(runners)
			}

			remaining := call.slotDeadline.Sub(time.Now())
			if remaining <= 0 {
				return models.ErrCallTimeoutServerBusy
			}
			time.Sleep(minDuration(retryWaitInterval, remaining))
		}
	}
}

// A Fast, Minimal Memory, Consistent Hash Algorithm:
// https://arxiv.org/ftp/arxiv/papers/1406/1406.2294.pdf
func jumpConsistentHash(key uint64, num_buckets int32) int32 {
	var b, j int64 = -1, 0
	for j < int64(num_buckets) {
		b = j
		key = key*2862933555777941757 + 1
		j = (b + 1) * int64((1<<31)/(key>>33)+1)
	}
	return int32(b)
}
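Two properties of jump consistent hash matter for this placer: the same key always maps to the same bucket (so a route consistently lands on the same runner index across LBs), and every result lies in `[0, numBuckets)`. A standalone check, re-declaring the function from above so it can run on its own:

```go
package main

import "fmt"

// jumpConsistentHash is re-declared from lb_hash.go so the properties
// below can be checked without the rest of the package.
func jumpConsistentHash(key uint64, numBuckets int32) int32 {
	var b, j int64 = -1, 0
	for j < int64(numBuckets) {
		b = j
		key = key*2862933555777941757 + 1
		j = (b + 1) * int64((1<<31)/(key>>33)+1)
	}
	return int32(b)
}

func main() {
	// With a single bucket every key maps to bucket 0.
	fmt.Println(jumpConsistentHash(0x4c617279426f6174, 1)) // 0

	// Deterministic and in range for multiple buckets.
	a := jumpConsistentHash(0x4c617279426f6174, 10)
	b := jumpConsistentHash(0x4c617279426f6174, 10)
	fmt.Println(a == b, a >= 0 && a < 10) // true true
}
```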
api/agent/lb_pool.go (new file, 22 lines)
@@ -0,0 +1,22 @@
package agent

import (
	"context"

	"github.com/fnproject/fn/poolmanager"
)

// NodePool provides information about pools of runners and receives capacity demands
type NodePool interface {
	Runners(lbgID string) []Runner
	AssignCapacity(r *poolmanager.CapacityRequest)
	ReleaseCapacity(r *poolmanager.CapacityRequest)
	Shutdown()
}

// Runner is the interface to invoke the execution of a function call on a specific runner
type Runner interface {
	TryExec(ctx context.Context, call Call) (bool, error)
	Close()
	Address() string
}
api/agent/nodepool/grpc/grpc_pool.go (new file, 414 lines)
@@ -0,0 +1,414 @@
package grpc

import (
	"context"
	"encoding/json"
	"io"
	"sync"
	"sync/atomic"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	"github.com/fnproject/fn/api/agent"
	pb "github.com/fnproject/fn/api/agent/grpc"
	"github.com/fnproject/fn/api/id"
	"github.com/fnproject/fn/grpcutil"
	"github.com/fnproject/fn/poolmanager"
	"github.com/sirupsen/logrus"
)

const (
	// CapacityUpdatePeriod defines how often the capacity updates are sent
	CapacityUpdatePeriod = 1 * time.Second
)

type gRPCNodePool struct {
	npm        poolmanager.NodePoolManager
	advertiser poolmanager.CapacityAdvertiser
	mx         sync.RWMutex
	lbg        map[string]*lbg // {lbgid -> *lbg}
	generator  secureRunnerFactory
	//TODO find a better place for this
	pki *pkiData

	shutdown chan struct{}
}

// TODO need to go in a better place
type pkiData struct {
	ca   string
	key  string
	cert string
}

type lbg struct {
	mx      sync.RWMutex
	id      string
	runners map[string]agent.Runner
	r_list  atomic.Value // We attempt to maintain the same order of runners as advertised by the NPM.
	// This is to preserve as reasonable behaviour as possible for the CH algorithm
	generator secureRunnerFactory
}

type nullRunner struct{}

func (n *nullRunner) TryExec(ctx context.Context, call agent.Call) (bool, error) {
	return false, nil
}

func (n *nullRunner) Close() {}

func (n *nullRunner) Address() string {
	return ""
}

var nullRunnerSingleton = new(nullRunner)

type gRPCRunner struct {
	// Need a WaitGroup of TryExec in flight
	wg      sync.WaitGroup
	address string
	conn    *grpc.ClientConn
	client  pb.RunnerProtocolClient
}

// allow factory to be overridden in tests
type secureRunnerFactory func(addr string, cert string, key string, ca string) (agent.Runner, error)

func secureGRPCRunnerFactory(addr string, cert string, key string, ca string) (agent.Runner, error) {
	p := &pkiData{
		cert: cert,
		key:  key,
		ca:   ca,
	}
	conn, client, err := runnerConnection(addr, p)
	if err != nil {
		return nil, err
	}

	return &gRPCRunner{
		address: addr,
		conn:    conn,
		client:  client,
	}, nil
}

func DefaultgRPCNodePool(npmAddress string, cert string, key string, ca string) agent.NodePool {
	npm := poolmanager.NewNodePoolManager(npmAddress, cert, key, ca)
	// TODO do we need to persist this ID in order to survive restart?
	lbID := id.New().String()
	advertiser := poolmanager.NewCapacityAdvertiser(npm, lbID, CapacityUpdatePeriod)
	return newgRPCNodePool(cert, key, ca, npm, advertiser, secureGRPCRunnerFactory)
}

func newgRPCNodePool(cert string, key string, ca string, npm poolmanager.NodePoolManager, advertiser poolmanager.CapacityAdvertiser, rf secureRunnerFactory) agent.NodePool {
	logrus.Info("Starting dynamic runner pool")
	p := &pkiData{
		ca:   ca,
		cert: cert,
		key:  key,
	}

	np := &gRPCNodePool{
		npm:        npm,
		advertiser: advertiser,
		lbg:        make(map[string]*lbg),
		generator:  rf,
		shutdown:   make(chan struct{}),
		pki:        p,
	}
	go np.maintenance()
	return np
}

func (np *gRPCNodePool) Runners(lbgID string) []agent.Runner {
	np.mx.RLock()
	lbg, ok := np.lbg[lbgID]
	np.mx.RUnlock()

	if !ok {
		np.mx.Lock()
		lbg, ok = np.lbg[lbgID]
		if !ok {
			lbg = newLBG(lbgID, np.generator)
			np.lbg[lbgID] = lbg
		}
		np.mx.Unlock()
	}

	return lbg.runnerList()
}
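`Runners` uses double-checked locking: the cheap read lock covers the common case, and the write lock is only taken when the group is missing, with a re-check after acquiring it because another goroutine may have created the group in the meantime. A minimal sketch of the same pattern (`pool` and `group` are illustrative names, not the repo's types):

```go
package main

import (
	"fmt"
	"sync"
)

type group struct{ id string }

// pool mirrors the double-checked locking in gRPCNodePool.Runners.
type pool struct {
	mx     sync.RWMutex
	groups map[string]*group
}

func (p *pool) get(id string) *group {
	p.mx.RLock()
	g, ok := p.groups[id]
	p.mx.RUnlock()

	if !ok {
		p.mx.Lock()
		// Re-check under the write lock: another goroutine may have
		// created the group while we were waiting.
		g, ok = p.groups[id]
		if !ok {
			g = &group{id: id}
			p.groups[id] = g
		}
		p.mx.Unlock()
	}
	return g
}

func main() {
	p := &pool{groups: make(map[string]*group)}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			p.get("default")
		}()
	}
	wg.Wait()
	fmt.Println(len(p.groups)) // 1
}
```

Without the re-check, two goroutines could both miss under the read lock and then each create a group, with the second overwriting the first.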
|
||||
|
||||
func (np *gRPCNodePool) Shutdown() {
|
||||
np.advertiser.Shutdown()
|
||||
np.npm.Shutdown()
|
||||
}
|
||||
|
||||
func (np *gRPCNodePool) AssignCapacity(r *poolmanager.CapacityRequest) {
|
||||
np.advertiser.AssignCapacity(r)
|
||||
|
||||
}
|
||||
func (np *gRPCNodePool) ReleaseCapacity(r *poolmanager.CapacityRequest) {
|
||||
np.advertiser.ReleaseCapacity(r)
|
||||
}
|
||||
|
||||
func (np *gRPCNodePool) maintenance() {
|
||||
ticker := time.NewTicker(500 * time.Millisecond)
|
||||
for {
|
||||
select {
|
||||
case <-np.shutdown:
|
||||
return
|
||||
case <-ticker.C:
|
||||
// Reload any LBGroup information from NPM (pull for the moment, shift to listening to a stream later)
|
||||
np.reloadLBGmembership()
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func newLBG(lbgID string, generator secureRunnerFactory) *lbg {
|
||||
lbg := &lbg{
|
||||
id: lbgID,
|
||||
runners: make(map[string]agent.Runner),
|
||||
r_list: atomic.Value{},
|
||||
generator: generator,
|
||||
}
|
||||
lbg.r_list.Store([]agent.Runner{})
|
||||
return lbg
|
||||
}
|
||||
|
||||
func (np *gRPCNodePool) reloadLBGmembership() {
|
||||
np.mx.RLock()
|
||||
lbgroups := np.lbg
|
||||
np.mx.RUnlock()
|
||||
for lbgID, lbg := range lbgroups {
|
||||
lbg.reloadMembers(lbgID, np.npm, np.pki)
|
||||
}
|
||||
}
|
||||
|
||||
func (lbg *lbg) runnerList() []agent.Runner {
|
||||
orig_runners := lbg.r_list.Load().([]agent.Runner)
|
||||
// XXX: Return a copy. If we required this to be immutably read by the caller, we could return the structure directly
|
||||
runners := make([]agent.Runner, len(orig_runners))
|
||||
copy(runners, orig_runners)
|
||||
return runners
|
||||
}
func (lbg *lbg) reloadMembers(lbgID string, npm poolmanager.NodePoolManager, p *pkiData) {
	runners, err := npm.GetRunners(lbgID)
	if err != nil {
		// Keep the current membership on a transient NPM error rather than draining every runner.
		logrus.Debug("Failed to get the list of runners from the node pool manager")
		return
	}
	lbg.mx.Lock()
	defer lbg.mx.Unlock()
	r_list := make([]agent.Runner, len(runners))
	seen := map[string]bool{} // Whether we've seen a particular runner in this update
	var errGenerator error
	for i, addr := range runners {
		r, ok := lbg.runners[addr]
		if !ok {
			logrus.WithField("runner_addr", addr).Debug("New runner to be added")
			r, errGenerator = lbg.generator(addr, p.cert, p.key, p.ca)
			if errGenerator != nil {
				logrus.WithField("runner_addr", addr).Debug("Creation of the new runner failed")
			} else {
				lbg.runners[addr] = r
			}
		}
		if errGenerator == nil {
			r_list[i] = r // Maintain the delivered order
		} else {
			// Some algorithms (like consistent hashing) work better if the i'th element
			// of r_list points to the same node on all LBs, so insert a placeholder
			// if we can't create the runner for some reason.
			r_list[i] = nullRunnerSingleton
		}

		seen[addr] = true
	}
	lbg.r_list.Store(r_list)

	// Remove any runners that we have not encountered
	for addr, r := range lbg.runners {
		if _, ok := seen[addr]; !ok {
			logrus.WithField("runner_addr", addr).Debug("Removing drained runner")
			delete(lbg.runners, addr)
			r.Close()
		}
	}
}
func (r *gRPCRunner) Close() {
	go func() {
		r.wg.Wait()
		r.conn.Close()
	}()
}

func runnerConnection(address string, pki *pkiData) (*grpc.ClientConn, pb.RunnerProtocolClient, error) {
	ctx := context.Background()

	var creds credentials.TransportCredentials
	if pki != nil {
		var err error
		creds, err = grpcutil.CreateCredentials(pki.cert, pki.key, pki.ca)
		if err != nil {
			logrus.WithError(err).Error("Unable to create credentials to connect to runner node")
			return nil, nil, err
		}
	}

	conn, err := grpcutil.DialWithBackoff(ctx, address, creds, grpc.DefaultBackoffConfig)
	if err != nil {
		logrus.WithError(err).Error("Unable to connect to runner node")
		return nil, nil, err
	}

	protocolClient := pb.NewRunnerProtocolClient(conn)
	logrus.WithField("runner_addr", address).Info("Connected to runner")

	return conn, protocolClient, nil
}

func (r *gRPCRunner) Address() string {
	return r.address
}

func (r *gRPCRunner) TryExec(ctx context.Context, call agent.Call) (bool, error) {
	logrus.WithField("runner_addr", r.address).Debug("Attempting to place call")
	r.wg.Add(1)
	defer r.wg.Done()

	// Get app and route information
	// Construct a models.Call with CONFIG in it already
	modelJSON, err := json.Marshal(call.Model())
	if err != nil {
		logrus.WithError(err).Error("Failed to encode model as JSON")
		// If we can't encode the model, no runner will ever be able to run this. Give up.
		return true, err
	}
	runnerConnection, err := r.client.Engage(ctx)
	if err != nil {
		logrus.WithError(err).Error("Unable to create client to runner node")
		// Try on the next runner
		return false, err
	}

	err = runnerConnection.Send(&pb.ClientMsg{Body: &pb.ClientMsg_Try{Try: &pb.TryCall{ModelsCallJson: string(modelJSON)}}})
	if err != nil {
		logrus.WithError(err).Error("Failed to send message to runner node")
		return false, err
	}
	msg, err := runnerConnection.Recv()
	if err != nil {
		logrus.WithError(err).Error("Failed to receive first message from runner node")
		return false, err
	}

	switch body := msg.Body.(type) {
	case *pb.RunnerMsg_Acknowledged:
		if !body.Acknowledged.Committed {
			logrus.Errorf("Runner didn't commit invocation request: %v", body.Acknowledged.Details)
			// Try the next runner
			return false, nil
		}
		logrus.Debug("Runner committed invocation request, sending data frames")
		done := make(chan error)
		go receiveFromRunner(runnerConnection, call, done)
		sendToRunner(call, runnerConnection)
		return true, <-done

	default:
		logrus.Errorf("Unhandled message type received from runner: %v\n", msg)
		return true, nil
	}
}

func sendToRunner(call agent.Call, protocolClient pb.RunnerProtocol_EngageClient) error {
	bodyReader, err := agent.RequestReader(&call)
	if err != nil {
		logrus.WithError(err).Error("Unable to get reader for request body")
		return err
	}
	writeBufferSize := 10 * 1024 // 10KB
	writeBuffer := make([]byte, writeBufferSize)
	for {
		n, err := bodyReader.Read(writeBuffer)
		logrus.Debugf("Sending %v bytes of body to the runner", n)

		if err == io.EOF {
			// Send only the n bytes actually read and mark the frame as the last one.
			err = protocolClient.Send(&pb.ClientMsg{
				Body: &pb.ClientMsg_Data{
					Data: &pb.DataFrame{
						Data: writeBuffer[:n],
						Eof:  true,
					},
				},
			})
			if err != nil {
				logrus.WithError(err).Error("Failed to send data frame with EOF to runner")
			}
			break
		}
		err = protocolClient.Send(&pb.ClientMsg{
			Body: &pb.ClientMsg_Data{
				Data: &pb.DataFrame{
					Data: writeBuffer[:n],
					Eof:  false,
				},
			},
		})
		if err != nil {
			logrus.WithError(err).Error("Failed to send data frame")
			return err
		}
	}
	return nil
}
func receiveFromRunner(protocolClient pb.RunnerProtocol_EngageClient, call agent.Call, done chan error) {
	w, err := agent.ResponseWriter(&call)
	if err != nil {
		logrus.WithError(err).Error("Unable to get response writer from call")
		done <- err
		return
	}

	for {
		msg, err := protocolClient.Recv()
		if err != nil {
			logrus.WithError(err).Error("Failed to receive message from runner")
			done <- err
			return
		}

		switch body := msg.Body.(type) {
		case *pb.RunnerMsg_ResultStart:
			switch meta := body.ResultStart.Meta.(type) {
			case *pb.CallResultStart_Http:
				for _, header := range meta.Http.Headers {
					(*w).Header().Set(header.Key, header.Value)
				}
			default:
				logrus.Errorf("Unhandled meta type in start message: %v", meta)
			}
		case *pb.RunnerMsg_Data:
			(*w).Write(body.Data.Data)
		case *pb.RunnerMsg_Finished:
			if body.Finished.Success {
				logrus.Infof("Call finished successfully: %v", body.Finished.Details)
			} else {
				logrus.Infof("Call finished unsuccessfully: %v", body.Finished.Details)
			}
			close(done)
			return
		default:
			logrus.Errorf("Unhandled message type from runner: %v", body)
		}
	}
}
214	api/agent/nodepool/grpc/lb_test.go	Normal file
@@ -0,0 +1,214 @@
package grpc

import (
	"context"
	"errors"
	"strconv"
	"sync"
	"testing"
	"time"

	"github.com/fnproject/fn/api/agent"
	"github.com/fnproject/fn/api/models"
	"github.com/fnproject/fn/poolmanager"
	model "github.com/fnproject/fn/poolmanager/grpc"
)

type mockRunner struct {
	wg       sync.WaitGroup
	sleep    time.Duration
	mtx      sync.Mutex
	maxCalls int32 // Max concurrent calls
	curCalls int32 // Current calls
	addr     string
}

type mockNodePoolManager struct {
	runners []string
}

type mockgRPCNodePool struct {
	npm       poolmanager.NodePoolManager
	lbg       map[string]*lbg
	generator secureRunnerFactory
	pki       *pkiData
}

func newMockgRPCNodePool(rf secureRunnerFactory, runners []string) *mockgRPCNodePool {
	npm := &mockNodePoolManager{runners: runners}

	return &mockgRPCNodePool{
		npm:       npm,
		lbg:       make(map[string]*lbg),
		generator: rf,
		pki:       &pkiData{},
	}
}

func (npm *mockNodePoolManager) AdvertiseCapacity(snapshots *model.CapacitySnapshotList) error {
	return nil
}

func (npm *mockNodePoolManager) GetRunners(lbgID string) ([]string, error) {
	return npm.runners, nil
}

func (npm *mockNodePoolManager) Shutdown() error {
	return nil
}

func NewMockRunnerFactory(sleep time.Duration, maxCalls int32) secureRunnerFactory {
	return func(addr string, cert string, key string, ca string) (agent.Runner, error) {
		return &mockRunner{
			sleep:    sleep,
			maxCalls: maxCalls,
			addr:     addr,
		}, nil
	}
}

func FaultyRunnerFactory() secureRunnerFactory {
	return func(addr string, cert string, key string, ca string) (agent.Runner, error) {
		return &mockRunner{
			addr: addr,
		}, errors.New("creation of new runner failed")
	}
}

func (r *mockRunner) checkAndIncrCalls() error {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	if r.curCalls >= r.maxCalls {
		return models.ErrCallTimeoutServerBusy // TODO: is that the correct error?
	}
	r.curCalls++
	return nil
}

func (r *mockRunner) decrCalls() {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	r.curCalls--
}

func (r *mockRunner) TryExec(ctx context.Context, call agent.Call) (bool, error) {
	err := r.checkAndIncrCalls()
	if err != nil {
		return false, err
	}
	defer r.decrCalls()

	r.wg.Add(1)
	defer r.wg.Done()

	time.Sleep(r.sleep)

	w, err := agent.ResponseWriter(&call)
	if err != nil {
		return true, err
	}
	buf := []byte("OK")
	(*w).Header().Set("Content-Type", "text/plain")
	(*w).Header().Set("Content-Length", strconv.Itoa(len(buf)))
	(*w).Write(buf)

	return true, nil
}

func (r *mockRunner) Close() {
	go func() {
		r.wg.Wait()
	}()
}

func (r *mockRunner) Address() string {
	return ""
}

func setupMockNodePool(lbgID string, expectedRunners []string) (*mockgRPCNodePool, *lbg) {
	rf := NewMockRunnerFactory(1*time.Millisecond, 1)
	lb := newLBG(lbgID, rf)

	np := newMockgRPCNodePool(rf, expectedRunners)
	np.lbg[lbgID] = lb
	return np, lb
}

func checkRunners(t *testing.T, expectedRunners []string, actualRunners map[string]agent.Runner, ordList []agent.Runner) {
	if len(expectedRunners) != len(actualRunners) {
		t.Errorf("List of runners is wrong, expected: %d got: %d", len(expectedRunners), len(actualRunners))
	}
	for i, r := range expectedRunners {
		_, ok := actualRunners[r]
		if !ok {
			t.Errorf("Advertised runner %s not found in the list of runners", r)
		}
		ordR := ordList[i].(*mockRunner)
		if ordR.addr != r {
			t.Error("Ordered list is not in sync with the advertised list")
		}
	}
}

func TestReloadMembersNoRunners(t *testing.T) {
	lbgID := "lb-test"
	// Empty list, no runners available
	np, lb := setupMockNodePool(lbgID, make([]string, 0))

	np.lbg[lbgID].reloadMembers(lbgID, np.npm, np.pki)
	expectedRunners := []string{}
	ordList := lb.r_list.Load().([]agent.Runner)
	checkRunners(t, expectedRunners, lb.runners, ordList)
}

func TestReloadMembersNewRunners(t *testing.T) {
	lbgID := "lb-test"
	expectedRunners := []string{"171.16.0.1", "171.16.0.2"}
	np, lb := setupMockNodePool(lbgID, expectedRunners)

	np.lbg[lbgID].reloadMembers(lbgID, np.npm, np.pki)
	ordList := lb.r_list.Load().([]agent.Runner)
	checkRunners(t, expectedRunners, lb.runners, ordList)
}

func TestReloadMembersRemoveRunners(t *testing.T) {
	lbgID := "lb-test"
	expectedRunners := []string{"171.16.0.1", "171.16.0.3"}
	np, lb := setupMockNodePool(lbgID, expectedRunners)

	// Actual runners before the update
	actualRunners := []string{"171.16.0.1", "171.16.0.2", "171.16.0.19"}
	for _, v := range actualRunners {
		r, err := lb.generator(v, np.pki.cert, np.pki.key, np.pki.ca)
		if err != nil {
			t.Error("Failed to create new runner")
		}
		lb.runners[v] = r
	}

	if len(lb.runners) != len(actualRunners) {
		t.Errorf("Failed to load list of runners")
	}

	np.lbg[lbgID].reloadMembers(lbgID, np.npm, np.pki)
	ordList := lb.r_list.Load().([]agent.Runner)
	checkRunners(t, expectedRunners, lb.runners, ordList)
}

func TestReloadMembersFailToCreateNewRunners(t *testing.T) {
	lbgID := "lb-test"
	rf := FaultyRunnerFactory()
	lb := newLBG(lbgID, rf)
	np := newMockgRPCNodePool(rf, []string{"171.19.0.1"})
	np.lbg[lbgID] = lb
	np.lbg[lbgID].reloadMembers(lbgID, np.npm, np.pki)
	actualRunners := lb.runners
	if len(actualRunners) != 0 {
		t.Errorf("List of runners should be empty")
	}
	ordList := lb.r_list.Load().([]agent.Runner)
	if ordList[0] != nullRunnerSingleton {
		t.Errorf("Ordered list should have a nullRunner")
	}
}
105	api/agent/nodepool/grpc/static_pool.go	Normal file
@@ -0,0 +1,105 @@
package grpc

import (
	"sync"

	"github.com/sirupsen/logrus"

	"github.com/fnproject/fn/api/agent"
	"github.com/fnproject/fn/poolmanager"
)

// insecureRunnerFactory allows the factory to be overridden in tests.
type insecureRunnerFactory func(addr string) (agent.Runner, error)

func insecureGRPCRunnerFactory(addr string) (agent.Runner, error) {
	conn, client, err := runnerConnection(addr, nil)
	if err != nil {
		return nil, err
	}

	return &gRPCRunner{
		address: addr,
		conn:    conn,
		client:  client,
	}, nil
}

// staticNodePool manages a single set of runners, ignoring LB groups.
type staticNodePool struct {
	generator insecureRunnerFactory
	rMtx      *sync.RWMutex
	runners   []agent.Runner
}

// DefaultStaticNodePool returns a NodePool consisting of a static set of runners
func DefaultStaticNodePool(runnerAddresses []string) agent.NodePool {
	return newStaticNodePool(runnerAddresses, insecureGRPCRunnerFactory)
}

// newStaticNodePool returns a NodePool consisting of a static set of runners
func newStaticNodePool(runnerAddresses []string, runnerFactory insecureRunnerFactory) agent.NodePool {
	logrus.WithField("runners", runnerAddresses).Info("Starting static runner pool")
	var runners []agent.Runner
	for _, addr := range runnerAddresses {
		r, err := runnerFactory(addr)
		if err != nil {
			logrus.WithField("runner_addr", addr).Warn("Invalid runner")
			continue
		}
		logrus.WithField("runner_addr", addr).Debug("Adding runner to pool")
		runners = append(runners, r)
	}
	return &staticNodePool{
		rMtx:      &sync.RWMutex{},
		runners:   runners,
		generator: runnerFactory,
	}
}

func (np *staticNodePool) Runners(lbgID string) []agent.Runner {
	np.rMtx.RLock()
	defer np.rMtx.RUnlock()

	r := make([]agent.Runner, len(np.runners))
	copy(r, np.runners)
	return r
}

func (np *staticNodePool) AddRunner(address string) error {
	np.rMtx.Lock()
	defer np.rMtx.Unlock()

	r, err := np.generator(address)
	if err != nil {
		logrus.WithField("runner_addr", address).Warn("Failed to add runner")
		return err
	}
	np.runners = append(np.runners, r)
	return nil
}

func (np *staticNodePool) RemoveRunner(address string) {
	np.rMtx.Lock()
	defer np.rMtx.Unlock()

	for i, r := range np.runners {
		if r.Address() == address {
			// delete the runner from the list
			np.runners = append(np.runners[:i], np.runners[i+1:]...)
			return
		}
	}
}

func (np *staticNodePool) AssignCapacity(r *poolmanager.CapacityRequest) {
	// NO-OP
}

func (np *staticNodePool) ReleaseCapacity(r *poolmanager.CapacityRequest) {
	// NO-OP
}

func (np *staticNodePool) Shutdown() {
	// NO-OP
}
79	api/agent/nodepool/grpc/static_pool_test.go	Normal file
@@ -0,0 +1,79 @@
package grpc

import (
	"context"
	"testing"

	"github.com/fnproject/fn/api/agent"
	"github.com/fnproject/fn/poolmanager"
)

func setupStaticPool(runners []string) agent.NodePool {
	return newStaticNodePool(runners, mockRunnerFactory)
}

type mockStaticRunner struct {
	address string
}

func (r *mockStaticRunner) TryExec(ctx context.Context, call agent.Call) (bool, error) {
	return true, nil
}

func (r *mockStaticRunner) Close() {
}

func (r *mockStaticRunner) Address() string {
	return r.address
}

func mockRunnerFactory(addr string) (agent.Runner, error) {
	return &mockStaticRunner{address: addr}, nil
}

func TestNewStaticPool(t *testing.T) {
	addrs := []string{"127.0.0.1:8080", "127.0.0.1:8081"}
	np := setupStaticPool(addrs)

	if len(np.Runners("foo")) != len(addrs) {
		t.Fatalf("Invalid number of runners %v", len(np.Runners("foo")))
	}
}

func TestCapacityForStaticPool(t *testing.T) {
	addrs := []string{"127.0.0.1:8080", "127.0.0.1:8081"}
	np := setupStaticPool(addrs)

	cr := &poolmanager.CapacityRequest{TotalMemoryMb: 100, LBGroupID: "foo"}
	np.AssignCapacity(cr)
	np.ReleaseCapacity(cr)

	if len(np.Runners("foo")) != len(addrs) {
		t.Fatalf("Invalid number of runners %v", len(np.Runners("foo")))
	}
}

func TestAddNodeToPool(t *testing.T) {
	addrs := []string{"127.0.0.1:8080", "127.0.0.1:8081"}
	np := setupStaticPool(addrs).(*staticNodePool)
	np.AddRunner("127.0.0.1:8082")

	if len(np.Runners("foo")) != len(addrs)+1 {
		t.Fatalf("Invalid number of runners %v", len(np.Runners("foo")))
	}
}

func TestRemoveNodeFromPool(t *testing.T) {
	addrs := []string{"127.0.0.1:8080", "127.0.0.1:8081"}
	np := setupStaticPool(addrs).(*staticNodePool)
	np.RemoveRunner("127.0.0.1:8081")

	if len(np.Runners("foo")) != len(addrs)-1 {
		t.Fatalf("Invalid number of runners %v", len(np.Runners("foo")))
	}

	// Removing an address that is no longer in the pool should be a no-op
	np.RemoveRunner("127.0.0.1:8081")
	if len(np.Runners("foo")) != len(addrs)-1 {
		t.Fatalf("Invalid number of runners %v", len(np.Runners("foo")))
	}
}
572	api/agent/pure_runner.go	Normal file
@@ -0,0 +1,572 @@
package agent

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net"
	"net/http"
	"sync"
	"sync/atomic"
	"time"

	runner "github.com/fnproject/fn/api/agent/grpc"
	"github.com/fnproject/fn/api/models"
	"github.com/go-openapi/strfmt"
	"github.com/golang/protobuf/ptypes/empty"
	"github.com/sirupsen/logrus"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/peer"
)

// callHandle represents the state of the call as handled by the pure runner; additionally it implements
// http.ResponseWriter so that it can be used for streaming the output back.
type callHandle struct {
	engagement runner.RunnerProtocol_EngageServer
	c          *call // the agent's version of call
	input      io.WriteCloser
	started    bool
	done       chan error // to synchronize
	// As the state can be set and checked by both goroutines handling this state, we need a mutex.
	stateMutex sync.Mutex
	// Timings, for metrics:
	receivedTime  strfmt.DateTime // When was the call received?
	allocatedTime strfmt.DateTime // When did we finish allocating capacity?
	// Last communication error on the stream (if any). This basically acts as a cancellation flag too.
	streamError error
	// For implementing http.ResponseWriter:
	outHeaders    http.Header
	outStatus     int
	headerWritten bool
}

func (ch *callHandle) Header() http.Header {
	return ch.outHeaders
}

func (ch *callHandle) WriteHeader(status int) {
	ch.outStatus = status
	ch.commitHeaders()
}

func (ch *callHandle) commitHeaders() error {
	if ch.headerWritten {
		return nil
	}
	ch.headerWritten = true
	logrus.Debugf("Committing call result with status %d", ch.outStatus)

	var outHeaders []*runner.HttpHeader

	for h, vals := range ch.outHeaders {
		for _, v := range vals {
			outHeaders = append(outHeaders, &runner.HttpHeader{
				Key:   h,
				Value: v,
			})
		}
	}

	// Only write if we are not in an error situation. If we cause a stream error, then record that but don't cancel
	// the call: basically just blackhole the output and return the write error to cause Submit to fail properly.
	ch.stateMutex.Lock()
	defer ch.stateMutex.Unlock()
	err := ch.streamError
	if err != nil {
		return fmt.Errorf("bailing out because of communication error: %v", ch.streamError)
	}

	logrus.Debug("Sending call result start message")
	err = ch.engagement.Send(&runner.RunnerMsg{
		Body: &runner.RunnerMsg_ResultStart{
			ResultStart: &runner.CallResultStart{
				Meta: &runner.CallResultStart_Http{
					Http: &runner.HttpRespMeta{
						Headers:    outHeaders,
						StatusCode: int32(ch.outStatus),
					},
				},
			},
		},
	})
	if err != nil {
		logrus.WithError(err).Error("Error sending call result")
		ch.streamError = err
		return err
	}
	logrus.Debug("Sent call result message")
	return nil
}

func (ch *callHandle) Write(data []byte) (int, error) {
	err := ch.commitHeaders()
	if err != nil {
		return 0, fmt.Errorf("error sending data: %v", err)
	}

	// Only write if we are not in an error situation. If we cause a stream error, then record that but don't cancel
	// the call: basically just blackhole the output and return the write error to cause Submit to fail properly.
	ch.stateMutex.Lock()
	defer ch.stateMutex.Unlock()
	err = ch.streamError
	if err != nil {
		return 0, fmt.Errorf("bailing out because of communication error: %v", ch.streamError)
	}

	logrus.Debugf("Sending call response data %d bytes long", len(data))
	err = ch.engagement.Send(&runner.RunnerMsg{
		Body: &runner.RunnerMsg_Data{
			Data: &runner.DataFrame{
				Data: data,
				Eof:  false,
			},
		},
	})
	if err != nil {
		ch.streamError = err
		return 0, fmt.Errorf("error sending data: %v", err)
	}
	return len(data), nil
}

func (ch *callHandle) Close() error {
	err := ch.commitHeaders()
	if err != nil {
		return fmt.Errorf("error sending close frame: %v", err)
	}

	// Only write if we are not in an error situation. If we cause a stream error, then record that but don't cancel
	// the call: basically just blackhole the output and return the write error to cause the caller to fail properly.
	ch.stateMutex.Lock()
	defer ch.stateMutex.Unlock()
	err = ch.streamError
	if err != nil {
		return fmt.Errorf("bailing out because of communication error: %v", ch.streamError)
	}
	logrus.Debug("Sending call response data end")
	err = ch.engagement.Send(&runner.RunnerMsg{
		Body: &runner.RunnerMsg_Data{
			Data: &runner.DataFrame{
				Eof: true,
			},
		},
	})

	if err != nil {
		return fmt.Errorf("error sending close frame: %v", err)
	}
	return nil
}

// cancel implements the logic for cancelling the execution of a call based on the state in the handle.
func (ch *callHandle) cancel(ctx context.Context, err error) {
	ch.stateMutex.Lock()
	defer ch.stateMutex.Unlock()

	// Do not double-cancel.
	if ch.streamError != nil {
		return
	}

	// First, record that there has been an error.
	ch.streamError = err
	// The caller may have died or disconnected. The behaviour here depends on the state of the call.
	// If the call was placed and is running, we need to handle it...
	if ch.c != nil {
		// If we've actually started the call, we're in the middle of an execution with i/o going back and forth.
		// This is hard to stop. Side effects can be occurring at any point. However, at least we should stop
		// the i/o flow. Recording the stream error in the handle should have stopped the output, but we also
		// want to stop any input being sent through, so we close the input stream and let the function
		// probably crash out. If it doesn't crash out, well, it means the function doesn't handle i/o errors
		// properly and it will hang there until the timeout, then it'll be killed properly by the timeout
		// handling in Submit.
		if ch.started {
			ch.input.Close()
		}
	}
}

type pureRunnerCapacityManager struct {
	totalCapacityUnits     uint64
	committedCapacityUnits uint64
	mtx                    sync.Mutex
}

type capacityDeallocator func()

func newPureRunnerCapacityManager(units uint64) pureRunnerCapacityManager {
	return pureRunnerCapacityManager{
		totalCapacityUnits:     units,
		committedCapacityUnits: 0,
	}
}

func (prcm *pureRunnerCapacityManager) checkAndReserveCapacity(units uint64) error {
	prcm.mtx.Lock()
	defer prcm.mtx.Unlock()
	if prcm.committedCapacityUnits+units < prcm.totalCapacityUnits {
		prcm.committedCapacityUnits = prcm.committedCapacityUnits + units
		return nil
	}
	return models.ErrCallTimeoutServerBusy
}

func (prcm *pureRunnerCapacityManager) releaseCapacity(units uint64) {
	prcm.mtx.Lock()
	defer prcm.mtx.Unlock()
	if units <= prcm.committedCapacityUnits {
		prcm.committedCapacityUnits = prcm.committedCapacityUnits - units
		return
	}
	panic("Fatal error in pure runner capacity calculation, getting to sub-zero capacity")
}
|
||||
|
||||
type pureRunner struct {
|
||||
gRPCServer *grpc.Server
|
||||
listen string
|
||||
a Agent
|
||||
inflight int32
|
||||
capacity pureRunnerCapacityManager
|
||||
}
|
||||
|
||||
func (pr *pureRunner) ensureFunctionIsRunning(state *callHandle) {
|
||||
// Only start it once!
|
||||
state.stateMutex.Lock()
|
||||
defer state.stateMutex.Unlock()
|
||||
if !state.started {
|
||||
state.started = true
|
||||
go func() {
|
||||
err := pr.a.Submit(state.c)
|
||||
if err != nil {
|
||||
// In this case the function has failed for a legitimate reason. We send a call failed message if we
|
||||
// can. If there's a streaming error doing that then we are basically in the "double exception" case
|
||||
// and who knows what's best to do. Submit has already finished so we don't need to cancel... but at
|
||||
// least we should set streamError if it's not set.
|
||||
state.stateMutex.Lock()
|
||||
defer state.stateMutex.Unlock()
|
||||
if state.streamError == nil {
|
||||
err2 := state.engagement.Send(&runner.RunnerMsg{
|
||||
Body: &runner.RunnerMsg_Finished{Finished: &runner.CallFinished{
|
||||
Success: false,
|
||||
Details: fmt.Sprintf("%v", err),
|
||||
}}})
|
||||
if err2 != nil {
|
||||
state.streamError = err2
|
||||
}
|
||||
}
|
||||
state.done <- err
|
||||
return
|
||||
}
|
||||
// First close the writer, then send the call finished message
|
||||
err = state.Close()
|
||||
if err != nil {
|
||||
// If we fail to close the writer we need to communicate back that the function has failed; if there's
|
||||
// a streaming error doing that then we are basically in the "double exception" case and who knows
|
||||
// what's best to do. Submit has already finished so we don't need to cancel... but at least we should
|
||||
// set streamError if it's not set.
|
||||
state.stateMutex.Lock()
|
||||
defer state.stateMutex.Unlock()
|
||||
if state.streamError == nil {
|
||||
err2 := state.engagement.Send(&runner.RunnerMsg{
|
||||
Body: &runner.RunnerMsg_Finished{Finished: &runner.CallFinished{
|
||||
Success: false,
|
||||
Details: fmt.Sprintf("%v", err),
|
||||
}}})
|
||||
if err2 != nil {
|
||||
state.streamError = err2
|
||||
}
|
||||
}
|
||||
state.done <- err
|
||||
return
|
||||
}
|
||||
// At this point everything should have worked. Send a successful message... and if that runs afoul of a
|
||||
// stream error, well, we're in a bit of trouble. Everything has finished, so there is nothing to cancel
|
||||
// and we just give up, but at least we set streamError.
|
||||
state.stateMutex.Lock()
|
||||
defer state.stateMutex.Unlock()
|
||||
if state.streamError == nil {
|
||||
err2 := state.engagement.Send(&runner.RunnerMsg{
|
||||
Body: &runner.RunnerMsg_Finished{Finished: &runner.CallFinished{
|
||||
Success: true,
|
||||
Details: state.c.Model().ID,
|
||||
}}})
|
||||
if err2 != nil {
|
||||
state.streamError = err2
|
||||
state.done <- err2
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
state.done <- nil
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
func (pr *pureRunner) handleData(ctx context.Context, data *runner.DataFrame, state *callHandle) error {
|
||||
pr.ensureFunctionIsRunning(state)
|
||||
|
||||
// Only push the input if we're in a non-error situation
|
||||
state.stateMutex.Lock()
|
||||
defer state.stateMutex.Unlock()
|
||||
if state.streamError == nil {
|
||||
if len(data.Data) > 0 {
|
||||
_, err := state.input.Write(data.Data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if data.Eof {
|
||||
state.input.Close()
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (pr *pureRunner) handleTryCall(ctx context.Context, tc *runner.TryCall, state *callHandle) (capacityDeallocator, error) {
	state.receivedTime = strfmt.DateTime(time.Now())
	var c models.Call
	err := json.Unmarshal([]byte(tc.ModelsCallJson), &c)
	if err != nil {
		return func() {}, err
	}

	// Capacity check first
	err = pr.capacity.checkAndReserveCapacity(c.Memory)
	if err != nil {
		return func() {}, err
	}

	// Proceed!
	var w http.ResponseWriter = state
	inR, inW := io.Pipe()
	agentCall, err := pr.a.GetCall(FromModelAndInput(&c, inR), WithWriter(w))
	if err != nil {
		return func() { pr.capacity.releaseCapacity(c.Memory) }, err
	}
	state.c = agentCall.(*call)
	state.input = inW
	state.allocatedTime = strfmt.DateTime(time.Now())

	return func() { pr.capacity.releaseCapacity(c.Memory) }, nil
}

// Engage handles a client engagement.
func (pr *pureRunner) Engage(engagement runner.RunnerProtocol_EngageServer) error {
	// Keep lightweight tabs on what this runner is doing: for draindown tests
	atomic.AddInt32(&pr.inflight, 1)
	defer atomic.AddInt32(&pr.inflight, -1)

	pv, ok := peer.FromContext(engagement.Context())
	logrus.Debug("Starting engagement")
	if ok {
		logrus.Debug("Peer is ", pv)
	}
	md, ok := metadata.FromIncomingContext(engagement.Context())
	if ok {
		logrus.Debug("MD is ", md)
	}

	var state = callHandle{
		engagement:    engagement,
		c:             nil,
		input:         nil,
		started:       false,
		done:          make(chan error),
		streamError:   nil,
		outHeaders:    make(http.Header),
		outStatus:     200,
		headerWritten: false,
	}

	grpc.EnableTracing = false
	logrus.Debug("Entering engagement handler")

	msg, err := engagement.Recv()
	if err != nil {
		// In this case the connection has dropped before we've even started.
		return err
	}
	switch body := msg.Body.(type) {
	case *runner.ClientMsg_Try:
		dealloc, err := pr.handleTryCall(engagement.Context(), body.Try, &state)
		defer dealloc()
		// At the TryCall stage only one thread is running and nothing has happened yet, so there should
		// not be a streamError. We can handle `err` by sending a message back. If sending that message
		// itself causes a stream error, we are in a "double exception" case and might as well cancel the
		// call with the original error, so the error from Send can be ignored.
		if err != nil {
			_ = engagement.Send(&runner.RunnerMsg{
				Body: &runner.RunnerMsg_Acknowledged{Acknowledged: &runner.CallAcknowledged{
					Committed: false,
					Details:   fmt.Sprintf("%v", err),
				}}})
			state.cancel(engagement.Context(), err)
			return err
		}

		// If we succeed in creating the call but get a stream error sending a message back, we must
		// cancel the call because we've probably lost the connection.
		err = engagement.Send(&runner.RunnerMsg{
			Body: &runner.RunnerMsg_Acknowledged{Acknowledged: &runner.CallAcknowledged{
				Committed:             true,
				Details:               state.c.Model().ID,
				SlotAllocationLatency: time.Time(state.allocatedTime).Sub(time.Time(state.receivedTime)).String(),
			}}})
		if err != nil {
			state.cancel(engagement.Context(), err)
			return err
		}

		// At this point we start handling the data frames being pushed to us.
		foundEOF := false
		for !foundEOF {
			msg, err := engagement.Recv()
			if err != nil {
				// The connection has dropped or something bad is happening. We know we can't even
				// send a message back. Cancel the call; all bets are off.
				state.cancel(engagement.Context(), err)
				return err
			}

			switch body := msg.Body.(type) {
			case *runner.ClientMsg_Data:
				err := pr.handleData(engagement.Context(), body.Data, &state)
				if err != nil {
					// If this happens we couldn't write into the input. The state of the function is
					// inconsistent, so we need to cancel. We also need to communicate back that the
					// function has failed; that could also run afoul of a stream error, but at that
					// point we don't care, just cancel the call with the original error.
					_ = state.engagement.Send(&runner.RunnerMsg{
						Body: &runner.RunnerMsg_Finished{Finished: &runner.CallFinished{
							Success: false,
							Details: fmt.Sprintf("%v", err),
						}}})
					state.cancel(engagement.Context(), err)
					return err
				}
				// Break the loop if this was the last input data frame, i.e. eof is on.
				if body.Data.Eof {
					foundEOF = true
				}
			default:
				err := errors.New("protocol failure in communication with function runner")
				// This is essentially a panic. Try to communicate back that the call has failed and
				// bail out; that could also run afoul of a stream error, but at that point we don't
				// care, just cancel the call with the catastrophic error.
				_ = state.engagement.Send(&runner.RunnerMsg{
					Body: &runner.RunnerMsg_Finished{Finished: &runner.CallFinished{
						Success: false,
						Details: fmt.Sprintf("%v", err),
					}}})
				state.cancel(engagement.Context(), err)
				return err
			}
		}

		// Synchronize with the function-running goroutine finishing.
		select {
		case <-state.done:
		case <-engagement.Context().Done():
			return engagement.Context().Err()
		}

	default:
		// Protocol error. This should not happen.
		return errors.New("protocol failure in communication with function runner")
	}

	return nil
}

func (pr *pureRunner) Status(ctx context.Context, _ *empty.Empty) (*runner.RunnerStatus, error) {
	return &runner.RunnerStatus{
		Active: atomic.LoadInt32(&pr.inflight),
	}, nil
}

func (pr *pureRunner) Start() error {
	logrus.Info("Pure Runner listening on ", pr.listen)
	lis, err := net.Listen("tcp", pr.listen)
	if err != nil {
		return fmt.Errorf("could not listen on %s: %s", pr.listen, err)
	}

	if err := pr.gRPCServer.Serve(lis); err != nil {
		return fmt.Errorf("grpc serve error: %s", err)
	}
	return nil
}

func CreatePureRunner(addr string, a Agent, cert string, key string, ca string) (*pureRunner, error) {
	if cert != "" && key != "" && ca != "" {
		c, err := creds(cert, key, ca)
		if err != nil {
			logrus.WithField("runner_addr", addr).Warn("Failed to create credentials!")
			return nil, err
		}
		return createPureRunner(addr, a, c)
	}

	logrus.Warn("Running pure runner in insecure mode!")
	return createPureRunner(addr, a, nil)
}

func creds(cert string, key string, ca string) (credentials.TransportCredentials, error) {
	// Load the certificates from disk
	certificate, err := tls.LoadX509KeyPair(cert, key)
	if err != nil {
		return nil, fmt.Errorf("could not load server key pair: %s", err)
	}

	// Create a certificate pool from the certificate authority
	certPool := x509.NewCertPool()
	authority, err := ioutil.ReadFile(ca)
	if err != nil {
		return nil, fmt.Errorf("could not read CA certificate: %s", err)
	}

	if ok := certPool.AppendCertsFromPEM(authority); !ok {
		return nil, errors.New("failed to append client certs")
	}

	return credentials.NewTLS(&tls.Config{
		ClientAuth:   tls.RequireAndVerifyClientCert,
		Certificates: []tls.Certificate{certificate},
		ClientCAs:    certPool,
	}), nil
}

const megabyte uint64 = 1024 * 1024

func createPureRunner(addr string, a Agent, creds credentials.TransportCredentials) (*pureRunner, error) {
	var srv *grpc.Server
	if creds != nil {
		srv = grpc.NewServer(grpc.Creds(creds))
	} else {
		srv = grpc.NewServer()
	}
	memUnits := getAvailableMemoryUnits()
	pr := &pureRunner{
		gRPCServer: srv,
		listen:     addr,
		a:          a,
		capacity:   newPureRunnerCapacityManager(memUnits),
	}

	runner.RegisterRunnerProtocolServer(srv, pr)
	return pr, nil
}

func getAvailableMemoryUnits() uint64 {
	// Reuses the resource tracker to probe total memory; a bit of a hack.
	// TODO: refactor the OS-specific get-memory funcs out of resourceTracker.
	throwawayRT := NewResourceTracker().(*resourceTracker)
	return throwawayRT.ramAsyncTotal
}