Files
fn-serverless/api/agent/grpc/runner.pb.go
Gerardo Viedma 8af57da7b2 Support load-balanced runner groups for multitenant compute isolation (#814)
* Initial stab at the protocol

* initial protocol sketch for node pool manager

* Added http header frame as a message

* Force the use of WithAgent variants when creating a server

* adds grpc models for node pool manager plus go deps

* Naming things is really hard

* Merge (and optionally purge) details received by the NPM

* WIP: starting to add the runner-side functionality of the new data plane

* WIP: Basic startup of grpc server for pure runner. Needs proper certs.

* Go fmt

* Initial agent for LB nodes.

* Agent implementation for LB nodes.

* Pass keys and certs to LB node agent.

* Remove accidentally left reference to env var.

* Add env variables for certificate files

* stub out the capacity and group membership server channels

* implement server-side runner manager service

* removes unused variable

* fixes build error

* splits up GetCall and GetLBGroupId

* Change LB node agent to use TLS connection.

* Encode call model as JSON to send to runner node.
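As the commit above describes, the call model travels to the runner as a JSON string inside the TryCall message's `models_call_json` field rather than as structured protobuf fields. A minimal sketch of that encoding step, using a hypothetical pared-down `callModel` (the real `models.Call` carries many more fields):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// callModel is a hypothetical stand-in for the full models.Call struct.
type callModel struct {
	ID      string `json:"id"`
	AppName string `json:"app_name"`
	Path    string `json:"path"`
}

// tryCall mirrors the generated TryCall message: the call model is carried
// as an opaque JSON string, not as individual protobuf fields.
type tryCall struct {
	ModelsCallJson string
}

// encodeCall marshals the call model to JSON and wraps it for transport.
func encodeCall(c *callModel) (*tryCall, error) {
	b, err := json.Marshal(c)
	if err != nil {
		return nil, err
	}
	return &tryCall{ModelsCallJson: string(b)}, nil
}

func main() {
	tc, err := encodeCall(&callModel{ID: "c1", AppName: "myapp", Path: "/hello"})
	if err != nil {
		panic(err)
	}
	fmt.Println(tc.ModelsCallJson)
}
```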

* Use hybrid client in LB node agent.

This should provide access to get app and route information for the call
from an API node.

* More error handling on the pure runner side

* Tentative fix for GetCall problem: set deadlines correctly when reserving slot

* Connect loop for LB agent to runner nodes.

* Extract runner connection function in LB agent.

* drops committed capacity counts

* Bugfix - end state tracker only in submit

* Do logs properly

* adds first pass of tracking capacity metrics in agent

* made memory capacity metric uint64

* made memory capacity metric uint64


* removes use of old capacity field

* adds remove capacity call

* merges overwritten reconnect logic

* First pass of a NPM

Provide a service that talks to a (simulated) CP.

- Receive incoming capacity assertions from LBs for LBGs
- expire LB requests after a short period
- ask the CP to add runners to a LBG
- note runner set changes and readvertise
- scale down by marking runners as "draining"
- shut off draining runners after some cool-down period

* add capacity update on schedule

* Send periodic capacity metrics

Sending capacity metrics to the node pool manager

* splits grpc and api interfaces for capacity manager

* failure to advertise capacity shouldn't panic

* Add some instructions for starting DP/CP parts.

* Create the poolmanager server with TLS

* Use logrus

* Get npm compiling with cert fixups.

* Fix: pure runner should not start async processing

* brings runner, nulb and npm together

* Add field to acknowledgment to record slot allocation latency; fix a bug too

* iterating on pool manager locking issue

* raises timeout of placement retry loop

* Fix up NPM

Improve logging

Ensure that channels etc. are actually initialised in the structure
creation!

* Update the docs - runners GRPC port is 9120

* Bugfix: return runner pool accurately.

* Double locking

* Note purges as LBs stop talking to us

* Get the purging of old LBs working.

* Tweak: on restart, load runner set before making scaling decisions.

* more agent synchronization improvements

* Deal with the CP pulling out active hosts from under us.

* lock at lbgroup level

* Send request and receive response from runner.

* Add capacity check right before slot reservation

* Pass the full Call into the receive loop.

* Wait for the data from the runner before finishing

* force runner list refresh every time

* Don't init db and mq for pure runners

* adds shutdown of npm

* fixes broken log line

* Extract an interface for the Predictor used by the NPM

* purge drained connections from npm

* Refactor of the LB agent into the agent package

* removes capacitytest wip

* Fix undefined err issue

* updating README for poolmanager set up

* use retrying dial for lb to npm connections

* Rename lb_calls to lb_agent now that all functionality is there

* Use the right deadline and errors in LBAgent

* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping

* abstracting gRPCNodePool

* Make stream error flag per-call rather than global; otherwise the whole runner is damaged by one call dropping

* Add some init checks for LB and pure runner nodes

* adding some useful debug

* Fix default db and mq for lb node

* removes unreachable code, fixes typo

* Use datastore as logstore in API nodes.

This fixes a bug caused by trying to insert logs into a nil logstore. It
was nil because it wasn't being set for API nodes.

* creates placement abstraction and moves capacity APIs to NodePool

* removed TODO, added logging

* Dial reconnections for LB <-> runners

LB gRPC connections to runners are established using a backoff strategy
on reconnection; this keeps the LB up even if one of the runners goes
away, and reconnects to it as soon as it is back.

* Add a status call to the Runner protocol

Stub at the moment. To be used for things like draindown, health checks.

* Remove comment.

* makes assign/release capacity lockless

* Fix hanging issue in lb agent when connections drop

* Add the CH hash from fnlb

Select this with FN_PLACER=ch when launching the LB.

* small improvement for locking on reloadLBGmembership

* Stabilise the list of Runners returned by NodePool

The NodePoolManager makes some attempt to keep the advertised list of runner nodes as
stable as possible. Let's preserve this effort on the client side: the main point is
to keep the same runner at the same index in the []Runner returned by
NodePool.Runners(lbgid); the CH algorithm likes it when this is the case.
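One way to merge a fresh advertisement while disturbing consistent-hash placement as little as possible is to keep surviving runners in their previous relative order and append newcomers at the end. This is a sketch of the intent, not the actual NodePool code:

```go
package main

import "fmt"

// stableMerge rebuilds the runner list from a fresh advertisement, keeping
// each surviving runner in its previous relative position so an index-keyed
// CH placer keeps mapping the same calls to the same runners.
func stableMerge(old, fresh []string) []string {
	freshSet := make(map[string]bool, len(fresh))
	for _, r := range fresh {
		freshSet[r] = true
	}
	kept := make(map[string]bool)
	out := make([]string, 0, len(fresh))
	for _, r := range old {
		if freshSet[r] {
			out = append(out, r) // survivor keeps its relative position
			kept[r] = true
		}
	}
	for _, r := range fresh {
		if !kept[r] {
			out = append(out, r) // newcomers go at the end
		}
	}
	return out
}

func main() {
	// r2 left, r4 joined; r1 and r3 keep their relative order.
	fmt.Println(stableMerge([]string{"r1", "r2", "r3"}, []string{"r3", "r4", "r1"}))
}
```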

* Factor out a generator function for the Runners so that mocks can be injected

* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model

* fixes bug with nil runners

* Initial work for mocking things in tests

* fix for anonymous goroutine error

* fixing lb_test to compile

* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case

* Make GRPC port configurable, fix weird handling of web port too

* unit test reload Members

* check on runner creation failure

* adding nullRunner in case of failure during runner creation

* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.

* make capacityEntry private

* Change the runner gRPC bind address.

This uses the existing `whoAmI` function, so that the gRPC server works
when the runner is running on a different host.

* Add support for multiple fixed runners to pool mgr

* Added harness for dataplane system tests, minor refactors

* Add Dockerfiles for components, along with docs.

* Doc fix: second runner needs a different name.

* Let us have three runners in system tests, why not

* The first system test running a function in API/LB/PureRunner mode

* Add unit test for Advertiser logic

* Fix issue with Pure Runner not sending the last data frame

* use config in models.Call as a temporary mechanism to override lb group ID

* make gofmt happy

* Updates documentation for how to configure lb groups for an app/route

* small refactor unit test

* Factor NodePool into its own package

* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations

* New dataplane with static runnerpool (#813)

Added static node pool as default implementation

* moved nullRunner to grpc package

* remove duplication in README

* fix go vet issues

* Fix server initialisation in api tests

* Tiny logging changes in pool manager.

Using `WithError` instead of `Errorf` when appropriate.

* Change some log levels in the pure runner

* fixing readme

* moves multitenant compute documentation

* adds introduction to multitenant readme

* Proper triggering of system tests in makefile

* Fix instructions about starting up the components

* Change db file for system tests to avoid contention in parallel tests

* fixes revisions from merge

* Fix merge issue with handling of reserved slot

* renaming nulb to lb in the doc and images folder

* better TryExec sleep logic clean shutdown

This change implements a better way to handle the sleep inside the
for loop while attempting to place a call.
It also adds a clean way to shut down the connections to external
components when we shut down the server.

* System_test mysql port

Set the MySQL port for system tests to a different value from the one
used by the api tests, to avoid conflicts since they can run in parallel.

* change the container name for system-test

* removes flaky test TestRouteRunnerExecution pending resolution by issue #796

* amend remove_containers to remove newly added containers

* Rework capacity reservation logic at a higher level for now

* LB agent implements Submit rather than delegating.

* Fix go vet linting errors

* Changed a couple of error levels

* Fix formatting

* removes commented out test

* adds snappy to vendor directory

* updates Gopkg and vendor directories, removing snappy and adding siphash

* wait for db containers to come up before starting the tests

* make system tests start API node on 8085 to avoid port conflict with api_tests

* avoid port conflicts with api_test.sh which are run in parallel

* fixes postgres port conflict and issue with removal of old containers

* Remove spurious println
2018-03-08 14:45:19 -08:00


// Code generated by protoc-gen-go. DO NOT EDIT.
// source: runner.proto
/*
Package runner is a generated protocol buffer package.
It is generated from these files:
runner.proto
It has these top-level messages:
TryCall
CallAcknowledged
DataFrame
HttpHeader
HttpRespMeta
CallResultStart
CallFinished
ClientMsg
RunnerMsg
RunnerStatus
*/
package runner
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/empty"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Request to allocate a slot for a call
type TryCall struct {
ModelsCallJson string `protobuf:"bytes,1,opt,name=models_call_json,json=modelsCallJson" json:"models_call_json,omitempty"`
}
func (m *TryCall) Reset() { *m = TryCall{} }
func (m *TryCall) String() string { return proto.CompactTextString(m) }
func (*TryCall) ProtoMessage() {}
func (*TryCall) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *TryCall) GetModelsCallJson() string {
if m != nil {
return m.ModelsCallJson
}
return ""
}
// Call has been accepted and a slot allocated, or it's been rejected
type CallAcknowledged struct {
Committed bool `protobuf:"varint,1,opt,name=committed" json:"committed,omitempty"`
Details string `protobuf:"bytes,2,opt,name=details" json:"details,omitempty"`
SlotAllocationLatency string `protobuf:"bytes,3,opt,name=slot_allocation_latency,json=slotAllocationLatency" json:"slot_allocation_latency,omitempty"`
}
func (m *CallAcknowledged) Reset() { *m = CallAcknowledged{} }
func (m *CallAcknowledged) String() string { return proto.CompactTextString(m) }
func (*CallAcknowledged) ProtoMessage() {}
func (*CallAcknowledged) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *CallAcknowledged) GetCommitted() bool {
if m != nil {
return m.Committed
}
return false
}
func (m *CallAcknowledged) GetDetails() string {
if m != nil {
return m.Details
}
return ""
}
func (m *CallAcknowledged) GetSlotAllocationLatency() string {
if m != nil {
return m.SlotAllocationLatency
}
return ""
}
// Data sent C2S and S2C - as soon as the runner sees the first of these it
// will start running. If empty content, there must be one of these with eof.
// The runner will send these for the body of the response, AFTER it has sent
// a CallEnding message.
type DataFrame struct {
Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
Eof bool `protobuf:"varint,2,opt,name=eof" json:"eof,omitempty"`
}
func (m *DataFrame) Reset() { *m = DataFrame{} }
func (m *DataFrame) String() string { return proto.CompactTextString(m) }
func (*DataFrame) ProtoMessage() {}
func (*DataFrame) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *DataFrame) GetData() []byte {
if m != nil {
return m.Data
}
return nil
}
func (m *DataFrame) GetEof() bool {
if m != nil {
return m.Eof
}
return false
}
type HttpHeader struct {
Key string `protobuf:"bytes,1,opt,name=key" json:"key,omitempty"`
Value string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
}
func (m *HttpHeader) Reset() { *m = HttpHeader{} }
func (m *HttpHeader) String() string { return proto.CompactTextString(m) }
func (*HttpHeader) ProtoMessage() {}
func (*HttpHeader) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *HttpHeader) GetKey() string {
if m != nil {
return m.Key
}
return ""
}
func (m *HttpHeader) GetValue() string {
if m != nil {
return m.Value
}
return ""
}
type HttpRespMeta struct {
StatusCode int32 `protobuf:"varint,1,opt,name=status_code,json=statusCode" json:"status_code,omitempty"`
Headers []*HttpHeader `protobuf:"bytes,2,rep,name=headers" json:"headers,omitempty"`
}
func (m *HttpRespMeta) Reset() { *m = HttpRespMeta{} }
func (m *HttpRespMeta) String() string { return proto.CompactTextString(m) }
func (*HttpRespMeta) ProtoMessage() {}
func (*HttpRespMeta) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *HttpRespMeta) GetStatusCode() int32 {
if m != nil {
return m.StatusCode
}
return 0
}
func (m *HttpRespMeta) GetHeaders() []*HttpHeader {
if m != nil {
return m.Headers
}
return nil
}
// Call has started to finish - data might not be here yet and it will be sent
// as DataFrames.
type CallResultStart struct {
// Types that are valid to be assigned to Meta:
// *CallResultStart_Http
Meta isCallResultStart_Meta `protobuf_oneof:"meta"`
}
func (m *CallResultStart) Reset() { *m = CallResultStart{} }
func (m *CallResultStart) String() string { return proto.CompactTextString(m) }
func (*CallResultStart) ProtoMessage() {}
func (*CallResultStart) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
type isCallResultStart_Meta interface {
isCallResultStart_Meta()
}
type CallResultStart_Http struct {
Http *HttpRespMeta `protobuf:"bytes,100,opt,name=http,oneof"`
}
func (*CallResultStart_Http) isCallResultStart_Meta() {}
func (m *CallResultStart) GetMeta() isCallResultStart_Meta {
if m != nil {
return m.Meta
}
return nil
}
func (m *CallResultStart) GetHttp() *HttpRespMeta {
if x, ok := m.GetMeta().(*CallResultStart_Http); ok {
return x.Http
}
return nil
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*CallResultStart) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
return _CallResultStart_OneofMarshaler, _CallResultStart_OneofUnmarshaler, _CallResultStart_OneofSizer, []interface{}{
(*CallResultStart_Http)(nil),
}
}
func _CallResultStart_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*CallResultStart)
// meta
switch x := m.Meta.(type) {
case *CallResultStart_Http:
b.EncodeVarint(100<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Http); err != nil {
return err
}
case nil:
default:
return fmt.Errorf("CallResultStart.Meta has unexpected type %T", x)
}
return nil
}
func _CallResultStart_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*CallResultStart)
switch tag {
case 100: // meta.http
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(HttpRespMeta)
err := b.DecodeMessage(msg)
m.Meta = &CallResultStart_Http{msg}
return true, err
default:
return false, nil
}
}
func _CallResultStart_OneofSizer(msg proto.Message) (n int) {
m := msg.(*CallResultStart)
// meta
switch x := m.Meta.(type) {
case *CallResultStart_Http:
s := proto.Size(x.Http)
n += proto.SizeVarint(100<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case nil:
default:
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
}
return n
}
// Call has really finished, it might have completed or crashed
type CallFinished struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Details string `protobuf:"bytes,2,opt,name=details" json:"details,omitempty"`
}
func (m *CallFinished) Reset() { *m = CallFinished{} }
func (m *CallFinished) String() string { return proto.CompactTextString(m) }
func (*CallFinished) ProtoMessage() {}
func (*CallFinished) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
func (m *CallFinished) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *CallFinished) GetDetails() string {
if m != nil {
return m.Details
}
return ""
}
type ClientMsg struct {
// Types that are valid to be assigned to Body:
// *ClientMsg_Try
// *ClientMsg_Data
Body isClientMsg_Body `protobuf_oneof:"body"`
}
func (m *ClientMsg) Reset() { *m = ClientMsg{} }
func (m *ClientMsg) String() string { return proto.CompactTextString(m) }
func (*ClientMsg) ProtoMessage() {}
func (*ClientMsg) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
type isClientMsg_Body interface {
isClientMsg_Body()
}
type ClientMsg_Try struct {
Try *TryCall `protobuf:"bytes,1,opt,name=try,oneof"`
}
type ClientMsg_Data struct {
Data *DataFrame `protobuf:"bytes,2,opt,name=data,oneof"`
}
func (*ClientMsg_Try) isClientMsg_Body() {}
func (*ClientMsg_Data) isClientMsg_Body() {}
func (m *ClientMsg) GetBody() isClientMsg_Body {
if m != nil {
return m.Body
}
return nil
}
func (m *ClientMsg) GetTry() *TryCall {
if x, ok := m.GetBody().(*ClientMsg_Try); ok {
return x.Try
}
return nil
}
func (m *ClientMsg) GetData() *DataFrame {
if x, ok := m.GetBody().(*ClientMsg_Data); ok {
return x.Data
}
return nil
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*ClientMsg) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
return _ClientMsg_OneofMarshaler, _ClientMsg_OneofUnmarshaler, _ClientMsg_OneofSizer, []interface{}{
(*ClientMsg_Try)(nil),
(*ClientMsg_Data)(nil),
}
}
func _ClientMsg_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*ClientMsg)
// body
switch x := m.Body.(type) {
case *ClientMsg_Try:
b.EncodeVarint(1<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Try); err != nil {
return err
}
case *ClientMsg_Data:
b.EncodeVarint(2<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Data); err != nil {
return err
}
case nil:
default:
return fmt.Errorf("ClientMsg.Body has unexpected type %T", x)
}
return nil
}
func _ClientMsg_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*ClientMsg)
switch tag {
case 1: // body.try
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(TryCall)
err := b.DecodeMessage(msg)
m.Body = &ClientMsg_Try{msg}
return true, err
case 2: // body.data
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(DataFrame)
err := b.DecodeMessage(msg)
m.Body = &ClientMsg_Data{msg}
return true, err
default:
return false, nil
}
}
func _ClientMsg_OneofSizer(msg proto.Message) (n int) {
m := msg.(*ClientMsg)
// body
switch x := m.Body.(type) {
case *ClientMsg_Try:
s := proto.Size(x.Try)
n += proto.SizeVarint(1<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case *ClientMsg_Data:
s := proto.Size(x.Data)
n += proto.SizeVarint(2<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case nil:
default:
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
}
return n
}
type RunnerMsg struct {
// Types that are valid to be assigned to Body:
// *RunnerMsg_Acknowledged
// *RunnerMsg_ResultStart
// *RunnerMsg_Data
// *RunnerMsg_Finished
Body isRunnerMsg_Body `protobuf_oneof:"body"`
}
func (m *RunnerMsg) Reset() { *m = RunnerMsg{} }
func (m *RunnerMsg) String() string { return proto.CompactTextString(m) }
func (*RunnerMsg) ProtoMessage() {}
func (*RunnerMsg) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
type isRunnerMsg_Body interface {
isRunnerMsg_Body()
}
type RunnerMsg_Acknowledged struct {
Acknowledged *CallAcknowledged `protobuf:"bytes,1,opt,name=acknowledged,oneof"`
}
type RunnerMsg_ResultStart struct {
ResultStart *CallResultStart `protobuf:"bytes,2,opt,name=result_start,json=resultStart,oneof"`
}
type RunnerMsg_Data struct {
Data *DataFrame `protobuf:"bytes,3,opt,name=data,oneof"`
}
type RunnerMsg_Finished struct {
Finished *CallFinished `protobuf:"bytes,4,opt,name=finished,oneof"`
}
func (*RunnerMsg_Acknowledged) isRunnerMsg_Body() {}
func (*RunnerMsg_ResultStart) isRunnerMsg_Body() {}
func (*RunnerMsg_Data) isRunnerMsg_Body() {}
func (*RunnerMsg_Finished) isRunnerMsg_Body() {}
func (m *RunnerMsg) GetBody() isRunnerMsg_Body {
if m != nil {
return m.Body
}
return nil
}
func (m *RunnerMsg) GetAcknowledged() *CallAcknowledged {
if x, ok := m.GetBody().(*RunnerMsg_Acknowledged); ok {
return x.Acknowledged
}
return nil
}
func (m *RunnerMsg) GetResultStart() *CallResultStart {
if x, ok := m.GetBody().(*RunnerMsg_ResultStart); ok {
return x.ResultStart
}
return nil
}
func (m *RunnerMsg) GetData() *DataFrame {
if x, ok := m.GetBody().(*RunnerMsg_Data); ok {
return x.Data
}
return nil
}
func (m *RunnerMsg) GetFinished() *CallFinished {
if x, ok := m.GetBody().(*RunnerMsg_Finished); ok {
return x.Finished
}
return nil
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*RunnerMsg) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
return _RunnerMsg_OneofMarshaler, _RunnerMsg_OneofUnmarshaler, _RunnerMsg_OneofSizer, []interface{}{
(*RunnerMsg_Acknowledged)(nil),
(*RunnerMsg_ResultStart)(nil),
(*RunnerMsg_Data)(nil),
(*RunnerMsg_Finished)(nil),
}
}
func _RunnerMsg_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*RunnerMsg)
// body
switch x := m.Body.(type) {
case *RunnerMsg_Acknowledged:
b.EncodeVarint(1<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Acknowledged); err != nil {
return err
}
case *RunnerMsg_ResultStart:
b.EncodeVarint(2<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.ResultStart); err != nil {
return err
}
case *RunnerMsg_Data:
b.EncodeVarint(3<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Data); err != nil {
return err
}
case *RunnerMsg_Finished:
b.EncodeVarint(4<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Finished); err != nil {
return err
}
case nil:
default:
return fmt.Errorf("RunnerMsg.Body has unexpected type %T", x)
}
return nil
}
func _RunnerMsg_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*RunnerMsg)
switch tag {
case 1: // body.acknowledged
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(CallAcknowledged)
err := b.DecodeMessage(msg)
m.Body = &RunnerMsg_Acknowledged{msg}
return true, err
case 2: // body.result_start
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(CallResultStart)
err := b.DecodeMessage(msg)
m.Body = &RunnerMsg_ResultStart{msg}
return true, err
case 3: // body.data
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(DataFrame)
err := b.DecodeMessage(msg)
m.Body = &RunnerMsg_Data{msg}
return true, err
case 4: // body.finished
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(CallFinished)
err := b.DecodeMessage(msg)
m.Body = &RunnerMsg_Finished{msg}
return true, err
default:
return false, nil
}
}
func _RunnerMsg_OneofSizer(msg proto.Message) (n int) {
m := msg.(*RunnerMsg)
// body
switch x := m.Body.(type) {
case *RunnerMsg_Acknowledged:
s := proto.Size(x.Acknowledged)
n += proto.SizeVarint(1<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case *RunnerMsg_ResultStart:
s := proto.Size(x.ResultStart)
n += proto.SizeVarint(2<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case *RunnerMsg_Data:
s := proto.Size(x.Data)
n += proto.SizeVarint(3<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case *RunnerMsg_Finished:
s := proto.Size(x.Finished)
n += proto.SizeVarint(4<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case nil:
default:
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
}
return n
}
type RunnerStatus struct {
Active int32 `protobuf:"varint,2,opt,name=active" json:"active,omitempty"`
}
func (m *RunnerStatus) Reset() { *m = RunnerStatus{} }
func (m *RunnerStatus) String() string { return proto.CompactTextString(m) }
func (*RunnerStatus) ProtoMessage() {}
func (*RunnerStatus) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
func (m *RunnerStatus) GetActive() int32 {
if m != nil {
return m.Active
}
return 0
}
func init() {
proto.RegisterType((*TryCall)(nil), "TryCall")
proto.RegisterType((*CallAcknowledged)(nil), "CallAcknowledged")
proto.RegisterType((*DataFrame)(nil), "DataFrame")
proto.RegisterType((*HttpHeader)(nil), "HttpHeader")
proto.RegisterType((*HttpRespMeta)(nil), "HttpRespMeta")
proto.RegisterType((*CallResultStart)(nil), "CallResultStart")
proto.RegisterType((*CallFinished)(nil), "CallFinished")
proto.RegisterType((*ClientMsg)(nil), "ClientMsg")
proto.RegisterType((*RunnerMsg)(nil), "RunnerMsg")
proto.RegisterType((*RunnerStatus)(nil), "RunnerStatus")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for RunnerProtocol service
type RunnerProtocolClient interface {
Engage(ctx context.Context, opts ...grpc.CallOption) (RunnerProtocol_EngageClient, error)
// Rather than rely on Prometheus for this, expose status that's specific to the runner lifecycle through this.
Status(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*RunnerStatus, error)
}
type runnerProtocolClient struct {
cc *grpc.ClientConn
}
func NewRunnerProtocolClient(cc *grpc.ClientConn) RunnerProtocolClient {
return &runnerProtocolClient{cc}
}
func (c *runnerProtocolClient) Engage(ctx context.Context, opts ...grpc.CallOption) (RunnerProtocol_EngageClient, error) {
stream, err := grpc.NewClientStream(ctx, &_RunnerProtocol_serviceDesc.Streams[0], c.cc, "/RunnerProtocol/Engage", opts...)
if err != nil {
return nil, err
}
x := &runnerProtocolEngageClient{stream}
return x, nil
}
type RunnerProtocol_EngageClient interface {
Send(*ClientMsg) error
Recv() (*RunnerMsg, error)
grpc.ClientStream
}
type runnerProtocolEngageClient struct {
grpc.ClientStream
}
func (x *runnerProtocolEngageClient) Send(m *ClientMsg) error {
return x.ClientStream.SendMsg(m)
}
func (x *runnerProtocolEngageClient) Recv() (*RunnerMsg, error) {
m := new(RunnerMsg)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *runnerProtocolClient) Status(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*RunnerStatus, error) {
out := new(RunnerStatus)
err := grpc.Invoke(ctx, "/RunnerProtocol/Status", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for RunnerProtocol service
type RunnerProtocolServer interface {
Engage(RunnerProtocol_EngageServer) error
// Rather than rely on Prometheus for this, expose status that's specific to the runner lifecycle through this.
Status(context.Context, *google_protobuf.Empty) (*RunnerStatus, error)
}
func RegisterRunnerProtocolServer(s *grpc.Server, srv RunnerProtocolServer) {
s.RegisterService(&_RunnerProtocol_serviceDesc, srv)
}
func _RunnerProtocol_Engage_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(RunnerProtocolServer).Engage(&runnerProtocolEngageServer{stream})
}
type RunnerProtocol_EngageServer interface {
Send(*RunnerMsg) error
Recv() (*ClientMsg, error)
grpc.ServerStream
}
type runnerProtocolEngageServer struct {
grpc.ServerStream
}
func (x *runnerProtocolEngageServer) Send(m *RunnerMsg) error {
return x.ServerStream.SendMsg(m)
}
func (x *runnerProtocolEngageServer) Recv() (*ClientMsg, error) {
m := new(ClientMsg)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func _RunnerProtocol_Status_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(google_protobuf.Empty)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(RunnerProtocolServer).Status(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/RunnerProtocol/Status",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(RunnerProtocolServer).Status(ctx, req.(*google_protobuf.Empty))
}
return interceptor(ctx, in, info, handler)
}
var _RunnerProtocol_serviceDesc = grpc.ServiceDesc{
ServiceName: "RunnerProtocol",
HandlerType: (*RunnerProtocolServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Status",
Handler: _RunnerProtocol_Status_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "Engage",
Handler: _RunnerProtocol_Engage_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "runner.proto",
}
func init() { proto.RegisterFile("runner.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 566 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x93, 0x59, 0x6b, 0xdb, 0x40,
0x10, 0xc7, 0xad, 0xd8, 0x71, 0xec, 0xb1, 0x92, 0xba, 0x4b, 0x0f, 0x93, 0x06, 0x1a, 0xd4, 0x03,
0x43, 0x61, 0xd3, 0x3a, 0x3d, 0xde, 0x0a, 0x49, 0x9a, 0x20, 0x4a, 0x03, 0x65, 0x53, 0xfa, 0x6a,
0x36, 0xd2, 0x44, 0x56, 0xb3, 0xd2, 0x1a, 0xed, 0x28, 0xc5, 0xaf, 0xfd, 0x88, 0xfd, 0x44, 0x65,
0x57, 0x47, 0xdc, 0x40, 0xde, 0x34, 0xfb, 0x9f, 0xf3, 0xa7, 0x19, 0xf0, 0x8b, 0x32, 0xcf, 0xb1,
0xe0, 0xcb, 0x42, 0x93, 0xde, 0x7d, 0x96, 0x68, 0x9d, 0x28, 0x3c, 0x70, 0xd6, 0x65, 0x79, 0x75,
0x80, 0xd9, 0x92, 0x56, 0x95, 0x18, 0x1c, 0xc2, 0xd6, 0x8f, 0x62, 0x75, 0x22, 0x95, 0x62, 0x53,
0x18, 0x67, 0x3a, 0x46, 0x65, 0xe6, 0x91, 0x54, 0x6a, 0xfe, 0xcb, 0xe8, 0x7c, 0xe2, 0xed, 0x7b,
0xd3, 0xa1, 0xd8, 0xa9, 0xde, 0xad, 0xd7, 0x57, 0xa3, 0xf3, 0xe0, 0x8f, 0x07, 0x63, 0x6b, 0x1c,
0x45, 0xd7, 0xb9, 0xfe, 0xad, 0x30, 0x4e, 0x30, 0x66, 0x7b, 0x30, 0x8c, 0x74, 0x96, 0xa5, 0x44,
0x18, 0xbb, 0xb8, 0x81, 0xb8, 0x7d, 0x60, 0x13, 0xd8, 0x8a, 0x91, 0x64, 0xaa, 0xcc, 0x64, 0xc3,
0xe5, 0x6c, 0x4c, 0xf6, 0x11, 0x9e, 0x1a, 0xa5, 0x69, 0x2e, 0x95, 0xd2, 0x91, 0xa4, 0x54, 0xe7,
0x73, 0x25, 0x09, 0xf3, 0x68, 0x35, 0xe9, 0x3a, 0xcf, 0xc7, 0x56, 0x3e, 0x6a, 0xd5, 0x6f, 0x95,
0x18, 0xbc, 0x83, 0xe1, 0x17, 0x49, 0xf2, 0xac, 0x90, 0x19, 0x32, 0x06, 0xbd, 0x58, 0x92, 0x74,
0x75, 0x7d, 0xe1, 0xbe, 0xd9, 0x18, 0xba, 0xa8, 0xaf, 0x5c, 0xb9, 0x81, 0xb0, 0x9f, 0xc1, 0x7b,
0x80, 0x90, 0x68, 0x19, 0xa2, 0x8c, 0xb1, 0xb0, 0xfa, 0x35, 0xae, 0xea, 0x11, 0xed, 0x27, 0x7b,
0x04, 0x9b, 0x37, 0x52, 0x95, 0x58, 0xb7, 0x58, 0x19, 0xc1, 0x4f, 0xf0, 0x6d, 0x94, 0x40, 0xb3,
0x3c, 0x47, 0x92, 0xec, 0x39, 0x8c, 0x0c, 0x49, 0x2a, 0xcd, 0x3c, 0xd2, 0x31, 0xba, 0xf8, 0x4d,
0x01, 0xd5, 0xd3, 0x89, 0x8e, 0x91, 0xbd, 0x82, 0xad, 0x85, 0x2b, 0x61, 0x67, 0xed, 0x4e, 0x47,
0xb3, 0x11, 0xbf, 0x2d, 0x2b, 0x1a, 0x2d, 0xf8, 0x0c, 0x0f, 0x2c, 0x44, 0x81, 0xa6, 0x54, 0x74,
0x41, 0xb2, 0x20, 0xf6, 0x02, 0x7a, 0x0b, 0xa2, 0xe5, 0x24, 0xde, 0xf7, 0xa6, 0xa3, 0xd9, 0x36,
0x5f, 0xaf, 0x1b, 0x76, 0x84, 0x13, 0x8f, 0xfb, 0xd0, 0xcb, 0x90, 0x64, 0x70, 0x0c, 0xbe, 0x8d,
0x3f, 0x4b, 0xf3, 0xd4, 0x2c, 0x2a, 0xc4, 0xa6, 0x8c, 0x22, 0x34, 0xa6, 0xc6, 0xdf, 0x98, 0xf7,
0xc3, 0x0f, 0x2e, 0x60, 0x78, 0xa2, 0x52, 0xcc, 0xe9, 0xdc, 0x24, 0x6c, 0x0f, 0xba, 0x54, 0x54,
0x40, 0x46, 0xb3, 0x01, 0xaf, 0xf7, 0x22, 0xec, 0x08, 0xfb, 0xcc, 0xf6, 0x6b, 0xc4, 0x1b, 0x4e,
0x06, 0xde, 0xc2, 0xb7, 0x8d, 0x59, 0xc5, 0x36, 0x76, 0xa9, 0xe3, 0x55, 0xf0, 0xd7, 0x83, 0xa1,
0x70, 0x1b, 0x68, 0xb3, 0x7e, 0x02, 0x5f, 0xae, 0xed, 0x49, 0x9d, 0xfe, 0x21, 0xbf, 0xbb, 0x40,
0x61, 0x47, 0xfc, 0xe7, 0xc8, 0x3e, 0x80, 0x5f, 0x38, 0x36, 0x73, 0x63, 0xe1, 0xd4, 0x85, 0xc7,
0xfc, 0x0e, 0xb4, 0xb0, 0x23, 0x46, 0xc5, 0x1a, 0xc3, 0xa6, 0xcf, 0xee, 0x7d, 0x7d, 0xb2, 0x37,
0x30, 0xb8, 0xaa, 0xa1, 0x4d, 0x7a, 0x35, 0xe9, 0x75, 0x92, 0x61, 0x47, 0xb4, 0x0e, 0xed, 0x50,
0xaf, 0xc1, 0xaf, 0x66, 0xba, 0x70, 0x3f, 0x9a, 0x3d, 0x81, 0xbe, 0x8c, 0x28, 0xbd, 0xa9, 0x96,
0x65, 0x53, 0xd4, 0xd6, 0x2c, 0x81, 0x9d, 0xca, 0xef, 0xbb, 0xbd, 0xaf, 0x48, 0x2b, 0xf6, 0x12,
0xfa, 0xa7, 0x79, 0x22, 0x13, 0x64, 0xc0, 0x5b, 0xd8, 0xbb, 0xc0, 0x5b, 0x44, 0x53, 0xef, 0xad,
0xc7, 0x0e, 0xa0, 0xdf, 0x64, 0xe6, 0xd5, 0xc1, 0xf2, 0xe6, 0x60, 0xf9, 0xa9, 0x3d, 0xd8, 0xdd,
0x6d, 0xbe, 0xde, 0xc0, 0x65, 0xdf, 0xc9, 0x87, 0xff, 0x02, 0x00, 0x00, 0xff, 0xff, 0x05, 0xe4,
0x7b, 0x85, 0xed, 0x03, 0x00, 0x00,
}