mirror of
https://github.com/fnproject/fn.git
synced 2022-10-28 21:29:17 +03:00
* Initial stab at the protocol
* initial protocol sketch for node pool manager
* Added http header frame as a message
* Force the use of WithAgent variants when creating a server
* adds grpc models for node pool manager plus go deps
* Naming things is really hard
* Merge (and optionally purge) details received by the NPM
* WIP: starting to add the runner-side functionality of the new data plane
* WIP: Basic startup of grpc server for pure runner. Needs proper certs.
* Go fmt
* Initial agent for LB nodes.
* Agent implementation for LB nodes.
* Pass keys and certs to LB node agent.
* Remove accidentally left reference to env var.
* Add env variables for certificate files
* stub out the capacity and group membership server channels
* implement server-side runner manager service
* removes unused variable
* fixes build error
* splits up GetCall and GetLBGroupId
* Change LB node agent to use TLS connection.
* Encode call model as JSON to send to runner node.
* Use hybrid client in LB node agent. This should provide access to get app and route information for the call from an API node.
* More error handling on the pure runner side
* Tentative fix for GetCall problem: set deadlines correctly when reserving slot
* Connect loop for LB agent to runner nodes.
* Extract runner connection function in LB agent.
* drops committed capacity counts
* Bugfix - end state tracker only in submit
* Do logs properly
* adds first pass of tracking capacity metrics in agent
* made memory capacity metric uint64
* made memory capacity metric uint64
* removes use of old capacity field
* adds remove capacity call
* merges overwritten reconnect logic
* First pass of an NPM: provide a service that talks to a (simulated) CP.
  - Receive incoming capacity assertions from LBs for LBGs
  - expire LB requests after a short period
  - ask the CP to add runners to a LBG
  - note runner set changes and readvertise
  - scale down by marking runners as "draining"
  - shut off draining runners after some cool-down period
* add capacity update on schedule
* Send periodic capacity metrics to the node pool manager
* splits grpc and api interfaces for capacity manager
* failure to advertise capacity shouldn't panic
* Add some instructions for starting DP/CP parts.
* Create the poolmanager server with TLS
* Use logrus
* Get npm compiling with cert fixups.
* Fix: pure runner should not start async processing
* brings runner, nulb and npm together
* Add field to acknowledgment to record slot allocation latency; fix a bug too
* iterating on pool manager locking issue
* raises timeout of placement retry loop
* Fix up NPM: improve logging; ensure that channels etc. are actually initialised in the structure creation
* Update the docs - runners GRPC port is 9120
* Bugfix: return runner pool accurately.
* Double locking
* Note purges as LBs stop talking to us
* Get the purging of old LBs working.
* Tweak: on restart, load runner set before making scaling decisions.
* more agent synchronization improvements
* Deal with the CP pulling out active hosts from under us.
* lock at lbgroup level
* Send request and receive response from runner.
* Add capacity check right before slot reservation
* Pass the full Call into the receive loop.
* Wait for the data from the runner before finishing
* force runner list refresh every time
* Don't init db and mq for pure runners
* adds shutdown of npm
* fixes broken log line
* Extract an interface for the Predictor used by the NPM
* purge drained connections from npm
* Refactor of the LB agent into the agent package
* removes capacitytest wip
* Fix undefined err issue
* updating README for poolmanager set up
* use retrying dial for lb to npm connections
* Rename lb_calls to lb_agent now that all functionality is there
* Use the right deadline and errors in LBAgent
* Make stream error flag per-call rather than global, otherwise the whole runner is damaged by one call dropping
* abstracting gRPCNodePool
* Make stream error flag per-call rather than global, otherwise the whole runner is damaged by one call dropping
* Add some init checks for LB and pure runner nodes
* adding some useful debug
* Fix default db and mq for lb node
* removes unreachable code, fixes typo
* Use datastore as logstore in API nodes. This fixes a bug caused by trying to insert logs into a nil logstore; it was nil because it wasn't being set for API nodes.
* creates placement abstraction and moves capacity APIs to NodePool
* removed TODO, added logging
* Dial reconnections for LB <-> runners: LB grpc connections to runners are established using a backoff strategy on reconnection, which keeps the LB up even when a runner goes away, and reconnects to it as soon as it is back.
* Add a status call to the Runner protocol. A stub at the moment, to be used for things like draindown and health checks.
* Remove comment.
* makes assign/release capacity lockless
* Fix hanging issue in lb agent when connections drop
* Add the CH hash from fnlb. Select this with FN_PLACER=ch when launching the LB.
* small improvement for locking on reloadLBGmembership
* Stabilise the list of Runners returned by NodePool: the NodePoolManager makes some attempt to keep the list of runner nodes advertised as stable as possible, so preserve this effort on the client side. The main point is to keep the same runner at the same index in the []Runner returned by NodePool.Runners(lbgid); the ch algorithm likes it when this is the case.
* Factor out a generator function for the Runners so that mocks can be injected
* temporarily allow lbgroup to be specified in HTTP header, while we sort out changes to the model
* fixes bug with nil runners
* Initial work for mocking things in tests
* fix for anonymous go routine error
* fixing lb_test to compile
* Refactor: internal objects for gRPCNodePool are now injectable, with defaults for the real world case
* Make GRPC port configurable, fix weird handling of web port too
* unit test reload Members
* check on runner creation failure
* adding nullRunner in case of failure during runner creation
* Refactored capacity advertisements/aggregations. Made grpc advertisement post asynchronous and non-blocking.
* make capacityEntry private
* Change the runner gRPC bind address. This uses the existing `whoAmI` function, so that the gRPC server works when the runner is running on a different host.
* Add support for multiple fixed runners to pool mgr
* Added harness for dataplane system tests, minor refactors
* Add Dockerfiles for components, along with docs.
* Doc fix: second runner needs a different name.
* Let us have three runners in system tests, why not
* The first system test running a function in API/LB/PureRunner mode
* Add unit test for Advertiser logic
* Fix issue with Pure Runner not sending the last data frame
* use config in models.Call as a temporary mechanism to override lb group ID
* make gofmt happy
* Updates documentation for how to configure lb groups for an app/route
* small refactor unit test
* Factor NodePool into its own package
* Lots of fixes to Pure Runner - concurrency woes with errors and cancellations
* New dataplane with static runnerpool (#813): added static node pool as default implementation
* moved nullRunner to grpc package
* remove duplication in README
* fix go vet issues
* Fix server initialisation in api tests
* Tiny logging changes in pool manager. Using `WithError` instead of `Errorf` when appropriate.
* Change some log levels in the pure runner
* fixing readme
* moves multitenant compute documentation
* adds introduction to multitenant readme
* Proper triggering of system tests in makefile
* Fix instructions about starting up the components
* Change db file for system tests to avoid contention in parallel tests
* fixes revisions from merge
* Fix merge issue with handling of reserved slot
* renaming nulb to lb in the doc and images folder
* better TryExec sleep logic, clean shutdown: implement a better way to deal with the sleep inside the for loop when attempting to place a call, plus a clean way to shut down connections to external components when the server shuts down.
* System_test mysql port: set the mysql port for system tests to a different value from the one used by the api tests, to avoid conflicts as they can run in parallel.
* change the container name for system-test
* removes flaky test TestRouteRunnerExecution pending resolution by issue #796
* amend remove_containers to remove newly added containers
* Rework capacity reservation logic at a higher level for now
* LB agent implements Submit rather than delegating.
* Fix go vet linting errors
* Changed a couple of error levels
* Fix formatting
* removes commented out test
* adds snappy to vendor directory
* updates Gopkg and vendor directories, removing snappy and adding siphash
* wait for db containers to come up before starting the tests
* make system tests start API node on 8085 to avoid port conflict with api_tests
* avoid port conflicts with api_test.sh which are run in parallel
* fixes postgres port conflict and issue with removal of old containers
* Remove spurious println
1262 lines
51 KiB
Go
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/bigtable/v2/bigtable.proto

/*
Package bigtable is a generated protocol buffer package.

It is generated from these files:
	google/bigtable/v2/bigtable.proto
	google/bigtable/v2/data.proto

It has these top-level messages:
	ReadRowsRequest
	ReadRowsResponse
	SampleRowKeysRequest
	SampleRowKeysResponse
	MutateRowRequest
	MutateRowResponse
	MutateRowsRequest
	MutateRowsResponse
	CheckAndMutateRowRequest
	CheckAndMutateRowResponse
	ReadModifyWriteRowRequest
	ReadModifyWriteRowResponse
	Row
	Family
	Column
	Cell
	RowRange
	RowSet
	ColumnRange
	TimestampRange
	ValueRange
	RowFilter
	Mutation
	ReadModifyWriteRule
*/
package bigtable
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "google.golang.org/genproto/googleapis/api/annotations"
import google_protobuf1 "github.com/golang/protobuf/ptypes/wrappers"
import google_rpc "google.golang.org/genproto/googleapis/rpc/status"

import (
	context "golang.org/x/net/context"
	grpc "google.golang.org/grpc"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Request message for Bigtable.ReadRows.
type ReadRowsRequest struct {
	// The unique name of the table from which to read.
	// Values are of the form
	// `projects/<project>/instances/<instance>/tables/<table>`.
	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
	// This is a private alpha release of Cloud Bigtable replication. This feature
	// is not currently available to most Cloud Bigtable customers. This feature
	// might be changed in backward-incompatible ways and is not recommended for
	// production use. It is not subject to any SLA or deprecation policy.
	//
	// This value specifies routing for replication. If not specified, the
	// "default" application profile will be used.
	AppProfileId string `protobuf:"bytes,5,opt,name=app_profile_id,json=appProfileId" json:"app_profile_id,omitempty"`
	// The row keys and/or ranges to read. If not specified, reads from all rows.
	Rows *RowSet `protobuf:"bytes,2,opt,name=rows" json:"rows,omitempty"`
	// The filter to apply to the contents of the specified row(s). If unset,
	// reads the entirety of each row.
	Filter *RowFilter `protobuf:"bytes,3,opt,name=filter" json:"filter,omitempty"`
	// The read will terminate after committing to N rows' worth of results. The
	// default (zero) is to return all results.
	RowsLimit int64 `protobuf:"varint,4,opt,name=rows_limit,json=rowsLimit" json:"rows_limit,omitempty"`
}

func (m *ReadRowsRequest) Reset()                    { *m = ReadRowsRequest{} }
func (m *ReadRowsRequest) String() string            { return proto.CompactTextString(m) }
func (*ReadRowsRequest) ProtoMessage()               {}
func (*ReadRowsRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }

func (m *ReadRowsRequest) GetTableName() string {
	if m != nil {
		return m.TableName
	}
	return ""
}

func (m *ReadRowsRequest) GetAppProfileId() string {
	if m != nil {
		return m.AppProfileId
	}
	return ""
}

func (m *ReadRowsRequest) GetRows() *RowSet {
	if m != nil {
		return m.Rows
	}
	return nil
}

func (m *ReadRowsRequest) GetFilter() *RowFilter {
	if m != nil {
		return m.Filter
	}
	return nil
}

func (m *ReadRowsRequest) GetRowsLimit() int64 {
	if m != nil {
		return m.RowsLimit
	}
	return 0
}
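The generated getters above are nil-safe: each one checks `m != nil` before touching a field, so callers can chain through optional messages without their own nil checks. A minimal standalone sketch of the same pattern — the local `readRowsRequest` type and its field values are illustrative stand-ins, not the generated types:

```go
package main

import "fmt"

// Illustrative stand-in for a generated message type.
type readRowsRequest struct {
	TableName string
	RowsLimit int64
}

// Getter in the generated style: safe to call on a nil receiver,
// returning the field's zero value instead of panicking.
func (m *readRowsRequest) GetTableName() string {
	if m != nil {
		return m.TableName
	}
	return ""
}

func main() {
	var req *readRowsRequest // nil pointer: getter still works
	fmt.Println(req.GetTableName() == "")
	req = &readRowsRequest{TableName: "projects/p/instances/i/tables/t"}
	fmt.Println(req.GetTableName())
}
```

Calling a pointer-receiver method on a nil pointer is legal in Go as long as the method itself guards the dereference, which is exactly what the generated code relies on.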
// Response message for Bigtable.ReadRows.
type ReadRowsResponse struct {
	Chunks []*ReadRowsResponse_CellChunk `protobuf:"bytes,1,rep,name=chunks" json:"chunks,omitempty"`
	// Optionally the server might return the row key of the last row it
	// has scanned. The client can use this to construct a more
	// efficient retry request if needed: any row keys or portions of
	// ranges less than this row key can be dropped from the request.
	// This is primarily useful for cases where the server has read a
	// lot of data that was filtered out since the last committed row
	// key, allowing the client to skip that work on a retry.
	LastScannedRowKey []byte `protobuf:"bytes,2,opt,name=last_scanned_row_key,json=lastScannedRowKey,proto3" json:"last_scanned_row_key,omitempty"`
}

func (m *ReadRowsResponse) Reset()                    { *m = ReadRowsResponse{} }
func (m *ReadRowsResponse) String() string            { return proto.CompactTextString(m) }
func (*ReadRowsResponse) ProtoMessage()               {}
func (*ReadRowsResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }

func (m *ReadRowsResponse) GetChunks() []*ReadRowsResponse_CellChunk {
	if m != nil {
		return m.Chunks
	}
	return nil
}

func (m *ReadRowsResponse) GetLastScannedRowKey() []byte {
	if m != nil {
		return m.LastScannedRowKey
	}
	return nil
}

// Specifies a piece of a row's contents returned as part of the read
// response stream.
type ReadRowsResponse_CellChunk struct {
	// The row key for this chunk of data. If the row key is empty,
	// this CellChunk is a continuation of the same row as the previous
	// CellChunk in the response stream, even if that CellChunk was in a
	// previous ReadRowsResponse message.
	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
	// The column family name for this chunk of data. If this message
	// is not present this CellChunk is a continuation of the same column
	// family as the previous CellChunk. The empty string can occur as a
	// column family name in a response so clients must check
	// explicitly for the presence of this message, not just for
	// `family_name.value` being non-empty.
	FamilyName *google_protobuf1.StringValue `protobuf:"bytes,2,opt,name=family_name,json=familyName" json:"family_name,omitempty"`
	// The column qualifier for this chunk of data. If this message
	// is not present, this CellChunk is a continuation of the same column
	// as the previous CellChunk. Column qualifiers may be empty so
	// clients must check for the presence of this message, not just
	// for `qualifier.value` being non-empty.
	Qualifier *google_protobuf1.BytesValue `protobuf:"bytes,3,opt,name=qualifier" json:"qualifier,omitempty"`
	// The cell's stored timestamp, which also uniquely identifies it
	// within its column. Values are always expressed in
	// microseconds, but individual tables may set a coarser
	// granularity to further restrict the allowed values. For
	// example, a table which specifies millisecond granularity will
	// only allow values of `timestamp_micros` which are multiples of
	// 1000. Timestamps are only set in the first CellChunk per cell
	// (for cells split into multiple chunks).
	TimestampMicros int64 `protobuf:"varint,4,opt,name=timestamp_micros,json=timestampMicros" json:"timestamp_micros,omitempty"`
	// Labels applied to the cell by a
	// [RowFilter][google.bigtable.v2.RowFilter]. Labels are only set
	// on the first CellChunk per cell.
	Labels []string `protobuf:"bytes,5,rep,name=labels" json:"labels,omitempty"`
	// The value stored in the cell. Cell values can be split across
	// multiple CellChunks. In that case only the value field will be
	// set in CellChunks after the first: the timestamp and labels
	// will only be present in the first CellChunk, even if the first
	// CellChunk came in a previous ReadRowsResponse.
	Value []byte `protobuf:"bytes,6,opt,name=value,proto3" json:"value,omitempty"`
	// If this CellChunk is part of a chunked cell value and this is
	// not the final chunk of that cell, value_size will be set to the
	// total length of the cell value. The client can use this size
	// to pre-allocate memory to hold the full cell value.
	ValueSize int32 `protobuf:"varint,7,opt,name=value_size,json=valueSize" json:"value_size,omitempty"`
	// Types that are valid to be assigned to RowStatus:
	//	*ReadRowsResponse_CellChunk_ResetRow
	//	*ReadRowsResponse_CellChunk_CommitRow
	RowStatus isReadRowsResponse_CellChunk_RowStatus `protobuf_oneof:"row_status"`
}

func (m *ReadRowsResponse_CellChunk) Reset()         { *m = ReadRowsResponse_CellChunk{} }
func (m *ReadRowsResponse_CellChunk) String() string { return proto.CompactTextString(m) }
func (*ReadRowsResponse_CellChunk) ProtoMessage()    {}
func (*ReadRowsResponse_CellChunk) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1, 0} }

type isReadRowsResponse_CellChunk_RowStatus interface {
	isReadRowsResponse_CellChunk_RowStatus()
}

type ReadRowsResponse_CellChunk_ResetRow struct {
	ResetRow bool `protobuf:"varint,8,opt,name=reset_row,json=resetRow,oneof"`
}
type ReadRowsResponse_CellChunk_CommitRow struct {
	CommitRow bool `protobuf:"varint,9,opt,name=commit_row,json=commitRow,oneof"`
}

func (*ReadRowsResponse_CellChunk_ResetRow) isReadRowsResponse_CellChunk_RowStatus()  {}
func (*ReadRowsResponse_CellChunk_CommitRow) isReadRowsResponse_CellChunk_RowStatus() {}

func (m *ReadRowsResponse_CellChunk) GetRowStatus() isReadRowsResponse_CellChunk_RowStatus {
	if m != nil {
		return m.RowStatus
	}
	return nil
}

func (m *ReadRowsResponse_CellChunk) GetRowKey() []byte {
	if m != nil {
		return m.RowKey
	}
	return nil
}

func (m *ReadRowsResponse_CellChunk) GetFamilyName() *google_protobuf1.StringValue {
	if m != nil {
		return m.FamilyName
	}
	return nil
}

func (m *ReadRowsResponse_CellChunk) GetQualifier() *google_protobuf1.BytesValue {
	if m != nil {
		return m.Qualifier
	}
	return nil
}

func (m *ReadRowsResponse_CellChunk) GetTimestampMicros() int64 {
	if m != nil {
		return m.TimestampMicros
	}
	return 0
}

func (m *ReadRowsResponse_CellChunk) GetLabels() []string {
	if m != nil {
		return m.Labels
	}
	return nil
}

func (m *ReadRowsResponse_CellChunk) GetValue() []byte {
	if m != nil {
		return m.Value
	}
	return nil
}

func (m *ReadRowsResponse_CellChunk) GetValueSize() int32 {
	if m != nil {
		return m.ValueSize
	}
	return 0
}

func (m *ReadRowsResponse_CellChunk) GetResetRow() bool {
	if x, ok := m.GetRowStatus().(*ReadRowsResponse_CellChunk_ResetRow); ok {
		return x.ResetRow
	}
	return false
}

func (m *ReadRowsResponse_CellChunk) GetCommitRow() bool {
	if x, ok := m.GetRowStatus().(*ReadRowsResponse_CellChunk_CommitRow); ok {
		return x.CommitRow
	}
	return false
}
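As the field comments above describe, a cell value may arrive split across several CellChunks: an empty row key marks a continuation of the previous row, `commit_row` marks the end of a row, and `reset_row` tells the client to discard everything buffered for the current row. A self-contained sketch of that client-side reassembly — the local `cellChunk` struct and `stitchRows` helper are illustrative, not part of the generated API:

```go
package main

import (
	"bytes"
	"fmt"
)

// Simplified stand-in for ReadRowsResponse_CellChunk.
type cellChunk struct {
	RowKey    []byte
	Value     []byte
	CommitRow bool // last chunk of the current row
	ResetRow  bool // discard buffered state for the current row
}

// stitchRows reassembles complete row values from a chunk stream.
// An empty RowKey means "continuation of the previous row".
func stitchRows(chunks []cellChunk) map[string][]byte {
	rows := make(map[string][]byte)
	var curKey []byte
	var buf bytes.Buffer
	for _, c := range chunks {
		if c.ResetRow {
			curKey, buf = nil, bytes.Buffer{} // drop partial row
			continue
		}
		if len(c.RowKey) > 0 {
			curKey = c.RowKey
		}
		buf.Write(c.Value)
		if c.CommitRow {
			rows[string(curKey)] = append([]byte(nil), buf.Bytes()...)
			curKey, buf = nil, bytes.Buffer{}
		}
	}
	return rows
}

func main() {
	rows := stitchRows([]cellChunk{
		{RowKey: []byte("r1"), Value: []byte("he")},
		{Value: []byte("llo"), CommitRow: true}, // continuation, then commit
	})
	fmt.Printf("%s\n", rows["r1"])
}
```

The real protocol also tracks family/qualifier continuations and `value_size` pre-allocation; this sketch only covers the row-level bookkeeping.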
// XXX_OneofFuncs is for the internal use of the proto package.
func (*ReadRowsResponse_CellChunk) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
	return _ReadRowsResponse_CellChunk_OneofMarshaler, _ReadRowsResponse_CellChunk_OneofUnmarshaler, _ReadRowsResponse_CellChunk_OneofSizer, []interface{}{
		(*ReadRowsResponse_CellChunk_ResetRow)(nil),
		(*ReadRowsResponse_CellChunk_CommitRow)(nil),
	}
}

func _ReadRowsResponse_CellChunk_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
	m := msg.(*ReadRowsResponse_CellChunk)
	// row_status
	switch x := m.RowStatus.(type) {
	case *ReadRowsResponse_CellChunk_ResetRow:
		t := uint64(0)
		if x.ResetRow {
			t = 1
		}
		b.EncodeVarint(8<<3 | proto.WireVarint)
		b.EncodeVarint(t)
	case *ReadRowsResponse_CellChunk_CommitRow:
		t := uint64(0)
		if x.CommitRow {
			t = 1
		}
		b.EncodeVarint(9<<3 | proto.WireVarint)
		b.EncodeVarint(t)
	case nil:
	default:
		return fmt.Errorf("ReadRowsResponse_CellChunk.RowStatus has unexpected type %T", x)
	}
	return nil
}

func _ReadRowsResponse_CellChunk_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
	m := msg.(*ReadRowsResponse_CellChunk)
	switch tag {
	case 8: // row_status.reset_row
		if wire != proto.WireVarint {
			return true, proto.ErrInternalBadWireType
		}
		x, err := b.DecodeVarint()
		m.RowStatus = &ReadRowsResponse_CellChunk_ResetRow{x != 0}
		return true, err
	case 9: // row_status.commit_row
		if wire != proto.WireVarint {
			return true, proto.ErrInternalBadWireType
		}
		x, err := b.DecodeVarint()
		m.RowStatus = &ReadRowsResponse_CellChunk_CommitRow{x != 0}
		return true, err
	default:
		return false, nil
	}
}

func _ReadRowsResponse_CellChunk_OneofSizer(msg proto.Message) (n int) {
	m := msg.(*ReadRowsResponse_CellChunk)
	// row_status
	switch x := m.RowStatus.(type) {
	case *ReadRowsResponse_CellChunk_ResetRow:
		n += proto.SizeVarint(8<<3 | proto.WireVarint)
		n += 1
	case *ReadRowsResponse_CellChunk_CommitRow:
		n += proto.SizeVarint(9<<3 | proto.WireVarint)
		n += 1
	case nil:
	default:
		panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
	}
	return n
}
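The `row_status` oneof handled above is modelled in Go as an unexported marker interface with one wrapper struct per case; code sets the field by assigning a wrapper and reads it with a type switch, exactly as the marshaler and sizer do. A standalone sketch of that pattern, using small illustrative local types rather than the generated ones:

```go
package main

import "fmt"

// Marker interface: one wrapper type per oneof case satisfies it.
type isRowStatus interface{ isRowStatus() }

type resetRow struct{ ResetRow bool }
type commitRow struct{ CommitRow bool }

func (*resetRow) isRowStatus()  {}
func (*commitRow) isRowStatus() {}

type chunk struct{ RowStatus isRowStatus }

// describe reads the oneof with a type switch, the same shape the
// generated marshal/size helpers use.
func describe(c chunk) string {
	switch x := c.RowStatus.(type) {
	case *resetRow:
		return fmt.Sprintf("reset=%v", x.ResetRow)
	case *commitRow:
		return fmt.Sprintf("commit=%v", x.CommitRow)
	default:
		return "unset" // nil: no oneof case was assigned
	}
}

func main() {
	fmt.Println(describe(chunk{RowStatus: &commitRow{CommitRow: true}}))
	fmt.Println(describe(chunk{}))
}
```

The interface-plus-wrappers encoding is what guarantees at the type level that at most one oneof case is set at a time.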
// Request message for Bigtable.SampleRowKeys.
type SampleRowKeysRequest struct {
	// The unique name of the table from which to sample row keys.
	// Values are of the form
	// `projects/<project>/instances/<instance>/tables/<table>`.
	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
	// This is a private alpha release of Cloud Bigtable replication. This feature
	// is not currently available to most Cloud Bigtable customers. This feature
	// might be changed in backward-incompatible ways and is not recommended for
	// production use. It is not subject to any SLA or deprecation policy.
	//
	// This value specifies routing for replication. If not specified, the
	// "default" application profile will be used.
	AppProfileId string `protobuf:"bytes,2,opt,name=app_profile_id,json=appProfileId" json:"app_profile_id,omitempty"`
}

func (m *SampleRowKeysRequest) Reset()                    { *m = SampleRowKeysRequest{} }
func (m *SampleRowKeysRequest) String() string            { return proto.CompactTextString(m) }
func (*SampleRowKeysRequest) ProtoMessage()               {}
func (*SampleRowKeysRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }

func (m *SampleRowKeysRequest) GetTableName() string {
	if m != nil {
		return m.TableName
	}
	return ""
}

func (m *SampleRowKeysRequest) GetAppProfileId() string {
	if m != nil {
		return m.AppProfileId
	}
	return ""
}

// Response message for Bigtable.SampleRowKeys.
type SampleRowKeysResponse struct {
	// Sorted streamed sequence of sample row keys in the table. The table might
	// have contents before the first row key in the list and after the last one,
	// but a key containing the empty string indicates "end of table" and will be
	// the last response given, if present.
	// Note that row keys in this list may not have ever been written to or read
	// from, and users should therefore not make any assumptions about the row key
	// structure that are specific to their use case.
	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
	// Approximate total storage space used by all rows in the table which precede
	// `row_key`. Buffering the contents of all rows between two subsequent
	// samples would require space roughly equal to the difference in their
	// `offset_bytes` fields.
	OffsetBytes int64 `protobuf:"varint,2,opt,name=offset_bytes,json=offsetBytes" json:"offset_bytes,omitempty"`
}

func (m *SampleRowKeysResponse) Reset()                    { *m = SampleRowKeysResponse{} }
func (m *SampleRowKeysResponse) String() string            { return proto.CompactTextString(m) }
func (*SampleRowKeysResponse) ProtoMessage()               {}
func (*SampleRowKeysResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }

func (m *SampleRowKeysResponse) GetRowKey() []byte {
	if m != nil {
		return m.RowKey
	}
	return nil
}

func (m *SampleRowKeysResponse) GetOffsetBytes() int64 {
	if m != nil {
		return m.OffsetBytes
	}
	return 0
}
// Request message for Bigtable.MutateRow.
type MutateRowRequest struct {
	// The unique name of the table to which the mutation should be applied.
	// Values are of the form
	// `projects/<project>/instances/<instance>/tables/<table>`.
	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
	// This is a private alpha release of Cloud Bigtable replication. This feature
	// is not currently available to most Cloud Bigtable customers. This feature
	// might be changed in backward-incompatible ways and is not recommended for
	// production use. It is not subject to any SLA or deprecation policy.
	//
	// This value specifies routing for replication. If not specified, the
	// "default" application profile will be used.
	AppProfileId string `protobuf:"bytes,4,opt,name=app_profile_id,json=appProfileId" json:"app_profile_id,omitempty"`
	// The key of the row to which the mutation should be applied.
	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
	// Changes to be atomically applied to the specified row. Entries are applied
	// in order, meaning that earlier mutations can be masked by later ones.
	// Must contain at least one entry and at most 100000.
	Mutations []*Mutation `protobuf:"bytes,3,rep,name=mutations" json:"mutations,omitempty"`
}

func (m *MutateRowRequest) Reset()                    { *m = MutateRowRequest{} }
func (m *MutateRowRequest) String() string            { return proto.CompactTextString(m) }
func (*MutateRowRequest) ProtoMessage()               {}
func (*MutateRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }

func (m *MutateRowRequest) GetTableName() string {
	if m != nil {
		return m.TableName
	}
	return ""
}

func (m *MutateRowRequest) GetAppProfileId() string {
	if m != nil {
		return m.AppProfileId
	}
	return ""
}

func (m *MutateRowRequest) GetRowKey() []byte {
	if m != nil {
		return m.RowKey
	}
	return nil
}

func (m *MutateRowRequest) GetMutations() []*Mutation {
	if m != nil {
		return m.Mutations
	}
	return nil
}

// Response message for Bigtable.MutateRow.
type MutateRowResponse struct {
}

func (m *MutateRowResponse) Reset()                    { *m = MutateRowResponse{} }
func (m *MutateRowResponse) String() string            { return proto.CompactTextString(m) }
func (*MutateRowResponse) ProtoMessage()               {}
func (*MutateRowResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
// Request message for BigtableService.MutateRows.
|
|
type MutateRowsRequest struct {
|
|
// The unique name of the table to which the mutations should be applied.
|
|
TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
|
|
// This is a private alpha release of Cloud Bigtable replication. This feature
|
|
// is not currently available to most Cloud Bigtable customers. This feature
|
|
// might be changed in backward-incompatible ways and is not recommended for
|
|
// production use. It is not subject to any SLA or deprecation policy.
|
|
//
|
|
// This value specifies routing for replication. If not specified, the
|
|
// "default" application profile will be used.
|
|
AppProfileId string `protobuf:"bytes,3,opt,name=app_profile_id,json=appProfileId" json:"app_profile_id,omitempty"`
|
|
// The row keys and corresponding mutations to be applied in bulk.
|
|
// Each entry is applied as an atomic mutation, but the entries may be
|
|
// applied in arbitrary order (even between entries for the same row).
|
|
// At least one entry must be specified, and in total the entries can
|
|
// contain at most 100000 mutations.
|
|
Entries []*MutateRowsRequest_Entry `protobuf:"bytes,2,rep,name=entries" json:"entries,omitempty"`
|
|
}
|
|
|
|
func (m *MutateRowsRequest) Reset() { *m = MutateRowsRequest{} }
|
|
func (m *MutateRowsRequest) String() string { return proto.CompactTextString(m) }
|
|
func (*MutateRowsRequest) ProtoMessage() {}
|
|
func (*MutateRowsRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
|
|
|
|
func (m *MutateRowsRequest) GetTableName() string {
|
|
if m != nil {
|
|
return m.TableName
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (m *MutateRowsRequest) GetAppProfileId() string {
|
|
if m != nil {
|
|
return m.AppProfileId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (m *MutateRowsRequest) GetEntries() []*MutateRowsRequest_Entry {
|
|
if m != nil {
|
|
return m.Entries
|
|
}
|
|
return nil
|
|
}
|
|
|
|
type MutateRowsRequest_Entry struct {
	// The key of the row to which the `mutations` should be applied.
	RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
	// Changes to be atomically applied to the specified row. Mutations are
	// applied in order, meaning that earlier mutations can be masked by
	// later ones.
	// You must specify at least one mutation.
	Mutations []*Mutation `protobuf:"bytes,2,rep,name=mutations" json:"mutations,omitempty"`
}

func (m *MutateRowsRequest_Entry) Reset()                    { *m = MutateRowsRequest_Entry{} }
func (m *MutateRowsRequest_Entry) String() string            { return proto.CompactTextString(m) }
func (*MutateRowsRequest_Entry) ProtoMessage()               {}
func (*MutateRowsRequest_Entry) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6, 0} }

func (m *MutateRowsRequest_Entry) GetRowKey() []byte {
	if m != nil {
		return m.RowKey
	}
	return nil
}

func (m *MutateRowsRequest_Entry) GetMutations() []*Mutation {
	if m != nil {
		return m.Mutations
	}
	return nil
}

// Response message for BigtableService.MutateRows.
type MutateRowsResponse struct {
	// One or more results for Entries from the batch request.
	Entries []*MutateRowsResponse_Entry `protobuf:"bytes,1,rep,name=entries" json:"entries,omitempty"`
}

func (m *MutateRowsResponse) Reset()                    { *m = MutateRowsResponse{} }
func (m *MutateRowsResponse) String() string            { return proto.CompactTextString(m) }
func (*MutateRowsResponse) ProtoMessage()               {}
func (*MutateRowsResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }

func (m *MutateRowsResponse) GetEntries() []*MutateRowsResponse_Entry {
	if m != nil {
		return m.Entries
	}
	return nil
}

type MutateRowsResponse_Entry struct {
	// The index into the original request's `entries` list of the Entry
	// for which a result is being reported.
	Index int64 `protobuf:"varint,1,opt,name=index" json:"index,omitempty"`
	// The result of the request Entry identified by `index`.
	// Depending on how requests are batched during execution, it is possible
	// for one Entry to fail due to an error with another Entry. In the event
	// that this occurs, the same error will be reported for both entries.
	Status *google_rpc.Status `protobuf:"bytes,2,opt,name=status" json:"status,omitempty"`
}

func (m *MutateRowsResponse_Entry) Reset()                    { *m = MutateRowsResponse_Entry{} }
func (m *MutateRowsResponse_Entry) String() string            { return proto.CompactTextString(m) }
func (*MutateRowsResponse_Entry) ProtoMessage()               {}
func (*MutateRowsResponse_Entry) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7, 0} }

func (m *MutateRowsResponse_Entry) GetIndex() int64 {
	if m != nil {
		return m.Index
	}
	return 0
}

func (m *MutateRowsResponse_Entry) GetStatus() *google_rpc.Status {
	if m != nil {
		return m.Status
	}
	return nil
}

// Request message for Bigtable.CheckAndMutateRow.
type CheckAndMutateRowRequest struct {
	// The unique name of the table to which the conditional mutation should be
	// applied.
	// Values are of the form
	// `projects/<project>/instances/<instance>/tables/<table>`.
	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
	// This is a private alpha release of Cloud Bigtable replication. This feature
	// is not currently available to most Cloud Bigtable customers. This feature
	// might be changed in backward-incompatible ways and is not recommended for
	// production use. It is not subject to any SLA or deprecation policy.
	//
	// This value specifies routing for replication. If not specified, the
	// "default" application profile will be used.
	AppProfileId string `protobuf:"bytes,7,opt,name=app_profile_id,json=appProfileId" json:"app_profile_id,omitempty"`
	// The key of the row to which the conditional mutation should be applied.
	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
	// The filter to be applied to the contents of the specified row. Depending
	// on whether or not any results are yielded, either `true_mutations` or
	// `false_mutations` will be executed. If unset, checks that the row contains
	// any values at all.
	PredicateFilter *RowFilter `protobuf:"bytes,6,opt,name=predicate_filter,json=predicateFilter" json:"predicate_filter,omitempty"`
	// Changes to be atomically applied to the specified row if `predicate_filter`
	// yields at least one cell when applied to `row_key`. Entries are applied in
	// order, meaning that earlier mutations can be masked by later ones.
	// Must contain at least one entry if `false_mutations` is empty, and at most
	// 100000.
	TrueMutations []*Mutation `protobuf:"bytes,4,rep,name=true_mutations,json=trueMutations" json:"true_mutations,omitempty"`
	// Changes to be atomically applied to the specified row if `predicate_filter`
	// does not yield any cells when applied to `row_key`. Entries are applied in
	// order, meaning that earlier mutations can be masked by later ones.
	// Must contain at least one entry if `true_mutations` is empty, and at most
	// 100000.
	FalseMutations []*Mutation `protobuf:"bytes,5,rep,name=false_mutations,json=falseMutations" json:"false_mutations,omitempty"`
}

func (m *CheckAndMutateRowRequest) Reset()                    { *m = CheckAndMutateRowRequest{} }
func (m *CheckAndMutateRowRequest) String() string            { return proto.CompactTextString(m) }
func (*CheckAndMutateRowRequest) ProtoMessage()               {}
func (*CheckAndMutateRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }

func (m *CheckAndMutateRowRequest) GetTableName() string {
	if m != nil {
		return m.TableName
	}
	return ""
}

func (m *CheckAndMutateRowRequest) GetAppProfileId() string {
	if m != nil {
		return m.AppProfileId
	}
	return ""
}

func (m *CheckAndMutateRowRequest) GetRowKey() []byte {
	if m != nil {
		return m.RowKey
	}
	return nil
}

func (m *CheckAndMutateRowRequest) GetPredicateFilter() *RowFilter {
	if m != nil {
		return m.PredicateFilter
	}
	return nil
}

func (m *CheckAndMutateRowRequest) GetTrueMutations() []*Mutation {
	if m != nil {
		return m.TrueMutations
	}
	return nil
}

func (m *CheckAndMutateRowRequest) GetFalseMutations() []*Mutation {
	if m != nil {
		return m.FalseMutations
	}
	return nil
}

// Response message for Bigtable.CheckAndMutateRow.
type CheckAndMutateRowResponse struct {
	// Whether or not the request's `predicate_filter` yielded any results for
	// the specified row.
	PredicateMatched bool `protobuf:"varint,1,opt,name=predicate_matched,json=predicateMatched" json:"predicate_matched,omitempty"`
}

func (m *CheckAndMutateRowResponse) Reset()                    { *m = CheckAndMutateRowResponse{} }
func (m *CheckAndMutateRowResponse) String() string            { return proto.CompactTextString(m) }
func (*CheckAndMutateRowResponse) ProtoMessage()               {}
func (*CheckAndMutateRowResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }

func (m *CheckAndMutateRowResponse) GetPredicateMatched() bool {
	if m != nil {
		return m.PredicateMatched
	}
	return false
}

// Request message for Bigtable.ReadModifyWriteRow.
type ReadModifyWriteRowRequest struct {
	// The unique name of the table to which the read/modify/write rules should be
	// applied.
	// Values are of the form
	// `projects/<project>/instances/<instance>/tables/<table>`.
	TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName" json:"table_name,omitempty"`
	// This is a private alpha release of Cloud Bigtable replication. This feature
	// is not currently available to most Cloud Bigtable customers. This feature
	// might be changed in backward-incompatible ways and is not recommended for
	// production use. It is not subject to any SLA or deprecation policy.
	//
	// This value specifies routing for replication. If not specified, the
	// "default" application profile will be used.
	AppProfileId string `protobuf:"bytes,4,opt,name=app_profile_id,json=appProfileId" json:"app_profile_id,omitempty"`
	// The key of the row to which the read/modify/write rules should be applied.
	RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
	// Rules specifying how the specified row's contents are to be transformed
	// into writes. Entries are applied in order, meaning that earlier rules will
	// affect the results of later ones.
	Rules []*ReadModifyWriteRule `protobuf:"bytes,3,rep,name=rules" json:"rules,omitempty"`
}

func (m *ReadModifyWriteRowRequest) Reset()                    { *m = ReadModifyWriteRowRequest{} }
func (m *ReadModifyWriteRowRequest) String() string            { return proto.CompactTextString(m) }
func (*ReadModifyWriteRowRequest) ProtoMessage()               {}
func (*ReadModifyWriteRowRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }

func (m *ReadModifyWriteRowRequest) GetTableName() string {
	if m != nil {
		return m.TableName
	}
	return ""
}

func (m *ReadModifyWriteRowRequest) GetAppProfileId() string {
	if m != nil {
		return m.AppProfileId
	}
	return ""
}

func (m *ReadModifyWriteRowRequest) GetRowKey() []byte {
	if m != nil {
		return m.RowKey
	}
	return nil
}

func (m *ReadModifyWriteRowRequest) GetRules() []*ReadModifyWriteRule {
	if m != nil {
		return m.Rules
	}
	return nil
}

// Response message for Bigtable.ReadModifyWriteRow.
type ReadModifyWriteRowResponse struct {
	// A Row containing the new contents of all cells modified by the request.
	Row *Row `protobuf:"bytes,1,opt,name=row" json:"row,omitempty"`
}

func (m *ReadModifyWriteRowResponse) Reset()                    { *m = ReadModifyWriteRowResponse{} }
func (m *ReadModifyWriteRowResponse) String() string            { return proto.CompactTextString(m) }
func (*ReadModifyWriteRowResponse) ProtoMessage()               {}
func (*ReadModifyWriteRowResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }

func (m *ReadModifyWriteRowResponse) GetRow() *Row {
	if m != nil {
		return m.Row
	}
	return nil
}

func init() {
	proto.RegisterType((*ReadRowsRequest)(nil), "google.bigtable.v2.ReadRowsRequest")
	proto.RegisterType((*ReadRowsResponse)(nil), "google.bigtable.v2.ReadRowsResponse")
	proto.RegisterType((*ReadRowsResponse_CellChunk)(nil), "google.bigtable.v2.ReadRowsResponse.CellChunk")
	proto.RegisterType((*SampleRowKeysRequest)(nil), "google.bigtable.v2.SampleRowKeysRequest")
	proto.RegisterType((*SampleRowKeysResponse)(nil), "google.bigtable.v2.SampleRowKeysResponse")
	proto.RegisterType((*MutateRowRequest)(nil), "google.bigtable.v2.MutateRowRequest")
	proto.RegisterType((*MutateRowResponse)(nil), "google.bigtable.v2.MutateRowResponse")
	proto.RegisterType((*MutateRowsRequest)(nil), "google.bigtable.v2.MutateRowsRequest")
	proto.RegisterType((*MutateRowsRequest_Entry)(nil), "google.bigtable.v2.MutateRowsRequest.Entry")
	proto.RegisterType((*MutateRowsResponse)(nil), "google.bigtable.v2.MutateRowsResponse")
	proto.RegisterType((*MutateRowsResponse_Entry)(nil), "google.bigtable.v2.MutateRowsResponse.Entry")
	proto.RegisterType((*CheckAndMutateRowRequest)(nil), "google.bigtable.v2.CheckAndMutateRowRequest")
	proto.RegisterType((*CheckAndMutateRowResponse)(nil), "google.bigtable.v2.CheckAndMutateRowResponse")
	proto.RegisterType((*ReadModifyWriteRowRequest)(nil), "google.bigtable.v2.ReadModifyWriteRowRequest")
	proto.RegisterType((*ReadModifyWriteRowResponse)(nil), "google.bigtable.v2.ReadModifyWriteRowResponse")
}

// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn

// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4

// Client API for Bigtable service

type BigtableClient interface {
	// Streams back the contents of all requested rows in key order, optionally
	// applying the same Reader filter to each. Depending on their size,
	// rows and cells may be broken up across multiple responses, but
	// atomicity of each row will still be preserved. See the
	// ReadRowsResponse documentation for details.
	ReadRows(ctx context.Context, in *ReadRowsRequest, opts ...grpc.CallOption) (Bigtable_ReadRowsClient, error)
	// Returns a sample of row keys in the table. The returned row keys will
	// delimit contiguous sections of the table of approximately equal size,
	// which can be used to break up the data for distributed tasks like
	// mapreduces.
	SampleRowKeys(ctx context.Context, in *SampleRowKeysRequest, opts ...grpc.CallOption) (Bigtable_SampleRowKeysClient, error)
	// Mutates a row atomically. Cells already present in the row are left
	// unchanged unless explicitly changed by `mutation`.
	MutateRow(ctx context.Context, in *MutateRowRequest, opts ...grpc.CallOption) (*MutateRowResponse, error)
	// Mutates multiple rows in a batch. Each individual row is mutated
	// atomically as in MutateRow, but the entire batch is not executed
	// atomically.
	MutateRows(ctx context.Context, in *MutateRowsRequest, opts ...grpc.CallOption) (Bigtable_MutateRowsClient, error)
	// Mutates a row atomically based on the output of a predicate Reader filter.
	CheckAndMutateRow(ctx context.Context, in *CheckAndMutateRowRequest, opts ...grpc.CallOption) (*CheckAndMutateRowResponse, error)
	// Modifies a row atomically on the server. The method reads the latest
	// existing timestamp and value from the specified columns and writes a new
	// entry based on pre-defined read/modify/write rules. The new value for the
	// timestamp is the greater of the existing timestamp or the current server
	// time. The method returns the new contents of all modified cells.
	ReadModifyWriteRow(ctx context.Context, in *ReadModifyWriteRowRequest, opts ...grpc.CallOption) (*ReadModifyWriteRowResponse, error)
}

type bigtableClient struct {
	cc *grpc.ClientConn
}

func NewBigtableClient(cc *grpc.ClientConn) BigtableClient {
	return &bigtableClient{cc}
}

func (c *bigtableClient) ReadRows(ctx context.Context, in *ReadRowsRequest, opts ...grpc.CallOption) (Bigtable_ReadRowsClient, error) {
	stream, err := grpc.NewClientStream(ctx, &_Bigtable_serviceDesc.Streams[0], c.cc, "/google.bigtable.v2.Bigtable/ReadRows", opts...)
	if err != nil {
		return nil, err
	}
	x := &bigtableReadRowsClient{stream}
	if err := x.ClientStream.SendMsg(in); err != nil {
		return nil, err
	}
	if err := x.ClientStream.CloseSend(); err != nil {
		return nil, err
	}
	return x, nil
}

type Bigtable_ReadRowsClient interface {
	Recv() (*ReadRowsResponse, error)
	grpc.ClientStream
}

type bigtableReadRowsClient struct {
	grpc.ClientStream
}

func (x *bigtableReadRowsClient) Recv() (*ReadRowsResponse, error) {
	m := new(ReadRowsResponse)
	if err := x.ClientStream.RecvMsg(m); err != nil {
		return nil, err
	}
	return m, nil
}

func (c *bigtableClient) SampleRowKeys(ctx context.Context, in *SampleRowKeysRequest, opts ...grpc.CallOption) (Bigtable_SampleRowKeysClient, error) {
	stream, err := grpc.NewClientStream(ctx, &_Bigtable_serviceDesc.Streams[1], c.cc, "/google.bigtable.v2.Bigtable/SampleRowKeys", opts...)
	if err != nil {
		return nil, err
	}
	x := &bigtableSampleRowKeysClient{stream}
	if err := x.ClientStream.SendMsg(in); err != nil {
		return nil, err
	}
	if err := x.ClientStream.CloseSend(); err != nil {
		return nil, err
	}
	return x, nil
}

type Bigtable_SampleRowKeysClient interface {
	Recv() (*SampleRowKeysResponse, error)
	grpc.ClientStream
}

type bigtableSampleRowKeysClient struct {
	grpc.ClientStream
}

func (x *bigtableSampleRowKeysClient) Recv() (*SampleRowKeysResponse, error) {
	m := new(SampleRowKeysResponse)
	if err := x.ClientStream.RecvMsg(m); err != nil {
		return nil, err
	}
	return m, nil
}

func (c *bigtableClient) MutateRow(ctx context.Context, in *MutateRowRequest, opts ...grpc.CallOption) (*MutateRowResponse, error) {
	out := new(MutateRowResponse)
	err := grpc.Invoke(ctx, "/google.bigtable.v2.Bigtable/MutateRow", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *bigtableClient) MutateRows(ctx context.Context, in *MutateRowsRequest, opts ...grpc.CallOption) (Bigtable_MutateRowsClient, error) {
	stream, err := grpc.NewClientStream(ctx, &_Bigtable_serviceDesc.Streams[2], c.cc, "/google.bigtable.v2.Bigtable/MutateRows", opts...)
	if err != nil {
		return nil, err
	}
	x := &bigtableMutateRowsClient{stream}
	if err := x.ClientStream.SendMsg(in); err != nil {
		return nil, err
	}
	if err := x.ClientStream.CloseSend(); err != nil {
		return nil, err
	}
	return x, nil
}

type Bigtable_MutateRowsClient interface {
	Recv() (*MutateRowsResponse, error)
	grpc.ClientStream
}

type bigtableMutateRowsClient struct {
	grpc.ClientStream
}

func (x *bigtableMutateRowsClient) Recv() (*MutateRowsResponse, error) {
	m := new(MutateRowsResponse)
	if err := x.ClientStream.RecvMsg(m); err != nil {
		return nil, err
	}
	return m, nil
}

func (c *bigtableClient) CheckAndMutateRow(ctx context.Context, in *CheckAndMutateRowRequest, opts ...grpc.CallOption) (*CheckAndMutateRowResponse, error) {
	out := new(CheckAndMutateRowResponse)
	err := grpc.Invoke(ctx, "/google.bigtable.v2.Bigtable/CheckAndMutateRow", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *bigtableClient) ReadModifyWriteRow(ctx context.Context, in *ReadModifyWriteRowRequest, opts ...grpc.CallOption) (*ReadModifyWriteRowResponse, error) {
	out := new(ReadModifyWriteRowResponse)
	err := grpc.Invoke(ctx, "/google.bigtable.v2.Bigtable/ReadModifyWriteRow", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

// Server API for Bigtable service

type BigtableServer interface {
	// Streams back the contents of all requested rows in key order, optionally
	// applying the same Reader filter to each. Depending on their size,
	// rows and cells may be broken up across multiple responses, but
	// atomicity of each row will still be preserved. See the
	// ReadRowsResponse documentation for details.
	ReadRows(*ReadRowsRequest, Bigtable_ReadRowsServer) error
	// Returns a sample of row keys in the table. The returned row keys will
	// delimit contiguous sections of the table of approximately equal size,
	// which can be used to break up the data for distributed tasks like
	// mapreduces.
	SampleRowKeys(*SampleRowKeysRequest, Bigtable_SampleRowKeysServer) error
	// Mutates a row atomically. Cells already present in the row are left
	// unchanged unless explicitly changed by `mutation`.
	MutateRow(context.Context, *MutateRowRequest) (*MutateRowResponse, error)
	// Mutates multiple rows in a batch. Each individual row is mutated
	// atomically as in MutateRow, but the entire batch is not executed
	// atomically.
	MutateRows(*MutateRowsRequest, Bigtable_MutateRowsServer) error
	// Mutates a row atomically based on the output of a predicate Reader filter.
	CheckAndMutateRow(context.Context, *CheckAndMutateRowRequest) (*CheckAndMutateRowResponse, error)
	// Modifies a row atomically on the server. The method reads the latest
	// existing timestamp and value from the specified columns and writes a new
	// entry based on pre-defined read/modify/write rules. The new value for the
	// timestamp is the greater of the existing timestamp or the current server
	// time. The method returns the new contents of all modified cells.
	ReadModifyWriteRow(context.Context, *ReadModifyWriteRowRequest) (*ReadModifyWriteRowResponse, error)
}

func RegisterBigtableServer(s *grpc.Server, srv BigtableServer) {
	s.RegisterService(&_Bigtable_serviceDesc, srv)
}

func _Bigtable_ReadRows_Handler(srv interface{}, stream grpc.ServerStream) error {
	m := new(ReadRowsRequest)
	if err := stream.RecvMsg(m); err != nil {
		return err
	}
	return srv.(BigtableServer).ReadRows(m, &bigtableReadRowsServer{stream})
}

type Bigtable_ReadRowsServer interface {
	Send(*ReadRowsResponse) error
	grpc.ServerStream
}

type bigtableReadRowsServer struct {
	grpc.ServerStream
}

func (x *bigtableReadRowsServer) Send(m *ReadRowsResponse) error {
	return x.ServerStream.SendMsg(m)
}

func _Bigtable_SampleRowKeys_Handler(srv interface{}, stream grpc.ServerStream) error {
	m := new(SampleRowKeysRequest)
	if err := stream.RecvMsg(m); err != nil {
		return err
	}
	return srv.(BigtableServer).SampleRowKeys(m, &bigtableSampleRowKeysServer{stream})
}

type Bigtable_SampleRowKeysServer interface {
	Send(*SampleRowKeysResponse) error
	grpc.ServerStream
}

type bigtableSampleRowKeysServer struct {
	grpc.ServerStream
}

func (x *bigtableSampleRowKeysServer) Send(m *SampleRowKeysResponse) error {
	return x.ServerStream.SendMsg(m)
}

func _Bigtable_MutateRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(MutateRowRequest)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(BigtableServer).MutateRow(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/google.bigtable.v2.Bigtable/MutateRow",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(BigtableServer).MutateRow(ctx, req.(*MutateRowRequest))
	}
	return interceptor(ctx, in, info, handler)
}

func _Bigtable_MutateRows_Handler(srv interface{}, stream grpc.ServerStream) error {
	m := new(MutateRowsRequest)
	if err := stream.RecvMsg(m); err != nil {
		return err
	}
	return srv.(BigtableServer).MutateRows(m, &bigtableMutateRowsServer{stream})
}

type Bigtable_MutateRowsServer interface {
	Send(*MutateRowsResponse) error
	grpc.ServerStream
}

type bigtableMutateRowsServer struct {
	grpc.ServerStream
}

func (x *bigtableMutateRowsServer) Send(m *MutateRowsResponse) error {
	return x.ServerStream.SendMsg(m)
}

func _Bigtable_CheckAndMutateRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(CheckAndMutateRowRequest)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(BigtableServer).CheckAndMutateRow(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/google.bigtable.v2.Bigtable/CheckAndMutateRow",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(BigtableServer).CheckAndMutateRow(ctx, req.(*CheckAndMutateRowRequest))
	}
	return interceptor(ctx, in, info, handler)
}

func _Bigtable_ReadModifyWriteRow_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(ReadModifyWriteRowRequest)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(BigtableServer).ReadModifyWriteRow(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/google.bigtable.v2.Bigtable/ReadModifyWriteRow",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(BigtableServer).ReadModifyWriteRow(ctx, req.(*ReadModifyWriteRowRequest))
	}
	return interceptor(ctx, in, info, handler)
}

var _Bigtable_serviceDesc = grpc.ServiceDesc{
	ServiceName: "google.bigtable.v2.Bigtable",
	HandlerType: (*BigtableServer)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "MutateRow",
			Handler:    _Bigtable_MutateRow_Handler,
		},
		{
			MethodName: "CheckAndMutateRow",
			Handler:    _Bigtable_CheckAndMutateRow_Handler,
		},
		{
			MethodName: "ReadModifyWriteRow",
			Handler:    _Bigtable_ReadModifyWriteRow_Handler,
		},
	},
	Streams: []grpc.StreamDesc{
		{
			StreamName:    "ReadRows",
			Handler:       _Bigtable_ReadRows_Handler,
			ServerStreams: true,
		},
		{
			StreamName:    "SampleRowKeys",
			Handler:       _Bigtable_SampleRowKeys_Handler,
			ServerStreams: true,
		},
		{
			StreamName:    "MutateRows",
			Handler:       _Bigtable_MutateRows_Handler,
			ServerStreams: true,
		},
	},
	Metadata: "google/bigtable/v2/bigtable.proto",
}

func init() { proto.RegisterFile("google/bigtable/v2/bigtable.proto", fileDescriptor0) }

var fileDescriptor0 = []byte{
	// 1210 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x57, 0x41, 0x6f, 0x1b, 0x45,
	0x14, 0x66, 0xec, 0xd8, 0xf1, 0xbe, 0xa4, 0x4d, 0x32, 0x84, 0x76, 0x6b, 0x5a, 0x70, 0x97, 0x16,
	0xdc, 0x94, 0xae, 0x2b, 0x23, 0x0e, 0x75, 0xd5, 0x02, 0x09, 0x69, 0x53, 0x41, 0xaa, 0x6a, 0x2c,
	0x15, 0x09, 0x22, 0xad, 0xc6, 0xeb, 0xb1, 0x3b, 0x74, 0x77, 0x67, 0xbb, 0x3b, 0x5b, 0xe3, 0x22,
	0x2e, 0xfc, 0x05, 0x8e, 0x08, 0x71, 0x42, 0x48, 0x08, 0x38, 0x73, 0xe3, 0xc0, 0x8d, 0x03, 0x17,
	0xae, 0x1c, 0xfb, 0x0b, 0xb8, 0x23, 0xa1, 0x9d, 0x9d, 0xb5, 0x9d, 0xc4, 0x6e, 0x9d, 0x20, 0x71,
	0xdb, 0x7d, 0xef, 0x7d, 0x6f, 0xbf, 0xf7, 0xbd, 0x37, 0x6f, 0x6c, 0x38, 0xdf, 0x17, 0xa2, 0xef,
	0xb1, 0x46, 0x87, 0xf7, 0x25, 0xed, 0x78, 0xac, 0xf1, 0xb8, 0x39, 0x7a, 0xb6, 0xc3, 0x48, 0x48,
	0x81, 0x71, 0x16, 0x62, 0x8f, 0xcc, 0x8f, 0x9b, 0xd5, 0xb3, 0x1a, 0x46, 0x43, 0xde, 0xa0, 0x41,
	0x20, 0x24, 0x95, 0x5c, 0x04, 0x71, 0x86, 0xa8, 0x9e, 0x9b, 0x92, 0xb4, 0x4b, 0x25, 0xd5, 0xee,
	0x57, 0xb4, 0x5b, 0xbd, 0x75, 0x92, 0x5e, 0x63, 0x10, 0xd1, 0x30, 0x64, 0x51, 0x0e, 0x3f, 0xad,
	0xfd, 0x51, 0xe8, 0x36, 0x62, 0x49, 0x65, 0xa2, 0x1d, 0xd6, 0x5f, 0x08, 0x56, 0x08, 0xa3, 0x5d,
	0x22, 0x06, 0x31, 0x61, 0x8f, 0x12, 0x16, 0x4b, 0x7c, 0x0e, 0x40, 0x7d, 0xc3, 0x09, 0xa8, 0xcf,
	0x4c, 0x54, 0x43, 0x75, 0x83, 0x18, 0xca, 0x72, 0x97, 0xfa, 0x0c, 0x5f, 0x80, 0x93, 0x34, 0x0c,
	0x9d, 0x30, 0x12, 0x3d, 0xee, 0x31, 0x87, 0x77, 0xcd, 0x92, 0x0a, 0x59, 0xa6, 0x61, 0x78, 0x2f,
	0x33, 0xde, 0xe9, 0x62, 0x1b, 0x16, 0x22, 0x31, 0x88, 0xcd, 0x42, 0x0d, 0xd5, 0x97, 0x9a, 0x55,
	0xfb, 0x70, 0xc5, 0x36, 0x11, 0x83, 0x36, 0x93, 0x44, 0xc5, 0xe1, 0xb7, 0xa1, 0xdc, 0xe3, 0x9e,
	0x64, 0x91, 0x59, 0x54, 0x88, 0x73, 0x33, 0x10, 0xb7, 0x54, 0x10, 0xd1, 0xc1, 0x29, 0xd7, 0x14,
	0xee, 0x78, 0xdc, 0xe7, 0xd2, 0x5c, 0xa8, 0xa1, 0x7a, 0x91, 0x18, 0xa9, 0xe5, 0xc3, 0xd4, 0x60,
	0xfd, 0x5d, 0x84, 0xd5, 0x71, 0x79, 0x71, 0x28, 0x82, 0x98, 0xe1, 0x5b, 0x50, 0x76, 0x1f, 0x24,
	0xc1, 0xc3, 0xd8, 0x44, 0xb5, 0x62, 0x7d, 0xa9, 0x69, 0x4f, 0xfd, 0xd4, 0x01, 0x94, 0xbd, 0xc5,
	0x3c, 0x6f, 0x2b, 0x85, 0x11, 0x8d, 0xc6, 0x0d, 0x58, 0xf7, 0x68, 0x2c, 0x9d, 0xd8, 0xa5, 0x41,
	0xc0, 0xba, 0x4e, 0x24, 0x06, 0xce, 0x43, 0x36, 0x54, 0x25, 0x2f, 0x93, 0xb5, 0xd4, 0xd7, 0xce,
	0x5c, 0x44, 0x0c, 0x3e, 0x60, 0xc3, 0xea, 0xd3, 0x02, 0x18, 0xa3, 0x34, 0xf8, 0x34, 0x2c, 0xe6,
	0x08, 0xa4, 0x10, 0xe5, 0x48, 0x85, 0xe1, 0x1b, 0xb0, 0xd4, 0xa3, 0x3e, 0xf7, 0x86, 0x59, 0x03,
	0x32, 0x05, 0xcf, 0xe6, 0x24, 0xf3, 0x16, 0xdb, 0x6d, 0x19, 0xf1, 0xa0, 0x7f, 0x9f, 0x7a, 0x09,
	0x23, 0x90, 0x01, 0x54, 0x7f, 0xae, 0x81, 0xf1, 0x28, 0xa1, 0x1e, 0xef, 0xf1, 0x91, 0x98, 0x2f,
	0x1f, 0x02, 0x6f, 0x0e, 0x25, 0x8b, 0x33, 0xec, 0x38, 0x1a, 0x5f, 0x82, 0x55, 0xc9, 0x7d, 0x16,
	0x4b, 0xea, 0x87, 0x8e, 0xcf, 0xdd, 0x48, 0xc4, 0x5a, 0xd3, 0x95, 0x91, 0x7d, 0x57, 0x99, 0xf1,
	0x29, 0x28, 0x7b, 0xb4, 0xc3, 0xbc, 0xd8, 0x2c, 0xd5, 0x8a, 0x75, 0x83, 0xe8, 0x37, 0xbc, 0x0e,
	0xa5, 0xc7, 0x69, 0x5a, 0xb3, 0xac, 0x6a, 0xca, 0x5e, 0xd2, 0x36, 0xa9, 0x07, 0x27, 0xe6, 0x4f,
	0x98, 0xb9, 0x58, 0x43, 0xf5, 0x12, 0x31, 0x94, 0xa5, 0xcd, 0x9f, 0xa4, 0x6e, 0x23, 0x62, 0x31,
	0x93, 0xa9, 0x84, 0x66, 0xa5, 0x86, 0xea, 0x95, 0x9d, 0x17, 0x48, 0x45, 0x99, 0x88, 0x18, 0xe0,
	0x57, 0x01, 0x5c, 0xe1, 0xfb, 0x3c, 0xf3, 0x1b, 0xda, 0x6f, 0x64, 0x36, 0x22, 0x06, 0x9b, 0xcb,
	0x6a, 0x0a, 0x9c, 0x6c, 0xb2, 0xad, 0x4f, 0x60, 0xbd, 0x4d, 0xfd, 0xd0, 0x63, 0x99, 0xec, 0xc7,
	0x9f, 0xeb, 0xc2, 0xe1, 0xb9, 0xb6, 0xda, 0xf0, 0xd2, 0x81, 0xe4, 0x7a, 0xaa, 0x66, 0xb6, 0xf3,
	0x3c, 0x2c, 0x8b, 0x5e, 0x2f, 0xad, 0xae, 0x93, 0x8a, 0xae, 0xb2, 0x16, 0xc9, 0x52, 0x66, 0x53,
	0x7d, 0xb0, 0x7e, 0x44, 0xb0, 0xba, 0x9b, 0x48, 0x2a, 0xd3, 0xac, 0xc7, 0xa6, 0xbb, 0x30, 0xe5,
	0x18, 0x4e, 0xb0, 0x2a, 0xec, 0x63, 0xd5, 0x02, 0xc3, 0x4f, 0xf4, 0x8e, 0x31, 0x8b, 0xea, 0x1c,
	0x9c, 0x9d, 0x76, 0x0e, 0x76, 0x75, 0x10, 0x19, 0x87, 0x5b, 0x2f, 0xc2, 0xda, 0x04, 0xdb, 0xac,
	0x7e, 0xeb, 0x1f, 0x34, 0x61, 0x3d, 0xbe, 0xe6, 0xc5, 0x29, 0x45, 0x6c, 0xc3, 0x22, 0x0b, 0x64,
	0xc4, 0x95, 0x78, 0x29, 0xd3, 0xcb, 0x33, 0x99, 0x4e, 0x7e, 0xdc, 0xde, 0x0e, 0x64, 0x34, 0x24,
	0x39, 0xb6, 0xba, 0x07, 0x25, 0x65, 0x99, 0xdd, 0xaa, 0x7d, 0xa2, 0x14, 0x8e, 0x26, 0xca, 0xf7,
	0x08, 0xf0, 0x24, 0x85, 0xd1, 0xb2, 0x19, 0x71, 0xcf, 0xb6, 0xcd, 0x9b, 0xcf, 0xe3, 0xae, 0xf7,
	0xcd, 0x01, 0xf2, 0x77, 0x72, 0xf2, 0xeb, 0x50, 0xe2, 0x41, 0x97, 0x7d, 0xa6, 0xa8, 0x17, 0x49,
	0xf6, 0x82, 0x37, 0xa0, 0x9c, 0x4d, 0xbf, 0x5e, 0x17, 0x38, 0xff, 0x4a, 0x14, 0xba, 0x76, 0x5b,
	0x79, 0x88, 0x8e, 0xb0, 0xfe, 0x28, 0x80, 0xb9, 0xf5, 0x80, 0xb9, 0x0f, 0xdf, 0x0b, 0xba, 0xff,
	0x7d, 0xea, 0x16, 0x8f, 0x32, 0x75, 0x3b, 0xb0, 0x1a, 0x46, 0xac, 0xcb, 0x5d, 0x2a, 0x99, 0xa3,
	0xf7, 0x7d, 0x79, 0x9e, 0x7d, 0xbf, 0x32, 0x82, 0x65, 0x06, 0xbc, 0x05, 0x27, 0x65, 0x94, 0x30,
	0x67, 0xdc, 0xaf, 0x85, 0x39, 0xfa, 0x75, 0x22, 0xc5, 0xe4, 0x6f, 0x31, 0xde, 0x86, 0x95, 0x1e,
	0xf5, 0xe2, 0xc9, 0x2c, 0xa5, 0x39, 0xb2, 0x9c, 0x54, 0xa0, 0x51, 0x1a, 0x6b, 0x07, 0xce, 0x4c,
	0xd1, 0x53, 0x0f, 0xc0, 0x65, 0x58, 0x1b, 0x97, 0xec, 0x53, 0xe9, 0x3e, 0x60, 0x5d, 0xa5, 0x6b,
	0x85, 0x8c, 0xb5, 0xd8, 0xcd, 0xec, 0xd6, 0x2f, 0x08, 0xce, 0xa4, 0x37, 0xcf, 0xae, 0xe8, 0xf2,
	0xde, 0xf0, 0xa3, 0x88, 0xff, 0x8f, 0x1b, 0xe1, 0x06, 0x94, 0xa2, 0xc4, 0x63, 0xf9, 0x36, 0x78,
	0x63, 0xd6, 0xad, 0x38, 0xc9, 0x2d, 0xf1, 0x18, 0xc9, 0x50, 0xd6, 0x6d, 0xa8, 0x4e, 0x63, 0xae,
	0x55, 0xb8, 0x04, 0xc5, 0x74, 0x77, 0x23, 0xd5, 0xeb, 0xd3, 0x33, 0x7a, 0x4d, 0xd2, 0x98, 0xe6,
	0x4f, 0x15, 0xa8, 0x6c, 0x6a, 0x07, 0xfe, 0x06, 0x41, 0x25, 0xbf, 0x8a, 0xf1, 0x6b, 0xcf, 0xbe,
	0xa8, 0x95, 0x48, 0xd5, 0x0b, 0xf3, 0xdc, 0xe6, 0xd6, 0xfb, 0x5f, 0xfe, 0xf9, 0xf4, 0xab, 0xc2,
	0x4d, 0xeb, 0x5a, 0xfa, 0x43, 0xea, 0xf3, 0xb1, 0xaa, 0x37, 0xc2, 0x48, 0x7c, 0xca, 0x5c, 0x19,
	0x37, 0x36, 0x1a, 0x3c, 0x88, 0x25, 0x0d, 0x5c, 0x96, 0x3e, 0xab, 0x88, 0xb8, 0xb1, 0xf1, 0x45,
	0x2b, 0xd2, 0xa9, 0x5a, 0x68, 0xe3, 0x2a, 0xc2, 0x3f, 0x23, 0x38, 0xb1, 0xef, 0x3e, 0xc0, 0xf5,
	0x69, 0xdf, 0x9f, 0x76, 0x1f, 0x55, 0x2f, 0xcd, 0x11, 0xa9, 0xe9, 0xde, 0x52, 0x74, 0xdf, 0xc5,
	0x37, 0x8f, 0x4c, 0x37, 0x9e, 0xcc, 0x77, 0x15, 0xe1, 0x6f, 0x11, 0x18, 0xa3, 0x21, 0xc5, 0x17,
	0x9e, 0xb9, 0x8c, 0x72, 0xa2, 0x17, 0x9f, 0x13, 0xa5, 0x49, 0x6e, 0x2b, 0x92, 0xef, 0x58, 0xad,
	0x23, 0x93, 0xf4, 0xf3, 0x5c, 0x2d, 0xb4, 0x81, 0xbf, 0x43, 0x00, 0xe3, 0x7d, 0x88, 0x2f, 0xce,
	0xb5, 0xeb, 0xab, 0xaf, 0xcf, 0xb7, 0x56, 0x73, 0x25, 0xad, 0xeb, 0xc7, 0x27, 0xa9, 0x5b, 0xff,
	0x2b, 0x82, 0xb5, 0x43, 0xc7, 0x1e, 0x4f, 0x5d, 0xef, 0xb3, 0xb6, 0x6d, 0xf5, 0xca, 0x9c, 0xd1,
	0x9a, 0xfc, 0xae, 0x22, 0x7f, 0xdb, 0xda, 0x3c, 0x32, 0x79, 0xf7, 0x60, 0xce, 0x54, 0xe9, 0xdf,
	0x10, 0xe0, 0xc3, 0x67, 0x16, 0x5f, 0x99, 0xe7, 0xe4, 0x8f, 0x6b, 0xb0, 0xe7, 0x0d, 0xd7, 0x45,
	0xdc, 0x55, 0x45, 0xec, 0x58, 0x5b, 0xc7, 0x3a, 0x7a, 0xfb, 0x93, 0xb6, 0xd0, 0xc6, 0xe6, 0xd7,
	0x08, 0x4e, 0xb9, 0xc2, 0x9f, 0xc2, 0x62, 0xf3, 0x44, 0xbe, 0x47, 0xee, 0xa5, 0xbf, 0x7b, 0xef,
	0xa1, 0x8f, 0x5b, 0x3a, 0xa8, 0x2f, 0x3c, 0x1a, 0xf4, 0x6d, 0x11, 0xf5, 0x1b, 0x7d, 0x16, 0xa8,
	0x5f, 0xc5, 0x8d, 0xcc, 0x45, 0x43, 0x1e, 0x4f, 0xfe, 0xcb, 0xba, 0x9e, 0x3f, 0xff, 0x50, 0x30,
	0x6f, 0x67, 0xe0, 0x2d, 0x4f, 0x24, 0x5d, 0x3b, 0x4f, 0x6d, 0xdf, 0x6f, 0xfe, 0x9e, 0xbb, 0xf6,
	0x94, 0x6b, 0x2f, 0x77, 0xed, 0xdd, 0x6f, 0x76, 0xca, 0x2a, 0xf9, 0x5b, 0xff, 0x06, 0x00, 0x00,
	0xff, 0xff, 0xd6, 0x35, 0xfc, 0x0e, 0x16, 0x0e, 0x00, 0x00,
}