automagic sql db migrations (#461)

* adds migrations

closes #57

migrations only run if the database is not brand new. a brand new database
gets all the right columns up front when CREATE TABLE is called; this is
mostly for readability rather than efficiency (nobody should have to walk
through all of the database migrations to ascertain what columns a table
has). on first startup of a new database, the migrations are analyzed and
the version is stamped at the highest one, so that only future migrations
will be run against it. this also avoids replaying the full migration
history, which could bork db's easily enough (if the user just exits from
impatience, say).

otherwise, all migrations that a db has not yet seen will be run against it
upon startup. this should be seamless to the user whether their db has had 0
migrations run on it before or N: users will not have to explicitly run any
migrations on their dbs, nor see any errors when we upgrade the schema (so
long as things go well). if a migration does not go well, users will have to
manually repair their dbs -- this is the intention of the `migrate` library
and it seems sane -- but it should be rare. I'm unsure how best to resolve a
failed migration, not having gone through one myself; I would assume it
requires running the down migrations and then manually updating the version
field. in any case, docs once one of us has to go through this.

migrations are written to files and checked into version control, then
go-bindata generates go code from those files so they can be compiled in and
consumed by the migrate library (so that we don't have to put migration
files on any servers) -- the generated code is also in vcs. this seems to
work ok. I don't like having to use the separate go-bindata tool, but it
wasn't really hard to install, and go generate takes care of the args.
adding migrations should be relatively rare anyway, but I tried to make it
pretty painless.
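the wiring is just a generate directive next to the migration files -- a sketch only; the flags and output name here are illustrative, not necessarily what's in the tree:

```go
// sketch: a go:generate directive in the migrations package, so that
// `go generate ./...` rebuilds the embedded copies of the .sql files.
// the -pkg/-o args shown here are illustrative.
//go:generate go-bindata -pkg migrations -o bindata.go .
```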

1 migration that adds created_at to routes is included here as an example of
how to do migrations, as well as to test these things ;) -- `created_at`
will be `0001-01-01T00:00:00.000Z` for any existing routes after a user runs
this version. we could spend the extra time backfilling today's date onto
any outstanding records, but that's not really accurate; the main thing is
that with migrations in place nobody will have to nuke their db, and we
don't really have any prod clusters to worry about. all future routes will
correctly have `created_at` set. I plan to add other timestamps too, but
wanted to keep this patch as small as possible, so it's only
routes.created_at.
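for shape, the up/down pair for a migration like this looks something like the following (file names and the column type are illustrative, not copied from the tree):

```sql
-- 1_add_route_created_at.up.sql
ALTER TABLE routes ADD created_at text;

-- 1_add_route_created_at.down.sql
-- (this is the statement sqlite3 can't run, hence the
--  mysql/postgres-only down-migration tests)
ALTER TABLE routes DROP COLUMN created_at;
```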

there are tests that a spankin new db works as expected, as well as that a
db works after running all the down & up migrations. the latter tests only
run on mysql and postgres, since sqlite3 does not support ALTER TABLE ...
DROP COLUMN; up migrations will need to be tested manually for sqlite3 only,
but in theory, if they are simple and work on postgres and mysql, there is a
good likelihood of success -- the new migration from this patch works fine
on sqlite3.

for now, we need to use `github.com/rdallman/migrate` to move forward, as
getting integrated into upstream is proving difficult due to
`github.com/go-sql-driver/mysql` being broken on master (yay dependencies).
fortunately for us, we vendor a version of the `mysql` bindings that
actually works, so we are able to use the `mattes/migrate` library
successfully. this also requires go1.9 for the new `database/sql.Conn` type;
CI has been updated accordingly.

some doc fixes from testing too, and of course updated all deps.

anyway, whew. this should let us add fields to the db without busting
everybody's dbs. open to feedback on better ways, but this was overall pretty
simple despite futzing with mysql.

* add migrate pkg to deps, update deps

use rdallman/migrate until we resolve in mattes land

* add README in migrations package

* add ref to mattes lib
This commit is contained in:
Reed Allman
2017-11-14 12:54:33 -08:00
committed by GitHub
parent 91962e50b9
commit 61b416a9b5
397 changed files with 20532 additions and 4335 deletions


@@ -1 +1 @@
0.2.0
0.2.1


@@ -57,7 +57,7 @@ func (a *Client) DeleteAppsApp(params *DeleteAppsAppParams) (*DeleteAppsAppOK, e
/*
GetApps gets all app names
Get a list of all the apps in the system.
Get a list of all the apps in the system, returned in alphabetical order.
*/
func (a *Client) GetApps(params *GetAppsParams) (*GetAppsOK, error) {
// TODO: Validate the params before sending


@@ -14,6 +14,7 @@ import (
"github.com/go-openapi/errors"
"github.com/go-openapi/runtime"
cr "github.com/go-openapi/runtime/client"
"github.com/go-openapi/swag"
strfmt "github.com/go-openapi/strfmt"
)
@@ -21,7 +22,7 @@ import (
// NewGetAppsParams creates a new GetAppsParams object
// with the default values initialized.
func NewGetAppsParams() *GetAppsParams {
var ()
return &GetAppsParams{
timeout: cr.DefaultTimeout,
@@ -31,7 +32,7 @@ func NewGetAppsParams() *GetAppsParams {
// NewGetAppsParamsWithTimeout creates a new GetAppsParams object
// with the default values initialized, and the ability to set a timeout on a request
func NewGetAppsParamsWithTimeout(timeout time.Duration) *GetAppsParams {
var ()
return &GetAppsParams{
timeout: timeout,
@@ -41,7 +42,7 @@ func NewGetAppsParamsWithTimeout(timeout time.Duration) *GetAppsParams {
// NewGetAppsParamsWithContext creates a new GetAppsParams object
// with the default values initialized, and the ability to set a context for a request
func NewGetAppsParamsWithContext(ctx context.Context) *GetAppsParams {
var ()
return &GetAppsParams{
Context: ctx,
@@ -51,7 +52,7 @@ func NewGetAppsParamsWithContext(ctx context.Context) *GetAppsParams {
// NewGetAppsParamsWithHTTPClient creates a new GetAppsParams object
// with the default values initialized, and the ability to set a custom HTTPClient for a request
func NewGetAppsParamsWithHTTPClient(client *http.Client) *GetAppsParams {
var ()
return &GetAppsParams{
HTTPClient: client,
}
@@ -61,6 +62,18 @@ func NewGetAppsParamsWithHTTPClient(client *http.Client) *GetAppsParams {
for the get apps operation typically these are written to a http.Request
*/
type GetAppsParams struct {
/*Cursor
Cursor from previous response.next_cursor to begin results after, if any.
*/
Cursor *string
/*PerPage
Number of results to return, defaults to 30. Max of 100.
*/
PerPage *int64
timeout time.Duration
Context context.Context
HTTPClient *http.Client
@@ -99,6 +112,28 @@ func (o *GetAppsParams) SetHTTPClient(client *http.Client) {
o.HTTPClient = client
}
// WithCursor adds the cursor to the get apps params
func (o *GetAppsParams) WithCursor(cursor *string) *GetAppsParams {
o.SetCursor(cursor)
return o
}
// SetCursor adds the cursor to the get apps params
func (o *GetAppsParams) SetCursor(cursor *string) {
o.Cursor = cursor
}
// WithPerPage adds the perPage to the get apps params
func (o *GetAppsParams) WithPerPage(perPage *int64) *GetAppsParams {
o.SetPerPage(perPage)
return o
}
// SetPerPage adds the perPage to the get apps params
func (o *GetAppsParams) SetPerPage(perPage *int64) {
o.PerPage = perPage
}
// WriteToRequest writes these params to a swagger request
func (o *GetAppsParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.Registry) error {
@@ -107,6 +142,38 @@ func (o *GetAppsParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.Regis
}
var res []error
if o.Cursor != nil {
// query param cursor
var qrCursor string
if o.Cursor != nil {
qrCursor = *o.Cursor
}
qCursor := qrCursor
if qCursor != "" {
if err := r.SetQueryParam("cursor", qCursor); err != nil {
return err
}
}
}
if o.PerPage != nil {
// query param per_page
var qrPerPage int64
if o.PerPage != nil {
qrPerPage = *o.PerPage
}
qPerPage := swag.FormatInt64(qrPerPage)
if qPerPage != "" {
if err := r.SetQueryParam("per_page", qPerPage); err != nil {
return err
}
}
}
if len(res) > 0 {
return errors.CompositeValidationError(res...)
}


@@ -148,12 +148,10 @@ func (o *PatchAppsAppParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.
return err
}
if o.Body == nil {
o.Body = new(models.AppWrapper)
}
if err := r.SetBodyParam(o.Body); err != nil {
return err
if o.Body != nil {
if err := r.SetBodyParam(o.Body); err != nil {
return err
}
}
if len(res) > 0 {


@@ -127,12 +127,10 @@ func (o *PostAppsParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.Regi
}
var res []error
if o.Body == nil {
o.Body = new(models.AppWrapper)
}
if err := r.SetBodyParam(o.Body); err != nil {
return err
if o.Body != nil {
if err := r.SetBodyParam(o.Body); err != nil {
return err
}
}
if len(res) > 0 {


@@ -27,7 +27,7 @@ type Client struct {
/*
GetAppsAppCalls gets app bound calls
Get app-bound calls can filter to route-bound calls.
Get app-bound calls can filter to route-bound calls, results returned in created_at, descending order (newest first).
*/
func (a *Client) GetAppsAppCalls(params *GetAppsAppCallsParams) (*GetAppsAppCallsOK, error) {
// TODO: Validate the params before sending


@@ -14,6 +14,7 @@ import (
"github.com/go-openapi/errors"
"github.com/go-openapi/runtime"
cr "github.com/go-openapi/runtime/client"
"github.com/go-openapi/swag"
strfmt "github.com/go-openapi/strfmt"
)
@@ -67,11 +68,31 @@ type GetAppsAppCallsParams struct {
*/
App string
/*Route
App route.
/*Cursor
Cursor from previous response.next_cursor to begin results after, if any.
*/
Route *string
Cursor *string
/*FromTime
Unix timestamp in seconds, of call.created_at to begin the results at, default 0.
*/
FromTime *int64
/*Path
Route path to match, exact.
*/
Path *string
/*PerPage
Number of results to return, defaults to 30. Max of 100.
*/
PerPage *int64
/*ToTime
Unix timestamp in seconds, of call.created_at to end the results at, defaults to latest.
*/
ToTime *int64
timeout time.Duration
Context context.Context
@@ -122,15 +143,59 @@ func (o *GetAppsAppCallsParams) SetApp(app string) {
o.App = app
}
// WithRoute adds the route to the get apps app calls params
func (o *GetAppsAppCallsParams) WithRoute(route *string) *GetAppsAppCallsParams {
o.SetRoute(route)
// WithCursor adds the cursor to the get apps app calls params
func (o *GetAppsAppCallsParams) WithCursor(cursor *string) *GetAppsAppCallsParams {
o.SetCursor(cursor)
return o
}
// SetRoute adds the route to the get apps app calls params
func (o *GetAppsAppCallsParams) SetRoute(route *string) {
o.Route = route
// SetCursor adds the cursor to the get apps app calls params
func (o *GetAppsAppCallsParams) SetCursor(cursor *string) {
o.Cursor = cursor
}
// WithFromTime adds the fromTime to the get apps app calls params
func (o *GetAppsAppCallsParams) WithFromTime(fromTime *int64) *GetAppsAppCallsParams {
o.SetFromTime(fromTime)
return o
}
// SetFromTime adds the fromTime to the get apps app calls params
func (o *GetAppsAppCallsParams) SetFromTime(fromTime *int64) {
o.FromTime = fromTime
}
// WithPath adds the path to the get apps app calls params
func (o *GetAppsAppCallsParams) WithPath(path *string) *GetAppsAppCallsParams {
o.SetPath(path)
return o
}
// SetPath adds the path to the get apps app calls params
func (o *GetAppsAppCallsParams) SetPath(path *string) {
o.Path = path
}
// WithPerPage adds the perPage to the get apps app calls params
func (o *GetAppsAppCallsParams) WithPerPage(perPage *int64) *GetAppsAppCallsParams {
o.SetPerPage(perPage)
return o
}
// SetPerPage adds the perPage to the get apps app calls params
func (o *GetAppsAppCallsParams) SetPerPage(perPage *int64) {
o.PerPage = perPage
}
// WithToTime adds the toTime to the get apps app calls params
func (o *GetAppsAppCallsParams) WithToTime(toTime *int64) *GetAppsAppCallsParams {
o.SetToTime(toTime)
return o
}
// SetToTime adds the toTime to the get apps app calls params
func (o *GetAppsAppCallsParams) SetToTime(toTime *int64) {
o.ToTime = toTime
}
// WriteToRequest writes these params to a swagger request
@@ -146,16 +211,80 @@ func (o *GetAppsAppCallsParams) WriteToRequest(r runtime.ClientRequest, reg strf
return err
}
if o.Route != nil {
if o.Cursor != nil {
// query param route
var qrRoute string
if o.Route != nil {
qrRoute = *o.Route
// query param cursor
var qrCursor string
if o.Cursor != nil {
qrCursor = *o.Cursor
}
qRoute := qrRoute
if qRoute != "" {
if err := r.SetQueryParam("route", qRoute); err != nil {
qCursor := qrCursor
if qCursor != "" {
if err := r.SetQueryParam("cursor", qCursor); err != nil {
return err
}
}
}
if o.FromTime != nil {
// query param from_time
var qrFromTime int64
if o.FromTime != nil {
qrFromTime = *o.FromTime
}
qFromTime := swag.FormatInt64(qrFromTime)
if qFromTime != "" {
if err := r.SetQueryParam("from_time", qFromTime); err != nil {
return err
}
}
}
if o.Path != nil {
// query param path
var qrPath string
if o.Path != nil {
qrPath = *o.Path
}
qPath := qrPath
if qPath != "" {
if err := r.SetQueryParam("path", qPath); err != nil {
return err
}
}
}
if o.PerPage != nil {
// query param per_page
var qrPerPage int64
if o.PerPage != nil {
qrPerPage = *o.PerPage
}
qPerPage := swag.FormatInt64(qrPerPage)
if qPerPage != "" {
if err := r.SetQueryParam("per_page", qPerPage); err != nil {
return err
}
}
}
if o.ToTime != nil {
// query param to_time
var qrToTime int64
if o.ToTime != nil {
qrToTime = *o.ToTime
}
qToTime := swag.FormatInt64(qrToTime)
if qToTime != "" {
if err := r.SetQueryParam("to_time", qToTime); err != nil {
return err
}
}


@@ -14,6 +14,7 @@ import (
"github.com/go-openapi/errors"
"github.com/go-openapi/runtime"
cr "github.com/go-openapi/runtime/client"
"github.com/go-openapi/swag"
strfmt "github.com/go-openapi/strfmt"
)
@@ -67,6 +68,21 @@ type GetAppsAppRoutesParams struct {
*/
App string
/*Cursor
Cursor from previous response.next_cursor to begin results after, if any.
*/
Cursor *string
/*Image
Route image to match, exact.
*/
Image *string
/*PerPage
Number of results to return, defaults to 30. Max of 100.
*/
PerPage *int64
timeout time.Duration
Context context.Context
@@ -117,6 +133,39 @@ func (o *GetAppsAppRoutesParams) SetApp(app string) {
o.App = app
}
// WithCursor adds the cursor to the get apps app routes params
func (o *GetAppsAppRoutesParams) WithCursor(cursor *string) *GetAppsAppRoutesParams {
o.SetCursor(cursor)
return o
}
// SetCursor adds the cursor to the get apps app routes params
func (o *GetAppsAppRoutesParams) SetCursor(cursor *string) {
o.Cursor = cursor
}
// WithImage adds the image to the get apps app routes params
func (o *GetAppsAppRoutesParams) WithImage(image *string) *GetAppsAppRoutesParams {
o.SetImage(image)
return o
}
// SetImage adds the image to the get apps app routes params
func (o *GetAppsAppRoutesParams) SetImage(image *string) {
o.Image = image
}
// WithPerPage adds the perPage to the get apps app routes params
func (o *GetAppsAppRoutesParams) WithPerPage(perPage *int64) *GetAppsAppRoutesParams {
o.SetPerPage(perPage)
return o
}
// SetPerPage adds the perPage to the get apps app routes params
func (o *GetAppsAppRoutesParams) SetPerPage(perPage *int64) {
o.PerPage = perPage
}
// WriteToRequest writes these params to a swagger request
func (o *GetAppsAppRoutesParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.Registry) error {
@@ -130,6 +179,54 @@ func (o *GetAppsAppRoutesParams) WriteToRequest(r runtime.ClientRequest, reg str
return err
}
if o.Cursor != nil {
// query param cursor
var qrCursor string
if o.Cursor != nil {
qrCursor = *o.Cursor
}
qCursor := qrCursor
if qCursor != "" {
if err := r.SetQueryParam("cursor", qCursor); err != nil {
return err
}
}
}
if o.Image != nil {
// query param image
var qrImage string
if o.Image != nil {
qrImage = *o.Image
}
qImage := qrImage
if qImage != "" {
if err := r.SetQueryParam("image", qImage); err != nil {
return err
}
}
}
if o.PerPage != nil {
// query param per_page
var qrPerPage int64
if o.PerPage != nil {
qrPerPage = *o.PerPage
}
qPerPage := swag.FormatInt64(qrPerPage)
if qPerPage != "" {
if err := r.SetQueryParam("per_page", qPerPage); err != nil {
return err
}
}
}
if len(res) > 0 {
return errors.CompositeValidationError(res...)
}


@@ -164,12 +164,10 @@ func (o *PatchAppsAppRoutesRouteParams) WriteToRequest(r runtime.ClientRequest,
return err
}
if o.Body == nil {
o.Body = new(models.RouteWrapper)
}
if err := r.SetBodyParam(o.Body); err != nil {
return err
if o.Body != nil {
if err := r.SetBodyParam(o.Body); err != nil {
return err
}
}
// path param route


@@ -148,12 +148,10 @@ func (o *PostAppsAppRoutesParams) WriteToRequest(r runtime.ClientRequest, reg st
return err
}
if o.Body == nil {
o.Body = new(models.RouteWrapper)
}
if err := r.SetBodyParam(o.Body); err != nil {
return err
if o.Body != nil {
if err := r.SetBodyParam(o.Body); err != nil {
return err
}
}
if len(res) > 0 {


@@ -164,12 +164,10 @@ func (o *PutAppsAppRoutesRouteParams) WriteToRequest(r runtime.ClientRequest, re
return err
}
if o.Body == nil {
o.Body = new(models.RouteWrapper)
}
if err := r.SetBodyParam(o.Body); err != nil {
return err
if o.Body != nil {
if err := r.SetBodyParam(o.Body); err != nil {
return err
}
}
// path param route


@@ -57,7 +57,7 @@ func (a *Client) DeleteAppsAppRoutesRoute(params *DeleteAppsAppRoutesRouteParams
/*
GetAppsAppRoutes gets route list by app name
This will list routes for a particular app.
This will list routes for a particular app, returned in alphabetical order.
*/
func (a *Client) GetAppsAppRoutes(params *GetAppsAppRoutesParams) (*GetAppsAppRoutesOK, error) {
// TODO: Validate the params before sending


@@ -17,7 +17,7 @@ import (
type App struct {
// Application configuration
// Application configuration, applied to all routes.
Config map[string]string `json:"config,omitempty"`
// Name of this app. Must be different than the image name. Can only contain alphanumeric, -, and _.


@@ -6,8 +6,6 @@ package models
// Editing this file might prove futile when you re-run the swagger generate command
import (
"strconv"
strfmt "github.com/go-openapi/strfmt"
"github.com/go-openapi/errors"
@@ -22,16 +20,22 @@ type AppsWrapper struct {
// apps
// Required: true
Apps []*App `json:"apps"`
Apps AppsWrapperApps `json:"apps"`
// error
Error *ErrorBody `json:"error,omitempty"`
// cursor to send with subsequent request to receive the next page, if non-empty
// Read Only: true
NextCursor string `json:"next_cursor,omitempty"`
}
/* polymorph AppsWrapper apps false */
/* polymorph AppsWrapper error false */
/* polymorph AppsWrapper next_cursor false */
// Validate validates this apps wrapper
func (m *AppsWrapper) Validate(formats strfmt.Registry) error {
var res []error
@@ -58,24 +62,6 @@ func (m *AppsWrapper) validateApps(formats strfmt.Registry) error {
return err
}
for i := 0; i < len(m.Apps); i++ {
if swag.IsZero(m.Apps[i]) { // not required
continue
}
if m.Apps[i] != nil {
if err := m.Apps[i].Validate(formats); err != nil {
if ve, ok := err.(*errors.Validation); ok {
return ve.ValidateName("apps" + "." + strconv.Itoa(i))
}
return err
}
}
}
return nil
}


@@ -0,0 +1,48 @@
// Code generated by go-swagger; DO NOT EDIT.
package models
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
import (
"strconv"
strfmt "github.com/go-openapi/strfmt"
"github.com/go-openapi/errors"
"github.com/go-openapi/swag"
)
// AppsWrapperApps apps wrapper apps
// swagger:model appsWrapperApps
type AppsWrapperApps []*App
// Validate validates this apps wrapper apps
func (m AppsWrapperApps) Validate(formats strfmt.Registry) error {
var res []error
for i := 0; i < len(m); i++ {
if swag.IsZero(m[i]) { // not required
continue
}
if m[i] != nil {
if err := m[i].Validate(formats); err != nil {
if ve, ok := err.(*errors.Validation); ok {
return ve.ValidateName(strconv.Itoa(i))
}
return err
}
}
}
if len(res) > 0 {
return errors.CompositeValidationError(res...)
}
return nil
}


@@ -6,8 +6,6 @@ package models
// Editing this file might prove futile when you re-run the swagger generate command
import (
"strconv"
strfmt "github.com/go-openapi/strfmt"
"github.com/go-openapi/errors"
@@ -22,16 +20,22 @@ type CallsWrapper struct {
// calls
// Required: true
Calls []*Call `json:"calls"`
Calls CallsWrapperCalls `json:"calls"`
// error
Error *ErrorBody `json:"error,omitempty"`
// cursor to send with subsequent request to receive the next page, if non-empty
// Read Only: true
NextCursor string `json:"next_cursor,omitempty"`
}
/* polymorph CallsWrapper calls false */
/* polymorph CallsWrapper error false */
/* polymorph CallsWrapper next_cursor false */
// Validate validates this calls wrapper
func (m *CallsWrapper) Validate(formats strfmt.Registry) error {
var res []error
@@ -58,24 +62,6 @@ func (m *CallsWrapper) validateCalls(formats strfmt.Registry) error {
return err
}
for i := 0; i < len(m.Calls); i++ {
if swag.IsZero(m.Calls[i]) { // not required
continue
}
if m.Calls[i] != nil {
if err := m.Calls[i].Validate(formats); err != nil {
if ve, ok := err.(*errors.Validation); ok {
return ve.ValidateName("calls" + "." + strconv.Itoa(i))
}
return err
}
}
}
return nil
}


@@ -0,0 +1,48 @@
// Code generated by go-swagger; DO NOT EDIT.
package models
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
import (
"strconv"
strfmt "github.com/go-openapi/strfmt"
"github.com/go-openapi/errors"
"github.com/go-openapi/swag"
)
// CallsWrapperCalls calls wrapper calls
// swagger:model callsWrapperCalls
type CallsWrapperCalls []*Call
// Validate validates this calls wrapper calls
func (m CallsWrapperCalls) Validate(formats strfmt.Registry) error {
var res []error
for i := 0; i < len(m); i++ {
if swag.IsZero(m[i]) { // not required
continue
}
if m[i] != nil {
if err := m[i].Validate(formats); err != nil {
if ve, ok := err.(*errors.Validation); ok {
return ve.ValidateName(strconv.Itoa(i))
}
return err
}
}
}
if len(res) > 0 {
return errors.CompositeValidationError(res...)
}
return nil
}


@@ -6,8 +6,6 @@ package models
// Editing this file might prove futile when you re-run the swagger generate command
import (
"strconv"
strfmt "github.com/go-openapi/strfmt"
"github.com/go-openapi/errors"
@@ -23,13 +21,19 @@ type RoutesWrapper struct {
// error
Error *ErrorBody `json:"error,omitempty"`
// cursor to send with subsequent request to receive the next page, if non-empty
// Read Only: true
NextCursor string `json:"next_cursor,omitempty"`
// routes
// Required: true
Routes []*Route `json:"routes"`
Routes RoutesWrapperRoutes `json:"routes"`
}
/* polymorph RoutesWrapper error false */
/* polymorph RoutesWrapper next_cursor false */
/* polymorph RoutesWrapper routes false */
// Validate validates this routes wrapper
@@ -77,24 +81,6 @@ func (m *RoutesWrapper) validateRoutes(formats strfmt.Registry) error {
return err
}
for i := 0; i < len(m.Routes); i++ {
if swag.IsZero(m.Routes[i]) { // not required
continue
}
if m.Routes[i] != nil {
if err := m.Routes[i].Validate(formats); err != nil {
if ve, ok := err.(*errors.Validation); ok {
return ve.ValidateName("routes" + "." + strconv.Itoa(i))
}
return err
}
}
}
return nil
}


@@ -0,0 +1,48 @@
// Code generated by go-swagger; DO NOT EDIT.
package models
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
import (
"strconv"
strfmt "github.com/go-openapi/strfmt"
"github.com/go-openapi/errors"
"github.com/go-openapi/swag"
)
// RoutesWrapperRoutes routes wrapper routes
// swagger:model routesWrapperRoutes
type RoutesWrapperRoutes []*Route
// Validate validates this routes wrapper routes
func (m RoutesWrapperRoutes) Validate(formats strfmt.Registry) error {
var res []error
for i := 0; i < len(m); i++ {
if swag.IsZero(m[i]) { // not required
continue
}
if m[i] != nil {
if err := m[i].Validate(formats); err != nil {
if ve, ok := err.(*errors.Validation); ok {
return ve.ValidateName(strconv.Itoa(i))
}
return err
}
}
}
if len(res) > 0 {
return errors.CompositeValidationError(res...)
}
return nil
}


@@ -1,28 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.sublime*
.idea/
*.iml
iron.json


@@ -1,530 +0,0 @@
go.iron
=======
[Iron.io](http://www.iron.io) Go (golang) API libraries
Go docs: http://godoc.org/github.com/iron-io/iron_go3
Iron.io Go Client Library
-------------
# IronMQ
[IronMQ](http://www.iron.io/products/mq) is an elastic message queue for managing data and event flow within cloud applications and between systems.
The [full API documentation is here](http://dev.iron.io/mq/reference/api/) and this client tries to stick to the API as
much as possible so if you see an option in the API docs, you can use it in the methods below.
You can find [Go docs here](http://godoc.org/github.com/iron-io/iron_go3).
## Getting Started
### Get credentials
To start using iron_go, you need to sign up and get an oauth token.
1. Go to http://iron.io/ and sign up.
2. Create new project at http://hud.iron.io/dashboard
3. Download the iron.json file from "Credentials" block of project
--
### Configure
1\. Reference the library:
```go
import "github.com/iron-io/iron_go3/mq"
```
2\. [Setup your Iron.io credentials](http://dev.iron.io/mq/3/reference/configuration/)
3\. Create an IronMQ client object:
```go
queue := mq.New("test_queue");
```
Or use initializer with settings specified in code:
```go
settings := &config.Settings {
Token: "l504pLkINUWYDSO9YW4m",
ProjectId: "53ec6fc95e8edd2884000003",
Host: "localhost",
Scheme: "http",
Port: 8080,
}
queue := mq.ConfigNew("test_queue", settings);
```
Push queues must be explicitly created. There's no changing a queue's type.
```go
subscribers := []mq.QueueSubscriber{mq.QueueSubscriber{Name: "sub1", URL: "wwww.subscriber1.com"}, mq.QueueSubscriber{Name: "sub2", URL: "wwww.subscriber2.com"}}
subscription := mq.PushInfo {
Retries: 3,
RetriesDelay: 60,
ErrorQueue: "error_queue",
Subscribers: subscribers,
}
queue_type := "multicast";
queueInfo := mq.QueueInfo{ Type: &queue_type, MessageExpiration: 60, MessageTimeout: 56, Push: &subscription}
result, err := mq.CreateQueue("test_queue", queueInfo);
```
## The Basics
### Get Queues List
```go
queues, err := mq.ListQueues(0, 100);
for _, element := range queues {
fmt.Println(element.Name);
}
```
Request URL Query Parameters:
* per_page - number of elements in response, default is 30.
* previous - the last queue on the previous page; listing will start from the next one. If a queue with the specified
name doesn't exist, the result will contain the first per_page queues that are lexicographically greater than previous.
* prefix - an optional queue prefix to search on. e.g., prefix=ca could return queues ["cars", "cats", etc.]
FilterPage will return the list of queues with the specified options.
```go
queues := mq.FilterPage(prefix, prev string, perPage int)
```
--
### Get a Queue Object
You can have as many queues as you want, each with their own unique set of messages.
```go
queue := mq.New("test_queue");
```
Now you can use it.
--
### Post a Message on a Queue
Messages are placed on the queue in a FIFO arrangement.
If a queue does not exist, it will be created upon the first posting of a message.
```go
id, err := q.PushString("Hello, World!")
```
--
### Retrieve Queue Information
```go
info, err := q.Info()
fmt.Println(info.Name);
```
--
### Reserve/Get a Message off a Queue
```go
msg, err := q.Reserve()
fmt.Printf("The message says: %q\n", msg.Body)
```
--
### Delete a Message from a Queue
```go
msg, _ := q.Reserve()
// perform some actions with a message here
msg.Delete()
```
Be sure to delete a message from the queue when you're done with it.
```go
msg, _ := q.Reserve()
// perform some actions with a message here
msg.Delete()
```
Delete multiple messages from the queue:
```go
ids, err := queue.PushStrings("more", "and more", "and more")
queue.DeleteMessages(ids)
```
Delete multiple reserved messages:
```go
messages, err := queue.ReserveN(3)
queue.DeleteReservedMessages(messages)
```
--
## Queues
### Retrieve Queue Information
```go
info, err := q.Info()
fmt.Println(info.Name);
fmt.Println(info.Size);
```
QueueInfo struct consists of the following fields:
```go
type QueueInfo struct {
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
PushType string `json:"push_type,omitempty"`
Reserved int `json:"reserved,omitempty"`
RetriesDelay int `json:"retries,omitempty"`
Retries int `json:"retries_delay,omitempty"`
Size int `json:"size,omitempty"`
Subscribers []QueueSubscriber `json:"subscribers,omitempty"`
TotalMessages int `json:"total_messages,omitempty"`
ErrorQueue string `json:"error_queue,omitempty"`
}
```
--
### Delete a Message Queue
```go
deleted, err := q.Delete()
if(deleted) {
fmt.Println("Successfully deleted")
} else {
fmt.Println("Cannot delete, because of error: ", err)
}
```
--
### Post Messages to a Queue
**Single message:**
```go
id, err := q.PushString("Hello, World!")
// To control parameters like timeout and delay, construct your own message.
id, err := q.PushMessage(&mq.Message{Delay: 0, Body: "Hi there"})
```
**Multiple messages:**
You can also pass multiple messages in a single call.
```go
ids, err := q.PushStrings("Message 1", "Message 2")
```
To control parameters like timeout and delay, construct your own message.
```go
ids, err = q.PushMessages(
&mq.Message{Delay: 0, Body: "The first"},
&mq.Message{Delay: 10, Body: "The second"},
&mq.Message{Delay: 10, Body: "The third"},
&mq.Message{Delay: 0, Body: "The fifth"},
)
```
**Parameters:**
* `Delay`: The item will not be available on the queue until this many seconds have passed.
Default is 0 seconds. Maximum is 604,800 seconds (7 days).
--
### Get Messages from a Queue
```go
msg, err := q.Reserve()
fmt.Printf("The message says: %q\n", msg.Body)
```
When you reserve a message from the queue, it is no longer on the queue but it still exists within the system.
You have to explicitly delete the message or else it will go back onto the queue after the `timeout`.
The default `timeout` is 60 seconds. Minimal `timeout` is 30 seconds.
You also can get several messages at a time:
```go
// get 5 messages
msgs, err := q.ReserveN(5)
```
And with timeout param:
```go
messages, err := q.GetNWithTimeout(4, 600)
```
### Touch a Message on a Queue
Touching a reserved message extends its timeout by the duration specified when the message was created, which is 60 seconds by default.
```go
msg, _ := q.Reserve()
err := msg.Touch() // new reservation id will be assigned to current message
```
There is another way to touch a message without getting it:
```go
newReservationId, err := q.TouchMessage(messageId, reservationId)
```
#### Specifying timeout
```go
msg, _ := q.Reserve()
err := msg.TouchFor(10) // new reservation id will be assigned to current message
```
or
```go
newReservationId, err := q.TouchMessageFor(messageId, reservationId, 10)
```
--
### Release Message
```go
msg, _ := q.Reserve()
delay := 30
err := msg.Release(delay)
```
Or another way to release a message without creation of message object:
```go
delay := 30
err := q.ReleaseMessage("5987586196292186572", delay)
```
**Optional parameters:**
* `delay`: The item will not be available on the queue until this many seconds have passed.
Default is 0 seconds. Maximum is 604,800 seconds (7 days).
--
### Delete a Message from a Queue
```go
msg, _ := q.Reserve()
// perform some actions with a message here
err := msg.Delete()
```
Or
```go
err := q.DeleteMessage("5987586196292186572")
```
Be sure to delete a message from the queue when you're done with it.
--
### Peek Messages from a Queue
Peeking at a queue returns the next messages on the queue, but it does not reserve them.
```go
message, err := q.Peek()
```
There is a way to get several messages not reserving them:
```go
messages, err := q.PeekN(50)
for _, m := range messages {
	fmt.Println(m.Body)
}
```
And with a timeout parameter:
```go
messages, err := q.PeekNWithTimeout(4, 600)
```
--
### Clear a Queue
```go
err := q.Clear()
```
### Add an Alert to a Queue
[Check out our Blog Post on Queue Alerts](http://blog.iron.io).
Alerts have now been incorporated into IronMQ. This feature lets developers trigger actions based on activity within a queue: actions fire when the number of messages in a queue reaches a certain threshold, enabling things like auto-scaling, failure detection, load monitoring, and health checks.
You may add up to 5 alerts per queue.
**Required parameters:**
* `type`: required. Either "fixed" or "progressive". A "fixed" alert is triggered when the queue size passes the value set by the `trigger` parameter. A "progressive" alert is triggered each time the queue size passes a multiple of `trigger` (trigger * N, for N >= 1). For example, with `trigger` set to 10, the alert fires at queue sizes 10, 20, 30, and so on.
* `direction`: required. Either "asc" or "desc". Sets the direction in which the queue size must change to trigger the alert: with "asc" the queue must be growing, with "desc" it must be shrinking.
* `trigger`: required. An integer greater than 0, used to calculate the queue sizes at which the alert fires; see the `type` description.
* `queue`: required. The name of the queue to which alert messages are posted.
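To illustrate the "progressive" semantics above, here is a standalone sketch (the `progressiveThresholds` helper is hypothetical, not part of the library) listing the queue sizes at which such an alert fires:

```go
package main

import "fmt"

// progressiveThresholds returns the first n queue sizes at which a
// "progressive" alert with the given trigger fires (trigger * N, N >= 1).
func progressiveThresholds(trigger, n int) []int {
	sizes := make([]int, 0, n)
	for i := 1; i <= n; i++ {
		sizes = append(sizes, trigger*i)
	}
	return sizes
}

func main() {
	// A progressive alert with trigger 10 fires at sizes 10, 20, 30.
	fmt.Println(progressiveThresholds(10, 3)) // [10 20 30]
}
```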
```go
err := q.AddAlerts(
	&mq.Alert{Queue: "new_milestone_queue", Trigger: 10, Direction: "asc", Type: "progressive"},
	&mq.Alert{Queue: "low_level_queue", Trigger: 5, Direction: "desc", Type: "fixed"})
```
#### Update alerts in a queue
```go
err := q.UpdateAlerts(
	&mq.Alert{Queue: "milestone_queue", Trigger: 100, Direction: "asc", Type: "progressive"})
```
#### Remove alerts from a queue
You can delete an alert from a queue by id:
```go
err := q.RemoveAlert("532fdf593663ed6afa06ed16")
```
Or delete several alerts by their ids:
```go
err := q.RemoveAlerts("532f59663ed6afed16483052", "559663ed6af6483399b3400a")
```
You can also delete all alerts:
```go
err := q.RemoveAllAlerts()
```
Remember that passing zero alerts to update deletes all previously added alerts:
```go
q.AddAlerts(
	&mq.Alert{Queue: "alert1", Trigger: 10, Direction: "asc", Type: "progressive"},
	&mq.Alert{Queue: "alert2", Trigger: 5, Direction: "desc", Type: "fixed"})
info, _ := q.Info() // 2 alerts
q.UpdateAlerts()
info, _ = q.Info() // 0 alerts
```
--
## Push Queues
IronMQ push queues allow you to set up a queue that pushes to an endpoint, rather than having to poll the endpoint.
[Here's the announcement for an overview](http://blog.iron.io/2013/01/ironmq-push-queues-reliable-message.html).
### Update a Message Queue
As with creating a queue, all QueueInfo fields are optional. The queue type cannot be changed.
```go
info, err := q.Update(...)
```
The QueueInfo struct consists of the following fields:
```go
type QueueInfo struct {
	PushType     string            `json:"push_type,omitempty"`
	Retries      int               `json:"retries,omitempty"`
	RetriesDelay int               `json:"retries_delay,omitempty"`
	Subscribers  []QueueSubscriber `json:"subscribers,omitempty"`
	// and some other fields not related to push queues
}
```
**The following parameters are all related to Push Queues:**
* `type`: Either `multicast` to push to all subscribers or `unicast` to push to one and only one subscriber. Default is `multicast`.
* `retries`: How many times to retry on failure. Default is 3. Maximum is 100.
* `retries_delay`: Delay between each retry in seconds. Default is 60.
* `subscribers`: An array of `QueueSubscriber`. This set of subscribers replaces the existing subscribers.
To add or remove individual subscribers, see the add subscribers and remove subscribers endpoints.
QueueSubscriber has the following structure:
```go
type QueueSubscriber struct {
	URL     string            `json:"url"`
	Headers map[string]string `json:"headers,omitempty"`
}
```
--
### Set Subscribers on a Queue
A subscriber can be any HTTP endpoint. `push_type` is one of:
* `multicast`: pushes to all endpoints/subscribers
* `unicast`: pushes to one and only one endpoint/subscriber
Subscribers can only be added to a push queue (unicast or multicast). They can be set while creating a queue:
```go
queueType := "multicast"
subscribers := []mq.QueueSubscriber{
	{Name: "test3", URL: "http://mysterious-brook-1807.herokuapp.com/ironmq_push_3"},
	{Name: "test4", URL: "http://mysterious-brook-1807.herokuapp.com/ironmq_push_4"},
}
pushInfo := mq.PushInfo{RetriesDelay: 45, Retries: 2, Subscribers: subscribers}
info, err := mq.CreateQueue(qn, mq.QueueInfo{Type: &queueType, Push: &pushInfo})
```
It's also possible to manage subscribers for an existing push queue using the following methods:
* `AddSubscribers` - adds subscribers, replacing any existing subscriber with the same name
* `ReplaceSubscribers` - replaces the entire collection of subscribers with a new one
* `RemoveSubscribers` and `RemoveSubscribersCollection` - remove the specified subscribers
--
### Get Message Push Status
After pushing a message:
```go
subscribers, err := message.Subscribers()
```
This returns an array of subscribers with their push statuses.
--
## Further Links
* [IronMQ Overview](http://dev.iron.io/mq/3/)
* [IronMQ REST/HTTP API](http://dev.iron.io/mq/3/reference/api/)
* [Push Queues](http://dev.iron.io/mq/reference/push_queues/)
* [Other Client Libraries](http://dev.iron.io/mq/3/libraries/)
* [Live Chat, Support & Fun](http://get.iron.io/chat)
-------------
© 2011 - 2014 Iron.io Inc. All Rights Reserved.


@@ -1,289 +0,0 @@
// api provides common functionality for all the iron.io APIs
package api
import (
"bytes"
"crypto/tls"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"net/url"
"os"
"strings"
"time"
"github.com/iron-io/iron_go3/config"
)
type DefaultResponseBody struct {
Msg string `json:"msg"`
}
type URL struct {
URL url.URL
ContentType string
Settings config.Settings
}
var (
Debug bool
DebugOnErrors bool
DefaultCacheSize = 8192
// HttpClient is the client used by iron_go to make each http request. It is exported in case
// the client would like to modify it from the default behavior from http.DefaultClient.
// This uses the DefaultTransport modified to enable TLS Session Client caching.
HttpClient = &http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
MaxIdleConnsPerHost: 512,
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: &tls.Config{
ClientSessionCache: tls.NewLRUClientSessionCache(DefaultCacheSize),
},
},
}
)
func dbg(v ...interface{}) {
if Debug {
log.Println(v...)
}
}
func dbgerr(v ...interface{}) {
if DebugOnErrors && !Debug {
log.Println(v...)
}
}
func init() {
if os.Getenv("IRON_API_DEBUG") != "" {
Debug = true
dbg("debugging of api enabled")
}
if os.Getenv("IRON_API_DEBUG_ON_ERRORS") != "" {
DebugOnErrors = true
dbg("debugging of api on errors enabled")
}
}
func Action(cs config.Settings, prefix string, suffix ...string) *URL {
parts := append([]string{prefix}, suffix...)
return ActionEndpoint(cs, strings.Join(parts, "/"))
}
func RootAction(cs config.Settings, prefix string, suffix ...string) *URL {
parts := append([]string{prefix}, suffix...)
return RootActionEndpoint(cs, strings.Join(parts, "/"))
}
func ActionEndpoint(cs config.Settings, endpoint string) *URL {
u := &URL{Settings: cs, URL: url.URL{}}
u.URL.Scheme = cs.Scheme
u.URL.Host = fmt.Sprintf("%s:%d", cs.Host, cs.Port)
u.URL.Path = fmt.Sprintf("/%s/projects/%s/%s", cs.ApiVersion, cs.ProjectId, endpoint)
return u
}
func RootActionEndpoint(cs config.Settings, endpoint string) *URL {
u := &URL{Settings: cs, URL: url.URL{}}
u.URL.Scheme = cs.Scheme
u.URL.Host = fmt.Sprintf("%s:%d", cs.Host, cs.Port)
u.URL.Path = fmt.Sprintf("/%s/%s", cs.ApiVersion, endpoint)
return u
}
func VersionAction(cs config.Settings) *URL {
u := &URL{Settings: cs, URL: url.URL{Scheme: cs.Scheme}}
u.URL.Host = fmt.Sprintf("%s:%d", cs.Host, cs.Port)
u.URL.Path = "/version"
return u
}
func (u *URL) QueryAdd(key string, format string, value interface{}) *URL {
query := u.URL.Query()
query.Add(key, fmt.Sprintf(format, value))
u.URL.RawQuery = query.Encode()
return u
}
func (u *URL) SetContentType(t string) *URL {
u.ContentType = t
return u
}
func (u *URL) Req(method string, in, out interface{}) error {
var body io.ReadSeeker
switch in := in.(type) {
case io.ReadSeeker:
// ready to send (zips uses this)
body = in
default:
if in == nil {
in = struct{}{}
}
data, err := json.Marshal(in)
if err != nil {
return err
}
dbg("request body:", in)
body = bytes.NewReader(data)
}
response, err := u.req(method, body)
if response != nil && response.Body != nil {
defer response.Body.Close()
}
if err != nil {
dbg("ERROR!", err, err.Error())
body := "<empty>"
if response != nil && response.Body != nil {
binary, _ := ioutil.ReadAll(response.Body)
body = string(binary)
}
dbgerr("ERROR!", err, err.Error(), "Response:", body)
return err
}
dbg("response:", response)
if out != nil {
return json.NewDecoder(response.Body).Decode(out)
}
// throw it away
io.Copy(ioutil.Discard, response.Body)
return nil
}
// returned body must be closed by caller if non-nil
func (u *URL) Request(method string, body io.Reader) (response *http.Response, err error) {
var byts []byte
if body != nil {
byts, err = ioutil.ReadAll(body)
if err != nil {
return nil, err
}
}
return u.req(method, bytes.NewReader(byts))
}
var MaxRequestRetries = 5
func (u *URL) req(method string, body io.ReadSeeker) (response *http.Response, err error) {
request, err := http.NewRequest(method, u.URL.String(), nil)
if err != nil {
return nil, err
}
// body=bytes.Reader implements `Len() int`. if this changes for some reason, looky here
if s, ok := body.(interface {
Len() int
}); ok {
request.ContentLength = int64(s.Len())
}
request.Header.Set("Authorization", "OAuth "+u.Settings.Token)
request.Header.Set("Accept", "application/json")
request.Header.Set("Accept-Encoding", "gzip/deflate")
request.Header.Set("User-Agent", u.Settings.UserAgent)
if u.ContentType != "" {
request.Header.Set("Content-Type", u.ContentType)
} else if body != nil {
request.Header.Set("Content-Type", "application/json")
}
if rc, ok := body.(io.ReadCloser); ok { // stdlib doesn't have ReadSeekCloser :(
request.Body = rc
} else {
request.Body = ioutil.NopCloser(body)
}
dbg("URL:", request.URL.String())
dbg("request:", fmt.Sprintf("%#v\n", request))
for tries := 0; tries < MaxRequestRetries; tries++ {
body.Seek(0, 0) // set back to beginning for retries
response, err = HttpClient.Do(request)
if err != nil {
if response != nil && response.Body != nil {
response.Body.Close() // make sure to close since we won't return it
}
if err == io.EOF {
continue
}
return nil, err
}
if response.StatusCode == http.StatusServiceUnavailable {
delay := (tries + 1) * 10 // smooth out delays from 0-2
time.Sleep(time.Duration(delay*delay) * time.Millisecond)
continue
}
break
}
if err != nil { // for that one lucky case where io.EOF reaches MaxRetries
return nil, err
}
if err = ResponseAsError(response); err != nil {
return nil, err
}
return response, nil
}
var HTTPErrorDescriptions = map[int]string{
http.StatusUnauthorized: "The OAuth token is either not provided or invalid",
http.StatusNotFound: "The resource, project, or endpoint being requested doesn't exist.",
http.StatusMethodNotAllowed: "This endpoint doesn't support that particular verb",
http.StatusNotAcceptable: "Required fields are missing",
}
func ResponseAsError(response *http.Response) HTTPResponseError {
// check for nil before dereferencing StatusCode
if response == nil {
return resErr{statusCode: http.StatusTeapot, error: fmt.Sprint("response nil but no errors. beware unicorns, this shouldn't happen")}
}
if response.StatusCode == http.StatusOK || response.StatusCode == http.StatusCreated {
return nil
}
if response.Body != nil {
defer response.Body.Close()
}
var out DefaultResponseBody
err := json.NewDecoder(response.Body).Decode(&out)
if err != nil {
return resErr{statusCode: response.StatusCode, error: fmt.Sprint(response.Status, ": ", err.Error())}
}
if out.Msg != "" {
return resErr{statusCode: response.StatusCode, error: fmt.Sprint(response.Status, ": ", out.Msg)}
}
return resErr{statusCode: response.StatusCode, error: response.Status + ": Unknown API Response"}
}
type HTTPResponseError interface {
Error() string
StatusCode() int
}
type resErr struct {
error string
statusCode int
}
func (h resErr) Error() string { return h.error }
func (h resErr) StatusCode() int { return h.statusCode }


@@ -1,235 +0,0 @@
// IronCache (cloud k/v store) client library
package cache
import (
"bytes"
"encoding/gob"
"encoding/json"
"fmt"
"time"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
)
var (
JSON = Codec{Marshal: json.Marshal, Unmarshal: json.Unmarshal}
Gob = Codec{Marshal: gobMarshal, Unmarshal: gobUnmarshal}
)
type Cache struct {
Settings config.Settings
Name string
}
type Item struct {
// Value is the Item's value
Value interface{}
// Object is the Item's value for use with a Codec.
Object interface{}
// Number of seconds until expiration. The zero value defaults to 7 days,
// maximum is 30 days.
Expiration time.Duration
// Caches item only if the key is currently cached.
Replace bool
// Caches item only if the key isn't currently cached.
Add bool
}
// New returns a struct ready to make requests with.
// The cacheName argument is used as namespace.
func New(cacheName string) *Cache {
return &Cache{Settings: config.Config("iron_cache"), Name: cacheName}
}
func (c *Cache) caches(suffix ...string) *api.URL {
return api.Action(c.Settings, "caches", suffix...)
}
func (c *Cache) ListCaches(page, perPage int) (caches []*Cache, err error) {
out := []struct {
Project_id string
Name string
}{}
err = c.caches().
QueryAdd("page", "%d", page).
QueryAdd("per_page", "%d", perPage).
Req("GET", nil, &out)
if err != nil {
return
}
caches = make([]*Cache, 0, len(out))
for _, item := range out {
caches = append(caches, &Cache{
Settings: c.Settings,
Name: item.Name,
})
}
return
}
func (c *Cache) ServerVersion() (version string, err error) {
out := map[string]string{}
err = api.VersionAction(c.Settings).Req("GET", nil, &out)
if err != nil {
return
}
return out["version"], nil
}
func (c *Cache) Clear() (err error) {
return c.caches(c.Name, "clear").Req("POST", nil, nil)
}
// Put adds an Item to the cache, overwriting any existing key of the same name.
func (c *Cache) Put(key string, item *Item) (err error) {
in := struct {
Value interface{} `json:"value"`
ExpiresIn int `json:"expires_in,omitempty"`
Replace bool `json:"replace,omitempty"`
Add bool `json:"add,omitempty"`
}{
Value: item.Value,
ExpiresIn: int(item.Expiration.Seconds()),
Replace: item.Replace,
Add: item.Add,
}
return c.caches(c.Name, "items", key).Req("PUT", &in, nil)
}
func anyToString(value interface{}) (str interface{}, err error) {
switch v := value.(type) {
case string:
str = v
case uint, uint8, uint16, uint32, uint64, int, int8, int16, int32, int64:
str = v
case float32, float64:
str = v
case bool:
str = v
case fmt.Stringer:
str = v.String()
default:
var bytes []byte
if bytes, err = json.Marshal(value); err == nil {
str = string(bytes)
}
}
return
}
func (c *Cache) Set(key string, value interface{}, ttl ...int) (err error) {
str, err := anyToString(value)
if err == nil {
if len(ttl) > 0 {
err = c.Put(key, &Item{Value: str, Expiration: time.Duration(ttl[0]) * time.Second})
} else {
err = c.Put(key, &Item{Value: str})
}
}
return
}
func (c *Cache) Add(key string, value ...interface{}) (err error) {
str, err := anyToString(value)
if err == nil {
err = c.Put(key, &Item{
Value: str, Expiration: time.Duration(123) * time.Second, Add: true,
})
}
return
}
func (c *Cache) Replace(key string, value ...interface{}) (err error) {
str, err := anyToString(value)
if err == nil {
err = c.Put(key, &Item{
Value: str, Expiration: time.Duration(123) * time.Second, Replace: true,
})
}
return
}
// Increment increments the corresponding item's value.
func (c *Cache) Increment(key string, amount int64) (value interface{}, err error) {
in := map[string]int64{"amount": amount}
out := struct {
Message string `json:"msg"`
Value interface{} `json:"value"`
}{}
if err = c.caches(c.Name, "items", key, "increment").Req("POST", &in, &out); err == nil {
value = out.Value
}
return
}
// Get gets an item from the cache.
func (c *Cache) Get(key string) (value interface{}, err error) {
out := struct {
Cache string `json:"cache"`
Key string `json:"key"`
Value interface{} `json:"value"`
}{}
if err = c.caches(c.Name, "items", key).Req("GET", nil, &out); err == nil {
value = out.Value
}
return
}
func (c *Cache) GetMeta(key string) (value map[string]interface{}, err error) {
value = map[string]interface{}{}
err = c.caches(c.Name, "items", key).Req("GET", nil, &value)
return
}
// Delete removes an item from the cache.
func (c *Cache) Delete(key string) (err error) {
return c.caches(c.Name, "items", key).Req("DELETE", nil, nil)
}
type Codec struct {
Marshal func(interface{}) ([]byte, error)
Unmarshal func([]byte, interface{}) error
}
func (cd Codec) Put(c *Cache, key string, item *Item) (err error) {
bytes, err := cd.Marshal(item.Object)
if err != nil {
return
}
item.Value = string(bytes)
return c.Put(key, item)
}
func (cd Codec) Get(c *Cache, key string, object interface{}) (err error) {
value, err := c.Get(key)
if err != nil {
return
}
err = cd.Unmarshal([]byte(value.(string)), object)
if err != nil {
return
}
return
}
func gobMarshal(v interface{}) ([]byte, error) {
writer := bytes.Buffer{}
enc := gob.NewEncoder(&writer)
err := enc.Encode(v)
return writer.Bytes(), err
}
func gobUnmarshal(marshalled []byte, v interface{}) error {
reader := bytes.NewBuffer(marshalled)
dec := gob.NewDecoder(reader)
return dec.Decode(v)
}


@@ -1,101 +0,0 @@
package cache_test
import (
"fmt"
"github.com/iron-io/iron_go3/cache"
)
func p(a ...interface{}) { fmt.Println(a...) }
func Example1StoringData() {
// For configuration info, see http://dev.iron.io/articles/configuration
c := cache.New("test_cache")
// Numbers will get stored as numbers
c.Set("number_item", 42)
// Strings get stored as strings
c.Set("string_item", "Hello, IronCache")
// Objects and dicts get JSON-encoded and stored as strings
c.Set("complex_item", map[string]interface{}{
"test": "this is a dict",
"args": []string{"apples", "oranges"},
})
p("all stored")
// Output:
// all stored
}
func Example2Incrementing() {
c := cache.New("test_cache")
p(c.Increment("number_item", 10))
p(c.Get("number_item"))
p(c.Increment("string_item", 10))
p(c.Increment("complex_item", 10))
// Output:
// <nil>
// 52 <nil>
// 400 Bad Request: Cannot increment or decrement non-numeric value
// 400 Bad Request: Cannot increment or decrement non-numeric value
}
func Example3Decrementing() {
c := cache.New("test_cache")
p(c.Increment("number_item", -10))
p(c.Get("number_item"))
p(c.Increment("string_item", -10))
p(c.Increment("complex_item", -10))
// Output:
// <nil>
// 42 <nil>
// 400 Bad Request: Cannot increment or decrement non-numeric value
// 400 Bad Request: Cannot increment or decrement non-numeric value
}
func Example4RetrievingData() {
c := cache.New("test_cache")
value, err := c.Get("number_item")
fmt.Printf("%#v (%#v)\n", value, err)
value, err = c.Get("string_item")
fmt.Printf("%#v (%#v)\n", value, err)
// JSON is returned as strings
value, err = c.Get("complex_item")
fmt.Printf("%#v (%#v)\n", value, err)
// You can use the JSON codec to deserialize it.
obj := struct {
Args []string
Test string
}{}
err = cache.JSON.Get(c, "complex_item", &obj)
fmt.Printf("%#v (%#v)\n", obj, err)
// Output:
// 42 (<nil>)
// "Hello, IronCache" (<nil>)
// "{\"args\":[\"apples\",\"oranges\"],\"test\":\"this is a dict\"}" (<nil>)
// struct { Args []string; Test string }{Args:[]string{"apples", "oranges"}, Test:"this is a dict"} (<nil>)
}
func Example5DeletingData() {
c := cache.New("test_cache")
// Immediately delete an item
c.Delete("string_item")
p(c.Get("string_item"))
// Output:
// <nil> 404 Not Found: The resource, project, or endpoint being requested doesn't exist.
}


@@ -1,58 +0,0 @@
package cache_test
import (
"testing"
"time"
"github.com/iron-io/iron_go3/cache"
. "github.com/jeffh/go.bdd"
)
func TestEverything(t *testing.T) {}
func init() {
defer PrintSpecReport()
Describe("IronCache", func() {
c := cache.New("cachename")
It("Lists all caches", func() {
_, err := c.ListCaches(0, 100) // can't check the caches value just yet.
Expect(err, ToBeNil)
})
It("Puts a value into the cache", func() {
err := c.Put("keyname", &cache.Item{
Value: "value",
Expiration: 2 * time.Second,
})
Expect(err, ToBeNil)
})
It("Gets a value from the cache", func() {
value, err := c.Get("keyname")
Expect(err, ToBeNil)
Expect(value, ToEqual, "value")
})
It("Gets meta-information about an item", func() {
err := c.Put("forever", &cache.Item{Value: "and ever", Expiration: 0})
Expect(err, ToBeNil)
value, err := c.GetMeta("forever")
Expect(err, ToBeNil)
Expect(value["key"], ToEqual, "forever")
Expect(value["value"], ToEqual, "and ever")
Expect(value["cache"], ToEqual, "cachename")
Expect(value["expires"], ToEqual, "9999-01-01T00:00:00Z")
Expect(value["flags"], ToEqual, 0.0)
})
It("Sets numeric items", func() {
err := c.Set("number", 42)
Expect(err, ToBeNil)
value, err := c.Get("number")
Expect(err, ToBeNil)
Expect(value.(float64), ToEqual, 42.0)
})
})
}


@@ -1,31 +0,0 @@
machine:
  environment:
    GOPATH: $HOME
    GOROOT: $HOME/go
    PATH: $GOROOT/bin:$PATH
    GO15VENDOREXPERIMENT: 1
    CHECKOUT_DIR: $HOME/$CIRCLE_PROJECT_REPONAME
    GH_IRON: $HOME/src/github.com/iron-io
    GO_PROJECT: ../src/github.com/iron-io
checkout:
  post:
    - mkdir -p "$GH_IRON"
    - cp -R "$CHECKOUT_DIR" "$GH_IRON/$CIRCLE_PROJECT_REPONAME":
        pwd: $HOME
dependencies:
  pre:
    # install go1.5
    - wget https://storage.googleapis.com/golang/go1.5.linux-amd64.tar.gz
    - tar -C $HOME -xvzf go1.5.linux-amd64.tar.gz
  override:
    # this was being dumb, don't want it to auto detect we are a go repo b/c vendoring
    - which go
test:
  override:
    - go get github.com/jeffh/go.bdd:
        pwd: $GO_PROJECT/$CIRCLE_PROJECT_REPONAME
    - go test ./mq:
        pwd: $GO_PROJECT/$CIRCLE_PROJECT_REPONAME


@@ -1,283 +0,0 @@
// config helper for cache, mq, and worker
package config
import (
"encoding/json"
"io/ioutil"
"log"
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
)
// Contains the configuration for an iron.io service.
// An empty instance is not usable
type Settings struct {
Token string `json:"token,omitempty"`
ProjectId string `json:"project_id,omitempty"`
Host string `json:"host,omitempty"`
Scheme string `json:"scheme,omitempty"`
Port uint16 `json:"port,omitempty"`
ApiVersion string `json:"api_version,omitempty"`
UserAgent string `json:"user_agent,omitempty"`
}
var (
debug = false
goVersion = runtime.Version()
Presets = map[string]Settings{
"worker": Settings{
Scheme: "https",
Port: 443,
ApiVersion: "2",
Host: "worker-aws-us-east-1.iron.io",
UserAgent: "iron_go3/worker 2.0 (Go " + goVersion + ")",
},
"mq": Settings{
Scheme: "https",
Port: 443,
ApiVersion: "3",
Host: "mq-aws-us-east-1-1.iron.io",
UserAgent: "iron_go3/mq 3.0 (Go " + goVersion + ")",
},
"cache": Settings{
Scheme: "https",
Port: 443,
ApiVersion: "1",
Host: "cache-aws-us-east-1.iron.io",
UserAgent: "iron_go3/cache 1.0 (Go " + goVersion + ")",
},
}
)
func dbg(v ...interface{}) {
if debug {
log.Println(v...)
}
}
// ManualConfig gathers configuration from env variables, json config files
// and finally overwrites it with specified instance of Settings.
// Examples of fullProduct are "iron_worker", "iron_cache", "iron_mq" and
func ManualConfig(fullProduct string, configuration *Settings) (settings Settings) {
return config(fullProduct, "", configuration)
}
// Config gathers configuration from env variables and json config files.
// Examples of fullProduct are "iron_worker", "iron_cache", "iron_mq".
func Config(fullProduct string) (settings Settings) {
return config(fullProduct, "", nil)
}
// Like Config, but useful for keeping multiple dev environment information in
// one iron.json config file. If env="", works same as Config.
//
// e.g.
// {
// "production": {
// "token": ...,
// "project_id": ...
// },
// "test": {
// ...
// }
// }
func ConfigWithEnv(fullProduct, env string) (settings Settings) {
return config(fullProduct, env, nil)
}
func config(fullProduct, env string, configuration *Settings) Settings {
if os.Getenv("IRON_CONFIG_DEBUG") != "" {
debug = true
dbg("debugging of config enabled")
}
pair := strings.SplitN(fullProduct, "_", 2)
if len(pair) != 2 {
panic("Invalid product name, has to use prefix.")
}
family, product := pair[0], pair[1]
base, found := Presets[product]
if !found {
base = Settings{
Scheme: "https",
Port: 443,
ApiVersion: "1",
Host: product + "-aws-us-east-1.iron.io",
UserAgent: "iron_go",
}
}
base.globalConfig(family, product, env)
base.globalEnv(family, product)
base.productEnv(family, product)
base.localConfig(family, product, env)
base.manualConfig(configuration)
if base.Token == "" || base.ProjectId == "" {
panic("Didn't find token or project_id in configs. Check your environment or iron.json.")
}
return base
}
func (s *Settings) globalConfig(family, product, env string) {
home, err := homeDir()
if err != nil {
log.Println("Error getting home directory:", err)
return
}
path := filepath.Join(home, ".iron.json")
s.UseConfigFile(family, product, path, env)
}
// The environment variables the scheme looks for are all of the same formula:
// the camel-cased product name is switched to an underscore (“IronWorker”
// becomes “iron_worker”) and converted to be all capital letters. For the
// global environment variables, “IRON” is used by itself. The value being
// loaded is then joined by an underscore to the name, and again capitalised.
// For example, to retrieve the OAuth token, the client looks for “IRON_TOKEN”.
func (s *Settings) globalEnv(family, product string) {
eFamily := strings.ToUpper(family) + "_"
s.commonEnv(eFamily)
}
// In the case of product-specific variables (which override global variables),
// it would be “IRON_WORKER_TOKEN” (for IronWorker).
func (s *Settings) productEnv(family, product string) {
eProduct := strings.ToUpper(family) + "_" + strings.ToUpper(product) + "_"
s.commonEnv(eProduct)
}
func (s *Settings) localConfig(family, product, env string) {
s.UseConfigFile(family, product, "iron.json", env)
}
func (s *Settings) manualConfig(settings *Settings) {
if settings != nil {
s.UseSettings(settings)
}
}
func (s *Settings) commonEnv(prefix string) {
if token := os.Getenv(prefix + "TOKEN"); token != "" {
s.Token = token
dbg("env has TOKEN:", s.Token)
}
if pid := os.Getenv(prefix + "PROJECT_ID"); pid != "" {
s.ProjectId = pid
dbg("env has PROJECT_ID:", s.ProjectId)
}
if host := os.Getenv(prefix + "HOST"); host != "" {
s.Host = host
dbg("env has HOST:", s.Host)
}
if scheme := os.Getenv(prefix + "SCHEME"); scheme != "" {
s.Scheme = scheme
dbg("env has SCHEME:", s.Scheme)
}
if port := os.Getenv(prefix + "PORT"); port != "" {
n, err := strconv.ParseUint(port, 10, 16)
if err != nil {
panic(err)
}
s.Port = uint16(n)
dbg("env has PORT:", s.Port)
}
if vers := os.Getenv(prefix + "API_VERSION"); vers != "" {
s.ApiVersion = vers
dbg("env has API_VERSION:", s.ApiVersion)
}
}
// Load and merge the given JSON config file.
func (s *Settings) UseConfigFile(family, product, path, env string) {
content, err := ioutil.ReadFile(path)
if err != nil {
dbg("tried to", err, ": skipping")
return
}
data := map[string]interface{}{}
err = json.Unmarshal(content, &data)
if err != nil {
panic("Invalid JSON in " + path + ": " + err.Error())
}
dbg("config in", path, "found")
if env != "" {
envdata, ok := data[env].(map[string]interface{})
if !ok {
return // bail, they specified an env but we couldn't find one, so error out.
}
data = envdata
}
s.UseConfigMap(data)
ipData, found := data[family+"_"+product]
if found {
pData := ipData.(map[string]interface{})
s.UseConfigMap(pData)
}
}
// Merge the given data into the settings.
func (s *Settings) UseConfigMap(data map[string]interface{}) {
if token, found := data["token"]; found {
s.Token = token.(string)
dbg("config has token:", s.Token)
}
if projectId, found := data["project_id"]; found {
s.ProjectId = projectId.(string)
dbg("config has project_id:", s.ProjectId)
}
if host, found := data["host"]; found {
s.Host = host.(string)
dbg("config has host:", s.Host)
}
if prot, found := data["scheme"]; found {
s.Scheme = prot.(string)
dbg("config has scheme:", s.Scheme)
}
if port, found := data["port"]; found {
s.Port = uint16(port.(float64))
dbg("config has port:", s.Port)
}
if vers, found := data["api_version"]; found {
s.ApiVersion = vers.(string)
dbg("config has api_version:", s.ApiVersion)
}
if agent, found := data["user_agent"]; found {
s.UserAgent = agent.(string)
dbg("config has user_agent:", s.UserAgent)
}
}
// Merge the given instance into the settings.
func (s *Settings) UseSettings(settings *Settings) {
if settings.Token != "" {
s.Token = settings.Token
}
if settings.ProjectId != "" {
s.ProjectId = settings.ProjectId
}
if settings.Host != "" {
s.Host = settings.Host
}
if settings.Scheme != "" {
s.Scheme = settings.Scheme
}
if settings.ApiVersion != "" {
s.ApiVersion = settings.ApiVersion
}
if settings.UserAgent != "" {
s.UserAgent = settings.UserAgent
}
if settings.Port > 0 {
s.Port = settings.Port
}
}


@@ -1,19 +0,0 @@
package config_test
import (
"github.com/iron-io/iron_go3/config"
. "github.com/jeffh/go.bdd"
"testing"
)
func init() {
defer PrintSpecReport()
Describe("gets config", func() {
It("gets default configs", func() {
s := config.Config("iron_undefined")
Expect(s.Host, ToEqual, "undefined-aws-us-east-1.iron.io")
})
})
}
func TestEverything(t *testing.T) {}


@@ -1,84 +0,0 @@
//The MIT License (MIT)
//
//Copyright (c) 2013 Mitchell Hashimoto
//
//Permission is hereby granted, free of charge, to any person obtaining a copy
//of this software and associated documentation files (the "Software"), to deal
//in the Software without restriction, including without limitation the rights
//to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
//copies of the Software, and to permit persons to whom the Software is
//furnished to do so, subject to the following conditions:
//
//The above copyright notice and this permission notice shall be included in
//all copies or substantial portions of the Software.
//
//THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
//IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
//FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
//AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
//LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
//OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
//THE SOFTWARE.
//
// This file is a wholesale copy of https://github.com/mitchellh/go-homedir@1f6da4a72e57d4e7edd4a7295a585e0a3999a2d4
// with Dir() renamed to homeDir() and Expand() deleted.
package config
import (
"bytes"
"errors"
"os"
"os/exec"
"runtime"
"strings"
)
// homeDir returns the home directory for the executing user.
//
// This uses an OS-specific method for discovering the home directory.
// An error is returned if a home directory cannot be detected.
func homeDir() (string, error) {
if runtime.GOOS == "windows" {
return dirWindows()
}
// Unix-like system, so just assume Unix
return dirUnix()
}
func dirUnix() (string, error) {
// First prefer the HOME environmental variable
if home := os.Getenv("HOME"); home != "" {
return home, nil
}
// If that fails, try the shell
var stdout bytes.Buffer
cmd := exec.Command("sh", "-c", "eval echo ~$USER")
cmd.Stdout = &stdout
if err := cmd.Run(); err != nil {
return "", err
}
result := strings.TrimSpace(stdout.String())
if result == "" {
return "", errors.New("blank output when reading home directory")
}
return result, nil
}
func dirWindows() (string, error) {
drive := os.Getenv("HOMEDRIVE")
path := os.Getenv("HOMEPATH")
home := drive + path
if drive == "" || path == "" {
home = os.Getenv("USERPROFILE")
}
if home == "" {
return "", errors.New("HOMEDRIVE, HOMEPATH, and USERPROFILE are blank")
}
return home, nil
}
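The fallback order in dirWindows can be sketched as a pure function over the three environment values it reads (a hedged illustration; `windowsHome` is a hypothetical helper, not part of the copied file):

```go
package main

import "fmt"

// windowsHome mirrors dirWindows above: prefer HOMEDRIVE+HOMEPATH,
// and fall back to USERPROFILE when either piece is missing.
func windowsHome(drive, path, userProfile string) string {
	home := drive + path
	if drive == "" || path == "" {
		home = userProfile
	}
	return home
}

func main() {
	fmt.Println(windowsHome("C:", `\Users\me`, ""))        // drive+path wins
	fmt.Println(windowsHome("", "", `C:\Users\fallback`)) // USERPROFILE fallback
}
```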


@@ -1,108 +0,0 @@
/*
This code sample demonstrates how to get info about a code package
http://dev.iron.io/worker/reference/api/
http://dev.iron.io/worker/reference/api/#get_info_about_a_code_package
*/
package main
import (
"bytes"
"encoding/json"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
"io/ioutil"
"log"
"text/template"
"time"
)
type (
Code struct {
Id string `json:"id"`
ProjectId string `json:"project_id"`
Name string `json:"name"`
Runtime string `json:"runtime"`
LatestChecksum string `json:"latest_checksum"`
Revision int `json:"rev"`
LatestHistoryId string `json:"latest_history_id"`
LatestChange time.Time `json:"latest_change"`
}
)
func main() {
// Create your configuration for iron_worker
// Find these values in your credentials
config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"
// Capture info for this code
codeId := "522d160a91c530531f6f528d"
// Create your endpoint url for tasks
url := api.Action(config, "codes", codeId)
log.Printf("Url: %s\n", url.URL.String())
// Post the request to Iron.io
resp, err := url.Request("GET", nil)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
// Check the status code
if resp.StatusCode != 200 {
log.Printf("%v\n", resp)
return
}
// Capture the response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println(err)
return
}
// Unmarshal to struct
code := &Code{}
err = json.Unmarshal(body, code)
if err != nil {
log.Printf("%v\n", err)
return
}
// Unmarshal to map
results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
log.Printf("%v\n", err)
return
}
// Pretty print the response
prettyPrint(code)
}
func prettyPrint(code *Code) {
prettyTemplate := template.Must(template.New("pretty").Parse(prettyPrintFormat()))
display := new(bytes.Buffer)
prettyTemplate.Execute(display, code)
log.Printf("%s,\n", display.String())
}
func prettyPrintFormat() string {
return `{
"id": "{{.Id}}",
"project_id": "{{.ProjectId}}",
"name": "{{.Name}}",
"runtime": "{{.Runtime}}",
"latest_checksum": "{{.LatestChecksum}}",
"rev": {{.Revision}},
"latest_history_id": "{{.LatestHistoryId}}",
"latest_change": "{{.LatestChange}}",
}`
}


@@ -1,116 +0,0 @@
/*
This code sample demonstrates how to get a list of code packages
http://dev.iron.io/worker/reference/api/
http://dev.iron.io/worker/reference/api/#list_code_packages
*/
package main
import (
"bytes"
"encoding/json"
"fmt"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
"io/ioutil"
"log"
"text/template"
"time"
)
type (
CodeResponse struct {
Codes []Code `json:"codes"`
}
Code struct {
Id string `json:"id"`
ProjectId string `json:"project_id"`
Name string `json:"name"`
Runtime string `json:"runtime"`
LatestChecksum string `json:"latest_checksum"`
Revision int `json:"rev"`
LatestHistoryId string `json:"latest_history_id"`
LatestChange time.Time `json:"latest_change"`
}
)
func main() {
// Create your configuration for iron_worker
// Find these values in your credentials
config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"
// Create your endpoint url for tasks
url := api.ActionEndpoint(config, "codes")
log.Printf("Url: %s\n", url.URL.String())
// Post the request to Iron.io
resp, err := url.Request("GET", nil)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
// Check the status code
if resp.StatusCode != 200 {
log.Printf("%v\n", resp)
return
}
// Capture the response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println(err)
return
}
// Unmarshal to struct
codeResponse := &CodeResponse{}
err = json.Unmarshal(body, codeResponse)
if err != nil {
log.Printf("%v\n", err)
return
}
// Or you can unmarshal to a map
results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
log.Printf("%v\n", err)
return
}
// Pretty print the response
prettyPrint(codeResponse)
}
func prettyPrint(codeResponse *CodeResponse) {
prettyTemplate := template.Must(template.New("pretty").Parse(prettyPrintFormat()))
codes := "\n"
display := new(bytes.Buffer)
for _, code := range codeResponse.Codes {
display.Reset()
prettyTemplate.Execute(display, code)
codes += fmt.Sprintf("%s,\n", display.String())
}
log.Print(codes)
}
func prettyPrintFormat() string {
return `{
"id": "{{.Id}}",
"project_id": "{{.ProjectId}}",
"name": "{{.Name}}",
"runtime": "{{.Runtime}}",
"latest_checksum": "{{.LatestChecksum}}",
"rev": {{.Revision}},
"latest_history_id": "{{.LatestHistoryId}}",
"latest_change": "{{.LatestChange}}",
}`
}


@@ -1,120 +0,0 @@
/*
This code sample demonstrates how to get info about a task
http://dev.iron.io/worker/reference/api/
http://dev.iron.io/worker/reference/api/#get_info_about_a_task
*/
package main
import (
"bytes"
"encoding/json"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
"io/ioutil"
"log"
"text/template"
"time"
)
type (
Task struct {
Id string `json:"id"`
ProjectId string `json:"project_id"`
CodeId string `json:"code_id"`
CodeHistoryId string `json:"code_history_id"`
Status string `json:"status"`
CodeName string `json:"code_name"`
CodeRevision string `json:"code_rev"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration int `json:"duration"`
Timeout int `json:"timeout"`
Payload string `json:"payload"`
UpdatedAt time.Time `json:"updated_at"`
CreatedAt time.Time `json:"created_at"`
}
)
func main() {
// Create your configuration for iron_worker
// Find these values in your credentials
config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"
// Capture info for this task
taskId := "52b45b17a31186632b00da4c"
// Create your endpoint url for tasks
url := api.Action(config, "tasks", taskId)
log.Printf("Url: %s\n", url.URL.String())
// Post the request to Iron.io
resp, err := url.Request("GET", nil)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
// Check the status code
if resp.StatusCode != 200 {
log.Printf("%v\n", resp)
return
}
// Capture the response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println(err)
return
}
// Unmarshal to struct
task := &Task{}
err = json.Unmarshal(body, task)
if err != nil {
log.Printf("%v\n", err)
return
}
// Or you can unmarshal to a map
results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
log.Printf("%v\n", err)
return
}
// Pretty print the response
prettyPrint(task)
}
func prettyPrint(task *Task) {
prettyTemplate := template.Must(template.New("pretty").Parse(prettyPrintFormat()))
display := new(bytes.Buffer)
prettyTemplate.Execute(display, task)
log.Printf("%s,\n", display.String())
}
func prettyPrintFormat() string {
return `{
"id": "{{.Id}}",
"project_id": "{{.ProjectId}}",
"code_id": "{{.CodeId}}",
"code_history_id": "{{.CodeHistoryId}}",
"status": "{{.Status}}",
"code_name": "{{.CodeName}}",
"code_revision": "{{.CodeRevision}}",
"start_time": "{{.StartTime}}",
"end_time": "{{.EndTime}}",
"duration": {{.Duration}},
"timeout": {{.Timeout}},
"payload": {{.Payload}},
"created_at": "{{.CreatedAt}}",
"updated_at": "{{.UpdatedAt}}",
}`
}


@@ -1,129 +0,0 @@
/*
This code sample demonstrates how to get a list of existing tasks
http://dev.iron.io/worker/reference/api/
http://dev.iron.io/worker/reference/api/#list_tasks
*/
package main
import (
"bytes"
"encoding/json"
"fmt"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
"io/ioutil"
"log"
"text/template"
"time"
)
type (
TaskResponse struct {
Tasks []Task `json:"tasks"`
}
Task struct {
Id string `json:"id"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
ProjectId string `json:"project_id"`
CodeId string `json:"code_id"`
Status string `json:"status"`
Message string `json:"msg"`
CodeName string `json:"code_name"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration int `json:"duration"`
RunTimes int `json:"run_times"`
Timeout int `json:"timeout"`
Percent int `json:"percent"`
}
)
func main() {
// Create your configuration for iron_worker
// Find these values in your credentials
config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"
// Create your endpoint url for tasks
url := api.ActionEndpoint(config, "tasks")
url.QueryAdd("code_name", "%s", "task")
log.Printf("Url: %s\n", url.URL.String())
// Post the request to Iron.io
resp, err := url.Request("GET", nil)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
// Check the status code
if resp.StatusCode != 200 {
log.Printf("%v\n", resp)
return
}
// Capture the response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println(err)
return
}
// Unmarshal to struct
taskResponse := &TaskResponse{}
err = json.Unmarshal(body, taskResponse)
if err != nil {
log.Printf("%v\n", err)
return
}
// Or you can unmarshal to a map
results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
log.Printf("%v\n", err)
return
}
// Pretty print the response
prettyPrint(taskResponse)
}
func prettyPrint(taskResponse *TaskResponse) {
prettyTemplate := template.Must(template.New("pretty").Parse(prettyPrintFormat()))
tasks := "\n"
display := new(bytes.Buffer)
for _, task := range taskResponse.Tasks {
display.Reset()
prettyTemplate.Execute(display, task)
tasks += fmt.Sprintf("%s,\n", display.String())
}
log.Print(tasks)
}
func prettyPrintFormat() string {
return `{
"id": "{{.Id}}",
"created_at": "{{.CreatedAt}}",
"updated_at": "{{.UpdatedAt}}",
"project_id": "{{.ProjectId}}",
"code_id": "{{.CodeId}}",
"status": "{{.Status}}",
"msg": "{{.Message}}",
"code_name": "{{.CodeName}}",
"start_time": "{{.StartTime}}",
"end_time": "{{.EndTime}}",
"duration": {{.Duration}},
"run_times": {{.RunTimes}},
"timeout": {{.Timeout}},
"percent": {{.Percent}}
}`
}


@@ -1,73 +0,0 @@
/*
This code sample demonstrates how to get the log for a task
http://dev.iron.io/worker/reference/api/
http://dev.iron.io/worker/reference/api/#get_a_tasks_log
*/
package main
import (
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
"io/ioutil"
"log"
"time"
)
type (
Task struct {
Id string `json:"id"`
ProjectId string `json:"project_id"`
CodeId string `json:"code_id"`
CodeHistoryId string `json:"code_history_id"`
Status string `json:"status"`
CodeName string `json:"code_name"`
CodeRevision string `json:"code_rev"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration int `json:"duration"`
Timeout int `json:"timeout"`
Payload string `json:"payload"`
UpdatedAt time.Time `json:"updated_at"`
CreatedAt time.Time `json:"created_at"`
}
)
func main() {
// Create your configuration for iron_worker
// Find these values in your credentials
config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"
// Capture info for this task
taskId := "52b45b17a31186632b00da4c"
// Create your endpoint url for tasks
url := api.Action(config, "tasks", taskId, "log")
log.Printf("Url: %s\n", url.URL.String())
// Post the request to Iron.io
resp, err := url.Request("GET", nil)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
// Check the status code
if resp.StatusCode != 200 {
log.Printf("%v\n", resp)
return
}
// Capture the response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println(err)
return
}
// Display the log
log.Printf("\n%s\n", string(body))
}


@@ -1,109 +0,0 @@
/*
This code sample demonstrates how to queue a worker from your existing
task list.
http://dev.iron.io/worker/reference/api/
http://dev.iron.io/worker/reference/api/#queue_a_task
*/
package main
import (
"bytes"
"encoding/json"
"fmt"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
"io/ioutil"
"log"
"text/template"
)
type (
TaskResponse struct {
Message string `json:"msg"`
Tasks []Task `json:"tasks"`
}
Task struct {
Id string `json:"id"`
}
)
// payload defines a sample payload document
var payload = `{"tasks":[
{
"code_name" : "Worker-Name",
"timeout" : 20,
"payload" : "{ \"key\" : \"value\", \"key2\" : \"value\" }"
}]}`
func main() {
// Create your configuration for iron_worker
// Find these values in your credentials
config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"
// Create your endpoint url for tasks
url := api.ActionEndpoint(config, "tasks")
log.Printf("Url: %s\n", url.URL.String())
// Convert the payload to a slice of bytes
postData := bytes.NewBufferString(payload)
// Post the request to Iron.io
resp, err := url.Request("POST", postData)
if err != nil {
log.Println(err)
return
}
defer resp.Body.Close()
// Capture the response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println(err)
return
}
// Unmarshal to struct
taskResponse := &TaskResponse{}
err = json.Unmarshal(body, taskResponse)
if err != nil {
log.Printf("%v\n", err)
return
}
// Or you can unmarshal to a map
results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
log.Printf("%v\n", err)
return
}
// Pretty print the response
prettyPrint(taskResponse)
}
func prettyPrint(taskResponse *TaskResponse) {
prettyTemplate := template.Must(template.New("pretty").Parse(prettyPrintFormat()))
tasks := "\n"
tasks += "\"msg\": " + taskResponse.Message + "\n"
display := new(bytes.Buffer)
for _, task := range taskResponse.Tasks {
display.Reset()
prettyTemplate.Execute(display, task)
tasks += fmt.Sprintf("%s,\n", display.String())
}
log.Print(tasks)
}
func prettyPrintFormat() string {
return `{
"id": "{{.Id}}",
}`
}


@@ -1,72 +0,0 @@
package mq_test
import (
"errors"
"github.com/iron-io/iron_go3/mq"
)
func ExampleQueue() error {
// Standard way of using a queue will be to just start pushing or
// getting messages, q.Upsert isn't necessary unless you explicitly
// need to create a queue with custom settings.
q := mq.New("my_queue2")
// Simply pushing messages will create a queue if it doesn't exist, with defaults.
_, err := q.PushStrings("msg1", "msg2")
if err != nil {
return err
}
msgs, err := q.GetN(2)
if err != nil {
return err
}
if len(msgs) != 2 {
return errors.New("not good")
}
return nil
}
func ExampleQueue_Upsert() error {
// Prepare a Queue from configs
q := mq.New("my_queue")
// Update will create the queue on the server with a message_timeout of
// 120, or update that setting if the queue already exists.
// First check whether the queue exists at all.
if _, err := q.Info(); mq.ErrQueueNotFound(err) {
_, err := q.Update(mq.QueueInfo{MessageTimeout: 120}) // ok, we'll make one.
if err != nil {
return err
}
}
// Definitely exists now.
// Let's just add some messages.
_, err := q.PushStrings("msg1", "msg2")
if err != nil {
return err
}
msgs, err := q.Peek()
if err != nil {
return err
}
if len(msgs) != 2 {
return errors.New("expected 2 messages on the queue")
}
return nil
}
func ExampleList() error {
qs, err := mq.List() // Will get up to 30 queues. All ready to use.
if err != nil {
return err
}
// Pop a message off of each queue.
for _, q := range qs {
_, err := q.Pop()
if err != nil {
return err
}
}
return nil
}


@@ -1,563 +0,0 @@
// IronMQ (elastic message queue) client library
package mq
import (
"encoding/json"
"errors"
"time"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
)
type Timestamped struct {
CreatedAt time.Time `json:"created_at,omitempty"`
UpdatedAt time.Time `json:"updated_at,omitempty"`
}
// A Queue is the client's idea of a queue, sufficient for getting
// information for the queue with given Name at the server configured
// with Settings. See mq.New()
type Queue struct {
Settings config.Settings `json:"-"`
Name string `json:"name"`
}
// When used for create/update, Size and TotalMessages will be omitted.
type QueueInfo struct {
Name string `json:"name"`
Size int `json:"size"`
TotalMessages int `json:"total_messages"`
MessageExpiration int `json:"message_expiration"`
MessageTimeout int `json:"message_timeout"`
Type string `json:"type,omitempty"`
Push *PushInfo `json:"push,omitempty"`
Alerts []Alert `json:"alerts,omitempty"`
}
type PushInfo struct {
RetriesDelay int `json:"retries_delay,omitempty"`
Retries int `json:"retries,omitempty"`
Subscribers []QueueSubscriber `json:"subscribers,omitempty"`
ErrorQueue string `json:"error_queue,omitempty"`
}
type QueueSubscriber struct {
Name string `json:"name"`
URL string `json:"url"`
Headers map[string]string `json:"headers,omitempty"` // HTTP headers
}
type Alert struct {
Type string `json:"type"`
Trigger int `json:"trigger"`
Direction string `json:"direction"`
Queue string `json:"queue"`
Snooze int `json:"snooze"`
}
// Message is dual purpose, as it represents a returned message and also
// can be used for creation. For creation, only Body and Delay are valid.
// Delay will not be present in returned message.
type Message struct {
Id string `json:"id,omitempty"`
Body string `json:"body"`
Delay int64 `json:"delay,omitempty"` // time in seconds to wait before enqueue, default 0
ReservedUntil time.Time `json:"reserved_until,omitempty"`
ReservedCount int `json:"reserved_count,omitempty"`
ReservationId string `json:"reservation_id,omitempty"`
q Queue // todo: shouldn't this be a pointer?
}
type PushStatus struct {
Retried int `json:"retried"`
StatusCode int `json:"status_code"`
Status string `json:"status"`
}
type Subscriber struct {
Retried int `json:"retried"`
StatusCode int `json:"status_code"`
Status string `json:"status"`
URL string `json:"url"`
}
type Subscription struct {
PushType string
Retries int
RetriesDelay int
}
func ErrQueueNotFound(err error) bool {
return err != nil && err.Error() == "404 Not Found: Queue not found"
}
// New uses the configuration specified in an iron.json file or environment variables
// to return a Queue object capable of acquiring information about or modifying the queue
// specified by queueName.
func New(queueName string) Queue {
return Queue{Settings: config.Config("iron_mq"), Name: queueName}
}
// ConfigNew uses the specified settings over configuration specified in an iron.json file or
// environment variables to return a Queue object capable of acquiring information about or
// modifying the queue specified by queueName.
func ConfigNew(queueName string, settings *config.Settings) Queue {
return Queue{Settings: config.ManualConfig("iron_mq", settings), Name: queueName}
}
// Will create a new queue, all fields are optional.
// Queue type cannot be changed.
func CreateQueue(queueName string, queueInfo QueueInfo) (QueueInfo, error) {
info := queueInfo
info.Name = queueName
return ConfigCreateQueue(info, nil)
}
// Will create a new queue, all fields are optional.
// Queue type cannot be changed.
func ConfigCreateQueue(queueInfo QueueInfo, settings *config.Settings) (QueueInfo, error) {
if queueInfo.Name == "" {
return QueueInfo{}, errors.New("Name of queue is empty")
}
url := api.Action(config.ManualConfig("iron_mq", settings), "queues", queueInfo.Name)
in := struct {
Queue QueueInfo `json:"queue"`
}{
Queue: queueInfo,
}
var out struct {
Queue QueueInfo `json:"queue"`
}
err := url.Req("PUT", in, &out)
return out.Queue, err
}
// List will get a list of all queues for the configured project, paginated 30 at a time.
// For paging or filtering, see ListPage and Filter.
func List() ([]Queue, error) {
return ListQueues(config.Config("iron_mq"), "", "", 0)
}
// ListPage is like List, but will allow specifying a page length and pagination.
// To get the first page, let prev = "".
// To get the second page, use the name of the last queue on the first page as "prev".
func ListPage(prev string, perPage int) ([]Queue, error) {
return ListQueues(config.Config("iron_mq"), "", prev, perPage)
}
// Filter is like List, but will only return queues with the specified prefix.
func Filter(prefix string) ([]Queue, error) {
return ListQueues(config.Config("iron_mq"), prefix, "", 0)
}
// FilterPage is like ListPage, but restricted to queues with the specified prefix.
func FilterPage(prefix, prev string, perPage int) ([]Queue, error) {
return ListQueues(config.Config("iron_mq"), prefix, prev, perPage)
}
func ListQueues(s config.Settings, prefix, prev string, perPage int) ([]Queue, error) {
var out struct {
Queues []Queue `json:"queues"`
}
url := api.Action(s, "queues")
if prev != "" {
url.QueryAdd("previous", "%v", prev)
}
if prefix != "" {
url.QueryAdd("prefix", "%v", prefix)
}
if perPage != 0 {
url.QueryAdd("per_page", "%d", perPage)
}
err := url.Req("GET", nil, &out)
if err != nil {
return nil, err
}
for idx := range out.Queues {
out.Queues[idx].Settings = s
}
return out.Queues, nil
}
func (q Queue) queues(s ...string) *api.URL { return api.Action(q.Settings, "queues", s...) }
func (q *Queue) UnmarshalJSON(data []byte) error {
var name struct {
Name string `json:"name"`
}
err := json.Unmarshal(data, &name)
q.Name = name.Name
return err
}
// Will return information about a queue, could also be used to check existence.
// TODO make QueueNotExist err
func (q Queue) Info() (QueueInfo, error) {
var out struct {
QI QueueInfo `json:"queue"`
}
err := q.queues(q.Name).Req("GET", nil, &out)
return out.QI, err
}
// Will create or update a queue, all QueueInfo fields are optional.
// Queue type cannot be changed.
func (q Queue) Update(queueInfo QueueInfo) (QueueInfo, error) {
var out struct {
QI QueueInfo `json:"queue"`
}
in := struct {
QI QueueInfo `json:"queue"`
}{
QI: queueInfo,
}
err := q.queues(q.Name).Req("PATCH", in, &out)
return out.QI, err
}
func (q Queue) Delete() error {
return q.queues(q.Name).Req("DELETE", nil, nil)
}
// PushString enqueues a message with body specified and no delay.
func (q Queue) PushString(body string) (id string, err error) {
ids, err := q.PushStrings(body)
if err != nil {
return
}
return ids[0], nil
}
// PushStrings enqueues messages with specified bodies and no delay.
func (q Queue) PushStrings(bodies ...string) (ids []string, err error) {
msgs := make([]Message, len(bodies))
for i, body := range bodies {
msgs[i] = Message{Body: body}
}
return q.PushMessages(msgs...)
}
// PushMessage enqueues a message.
func (q Queue) PushMessage(msg Message) (id string, err error) {
ids, err := q.PushMessages(msg)
if err != nil {
return "", err
} else if len(ids) < 1 {
return "", errors.New("didn't receive message ID for pushing message")
}
return ids[0], err
}
// PushMessages enqueues each message in order.
func (q Queue) PushMessages(msgs ...Message) (ids []string, err error) {
in := struct {
Messages []Message `json:"messages"`
}{
Messages: msgs,
}
var out struct {
IDs []string `json:"ids"`
Msg string `json:"msg"` // TODO get rid of this on server and here, too.
}
err = q.queues(q.Name, "messages").Req("POST", &in, &out)
return out.IDs, err
}
// Peek first 30 messages on queue.
func (q Queue) Peek() ([]Message, error) {
return q.PeekN(30)
}
// Peek with N, max 100.
func (q Queue) PeekN(n int) ([]Message, error) {
var out struct {
Messages []Message `json:"messages"`
}
err := q.queues(q.Name, "messages").
QueryAdd("n", "%d", n).
Req("GET", nil, &out)
for i := range out.Messages {
out.Messages[i].q = q
}
return out.Messages, err
}
// Reserves a message from the queue.
// The message will not be deleted, but will be reserved until the timeout
// expires. If the timeout expires before the message is deleted, the message
// will be placed back onto the queue.
// As a result, be sure to Delete a message after you're done with it.
func (q Queue) Reserve() (msg *Message, err error) {
msgs, err := q.GetN(1)
if len(msgs) > 0 {
return &msgs[0], err
}
return nil, err
}
// ReserveN reserves multiple messages from the queue.
func (q Queue) ReserveN(n int) ([]Message, error) {
return q.LongPoll(n, 60, 0, false)
}
// Get reserves a message from the queue.
// Deprecated, use Reserve instead.
func (q Queue) Get() (msg *Message, err error) {
return q.Reserve()
}
// GetN is Get for N.
// Deprecated, use ReserveN instead.
func (q Queue) GetN(n int) ([]Message, error) {
return q.ReserveN(n)
}
// TODO deprecate for LongPoll?
func (q Queue) GetNWithTimeout(n, timeout int) ([]Message, error) {
return q.LongPoll(n, timeout, 0, false)
}
// Pop will get and delete a message from the queue.
func (q Queue) Pop() (msg Message, err error) {
msgs, err := q.PopN(1)
if len(msgs) > 0 {
msg = msgs[0]
}
return msg, err
}
// PopN is Pop for N.
func (q Queue) PopN(n int) ([]Message, error) {
return q.LongPoll(n, 0, 0, true)
}
// LongPoll is the long form for Get, Pop, with all options available.
// If wait = 0, then LongPoll is simply a get, otherwise, the server
// will poll for n messages up to wait seconds (max 30).
// If delete is specified, then each message will be deleted instead
// of being put back onto the queue.
func (q Queue) LongPoll(n, timeout, wait int, delete bool) ([]Message, error) {
in := struct {
N int `json:"n"`
Timeout int `json:"timeout"`
Wait int `json:"wait"`
Delete bool `json:"delete"`
}{
N: n,
Timeout: timeout,
Wait: wait,
Delete: delete,
}
var out struct {
Messages []Message `json:"messages"` // TODO don't think we need pointer here
}
err := q.queues(q.Name, "reservations").Req("POST", &in, &out)
for i := range out.Messages {
out.Messages[i].q = q
}
return out.Messages, err
}
// Delete all messages in the queue
func (q Queue) Clear() (err error) {
return q.queues(q.Name, "messages").Req("DELETE", &struct{}{}, nil)
}
// Delete message from queue
func (q Queue) DeleteMessage(msgId, reservationId string) (err error) {
body := struct {
Res string `json:"reservation_id"`
}{Res: reservationId}
return q.queues(q.Name, "messages", msgId).Req("DELETE", body, nil)
}
// Delete multiple messages by id
func (q Queue) DeleteMessages(ids []string) error {
in := struct {
Ids []delmsg `json:"ids"`
}{Ids: make([]delmsg, len(ids))}
for i, val := range ids {
in.Ids[i].Id = val
}
return q.queues(q.Name, "messages").Req("DELETE", in, nil)
}
type delmsg struct {
Id string `json:"id"`
Res string `json:"reservation_id"`
}
// Delete multiple reserved messages from the queue
func (q Queue) DeleteReservedMessages(messages []Message) error {
ids := struct {
Ids []delmsg `json:"ids"`
}{Ids: make([]delmsg, len(messages))}
for i, val := range messages {
ids.Ids[i].Id = val.Id
ids.Ids[i].Res = val.ReservationId
}
return q.queues(q.Name, "messages").Req("DELETE", ids, nil)
}
// Reset timeout of message to keep it reserved
func (q Queue) TouchMessage(msgId, reservationId string) (string, error) {
return q.TouchMessageFor(msgId, reservationId, 0)
}
// Reset timeout of message to keep it reserved
func (q Queue) TouchMessageFor(msgId, reservationId string, timeout int) (string, error) {
in := struct {
Timeout int `json:"timeout,omitempty"`
ReservationId string `json:"reservation_id,omitempty"`
}{ReservationId: reservationId}
if timeout > 0 {
in.Timeout = timeout
}
out := &Message{}
err := q.queues(q.Name, "messages", msgId, "touch").Req("POST", in, out)
return out.ReservationId, err
}
// Put message back in the queue, message will be available after +delay+ seconds.
func (q Queue) ReleaseMessage(msgId, reservationId string, delay int64) (err error) {
body := struct {
Delay int64 `json:"delay"`
ReservationId string `json:"reservation_id"`
}{Delay: delay, ReservationId: reservationId}
return q.queues(q.Name, "messages", msgId, "release").Req("POST", &body, nil)
}
func (q Queue) MessageSubscribers(msgId string) ([]Subscriber, error) {
out := struct {
Subscribers []Subscriber `json:"subscribers"`
}{}
err := q.queues(q.Name, "messages", msgId, "subscribers").Req("GET", nil, &out)
return out.Subscribers, err
}
func (q Queue) AddSubscribers(subscribers ...QueueSubscriber) error {
collection := struct {
Subscribers []QueueSubscriber `json:"subscribers,omitempty"`
}{
Subscribers: subscribers,
}
return q.queues(q.Name, "subscribers").Req("POST", &collection, nil)
}
func (q Queue) ReplaceSubscribers(subscribers ...QueueSubscriber) error {
collection := struct {
Subscribers []QueueSubscriber `json:"subscribers,omitempty"`
}{
Subscribers: subscribers,
}
return q.queues(q.Name, "subscribers").Req("PUT", &collection, nil)
}
func (q Queue) RemoveSubscribers(subscribers ...string) error {
collection := make([]QueueSubscriber, len(subscribers))
for i, subscriber := range subscribers {
collection[i].Name = subscriber
}
return q.RemoveSubscribersCollection(collection...)
}
func (q Queue) RemoveSubscribersCollection(subscribers ...QueueSubscriber) error {
collection := struct {
Subscribers []QueueSubscriber `json:"subscribers,omitempty"`
}{
Subscribers: subscribers,
}
return q.queues(q.Name, "subscribers").Req("DELETE", &collection, nil)
}
func (q Queue) MessageSubscribersPollN(msgId string, n int) ([]Subscriber, error) {
for {
time.Sleep(100 * time.Millisecond)
subs, err := q.MessageSubscribers(msgId)
if err != nil {
return subs, err
}
if len(subs) >= n && actualPushStatus(subs) {
return subs, nil
}
}
}
func (q Queue) AddAlerts(alerts ...*Alert) (err error) {
var queue struct {
QI QueueInfo `json:"queue"`
}
in := QueueInfo{
Alerts: make([]Alert, len(alerts)),
}
for i, alert := range alerts {
in.Alerts[i] = *alert
}
queue.QI = in
return q.queues(q.Name).Req("PATCH", &queue, nil)
}
func actualPushStatus(subs []Subscriber) bool {
for _, sub := range subs {
if sub.Status == "queued" {
return false
}
}
return true
}
// Delete message from queue
func (m Message) Delete() (err error) {
return m.q.DeleteMessage(m.Id, m.ReservationId)
}
// Reset timeout of message to keep it reserved
func (m *Message) Touch() (err error) {
return m.TouchFor(0)
}
// Reset timeout of message to keep it reserved
func (m *Message) TouchFor(timeout int) (err error) {
reservationId, err := m.q.TouchMessageFor(m.Id, m.ReservationId, timeout)
m.ReservationId = reservationId
return err
}
// Put message back in the queue, message will be available after +delay+ seconds.
func (m Message) Release(delay int64) (err error) {
return m.q.ReleaseMessage(m.Id, m.ReservationId, delay)
}
func (m Message) Subscribers() (interface{}, error) {
return m.q.MessageSubscribers(m.Id)
}


@@ -1,142 +0,0 @@
package mq
import (
"fmt"
"testing"
"time"
. "github.com/jeffh/go.bdd"
)
func TestEverything(t *testing.T) {}
func q(name string) Queue {
c := New(name)
return c
}
func init() {
defer PrintSpecReport()
Describe("IronMQ", func() {
It("Deletes all existing messages", func() {
c := q("queuename")
_, err := c.PushString("just a little test")
Expect(err, ToBeNil)
Expect(c.Clear(), ToBeNil)
info, err := c.Info()
Expect(err, ToBeNil)
Expect(info.Size, ToEqual, 0x0)
})
It("Pushes ands gets a message", func() {
c := q("queuename")
id1, err := c.PushString("just a little test")
Expect(err, ToBeNil)
msg, err := c.Get()
Expect(err, ToBeNil)
Expect(msg, ToNotBeNil)
Expect(msg.Id, ToDeepEqual, id1)
Expect(msg.Body, ToDeepEqual, "just a little test")
err = c.DeleteMessage(msg.Id, msg.ReservationId)
Expect(err, ToBeNil)
info, err := c.Info()
Expect(err, ToBeNil)
Expect(info.Size, ToEqual, 0x0)
})
It("clears the queue", func() {
q := q("queuename")
strings := []string{}
for n := 0; n < 100; n++ {
strings = append(strings, fmt.Sprint("test: ", n))
}
_, err := q.PushStrings(strings...)
Expect(err, ToBeNil)
info, err := q.Info()
Expect(err, ToBeNil)
Expect(info.Size, ToEqual, 100)
Expect(q.Clear(), ToBeNil)
info, err = q.Info()
Expect(err, ToBeNil)
Expect(info.Size, ToEqual, 0)
})
It("Lists all queues", func() {
c := q("queuename")
queues, err := ListQueues(c.Settings, "", "", 101) // can't check the caches value just yet.
Expect(err, ToBeNil)
l := len(queues)
t := l >= 1
Expect(t, ToBeTrue)
found := false
for _, queue := range queues {
if queue.Name == "queuename" {
found = true
break
}
}
Expect(found, ToEqual, true)
})
It("releases a message", func() {
c := q("queuename")
id, err := c.PushString("trying")
Expect(err, ToBeNil)
msg, err := c.Get()
Expect(err, ToBeNil)
err = msg.Release(3)
Expect(err, ToBeNil)
msg, err = c.Get()
Expect(msg, ToBeNil)
time.Sleep(4 * time.Second)
msg, err = c.Get()
Expect(err, ToBeNil)
Expect(msg, ToNotBeNil)
Expect(msg.Id, ToEqual, id)
})
It("updates a queue", func() {
name := "pushqueue" + time.Now().String()
_, err := CreateQueue(name, QueueInfo{Type: "multicast", Push: &PushInfo{
Subscribers: []QueueSubscriber{{Name: "first", URL: "http://hit.me.with.a.message"}}}})
Expect(err, ToBeNil)
c := q(name)
info, err := c.Info()
Expect(err, ToBeNil)
qi := QueueInfo{Type: "multicast", Push: &PushInfo{
Subscribers: []QueueSubscriber{{Name: "first", URL: "http://hit.me.with.another.message"}}}}
rc, err := c.Update(qi)
Expect(err, ToBeNil)
info, err = c.Info()
Expect(err, ToBeNil)
Expect(info.Name, ToEqual, rc.Name)
err = c.Delete()
Expect(err, ToBeNil)
})
})
}


@@ -1,616 +0,0 @@
package worker
import (
"archive/zip"
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"crypto/rsa"
"crypto/sha1"
"crypto/x509"
"encoding/base64"
"encoding/json"
"encoding/pem"
"fmt"
"io"
"io/ioutil"
"mime/multipart"
"time"
)
type Schedule struct {
CodeName string `json:"code_name"`
Delay *time.Duration `json:"delay"`
EndAt *time.Time `json:"end_at"`
MaxConcurrency *int `json:"max_concurrency"`
Name string `json:"name"`
Payload string `json:"payload"`
Priority *int `json:"priority"`
RunEvery *int `json:"run_every"`
RunTimes *int `json:"run_times"`
StartAt *time.Time `json:"start_at"`
Cluster string `json:"cluster"`
Label string `json:"label"`
}
type ScheduleInfo struct {
CodeName string `json:"code_name"`
CreatedAt time.Time `json:"created_at"`
EndAt time.Time `json:"end_at"`
Id string `json:"id"`
LastRunTime time.Time `json:"last_run_time"`
MaxConcurrency int `json:"max_concurrency"`
Msg string `json:"msg"`
NextStart time.Time `json:"next_start"`
ProjectId string `json:"project_id"`
RunCount int `json:"run_count"`
RunTimes int `json:"run_times"`
StartAt time.Time `json:"start_at"`
Status string `json:"status"`
UpdatedAt time.Time `json:"updated_at"`
}
type Task struct {
CodeName string `json:"code_name"`
Payload string `json:"payload"`
Priority int `json:"priority"`
Timeout *time.Duration `json:"timeout"`
Delay *time.Duration `json:"delay"`
Cluster string `json:"cluster"`
Label string `json:"label"`
}
type TaskInfo struct {
CodeHistoryId string `json:"code_history_id"`
CodeId string `json:"code_id"`
CodeName string `json:"code_name"`
CodeRev string `json:"code_rev"`
Id string `json:"id"`
Payload string `json:"payload"`
ProjectId string `json:"project_id"`
Status string `json:"status"`
Msg string `json:"msg,omitempty"`
ScheduleId string `json:"schedule_id"`
Duration int `json:"duration"`
RunTimes int `json:"run_times"`
Timeout int `json:"timeout"`
Percent int `json:"percent,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
}
type CodeSource map[string][]byte // map[pathInZip]code
type Code struct {
Id string `json:"id,omitempty"`
Name string `json:"name"`
Runtime string `json:"runtime"`
FileName string `json:"file_name"`
Config string `json:"config,omitempty"`
MaxConcurrency int `json:"max_concurrency,omitempty"`
Retries *int `json:"retries,omitempty"`
RetriesDelay *int `json:"retries_delay,omitempty"` // seconds
Stack string `json:"stack"`
Image string `json:"image"`
Command string `json:"command"`
Host string `json:"host,omitempty"` // PaaS router thing
EnvVars map[string]string `json:"env_vars"`
Source CodeSource `json:"-"`
DefaultPriority int `json:"default_priority,omitempty"`
}
type CodeInfo struct {
Id string `json:"id"`
LatestChecksum string `json:"latest_checksum"`
LatestHistoryId string `json:"latest_history_id"`
Name string `json:"name"`
ProjectId string `json:"project_id"`
Runtime *string `json:"runtime"`
Rev int `json:"rev"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
LatestChange time.Time `json:"latest_change"`
}
// CodePackageList lists code packages.
//
// The page argument decides the page of code packages you want to retrieve, starting from 0, maximum is 100.
//
// The perPage argument determines the number of code packages to return. Note
// this is a maximum value, so there may be fewer packages returned if there
aren't enough results. If this is < 1, 1 will be the default. Maximum is 100.
func (w *Worker) CodePackageList(page, perPage int) (codes []CodeInfo, err error) {
out := map[string][]CodeInfo{}
err = w.codes().
QueryAdd("page", "%d", page).
QueryAdd("per_page", "%d", perPage).
Req("GET", nil, &out)
if err != nil {
return
}
return out["codes"], nil
}
// CodePackageInfo gets info about a code package
func (w *Worker) CodePackageInfo(codeId string) (code CodeInfo, err error) {
out := CodeInfo{}
err = w.codes(codeId).Req("GET", nil, &out)
return out, err
}
// CodePackageDelete deletes a code package
func (w *Worker) CodePackageDelete(codeId string) (err error) {
return w.codes(codeId).Req("DELETE", nil, nil)
}
// CodePackageDownload downloads a code package
func (w *Worker) CodePackageDownload(codeId string) (code Code, err error) {
out := Code{}
err = w.codes(codeId, "download").Req("GET", nil, &out)
return out, err
}
// CodePackageRevisions lists the revisions of a code package
func (w *Worker) CodePackageRevisions(codeId string) (code Code, err error) {
out := Code{}
err = w.codes(codeId, "revisions").Req("GET", nil, &out)
return out, err
}
// CodePackageZipUpload can be used to upload a code package with a zip
// package, where zipName is a filepath where the zip can be located. If
// zipName is an empty string, then the code package will be uploaded without a
// zip package (see CodePackageUpload).
func (w *Worker) CodePackageZipUpload(zipName string, args Code) (*Code, error) {
return w.codePackageUpload(zipName, args)
}
// CodePackageUpload uploads a code package without a zip file.
func (w *Worker) CodePackageUpload(args Code) (*Code, error) {
return w.codePackageUpload("", args)
}
func (w *Worker) codePackageUpload(zipName string, args Code) (*Code, error) {
b := randomBoundary()
r := &streamZipPipe{zipName: zipName, args: args, boundary: b}
defer r.Close()
var out Code
err := w.codes().
SetContentType("multipart/form-data; boundary="+b).
Req("POST", r, &out)
return &out, err
}
func randomBoundary() string {
var buf [30]byte
_, err := io.ReadFull(rand.Reader, buf[:])
if err != nil {
panic(err)
}
return fmt.Sprintf("%x", buf[:])
}
// implements Seek so that requests can be retried. not thread safe:
// Read and Seek must be called from the same goroutine.
type streamZipPipe struct {
zipName string
args Code
boundary string
r io.ReadCloser
w io.WriteCloser
once bool
err chan error
}
// safe to call multiple times; implements io.Closer so net/http will call this
func (s *streamZipPipe) Close() error {
if s.r != nil {
return s.r.Close()
}
return nil
}
// only seeks to beginning, ignores parameters
func (s *streamZipPipe) Seek(offset int64, whence int) (int64, error) {
// just restart the whole thing, the last pipe should have errored out and been closed
s.r, s.w = io.Pipe()
s.err = make(chan error, 1)
s.once = true
go s.pipe()
return 0, nil
}
func (s *streamZipPipe) Read(b []byte) (int, error) {
if !s.once {
s.once = true
s.r, s.w = io.Pipe()
s.err = make(chan error, 1)
go s.pipe()
}
select {
case err := <-s.err:
if err != nil {
return 0, err // n should get ignored
}
default:
}
return s.r.Read(b)
}
func (s *streamZipPipe) pipe() {
defer s.w.Close()
mWriter := multipart.NewWriter(s.w)
mWriter.SetBoundary(s.boundary)
mMetaWriter, err := mWriter.CreateFormField("data")
if err != nil {
s.err <- err
return
}
if err := json.NewEncoder(mMetaWriter).Encode(s.args); err != nil {
s.err <- err
return
}
if s.zipName != "" {
r, err := zip.OpenReader(s.zipName)
if err != nil {
s.err <- err
return
}
defer r.Close()
mFileWriter, err := mWriter.CreateFormFile("file", "worker.zip")
if err != nil {
s.err <- err
return
}
zWriter := zip.NewWriter(mFileWriter)
for _, f := range r.File {
fWriter, err := zWriter.Create(f.Name)
if err != nil {
s.err <- err
return
}
rc, err := f.Open()
if err != nil {
s.err <- err
return
}
_, err = io.Copy(fWriter, rc)
rc.Close()
if err != nil {
s.err <- err
return
}
}
if err := zWriter.Close(); err != nil {
s.err <- err
return
}
}
if err := mWriter.Close(); err != nil {
s.err <- err
}
}
func (w *Worker) TaskList() (tasks []TaskInfo, err error) {
out := map[string][]TaskInfo{}
err = w.tasks().Req("GET", nil, &out)
if err != nil {
return
}
return out["tasks"], nil
}
type TaskListParams struct {
CodeName string
Label string
Page int
PerPage int
FromTime time.Time
ToTime time.Time
Statuses []string
}
func (w *Worker) FilteredTaskList(params TaskListParams) (tasks []TaskInfo, err error) {
out := map[string][]TaskInfo{}
url := w.tasks()
url.QueryAdd("code_name", "%s", params.CodeName)
if params.Label != "" {
url.QueryAdd("label", "%s", params.Label)
}
if params.Page > 0 {
url.QueryAdd("page", "%d", params.Page)
}
if params.PerPage > 0 {
url.QueryAdd("per_page", "%d", params.PerPage)
}
if fromTimeSeconds := params.FromTime.Unix(); fromTimeSeconds > 0 {
url.QueryAdd("from_time", "%d", fromTimeSeconds)
}
if toTimeSeconds := params.ToTime.Unix(); toTimeSeconds > 0 {
url.QueryAdd("to_time", "%d", toTimeSeconds)
}
for _, status := range params.Statuses {
url.QueryAdd(status, "%d", true)
}
err = url.Req("GET", nil, &out)
if err != nil {
return
}
return out["tasks"], nil
}
// TaskQueue queues a task
func (w *Worker) TaskQueue(tasks ...Task) (taskIds []string, err error) {
outTasks := make([]map[string]interface{}, 0, len(tasks))
for _, task := range tasks {
thisTask := map[string]interface{}{
"code_name": task.CodeName,
"payload": task.Payload,
"priority": task.Priority,
"cluster": task.Cluster,
"label": task.Label,
}
if task.Timeout != nil {
thisTask["timeout"] = (*task.Timeout).Seconds()
}
if task.Delay != nil {
thisTask["delay"] = int64((*task.Delay).Seconds())
}
outTasks = append(outTasks, thisTask)
}
in := map[string][]map[string]interface{}{"tasks": outTasks}
out := struct {
Tasks []struct {
Id string `json:"id"`
} `json:"tasks"`
Msg string `json:"msg"`
}{}
err = w.tasks().Req("POST", &in, &out)
if err != nil {
return
}
taskIds = make([]string, 0, len(out.Tasks))
for _, task := range out.Tasks {
taskIds = append(taskIds, task.Id)
}
return
}
// TaskInfo gives info about a given task
func (w *Worker) TaskInfo(taskId string) (task TaskInfo, err error) {
out := TaskInfo{}
err = w.tasks(taskId).Req("GET", nil, &out)
return out, err
}
func (w *Worker) TaskLog(taskId string) (log []byte, err error) {
response, err := w.tasks(taskId, "log").Request("GET", nil)
if err != nil {
return
}
log, err = ioutil.ReadAll(response.Body)
return
}
// TaskCancel cancels a Task
func (w *Worker) TaskCancel(taskId string) (err error) {
_, err = w.tasks(taskId, "cancel").Request("POST", nil)
return err
}
// TaskProgress sets a Task's Progress
func (w *Worker) TaskProgress(taskId string, progress int, msg string) (err error) {
payload := map[string]interface{}{
"msg": msg,
"percent": progress,
}
err = w.tasks(taskId, "progress").Req("POST", payload, nil)
return
}
// TaskQueueWebhook queues a Task from a Webhook
func (w *Worker) TaskQueueWebhook() (err error) { return }
// ScheduleList lists Scheduled Tasks
func (w *Worker) ScheduleList() (schedules []ScheduleInfo, err error) {
out := map[string][]ScheduleInfo{}
err = w.schedules().Req("GET", nil, &out)
if err != nil {
return
}
return out["schedules"], nil
}
// Schedule a Task
func (w *Worker) Schedule(schedules ...Schedule) (scheduleIds []string, err error) {
outSchedules := make([]map[string]interface{}, 0, len(schedules))
for _, schedule := range schedules {
sm := map[string]interface{}{
"code_name": schedule.CodeName,
"name": schedule.Name,
"payload": schedule.Payload,
"label": schedule.Label,
"cluster": schedule.Cluster,
}
if schedule.Delay != nil {
sm["delay"] = (*schedule.Delay).Seconds()
}
if schedule.EndAt != nil {
sm["end_at"] = *schedule.EndAt
}
if schedule.MaxConcurrency != nil {
sm["max_concurrency"] = *schedule.MaxConcurrency
}
if schedule.Priority != nil {
sm["priority"] = *schedule.Priority
}
if schedule.RunEvery != nil {
sm["run_every"] = *schedule.RunEvery
}
if schedule.RunTimes != nil {
sm["run_times"] = *schedule.RunTimes
}
if schedule.StartAt != nil {
sm["start_at"] = *schedule.StartAt
}
outSchedules = append(outSchedules, sm)
}
in := map[string][]map[string]interface{}{"schedules": outSchedules}
out := struct {
Schedules []struct {
Id string `json:"id"`
} `json:"schedules"`
Msg string `json:"msg"`
}{}
err = w.schedules().Req("POST", &in, &out)
if err != nil {
return
}
scheduleIds = make([]string, 0, len(out.Schedules))
for _, schedule := range out.Schedules {
scheduleIds = append(scheduleIds, schedule.Id)
}
return
}
// ScheduleInfo gets info about a scheduled task
func (w *Worker) ScheduleInfo(scheduleId string) (info ScheduleInfo, err error) {
info = ScheduleInfo{}
err = w.schedules(scheduleId).Req("GET", nil, &info)
return info, err
}
// ScheduleCancel cancels a scheduled task
func (w *Worker) ScheduleCancel(scheduleId string) (err error) {
_, err = w.schedules(scheduleId, "cancel").Request("POST", nil)
return
}
// TODO we should probably support other crypto functions at some point so that people have a choice.
// - expects a PEM-encoded PKIX RSA public key (i.e. "-----BEGIN PUBLIC KEY-----",
// parsed via x509.ParsePKIXPublicKey)
// - returns a base64 ciphertext with an RSA-encrypted AES-128 session key stored in the first
// modulus-length bytes of the given RSA key (i.e. 2048-bit key = first 256 bytes), followed by
// the AES-GCM cipher with the 16-byte auth tag appended and a new, random
// 12-byte nonce in the final 12 bytes.
// - must have RSA key >= 1024
// - end format w/ RSA of 2048 for display purposes, all base64 encoded:
// [ 256 byte RSA encrypted AES key | len(payload) AES-GCM cipher | 16 bytes AES tag | 12 bytes AES nonce ]
// - each task will be encrypted with a different AES session key
//
// EncryptPayloads will return a copy of the input tasks with the Payload field modified
// to be encrypted as described above. Upon any error, the tasks returned will be nil.
func EncryptPayloads(publicKey []byte, in ...Task) ([]Task, error) {
rsablock, _ := pem.Decode(publicKey)
rsaKey, err := x509.ParsePKIXPublicKey(rsablock.Bytes)
if err != nil {
return nil, err
}
rsaPublicKey := rsaKey.(*rsa.PublicKey)
tasks := make([]Task, len(in))
copy(tasks, in)
for i := range tasks {
// get a random aes-128 session key to encrypt
aesKey := make([]byte, 128/8)
if _, err := rand.Read(aesKey); err != nil {
return nil, err
}
// have to use sha1 b/c ruby openssl picks it for OAEP: https://www.openssl.org/docs/manmaster/crypto/RSA_public_encrypt.html
aesKeyCipher, err := rsa.EncryptOAEP(sha1.New(), rand.Reader, rsaPublicKey, aesKey, nil)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(aesKey)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
pbytes := []byte(tasks[i].Payload)
// The IV needs to be unique, but not secure. last 12 bytes are IV.
ciphertext := make([]byte, len(pbytes)+gcm.Overhead()+gcm.NonceSize())
nonce := ciphertext[len(ciphertext)-gcm.NonceSize():]
if _, err := rand.Read(nonce); err != nil {
return nil, err
}
// tag is appended to cipher as last 16 bytes. https://golang.org/src/crypto/cipher/gcm.go?s=2318:2357#L145
gcm.Seal(ciphertext[:0], nonce, pbytes, nil)
// base64 the whole thing
tasks[i].Payload = base64.StdEncoding.EncodeToString(append(aesKeyCipher, ciphertext...))
}
return tasks, nil
}
type Cluster struct {
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Memory int64 `json:"memory,omitempty"`
DiskSpace int64 `json:"disk_space,omitempty"`
CpuShare *int32 `json:"cpu_share,omitempty"`
}
func (w *Worker) ClusterCreate(c Cluster) (Cluster, error) {
var out struct {
C Cluster `json:"cluster"`
}
err := w.clusters().Req("POST", c, &out)
return out.C, err
}
func (w *Worker) ClusterDelete(id string) error {
return w.clusters(id).Req("DELETE", nil, nil)
}
func (w *Worker) ClusterToken(id string) (string, error) {
var out struct {
Token string `json:"token"`
}
err := w.clusters(id, "credentials").Req("GET", nil, &out)
return out.Token, err
}


@@ -1,101 +0,0 @@
package worker
import (
"encoding/json"
"flag"
"io"
"io/ioutil"
"os"
)
var (
TaskDir string
envFlag string
payloadFlag string
TaskId string
configFlag string
)
// call this to parse flags before using the other methods.
func ParseFlags() {
flag.StringVar(&TaskDir, "d", "", "task dir")
flag.StringVar(&envFlag, "e", "", "environment type")
flag.StringVar(&payloadFlag, "payload", "", "payload file")
flag.StringVar(&TaskId, "id", "", "task id")
flag.StringVar(&configFlag, "config", "", "config file")
flag.Parse()
if os.Getenv("TASK_ID") != "" {
TaskId = os.Getenv("TASK_ID")
}
if os.Getenv("TASK_DIR") != "" {
TaskDir = os.Getenv("TASK_DIR")
}
if os.Getenv("PAYLOAD_FILE") != "" {
payloadFlag = os.Getenv("PAYLOAD_FILE")
}
if os.Getenv("CONFIG_FILE") != "" {
configFlag = os.Getenv("CONFIG_FILE")
}
}
func PayloadReader() (io.ReadCloser, error) {
return os.Open(payloadFlag)
}
func PayloadFromJSON(v interface{}) error {
reader, err := PayloadReader()
if err != nil {
return err
}
defer reader.Close()
return json.NewDecoder(reader).Decode(v)
}
func PayloadAsString() (string, error) {
reader, err := PayloadReader()
if err != nil {
return "", err
}
defer reader.Close()
b, err := ioutil.ReadAll(reader)
if err != nil {
return "", err
}
return string(b), nil
}
func ConfigReader() (io.ReadCloser, error) {
return os.Open(configFlag)
}
func ConfigFromJSON(v interface{}) error {
reader, err := ConfigReader()
if err != nil {
return err
}
defer reader.Close()
return json.NewDecoder(reader).Decode(v)
}
func ConfigAsString() (string, error) {
reader, err := ConfigReader()
if err != nil {
return "", err
}
defer reader.Close()
b, err := ioutil.ReadAll(reader)
if err != nil {
return "", err
}
return string(b), nil
}
func IronTaskId() string {
return TaskId
}
func IronTaskDir() string {
return TaskDir
}


@@ -1,105 +0,0 @@
// IronWorker (elastic computing) client library
package worker
import (
"time"
"github.com/iron-io/iron_go3/api"
"github.com/iron-io/iron_go3/config"
)
type Worker struct {
Settings config.Settings
}
func New() *Worker {
return &Worker{Settings: config.Config("iron_worker")}
}
func (w *Worker) codes(s ...string) *api.URL { return api.Action(w.Settings, "codes", s...) }
func (w *Worker) tasks(s ...string) *api.URL { return api.Action(w.Settings, "tasks", s...) }
func (w *Worker) schedules(s ...string) *api.URL { return api.Action(w.Settings, "schedules", s...) }
func (w *Worker) clusters(s ...string) *api.URL { return api.RootAction(w.Settings, "clusters", s...) }
// exponential sleep between retries, replace this with your own preferred strategy
func sleepBetweenRetries(previousDuration time.Duration) time.Duration {
if previousDuration >= 60*time.Second {
return previousDuration
}
return previousDuration + previousDuration
}
var GoCodeRunner = []byte(`#!/bin/sh
root() {
while [ $# -gt 0 ]; do
if [ "$1" = "-d" ]; then
printf "%s\n" "$2"
break
fi
done
}
cd "$(root "$@")"
chmod +x worker
./worker "$@"
`)
// WaitForTask returns a channel that will receive the completed task and is closed afterwards.
// If an error occurred during the wait, the channel will be closed.
func (w *Worker) WaitForTask(taskId string) chan TaskInfo {
out := make(chan TaskInfo)
go func() {
defer close(out)
retryDelay := 100 * time.Millisecond
for {
info, err := w.TaskInfo(taskId)
if err != nil {
return
}
if info.Status == "queued" || info.Status == "running" {
time.Sleep(retryDelay)
retryDelay = sleepBetweenRetries(retryDelay)
} else {
out <- info
return
}
}
}()
return out
}
func (w *Worker) WaitForTaskLog(taskId string) chan []byte {
out := make(chan []byte)
go func() {
defer close(out)
retryDelay := 100 * time.Millisecond
for {
log, err := w.TaskLog(taskId)
if err != nil {
e, ok := err.(api.HTTPResponseError)
if ok && e.StatusCode() == 404 {
time.Sleep(retryDelay)
retryDelay = sleepBetweenRetries(retryDelay)
continue
}
return
}
out <- log
return
}
}()
return out
}
func clamp(value, min, max int) int {
if value < min {
return min
} else if value > max {
return max
}
return value
}


@@ -1,151 +0,0 @@
package worker
import (
"io/ioutil"
"os"
"testing"
"time"
// "github.com/iron-io/iron_go/worker"
. "github.com/jeffh/go.bdd"
)
func TestEverything(*testing.T) {
defer PrintSpecReport()
Describe("iron.io worker", func() {
w := New()
It("Prepares the specs by deleting all existing code packages", func() {
codes, err := w.CodePackageList(0, 100)
Expect(err, ToBeNil)
for _, code := range codes {
err = w.CodePackageDelete(code.Id)
Expect(err, ToBeNil)
}
codes, err = w.CodePackageList(0, 100)
Expect(err, ToBeNil)
Expect(len(codes), ToEqual, 0)
})
It("Creates a code package", func() {
tempDir, err := ioutil.TempDir("", "iron-worker")
Expect(err, ToBeNil)
defer os.RemoveAll(tempDir)
fd, err := os.Create(tempDir + "/main.go")
Expect(err, ToBeNil)
n, err := fd.WriteString(`package main; func main(){ println("Hello world!") }`)
Expect(err, ToBeNil)
Expect(n, ToEqual, 52)
Expect(fd.Close(), ToBeNil)
pkg, err := NewGoCodePackage("GoFun", fd.Name())
Expect(err, ToBeNil)
id, err := w.CodePackageUpload(pkg)
Expect(err, ToBeNil)
info, err := w.CodePackageInfo(id)
Expect(err, ToBeNil)
Expect(info.Id, ToEqual, id)
Expect(info.Name, ToEqual, "GoFun")
Expect(info.Rev, ToEqual, 1)
})
It("Queues a Task", func() {
ids, err := w.TaskQueue(Task{CodeName: "GoFun"})
Expect(err, ToBeNil)
id := ids[0]
info, err := w.TaskInfo(id)
Expect(err, ToBeNil)
Expect(info.CodeName, ToEqual, "GoFun")
select {
case info = <-w.WaitForTask(id):
Expect(info.Status, ToEqual, "complete")
case <-time.After(5 * time.Second):
panic("info timed out")
}
log, err := w.TaskLog(id)
Expect(err, ToBeNil)
Expect(log, ToDeepEqual, []byte("Hello world!\n"))
})
It("Cancels a task", func() {
delay := 10 * time.Second
ids, err := w.TaskQueue(Task{CodeName: "GoFun", Delay: &delay})
Expect(err, ToBeNil)
id := ids[0]
err = w.TaskCancel(id)
Expect(err, ToBeNil)
info, err := w.TaskInfo(id)
Expect(info.Status, ToEqual, "cancelled")
})
It("Queues a lot of tasks and lists them", func() {
delay := 100 * time.Second
ids, err := w.TaskQueue(Task{CodeName: "GoFun", Delay: &delay})
Expect(err, ToBeNil)
firstId := ids[0]
time.Sleep(1 * time.Second)
ids, err = w.TaskQueue(Task{CodeName: "GoFun", Delay: &delay})
Expect(err, ToBeNil)
secondId := ids[0]
tasks, err := w.TaskList()
Expect(err, ToBeNil)
Expect(tasks[0].CreatedAt.After(tasks[1].CreatedAt), ToEqual, true)
Expect(tasks[0].Id, ToEqual, secondId)
Expect(tasks[1].Id, ToEqual, firstId)
})
It("Schedules a Task ", func() {
delay := 10 * time.Second
ids, err := w.Schedule(Schedule{
Name: "ScheduledGoFun",
CodeName: "GoFun",
Payload: "foobar",
Delay: &delay,
})
Expect(err, ToBeNil)
id := ids[0]
info, err := w.ScheduleInfo(id)
Expect(err, ToBeNil)
Expect(info.CodeName, ToEqual, "GoFun")
Expect(info.Status, ToEqual, "scheduled")
})
It("Cancels a scheduled task", func() {
delay := 10 * time.Second
ids, err := w.Schedule(Schedule{
Name: "ScheduledGoFun",
CodeName: "GoFun",
Payload: "foobar",
Delay: &delay,
})
Expect(err, ToBeNil)
id := ids[0]
err = w.ScheduleCancel(id)
Expect(err, ToBeNil)
info, err := w.ScheduleInfo(id)
Expect(err, ToBeNil)
Expect(info.CodeName, ToEqual, "GoFun")
Expect(info.Status, ToEqual, "cancelled")
})
})
}

vendor/github.com/mattes/migrate/.gitignore generated vendored Normal file

@@ -0,0 +1,6 @@
.DS_Store
cli/build
cli/cli
cli/migrate
.coverage
.godoc.pid

vendor/github.com/mattes/migrate/.travis.yml generated vendored Normal file

@@ -0,0 +1,62 @@
language: go
sudo: required
go:
- 1.7
- 1.8
- 1.9
env:
- MIGRATE_TEST_CONTAINER_BOOT_DELAY=10
# TODO: https://docs.docker.com/engine/installation/linux/ubuntu/
# pre-provision with travis docker setup and pin down docker version in install step
services:
- docker
install:
- make deps
- (cd $GOPATH/src/github.com/docker/docker && git fetch --all --tags --prune && git checkout v17.05.0-ce)
- sudo apt-get update && sudo apt-get install docker-ce=17.05.0*
- go get github.com/mattn/goveralls
script:
- make test
after_success:
- goveralls -service=travis-ci -coverprofile .coverage/combined.txt
- make list-external-deps > dependency_tree.txt && cat dependency_tree.txt
before_deploy:
- make build-cli
- gem install --no-ri --no-rdoc fpm
- fpm -s dir -t deb -n migrate -v "$(git describe --tags 2>/dev/null | cut -c 2-)" --license MIT -m matthias.kadenbach@gmail.com --url https://github.com/mattes/migrate --description='Database migrations' -a amd64 -p migrate.$(git describe --tags 2>/dev/null | cut -c 2-).deb --deb-no-default-config-files -f -C cli/build migrate.linux-amd64=/usr/bin/migrate
deploy:
- provider: releases
api_key:
secure: EFow50BI448HVb/uQ1Kk2Kq0xzmwIYq3V67YyymXIuqSCodvXEsMiBPUoLrxEknpPEIc67LEQTNdfHBgvyHk6oRINWAfie+7pr5tKrpOTF9ghyxoN1PlO8WKQCqwCvGMBCnc5ur5rvzp0bqfpV2rs5q9/nngy3kBuEvs12V7iho=
skip_cleanup: true
on:
go: 1.8
repo: mattes/migrate
tags: true
file:
- cli/build/migrate.linux-amd64.tar.gz
- cli/build/migrate.darwin-amd64.tar.gz
- cli/build/migrate.windows-amd64.exe.tar.gz
- cli/build/sha256sum.txt
- dependency_tree.txt
- provider: packagecloud
repository: migrate
username: mattes
token:
secure: RiHJ/+J9DvXUah/APYdWySWZ5uOOISYJ0wS7xddc7/BNStRVjzFzvJ9zmb67RkyZZrvGuVjPiL4T8mtDyCJCj47RmU/56wPdEHbar/FjsiUCgwvR19RlulkgbV4okBCePbwzMw6HNHRp14TzfQCPtnN4kef0lOI4gZJkImN7rtQ=
dist: ubuntu/xenial
package_glob: '*.deb'
skip_cleanup: true
on:
go: 1.8
repo: mattes/migrate
tags: true

vendor/github.com/mattes/migrate/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,22 @@
# Development, Testing and Contributing
1. Make sure you have a running Docker daemon
(Install for [MacOS](https://docs.docker.com/docker-for-mac/))
2. Fork this repo and `git clone` somewhere to `$GOPATH/src/github.com/%you%/migrate`
3. `make rewrite-import-paths` to update imports to your local fork
4. Confirm tests are working: `make test-short`
5. Write awesome code ...
6. `make test` to run all tests against all database versions
7. `make restore-import-paths` to restore import paths
8. Push code and open Pull Request
Some more helpful commands:
* You can specify which database/ source tests to run:
`make test-short SOURCE='file go-bindata' DATABASE='postgres cassandra'`
* After `make test`, run `make html-coverage` which opens a shiny test coverage overview.
* Missing imports? `make deps`
* `make build-cli` builds the CLI in directory `cli/build/`.
* `make list-external-deps` lists all external dependencies for each package
* `make docs && make open-docs` opens godoc in your browser, `make kill-docs` kills the godoc server.
Repeatedly call `make docs` to refresh the server.

vendor/github.com/mattes/migrate/FAQ.md generated vendored Normal file

@@ -0,0 +1,67 @@
# FAQ
#### How is the code base structured?
```
/ package migrate (the heart of everything)
/cli the CLI wrapper
/database database driver and sub directories have the actual driver implementations
/source source driver and sub directories have the actual driver implementations
```
#### Why is there no `source/driver.go:Last()`?
It's not needed. And unless the source has a "native" way to read a directory in reversed order,
it might be expensive to do a full directory scan in order to get the last element.
#### What is a NilMigration? NilVersion?
NilMigration defines a migration without a body. NilVersion is defined as const -1.
#### What is the difference between uint(version) and int(targetVersion)?
version refers to an existing migration version coming from a source and therefore can never be negative.
targetVersion can either be a version OR represent a NilVersion, which equals -1.
#### What's the difference between Next/Previous and Up/Down?
```
1_first_migration.up.extension next -> 2_second_migration.up.extension ...
1_first_migration.down.extension <- previous 2_second_migration.down.extension ...
```
#### Why two separate files (up and down) for a migration?
It makes all of our lives easier. No new markup/syntax to learn for users
and existing database utility tools continue to work as expected.
#### How many migrations can migrate handle?
Whatever the maximum positive signed integer value is for your platform.
For 32bit it would be 2,147,483,647 migrations. Migrate only keeps references to
the currently run and pre-fetched migrations in memory. Please note that some
source drivers need to build a full "directory" tree first, which increases
memory consumption.
#### Are the table tests in migrate_test.go bloated?
Yes and no. There are duplicate test cases for sure but they don't hurt here. In fact
the tests are very visual now and might help new users understand expected behaviors quickly.
Migrate from version x to y and y is the last migration? Just check out the test for
that particular case and know what's going on instantly.
#### What is Docker being used for?
Only for testing. See [testing/docker.go](testing/docker.go)
#### Why not just use docker-compose?
It doesn't give us enough runtime control for testing. We want to be able to bring up containers fast
and whenever we want, not just once at the beginning of all tests.
#### Can I maintain my driver in my own repository?
Yes, technically that's possible. We want to encourage you to contribute your driver to this repository though.
The driver's functionality is dictated by migrate's interfaces. That means there should really
just be one driver for a database/ source. We want to prevent a future where several drivers doing the exact same thing,
just implemented a bit differently, co-exist somewhere on Github. If users have to do research first to find the
"best" available driver for a database in order to get started, we would have failed as an open source community.
#### Can I mix multiple sources during a batch of migrations?
No.
#### What does "dirty" database mean?
Before a migration runs, each database sets a dirty flag. Execution stops if a migration fails and the dirty state persists,
which prevents attempts to run more migrations on top of a failed migration. You need to manually fix the error
and then "force" the expected version.

vendor/github.com/mattes/migrate/LICENSE generated vendored Normal file

@@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2016 Matthias Kadenbach
https://github.com/mattes/migrate
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

vendor/github.com/mattes/migrate/MIGRATIONS.md generated vendored Normal file

@@ -0,0 +1,81 @@
# Migrations
## Migration Filename Format
A single logical migration is represented as two separate migration files, one
to migrate "up" to the specified version from the previous version, and a second
to migrate back "down" to the previous version. These migrations can be provided
by any one of the supported [migration sources](./README.md#migration-sources).
The ordering and direction of the migration files is determined by the filenames
used for them. `migrate` expects the filenames of migrations to have the format:
```
{version}_{title}.up.{extension}
{version}_{title}.down.{extension}
```
The `title` of each migration is unused, and is only for readability. Similarly,
the `extension` of the migration files is not checked by the library, and should
be an appropriate format for the database in use (`.sql` for SQL variants, for
instance).
Versions of migrations may be represented as any 64 bit unsigned integer.
All migrations are applied upward in order of increasing version number, and
downward by decreasing version number.
Common versioning schemes include incrementing integers:
```
1_initialize_schema.down.sql
1_initialize_schema.up.sql
2_add_table.down.sql
2_add_table.up.sql
...
```
Or timestamps at an appropriate resolution:
```
1500360784_initialize_schema.down.sql
1500360784_initialize_schema.up.sql
1500445949_add_table.down.sql
1500445949_add_table.up.sql
...
```
But any scheme resulting in distinct, incrementing integers as versions is valid.
It is suggested that the version number of corresponding `up` and `down` migration
files be equivalent for clarity, but they are allowed to differ so long as the
relative ordering of the migrations is preserved.
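As an illustration (hypothetical table name; plain SQL assumed, as the `.sql` extension suggests), the contents of such a pair might be:

```sql
-- 2_add_table.up.sql
CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT);

-- 2_add_table.down.sql
DROP TABLE users;
```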
The migration files are permitted to be empty, so in the event that a migration
is a no-op or is irreversible, it is recommended to still include both migration
files, either leaving them empty or adding a comment as appropriate.
## Migration Content Format
The format of the migration files themselves varies between database systems.
Different databases have different semantics around schema changes and when and
how they are allowed to occur (for instance, if schema changes can occur within
a transaction).
As such, the `migrate` library has little to no checking around the format of
migration sources. The migration files are generally processed directly by the
drivers as raw operations.
## Reversibility of Migrations
Best practice for writing schema migrations is that all migrations should be
reversible. It should, in theory, be possible to run migrations down and back up
through any and all versions, with the state being fully cleaned and recreated
by doing so.
By adhering to this recommended practice, development and deployment of new code
is cleaner and easier (cleaning database state for a new feature should be as
easy as migrating down to a prior version, and back up to the latest).
As opposed to some other migration libraries, `migrate` represents up and down
migrations as separate files. This prevents any non-standard file syntax from
being introduced which may result in unintended behavior or errors, depending
on what database is processing the file.
While it is technically possible for an up or down migration to exist on its own
without an equivalently versioned counterpart, it is strongly recommended to
always include a down migration which cleans up the state of the corresponding
up migration.
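The naming convention above can be sketched in Go. This is a standalone illustration (the helper name and regular expression are hypothetical), not the library's actual parser:

```go
// Minimal sketch of the {version}_{title}.{up|down}.{extension}
// naming convention described above. Hypothetical helper, not part
// of the migrate library.
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

var migrationName = regexp.MustCompile(`^([0-9]+)_(.+)\.(up|down)\.(.+)$`)

// parseMigration splits a migration filename into its components.
// ok is false when the filename does not match the convention.
func parseMigration(name string) (version uint64, title, direction, ext string, ok bool) {
	m := migrationName.FindStringSubmatch(name)
	if m == nil {
		return 0, "", "", "", false
	}
	v, err := strconv.ParseUint(m[1], 10, 64)
	if err != nil {
		return 0, "", "", "", false
	}
	return v, m[2], m[3], m[4], true
}

func main() {
	v, title, dir, ext, ok := parseMigration("2_add_table.up.sql")
	fmt.Println(v, title, dir, ext, ok) // 2 add_table up sql true
}
```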

vendor/github.com/mattes/migrate/Makefile generated vendored Normal file

@@ -0,0 +1,123 @@
SOURCE ?= file go-bindata github aws-s3 google-cloud-storage
DATABASE ?= postgres mysql redshift cassandra sqlite3 spanner cockroachdb clickhouse
VERSION ?= $(shell git describe --tags 2>/dev/null | cut -c 2-)
TEST_FLAGS ?=
REPO_OWNER ?= $(shell cd .. && basename "$$(pwd)")

build-cli: clean
	-mkdir ./cli/build
	cd ./cli && CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -o build/migrate.linux-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
	cd ./cli && CGO_ENABLED=1 GOOS=darwin GOARCH=amd64 go build -a -o build/migrate.darwin-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
	cd ./cli && CGO_ENABLED=1 GOOS=windows GOARCH=amd64 go build -a -o build/migrate.windows-amd64.exe -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
	cd ./cli/build && find . -name 'migrate*' | xargs -I{} tar czf {}.tar.gz {}
	cd ./cli/build && shasum -a 256 * > sha256sum.txt
	cat ./cli/build/sha256sum.txt

clean:
	-rm -r ./cli/build

test-short:
	make test-with-flags --ignore-errors TEST_FLAGS='-short'

test:
	@-rm -r .coverage
	@mkdir .coverage
	make test-with-flags TEST_FLAGS='-v -race -covermode atomic -coverprofile .coverage/_$$(RAND).txt -bench=. -benchmem'
	@echo 'mode: atomic' > .coverage/combined.txt
	@cat .coverage/*.txt | grep -v 'mode: atomic' >> .coverage/combined.txt

test-with-flags:
	@echo SOURCE: $(SOURCE)
	@echo DATABASE: $(DATABASE)
	@go test $(TEST_FLAGS) .
	@go test $(TEST_FLAGS) ./cli/...
	@go test $(TEST_FLAGS) ./testing/...
	@echo -n '$(SOURCE)' | tr -s ' ' '\n' | xargs -I{} go test $(TEST_FLAGS) ./source/{}
	@go test $(TEST_FLAGS) ./source/testing/...
	@go test $(TEST_FLAGS) ./source/stub/...
	@echo -n '$(DATABASE)' | tr -s ' ' '\n' | xargs -I{} go test $(TEST_FLAGS) ./database/{}
	@go test $(TEST_FLAGS) ./database/testing/...
	@go test $(TEST_FLAGS) ./database/stub/...

kill-orphaned-docker-containers:
	docker rm -f $(shell docker ps -aq --filter label=migrate_test)

html-coverage:
	go tool cover -html=.coverage/combined.txt

deps:
	-go get -v -u ./...
	-go test -v -i ./...
	# TODO: why is this not being fetched with the command above?
	-go get -u github.com/fsouza/fake-gcs-server/fakestorage

list-external-deps:
	$(call external_deps,'.')
	$(call external_deps,'./cli/...')
	$(call external_deps,'./testing/...')
	$(foreach v, $(SOURCE), $(call external_deps,'./source/$(v)/...'))
	$(call external_deps,'./source/testing/...')
	$(call external_deps,'./source/stub/...')
	$(foreach v, $(DATABASE), $(call external_deps,'./database/$(v)/...'))
	$(call external_deps,'./database/testing/...')
	$(call external_deps,'./database/stub/...')

restore-import-paths:
	find . -name '*.go' -type f -execdir sed -i '' s%\"github.com/$(REPO_OWNER)/migrate%\"github.com/mattes/migrate%g '{}' \;

rewrite-import-paths:
	find . -name '*.go' -type f -execdir sed -i '' s%\"github.com/mattes/migrate%\"github.com/$(REPO_OWNER)/migrate%g '{}' \;

# example: fswatch -0 --exclude .godoc.pid --event Updated . | xargs -0 -n1 -I{} make docs
docs:
	-make kill-docs
	nohup godoc -play -http=127.0.0.1:6064 </dev/null >/dev/null 2>&1 & echo $$! > .godoc.pid
	cat .godoc.pid

kill-docs:
	@cat .godoc.pid
	kill -9 $$(cat .godoc.pid)
	rm .godoc.pid

open-docs:
	open http://localhost:6064/pkg/github.com/$(REPO_OWNER)/migrate

# example: make release V=0.0.0
release:
	git tag v$(V)
	@read -p "Press enter to confirm and push to origin ..." && git push origin v$(V)

define external_deps
	@echo '-- $(1)'; go list -f '{{join .Deps "\n"}}' $(1) | grep -v github.com/$(REPO_OWNER)/migrate | xargs go list -f '{{if not .Standard}}{{.ImportPath}}{{end}}'
endef

.PHONY: build-cli clean test-short test test-with-flags deps html-coverage \
	restore-import-paths rewrite-import-paths list-external-deps release \
	docs kill-docs open-docs kill-orphaned-docker-containers

SHELL = /bin/bash
RAND = $(shell echo $$RANDOM)

vendor/github.com/mattes/migrate/README.md generated vendored Normal file

@@ -0,0 +1,140 @@
[![Build Status](https://travis-ci.org/mattes/migrate.svg?branch=master)](https://travis-ci.org/mattes/migrate)
[![GoDoc](https://godoc.org/github.com/mattes/migrate?status.svg)](https://godoc.org/github.com/mattes/migrate)
[![Coverage Status](https://coveralls.io/repos/github/mattes/migrate/badge.svg?branch=v3.0-prev)](https://coveralls.io/github/mattes/migrate?branch=v3.0-prev)
[![packagecloud.io](https://img.shields.io/badge/deb-packagecloud.io-844fec.svg)](https://packagecloud.io/mattes/migrate?filter=debs)
# migrate
__Database migrations written in Go. Use as [CLI](#cli-usage) or import as [library](#use-in-your-go-project).__
* Migrate reads migrations from [sources](#migration-sources)
and applies them in correct order to a [database](#databases).
* Drivers are "dumb", migrate glues everything together and makes sure the logic is bulletproof.
(Keeps the drivers lightweight, too.)
* Database drivers don't assume things or try to correct user input. When in doubt, fail.
Looking for [v1](https://github.com/mattes/migrate/tree/v1)?
## Databases
Database drivers run migrations. [Add a new database?](database/driver.go)
* [PostgreSQL](database/postgres)
* [Redshift](database/redshift)
* [Ql](database/ql)
* [Cassandra](database/cassandra)
* [SQLite](database/sqlite3)
* [MySQL/ MariaDB](database/mysql)
* [Neo4j](database/neo4j) ([todo #167](https://github.com/mattes/migrate/issues/167))
* [MongoDB](database/mongodb) ([todo #169](https://github.com/mattes/migrate/issues/169))
* [CrateDB](database/crate) ([todo #170](https://github.com/mattes/migrate/issues/170))
* [Shell](database/shell) ([todo #171](https://github.com/mattes/migrate/issues/171))
* [Google Cloud Spanner](database/spanner)
* [CockroachDB](database/cockroachdb)
* [ClickHouse](database/clickhouse)
## Migration Sources
Source drivers read migrations from local or remote sources. [Add a new source?](source/driver.go)
* [Filesystem](source/file) - read from filesystem (always included)
* [Go-Bindata](source/go-bindata) - read from embedded binary data ([jteeuwen/go-bindata](https://github.com/jteeuwen/go-bindata))
* [Github](source/github) - read from remote GitHub repositories
* [AWS S3](source/aws-s3) - read from Amazon Web Services S3
* [Google Cloud Storage](source/google-cloud-storage) - read from Google Cloud Platform Storage
## CLI usage
* Simple wrapper around this library.
* Handles ctrl+c (SIGINT) gracefully.
* No config search paths, no config files, no magic ENV var injections.
__[CLI Documentation](cli)__
([brew todo #156](https://github.com/mattes/migrate/issues/156))
```
$ brew install migrate --with-postgres
$ migrate -database postgres://localhost:5432/database up 2
```
## Use in your Go project
* API is stable and frozen for this release (v3.x).
* Package migrate has no external dependencies.
* Only import the drivers you need.
(check [dependency_tree.txt](https://github.com/mattes/migrate/releases) for each driver)
* To help prevent database corruption, it supports graceful stops via `GracefulStop chan bool`.
* Bring your own logger.
* Uses `io.Reader` streams internally for low memory overhead.
* Thread-safe and no goroutine leaks.
__[Go Documentation](https://godoc.org/github.com/mattes/migrate)__
```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/github"
)

func main() {
	m, err := migrate.New(
		"github://mattes:personal-access-token@mattes/migrate_test",
		"postgres://localhost:5432/database?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	m.Steps(2)
}
```
Want to use an existing database client?
```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost:5432/database?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		log.Fatal(err)
	}
	m, err := migrate.NewWithDatabaseInstance(
		"file:///migrations",
		"postgres", driver)
	if err != nil {
		log.Fatal(err)
	}
	m.Steps(2)
}
```
## Migration files
Each migration has an up and down migration. [Why?](FAQ.md#why-two-separate-files-up-and-down-for-a-migration)
```
1481574547_create_users_table.up.sql
1481574547_create_users_table.down.sql
```
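Such a pair can be generated with the CLI's `create` command (directory and extension shown with hypothetical values):
```
$ migrate create -ext sql -dir migrations create_users_table
```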
[Best practices: How to write migrations.](MIGRATIONS.md)
## Development and Contributing
Yes, please! [`Makefile`](Makefile) is your friend,
read the [development guide](CONTRIBUTING.md).
Also have a look at the [FAQ](FAQ.md).
---
Looking for alternatives? [https://awesome-go.com/#database](https://awesome-go.com/#database).

vendor/github.com/mattes/migrate/cli/README.md generated vendored Normal file

@@ -0,0 +1,113 @@
# migrate CLI
## Installation
#### With Go toolchain
```
$ go get -u -d github.com/mattes/migrate/cli github.com/lib/pq
$ go build -tags 'postgres' -o /usr/local/bin/migrate github.com/mattes/migrate/cli
```
Note: This example builds the cli which will only work with postgres. In order
to build the cli for use with other databases, replace the `postgres` build tag
with the appropriate database tag(s) for the databases desired. The tags
correspond to the names of the sub-packages underneath the
[`database`](../database) package.
#### MacOS
([todo #156](https://github.com/mattes/migrate/issues/156))
```
$ brew install migrate --with-postgres
```
#### Linux (*.deb package)
```
$ curl -L https://packagecloud.io/mattes/migrate/gpgkey | apt-key add -
$ echo "deb https://packagecloud.io/mattes/migrate/ubuntu/ xenial main" > /etc/apt/sources.list.d/migrate.list
$ apt-get update
$ apt-get install -y migrate
```
#### Download pre-built binary (Windows, MacOS, or Linux)
[Release Downloads](https://github.com/mattes/migrate/releases)
```
$ curl -L https://github.com/mattes/migrate/releases/download/$version/migrate.$platform-amd64.tar.gz | tar xvz
```
## Usage
```
$ migrate -help
Usage: migrate OPTIONS COMMAND [arg...]
       migrate [ -version | -help ]

Options:
  -source          Location of the migrations (driver://url)
  -path            Shorthand for -source=file://path
  -database        Run migrations against this database (driver://url)
  -prefetch N      Number of migrations to load in advance before executing (default 10)
  -lock-timeout N  Allow N seconds to acquire database lock (default 15)
  -verbose         Print verbose logging
  -version         Print version
  -help            Print usage

Commands:
  create [-ext E] [-dir D] NAME
           Create a set of timestamped up/down migrations titled NAME, in directory D with extension E
  goto V   Migrate to version V
  up [N]   Apply all or N up migrations
  down [N] Apply all or N down migrations
  drop     Drop everything inside database
  force V  Set version V but don't run migration (ignores dirty state)
  version  Print current migration version
```
So let's say you want to run the first two migrations
```
$ migrate -database postgres://localhost:5432/database up 2
```
If your migrations are hosted on GitHub
```
$ migrate -source github://mattes:personal-access-token@mattes/migrate_test \
-database postgres://localhost:5432/database down 2
```
The CLI will gracefully stop at a safe point when SIGINT (ctrl+c) is received.
Send SIGKILL for immediate halt.
## Reading CLI arguments from somewhere else
##### ENV variables
```
$ migrate -database "$MY_MIGRATE_DATABASE"
```
##### JSON files
Check out https://stedolan.github.io/jq/
```
$ migrate -database "$(cat config.json | jq '.database')"
```
##### YAML files
```
$ migrate -database "$(cat config/database.yml | ruby -ryaml -e "print YAML.load(STDIN.read)['database']")"
$ migrate -database "$(cat config/database.yml | python -c 'import yaml,sys;print yaml.safe_load(sys.stdin)["database"]')"
```

vendor/github.com/mattes/migrate/cli/build_aws-s3.go generated vendored Normal file

@@ -0,0 +1,7 @@
// +build aws-s3
package main
import (
_ "github.com/mattes/migrate/source/aws-s3"
)


@@ -0,0 +1,7 @@
// +build cassandra
package main
import (
_ "github.com/mattes/migrate/database/cassandra"
)


@@ -0,0 +1,8 @@
// +build clickhouse
package main
import (
_ "github.com/kshvakov/clickhouse"
_ "github.com/mattes/migrate/database/clickhouse"
)


@@ -0,0 +1,7 @@
// +build cockroachdb
package main
import (
_ "github.com/mattes/migrate/database/cockroachdb"
)

vendor/github.com/mattes/migrate/cli/build_github.go generated vendored Normal file

@@ -0,0 +1,7 @@
// +build github
package main
import (
_ "github.com/mattes/migrate/source/github"
)


@@ -0,0 +1,7 @@
// +build go-bindata
package main
import (
_ "github.com/mattes/migrate/source/go-bindata"
)


@@ -0,0 +1,7 @@
// +build google-cloud-storage
package main
import (
_ "github.com/mattes/migrate/source/google-cloud-storage"
)

vendor/github.com/mattes/migrate/cli/build_mysql.go generated vendored Normal file

@@ -0,0 +1,7 @@
// +build mysql
package main
import (
_ "github.com/mattes/migrate/database/mysql"
)


@@ -0,0 +1,7 @@
// +build postgres
package main
import (
_ "github.com/mattes/migrate/database/postgres"
)

vendor/github.com/mattes/migrate/cli/build_ql.go generated vendored Normal file

@@ -0,0 +1,7 @@
// +build ql
package main
import (
_ "github.com/mattes/migrate/database/ql"
)


@@ -0,0 +1,7 @@
// +build redshift
package main
import (
_ "github.com/mattes/migrate/database/redshift"
)


@@ -0,0 +1,7 @@
// +build spanner
package main
import (
_ "github.com/mattes/migrate/database/spanner"
)


@@ -0,0 +1,7 @@
// +build sqlite3
package main
import (
_ "github.com/mattes/migrate/database/sqlite3"
)

vendor/github.com/mattes/migrate/cli/commands.go generated vendored Normal file

@@ -0,0 +1,96 @@
package main
import (
"github.com/mattes/migrate"
_ "github.com/mattes/migrate/database/stub" // TODO remove again
_ "github.com/mattes/migrate/source/file"
"os"
"fmt"
)
func createCmd(dir string, timestamp int64, name string, ext string) {
base := fmt.Sprintf("%v%v_%v.", dir, timestamp, name)
os.MkdirAll(dir, os.ModePerm)
createFile(base + "up" + ext)
createFile(base + "down" + ext)
}
func createFile(fname string) {
if _, err := os.Create(fname); err != nil {
log.fatalErr(err)
}
}
func gotoCmd(m *migrate.Migrate, v uint) {
if err := m.Migrate(v); err != nil {
if err != migrate.ErrNoChange {
log.fatalErr(err)
} else {
log.Println(err)
}
}
}
func upCmd(m *migrate.Migrate, limit int) {
if limit >= 0 {
if err := m.Steps(limit); err != nil {
if err != migrate.ErrNoChange {
log.fatalErr(err)
} else {
log.Println(err)
}
}
} else {
if err := m.Up(); err != nil {
if err != migrate.ErrNoChange {
log.fatalErr(err)
} else {
log.Println(err)
}
}
}
}
func downCmd(m *migrate.Migrate, limit int) {
if limit >= 0 {
if err := m.Steps(-limit); err != nil {
if err != migrate.ErrNoChange {
log.fatalErr(err)
} else {
log.Println(err)
}
}
} else {
if err := m.Down(); err != nil {
if err != migrate.ErrNoChange {
log.fatalErr(err)
} else {
log.Println(err)
}
}
}
}
func dropCmd(m *migrate.Migrate) {
if err := m.Drop(); err != nil {
log.fatalErr(err)
}
}
func forceCmd(m *migrate.Migrate, v int) {
if err := m.Force(v); err != nil {
log.fatalErr(err)
}
}
func versionCmd(m *migrate.Migrate) {
v, dirty, err := m.Version()
if err != nil {
log.fatalErr(err)
}
if dirty {
log.Printf("%v (dirty)\n", v)
} else {
log.Println(v)
}
}


@@ -0,0 +1,12 @@
FROM ubuntu:xenial
RUN apt-get update && \
apt-get install -y curl apt-transport-https
RUN curl -L https://packagecloud.io/mattes/migrate/gpgkey | apt-key add - && \
echo "deb https://packagecloud.io/mattes/migrate/ubuntu/ xenial main" > /etc/apt/sources.list.d/migrate.list && \
apt-get update && \
apt-get install -y migrate
RUN migrate -version

vendor/github.com/mattes/migrate/cli/log.go generated vendored Normal file

@@ -0,0 +1,45 @@
package main
import (
"fmt"
logpkg "log"
"os"
)
type Log struct {
verbose bool
}
func (l *Log) Printf(format string, v ...interface{}) {
if l.verbose {
logpkg.Printf(format, v...)
} else {
fmt.Fprintf(os.Stderr, format, v...)
}
}
func (l *Log) Println(args ...interface{}) {
if l.verbose {
logpkg.Println(args...)
} else {
fmt.Fprintln(os.Stderr, args...)
}
}
func (l *Log) Verbose() bool {
return l.verbose
}
func (l *Log) fatalf(format string, v ...interface{}) {
l.Printf(format, v...)
os.Exit(1)
}
func (l *Log) fatal(args ...interface{}) {
l.Println(args...)
os.Exit(1)
}
func (l *Log) fatalErr(err error) {
l.fatal("error:", err)
}

vendor/github.com/mattes/migrate/cli/main.go generated vendored Normal file

@@ -0,0 +1,237 @@
package main
import (
"flag"
"fmt"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
"time"
"github.com/mattes/migrate"
)
// set main log
var log = &Log{}
func main() {
helpPtr := flag.Bool("help", false, "")
versionPtr := flag.Bool("version", false, "")
verbosePtr := flag.Bool("verbose", false, "")
prefetchPtr := flag.Uint("prefetch", 10, "")
lockTimeoutPtr := flag.Uint("lock-timeout", 15, "")
pathPtr := flag.String("path", "", "")
databasePtr := flag.String("database", "", "")
sourcePtr := flag.String("source", "", "")
flag.Usage = func() {
fmt.Fprint(os.Stderr,
`Usage: migrate OPTIONS COMMAND [arg...]
migrate [ -version | -help ]
Options:
-source Location of the migrations (driver://url)
-path Shorthand for -source=file://path
-database Run migrations against this database (driver://url)
-prefetch N Number of migrations to load in advance before executing (default 10)
-lock-timeout N Allow N seconds to acquire database lock (default 15)
-verbose Print verbose logging
-version Print version
-help Print usage
Commands:
create [-ext E] [-dir D] NAME
Create a set of timestamped up/down migrations titled NAME, in directory D with extension E
goto V Migrate to version V
up [N] Apply all or N up migrations
down [N] Apply all or N down migrations
drop Drop everything inside database
force V Set version V but don't run migration (ignores dirty state)
version Print current migration version
`)
}
flag.Parse()
// initialize logger
log.verbose = *verbosePtr
// show cli version
if *versionPtr {
fmt.Fprintln(os.Stderr, Version)
os.Exit(0)
}
// show help
if *helpPtr {
flag.Usage()
os.Exit(0)
}
// translate -path into -source if given
if *sourcePtr == "" && *pathPtr != "" {
*sourcePtr = fmt.Sprintf("file://%v", *pathPtr)
}
// initialize migrate
// don't catch migraterErr here and let each command decide
// how it wants to handle the error
migrater, migraterErr := migrate.New(*sourcePtr, *databasePtr)
defer func() {
if migraterErr == nil {
migrater.Close()
}
}()
if migraterErr == nil {
migrater.Log = log
migrater.PrefetchMigrations = *prefetchPtr
migrater.LockTimeout = time.Duration(int64(*lockTimeoutPtr)) * time.Second
// handle Ctrl+c
signals := make(chan os.Signal, 1)
signal.Notify(signals, syscall.SIGINT)
go func() {
for range signals {
log.Println("Stopping after this running migration ...")
migrater.GracefulStop <- true
return
}
}()
}
startTime := time.Now()
switch flag.Arg(0) {
case "create":
args := flag.Args()[1:]
createFlagSet := flag.NewFlagSet("create", flag.ExitOnError)
extPtr := createFlagSet.String("ext", "", "File extension")
dirPtr := createFlagSet.String("dir", "", "Directory to place file in (default: current working directory)")
createFlagSet.Parse(args)
if createFlagSet.NArg() == 0 {
log.fatal("error: please specify name")
}
name := createFlagSet.Arg(0)
if *extPtr != "" {
*extPtr = "." + strings.TrimPrefix(*extPtr, ".")
}
if *dirPtr != "" {
*dirPtr = strings.Trim(*dirPtr, "/") + "/"
}
timestamp := startTime.Unix()
createCmd(*dirPtr, timestamp, name, *extPtr)
case "goto":
if migraterErr != nil {
log.fatalErr(migraterErr)
}
if flag.Arg(1) == "" {
log.fatal("error: please specify version argument V")
}
v, err := strconv.ParseUint(flag.Arg(1), 10, 64)
if err != nil {
log.fatal("error: can't read version argument V")
}
gotoCmd(migrater, uint(v))
if log.verbose {
log.Println("Finished after", time.Now().Sub(startTime))
}
case "up":
if migraterErr != nil {
log.fatalErr(migraterErr)
}
limit := -1
if flag.Arg(1) != "" {
n, err := strconv.ParseUint(flag.Arg(1), 10, 64)
if err != nil {
log.fatal("error: can't read limit argument N")
}
limit = int(n)
}
upCmd(migrater, limit)
if log.verbose {
log.Println("Finished after", time.Now().Sub(startTime))
}
case "down":
if migraterErr != nil {
log.fatalErr(migraterErr)
}
limit := -1
if flag.Arg(1) != "" {
n, err := strconv.ParseUint(flag.Arg(1), 10, 64)
if err != nil {
log.fatal("error: can't read limit argument N")
}
limit = int(n)
}
downCmd(migrater, limit)
if log.verbose {
log.Println("Finished after", time.Now().Sub(startTime))
}
case "drop":
if migraterErr != nil {
log.fatalErr(migraterErr)
}
dropCmd(migrater)
if log.verbose {
log.Println("Finished after", time.Now().Sub(startTime))
}
case "force":
if migraterErr != nil {
log.fatalErr(migraterErr)
}
if flag.Arg(1) == "" {
log.fatal("error: please specify version argument V")
}
v, err := strconv.ParseInt(flag.Arg(1), 10, 64)
if err != nil {
log.fatal("error: can't read version argument V")
}
if v < -1 {
log.fatal("error: argument V must be >= -1")
}
forceCmd(migrater, int(v))
if log.verbose {
log.Println("Finished after", time.Now().Sub(startTime))
}
case "version":
if migraterErr != nil {
log.fatalErr(migraterErr)
}
versionCmd(migrater)
default:
flag.Usage()
os.Exit(0)
}
}

vendor/github.com/mattes/migrate/cli/version.go generated vendored Normal file

@@ -0,0 +1,4 @@
package main
// Version is set in Makefile with build flags
var Version = "dev"


@@ -0,0 +1,31 @@
# Cassandra
* Drop command will not work on Cassandra 2.X because it relies on the
system_schema table, which comes with 3.X
* Other commands should work properly but are **not tested**
## Usage
`cassandra://host:port/keyspace?param1=value&param2=value2`
| URL Query | Default value | Description |
|------------|---------------|-------------|
| `x-migrations-table` | schema_migrations | Name of the migrations table |
| `port` | 9042 | The port to bind to |
| `consistency` | ALL | Migration consistency |
| `protocol` | | Cassandra protocol version (3 or 4) |
| `timeout` | 1 minute | Migration timeout |
| `username` | nil | Username to use when authenticating. |
| `password` | nil | Password to use when authenticating. |
`timeout` is parsed using [time.ParseDuration(s string)](https://golang.org/pkg/time/#ParseDuration)
## Upgrading from v1
1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Download and install the latest migrate version.
4. Force the current migration version with `migrate force <current_version>`.
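For example, with a hypothetical keyspace and a previously recorded version of 42:
```
$ migrate -database cassandra://localhost:9042/mykeyspace force 42
```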


@@ -0,0 +1,228 @@
package cassandra
import (
"fmt"
"io"
"io/ioutil"
nurl "net/url"
"strconv"
"time"
"github.com/gocql/gocql"
"github.com/mattes/migrate/database"
)
func init() {
db := new(Cassandra)
database.Register("cassandra", db)
}
var DefaultMigrationsTable = "schema_migrations"
var dbLocked = false
var (
ErrNilConfig = fmt.Errorf("no config")
ErrNoKeyspace = fmt.Errorf("no keyspace provided")
ErrDatabaseDirty = fmt.Errorf("database is dirty")
)
type Config struct {
MigrationsTable string
KeyspaceName string
}
type Cassandra struct {
session *gocql.Session
isLocked bool
// Open and WithInstance need to guarantee that config is never nil
config *Config
}
func (p *Cassandra) Open(url string) (database.Driver, error) {
u, err := nurl.Parse(url)
if err != nil {
return nil, err
}
// Check for missing mandatory attributes
if len(u.Path) == 0 {
return nil, ErrNoKeyspace
}
migrationsTable := u.Query().Get("x-migrations-table")
if len(migrationsTable) == 0 {
migrationsTable = DefaultMigrationsTable
}
p.config = &Config{
KeyspaceName: u.Path,
MigrationsTable: migrationsTable,
}
cluster := gocql.NewCluster(u.Host)
cluster.Keyspace = u.Path[1:len(u.Path)]
cluster.Consistency = gocql.All
cluster.Timeout = 1 * time.Minute
if len(u.Query().Get("username")) > 0 && len(u.Query().Get("password")) > 0 {
authenticator := gocql.PasswordAuthenticator{
Username: u.Query().Get("username"),
Password: u.Query().Get("password"),
}
cluster.Authenticator = authenticator
}
// Retrieve query string configuration
if len(u.Query().Get("consistency")) > 0 {
var consistency gocql.Consistency
consistency, err = parseConsistency(u.Query().Get("consistency"))
if err != nil {
return nil, err
}
cluster.Consistency = consistency
}
if len(u.Query().Get("protocol")) > 0 {
var protoversion int
protoversion, err = strconv.Atoi(u.Query().Get("protocol"))
if err != nil {
return nil, err
}
cluster.ProtoVersion = protoversion
}
if len(u.Query().Get("timeout")) > 0 {
var timeout time.Duration
timeout, err = time.ParseDuration(u.Query().Get("timeout"))
if err != nil {
return nil, err
}
cluster.Timeout = timeout
}
p.session, err = cluster.CreateSession()
if err != nil {
return nil, err
}
if err := p.ensureVersionTable(); err != nil {
return nil, err
}
return p, nil
}
func (p *Cassandra) Close() error {
p.session.Close()
return nil
}
func (p *Cassandra) Lock() error {
if dbLocked {
return database.ErrLocked
}
dbLocked = true
return nil
}
func (p *Cassandra) Unlock() error {
dbLocked = false
return nil
}
func (p *Cassandra) Run(migration io.Reader) error {
migr, err := ioutil.ReadAll(migration)
if err != nil {
return err
}
// run migration
query := string(migr[:])
if err := p.session.Query(query).Exec(); err != nil {
// TODO: cast to Cassandra error and get line number
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
}
return nil
}
func (p *Cassandra) SetVersion(version int, dirty bool) error {
query := `TRUNCATE "` + p.config.MigrationsTable + `"`
if err := p.session.Query(query).Exec(); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
if version >= 0 {
query = `INSERT INTO "` + p.config.MigrationsTable + `" (version, dirty) VALUES (?, ?)`
if err := p.session.Query(query, version, dirty).Exec(); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
}
return nil
}
// Return current keyspace version
func (p *Cassandra) Version() (version int, dirty bool, err error) {
query := `SELECT version, dirty FROM "` + p.config.MigrationsTable + `" LIMIT 1`
err = p.session.Query(query).Scan(&version, &dirty)
switch {
case err == gocql.ErrNotFound:
return database.NilVersion, false, nil
case err != nil:
if _, ok := err.(*gocql.Error); ok {
return database.NilVersion, false, nil
}
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
default:
return version, dirty, nil
}
}
func (p *Cassandra) Drop() error {
// select all tables in current schema
query := fmt.Sprintf(`SELECT table_name from system_schema.tables WHERE keyspace_name='%s'`, p.config.KeyspaceName[1:]) // Skip '/' character
iter := p.session.Query(query).Iter()
var tableName string
for iter.Scan(&tableName) {
err := p.session.Query(fmt.Sprintf(`DROP TABLE %s`, tableName)).Exec()
if err != nil {
return err
}
}
// Re-create the version table
if err := p.ensureVersionTable(); err != nil {
return err
}
return nil
}
// Ensure version table exists
func (p *Cassandra) ensureVersionTable() error {
err := p.session.Query(fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s (version bigint, dirty boolean, PRIMARY KEY(version))", p.config.MigrationsTable)).Exec()
if err != nil {
return err
}
if _, _, err = p.Version(); err != nil {
return err
}
return nil
}
// parseConsistency wraps gocql.ParseConsistency
// to return an error instead of panicking.
func parseConsistency(consistencyStr string) (consistency gocql.Consistency, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
err, ok = r.(error)
if !ok {
err = fmt.Errorf("Failed to parse consistency \"%s\": %v", consistencyStr, r)
}
}
}()
consistency = gocql.ParseConsistency(consistencyStr)
return consistency, nil
}


@@ -0,0 +1,53 @@
package cassandra
import (
"fmt"
"testing"
dt "github.com/mattes/migrate/database/testing"
mt "github.com/mattes/migrate/testing"
"github.com/gocql/gocql"
"time"
"strconv"
)
var versions = []mt.Version{
{Image: "cassandra:3.0.10"},
{Image: "cassandra:3.0"},
}
func isReady(i mt.Instance) bool {
// Cassandra exposes 5 ports (7000, 7001, 7199, 9042 & 9160).
// We only need the port bound to 9042, but 'i.Port()' (which calls
// DockerContainer.firstPortMapping()) only returns the first mapping,
// so we look up the port mapping to find the host port bound to 9042.
portMap := i.NetworkSettings().Ports
port, _ := strconv.Atoi(portMap["9042/tcp"][0].HostPort)
cluster := gocql.NewCluster(i.Host())
cluster.Port = port
//cluster.ProtoVersion = 4
cluster.Consistency = gocql.All
cluster.Timeout = 1 * time.Minute
p, err := cluster.CreateSession()
if err != nil {
return false
}
// Create keyspace for tests
p.Query("CREATE KEYSPACE testks WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor':1}").Exec()
return true
}
func Test(t *testing.T) {
mt.ParallelTest(t, versions, isReady,
func(t *testing.T, i mt.Instance) {
p := &Cassandra{}
portMap := i.NetworkSettings().Ports
port, _ := strconv.Atoi(portMap["9042/tcp"][0].HostPort)
addr := fmt.Sprintf("cassandra://%v:%v/testks", i.Host(), port)
d, err := p.Open(addr)
if err != nil {
t.Fatalf("%v", err)
}
dt.Test(t, d, []byte("SELECT table_name from system_schema.tables"))
})
}


@@ -0,0 +1,12 @@
# ClickHouse
`clickhouse://host:port?username=user&password=qwerty&database=clicks`
| URL Query | Description |
|------------|-------------|
| `x-migrations-table`| Name of the migrations table |
| `database` | The name of the database to connect to |
| `username` | The user to sign in as |
| `password` | The user's password |
| `host` | The host to connect to. |
| `port` | The port to connect to. |


@@ -0,0 +1,196 @@
package clickhouse
import (
"database/sql"
"fmt"
"io"
"io/ioutil"
"net/url"
"time"
"github.com/mattes/migrate"
"github.com/mattes/migrate/database"
)
var DefaultMigrationsTable = "schema_migrations"
var ErrNilConfig = fmt.Errorf("no config")
type Config struct {
DatabaseName string
MigrationsTable string
}
func init() {
database.Register("clickhouse", &ClickHouse{})
}
func WithInstance(conn *sql.DB, config *Config) (database.Driver, error) {
if config == nil {
return nil, ErrNilConfig
}
if err := conn.Ping(); err != nil {
return nil, err
}
ch := &ClickHouse{
conn: conn,
config: config,
}
if err := ch.init(); err != nil {
return nil, err
}
return ch, nil
}
type ClickHouse struct {
conn *sql.DB
config *Config
}
func (ch *ClickHouse) Open(dsn string) (database.Driver, error) {
purl, err := url.Parse(dsn)
if err != nil {
return nil, err
}
q := migrate.FilterCustomQuery(purl)
q.Scheme = "tcp"
conn, err := sql.Open("clickhouse", q.String())
if err != nil {
return nil, err
}
ch = &ClickHouse{
conn: conn,
config: &Config{
MigrationsTable: purl.Query().Get("x-migrations-table"),
DatabaseName: purl.Query().Get("database"),
},
}
if err := ch.init(); err != nil {
return nil, err
}
return ch, nil
}
func (ch *ClickHouse) init() error {
if len(ch.config.DatabaseName) == 0 {
if err := ch.conn.QueryRow("SELECT currentDatabase()").Scan(&ch.config.DatabaseName); err != nil {
return err
}
}
if len(ch.config.MigrationsTable) == 0 {
ch.config.MigrationsTable = DefaultMigrationsTable
}
return ch.ensureVersionTable()
}
func (ch *ClickHouse) Run(r io.Reader) error {
migration, err := ioutil.ReadAll(r)
if err != nil {
return err
}
if _, err := ch.conn.Exec(string(migration)); err != nil {
return database.Error{OrigErr: err, Err: "migration failed", Query: migration}
}
return nil
}
func (ch *ClickHouse) Version() (int, bool, error) {
var (
version int
dirty uint8
query = "SELECT version, dirty FROM `" + ch.config.MigrationsTable + "` ORDER BY sequence DESC LIMIT 1"
)
if err := ch.conn.QueryRow(query).Scan(&version, &dirty); err != nil {
if err == sql.ErrNoRows {
return database.NilVersion, false, nil
}
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
}
return version, dirty == 1, nil
}
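Because the TinyLog engine is append-only, the driver never updates rows; the current version is simply the row with the highest `sequence`, which is what the `ORDER BY sequence DESC LIMIT 1` query selects. The bookkeeping can be sketched in memory (types and names here are illustrative, not the driver's own):

```go
package main

import "fmt"

// versionRow mirrors one row of the append-only migrations table.
type versionRow struct {
	version  int
	dirty    bool
	sequence int64
}

// currentVersion replays the log and returns the row with the highest
// sequence, matching the driver's ORDER BY sequence DESC LIMIT 1 query.
func currentVersion(rows []versionRow) (int, bool) {
	if len(rows) == 0 {
		return -1, false // NilVersion: no migration applied yet
	}
	latest := rows[0]
	for _, r := range rows[1:] {
		if r.sequence > latest.sequence {
			latest = r
		}
	}
	return latest.version, latest.dirty
}

func main() {
	log := []versionRow{
		{version: 1, dirty: true, sequence: 100},  // migration 1 started
		{version: 1, dirty: false, sequence: 101}, // migration 1 finished
		{version: 2, dirty: false, sequence: 102}, // migration 2 applied
	}
	v, dirty := currentVersion(log)
	fmt.Println(v, dirty) // 2 false
}
```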
func (ch *ClickHouse) SetVersion(version int, dirty bool) error {
var (
boolToUint8 = func(v bool) uint8 {
if v {
return 1
}
return 0
}
tx, err = ch.conn.Begin()
)
if err != nil {
return err
}
query := "INSERT INTO " + ch.config.MigrationsTable + " (version, dirty, sequence) VALUES (?, ?, ?)"
if _, err := tx.Exec(query, version, boolToUint8(dirty), time.Now().UnixNano()); err != nil {
tx.Rollback()
return &database.Error{OrigErr: err, Query: []byte(query)}
}
return tx.Commit()
}
func (ch *ClickHouse) ensureVersionTable() error {
var (
table string
query = "SHOW TABLES FROM " + ch.config.DatabaseName + " LIKE '" + ch.config.MigrationsTable + "'"
)
// check if migration table exists
if err := ch.conn.QueryRow(query).Scan(&table); err != nil {
if err != sql.ErrNoRows {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
} else {
return nil
}
// if not, create the empty migration table
query = `
CREATE TABLE ` + ch.config.MigrationsTable + ` (
version UInt32,
dirty UInt8,
sequence UInt64
) Engine=TinyLog
`
if _, err := ch.conn.Exec(query); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
return nil
}
func (ch *ClickHouse) Drop() error {
var (
query = "SHOW TABLES FROM " + ch.config.DatabaseName
tables, err = ch.conn.Query(query)
)
if err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
defer tables.Close()
for tables.Next() {
var table string
if err := tables.Scan(&table); err != nil {
return err
}
query = "DROP TABLE IF EXISTS " + ch.config.DatabaseName + "." + table
if _, err := ch.conn.Exec(query); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
}
return ch.ensureVersionTable()
}
func (ch *ClickHouse) Lock() error { return nil }
func (ch *ClickHouse) Unlock() error { return nil }
func (ch *ClickHouse) Close() error { return ch.conn.Close() }


@@ -0,0 +1 @@
DROP TABLE IF EXISTS test_1;


@@ -0,0 +1,3 @@
CREATE TABLE test_1 (
Date Date
) Engine=Memory;


@@ -0,0 +1 @@
DROP TABLE IF EXISTS test_2;


@@ -0,0 +1,3 @@
CREATE TABLE test_2 (
Date Date
) Engine=Memory;


@@ -0,0 +1,19 @@
# cockroachdb
`cockroachdb://user:password@host:port/dbname?query` (`cockroach://` and `crdb-postgres://` work, too)
| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-lock-table` | `LockTable` | Name of the table which maintains the migration lock |
| `x-force-lock` | `ForceLock` | Force lock acquisition to fix faulty migrations which may not have released the schema lock (Boolean, default is `false`) |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to connect to. (default is 5432) |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |


@@ -0,0 +1,338 @@
package cockroachdb
import (
"context"
"database/sql"
"fmt"
"io"
"io/ioutil"
nurl "net/url"
"regexp"
"strconv"
"github.com/cockroachdb/cockroach-go/crdb"
"github.com/lib/pq"
"github.com/mattes/migrate"
"github.com/mattes/migrate/database"
)
func init() {
db := CockroachDb{}
database.Register("cockroach", &db)
database.Register("cockroachdb", &db)
database.Register("crdb-postgres", &db)
}
var DefaultMigrationsTable = "schema_migrations"
var DefaultLockTable = "schema_lock"
var (
ErrNilConfig = fmt.Errorf("no config")
ErrNoDatabaseName = fmt.Errorf("no database name")
)
type Config struct {
MigrationsTable string
LockTable string
ForceLock bool
DatabaseName string
}
type CockroachDb struct {
db *sql.DB
isLocked bool
// Open and WithInstance need to guarantee that config is never nil
config *Config
}
func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
if config == nil {
return nil, ErrNilConfig
}
if err := instance.Ping(); err != nil {
return nil, err
}
query := `SELECT current_database()`
var databaseName string
if err := instance.QueryRow(query).Scan(&databaseName); err != nil {
return nil, &database.Error{OrigErr: err, Query: []byte(query)}
}
if len(databaseName) == 0 {
return nil, ErrNoDatabaseName
}
config.DatabaseName = databaseName
if len(config.MigrationsTable) == 0 {
config.MigrationsTable = DefaultMigrationsTable
}
if len(config.LockTable) == 0 {
config.LockTable = DefaultLockTable
}
px := &CockroachDb{
db: instance,
config: config,
}
if err := px.ensureVersionTable(); err != nil {
return nil, err
}
if err := px.ensureLockTable(); err != nil {
return nil, err
}
return px, nil
}
func (c *CockroachDb) Open(url string) (database.Driver, error) {
purl, err := nurl.Parse(url)
if err != nil {
return nil, err
}
// As CockroachDB speaks the postgres wire protocol, and 'postgres' is already a registered
// scheme, we rewrite the URL scheme so the underlying lib/pq driver can parse the connection string
re := regexp.MustCompile("^(cockroach(db)?|crdb-postgres)")
connectString := re.ReplaceAllString(migrate.FilterCustomQuery(purl).String(), "postgres")
db, err := sql.Open("postgres", connectString)
if err != nil {
return nil, err
}
migrationsTable := purl.Query().Get("x-migrations-table")
if len(migrationsTable) == 0 {
migrationsTable = DefaultMigrationsTable
}
lockTable := purl.Query().Get("x-lock-table")
if len(lockTable) == 0 {
lockTable = DefaultLockTable
}
forceLockQuery := purl.Query().Get("x-force-lock")
forceLock, err := strconv.ParseBool(forceLockQuery)
if err != nil {
forceLock = false
}
px, err := WithInstance(db, &Config{
DatabaseName: purl.Path,
MigrationsTable: migrationsTable,
LockTable: lockTable,
ForceLock: forceLock,
})
if err != nil {
return nil, err
}
return px, nil
}
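The scheme rewrite in `Open` can be exercised on its own. A self-contained sketch of the same regexp (the helper name `toPostgresURL` is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// schemeRe matches any of the three registered CockroachDB schemes at the
// start of the URL, exactly as the driver's Open does.
var schemeRe = regexp.MustCompile("^(cockroach(db)?|crdb-postgres)")

// toPostgresURL rewrites the scheme to "postgres" so lib/pq can parse it.
func toPostgresURL(u string) string {
	return schemeRe.ReplaceAllString(u, "postgres")
}

func main() {
	for _, u := range []string{
		"cockroach://root@localhost:26257/migrate?sslmode=disable",
		"cockroachdb://root@localhost:26257/migrate",
		"crdb-postgres://root@localhost:26257/migrate",
	} {
		fmt.Println(toPostgresURL(u))
	}
}
```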
func (c *CockroachDb) Close() error {
return c.db.Close()
}
// Locking is done manually with a separate lock table. Implementing advisory locks in CRDB is being discussed
// See: https://github.com/cockroachdb/cockroach/issues/13546
func (c *CockroachDb) Lock() error {
err := crdb.ExecuteTx(context.Background(), c.db, nil, func(tx *sql.Tx) error {
aid, err := database.GenerateAdvisoryLockId(c.config.DatabaseName)
if err != nil {
return err
}
query := "SELECT * FROM " + c.config.LockTable + " WHERE lock_id = $1"
rows, err := tx.Query(query, aid)
if err != nil {
return database.Error{OrigErr: err, Err: "failed to fetch migration lock", Query: []byte(query)}
}
defer rows.Close()
// If row exists at all, lock is present
locked := rows.Next()
if locked && !c.config.ForceLock {
return database.Error{Err: "lock could not be acquired; already locked", Query: []byte(query)}
}
query = "INSERT INTO " + c.config.LockTable + " (lock_id) VALUES ($1)"
if _, err := tx.Exec(query, aid); err != nil {
return database.Error{OrigErr: err, Err: "failed to set migration lock", Query: []byte(query)}
}
return nil
})
if err != nil {
return err
}
c.isLocked = true
return nil
}
// Locking is done manually with a separate lock table. Implementing advisory locks in CRDB is being discussed
// See: https://github.com/cockroachdb/cockroach/issues/13546
func (c *CockroachDb) Unlock() error {
aid, err := database.GenerateAdvisoryLockId(c.config.DatabaseName)
if err != nil {
return err
}
// In the event of an implementation (non-migration) error, it is possible for the lock to not be released. Until
// a better locking mechanism is added, a manual purging of the lock table may be required in such circumstances
query := "DELETE FROM " + c.config.LockTable + " WHERE lock_id = $1"
if _, err := c.db.Exec(query, aid); err != nil {
if e, ok := err.(*pq.Error); ok {
// 42P01 is "UndefinedTableError" in CockroachDB
// https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/pgwire/pgerror/codes.go
if e.Code == "42P01" {
// On drops, the lock table is fully removed; this is fine, and is a valid "unlocked" state for the schema
c.isLocked = false
return nil
}
}
return database.Error{OrigErr: err, Err: "failed to release migration lock", Query: []byte(query)}
}
c.isLocked = false
return nil
}
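The check-then-insert protocol above can be sketched without a database. An in-memory stand-in for the lock table (note the real driver gets its atomicity from `crdb.ExecuteTx` retries; the mutex here only approximates that, and all names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// lockTable is an in-memory stand-in for the schema_lock table.
type lockTable struct {
	mu    sync.Mutex
	locks map[string]bool
}

var errLocked = errors.New("lock could not be acquired; already locked")

// acquire mirrors Lock: fail if a row for lockID already exists
// (unless force is set), otherwise insert one.
func (t *lockTable) acquire(lockID string, force bool) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.locks[lockID] && !force {
		return errLocked
	}
	t.locks[lockID] = true
	return nil
}

// release mirrors Unlock: delete the row whether or not it exists.
func (t *lockTable) release(lockID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.locks, lockID)
}

func main() {
	t := &lockTable{locks: map[string]bool{}}
	fmt.Println(t.acquire("db1", false)) // <nil>
	fmt.Println(t.acquire("db1", false)) // second acquire fails
	t.release("db1")
	fmt.Println(t.acquire("db1", false)) // <nil> again
}
```

The `force` flag corresponds to `x-force-lock`: it lets a new run steal a lock left behind by a crashed migration.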
func (c *CockroachDb) Run(migration io.Reader) error {
migr, err := ioutil.ReadAll(migration)
if err != nil {
return err
}
// run migration
query := string(migr)
if _, err := c.db.Exec(query); err != nil {
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
}
return nil
}
func (c *CockroachDb) SetVersion(version int, dirty bool) error {
return crdb.ExecuteTx(context.Background(), c.db, nil, func(tx *sql.Tx) error {
if _, err := tx.Exec( `TRUNCATE "` + c.config.MigrationsTable + `"`); err != nil {
return err
}
if version >= 0 {
if _, err := tx.Exec(`INSERT INTO "` + c.config.MigrationsTable + `" (version, dirty) VALUES ($1, $2)`, version, dirty); err != nil {
return err
}
}
return nil
})
}
func (c *CockroachDb) Version() (version int, dirty bool, err error) {
query := `SELECT version, dirty FROM "` + c.config.MigrationsTable + `" LIMIT 1`
err = c.db.QueryRow(query).Scan(&version, &dirty)
switch {
case err == sql.ErrNoRows:
return database.NilVersion, false, nil
case err != nil:
if e, ok := err.(*pq.Error); ok {
// 42P01 is "UndefinedTableError" in CockroachDB
// https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/pgwire/pgerror/codes.go
if e.Code == "42P01" {
return database.NilVersion, false, nil
}
}
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
default:
return version, dirty, nil
}
}
func (c *CockroachDb) Drop() error {
// select all tables in current schema
query := `SELECT table_name FROM information_schema.tables WHERE table_schema=(SELECT current_schema())`
tables, err := c.db.Query(query)
if err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
defer tables.Close()
// delete one table after another
tableNames := make([]string, 0)
for tables.Next() {
var tableName string
if err := tables.Scan(&tableName); err != nil {
return err
}
if len(tableName) > 0 {
tableNames = append(tableNames, tableName)
}
}
if len(tableNames) > 0 {
// delete one by one ...
for _, t := range tableNames {
query = `DROP TABLE IF EXISTS ` + t + ` CASCADE`
if _, err := c.db.Exec(query); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
}
if err := c.ensureVersionTable(); err != nil {
return err
}
}
return nil
}
func (c *CockroachDb) ensureVersionTable() error {
// check if migration table exists
var count int
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
if err := c.db.QueryRow(query, c.config.MigrationsTable).Scan(&count); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
if count == 1 {
return nil
}
// if not, create the empty migration table
query = `CREATE TABLE "` + c.config.MigrationsTable + `" (version INT NOT NULL PRIMARY KEY, dirty BOOL NOT NULL)`
if _, err := c.db.Exec(query); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
return nil
}
func (c *CockroachDb) ensureLockTable() error {
// check if lock table exists
var count int
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
if err := c.db.QueryRow(query, c.config.LockTable).Scan(&count); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
if count == 1 {
return nil
}
// if not, create the empty lock table
query = `CREATE TABLE "` + c.config.LockTable + `" (lock_id INT NOT NULL PRIMARY KEY)`
if _, err := c.db.Exec(query); err != nil {
return &database.Error{OrigErr: err, Query: []byte(query)}
}
return nil
}


@@ -0,0 +1,91 @@
package cockroachdb
// error codes https://github.com/lib/pq/blob/master/error.go
import (
"bytes"
"database/sql"
"fmt"
"io"
"testing"
"github.com/lib/pq"
dt "github.com/mattes/migrate/database/testing"
mt "github.com/mattes/migrate/testing"
)
var versions = []mt.Version{
{Image: "cockroachdb/cockroach:v1.0.2", Cmd: []string{"start", "--insecure"}},
}
func isReady(i mt.Instance) bool {
db, err := sql.Open("postgres", fmt.Sprintf("postgres://root@%v:%v?sslmode=disable", i.Host(), i.PortFor(26257)))
if err != nil {
return false
}
defer db.Close()
err = db.Ping()
if err == io.EOF {
_, err = db.Exec("CREATE DATABASE migrate")
return err == nil
} else if e, ok := err.(*pq.Error); ok {
if e.Code.Name() == "cannot_connect_now" {
return false
}
}
_, err = db.Exec("CREATE DATABASE migrate")
return err == nil
}
func Test(t *testing.T) {
mt.ParallelTest(t, versions, isReady,
func(t *testing.T, i mt.Instance) {
c := &CockroachDb{}
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable", i.Host(), i.PortFor(26257))
d, err := c.Open(addr)
if err != nil {
t.Fatalf("%v", err)
}
dt.Test(t, d, []byte("SELECT 1"))
})
}
func TestMultiStatement(t *testing.T) {
mt.ParallelTest(t, versions, isReady,
func(t *testing.T, i mt.Instance) {
c := &CockroachDb{}
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable", i.Host(), i.PortFor(26257))
d, err := c.Open(addr)
if err != nil {
t.Fatalf("%v", err)
}
if err := d.Run(bytes.NewReader([]byte("CREATE TABLE foo (foo text); CREATE TABLE bar (bar text);"))); err != nil {
t.Fatalf("expected err to be nil, got %v", err)
}
// make sure second table exists
var exists bool
if err := d.(*CockroachDb).db.QueryRow("SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = 'bar' AND table_schema = (SELECT current_schema()))").Scan(&exists); err != nil {
t.Fatal(err)
}
if !exists {
t.Fatalf("expected table bar to exist")
}
})
}
func TestFilterCustomQuery(t *testing.T) {
mt.ParallelTest(t, versions, isReady,
func(t *testing.T, i mt.Instance) {
c := &CockroachDb{}
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable&x-custom=foobar", i.Host(), i.PortFor(26257))
_, err := c.Open(addr)
if err != nil {
t.Fatalf("%v", err)
}
})
}


@@ -0,0 +1 @@
DROP TABLE IF EXISTS users;


@@ -0,0 +1,5 @@
CREATE TABLE users (
user_id INT UNIQUE,
name STRING(40),
email STRING(40)
);


@@ -0,0 +1 @@
ALTER TABLE users DROP COLUMN IF EXISTS city;


@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN city TEXT;


@@ -0,0 +1 @@
DROP INDEX IF EXISTS users_email_index;


@@ -0,0 +1,3 @@
CREATE UNIQUE INDEX IF NOT EXISTS users_email_index ON users (email);
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.


@@ -0,0 +1 @@
DROP TABLE IF EXISTS books;


@@ -0,0 +1,5 @@
CREATE TABLE books (
user_id INT,
name STRING(40),
author STRING(40)
);


@@ -0,0 +1 @@
DROP TABLE IF EXISTS movies;


@@ -0,0 +1,5 @@
CREATE TABLE movies (
user_id INT,
name STRING(40),
director STRING(40)
);


@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.


@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.


@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.


@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.


vendor/github.com/mattes/migrate/database/driver.go

@@ -0,0 +1,112 @@
// Package database provides the Database interface.
// All database drivers must implement this interface, register themselves,
// optionally provide a `WithInstance` function and pass the tests
// in package database/testing.
package database
import (
"fmt"
"io"
nurl "net/url"
"sync"
)
var (
ErrLocked = fmt.Errorf("can't acquire lock")
)
const NilVersion int = -1
var driversMu sync.RWMutex
var drivers = make(map[string]Driver)
// Driver is the interface every database driver must implement.
//
// How to implement a database driver?
// 1. Implement this interface.
// 2. Optionally, add a function named `WithInstance`.
// This function should accept an existing DB instance and a Config{} struct
// and return a driver instance.
// 3. Add a test that calls database/testing.go:Test()
// 4. Add own tests for Open(), WithInstance() (when provided) and Close().
// All other functions are tested by tests in database/testing.
// Saves you some time and makes sure all database drivers behave the same way.
// 5. Call Register in init().
// 6. Create a migrate/cli/build_<driver-name>.go file
// 7. Add driver name in 'DATABASE' variable in Makefile
//
// Guidelines:
// * Don't try to correct user input. Don't assume things.
// When in doubt, return an error and explain the situation to the user.
// * All configuration input must come from the URL string in func Open()
// or the Config{} struct in WithInstance. Don't os.Getenv().
type Driver interface {
// Open returns a new driver instance configured with parameters
// coming from the URL string. Migrate will call this function
// only once per instance.
Open(url string) (Driver, error)
// Close closes the underlying database instance managed by the driver.
// Migrate will call this function only once per instance.
Close() error
// Lock should acquire a database lock so that only one migration process
// can run at a time. Migrate will call this function before Run is called.
// If the implementation can't provide this functionality, return nil.
// Return database.ErrLocked if database is already locked.
Lock() error
// Unlock should release the lock. Migrate will call this function after
// all migrations have been run.
Unlock() error
// Run applies a migration to the database. migration is guaranteed to be non-nil.
Run(migration io.Reader) error
// SetVersion saves version and dirty state.
// Migrate will call this function before and after each call to Run.
// version must be >= -1. -1 means NilVersion.
SetVersion(version int, dirty bool) error
// Version returns the currently active version and if the database is dirty.
// When no migration has been applied, it must return version -1.
// Dirty means a previous migration failed and user interaction is required.
Version() (version int, dirty bool, err error)
// Drop deletes everything in the database.
Drop() error
}
// Open returns a new driver instance.
func Open(url string) (Driver, error) {
u, err := nurl.Parse(url)
if err != nil {
return nil, err
}
if u.Scheme == "" {
return nil, fmt.Errorf("database driver: invalid URL scheme")
}
driversMu.RLock()
d, ok := drivers[u.Scheme]
driversMu.RUnlock()
if !ok {
return nil, fmt.Errorf("database driver: unknown driver %v (forgotten import?)", u.Scheme)
}
return d.Open(url)
}
// Register globally registers a driver.
func Register(name string, driver Driver) {
driversMu.Lock()
defer driversMu.Unlock()
if driver == nil {
panic("Register driver is nil")
}
if _, dup := drivers[name]; dup {
panic("Register called twice for driver " + name)
}
drivers[name] = driver
}


@@ -0,0 +1,8 @@
package database
func ExampleDriver() {
// see database/stub for an example
// database/stub/stub.go has the driver implementation
// database/stub/stub_test.go runs database/testing/test.go:Test
}

vendor/github.com/mattes/migrate/database/error.go

@@ -0,0 +1,27 @@
package database
import (
"fmt"
)
// Error should be used for errors involving queries run against the database
type Error struct {
// Optional: the line number
Line uint
// Query is a query excerpt
Query []byte
// Err is a useful/helping error message for humans
Err string
// OrigErr is the underlying error
OrigErr error
}
func (e Error) Error() string {
if len(e.Err) == 0 {
return fmt.Sprintf("%v in line %v: %s", e.OrigErr, e.Line, e.Query)
}
return fmt.Sprintf("%v in line %v: %s (details: %v)", e.Err, e.Line, e.Query, e.OrigErr)
}

View File

Some files were not shown because too many files have changed in this diff.