mirror of https://github.com/fnproject/fn.git (synced 2022-10-28 21:29:17 +03:00)

Fnlb was moved to its own repo: fnproject/lb (#702)

* Fnlb was moved to its own repo: fnproject/lb
* Clean up fnlb leftovers
* Newer deps

committed by Reed Allman
parent 4ffa3d5005
commit d3be603e54
5 vendor/github.com/mattes/migrate/.travis.yml (generated, vendored)
@@ -4,7 +4,6 @@ sudo: required
go:
- 1.7
- 1.8
- 1.9

env:
- MIGRATE_TEST_CONTAINER_BOOT_DELAY=10
@@ -16,8 +15,8 @@ services:

install:
- make deps
- (cd $GOPATH/src/github.com/docker/docker && git fetch --all --tags --prune && git checkout v17.05.0-ce)
- sudo apt-get update && sudo apt-get install docker-ce=17.05.0*
- (cd $GOPATH/src/github.com/docker/docker && git fetch --all --tags --prune && git checkout v1.13.0)
- sudo apt-get update && sudo apt-get install docker-engine=1.13.0*
- go get github.com/mattn/goveralls

script:
134 vendor/github.com/mattes/migrate/FAQ.md (generated, vendored)
@@ -1,67 +1,67 @@
# FAQ

#### How is the code base structured?
```
/ package migrate (the heart of everything)
/cli the CLI wrapper
/database database driver and sub directories have the actual driver implementations
/source source driver and sub directories have the actual driver implementations
```

#### Why is there no `source/driver.go:Last()`?
It's not needed. And unless the source has a "native" way to read a directory in reversed order,
it might be expensive to do a full directory scan in order to get the last element.

#### What is a NilMigration? NilVersion?
NilMigration defines a migration without a body. NilVersion is defined as const -1.

#### What is the difference between uint(version) and int(targetVersion)?
version refers to an existing migration version coming from a source and therefor can never be negative.
targetVersion can either be a version OR represent a NilVersion, which equals -1.

#### What's the difference between Next/Previous and Up/Down?
```
1_first_migration.up.extension next -> 2_second_migration.up.extension ...
1_first_migration.down.extension <- previous 2_second_migration.down.extension ...
```

#### Why two separate files (up and down) for a migration?
It makes all of our lives easier. No new markup/syntax to learn for users
and existing database utility tools continue to work as expected.

#### How many migrations can migrate handle?
Whatever the maximum positive signed integer value is for your platform.
For 32bit it would be 2,147,483,647 migrations. Migrate only keeps references to
the currently run and pre-fetched migrations in memory. Please note that some
source drivers need to do build a full "directory" tree first, which puts some
heat on the memory consumption.

#### Are the table tests in migrate_test.go bloated?
Yes and no. There are duplicate test cases for sure but they don't hurt here. In fact
the tests are very visual now and might help new users understand expected behaviors quickly.
Migrate from version x to y and y is the last migration? Just check out the test for
that particular case and know what's going on instantly.

#### What is Docker being used for?
Only for testing. See [testing/docker.go](testing/docker.go)

#### Why not just use docker-compose?
It doesn't give us enough runtime control for testing. We want to be able to bring up containers fast
and whenever we want, not just once at the beginning of all tests.

#### Can I maintain my driver in my own repository?
Yes, technically thats possible. We want to encourage you to contribute your driver to this respository though.
The driver's functionality is dictated by migrate's interfaces. That means there should really
just be one driver for a database/ source. We want to prevent a future where several drivers doing the exact same thing,
just implemented a bit differently, co-exist somewhere on Github. If users have to do research first to find the
"best" available driver for a database in order to get started, we would have failed as an open source community.

#### Can I mix multiple sources during a batch of migrations?
No.

#### What does "dirty" database mean?
Before a migration runs, each database sets a dirty flag. Execution stops if a migration fails and the dirty state persists,
which prevents attempts to run more migrations on top of a failed migration. You need to manually fix the error
and then "force" the expected version.


# FAQ

#### How is the code base structured?
```
/ package migrate (the heart of everything)
/cli the CLI wrapper
/database database driver and sub directories have the actual driver implementations
/source source driver and sub directories have the actual driver implementations
```

#### Why is there no `source/driver.go:Last()`?
It's not needed. And unless the source has a "native" way to read a directory in reversed order,
it might be expensive to do a full directory scan in order to get the last element.

#### What is a NilMigration? NilVersion?
NilMigration defines a migration without a body. NilVersion is defined as const -1.

#### What is the difference between uint(version) and int(targetVersion)?
version refers to an existing migration version coming from a source and therefor can never be negative.
targetVersion can either be a version OR represent a NilVersion, which equals -1.

#### What's the difference between Next/Previous and Up/Down?
```
1_first_migration.up next -> 2_second_migration.up ...
1_first_migration.down <- previous 2_second_migration.down ...
```

#### Why two separate files (up and down) for a migration?
It makes all of our lives easier. No new markup/syntax to learn for users
and existing database utility tools continue to work as expected.

#### How many migrations can migrate handle?
Whatever the maximum positive signed integer value is for your platform.
For 32bit it would be 2,147,483,647 migrations. Migrate only keeps references to
the currently run and pre-fetched migrations in memory. Please note that some
source drivers need to do build a full "directory" tree first, which puts some
heat on the memory consumption.

#### Are the table tests in migrate_test.go bloated?
Yes and no. There are duplicate test cases for sure but they don't hurt here. In fact
the tests are very visual now and might help new users understand expected behaviors quickly.
Migrate from version x to y and y is the last migration? Just check out the test for
that particular case and know what's going on instantly.

#### What is Docker being used for?
Only for testing. See [testing/docker.go](testing/docker.go)

#### Why not just use docker-compose?
It doesn't give us enough runtime control for testing. We want to be able to bring up containers fast
and whenever we want, not just once at the beginning of all tests.

#### Can I maintain my driver in my own repository?
Yes, technically thats possible. We want to encourage you to contribute your driver to this respository though.
The driver's functionality is dictated by migrate's interfaces. That means there should really
just be one driver for a database/ source. We want to prevent a future where several drivers doing the exact same thing,
just implemented a bit differently, co-exist somewhere on Github. If users have to do research first to find the
"best" available driver for a database in order to get started, we would have failed as an open source community.

#### Can I mix multiple sources during a batch of migrations?
No.

#### What does "dirty" database mean?
Before a migration runs, each database sets a dirty flag. Execution stops if a migration fails and the dirty state persists,
which prevents attempts to run more migrations on top of a failed migration. You need to manually fix the error
and then "force" the expected version.

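The "dirty" answer above maps directly onto the library's API. Below is a minimal sketch (not part of this commit) of recovering from a failed migration with Force; the Postgres URL, the migrations directory and the chosen version are placeholder assumptions.

```go
// Sketch only: recovering from a "dirty" database as described in the FAQ above.
// Assumes a reachable Postgres instance and a ./migrations directory; the URLs
// are placeholders, not values taken from this commit.
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	m, err := migrate.New("file://migrations", "postgres://localhost:5432/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	if version, dirty, err := m.Version(); err == nil && dirty {
		// A previous migration failed mid-way; fix the schema by hand, then
		// force the version the database is actually at to clear the flag.
		log.Printf("database dirty at version %d", version)
		if err := m.Force(int(version)); err != nil {
			log.Fatal(err)
		}
	}

	// With the dirty flag cleared, further migrations can run again.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```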
80 vendor/github.com/mattes/migrate/MIGRATIONS.md (generated, vendored)
@@ -1,81 +1,5 @@
# Migrations

## Migration Filename Format
## Best practices: How to write migrations.

A single logical migration is represented as two separate migration files, one
to migrate "up" to the specified version from the previous version, and a second
to migrate back "down" to the previous version. These migrations can be provided
by any one of the supported [migration sources](./README.md#migration-sources).

The ordering and direction of the migration files is determined by the filenames
used for them. `migrate` expects the filenames of migrations to have the format:

{version}_{title}.up.{extension}
{version}_{title}.down.{extension}

The `title` of each migration is unused, and is only for readability. Similarly,
the `extension` of the migration files is not checked by the library, and should
be an appropriate format for the database in use (`.sql` for SQL variants, for
instance).

Versions of migrations may be represented as any 64 bit unsigned integer.
All migrations are applied upward in order of increasing version number, and
downward by decreasing version number.

Common versioning schemes include incrementing integers:

1_initialize_schema.down.sql
1_initialize_schema.up.sql
2_add_table.down.sql
2_add_table.up.sql
...

Or timestamps at an appropriate resolution:

1500360784_initialize_schema.down.sql
1500360784_initialize_schema.up.sql
1500445949_add_table.down.sql
1500445949_add_table.up.sql
...

But any scheme resulting in distinct, incrementing integers as versions is valid.

It is suggested that the version number of corresponding `up` and `down` migration
files be equivalent for clarity, but they are allowed to differ so long as the
relative ordering of the migrations is preserved.

The migration files are permitted to be empty, so in the event that a migration
is a no-op or is irreversible, it is recommended to still include both migration
files, and either leaving them empty or adding a comment as appropriate.

## Migration Content Format

The format of the migration files themselves varies between database systems.
Different databases have different semantics around schema changes and when and
how they are allowed to occur (for instance, if schema changes can occur within
a transaction).

As such, the `migrate` library has little to no checking around the format of
migration sources. The migration files are generally processed directly by the
drivers as raw operations.

## Reversibility of Migrations

Best practice for writing schema migration is that all migrations should be
reversible. It should in theory be possible for run migrations down and back up
through any and all versions with the state being fully cleaned and recreated
by doing so.

By adhering to this recommended practice, development and deployment of new code
is cleaner and easier (cleaning database state for a new feature should be as
easy as migrating down to a prior version, and back up to the latest).

As opposed to some other migration libraries, `migrate` represents up and down
migrations as separate files. This prevents any non-standard file syntax from
being introduced which may result in unintended behavior or errors, depending
on what database is processing the file.

While it is technically possible for an up or down migration to exist on its own
without an equivalently versioned counterpart, it is strongly recommended to
always include a down migration which cleans up the state of the corresponding
up migration.
@TODO

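To make the filename rules above concrete, here is a small sketch of driving such a set of files (1_initialize_schema.*.sql, 2_add_table.*.sql) through the library's file source; the database URL is a placeholder and any registered driver would do.

```go
// Sketch only: applying migrations named as described above with the file source.
// The Postgres URL and the ./migrations directory are assumptions for illustration.
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	m, err := migrate.New("file://migrations", "postgres://localhost:5432/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// Apply the next two up migrations in version order (1, then 2) ...
	if err := m.Steps(2); err != nil {
		log.Fatal(err)
	}
	// ... roll the most recent one back down again ...
	if err := m.Steps(-1); err != nil {
		log.Fatal(err)
	}
	// ... or migrate straight to a specific version, up or down as needed.
	if err := m.Migrate(2); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```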
8 vendor/github.com/mattes/migrate/Makefile (generated, vendored)
@@ -1,5 +1,5 @@
SOURCE ?= file go-bindata github aws-s3 google-cloud-storage
DATABASE ?= postgres mysql redshift cassandra sqlite3 spanner cockroachdb clickhouse
DATABASE ?= postgres mysql redshift
VERSION ?= $(shell git describe --tags 2>/dev/null | cut -c 2-)
TEST_FLAGS ?=
REPO_OWNER ?= $(shell cd .. && basename "$$(pwd)")
@@ -7,9 +7,9 @@ REPO_OWNER ?= $(shell cd .. && basename "$$(pwd)")

build-cli: clean
-mkdir ./cli/build
cd ./cli && CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -o build/migrate.linux-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
cd ./cli && CGO_ENABLED=1 GOOS=darwin GOARCH=amd64 go build -a -o build/migrate.darwin-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
cd ./cli && CGO_ENABLED=1 GOOS=windows GOARCH=amd64 go build -a -o build/migrate.windows-amd64.exe -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
cd ./cli && GOOS=linux GOARCH=amd64 go build -a -o build/migrate.linux-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
cd ./cli && GOOS=darwin GOARCH=amd64 go build -a -o build/migrate.darwin-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
cd ./cli && GOOS=windows GOARCH=amd64 go build -a -o build/migrate.windows-amd64.exe -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
cd ./cli/build && find . -name 'migrate*' | xargs -I{} tar czf {}.tar.gz {}
cd ./cli/build && shasum -a 256 * > sha256sum.txt
cat ./cli/build/sha256sum.txt
15 vendor/github.com/mattes/migrate/README.md (generated, vendored)
@@ -24,16 +24,15 @@ Database drivers run migrations. [Add a new database?](database/driver.go)
* [PostgreSQL](database/postgres)
* [Redshift](database/redshift)
* [Ql](database/ql)
* [Cassandra](database/cassandra)
* [SQLite](database/sqlite3)
* [Cassandra](database/cassandra) ([todo #164](https://github.com/mattes/migrate/issues/164))
* [SQLite](database/sqlite) ([todo #165](https://github.com/mattes/migrate/issues/165))
* [MySQL/ MariaDB](database/mysql)
* [Neo4j](database/neo4j) ([todo #167](https://github.com/mattes/migrate/issues/167))
* [MongoDB](database/mongodb) ([todo #169](https://github.com/mattes/migrate/issues/169))
* [CrateDB](database/crate) ([todo #170](https://github.com/mattes/migrate/issues/170))
* [Shell](database/shell) ([todo #171](https://github.com/mattes/migrate/issues/171))
* [Google Cloud Spanner](database/spanner)
* [CockroachDB](database/cockroachdb)
* [ClickHouse](database/clickhouse)
* [Google Cloud Spanner](database/spanner) ([todo #172](https://github.com/mattes/migrate/issues/172))


## Migration Sources
@@ -137,4 +136,8 @@ Also have a look at the [FAQ](FAQ.md).

---

Looking for alternatives? [https://awesome-go.com/#database](https://awesome-go.com/#database).
__Alternatives__

https://bitbucket.org/liamstask/goose, https://github.com/tanel/dbmigrate,
https://github.com/BurntSushi/migration, https://github.com/DavidHuie/gomigrate,
https://github.com/rubenv/sql-migrate
12 vendor/github.com/mattes/migrate/cli/README.md (generated, vendored)
@@ -5,16 +5,10 @@
#### With Go toolchain

```
$ go get -u -d github.com/mattes/migrate/cli github.com/lib/pq
$ go get -u -d github.com/mattes/migrate/cli
$ go build -tags 'postgres' -o /usr/local/bin/migrate github.com/mattes/migrate/cli
```

Note: This example builds the cli which will only work with postgres. In order
to build the cli for use with other databases, replace the `postgres` build tag
with the appropriate database tag(s) for the databases desired. The tags
correspond to the names of the sub-packages underneath the
[`database`](../database) package.

#### MacOS

([todo #156](https://github.com/mattes/migrate/issues/156))
@@ -51,7 +45,7 @@ Usage: migrate OPTIONS COMMAND [arg...]

Options:
-source Location of the migrations (driver://url)
-path Shorthand for -source=file://path
-path Shorthand for -source=file://path
-database Run migrations against this database (driver://url)
-prefetch N Number of migrations to load in advance before executing (default 10)
-lock-timeout N Allow N seconds to acquire database lock (default 15)
@@ -60,8 +54,6 @@ Options:
-help Print usage

Commands:
create [-ext E] [-dir D] NAME
Create a set of timestamped up/down migrations titled NAME, in directory D with extension E
goto V Migrate to version V
up [N] Apply all or N up migrations
down [N] Apply all or N down migrations
7 vendor/github.com/mattes/migrate/cli/build_cassandra.go (generated, vendored)
@@ -1,7 +0,0 @@
// +build cassandra

package main

import (
_ "github.com/mattes/migrate/database/cassandra"
)
8 vendor/github.com/mattes/migrate/cli/build_clickhouse.go (generated, vendored)
@@ -1,8 +0,0 @@
// +build clickhouse

package main

import (
_ "github.com/kshvakov/clickhouse"
_ "github.com/mattes/migrate/database/clickhouse"
)
7 vendor/github.com/mattes/migrate/cli/build_cockroachdb.go (generated, vendored)
@@ -1,7 +0,0 @@
// +build cockroachdb

package main

import (
_ "github.com/mattes/migrate/database/cockroachdb"
)
7 vendor/github.com/mattes/migrate/cli/build_spanner.go (generated, vendored)
@@ -1,7 +0,0 @@
// +build spanner

package main

import (
_ "github.com/mattes/migrate/database/spanner"
)
7 vendor/github.com/mattes/migrate/cli/build_sqlite3.go (generated, vendored)
@@ -1,7 +0,0 @@
// +build sqlite3

package main

import (
_ "github.com/mattes/migrate/database/sqlite3"
)
15 vendor/github.com/mattes/migrate/cli/commands.go (generated, vendored)
@@ -4,23 +4,8 @@ import (
"github.com/mattes/migrate"
_ "github.com/mattes/migrate/database/stub" // TODO remove again
_ "github.com/mattes/migrate/source/file"
"os"
"fmt"
)

func createCmd(dir string, timestamp int64, name string, ext string) {
base := fmt.Sprintf("%v%v_%v.", dir, timestamp, name)
os.MkdirAll(dir, os.ModePerm)
createFile(base + "up" + ext)
createFile(base + "down" + ext)
}

func createFile(fname string) {
if _, err := os.Create(fname); err != nil {
log.fatalErr(err)
}
}

func gotoCmd(m *migrate.Migrate, v uint) {
if err := m.Migrate(v); err != nil {
if err != migrate.ErrNoChange {
27 vendor/github.com/mattes/migrate/cli/main.go (generated, vendored)
@@ -6,7 +6,6 @@ import (
"os"
"os/signal"
"strconv"
"strings"
"syscall"
"time"

@@ -42,8 +41,6 @@ Options:
-help Print usage

Commands:
create [-ext E] [-dir D] NAME
Create a set of timestamped up/down migrations titled NAME, in directory D with extension E
goto V Migrate to version V
up [N] Apply all or N up migrations
down [N] Apply all or N down migrations
@@ -104,30 +101,6 @@ Commands:
startTime := time.Now()

switch flag.Arg(0) {
case "create":
args := flag.Args()[1:]

createFlagSet := flag.NewFlagSet("create", flag.ExitOnError)
extPtr := createFlagSet.String("ext", "", "File extension")
dirPtr := createFlagSet.String("dir", "", "Directory to place file in (default: current working directory)")
createFlagSet.Parse(args)

if createFlagSet.NArg() == 0 {
log.fatal("error: please specify name")
}
name := createFlagSet.Arg(0)

if *extPtr != "" {
*extPtr = "." + strings.TrimPrefix(*extPtr, ".")
}
if *dirPtr != "" {
*dirPtr = strings.Trim(*dirPtr, "/") + "/"
}

timestamp := startTime.Unix()

createCmd(*dirPtr, timestamp, name, *extPtr)

case "goto":
if migraterErr != nil {
log.fatalErr(migraterErr)
31 vendor/github.com/mattes/migrate/database/cassandra/README.md (generated, vendored)
@@ -1,31 +0,0 @@
# Cassandra

* Drop command will not work on Cassandra 2.X because it rely on
system_schema table which comes with 3.X
* Other commands should work properly but are **not tested**


## Usage
`cassandra://host:port/keyspace?param1=value&param2=value2`


| URL Query | Default value | Description |
|------------|-------------|-----------|
| `x-migrations-table` | schema_migrations | Name of the migrations table |
| `port` | 9042 | The port to bind to |
| `consistency` | ALL | Migration consistency
| `protocol` | | Cassandra protocol version (3 or 4)
| `timeout` | 1 minute | Migration timeout
| `username` | nil | Username to use when authenticating. |
| `password` | nil | Password to use when authenticating. |


`timeout` is parsed using [time.ParseDuration(s string)](https://golang.org/pkg/time/#ParseDuration)


## Upgrading from v1

1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
4. Download and install the latest migrate version.
5. Force the current migration version with `migrate force <current_version>`.
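For reference, a hedged sketch of how the URL format and query options documented in the README above would be used from Go; the host, keyspace and option values are placeholders, not values from this commit.

```go
// Sketch only: opening the Cassandra driver with the URL options listed above
// (consistency, protocol, timeout, x-migrations-table). All values are assumptions.
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/cassandra"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	m, err := migrate.New(
		"file://migrations",
		"cassandra://localhost:9042/testks?consistency=ALL&protocol=4&timeout=1m&x-migrations-table=schema_migrations",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```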
228 vendor/github.com/mattes/migrate/database/cassandra/cassandra.go (generated, vendored)
@@ -1,228 +0,0 @@
|
||||
package cassandra
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/gocql/gocql"
|
||||
"github.com/mattes/migrate/database"
|
||||
)
|
||||
|
||||
func init() {
|
||||
db := new(Cassandra)
|
||||
database.Register("cassandra", db)
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
var dbLocked = false
|
||||
|
||||
var (
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoKeyspace = fmt.Errorf("no keyspace provided")
|
||||
ErrDatabaseDirty = fmt.Errorf("database is dirty")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
KeyspaceName string
|
||||
}
|
||||
|
||||
type Cassandra struct {
|
||||
session *gocql.Session
|
||||
isLocked bool
|
||||
|
||||
// Open and WithInstance need to guarantee that config is never nil
|
||||
config *Config
|
||||
}
|
||||
|
||||
func (p *Cassandra) Open(url string) (database.Driver, error) {
|
||||
u, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Check for missing mandatory attributes
|
||||
if len(u.Path) == 0 {
|
||||
return nil, ErrNoKeyspace
|
||||
}
|
||||
|
||||
migrationsTable := u.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
p.config = &Config{
|
||||
KeyspaceName: u.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
}
|
||||
|
||||
cluster := gocql.NewCluster(u.Host)
|
||||
cluster.Keyspace = u.Path[1:len(u.Path)]
|
||||
cluster.Consistency = gocql.All
|
||||
cluster.Timeout = 1 * time.Minute
|
||||
|
||||
if len(u.Query().Get("username")) > 0 && len(u.Query().Get("password")) > 0 {
|
||||
authenticator := gocql.PasswordAuthenticator{
|
||||
Username: u.Query().Get("username"),
|
||||
Password: u.Query().Get("password"),
|
||||
}
|
||||
cluster.Authenticator = authenticator
|
||||
}
|
||||
|
||||
// Retrieve query string configuration
|
||||
if len(u.Query().Get("consistency")) > 0 {
|
||||
var consistency gocql.Consistency
|
||||
consistency, err = parseConsistency(u.Query().Get("consistency"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cluster.Consistency = consistency
|
||||
}
|
||||
if len(u.Query().Get("protocol")) > 0 {
|
||||
var protoversion int
|
||||
protoversion, err = strconv.Atoi(u.Query().Get("protocol"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cluster.ProtoVersion = protoversion
|
||||
}
|
||||
if len(u.Query().Get("timeout")) > 0 {
|
||||
var timeout time.Duration
|
||||
timeout, err = time.ParseDuration(u.Query().Get("timeout"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cluster.Timeout = timeout
|
||||
}
|
||||
|
||||
p.session, err = cluster.CreateSession()
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := p.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Close() error {
|
||||
p.session.Close()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Lock() error {
|
||||
if dbLocked {
|
||||
return database.ErrLocked
|
||||
}
|
||||
dbLocked = true
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Unlock() error {
|
||||
dbLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// run migration
|
||||
query := string(migr[:])
|
||||
if err := p.session.Query(query).Exec(); err != nil {
|
||||
// TODO: cast to Cassandra error and get line number
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) SetVersion(version int, dirty bool) error {
|
||||
query := `TRUNCATE "` + p.config.MigrationsTable + `"`
|
||||
if err := p.session.Query(query).Exec(); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if version >= 0 {
|
||||
query = `INSERT INTO "` + p.config.MigrationsTable + `" (version, dirty) VALUES (?, ?)`
|
||||
if err := p.session.Query(query, version, dirty).Exec(); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Return current keyspace version
|
||||
func (p *Cassandra) Version() (version int, dirty bool, err error) {
|
||||
query := `SELECT version, dirty FROM "` + p.config.MigrationsTable + `" LIMIT 1`
|
||||
err = p.session.Query(query).Scan(&version, &dirty)
|
||||
switch {
|
||||
case err == gocql.ErrNotFound:
|
||||
return database.NilVersion, false, nil
|
||||
|
||||
case err != nil:
|
||||
if _, ok := err.(*gocql.Error); ok {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
|
||||
default:
|
||||
return version, dirty, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Cassandra) Drop() error {
|
||||
// select all tables in current schema
|
||||
query := fmt.Sprintf(`SELECT table_name from system_schema.tables WHERE keyspace_name='%s'`, p.config.KeyspaceName[1:]) // Skip '/' character
|
||||
iter := p.session.Query(query).Iter()
|
||||
var tableName string
|
||||
for iter.Scan(&tableName) {
|
||||
err := p.session.Query(fmt.Sprintf(`DROP TABLE %s`, tableName)).Exec()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// Re-create the version table
|
||||
if err := p.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Ensure version table exists
|
||||
func (p *Cassandra) ensureVersionTable() error {
|
||||
err := p.session.Query(fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s (version bigint, dirty boolean, PRIMARY KEY(version))", p.config.MigrationsTable)).Exec()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, _, err = p.Version(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ParseConsistency wraps gocql.ParseConsistency
|
||||
// to return an error instead of a panicking.
|
||||
func parseConsistency(consistencyStr string) (consistency gocql.Consistency, err error) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
var ok bool
|
||||
err, ok = r.(error)
|
||||
if !ok {
|
||||
err = fmt.Errorf("Failed to parse consistency \"%s\": %v", consistencyStr, r)
|
||||
}
|
||||
}
|
||||
}()
|
||||
consistency = gocql.ParseConsistency(consistencyStr)
|
||||
|
||||
return consistency, nil
|
||||
}
|
||||
53 vendor/github.com/mattes/migrate/database/cassandra/cassandra_test.go (generated, vendored)
@@ -1,53 +0,0 @@
|
||||
package cassandra
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
mt "github.com/mattes/migrate/testing"
|
||||
"github.com/gocql/gocql"
|
||||
"time"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
var versions = []mt.Version{
|
||||
{Image: "cassandra:3.0.10"},
|
||||
{Image: "cassandra:3.0"},
|
||||
}
|
||||
|
||||
func isReady(i mt.Instance) bool {
|
||||
// Cassandra exposes 5 ports (7000, 7001, 7199, 9042 & 9160)
|
||||
// We only need the port bound to 9042, but we can only access to the first one
|
||||
// through 'i.Port()' (which calls DockerContainer.firstPortMapping())
|
||||
// So we need to get port mapping to retrieve correct port number bound to 9042
|
||||
portMap := i.NetworkSettings().Ports
|
||||
port, _ := strconv.Atoi(portMap["9042/tcp"][0].HostPort)
|
||||
|
||||
cluster := gocql.NewCluster(i.Host())
|
||||
cluster.Port = port
|
||||
//cluster.ProtoVersion = 4
|
||||
cluster.Consistency = gocql.All
|
||||
cluster.Timeout = 1 * time.Minute
|
||||
p, err := cluster.CreateSession()
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
// Create keyspace for tests
|
||||
p.Query("CREATE KEYSPACE testks WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor':1}").Exec()
|
||||
return true
|
||||
}
|
||||
|
||||
func Test(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Cassandra{}
|
||||
portMap := i.NetworkSettings().Ports
|
||||
port, _ := strconv.Atoi(portMap["9042/tcp"][0].HostPort)
|
||||
addr := fmt.Sprintf("cassandra://%v:%v/testks", i.Host(), port)
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
dt.Test(t, d, []byte("SELECT table_name from system_schema.tables"))
|
||||
})
|
||||
}
|
||||
12 vendor/github.com/mattes/migrate/database/clickhouse/README.md (generated, vendored)
@@ -1,12 +0,0 @@
# ClickHouse

`clickhouse://host:port?username=user&password=qwerty&database=clicks`

| URL Query | Description |
|------------|-------------|
| `x-migrations-table`| Name of the migrations table |
| `database` | The name of the database to connect to |
| `username` | The user to sign in as |
| `password` | The user's password |
| `host` | The host to connect to. |
| `port` | The port to bind to. |
196 vendor/github.com/mattes/migrate/database/clickhouse/clickhouse.go (generated, vendored)
@@ -1,196 +0,0 @@
|
||||
package clickhouse
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net/url"
|
||||
"time"
|
||||
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
)
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
|
||||
var ErrNilConfig = fmt.Errorf("no config")
|
||||
|
||||
type Config struct {
|
||||
DatabaseName string
|
||||
MigrationsTable string
|
||||
}
|
||||
|
||||
func init() {
|
||||
database.Register("clickhouse", &ClickHouse{})
|
||||
}
|
||||
|
||||
func WithInstance(conn *sql.DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if err := conn.Ping(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ch := &ClickHouse{
|
||||
conn: conn,
|
||||
config: config,
|
||||
}
|
||||
|
||||
if err := ch.init(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return ch, nil
|
||||
}
|
||||
|
||||
type ClickHouse struct {
|
||||
conn *sql.DB
|
||||
config *Config
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) Open(dsn string) (database.Driver, error) {
|
||||
purl, err := url.Parse(dsn)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
q := migrate.FilterCustomQuery(purl)
|
||||
q.Scheme = "tcp"
|
||||
conn, err := sql.Open("clickhouse", q.String())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ch = &ClickHouse{
|
||||
conn: conn,
|
||||
config: &Config{
|
||||
MigrationsTable: purl.Query().Get("x-migrations-table"),
|
||||
DatabaseName: purl.Query().Get("database"),
|
||||
},
|
||||
}
|
||||
|
||||
if err := ch.init(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return ch, nil
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) init() error {
|
||||
if len(ch.config.DatabaseName) == 0 {
|
||||
if err := ch.conn.QueryRow("SELECT currentDatabase()").Scan(&ch.config.DatabaseName); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(ch.config.MigrationsTable) == 0 {
|
||||
ch.config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
return ch.ensureVersionTable()
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) Run(r io.Reader) error {
|
||||
migration, err := ioutil.ReadAll(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := ch.conn.Exec(string(migration)); err != nil {
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migration}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
func (ch *ClickHouse) Version() (int, bool, error) {
|
||||
var (
|
||||
version int
|
||||
dirty uint8
|
||||
query = "SELECT version, dirty FROM `" + ch.config.MigrationsTable + "` ORDER BY sequence DESC LIMIT 1"
|
||||
)
|
||||
if err := ch.conn.QueryRow(query).Scan(&version, &dirty); err != nil {
|
||||
if err == sql.ErrNoRows {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
return version, dirty == 1, nil
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) SetVersion(version int, dirty bool) error {
|
||||
var (
|
||||
bool = func(v bool) uint8 {
|
||||
if v {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
tx, err = ch.conn.Begin()
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := "INSERT INTO " + ch.config.MigrationsTable + " (version, dirty, sequence) VALUES (?, ?, ?)"
|
||||
if _, err := tx.Exec(query, version, bool(dirty), time.Now().UnixNano()); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
return tx.Commit()
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) ensureVersionTable() error {
|
||||
var (
|
||||
table string
|
||||
query = "SHOW TABLES FROM " + ch.config.DatabaseName + " LIKE '" + ch.config.MigrationsTable + "'"
|
||||
)
|
||||
// check if migration table exists
|
||||
if err := ch.conn.QueryRow(query).Scan(&table); err != nil {
|
||||
if err != sql.ErrNoRows {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
} else {
|
||||
return nil
|
||||
}
|
||||
// if not, create the empty migration table
|
||||
query = `
|
||||
CREATE TABLE ` + ch.config.MigrationsTable + ` (
|
||||
version UInt32,
|
||||
dirty UInt8,
|
||||
sequence UInt64
|
||||
) Engine=TinyLog
|
||||
`
|
||||
if _, err := ch.conn.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) Drop() error {
|
||||
var (
|
||||
query = "SHOW TABLES FROM " + ch.config.DatabaseName
|
||||
tables, err = ch.conn.Query(query)
|
||||
)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
defer tables.Close()
|
||||
for tables.Next() {
|
||||
var table string
|
||||
if err := tables.Scan(&table); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query = "DROP TABLE IF EXISTS " + ch.config.DatabaseName + "." + table
|
||||
|
||||
if _, err := ch.conn.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
return ch.ensureVersionTable()
|
||||
}
|
||||
|
||||
func (ch *ClickHouse) Lock() error { return nil }
|
||||
func (ch *ClickHouse) Unlock() error { return nil }
|
||||
func (ch *ClickHouse) Close() error { return ch.conn.Close() }
|
||||
@@ -1 +0,0 @@
|
||||
DROP TABLE IF EXISTS test_1;
|
||||
@@ -1,3 +0,0 @@
|
||||
CREATE TABLE test_1 (
|
||||
Date Date
|
||||
) Engine=Memory;
|
||||
@@ -1 +0,0 @@
|
||||
DROP TABLE IF EXISTS test_2;
|
||||
@@ -1,3 +0,0 @@
|
||||
CREATE TABLE test_2 (
|
||||
Date Date
|
||||
) Engine=Memory;
|
||||
19 vendor/github.com/mattes/migrate/database/cockroachdb/README.md (generated, vendored)
@@ -1,19 +0,0 @@
# cockroachdb

`cockroachdb://user:password@host:port/dbname?query` (`cockroach://`, and `crdb-postgres://` work, too)

| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-lock-table` | `LockTable` | Name of the table which maintains the migration lock |
| `x-force-lock` | `ForceLock` | Force lock acquisition to fix faulty migrations which may not have released the schema lock (Boolean, default is `false`) |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5432) |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |
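The middle column of the table above refers to the driver's Config struct (MigrationsTable, LockTable, ForceLock, as defined in cockroachdb.go further down in this diff). A sketch of the equivalent WithInstance wiring, with a placeholder DSN and the default table names, might look like this:

```go
// Sketch only: wiring the CockroachDB driver through WithInstance, mirroring the
// URL-query-to-Config mapping in the README table above. The DSN is a placeholder.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database/cockroachdb"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	db, err := sql.Open("postgres", "postgres://root@localhost:26257/migrate?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	driver, err := cockroachdb.WithInstance(db, &cockroachdb.Config{
		MigrationsTable: "schema_migrations", // x-migrations-table
		LockTable:       "schema_lock",       // x-lock-table
		ForceLock:       false,               // x-force-lock
	})
	if err != nil {
		log.Fatal(err)
	}

	m, err := migrate.NewWithDatabaseInstance("file://migrations", "cockroachdb", driver)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```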
338 vendor/github.com/mattes/migrate/database/cockroachdb/cockroachdb.go (generated, vendored)
@@ -1,338 +0,0 @@
|
||||
package cockroachdb
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
|
||||
"github.com/cockroachdb/cockroach-go/crdb"
|
||||
"github.com/lib/pq"
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"context"
|
||||
)
|
||||
|
||||
func init() {
|
||||
db := CockroachDb{}
|
||||
database.Register("cockroach", &db)
|
||||
database.Register("cockroachdb", &db)
|
||||
database.Register("crdb-postgres", &db)
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
var DefaultLockTable = "schema_lock"
|
||||
|
||||
var (
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoDatabaseName = fmt.Errorf("no database name")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
LockTable string
|
||||
ForceLock bool
|
||||
DatabaseName string
|
||||
}
|
||||
|
||||
type CockroachDb struct {
|
||||
db *sql.DB
|
||||
isLocked bool
|
||||
|
||||
// Open and WithInstance need to guarantee that config is never nil
|
||||
config *Config
|
||||
}
|
||||
|
||||
func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if err := instance.Ping(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
query := `SELECT current_database()`
|
||||
var databaseName string
|
||||
if err := instance.QueryRow(query).Scan(&databaseName); err != nil {
|
||||
return nil, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if len(databaseName) == 0 {
|
||||
return nil, ErrNoDatabaseName
|
||||
}
|
||||
|
||||
config.DatabaseName = databaseName
|
||||
|
||||
if len(config.MigrationsTable) == 0 {
|
||||
config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
if len(config.LockTable) == 0 {
|
||||
config.LockTable = DefaultLockTable
|
||||
}
|
||||
|
||||
px := &CockroachDb{
|
||||
db: instance,
|
||||
config: config,
|
||||
}
|
||||
|
||||
if err := px.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := px.ensureLockTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return px, nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Open(url string) (database.Driver, error) {
|
||||
purl, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// As Cockroach uses the postgres protocol, and 'postgres' is already a registered database, we need to replace the
|
||||
// connect prefix, with the actual protocol, so that the library can differentiate between the implementations
|
||||
re := regexp.MustCompile("^(cockroach(db)?|crdb-postgres)")
|
||||
connectString := re.ReplaceAllString(migrate.FilterCustomQuery(purl).String(), "postgres")
|
||||
|
||||
db, err := sql.Open("postgres", connectString)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
migrationsTable := purl.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
lockTable := purl.Query().Get("x-lock-table")
|
||||
if len(lockTable) == 0 {
|
||||
lockTable = DefaultLockTable
|
||||
}
|
||||
|
||||
forceLockQuery := purl.Query().Get("x-force-lock")
|
||||
forceLock, err := strconv.ParseBool(forceLockQuery)
|
||||
if err != nil {
|
||||
forceLock = false
|
||||
}
|
||||
|
||||
px, err := WithInstance(db, &Config{
|
||||
DatabaseName: purl.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
LockTable: lockTable,
|
||||
ForceLock: forceLock,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return px, nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Close() error {
|
||||
return c.db.Close()
|
||||
}
|
||||
|
||||
// Locking is done manually with a separate lock table. Implementing advisory locks in CRDB is being discussed
|
||||
// See: https://github.com/cockroachdb/cockroach/issues/13546
|
||||
func (c *CockroachDb) Lock() error {
|
||||
err := crdb.ExecuteTx(context.Background(), c.db, nil, func(tx *sql.Tx) error {
|
||||
aid, err := database.GenerateAdvisoryLockId(c.config.DatabaseName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := "SELECT * FROM " + c.config.LockTable + " WHERE lock_id = $1"
|
||||
rows, err := tx.Query(query, aid)
|
||||
if err != nil {
|
||||
return database.Error{OrigErr: err, Err: "failed to fetch migration lock", Query: []byte(query)}
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
// If row exists at all, lock is present
|
||||
locked := rows.Next()
|
||||
if locked && !c.config.ForceLock {
|
||||
return database.Error{Err: "lock could not be acquired; already locked", Query: []byte(query)}
|
||||
}
|
||||
|
||||
query = "INSERT INTO " + c.config.LockTable + " (lock_id) VALUES ($1)"
|
||||
if _, err := tx.Exec(query, aid) ; err != nil {
|
||||
return database.Error{OrigErr: err, Err: "failed to set migration lock", Query: []byte(query)}
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
} else {
|
||||
c.isLocked = true
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Locking is done manually with a separate lock table. Implementing advisory locks in CRDB is being discussed
|
||||
// See: https://github.com/cockroachdb/cockroach/issues/13546
|
||||
func (c *CockroachDb) Unlock() error {
|
||||
aid, err := database.GenerateAdvisoryLockId(c.config.DatabaseName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// In the event of an implementation (non-migration) error, it is possible for the lock to not be released. Until
|
||||
// a better locking mechanism is added, a manual purging of the lock table may be required in such circumstances
|
||||
query := "DELETE FROM " + c.config.LockTable + " WHERE lock_id = $1"
|
||||
if _, err := c.db.Exec(query, aid); err != nil {
|
||||
if e, ok := err.(*pq.Error); ok {
|
||||
// 42P01 is "UndefinedTableError" in CockroachDB
|
||||
// https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/pgwire/pgerror/codes.go
|
||||
if e.Code == "42P01" {
|
||||
// On drops, the lock table is fully removed; This is fine, and is a valid "unlocked" state for the schema
|
||||
c.isLocked = false
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return database.Error{OrigErr: err, Err: "failed to release migration lock", Query: []byte(query)}
|
||||
}
|
||||
|
||||
c.isLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// run migration
|
||||
query := string(migr[:])
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) SetVersion(version int, dirty bool) error {
|
||||
return crdb.ExecuteTx(context.Background(), c.db, nil, func(tx *sql.Tx) error {
|
||||
if _, err := tx.Exec( `TRUNCATE "` + c.config.MigrationsTable + `"`); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if version >= 0 {
|
||||
if _, err := tx.Exec(`INSERT INTO "` + c.config.MigrationsTable + `" (version, dirty) VALUES ($1, $2)`, version, dirty); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Version() (version int, dirty bool, err error) {
|
||||
query := `SELECT version, dirty FROM "` + c.config.MigrationsTable + `" LIMIT 1`
|
||||
err = c.db.QueryRow(query).Scan(&version, &dirty)
|
||||
|
||||
switch {
|
||||
case err == sql.ErrNoRows:
|
||||
return database.NilVersion, false, nil
|
||||
|
||||
case err != nil:
|
||||
if e, ok := err.(*pq.Error); ok {
|
||||
// 42P01 is "UndefinedTableError" in CockroachDB
|
||||
// https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/pgwire/pgerror/codes.go
|
||||
if e.Code == "42P01" {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
|
||||
default:
|
||||
return version, dirty, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Drop() error {
|
||||
// select all tables in current schema
|
||||
query := `SELECT table_name FROM information_schema.tables WHERE table_schema=(SELECT current_schema())`
|
||||
tables, err := c.db.Query(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
defer tables.Close()
|
||||
|
||||
// delete one table after another
|
||||
tableNames := make([]string, 0)
|
||||
for tables.Next() {
|
||||
var tableName string
|
||||
if err := tables.Scan(&tableName); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(tableName) > 0 {
|
||||
tableNames = append(tableNames, tableName)
|
||||
}
|
||||
}
|
||||
|
||||
if len(tableNames) > 0 {
|
||||
// delete one by one ...
|
||||
for _, t := range tableNames {
|
||||
query = `DROP TABLE IF EXISTS ` + t + ` CASCADE`
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
if err := c.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) ensureVersionTable() error {
|
||||
// check if migration table exists
|
||||
var count int
|
||||
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
|
||||
if err := c.db.QueryRow(query, c.config.MigrationsTable).Scan(&count); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if count == 1 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// if not, create the empty migration table
|
||||
query = `CREATE TABLE "` + c.config.MigrationsTable + `" (version INT NOT NULL PRIMARY KEY, dirty BOOL NOT NULL)`
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
func (c *CockroachDb) ensureLockTable() error {
|
||||
// check if lock table exists
|
||||
var count int
|
||||
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
|
||||
if err := c.db.QueryRow(query, c.config.LockTable).Scan(&count); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if count == 1 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// if not, create the empty lock table
|
||||
query = `CREATE TABLE "` + c.config.LockTable + `" (lock_id INT NOT NULL PRIMARY KEY)`
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
91 vendor/github.com/mattes/migrate/database/cockroachdb/cockroachdb_test.go (generated, vendored)
@@ -1,91 +0,0 @@
|
||||
package cockroachdb
|
||||
|
||||
// error codes https://github.com/lib/pq/blob/master/error.go
|
||||
|
||||
import (
|
||||
//"bytes"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/lib/pq"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
mt "github.com/mattes/migrate/testing"
|
||||
"bytes"
|
||||
)
|
||||
|
||||
var versions = []mt.Version{
|
||||
{Image: "cockroachdb/cockroach:v1.0.2", Cmd: []string{"start", "--insecure"}},
|
||||
}
|
||||
|
||||
func isReady(i mt.Instance) bool {
|
||||
db, err := sql.Open("postgres", fmt.Sprintf("postgres://root@%v:%v?sslmode=disable", i.Host(), i.PortFor(26257)))
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
defer db.Close()
|
||||
err = db.Ping()
|
||||
if err == io.EOF {
|
||||
_, err = db.Exec("CREATE DATABASE migrate")
|
||||
return err == nil;
|
||||
} else if e, ok := err.(*pq.Error); ok {
|
||||
if e.Code.Name() == "cannot_connect_now" {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
_, err = db.Exec("CREATE DATABASE migrate")
|
||||
return err == nil;
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func Test(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
c := &CockroachDb{}
|
||||
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable", i.Host(), i.PortFor(26257))
|
||||
d, err := c.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
dt.Test(t, d, []byte("SELECT 1"))
|
||||
})
|
||||
}
|
||||
|
||||
func TestMultiStatement(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
c := &CockroachDb{}
|
||||
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable", i.Host(), i.Port())
|
||||
d, err := c.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
if err := d.Run(bytes.NewReader([]byte("CREATE TABLE foo (foo text); CREATE TABLE bar (bar text);"))); err != nil {
|
||||
t.Fatalf("expected err to be nil, got %v", err)
|
||||
}
|
||||
|
||||
// make sure second table exists
|
||||
var exists bool
|
||||
if err := d.(*CockroachDb).db.QueryRow("SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = 'bar' AND table_schema = (SELECT current_schema()))").Scan(&exists); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !exists {
|
||||
t.Fatalf("expected table bar to exist")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestFilterCustomQuery(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
c := &CockroachDb{}
|
||||
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable&x-custom=foobar", i.Host(), i.PortFor(26257))
|
||||
_, err := c.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
})
|
||||
}
|
||||
@@ -1 +0,0 @@
DROP TABLE IF EXISTS users;
@@ -1,5 +0,0 @@
CREATE TABLE users (
user_id INT UNIQUE,
name STRING(40),
email STRING(40)
);
@@ -1 +0,0 @@
ALTER TABLE users DROP COLUMN IF EXISTS city;
@@ -1 +0,0 @@
ALTER TABLE users ADD COLUMN city TEXT;
@@ -1 +0,0 @@
DROP INDEX IF EXISTS users_email_index;
@@ -1,3 +0,0 @@
CREATE UNIQUE INDEX IF NOT EXISTS users_email_index ON users (email);

-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.
@@ -1 +0,0 @@
DROP TABLE IF EXISTS books;
@@ -1,5 +0,0 @@
CREATE TABLE books (
user_id INT,
name STRING(40),
author STRING(40)
);
@@ -1 +0,0 @@
DROP TABLE IF EXISTS movies;
@@ -1,5 +0,0 @@
CREATE TABLE movies (
user_id INT,
name STRING(40),
director STRING(40)
);
@@ -1 +0,0 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.
@@ -1 +0,0 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.
@@ -1 +0,0 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.
@@ -1 +0,0 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.
6
vendor/github.com/mattes/migrate/database/driver.go
generated
vendored
@@ -32,8 +32,6 @@ var drivers = make(map[string]Driver)
// All other functions are tested by tests in database/testing.
// Saves you some time and makes sure all database drivers behave the same way.
// 5. Call Register in init().
// 6. Create a migrate/cli/build_<driver-name>.go file
// 7. Add driver name in 'DATABASE' variable in Makefile
//
// Guidelines:
// * Don't try to correct user input. Don't assume things.
@@ -73,7 +71,7 @@ type Driver interface {
// Dirty means, a previous migration failed and user interaction is required.
Version() (version int, dirty bool, err error)

// Drop deletes everything in the database.
// Drop deletes everyting in the database.
Drop() error
}

@@ -92,7 +90,7 @@ func Open(url string) (Driver, error) {
d, ok := drivers[u.Scheme]
driversMu.RUnlock()
if !ok {
return nil, fmt.Errorf("database driver: unknown driver %v (forgotten import?)", u.Scheme)
return nil, fmt.Errorf("database driver: unknown driver %v (forgotton import?)", u.Scheme)
}

return d.Open(url)
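The comment steps above (Register in init(), a cli build file, a Makefile entry) describe how a new database driver plugs into migrate. As a minimal sketch of that registration pattern, a hypothetical `foodb` package (purely for illustration, not part of this repository) might look like:

```go
package foodb // hypothetical example package, not part of this repository

import (
	"io"

	"github.com/mattes/migrate/database"
)

// FooDb is a stub that satisfies database.Driver; every method would need a
// real implementation before this driver could be used.
type FooDb struct{}

func init() {
	// Step 5 from the comments above: register the driver under its URL scheme.
	database.Register("foodb", &FooDb{})
}

func (f *FooDb) Open(url string) (database.Driver, error) { return f, nil }
func (f *FooDb) Close() error                             { return nil }
func (f *FooDb) Lock() error                              { return nil }
func (f *FooDb) Unlock() error                            { return nil }
func (f *FooDb) Run(migration io.Reader) error            { return nil }
func (f *FooDb) SetVersion(version int, dirty bool) error { return nil }
func (f *FooDb) Version() (int, bool, error)              { return database.NilVersion, false, nil }
func (f *FooDb) Drop() error                              { return nil }
```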
31
vendor/github.com/mattes/migrate/database/mysql/README.md
generated
vendored
@@ -1,4 +1,4 @@
# MySQL
# mysql

`mysql://user:password@tcp(host:port)/dbname?query`

@@ -15,35 +15,6 @@
| `x-tls-key` | | Key file location. |
| `x-tls-insecure-skip-verify` | | Whether or not to use SSL (true\|false) |

## Use with existing client

If you use the MySQL driver with existing database client, you must create the client with parameter `multiStatements=true`:

```go
package main

import (
"database/sql"

_ "github.com/go-sql-driver/mysql"
"github.com/mattes/migrate"
"github.com/mattes/migrate/database/mysql"
_ "github.com/mattes/migrate/source/file"
)

func main() {
db, _ := sql.Open("mysql", "user:password@tcp(host:port)/dbname?multiStatements=true")
driver, _ := mysql.WithInstance(db, &mysql.Config{})
m, _ := migrate.NewWithDatabaseInstance(
"file:///migrations",
"mysql",
driver,
)

m.Steps(2)
}
```

## Upgrading from v1

1. Write down the current migration version from schema_migrations
6
vendor/github.com/mattes/migrate/database/mysql/mysql.go
generated
vendored
@@ -156,8 +156,7 @@ func (m *Mysql) Lock() error {
return database.ErrLocked
}

aid, err := database.GenerateAdvisoryLockId(
fmt.Sprintf("%s:%s", m.config.DatabaseName, m.config.MigrationsTable))
aid, err := database.GenerateAdvisoryLockId(m.config.DatabaseName)
if err != nil {
return err
}
@@ -181,8 +180,7 @@ func (m *Mysql) Unlock() error {
return nil
}

aid, err := database.GenerateAdvisoryLockId(
fmt.Sprintf("%s:%s", m.config.DatabaseName, m.config.MigrationsTable))
aid, err := database.GenerateAdvisoryLockId(m.config.DatabaseName)
if err != nil {
return err
}
8
vendor/github.com/mattes/migrate/database/mysql/mysql_test.go
generated
vendored
@@ -14,10 +14,10 @@ import (
)

var versions = []mt.Version{
{Image: "mysql:8", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{Image: "mysql:5.7", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{Image: "mysql:5.6", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{Image: "mysql:5.5", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{"mysql:8", []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{"mysql:5.7", []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{"mysql:5.6", []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
{"mysql:5.5", []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
}

func isReady(i mt.Instance) bool {
4
vendor/github.com/mattes/migrate/database/postgres/postgres.go
generated
vendored
@@ -176,14 +176,14 @@ func (p *Postgres) SetVersion(version int, dirty bool) error {
}

query := `TRUNCATE "` + p.config.MigrationsTable + `"`
if _, err := tx.Exec(query); err != nil {
if _, err := p.db.Exec(query); err != nil {
tx.Rollback()
return &database.Error{OrigErr: err, Query: []byte(query)}
}

if version >= 0 {
query = `INSERT INTO "` + p.config.MigrationsTable + `" (version, dirty) VALUES ($1, $2)`
if _, err := tx.Exec(query, version, dirty); err != nil {
if _, err := p.db.Exec(query, version, dirty); err != nil {
tx.Rollback()
return &database.Error{OrigErr: err, Query: []byte(query)}
}
35
vendor/github.com/mattes/migrate/database/spanner/README.md
generated
vendored
@@ -1,35 +0,0 @@
# Google Cloud Spanner

## Usage

The DSN must be given in the following format.

`spanner://projects/{projectId}/instances/{instanceId}/databases/{databaseName}`

See [Google Spanner Documentation](https://cloud.google.com/spanner/docs) for details.

| Param | WithInstance Config | Description |
| ----- | ------------------- | ----------- |
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `url` | `DatabaseName` | The full path to the Spanner database resource. If provided as part of `Config` it must not contain a scheme or query string to match the format `projects/{projectId}/instances/{instanceId}/databases/{databaseName}`|
| `projectId` || The Google Cloud Platform project id
| `instanceId` || The id of the instance running Spanner
| `databaseName` || The name of the Spanner database

> **Note:** Google Cloud Spanner migrations can take a considerable amount of
> time. The migrations provided as part of the example take about 6 minutes to
> run on a small instance.
>
> ```log
> 1481574547/u create_users_table (21.354507597s)
> 1496539702/u add_city_to_users (41.647359754s)
> 1496601752/u add_index_on_user_emails (2m12.155787369s)
> 1496602638/u create_books_table (2m30.77299181s)

## Testing

To unit test the `spanner` driver, `SPANNER_DATABASE` needs to be set. You'll
need to sign-up to Google Cloud Platform (GCP) and have a running Spanner
instance since it is not possible to run Google Spanner outside GCP.
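For reference, a minimal usage sketch matching the DSN format documented in the README above (the project, instance, and database ids are placeholders, and the top-level `migrate.New` constructor and `migrate.ErrNoChange` are assumed from the migrate package):

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/spanner"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// DSN format: spanner://projects/{projectId}/instances/{instanceId}/databases/{databaseName}
	m, err := migrate.New(
		"file:///migrations",
		"spanner://projects/my-project/instances/my-instance/databases/my-db",
	)
	if err != nil {
		log.Fatal(err)
	}
	// Apply all pending migrations; ErrNoChange just means there was nothing to do.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```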
@@ -1 +0,0 @@
DROP TABLE Users
@@ -1,5 +0,0 @@
CREATE TABLE Users (
UserId INT64,
Name STRING(40),
Email STRING(83)
) PRIMARY KEY(UserId)
@@ -1 +0,0 @@
ALTER TABLE Users DROP COLUMN city
@@ -1 +0,0 @@
ALTER TABLE Users ADD COLUMN city STRING(100)
@@ -1 +0,0 @@
DROP INDEX UsersEmailIndex
@@ -1 +0,0 @@
CREATE UNIQUE INDEX UsersEmailIndex ON Users (Email)
@@ -1 +0,0 @@
DROP TABLE Books
@@ -1,6 +0,0 @@
CREATE TABLE Books (
UserId INT64,
Name STRING(40),
Author STRING(40)
) PRIMARY KEY(UserId, Name),
INTERLEAVE IN PARENT Users ON DELETE CASCADE
294
vendor/github.com/mattes/migrate/database/spanner/spanner.go
generated
vendored
@@ -1,294 +0,0 @@
|
||||
package spanner
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
nurl "net/url"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
|
||||
"cloud.google.com/go/spanner"
|
||||
sdb "cloud.google.com/go/spanner/admin/database/apiv1"
|
||||
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
|
||||
"google.golang.org/api/iterator"
|
||||
adminpb "google.golang.org/genproto/googleapis/spanner/admin/database/v1"
|
||||
)
|
||||
|
||||
func init() {
|
||||
db := Spanner{}
|
||||
database.Register("spanner", &db)
|
||||
}
|
||||
|
||||
// DefaultMigrationsTable is used if no custom table is specified
|
||||
const DefaultMigrationsTable = "SchemaMigrations"
|
||||
|
||||
// Driver errors
|
||||
var (
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoDatabaseName = fmt.Errorf("no database name")
|
||||
ErrNoSchema = fmt.Errorf("no schema")
|
||||
ErrDatabaseDirty = fmt.Errorf("database is dirty")
|
||||
)
|
||||
|
||||
// Config used for a Spanner instance
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
DatabaseName string
|
||||
}
|
||||
|
||||
// Spanner implements database.Driver for Google Cloud Spanner
|
||||
type Spanner struct {
|
||||
db *DB
|
||||
|
||||
config *Config
|
||||
}
|
||||
|
||||
type DB struct {
|
||||
admin *sdb.DatabaseAdminClient
|
||||
data *spanner.Client
|
||||
}
|
||||
|
||||
// WithInstance implements database.Driver
|
||||
func WithInstance(instance *DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if len(config.DatabaseName) == 0 {
|
||||
return nil, ErrNoDatabaseName
|
||||
}
|
||||
|
||||
if len(config.MigrationsTable) == 0 {
|
||||
config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
sx := &Spanner{
|
||||
db: instance,
|
||||
config: config,
|
||||
}
|
||||
|
||||
if err := sx.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return sx, nil
|
||||
}
|
||||
|
||||
// Open implements database.Driver
|
||||
func (s *Spanner) Open(url string) (database.Driver, error) {
|
||||
purl, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
adminClient, err := sdb.NewDatabaseAdminClient(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
dbname := strings.Replace(migrate.FilterCustomQuery(purl).String(), "spanner://", "", 1)
|
||||
dataClient, err := spanner.NewClient(ctx, dbname)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
migrationsTable := purl.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
db := &DB{admin: adminClient, data: dataClient}
|
||||
return WithInstance(db, &Config{
|
||||
DatabaseName: dbname,
|
||||
MigrationsTable: migrationsTable,
|
||||
})
|
||||
}
|
||||
|
||||
// Close implements database.Driver
|
||||
func (s *Spanner) Close() error {
|
||||
s.db.data.Close()
|
||||
return s.db.admin.Close()
|
||||
}
|
||||
|
||||
// Lock implements database.Driver but doesn't do anything because Spanner only
|
||||
// enqueues the UpdateDatabaseDdlRequest.
|
||||
func (s *Spanner) Lock() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Unlock implements database.Driver but no action required, see Lock.
|
||||
func (s *Spanner) Unlock() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Run implements database.Driver
|
||||
func (s *Spanner) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// run migration
|
||||
stmts := migrationStatements(migr)
|
||||
ctx := context.Background()
|
||||
|
||||
op, err := s.db.admin.UpdateDatabaseDdl(ctx, &adminpb.UpdateDatabaseDdlRequest{
|
||||
Database: s.config.DatabaseName,
|
||||
Statements: stmts,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
if err := op.Wait(ctx); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetVersion implements database.Driver
|
||||
func (s *Spanner) SetVersion(version int, dirty bool) error {
|
||||
ctx := context.Background()
|
||||
|
||||
_, err := s.db.data.ReadWriteTransaction(ctx,
|
||||
func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
|
||||
m := []*spanner.Mutation{
|
||||
spanner.Delete(s.config.MigrationsTable, spanner.AllKeys()),
|
||||
spanner.Insert(s.config.MigrationsTable,
|
||||
[]string{"Version", "Dirty"},
|
||||
[]interface{}{version, dirty},
|
||||
)}
|
||||
return txn.BufferWrite(m)
|
||||
})
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Version implements database.Driver
|
||||
func (s *Spanner) Version() (version int, dirty bool, err error) {
|
||||
ctx := context.Background()
|
||||
|
||||
stmt := spanner.Statement{
|
||||
SQL: `SELECT Version, Dirty FROM ` + s.config.MigrationsTable + ` LIMIT 1`,
|
||||
}
|
||||
iter := s.db.data.Single().Query(ctx, stmt)
|
||||
defer iter.Stop()
|
||||
|
||||
row, err := iter.Next()
|
||||
switch err {
|
||||
case iterator.Done:
|
||||
return database.NilVersion, false, nil
|
||||
case nil:
|
||||
var v int64
|
||||
if err = row.Columns(&v, &dirty); err != nil {
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(stmt.SQL)}
|
||||
}
|
||||
version = int(v)
|
||||
default:
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(stmt.SQL)}
|
||||
}
|
||||
|
||||
return version, dirty, nil
|
||||
}
|
||||
|
||||
// Drop implements database.Driver. Retrieves the database schema first and
|
||||
// creates statements to drop the indexes and tables accordingly.
|
||||
// Note: The drop statements are created in reverse order to how they're
|
||||
// provided in the schema. Assuming the schema describes how the database can
|
||||
// be "build up", it seems logical to "unbuild" the database simply by going the
|
||||
// opposite direction. More testing
|
||||
func (s *Spanner) Drop() error {
|
||||
ctx := context.Background()
|
||||
res, err := s.db.admin.GetDatabaseDdl(ctx, &adminpb.GetDatabaseDdlRequest{
|
||||
Database: s.config.DatabaseName,
|
||||
})
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "drop failed"}
|
||||
}
|
||||
if len(res.Statements) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
r := regexp.MustCompile(`(CREATE TABLE\s(\S+)\s)|(CREATE.+INDEX\s(\S+)\s)`)
|
||||
stmts := make([]string, 0)
|
||||
for i := len(res.Statements) - 1; i >= 0; i-- {
|
||||
s := res.Statements[i]
|
||||
m := r.FindSubmatch([]byte(s))
|
||||
|
||||
if len(m) == 0 {
|
||||
continue
|
||||
} else if tbl := m[2]; len(tbl) > 0 {
|
||||
stmts = append(stmts, fmt.Sprintf(`DROP TABLE %s`, tbl))
|
||||
} else if idx := m[4]; len(idx) > 0 {
|
||||
stmts = append(stmts, fmt.Sprintf(`DROP INDEX %s`, idx))
|
||||
}
|
||||
}
|
||||
|
||||
op, err := s.db.admin.UpdateDatabaseDdl(ctx, &adminpb.UpdateDatabaseDdlRequest{
|
||||
Database: s.config.DatabaseName,
|
||||
Statements: stmts,
|
||||
})
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(strings.Join(stmts, "; "))}
|
||||
}
|
||||
if err := op.Wait(ctx); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(strings.Join(stmts, "; "))}
|
||||
}
|
||||
|
||||
if err := s.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *Spanner) ensureVersionTable() error {
|
||||
ctx := context.Background()
|
||||
tbl := s.config.MigrationsTable
|
||||
iter := s.db.data.Single().Read(ctx, tbl, spanner.AllKeys(), []string{"Version"})
|
||||
if err := iter.Do(func(r *spanner.Row) error { return nil }); err == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
stmt := fmt.Sprintf(`CREATE TABLE %s (
|
||||
Version INT64 NOT NULL,
|
||||
Dirty BOOL NOT NULL
|
||||
) PRIMARY KEY(Version)`, tbl)
|
||||
|
||||
op, err := s.db.admin.UpdateDatabaseDdl(ctx, &adminpb.UpdateDatabaseDdlRequest{
|
||||
Database: s.config.DatabaseName,
|
||||
Statements: []string{stmt},
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(stmt)}
|
||||
}
|
||||
if err := op.Wait(ctx); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(stmt)}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func migrationStatements(migration []byte) []string {
|
||||
regex := regexp.MustCompile(";$")
|
||||
migrationString := string(migration[:])
|
||||
migrationString = strings.TrimSpace(migrationString)
|
||||
migrationString = regex.ReplaceAllString(migrationString, "")
|
||||
|
||||
statements := strings.Split(migrationString, ";")
|
||||
return statements
|
||||
}
|
||||
28
vendor/github.com/mattes/migrate/database/spanner/spanner_test.go
generated
vendored
@@ -1,28 +0,0 @@
package spanner

import (
"fmt"
"os"
"testing"

dt "github.com/mattes/migrate/database/testing"
)

func Test(t *testing.T) {
if testing.Short() {
t.Skip("skipping test in short mode.")
}

db, ok := os.LookupEnv("SPANNER_DATABASE")
if !ok {
t.Skip("SPANNER_DATABASE not set, skipping test.")
}

s := &Spanner{}
addr := fmt.Sprintf("spanner://%v", db)
d, err := s.Open(addr)
if err != nil {
t.Fatalf("%v", err)
}
dt.Test(t, d, []byte("SELECT 1"))
}
1
vendor/github.com/mattes/migrate/database/sqlite3/migration/33_create_table.down.sql
generated
vendored
@@ -1 +0,0 @@
DROP TABLE IF EXISTS pets;
3
vendor/github.com/mattes/migrate/database/sqlite3/migration/33_create_table.up.sql
generated
vendored
@@ -1,3 +0,0 @@
CREATE TABLE pets (
name string
);
1
vendor/github.com/mattes/migrate/database/sqlite3/migration/44_alter_table.down.sql
generated
vendored
@@ -1 +0,0 @@
DROP TABLE IF EXISTS pets;
1
vendor/github.com/mattes/migrate/database/sqlite3/migration/44_alter_table.up.sql
generated
vendored
@@ -1 +0,0 @@
ALTER TABLE pets ADD predator bool;
214
vendor/github.com/mattes/migrate/database/sqlite3/sqlite3.go
generated
vendored
@@ -1,214 +0,0 @@
|
||||
package sqlite3
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
_ "github.com/mattn/go-sqlite3"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func init() {
|
||||
database.Register("sqlite3", &Sqlite{})
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
var (
|
||||
ErrDatabaseDirty = fmt.Errorf("database is dirty")
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoDatabaseName = fmt.Errorf("no database name")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
DatabaseName string
|
||||
}
|
||||
|
||||
type Sqlite struct {
|
||||
db *sql.DB
|
||||
isLocked bool
|
||||
|
||||
config *Config
|
||||
}
|
||||
|
||||
func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if err := instance.Ping(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(config.MigrationsTable) == 0 {
|
||||
config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
mx := &Sqlite{
|
||||
db: instance,
|
||||
config: config,
|
||||
}
|
||||
if err := mx.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return mx, nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) ensureVersionTable() error {
|
||||
|
||||
query := fmt.Sprintf(`
|
||||
CREATE TABLE IF NOT EXISTS %s (version uint64,dirty bool);
|
||||
CREATE UNIQUE INDEX IF NOT EXISTS version_unique ON %s (version);
|
||||
`, DefaultMigrationsTable, DefaultMigrationsTable)
|
||||
|
||||
if _, err := m.db.Exec(query); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) Open(url string) (database.Driver, error) {
|
||||
purl, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
dbfile := strings.Replace(migrate.FilterCustomQuery(purl).String(), "sqlite3://", "", 1)
|
||||
db, err := sql.Open("sqlite3", dbfile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
migrationsTable := purl.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
mx, err := WithInstance(db, &Config{
|
||||
DatabaseName: purl.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return mx, nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) Close() error {
|
||||
return m.db.Close()
|
||||
}
|
||||
|
||||
func (m *Sqlite) Drop() error {
|
||||
query := `SELECT name FROM sqlite_master WHERE type = 'table';`
|
||||
tables, err := m.db.Query(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
defer tables.Close()
|
||||
tableNames := make([]string, 0)
|
||||
for tables.Next() {
|
||||
var tableName string
|
||||
if err := tables.Scan(&tableName); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(tableName) > 0 {
|
||||
tableNames = append(tableNames, tableName)
|
||||
}
|
||||
}
|
||||
if len(tableNames) > 0 {
|
||||
for _, t := range tableNames {
|
||||
query := "DROP TABLE " + t
|
||||
err = m.executeQuery(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
if err := m.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
query := "VACUUM"
|
||||
_, err = m.db.Query(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) Lock() error {
|
||||
if m.isLocked {
|
||||
return database.ErrLocked
|
||||
}
|
||||
m.isLocked = true
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) Unlock() error {
|
||||
if !m.isLocked {
|
||||
return nil
|
||||
}
|
||||
m.isLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
query := string(migr[:])
|
||||
|
||||
return m.executeQuery(query)
|
||||
}
|
||||
|
||||
func (m *Sqlite) executeQuery(query string) error {
|
||||
tx, err := m.db.Begin()
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction start failed"}
|
||||
}
|
||||
if _, err := tx.Exec(query); err != nil {
|
||||
tx.Rollback()
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if err := tx.Commit(); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction commit failed"}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) SetVersion(version int, dirty bool) error {
|
||||
tx, err := m.db.Begin()
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction start failed"}
|
||||
}
|
||||
|
||||
query := "DELETE FROM " + m.config.MigrationsTable
|
||||
if _, err := tx.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if version >= 0 {
|
||||
query := fmt.Sprintf(`INSERT INTO %s (version, dirty) VALUES (%d, '%t')`, m.config.MigrationsTable, version, dirty)
|
||||
if _, err := tx.Exec(query); err != nil {
|
||||
tx.Rollback()
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
|
||||
if err := tx.Commit(); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction commit failed"}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Sqlite) Version() (version int, dirty bool, err error) {
|
||||
query := "SELECT version, dirty FROM " + m.config.MigrationsTable + " LIMIT 1"
|
||||
err = m.db.QueryRow(query).Scan(&version, &dirty)
|
||||
if err != nil {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
return version, dirty, nil
|
||||
}
|
||||
61
vendor/github.com/mattes/migrate/database/sqlite3/sqlite3_test.go
generated
vendored
@@ -1,61 +0,0 @@
|
||||
package sqlite3
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"github.com/mattes/migrate"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
_ "github.com/mattes/migrate/source/file"
|
||||
_ "github.com/mattn/go-sqlite3"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func Test(t *testing.T) {
|
||||
dir, err := ioutil.TempDir("", "sqlite3-driver-test")
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
defer func() {
|
||||
os.RemoveAll(dir)
|
||||
}()
|
||||
fmt.Printf("DB path : %s\n", filepath.Join(dir, "sqlite3.db"))
|
||||
p := &Sqlite{}
|
||||
addr := fmt.Sprintf("sqlite3://%s", filepath.Join(dir, "sqlite3.db"))
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
|
||||
db, err := sql.Open("sqlite3", filepath.Join(dir, "sqlite3.db"))
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
defer func() {
|
||||
if err := db.Close(); err != nil {
|
||||
return
|
||||
}
|
||||
}()
|
||||
dt.Test(t, d, []byte("CREATE TABLE t (Qty int, Name string);"))
|
||||
driver, err := WithInstance(db, &Config{})
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
if err := d.Drop(); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
m, err := migrate.NewWithDatabaseInstance(
|
||||
"file://./migration",
|
||||
"ql", driver)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fmt.Println("UP")
|
||||
err = m.Up()
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
4
vendor/github.com/mattes/migrate/migrate.go
generated
vendored
@@ -61,7 +61,7 @@ type Migrate struct {

// GracefulStop accepts `true` and will stop executing migrations
// as soon as possible at a safe break point, so that the database
// is not corrupted.
// is not corrpupted.
GracefulStop chan bool
isGracefulStop bool

@@ -300,7 +300,7 @@ func (m *Migrate) Down() error {
return m.unlockErr(m.runMigrations(ret))
}

// Drop deletes everything in the database.
// Drop deletes everyting in the database.
func (m *Migrate) Drop() error {
if err := m.lock(); err != nil {
return err
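The GracefulStop comment above describes a channel the caller can signal so that migrate stops at a safe break point instead of mid-migration. A hedged sketch of how a caller might wire that up (the source path and connection string are placeholders, and the top-level `migrate.New` constructor is assumed):

```go
package main

import (
	"log"
	"os"
	"os/signal"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	m, err := migrate.New("file:///migrations",
		"postgres://localhost:5432/dbname?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// On interrupt, ask migrate to stop at a safe break point rather than mid-migration.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	go func() {
		<-sig
		m.GracefulStop <- true
	}()

	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```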
2
vendor/github.com/mattes/migrate/source/driver.go
generated
vendored
@@ -87,7 +87,7 @@ func Open(url string) (Driver, error) {
d, ok := drivers[u.Scheme]
driversMu.RUnlock()
if !ok {
return nil, fmt.Errorf("source driver: unknown driver %v (forgotten import?)", u.Scheme)
return nil, fmt.Errorf("source driver: unknown driver %v (forgotton import?)", u.Scheme)
}

return d.Open(url)
5
vendor/github.com/mattes/migrate/source/go-bindata/README.md
generated
vendored
@@ -24,9 +24,8 @@ func main() {
func(name string) ([]byte, error) {
return migrations.Asset(name)
})

d, err := bindata.WithInstance(s)
m, err := migrate.NewWithSourceInstance("go-bindata", d, "database://foobar")

m, err := migrate.NewWithSourceInstance("go-bindata", s, "database://foobar")
m.Up() // run your migrations and handle the errors above of course
}
```
42
vendor/github.com/mattes/migrate/testing/docker.go
generated
vendored
@@ -3,7 +3,7 @@ package testing
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"context" // TODO: is issue with go < 1.7?
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
@@ -12,28 +12,24 @@ import (
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
dockertypes "github.com/docker/docker/api/types"
|
||||
dockercontainer "github.com/docker/docker/api/types/container"
|
||||
dockernetwork "github.com/docker/docker/api/types/network"
|
||||
dockerclient "github.com/docker/docker/client"
|
||||
)
|
||||
|
||||
func NewDockerContainer(t testing.TB, image string, env []string, cmd []string) (*DockerContainer, error) {
|
||||
func NewDockerContainer(t testing.TB, image string, env []string) (*DockerContainer, error) {
|
||||
c, err := dockerclient.NewEnvClient()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if cmd == nil {
|
||||
cmd = make([]string, 0)
|
||||
}
|
||||
|
||||
contr := &DockerContainer{
|
||||
t: t,
|
||||
client: c,
|
||||
ImageName: image,
|
||||
ENV: env,
|
||||
Cmd: cmd,
|
||||
}
|
||||
|
||||
if err := contr.PullImage(); err != nil {
|
||||
@@ -53,7 +49,6 @@ type DockerContainer struct {
|
||||
client *dockerclient.Client
|
||||
ImageName string
|
||||
ENV []string
|
||||
Cmd []string
|
||||
ContainerId string
|
||||
ContainerName string
|
||||
ContainerJSON dockertypes.ContainerJSON
|
||||
@@ -92,7 +87,6 @@ func (d *DockerContainer) Start() error {
|
||||
Image: d.ImageName,
|
||||
Labels: map[string]string{"migrate_test": "true"},
|
||||
Env: d.ENV,
|
||||
Cmd: d.Cmd,
|
||||
},
|
||||
&dockercontainer.HostConfig{
|
||||
PublishAllPorts: true,
|
||||
@@ -166,7 +160,7 @@ func (d *DockerContainer) Logs() (io.ReadCloser, error) {
|
||||
})
|
||||
}
|
||||
|
||||
func (d *DockerContainer) portMapping(selectFirst bool, cPort int) (containerPort uint, hostIP string, hostPort uint, err error) {
|
||||
func (d *DockerContainer) firstPortMapping() (containerPort uint, hostIP string, hostPort uint, err error) {
|
||||
if !d.containerInspected {
|
||||
if err := d.Inspect(); err != nil {
|
||||
d.t.Fatal(err)
|
||||
@@ -174,10 +168,6 @@ func (d *DockerContainer) portMapping(selectFirst bool, cPort int) (containerPor
|
||||
}
|
||||
|
||||
for port, bindings := range d.ContainerJSON.NetworkSettings.Ports {
|
||||
if !selectFirst && port.Int() != cPort {
|
||||
// Skip ahead until we find the port we want
|
||||
continue
|
||||
}
|
||||
for _, binding := range bindings {
|
||||
|
||||
hostPortUint, err := strconv.ParseUint(binding.HostPort, 10, 64)
|
||||
@@ -188,16 +178,11 @@ func (d *DockerContainer) portMapping(selectFirst bool, cPort int) (containerPor
|
||||
return uint(port.Int()), binding.HostIP, uint(hostPortUint), nil
|
||||
}
|
||||
}
|
||||
|
||||
if selectFirst {
|
||||
return 0, "", 0, fmt.Errorf("no port binding")
|
||||
} else {
|
||||
return 0, "", 0, fmt.Errorf("specified port not bound")
|
||||
}
|
||||
return 0, "", 0, fmt.Errorf("no port binding")
|
||||
}
|
||||
|
||||
func (d *DockerContainer) Host() string {
|
||||
_, hostIP, _, err := d.portMapping(true, -1)
|
||||
_, hostIP, _, err := d.firstPortMapping()
|
||||
if err != nil {
|
||||
d.t.Fatal(err)
|
||||
}
|
||||
@@ -210,26 +195,13 @@ func (d *DockerContainer) Host() string {
|
||||
}
|
||||
|
||||
func (d *DockerContainer) Port() uint {
|
||||
_, _, port, err := d.portMapping(true, -1)
|
||||
_, _, port, err := d.firstPortMapping()
|
||||
if err != nil {
|
||||
d.t.Fatal(err)
|
||||
}
|
||||
return port
|
||||
}
|
||||
|
||||
func (d *DockerContainer) PortFor(cPort int) uint {
|
||||
_, _, port, err := d.portMapping(false, cPort)
|
||||
if err != nil {
|
||||
d.t.Fatal(err)
|
||||
}
|
||||
return port
|
||||
}
|
||||
|
||||
func (d *DockerContainer) NetworkSettings() dockertypes.NetworkSettings {
|
||||
netSettings := d.ContainerJSON.NetworkSettings
|
||||
return *netSettings
|
||||
}
|
||||
|
||||
type dockerImagePullOutput struct {
|
||||
Status string `json:"status"`
|
||||
ProgressDetails struct {
|
||||
|
||||
11
vendor/github.com/mattes/migrate/testing/testing.go
generated
vendored
@@ -6,8 +6,6 @@ import (
"strconv"
"testing"
"time"

dockertypes "github.com/docker/docker/api/types"
)

type IsReadyFunc func(Instance) bool
@@ -17,7 +15,6 @@ type TestFunc func(*testing.T, Instance)
type Version struct {
Image string
ENV []string
Cmd []string
}

func ParallelTest(t *testing.T, versions []Version, readyFn IsReadyFunc, testFn TestFunc) {
@@ -39,7 +36,7 @@ func ParallelTest(t *testing.T, versions []Version, readyFn IsReadyFunc, testFn
t.Parallel()

// create new container
container, err := NewDockerContainer(t, version.Image, version.ENV, version.Cmd)
container, err := NewDockerContainer(t, version.Image, version.ENV)
if err != nil {
t.Fatalf("%v\n%s", err, containerLogs(t, container))
}
@@ -49,8 +46,8 @@ func ParallelTest(t *testing.T, versions []Version, readyFn IsReadyFunc, testFn

// wait until database is ready
tick := time.Tick(1000 * time.Millisecond)
timeout := time.After(time.Duration(delay + 60) * time.Second)
outer:
timeout := time.After(time.Duration(delay+60) * time.Second)
outer:
for {
select {
case <-tick:
@@ -90,7 +87,5 @@ func containerLogs(t *testing.T, c *DockerContainer) []byte {
type Instance interface {
Host() string
Port() uint
PortFor(int) uint
NetworkSettings() dockertypes.NetworkSettings
KeepForDebugging()
}