Merge pull request #341 from fnproject/dep-update

Finally rid of capital Sirupsen??
Reed Allman authored on 2017-09-19 10:48:14 -07:00; committed by GitHub.
1127 changed files with 41199 additions and 41383 deletions

.gitignore (vendored): 1 change

@@ -28,3 +28,4 @@ fn/func.yaml
tmp/
fnlb/fnlb
/fn
.DS_Store

glide.lock (generated): 78 changes

@@ -1,16 +1,16 @@
hash: 60c3aa7f40235c70cfcc42335f9795bd1af326dc46f817aabbe7098fdd9f91a1
updated: 2017-09-05T11:30:26.153448972-07:00
hash: fb250d2ab2c11e79e99b1ca52628db958a9b578ff74f3f3d18a741fc1bf4832b
updated: 2017-09-18T23:22:20.345095297-07:00
imports:
- name: github.com/amir/raidman
version: 1ccc43bfb9c93cb401a4025e49c64ba71e5e668b
- name: github.com/apache/thrift
version: 9235bec082127e84bf1b0353a0764c9060aca6d2
version: 4c30c15924bfbc7c9e6bfc0e82630e97980e556e
subpackages:
- lib/go/thrift
- name: github.com/asaskevich/govalidator
version: 73945b6115bfbbcc57d89b7316e28109364124e1
version: 15028e809df8c71964e8efa6c11e81d5c0262302
- name: github.com/Azure/go-ansiterm
version: fa152c58bc15761d0200cb75fe958b89a9d4888e
version: 19f72df4d05d31cbe1c56bfc8045c96babff6c7e
subpackages:
- winterm
- name: github.com/beorn7/perks
@@ -18,15 +18,15 @@ imports:
subpackages:
- quantile
- name: github.com/boltdb/bolt
version: 2f1ce7a837dcb8da3ec595b1dac9d0632f0f99e8
version: fa5367d20c994db73282594be0146ab221657943
- name: github.com/cactus/go-statsd-client
version: ce77ca9ecdee1c3ffd097e32f9bb832825ccb203
subpackages:
- statsd
- name: github.com/cenkalti/backoff
version: 61153c768f31ee5f130071d08fc82b85208528de
version: 80e08cb804a3eb3e576876c777e957e874609a9a
- name: github.com/cloudflare/cfssl
version: 42549e19d448b683fa35bcce1aea3bf193ee8037
version: 7d88da830aad9d533c2fb8532da23f6a75331b52
subpackages:
- api
- auth
@@ -44,11 +44,11 @@ imports:
- signer
- signer/local
- name: github.com/coreos/etcd
version: 589a7a19ac469afa687ab1f7487dd5d4c2a6ee6a
version: 5bb9f9591f01d0a3c61d2eb3a3bb281726005b2b
subpackages:
- raft/raftpb
- name: github.com/coreos/go-semver
version: 1817cd4bea52af76542157eeabd74b057d1a199e
version: 8ab6407b697782a06568d4b7f1db25550ec2e4c6
subpackages:
- semver
- name: github.com/davecgh/go-spew
@@ -58,7 +58,7 @@ imports:
- name: github.com/dchest/siphash
version: 4ebf1de738443ea7f45f02dc394c4df1942a126d
- name: github.com/dghubble/go-twitter
version: f74be7f0f20b142558537ca43852457f7c52e051
version: c4115fa44a928413e0b857e0eb47376ffde3a61a
subpackages:
- twitter
- name: github.com/dghubble/oauth1
@@ -68,7 +68,7 @@ imports:
- name: github.com/dgrijalva/jwt-go
version: a539ee1a749a2b895533f979515ac7e6e0f5b650
- name: github.com/docker/cli
version: f5a192bcc4c2794e44eb9dd7d91c2be95c5c6342
version: 139fcd3ee95f37f3ac17b1200fb0a63908cb6781
subpackages:
- cli/config/configfile
- name: github.com/docker/distribution
@@ -132,7 +132,7 @@ imports:
subpackages:
- store
- name: github.com/docker/libnetwork
version: ba46b928444931e6865d8618dc03622cac79aa6f
version: 6d098467ec58038b68620a3c2c418936661efa64
subpackages:
- datastore
- discoverapi
@@ -140,7 +140,7 @@ imports:
- name: github.com/docker/libtrust
version: aabc10ec26b754e797f9028f4589c5b7bd90dc20
- name: github.com/docker/swarmkit
version: 0554c9bc9a485025e89b8e5c2c1f0d75961906a2
version: bd7bafb8a61de1f5f23c8215ce7b9ecbcb30ff21
subpackages:
- api
- api/deepcopy
@@ -167,12 +167,8 @@ imports:
version: bb955e01b9346ac19dc29eb16586c90ded99a98c
- name: github.com/eapache/queue
version: 44cc805cf13205b55f69e14bcb69867d1ae92f98
- name: github.com/fsnotify/fsnotify
version: 4da3e2cfbabc9f751898f250b49f2439785783a1
- name: github.com/fsouza/go-dockerclient
version: 75772940379e725b5aae213e570f9dcd751951cb
- name: github.com/fnproject/fn_go
version: e046aa4ca1f1028a04fc51395297ff07515cb0b6
version: 418dcd8e37593d86604e89a48d7ee2e109a1d3bf
subpackages:
- client
- client/apps
@@ -180,15 +176,19 @@ imports:
- client/operations
- client/routes
- models
- name: github.com/fsnotify/fsnotify
version: 4da3e2cfbabc9f751898f250b49f2439785783a1
- name: github.com/fsouza/go-dockerclient
version: 98edf3edfae6a6500fecc69d2bcccf1302544004
- name: github.com/garyburd/redigo
version: b925df3cc15d8646e9b5b333ebaf3011385aba11
version: 70e1b1943d4fc9c56791abaa6f4d1e727b9ab925
subpackages:
- internal
- redis
- name: github.com/gin-contrib/sse
version: 22d885f9ecc78bf4ee5d72b937e4bbcdc58e8cae
- name: github.com/gin-gonic/gin
version: 848fa41ca016fa3a3d385af710c4219c1cb477a4
version: 5afc5b19730118c9b8324fe9dd995d44ec65c81a
subpackages:
- binding
- json
@@ -208,7 +208,7 @@ imports:
subpackages:
- fmts
- name: github.com/go-openapi/runtime
version: bf2ff8f7150788b1c7256abb0805ba0410cbbabb
version: d6605b7c17ac3b1033ca794886e6142a4141f5b0
subpackages:
- client
- name: github.com/go-openapi/spec
@@ -220,7 +220,7 @@ imports:
- name: github.com/go-openapi/validate
version: 8a82927c942c94794a5cd8b8b50ce2f48a955c0c
- name: github.com/go-sql-driver/mysql
version: 26471af196a17ee75a22e6481b5a5897fb16b081
version: be22b3051b3467094e9908389d900e74f72cf08f
- name: github.com/gogo/protobuf
version: 100ba4e885062801d56799d78530b73b178a78f3
subpackages:
@@ -267,7 +267,7 @@ imports:
subpackages:
- simplelru
- name: github.com/hashicorp/hcl
version: 8f6b1344a92ff8877cf24a5de9177bf7d0a2a187
version: 68e816d1c783414e79bc65b3994d9ab6b0a722ab
subpackages:
- hcl/ast
- hcl/parser
@@ -278,7 +278,7 @@ imports:
- json/scanner
- json/token
- name: github.com/iron-io/iron_go3
version: 830335d420db87fc84cbff7f0d1348a46b499946
version: ded317cb147d3b52b593da08495bc7d53efa17d8
subpackages:
- api
- config
@@ -290,17 +290,17 @@ imports:
subpackages:
- reflectx
- name: github.com/json-iterator/go
version: 8c7fc7584a2a4dad472a39a85889dabb3091dfb1
version: fdfe0b9a69118ff692d6e1005e9de7e0cffb7d6b
- name: github.com/kr/logfmt
version: b84e30acd515aadc4b783ad4ff83aff3299bdfe0
- name: github.com/lib/pq
version: e42267488fe361b9dc034be7a6bffef5b195bceb
version: 23da1db4f16d9658a86ae9b717c245fc078f10f1
subpackages:
- oid
- name: github.com/magiconair/properties
version: 8d7837e64d3c1ee4e54a880c5a920ab4316fc90a
- name: github.com/mailru/easyjson
version: 2a92e673c9a6302dd05c3a691ae1f24aef46457d
version: 2f5df55504ebc322e4d52d34df6a1f5b503bf26d
subpackages:
- buffer
- jlexer
@@ -322,7 +322,7 @@ imports:
- name: github.com/opencontainers/go-digest
version: 279bed98673dd5bef374d3b6e4b09e2af76183bf
- name: github.com/opencontainers/image-spec
version: 7653c236dd968a4f18c94d588591d98dea106323
version: ebd93fd0782379ca3d821f0fa74f0651a9347a3e
subpackages:
- specs-go
- specs-go/v1
@@ -339,7 +339,7 @@ imports:
- ext
- log
- name: github.com/openzipkin/zipkin-go-opentracing
version: 37e942825de0f846d15acc3bc9d027c9134a9b25
version: 9c88fa03bfdfaa5fec7cd1b40f3d10ec15c15fc6
subpackages:
- flag
- thrift/gen-go/scribe
@@ -357,7 +357,7 @@ imports:
subpackages:
- xxHash32
- name: github.com/pkg/errors
version: c605e284fe17294bda444b34710735b29d1a9d90
version: 2b3a18b5f0fb6b4f9190549597d3f962c02bc5eb
- name: github.com/prometheus/client_golang
version: c5b7fccd204277076155f10851dad72b76a49317
subpackages:
@@ -367,7 +367,7 @@ imports:
subpackages:
- go
- name: github.com/prometheus/common
version: 49fee292b27bfff7f354ee0f64e1bc4850462edf
version: 2f17f4a9d485bf34b4bfaccc273805040e4f86c8
subpackages:
- expfmt
- internal/bitbucket.org/ww/goautoneg
@@ -377,15 +377,13 @@ imports:
subpackages:
- xfs
- name: github.com/PuerkitoBio/purell
version: f619812e3caf603a8df60a7ec6f2654b703189ef
version: 7cf257f0a33260797b0febf39f95fccd86aab2a3
- name: github.com/PuerkitoBio/urlesc
version: de5bf2ad457846296e2031421a34e2568e304e35
- name: github.com/rcrowley/go-metrics
version: 1f30fe9094a513ce4c700b9a54458bbb0c96996c
- name: github.com/Shopify/sarama
version: 15174039fd207656a0f97f52bc78ec7793deeada
- name: github.com/Sirupsen/logrus
version: 89742aefa4b206dcf400792f3bd35b542998eb3b
version: 4704a3a8c95920361c47e9a2adec13c3d757c757
- name: github.com/sirupsen/logrus
version: 89742aefa4b206dcf400792f3bd35b542998eb3b
subpackages:
@@ -403,11 +401,11 @@ imports:
- name: github.com/spf13/viper
version: 25b30aa063fc18e48662b86996252eabdcf2f0c7
- name: github.com/ugorji/go
version: 8c0409fcbb70099c748d71f714529204975f6c3f
version: 54210f4e076c57f351166f0ed60e67d3fca57a36
subpackages:
- codec
- name: golang.org/x/crypto
version: 81e90905daefcd6fd217b62423c0908922eadb30
version: 7d9177d70076375b9a59c8fde23d52d9c4a7ecd5
subpackages:
- bcrypt
- blowfish
@@ -416,7 +414,7 @@ imports:
- pkcs12/internal/rc2
- ssh/terminal
- name: golang.org/x/net
version: c8c74377599bd978aee1cf3b9b63a8634051cec2
version: 66aacef3dd8a676686c7ae3716979581e8b03c47
subpackages:
- context
- context/ctxhttp
@@ -427,7 +425,7 @@ imports:
- lex/httplex
- trace
- name: golang.org/x/sys
version: 7ddbeae9ae08c6a06a59597f0c9edbc5ff2444ce
version: 07c182904dbd53199946ba614a412c61d3c548f5
subpackages:
- unix
- windows


@@ -22,7 +22,7 @@ import:
- package: github.com/dghubble/oauth1
- package: github.com/dgrijalva/jwt-go
- package: github.com/docker/cli
version: f5a192bcc4c2794e44eb9dd7d91c2be95c5c6342
version: 139fcd3ee95f37f3ac17b1200fb0a63908cb6781
subpackages:
- cli/config/configfile
- package: github.com/docker/distribution
@@ -67,7 +67,9 @@ import:
- package: github.com/opencontainers/runc
version: ae2948042b08ad3d6d13cd09f40a50ffff4fc688
- package: github.com/Azure/go-ansiterm
version: fa152c58bc15761d0200cb75fe958b89a9d4888e
version: 19f72df4d05d31cbe1c56bfc8045c96babff6c7e
- package: github.com/prometheus/common
version: 2f17f4a9d485bf34b4bfaccc273805040e4f86c8
testImport:
- package: github.com/patrickmn/go-cache
branch: master


@@ -5,7 +5,7 @@ import (
"io/ioutil"
"os"
"github.com/Sirupsen/logrus"
"github.com/sirupsen/logrus"
)
var logger *logrus.Logger


@@ -9,7 +9,7 @@ import (
"strconv"
"github.com/Azure/go-ansiterm"
"github.com/Sirupsen/logrus"
"github.com/sirupsen/logrus"
)
var logger *logrus.Logger
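The two hunks above are the heart of this PR: vendored code now imports the lowercase github.com/sirupsen/logrus path instead of the capitalized Sirupsen one. A minimal sketch of a caller against the renamed import (the field name and message are illustrative, not taken from this repository):

package main

import (
	// canonical lowercase import path; the hunks above drop the old
	// capitalized github.com/Sirupsen/logrus spelling from vendored code
	"github.com/sirupsen/logrus"
)

func main() {
	logger := logrus.New()
	// field and message are placeholders for illustration only
	logger.WithField("component", "example").Info("using the lowercase logrus import")
}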


@@ -299,7 +299,7 @@ func sortQuery(u *url.URL) {
if len(q) > 0 {
arKeys := make([]string, len(q))
i := 0
for k := range q {
for k, _ := range q {
arKeys[i] = k
i++
}
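The fragment above is purell's query-sorting normalizer; the change is purely a `for k := range q` versus `for k, _ := range q` spelling of the same loop. A self-contained sketch of the normalization idea, using a hypothetical helper name rather than purell's API and the same input as the SortQuery test case further down:

package main

import (
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// sortQueryString rebuilds a raw query with keys (and repeated values) in
// sorted order, roughly what purell's FlagSortQuery normalization does.
func sortQueryString(raw string) string {
	q, err := url.ParseQuery(raw)
	if err != nil {
		return raw
	}
	keys := make([]string, 0, len(q))
	for k := range q { // the value is unused, so the index-only form suffices
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var parts []string
	for _, k := range keys {
		vals := q[k]
		sort.Strings(vals)
		for _, v := range vals {
			parts = append(parts, k+"="+url.QueryEscape(v))
		}
	}
	return strings.Join(parts, "&")
}

func main() {
	fmt.Println(sortQueryString("b=4&a=1&c=3&b=2&a=5")) // a=1&a=5&b=2&b=4&c=3
}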


@@ -16,672 +16,672 @@ type testCase struct {
var (
cases = [...]*testCase{
{
&testCase{
"LowerScheme",
"HTTP://www.SRC.ca",
FlagLowercaseScheme,
"http://www.SRC.ca",
false,
},
{
&testCase{
"LowerScheme2",
"http://www.SRC.ca",
FlagLowercaseScheme,
"http://www.SRC.ca",
false,
},
{
&testCase{
"LowerHost",
"HTTP://www.SRC.ca/",
FlagLowercaseHost,
"http://www.src.ca/", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"UpperEscapes",
`http://www.whatever.com/Some%aa%20Special%8Ecases/`,
FlagUppercaseEscapes,
"http://www.whatever.com/Some%AA%20Special%8Ecases/",
false,
},
{
&testCase{
"UnnecessaryEscapes",
`http://www.toto.com/%41%42%2E%44/%32%33%52%2D/%5f%7E`,
FlagDecodeUnnecessaryEscapes,
"http://www.toto.com/AB.D/23R-/_~",
false,
},
{
&testCase{
"RemoveDefaultPort",
"HTTP://www.SRC.ca:80/",
FlagRemoveDefaultPort,
"http://www.SRC.ca/", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveDefaultPort2",
"HTTP://www.SRC.ca:80",
FlagRemoveDefaultPort,
"http://www.SRC.ca", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveDefaultPort3",
"HTTP://www.SRC.ca:8080",
FlagRemoveDefaultPort,
"http://www.SRC.ca:8080", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"Safe",
"HTTP://www.SRC.ca:80/to%1ato%8b%ee/OKnow%41%42%43%7e",
FlagsSafe,
"http://www.src.ca/to%1Ato%8B%EE/OKnowABC~",
false,
},
{
&testCase{
"BothLower",
"HTTP://www.SRC.ca:80/to%1ato%8b%ee/OKnow%41%42%43%7e",
FlagLowercaseHost | FlagLowercaseScheme,
"http://www.src.ca:80/to%1Ato%8B%EE/OKnowABC~",
false,
},
{
&testCase{
"RemoveTrailingSlash",
"HTTP://www.SRC.ca:80/",
FlagRemoveTrailingSlash,
"http://www.SRC.ca:80", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveTrailingSlash2",
"HTTP://www.SRC.ca:80/toto/titi/",
FlagRemoveTrailingSlash,
"http://www.SRC.ca:80/toto/titi", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveTrailingSlash3",
"HTTP://www.SRC.ca:80/toto/titi/fin/?a=1",
FlagRemoveTrailingSlash,
"http://www.SRC.ca:80/toto/titi/fin?a=1", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"AddTrailingSlash",
"HTTP://www.SRC.ca:80",
FlagAddTrailingSlash,
"http://www.SRC.ca:80/", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"AddTrailingSlash2",
"HTTP://www.SRC.ca:80/toto/titi.html",
FlagAddTrailingSlash,
"http://www.SRC.ca:80/toto/titi.html/", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"AddTrailingSlash3",
"HTTP://www.SRC.ca:80/toto/titi/fin?a=1",
FlagAddTrailingSlash,
"http://www.SRC.ca:80/toto/titi/fin/?a=1", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveDotSegments",
"HTTP://root/a/b/./../../c/",
FlagRemoveDotSegments,
"http://root/c/", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveDotSegments2",
"HTTP://root/../a/b/./../c/../d",
FlagRemoveDotSegments,
"http://root/a/d", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"UsuallySafe",
"HTTP://www.SRC.ca:80/to%1ato%8b%ee/./c/d/../OKnow%41%42%43%7e/?a=b#test",
FlagsUsuallySafeGreedy,
"http://www.src.ca/to%1Ato%8B%EE/c/OKnowABC~?a=b#test",
false,
},
{
&testCase{
"RemoveDirectoryIndex",
"HTTP://root/a/b/c/default.aspx",
FlagRemoveDirectoryIndex,
"http://root/a/b/c/", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveDirectoryIndex2",
"HTTP://root/a/b/c/default#a=b",
FlagRemoveDirectoryIndex,
"http://root/a/b/c/default#a=b", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"RemoveFragment",
"HTTP://root/a/b/c/default#toto=tata",
FlagRemoveFragment,
"http://root/a/b/c/default", // Since Go1.1, scheme is automatically lowercased
false,
},
{
&testCase{
"ForceHTTP",
"https://root/a/b/c/default#toto=tata",
FlagForceHTTP,
"http://root/a/b/c/default#toto=tata",
false,
},
{
&testCase{
"RemoveDuplicateSlashes",
"https://root/a//b///c////default#toto=tata",
FlagRemoveDuplicateSlashes,
"https://root/a/b/c/default#toto=tata",
false,
},
{
&testCase{
"RemoveDuplicateSlashes2",
"https://root//a//b///c////default#toto=tata",
FlagRemoveDuplicateSlashes,
"https://root/a/b/c/default#toto=tata",
false,
},
{
&testCase{
"RemoveWWW",
"https://www.root/a/b/c/",
FlagRemoveWWW,
"https://root/a/b/c/",
false,
},
{
&testCase{
"RemoveWWW2",
"https://WwW.Root/a/b/c/",
FlagRemoveWWW,
"https://Root/a/b/c/",
false,
},
{
&testCase{
"AddWWW",
"https://Root/a/b/c/",
FlagAddWWW,
"https://www.Root/a/b/c/",
false,
},
{
&testCase{
"SortQuery",
"http://root/toto/?b=4&a=1&c=3&b=2&a=5",
FlagSortQuery,
"http://root/toto/?a=1&a=5&b=2&b=4&c=3",
false,
},
{
&testCase{
"RemoveEmptyQuerySeparator",
"http://root/toto/?",
FlagRemoveEmptyQuerySeparator,
"http://root/toto/",
false,
},
{
&testCase{
"Unsafe",
"HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid",
FlagsUnsafeGreedy,
"http://root.com/toto/tE%1F/a/c?a=4&w=1&w=2&z=3",
false,
},
{
&testCase{
"Safe2",
"HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid",
FlagsSafe,
"https://www.root.com/toto/tE%1F///a/./b/../c/?z=3&w=2&a=4&w=1#invalid",
false,
},
{
&testCase{
"UsuallySafe2",
"HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid",
FlagsUsuallySafeGreedy,
"https://www.root.com/toto/tE%1F///a/c?z=3&w=2&a=4&w=1#invalid",
false,
},
{
&testCase{
"AddTrailingSlashBug",
"http://src.ca/",
FlagsAllNonGreedy,
"http://www.src.ca/",
false,
},
{
&testCase{
"SourceModified",
"HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid",
FlagsUnsafeGreedy,
"http://root.com/toto/tE%1F/a/c?a=4&w=1&w=2&z=3",
true,
},
{
&testCase{
"IPv6-1",
"http://[2001:db8:1f70::999:de8:7648:6e8]/test",
FlagsSafe | FlagRemoveDotSegments,
"http://[2001:db8:1f70::999:de8:7648:6e8]/test",
false,
},
{
&testCase{
"IPv6-2",
"http://[::ffff:192.168.1.1]/test",
FlagsSafe | FlagRemoveDotSegments,
"http://[::ffff:192.168.1.1]/test",
false,
},
{
&testCase{
"IPv6-3",
"http://[::ffff:192.168.1.1]:80/test",
FlagsSafe | FlagRemoveDotSegments,
"http://[::ffff:192.168.1.1]/test",
false,
},
{
&testCase{
"IPv6-4",
"htTps://[::fFff:192.168.1.1]:443/test",
FlagsSafe | FlagRemoveDotSegments,
"https://[::ffff:192.168.1.1]/test",
false,
},
{
&testCase{
"FTP",
"ftp://user:pass@ftp.foo.net/foo/bar",
FlagsSafe | FlagRemoveDotSegments,
"ftp://user:pass@ftp.foo.net/foo/bar",
false,
},
{
&testCase{
"Standard-1",
"http://www.foo.com:80/foo",
FlagsSafe | FlagRemoveDotSegments,
"http://www.foo.com/foo",
false,
},
{
&testCase{
"Standard-2",
"http://www.foo.com:8000/foo",
FlagsSafe | FlagRemoveDotSegments,
"http://www.foo.com:8000/foo",
false,
},
{
&testCase{
"Standard-3",
"http://www.foo.com/%7ebar",
FlagsSafe | FlagRemoveDotSegments,
"http://www.foo.com/~bar",
false,
},
{
&testCase{
"Standard-4",
"http://www.foo.com/%7Ebar",
FlagsSafe | FlagRemoveDotSegments,
"http://www.foo.com/~bar",
false,
},
{
&testCase{
"Standard-5",
"http://USER:pass@www.Example.COM/foo/bar",
FlagsSafe | FlagRemoveDotSegments,
"http://USER:pass@www.example.com/foo/bar",
false,
},
{
&testCase{
"Standard-6",
"http://test.example/?a=%26&b=1",
FlagsSafe | FlagRemoveDotSegments,
"http://test.example/?a=%26&b=1",
false,
},
{
&testCase{
"Standard-7",
"http://test.example/%25/?p=%20val%20%25",
FlagsSafe | FlagRemoveDotSegments,
"http://test.example/%25/?p=%20val%20%25",
false,
},
{
&testCase{
"Standard-8",
"http://test.example/path/with a%20space+/",
FlagsSafe | FlagRemoveDotSegments,
"http://test.example/path/with%20a%20space+/",
false,
},
{
&testCase{
"Standard-9",
"http://test.example/?",
FlagsSafe | FlagRemoveDotSegments,
"http://test.example/",
false,
},
{
&testCase{
"Standard-10",
"http://a.COM/path/?b&a",
FlagsSafe | FlagRemoveDotSegments,
"http://a.com/path/?b&a",
false,
},
{
&testCase{
"StandardCasesAddTrailingSlash",
"http://test.example?",
FlagsSafe | FlagAddTrailingSlash,
"http://test.example/",
false,
},
{
&testCase{
"OctalIP-1",
"http://0123.011.0.4/",
FlagsSafe | FlagDecodeOctalHost,
"http://0123.011.0.4/",
false,
},
{
&testCase{
"OctalIP-2",
"http://0102.0146.07.0223/",
FlagsSafe | FlagDecodeOctalHost,
"http://66.102.7.147/",
false,
},
{
&testCase{
"OctalIP-3",
"http://0102.0146.07.0223.:23/",
FlagsSafe | FlagDecodeOctalHost,
"http://66.102.7.147.:23/",
false,
},
{
&testCase{
"OctalIP-4",
"http://USER:pass@0102.0146.07.0223../",
FlagsSafe | FlagDecodeOctalHost,
"http://USER:pass@66.102.7.147../",
false,
},
{
&testCase{
"DWORDIP-1",
"http://123.1113982867/",
FlagsSafe | FlagDecodeDWORDHost,
"http://123.1113982867/",
false,
},
{
&testCase{
"DWORDIP-2",
"http://1113982867/",
FlagsSafe | FlagDecodeDWORDHost,
"http://66.102.7.147/",
false,
},
{
&testCase{
"DWORDIP-3",
"http://1113982867.:23/",
FlagsSafe | FlagDecodeDWORDHost,
"http://66.102.7.147.:23/",
false,
},
{
&testCase{
"DWORDIP-4",
"http://USER:pass@1113982867../",
FlagsSafe | FlagDecodeDWORDHost,
"http://USER:pass@66.102.7.147../",
false,
},
{
&testCase{
"HexIP-1",
"http://0x123.1113982867/",
FlagsSafe | FlagDecodeHexHost,
"http://0x123.1113982867/",
false,
},
{
&testCase{
"HexIP-2",
"http://0x42660793/",
FlagsSafe | FlagDecodeHexHost,
"http://66.102.7.147/",
false,
},
{
&testCase{
"HexIP-3",
"http://0x42660793.:23/",
FlagsSafe | FlagDecodeHexHost,
"http://66.102.7.147.:23/",
false,
},
{
&testCase{
"HexIP-4",
"http://USER:pass@0x42660793../",
FlagsSafe | FlagDecodeHexHost,
"http://USER:pass@66.102.7.147../",
false,
},
{
&testCase{
"UnnecessaryHostDots-1",
"http://.www.foo.com../foo/bar.html",
FlagsSafe | FlagRemoveUnnecessaryHostDots,
"http://www.foo.com/foo/bar.html",
false,
},
{
&testCase{
"UnnecessaryHostDots-2",
"http://www.foo.com./foo/bar.html",
FlagsSafe | FlagRemoveUnnecessaryHostDots,
"http://www.foo.com/foo/bar.html",
false,
},
{
&testCase{
"UnnecessaryHostDots-3",
"http://www.foo.com.:81/foo",
FlagsSafe | FlagRemoveUnnecessaryHostDots,
"http://www.foo.com:81/foo",
false,
},
{
&testCase{
"UnnecessaryHostDots-4",
"http://www.example.com./",
FlagsSafe | FlagRemoveUnnecessaryHostDots,
"http://www.example.com/",
false,
},
{
&testCase{
"EmptyPort-1",
"http://www.thedraymin.co.uk:/main/?p=308",
FlagsSafe | FlagRemoveEmptyPortSeparator,
"http://www.thedraymin.co.uk/main/?p=308",
false,
},
{
&testCase{
"EmptyPort-2",
"http://www.src.ca:",
FlagsSafe | FlagRemoveEmptyPortSeparator,
"http://www.src.ca",
false,
},
{
&testCase{
"Slashes-1",
"http://test.example/foo/bar/.",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/bar/",
false,
},
{
&testCase{
"Slashes-2",
"http://test.example/foo/bar/./",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/bar/",
false,
},
{
&testCase{
"Slashes-3",
"http://test.example/foo/bar/..",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/",
false,
},
{
&testCase{
"Slashes-4",
"http://test.example/foo/bar/../",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/",
false,
},
{
&testCase{
"Slashes-5",
"http://test.example/foo/bar/../baz",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/baz",
false,
},
{
&testCase{
"Slashes-6",
"http://test.example/foo/bar/../..",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/",
false,
},
{
&testCase{
"Slashes-7",
"http://test.example/foo/bar/../../",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/",
false,
},
{
&testCase{
"Slashes-8",
"http://test.example/foo/bar/../../baz",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/baz",
false,
},
{
&testCase{
"Slashes-9",
"http://test.example/foo/bar/../../../baz",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/baz",
false,
},
{
&testCase{
"Slashes-10",
"http://test.example/foo/bar/../../../../baz",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/baz",
false,
},
{
&testCase{
"Slashes-11",
"http://test.example/./foo",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo",
false,
},
{
&testCase{
"Slashes-12",
"http://test.example/../foo",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo",
false,
},
{
&testCase{
"Slashes-13",
"http://test.example/foo.",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo.",
false,
},
{
&testCase{
"Slashes-14",
"http://test.example/.foo",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/.foo",
false,
},
{
&testCase{
"Slashes-15",
"http://test.example/foo..",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo..",
false,
},
{
&testCase{
"Slashes-16",
"http://test.example/..foo",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/..foo",
false,
},
{
&testCase{
"Slashes-17",
"http://test.example/./../foo",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo",
false,
},
{
&testCase{
"Slashes-18",
"http://test.example/./foo/.",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/",
false,
},
{
&testCase{
"Slashes-19",
"http://test.example/foo/./bar",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/bar",
false,
},
{
&testCase{
"Slashes-20",
"http://test.example/foo/../bar",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/bar",
false,
},
{
&testCase{
"Slashes-21",
"http://test.example/foo//",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/",
false,
},
{
&testCase{
"Slashes-22",
"http://test.example/foo///bar//",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"http://test.example/foo/bar/",
false,
},
{
&testCase{
"Relative",
"foo/bar",
FlagsAllGreedy,
"foo/bar",
false,
},
{
&testCase{
"Relative-1",
"./../foo",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"foo",
false,
},
{
&testCase{
"Relative-2",
"./foo/bar/../baz/../bang/..",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"foo/",
false,
},
{
&testCase{
"Relative-3",
"foo///bar//",
FlagsSafe | FlagRemoveDotSegments | FlagRemoveDuplicateSlashes,
"foo/bar/",
false,
},
{
&testCase{
"Relative-4",
"www.youtube.com",
FlagsUsuallySafeGreedy,


@@ -612,6 +612,9 @@ func (client *client) backgroundMetadataUpdater() {
if specificTopics, err := client.Topics(); err != nil {
Logger.Println("Client background metadata topic load:", err)
break
} else if len(specificTopics) == 0 {
Logger.Println("Client background metadata update: no specific topics to update")
break
} else {
topics = specificTopics
}


@@ -196,11 +196,23 @@ type Config struct {
// Equivalent to the JVM's `fetch.wait.max.ms`.
MaxWaitTime time.Duration
// The maximum amount of time the consumer expects a message takes to process
// for the user. If writing to the Messages channel takes longer than this,
// that partition will stop fetching more messages until it can proceed again.
// The maximum amount of time the consumer expects a message takes to
// process for the user. If writing to the Messages channel takes longer
// than this, that partition will stop fetching more messages until it
// can proceed again.
// Note that, since the Messages channel is buffered, the actual grace time is
// (MaxProcessingTime * ChannelBufferSize). Defaults to 100ms.
// If a message is not written to the Messages channel between two ticks
// of the expiryTicker then a timeout is detected.
// Using a ticker instead of a timer to detect timeouts should typically
// result in many fewer calls to Timer functions which may result in a
// significant performance improvement if many messages are being sent
// and timeouts are infrequent.
// The disadvantage of using a ticker instead of a timer is that
// timeouts will be less accurate. That is, the effective timeout could
// be between `MaxProcessingTime` and `2 * MaxProcessingTime`. For
// example, if `MaxProcessingTime` is 100ms then a delay of 180ms
// between two messages being sent may not be recognized as a timeout.
MaxProcessingTime time.Duration
// Return specifies what channels will be populated. If they are set to true,
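The expanded comment above explains that MaxProcessingTime now drives a ticker-based stall check, and that with a buffered Messages channel the effective grace period is roughly MaxProcessingTime * ChannelBufferSize. A hedged sketch of setting these knobs on a consumer config; the broker address and the particular values are placeholders, not taken from this PR:

package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// If writing to the Messages channel stalls longer than this, the
	// partition stops fetching until the consumer catches up; with a
	// buffered channel the grace period scales with ChannelBufferSize.
	cfg.Consumer.MaxProcessingTime = 250 * time.Millisecond
	cfg.ChannelBufferSize = 32 // illustrative value

	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()
}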


@@ -440,35 +440,37 @@ func (child *partitionConsumer) HighWaterMarkOffset() int64 {
func (child *partitionConsumer) responseFeeder() {
var msgs []*ConsumerMessage
expiryTimer := time.NewTimer(child.conf.Consumer.MaxProcessingTime)
expireTimedOut := false
msgSent := false
feederLoop:
for response := range child.feeder {
msgs, child.responseResult = child.parseResponse(response)
expiryTicker := time.NewTicker(child.conf.Consumer.MaxProcessingTime)
for i, msg := range msgs {
if !expiryTimer.Stop() && !expireTimedOut {
// expiryTimer was expired; clear out the waiting msg
<-expiryTimer.C
}
expiryTimer.Reset(child.conf.Consumer.MaxProcessingTime)
expireTimedOut = false
messageSelect:
select {
case child.messages <- msg:
case <-expiryTimer.C:
expireTimedOut = true
child.responseResult = errTimedOut
child.broker.acks.Done()
for _, msg = range msgs[i:] {
child.messages <- msg
msgSent = true
case <-expiryTicker.C:
if !msgSent {
child.responseResult = errTimedOut
child.broker.acks.Done()
for _, msg = range msgs[i:] {
child.messages <- msg
}
child.broker.input <- child
continue feederLoop
} else {
// current message has not been sent, return to select
// statement
msgSent = false
goto messageSelect
}
child.broker.input <- child
continue feederLoop
}
}
expiryTicker.Stop()
child.broker.acks.Done()
}
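The rewritten responseFeeder above replaces a per-message timer reset with a single ticker plus a msgSent flag. A standalone sketch of that pattern with sarama's types stripped out; the function and variable names are mine, not the library's:

package main

import (
	"fmt"
	"time"
)

// feed pushes msgs to out, allowing the receiver at most one full tick of
// maxProcessing between accepted messages; it returns false on a stall.
func feed(msgs []string, out chan<- string, maxProcessing time.Duration) bool {
	ticker := time.NewTicker(maxProcessing)
	defer ticker.Stop()

	sent := false
	for _, m := range msgs {
	retry:
		select {
		case out <- m:
			sent = true
		case <-ticker.C:
			if !sent {
				return false // nothing accepted during a whole tick: timed out
			}
			sent = false // something was sent this tick; retry the current message
			goto retry
		}
	}
	return true
}

func main() {
	out := make(chan string)
	go func() {
		for range out {
			time.Sleep(time.Millisecond) // simulate message processing
		}
	}()
	ok := feed([]string{"a", "b", "c"}, out, 50*time.Millisecond)
	fmt.Println("delivered without stalling:", ok)
	close(out)
}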


@@ -803,6 +803,48 @@ func TestConsumerOffsetOutOfRange(t *testing.T) {
broker0.Close()
}
func TestConsumerExpiryTicker(t *testing.T) {
// Given
broker0 := NewMockBroker(t, 0)
fetchResponse1 := &FetchResponse{}
for i := 1; i <= 8; i++ {
fetchResponse1.AddMessage("my_topic", 0, nil, testMsg, int64(i))
}
broker0.SetHandlerByMap(map[string]MockResponse{
"MetadataRequest": NewMockMetadataResponse(t).
SetBroker(broker0.Addr(), broker0.BrokerID()).
SetLeader("my_topic", 0, broker0.BrokerID()),
"OffsetRequest": NewMockOffsetResponse(t).
SetOffset("my_topic", 0, OffsetNewest, 1234).
SetOffset("my_topic", 0, OffsetOldest, 1),
"FetchRequest": NewMockSequence(fetchResponse1),
})
config := NewConfig()
config.ChannelBufferSize = 0
config.Consumer.MaxProcessingTime = 10 * time.Millisecond
master, err := NewConsumer([]string{broker0.Addr()}, config)
if err != nil {
t.Fatal(err)
}
// When
consumer, err := master.ConsumePartition("my_topic", 0, 1)
if err != nil {
t.Fatal(err)
}
// Then: messages with offsets 1 through 8 are read
for i := 1; i <= 8; i++ {
assertMessageOffset(t, <-consumer.Messages(), int64(i))
time.Sleep(2 * time.Millisecond)
}
safeClose(t, consumer)
safeClose(t, master)
broker0.Close()
}
func assertMessageOffset(t *testing.T, msg *ConsumerMessage, expectedOffset int64) {
if msg.Offset != expectedOffset {
t.Errorf("Incorrect message offset: expected=%d, actual=%d", expectedOffset, msg.Offset)


@@ -2,6 +2,7 @@ package sarama
import (
"runtime"
"strings"
"testing"
"time"
)
@@ -106,7 +107,7 @@ func TestMessageEncoding(t *testing.T) {
message.Value = []byte{}
message.Codec = CompressionGZIP
if runtime.Version() == "go1.8" {
if runtime.Version() == "go1.8" || strings.HasPrefix(runtime.Version(), "go1.8.") {
testEncodable(t, "empty gzip", &message, emptyGzipMessage18)
} else {
testEncodable(t, "empty gzip", &message, emptyGzipMessage)


@@ -151,6 +151,13 @@ type PartitionOffsetManager interface {
// message twice, and your processing should ideally be idempotent.
MarkOffset(offset int64, metadata string)
// ResetOffset resets to the provided offset, alongside a metadata string that
// represents the state of the partition consumer at that point in time. Reset
// acts as a counterpart to MarkOffset, the difference being that it allows to
// reset an offset to an earlier or smaller value, where MarkOffset only
// allows incrementing the offset. cf MarkOffset for more details.
ResetOffset(offset int64, metadata string)
// Errors returns a read channel of errors that occur during offset management, if
// enabled. By default, errors are logged and not returned over this channel. If
// you want to implement any custom error handling, set your config's
@@ -329,6 +336,17 @@ func (pom *partitionOffsetManager) MarkOffset(offset int64, metadata string) {
}
}
func (pom *partitionOffsetManager) ResetOffset(offset int64, metadata string) {
pom.lock.Lock()
defer pom.lock.Unlock()
if offset < pom.offset {
pom.offset = offset
pom.metadata = metadata
pom.dirty = true
}
}
func (pom *partitionOffsetManager) updateCommitted(offset int64, metadata string) {
pom.lock.Lock()
defer pom.lock.Unlock()
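ResetOffset, added above, is MarkOffset's counterpart for moving a partition's committed offset backwards. A hedged usage sketch against the vendored sarama API; the broker address, group, topic, offset, and metadata string are all placeholders:

package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	client, err := sarama.NewClient([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	om, err := sarama.NewOffsetManagerFromClient("example-group", client)
	if err != nil {
		log.Fatal(err)
	}
	defer om.Close()

	pom, err := om.ManagePartition("example-topic", 0)
	if err != nil {
		log.Fatal(err)
	}
	defer pom.Close()

	// MarkOffset only ever moves forward; ResetOffset rewinds so earlier
	// messages can be replayed, here back to offset 42.
	pom.ResetOffset(42, "replay-from-42")
}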


@@ -204,6 +204,70 @@ func TestPartitionOffsetManagerNextOffset(t *testing.T) {
safeClose(t, testClient)
}
func TestPartitionOffsetManagerResetOffset(t *testing.T) {
om, testClient, broker, coordinator := initOffsetManager(t)
pom := initPartitionOffsetManager(t, om, coordinator, 5, "original_meta")
ocResponse := new(OffsetCommitResponse)
ocResponse.AddError("my_topic", 0, ErrNoError)
coordinator.Returns(ocResponse)
expected := int64(1)
pom.ResetOffset(expected, "modified_meta")
actual, meta := pom.NextOffset()
if actual != expected {
t.Errorf("Expected offset %v. Actual: %v", expected, actual)
}
if meta != "modified_meta" {
t.Errorf("Expected metadata \"modified_meta\". Actual: %q", meta)
}
safeClose(t, pom)
safeClose(t, om)
safeClose(t, testClient)
broker.Close()
coordinator.Close()
}
func TestPartitionOffsetManagerResetOffsetWithRetention(t *testing.T) {
om, testClient, broker, coordinator := initOffsetManager(t)
testClient.Config().Consumer.Offsets.Retention = time.Hour
pom := initPartitionOffsetManager(t, om, coordinator, 5, "original_meta")
ocResponse := new(OffsetCommitResponse)
ocResponse.AddError("my_topic", 0, ErrNoError)
handler := func(req *request) (res encoder) {
if req.body.version() != 2 {
t.Errorf("Expected to be using version 2. Actual: %v", req.body.version())
}
offsetCommitRequest := req.body.(*OffsetCommitRequest)
if offsetCommitRequest.RetentionTime != (60 * 60 * 1000) {
t.Errorf("Expected an hour retention time. Actual: %v", offsetCommitRequest.RetentionTime)
}
return ocResponse
}
coordinator.setHandler(handler)
expected := int64(1)
pom.ResetOffset(expected, "modified_meta")
actual, meta := pom.NextOffset()
if actual != expected {
t.Errorf("Expected offset %v. Actual: %v", expected, actual)
}
if meta != "modified_meta" {
t.Errorf("Expected metadata \"modified_meta\". Actual: %q", meta)
}
safeClose(t, pom)
safeClose(t, om)
safeClose(t, testClient)
broker.Close()
coordinator.Close()
}
func TestPartitionOffsetManagerMarkOffset(t *testing.T) {
om, testClient, broker, coordinator := initOffsetManager(t)
pom := initPartitionOffsetManager(t, om, coordinator, 5, "original_meta")


@@ -42,9 +42,14 @@ env:
- BUILD_LIBS="CPP C_GLIB HASKELL JAVA PYTHON TESTING TUTORIALS" # only meaningful for CMake builds
matrix:
- TEST_NAME="Cross Language Tests (Binary, Header, JSON Protocols)"
- TEST_NAME="Cross Language Tests (Binary Protocol)"
SCRIPT="cross-test.sh"
BUILD_ARG="-'(binary|header|json)'"
BUILD_ARG="-'(binary)'"
BUILD_ENV="-e CC=clang -e CXX=clang++ -e THRIFT_CROSSTEST_CONCURRENCY=4"
- TEST_NAME="Cross Language Tests (Header, JSON Protocols)"
SCRIPT="cross-test.sh"
BUILD_ARG="-'(header|json)'"
BUILD_ENV="-e CC=clang -e CXX=clang++ -e THRIFT_CROSSTEST_CONCURRENCY=4"
- TEST_NAME="Cross Language Tests (Compact and Multiplexed Protocols)"
@@ -54,20 +59,22 @@ env:
# Autotools builds
# TODO: Remove them once migrated to CMake
- TEST_NAME="Autotools (CentOS 7.3)"
DISTRO=centos-7.3
SCRIPT="autotools.sh"
BUILD_ENV="-e CC=gcc -e CXX=g++"
BUILD_ARG="--without-cpp --without-csharp --without-c_glib --without-d -without-dart --without-erlang --without-go --without-haskell --without-haxe"
# centos-7.3 build jobs appear to be unstable/hang...
# TEST_NAME="Autotools (CentOS 7.3)"
# DISTRO=centos-7.3
# SCRIPT="autotools.sh"
# BUILD_ENV="-e CC=gcc -e CXX=g++"
# BUILD_ARG="--without-cpp --without-csharp --without-c_glib --without-d -without-dart --without-erlang --without-go --without-haskell --without-haxe"
- TEST_NAME="Autotools (Ubuntu Xenial)"
SCRIPT="autotools.sh"
BUILD_ENV="-e CC=gcc -e CXX=g++"
BUILD_ARG="--enable-plugin --without-java --without-lua --without-nodejs --without-perl --without-php --without-php_extension --without-python --without-py3 --without-ruby --without-rust"
BUILD_ARG="--enable-plugin" # --without-java --without-lua --without-nodejs --without-perl --without-php --without-php_extension --without-python --without-py3 --without-ruby --without-rust"
# CMake builds
- TEST_NAME="CMake (CentOS 7.3)"
DISTRO=centos-7.3
# centos-7.3 build jobs appear to be unstable/hang...
# TEST_NAME="CMake (CentOS 7.3)"
# DISTRO=centos-7.3
- TEST_NAME="CMake (Ubuntu Xenial)"
@@ -76,7 +83,7 @@ env:
BUILD_LIBS="CPP TESTING TUTORIALS"
BUILD_ARG="-DWITH_BOOSTTHREADS=ON -DWITH_PYTHON=OFF -DWITH_C_GLIB=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
- TEST_NAME="C++ Plugin (Std Thread)"
- TEST_NAME="C++ (Std Thread) and Plugin"
BUILD_LIBS="CPP TESTING TUTORIALS"
BUILD_ARG="-DWITH_PLUGIN=ON -DWITH_STDTHREADS=ON -DWITH_PYTHON=OFF -DWITH_C_GLIB=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
@@ -89,11 +96,15 @@ env:
# C and C++ undefined behavior. This wraps autotools.sh, but each binary crashes if
# undefined behavior occurs. Skips the known flaky tests.
# Unstable: THRIFT-4064 needs to be fixed perhaps?
- TEST_NAME="UBSan"
SCRIPT="ubsan.sh"
BUILD_ARG="--without-haskell --without-nodejs --without-perl --without-python"
UNSTABLE=true
matrix:
allow_failures:
- env: UNSTABLE=true
include:
# QA jobs for code analytics and metrics
#


@@ -40,7 +40,7 @@ environment:
LIBEVENT_VERSION: 2.0.22
QT_VERSION: 5.6
ZLIB_VERSION: 1.2.8
DISABLED_TESTS: StressTestNonBlocking|concurrency_test
DISABLED_TESTS: StressTestNonBlocking
- PROFILE: MSVC2015
PLATFORM: x64


@@ -10,11 +10,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Apache Thrift Docker build environment for Centos
# Apache Thrift Docker build environment for CentOS
#
# Known missing client libraries:
# - dotnet (will update to 2.0.0 separately)
# - haxe (not in debian stretch)
# - haxe (not in centos)
FROM centos:7.3.1611
MAINTAINER Apache Thrift <dev@thrift.apache.org>
@@ -33,12 +33,14 @@ RUN yum install -y \
flex \
gcc \
gcc-c++ \
gdb \
git \
libtool \
m4 \
make \
tar \
unzip \
valgrind \
wget && \
ln -s /usr/bin/cmake3 /usr/bin/cmake && \
ln -s /usr/bin/cpack3 /usr/bin/cpack && \


@@ -56,6 +56,7 @@ RUN apt-get install -y --no-install-recommends \
gdb \
ninja-build \
pkg-config \
valgrind \
vim


@@ -60,6 +60,7 @@ RUN apt-get install -y --no-install-recommends \
llvm \
ninja-build \
pkg-config \
valgrind \
vim
ENV PATH /usr/lib/llvm-3.8/bin:$PATH
@@ -140,7 +141,8 @@ RUN apt-get install -y --no-install-recommends \
neko-dev \
libneko0
RUN haxelib setup --always /usr/share/haxe/lib && \
haxelib install --always hxcpp
haxelib install --always hxcpp 3.4.64
# note: hxcpp 3.4.185 (latest) no longer ships static libraries, and caused a build failure
RUN apt-get install -y --no-install-recommends \
`# Java dependencies` \


@@ -2820,6 +2820,8 @@ void t_csharp_generator::generate_csharp_property(ofstream& out,
}
if (ttype->is_base_type()) {
use_nullable = ((t_base_type*)ttype)->get_base() != t_base_type::TYPE_STRING;
} else if (ttype->is_enum()) {
use_nullable = true;
}
}
indent(out) << "return " << fieldPrefix + tfield->get_name() << ";" << endl;


@@ -2023,13 +2023,13 @@ void t_delphi_generator::generate_service_client(t_service* tservice) {
indent_impl(s_service_impl) << "var" << endl;
indent_up_impl();
indent_impl(s_service_impl) << argsvar << " : " << args_intfnm << ";" << endl;
indent_impl(s_service_impl) << msgvar << " : Thrift.Protocol.IMessage;" << endl;
indent_impl(s_service_impl) << msgvar << " : Thrift.Protocol.TThriftMessage;" << endl;
indent_down_impl();
indent_impl(s_service_impl) << "begin" << endl;
indent_up_impl();
indent_impl(s_service_impl) << "seqid_ := seqid_ + 1;" << endl;
indent_impl(s_service_impl) << msgvar << " := Thrift.Protocol.TMessageImpl.Create('" << funname
indent_impl(s_service_impl) << "Thrift.Protocol.Init( " << msgvar << ", '" << funname
<< "', " << ((*f_iter)->is_oneway() ? "TMessageType.Oneway"
: "TMessageType.Call")
<< ", seqid_);" << endl;
@@ -2076,7 +2076,7 @@ void t_delphi_generator::generate_service_client(t_service* tservice) {
indent_impl(s_service_impl) << function_signature(&recv_function, full_cls) << endl;
indent_impl(s_service_impl) << "var" << endl;
indent_up_impl();
indent_impl(s_service_impl) << msgvar << " : Thrift.Protocol.IMessage;" << endl;
indent_impl(s_service_impl) << msgvar << " : Thrift.Protocol.TThriftMessage;" << endl;
if (xceptions.size() > 0) {
indent_impl(s_service_impl) << exceptvar << " : Exception;" << endl;
}
@@ -2234,7 +2234,7 @@ void t_delphi_generator::generate_service_server(t_service* tservice) {
;
indent_impl(s_service_impl) << "var" << endl;
indent_up_impl();
indent_impl(s_service_impl) << "msg : Thrift.Protocol.IMessage;" << endl;
indent_impl(s_service_impl) << "msg : Thrift.Protocol.TThriftMessage;" << endl;
indent_impl(s_service_impl) << "fn : TProcessFunction;" << endl;
indent_impl(s_service_impl) << "x : TApplicationException;" << endl;
if (events_) {
@@ -2257,7 +2257,7 @@ void t_delphi_generator::generate_service_server(t_service* tservice) {
"TApplicationExceptionUnknownMethod.Create("
"'Invalid method name: ''' + msg.Name + '''');" << endl;
indent_impl(s_service_impl)
<< "msg := Thrift.Protocol.TMessageImpl.Create(msg.Name, TMessageType.Exception, msg.SeqID);"
<< "Thrift.Protocol.Init( msg, msg.Name, TMessageType.Exception, msg.SeqID);"
<< endl;
indent_impl(s_service_impl) << "oprot.WriteMessageBegin( msg);" << endl;
indent_impl(s_service_impl) << "x.Write(oprot);" << endl;
@@ -2373,7 +2373,7 @@ void t_delphi_generator::generate_process_function(t_service* tservice, t_functi
indent_up_impl();
indent_impl(s_service_impl) << "args: " << args_intfnm << ";" << endl;
if (!tfunction->is_oneway()) {
indent_impl(s_service_impl) << "msg: Thrift.Protocol.IMessage;" << endl;
indent_impl(s_service_impl) << "msg: Thrift.Protocol.TThriftMessage;" << endl;
indent_impl(s_service_impl) << "ret: " << result_intfnm << ";" << endl;
indent_impl(s_service_impl) << "appx : TApplicationException;" << endl;
}
@@ -2459,7 +2459,7 @@ void t_delphi_generator::generate_process_function(t_service* tservice, t_functi
if(events_) {
indent_impl(s_service_impl) << "if events <> nil then events.PreWrite;" << endl;
}
indent_impl(s_service_impl) << "msg := Thrift.Protocol.TMessageImpl.Create('"
indent_impl(s_service_impl) << "Thrift.Protocol.Init( msg, '"
<< tfunction->get_name() << "', TMessageType.Exception, seqid);"
<< endl;
indent_impl(s_service_impl) << "oprot.WriteMessageBegin( msg);" << endl;
@@ -2487,7 +2487,7 @@ void t_delphi_generator::generate_process_function(t_service* tservice, t_functi
if (events_) {
indent_impl(s_service_impl) << "if events <> nil then events.PreWrite;" << endl;
}
indent_impl(s_service_impl) << "msg := Thrift.Protocol.TMessageImpl.Create('"
indent_impl(s_service_impl) << "Thrift.Protocol.Init( msg, '"
<< tfunction->get_name() << "', TMessageType.Reply, seqid); "
<< endl;
indent_impl(s_service_impl) << "oprot.WriteMessageBegin( msg); " << endl;
@@ -2619,11 +2619,11 @@ void t_delphi_generator::generate_deserialize_container(ostream& out,
}
if (ttype->is_map()) {
local_var = obj + ": IMap;";
local_var = obj + ": TThriftMap;";
} else if (ttype->is_set()) {
local_var = obj + ": ISet;";
local_var = obj + ": TThriftSet;";
} else if (ttype->is_list()) {
local_var = obj + ": IList;";
local_var = obj + ": TThriftList;";
}
local_vars << " " << local_var << endl;
counter = tmp("_i");
@@ -2803,23 +2803,23 @@ void t_delphi_generator::generate_serialize_container(ostream& out,
string obj;
if (ttype->is_map()) {
obj = tmp("map");
local_vars << " " << obj << " : IMap;" << endl;
indent_impl(out) << obj << " := TMapImpl.Create( "
local_vars << " " << obj << " : TThriftMap;" << endl;
indent_impl(out) << "Thrift.Protocol.Init( " << obj << ", "
<< type_to_enum(((t_map*)ttype)->get_key_type()) << ", "
<< type_to_enum(((t_map*)ttype)->get_val_type()) << ", " << prefix
<< ".Count);" << endl;
indent_impl(out) << "oprot.WriteMapBegin( " << obj << ");" << endl;
} else if (ttype->is_set()) {
obj = tmp("set_");
local_vars << " " << obj << " : ISet;" << endl;
indent_impl(out) << obj << " := TSetImpl.Create("
local_vars << " " << obj << " : TThriftSet;" << endl;
indent_impl(out) << "Thrift.Protocol.Init( " << obj << ", "
<< type_to_enum(((t_set*)ttype)->get_elem_type()) << ", " << prefix
<< ".Count);" << endl;
indent_impl(out) << "oprot.WriteSetBegin( " << obj << ");" << endl;
} else if (ttype->is_list()) {
obj = tmp("list_");
local_vars << " " << obj << " : IList;" << endl;
indent_impl(out) << obj << " := TListImpl.Create("
local_vars << " " << obj << " : TThriftList;" << endl;
indent_impl(out) << "Thrift.Protocol.Init( " << obj << ", "
<< type_to_enum(((t_list*)ttype)->get_elem_type()) << ", " << prefix
<< ".Count);" << endl;
indent_impl(out) << "oprot.WriteListBegin( " << obj << ");" << endl;
@@ -3548,7 +3548,7 @@ void t_delphi_generator::generate_delphi_struct_reader_impl(ostream& out,
<< ") then begin" << endl;
indent_up_impl();
generate_deserialize_field(code_block, is_exception, *f_iter, "", local_vars);
generate_deserialize_field(code_block, is_exception, *f_iter, "Self.", local_vars);
// required field?
if ((*f_iter)->get_req() == t_field::T_REQUIRED) {
@@ -3617,8 +3617,8 @@ void t_delphi_generator::generate_delphi_struct_reader_impl(ostream& out,
<< endl;
indent_impl(out) << "var" << endl;
indent_up_impl();
indent_impl(out) << "field_ : IField;" << endl;
indent_impl(out) << "struc : IStruct;" << endl;
indent_impl(out) << "field_ : TThriftField;" << endl;
indent_impl(out) << "struc : TThriftStruct;" << endl;
indent_down_impl();
out << local_vars.str() << endl;
out << code_block.str();
@@ -3642,11 +3642,11 @@ void t_delphi_generator::generate_delphi_struct_result_writer_impl(ostream& out,
indent_impl(local_vars) << "tracker : IProtocolRecursionTracker;" << endl;
indent_impl(code_block) << "tracker := oprot.NextRecursionLevel;" << endl;
indent_impl(code_block) << "struc := TStructImpl.Create('" << name << "');" << endl;
indent_impl(code_block) << "Thrift.Protocol.Init( struc, '" << name << "');" << endl;
indent_impl(code_block) << "oprot.WriteStructBegin(struc);" << endl;
if (fields.size() > 0) {
indent_impl(code_block) << "field_ := TFieldImpl.Create;" << endl;
indent_impl(code_block) << "Thrift.Protocol.Init( field_);" << endl;
for (f_iter = fields.begin(); f_iter != fields.end(); ++f_iter) {
indent_impl(code_block) << "if (__isset_" << prop_name(*f_iter, is_exception) << ") then"
<< endl;
@@ -3657,7 +3657,7 @@ void t_delphi_generator::generate_delphi_struct_result_writer_impl(ostream& out,
<< endl;
indent_impl(code_block) << "field_.ID := " << (*f_iter)->get_key() << ";" << endl;
indent_impl(code_block) << "oprot.WriteFieldBegin(field_);" << endl;
generate_serialize_field(code_block, is_exception, *f_iter, "", local_vars);
generate_serialize_field(code_block, is_exception, *f_iter, "Self.", local_vars);
indent_impl(code_block) << "oprot.WriteFieldEnd();" << endl;
indent_down_impl();
}
@@ -3677,10 +3677,10 @@ void t_delphi_generator::generate_delphi_struct_result_writer_impl(ostream& out,
<< endl;
indent_impl(out) << "var" << endl;
indent_up_impl();
indent_impl(out) << "struc : IStruct;" << endl;
indent_impl(out) << "struc : TThriftStruct;" << endl;
if (fields.size() > 0) {
indent_impl(out) << "field_ : IField;" << endl;
indent_impl(out) << "field_ : TThriftField;" << endl;
}
out << local_vars.str();
@@ -3706,11 +3706,11 @@ void t_delphi_generator::generate_delphi_struct_writer_impl(ostream& out,
indent_impl(local_vars) << "tracker : IProtocolRecursionTracker;" << endl;
indent_impl(code_block) << "tracker := oprot.NextRecursionLevel;" << endl;
indent_impl(code_block) << "struc := TStructImpl.Create('" << name << "');" << endl;
indent_impl(code_block) << "Thrift.Protocol.Init( struc, '" << name << "');" << endl;
indent_impl(code_block) << "oprot.WriteStructBegin(struc);" << endl;
if (fields.size() > 0) {
indent_impl(code_block) << "field_ := TFieldImpl.Create;" << endl;
indent_impl(code_block) << "Thrift.Protocol.Init( field_);" << endl;
}
for (f_iter = fields.begin(); f_iter != fields.end(); ++f_iter) {
@@ -3720,13 +3720,13 @@ void t_delphi_generator::generate_delphi_struct_writer_impl(ostream& out,
bool has_isset = (!is_required);
if (is_required && null_allowed) {
null_allowed = false;
indent_impl(code_block) << "if (" << fieldname << " = nil)" << endl;
indent_impl(code_block) << "if (Self." << fieldname << " = nil)" << endl;
indent_impl(code_block) << "then raise TProtocolExceptionInvalidData.Create("
<< "'required field " << fieldname << " not set');"
<< endl;
}
if (null_allowed) {
indent_impl(code_block) << "if (" << fieldname << " <> nil)";
indent_impl(code_block) << "if (Self." << fieldname << " <> nil)";
if (has_isset) {
code_block << " and __isset_" << fieldname;
}
@@ -3743,7 +3743,7 @@ void t_delphi_generator::generate_delphi_struct_writer_impl(ostream& out,
<< endl;
indent_impl(code_block) << "field_.ID := " << (*f_iter)->get_key() << ";" << endl;
indent_impl(code_block) << "oprot.WriteFieldBegin(field_);" << endl;
generate_serialize_field(code_block, is_exception, *f_iter, "", local_vars);
generate_serialize_field(code_block, is_exception, *f_iter, "Self.", local_vars);
indent_impl(code_block) << "oprot.WriteFieldEnd();" << endl;
if (null_allowed || has_isset) {
indent_down_impl();
@@ -3765,9 +3765,9 @@ void t_delphi_generator::generate_delphi_struct_writer_impl(ostream& out,
<< endl;
indent_impl(out) << "var" << endl;
indent_up_impl();
indent_impl(out) << "struc : IStruct;" << endl;
indent_impl(out) << "struc : TThriftStruct;" << endl;
if (fields.size() > 0) {
indent_impl(out) << "field_ : IField;" << endl;
indent_impl(out) << "field_ : TThriftField;" << endl;
}
out << local_vars.str();
indent_down_impl();
@@ -3825,7 +3825,7 @@ void t_delphi_generator::generate_delphi_struct_tostring_impl(ostream& out,
bool null_allowed = type_can_be_null((*f_iter)->get_type());
bool is_optional = ((*f_iter)->get_req() != t_field::T_REQUIRED);
if (null_allowed) {
indent_impl(out) << "if (" << prop_name((*f_iter), is_exception) << " <> nil)";
indent_impl(out) << "if (Self." << prop_name((*f_iter), is_exception) << " <> nil)";
if (is_optional) {
out << " and __isset_" << prop_name(*f_iter, is_exception);
}
@@ -3857,14 +3857,14 @@ void t_delphi_generator::generate_delphi_struct_tostring_impl(ostream& out,
}
if (ttype->is_xception() || ttype->is_struct()) {
indent_impl(out) << "if (" << prop_name((*f_iter), is_exception) << " = nil) then " << tmp_sb
<< ".Append('<null>') else " << tmp_sb << ".Append("
indent_impl(out) << "if (Self." << prop_name((*f_iter), is_exception) << " = nil) then " << tmp_sb
<< ".Append('<null>') else " << tmp_sb << ".Append( Self."
<< prop_name((*f_iter), is_exception) << ".ToString());" << endl;
} else if (ttype->is_enum()) {
indent_impl(out) << tmp_sb << ".Append(Integer(" << prop_name((*f_iter), is_exception)
indent_impl(out) << tmp_sb << ".Append(Integer( Self." << prop_name((*f_iter), is_exception)
<< "));" << endl;
} else {
indent_impl(out) << tmp_sb << ".Append(" << prop_name((*f_iter), is_exception) << ");"
indent_impl(out) << tmp_sb << ".Append( Self." << prop_name((*f_iter), is_exception) << ");"
<< endl;
}


@@ -58,6 +58,7 @@ public:
gen_dynbase_ = false;
gen_slots_ = false;
gen_tornado_ = false;
gen_zope_interface_ = false;
gen_twisted_ = false;
gen_dynamic_ = false;
coding_ = "";
@@ -105,8 +106,11 @@ public:
} else if( iter->first.compare("dynimport") == 0) {
gen_dynbase_ = true;
import_dynbase_ = (iter->second);
} else if( iter->first.compare("zope.interface") == 0) {
gen_zope_interface_ = true;
} else if( iter->first.compare("twisted") == 0) {
gen_twisted_ = true;
gen_zope_interface_ = true;
} else if( iter->first.compare("tornado") == 0) {
gen_tornado_ = true;
} else if( iter->first.compare("coding") == 0) {
@@ -290,6 +294,11 @@ private:
std::string copy_options_;
/**
* True if we should generate code for use with zope.interface.
*/
bool gen_zope_interface_;
/**
* True if we should generate Twisted-friendly RPC services.
*/
@@ -425,7 +434,7 @@ string t_py_generator::py_imports() {
<< endl
<< "from thrift.protocol.TProtocol import TProtocolException"
<< endl
<< "from thrift.TRecursive import fix_spec"
<< "from thrift.TRecursive import fix_spec"
<< endl;
if (gen_utf8strings_) {
@@ -623,10 +632,10 @@ string t_py_generator::render_const_value(t_type* type, t_const_value* value) {
return out.str();
}
/**
/**
* Generates the "forward declarations" for python structs.
* These are actually full class definitions so that calls to generate_struct
* can add the thrift_spec field. This is needed so that all thrift_spec
* can add the thrift_spec field. This is needed so that all thrift_spec
* definitions are grouped at the end of the file to enable co-recursive structs.
*/
void t_py_generator::generate_forward_declaration(t_struct* tstruct) {
@@ -1091,10 +1100,12 @@ void t_py_generator::generate_service(t_service* tservice) {
<< "from thrift.Thrift import TProcessor" << endl
<< "from thrift.transport import TTransport" << endl
<< import_dynbase_;
if (gen_zope_interface_) {
f_service_ << "from zope.interface import Interface, implementer" << endl;
}
if (gen_twisted_) {
f_service_ << "from zope.interface import Interface, implementer" << endl
<< "from twisted.internet import defer" << endl
f_service_ << "from twisted.internet import defer" << endl
<< "from thrift.transport import TTwisted" << endl;
} else if (gen_tornado_) {
f_service_ << "from tornado import gen" << endl;
@@ -1171,7 +1182,7 @@ void t_py_generator::generate_service_interface(t_service* tservice) {
extends = type_name(tservice->get_extends());
extends_if = "(" + extends + ".Iface)";
} else {
if (gen_twisted_) {
if (gen_zope_interface_) {
extends_if = "(Interface)";
} else if (gen_newstyle_ || gen_dynamic_ || gen_tornado_) {
extends_if = "(object)";
@@ -1214,20 +1225,20 @@ void t_py_generator::generate_service_client(t_service* tservice) {
string extends_client = "";
if (tservice->get_extends() != NULL) {
extends = type_name(tservice->get_extends());
if (gen_twisted_) {
if (gen_zope_interface_) {
extends_client = "(" + extends + ".Client)";
} else {
extends_client = extends + ".Client, ";
}
} else {
if (gen_twisted_ && (gen_newstyle_ || gen_dynamic_)) {
if (gen_zope_interface_ && (gen_newstyle_ || gen_dynamic_)) {
extends_client = "(object)";
}
}
f_service_ << endl << endl;
if (gen_twisted_) {
if (gen_zope_interface_) {
f_service_ << "@implementer(Iface)" << endl
<< "class Client" << extends_client << ":" << endl
<< endl;
@@ -1767,7 +1778,7 @@ void t_py_generator::generate_service_server(t_service* tservice) {
f_service_ << endl << endl;
// Generate the header portion
if (gen_twisted_) {
if (gen_zope_interface_) {
f_service_ << "@implementer(Iface)" << endl
<< "class Processor(" << extends_processor << "TProcessor):" << endl;
} else {
@@ -1779,7 +1790,7 @@ void t_py_generator::generate_service_server(t_service* tservice) {
indent(f_service_) << "def __init__(self, handler):" << endl;
indent_up();
if (extends.empty()) {
if (gen_twisted_) {
if (gen_zope_interface_) {
f_service_ << indent() << "self._handler = Iface(handler)" << endl;
} else {
f_service_ << indent() << "self._handler = handler" << endl;
@@ -1787,7 +1798,7 @@ void t_py_generator::generate_service_server(t_service* tservice) {
f_service_ << indent() << "self._processMap = {}" << endl;
} else {
if (gen_twisted_) {
if (gen_zope_interface_) {
f_service_ << indent() << extends << ".Processor.__init__(self, Iface(handler))" << endl;
} else {
f_service_ << indent() << extends << ".Processor.__init__(self, handler)" << endl;
@@ -2536,7 +2547,7 @@ string t_py_generator::function_signature(t_function* tfunction, bool interface)
vector<string> post;
string signature = tfunction->get_name() + "(";
if (!(gen_twisted_ && interface)) {
if (!(gen_zope_interface_ && interface)) {
pre.push_back("self");
}
@@ -2680,6 +2691,7 @@ string t_py_generator::type_to_spec_args(t_type* ttype) {
THRIFT_REGISTER_GENERATOR(
py,
"Python",
" zope.interface: Generate code for use with zope.interface.\n"
" twisted: Generate Twisted-friendly RPC services.\n"
" tornado: Generate code for use with Tornado.\n"
" no_utf8strings: Do not Encode/decode strings using utf8 in the generated code. Basically no effect for Python 3.\n"


@@ -83,6 +83,9 @@ AS_IF([test "x$D_IMPORT_PREFIX" = x], [D_IMPORT_PREFIX="${includedir}/d2"])
AC_ARG_VAR([DMD_LIBEVENT_FLAGS], [DMD flags for linking libevent (auto-detected if not set).])
AC_ARG_VAR([DMD_OPENSSL_FLAGS], [DMD flags for linking OpenSSL (auto-detected if not set).])
AC_ARG_VAR([THRIFT], [Path to the thrift tool (needed for cross-compilation).])
AS_IF([test "x$THRIFT" = x], [THRIFT=`pwd`/compiler/cpp/thrift])
AC_PROG_CC
AC_PROG_CPP
AC_PROG_CXX


@@ -237,8 +237,6 @@ nodist_libtestgencpp_la_SOURCES = \
gen-cpp/ThriftTest_types.h
libtestgencpp_la_CPPFLAGS = -I../../cpp/src $(BOOST_CPPFLAGS) -I./gen-cpp
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-c_glib/t_test_container_test_types.c gen-c_glib/t_test_container_test_types.h gen-c_glib/t_test_container_service.c gen-c_glib/t_test_container_service.h: ContainerTest.thrift $(THRIFT)
$(THRIFT) --gen c_glib $<


@@ -265,8 +265,6 @@ include_qt_HEADERS = \
src/thrift/qt/TQIODeviceTransport.h \
src/thrift/qt/TQTcpServer.h
THRIFT = $(top_builddir)/compiler/cpp/thrift
WINDOWS_DIST = \
src/thrift/windows \
thrift.sln \


@@ -51,6 +51,7 @@ public:
private:
scoped_ptr<boost::thread> thread_;
Monitor monitor_;
STATE state_;
weak_ptr<BoostThread> self_;
bool detached_;
@@ -71,25 +72,46 @@ public:
}
}
void start() {
if (state_ != uninitialized) {
return;
}
STATE getState() const
{
Synchronized sync(monitor_);
return state_;
}
void setState(STATE newState)
{
Synchronized sync(monitor_);
state_ = newState;
// unblock start() with the knowledge that the thread has actually
// started running, which avoids a race in detached threads.
if (newState == started) {
monitor_.notify();
}
}
void start() {
// Create reference
shared_ptr<BoostThread>* selfRef = new shared_ptr<BoostThread>();
*selfRef = self_.lock();
state_ = starting;
setState(starting);
Synchronized sync(monitor_);
thread_.reset(new boost::thread(bind(threadMain, (void*)selfRef)));
if (detached_)
thread_->detach();
// Wait for the thread to start and get far enough to grab everything
// that it needs from the calling context, thus absolving the caller
// from being required to hold on to runnable indefinitely.
monitor_.wait();
}
void join() {
if (!detached_ && state_ != uninitialized) {
if (!detached_ && getState() != uninitialized) {
thread_->join();
}
}
@@ -110,19 +132,11 @@ void* BoostThread::threadMain(void* arg) {
shared_ptr<BoostThread> thread = *(shared_ptr<BoostThread>*)arg;
delete reinterpret_cast<shared_ptr<BoostThread>*>(arg);
if (!thread) {
return (void*)0;
}
if (thread->state_ != starting) {
return (void*)0;
}
thread->state_ = started;
thread->setState(started);
thread->runnable()->run();
if (thread->state_ != stopping && thread->state_ != stopped) {
thread->state_ = stopping;
if (thread->getState() != stopping && thread->getState() != stopped) {
thread->setState(stopping);
}
return (void*)0;
}
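The hunks above replace direct writes to state_ with a getState/setState pair guarded by the Monitor, and make start() block until the new thread reports itself as started; the in-code comments explain that this closes a race in detached threads, where the caller could otherwise destroy the Runnable (and the Thread wrapper) before the worker had copied what it needs. A minimal, self-contained C++11 sketch of that same handshake follows, assuming std::mutex/std::condition_variable in place of Thrift's Monitor; the class and member names here are illustrative only and are not part of the patched code.

#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
#include <utility>

class DetachedWorker {
public:
  explicit DetachedWorker(std::function<void()> runnable)
    : runnable_(std::move(runnable)) {}

  void start() {
    std::unique_lock<std::mutex> lock(mutex_);
    if (state_ != uninitialized) return;
    state_ = starting;
    std::thread(&DetachedWorker::threadMain, this).detach();
    // Block until threadMain() has copied what it needs out of *this; after
    // that the caller may drop this object without racing the new thread.
    cv_.wait(lock, [this] { return state_ == started; });
  }

private:
  enum State { uninitialized, starting, started };

  void threadMain() {
    std::function<void()> work;
    {
      std::lock_guard<std::mutex> guard(mutex_);
      work = runnable_;   // grab everything needed while the caller is still waiting
      state_ = started;
      cv_.notify_one();   // notified under the lock: start() cannot return before we unlock
    }
    work();               // from here on, no member of *this is touched
  }

  std::function<void()> runnable_;
  State state_ = uninitialized;
  std::mutex mutex_;
  std::condition_variable cv_;
};

int main() {
  DetachedWorker w([] { std::cout << "runnable ran" << std::endl; });
  w.start();   // returns only once the detached thread holds its own copy
  std::this_thread::sleep_for(std::chrono::milliseconds(100));   // crude: let the worker finish before exit
}

Notifying while the lock is still held is what gives the guarantee the comments describe: start() cannot wake and return, and the caller cannot begin tearing anything down, until the worker has finished copying out of *this.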

View File

@@ -20,8 +20,8 @@
#ifndef _THRIFT_CONCURRENCY_BOOSTTHREADFACTORY_H_
#define _THRIFT_CONCURRENCY_BOOSTTHREADFACTORY_H_ 1
#include <thrift/concurrency/Monitor.h>
#include <thrift/concurrency/Thread.h>
#include <thrift/stdcxx.h>
namespace apache {

View File

@@ -20,7 +20,7 @@
#include <thrift/thrift-config.h>
#include <thrift/concurrency/Exception.h>
#include <thrift/concurrency/Mutex.h>
#include <thrift/concurrency/Monitor.h>
#include <thrift/concurrency/PosixThreadFactory.h>
#if GOOGLE_PERFTOOLS_REGISTER_THREAD
@@ -53,8 +53,8 @@ public:
private:
pthread_t pthread_;
Mutex state_mutex_;
STATE state_;
Monitor monitor_; // guard to protect state_ and also notification
STATE state_; // to protect proper thread start behavior
int policy_;
int priority_;
int stackSize_;
@@ -96,14 +96,20 @@ public:
STATE getState() const
{
Guard g(state_mutex_);
Synchronized sync(monitor_);
return state_;
}
void setState(STATE newState)
{
Guard g(state_mutex_);
Synchronized sync(monitor_);
state_ = newState;
// unblock start() with the knowledge that the thread has actually
// started running, which avoids a race in detached threads.
if (newState == started) {
monitor_.notify();
}
}
void start() {
@@ -154,9 +160,18 @@ public:
setState(starting);
Synchronized sync(monitor_);
if (pthread_create(&pthread_, &thread_attr, threadMain, (void*)selfRef) != 0) {
throw SystemResourceException("pthread_create failed");
}
// The caller may not choose to guarantee the scope of the Runnable
// being used in the thread, so we must actually wait until the thread
// starts before we return. If we do not wait, it would be possible
// for the caller to start destructing the Runnable and the Thread,
// and we would end up in a race. This was identified with valgrind.
monitor_.wait();
}
void join() {
@@ -174,8 +189,6 @@ public:
if (res != 0) {
GlobalOutput.printf("PthreadThread::join(): fail with code %d", res);
}
} else {
GlobalOutput.printf("PthreadThread::join(): detached thread");
}
}
@@ -202,14 +215,6 @@ void* PthreadThread::threadMain(void* arg) {
stdcxx::shared_ptr<PthreadThread> thread = *(stdcxx::shared_ptr<PthreadThread>*)arg;
delete reinterpret_cast<stdcxx::shared_ptr<PthreadThread>*>(arg);
if (thread == NULL) {
return (void*)0;
}
if (thread->getState() != starting) {
return (void*)0;
}
#if GOOGLE_PERFTOOLS_REGISTER_THREAD
ProfilerRegisterThread();
#endif

View File

@@ -21,8 +21,9 @@
#if USE_STD_THREAD
#include <thrift/concurrency/StdThreadFactory.h>
#include <thrift/concurrency/Exception.h>
#include <thrift/concurrency/Monitor.h>
#include <thrift/concurrency/StdThreadFactory.h>
#include <thrift/stdcxx.h>
#include <cassert>
@@ -49,6 +50,7 @@ public:
private:
std::unique_ptr<std::thread> thread_;
Monitor monitor_;
STATE state_;
bool detached_;
@@ -68,18 +70,42 @@ public:
}
}
STATE getState() const
{
Synchronized sync(monitor_);
return state_;
}
void setState(STATE newState)
{
Synchronized sync(monitor_);
state_ = newState;
// unblock start() with the knowledge that the thread has actually
// started running, which avoids a race in detached threads.
if (newState == started) {
monitor_.notify();
}
}
void start() {
if (state_ != uninitialized) {
if (getState() != uninitialized) {
return;
}
stdcxx::shared_ptr<StdThread> selfRef = shared_from_this();
state_ = starting;
setState(starting);
Synchronized sync(monitor_);
thread_ = std::unique_ptr<std::thread>(new std::thread(threadMain, selfRef));
if (detached_)
thread_->detach();
// Wait for the thread to start and get far enough to grab everything
// that it needs from the calling context, thus absolving the caller
// from being required to hold on to runnable indefinitely.
monitor_.wait();
}
void join() {
@@ -96,22 +122,16 @@ public:
};
void StdThread::threadMain(stdcxx::shared_ptr<StdThread> thread) {
if (thread == NULL) {
return;
}
#if GOOGLE_PERFTOOLS_REGISTER_THREAD
ProfilerRegisterThread();
#endif
if (thread->state_ != starting) {
return;
}
thread->state_ = started;
thread->setState(started);
thread->runnable()->run();
if (thread->state_ != stopping && thread->state_ != stopped) {
thread->state_ = stopping;
if (thread->getState() != stopping && thread->getState() != stopped) {
thread->setState(stopping);
}
return;
}
StdThreadFactory::StdThreadFactory(bool detached) : ThreadFactory(detached) {

View File

@@ -52,6 +52,8 @@ public:
}
}
bool operator==(const shared_ptr<Runnable> & runnable) const { return runnable_ == runnable; }
private:
shared_ptr<Runnable> runnable_;
friend class TimerManager::Dispatcher;
@@ -290,11 +292,23 @@ void TimerManager::add(shared_ptr<Runnable> task, const struct timeval& value) {
}
void TimerManager::remove(shared_ptr<Runnable> task) {
(void)task;
Synchronized s(monitor_);
if (state_ != TimerManager::STARTED) {
throw IllegalStateException();
}
bool found = false;
for (task_iterator ix = taskMap_.begin(); ix != taskMap_.end();) {
if (*ix->second == task) {
found = true;
taskCount_--;
taskMap_.erase(ix++);
} else {
++ix;
}
}
if (!found) {
throw NoSuchTaskException();
}
}
TimerManager::STATE TimerManager::state() const {

View File

@@ -658,14 +658,21 @@ void TServerSocket::notify(THRIFT_SOCKET notifySocket) {
}
void TServerSocket::interrupt() {
notify(interruptSockWriter_);
concurrency::Guard g(rwMutex_);
if (interruptSockWriter_ != THRIFT_INVALID_SOCKET) {
notify(interruptSockWriter_);
}
}
void TServerSocket::interruptChildren() {
notify(childInterruptSockWriter_);
concurrency::Guard g(rwMutex_);
if (childInterruptSockWriter_ != THRIFT_INVALID_SOCKET) {
notify(childInterruptSockWriter_);
}
}
void TServerSocket::close() {
concurrency::Guard g(rwMutex_);
if (serverSocket_ != THRIFT_INVALID_SOCKET) {
shutdown(serverSocket_, THRIFT_SHUT_RDWR);
::THRIFT_CLOSESOCKET(serverSocket_);

View File

@@ -20,6 +20,7 @@
#ifndef _THRIFT_TRANSPORT_TSERVERSOCKET_H_
#define _THRIFT_TRANSPORT_TSERVERSOCKET_H_ 1
#include <thrift/concurrency/Mutex.h>
#include <thrift/stdcxx.h>
#include <thrift/transport/PlatformSocket.h>
#include <thrift/transport/TServerTransport.h>
@@ -169,6 +170,7 @@ private:
bool keepAlive_;
bool listening_;
concurrency::Mutex rwMutex_; // thread-safe interrupt
THRIFT_SOCKET interruptSockWriter_; // is notified on interrupt()
THRIFT_SOCKET interruptSockReader_; // is used in select/poll with serverSocket_ for interruptibility
THRIFT_SOCKET childInterruptSockWriter_; // is notified on interruptChildren()

View File

@@ -360,7 +360,6 @@ OpenSSLManualInitTest_LDADD = \
#
# Common thrift code generation rules
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-cpp/AnnotationTest_constants.cpp gen-cpp/AnnotationTest_constants.h gen-cpp/AnnotationTest_types.cpp gen-cpp/AnnotationTest_types.h: $(top_srcdir)/test/AnnotationTest.thrift
$(THRIFT) --gen cpp $<

View File

@@ -25,6 +25,10 @@
#include "TimerManagerTests.h"
#include "ThreadManagerTests.h"
// The test weight: a weight of 10 runs 10 times more threads than the baseline,
// and the baseline is optimized for running in valgrind
static size_t WEIGHT = 10;
int main(int argc, char** argv) {
std::string arg;
@@ -37,6 +41,11 @@ int main(int argc, char** argv) {
args[ix - 1] = std::string(argv[ix]);
}
if (getenv("VALGRIND") != 0) {
// lower the scale of every test
WEIGHT = 1;
}
bool runAll = args[0].compare("all") == 0;
if (runAll || args[0].compare("thread-factory") == 0) {
@@ -45,10 +54,10 @@ int main(int argc, char** argv) {
std::cout << "ThreadFactory tests..." << std::endl;
int reapLoops = 20;
int reapCount = 1000;
int reapLoops = 2 * WEIGHT;
int reapCount = 100 * WEIGHT;
size_t floodLoops = 3;
size_t floodCount = 20000;
size_t floodCount = 500 * WEIGHT;
std::cout << "\t\tThreadFactory reap N threads test: N = " << reapLoops << "x" << reapCount << std::endl;
@@ -114,6 +123,20 @@ int main(int argc, char** argv) {
std::cerr << "\t\tTimerManager tests FAILED" << std::endl;
return 1;
}
std::cout << "\t\tTimerManager test01" << std::endl;
if (!timerManagerTests.test01()) {
std::cerr << "\t\tTimerManager tests FAILED" << std::endl;
return 1;
}
std::cout << "\t\tTimerManager test02" << std::endl;
if (!timerManagerTests.test02()) {
std::cerr << "\t\tTimerManager tests FAILED" << std::endl;
return 1;
}
}
if (runAll || args[0].compare("thread-manager") == 0) {
@@ -121,8 +144,8 @@ int main(int argc, char** argv) {
std::cout << "ThreadManager tests..." << std::endl;
{
size_t workerCount = 100;
size_t taskCount = 50000;
size_t workerCount = 10 * WEIGHT;
size_t taskCount = 500 * WEIGHT;
int64_t delay = 10LL;
ThreadManagerTests threadManagerTests;
@@ -160,13 +183,13 @@ int main(int argc, char** argv) {
size_t minWorkerCount = 2;
size_t maxWorkerCount = 64;
size_t maxWorkerCount = 8;
size_t tasksPerWorker = 1000;
size_t tasksPerWorker = 100 * WEIGHT;
int64_t delay = 5LL;
for (size_t workerCount = minWorkerCount; workerCount < maxWorkerCount; workerCount *= 4) {
for (size_t workerCount = minWorkerCount; workerCount <= maxWorkerCount; workerCount *= 4) {
size_t taskCount = workerCount * tasksPerWorker;

View File

@@ -21,11 +21,12 @@
#include <thrift/concurrency/Thread.h>
#include <thrift/concurrency/PlatformThreadFactory.h>
#include <thrift/concurrency/Monitor.h>
#include <thrift/concurrency/Mutex.h>
#include <thrift/concurrency/Util.h>
#include <assert.h>
#include <iostream>
#include <set>
#include <vector>
namespace apache {
namespace thrift {
@@ -78,13 +79,13 @@ public:
int* activeCount = new int(count);
std::set<shared_ptr<Thread> > threads;
std::vector<shared_ptr<Thread> > threads;
int tix;
for (tix = 0; tix < count; tix++) {
try {
threads.insert(
threads.push_back(
threadFactory.newThread(shared_ptr<Runnable>(new ReapNTask(*monitor, *activeCount))));
} catch (SystemResourceException& e) {
std::cout << "\t\t\tfailed to create " << lix* count + tix << " thread " << e.what()
@@ -94,7 +95,7 @@ public:
}
tix = 0;
for (std::set<shared_ptr<Thread> >::const_iterator thread = threads.begin();
for (std::vector<shared_ptr<Thread> >::const_iterator thread = threads.begin();
thread != threads.end();
tix++, ++thread) {
@@ -113,6 +114,7 @@ public:
monitor->wait(1000);
}
}
delete activeCount;
std::cout << "\t\t\treaped " << lix* count << " threads" << std::endl;
}
@@ -253,19 +255,22 @@ public:
class FloodTask : public Runnable {
public:
FloodTask(const size_t id) : _id(id) {}
FloodTask(const size_t id, Monitor& mon) : _id(id), _mon(mon) {}
~FloodTask() {
if (_id % 10000 == 0) {
Synchronized sync(_mon);
std::cout << "\t\tthread " << _id << " done" << std::endl;
}
}
void run() {
if (_id % 10000 == 0) {
Synchronized sync(_mon);
std::cout << "\t\tthread " << _id << " started" << std::endl;
}
}
const size_t _id;
Monitor& _mon;
};
void foo(PlatformThreadFactory* tf) { (void)tf; }
@@ -273,7 +278,8 @@ public:
bool floodNTest(size_t loop = 1, size_t count = 100000) {
bool success = false;
Monitor mon;
for (size_t lix = 0; lix < loop; lix++) {
PlatformThreadFactory threadFactory = PlatformThreadFactory();
@@ -283,10 +289,8 @@ public:
try {
shared_ptr<FloodTask> task(new FloodTask(lix * count + tix));
shared_ptr<FloodTask> task(new FloodTask(lix * count + tix, mon));
shared_ptr<Thread> thread = threadFactory.newThread(task);
thread->start();
} catch (TException& e) {
@@ -298,8 +302,8 @@ public:
}
}
Synchronized sync(mon);
std::cout << "\t\t\tflooded " << (lix + 1) * count << " threads" << std::endl;
success = true;
}

View File

@@ -109,7 +109,7 @@ public:
shared_ptr<ThreadManager> threadManager = ThreadManager::newSimpleThreadManager(workerCount);
shared_ptr<PlatformThreadFactory> threadFactory
= shared_ptr<PlatformThreadFactory>(new PlatformThreadFactory());
= shared_ptr<PlatformThreadFactory>(new PlatformThreadFactory(false));
#if !USE_BOOST_THREAD && !USE_STD_THREAD
threadFactory->setPriority(PosixThreadFactory::HIGHEST);

View File

@@ -126,6 +126,72 @@ public:
return true;
}
/**
* This test creates two tasks, removes the first one, then waits for the second one. It then
* verifies that the timer manager properly cleans itself up, along with the remaining orphaned
* timeout task, when the manager goes out of scope and its destructor is called.
*/
bool test01(int64_t timeout = 1000LL) {
TimerManager timerManager;
timerManager.threadFactory(shared_ptr<PlatformThreadFactory>(new PlatformThreadFactory()));
timerManager.start();
assert(timerManager.state() == TimerManager::STARTED);
Synchronized s(_monitor);
// Setup the two tasks
shared_ptr<TimerManagerTests::Task> taskToRemove
= shared_ptr<TimerManagerTests::Task>(new TimerManagerTests::Task(_monitor, timeout / 2));
timerManager.add(taskToRemove, taskToRemove->_timeout);
shared_ptr<TimerManagerTests::Task> task
= shared_ptr<TimerManagerTests::Task>(new TimerManagerTests::Task(_monitor, timeout));
timerManager.add(task, task->_timeout);
// Remove one task and wait until the other has completed
timerManager.remove(taskToRemove);
_monitor.wait(timeout * 2);
assert(!taskToRemove->_done);
assert(task->_done);
return true;
}
/**
* This test creates two tasks with the same callback plus a third one, then removes the two
* duplicates and waits for the last one. It then verifies that the timer manager properly
* cleans itself up, along with the remaining orphaned timeout task, when the manager goes out
* of scope and its destructor is called.
*/
bool test02(int64_t timeout = 1000LL) {
TimerManager timerManager;
timerManager.threadFactory(shared_ptr<PlatformThreadFactory>(new PlatformThreadFactory()));
timerManager.start();
assert(timerManager.state() == TimerManager::STARTED);
Synchronized s(_monitor);
// Set up one task and add it twice
shared_ptr<TimerManagerTests::Task> taskToRemove
= shared_ptr<TimerManagerTests::Task>(new TimerManagerTests::Task(_monitor, timeout / 3));
timerManager.add(taskToRemove, taskToRemove->_timeout);
timerManager.add(taskToRemove, taskToRemove->_timeout * 2);
shared_ptr<TimerManagerTests::Task> task
= shared_ptr<TimerManagerTests::Task>(new TimerManagerTests::Task(_monitor, timeout));
timerManager.add(task, task->_timeout);
// Remove the first task (i.e. both of its timers) and wait until the other has completed
timerManager.remove(taskToRemove);
_monitor.wait(timeout * 2);
assert(!taskToRemove->_done);
assert(task->_done);
return true;
}
friend class TestTask;
Monitor _monitor;

View File

@@ -24,8 +24,6 @@ BUILT_SOURCES = trusted-ca-certificate.pem server-certificate.pem
# Thrift compiler rules
THRIFT = $(top_builddir)/compiler/cpp/thrift
debug_proto_gen = $(addprefix gen-d/, DebugProtoTest_types.d)
$(debug_proto_gen): $(top_srcdir)/test/DebugProtoTest.thrift

View File

@@ -68,16 +68,16 @@ type
// the standard format, without the service name prepended to TMessage.name.
TStoredMessageProtocol = class( TProtocolDecorator)
private
FMessageBegin : IMessage;
FMessageBegin : TThriftMessage;
public
constructor Create( const protocol : IProtocol; const aMsgBegin : IMessage);
function ReadMessageBegin: IMessage; override;
constructor Create( const protocol : IProtocol; const aMsgBegin : TThriftMessage);
function ReadMessageBegin: TThriftMessage; override;
end;
private
FServiceProcessorMap : TDictionary<String, IProcessor>;
procedure Error( const oprot : IProtocol; const msg : IMessage;
procedure Error( const oprot : IProtocol; const msg : TThriftMessage;
extype : TApplicationExceptionSpecializedClass; const etxt : string);
public
@@ -105,14 +105,14 @@ type
implementation
constructor TMultiplexedProcessorImpl.TStoredMessageProtocol.Create( const protocol : IProtocol; const aMsgBegin : IMessage);
constructor TMultiplexedProcessorImpl.TStoredMessageProtocol.Create( const protocol : IProtocol; const aMsgBegin : TThriftMessage);
begin
inherited Create( protocol);
FMessageBegin := aMsgBegin;
end;
function TMultiplexedProcessorImpl.TStoredMessageProtocol.ReadMessageBegin: IMessage;
function TMultiplexedProcessorImpl.TStoredMessageProtocol.ReadMessageBegin: TThriftMessage;
begin
result := FMessageBegin;
end;
@@ -141,15 +141,15 @@ begin
end;
procedure TMultiplexedProcessorImpl.Error( const oprot : IProtocol; const msg : IMessage;
procedure TMultiplexedProcessorImpl.Error( const oprot : IProtocol; const msg : TThriftMessage;
extype : TApplicationExceptionSpecializedClass;
const etxt : string);
var appex : TApplicationException;
newMsg : IMessage;
newMsg : TThriftMessage;
begin
appex := extype.Create(etxt);
try
newMsg := TMessageImpl.Create( msg.Name, TMessageType.Exception, msg.SeqID);
Init( newMsg, msg.Name, TMessageType.Exception, msg.SeqID);
oprot.WriteMessageBegin(newMsg);
appex.Write(oprot);
@@ -163,7 +163,7 @@ end;
function TMultiplexedProcessorImpl.Process(const iprot, oprot : IProtocol; const events : IProcessorEvents = nil): Boolean;
var msg, newMsg : IMessage;
var msg, newMsg : TThriftMessage;
idx : Integer;
sService : string;
processor : IProcessor;
@@ -204,7 +204,7 @@ begin
// Create a new TMessage, removing the service name
Inc( idx, Length(TMultiplexedProtocol.SEPARATOR));
newMsg := TMessageImpl.Create( Copy( msg.Name, idx, MAXINT), msg.Type_, msg.SeqID);
Init( newMsg, Copy( msg.Name, idx, MAXINT), msg.Type_, msg.SeqID);
// Dispatch processing to the stored processor
protocol := TStoredMessageProtocol.Create( iprot, newMsg);

View File

@@ -123,7 +123,7 @@ type
// If we encounter a boolean field begin, save the TField here so it can
// have the value incorporated.
private booleanField_ : IField;
private booleanField_ : TThriftField;
// If we Read a field header, and it's a boolean field, save the boolean
// value here so that ReadBool can use it.
@@ -148,21 +148,21 @@ type
private
// The workhorse of WriteFieldBegin. It has the option of doing a 'type override'
// of the type header. This is used specifically in the boolean field case.
procedure WriteFieldBeginInternal( const field : IField; typeOverride : Byte);
procedure WriteFieldBeginInternal( const field : TThriftField; typeOverride : Byte);
public
procedure WriteMessageBegin( const msg: IMessage); override;
procedure WriteMessageBegin( const msg: TThriftMessage); override;
procedure WriteMessageEnd; override;
procedure WriteStructBegin( const struc: IStruct); override;
procedure WriteStructBegin( const struc: TThriftStruct); override;
procedure WriteStructEnd; override;
procedure WriteFieldBegin( const field: IField); override;
procedure WriteFieldBegin( const field: TThriftField); override;
procedure WriteFieldEnd; override;
procedure WriteFieldStop; override;
procedure WriteMapBegin( const map: IMap); override;
procedure WriteMapBegin( const map: TThriftMap); override;
procedure WriteMapEnd; override;
procedure WriteListBegin( const list: IList); override;
procedure WriteListBegin( const list: TThriftList); override;
procedure WriteListEnd(); override;
procedure WriteSetBegin( const set_: ISet ); override;
procedure WriteSetBegin( const set_: TThriftSet ); override;
procedure WriteSetEnd(); override;
procedure WriteBool( b: Boolean); override;
procedure WriteByte( b: ShortInt); override;
@@ -194,17 +194,17 @@ type
class procedure fixedLongToBytes( const n : Int64; var buf : TBytes);
public
function ReadMessageBegin: IMessage; override;
function ReadMessageBegin: TThriftMessage; override;
procedure ReadMessageEnd(); override;
function ReadStructBegin: IStruct; override;
function ReadStructBegin: TThriftStruct; override;
procedure ReadStructEnd; override;
function ReadFieldBegin: IField; override;
function ReadFieldBegin: TThriftField; override;
procedure ReadFieldEnd(); override;
function ReadMapBegin: IMap; override;
function ReadMapBegin: TThriftMap; override;
procedure ReadMapEnd(); override;
function ReadListBegin: IList; override;
function ReadListBegin: TThriftList; override;
procedure ReadListEnd(); override;
function ReadSetBegin: ISet; override;
function ReadSetBegin: TThriftSet; override;
procedure ReadSetEnd(); override;
function ReadBool: Boolean; override;
function ReadByte: ShortInt; override;
@@ -273,7 +273,7 @@ begin
lastFieldId_ := 0;
lastField_ := TStack<Integer>.Create;
booleanField_ := nil;
Init( booleanField_, '', TType.Stop, 0);
boolValue_ := unused;
end;
@@ -293,7 +293,7 @@ procedure TCompactProtocolImpl.Reset;
begin
lastField_.Clear();
lastFieldId_ := 0;
booleanField_ := nil;
Init( booleanField_, '', TType.Stop, 0);
boolValue_ := unused;
end;
@@ -301,11 +301,8 @@ end;
// Writes a byte without any possibility of all that field header nonsense.
// Used internally by other writing methods that know they need to Write a byte.
procedure TCompactProtocolImpl.WriteByteDirect( const b : Byte);
var data : TBytes;
begin
SetLength( data, 1);
data[0] := b;
Transport.Write( data);
Transport.Write( @b, SizeOf(b));
end;
@@ -344,7 +341,7 @@ end;
// Write a message header to the wire. Compact Protocol messages contain the
// protocol version so we can migrate forwards in the future if need be.
procedure TCompactProtocolImpl.WriteMessageBegin( const msg: IMessage);
procedure TCompactProtocolImpl.WriteMessageBegin( const msg: TThriftMessage);
var versionAndType : Byte;
begin
Reset;
@@ -362,7 +359,7 @@ end;
// Write a struct begin. This doesn't actually put anything on the wire. We use it as an
// opportunity to put special placeholder markers on the field stack so we can get the
// field id deltas correct.
procedure TCompactProtocolImpl.WriteStructBegin( const struc: IStruct);
procedure TCompactProtocolImpl.WriteStructBegin( const struc: TThriftStruct);
begin
lastField_.Push(lastFieldId_);
lastFieldId_ := 0;
@@ -380,7 +377,7 @@ end;
// Write a field header containing the field id and field type. If the difference between the
// current field id and the last one is small (< 15), then the field id will be encoded in
// the 4 MSB as a delta. Otherwise, the field id will follow the type header as a zigzag varint.
procedure TCompactProtocolImpl.WriteFieldBegin( const field: IField);
procedure TCompactProtocolImpl.WriteFieldBegin( const field: TThriftField);
begin
case field.Type_ of
TType.Bool_ : booleanField_ := field; // we want to possibly include the value, so we'll wait.
@@ -392,7 +389,7 @@ end;
// The workhorse of WriteFieldBegin. It has the option of doing a 'type override'
// of the type header. This is used specifically in the boolean field case.
procedure TCompactProtocolImpl.WriteFieldBeginInternal( const field : IField; typeOverride : Byte);
procedure TCompactProtocolImpl.WriteFieldBeginInternal( const field : TThriftField; typeOverride : Byte);
var typeToWrite : Byte;
begin
// if there's a type override, use that.
@@ -425,7 +422,7 @@ end;
// Write a map header. If the map is empty, omit the key and value type
// headers, as we don't need any additional information to skip it.
procedure TCompactProtocolImpl.WriteMapBegin( const map: IMap);
procedure TCompactProtocolImpl.WriteMapBegin( const map: TThriftMap);
var key, val : Byte;
begin
if (map.Count = 0)
@@ -440,14 +437,14 @@ end;
// Write a list header.
procedure TCompactProtocolImpl.WriteListBegin( const list: IList);
procedure TCompactProtocolImpl.WriteListBegin( const list: TThriftList);
begin
WriteCollectionBegin( list.ElementType, list.Count);
end;
// Write a set header.
procedure TCompactProtocolImpl.WriteSetBegin( const set_: ISet );
procedure TCompactProtocolImpl.WriteSetBegin( const set_: TThriftSet );
begin
WriteCollectionBegin( set_.ElementType, set_.Count);
end;
@@ -464,10 +461,10 @@ begin
then bt := Types.BOOLEAN_TRUE
else bt := Types.BOOLEAN_FALSE;
if booleanField_ <> nil then begin
if booleanField_.Type_ = TType.Bool_ then begin
// we haven't written the field header yet
WriteFieldBeginInternal( booleanField_, Byte(bt));
booleanField_ := nil;
booleanField_.Type_ := TType.Stop;
end
else begin
// we're not part of a field, so just Write the value.
@@ -642,7 +639,7 @@ end;
// Read a message header.
function TCompactProtocolImpl.ReadMessageBegin : IMessage;
function TCompactProtocolImpl.ReadMessageBegin : TThriftMessage;
var protocolId, versionAndType, version, type_ : Byte;
seqid : Integer;
msgNm : String;
@@ -663,17 +660,17 @@ begin
type_ := Byte( (versionAndType shr TYPE_SHIFT_AMOUNT) and TYPE_BITS);
seqid := Integer( ReadVarint32);
msgNm := ReadString;
result := TMessageImpl.Create( msgNm, TMessageType(type_), seqid);
Init( result, msgNm, TMessageType(type_), seqid);
end;
// Read a struct begin. There's nothing on the wire for this, but it is our
// opportunity to push a new struct begin marker onto the field stack.
function TCompactProtocolImpl.ReadStructBegin: IStruct;
function TCompactProtocolImpl.ReadStructBegin: TThriftStruct;
begin
lastField_.Push( lastFieldId_);
lastFieldId_ := 0;
result := TStructImpl.Create('');
Init( result);
end;
@@ -687,7 +684,7 @@ end;
// Read a field header off the wire.
function TCompactProtocolImpl.ReadFieldBegin: IField;
function TCompactProtocolImpl.ReadFieldBegin: TThriftField;
var type_ : Byte;
fieldId, modifier : ShortInt;
begin
@@ -695,7 +692,7 @@ begin
// if it's a stop, then we can return immediately, as the struct is over.
if type_ = Byte(Types.STOP) then begin
result := TFieldImpl.Create( '', TType.Stop, 0);
Init( result, '', TType.Stop, 0);
Exit;
end;
@@ -705,7 +702,7 @@ begin
then fieldId := ReadI16 // not a delta. look ahead for the zigzag varint field id.
else fieldId := ShortInt( lastFieldId_ + modifier); // add the delta to the last Read field id.
result := TFieldImpl.Create( '', getTType(Byte(type_ and $0F)), fieldId);
Init( result, '', getTType(Byte(type_ and $0F)), fieldId);
// if this happens to be a boolean field, the value is encoded in the type
// save the boolean value in a special instance variable.
@@ -723,7 +720,7 @@ end;
// Read a map header off the wire. If the size is zero, skip Reading the key
// and value type. This means that 0-length maps will yield TMaps without the
// "correct" types.
function TCompactProtocolImpl.ReadMapBegin: IMap;
function TCompactProtocolImpl.ReadMapBegin: TThriftMap;
var size : Integer;
keyAndValueType : Byte;
key, val : TType;
@@ -735,7 +732,7 @@ begin
key := getTType( Byte( keyAndValueType shr 4));
val := getTType( Byte( keyAndValueType and $F));
result := TMapImpl.Create( key, val, size);
Init( result, key, val, size);
ASSERT( (result.KeyType = key) and (result.ValueType = val));
end;
@@ -744,7 +741,7 @@ end;
// be packed into the element type header. If it's a longer list, the 4 MSB
// of the element type header will be $F, and a varint will follow with the
// true size.
function TCompactProtocolImpl.ReadListBegin: IList;
function TCompactProtocolImpl.ReadListBegin: TThriftList;
var size_and_type : Byte;
size : Integer;
type_ : TType;
@@ -756,7 +753,7 @@ begin
then size := Integer( ReadVarint32);
type_ := getTType( size_and_type);
result := TListImpl.Create( type_, size);
Init( result, type_, size);
end;
@@ -764,7 +761,7 @@ end;
// be packed into the element type header. If it's a longer set, the 4 MSB
// of the element type header will be $F, and a varint will follow with the
// true size.
function TCompactProtocolImpl.ReadSetBegin: ISet;
function TCompactProtocolImpl.ReadSetBegin: TThriftSet;
var size_and_type : Byte;
size : Integer;
type_ : TType;
@@ -776,7 +773,7 @@ begin
then size := Integer( ReadVarint32);
type_ := getTType( size_and_type);
result := TSetImpl.Create( type_, size);
Init( result, type_, size);
end;
@@ -797,11 +794,8 @@ end;
// Read a single byte off the wire. Nothing interesting here.
function TCompactProtocolImpl.ReadByte: ShortInt;
var data : TBytes;
begin
SetLength( data, 1);
Transport.ReadAll( data, 0, 1);
result := ShortInt(data[0]);
Transport.ReadAll( @result, SizeOf(result), 0, 1);
end;
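The comments in this hunk spell out how the compact protocol packs its headers: a field id whose positive delta from the previous id fits in 4 bits rides in the high nibble of the type byte, larger or non-monotonic ids follow the type byte as a zigzag varint, and collection sizes below 15 are packed into the size/type byte with $F in the high nibble meaning a varint size follows. Below is a rough C++ sketch of that wire format; it only illustrates the encoding the comments describe, it is not the Delphi unit being patched, and the helper names are mine.

#include <cstdint>
#include <vector>

// Unsigned varint: 7 payload bits per byte, MSB set on every byte but the last.
static void writeVarint32(std::vector<uint8_t>& out, uint32_t v) {
  while (v >= 0x80) {
    out.push_back(static_cast<uint8_t>(v) | 0x80);
    v >>= 7;
  }
  out.push_back(static_cast<uint8_t>(v));
}

// Zigzag keeps small negative values short: 0,-1,1,-2,... map to 0,1,2,3,...
static uint32_t zigzag32(int32_t n) {
  return (static_cast<uint32_t>(n) << 1) ^ static_cast<uint32_t>(n >> 31);
}

// Field header: a small positive delta is packed into the 4 MSB of the type byte;
// otherwise the type byte goes out alone and the field id follows as a zigzag varint.
static void writeFieldBegin(std::vector<uint8_t>& out, int16_t& lastFieldId,
                            uint8_t compactType, int16_t fieldId) {
  const int delta = fieldId - lastFieldId;
  if (delta > 0 && delta <= 15) {
    out.push_back(static_cast<uint8_t>(delta << 4) | compactType);
  } else {
    out.push_back(compactType);
    writeVarint32(out, zigzag32(fieldId));
  }
  lastFieldId = fieldId;
}

// List/set header: sizes 0..14 are packed into the high nibble; $F means a varint size follows.
static void writeListBegin(std::vector<uint8_t>& out, uint8_t elemType, uint32_t size) {
  if (size < 15) {
    out.push_back(static_cast<uint8_t>(size << 4) | elemType);
  } else {
    out.push_back(static_cast<uint8_t>(0xF0 | elemType));
    writeVarint32(out, size);
  }
}

int main() {
  std::vector<uint8_t> buf;
  int16_t last = 0;
  writeFieldBegin(buf, last, /*I32*/ 0x05, 1);    // delta 1   -> single byte 0x15
  writeFieldBegin(buf, last, /*I32*/ 0x05, 2);    // delta 1   -> single byte 0x15
  writeFieldBegin(buf, last, /*I32*/ 0x05, 200);  // big jump  -> 0x05, then zigzag varint of 200
  writeListBegin(buf, /*I32*/ 0x05, 3);           // short list -> single byte 0x35
}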

View File

@@ -103,7 +103,7 @@ type
private
FHasData : Boolean;
FData : TBytes;
FData : Byte;
public
// Return and consume the next byte to be Read, either taking it from the
@@ -169,18 +169,18 @@ type
public
// IProtocol
procedure WriteMessageBegin( const aMsg : IMessage); override;
procedure WriteMessageBegin( const aMsg : TThriftMessage); override;
procedure WriteMessageEnd; override;
procedure WriteStructBegin( const struc: IStruct); override;
procedure WriteStructBegin( const struc: TThriftStruct); override;
procedure WriteStructEnd; override;
procedure WriteFieldBegin( const field: IField); override;
procedure WriteFieldBegin( const field: TThriftField); override;
procedure WriteFieldEnd; override;
procedure WriteFieldStop; override;
procedure WriteMapBegin( const map: IMap); override;
procedure WriteMapBegin( const map: TThriftMap); override;
procedure WriteMapEnd; override;
procedure WriteListBegin( const list: IList); override;
procedure WriteListBegin( const list: TThriftList); override;
procedure WriteListEnd(); override;
procedure WriteSetBegin( const set_: ISet ); override;
procedure WriteSetBegin( const set_: TThriftSet ); override;
procedure WriteSetEnd(); override;
procedure WriteBool( b: Boolean); override;
procedure WriteByte( b: ShortInt); override;
@@ -191,17 +191,17 @@ type
procedure WriteString( const s: string ); override;
procedure WriteBinary( const b: TBytes); override;
//
function ReadMessageBegin: IMessage; override;
function ReadMessageBegin: TThriftMessage; override;
procedure ReadMessageEnd(); override;
function ReadStructBegin: IStruct; override;
function ReadStructBegin: TThriftStruct; override;
procedure ReadStructEnd; override;
function ReadFieldBegin: IField; override;
function ReadFieldBegin: TThriftField; override;
procedure ReadFieldEnd(); override;
function ReadMapBegin: IMap; override;
function ReadMapBegin: TThriftMap; override;
procedure ReadMapEnd(); override;
function ReadListBegin: IList; override;
function ReadListBegin: TThriftList; override;
procedure ReadListEnd(); override;
function ReadSetBegin: ISet; override;
function ReadSetBegin: TThriftSet; override;
procedure ReadSetEnd(); override;
function ReadBool: Boolean; override;
function ReadByte: ShortInt; override;
@@ -437,21 +437,19 @@ begin
if FHasData
then FHasData := FALSE
else begin
SetLength( FData, 1);
IJSONProtocol(FProto).Transport.ReadAll( FData, 0, 1);
IJSONProtocol(FProto).Transport.ReadAll( @FData, SizeOf(FData), 0, 1);
end;
result := FData[0];
result := FData;
end;
function TJSONProtocolImpl.TLookaheadReader.Peek : Byte;
begin
if not FHasData then begin
SetLength( FData, 1);
IJSONProtocol(FProto).Transport.ReadAll( FData, 0, 1);
IJSONProtocol(FProto).Transport.ReadAll( @FData, SizeOf(FData), 0, 1);
FHasData := TRUE;
end;
result := FData[0];
result := FData;
end;
@@ -681,7 +679,7 @@ begin
end;
procedure TJSONProtocolImpl.WriteMessageBegin( const aMsg : IMessage);
procedure TJSONProtocolImpl.WriteMessageBegin( const aMsg : TThriftMessage);
begin
ResetContextStack; // THRIFT-1473
@@ -700,7 +698,7 @@ begin
end;
procedure TJSONProtocolImpl.WriteStructBegin( const struc: IStruct);
procedure TJSONProtocolImpl.WriteStructBegin( const struc: TThriftStruct);
begin
WriteJSONObjectStart;
end;
@@ -712,7 +710,7 @@ begin
end;
procedure TJSONProtocolImpl.WriteFieldBegin( const field : IField);
procedure TJSONProtocolImpl.WriteFieldBegin( const field : TThriftField);
begin
WriteJSONInteger(field.ID);
WriteJSONObjectStart;
@@ -731,7 +729,7 @@ begin
// nothing to do
end;
procedure TJSONProtocolImpl.WriteMapBegin( const map: IMap);
procedure TJSONProtocolImpl.WriteMapBegin( const map: TThriftMap);
begin
WriteJSONArrayStart;
WriteJSONString( GetTypeNameForTypeID( map.KeyType));
@@ -748,7 +746,7 @@ begin
end;
procedure TJSONProtocolImpl.WriteListBegin( const list: IList);
procedure TJSONProtocolImpl.WriteListBegin( const list: TThriftList);
begin
WriteJSONArrayStart;
WriteJSONString( GetTypeNameForTypeID( list.ElementType));
@@ -762,7 +760,7 @@ begin
end;
procedure TJSONProtocolImpl.WriteSetBegin( const set_: ISet);
procedure TJSONProtocolImpl.WriteSetBegin( const set_: TThriftSet);
begin
WriteJSONArrayStart;
WriteJSONString( GetTypeNameForTypeID( set_.ElementType));
@@ -1051,11 +1049,11 @@ begin
end;
function TJSONProtocolImpl.ReadMessageBegin: IMessage;
function TJSONProtocolImpl.ReadMessageBegin: TThriftMessage;
begin
ResetContextStack; // THRIFT-1473
result := TMessageImpl.Create;
Init( result);
ReadJSONArrayStart;
if ReadJSONInteger <> VERSION
@@ -1073,10 +1071,10 @@ begin
end;
function TJSONProtocolImpl.ReadStructBegin : IStruct ;
function TJSONProtocolImpl.ReadStructBegin : TThriftStruct ;
begin
ReadJSONObjectStart;
result := TStructImpl.Create('');
Init( result);
end;
@@ -1086,11 +1084,11 @@ begin
end;
function TJSONProtocolImpl.ReadFieldBegin : IField;
function TJSONProtocolImpl.ReadFieldBegin : TThriftField;
var ch : Byte;
str : string;
begin
result := TFieldImpl.Create;
Init( result);
ch := FReader.Peek;
if ch = RBRACE[0]
then result.Type_ := TType.Stop
@@ -1110,10 +1108,10 @@ begin
end;
function TJSONProtocolImpl.ReadMapBegin : IMap;
function TJSONProtocolImpl.ReadMapBegin : TThriftMap;
var str : string;
begin
result := TMapImpl.Create;
Init( result);
ReadJSONArrayStart;
str := SysUtils.TEncoding.UTF8.GetString( ReadJSONString(FALSE));
@@ -1134,10 +1132,10 @@ begin
end;
function TJSONProtocolImpl.ReadListBegin : IList;
function TJSONProtocolImpl.ReadListBegin : TThriftList;
var str : string;
begin
result := TListImpl.Create;
Init( result);
ReadJSONArrayStart;
str := SysUtils.TEncoding.UTF8.GetString( ReadJSONString(FALSE));
@@ -1152,10 +1150,10 @@ begin
end;
function TJSONProtocolImpl.ReadSetBegin : ISet;
function TJSONProtocolImpl.ReadSetBegin : TThriftSet;
var str : string;
begin
result := TSetImpl.Create;
Init( result);
ReadJSONArrayStart;
str := SysUtils.TEncoding.UTF8.GetString( ReadJSONString(FALSE));

View File

@@ -71,7 +71,7 @@ type
{ Prepends the service name to the function name, separated by SEPARATOR.
Args: The original message.
}
procedure WriteMessageBegin( const msg: IMessage); override;
procedure WriteMessageBegin( const msg: TThriftMessage); override;
end;
@@ -86,14 +86,14 @@ begin
end;
procedure TMultiplexedProtocol.WriteMessageBegin( const msg: IMessage);
procedure TMultiplexedProtocol.WriteMessageBegin( const msg: TThriftMessage);
// Prepends the service name to the function name, separated by TMultiplexedProtocol.SEPARATOR.
var newMsg : IMessage;
var newMsg : TThriftMessage;
begin
case msg.Type_ of
TMessageType.Call,
TMessageType.Oneway : begin
newMsg := TMessageImpl.Create( FServiceName + SEPARATOR + msg.Name, msg.Type_, msg.SeqID);
Init( newMsg, FServiceName + SEPARATOR + msg.Name, msg.Type_, msg.SeqID);
inherited WriteMessageBegin( newMsg);
end;

File diff suppressed because it is too large

View File

@@ -38,9 +38,11 @@ uses
type
IThriftStream = interface
['{732621B3-F697-4D76-A1B0-B4DD5A8E4018}']
procedure Write( const buffer: TBytes; offset: Integer; count: Integer);
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer;
['{2A77D916-7446-46C1-8545-0AEC0008DBCA}']
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); overload;
procedure Write( const pBuf : Pointer; offset: Integer; count: Integer); overload;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; overload;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; overload;
procedure Open;
procedure Close;
procedure Flush;
@@ -50,10 +52,12 @@ type
TThriftStreamImpl = class( TInterfacedObject, IThriftStream)
private
procedure CheckSizeAndOffset( const buffer: TBytes; offset: Integer; count: Integer);
procedure CheckSizeAndOffset( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer); overload;
protected
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); virtual;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; virtual;
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); overload; inline;
procedure Write( const pBuf : Pointer; offset: Integer; count: Integer); overload; virtual;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; overload; inline;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; overload; virtual;
procedure Open; virtual; abstract;
procedure Close; virtual; abstract;
procedure Flush; virtual; abstract;
@@ -66,8 +70,8 @@ type
FStream : TStream;
FOwnsStream : Boolean;
protected
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); override;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; override;
procedure Write( const pBuf : Pointer; offset: Integer; count: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; override;
procedure Open; override;
procedure Close; override;
procedure Flush; override;
@@ -82,8 +86,8 @@ type
private
FStream : IStream;
protected
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); override;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; override;
procedure Write( const pBuf : Pointer; offset: Integer; count: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; override;
procedure Open; override;
procedure Close; override;
procedure Flush; override;
@@ -127,13 +131,17 @@ begin
// nothing to do
end;
function TThriftStreamAdapterCOM.Read( var buffer: TBytes; offset: Integer; count: Integer): Integer;
function TThriftStreamAdapterCOM.Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
begin
inherited;
if count >= buflen-offset
then count := buflen-offset;
Result := 0;
if FStream <> nil then begin
if count > 0 then begin
FStream.Read( @buffer[offset], count, @Result);
FStream.Read( @(PByteArray(pBuf)^[offset]), count, @Result);
end;
end;
end;
@@ -162,44 +170,53 @@ begin
end;
end;
procedure TThriftStreamAdapterCOM.Write( const buffer: TBytes; offset: Integer; count: Integer);
procedure TThriftStreamAdapterCOM.Write( const pBuf: Pointer; offset: Integer; count: Integer);
var nWritten : Integer;
begin
inherited;
if IsOpen then begin
if count > 0 then begin
FStream.Write( @buffer[0], count, @nWritten);
FStream.Write( @(PByteArray(pBuf)^[offset]), count, @nWritten);
end;
end;
end;
{ TThriftStreamImpl }
procedure TThriftStreamImpl.CheckSizeAndOffset(const buffer: TBytes; offset,
count: Integer);
var
len : Integer;
procedure TThriftStreamImpl.CheckSizeAndOffset( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer);
begin
if count > 0 then begin
len := Length( buffer );
if (offset < 0) or ( offset >= len) then begin
if (offset < 0) or ( offset >= buflen) then begin
raise ERangeError.Create( SBitsIndexError );
end;
if count > len then begin
if count > buflen then begin
raise ERangeError.Create( SBitsIndexError );
end;
end;
end;
function TThriftStreamImpl.Read(var buffer: TBytes; offset, count: Integer): Integer;
begin
if Length(buffer) > 0
then Result := Read( @buffer[0], Length(buffer), offset, count)
else Result := 0;
end;
function TThriftStreamImpl.Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
begin
Result := 0;
CheckSizeAndOffset( buffer, offset, count );
CheckSizeAndOffset( pBuf, buflen, offset, count );
end;
procedure TThriftStreamImpl.Write(const buffer: TBytes; offset, count: Integer);
begin
CheckSizeAndOffset( buffer, offset, count );
if Length(buffer) > 0
then Write( @buffer[0], offset, count);
end;
procedure TThriftStreamImpl.Write( const pBuf : Pointer; offset: Integer; count: Integer);
begin
CheckSizeAndOffset( pBuf, offset+count, offset, count);
end;
{ TThriftStreamAdapterDelphi }
@@ -241,14 +258,16 @@ begin
// nothing to do
end;
function TThriftStreamAdapterDelphi.Read(var buffer: TBytes; offset,
count: Integer): Integer;
function TThriftStreamAdapterDelphi.Read(const pBuf : Pointer; const buflen : Integer; offset, count: Integer): Integer;
begin
inherited;
Result := 0;
if count > 0 then begin
Result := FStream.Read( Pointer(@buffer[offset])^, count)
end;
if count >= buflen-offset
then count := buflen-offset;
if count > 0
then Result := FStream.Read( PByteArray(pBuf)^[offset], count)
else Result := 0;
end;
function TThriftStreamAdapterDelphi.ToArray: TBytes;
@@ -276,12 +295,11 @@ begin
end
end;
procedure TThriftStreamAdapterDelphi.Write(const buffer: TBytes; offset,
count: Integer);
procedure TThriftStreamAdapterDelphi.Write(const pBuf : Pointer; offset, count: Integer);
begin
inherited;
if count > 0 then begin
FStream.Write( Pointer(@buffer[offset])^, count)
FStream.Write( PByteArray(pBuf)^[offset], count)
end;
end;
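The interface change at the top of this file (and the matching ITransport change further below) moves the real work into a single pointer-plus-length Read/Write core, with the TBytes overloads reduced to thin wrappers that delegate to it; that is what lets call sites such as WriteByteDirect and ReadByte in the compact protocol hand over @b and @result directly instead of allocating a one-element TBytes each time. A small, hypothetical C++ rendering of the same overload layout follows; the names are not taken from the patch.

#include <cstddef>
#include <cstdint>
#include <vector>

// One virtual pointer+length core; the typed convenience overload is a thin
// non-virtual wrapper, so a caller holding a single byte (or a stack buffer)
// never has to build a temporary byte array first.
class Stream {
public:
  virtual ~Stream() = default;

  void write(const std::vector<uint8_t>& buf, std::size_t off, std::size_t count) {
    if (!buf.empty()) write(buf.data(), off, count);
  }
  virtual void write(const uint8_t* buf, std::size_t off, std::size_t count) = 0;
};

class MemoryStream : public Stream {
public:
  using Stream::write;   // re-expose the vector overload hidden by the override
  void write(const uint8_t* buf, std::size_t off, std::size_t count) override {
    data_.insert(data_.end(), buf + off, buf + off + count);
  }
  const std::vector<uint8_t>& data() const { return data_; }

private:
  std::vector<uint8_t> data_;
};

int main() {
  MemoryStream s;
  const uint8_t b = 0x42;
  s.write(&b, 0, 1);                               // single byte, no temporary vector
  s.write(std::vector<uint8_t>{1, 2, 3}, 0, 3);    // typed overload delegates to the core
}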

View File

@@ -48,16 +48,16 @@ type
FOpenTimeOut : DWORD; // separate value to allow for fail-fast-on-open scenarios
FOverlapped : Boolean;
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); override;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; override;
procedure Write( const pBuf : Pointer; offset, count : Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; override;
//procedure Open; override; - see derived classes
procedure Close; override;
procedure Flush; override;
function ReadDirect( var buffer: TBytes; offset: Integer; count: Integer): Integer;
function ReadOverlapped( var buffer: TBytes; offset: Integer; count: Integer): Integer;
procedure WriteDirect( const buffer: TBytes; offset: Integer; count: Integer);
procedure WriteOverlapped( const buffer: TBytes; offset: Integer; count: Integer);
function ReadDirect( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; overload;
function ReadOverlapped( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; overload;
procedure WriteDirect( const pBuf : Pointer; offset: Integer; count: Integer); overload;
procedure WriteOverlapped( const pBuf : Pointer; offset: Integer; count: Integer); overload;
function IsOpen: Boolean; override;
function ToArray: TBytes; override;
@@ -310,34 +310,67 @@ begin
end;
procedure TPipeStreamBase.Write(const buffer: TBytes; offset, count: Integer);
procedure TPipeStreamBase.Write( const pBuf : Pointer; offset, count : Integer);
begin
if FOverlapped
then WriteOverlapped( buffer, offset, count)
else WriteDirect( buffer, offset, count);
then WriteOverlapped( pBuf, offset, count)
else WriteDirect( pBuf, offset, count);
end;
function TPipeStreamBase.Read( var buffer: TBytes; offset, count: Integer): Integer;
function TPipeStreamBase.Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
begin
if FOverlapped
then result := ReadOverlapped( buffer, offset, count)
else result := ReadDirect( buffer, offset, count);
then result := ReadOverlapped( pBuf, buflen, offset, count)
else result := ReadDirect( pBuf, buflen, offset, count);
end;
procedure TPipeStreamBase.WriteDirect(const buffer: TBytes; offset, count: Integer);
procedure TPipeStreamBase.WriteDirect( const pBuf : Pointer; offset: Integer; count: Integer);
var cbWritten : DWORD;
begin
if not IsOpen
then raise TTransportExceptionNotOpen.Create('Called write on non-open pipe');
if not WriteFile( FPipe, buffer[offset], count, cbWritten, nil)
if not WriteFile( FPipe, PByteArray(pBuf)^[offset], count, cbWritten, nil)
then raise TTransportExceptionNotOpen.Create('Write to pipe failed');
end;
function TPipeStreamBase.ReadDirect( var buffer: TBytes; offset, count: Integer): Integer;
procedure TPipeStreamBase.WriteOverlapped( const pBuf : Pointer; offset: Integer; count: Integer);
var cbWritten, dwWait, dwError : DWORD;
overlapped : IOverlappedHelper;
begin
if not IsOpen
then raise TTransportExceptionNotOpen.Create('Called write on non-open pipe');
overlapped := TOverlappedHelperImpl.Create;
if not WriteFile( FPipe, PByteArray(pBuf)^[offset], count, cbWritten, overlapped.OverlappedPtr)
then begin
dwError := GetLastError;
case dwError of
ERROR_IO_PENDING : begin
dwWait := overlapped.WaitFor(FTimeout);
if (dwWait = WAIT_TIMEOUT)
then raise TTransportExceptionTimedOut.Create('Pipe write timed out');
if (dwWait <> WAIT_OBJECT_0)
or not GetOverlappedResult( FPipe, overlapped.Overlapped, cbWritten, TRUE)
then raise TTransportExceptionUnknown.Create('Pipe write error');
end;
else
raise TTransportExceptionUnknown.Create(SysErrorMessage(dwError));
end;
end;
ASSERT( DWORD(count) = cbWritten);
end;
function TPipeStreamBase.ReadDirect( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
var cbRead, dwErr : DWORD;
bytes, retries : LongInt;
bOk : Boolean;
@@ -374,47 +407,14 @@ begin
end;
// read the data (or block INFINITE-ly)
bOk := ReadFile( FPipe, buffer[offset], count, cbRead, nil);
bOk := ReadFile( FPipe, PByteArray(pBuf)^[offset], count, cbRead, nil);
if (not bOk) and (GetLastError() <> ERROR_MORE_DATA)
then result := 0 // No more data, possibly because client disconnected.
else result := cbRead;
end;
procedure TPipeStreamBase.WriteOverlapped(const buffer: TBytes; offset, count: Integer);
var cbWritten, dwWait, dwError : DWORD;
overlapped : IOverlappedHelper;
begin
if not IsOpen
then raise TTransportExceptionNotOpen.Create('Called write on non-open pipe');
overlapped := TOverlappedHelperImpl.Create;
if not WriteFile( FPipe, buffer[offset], count, cbWritten, overlapped.OverlappedPtr)
then begin
dwError := GetLastError;
case dwError of
ERROR_IO_PENDING : begin
dwWait := overlapped.WaitFor(FTimeout);
if (dwWait = WAIT_TIMEOUT)
then raise TTransportExceptionTimedOut.Create('Pipe write timed out');
if (dwWait <> WAIT_OBJECT_0)
or not GetOverlappedResult( FPipe, overlapped.Overlapped, cbWritten, TRUE)
then raise TTransportExceptionUnknown.Create('Pipe write error');
end;
else
raise TTransportExceptionUnknown.Create(SysErrorMessage(dwError));
end;
end;
ASSERT( DWORD(count) = cbWritten);
end;
function TPipeStreamBase.ReadOverlapped( var buffer: TBytes; offset, count: Integer): Integer;
function TPipeStreamBase.ReadOverlapped( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
var cbRead, dwWait, dwError : DWORD;
bOk : Boolean;
overlapped : IOverlappedHelper;
@@ -425,7 +425,7 @@ begin
overlapped := TOverlappedHelperImpl.Create;
// read the data
bOk := ReadFile( FPipe, buffer[offset], count, cbRead, overlapped.OverlappedPtr);
bOk := ReadFile( FPipe, PByteArray(pBuf)^[offset], count, cbRead, overlapped.OverlappedPtr);
if not bOk then begin
dwError := GetLastError;
case dwError of
@@ -768,8 +768,6 @@ var sd : PSECURITY_DESCRIPTOR;
sa : SECURITY_ATTRIBUTES; //TSecurityAttributes;
hCAR, hPipeW, hCAW, hPipe : THandle;
begin
result := FALSE;
sd := PSECURITY_DESCRIPTOR( LocalAlloc( LPTR,SECURITY_DESCRIPTOR_MIN_LENGTH));
try
Win32Check( InitializeSecurityDescriptor( sd, SECURITY_DESCRIPTOR_REVISION));
@@ -779,12 +777,14 @@ begin
sa.lpSecurityDescriptor := sd;
sa.bInheritHandle := TRUE; //allow passing handle to child
if not CreatePipe( hCAR, hPipeW, @sa, FBufSize) then begin //create stdin pipe
Result := CreatePipe( hCAR, hPipeW, @sa, FBufSize); //create stdin pipe
if not Result then begin //create stdin pipe
raise TTransportExceptionNotOpen.Create('TServerPipe CreatePipe (anon) failed, '+SysErrorMessage(GetLastError));
Exit;
end;
if not CreatePipe( hPipe, hCAW, @sa, FBufSize) then begin //create stdout pipe
Result := CreatePipe( hPipe, hCAW, @sa, FBufSize); //create stdout pipe
if not Result then begin //create stdout pipe
CloseHandle( hCAR);
CloseHandle( hPipeW);
raise TTransportExceptionNotOpen.Create('TServerPipe CreatePipe (anon) failed, '+SysErrorMessage(GetLastError));
@@ -795,9 +795,6 @@ begin
FClientAnonWrite := hCAW;
FReadHandle := hPipe;
FWriteHandle := hPipeW;
result := TRUE;
finally
if sd <> nil then LocalFree( Cardinal(sd));
end;

View File

@@ -44,16 +44,20 @@ uses
type
ITransport = interface
['{A4A9FC37-D620-44DC-AD21-662D16364CE4}']
['{DB84961E-8BB3-4532-99E1-A8C7AC2300F7}']
function GetIsOpen: Boolean;
property IsOpen: Boolean read GetIsOpen;
function Peek: Boolean;
procedure Open;
procedure Close;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer;
function ReadAll(var buf: TBytes; off: Integer; len: Integer): Integer;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer; overload;
function Read(const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; overload;
function ReadAll(var buf: TBytes; off: Integer; len: Integer): Integer; overload;
function ReadAll(const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; overload;
procedure Write( const buf: TBytes); overload;
procedure Write( const buf: TBytes; off: Integer; len: Integer); overload;
procedure Write( const pBuf : Pointer; off, len : Integer); overload;
procedure Write( const pBuf : Pointer; len : Integer); overload;
procedure Flush;
end;
@@ -64,10 +68,14 @@ type
function Peek: Boolean; virtual;
procedure Open(); virtual; abstract;
procedure Close(); virtual; abstract;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer; virtual; abstract;
function ReadAll(var buf: TBytes; off: Integer; len: Integer): Integer; virtual;
procedure Write( const buf: TBytes); overload; virtual;
procedure Write( const buf: TBytes; off: Integer; len: Integer); overload; virtual; abstract;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer; overload; inline;
function Read(const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; overload; virtual; abstract;
function ReadAll(var buf: TBytes; off: Integer; len: Integer): Integer; overload; inline;
function ReadAll(const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; overload; virtual;
procedure Write( const buf: TBytes); overload; inline;
procedure Write( const buf: TBytes; off: Integer; len: Integer); overload; inline;
procedure Write( const pBuf : Pointer; len : Integer); overload; inline;
procedure Write( const pBuf : Pointer; off, len : Integer); overload; virtual; abstract;
procedure Flush; virtual;
end;
@@ -135,8 +143,8 @@ type
function GetIsOpen: Boolean; override;
procedure Open(); override;
procedure Close(); override;
function Read( var buf: TBytes; off: Integer; len: Integer): Integer; override;
procedure Write( const buf: TBytes; off: Integer; len: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; override;
procedure Write( const pBuf : Pointer; off, len : Integer); override;
procedure Flush; override;
procedure SetConnectionTimeout(const Value: Integer);
@@ -193,8 +201,8 @@ type
SLEEP_TIME = 200;
{$ENDIF}
protected
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); override;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; override;
procedure Write( const pBuf : Pointer; offset, count: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; override;
procedure Open; override;
procedure Close; override;
procedure Flush; override;
@@ -233,8 +241,8 @@ type
procedure Open; override;
procedure Close; override;
procedure Flush; override;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer; override;
procedure Write( const buf: TBytes; off: Integer; len: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; override;
procedure Write( const pBuf : Pointer; off, len : Integer); override;
constructor Create( const AInputStream : IThriftStream; const AOutputStream : IThriftStream);
destructor Destroy; override;
end;
@@ -246,8 +254,8 @@ type
FReadBuffer : TMemoryStream;
FWriteBuffer : TMemoryStream;
protected
procedure Write( const buffer: TBytes; offset: Integer; count: Integer); override;
function Read( var buffer: TBytes; offset: Integer; count: Integer): Integer; override;
procedure Write( const pBuf : Pointer; offset: Integer; count: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer; override;
procedure Open; override;
procedure Close; override;
procedure Flush; override;
@@ -299,8 +307,8 @@ type
public
procedure Open(); override;
procedure Close(); override;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer; override;
procedure Write( const buf: TBytes; off: Integer; len: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; override;
procedure Write( const pBuf : Pointer; off, len : Integer); override;
constructor Create( const ATransport : IStreamTransport ); overload;
constructor Create( const ATransport : IStreamTransport; ABufSize: Integer); overload;
property UnderlyingTransport: ITransport read GetUnderlyingTransport;
@@ -377,8 +385,8 @@ type
function GetIsOpen: Boolean; override;
procedure Close(); override;
function Read(var buf: TBytes; off: Integer; len: Integer): Integer; override;
procedure Write( const buf: TBytes; off: Integer; len: Integer); override;
function Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer; override;
procedure Write( const pBuf : Pointer; off, len : Integer); override;
procedure Flush; override;
end;
@@ -404,24 +412,47 @@ begin
Result := IsOpen;
end;
function TTransportImpl.ReadAll( var buf: TBytes; off, len: Integer): Integer;
var
got : Integer;
ret : Integer;
function TTransportImpl.Read(var buf: TBytes; off: Integer; len: Integer): Integer;
begin
got := 0;
while got < len do begin
ret := Read( buf, off + got, len - got);
if ret > 0
then Inc( got, ret)
else raise TTransportExceptionNotOpen.Create( 'Cannot read, Remote side has closed' );
end;
Result := got;
if Length(buf) > 0
then result := Read( @buf[0], Length(buf), off, len)
else result := 0;
end;
function TTransportImpl.ReadAll(var buf: TBytes; off: Integer; len: Integer): Integer;
begin
if Length(buf) > 0
then result := ReadAll( @buf[0], Length(buf), off, len)
else result := 0;
end;
procedure TTransportImpl.Write( const buf: TBytes);
begin
Self.Write( buf, 0, Length(buf) );
if Length(buf) > 0
then Write( @buf[0], 0, Length(buf));
end;
procedure TTransportImpl.Write( const buf: TBytes; off: Integer; len: Integer);
begin
if Length(buf) > 0
then Write( @buf[0], off, len);
end;
function TTransportImpl.ReadAll(const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer;
var ret : Integer;
begin
result := 0;
while result < len do begin
ret := Read( pBuf, buflen, off + result, len - result);
if ret > 0
then Inc( result, ret)
else raise TTransportExceptionNotOpen.Create( 'Cannot read, Remote side has closed' );
end;
end;
procedure TTransportImpl.Write( const pBuf : Pointer; len : Integer);
begin
Self.Write( pBuf, 0, len);
end;
{ THTTPClientImpl }
@@ -501,14 +532,14 @@ begin
// nothing to do
end;
function THTTPClientImpl.Read( var buf: TBytes; off, len: Integer): Integer;
function THTTPClientImpl.Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer;
begin
if FInputStream = nil then begin
raise TTransportExceptionNotOpen.Create('No request has been sent');
end;
try
Result := FInputStream.Read( buf, off, len )
Result := FInputStream.Read( pBuf, buflen, off, len)
except
on E: Exception
do raise TTransportExceptionUnknown.Create(E.Message);
@@ -550,9 +581,9 @@ begin
FReadTimeout := Value
end;
procedure THTTPClientImpl.Write( const buf: TBytes; off, len: Integer);
procedure THTTPClientImpl.Write( const pBuf : Pointer; off, len : Integer);
begin
FOutputStream.Write( buf, off, len);
FOutputStream.Write( pBuf, off, len);
end;
{ TTransportException }
@@ -931,7 +962,7 @@ begin
// nothing to do
end;
function TBufferedStreamImpl.Read( var buffer: TBytes; offset: Integer; count: Integer): Integer;
function TBufferedStreamImpl.Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
var
nRead : Integer;
tempbuf : TBytes;
@@ -954,7 +985,7 @@ begin
if FReadBuffer.Position < FReadBuffer.Size then begin
nRead := Min( FReadBuffer.Size - FReadBuffer.Position, count);
Inc( Result, FReadBuffer.Read( Pointer(@buffer[offset])^, nRead));
Inc( Result, FReadBuffer.Read( PByteArray(pBuf)^[offset], nRead));
Dec( count, nRead);
Inc( offset, nRead);
end;
@@ -979,12 +1010,12 @@ begin
end;
end;
procedure TBufferedStreamImpl.Write( const buffer: TBytes; offset: Integer; count: Integer);
procedure TBufferedStreamImpl.Write( const pBuf : Pointer; offset: Integer; count: Integer);
begin
inherited;
if count > 0 then begin
if IsOpen then begin
FWriteBuffer.Write( Pointer(@buffer[offset])^, count );
FWriteBuffer.Write( PByteArray(pBuf)^[offset], count );
if FWriteBuffer.Size > FBufSize then begin
Flush;
end;
@@ -1043,22 +1074,22 @@ begin
end;
function TStreamTransportImpl.Read(var buf: TBytes; off, len: Integer): Integer;
function TStreamTransportImpl.Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer;
begin
if FInputStream = nil then begin
raise TTransportExceptionNotOpen.Create('Cannot read from null inputstream' );
end;
Result := FInputStream.Read( buf, off, len );
Result := FInputStream.Read( pBuf,buflen, off, len );
end;
procedure TStreamTransportImpl.Write(const buf: TBytes; off, len: Integer);
procedure TStreamTransportImpl.Write( const pBuf : Pointer; off, len : Integer);
begin
if FOutputStream = nil then begin
raise TTransportExceptionNotOpen.Create('Cannot write to null outputstream' );
end;
FOutputStream.Write( buf, off, len );
FOutputStream.Write( pBuf, off, len );
end;
{ TBufferedTransportImpl }
@@ -1114,18 +1145,18 @@ begin
FTransport.Open
end;
function TBufferedTransportImpl.Read(var buf: TBytes; off, len: Integer): Integer;
function TBufferedTransportImpl.Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer;
begin
Result := 0;
if FInputBuffer <> nil then begin
Result := FInputBuffer.Read( buf, off, len );
Result := FInputBuffer.Read( pBuf,buflen, off, len );
end;
end;
procedure TBufferedTransportImpl.Write(const buf: TBytes; off, len: Integer);
procedure TBufferedTransportImpl.Write( const pBuf : Pointer; off, len : Integer);
begin
if FOutputBuffer <> nil then begin
FOutputBuffer.Write( buf, off, len );
FOutputBuffer.Write( pBuf, off, len );
end;
end;
@@ -1222,24 +1253,21 @@ begin
FTransport.Open;
end;
function TFramedTransportImpl.Read(var buf: TBytes; off, len: Integer): Integer;
var
got : Integer;
function TFramedTransportImpl.Read( const pBuf : Pointer; const buflen : Integer; off: Integer; len: Integer): Integer;
begin
if FReadBuffer <> nil then begin
if len > 0
then got := FReadBuffer.Read( Pointer(@buf[off])^, len )
else got := 0;
if got > 0 then begin
Result := got;
if len > (buflen-off)
then len := buflen-off;
if (FReadBuffer <> nil) and (len > 0) then begin
result := FReadBuffer.Read( PByteArray(pBuf)^[off], len);
if result > 0 then begin
Exit;
end;
end;
ReadFrame;
if len > 0
then Result := FReadBuffer.Read( Pointer(@buf[off])^, len)
then Result := FReadBuffer.Read( PByteArray(pBuf)^[off], len)
else Result := 0;
end;
@@ -1260,14 +1288,15 @@ begin
FTransport.ReadAll( buff, 0, size );
FReadBuffer.Free;
FReadBuffer := TMemoryStream.Create;
FReadBuffer.Write( Pointer(@buff[0])^, size );
if Length(buff) > 0
then FReadBuffer.Write( Pointer(@buff[0])^, size );
FReadBuffer.Position := 0;
end;
procedure TFramedTransportImpl.Write(const buf: TBytes; off, len: Integer);
procedure TFramedTransportImpl.Write( const pBuf : Pointer; off, len : Integer);
begin
if len > 0
then FWriteBuffer.Write( Pointer(@buf[off])^, len );
then FWriteBuffer.Write( PByteArray(pBuf)^[off], len );
end;
{ TFramedTransport.TFactory }
@@ -1447,7 +1476,7 @@ end;
{$ENDIF}
{$IFDEF OLD_SOCKETS}
function TTcpSocketStreamImpl.Read(var buffer: TBytes; offset, count: Integer): Integer;
function TTcpSocketStreamImpl.Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
// old sockets version
var wfd : TWaitForData;
wsaError,
@@ -1462,7 +1491,7 @@ begin
else msecs := DEFAULT_THRIFT_TIMEOUT;
result := 0;
pDest := Pointer(@buffer[offset]);
pDest := @(PByteArray(pBuf)^[offset]);
while count > 0 do begin
while TRUE do begin
@@ -1513,7 +1542,7 @@ begin
end;
end;
procedure TTcpSocketStreamImpl.Write(const buffer: TBytes; offset, count: Integer);
procedure TTcpSocketStreamImpl.Write( const pBuf : Pointer; offset, count: Integer);
// old sockets version
var bCanWrite, bError : Boolean;
retval, wsaError : Integer;
@@ -1537,12 +1566,12 @@ begin
if bError or not bCanWrite
then raise TTransportExceptionUnknown.Create('unknown error');
FTcpClient.SendBuf( Pointer(@buffer[offset])^, count);
FTcpClient.SendBuf( PByteArray(pBuf)^[offset], count);
end;
{$ELSE}
function TTcpSocketStreamImpl.Read(var buffer: TBytes; offset, count: Integer): Integer;
function TTcpSocketStreamImpl.Read( const pBuf : Pointer; const buflen : Integer; offset: Integer; count: Integer): Integer;
// new sockets version
var nBytes : Integer;
pDest : PByte;
@@ -1550,7 +1579,7 @@ begin
inherited;
result := 0;
pDest := Pointer(@buffer[offset]);
pDest := @(PByteArray(pBuf)^[offset]);
while count > 0 do begin
nBytes := FTcpClient.Read(pDest^, count);
if nBytes = 0 then Exit;
@@ -1579,7 +1608,7 @@ begin
SetLength(Result, Length(Result) - 1024 + len);
end;
procedure TTcpSocketStreamImpl.Write(const buffer: TBytes; offset, count: Integer);
procedure TTcpSocketStreamImpl.Write( const pBuf : Pointer; offset, count: Integer);
// new sockets version
begin
inherited;
@@ -1587,7 +1616,7 @@ begin
if not FTcpClient.IsOpen
then raise TTransportExceptionNotOpen.Create('not open');
FTcpClient.Write(buffer[offset], count);
FTcpClient.Write( PByteArray(pBuf)^[offset], count);
end;
{$ENDIF}

View File

@@ -172,10 +172,10 @@ end;
class function TApplicationException.Read( const iprot: IProtocol): TApplicationException;
var
field : IField;
field : TThriftField;
msg : string;
typ : TExceptionType;
struc : IStruct;
struc : TThriftStruct;
begin
msg := '';
typ := TExceptionType.Unknown;
@@ -220,12 +220,11 @@ end;
procedure TApplicationException.Write( const oprot: IProtocol);
var
struc : IStruct;
field : IField;
struc : TThriftStruct;
field : TThriftField;
begin
struc := TStructImpl.Create( 'TApplicationException' );
field := TFieldImpl.Create;
Init(struc, 'TApplicationException');
Init(field);
oprot.WriteStructBegin( struc );
if Message <> '' then

View File

@@ -22,7 +22,8 @@ unit TestClient;
{$I ../src/Thrift.Defines.inc}
{.$DEFINE StressTest} // activate to stress-test the server with frequent connects/disconnects
{.$DEFINE PerfTest} // activate to activate the performance test
{.$DEFINE PerfTest} // activate the performance test
{$DEFINE Exceptions} // activate the exceptions test (or disable while debugging)
interface
@@ -258,7 +259,7 @@ begin
if s = 'buffered' then Include( layered, trns_Buffered)
else if s = 'framed' then Include( layered, trns_Framed)
else if s = 'http' then endpoint := trns_Http
else if s = 'evhttp' then endpoint := trns_AnonPipes
else if s = 'evhttp' then endpoint := trns_EvHttp
else InvalidArgs;
end
else if s = '--protocol' then begin
@@ -462,6 +463,7 @@ begin
StressTest( client);
{$ENDIF StressTest}
{$IFDEF Exceptions}
// in-depth exception test
// (1) do we get an exception at all?
// (2) do we get the right exception?
@@ -510,6 +512,7 @@ begin
on e:TTransportException do Expect( FALSE, 'Unexpected : "'+e.ToString+'"');
on e:Exception do Expect( FALSE, 'Unexpected exception "'+e.ClassName+'"');
end;
{$ENDIF Exceptions}
// simple things
@@ -525,6 +528,9 @@ begin
s := client.testString('Test');
Expect( s = 'Test', 'testString(''Test'') = "'+s+'"');
s := client.testString(''); // empty string
Expect( s = '', 'testString('''') = "'+s+'"');
s := client.testString(HUGE_TEST_STRING);
Expect( length(s) = length(HUGE_TEST_STRING),
'testString( length(HUGE_TEST_STRING) = '+IntToStr(Length(HUGE_TEST_STRING))+') '
@@ -540,6 +546,7 @@ begin
i64 := client.testI64(-34359738368);
Expect( i64 = -34359738368, 'testI64(-34359738368) = ' + IntToStr( i64));
// random binary
binOut := PrepareBinaryData( TRUE);
Console.WriteLine('testBinary('+BytesToHex(binOut)+')');
try
@@ -552,6 +559,19 @@ begin
on e:Exception do Expect( FALSE, 'testBinary(): Unexpected exception "'+e.ClassName+'": '+e.Message);
end;
// empty binary
SetLength( binOut, 0);
Console.WriteLine('testBinary('+BytesToHex(binOut)+')');
try
binIn := client.testBinary(binOut);
Expect( Length(binOut) = Length(binIn), 'testBinary(): length '+IntToStr(Length(binOut))+' = '+IntToStr(Length(binIn)));
i32 := Min( Length(binOut), Length(binIn));
Expect( CompareMem( binOut, binIn, i32), 'testBinary('+BytesToHex(binOut)+') = '+BytesToHex(binIn));
except
on e:TApplicationException do Console.WriteLine('testBinary(): '+e.Message);
on e:Exception do Expect( FALSE, 'testBinary(): Unexpected exception "'+e.ClassName+'": '+e.Message);
end;
Console.WriteLine('testDouble(5.325098235)');
dub := client.testDouble(5.325098235);
Expect( abs(dub-5.325098235) < 1e-14, 'testDouble(5.325098235) = ' + FloatToStr( dub));
@@ -1037,8 +1057,8 @@ procedure TClientThread.JSONProtocolReadWriteTest;
// other clients or servers expect as the real JSON. This is beyond the scope of this test.
var prot : IProtocol;
stm : TStringStream;
list : IList;
binary, binRead : TBytes;
list : TThriftList;
binary, binRead, emptyBinary : TBytes;
i,iErr : Integer;
const
TEST_SHORT = ShortInt( $FE);
@@ -1061,6 +1081,7 @@ begin
// prepare binary data
binary := PrepareBinaryData( FALSE);
SetLength( emptyBinary, 0); // empty binary data block
// output setup
prot := TJSONProtocolImpl.Create(
@@ -1068,7 +1089,8 @@ begin
nil, TThriftStreamAdapterDelphi.Create( stm, FALSE)));
// write
prot.WriteListBegin( TListImpl.Create( TType.String_, 9));
Init( list, TType.String_, 9);
prot.WriteListBegin( list);
prot.WriteBool( TRUE);
prot.WriteBool( FALSE);
prot.WriteByte( TEST_SHORT);
@@ -1078,6 +1100,8 @@ begin
prot.WriteDouble( TEST_DOUBLE);
prot.WriteString( TEST_STRING);
prot.WriteBinary( binary);
prot.WriteString( ''); // empty string
prot.WriteBinary( emptyBinary); // empty binary data block
prot.WriteListEnd;
// input setup
@@ -1100,6 +1124,8 @@ begin
Expect( abs(prot.ReadDouble-TEST_DOUBLE) < abs(DELTA_DOUBLE), 'WriteDouble/ReadDouble');
Expect( prot.ReadString = TEST_STRING, 'WriteString/ReadString');
binRead := prot.ReadBinary;
Expect( Length(prot.ReadString) = 0, 'WriteString/ReadString (empty string)');
Expect( Length(prot.ReadBinary) = 0, 'empty WriteBinary/ReadBinary (empty data block)');
prot.ReadListEnd;
// test binary data

View File

@@ -52,4 +52,49 @@ enum keywords {
}
struct Struct_lists {
1: list<Struct_simple> init;
2: list<Struct_simple> struc;
3: list<Struct_simple> field;
4: list<Struct_simple> field_;
5: list<Struct_simple> tracker;
6: list<Struct_simple> Self;
}
struct Struct_structs {
1: Struct_simple init;
2: Struct_simple struc;
3: Struct_simple field;
4: Struct_simple field_;
5: Struct_simple tracker;
6: Struct_simple Self;
}
struct Struct_simple {
1: bool init;
2: bool struc;
3: bool field;
4: bool field_;
5: bool tracker;
6: bool Self;
}
struct Struct_strings {
1: string init;
2: string struc;
3: string field;
4: string field_;
5: string tracker;
6: string Self;
}
struct Struct_binary {
1: binary init;
2: binary struc;
3: binary field;
4: binary field_;
5: binary tracker;
6: binary Self;
}

View File

@@ -22,6 +22,7 @@ unit TestSerializer.Data;
interface
uses
SysUtils,
Thrift.Collections,
DebugProtoTest;
@@ -194,7 +195,7 @@ begin
{$IF cDebugProtoTest_Option_AnsiStr_Binary}
result.SetBase64('base64');
{$ELSE}
not yet impl
result.SetBase64( TEncoding.UTF8.GetBytes('base64'));
{$IFEND}
// byte, i16, and i64 lists are populated by default constructor
@@ -338,7 +339,7 @@ begin
{$IF cDebugProtoTest_Option_AnsiStr_Binary}
result.A_binary := AnsiString( #0#1#2#3#4#5#6#7#8);
{$ELSE}
not yet impl
result.A_binary := TEncoding.UTF8.GetBytes( #0#1#2#3#4#5#6#7#8);
{$IFEND}
end;

View File

@@ -35,6 +35,7 @@ uses
Thrift.Serializer in '..\..\src\Thrift.Serializer.pas',
Thrift.Stream in '..\..\src\Thrift.Stream.pas',
Thrift.TypeRegistry in '..\..\src\Thrift.TypeRegistry.pas',
ReservedKeywords,
DebugProtoTest,
TestSerializer.Data;

View File

@@ -44,7 +44,7 @@ const
function CreatePing : IPing;
begin
result := TPingImpl.Create;
result.Version1 := Skiptest.One.TConstants.SKIPTESTSERVICE_VERSION;
result.Version1 := Tskiptest_version_1Constants.SKIPTESTSERVICE_VERSION;
end;
@@ -179,7 +179,7 @@ const
FILE_JSON = 'pingpong.json';
begin
try
Writeln( 'Delphi SkipTest '+IntToStr(TConstants.SKIPTESTSERVICE_VERSION)+' using '+Thrift.Version);
Writeln( 'Delphi SkipTest '+IntToStr(Tskiptest_version_1Constants.SKIPTESTSERVICE_VERSION)+' using '+Thrift.Version);
Writeln;
Writeln('Binary protocol');

View File

@@ -45,7 +45,7 @@ var list : IThriftList<IPong>;
set_ : IHashSet<string>;
begin
result := TPingImpl.Create;
result.Version1 := Skiptest.Two.TConstants.SKIPTESTSERVICE_VERSION;
result.Version1 := Tskiptest_version_2Constants.SKIPTESTSERVICE_VERSION;
result.BoolVal := TRUE;
result.ByteVal := 2;
result.DbVal := 3;
@@ -206,7 +206,7 @@ const
FILE_JSON = 'pingpong.json';
begin
try
Writeln( 'Delphi SkipTest '+IntToStr(TConstants.SKIPTESTSERVICE_VERSION)+' using '+Thrift.Version);
Writeln( 'Delphi SkipTest '+IntToStr(Tskiptest_version_2Constants.SKIPTESTSERVICE_VERSION)+' using '+Thrift.Version);
Writeln;
Writeln('Binary protocol');

View File

@@ -1 +1,3 @@
Please follow [General Coding Standards](/doc/coding_standards.md)
Particularly for Erlang please follow the Erlang [Programming Rules and Conventions](http://www.erlang.se/doc/programming_rules.shtml).

View File

@@ -36,7 +36,7 @@
terminate/2,
code_change/3 ]).
-record( state, { client = nil,
-record( state, { client = nil,
host,
port,
thrift_svc,
@@ -226,9 +226,9 @@ timer_fun() ->
end.
-else.
timer_fun() ->
T1 = erlang:now(),
T1 = erlang:timestamp(),
fun() ->
T2 = erlang:now(),
T2 = erlang:timestamp(),
timer:now_diff(T2, T1)
end.
-endif.

View File

@@ -21,7 +21,6 @@ if GOVERSION_LT_17
COMPILER_EXTRAFLAG=",legacy_context"
endif
THRIFT = $(top_builddir)/compiler/cpp/thrift
THRIFTARGS = -out gopath/src/ --gen go:thrift_import=thrift$(COMPILER_EXTRAFLAG)
THRIFTTEST = $(top_srcdir)/test/ThriftTest.thrift

View File

@@ -30,6 +30,17 @@ const (
PROTOCOL_ERROR = 7
)
var defaultApplicationExceptionMessage = map[int32]string{
UNKNOWN_APPLICATION_EXCEPTION: "unknown application exception",
UNKNOWN_METHOD: "unknown method",
INVALID_MESSAGE_TYPE_EXCEPTION: "invalid message type",
WRONG_METHOD_NAME: "wrong method name",
BAD_SEQUENCE_ID: "bad sequence ID",
MISSING_RESULT: "missing result",
INTERNAL_ERROR: "unknown internal error",
PROTOCOL_ERROR: "unknown protocol error",
}
// Application level Thrift exception
type TApplicationException interface {
TException
@@ -44,7 +55,10 @@ type tApplicationException struct {
}
func (e tApplicationException) Error() string {
return e.message
if e.message != "" {
return e.message
}
return defaultApplicationExceptionMessage[e.type_]
}
func NewTApplicationException(type_ int32, message string) TApplicationException {
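(Editorial note, not part of the diff.) The hunk above makes `TApplicationException.Error()` fall back to a per-type default message when no explicit message was supplied. A minimal sketch of the resulting behaviour; the import path is the usual upstream one and everything outside the constructor and constant shown in the diff is illustrative:

```go
package main

import (
	"fmt"

	"github.com/apache/thrift/lib/go/thrift"
)

func main() {
	// No message supplied: Error() now returns the default text for the
	// exception type instead of an empty string.
	unknown := thrift.NewTApplicationException(thrift.UNKNOWN_METHOD, "")
	fmt.Println(unknown.Error()) // expected: "unknown method"

	// An explicit message still takes precedence over the default.
	custom := thrift.NewTApplicationException(thrift.UNKNOWN_METHOD, "no such RPC")
	fmt.Println(custom.Error()) // expected: "no such RPC"
}
```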

View File

@@ -25,7 +25,7 @@ import (
func TestTApplicationException(t *testing.T) {
exc := NewTApplicationException(UNKNOWN_APPLICATION_EXCEPTION, "")
if exc.Error() != "" {
if exc.Error() != defaultApplicationExceptionMessage[UNKNOWN_APPLICATION_EXCEPTION] {
t.Fatalf("Expected empty string for exception but found '%s'", exc.Error())
}
if exc.TypeId() != UNKNOWN_APPLICATION_EXCEPTION {

View File

@@ -90,7 +90,8 @@ func (p *TSSLSocket) Open() error {
// If we have a hostname, we need to pass the hostname to tls.Dial for
// certificate hostname checks.
if p.hostPort != "" {
if p.conn, err = tls.Dial("tcp", p.hostPort, p.cfg); err != nil {
if p.conn, err = tls.DialWithDialer(&net.Dialer{
Timeout: p.timeout}, "tcp", p.hostPort, p.cfg); err != nil {
return NewTTransportException(NOT_OPEN, err.Error())
}
} else {
@@ -106,7 +107,8 @@ func (p *TSSLSocket) Open() error {
if len(p.addr.String()) == 0 {
return NewTTransportException(NOT_OPEN, "Cannot open bad address.")
}
if p.conn, err = tls.Dial(p.addr.Network(), p.addr.String(), p.cfg); err != nil {
if p.conn, err = tls.DialWithDialer(&net.Dialer{
Timeout: p.timeout}, p.addr.Network(), p.addr.String(), p.cfg); err != nil {
return NewTTransportException(NOT_OPEN, err.Error())
}
}
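(Editorial note, not part of the diff.) The switch above from `tls.Dial` to `tls.DialWithDialer` is what lets the socket's configured timeout apply to the TCP connect phase of the TLS handshake as well; `tls.Dial` uses a zero-value dialer with no timeout. A standalone sketch of the same pattern — host, timeout and TLS config are placeholders, not values from the diff:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// dialTLSWithTimeout wraps tls.DialWithDialer so the timeout covers the
// underlying TCP connect as well as the TLS handshake.
func dialTLSWithTimeout(hostPort string, timeout time.Duration, cfg *tls.Config) (*tls.Conn, error) {
	return tls.DialWithDialer(&net.Dialer{Timeout: timeout}, "tcp", hostPort, cfg)
}

func main() {
	conn, err := dialTLSWithTimeout("example.com:443", 5*time.Second, &tls.Config{})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```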

View File

@@ -17,7 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
THRIFTCMD = $(THRIFT) --gen haxe -r
THRIFTTEST = $(top_srcdir)/test/ThriftTest.thrift
AGGR = $(top_srcdir)/contrib/async-test/aggr.thrift

View File

@@ -40,7 +40,7 @@ Library
Hs-Source-Dirs:
src
Build-Depends:
base >= 4, base < 5, containers, ghc-prim, attoparsec, binary, bytestring >= 0.10, base64-bytestring, hashable, HTTP, text, hspec-core < 2.4.0, unordered-containers >= 0.2.6, vector >= 0.10.12.2, QuickCheck >= 2.8.2, split
base >= 4, base < 5, containers, ghc-prim, attoparsec, binary, bytestring >= 0.10, base64-bytestring, hashable, HTTP, text, hspec-core > 2.4.0, unordered-containers >= 0.2.6, vector >= 0.10.12.2, QuickCheck >= 2.8.2, split
if flag(network-uri)
build-depends: network-uri >= 2.6, network >= 2.6
else

View File

@@ -19,8 +19,6 @@
export CLASSPATH
THRIFT = $(top_builddir)/compiler/cpp/thrift
all-local:
$(ANT) $(ANT_FLAGS)

View File

@@ -35,7 +35,7 @@ public final class TByteBuffer extends TTransport {
final int n = Math.min(byteBuffer.remaining(), len);
if (n > 0) {
try {
byteBuffer.get(buf, off, len);
byteBuffer.get(buf, off, n);
} catch (BufferUnderflowException e) {
throw new TTransportException("Unexpected end of input buffer", e);
}

View File

@@ -16,8 +16,6 @@
# under the License.
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: $(top_srcdir)/test/ThriftTest.thrift
$(THRIFT) --gen js:node -o test/ $(top_srcdir)/test/ThriftTest.thrift

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: ../../../test/ThriftTest.thrift TestValidators.thrift
mkdir -p ./packages
$(THRIFT) --gen php -r --out ./packages ../../../test/ThriftTest.thrift

View File

@@ -54,8 +54,6 @@ test_server_LDADD = \
#
# Common thrift code generation rules
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-c_glib/t_test_second_service.c gen-c_glib/t_test_second_service.h gen-c_glib/t_test_thrift_test.c gen-c_glib/t_test_thrift_test.h gen-c_glib/t_test_thrift_test_types.c gen-c_glib/t_test_thrift_test_types.h: $(top_srcdir)/test/ThriftTest.thrift $(THRIFT)
$(THRIFT) --gen c_glib -r $<

View File

@@ -98,8 +98,6 @@ StressTestNonBlocking_LDADD = \
#
# Common thrift code generation rules
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-cpp/ThriftTest.cpp gen-cpp/ThriftTest_types.cpp gen-cpp/ThriftTest_constants.cpp: $(top_srcdir)/test/ThriftTest.thrift $(THRIFT)
$(THRIFT) --gen cpp:templates,cob_style -r $<

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-dart/thrift_test/lib/thrift_test.dart: ../ThriftTest.thrift
$(THRIFT) --gen dart ../ThriftTest.thrift

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
THRIFT_FILES = $(wildcard ../*.thrift)
if ERLANG_OTP16

View File

@@ -160,12 +160,11 @@ start(Args) ->
ClientS4
end,
%% Use deprecated erlang:now until we start requiring OTP18
%% Started = erlang:monotonic_time(milli_seconds),
{_, StartSec, StartUSec} = erlang:now(),
{_, StartSec, StartUSec} = erlang:timestamp(),
error_logger:info_msg("testOneway"),
{Client20, {ok, ok}} = thrift_client:call(Client19, testOneway, [1]),
{_, EndSec, EndUSec} = erlang:now(),
{_, EndSec, EndUSec} = erlang:timestamp(),
Elapsed = (EndSec - StartSec) * 1000 + (EndUSec - StartUSec) / 1000,
if
Elapsed > 1000 -> exit(1);

View File

@@ -22,7 +22,6 @@ if GOVERSION_LT_17
COMPILER_EXTRAFLAG=",legacy_context"
endif
THRIFT = $(top_builddir)/compiler/cpp/thrift
THRIFTCMD = $(THRIFT) -out src/gen --gen go:thrift_import=thrift$(COMPILER_EXTRAFLAG)
THRIFTTEST = $(top_srcdir)/test/ThriftTest.thrift

View File

@@ -17,7 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
THRIFTCMD = $(THRIFT) --gen haxe -r
THRIFTTEST = $(top_srcdir)/test/ThriftTest.thrift

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: $(THRIFT) ../ConstantsDemo.thrift ../DebugProtoTest.thrift ../ThriftTest.thrift ../Include.thrift
$(THRIFT) --gen hs ../ConstantsDemo.thrift
$(THRIFT) --gen hs ../DebugProtoTest.thrift

View File

@@ -2,22 +2,15 @@
"cpp-cpp_binary_buffered-ip-ssl",
"cpp-cpp_binary_framed-ip-ssl",
"cpp-cpp_binary_http-domain",
"cpp-cpp_binary_http-ip",
"cpp-cpp_binary_http-ip-ssl",
"cpp-cpp_compact_buffered-ip-ssl",
"cpp-cpp_compact_framed-ip-ssl",
"cpp-cpp_compact_http-domain",
"cpp-cpp_compact_http-ip",
"cpp-cpp_compact_http-ip-ssl",
"cpp-cpp_header_buffered-ip-ssl",
"cpp-cpp_header_framed-ip-ssl",
"cpp-cpp_header_http-domain",
"cpp-cpp_header_http-ip-ssl",
"cpp-cpp_json_buffered-ip-ssl",
"cpp-cpp_json_framed-ip",
"cpp-cpp_json_framed-ip-ssl",
"cpp-cpp_json_http-domain",
"cpp-cpp_json_http-ip",
"cpp-cpp_json_http-ip-ssl",
"cpp-dart_binary_http-ip",
"cpp-dart_compact_http-ip",
@@ -34,60 +27,16 @@
"cpp-java_compact_http-ip-ssl",
"cpp-java_json_http-ip",
"cpp-java_json_http-ip-ssl",
"csharp-c_glib_binary_buffered-ip-ssl",
"csharp-c_glib_binary_framed-ip-ssl",
"csharp-c_glib_compact_buffered-ip-ssl",
"csharp-c_glib_compact_framed-ip-ssl",
"csharp-cpp_binary_buffered-ip-ssl",
"csharp-cpp_binary_framed-ip-ssl",
"csharp-cpp_compact_buffered-ip-ssl",
"csharp-cpp_compact_framed-ip-ssl",
"csharp-cpp_json_buffered-ip-ssl",
"csharp-cpp_json_framed-ip-ssl",
"csharp-d_binary_buffered-ip-ssl",
"csharp-d_compact_buffered-ip-ssl",
"csharp-d_json_buffered-ip-ssl",
"csharp-d_binary_framed-ip-ssl",
"csharp-d_compact_buffered-ip-ssl",
"csharp-d_compact_framed-ip-ssl",
"csharp-d_json_buffered-ip-ssl",
"csharp-d_json_framed-ip-ssl",
"csharp-erl_binary_buffered-ip-ssl",
"csharp-erl_binary_framed-ip-ssl",
"csharp-erl_compact_buffered-ip-ssl",
"csharp-erl_compact_framed-ip-ssl",
"csharp-go_binary_buffered-ip-ssl",
"csharp-go_binary_framed-ip-ssl",
"csharp-go_compact_buffered-ip-ssl",
"csharp-go_compact_framed-ip-ssl",
"csharp-go_json_buffered-ip-ssl",
"csharp-go_json_framed-ip-ssl",
"csharp-nodejs_binary_buffered-ip-ssl",
"csharp-nodejs_binary_framed-ip-ssl",
"csharp-nodejs_compact_buffered-ip-ssl",
"csharp-nodejs_compact_framed-ip-ssl",
"csharp-nodejs_json_buffered-ip-ssl",
"csharp-nodejs_json_framed-ip-ssl",
"csharp-perl_binary_buffered-ip-ssl",
"csharp-perl_binary_framed-ip-ssl",
"csharp-py3_binary-accel_buffered-ip-ssl",
"csharp-py3_binary-accel_framed-ip-ssl",
"csharp-py3_binary_buffered-ip-ssl",
"csharp-py3_binary_framed-ip-ssl",
"csharp-py3_compact-accelc_buffered-ip-ssl",
"csharp-py3_compact-accelc_framed-ip-ssl",
"csharp-py3_compact_buffered-ip-ssl",
"csharp-py3_compact_framed-ip-ssl",
"csharp-py3_json_buffered-ip-ssl",
"csharp-py3_json_framed-ip-ssl",
"csharp-py_binary-accel_buffered-ip-ssl",
"csharp-py_binary-accel_framed-ip-ssl",
"csharp-py_binary_buffered-ip-ssl",
"csharp-py_binary_framed-ip-ssl",
"csharp-py_compact-accelc_buffered-ip-ssl",
"csharp-py_compact-accelc_framed-ip-ssl",
"csharp-py_compact_buffered-ip-ssl",
"csharp-py_compact_framed-ip-ssl",
"csharp-py_json_buffered-ip-ssl",
"csharp-py_json_framed-ip-ssl",
"d-cpp_binary_buffered-ip",
"d-cpp_binary_buffered-ip-ssl",
"d-cpp_binary_framed-ip",
@@ -202,10 +151,8 @@
"go-d_compact_http-ip-ssl",
"go-d_json_http-ip",
"go-d_json_http-ip-ssl",
"go-dart_binary_framed-ip",
"go-dart_binary_http-ip",
"go-dart_compact_http-ip",
"go-dart_json_framed-ip",
"go-dart_json_http-ip",
"go-java_binary_http-ip",
"go-java_binary_http-ip-ssl",
@@ -214,17 +161,12 @@
"go-java_json_http-ip",
"go-java_json_http-ip-ssl",
"go-nodejs_json_framed-ip",
"hs-csharp_binary_framed-ip",
"hs-csharp_compact_framed-ip",
"hs-csharp_json_framed-ip",
"hs-dart_binary_framed-ip",
"hs-dart_compact_framed-ip",
"hs-dart_json_framed-ip",
"hs-py3_json_buffered-ip",
"hs-py3_json_framed-ip",
"java-d_compact_buffered-ip",
"java-d_compact_buffered-ip-ssl",
"java-d_compact_framed-ip",
"rs-dart_binary_framed-ip",
"rs-dart_compact_framed-ip"
]
]

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: ../ThriftTest.thrift
$(THRIFT) --gen perl ../ThriftTest.thrift

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: ../ThriftTest.thrift
$(THRIFT) --gen php ../ThriftTest.thrift
$(THRIFT) --gen php:inlined ../ThriftTest.thrift

View File

@@ -17,7 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
TRIAL ?= trial
stubs: ../ThriftTest.thrift ../SmallTest.thrift

View File

@@ -18,8 +18,6 @@
#
AUTOMAKE_OPTIONS = serial-tests
THRIFT = $(top_builddir)/compiler/cpp/thrift
py_unit_tests = RunClientServer.py
thrift_gen = \

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: $(THRIFT) ../ThriftTest.thrift ../SmallTest.thrift
$(THRIFT) --gen rb ../ThriftTest.thrift
$(THRIFT) --gen rb ../SmallTest.thrift

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
stubs: ../ThriftTest.thrift
$(THRIFT) -I ./thrifts -out src --gen rs ../ThriftTest.thrift

View File

@@ -5,5 +5,49 @@
fun:malloc
fun:_ZN5boost6detail25get_once_per_thread_epochEv
}
{
boostthreads/once/ignore
Helgrind:Race
fun:_ZN5boost13thread_detail17enter_once_regionERNS_9once_flagE
fun:_ZN5boost6detail23get_current_thread_dataEv
fun:_ZN5boost6detail20interruption_checkerC1EP15pthread_mutex_tP14pthread_cond_t
fun:_ZN5boost22condition_variable_any4waitINS_11unique_lockINS_11timed_mutexEEEEEvRT_
fun:_ZN6apache6thrift11concurrency7Monitor4Impl11waitForeverEv
fun:_ZN6apache6thrift11concurrency7Monitor4Impl19waitForTimeRelativeEl
fun:_ZN6apache6thrift11concurrency7Monitor4Impl4waitEl
fun:_ZNK6apache6thrift11concurrency7Monitor4waitEl
fun:_ZN6apache6thrift11concurrency11BoostThread5startEv
fun:_ZN6apache6thrift11concurrency4test18ThreadFactoryTests12reapNThreadsEii
fun:main
}
{
pthread/creation-tls/ignore
Helgrind:Race
fun:mempcpy
fun:_dl_allocate_tls_init
fun:get_cached_stack
fun:allocate_stack
fun:pthread_create@@GLIBC_2.2*
obj:/usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so
fun:_ZN6apache6thrift11concurrency13PthreadThread5startEv
fun:_ZN6apache6thrift11concurrency4test18ThreadFactoryTests12reapNThreadsEii
fun:main
}
{
boost-thread/creation-tls/ignore
Helgrind:Race
fun:mempcpy
fun:_dl_allocate_tls_init
fun:get_cached_stack
fun:allocate_stack
fun:pthread_create@@GLIBC_2.2.5
obj:/usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so
fun:_ZN5boost6thread21start_thread_noexceptEv
fun:_ZN5boost6thread12start_threadEv
fun:_ZN5boost6threadC1ISt5_BindIFPFPvS3_ES3_EEEEOT_
fun:_ZN6apache6thrift11concurrency11BoostThread5startEv
fun:_ZN6apache6thrift11concurrency4test18ThreadFactoryTests12reapNThreadsEii
fun:main
}

View File

@@ -28,8 +28,6 @@ AM_CFLAGS = -g -Wall -Wextra -pedantic $(GLIB_CFLAGS) $(GOBJECT_CFLAGS) $(OPENSS
AM_CPPFLAGS = -I$(top_srcdir)/lib/c_glib/src -Igen-c_glib
AM_LDFLAGS = $(GLIB_LIBS) $(GOBJECT_LIBS) $(OPENSSL_LDFLAGS) $(OPENSSL_LIBS) @GCOV_LDFLAGS@
THRIFT = $(top_builddir)/compiler/cpp/thrift
noinst_LTLIBRARIES = \
libtutorialgencglib.la

View File

@@ -61,8 +61,6 @@ TutorialClient_LDADD = \
#
# Common thrift code generation rules
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-cpp/Calculator.cpp gen-cpp/SharedService.cpp gen-cpp/shared_constants.cpp gen-cpp/shared_types.cpp gen-cpp/tutorial_constants.cpp gen-cpp/tutorial_types.cpp: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen cpp -r $<

View File

@@ -19,8 +19,6 @@
BUILT_SOURCES = gen-dart/tutorial/lib/tutorial.dart gen-dart/shared/lib/shared.dart
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-dart/tutorial/lib/tutorial.dart gen-dart/shared/lib/shared.dart: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen dart -r $<

View File

@@ -44,7 +44,7 @@ t() ->
{Client3, {ok, Sum1}} = thrift_client:call(Client2, add, [1, 4]),
io:format("1+4=~p~n", [Sum1]),
Work = #work{op=?tutorial_Operation_SUBTRACT,
Work = #'Work'{op=?TUTORIAL_OPERATION_SUBTRACT,
num1=15,
num2=10},
{Client4, {ok, Diff}} = thrift_client:call(Client3, calculate, [1, Work]),
@@ -55,7 +55,7 @@ t() ->
Client6 =
try
Work1 = #work{op=?tutorial_Operation_DIVIDE,
Work1 = #'Work'{op=?TUTORIAL_OPERATION_DIVIDE,
num1=1,
num2=0},
{ClientS1, {ok, _Quot}} = thrift_client:call(Client5, calculate, [2, Work1]),

View File

@@ -55,7 +55,7 @@ t() ->
{Client3, {ok, Sum1}} = thrift_client:call(Client2, add, [1, 4]),
io:format("1+4=~p~n", [Sum1]),
Work = #work{op=?tutorial_Operation_SUBTRACT,
Work = #'Work'{op=?TUTORIAL_OPERATION_SUBTRACT,
num1=15,
num2=10},
{Client4, {ok, Diff}} = thrift_client:call(Client3, calculate, [1, Work]),
@@ -66,7 +66,7 @@ t() ->
Client6 =
try
Work1 = #work{op=?tutorial_Operation_DIVIDE,
Work1 = #'Work'{op=?TUTORIAL_OPERATION_DIVIDE,
num1=1,
num2=0},
{ClientS1, {ok, _Quot}} = thrift_client:call(Client5, calculate, [2, Work1]),

View File

@@ -36,25 +36,25 @@ add(N1, N2) ->
N1+N2.
calculate(Logid, Work) ->
{ Op, Num1, Num2 } = { Work#work.op, Work#work.num1, Work#work.num2 },
{ Op, Num1, Num2 } = { Work#'Work'.op, Work#'Work'.num1, Work#'Work'.num2 },
debug("calculate(~p, {~p,~p,~p})", [Logid, Op, Num1, Num2]),
case Op of
?tutorial_Operation_ADD -> Num1 + Num2;
?tutorial_Operation_SUBTRACT -> Num1 - Num2;
?tutorial_Operation_MULTIPLY -> Num1 * Num2;
?TUTORIAL_OPERATION_ADD -> Num1 + Num2;
?TUTORIAL_OPERATION_SUBTRACT -> Num1 - Num2;
?TUTORIAL_OPERATION_MULTIPLY -> Num1 * Num2;
?tutorial_Operation_DIVIDE when Num2 == 0 ->
throw(#invalidOperation{whatOp=Op, why="Cannot divide by 0"});
?tutorial_Operation_DIVIDE ->
?TUTORIAL_OPERATION_DIVIDE when Num2 == 0 ->
throw(#'InvalidOperation'{whatOp=Op, why="Cannot divide by 0"});
?TUTORIAL_OPERATION_DIVIDE ->
Num1 div Num2;
_Else ->
throw(#invalidOperation{whatOp=Op, why="Invalid operation"})
throw(#'InvalidOperation'{whatOp=Op, why="Invalid operation"})
end.
getStruct(Key) ->
debug("getStruct(~p)", [Key]),
#sharedStruct{key=Key, value="RARG"}.
#'SharedStruct'{key=Key, value="RARG"}.
zip() ->
debug("zip", []),

View File

@@ -21,8 +21,6 @@ if GOVERSION_LT_17
COMPILER_EXTRAFLAG=":legacy_context"
endif
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-go/tutorial/calculator.go gen-go/shared/shared_service.go: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen go$(COMPILER_EXTRAFLAG) -r $<

View File

@@ -17,13 +17,10 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
BIN_CPP = bin/Main-debug
BIN_PHP = bin/php/Main-debug.php
BIN_PHP_WEB = bin/php-web-server/Main-debug.php
gen-haxe/tutorial/calculator.hx gen-haxe/shared/shared_service.hx: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen haxe -r $<

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-nodejs/Calculator.js gen-nodejs/SharedService.js: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen js:node -r $<

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-py.tornado/tutorial/Calculator.py gen-py.tornado/shared/SharedService.py: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen py:tornado -r $<

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-py/tutorial/Calculator.py gen-py/shared/SharedService.py: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen py:twisted -r $<

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-py/tutorial/Calculator.py gen-py/shared/SharedService.py: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen py -r $<

View File

@@ -17,8 +17,6 @@
# under the License.
#
THRIFT = $(top_builddir)/compiler/cpp/thrift
gen-py/calculator.rb gen-py/shared_service.rb: $(top_srcdir)/tutorial/tutorial.thrift
$(THRIFT) --gen rb -r $<

View File

@@ -595,7 +595,7 @@ func ValidateStruct(s interface{}) (bool, error) {
continue // Private field
}
structResult := true
if valueField.Kind() == reflect.Struct && typeField.Tag.Get(tagName) != "-" {
if valueField.Kind() == reflect.Struct {
var err error
structResult, err = ValidateStruct(valueField.Interface())
if err != nil {

View File

@@ -21,6 +21,25 @@ consistency and thread safety. Bolt is currently used in high-load production
environments serving databases as large as 1TB. Many companies such as
Shopify and Heroku use Bolt-backed services every day.
## A message from the author
> The original goal of Bolt was to provide a simple pure Go key/value store and to
> not bloat the code with extraneous features. To that end, the project has been
> a success. However, this limited scope also means that the project is complete.
>
> Maintaining an open source database requires an immense amount of time and energy.
> Changes to the code can have unintended and sometimes catastrophic effects so
> even simple changes require hours and hours of careful testing and validation.
>
> Unfortunately I no longer have the time or energy to continue this work. Bolt is
> in a stable state and has years of successful production use. As such, I feel that
> leaving it in its current state is the most prudent course of action.
>
> If you are interested in using a more featureful version of Bolt, I suggest that
> you look at the CoreOS fork called [bbolt](https://github.com/coreos/bbolt).
- Ben Johnson ([@benbjohnson](https://twitter.com/benbjohnson))
## Table of Contents
- [Getting Started](#getting-started)

View File

@@ -1,6 +1,7 @@
language: go
go:
- 1.3.3
- 1.x
- tip
before_install:
- go get github.com/mattn/goveralls

View File

@@ -42,15 +42,27 @@ $ go get -u github.com/cloudflare/cfssl/cmd/cfssl
```
will download and build the CFSSL tool, installing it in
`$GOPATH/bin/cfssl`. To install the other utility programs that are in
this repo:
`$GOPATH/bin/cfssl`.
To install any of the other utility programs that are
in this repo (for instance `cffsljson` in this case):
```
$ go get -u github.com/cloudflare/cfssl/cmd/cfssljson
```
This will download and build the CFSSLJSON tool, installing it in
`$GOPATH/bin/`.
And to simply install __all__ of the programs in this repo:
```
$ go get -u github.com/cloudflare/cfssl/cmd/...
```
This will download, build, and install `cfssl`, `cfssljson`, and
`mkbundle` into `$GOPATH/bin/`.
This will download, build, and install all of the utility programs
(including `cfssl`, `cfssljson`, and `mkbundle` among others) into the
`$GOPATH/bin/` directory.
#### Installing pre-Go 1.6
@@ -87,10 +99,10 @@ operation it should carry out:
serve start the API server
version prints out the current version
selfsign generates a self-signed certificate
print-defaults print default configurations
print-defaults print default configurations
Use "cfssl [command] -help" to find out more about a command.
The version command takes no arguments.
Use `cfssl [command] -help` to find out more about a command.
The `version` command takes no arguments.
#### Signing
@@ -98,9 +110,9 @@ The version command takes no arguments.
cfssl sign [-ca cert] [-ca-key key] [-hostname comma,separated,hostnames] csr [subject]
```
The csr is the client's certificate request. The `-ca` and `-ca-key`
The `csr` is the client's certificate request. The `-ca` and `-ca-key`
flags are the CA's certificate and private key, respectively. By
default, they are "ca.pem" and "ca_key.pem". The `-hostname` is
default, they are `ca.pem` and `ca_key.pem`. The `-hostname` is
a comma separated hostname list that overrides the DNS names and
IP address in the certificate SAN extension.
For example, assuming the CA's private key is in
@@ -109,26 +121,27 @@ For example, assuming the CA's private key is in
for cloudflare.com:
```
cfssl sign -ca /etc/ssl/certs/cfssl.pem \
cfssl sign -ca /etc/ssl/certs/cfssl.pem \
-ca-key /etc/ssl/private/cfssl_key.pem \
-hostname cloudflare.com ./cloudflare.pem
-hostname cloudflare.com \
./cloudflare.pem
```
It is also possible to specify csr through '-csr' flag. By doing so,
It is also possible to specify CSR with the `-csr` flag. By doing so,
flag values take precedence and will overwrite the argument.
The subject is an optional file that contains subject information that
should be used in place of the information from the CSR. It should be
a JSON file with the type:
a JSON file as follows:
```json
{
"CN": "example.com",
"names": [
{
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"OU": "WWW",
"ST": "California"
}
@@ -148,28 +161,29 @@ cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
```
The bundles are used for the root and intermediate certificate
pools. In addition, platform metadata is specified through '-metadata'
pools. In addition, platform metadata is specified through `-metadata`.
The bundle files, metadata file (and auxiliary files) can be
found at [cfssl_trust](https://github.com/cloudflare/cfssl_trust)
found at:
https://github.com/cloudflare/cfssl_trust
Specify PEM-encoded client certificate and key through '-cert' and
'-key' respectively. If key is specified, the bundle will be built
Specify PEM-encoded client certificate and key through `-cert` and
`-key` respectively. If key is specified, the bundle will be built
and verified with the key. Otherwise the bundle will be built
without a private key. Instead of file path, use '-' for reading
certificate PEM from stdin. It is also acceptable the certificate
file contains a (partial) certificate bundle.
without a private key. Instead of file path, use `-` for reading
certificate PEM from stdin. It is also acceptable that the certificate
file should contain a (partial) certificate bundle.
Specify bundling flavor through '-flavor'. There are three flavors:
'optimal' to generate a bundle of shortest chain and most advanced
cryptographic algorithms, 'ubiquitous' to generate a bundle of most
Specify bundling flavor through `-flavor`. There are three flavors:
`optimal` to generate a bundle of shortest chain and most advanced
cryptographic algorithms, `ubiquitous` to generate a bundle of most
widely acceptance across different browsers and OS platforms, and
'force' to find an acceptable bundle which is identical to the
`force` to find an acceptable bundle which is identical to the
content of the input certificate file.
Alternatively, the client certificate can be pulled directly from
a domain. It is also possible to connect to the remote address
through '-ip'.
through `-ip`.
```
cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
@@ -177,7 +191,7 @@ cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
-domain domain_name [-ip ip_address]
```
The bundle output form should follow the example
The bundle output form should follow the example:
```json
{
@@ -213,7 +227,7 @@ cfssl genkey csr.json
```
To generate a private key and corresponding certificate request, specify
the key request as a JSON file. This file should follow the form
the key request as a JSON file. This file should follow the form:
```json
{
@@ -227,9 +241,9 @@ the key request as a JSON file. This file should follow the form
},
"names": [
{
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"OU": "WWW",
"ST": "California"
}
@@ -244,7 +258,7 @@ cfssl genkey -initca csr.json | cfssljson -bare ca
```
To generate a self-signed root CA certificate, specify the key request as
the JSON file in the same format as in 'genkey'. Three PEM-encoded entities
a JSON file in the same format as in 'genkey'. Three PEM-encoded entities
will appear in the output: the private key, the csr, and the self-signed
certificate.
@@ -254,8 +268,8 @@ certificate.
cfssl gencert -remote=remote_server [-hostname=comma,separated,hostnames] csr.json
```
This is calls genkey, but has a remote CFSSL server sign and issue
a certificate. You may use `-hostname` to override certificate SANs.
This calls `genkey` but has a remote CFSSL server sign and issue
the certificate. You may use `-hostname` to override certificate SANs.
#### Generating a local-issued certificate and private key.
@@ -263,25 +277,25 @@ a certificate. You may use `-hostname` to override certificate SANs.
cfssl gencert -ca cert -ca-key key [-hostname=comma,separated,hostnames] csr.json
```
This is generates and issues a certificate and private key from a local CA
This generates and issues a certificate and private key from a local CA
via a JSON request. You may use `-hostname` to override certificate SANs.
#### Updating a OCSP responses file with a newly issued certificate
#### Updating an OCSP responses file with a newly issued certificate
```
cfssl ocspsign -ca cert -responder key -responder-key key -cert cert \
| cfssljson -bare -stdout >> responses
```
This will generate a OCSP response for the `cert` and add it to the
`responses` file. You can then pass `responses` to `ocspserve` to start a
This will generate an OCSP response for the `cert` and add it to the
`responses` file. You can then pass `responses` to `ocspserve` to start an
OCSP server.
### Starting the API Server
CFSSL comes with an HTTP-based API server; the endpoints are
documented in `doc/api/intro.txt`. The server is started with the "serve"
documented in `doc/api/intro.txt`. The server is started with the `serve`
command:
```
@@ -293,18 +307,19 @@ cfssl serve [-address address] [-ca cert] [-ca-bundle bundle] \
Address and port default to "127.0.0.1:8888". The `-ca` and `-ca-key`
arguments should be the PEM-encoded certificate and private key to use
for signing; by default, they are "ca.pem" and "ca_key.pem". The
for signing; by default, they are `ca.pem` and `ca_key.pem`. The
`-ca-bundle` and `-int-bundle` should be the certificate bundles used
for the root and intermediate certificate pools, respectively. These
default to "ca-bundle.crt" and "int-bundle." If the "remote" option is
provided, all signature operations will be forwarded to the remote CFSSL.
default to `ca-bundle.crt` and `int-bundle.crt` respectively. If the
`-remote` option is specified, all signature operations will be forwarded
to the remote CFSSL.
'-int-dir' specifies intermediates directory. '-metadata' is a file for
`-int-dir` specifies an intermediates directory. `-metadata` is a file for
root certificate presence. The content of the file is a json dictionary
(k,v): each key k is SHA-1 digest of a root certificate while value v
is a list of key store filenames. '-config' specifies path to configuration
file. '-responder' and '-responder-key' are Certificate for OCSP responder
and private key for OCSP responder certificate, respectively.
(k,v) such that each key k is an SHA-1 digest of a root certificate while value v
is a list of key store filenames. `-config` specifies a path to a configuration
file. `-responder` and `-responder-key` are the certificate and the
private key for the OCSP responder, respectively.
The amount of logging can be controlled with the `-loglevel` option. This
comes *after* the serve command:
@@ -315,18 +330,18 @@ cfssl serve -loglevel 2
The levels are:
* 0. DEBUG
* 1. INFO (this is the default level)
* 2. WARNING
* 3. ERROR
* 4. CRITICAL
* 0 - DEBUG
* 1 - INFO (this is the default level)
* 2 - WARNING
* 3 - ERROR
* 4 - CRITICAL
### The multirootca
The `cfssl` program can act as an online certificate authority, but it
only uses a single key. If multiple signing keys are needed, the
`multirootca` program can be used. It only provides the sign,
authsign, and info endpoints. The documentation contains instructions
`multirootca` program can be used. It only provides the `sign`,
`authsign` and `info` endpoints. The documentation contains instructions
for configuring and running the CA.
### The mkbundle Utility
@@ -343,49 +358,44 @@ support is planned for the next release) and expired certificates, and
bundles them into one file. It takes directories of certificates and
certificate files (which may contain multiple certificates). For example,
if the directory `intermediates` contains a number of intermediate
certificates,
certificates:
```
mkbundle -f int-bundle.crt intermediates
```
will check those certificates and combine valid ones into a single
will check those certificates and combine valid certificates into a single
`int-bundle.crt` file.
The `-f` flag specifies an output name; `-loglevel` specifies the verbosity
of the logging (using the same loglevels above), and `-nw` controls the
of the logging (using the same loglevels as above), and `-nw` controls the
number of revocation-checking workers.
### The cfssljson Utility
Most of the output from `cfssl` is in JSON. The `cfssljson` will take
this output and split it out into separate key, certificate, CSR, and
bundle files as appropriate. The tool takes a single flag, `-f`, that
Most of the output from `cfssl` is in JSON. The `cfssljson` utility can take
this output and split it out into separate `key`, `certificate`, `CSR`, and
`bundle` files as appropriate. The tool takes a single flag, `-f`, that
specifies the input file, and an argument that specifies the base name for
the files produced. If the input filename is "-" (which is the default),
`cfssljson` reads from standard input. It maps keys in the JSON file to
the files produced. If the input filename is `-` (which is the default),
cfssljson reads from standard input. It maps keys in the JSON file to
filenames in the following way:
* if there is a "cert" (or if not, if there's a "certificate") field, the
file "basename.pem" will be produced.
* if there is a "key" (or if not, if there's a "private_key") field, the
file "basename-key.pem" will be produced.
* if there is a "csr" (or if not, if there's a "certificate_request") field,
the file "basename.csr" will be produced.
* if there is a "bundle" field, the file "basename-bundle.pem" will
be produced.
* if there is a "ocspResponse" field, the file "basename-response.der" will
be produced.
* if __cert__ or __certificate__ is specified, __basename.pem__ will be produced.
* if __key__ or __private_key__ is specified, __basename-key.pem__ will be produced.
* if __csr__ or __certificate_request__ is specified, __basename.csr__ will be produced.
* if __bundle__ is specified, __basename-bundle.pem__ will be produced.
* if __ocspResponse__ is specified, __basename-response.der__ will be produced.
Instead of saving to a file, you can pass `-stdout` to output the encoded
contents.
contents to standard output.
### Static Builds
By default, the web assets are accessed from disk, based on their
relative locations. If youre wishing to distribute a single,
statically-linked, cfssl binary, youll want to embed these resources
before building. This can by done with the
relative locations. If you wish to distribute a single,
statically-linked, `cfssl` binary, youll want to embed these resources
before building. This can by done with the
[go.rice](https://github.com/GeertJohan/go.rice) tool.
```
@@ -396,16 +406,18 @@ Then building with `go build` will use the embedded resources.
### Using a PKCS#11 hardware token / HSM
For better security, you may want to store your private key in an HSM or
For better security, you may wish to store your private key in an HSM or
smartcard. The interface to both of these categories of device is described by
the PKCS#11 spec. If you need to do approximately one signing operation per
second or fewer, the Yubikey NEO and NEO-n are inexpensive smartcard options:
https://www.yubico.com/products/yubikey-hardware/yubikey-neo/. In general you
are looking for a product that supports PIV (personal identity verification). If
https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
In general you should look for a product that supports PIV (personal identity verification). If
your signing needs are in the hundreds of signatures per second, you will need
to purchase an expensive HSM (in the thousands to many thousands of USD).
If you want to try out the PKCS#11 signing modes without a hardware token, you
If you wish to try out the PKCS#11 signing modes without a hardware token, you
can use the [SoftHSM](https://github.com/opendnssec/SoftHSMv1#softhsm)
implementation. Please note that using SoftHSM simply stores your private key in
a file on disk and does not increase security.
@@ -413,14 +425,14 @@ a file on disk and does not increase security.
To get started with your PKCS#11 token you will need to initialize it with a
private key, PIN, and token label. The instructions to do this will be specific
to each hardware device, and you should follow the instructions provided by your
vendor. You will also need to find the path to your 'module', a shared object
vendor. You will also need to find the path to your `module`, a shared object
file (.so). Having initialized your device, you can query it to check your token
label with:
pkcs11-tool --module <module path> --list-token-slots
You'll also want to check the label of the private key you imported (or
generated). Run the following command and look for a 'Private Key Object':
generated). Run the following command and look for a `Private Key Object`:
pkcs11-tool --module <module path> --pin <pin> \
--list-token-slots --login --list-objects
@@ -430,7 +442,7 @@ CFSSL supports PKCS#11 for certificate signing and OCSP signing. To create a
Signer (for certificate signing), import `signer/universal` and call NewSigner
with a Root object containing the module, pin, token label and private label
from above, plus a path to your certificate. The structure of the Root object is
documented in universal.go.
documented in `universal.go`.
Alternately, you can construct a pkcs11key.Key or pkcs11key.Pool yourself, and
pass it to ocsp.NewSigner (for OCSP) or local.NewSigner (for certificate
@@ -440,7 +452,7 @@ same time.
### Additional Documentation
Additional documentation can be found in the "doc/" directory:
Additional documentation can be found in the "doc" directory:
* `api/intro.txt`: documents the API endpoints
* `bootstrap.txt`: a walkthrough from building the package to getting

View File

@@ -27,7 +27,8 @@ Result:
* certificate_request: a PEM-encoded certificate request
* certificate: a PEM-encoded certificate, signed by the server
* sums: a JSON object holding both MD5 and SHA1 digests for the certificate
request and the certificate
request and the certificate; note that this is the digest of the DER
contents of the certificate, not the PEM contents
* bundle: See the result of endpoint_bundle.txt (only included if the bundle parameter was set)
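(Editorial note, not part of the diff.) The clarification added above — the `sums` digests are computed over the certificate's DER bytes, not the PEM text — can be illustrated with a short sketch; the file name and variable names are hypothetical and not taken from the API docs:

```go
package main

import (
	"crypto/sha1"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("cert.pem") // hypothetical certificate file
	if err != nil {
		panic(err)
	}

	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}

	// The API's "sums" field hashes the DER payload (block.Bytes),
	// not the base64-armored PEM text, so these two values differ.
	fmt.Printf("sha1(DER): %x\n", sha1.Sum(block.Bytes))
	fmt.Printf("sha1(PEM): %x\n", sha1.Sum(pemBytes))
}
```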
Example:

Some files were not shown because too many files have changed in this diff