Whenever MQ became unreachable, HTTP GET /tasks returned HTTP 500,
and the code was not handling this case; it only expected networking errors.
It then tried to unmarshal the empty response body, which caused a second, misleading error.
This patch raises an error based on the HTTP response code, explicitly checking whether
the response code is something unexpected (anything other than HTTP 200 OK).
The response status code for /tasks was also changed from 202 Accepted to 200 OK, per the swagger doc.
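A minimal sketch of the new check, assuming hypothetical names (`getTasks`, `Task`) rather than fn's actual identifiers:

```go
package example

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Task is a stand-in for the real task model.
type Task struct {
	ID string `json:"id"`
}

func getTasks(url string) ([]Task, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err // networking errors were already handled
	}
	defer resp.Body.Close()

	// New: fail fast on any unexpected status instead of trying to
	// unmarshal an empty (or error) body.
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET /tasks: unexpected status %s", resp.Status)
	}

	var tasks []Task
	if err := json.NewDecoder(resp.Body).Decode(&tasks); err != nil {
		return nil, err
	}
	return tasks, nil
}
```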
* functions: 0.3.25 release [skip ci]
* functions: 0.3.26 release [skip ci]
* fn tool: 0.3.19 release [skip ci]
* fn tool: 0.3.20 release [skip ci]
* fn tool: 0.3.21 release [skip ci]
* fn tool: 0.3.22 release [skip ci]
* fn tool: 0.3.23 release [skip ci]
* fn tool: 0.3.24 release [skip ci]
* fnlb: 0.0.2 release [skip ci]
* fn tool: 0.3.25 release [skip ci]
* fnlb: 0.0.3 release [skip ci]
* Added back the commented-out lines.
* Fixing middleware example.
the async stuff uses carlos' supervisor thing, but in the normal request path we
aren't catching panics and returning a 500 to the user (the conn just gets
closed & the server dies). this should catch any mistakes we might make, or a
panic from any one of the 10000 libraries we're importing.
closes #150
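A minimal sketch of what such panic recovery looks like with plain net/http (fn's actual router and middleware stack may differ):

```go
package example

import (
	"log"
	"net/http"
)

// recoverMiddleware catches panics from downstream handlers, logs them,
// and returns a 500 instead of letting the connection die.
func recoverMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if rec := recover(); rec != nil {
				log.Printf("panic in request handler: %v", rec)
				http.Error(w, "internal server error", http.StatusInternalServerError)
			}
		}()
		next.ServeHTTP(w, r)
	})
}
```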
we had the container inspect here for 3 reasons:
1) get exit code
2) see if container is still running (debugging madness)
3) see if docker thinks it was an OOM
1) is something wait returns anyway, but due to 2) and 3) we delayed reading it
until inspection.
2) was really just for debugging, since we had 3).
3) seems unnecessary. to me, an OOM is an OOM is an OOM. so why have a whole
docker inspect call just to find out? (we could move this down, since it's a
sad path, and make the call only when necessary, but are we really getting any
value from this distinction anyway? i've never run into it, myself)
inspect was actually causing tasks to time out, since the call to inspect
could put us over our task timeout, even though our container ran to
completion. we could have fixed this by checking the context earlier, but we
don't really need inspect at all; dropping it reduces the docker calls we make,
which makes more unicorn puppers. now tasks should have more 'true'
timeouts.
tried to boy scout, but the tracing patch cleans this block up too.
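A sketch of reading the exit code from wait alone, using the official Docker Go SDK for illustration (fn's actual docker client, and the SDK's exact types, may differ by version):

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// waitForExit blocks until the container stops and returns its exit code,
// with no trailing inspect call. If ctx expires, the task times out on the
// container's run time alone.
func waitForExit(ctx context.Context, cli *client.Client, id string) (int64, error) {
	waitCh, errCh := cli.ContainerWait(ctx, id, container.WaitConditionNotRunning)
	select {
	case res := <-waitCh:
		return res.StatusCode, nil // exit code comes straight from wait
	case err := <-errCh:
		return -1, err
	case <-ctx.Done():
		return -1, ctx.Err()
	}
}
```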
we finally graduated high school and can make our own ramen
we no longer need this, since fn appears to have no concept of canceling tasks
through an api we'd need to watch, and the context is plumbed through if the
request is canceled. since tasks are short, we may never need to cancel
running tasks like we did with iron worker. this was an added docker call
that's unnecessary, since we force-remove the container at the end anyway.
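A sketch of that final force removal, again with the official Docker Go SDK (option struct names vary across SDK versions, and fn's client may differ):

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// cleanup force-removes the container regardless of its state, so an
// explicit stop/cancel call beforehand is unnecessary. A fresh context is
// used so removal still happens even if the request's context was canceled.
func cleanup(cli *client.Client, id string) error {
	return cli.ContainerRemove(context.Background(), id,
		types.ContainerRemoveOptions{Force: true, RemoveVolumes: true})
}
```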