fn-serverless/api/agent/protocol/http.go
Reed Allman 3b261fc144 pipe swapparoo each slot (#721)
* pipe swapparoo each slot

previously, we made a pair of pipes for stdin and stdout for each container
and then handed them out to each call (slot) to use. this meant that, from
fn's perspective, multiple calls could have a handle on the same stdin pipe
and stdout pipe to read/write to/from, and could mix input/output and get
garbage. it also meant that each call was blocked on the previous call's
reads.

now we make a new pipe every time we get a slot and swap it out with the
previous one. calls are no longer blocked from fn's perspective, and we don't
have to worry about timing out dispatch for any hot format. there is still the
issue that if a function, from its perspective, does not finish reading the
input from the previous task and reads into the next call's input, it can
error out the second call. with the fn deadline we provide the necessary tools
to skirt this, but without some additional coordination I'm not sure this is a
closable hole with our current protocols, since terminating a previous call's
input requires some protocol-specific bytes to go in (json in particular is
tricky). anyway, from fn's side fixing the pipes was definitely a hole, but
this client hole is still hanging out. there was an attempt to send an io.EOF,
but the issue is that it will shut down docker's read on the stdin pipe (and
the container). poop.
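
to make the swap concrete, here is a minimal sketch of the per-slot pipe idea.
the container type and swapIO helper below are made up for illustration; the
real wiring in fn (docker attach, timeouts, stats) is more involved, but the
core idea is the same: one fresh pipe pair per slot, never shared across calls.

package main

import (
    "fmt"
    "io"
)

// container stands in for the hot container: fn copies between these streams
// and docker's attached stdin/stdout. this is a sketch, not the real agent type.
type container struct {
    stdin  io.Reader // the container reads each call's input from here
    stdout io.Writer // the container writes each call's output here
}

// swapIO makes a fresh pipe pair for the next call (slot) and points the
// container at it, returning the call-side ends. the previous call's pipes are
// simply left behind, so a slow or unfinished reader can no longer block or
// garble the next call.
func (c *container) swapIO() (stdin io.WriteCloser, stdout io.ReadCloser) {
    inR, inW := io.Pipe()   // the call writes its request here; the container reads it
    outR, outW := io.Pipe() // the container writes its response here; the call reads it
    c.stdin = inR
    c.stdout = outW
    return inW, outR
}

func main() {
    c := &container{}
    in, out := c.swapIO()
    go io.Copy(c.stdout, c.stdin) // fake container: echo stdin back to stdout
    go func() {
        in.Write([]byte("call 1"))
        in.Close()
    }()
    buf := make([]byte, 64)
    n, _ := out.Read(buf)
    fmt.Println(string(buf[:n])) // prints "call 1"
}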

this adds a test for this behavior, and makes sure 2 containers don't get
launched.

this also closes the response writer header race a little, but not entirely; I
think there's still a chance that we read a full response from a function and
get a timeout while we're changing the headers. I guess we need a thread-safe
header bucket, otherwise we have to rely on timings (racy). thinking on it.
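
for what it's worth, a "thread-safe header bucket" could be as little as an
http.Header behind a mutex; the sketch below is purely illustrative
(headerBucket and its methods are made-up names, nothing in fn) and does not
solve the WriteHeader timing question above.

package bucket

import (
    "net/http"
    "sync"
)

// headerBucket guards an http.Header with a mutex so a goroutine copying a
// function's response headers can't race another goroutine (e.g. a timeout
// path) mutating the same headers.
type headerBucket struct {
    mu sync.Mutex
    h  http.Header
}

func newHeaderBucket() *headerBucket {
    return &headerBucket{h: make(http.Header)}
}

// Add appends a value under the lock; safe for concurrent use.
func (b *headerBucket) Add(key, value string) {
    b.mu.Lock()
    defer b.mu.Unlock()
    b.h.Add(key, value)
}

// CopyTo snapshots the bucket into a destination header (e.g. rw.Header())
// right before WriteHeader is called, again under the lock.
func (b *headerBucket) CopyTo(dst http.Header) {
    b.mu.Lock()
    defer b.mu.Unlock()
    for k, vs := range b.h {
        for _, v := range vs {
            dst.Add(k, v)
        }
    }
}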

* fix stats mu race
2018-01-31 17:25:24 -08:00

67 lines
1.7 KiB
Go

package protocol

import (
    "bufio"
    "context"
    "io"
    "net/http"
)

// HTTPProtocol converts stdin/stdout streams into HTTP/1.1 compliant
// communication. It relies on Content-Length to know when to stop reading from
// the container's stdout. It also mandates valid HTTP headers back and forth,
// thus returning errors in case of parsing problems.
type HTTPProtocol struct {
    in  io.Writer
    out io.Reader
}

func (p *HTTPProtocol) IsStreamable() bool { return true }

func (h *HTTPProtocol) Dispatch(ctx context.Context, ci CallInfo, w io.Writer) error {
    req := ci.Request()
    req.RequestURI = ci.RequestURL() // force set to this, for req.Write to use (TODO? still?)

    // Add Fn-specific headers for this protocol
    req.Header.Set("FN_DEADLINE", ci.Deadline().String())
    req.Header.Set("FN_METHOD", ci.Method())
    req.Header.Set("FN_REQUEST_URL", ci.RequestURL())
    req.Header.Set("FN_CALL_ID", ci.CallID())

    // req.Write handles if the user does not specify content length
    err := req.Write(h.in)
    if err != nil {
        return err
    }

    resp, err := http.ReadResponse(bufio.NewReader(h.out), ci.Request())
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    rw, ok := w.(http.ResponseWriter)
    if !ok {
        // async / [some] tests go through here. write a full http response to the writer
        resp.Write(w)
        return nil
    }

    // if we're writing directly to the response writer, we need to set headers
    // and status code, and only copy the body. resp.Write would copy a full
    // http response into the response body (not what we want).
    // add resp's on top of any specified on the route [on rw]
    for k, vs := range resp.Header {
        for _, v := range vs {
            rw.Header().Add(k, v)
        }
    }
    if resp.StatusCode > 0 {
        rw.WriteHeader(resp.StatusCode)
    }
    io.Copy(rw, resp.Body)
    return nil
}
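
to see the framing the doc comment describes in isolation, here is a small
stdlib-only sketch (nothing from fn in it): two back-to-back responses on a
simulated stdout stream, where Content-Length is what lets http.ReadResponse
stop at the right byte so the next call's output isn't consumed.

package main

import (
    "bufio"
    "fmt"
    "io"
    "net/http"
    "strings"
)

func main() {
    // pretend this is a container's stdout carrying two calls' worth of output.
    stdout := strings.NewReader(
        "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello" +
            "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nworld")

    r := bufio.NewReader(stdout)
    for i := 1; i <= 2; i++ {
        resp, err := http.ReadResponse(r, nil)
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body) // Content-Length bounds this read
        resp.Body.Close()
        fmt.Printf("call %d: %d %s\n", i, resp.StatusCode, body)
    }
}

a function that writes a short or wrong Content-Length would bleed bytes into
the next call's response here, the stdout-side mirror of the input hole the
commit message above calls out.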