Merge lp:~rvb/gwacl/tls-fix into lp:gwacl

Proposed by Raphaël Badin
Status: Merged
Approved by: Raphaël Badin
Approved revision: 202
Merged at revision: 197
Proposed branch: lp:~rvb/gwacl/tls-fix
Merge into: lp:gwacl
Diff against target: 10735 lines (+9979/-246)
41 files modified
HACKING.txt (+0/-24)
fork/LICENSE (+27/-0)
fork/README (+11/-0)
fork/go-tls-renegotiation.patch (+113/-0)
fork/http/chunked.go (+170/-0)
fork/http/client.go (+339/-0)
fork/http/cookie.go (+267/-0)
fork/http/doc.go (+80/-0)
fork/http/filetransport.go (+123/-0)
fork/http/fs.go (+367/-0)
fork/http/header.go (+78/-0)
fork/http/jar.go (+30/-0)
fork/http/lex.go (+136/-0)
fork/http/request.go (+743/-0)
fork/http/response.go (+239/-0)
fork/http/server.go (+1234/-0)
fork/http/sniff.go (+214/-0)
fork/http/status.go (+108/-0)
fork/http/transfer.go (+632/-0)
fork/http/transport.go (+757/-0)
fork/http/triv.go (+141/-0)
fork/tls/alert.go (+77/-0)
fork/tls/cipher_suites.go (+188/-0)
fork/tls/common.go (+322/-0)
fork/tls/conn.go (+886/-0)
fork/tls/generate_cert.go (+74/-0)
fork/tls/handshake_client.go (+347/-0)
fork/tls/handshake_messages.go (+1078/-0)
fork/tls/handshake_server.go (+352/-0)
fork/tls/key_agreement.go (+253/-0)
fork/tls/parse-gnutls-cli-debug-log.py (+57/-0)
fork/tls/prf.go (+235/-0)
fork/tls/tls.go (+187/-0)
httperror.go (+1/-2)
management_base_test.go (+29/-5)
poller_test.go (+1/-1)
testhelpers_x509dispatch.go (+1/-1)
x509dispatcher.go (+22/-154)
x509dispatcher_test.go (+17/-42)
x509session.go (+29/-3)
x509session_test.go (+14/-14)
To merge this branch: bzr merge lp:~rvb/gwacl/tls-fix
Reviewer: Gavin Panella
Status: Approve
Review via email: mp+176185@code.launchpad.net

Commit message

Use a forked version of crypto/tls and net/http.

Description of the change

This branch adds forked versions of crypto/tls and net/http (crypto/tls is patched to add support for TLS renegotiation, and net/http is patched to use the forked version of crypto/tls).

It includes jtv's branch (lp:~jtv/gwacl/curlless) which switches from using go-curl to using the standard library's tls. (But that change is then modified to use the forked version of net/http and crypto/tls instead of the standard library.)

We forked from Go version 1.0.2 in order to support both Go 1.0.2 and Go 1.1.1. (I tried forking from the 1.1.1 versions of crypto/tls and net/http, but I was quickly overwhelmed by the number of packages that needed backporting, and the code makes heavy use of Go 1.1.1-only features.)

This was tested (unit tests plus an example script) with both Go 1.0.2 and Go 1.1.1.
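Mechanically, switching code over to the fork amounts to rewriting import paths from the standard packages to the forked copies. A minimal sketch of that rewrite (the file name and temporary path here are invented for illustration):

```shell
# Start from a file that imports the standard packages.
mkdir -p /tmp/fork-demo
cat > /tmp/fork-demo/session.go <<'EOF'
package gwacl

import (
    "crypto/tls"
    "net/http"
)
EOF

# Rewrite the imports to point at the forked copies instead.
sed -i \
    -e 's|"crypto/tls"|"launchpad.net/gwacl/fork/tls"|' \
    -e 's|"net/http"|"launchpad.net/gwacl/fork/http"|' \
    /tmp/fork-demo/session.go

cat /tmp/fork-demo/session.go
```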

lp:~rvb/gwacl/tls-fix updated
197. By Raphaël Badin

Remove last mention of go-curl.

Revision history for this message
Gavin Panella (allenap) wrote :

Looks good!

[1]

--- fork/README 1970-01-01 00:00:00 +0000
+++ fork/README 2013-07-22 12:54:30 +0000
@@ -0,0 +1,7 @@
+This directory contains a fork of Go's standard libraries net/http and crypto/tls.
...

Please wrap lines in here at no more than 80 columns. I suggest 72.

[2]

=== added file 'fork/go-use-patched-tls.patch'
--- fork/go-use-patched-tls.patch       1970-01-01 00:00:00 +0000
+++ fork/go-use-patched-tls.patch       2013-07-22 13:11:27 +0000
@@ -0,0 +1,5 @@
+15c15
+<         "launchpad.net/gwacl/fork/tls"
+---
+>      "crypto/tls"
+

I'm not sure this is useful. Without context it's not very
informative. It's also in the wrong direction. I suggest omitting it.

[3]

+func performX509Request(session *x509Session, request *X509Request) (*x509Response, error) {
...
+    response.Header = httpResponse.Header
+    if err != nil {
+        return nil, err
+    }

I don't think that condition is needed here. Could this be one of
those Go-boilerplate-confuses-DVCS moments?
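The Go boilerplate in question is the usual call-then-check sequence: the `if err != nil` guard belongs immediately after the call that produces `err`, before the response is touched; a second check later in the function is dead code. A self-contained sketch of the pattern (types and names invented here, not gwacl's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// response is a stand-in for gwacl's x509Response.
type response struct {
	StatusCode int
}

// doRequest simulates an HTTP call that may fail.
func doRequest(fail bool) (*response, error) {
	if fail {
		return nil, errors.New("request failed")
	}
	return &response{StatusCode: 200}, nil
}

// perform checks err immediately after the call that produced it,
// before using the response -- the order the review comment asks for.
func perform(fail bool) (int, error) {
	resp, err := doRequest(fail)
	if err != nil {
		return 0, err
	}
	return resp.StatusCode, nil
}

func main() {
	code, err := perform(false)
	fmt.Println(code, err)
}
```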

review: Approve
Revision history for this message
Gavin Panella (allenap) wrote :

Before landing, do you know if licensing is okay?

review: Needs Information
lp:~rvb/gwacl/tls-fix updated
198. By Raphaël Badin

Review fixes.

Revision history for this message
Raphaël Badin (rvb) wrote :

> Looks good!
>
>
> [1]
>
> --- fork/README 1970-01-01 00:00:00 +0000
> +++ fork/README 2013-07-22 12:54:30 +0000
> @@ -0,0 +1,7 @@
> +This directory contains a fork of Go's standard libraries net/http and
> crypto/tls.
> ...
>
> Please wrap lines in here at no more than 80 columns. I suggest 72.

Okay, done.

> [2]
>
> === added file 'fork/go-use-patched-tls.patch'
> --- fork/go-use-patched-tls.patch       1970-01-01 00:00:00 +0000
> +++ fork/go-use-patched-tls.patch       2013-07-22 13:11:27 +0000
> @@ -0,0 +1,5 @@
> +15c15
> +<         "launchpad.net/gwacl/fork/tls"
> +---
> +>      "crypto/tls"
> +
>
> I'm not sure this is useful. Without context it's not very
> informative. It's also in the wrong direction. I suggest omitting it.

You're right, fixed.

> [3]
>
> +func performX509Request(session *x509Session, request *X509Request)
> (*x509Response, error) {
> ...
> +    response.Header = httpResponse.Header
> +    if err != nil {
> +        return nil, err
> +    }
>
> I don't think that condition is needed here. Could this be one of
> those Go-boilerplate-confuses-DVCS moments?

Just a left-over from the recent refactoring, fixed.

lp:~rvb/gwacl/tls-fix updated
199. By Raphaël Badin

Rename files

200. By Raphaël Badin

Add LICENSE file.

Revision history for this message
Raphaël Badin (rvb) wrote :

After consulting with James Page, we got confirmation that the licensing is okay: BSD-licensed code can be included in gwacl, which is an AGPL3-licensed project. I'll add a LICENSE file in /fork and we will (in another branch) add proper COPYING/COPYING.LESSER license files for the gwacl project itself.

lp:~rvb/gwacl/tls-fix updated
201. By Raphaël Badin

Fix license file.

Revision history for this message
Gavin Panella (allenap) :
review: Approve
lp:~rvb/gwacl/tls-fix updated
202. By Raphaël Badin

Make format.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On 23/07/13 00:03, Raphaël Badin wrote:
> After consulting with James Page, we got confirmation that the licencing is okay: BSD-licensed code can be included in gwacl, which is a AGPL3-licensed project. I'll add a LICENSE file in /fork and we will (in another branch) add proper COPYING/COPYING.LESSER licenses files for the gwacl project itself.
>

It's actually LGPL, but I think it still works.

Preview Diff

=== modified file 'HACKING.txt'
--- HACKING.txt 2013-06-24 00:39:08 +0000
+++ HACKING.txt 2013-07-22 14:27:42 +0000
@@ -20,30 +20,6 @@
 .. _Bazaar: http://bazaar.canonical.com/
 
 
-Prerequisites
--------------
-
-The code that communicates with Azure's management API uses *libcurl*, through
-a Go language binding called *go-curl*.  You need to install the libcurl
-headers for your OS.  On Ubuntu this is::
-
-    sudo apt-get install libcurl4-openssl-dev
-
-If you didn't ``go get`` gwacl you may also need to install go-curl::
-
-    go get github.com/andelf/go-curl
-
-On Ubuntu 12.10 at least, you specifically need the given version of libcurl.
-With other versions you will receive unexpected "403" HTTP status codes
-("Forbidden") from the Azure server.
-
-Why use libcurl?  At the time of writing, Go's built-in http package does not
-support TLS renegotiation.  We find that Azure forces such a renegotiation
-when we access the management API.  The use of libcurl is embedded so that
-future implementations can swap it out for a different http library without
-breaking compatibility.
-
-
 API philosophy
 --------------
 
 
=== added directory 'fork'
=== added file 'fork/LICENSE'
--- fork/LICENSE 1970-01-01 00:00:00 +0000
+++ fork/LICENSE 2013-07-22 14:27:42 +0000
@@ -0,0 +1,27 @@
1Copyright (c) 2012 The Go Authors. All rights reserved.
2
3Redistribution and use in source and binary forms, with or without
4modification, are permitted provided that the following conditions are
5met:
6
7 * Redistributions of source code must retain the above copyright
8notice, this list of conditions and the following disclaimer.
9 * Redistributions in binary form must reproduce the above
10copyright notice, this list of conditions and the following disclaimer
11in the documentation and/or other materials provided with the
12distribution.
13 * Neither the name of Google Inc. nor the names of its
14contributors may be used to endorse or promote products derived from
15this software without specific prior written permission.
16
17THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
18"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
19LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
20A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
21OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
22SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
23LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
24DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
25THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
26(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
27OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
028
=== added file 'fork/README'
--- fork/README 1970-01-01 00:00:00 +0000
+++ fork/README 2013-07-22 14:27:42 +0000
@@ -0,0 +1,11 @@
1This directory contains a fork of Go's standard libraries net/http and
2crypto/tls.
3
4This fork is required to support the TLS renegotiation which is triggered by
5the Windows Azure server when establishing an https connection. TLS
6renegotiation is currently not supported by Go's standard library.
7
8The fork is based on go version 2:1.0.2-2.
9The library crypto/tls is patched to support TLS renegociation (see the patch
10file "go-tls-renegotiation.patch").
11The library net/http is patched to use the forked version of crypto/tls.
012
=== added file 'fork/go-tls-renegotiation.patch'
--- fork/go-tls-renegotiation.patch 1970-01-01 00:00:00 +0000
+++ fork/go-tls-renegotiation.patch 2013-07-22 14:27:42 +0000
@@ -0,0 +1,113 @@
1diff -r c242bbf5fa8c src/pkg/crypto/tls/common.go
2--- a/src/pkg/crypto/tls/common.go Wed Jul 17 14:03:27 2013 -0400
3+++ b/src/pkg/crypto/tls/common.go Thu Jul 18 13:45:43 2013 -0400
4@@ -44,6 +44,7 @@
5
6 // TLS handshake message types.
7 const (
8+ typeHelloRequest uint8 = 0
9 typeClientHello uint8 = 1
10 typeServerHello uint8 = 2
11 typeNewSessionTicket uint8 = 4
12diff -r c242bbf5fa8c src/pkg/crypto/tls/conn.go
13--- a/src/pkg/crypto/tls/conn.go Wed Jul 17 14:03:27 2013 -0400
14+++ b/src/pkg/crypto/tls/conn.go Thu Jul 18 13:45:43 2013 -0400
15@@ -146,6 +146,9 @@
16 hc.mac = hc.nextMac
17 hc.nextCipher = nil
18 hc.nextMac = nil
19+ for i := range hc.seq {
20+ hc.seq[i] = 0
21+ }
22 return nil
23 }
24
25@@ -478,7 +481,7 @@
26 func (c *Conn) readRecord(want recordType) error {
27 // Caller must be in sync with connection:
28 // handshake data if handshake not yet completed,
29- // else application data. (We don't support renegotiation.)
30+ // else application data.
31 switch want {
32 default:
33 return c.sendAlert(alertInternalError)
34@@ -611,7 +614,7 @@
35
36 case recordTypeHandshake:
37 // TODO(rsc): Should at least pick off connection close.
38- if typ != want {
39+ if typ != want && !c.isClient {
40 return c.sendAlert(alertNoRenegotiation)
41 }
42 c.hand.Write(data)
43@@ -741,6 +744,8 @@
44 data = c.hand.Next(4 + n)
45 var m handshakeMessage
46 switch data[0] {
47+ case typeHelloRequest:
48+ m = new(helloRequestMsg)
49 case typeClientHello:
50 m = new(clientHelloMsg)
51 case typeServerHello:
52@@ -825,6 +830,25 @@
53 return n + m, c.setError(err)
54 }
55
56+func (c *Conn) handleRenegotiation() error {
57+ c.handshakeComplete = false
58+ if !c.isClient {
59+ panic("renegotiation should only happen for a client")
60+ }
61+
62+ msg, err := c.readHandshake()
63+ if err != nil {
64+ return err
65+ }
66+ _, ok := msg.(*helloRequestMsg)
67+ if !ok {
68+ c.sendAlert(alertUnexpectedMessage)
69+ return alertUnexpectedMessage
70+ }
71+
72+ return c.Handshake()
73+}
74+
75 // Read can be made to time out and return a net.Error with Timeout() == true
76 // after a fixed time limit; see SetDeadline and SetReadDeadline.
77 func (c *Conn) Read(b []byte) (n int, err error) {
78@@ -844,6 +868,14 @@
79 // Soft error, like EAGAIN
80 return 0, err
81 }
82+ if c.hand.Len() > 0 {
83+ // We received handshake bytes, indicating the start of
84+ // a renegotiation.
85+ if err := c.handleRenegotiation(); err != nil {
86+ return 0, err
87+ }
88+ continue
89+ }
90 }
91 if err := c.error(); err != nil {
92 return 0, err
93diff -r c242bbf5fa8c src/pkg/crypto/tls/handshake_messages.go
94--- a/src/pkg/crypto/tls/handshake_messages.go Wed Jul 17 14:03:27 2013 -0400
95+++ b/src/pkg/crypto/tls/handshake_messages.go Thu Jul 18 13:45:43 2013 -0400
96@@ -1243,6 +1243,17 @@
97 return true
98 }
99
100+type helloRequestMsg struct {
101+}
102+
103+func (*helloRequestMsg) marshal() []byte {
104+ return []byte{typeHelloRequest, 0, 0, 0}
105+}
106+
107+func (*helloRequestMsg) unmarshal(data []byte) bool {
108+ return len(data) == 4
109+}
110+
111 func eqUint16s(x, y []uint16) bool {
112 if len(x) != len(y) {
113 return false
0114
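As the patch shows, the HelloRequest handshake message that the server uses to trigger renegotiation is trivial on the wire: a type byte of 0 followed by a 24-bit zero length. Extracted as a standalone sketch of the patched `helloRequestMsg` logic:

```go
package main

import "fmt"

// Mirrors the constant the patch adds to crypto/tls/common.go.
const typeHelloRequest uint8 = 0

// marshalHelloRequest follows the patched handshake_messages.go: a
// HelloRequest is exactly 4 bytes -- the type byte, then a zero 24-bit length.
func marshalHelloRequest() []byte {
	return []byte{typeHelloRequest, 0, 0, 0}
}

// unmarshalHelloRequest accepts only the fixed 4-byte encoding.
func unmarshalHelloRequest(data []byte) bool {
	return len(data) == 4
}

func main() {
	msg := marshalHelloRequest()
	fmt.Println(len(msg), unmarshalHelloRequest(msg))
}
```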
=== added directory 'fork/http'
=== added file 'fork/http/chunked.go'
--- fork/http/chunked.go 1970-01-01 00:00:00 +0000
+++ fork/http/chunked.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,170 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5// The wire protocol for HTTP's "chunked" Transfer-Encoding.
6
7// This code is duplicated in httputil/chunked.go.
8// Please make any changes in both files.
9
10package http
11
12import (
13 "bufio"
14 "bytes"
15 "errors"
16 "io"
17 "strconv"
18)
19
20const maxLineLength = 4096 // assumed <= bufio.defaultBufSize
21
22var ErrLineTooLong = errors.New("header line too long")
23
24// newChunkedReader returns a new chunkedReader that translates the data read from r
25// out of HTTP "chunked" format before returning it.
26// The chunkedReader returns io.EOF when the final 0-length chunk is read.
27//
28// newChunkedReader is not needed by normal applications. The http package
29// automatically decodes chunking when reading response bodies.
30func newChunkedReader(r io.Reader) io.Reader {
31 br, ok := r.(*bufio.Reader)
32 if !ok {
33 br = bufio.NewReader(r)
34 }
35 return &chunkedReader{r: br}
36}
37
38type chunkedReader struct {
39 r *bufio.Reader
40 n uint64 // unread bytes in chunk
41 err error
42}
43
44func (cr *chunkedReader) beginChunk() {
45 // chunk-size CRLF
46 var line string
47 line, cr.err = readLine(cr.r)
48 if cr.err != nil {
49 return
50 }
51 cr.n, cr.err = strconv.ParseUint(line, 16, 64)
52 if cr.err != nil {
53 return
54 }
55 if cr.n == 0 {
56 cr.err = io.EOF
57 }
58}
59
60func (cr *chunkedReader) Read(b []uint8) (n int, err error) {
61 if cr.err != nil {
62 return 0, cr.err
63 }
64 if cr.n == 0 {
65 cr.beginChunk()
66 if cr.err != nil {
67 return 0, cr.err
68 }
69 }
70 if uint64(len(b)) > cr.n {
71 b = b[0:cr.n]
72 }
73 n, cr.err = cr.r.Read(b)
74 cr.n -= uint64(n)
75 if cr.n == 0 && cr.err == nil {
76 // end of chunk (CRLF)
77 b := make([]byte, 2)
78 if _, cr.err = io.ReadFull(cr.r, b); cr.err == nil {
79 if b[0] != '\r' || b[1] != '\n' {
80 cr.err = errors.New("malformed chunked encoding")
81 }
82 }
83 }
84 return n, cr.err
85}
86
87// Read a line of bytes (up to \n) from b.
88// Give up if the line exceeds maxLineLength.
89// The returned bytes are a pointer into storage in
90// the bufio, so they are only valid until the next bufio read.
91func readLineBytes(b *bufio.Reader) (p []byte, err error) {
92 if p, err = b.ReadSlice('\n'); err != nil {
93 // We always know when EOF is coming.
94 // If the caller asked for a line, there should be a line.
95 if err == io.EOF {
96 err = io.ErrUnexpectedEOF
97 } else if err == bufio.ErrBufferFull {
98 err = ErrLineTooLong
99 }
100 return nil, err
101 }
102 if len(p) >= maxLineLength {
103 return nil, ErrLineTooLong
104 }
105
106 // Chop off trailing white space.
107 p = bytes.TrimRight(p, " \r\t\n")
108
109 return p, nil
110}
111
112// readLineBytes, but convert the bytes into a string.
113func readLine(b *bufio.Reader) (s string, err error) {
114 p, e := readLineBytes(b)
115 if e != nil {
116 return "", e
117 }
118 return string(p), nil
119}
120
121// newChunkedWriter returns a new chunkedWriter that translates writes into HTTP
122// "chunked" format before writing them to w. Closing the returned chunkedWriter
123// sends the final 0-length chunk that marks the end of the stream.
124//
125// newChunkedWriter is not needed by normal applications. The http
126// package adds chunking automatically if handlers don't set a
127// Content-Length header. Using newChunkedWriter inside a handler
128// would result in double chunking or chunking with a Content-Length
129// length, both of which are wrong.
130func newChunkedWriter(w io.Writer) io.WriteCloser {
131 return &chunkedWriter{w}
132}
133
134// Writing to chunkedWriter translates to writing in HTTP chunked Transfer
135// Encoding wire format to the underlying Wire chunkedWriter.
136type chunkedWriter struct {
137 Wire io.Writer
138}
139
140// Write the contents of data as one chunk to Wire.
141// NOTE: Note that the corresponding chunk-writing procedure in Conn.Write has
142// a bug since it does not check for success of io.WriteString
143func (cw *chunkedWriter) Write(data []byte) (n int, err error) {
144
145 // Don't send 0-length data. It looks like EOF for chunked encoding.
146 if len(data) == 0 {
147 return 0, nil
148 }
149
150 head := strconv.FormatInt(int64(len(data)), 16) + "\r\n"
151
152 if _, err = io.WriteString(cw.Wire, head); err != nil {
153 return 0, err
154 }
155 if n, err = cw.Wire.Write(data); err != nil {
156 return
157 }
158 if n != len(data) {
159 err = io.ErrShortWrite
160 return
161 }
162 _, err = io.WriteString(cw.Wire, "\r\n")
163
164 return
165}
166
167func (cw *chunkedWriter) Close() error {
168 _, err := io.WriteString(cw.Wire, "0\r\n")
169 return err
170}
0171
=== added file 'fork/http/client.go'
--- fork/http/client.go 1970-01-01 00:00:00 +0000
+++ fork/http/client.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,339 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5// HTTP client. See RFC 2616.
6//
7// This is the high-level Client interface.
8// The low-level implementation is in transport.go.
9
10package http
11
12import (
13 "encoding/base64"
14 "errors"
15 "fmt"
16 "io"
17 "net/url"
18 "strings"
19)
20
21// A Client is an HTTP client. Its zero value (DefaultClient) is a usable client
22// that uses DefaultTransport.
23//
24// The Client's Transport typically has internal state (cached
25// TCP connections), so Clients should be reused instead of created as
26// needed. Clients are safe for concurrent use by multiple goroutines.
27type Client struct {
28 // Transport specifies the mechanism by which individual
29 // HTTP requests are made.
30 // If nil, DefaultTransport is used.
31 Transport RoundTripper
32
33 // CheckRedirect specifies the policy for handling redirects.
34 // If CheckRedirect is not nil, the client calls it before
35 // following an HTTP redirect. The arguments req and via
36 // are the upcoming request and the requests made already,
37 // oldest first. If CheckRedirect returns an error, the client
38 // returns that error instead of issue the Request req.
39 //
40 // If CheckRedirect is nil, the Client uses its default policy,
41 // which is to stop after 10 consecutive requests.
42 CheckRedirect func(req *Request, via []*Request) error
43
44 // Jar specifies the cookie jar.
45 // If Jar is nil, cookies are not sent in requests and ignored
46 // in responses.
47 Jar CookieJar
48}
49
50// DefaultClient is the default Client and is used by Get, Head, and Post.
51var DefaultClient = &Client{}
52
53// RoundTripper is an interface representing the ability to execute a
54// single HTTP transaction, obtaining the Response for a given Request.
55//
56// A RoundTripper must be safe for concurrent use by multiple
57// goroutines.
58type RoundTripper interface {
59 // RoundTrip executes a single HTTP transaction, returning
60 // the Response for the request req. RoundTrip should not
61 // attempt to interpret the response. In particular,
62 // RoundTrip must return err == nil if it obtained a response,
63 // regardless of the response's HTTP status code. A non-nil
64 // err should be reserved for failure to obtain a response.
65 // Similarly, RoundTrip should not attempt to handle
66 // higher-level protocol details such as redirects,
67 // authentication, or cookies.
68 //
69 // RoundTrip should not modify the request, except for
70 // consuming the Body. The request's URL and Header fields
71 // are guaranteed to be initialized.
72 RoundTrip(*Request) (*Response, error)
73}
74
75// Given a string of the form "host", "host:port", or "[ipv6::address]:port",
76// return true if the string includes a port.
77func hasPort(s string) bool { return strings.LastIndex(s, ":") > strings.LastIndex(s, "]") }
78
79// Used in Send to implement io.ReadCloser by bundling together the
80// bufio.Reader through which we read the response, and the underlying
81// network connection.
82type readClose struct {
83 io.Reader
84 io.Closer
85}
86
87// Do sends an HTTP request and returns an HTTP response, following
88// policy (e.g. redirects, cookies, auth) as configured on the client.
89//
90// A non-nil response always contains a non-nil resp.Body.
91//
92// Callers should close resp.Body when done reading from it. If
93// resp.Body is not closed, the Client's underlying RoundTripper
94// (typically Transport) may not be able to re-use a persistent TCP
95// connection to the server for a subsequent "keep-alive" request.
96//
97// Generally Get, Post, or PostForm will be used instead of Do.
98func (c *Client) Do(req *Request) (resp *Response, err error) {
99 if req.Method == "GET" || req.Method == "HEAD" {
100 return c.doFollowingRedirects(req)
101 }
102 return send(req, c.Transport)
103}
104
105// send issues an HTTP request. Caller should close resp.Body when done reading from it.
106func send(req *Request, t RoundTripper) (resp *Response, err error) {
107 if t == nil {
108 t = DefaultTransport
109 if t == nil {
110 err = errors.New("http: no Client.Transport or DefaultTransport")
111 return
112 }
113 }
114
115 if req.URL == nil {
116 return nil, errors.New("http: nil Request.URL")
117 }
118
119 if req.RequestURI != "" {
120 return nil, errors.New("http: Request.RequestURI can't be set in client requests.")
121 }
122
123 // Most the callers of send (Get, Post, et al) don't need
124 // Headers, leaving it uninitialized. We guarantee to the
125 // Transport that this has been initialized, though.
126 if req.Header == nil {
127 req.Header = make(Header)
128 }
129
130 if u := req.URL.User; u != nil {
131 req.Header.Set("Authorization", "Basic "+base64.URLEncoding.EncodeToString([]byte(u.String())))
132 }
133 return t.RoundTrip(req)
134}
135
136// True if the specified HTTP status code is one for which the Get utility should
137// automatically redirect.
138func shouldRedirect(statusCode int) bool {
139 switch statusCode {
140 case StatusMovedPermanently, StatusFound, StatusSeeOther, StatusTemporaryRedirect:
141 return true
142 }
143 return false
144}
145
146// Get issues a GET to the specified URL. If the response is one of the following
147// redirect codes, Get follows the redirect, up to a maximum of 10 redirects:
148//
149// 301 (Moved Permanently)
150// 302 (Found)
151// 303 (See Other)
152// 307 (Temporary Redirect)
153//
154// Caller should close r.Body when done reading from it.
155//
156// Get is a wrapper around DefaultClient.Get.
157func Get(url string) (r *Response, err error) {
158 return DefaultClient.Get(url)
159}
160
161// Get issues a GET to the specified URL. If the response is one of the
162// following redirect codes, Get follows the redirect after calling the
163// Client's CheckRedirect function.
164//
165// 301 (Moved Permanently)
166// 302 (Found)
167// 303 (See Other)
168// 307 (Temporary Redirect)
169//
170// Caller should close r.Body when done reading from it.
171func (c *Client) Get(url string) (r *Response, err error) {
172 req, err := NewRequest("GET", url, nil)
173 if err != nil {
174 return nil, err
175 }
176 return c.doFollowingRedirects(req)
177}
178
179func (c *Client) doFollowingRedirects(ireq *Request) (r *Response, err error) {
180 // TODO: if/when we add cookie support, the redirected request shouldn't
181 // necessarily supply the same cookies as the original.
182 var base *url.URL
183 redirectChecker := c.CheckRedirect
184 if redirectChecker == nil {
185 redirectChecker = defaultCheckRedirect
186 }
187 var via []*Request
188
189 if ireq.URL == nil {
190 return nil, errors.New("http: nil Request.URL")
191 }
192
193 jar := c.Jar
194 if jar == nil {
195 jar = blackHoleJar{}
196 }
197
198 req := ireq
199 urlStr := "" // next relative or absolute URL to fetch (after first request)
200 for redirect := 0; ; redirect++ {
201 if redirect != 0 {
202 req = new(Request)
203 req.Method = ireq.Method
204 req.Header = make(Header)
205 req.URL, err = base.Parse(urlStr)
206 if err != nil {
207 break
208 }
209 if len(via) > 0 {
210 // Add the Referer header.
211 lastReq := via[len(via)-1]
212 if lastReq.URL.Scheme != "https" {
213 req.Header.Set("Referer", lastReq.URL.String())
214 }
215
216 err = redirectChecker(req, via)
217 if err != nil {
218 break
219 }
220 }
221 }
222
223 for _, cookie := range jar.Cookies(req.URL) {
224 req.AddCookie(cookie)
225 }
226 urlStr = req.URL.String()
227 if r, err = send(req, c.Transport); err != nil {
228 break
229 }
230 if c := r.Cookies(); len(c) > 0 {
231 jar.SetCookies(req.URL, c)
232 }
233
234 if shouldRedirect(r.StatusCode) {
235 r.Body.Close()
236 if urlStr = r.Header.Get("Location"); urlStr == "" {
237 err = errors.New(fmt.Sprintf("%d response missing Location header", r.StatusCode))
238 break
239 }
240 base = req.URL
241 via = append(via, req)
242 continue
243 }
244 return
245 }
246
247 method := ireq.Method
248 err = &url.Error{
249 Op: method[0:1] + strings.ToLower(method[1:]),
250 URL: urlStr,
251 Err: err,
252 }
253 return
254}
255
256func defaultCheckRedirect(req *Request, via []*Request) error {
257 if len(via) >= 10 {
258 return errors.New("stopped after 10 redirects")
259 }
260 return nil
261}
262
263// Post issues a POST to the specified URL.
264//
265// Caller should close r.Body when done reading from it.
266//
267// Post is a wrapper around DefaultClient.Post
268func Post(url string, bodyType string, body io.Reader) (r *Response, err error) {
269 return DefaultClient.Post(url, bodyType, body)
270}
271
272// Post issues a POST to the specified URL.
273//
274// Caller should close r.Body when done reading from it.
275func (c *Client) Post(url string, bodyType string, body io.Reader) (r *Response, err error) {
276 req, err := NewRequest("POST", url, body)
277 if err != nil {
278 return nil, err
279 }
280 req.Header.Set("Content-Type", bodyType)
281 if c.Jar != nil {
282 for _, cookie := range c.Jar.Cookies(req.URL) {
283 req.AddCookie(cookie)
284 }
285 }
286 r, err = send(req, c.Transport)
287 if err == nil && c.Jar != nil {
288 c.Jar.SetCookies(req.URL, r.Cookies())
289 }
290 return r, err
291}
292
293// PostForm issues a POST to the specified URL,
294// with data's keys and values urlencoded as the request body.
295//
296// Caller should close r.Body when done reading from it.
297//
298// PostForm is a wrapper around DefaultClient.PostForm
299func PostForm(url string, data url.Values) (r *Response, err error) {
300 return DefaultClient.PostForm(url, data)
301}
302
303// PostForm issues a POST to the specified URL,
304// with data's keys and values urlencoded as the request body.
305//
306// Caller should close r.Body when done reading from it.
307func (c *Client) PostForm(url string, data url.Values) (r *Response, err error) {
308 return c.Post(url, "application/x-www-form-urlencoded", strings.NewReader(data.Encode()))
309}
310
311// Head issues a HEAD to the specified URL. If the response is one of the
312// following redirect codes, Head follows the redirect after calling the
313// Client's CheckRedirect function.
314//
315// 301 (Moved Permanently)
316// 302 (Found)
317// 303 (See Other)
318// 307 (Temporary Redirect)
319//
320// Head is a wrapper around DefaultClient.Head
321func Head(url string) (r *Response, err error) {
322 return DefaultClient.Head(url)
323}
324
325// Head issues a HEAD to the specified URL. If the response is one of the
326// following redirect codes, Head follows the redirect after calling the
327// Client's CheckRedirect function.
328//
329// 301 (Moved Permanently)
330// 302 (Found)
331// 303 (See Other)
332// 307 (Temporary Redirect)
333func (c *Client) Head(url string) (r *Response, err error) {
334 req, err := NewRequest("HEAD", url, nil)
335 if err != nil {
336 return nil, err
337 }
338 return c.doFollowingRedirects(req)
339}
0340
=== added file 'fork/http/cookie.go'
--- fork/http/cookie.go 1970-01-01 00:00:00 +0000
+++ fork/http/cookie.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,267 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7import (
8 "bytes"
9 "fmt"
10 "strconv"
11 "strings"
12 "time"
13)
14
15// This implementation is done according to RFC 6265:
16//
17// http://tools.ietf.org/html/rfc6265
18
19// A Cookie represents an HTTP cookie as sent in the Set-Cookie header of an
20// HTTP response or the Cookie header of an HTTP request.
21type Cookie struct {
22 Name string
23 Value string
24 Path string
25 Domain string
26 Expires time.Time
27 RawExpires string
28
29 // MaxAge=0 means no 'Max-Age' attribute specified.
30 // MaxAge<0 means delete cookie now, equivalently 'Max-Age: 0'
31 // MaxAge>0 means Max-Age attribute present and given in seconds
32 MaxAge int
33 Secure bool
34 HttpOnly bool
35 Raw string
36 Unparsed []string // Raw text of unparsed attribute-value pairs
37}
38
39// readSetCookies parses all "Set-Cookie" values from
40// the header h and returns the successfully parsed Cookies.
41func readSetCookies(h Header) []*Cookie {
42 cookies := []*Cookie{}
43 for _, line := range h["Set-Cookie"] {
44 parts := strings.Split(strings.TrimSpace(line), ";")
45 if len(parts) == 1 && parts[0] == "" {
46 continue
47 }
48 parts[0] = strings.TrimSpace(parts[0])
49 j := strings.Index(parts[0], "=")
50 if j < 0 {
51 continue
52 }
53 name, value := parts[0][:j], parts[0][j+1:]
54 if !isCookieNameValid(name) {
55 continue
56 }
57 value, success := parseCookieValue(value)
58 if !success {
59 continue
60 }
61 c := &Cookie{
62 Name: name,
63 Value: value,
64 Raw: line,
65 }
66 for i := 1; i < len(parts); i++ {
67 parts[i] = strings.TrimSpace(parts[i])
68 if len(parts[i]) == 0 {
69 continue
70 }
71
72 attr, val := parts[i], ""
73 if j := strings.Index(attr, "="); j >= 0 {
74 attr, val = attr[:j], attr[j+1:]
75 }
76 lowerAttr := strings.ToLower(attr)
77 parseCookieValueFn := parseCookieValue
78 if lowerAttr == "expires" {
79 parseCookieValueFn = parseCookieExpiresValue
80 }
81 val, success = parseCookieValueFn(val)
82 if !success {
83 c.Unparsed = append(c.Unparsed, parts[i])
84 continue
85 }
86 switch lowerAttr {
87 case "secure":
88 c.Secure = true
89 continue
90 case "httponly":
91 c.HttpOnly = true
92 continue
93 case "domain":
94 c.Domain = val
95 // TODO: Add domain parsing
96 continue
97 case "max-age":
98 secs, err := strconv.Atoi(val)
99 if err != nil || secs != 0 && val[0] == '0' {
100 break
101 }
102 if secs <= 0 {
103 c.MaxAge = -1
104 } else {
105 c.MaxAge = secs
106 }
107 continue
108 case "expires":
109 c.RawExpires = val
110 exptime, err := time.Parse(time.RFC1123, val)
111 if err != nil {
112 exptime, err = time.Parse("Mon, 02-Jan-2006 15:04:05 MST", val)
113 if err != nil {
114 c.Expires = time.Time{}
115 break
116 }
117 }
118 c.Expires = exptime.UTC()
119 continue
120 case "path":
121 c.Path = val
122 // TODO: Add path parsing
123 continue
124 }
125 c.Unparsed = append(c.Unparsed, parts[i])
126 }
127 cookies = append(cookies, c)
128 }
129 return cookies
130}
131
132// SetCookie adds a Set-Cookie header to the provided ResponseWriter's headers.
133func SetCookie(w ResponseWriter, cookie *Cookie) {
134 w.Header().Add("Set-Cookie", cookie.String())
135}
136
137// String returns the serialization of the cookie for use in a Cookie
138// header (if only Name and Value are set) or a Set-Cookie response
139// header (if other fields are set).
140func (c *Cookie) String() string {
141 var b bytes.Buffer
142 fmt.Fprintf(&b, "%s=%s", sanitizeName(c.Name), sanitizeValue(c.Value))
143 if len(c.Path) > 0 {
144 fmt.Fprintf(&b, "; Path=%s", sanitizeValue(c.Path))
145 }
146 if len(c.Domain) > 0 {
147 fmt.Fprintf(&b, "; Domain=%s", sanitizeValue(c.Domain))
148 }
149 if c.Expires.Unix() > 0 {
150 fmt.Fprintf(&b, "; Expires=%s", c.Expires.UTC().Format(time.RFC1123))
151 }
152 if c.MaxAge > 0 {
153 fmt.Fprintf(&b, "; Max-Age=%d", c.MaxAge)
154 } else if c.MaxAge < 0 {
155 fmt.Fprintf(&b, "; Max-Age=0")
156 }
157 if c.HttpOnly {
158 fmt.Fprintf(&b, "; HttpOnly")
159 }
160 if c.Secure {
161 fmt.Fprintf(&b, "; Secure")
162 }
163 return b.String()
164}
165
166// readCookies parses all "Cookie" values from the header h and
167// returns the successfully parsed Cookies.
168//
169// If filter isn't empty, only cookies of that name are returned.
170func readCookies(h Header, filter string) []*Cookie {
171 cookies := []*Cookie{}
172 lines, ok := h["Cookie"]
173 if !ok {
174 return cookies
175 }
176
177 for _, line := range lines {
178 parts := strings.Split(strings.TrimSpace(line), ";")
179 if len(parts) == 1 && parts[0] == "" {
180 continue
181 }
182 // Per-line attributes
183 parsedPairs := 0
184 for i := 0; i < len(parts); i++ {
185 parts[i] = strings.TrimSpace(parts[i])
186 if len(parts[i]) == 0 {
187 continue
188 }
189 name, val := parts[i], ""
190 if j := strings.Index(name, "="); j >= 0 {
191 name, val = name[:j], name[j+1:]
192 }
193 if !isCookieNameValid(name) {
194 continue
195 }
196 if filter != "" && filter != name {
197 continue
198 }
199 val, success := parseCookieValue(val)
200 if !success {
201 continue
202 }
203 cookies = append(cookies, &Cookie{Name: name, Value: val})
204 parsedPairs++
205 }
206 }
207 return cookies
208}
209
210var cookieNameSanitizer = strings.NewReplacer("\n", "-", "\r", "-")
211
212func sanitizeName(n string) string {
213 return cookieNameSanitizer.Replace(n)
214}
215
216var cookieValueSanitizer = strings.NewReplacer("\n", " ", "\r", " ", ";", " ")
217
218func sanitizeValue(v string) string {
219 return cookieValueSanitizer.Replace(v)
220}
221
222func unquoteCookieValue(v string) string {
223 if len(v) > 1 && v[0] == '"' && v[len(v)-1] == '"' {
224 return v[1 : len(v)-1]
225 }
226 return v
227}
228
229func isCookieByte(c byte) bool {
230 switch {
231 case c == 0x21, 0x23 <= c && c <= 0x2b, 0x2d <= c && c <= 0x3a,
232 0x3c <= c && c <= 0x5b, 0x5d <= c && c <= 0x7e:
233 return true
234 }
235 return false
236}
237
238func isCookieExpiresByte(c byte) (ok bool) {
239 return isCookieByte(c) || c == ',' || c == ' '
240}
241
242func parseCookieValue(raw string) (string, bool) {
243 return parseCookieValueUsing(raw, isCookieByte)
244}
245
246func parseCookieExpiresValue(raw string) (string, bool) {
247 return parseCookieValueUsing(raw, isCookieExpiresByte)
248}
249
250func parseCookieValueUsing(raw string, validByte func(byte) bool) (string, bool) {
251 raw = unquoteCookieValue(raw)
252 for i := 0; i < len(raw); i++ {
253 if !validByte(raw[i]) {
254 return "", false
255 }
256 }
257 return raw, true
258}
259
260func isCookieNameValid(raw string) bool {
261 for _, c := range raw {
262 if !isToken(byte(c)) {
263 return false
264 }
265 }
266 return true
267}
0268
=== added file 'fork/http/doc.go'
--- fork/http/doc.go 1970-01-01 00:00:00 +0000
+++ fork/http/doc.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,80 @@
1// Copyright 2011 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5/*
6Package http provides HTTP client and server implementations.
7
8Get, Head, Post, and PostForm make HTTP requests:
9
10 resp, err := http.Get("http://example.com/")
11 ...
12 resp, err := http.Post("http://example.com/upload", "image/jpeg", &buf)
13 ...
14 resp, err := http.PostForm("http://example.com/form",
15 url.Values{"key": {"Value"}, "id": {"123"}})
16
17The client must close the response body when finished with it:
18
19 resp, err := http.Get("http://example.com/")
20 if err != nil {
21 // handle error
22 }
23 defer resp.Body.Close()
24 body, err := ioutil.ReadAll(resp.Body)
25 // ...
26
27For control over HTTP client headers, redirect policy, and other
28settings, create a Client:
29
30 client := &http.Client{
31 CheckRedirect: redirectPolicyFunc,
32 }
33
34 resp, err := client.Get("http://example.com")
35 // ...
36
37 req, err := http.NewRequest("GET", "http://example.com", nil)
38 // ...
39 req.Header.Add("If-None-Match", `W/"wyzzy"`)
40 resp, err := client.Do(req)
41 // ...
42
43For control over proxies, TLS configuration, keep-alives,
44compression, and other settings, create a Transport:
45
46 tr := &http.Transport{
47 TLSClientConfig: &tls.Config{RootCAs: pool},
48 DisableCompression: true,
49 }
50 client := &http.Client{Transport: tr}
51 resp, err := client.Get("https://example.com")
52
53Clients and Transports are safe for concurrent use by multiple
54goroutines and for efficiency should only be created once and re-used.
55
56ListenAndServe starts an HTTP server with a given address and handler.
57The handler is usually nil, which means to use DefaultServeMux.
58Handle and HandleFunc add handlers to DefaultServeMux:
59
60 http.Handle("/foo", fooHandler)
61
62 http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) {
63 fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
64 })
65
66 log.Fatal(http.ListenAndServe(":8080", nil))
67
68More control over the server's behavior is available by creating a
69custom Server:
70
71 s := &http.Server{
72 Addr: ":8080",
73 Handler: myHandler,
74 ReadTimeout: 10 * time.Second,
75 WriteTimeout: 10 * time.Second,
76 MaxHeaderBytes: 1 << 20,
77 }
78 log.Fatal(s.ListenAndServe())
79*/
80package http
081
=== added file 'fork/http/filetransport.go'
--- fork/http/filetransport.go 1970-01-01 00:00:00 +0000
+++ fork/http/filetransport.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,123 @@
1// Copyright 2011 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7import (
8 "fmt"
9 "io"
10)
11
12// fileTransport implements RoundTripper for the 'file' protocol.
13type fileTransport struct {
14 fh fileHandler
15}
16
17// NewFileTransport returns a new RoundTripper, serving the provided
18// FileSystem. The returned RoundTripper ignores the URL host in its
19// incoming requests, as well as most other properties of the
20// request.
21//
22// The typical use case for NewFileTransport is to register the "file"
23// protocol with a Transport, as in:
24//
25// t := &http.Transport{}
26// t.RegisterProtocol("file", http.NewFileTransport(http.Dir("/")))
27// c := &http.Client{Transport: t}
28// res, err := c.Get("file:///etc/passwd")
29// ...
30func NewFileTransport(fs FileSystem) RoundTripper {
31 return fileTransport{fileHandler{fs}}
32}
33
34func (t fileTransport) RoundTrip(req *Request) (resp *Response, err error) {
35 // We start ServeHTTP in a goroutine, which may take a long
36 // time if the file is large. The newPopulateResponseWriter
37 // call returns a channel which either ServeHTTP or finish()
38 // sends our *Response on, once the *Response itself has been
39 // populated (even if the body itself is still being
40 // written to the res.Body, a pipe)
41 rw, resc := newPopulateResponseWriter()
42 go func() {
43 t.fh.ServeHTTP(rw, req)
44 rw.finish()
45 }()
46 return <-resc, nil
47}
48
49func newPopulateResponseWriter() (*populateResponse, <-chan *Response) {
50 pr, pw := io.Pipe()
51 rw := &populateResponse{
52 ch: make(chan *Response),
53 pw: pw,
54 res: &Response{
55 Proto: "HTTP/1.0",
56 ProtoMajor: 1,
57 Header: make(Header),
58 Close: true,
59 Body: pr,
60 },
61 }
62 return rw, rw.ch
63}
64
65// populateResponse is a ResponseWriter that populates the *Response
66// in res, and writes its body to a pipe connected to the response
67// body. Once writes begin or finish() is called, the response is sent
68// on ch.
69type populateResponse struct {
70 res *Response
71 ch chan *Response
72 wroteHeader bool
73 hasContent bool
74 sentResponse bool
75 pw *io.PipeWriter
76}
77
78func (pr *populateResponse) finish() {
79 if !pr.wroteHeader {
80 pr.WriteHeader(500)
81 }
82 if !pr.sentResponse {
83 pr.sendResponse()
84 }
85 pr.pw.Close()
86}
87
88func (pr *populateResponse) sendResponse() {
89 if pr.sentResponse {
90 return
91 }
92 pr.sentResponse = true
93
94 if pr.hasContent {
95 pr.res.ContentLength = -1
96 }
97 pr.ch <- pr.res
98}
99
100func (pr *populateResponse) Header() Header {
101 return pr.res.Header
102}
103
104func (pr *populateResponse) WriteHeader(code int) {
105 if pr.wroteHeader {
106 return
107 }
108 pr.wroteHeader = true
109
110 pr.res.StatusCode = code
111 pr.res.Status = fmt.Sprintf("%d %s", code, StatusText(code))
112}
113
114func (pr *populateResponse) Write(p []byte) (n int, err error) {
115 if !pr.wroteHeader {
116 pr.WriteHeader(StatusOK)
117 }
118 pr.hasContent = true
119 if !pr.sentResponse {
120 pr.sendResponse()
121 }
122 return pr.pw.Write(p)
123}
0124
=== added file 'fork/http/fs.go'
--- fork/http/fs.go 1970-01-01 00:00:00 +0000
+++ fork/http/fs.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,367 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5// HTTP file system request handler
6
7package http
8
9import (
10 "errors"
11 "fmt"
12 "io"
13 "mime"
14 "os"
15 "path"
16 "path/filepath"
17 "strconv"
18 "strings"
19 "time"
20)
21
22// A Dir implements http.FileSystem using the native file
23// system restricted to a specific directory tree.
24//
25// An empty Dir is treated as ".".
26type Dir string
27
28func (d Dir) Open(name string) (File, error) {
29 if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 {
30 return nil, errors.New("http: invalid character in file path")
31 }
32 dir := string(d)
33 if dir == "" {
34 dir = "."
35 }
36 f, err := os.Open(filepath.Join(dir, filepath.FromSlash(path.Clean("/"+name))))
37 if err != nil {
38 return nil, err
39 }
40 return f, nil
41}
42
43// A FileSystem implements access to a collection of named files.
44// The elements in a file path are separated by slash ('/', U+002F)
45// characters, regardless of host operating system convention.
46type FileSystem interface {
47 Open(name string) (File, error)
48}
49
50// A File is returned by a FileSystem's Open method and can be
51// served by the FileServer implementation.
52type File interface {
53 Close() error
54 Stat() (os.FileInfo, error)
55 Readdir(count int) ([]os.FileInfo, error)
56 Read([]byte) (int, error)
57 Seek(offset int64, whence int) (int64, error)
58}
59
60func dirList(w ResponseWriter, f File) {
61 w.Header().Set("Content-Type", "text/html; charset=utf-8")
62 fmt.Fprintf(w, "<pre>\n")
63 for {
64 dirs, err := f.Readdir(100)
65 if err != nil || len(dirs) == 0 {
66 break
67 }
68 for _, d := range dirs {
69 name := d.Name()
70 if d.IsDir() {
71 name += "/"
72 }
73 // TODO htmlescape
74 fmt.Fprintf(w, "<a href=\"%s\">%s</a>\n", name, name)
75 }
76 }
77 fmt.Fprintf(w, "</pre>\n")
78}
79
80// ServeContent replies to the request using the content in the
81// provided ReadSeeker. The main benefit of ServeContent over io.Copy
82// is that it handles Range requests properly, sets the MIME type, and
83// handles If-Modified-Since requests.
84//
85// If the response's Content-Type header is not set, ServeContent
86// first tries to deduce the type from name's file extension and,
87// if that fails, falls back to reading the first block of the content
88// and passing it to DetectContentType.
89// The name is otherwise unused; in particular it can be empty and is
90// never sent in the response.
91//
92// If modtime is not the zero time, ServeContent includes it in a
93// Last-Modified header in the response. If the request includes an
94// If-Modified-Since header, ServeContent uses modtime to decide
95// whether the content needs to be sent at all.
96//
97// The content's Seek method must work: ServeContent uses
98// a seek to the end of the content to determine its size.
99//
100// Note that *os.File implements the io.ReadSeeker interface.
101func ServeContent(w ResponseWriter, req *Request, name string, modtime time.Time, content io.ReadSeeker) {
102 size, err := content.Seek(0, os.SEEK_END)
103 if err != nil {
104 Error(w, "seeker can't seek", StatusInternalServerError)
105 return
106 }
107 _, err = content.Seek(0, os.SEEK_SET)
108 if err != nil {
109 Error(w, "seeker can't seek", StatusInternalServerError)
110 return
111 }
112 serveContent(w, req, name, modtime, size, content)
113}
114
115// if name is empty, filename is unknown. (used for mime type, before sniffing)
116// if modtime.IsZero(), modtime is unknown.
117// content must be seeked to the beginning of the file.
118func serveContent(w ResponseWriter, r *Request, name string, modtime time.Time, size int64, content io.ReadSeeker) {
119 if checkLastModified(w, r, modtime) {
120 return
121 }
122
123 code := StatusOK
124
125 // If Content-Type isn't set, use the file's extension to find it.
126 if w.Header().Get("Content-Type") == "" {
127 ctype := mime.TypeByExtension(filepath.Ext(name))
128 if ctype == "" {
129 // read a chunk to decide between utf-8 text and binary
130 var buf [1024]byte
131 n, _ := io.ReadFull(content, buf[:])
132 b := buf[:n]
133 ctype = DetectContentType(b)
134 _, err := content.Seek(0, os.SEEK_SET) // rewind to output whole file
135 if err != nil {
136 Error(w, "seeker can't seek", StatusInternalServerError)
137 return
138 }
139 }
140 w.Header().Set("Content-Type", ctype)
141 }
142
143 // handle Content-Range header.
144 // TODO(adg): handle multiple ranges
145 sendSize := size
146 if size >= 0 {
147 ranges, err := parseRange(r.Header.Get("Range"), size)
148 if err == nil && len(ranges) > 1 {
149 err = errors.New("multiple ranges not supported")
150 }
151 if err != nil {
152 Error(w, err.Error(), StatusRequestedRangeNotSatisfiable)
153 return
154 }
155 if len(ranges) == 1 {
156 ra := ranges[0]
157 if _, err := content.Seek(ra.start, os.SEEK_SET); err != nil {
158 Error(w, err.Error(), StatusRequestedRangeNotSatisfiable)
159 return
160 }
161 sendSize = ra.length
162 code = StatusPartialContent
163 w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", ra.start, ra.start+ra.length-1, size))
164 }
165
166 w.Header().Set("Accept-Ranges", "bytes")
167 if w.Header().Get("Content-Encoding") == "" {
168 w.Header().Set("Content-Length", strconv.FormatInt(sendSize, 10))
169 }
170 }
171
172 w.WriteHeader(code)
173
174 if r.Method != "HEAD" {
175 if sendSize == -1 {
176 io.Copy(w, content)
177 } else {
178 io.CopyN(w, content, sendSize)
179 }
180 }
181}
182
183// modtime is the modification time of the resource to be served, or IsZero().
184// return value is whether this request is now complete.
185func checkLastModified(w ResponseWriter, r *Request, modtime time.Time) bool {
186 if modtime.IsZero() {
187 return false
188 }
189
190// The Last-Modified header truncates sub-second precision, so
190// The Last-Modified header truncates sub-second precision, so
191 // use mtime < t+1s instead of mtime <= t to check for unmodified.
192 if t, err := time.Parse(TimeFormat, r.Header.Get("If-Modified-Since")); err == nil && modtime.Before(t.Add(1*time.Second)) {
193 w.WriteHeader(StatusNotModified)
194 return true
195 }
196 w.Header().Set("Last-Modified", modtime.UTC().Format(TimeFormat))
197 return false
198}
199
200// name is '/'-separated, not filepath.Separator.
201func serveFile(w ResponseWriter, r *Request, fs FileSystem, name string, redirect bool) {
202 const indexPage = "/index.html"
203
204 // redirect .../index.html to .../
205 // can't use Redirect() because that would make the path absolute,
206 // which would be a problem running under StripPrefix
207 if strings.HasSuffix(r.URL.Path, indexPage) {
208 localRedirect(w, r, "./")
209 return
210 }
211
212 f, err := fs.Open(name)
213 if err != nil {
214 // TODO expose actual error?
215 NotFound(w, r)
216 return
217 }
218 defer f.Close()
219
220 d, err1 := f.Stat()
221 if err1 != nil {
222 // TODO expose actual error?
223 NotFound(w, r)
224 return
225 }
226
227 if redirect {
228 // redirect to canonical path: / at end of directory url
229 // r.URL.Path always begins with /
230 url := r.URL.Path
231 if d.IsDir() {
232 if url[len(url)-1] != '/' {
233 localRedirect(w, r, path.Base(url)+"/")
234 return
235 }
236 } else {
237 if url[len(url)-1] == '/' {
238 localRedirect(w, r, "../"+path.Base(url))
239 return
240 }
241 }
242 }
243
244 // use contents of index.html for directory, if present
245 if d.IsDir() {
246 if checkLastModified(w, r, d.ModTime()) {
247 return
248 }
249 index := name + indexPage
250 ff, err := fs.Open(index)
251 if err == nil {
252 defer ff.Close()
253 dd, err := ff.Stat()
254 if err == nil {
255 name = index
256 d = dd
257 f = ff
258 }
259 }
260 }
261
262 if d.IsDir() {
263 dirList(w, f)
264 return
265 }
266
267 serveContent(w, r, d.Name(), d.ModTime(), d.Size(), f)
268}
269
270// localRedirect gives a Moved Permanently response.
271// It does not convert relative paths to absolute paths like Redirect does.
272func localRedirect(w ResponseWriter, r *Request, newPath string) {
273 if q := r.URL.RawQuery; q != "" {
274 newPath += "?" + q
275 }
276 w.Header().Set("Location", newPath)
277 w.WriteHeader(StatusMovedPermanently)
278}
279
280// ServeFile replies to the request with the contents of the named file or directory.
281func ServeFile(w ResponseWriter, r *Request, name string) {
282 dir, file := filepath.Split(name)
283 serveFile(w, r, Dir(dir), file, false)
284}
285
286type fileHandler struct {
287 root FileSystem
288}
289
290// FileServer returns a handler that serves HTTP requests
291// with the contents of the file system rooted at root.
292//
293// To use the operating system's file system implementation,
294// use http.Dir:
295//
296// http.Handle("/", http.FileServer(http.Dir("/tmp")))
297func FileServer(root FileSystem) Handler {
298 return &fileHandler{root}
299}
300
301func (f *fileHandler) ServeHTTP(w ResponseWriter, r *Request) {
302 upath := r.URL.Path
303 if !strings.HasPrefix(upath, "/") {
304 upath = "/" + upath
305 r.URL.Path = upath
306 }
307 serveFile(w, r, f.root, path.Clean(upath), true)
308}
309
310// httpRange specifies the byte range to be sent to the client.
311type httpRange struct {
312 start, length int64
313}
314
315// parseRange parses a Range header string as per RFC 2616.
316func parseRange(s string, size int64) ([]httpRange, error) {
317 if s == "" {
318 return nil, nil // header not present
319 }
320 const b = "bytes="
321 if !strings.HasPrefix(s, b) {
322 return nil, errors.New("invalid range")
323 }
324 var ranges []httpRange
325 for _, ra := range strings.Split(s[len(b):], ",") {
326 i := strings.Index(ra, "-")
327 if i < 0 {
328 return nil, errors.New("invalid range")
329 }
330 start, end := ra[:i], ra[i+1:]
331 var r httpRange
332 if start == "" {
333 // If no start is specified, end specifies the
334 // range start relative to the end of the file.
335 i, err := strconv.ParseInt(end, 10, 64)
336 if err != nil {
337 return nil, errors.New("invalid range")
338 }
339 if i > size {
340 i = size
341 }
342 r.start = size - i
343 r.length = size - r.start
344 } else {
345 i, err := strconv.ParseInt(start, 10, 64)
346 if err != nil || i > size || i < 0 {
347 return nil, errors.New("invalid range")
348 }
349 r.start = i
350 if end == "" {
351 // If no end is specified, range extends to end of the file.
352 r.length = size - r.start
353 } else {
354 i, err := strconv.ParseInt(end, 10, 64)
355 if err != nil || r.start > i {
356 return nil, errors.New("invalid range")
357 }
358 if i >= size {
359 i = size - 1
360 }
361 r.length = i - r.start + 1
362 }
363 }
364 ranges = append(ranges, r)
365 }
366 return ranges, nil
367}
0368
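The suffix-range arithmetic in `parseRange` above (an empty start means "the last N bytes": start = size - N) can be exercised end to end through `ServeContent`. A sketch using the standard library's equivalent, with a hypothetical `rangeGet` helper serving a fixed 10-byte body:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

// rangeGet serves "0123456789" via http.ServeContent and issues a single
// GET with the given Range header, returning status, Content-Range, body.
func rangeGet(rangeHdr string) (int, string, string) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.ServeContent(w, r, "data.txt", time.Time{}, strings.NewReader("0123456789"))
	}))
	defer srv.Close()

	req, _ := http.NewRequest("GET", srv.URL, nil)
	req.Header.Set("Range", rangeHdr)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, resp.Header.Get("Content-Range"), string(b)
}

func main() {
	// "bytes=-4" is a suffix range: the last 4 bytes, so start = 10-4 = 6.
	status, cr, body := rangeGet("bytes=-4")
	fmt.Println(status, cr, body) // → 206 bytes 6-9/10 6789
}
```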
=== added file 'fork/http/header.go'
--- fork/http/header.go 1970-01-01 00:00:00 +0000
+++ fork/http/header.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,78 @@
1// Copyright 2010 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7import (
8 "fmt"
9 "io"
10 "net/textproto"
11 "sort"
12 "strings"
13)
14
15// A Header represents the key-value pairs in an HTTP header.
16type Header map[string][]string
17
18// Add adds the key, value pair to the header.
19// It appends to any existing values associated with key.
20func (h Header) Add(key, value string) {
21 textproto.MIMEHeader(h).Add(key, value)
22}
23
24// Set sets the header entries associated with key to
25// the single element value. It replaces any existing
26// values associated with key.
27func (h Header) Set(key, value string) {
28 textproto.MIMEHeader(h).Set(key, value)
29}
30
31// Get gets the first value associated with the given key.
32// If there are no values associated with the key, Get returns "".
33// To access multiple values of a key, access the map directly
34// with CanonicalHeaderKey.
35func (h Header) Get(key string) string {
36 return textproto.MIMEHeader(h).Get(key)
37}
38
39// Del deletes the values associated with key.
40func (h Header) Del(key string) {
41 textproto.MIMEHeader(h).Del(key)
42}
43
44// Write writes a header in wire format.
45func (h Header) Write(w io.Writer) error {
46 return h.WriteSubset(w, nil)
47}
48
49var headerNewlineToSpace = strings.NewReplacer("\n", " ", "\r", " ")
50
51// WriteSubset writes a header in wire format.
52// If exclude is not nil, keys where exclude[key] == true are not written.
53func (h Header) WriteSubset(w io.Writer, exclude map[string]bool) error {
54 keys := make([]string, 0, len(h))
55 for k := range h {
56 if exclude == nil || !exclude[k] {
57 keys = append(keys, k)
58 }
59 }
60 sort.Strings(keys)
61 for _, k := range keys {
62 for _, v := range h[k] {
63 v = headerNewlineToSpace.Replace(v)
64 v = strings.TrimSpace(v)
65 if _, err := fmt.Fprintf(w, "%s: %s\r\n", k, v); err != nil {
66 return err
67 }
68 }
69 }
70 return nil
71}
72
73// CanonicalHeaderKey returns the canonical format of the
74// header key s. The canonicalization converts the first
75// letter and any letter following a hyphen to upper case;
76// the rest are converted to lowercase. For example, the
77// canonical key for "accept-encoding" is "Accept-Encoding".
78func CanonicalHeaderKey(s string) string { return textproto.CanonicalMIMEHeaderKey(s) }
079
=== added file 'fork/http/jar.go'
--- fork/http/jar.go 1970-01-01 00:00:00 +0000
+++ fork/http/jar.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,30 @@
1// Copyright 2011 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7import (
8 "net/url"
9)
10
11// A CookieJar manages storage and use of cookies in HTTP requests.
12//
13// Implementations of CookieJar must be safe for concurrent use by multiple
14// goroutines.
15type CookieJar interface {
16 // SetCookies handles the receipt of the cookies in a reply for the
17 // given URL. It may or may not choose to save the cookies, depending
18 // on the jar's policy and implementation.
19 SetCookies(u *url.URL, cookies []*Cookie)
20
21 // Cookies returns the cookies to send in a request for the given URL.
22 // It is up to the implementation to honor the standard cookie use
23 // restrictions such as in RFC 6265.
24 Cookies(u *url.URL) []*Cookie
25}
26
27type blackHoleJar struct{}
28
29func (blackHoleJar) SetCookies(u *url.URL, cookies []*Cookie) {}
30func (blackHoleJar) Cookies(u *url.URL) []*Cookie { return nil }
031
=== added file 'fork/http/lex.go'
--- fork/http/lex.go 1970-01-01 00:00:00 +0000
+++ fork/http/lex.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,136 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7// This file deals with lexical matters of HTTP
8
9func isSeparator(c byte) bool {
10 switch c {
11 case '(', ')', '<', '>', '@', ',', ';', ':', '\\', '"', '/', '[', ']', '?', '=', '{', '}', ' ', '\t':
12 return true
13 }
14 return false
15}
16
17func isCtl(c byte) bool { return (0 <= c && c <= 31) || c == 127 }
18
19func isChar(c byte) bool { return 0 <= c && c <= 127 }
20
21func isAnyText(c byte) bool { return !isCtl(c) }
22
23func isQdText(c byte) bool { return isAnyText(c) && c != '"' }
24
25func isToken(c byte) bool { return isChar(c) && !isCtl(c) && !isSeparator(c) }
26
27// Valid escaped sequences are not specified in RFC 2616, so for now, we assume
28// that they coincide with the common sense ones used by Go. Malformed
29// characters should probably not be treated as errors by a robust (forgiving)
30// parser, so we replace them with the '?' character.
31func httpUnquotePair(b byte) byte {
32 // skip the first byte, which should always be '\'
33 switch b {
34 case 'a':
35 return '\a'
36 case 'b':
37 return '\b'
38 case 'f':
39 return '\f'
40 case 'n':
41 return '\n'
42 case 'r':
43 return '\r'
44 case 't':
45 return '\t'
46 case 'v':
47 return '\v'
48 case '\\':
49 return '\\'
50 case '\'':
51 return '\''
52 case '"':
53 return '"'
54 }
55 return '?'
56}
57
58// raw must begin with a valid quoted string. Only the first quoted string is
59// parsed and is unquoted in result. eaten is the number of bytes parsed, or -1
60// upon failure.
61func httpUnquote(raw []byte) (eaten int, result string) {
62 buf := make([]byte, len(raw))
63 if raw[0] != '"' {
64 return -1, ""
65 }
66 eaten = 1
67 j := 0 // # of bytes written in buf
68 for i := 1; i < len(raw); i++ {
69 switch b := raw[i]; b {
70 case '"':
71 eaten++
72 buf = buf[0:j]
73 return i + 1, string(buf)
74 case '\\':
75 if len(raw) < i+2 {
76 return -1, ""
77 }
78 buf[j] = httpUnquotePair(raw[i+1])
79 eaten += 2
80 j++
81 i++
82 default:
83 if isQdText(b) {
84 buf[j] = b
85 } else {
86 buf[j] = '?'
87 }
88 eaten++
89 j++
90 }
91 }
92 return -1, ""
93}
94
95// This is a best-effort parse, so errors are not returned; instead, not all of
96// the input string might be parsed. result is always non-nil.
97func httpSplitFieldValue(fv string) (eaten int, result []string) {
98 result = make([]string, 0, len(fv))
99 raw := []byte(fv)
100 i := 0
101 chunk := ""
102 for i < len(raw) {
103 b := raw[i]
104 switch {
105 case b == '"':
106 eaten, unq := httpUnquote(raw[i:])
107 if eaten < 0 {
108 return i, result
109 } else {
110 i += eaten
111 chunk += unq
112 }
113 case isSeparator(b):
114 if chunk != "" {
115 result = result[0 : len(result)+1]
116 result[len(result)-1] = chunk
117 chunk = ""
118 }
119 i++
120 case isToken(b):
121 chunk += string(b)
122 i++
123 case b == '\n' || b == '\r':
124 i++
125 default:
126 chunk += "?"
127 i++
128 }
129 }
130 if chunk != "" {
131 result = result[0 : len(result)+1]
132 result[len(result)-1] = chunk
133 chunk = ""
134 }
135 return i, result
136}
0137
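The `isToken` predicate above reduces to: a byte in the printable ASCII range (32..126) that is not an RFC 2616 separator. A standalone sketch mirroring that definition (a reimplementation for illustration, not the fork's own exported API):

```go
package main

import "fmt"

// isSeparator mirrors the fork's lex.go list of RFC 2616 separators.
func isSeparator(c byte) bool {
	switch c {
	case '(', ')', '<', '>', '@', ',', ';', ':', '\\', '"', '/',
		'[', ']', '?', '=', '{', '}', ' ', '\t':
		return true
	}
	return false
}

// isToken: CHAR (0..127), not CTL (0..31, 127), not a separator.
func isToken(c byte) bool {
	return c < 128 && c > 31 && c != 127 && !isSeparator(c)
}

func main() {
	fmt.Println(isToken('a'), isToken('='), isToken(' ')) // → true false false
}
```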
=== added file 'fork/http/request.go'
--- fork/http/request.go 1970-01-01 00:00:00 +0000
+++ fork/http/request.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,743 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5// HTTP Request reading and parsing.
6
7package http
8
9import (
10 "bufio"
11 "bytes"
12 "crypto/tls"
13 "encoding/base64"
14 "errors"
15 "fmt"
16 "io"
17 "io/ioutil"
18 "mime"
19 "mime/multipart"
20 "net/textproto"
21 "net/url"
22 "strings"
23)
24
25const (
26 maxValueLength = 4096
27 maxHeaderLines = 1024
28 chunkSize = 4 << 10 // 4 KB chunks
29 defaultMaxMemory = 32 << 20 // 32 MB
30)
31
32// ErrMissingFile is returned by FormFile when the provided file field name
33// is either not present in the request or not a file field.
34var ErrMissingFile = errors.New("http: no such file")
35
36// HTTP request parsing errors.
37type ProtocolError struct {
38 ErrorString string
39}
40
41func (err *ProtocolError) Error() string { return err.ErrorString }
42
43var (
44 ErrHeaderTooLong = &ProtocolError{"header too long"}
45 ErrShortBody = &ProtocolError{"entity body too short"}
46 ErrNotSupported = &ProtocolError{"feature not supported"}
47 ErrUnexpectedTrailer = &ProtocolError{"trailer header without chunked transfer encoding"}
48 ErrMissingContentLength = &ProtocolError{"missing ContentLength in HEAD response"}
49 ErrNotMultipart = &ProtocolError{"request Content-Type isn't multipart/form-data"}
50 ErrMissingBoundary = &ProtocolError{"no multipart boundary param Content-Type"}
51)
52
53type badStringError struct {
54 what string
55 str string
56}
57
58func (e *badStringError) Error() string { return fmt.Sprintf("%s %q", e.what, e.str) }
59
60// Headers that Request.Write handles itself and should be skipped.
61var reqWriteExcludeHeader = map[string]bool{
62 "Host": true, // not in Header map anyway
63 "User-Agent": true,
64 "Content-Length": true,
65 "Transfer-Encoding": true,
66 "Trailer": true,
67}
68
69// A Request represents an HTTP request received by a server
70// or to be sent by a client.
71type Request struct {
72 Method string // GET, POST, PUT, etc.
73 URL *url.URL
74
75 // The protocol version for incoming requests.
76 // Outgoing requests always use HTTP/1.1.
77 Proto string // "HTTP/1.0"
78 ProtoMajor int // 1
79 ProtoMinor int // 0
80
81 // A header maps request lines to their values.
82 // If the header says
83 //
84 // accept-encoding: gzip, deflate
85 // Accept-Language: en-us
86 // Connection: keep-alive
87 //
88 // then
89 //
90 // Header = map[string][]string{
91 // "Accept-Encoding": {"gzip, deflate"},
92 // "Accept-Language": {"en-us"},
93 // "Connection": {"keep-alive"},
94 // }
95 //
96 // HTTP defines that header names are case-insensitive.
97 // The request parser implements this by canonicalizing the
98 // name, making the first character and any characters
99 // following a hyphen uppercase and the rest lowercase.
100 Header Header
101
102 // The message body.
103 Body io.ReadCloser
104
105 // ContentLength records the length of the associated content.
106 // The value -1 indicates that the length is unknown.
107 // Values >= 0 indicate that the given number of bytes may
108 // be read from Body.
109 // For outgoing requests, a value of 0 means unknown if Body is not nil.
110 ContentLength int64
111
112 // TransferEncoding lists the transfer encodings from outermost to
113 // innermost. An empty list denotes the "identity" encoding.
114 // TransferEncoding can usually be ignored; chunked encoding is
115 // automatically added and removed as necessary when sending and
116 // receiving requests.
117 TransferEncoding []string
118
119 // Close indicates whether to close the connection after
120 // replying to this request.
121 Close bool
122
123 // The host on which the URL is sought.
124 // Per RFC 2616, this is either the value of the Host: header
125 // or the host name given in the URL itself.
126 Host string
127
128 // Form contains the parsed form data, including both the URL
129 // field's query parameters and the POST or PUT form data.
130 // This field is only available after ParseForm is called.
131 // The HTTP client ignores Form and uses Body instead.
132 Form url.Values
133
134 // MultipartForm is the parsed multipart form, including file uploads.
135 // This field is only available after ParseMultipartForm is called.
136 // The HTTP client ignores MultipartForm and uses Body instead.
137 MultipartForm *multipart.Form
138
139	// Trailer maps trailer keys to values. As with Header, if the
140	// request has multiple trailer lines with the same key, they will be
141	// concatenated, delimited by commas.
142 // For server requests, Trailer is only populated after Body has been
143 // closed or fully consumed.
144 // Trailer support is only partially complete.
145 Trailer Header
146
147 // RemoteAddr allows HTTP servers and other software to record
148 // the network address that sent the request, usually for
149 // logging. This field is not filled in by ReadRequest and
150 // has no defined format. The HTTP server in this package
151 // sets RemoteAddr to an "IP:port" address before invoking a
152 // handler.
153 // This field is ignored by the HTTP client.
154 RemoteAddr string
155
156 // RequestURI is the unmodified Request-URI of the
157 // Request-Line (RFC 2616, Section 5.1) as sent by the client
158 // to a server. Usually the URL field should be used instead.
159 // It is an error to set this field in an HTTP client request.
160 RequestURI string
161
162 // TLS allows HTTP servers and other software to record
163 // information about the TLS connection on which the request
164 // was received. This field is not filled in by ReadRequest.
165 // The HTTP server in this package sets the field for
166 // TLS-enabled connections before invoking a handler;
167 // otherwise it leaves the field nil.
168 // This field is ignored by the HTTP client.
169 TLS *tls.ConnectionState
170}
171
172// ProtoAtLeast returns whether the HTTP protocol used
173// in the request is at least major.minor.
174func (r *Request) ProtoAtLeast(major, minor int) bool {
175 return r.ProtoMajor > major ||
176 r.ProtoMajor == major && r.ProtoMinor >= minor
177}
178
179// UserAgent returns the client's User-Agent, if sent in the request.
180func (r *Request) UserAgent() string {
181 return r.Header.Get("User-Agent")
182}
183
184// Cookies parses and returns the HTTP cookies sent with the request.
185func (r *Request) Cookies() []*Cookie {
186 return readCookies(r.Header, "")
187}
188
189var ErrNoCookie = errors.New("http: named cookie not present")
190
191// Cookie returns the named cookie provided in the request or
192// ErrNoCookie if not found.
193func (r *Request) Cookie(name string) (*Cookie, error) {
194 for _, c := range readCookies(r.Header, name) {
195 return c, nil
196 }
197 return nil, ErrNoCookie
198}
199
200// AddCookie adds a cookie to the request. Per RFC 6265 section 5.4,
201// AddCookie does not attach more than one Cookie header field. That
202// means all cookies, if any, are written into the same line,
203// separated by semicolon.
204func (r *Request) AddCookie(c *Cookie) {
205 s := fmt.Sprintf("%s=%s", sanitizeName(c.Name), sanitizeValue(c.Value))
206 if c := r.Header.Get("Cookie"); c != "" {
207 r.Header.Set("Cookie", c+"; "+s)
208 } else {
209 r.Header.Set("Cookie", s)
210 }
211}
212
213// Referer returns the referring URL, if sent in the request.
214//
215// Referer is misspelled as in the request itself, a mistake from the
216// earliest days of HTTP. This value can also be fetched from the
217// Header map as Header["Referer"]; the benefit of making it available
218// as a method is that the compiler can diagnose programs that use the
219// alternate (correct English) spelling req.Referrer() but cannot
220// diagnose programs that use Header["Referrer"].
221func (r *Request) Referer() string {
222 return r.Header.Get("Referer")
223}
224
225// multipartByReader is a sentinel value.
226// Its presence in Request.MultipartForm indicates that parsing of the request
227// body has been handed off to a MultipartReader instead of ParseMultipartForm.
228var multipartByReader = &multipart.Form{
229 Value: make(map[string][]string),
230 File: make(map[string][]*multipart.FileHeader),
231}
232
233// MultipartReader returns a MIME multipart reader if this is a
234// multipart/form-data POST request, else returns nil and an error.
235// Use this function instead of ParseMultipartForm to
236// process the request body as a stream.
237func (r *Request) MultipartReader() (*multipart.Reader, error) {
238 if r.MultipartForm == multipartByReader {
239 return nil, errors.New("http: MultipartReader called twice")
240 }
241 if r.MultipartForm != nil {
242 return nil, errors.New("http: multipart handled by ParseMultipartForm")
243 }
244 r.MultipartForm = multipartByReader
245 return r.multipartReader()
246}
247
248func (r *Request) multipartReader() (*multipart.Reader, error) {
249 v := r.Header.Get("Content-Type")
250 if v == "" {
251 return nil, ErrNotMultipart
252 }
253 d, params, err := mime.ParseMediaType(v)
254 if err != nil || d != "multipart/form-data" {
255 return nil, ErrNotMultipart
256 }
257 boundary, ok := params["boundary"]
258 if !ok {
259 return nil, ErrMissingBoundary
260 }
261 return multipart.NewReader(r.Body, boundary), nil
262}
263
264// Return value if nonempty, def otherwise.
265func valueOrDefault(value, def string) string {
266 if value != "" {
267 return value
268 }
269 return def
270}
271
272const defaultUserAgent = "Go http package"
273
274// Write writes an HTTP/1.1 request -- header and body -- in wire format.
275// This method consults the following fields of the request:
276// Host
277// URL
278// Method (defaults to "GET")
279// Header
280// ContentLength
281// TransferEncoding
282// Body
283//
284// If Body is present, Content-Length is <= 0 and TransferEncoding
285// hasn't been set to "identity", Write adds "Transfer-Encoding:
286// chunked" to the header. Body is closed after it is sent.
287func (r *Request) Write(w io.Writer) error {
288 return r.write(w, false, nil)
289}
290
291// WriteProxy is like Write but writes the request in the form
292// expected by an HTTP proxy. In particular, WriteProxy writes the
293// initial Request-URI line of the request with an absolute URI, per
294// section 5.1.2 of RFC 2616, including the scheme and host.
295// In either case, WriteProxy also writes a Host header, using
296// either r.Host or r.URL.Host.
297func (r *Request) WriteProxy(w io.Writer) error {
298 return r.write(w, true, nil)
299}
300
301// extraHeaders may be nil
302func (req *Request) write(w io.Writer, usingProxy bool, extraHeaders Header) error {
303 host := req.Host
304 if host == "" {
305 if req.URL == nil {
306 return errors.New("http: Request.Write on Request with no Host or URL set")
307 }
308 host = req.URL.Host
309 }
310
311 ruri := req.URL.RequestURI()
312 if usingProxy && req.URL.Scheme != "" && req.URL.Opaque == "" {
313 ruri = req.URL.Scheme + "://" + host + ruri
314 } else if req.Method == "CONNECT" && req.URL.Path == "" {
315 // CONNECT requests normally give just the host and port, not a full URL.
316 ruri = host
317 }
318 // TODO(bradfitz): escape at least newlines in ruri?
319
320 bw := bufio.NewWriter(w)
321 fmt.Fprintf(bw, "%s %s HTTP/1.1\r\n", valueOrDefault(req.Method, "GET"), ruri)
322
323 // Header lines
324 fmt.Fprintf(bw, "Host: %s\r\n", host)
325
326 // Use the defaultUserAgent unless the Header contains one, which
327 // may be blank to not send the header.
328 userAgent := defaultUserAgent
329 if req.Header != nil {
330 if ua := req.Header["User-Agent"]; len(ua) > 0 {
331 userAgent = ua[0]
332 }
333 }
334 if userAgent != "" {
335 fmt.Fprintf(bw, "User-Agent: %s\r\n", userAgent)
336 }
337
338	// Process Body, ContentLength, Close, Trailer
339 tw, err := newTransferWriter(req)
340 if err != nil {
341 return err
342 }
343 err = tw.WriteHeader(bw)
344 if err != nil {
345 return err
346 }
347
348 // TODO: split long values? (If so, should share code with Conn.Write)
349 err = req.Header.WriteSubset(bw, reqWriteExcludeHeader)
350 if err != nil {
351 return err
352 }
353
354 if extraHeaders != nil {
355 err = extraHeaders.Write(bw)
356 if err != nil {
357 return err
358 }
359 }
360
361 io.WriteString(bw, "\r\n")
362
363 // Write body and trailer
364 err = tw.WriteBody(bw)
365 if err != nil {
366 return err
367 }
368
369 return bw.Flush()
370}
371
372// Convert decimal at s[i:len(s)] to integer,
373// returning value, string position where the digits stopped,
374// and whether there was a valid number (digits, not too big).
375func atoi(s string, i int) (n, i1 int, ok bool) {
376 const Big = 1000000
377 if i >= len(s) || s[i] < '0' || s[i] > '9' {
378 return 0, 0, false
379 }
380 n = 0
381 for ; i < len(s) && '0' <= s[i] && s[i] <= '9'; i++ {
382 n = n*10 + int(s[i]-'0')
383 if n > Big {
384 return 0, 0, false
385 }
386 }
387 return n, i, true
388}
389
390// ParseHTTPVersion parses an HTTP version string.
391// "HTTP/1.0" returns (1, 0, true).
392func ParseHTTPVersion(vers string) (major, minor int, ok bool) {
393 if len(vers) < 5 || vers[0:5] != "HTTP/" {
394 return 0, 0, false
395 }
396 major, i, ok := atoi(vers, 5)
397 if !ok || i >= len(vers) || vers[i] != '.' {
398 return 0, 0, false
399 }
400 minor, i, ok = atoi(vers, i+1)
401 if !ok || i != len(vers) {
402 return 0, 0, false
403 }
404 return major, minor, true
405}
406
407// NewRequest returns a new Request given a method, URL, and optional body.
408func NewRequest(method, urlStr string, body io.Reader) (*Request, error) {
409 u, err := url.Parse(urlStr)
410 if err != nil {
411 return nil, err
412 }
413 rc, ok := body.(io.ReadCloser)
414 if !ok && body != nil {
415 rc = ioutil.NopCloser(body)
416 }
417 req := &Request{
418 Method: method,
419 URL: u,
420 Proto: "HTTP/1.1",
421 ProtoMajor: 1,
422 ProtoMinor: 1,
423 Header: make(Header),
424 Body: rc,
425 Host: u.Host,
426 }
427 if body != nil {
428 switch v := body.(type) {
429 case *strings.Reader:
430 req.ContentLength = int64(v.Len())
431 case *bytes.Buffer:
432 req.ContentLength = int64(v.Len())
433 }
434 }
435
436 return req, nil
437}
438
439// SetBasicAuth sets the request's Authorization header to use HTTP
440// Basic Authentication with the provided username and password.
441//
442// With HTTP Basic Authentication the provided username and password
443// are not encrypted.
444func (r *Request) SetBasicAuth(username, password string) {
445 s := username + ":" + password
446 r.Header.Set("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte(s)))
447}
448
449// ReadRequest reads and parses a request from b.
450func ReadRequest(b *bufio.Reader) (req *Request, err error) {
451
452 tp := textproto.NewReader(b)
453 req = new(Request)
454
455 // First line: GET /index.html HTTP/1.0
456 var s string
457 if s, err = tp.ReadLine(); err != nil {
458 return nil, err
459 }
460 defer func() {
461 if err == io.EOF {
462 err = io.ErrUnexpectedEOF
463 }
464 }()
465
466 var f []string
467 if f = strings.SplitN(s, " ", 3); len(f) < 3 {
468 return nil, &badStringError{"malformed HTTP request", s}
469 }
470 req.Method, req.RequestURI, req.Proto = f[0], f[1], f[2]
471 rawurl := req.RequestURI
472 var ok bool
473 if req.ProtoMajor, req.ProtoMinor, ok = ParseHTTPVersion(req.Proto); !ok {
474 return nil, &badStringError{"malformed HTTP version", req.Proto}
475 }
476
477 // CONNECT requests are used two different ways, and neither uses a full URL:
478 // The standard use is to tunnel HTTPS through an HTTP proxy.
479 // It looks like "CONNECT www.google.com:443 HTTP/1.1", and the parameter is
480 // just the authority section of a URL. This information should go in req.URL.Host.
481 //
482 // The net/rpc package also uses CONNECT, but there the parameter is a path
483 // that starts with a slash. It can be parsed with the regular URL parser,
484 // and the path will end up in req.URL.Path, where it needs to be in order for
485 // RPC to work.
486 justAuthority := req.Method == "CONNECT" && !strings.HasPrefix(rawurl, "/")
487 if justAuthority {
488 rawurl = "http://" + rawurl
489 }
490
491 if req.URL, err = url.ParseRequestURI(rawurl); err != nil {
492 return nil, err
493 }
494
495 if justAuthority {
496 // Strip the bogus "http://" back off.
497 req.URL.Scheme = ""
498 }
499
500 // Subsequent lines: Key: value.
501 mimeHeader, err := tp.ReadMIMEHeader()
502 if err != nil {
503 return nil, err
504 }
505 req.Header = Header(mimeHeader)
506
507 // RFC2616: Must treat
508 // GET /index.html HTTP/1.1
509 // Host: www.google.com
510 // and
511 // GET http://www.google.com/index.html HTTP/1.1
512 // Host: doesntmatter
513 // the same. In the second case, any Host line is ignored.
514 req.Host = req.URL.Host
515 if req.Host == "" {
516 req.Host = req.Header.Get("Host")
517 }
518 req.Header.Del("Host")
519
520 fixPragmaCacheControl(req.Header)
521
522 // TODO: Parse specific header values:
523 // Accept
524 // Accept-Encoding
525 // Accept-Language
526 // Authorization
527 // Cache-Control
528 // Connection
529 // Date
530 // Expect
531 // From
532 // If-Match
533 // If-Modified-Since
534 // If-None-Match
535 // If-Range
536 // If-Unmodified-Since
537 // Max-Forwards
538 // Proxy-Authorization
539 // Referer [sic]
540 // TE (transfer-codings)
541 // Trailer
542 // Transfer-Encoding
543 // Upgrade
544 // User-Agent
545 // Via
546 // Warning
547
548 err = readTransfer(req, b)
549 if err != nil {
550 return nil, err
551 }
552
553 return req, nil
554}
555
556// MaxBytesReader is similar to io.LimitReader but is intended for
557// limiting the size of incoming request bodies. In contrast to
558// io.LimitReader, MaxBytesReader's result is a ReadCloser, returns a
559// non-EOF error for a Read beyond the limit, and Closes the
560// underlying reader when its Close method is called.
561//
562// MaxBytesReader prevents clients from accidentally or maliciously
563// sending a large request and wasting server resources.
564func MaxBytesReader(w ResponseWriter, r io.ReadCloser, n int64) io.ReadCloser {
565 return &maxBytesReader{w: w, r: r, n: n}
566}
567
568type maxBytesReader struct {
569 w ResponseWriter
570 r io.ReadCloser // underlying reader
571 n int64 // max bytes remaining
572 stopped bool
573}
574
575func (l *maxBytesReader) Read(p []byte) (n int, err error) {
576 if l.n <= 0 {
577 if !l.stopped {
578 l.stopped = true
579 if res, ok := l.w.(*response); ok {
580 res.requestTooLarge()
581 }
582 }
583 return 0, errors.New("http: request body too large")
584 }
585 if int64(len(p)) > l.n {
586 p = p[:l.n]
587 }
588 n, err = l.r.Read(p)
589 l.n -= int64(n)
590 return
591}
592
593func (l *maxBytesReader) Close() error {
594 return l.r.Close()
595}
596
597// ParseForm parses the raw query from the URL.
598//
599// For POST or PUT requests, it also parses the request body as a form.
600// If the request Body's size has not already been limited by MaxBytesReader,
601// the size is capped at 10MB.
602//
603// ParseMultipartForm calls ParseForm automatically.
604// It is idempotent.
605func (r *Request) ParseForm() (err error) {
606 if r.Form != nil {
607 return
608 }
609 if r.URL != nil {
610 r.Form, err = url.ParseQuery(r.URL.RawQuery)
611 }
612 if r.Method == "POST" || r.Method == "PUT" {
613 if r.Body == nil {
614 return errors.New("missing form body")
615 }
616 ct := r.Header.Get("Content-Type")
617 ct, _, err = mime.ParseMediaType(ct)
618 switch {
619 case ct == "application/x-www-form-urlencoded":
620 var reader io.Reader = r.Body
621 maxFormSize := int64(1<<63 - 1)
622 if _, ok := r.Body.(*maxBytesReader); !ok {
623 maxFormSize = int64(10 << 20) // 10 MB is a lot of text.
624 reader = io.LimitReader(r.Body, maxFormSize+1)
625 }
626 b, e := ioutil.ReadAll(reader)
627 if e != nil {
628 if err == nil {
629 err = e
630 }
631 break
632 }
633 if int64(len(b)) > maxFormSize {
634 return errors.New("http: POST too large")
635 }
636 var newValues url.Values
637 newValues, e = url.ParseQuery(string(b))
638 if err == nil {
639 err = e
640 }
641 if r.Form == nil {
642 r.Form = make(url.Values)
643 }
644 // Copy values into r.Form. TODO: make this smoother.
645 for k, vs := range newValues {
646 for _, value := range vs {
647 r.Form.Add(k, value)
648 }
649 }
650 case ct == "multipart/form-data":
651 // handled by ParseMultipartForm (which is calling us, or should be)
652 // TODO(bradfitz): there are too many possible
653 // orders to call too many functions here.
654 // Clean this up and write more tests.
655 // request_test.go contains the start of this,
656 // in TestRequestMultipartCallOrder.
657 }
658 }
659 return err
660}
661
662// ParseMultipartForm parses a request body as multipart/form-data.
663// The whole request body is parsed and up to a total of maxMemory bytes of
664// its file parts are stored in memory, with the remainder stored on
665// disk in temporary files.
666// ParseMultipartForm calls ParseForm if necessary.
667// After one call to ParseMultipartForm, subsequent calls have no effect.
668func (r *Request) ParseMultipartForm(maxMemory int64) error {
669 if r.MultipartForm == multipartByReader {
670 return errors.New("http: multipart handled by MultipartReader")
671 }
672 if r.Form == nil {
673 err := r.ParseForm()
674 if err != nil {
675 return err
676 }
677 }
678 if r.MultipartForm != nil {
679 return nil
680 }
681
682 mr, err := r.multipartReader()
683 if err == ErrNotMultipart {
684 return nil
685 } else if err != nil {
686 return err
687 }
688
689 f, err := mr.ReadForm(maxMemory)
690 if err != nil {
691 return err
692 }
693 for k, v := range f.Value {
694 r.Form[k] = append(r.Form[k], v...)
695 }
696 r.MultipartForm = f
697
698 return nil
699}
700
701// FormValue returns the first value for the named component of the query.
702// FormValue calls ParseMultipartForm and ParseForm if necessary.
703func (r *Request) FormValue(key string) string {
704 if r.Form == nil {
705 r.ParseMultipartForm(defaultMaxMemory)
706 }
707 if vs := r.Form[key]; len(vs) > 0 {
708 return vs[0]
709 }
710 return ""
711}
712
713// FormFile returns the first file for the provided form key.
714// FormFile calls ParseMultipartForm and ParseForm if necessary.
715func (r *Request) FormFile(key string) (multipart.File, *multipart.FileHeader, error) {
716 if r.MultipartForm == multipartByReader {
717 return nil, nil, errors.New("http: multipart handled by MultipartReader")
718 }
719 if r.MultipartForm == nil {
720 err := r.ParseMultipartForm(defaultMaxMemory)
721 if err != nil {
722 return nil, nil, err
723 }
724 }
725 if r.MultipartForm != nil && r.MultipartForm.File != nil {
726 if fhs := r.MultipartForm.File[key]; len(fhs) > 0 {
727 f, err := fhs[0].Open()
728 return f, fhs[0], err
729 }
730 }
731 return nil, nil, ErrMissingFile
732}
733
734func (r *Request) expectsContinue() bool {
735 return strings.ToLower(r.Header.Get("Expect")) == "100-continue"
736}
737
738func (r *Request) wantsHttp10KeepAlive() bool {
739 if r.ProtoMajor != 1 || r.ProtoMinor != 0 {
740 return false
741 }
742 return strings.Contains(strings.ToLower(r.Header.Get("Connection")), "keep-alive")
743}
0744
=== added file 'fork/http/response.go'
--- fork/http/response.go 1970-01-01 00:00:00 +0000
+++ fork/http/response.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,239 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5// HTTP Response reading and parsing.
6
7package http
8
9import (
10 "bufio"
11 "errors"
12 "io"
13 "net/textproto"
14 "net/url"
15 "strconv"
16 "strings"
17)
18
19var respExcludeHeader = map[string]bool{
20 "Content-Length": true,
21 "Transfer-Encoding": true,
22 "Trailer": true,
23}
24
25// Response represents the response from an HTTP request.
26//
27type Response struct {
28 Status string // e.g. "200 OK"
29 StatusCode int // e.g. 200
30 Proto string // e.g. "HTTP/1.0"
31 ProtoMajor int // e.g. 1
32 ProtoMinor int // e.g. 0
33
34 // Header maps header keys to values. If the response had multiple
35 // headers with the same key, they will be concatenated, with comma
36 // delimiters. (Section 4.2 of RFC 2616 requires that multiple headers
37 // be semantically equivalent to a comma-delimited sequence.) Values
38 // duplicated by other fields in this struct (e.g., ContentLength) are
39 // omitted from Header.
40 //
41 // Keys in the map are canonicalized (see CanonicalHeaderKey).
42 Header Header
43
44 // Body represents the response body.
45 //
46 // The http Client and Transport guarantee that Body is always
47 // non-nil, even on responses without a body or responses with
48	// a zero-length body.
49 Body io.ReadCloser
50
51 // ContentLength records the length of the associated content. The
52 // value -1 indicates that the length is unknown. Unless RequestMethod
53 // is "HEAD", values >= 0 indicate that the given number of bytes may
54 // be read from Body.
55 ContentLength int64
56
57	// Contains transfer encodings from outermost to innermost. A nil
58	// value means that the "identity" encoding is used.
59 TransferEncoding []string
60
61 // Close records whether the header directed that the connection be
62 // closed after reading Body. The value is advice for clients: neither
63 // ReadResponse nor Response.Write ever closes a connection.
64 Close bool
65
66 // Trailer maps trailer keys to values, in the same
67 // format as the header.
68 Trailer Header
69
70 // The Request that was sent to obtain this Response.
71 // Request's Body is nil (having already been consumed).
72 // This is only populated for Client requests.
73 Request *Request
74}
75
76// Cookies parses and returns the cookies set in the Set-Cookie headers.
77func (r *Response) Cookies() []*Cookie {
78 return readSetCookies(r.Header)
79}
80
81var ErrNoLocation = errors.New("http: no Location header in response")
82
83// Location returns the URL of the response's "Location" header,
84// if present. Relative redirects are resolved relative to
85// the Response's Request. ErrNoLocation is returned if no
86// Location header is present.
87func (r *Response) Location() (*url.URL, error) {
88 lv := r.Header.Get("Location")
89 if lv == "" {
90 return nil, ErrNoLocation
91 }
92 if r.Request != nil && r.Request.URL != nil {
93 return r.Request.URL.Parse(lv)
94 }
95 return url.Parse(lv)
96}
97
98// ReadResponse reads and returns an HTTP response from r. The
99// req parameter specifies the Request that corresponds to
100// this Response. Clients must call resp.Body.Close when finished
101// reading resp.Body. After that call, clients can inspect
102// resp.Trailer to find key/value pairs included in the response
103// trailer.
104func ReadResponse(r *bufio.Reader, req *Request) (resp *Response, err error) {
105
106 tp := textproto.NewReader(r)
107 resp = new(Response)
108
109 resp.Request = req
110 resp.Request.Method = strings.ToUpper(resp.Request.Method)
111
112 // Parse the first line of the response.
113 line, err := tp.ReadLine()
114 if err != nil {
115 if err == io.EOF {
116 err = io.ErrUnexpectedEOF
117 }
118 return nil, err
119 }
120 f := strings.SplitN(line, " ", 3)
121 if len(f) < 2 {
122 return nil, &badStringError{"malformed HTTP response", line}
123 }
124 reasonPhrase := ""
125 if len(f) > 2 {
126 reasonPhrase = f[2]
127 }
128 resp.Status = f[1] + " " + reasonPhrase
129 resp.StatusCode, err = strconv.Atoi(f[1])
130 if err != nil {
131 return nil, &badStringError{"malformed HTTP status code", f[1]}
132 }
133
134 resp.Proto = f[0]
135 var ok bool
136 if resp.ProtoMajor, resp.ProtoMinor, ok = ParseHTTPVersion(resp.Proto); !ok {
137 return nil, &badStringError{"malformed HTTP version", resp.Proto}
138 }
139
140 // Parse the response headers.
141 mimeHeader, err := tp.ReadMIMEHeader()
142 if err != nil {
143 return nil, err
144 }
145 resp.Header = Header(mimeHeader)
146
147 fixPragmaCacheControl(resp.Header)
148
149 err = readTransfer(resp, r)
150 if err != nil {
151 return nil, err
152 }
153
154 return resp, nil
155}
156
157// RFC2616: Should treat
158// Pragma: no-cache
159// like
160// Cache-Control: no-cache
161func fixPragmaCacheControl(header Header) {
162 if hp, ok := header["Pragma"]; ok && len(hp) > 0 && hp[0] == "no-cache" {
163 if _, presentcc := header["Cache-Control"]; !presentcc {
164 header["Cache-Control"] = []string{"no-cache"}
165 }
166 }
167}
168
169// ProtoAtLeast returns whether the HTTP protocol used
170// in the response is at least major.minor.
171func (r *Response) ProtoAtLeast(major, minor int) bool {
172 return r.ProtoMajor > major ||
173 r.ProtoMajor == major && r.ProtoMinor >= minor
174}
175
176// Write writes the response (header, body and trailer) in wire format. This method
177// consults the following fields of the response:
178//
179// StatusCode
180// ProtoMajor
181// ProtoMinor
182// RequestMethod
183// TransferEncoding
184// Trailer
185// Body
186// ContentLength
187	// Header (values for non-canonical keys will have unpredictable behavior)
188//
189func (r *Response) Write(w io.Writer) error {
190
191 // RequestMethod should be upper-case
192 if r.Request != nil {
193 r.Request.Method = strings.ToUpper(r.Request.Method)
194 }
195
196 // Status line
197 text := r.Status
198 if text == "" {
199 var ok bool
200 text, ok = statusText[r.StatusCode]
201 if !ok {
202 text = "status code " + strconv.Itoa(r.StatusCode)
203 }
204 }
205 protoMajor, protoMinor := strconv.Itoa(r.ProtoMajor), strconv.Itoa(r.ProtoMinor)
206 statusCode := strconv.Itoa(r.StatusCode) + " "
207 if strings.HasPrefix(text, statusCode) {
208 text = text[len(statusCode):]
209 }
210 io.WriteString(w, "HTTP/"+protoMajor+"."+protoMinor+" "+statusCode+text+"\r\n")
211
212	// Process Body, ContentLength, Close, Trailer
213 tw, err := newTransferWriter(r)
214 if err != nil {
215 return err
216 }
217 err = tw.WriteHeader(w)
218 if err != nil {
219 return err
220 }
221
222 // Rest of header
223 err = r.Header.WriteSubset(w, respExcludeHeader)
224 if err != nil {
225 return err
226 }
227
228 // End-of-header
229 io.WriteString(w, "\r\n")
230
231 // Write body and trailer
232 err = tw.WriteBody(w)
233 if err != nil {
234 return err
235 }
236
237 // Success
238 return nil
239}
0240
=== added file 'fork/http/server.go'
--- fork/http/server.go 1970-01-01 00:00:00 +0000
+++ fork/http/server.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,1234 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5// HTTP server. See RFC 2616.
6
7// TODO(rsc):
8// logging
9
10package http
11
12import (
13 "bufio"
14 "bytes"
15 "crypto/tls"
16 "errors"
17 "fmt"
18 "io"
19 "io/ioutil"
20 "log"
21 "net"
22 "net/url"
23 "path"
24 "runtime/debug"
25 "strconv"
26 "strings"
27 "sync"
28 "time"
29)
30
31// Errors introduced by the HTTP server.
32var (
33 ErrWriteAfterFlush = errors.New("Conn.Write called after Flush")
34 ErrBodyNotAllowed = errors.New("http: request method or response status code does not allow body")
35 ErrHijacked = errors.New("Conn has been hijacked")
36 ErrContentLength = errors.New("Conn.Write wrote more than the declared Content-Length")
37)
38
39// Objects implementing the Handler interface can be
40// registered to serve a particular path or subtree
41// in the HTTP server.
42//
43// ServeHTTP should write reply headers and data to the ResponseWriter
44// and then return. Returning signals that the request is finished
45// and that the HTTP server can move on to the next request on
46// the connection.
47type Handler interface {
48 ServeHTTP(ResponseWriter, *Request)
49}
50
51// A ResponseWriter interface is used by an HTTP handler to
52// construct an HTTP response.
53type ResponseWriter interface {
54 // Header returns the header map that will be sent by WriteHeader.
55 // Changing the header after a call to WriteHeader (or Write) has
56 // no effect.
57 Header() Header
58
59 // Write writes the data to the connection as part of an HTTP reply.
60 // If WriteHeader has not yet been called, Write calls WriteHeader(http.StatusOK)
61 // before writing the data. If the Header does not contain a
62 // Content-Type line, Write adds a Content-Type set to the result of passing
63 // the initial 512 bytes of written data to DetectContentType.
64 Write([]byte) (int, error)
65
66 // WriteHeader sends an HTTP response header with status code.
67 // If WriteHeader is not called explicitly, the first call to Write
68 // will trigger an implicit WriteHeader(http.StatusOK).
69 // Thus explicit calls to WriteHeader are mainly used to
70 // send error codes.
71 WriteHeader(int)
72}
73
74// The Flusher interface is implemented by ResponseWriters that allow
75// an HTTP handler to flush buffered data to the client.
76//
77// Note that even for ResponseWriters that support Flush,
78// if the client is connected through an HTTP proxy,
79// the buffered data may not reach the client until the response
80// completes.
81type Flusher interface {
82 // Flush sends any buffered data to the client.
83 Flush()
84}
85
86// The Hijacker interface is implemented by ResponseWriters that allow
87// an HTTP handler to take over the connection.
88type Hijacker interface {
89 // Hijack lets the caller take over the connection.
90 // After a call to Hijack(), the HTTP server library
91 // will not do anything else with the connection.
92 // It becomes the caller's responsibility to manage
93 // and close the connection.
94 Hijack() (net.Conn, *bufio.ReadWriter, error)
95}
96
97// A conn represents the server side of an HTTP connection.
98type conn struct {
99 remoteAddr string // network address of remote side
100 server *Server // the Server on which the connection arrived
101 rwc net.Conn // i/o connection
102 lr *io.LimitedReader // io.LimitReader(rwc)
103 buf *bufio.ReadWriter // buffered(lr,rwc), reading from bufio->limitReader->rwc
104 hijacked bool // connection has been hijacked by handler
105 tlsState *tls.ConnectionState // or nil when not using TLS
106 body []byte
107}
108
109// A response represents the server side of an HTTP response.
110type response struct {
111 conn *conn
112 req *Request // request for this response
113 chunking bool // using chunked transfer encoding for reply body
114 wroteHeader bool // reply header has been written
115 wroteContinue bool // 100 Continue response was written
116 header Header // reply header parameters
117 written int64 // number of bytes written in body
118 contentLength int64 // explicitly-declared Content-Length; or -1
119 status int // status code passed to WriteHeader
120 needSniff bool // need to sniff to find Content-Type
121
122 // close connection after this reply. set on request and
123 // updated after response from handler if there's a
124 // "Connection: keep-alive" response header and a
125 // Content-Length.
126 closeAfterReply bool
127
128 // requestBodyLimitHit is set by requestTooLarge when
129 // maxBytesReader hits its max size. It is checked in
130	// WriteHeader, to make sure we don't consume the
131 // remaining request body to try to advance to the next HTTP
132 // request. Instead, when this is set, we stop doing
133 // subsequent requests on this connection and stop reading
134 // input from it.
135 requestBodyLimitHit bool
136}
137
138// requestTooLarge is called by maxBytesReader when too much input has
139// been read from the client.
140func (w *response) requestTooLarge() {
141 w.closeAfterReply = true
142 w.requestBodyLimitHit = true
143 if !w.wroteHeader {
144 w.Header().Set("Connection", "close")
145 }
146}
147
148type writerOnly struct {
149 io.Writer
150}
151
152func (w *response) ReadFrom(src io.Reader) (n int64, err error) {
153 // Call WriteHeader before checking w.chunking if it hasn't
154 // been called yet, since WriteHeader is what sets w.chunking.
155 if !w.wroteHeader {
156 w.WriteHeader(StatusOK)
157 }
158 if !w.chunking && w.bodyAllowed() && !w.needSniff {
159 w.Flush()
160 if rf, ok := w.conn.rwc.(io.ReaderFrom); ok {
161 n, err = rf.ReadFrom(src)
162 w.written += n
163 return
164 }
165 }
166 // Fall back to default io.Copy implementation.
167 // Use wrapper to hide w.ReadFrom from io.Copy.
168 return io.Copy(writerOnly{w}, src)
169}
170
171 // noLimit is an effectively infinite upper bound for io.LimitedReader
172const noLimit int64 = (1 << 63) - 1
173
174// Create new connection from rwc.
175func (srv *Server) newConn(rwc net.Conn) (c *conn, err error) {
176 c = new(conn)
177 c.remoteAddr = rwc.RemoteAddr().String()
178 c.server = srv
179 c.rwc = rwc
180 c.body = make([]byte, sniffLen)
181 c.lr = io.LimitReader(rwc, noLimit).(*io.LimitedReader)
182 br := bufio.NewReader(c.lr)
183 bw := bufio.NewWriter(rwc)
184 c.buf = bufio.NewReadWriter(br, bw)
185 return c, nil
186}
187
188// DefaultMaxHeaderBytes is the maximum permitted size of the headers
189// in an HTTP request.
190// This can be overridden by setting Server.MaxHeaderBytes.
191const DefaultMaxHeaderBytes = 1 << 20 // 1 MB
192
193func (srv *Server) maxHeaderBytes() int {
194 if srv.MaxHeaderBytes > 0 {
195 return srv.MaxHeaderBytes
196 }
197 return DefaultMaxHeaderBytes
198}
199
200 // wrapper around io.ReadCloser which, on first read, sends an
201// HTTP/1.1 100 Continue header
202type expectContinueReader struct {
203 resp *response
204 readCloser io.ReadCloser
205 closed bool
206}
207
208func (ecr *expectContinueReader) Read(p []byte) (n int, err error) {
209 if ecr.closed {
210 return 0, errors.New("http: Read after Close on request Body")
211 }
212 if !ecr.resp.wroteContinue && !ecr.resp.conn.hijacked {
213 ecr.resp.wroteContinue = true
214 io.WriteString(ecr.resp.conn.buf, "HTTP/1.1 100 Continue\r\n\r\n")
215 ecr.resp.conn.buf.Flush()
216 }
217 return ecr.readCloser.Read(p)
218}
219
220func (ecr *expectContinueReader) Close() error {
221 ecr.closed = true
222 return ecr.readCloser.Close()
223}
224
225// TimeFormat is the time format to use with
226// time.Parse and time.Time.Format when parsing
227// or generating times in HTTP headers.
228// It is like time.RFC1123 but hard codes GMT as the time zone.
229const TimeFormat = "Mon, 02 Jan 2006 15:04:05 GMT"
230
231var errTooLarge = errors.New("http: request too large")
232
233// Read next request from connection.
234func (c *conn) readRequest() (w *response, err error) {
235 if c.hijacked {
236 return nil, ErrHijacked
237 }
238 c.lr.N = int64(c.server.maxHeaderBytes()) + 4096 /* bufio slop */
239 var req *Request
240 if req, err = ReadRequest(c.buf.Reader); err != nil {
241 if c.lr.N == 0 {
242 return nil, errTooLarge
243 }
244 return nil, err
245 }
246 c.lr.N = noLimit
247
248 req.RemoteAddr = c.remoteAddr
249 req.TLS = c.tlsState
250
251 w = new(response)
252 w.conn = c
253 w.req = req
254 w.header = make(Header)
255 w.contentLength = -1
256 c.body = c.body[:0]
257 return w, nil
258}
259
260func (w *response) Header() Header {
261 return w.header
262}
263
264// maxPostHandlerReadBytes is the max number of Request.Body bytes not
265// consumed by a handler that the server will read from the client
266// in order to keep a connection alive. If there are more bytes than
267 // this, the server, to be paranoid, instead sends a "Connection:
268// close" response.
269//
270// This number is approximately what a typical machine's TCP buffer
271// size is anyway. (if we have the bytes on the machine, we might as
272// well read them)
273const maxPostHandlerReadBytes = 256 << 10
274
275func (w *response) WriteHeader(code int) {
276 if w.conn.hijacked {
277 log.Print("http: response.WriteHeader on hijacked connection")
278 return
279 }
280 if w.wroteHeader {
281 log.Print("http: multiple response.WriteHeader calls")
282 return
283 }
284 w.wroteHeader = true
285 w.status = code
286
287 // Check for an explicit (and valid) Content-Length header.
288 var hasCL bool
289 var contentLength int64
290 if clenStr := w.header.Get("Content-Length"); clenStr != "" {
291 var err error
292 contentLength, err = strconv.ParseInt(clenStr, 10, 64)
293 if err == nil {
294 hasCL = true
295 } else {
296 log.Printf("http: invalid Content-Length of %q sent", clenStr)
297 w.header.Del("Content-Length")
298 }
299 }
300
301 if w.req.wantsHttp10KeepAlive() && (w.req.Method == "HEAD" || hasCL) {
302 _, connectionHeaderSet := w.header["Connection"]
303 if !connectionHeaderSet {
304 w.header.Set("Connection", "keep-alive")
305 }
306 } else if !w.req.ProtoAtLeast(1, 1) {
307 // Client did not ask to keep connection alive.
308 w.closeAfterReply = true
309 }
310
311 if w.header.Get("Connection") == "close" {
312 w.closeAfterReply = true
313 }
314
315 // Per RFC 2616, we should consume the request body before
316 // replying, if the handler hasn't already done so. But we
317 // don't want to do an unbounded amount of reading here for
318 // DoS reasons, so we only try up to a threshold.
319 if w.req.ContentLength != 0 && !w.closeAfterReply {
320 ecr, isExpecter := w.req.Body.(*expectContinueReader)
321 if !isExpecter || ecr.resp.wroteContinue {
322 n, _ := io.CopyN(ioutil.Discard, w.req.Body, maxPostHandlerReadBytes+1)
323 if n >= maxPostHandlerReadBytes {
324 w.requestTooLarge()
325 w.header.Set("Connection", "close")
326 } else {
327 w.req.Body.Close()
328 }
329 }
330 }
331
332 if code == StatusNotModified {
333 // Must not have body.
334 for _, header := range []string{"Content-Type", "Content-Length", "Transfer-Encoding"} {
335 if w.header.Get(header) != "" {
336 // TODO: return an error if WriteHeader gets a return parameter
337 // or set a flag on w to make future Writes() write an error page?
338 // for now just log and drop the header.
339 log.Printf("http: StatusNotModified response with header %q defined", header)
340 w.header.Del(header)
341 }
342 }
343 } else {
344 // If no content type, apply sniffing algorithm to body.
345 if w.header.Get("Content-Type") == "" && w.req.Method != "HEAD" {
346 w.needSniff = true
347 }
348 }
349
350 if _, ok := w.header["Date"]; !ok {
351 w.Header().Set("Date", time.Now().UTC().Format(TimeFormat))
352 }
353
354 te := w.header.Get("Transfer-Encoding")
355 hasTE := te != ""
356 if hasCL && hasTE && te != "identity" {
357 // TODO: return an error if WriteHeader gets a return parameter
358 // For now just ignore the Content-Length.
359 log.Printf("http: WriteHeader called with both Transfer-Encoding of %q and a Content-Length of %d",
360 te, contentLength)
361 w.header.Del("Content-Length")
362 hasCL = false
363 }
364
365 if w.req.Method == "HEAD" || code == StatusNotModified {
366 // do nothing
367 } else if hasCL {
368 w.contentLength = contentLength
369 w.header.Del("Transfer-Encoding")
370 } else if w.req.ProtoAtLeast(1, 1) {
371 // HTTP/1.1 or greater: use chunked transfer encoding
372 // to avoid closing the connection at EOF.
373 // TODO: this blows away any custom or stacked Transfer-Encoding they
374 // might have set. Deal with that as need arises once we have a valid
375 // use case.
376 w.chunking = true
377 w.header.Set("Transfer-Encoding", "chunked")
378 } else {
379 // HTTP version < 1.1: cannot do chunked transfer
380 // encoding and we don't know the Content-Length so
381 // signal EOF by closing connection.
382 w.closeAfterReply = true
383 w.header.Del("Transfer-Encoding") // in case already set
384 }
385
386 // Cannot use Content-Length with non-identity Transfer-Encoding.
387 if w.chunking {
388 w.header.Del("Content-Length")
389 }
390 if !w.req.ProtoAtLeast(1, 0) {
391 return
392 }
393 proto := "HTTP/1.0"
394 if w.req.ProtoAtLeast(1, 1) {
395 proto = "HTTP/1.1"
396 }
397 codestring := strconv.Itoa(code)
398 text, ok := statusText[code]
399 if !ok {
400 text = "status code " + codestring
401 }
402 io.WriteString(w.conn.buf, proto+" "+codestring+" "+text+"\r\n")
403 w.header.Write(w.conn.buf)
404
405 // If we need to sniff the body, leave the header open.
406 // Otherwise, end it here.
407 if !w.needSniff {
408 io.WriteString(w.conn.buf, "\r\n")
409 }
410}
411
412// sniff uses the first block of written data,
413// stored in w.conn.body, to decide the Content-Type
414// for the HTTP body.
415func (w *response) sniff() {
416 if !w.needSniff {
417 return
418 }
419 w.needSniff = false
420
421 data := w.conn.body
422 fmt.Fprintf(w.conn.buf, "Content-Type: %s\r\n\r\n", DetectContentType(data))
423
424 if len(data) == 0 {
425 return
426 }
427 if w.chunking {
428 fmt.Fprintf(w.conn.buf, "%x\r\n", len(data))
429 }
430 _, err := w.conn.buf.Write(data)
431 if w.chunking && err == nil {
432 io.WriteString(w.conn.buf, "\r\n")
433 }
434}
435
436// bodyAllowed returns true if a Write is allowed for this response type.
437// It's illegal to call this before the header has been flushed.
438func (w *response) bodyAllowed() bool {
439 if !w.wroteHeader {
440 panic("")
441 }
442 return w.status != StatusNotModified && w.req.Method != "HEAD"
443}
444
445func (w *response) Write(data []byte) (n int, err error) {
446 if w.conn.hijacked {
447 log.Print("http: response.Write on hijacked connection")
448 return 0, ErrHijacked
449 }
450 if !w.wroteHeader {
451 w.WriteHeader(StatusOK)
452 }
453 if len(data) == 0 {
454 return 0, nil
455 }
456 if !w.bodyAllowed() {
457 return 0, ErrBodyNotAllowed
458 }
459
460 w.written += int64(len(data)) // ignoring errors, for errorKludge
461 if w.contentLength != -1 && w.written > w.contentLength {
462 return 0, ErrContentLength
463 }
464
465 var m int
466 if w.needSniff {
467 // We need to sniff the beginning of the output to
468 // determine the content type. Accumulate the
469 // initial writes in w.conn.body.
470 // Cap m so that append won't allocate.
471 m = cap(w.conn.body) - len(w.conn.body)
472 if m > len(data) {
473 m = len(data)
474 }
475 w.conn.body = append(w.conn.body, data[:m]...)
476 data = data[m:]
477 if len(data) == 0 {
478 // Copied everything into the buffer.
479 // Wait for next write.
480 return m, nil
481 }
482
483 // Filled the buffer; more data remains.
484 // Sniff the content (flushes the buffer)
485 // and then proceed with the remainder
486 // of the data as a normal Write.
487 // Calling sniff clears needSniff.
488 w.sniff()
489 }
490
491 // TODO(rsc): if chunking happened after the buffering,
492 // then there would be fewer chunk headers.
493 // On the other hand, it would make hijacking more difficult.
494 if w.chunking {
495 fmt.Fprintf(w.conn.buf, "%x\r\n", len(data)) // TODO(rsc): use strconv not fmt
496 }
497 n, err = w.conn.buf.Write(data)
498 if err == nil && w.chunking {
499 if n != len(data) {
500 err = io.ErrShortWrite
501 }
502 if err == nil {
503 io.WriteString(w.conn.buf, "\r\n")
504 }
505 }
506
507 return m + n, err
508}
509
510func (w *response) finishRequest() {
511 // If this was an HTTP/1.0 request with keep-alive and we sent a Content-Length
512 // back, we can make this a keep-alive response ...
513 if w.req.wantsHttp10KeepAlive() {
514 sentLength := w.header.Get("Content-Length") != ""
515 if sentLength && w.header.Get("Connection") == "keep-alive" {
516 w.closeAfterReply = false
517 }
518 }
519 if !w.wroteHeader {
520 w.WriteHeader(StatusOK)
521 }
522 if w.needSniff {
523 w.sniff()
524 }
525 if w.chunking {
526 io.WriteString(w.conn.buf, "0\r\n")
527 // trailer key/value pairs, followed by blank line
528 io.WriteString(w.conn.buf, "\r\n")
529 }
530 w.conn.buf.Flush()
531 // Close the body, unless we're about to close the whole TCP connection
532 // anyway.
533 if !w.closeAfterReply {
534 w.req.Body.Close()
535 }
536 if w.req.MultipartForm != nil {
537 w.req.MultipartForm.RemoveAll()
538 }
539
540 if w.contentLength != -1 && w.contentLength != w.written {
541 // Did not write enough. Avoid getting out of sync.
542 w.closeAfterReply = true
543 }
544}
545
546func (w *response) Flush() {
547 if !w.wroteHeader {
548 w.WriteHeader(StatusOK)
549 }
550 w.sniff()
551 w.conn.buf.Flush()
552}
553
554// Close the connection.
555func (c *conn) close() {
556 if c.buf != nil {
557 c.buf.Flush()
558 c.buf = nil
559 }
560 if c.rwc != nil {
561 c.rwc.Close()
562 c.rwc = nil
563 }
564}
565
566// Serve a new connection.
567func (c *conn) serve() {
568 defer func() {
569 err := recover()
570 if err == nil {
571 return
572 }
573
574 var buf bytes.Buffer
575 fmt.Fprintf(&buf, "http: panic serving %v: %v\n", c.remoteAddr, err)
576 buf.Write(debug.Stack())
577 log.Print(buf.String())
578
579 if c.rwc != nil { // may be nil if connection hijacked
580 c.rwc.Close()
581 }
582 }()
583
584 if tlsConn, ok := c.rwc.(*tls.Conn); ok {
585 if err := tlsConn.Handshake(); err != nil {
586 c.close()
587 return
588 }
589 c.tlsState = new(tls.ConnectionState)
590 *c.tlsState = tlsConn.ConnectionState()
591 }
592
593 for {
594 w, err := c.readRequest()
595 if err != nil {
596 msg := "400 Bad Request"
597 if err == errTooLarge {
598 // Their HTTP client may or may not be
599 // able to read this if we're
600 // responding to them and hanging up
601 // while they're still writing their
602 // request. Undefined behavior.
603 msg = "413 Request Entity Too Large"
604 } else if err == io.EOF {
605 break // Don't reply
606 } else if neterr, ok := err.(net.Error); ok && neterr.Timeout() {
607 break // Don't reply
608 }
609 fmt.Fprintf(c.rwc, "HTTP/1.1 %s\r\n\r\n", msg)
610 break
611 }
612
613 // Expect 100 Continue support
614 req := w.req
615 if req.expectsContinue() {
616 if req.ProtoAtLeast(1, 1) {
617 // Wrap the Body reader with one that replies on the connection
618 req.Body = &expectContinueReader{readCloser: req.Body, resp: w}
619 }
620 if req.ContentLength == 0 {
621 w.Header().Set("Connection", "close")
622 w.WriteHeader(StatusBadRequest)
623 w.finishRequest()
624 break
625 }
626 req.Header.Del("Expect")
627 } else if req.Header.Get("Expect") != "" {
628 // TODO(bradfitz): let ServeHTTP handlers handle
629 // requests with non-standard expectation[s]? Seems
630 // theoretical at best, and doesn't fit into the
631 // current ServeHTTP model anyway. We'd need to
632 // make the ResponseWriter an optional
633 // "ExpectReplier" interface or something.
634 //
635 // For now we'll just obey RFC 2616 14.20 which says
636 // "If a server receives a request containing an
637 // Expect field that includes an expectation-
638 // extension that it does not support, it MUST
639 // respond with a 417 (Expectation Failed) status."
640 w.Header().Set("Connection", "close")
641 w.WriteHeader(StatusExpectationFailed)
642 w.finishRequest()
643 break
644 }
645
646 handler := c.server.Handler
647 if handler == nil {
648 handler = DefaultServeMux
649 }
650
651 // HTTP cannot have multiple simultaneous active requests.[*]
652 // Until the server replies to this request, it can't read another,
653 // so we might as well run the handler in this goroutine.
654 // [*] Not strictly true: HTTP pipelining. We could let them all process
655 // in parallel even if their responses need to be serialized.
656 handler.ServeHTTP(w, w.req)
657 if c.hijacked {
658 return
659 }
660 w.finishRequest()
661 if w.closeAfterReply {
662 break
663 }
664 }
665 c.close()
666}
667
668// Hijack implements the Hijacker.Hijack method. Our response is both a ResponseWriter
669// and a Hijacker.
670func (w *response) Hijack() (rwc net.Conn, buf *bufio.ReadWriter, err error) {
671 if w.conn.hijacked {
672 return nil, nil, ErrHijacked
673 }
674 w.conn.hijacked = true
675 rwc = w.conn.rwc
676 buf = w.conn.buf
677 w.conn.rwc = nil
678 w.conn.buf = nil
679 return
680}
681
682// The HandlerFunc type is an adapter to allow the use of
683// ordinary functions as HTTP handlers. If f is a function
684// with the appropriate signature, HandlerFunc(f) is a
685// Handler object that calls f.
686type HandlerFunc func(ResponseWriter, *Request)
687
688// ServeHTTP calls f(w, r).
689func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
690 f(w, r)
691}
692
693// Helper handlers
694
695// Error replies to the request with the specified error message and HTTP code.
696func Error(w ResponseWriter, error string, code int) {
697 w.Header().Set("Content-Type", "text/plain; charset=utf-8")
698 w.WriteHeader(code)
699 fmt.Fprintln(w, error)
700}
701
702// NotFound replies to the request with an HTTP 404 not found error.
703func NotFound(w ResponseWriter, r *Request) { Error(w, "404 page not found", StatusNotFound) }
704
705// NotFoundHandler returns a simple request handler
706// that replies to each request with a ``404 page not found'' reply.
707func NotFoundHandler() Handler { return HandlerFunc(NotFound) }
708
709// StripPrefix returns a handler that serves HTTP requests
710// by removing the given prefix from the request URL's Path
711// and invoking the handler h. StripPrefix handles a
712// request for a path that doesn't begin with prefix by
713// replying with an HTTP 404 not found error.
714func StripPrefix(prefix string, h Handler) Handler {
715 return HandlerFunc(func(w ResponseWriter, r *Request) {
716 if !strings.HasPrefix(r.URL.Path, prefix) {
717 NotFound(w, r)
718 return
719 }
720 r.URL.Path = r.URL.Path[len(prefix):]
721 h.ServeHTTP(w, r)
722 })
723}
724
725// Redirect replies to the request with a redirect to url,
726// which may be a path relative to the request path.
727func Redirect(w ResponseWriter, r *Request, urlStr string, code int) {
728 if u, err := url.Parse(urlStr); err == nil {
729 // If url was relative, make absolute by
730 // combining with request path.
731 // The browser would probably do this for us,
732 // but doing it ourselves is more reliable.
733
734 // NOTE(rsc): RFC 2616 says that the Location
735 // line must be an absolute URI, like
736 // "http://www.google.com/redirect/",
737 // not a path like "/redirect/".
738 // Unfortunately, we don't know what to
739 // put in the host name section to get the
740 // client to connect to us again, so we can't
741 // know the right absolute URI to send back.
742 // Because of this problem, no one pays attention
743 // to the RFC; they all send back just a new path.
744 // So do we.
745 oldpath := r.URL.Path
746 if oldpath == "" { // should not happen, but avoid a crash if it does
747 oldpath = "/"
748 }
749 if u.Scheme == "" {
750 // no leading http://server
751 if urlStr == "" || urlStr[0] != '/' {
752 // make relative path absolute
753 olddir, _ := path.Split(oldpath)
754 urlStr = olddir + urlStr
755 }
756
757 var query string
758 if i := strings.Index(urlStr, "?"); i != -1 {
759 urlStr, query = urlStr[:i], urlStr[i:]
760 }
761
762 // clean up but preserve trailing slash
763 trailing := urlStr[len(urlStr)-1] == '/'
764 urlStr = path.Clean(urlStr)
765 if trailing && urlStr[len(urlStr)-1] != '/' {
766 urlStr += "/"
767 }
768 urlStr += query
769 }
770 }
771
772 w.Header().Set("Location", urlStr)
773 w.WriteHeader(code)
774
775 // RFC2616 recommends that a short note "SHOULD" be included in the
776 // response because older user agents may not understand 301/307.
777 // Shouldn't send the body for POST or HEAD; that leaves GET.
778 if r.Method == "GET" {
779 note := "<a href=\"" + htmlEscape(urlStr) + "\">" + statusText[code] + "</a>.\n"
780 fmt.Fprintln(w, note)
781 }
782}
783
784var htmlReplacer = strings.NewReplacer(
785 "&", "&amp;",
786 "<", "&lt;",
787 ">", "&gt;",
788 // "&#34;" is shorter than "&quot;".
789 `"`, "&#34;",
790 // "&#39;" is shorter than "&apos;" and apos was not in HTML until HTML5.
791 "'", "&#39;",
792)
793
794func htmlEscape(s string) string {
795 return htmlReplacer.Replace(s)
796}
797
798// Redirect to a fixed URL
799type redirectHandler struct {
800 url string
801 code int
802}
803
804func (rh *redirectHandler) ServeHTTP(w ResponseWriter, r *Request) {
805 Redirect(w, r, rh.url, rh.code)
806}
807
808// RedirectHandler returns a request handler that redirects
809// each request it receives to the given url using the given
810// status code.
811func RedirectHandler(url string, code int) Handler {
812 return &redirectHandler{url, code}
813}
814
815// ServeMux is an HTTP request multiplexer.
816// It matches the URL of each incoming request against a list of registered
817// patterns and calls the handler for the pattern that
818// most closely matches the URL.
819//
820 // Patterns name fixed, rooted paths, like "/favicon.ico",
821// or rooted subtrees, like "/images/" (note the trailing slash).
822// Longer patterns take precedence over shorter ones, so that
823// if there are handlers registered for both "/images/"
824// and "/images/thumbnails/", the latter handler will be
825// called for paths beginning "/images/thumbnails/" and the
826 // former will receive requests for any other paths in the
827// "/images/" subtree.
828//
829// Patterns may optionally begin with a host name, restricting matches to
830// URLs on that host only. Host-specific patterns take precedence over
831// general patterns, so that a handler might register for the two patterns
832// "/codesearch" and "codesearch.google.com/" without also taking over
833// requests for "http://www.google.com/".
834//
835// ServeMux also takes care of sanitizing the URL request path,
836// redirecting any request containing . or .. elements to an
837// equivalent .- and ..-free URL.
838type ServeMux struct {
839 mu sync.RWMutex
840 m map[string]muxEntry
841}
842
843type muxEntry struct {
844 explicit bool
845 h Handler
846}
847
848// NewServeMux allocates and returns a new ServeMux.
849func NewServeMux() *ServeMux { return &ServeMux{m: make(map[string]muxEntry)} }
850
851// DefaultServeMux is the default ServeMux used by Serve.
852var DefaultServeMux = NewServeMux()
853
854// Does path match pattern?
855func pathMatch(pattern, path string) bool {
856 if len(pattern) == 0 {
857 // should not happen
858 return false
859 }
860 n := len(pattern)
861 if pattern[n-1] != '/' {
862 return pattern == path
863 }
864 return len(path) >= n && path[0:n] == pattern
865}
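The matching rule above can be exercised on its own; here is a sketch that copies the pathMatch logic so it runs outside the package: exact match for patterns without a trailing slash, prefix match for rooted subtrees ending in '/'.

```go
package main

import "fmt"

// pathMatch is copied from the mux code above for illustration.
func pathMatch(pattern, path string) bool {
	if len(pattern) == 0 {
		// should not happen
		return false
	}
	n := len(pattern)
	if pattern[n-1] != '/' {
		return pattern == path
	}
	return len(path) >= n && path[0:n] == pattern
}

func main() {
	fmt.Println(pathMatch("/favicon.ico", "/favicon.ico")) // true: exact match
	fmt.Println(pathMatch("/images/", "/images/logo.png")) // true: subtree match
	fmt.Println(pathMatch("/images/", "/imagesX"))         // false: not under subtree
}
```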
866
867// Return the canonical path for p, eliminating . and .. elements.
868func cleanPath(p string) string {
869 if p == "" {
870 return "/"
871 }
872 if p[0] != '/' {
873 p = "/" + p
874 }
875 np := path.Clean(p)
876 // path.Clean removes trailing slash except for root;
877 // put the trailing slash back if necessary.
878 if p[len(p)-1] == '/' && np != "/" {
879 np += "/"
880 }
881 return np
882}
883
884// Find a handler on a handler map given a path string
885// Most-specific (longest) pattern wins
886func (mux *ServeMux) match(path string) Handler {
887 var h Handler
888 var n = 0
889 for k, v := range mux.m {
890 if !pathMatch(k, path) {
891 continue
892 }
893 if h == nil || len(k) > n {
894 n = len(k)
895 h = v.h
896 }
897 }
898 return h
899}
900
901// handler returns the handler to use for the request r.
902func (mux *ServeMux) handler(r *Request) Handler {
903 mux.mu.RLock()
904 defer mux.mu.RUnlock()
905
906 // Host-specific pattern takes precedence over generic ones
907 h := mux.match(r.Host + r.URL.Path)
908 if h == nil {
909 h = mux.match(r.URL.Path)
910 }
911 if h == nil {
912 h = NotFoundHandler()
913 }
914 return h
915}
916
917// ServeHTTP dispatches the request to the handler whose
918// pattern most closely matches the request URL.
919func (mux *ServeMux) ServeHTTP(w ResponseWriter, r *Request) {
920 // Clean path to canonical form and redirect.
921 if p := cleanPath(r.URL.Path); p != r.URL.Path {
922 w.Header().Set("Location", p)
923 w.WriteHeader(StatusMovedPermanently)
924 return
925 }
926 mux.handler(r).ServeHTTP(w, r)
927}
928
929// Handle registers the handler for the given pattern.
930// If a handler already exists for pattern, Handle panics.
931func (mux *ServeMux) Handle(pattern string, handler Handler) {
932 mux.mu.Lock()
933 defer mux.mu.Unlock()
934
935 if pattern == "" {
936 panic("http: invalid pattern " + pattern)
937 }
938 if handler == nil {
939 panic("http: nil handler")
940 }
941 if mux.m[pattern].explicit {
942 panic("http: multiple registrations for " + pattern)
943 }
944
945 mux.m[pattern] = muxEntry{explicit: true, h: handler}
946
947 // Helpful behavior:
948 // If pattern is /tree/, insert an implicit permanent redirect for /tree.
949 // It can be overridden by an explicit registration.
950 n := len(pattern)
951 if n > 0 && pattern[n-1] == '/' && !mux.m[pattern[0:n-1]].explicit {
952 mux.m[pattern[0:n-1]] = muxEntry{h: RedirectHandler(pattern, StatusMovedPermanently)}
953 }
954}
955
956// HandleFunc registers the handler function for the given pattern.
957func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
958 mux.Handle(pattern, HandlerFunc(handler))
959}
960
961// Handle registers the handler for the given pattern
962// in the DefaultServeMux.
963// The documentation for ServeMux explains how patterns are matched.
964func Handle(pattern string, handler Handler) { DefaultServeMux.Handle(pattern, handler) }
965
966// HandleFunc registers the handler function for the given pattern
967// in the DefaultServeMux.
968// The documentation for ServeMux explains how patterns are matched.
969func HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
970 DefaultServeMux.HandleFunc(pattern, handler)
971}
972
973// Serve accepts incoming HTTP connections on the listener l,
974// creating a new service thread for each. The service threads
975// read requests and then call handler to reply to them.
976// Handler is typically nil, in which case the DefaultServeMux is used.
977func Serve(l net.Listener, handler Handler) error {
978 srv := &Server{Handler: handler}
979 return srv.Serve(l)
980}
981
982// A Server defines parameters for running an HTTP server.
983type Server struct {
984 Addr string // TCP address to listen on, ":http" if empty
985 Handler Handler // handler to invoke, http.DefaultServeMux if nil
986 ReadTimeout time.Duration // maximum duration before timing out read of the request
987 WriteTimeout time.Duration // maximum duration before timing out write of the response
988 MaxHeaderBytes int // maximum size of request headers, DefaultMaxHeaderBytes if 0
989 TLSConfig *tls.Config // optional TLS config, used by ListenAndServeTLS
990}
991
992// ListenAndServe listens on the TCP network address srv.Addr and then
993// calls Serve to handle requests on incoming connections. If
994// srv.Addr is blank, ":http" is used.
995func (srv *Server) ListenAndServe() error {
996 addr := srv.Addr
997 if addr == "" {
998 addr = ":http"
999 }
1000 l, e := net.Listen("tcp", addr)
1001 if e != nil {
1002 return e
1003 }
1004 return srv.Serve(l)
1005}
1006
1007// Serve accepts incoming connections on the Listener l, creating a
1008// new service thread for each. The service threads read requests and
1009// then call srv.Handler to reply to them.
1010func (srv *Server) Serve(l net.Listener) error {
1011 defer l.Close()
1012 var tempDelay time.Duration // how long to sleep on accept failure
1013 for {
1014 rw, e := l.Accept()
1015 if e != nil {
1016 if ne, ok := e.(net.Error); ok && ne.Temporary() {
1017 if tempDelay == 0 {
1018 tempDelay = 5 * time.Millisecond
1019 } else {
1020 tempDelay *= 2
1021 }
1022 if max := 1 * time.Second; tempDelay > max {
1023 tempDelay = max
1024 }
1025 log.Printf("http: Accept error: %v; retrying in %v", e, tempDelay)
1026 time.Sleep(tempDelay)
1027 continue
1028 }
1029 return e
1030 }
1031 tempDelay = 0
1032 if srv.ReadTimeout != 0 {
1033 rw.SetReadDeadline(time.Now().Add(srv.ReadTimeout))
1034 }
1035 if srv.WriteTimeout != 0 {
1036 rw.SetWriteDeadline(time.Now().Add(srv.WriteTimeout))
1037 }
1038 c, err := srv.newConn(rw)
1039 if err != nil {
1040 continue
1041 }
1042 go c.serve()
1043 }
1044 panic("not reached")
1045}
1046
1047// ListenAndServe listens on the TCP network address addr
1048// and then calls Serve with handler to handle requests
1049// on incoming connections. Handler is typically nil,
1050// in which case the DefaultServeMux is used.
1051//
1052// A trivial example server is:
1053//
1054// package main
1055//
1056// import (
1057// "io"
1058// "net/http"
1059// "log"
1060// )
1061//
1062// // hello world, the web server
1063// func HelloServer(w http.ResponseWriter, req *http.Request) {
1064// io.WriteString(w, "hello, world!\n")
1065// }
1066//
1067// func main() {
1068// http.HandleFunc("/hello", HelloServer)
1069// err := http.ListenAndServe(":12345", nil)
1070// if err != nil {
1071// log.Fatal("ListenAndServe: ", err)
1072// }
1073// }
1074func ListenAndServe(addr string, handler Handler) error {
1075 server := &Server{Addr: addr, Handler: handler}
1076 return server.ListenAndServe()
1077}
1078
1079// ListenAndServeTLS acts identically to ListenAndServe, except that it
1080// expects HTTPS connections. Additionally, files containing a certificate and
1081// matching private key for the server must be provided. If the certificate
1082// is signed by a certificate authority, the certFile should be the concatenation
1083// of the server's certificate followed by the CA's certificate.
1084//
1085// A trivial example server is:
1086//
1087// import (
1088// "log"
1089// "net/http"
1090// )
1091//
1092// func handler(w http.ResponseWriter, req *http.Request) {
1093// w.Header().Set("Content-Type", "text/plain")
1094// w.Write([]byte("This is an example server.\n"))
1095// }
1096//
1097// func main() {
1098// http.HandleFunc("/", handler)
1099// log.Printf("About to listen on 10443. Go to https://127.0.0.1:10443/")
1100// err := http.ListenAndServeTLS(":10443", "cert.pem", "key.pem", nil)
1101// if err != nil {
1102// log.Fatal(err)
1103// }
1104// }
1105//
1106// One can use generate_cert.go in crypto/tls to generate cert.pem and key.pem.
1107func ListenAndServeTLS(addr string, certFile string, keyFile string, handler Handler) error {
1108 server := &Server{Addr: addr, Handler: handler}
1109 return server.ListenAndServeTLS(certFile, keyFile)
1110}
1111
1112// ListenAndServeTLS listens on the TCP network address srv.Addr and
1113// then calls Serve to handle requests on incoming TLS connections.
1114//
1115// Filenames containing a certificate and matching private key for
1116// the server must be provided. If the certificate is signed by a
1117// certificate authority, the certFile should be the concatenation
1118// of the server's certificate followed by the CA's certificate.
1119//
1120// If srv.Addr is blank, ":https" is used.
1121func (srv *Server) ListenAndServeTLS(certFile, keyFile string) error {
1122 addr := srv.Addr
1123 if addr == "" {
1124 addr = ":https"
1125 }
1126 config := &tls.Config{}
1127 if srv.TLSConfig != nil {
1128 *config = *srv.TLSConfig
1129 }
1130 if config.NextProtos == nil {
1131 config.NextProtos = []string{"http/1.1"}
1132 }
1133
1134 var err error
1135 config.Certificates = make([]tls.Certificate, 1)
1136 config.Certificates[0], err = tls.LoadX509KeyPair(certFile, keyFile)
1137 if err != nil {
1138 return err
1139 }
1140
1141 conn, err := net.Listen("tcp", addr)
1142 if err != nil {
1143 return err
1144 }
1145
1146 tlsListener := tls.NewListener(conn, config)
1147 return srv.Serve(tlsListener)
1148}
1149
1150// TimeoutHandler returns a Handler that runs h with the given time limit.
1151//
1152// The new Handler calls h.ServeHTTP to handle each request, but if a
1153// call runs for more than ns nanoseconds, the handler responds with
1154// a 503 Service Unavailable error and the given message in its body.
1155// (If msg is empty, a suitable default message will be sent.)
1156// After such a timeout, writes by h to its ResponseWriter will return
1157// ErrHandlerTimeout.
1158func TimeoutHandler(h Handler, dt time.Duration, msg string) Handler {
1159 f := func() <-chan time.Time {
1160 return time.After(dt)
1161 }
1162 return &timeoutHandler{h, f, msg}
1163}
1164
1165// ErrHandlerTimeout is returned on ResponseWriter Write calls
1166// in handlers which have timed out.
1167var ErrHandlerTimeout = errors.New("http: Handler timeout")
1168
1169type timeoutHandler struct {
1170 handler Handler
1171 timeout func() <-chan time.Time // returns channel producing a timeout
1172 body string
1173}
1174
1175func (h *timeoutHandler) errorBody() string {
1176 if h.body != "" {
1177 return h.body
1178 }
1179 return "<html><head><title>Timeout</title></head><body><h1>Timeout</h1></body></html>"
1180}
1181
1182func (h *timeoutHandler) ServeHTTP(w ResponseWriter, r *Request) {
1183 done := make(chan bool)
1184 tw := &timeoutWriter{w: w}
1185 go func() {
1186 h.handler.ServeHTTP(tw, r)
1187 done <- true
1188 }()
1189 select {
1190 case <-done:
1191 return
1192 case <-h.timeout():
1193 tw.mu.Lock()
1194 defer tw.mu.Unlock()
1195 if !tw.wroteHeader {
1196 tw.w.WriteHeader(StatusServiceUnavailable)
1197 tw.w.Write([]byte(h.errorBody()))
1198 }
1199 tw.timedOut = true
1200 }
1201}
1202
1203type timeoutWriter struct {
1204 w ResponseWriter
1205
1206 mu sync.Mutex
1207 timedOut bool
1208 wroteHeader bool
1209}
1210
1211func (tw *timeoutWriter) Header() Header {
1212 return tw.w.Header()
1213}
1214
1215func (tw *timeoutWriter) Write(p []byte) (int, error) {
1216 tw.mu.Lock()
1217 timedOut := tw.timedOut
1218 tw.mu.Unlock()
1219 if timedOut {
1220 return 0, ErrHandlerTimeout
1221 }
1222 return tw.w.Write(p)
1223}
1224
1225func (tw *timeoutWriter) WriteHeader(code int) {
1226 tw.mu.Lock()
1227 if tw.timedOut || tw.wroteHeader {
1228 tw.mu.Unlock()
1229 return
1230 }
1231 tw.wroteHeader = true
1232 tw.mu.Unlock()
1233 tw.w.WriteHeader(code)
1234}
01235
=== added file 'fork/http/sniff.go'
--- fork/http/sniff.go 1970-01-01 00:00:00 +0000
+++ fork/http/sniff.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,214 @@
1// Copyright 2011 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7import (
8 "bytes"
9 "encoding/binary"
10)
11
12// The algorithm uses at most sniffLen bytes to make its decision.
13const sniffLen = 512
14
15// DetectContentType implements the algorithm described
16// at http://mimesniff.spec.whatwg.org/ to determine the
17// Content-Type of the given data. It considers at most the
18// first 512 bytes of data. DetectContentType always returns
19// a valid MIME type: if it cannot determine a more specific one, it
20// returns "application/octet-stream".
21func DetectContentType(data []byte) string {
22 if len(data) > sniffLen {
23 data = data[:sniffLen]
24 }
25
26 // Index of the first non-whitespace byte in data.
27 firstNonWS := 0
28 for ; firstNonWS < len(data) && isWS(data[firstNonWS]); firstNonWS++ {
29 }
30
31 for _, sig := range sniffSignatures {
32 if ct := sig.match(data, firstNonWS); ct != "" {
33 return ct
34 }
35 }
36
37 return "application/octet-stream" // fallback
38}
39
40func isWS(b byte) bool {
41 return bytes.IndexByte([]byte("\t\n\x0C\r "), b) != -1
42}
43
44type sniffSig interface {
45 // match returns the MIME type of the data, or "" if unknown.
46 match(data []byte, firstNonWS int) string
47}
48
49// Data matching the table in section 6.
50var sniffSignatures = []sniffSig{
51 htmlSig("<!DOCTYPE HTML"),
52 htmlSig("<HTML"),
53 htmlSig("<HEAD"),
54 htmlSig("<SCRIPT"),
55 htmlSig("<IFRAME"),
56 htmlSig("<H1"),
57 htmlSig("<DIV"),
58 htmlSig("<FONT"),
59 htmlSig("<TABLE"),
60 htmlSig("<A"),
61 htmlSig("<STYLE"),
62 htmlSig("<TITLE"),
63 htmlSig("<B"),
64 htmlSig("<BODY"),
65 htmlSig("<BR"),
66 htmlSig("<P"),
67 htmlSig("<!--"),
68
69 &maskedSig{mask: []byte("\xFF\xFF\xFF\xFF\xFF"), pat: []byte("<?xml"), skipWS: true, ct: "text/xml; charset=utf-8"},
70
71 &exactSig{[]byte("%PDF-"), "application/pdf"},
72 &exactSig{[]byte("%!PS-Adobe-"), "application/postscript"},
73
74 // UTF BOMs.
75 &maskedSig{mask: []byte("\xFF\xFF\x00\x00"), pat: []byte("\xFE\xFF\x00\x00"), ct: "text/plain; charset=utf-16be"},
76 &maskedSig{mask: []byte("\xFF\xFF\x00\x00"), pat: []byte("\xFF\xFE\x00\x00"), ct: "text/plain; charset=utf-16le"},
77 &maskedSig{mask: []byte("\xFF\xFF\xFF\x00"), pat: []byte("\xEF\xBB\xBF\x00"), ct: "text/plain; charset=utf-8"},
78
79 &exactSig{[]byte("GIF87a"), "image/gif"},
80 &exactSig{[]byte("GIF89a"), "image/gif"},
81 &exactSig{[]byte("\x89\x50\x4E\x47\x0D\x0A\x1A\x0A"), "image/png"},
82 &exactSig{[]byte("\xFF\xD8\xFF"), "image/jpeg"},
83 &exactSig{[]byte("BM"), "image/bmp"},
84 &maskedSig{
85 mask: []byte("\xFF\xFF\xFF\xFF\x00\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF"),
86 pat: []byte("RIFF\x00\x00\x00\x00WEBPVP"),
87 ct: "image/webp",
88 },
89 &exactSig{[]byte("\x00\x00\x01\x00"), "image/vnd.microsoft.icon"},
90 &exactSig{[]byte("\x4F\x67\x67\x53\x00"), "application/ogg"},
91 &maskedSig{
92 mask: []byte("\xFF\xFF\xFF\xFF\x00\x00\x00\x00\xFF\xFF\xFF\xFF"),
93 pat: []byte("RIFF\x00\x00\x00\x00WAVE"),
94 ct: "audio/wave",
95 },
96 &exactSig{[]byte("\x1A\x45\xDF\xA3"), "video/webm"},
97 &exactSig{[]byte("\x52\x61\x72\x20\x1A\x07\x00"), "application/x-rar-compressed"},
98 &exactSig{[]byte("\x50\x4B\x03\x04"), "application/zip"},
99 &exactSig{[]byte("\x1F\x8B\x08"), "application/x-gzip"},
100
101 // TODO(dsymonds): Re-enable this when the spec is sorted w.r.t. MP4.
102 //mp4Sig(0),
103
104 textSig(0), // should be last
105}
106
107type exactSig struct {
108 sig []byte
109 ct string
110}
111
112func (e *exactSig) match(data []byte, firstNonWS int) string {
113 if bytes.HasPrefix(data, e.sig) {
114 return e.ct
115 }
116 return ""
117}
118
119type maskedSig struct {
120 mask, pat []byte
121 skipWS bool
122 ct string
123}
124
125func (m *maskedSig) match(data []byte, firstNonWS int) string {
126 if m.skipWS {
127 data = data[firstNonWS:]
128 }
129 if len(data) < len(m.mask) {
130 return ""
131 }
132 for i, mask := range m.mask {
133 db := data[i] & mask
134 if db != m.pat[i] {
135 return ""
136 }
137 }
138 return m.ct
139}
140
141type htmlSig []byte
142
143func (h htmlSig) match(data []byte, firstNonWS int) string {
144 data = data[firstNonWS:]
145 if len(data) < len(h)+1 {
146 return ""
147 }
148 for i, b := range h {
149 db := data[i]
150 if 'A' <= b && b <= 'Z' {
151 db &= 0xDF
152 }
153 if b != db {
154 return ""
155 }
156 }
157 // Next byte must be space or right angle bracket.
158 if db := data[len(h)]; db != ' ' && db != '>' {
159 return ""
160 }
161 return "text/html; charset=utf-8"
162}
163
164type mp4Sig int
165
166func (mp4Sig) match(data []byte, firstNonWS int) string {
167 // c.f. section 6.1.
168 if len(data) < 8 {
169 return ""
170 }
171 boxSize := int(binary.BigEndian.Uint32(data[:4]))
172 if boxSize%4 != 0 || len(data) < boxSize {
173 return ""
174 }
175 if !bytes.Equal(data[4:8], []byte("ftyp")) {
176 return ""
177 }
178 for st := 8; st < boxSize; st += 4 {
179 if st == 12 {
180 // minor version number
181 continue
182 }
183 seg := string(data[st : st+3])
184 switch seg {
185 case "mp4", "iso", "M4V", "M4P", "M4B":
186 return "video/mp4"
187 /* The remainder are not in the spec.
188 case "M4A":
189 return "audio/mp4"
190 case "3gp":
191 return "video/3gpp"
192 case "jp2":
193 return "image/jp2" // JPEG 2000
194 */
195 }
196 }
197 return ""
198}
199
200type textSig int
201
202func (textSig) match(data []byte, firstNonWS int) string {
203 // c.f. section 5, step 4.
204 for _, b := range data[firstNonWS:] {
205 switch {
206 case 0x00 <= b && b <= 0x08,
207 b == 0x0B,
208 0x0E <= b && b <= 0x1A,
209 0x1C <= b && b <= 0x1F:
210 return ""
211 }
212 }
213 return "text/plain; charset=utf-8"
214}
0215
=== added file 'fork/http/status.go'
--- fork/http/status.go 1970-01-01 00:00:00 +0000
+++ fork/http/status.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,108 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7// HTTP status codes, defined in RFC 2616.
8const (
9 StatusContinue = 100
10 StatusSwitchingProtocols = 101
11
12 StatusOK = 200
13 StatusCreated = 201
14 StatusAccepted = 202
15 StatusNonAuthoritativeInfo = 203
16 StatusNoContent = 204
17 StatusResetContent = 205
18 StatusPartialContent = 206
19
20 StatusMultipleChoices = 300
21 StatusMovedPermanently = 301
22 StatusFound = 302
23 StatusSeeOther = 303
24 StatusNotModified = 304
25 StatusUseProxy = 305
26 StatusTemporaryRedirect = 307
27
28 StatusBadRequest = 400
29 StatusUnauthorized = 401
30 StatusPaymentRequired = 402
31 StatusForbidden = 403
32 StatusNotFound = 404
33 StatusMethodNotAllowed = 405
34 StatusNotAcceptable = 406
35 StatusProxyAuthRequired = 407
36 StatusRequestTimeout = 408
37 StatusConflict = 409
38 StatusGone = 410
39 StatusLengthRequired = 411
40 StatusPreconditionFailed = 412
41 StatusRequestEntityTooLarge = 413
42 StatusRequestURITooLong = 414
43 StatusUnsupportedMediaType = 415
44 StatusRequestedRangeNotSatisfiable = 416
45 StatusExpectationFailed = 417
46 StatusTeapot = 418
47
48 StatusInternalServerError = 500
49 StatusNotImplemented = 501
50 StatusBadGateway = 502
51 StatusServiceUnavailable = 503
52 StatusGatewayTimeout = 504
53 StatusHTTPVersionNotSupported = 505
54)
55
56var statusText = map[int]string{
57 StatusContinue: "Continue",
58 StatusSwitchingProtocols: "Switching Protocols",
59
60 StatusOK: "OK",
61 StatusCreated: "Created",
62 StatusAccepted: "Accepted",
63 StatusNonAuthoritativeInfo: "Non-Authoritative Information",
64 StatusNoContent: "No Content",
65 StatusResetContent: "Reset Content",
66 StatusPartialContent: "Partial Content",
67
68 StatusMultipleChoices: "Multiple Choices",
69 StatusMovedPermanently: "Moved Permanently",
70 StatusFound: "Found",
71 StatusSeeOther: "See Other",
72 StatusNotModified: "Not Modified",
73 StatusUseProxy: "Use Proxy",
74 StatusTemporaryRedirect: "Temporary Redirect",
75
76 StatusBadRequest: "Bad Request",
77 StatusUnauthorized: "Unauthorized",
78 StatusPaymentRequired: "Payment Required",
79 StatusForbidden: "Forbidden",
80 StatusNotFound: "Not Found",
81 StatusMethodNotAllowed: "Method Not Allowed",
82 StatusNotAcceptable: "Not Acceptable",
83 StatusProxyAuthRequired: "Proxy Authentication Required",
84 StatusRequestTimeout: "Request Timeout",
85 StatusConflict: "Conflict",
86 StatusGone: "Gone",
87 StatusLengthRequired: "Length Required",
88 StatusPreconditionFailed: "Precondition Failed",
89 StatusRequestEntityTooLarge: "Request Entity Too Large",
90 StatusRequestURITooLong: "Request URI Too Long",
91 StatusUnsupportedMediaType: "Unsupported Media Type",
92 StatusRequestedRangeNotSatisfiable: "Requested Range Not Satisfiable",
93 StatusExpectationFailed: "Expectation Failed",
94 StatusTeapot: "I'm a teapot",
95
96 StatusInternalServerError: "Internal Server Error",
97 StatusNotImplemented: "Not Implemented",
98 StatusBadGateway: "Bad Gateway",
99 StatusServiceUnavailable: "Service Unavailable",
100 StatusGatewayTimeout: "Gateway Timeout",
101 StatusHTTPVersionNotSupported: "HTTP Version Not Supported",
102}
103
104// StatusText returns a text for the HTTP status code. It returns the empty
105// string if the code is unknown.
106func StatusText(code int) string {
107 return statusText[code]
108}
0109
=== added file 'fork/http/transfer.go'
--- fork/http/transfer.go 1970-01-01 00:00:00 +0000
+++ fork/http/transfer.go 2013-07-22 14:27:42 +0000
@@ -0,0 +1,632 @@
1// Copyright 2009 The Go Authors. All rights reserved.
2// Use of this source code is governed by a BSD-style
3// license that can be found in the LICENSE file.
4
5package http
6
7import (
8 "bufio"
9 "bytes"
10 "errors"
11 "fmt"
12 "io"
13 "io/ioutil"
14 "net/textproto"
15 "strconv"
16 "strings"
17)
18
19// transferWriter inspects the fields of a user-supplied Request or Response,
20// sanitizes them without changing the user object and provides methods for
21// writing the respective header, body and trailer in wire format.
22type transferWriter struct {
23 Method string
24 Body io.Reader
25 BodyCloser io.Closer
26 ResponseToHEAD bool
27 ContentLength int64 // -1 means unknown, 0 means exactly none
28 Close bool
29 TransferEncoding []string
30 Trailer Header
31}
32
33func newTransferWriter(r interface{}) (t *transferWriter, err error) {
34 t = &transferWriter{}
35
36 // Extract relevant fields
37 atLeastHTTP11 := false
38 switch rr := r.(type) {
39 case *Request:
40 if rr.ContentLength != 0 && rr.Body == nil {
41 return nil, fmt.Errorf("http: Request.ContentLength=%d with nil Body", rr.ContentLength)
42 }
43 t.Method = rr.Method
44 t.Body = rr.Body
45 t.BodyCloser = rr.Body
46 t.ContentLength = rr.ContentLength
47 t.Close = rr.Close
48 t.TransferEncoding = rr.TransferEncoding
49 t.Trailer = rr.Trailer
50 atLeastHTTP11 = rr.ProtoAtLeast(1, 1)
51 if t.Body != nil && len(t.TransferEncoding) == 0 && atLeastHTTP11 {
52 if t.ContentLength == 0 {
53 // Test to see if it's actually zero or just unset.
54 var buf [1]byte
55 n, _ := io.ReadFull(t.Body, buf[:])
56 if n == 1 {
57 // Oh, guess there is data in this Body Reader after all.
58 // The ContentLength field just wasn't set.
 59				// Stitch the Body back together again, re-attaching our
60 // consumed byte.
61 t.ContentLength = -1
62 t.Body = io.MultiReader(bytes.NewBuffer(buf[:]), t.Body)
63 } else {
64 // Body is actually empty.
65 t.Body = nil
66 t.BodyCloser = nil
67 }
68 }
69 if t.ContentLength < 0 {
70 t.TransferEncoding = []string{"chunked"}
71 }
72 }
73 case *Response:
74 if rr.Request != nil {
75 t.Method = rr.Request.Method
76 }
77 t.Body = rr.Body
78 t.BodyCloser = rr.Body
79 t.ContentLength = rr.ContentLength
80 t.Close = rr.Close
81 t.TransferEncoding = rr.TransferEncoding
82 t.Trailer = rr.Trailer
83 atLeastHTTP11 = rr.ProtoAtLeast(1, 1)
84 t.ResponseToHEAD = noBodyExpected(t.Method)
85 }
86
87 // Sanitize Body,ContentLength,TransferEncoding
88 if t.ResponseToHEAD {
89 t.Body = nil
90 t.TransferEncoding = nil
91 // ContentLength is expected to hold Content-Length
92 if t.ContentLength < 0 {
93 return nil, ErrMissingContentLength
94 }
95 } else {
96 if !atLeastHTTP11 || t.Body == nil {
97 t.TransferEncoding = nil
98 }
99 if chunked(t.TransferEncoding) {
100 t.ContentLength = -1
101 } else if t.Body == nil { // no chunking, no body
102 t.ContentLength = 0
103 }
104 }
105
106 // Sanitize Trailer
107 if !chunked(t.TransferEncoding) {
108 t.Trailer = nil
109 }
110
111 return t, nil
112}
113
114func noBodyExpected(requestMethod string) bool {
115 return requestMethod == "HEAD"
116}
117
118func (t *transferWriter) shouldSendContentLength() bool {
119 if chunked(t.TransferEncoding) {
120 return false
121 }
122 if t.ContentLength > 0 {
123 return true
124 }
125 if t.ResponseToHEAD {
126 return true
127 }
128 // Many servers expect a Content-Length for these methods
129 if t.Method == "POST" || t.Method == "PUT" {
130 return true
131 }
132 if t.ContentLength == 0 && isIdentity(t.TransferEncoding) {
133 return true
134 }
135
136 return false
137}
138
139func (t *transferWriter) WriteHeader(w io.Writer) (err error) {
140 if t.Close {
141 _, err = io.WriteString(w, "Connection: close\r\n")
142 if err != nil {
143 return
144 }
145 }
146
147 // Write Content-Length and/or Transfer-Encoding whose values are a
148 // function of the sanitized field triple (Body, ContentLength,
149 // TransferEncoding)
150 if t.shouldSendContentLength() {
151 io.WriteString(w, "Content-Length: ")
152 _, err = io.WriteString(w, strconv.FormatInt(t.ContentLength, 10)+"\r\n")
153 if err != nil {
154 return
155 }
156 } else if chunked(t.TransferEncoding) {
157 _, err = io.WriteString(w, "Transfer-Encoding: chunked\r\n")
158 if err != nil {
159 return
160 }
161 }
162
163 // Write Trailer header
164 if t.Trailer != nil {
165 // TODO: At some point, there should be a generic mechanism for
166 // writing long headers, using HTTP line splitting
167 io.WriteString(w, "Trailer: ")
168 needComma := false
169 for k := range t.Trailer {
170 k = CanonicalHeaderKey(k)
171 switch k {
172 case "Transfer-Encoding", "Trailer", "Content-Length":
173 return &badStringError{"invalid Trailer key", k}
174 }
175 if needComma {
176 io.WriteString(w, ",")
177 }
178 io.WriteString(w, k)
179 needComma = true
180 }
181 _, err = io.WriteString(w, "\r\n")
182 }
183
184 return
185}
186
187func (t *transferWriter) WriteBody(w io.Writer) (err error) {
188 var ncopy int64
189
190 // Write body
191 if t.Body != nil {
192 if chunked(t.TransferEncoding) {
193 cw := newChunkedWriter(w)
194 _, err = io.Copy(cw, t.Body)
195 if err == nil {
196 err = cw.Close()
197 }
198 } else if t.ContentLength == -1 {
199 ncopy, err = io.Copy(w, t.Body)
200 } else {
201 ncopy, err = io.Copy(w, io.LimitReader(t.Body, t.ContentLength))
202 nextra, err := io.Copy(ioutil.Discard, t.Body)
203 if err != nil {
204 return err
205 }
206 ncopy += nextra
207 }
208 if err != nil {
209 return err
210 }
211 if err = t.BodyCloser.Close(); err != nil {
212 return err
213 }
214 }
215
216 if t.ContentLength != -1 && t.ContentLength != ncopy {
217 return fmt.Errorf("http: Request.ContentLength=%d with Body length %d",
218 t.ContentLength, ncopy)
219 }
220
221 // TODO(petar): Place trailer writer code here.
222 if chunked(t.TransferEncoding) {
223 // Last chunk, empty trailer
224 _, err = io.WriteString(w, "\r\n")
225 }
226
227 return
228}
229
230type transferReader struct {
231 // Input
232 Header Header
233 StatusCode int
234 RequestMethod string
235 ProtoMajor int
236 ProtoMinor int
237 // Output
238 Body io.ReadCloser
239 ContentLength int64
240 TransferEncoding []string
241 Close bool
242 Trailer Header
243}
244
245// bodyAllowedForStatus returns whether a given response status code
246// permits a body. See RFC2616, section 4.4.
247func bodyAllowedForStatus(status int) bool {
248 switch {
249 case status >= 100 && status <= 199:
250 return false
251 case status == 204:
252 return false
253 case status == 304:
254 return false
255 }
256 return true
257}
258
259// msg is *Request or *Response.
260func readTransfer(msg interface{}, r *bufio.Reader) (err error) {
261 t := &transferReader{}
262
263 // Unify input
264 isResponse := false
265 switch rr := msg.(type) {
266 case *Response:
267 t.Header = rr.Header
268 t.StatusCode = rr.StatusCode
269 t.RequestMethod = rr.Request.Method
270 t.ProtoMajor = rr.ProtoMajor
271 t.ProtoMinor = rr.ProtoMinor
272 t.Close = shouldClose(t.ProtoMajor, t.ProtoMinor, t.Header)
273 isResponse = true
274 case *Request:
275 t.Header = rr.Header
276 t.ProtoMajor = rr.ProtoMajor
277 t.ProtoMinor = rr.ProtoMinor
278 // Transfer semantics for Requests are exactly like those for
279 // Responses with status code 200, responding to a GET method
280 t.StatusCode = 200
281 t.RequestMethod = "GET"
282 default:
283 panic("unexpected type")
284 }
285
286 // Default to HTTP/1.1
287 if t.ProtoMajor == 0 && t.ProtoMinor == 0 {
288 t.ProtoMajor, t.ProtoMinor = 1, 1
289 }
290
291 // Transfer encoding, content length
292 t.TransferEncoding, err = fixTransferEncoding(t.RequestMethod, t.Header)
293 if err != nil {
294 return err
295 }
296
297 t.ContentLength, err = fixLength(isResponse, t.StatusCode, t.RequestMethod, t.Header, t.TransferEncoding)
298 if err != nil {
299 return err
300 }
301
302 // Trailer
303 t.Trailer, err = fixTrailer(t.Header, t.TransferEncoding)
304 if err != nil {
305 return err
306 }
307
308 // If there is no Content-Length or chunked Transfer-Encoding on a *Response
309 // and the status is not 1xx, 204 or 304, then the body is unbounded.
310 // See RFC2616, section 4.4.
311 switch msg.(type) {
312 case *Response:
313 if t.ContentLength == -1 &&
314 !chunked(t.TransferEncoding) &&
315 bodyAllowedForStatus(t.StatusCode) {
316 // Unbounded body.
317 t.Close = true
318 }
319 }
320
321 // Prepare body reader. ContentLength < 0 means chunked encoding
322 // or close connection when finished, since multipart is not supported yet
323 switch {
324 case chunked(t.TransferEncoding):
325 t.Body = &body{Reader: newChunkedReader(r), hdr: msg, r: r, closing: t.Close}
326 case t.ContentLength >= 0:
327 // TODO: limit the Content-Length. This is an easy DoS vector.
328 t.Body = &body{Reader: io.LimitReader(r, t.ContentLength), closing: t.Close}
329 default:
330 // t.ContentLength < 0, i.e. "Content-Length" not mentioned in header
331 if t.Close {
332 // Close semantics (i.e. HTTP/1.0)
333 t.Body = &body{Reader: r, closing: t.Close}
334 } else {
335 // Persistent connection (i.e. HTTP/1.1)
336 t.Body = &body{Reader: io.LimitReader(r, 0), closing: t.Close}
337 }
338 }
339
340 // Unify output
341 switch rr := msg.(type) {
342 case *Request:
343 rr.Body = t.Body
344 rr.ContentLength = t.ContentLength
345 rr.TransferEncoding = t.TransferEncoding
346 rr.Close = t.Close
347 rr.Trailer = t.Trailer
348 case *Response:
349 rr.Body = t.Body
350 rr.ContentLength = t.ContentLength
351 rr.TransferEncoding = t.TransferEncoding
352 rr.Close = t.Close
353 rr.Trailer = t.Trailer
354 }
355
356 return nil
357}
358
359// Checks whether chunked is part of the encodings stack
360func chunked(te []string) bool { return len(te) > 0 && te[0] == "chunked" }
361
362// Checks whether the encoding is explicitly "identity".
363func isIdentity(te []string) bool { return len(te) == 1 && te[0] == "identity" }
364
365// Sanitize transfer encoding
366func fixTransferEncoding(requestMethod string, header Header) ([]string, error) {
367 raw, present := header["Transfer-Encoding"]
368 if !present {
369 return nil, nil
370 }
371
372 delete(header, "Transfer-Encoding")
373
374 // Head responses have no bodies, so the transfer encoding
375 // should be ignored.
376 if requestMethod == "HEAD" {
377 return nil, nil
378 }
379
380 encodings := strings.Split(raw[0], ",")
381 te := make([]string, 0, len(encodings))
382 // TODO: Even though we only support "identity" and "chunked"
383 // encodings, the loop below is designed with foresight. One
384 // invariant that must be maintained is that, if present,
385 // chunked encoding must always come first.
386 for _, encoding := range encodings {
387 encoding = strings.ToLower(strings.TrimSpace(encoding))
388 // "identity" encoding is not recorded
389 if encoding == "identity" {
390 break
391 }
392 if encoding != "chunked" {
393 return nil, &badStringError{"unsupported transfer encoding", encoding}
394 }
395 te = te[0 : len(te)+1]
396 te[len(te)-1] = encoding
397 }
398 if len(te) > 1 {
399 return nil, &badStringError{"too many transfer encodings", strings.Join(te, ",")}
400 }
401 if len(te) > 0 {
402 // Chunked encoding trumps Content-Length. See RFC 2616
403 // Section 4.4. Currently len(te) > 0 implies chunked
404 // encoding.
405 delete(header, "Content-Length")
406 return te, nil
407 }
408
409 return nil, nil
410}
411
412// Determine the expected body length, using RFC 2616 Section 4.4. This
413// function is not a method, because ultimately it should be shared by
414// ReadResponse and ReadRequest.
415func fixLength(isResponse bool, status int, requestMethod string, header Header, te []string) (int64, error) {
416
417 // Logic based on response type or status
418 if noBodyExpected(requestMethod) {
419 return 0, nil
420 }
421 if status/100 == 1 {
422 return 0, nil
423 }
424 switch status {
425 case 204, 304:
426 return 0, nil
427 }
428
429 // Logic based on Transfer-Encoding
430 if chunked(te) {
431 return -1, nil
432 }
433
434 // Logic based on Content-Length
435 cl := strings.TrimSpace(header.Get("Content-Length"))
436 if cl != "" {
437 n, err := strconv.ParseInt(cl, 10, 64)
438 if err != nil || n < 0 {
439 return -1, &badStringError{"bad Content-Length", cl}
440 }
441 return n, nil
442 } else {
443 header.Del("Content-Length")
444 }
445
446 if !isResponse && requestMethod == "GET" {
447		// RFC 2616 neither explicitly permits nor forbids an
448 // entity-body on a GET request so we permit one if
449 // declared, but we default to 0 here (not -1 below)
450 // if there's no mention of a body.
451 return 0, nil
452 }
453
454 // Logic based on media type. The purpose of the following code is just
455 // to detect whether the unsupported "multipart/byteranges" is being
456 // used. A proper Content-Type parser is needed in the future.
457 if strings.Contains(strings.ToLower(header.Get("Content-Type")), "multipart/byteranges") {
458 return -1, ErrNotSupported
459 }
460
461 // Body-EOF logic based on other methods (like closing, or chunked coding)
462 return -1, nil
463}
464
465// Determine whether to hang up after sending a request and body, or
466// receiving a response and body
467// 'header' is the request headers
468func shouldClose(major, minor int, header Header) bool {
469 if major < 1 {
470 return true
471 } else if major == 1 && minor == 0 {
472 if !strings.Contains(strings.ToLower(header.Get("Connection")), "keep-alive") {
473 return true
474 }
475 return false
476 } else {
477 // TODO: Should split on commas, toss surrounding white space,
478 // and check each field.
479 if strings.ToLower(header.Get("Connection")) == "close" {
480 header.Del("Connection")
481 return true
482 }
483 }
484 return false
485}
486
487// Parse the trailer header
488func fixTrailer(header Header, te []string) (Header, error) {
489 raw := header.Get("Trailer")
490 if raw == "" {
491 return nil, nil
492 }
493
494 header.Del("Trailer")
495 trailer := make(Header)
496 keys := strings.Split(raw, ",")
497 for _, key := range keys {
498 key = CanonicalHeaderKey(strings.TrimSpace(key))
499 switch key {
500 case "Transfer-Encoding", "Trailer", "Content-Length":
501 return nil, &badStringError{"bad trailer key", key}
502 }
503 trailer.Del(key)
504 }
505 if len(trailer) == 0 {
506 return nil, nil
507 }
508 if !chunked(te) {
509 // Trailer and no chunking
510 return nil, ErrUnexpectedTrailer
511 }
512 return trailer, nil
513}
514
515// body turns a Reader into a ReadCloser.
516// Close ensures that the body has been fully read
517// and then reads the trailer if necessary.
518type body struct {
519 io.Reader
520 hdr interface{} // non-nil (Response or Request) value means read trailer
521 r *bufio.Reader // underlying wire-format reader for the trailer
522 closing bool // is the connection to be closed after reading body?
523 closed bool
524
525 res *response // response writer for server requests, else nil
526}
527
528// ErrBodyReadAfterClose is returned when reading a Request Body after
529// the body has been closed. This typically happens when the body is
530// read after an HTTP Handler calls WriteHeader or Write on its
531// ResponseWriter.
532var ErrBodyReadAfterClose = errors.New("http: invalid Read on closed request Body")
533
534func (b *body) Read(p []byte) (n int, err error) {
535 if b.closed {
536 return 0, ErrBodyReadAfterClose
537 }
538 n, err = b.Reader.Read(p)
539
540 // Read the final trailer once we hit EOF.
541 if err == io.EOF && b.hdr != nil {
542 if e := b.readTrailer(); e != nil {
543 err = e
544 }
545 b.hdr = nil
546 }
547 return n, err
548}
549
550var (
551 singleCRLF = []byte("\r\n")
552 doubleCRLF = []byte("\r\n\r\n")
553)
554
555func seeUpcomingDoubleCRLF(r *bufio.Reader) bool {
556 for peekSize := 4; ; peekSize++ {
557 // This loop stops when Peek returns an error,
558 // which it does when r's buffer has been filled.
559 buf, err := r.Peek(peekSize)
560 if bytes.HasSuffix(buf, doubleCRLF) {
561 return true
562 }
563 if err != nil {
564 break
565 }
566 }
567 return false
568}
569
570func (b *body) readTrailer() error {
571 // The common case, since nobody uses trailers.
572 buf, _ := b.r.Peek(2)
573 if bytes.Equal(buf, singleCRLF) {
574 b.r.ReadByte()
575 b.r.ReadByte()
576 return nil
577 }
578
579 // Make sure there's a header terminator coming up, to prevent
580 // a DoS with an unbounded size Trailer. It's not easy to
581 // slip in a LimitReader here, as textproto.NewReader requires
582 // a concrete *bufio.Reader. Also, we can't get all the way
583 // back up to our conn's LimitedReader that *might* be backing
584 // this bufio.Reader. Instead, a hack: we iteratively Peek up
585 // to the bufio.Reader's max size, looking for a double CRLF.
586 // This limits the trailer to the underlying buffer size, typically 4kB.
587 if !seeUpcomingDoubleCRLF(b.r) {
588 return errors.New("http: suspiciously long trailer after chunked body")
589 }
590
591 hdr, err := textproto.NewReader(b.r).ReadMIMEHeader()
592 if err != nil {
593 return err
594 }
595 switch rr := b.hdr.(type) {
The diff has been truncated for viewing.
