Status: | Merged |
---|---|
Approved by: | Raphaël Badin |
Approved revision: | 202 |
Merged at revision: | 197 |
Proposed branch: | lp:~rvb/gwacl/tls-fix |
Merge into: | lp:gwacl |
Diff against target: |
10735 lines (+9979/-246) 41 files modified
HACKING.txt (+0/-24) fork/LICENSE (+27/-0) fork/README (+11/-0) fork/go-tls-renegotiation.patch (+113/-0) fork/http/chunked.go (+170/-0) fork/http/client.go (+339/-0) fork/http/cookie.go (+267/-0) fork/http/doc.go (+80/-0) fork/http/filetransport.go (+123/-0) fork/http/fs.go (+367/-0) fork/http/header.go (+78/-0) fork/http/jar.go (+30/-0) fork/http/lex.go (+136/-0) fork/http/request.go (+743/-0) fork/http/response.go (+239/-0) fork/http/server.go (+1234/-0) fork/http/sniff.go (+214/-0) fork/http/status.go (+108/-0) fork/http/transfer.go (+632/-0) fork/http/transport.go (+757/-0) fork/http/triv.go (+141/-0) fork/tls/alert.go (+77/-0) fork/tls/cipher_suites.go (+188/-0) fork/tls/common.go (+322/-0) fork/tls/conn.go (+886/-0) fork/tls/generate_cert.go (+74/-0) fork/tls/handshake_client.go (+347/-0) fork/tls/handshake_messages.go (+1078/-0) fork/tls/handshake_server.go (+352/-0) fork/tls/key_agreement.go (+253/-0) fork/tls/parse-gnutls-cli-debug-log.py (+57/-0) fork/tls/prf.go (+235/-0) fork/tls/tls.go (+187/-0) httperror.go (+1/-2) management_base_test.go (+29/-5) poller_test.go (+1/-1) testhelpers_x509dispatch.go (+1/-1) x509dispatcher.go (+22/-154) x509dispatcher_test.go (+17/-42) x509session.go (+29/-3) x509session_test.go (+14/-14) |
To merge this branch: | bzr merge lp:~rvb/gwacl/tls-fix |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Gavin Panella | Approve | ||
Review via email: mp+176185@code.launchpad.net |
Commit message
Use a forked version of crypto/tls and net/http.
Description of the change
This branch adds a forked version of crypto/tls and net/http: crypto/tls is patched to add support for TLS renegotiation, and net/http is patched to use the forked version of crypto/tls.
It includes jtv's branch (lp:~jtv/gwacl/curlless), which switches from using go-curl to the standard library's TLS support. (That change is then modified to use the forked versions of net/http and crypto/tls instead of the standard library.)
We forked from Go version 1.0.2 in order to support both Go 1.0.2 and Go 1.1.1. (I tried forking from the Go 1.1.1 versions of crypto/tls and net/http, but I was quickly overwhelmed by the number of libraries that needed to be backported, and the code made heavy use of Go 1.1.1-only features.)
This was tested (unit tests plus an example script) with both Go 1.0.2 and Go 1.1.1.
- 197. By Raphaël Badin: Remove last mention of go-curl.
Gavin Panella (allenap) wrote:
Before landing, do you know if licensing is okay?
- 198. By Raphaël Badin: Review fixes.
Raphaël Badin (rvb) wrote:
> Looks good!
>
>
> [1]
>
> --- fork/README 1970-01-01 00:00:00 +0000
> +++ fork/README 2013-07-22 12:54:30 +0000
> @@ -0,0 +1,7 @@
> +This directory contains a fork of Go's standard libraries net/http and
> crypto/tls.
> ...
>
> Please wrap lines in here at no more than 80 columns. I suggest 72.
Okay, done.
> [2]
>
> === added file 'fork/go-
> --- fork/go-
> +++ fork/go-
> @@ -0,0 +1,5 @@
> +15c15
> +< "launchpad.
> +---
> +> "crypto/tls"
> +
>
> I'm not sure this is useful. Without context it's not very
> informative. It's also in the wrong direction. I suggest omitting it.
You're right, fixed.
> [3]
>
> +func performX509Requ
> (*x509Response, error) {
> ...
> + response.Header = httpResponse.Header
> + if err != nil {
> + return nil, err
> + }
>
> I don't think that condition is needed here. Could this be one of
> those Go-boilerplate-
Just a left-over from the recent refactoring, fixed.
- 199. By Raphaël Badin: Rename files.
- 200. By Raphaël Badin: Add LICENSE file.
Raphaël Badin (rvb) wrote:
After consulting with James Page, we got confirmation that the licensing is okay: BSD-licensed code can be included in gwacl, which is an AGPL3-licensed project. I'll add a LICENSE file in /fork and we will (in another branch) add proper COPYING/
- 201. By Raphaël Badin: Fix license file.
Gavin Panella (allenap):
- 202. By Raphaël Badin: Make format.
Julian Edwards (julian-edwards) wrote:
On 23/07/13 00:03, Raphaël Badin wrote:
> After consulting with James Page, we got confirmation that the licensing is okay: BSD-licensed code can be included in gwacl, which is an AGPL3-licensed project. I'll add a LICENSE file in /fork and we will (in another branch) add proper COPYING/
>
It's actually LGPL, but I think it still works.
Preview Diff
1 | === modified file 'HACKING.txt' |
2 | --- HACKING.txt 2013-06-24 00:39:08 +0000 |
3 | +++ HACKING.txt 2013-07-22 14:27:42 +0000 |
4 | @@ -20,30 +20,6 @@ |
5 | .. _Bazaar: http://bazaar.canonical.com/ |
6 | |
7 | |
8 | -Prerequisites |
9 | -------------- |
10 | - |
11 | -The code that communicates with Azure's management API uses *libcurl*, through |
12 | -a Go language binding called *go-curl*. You need to install the libcurl |
13 | -headers for your OS. On Ubuntu this is:: |
14 | - |
15 | - sudo apt-get install libcurl4-openssl-dev |
16 | - |
17 | -If you didn't ``go get`` gwacl you may also need to install go-curl:: |
18 | - |
19 | - go get github.com/andelf/go-curl |
20 | - |
21 | -On Ubuntu 12.10 at least, you specifically need the given version of libcurl. |
22 | -With other versions you will receive unexpected "403" HTTP status codes |
23 | -("Forbidden") from the Azure server. |
24 | - |
25 | -Why use libcurl? At the time of writing, Go's built-in http package does not |
26 | -support TLS renegotiation. We find that Azure forces such a renegotiation |
27 | -when we access the management API. The use of libcurl is embedded so that |
28 | -future implementations can swap it out for a different http library without |
29 | -breaking compatibility. |
30 | - |
31 | - |
32 | API philosophy |
33 | -------------- |
34 | |
35 | |
36 | === added directory 'fork' |
37 | === added file 'fork/LICENSE' |
38 | --- fork/LICENSE 1970-01-01 00:00:00 +0000 |
39 | +++ fork/LICENSE 2013-07-22 14:27:42 +0000 |
40 | @@ -0,0 +1,27 @@ |
41 | +Copyright (c) 2012 The Go Authors. All rights reserved. |
42 | + |
43 | +Redistribution and use in source and binary forms, with or without |
44 | +modification, are permitted provided that the following conditions are |
45 | +met: |
46 | + |
47 | + * Redistributions of source code must retain the above copyright |
48 | +notice, this list of conditions and the following disclaimer. |
49 | + * Redistributions in binary form must reproduce the above |
50 | +copyright notice, this list of conditions and the following disclaimer |
51 | +in the documentation and/or other materials provided with the |
52 | +distribution. |
53 | + * Neither the name of Google Inc. nor the names of its |
54 | +contributors may be used to endorse or promote products derived from |
55 | +this software without specific prior written permission. |
56 | + |
57 | +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS |
58 | +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT |
59 | +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR |
60 | +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT |
61 | +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, |
62 | +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT |
63 | +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, |
64 | +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY |
65 | +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT |
66 | +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE |
67 | +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |
68 | |
69 | === added file 'fork/README' |
70 | --- fork/README 1970-01-01 00:00:00 +0000 |
71 | +++ fork/README 2013-07-22 14:27:42 +0000 |
72 | @@ -0,0 +1,11 @@ |
73 | +This directory contains a fork of Go's standard libraries net/http and |
74 | +crypto/tls. |
75 | + |
76 | +This fork is required to support the TLS renegotiation which is triggered by |
77 | +the Windows Azure server when establishing an HTTPS connection. TLS |
78 | +renegotiation is currently not supported by Go's standard library. |
79 | + |
80 | +The fork is based on go version 2:1.0.2-2. |
81 | +The library crypto/tls is patched to support TLS renegotiation (see the patch |
82 | +file "go-tls-renegotiation.patch"). |
83 | +The library net/http is patched to use the forked version of crypto/tls. |
84 | |
85 | === added file 'fork/go-tls-renegotiation.patch' |
86 | --- fork/go-tls-renegotiation.patch 1970-01-01 00:00:00 +0000 |
87 | +++ fork/go-tls-renegotiation.patch 2013-07-22 14:27:42 +0000 |
88 | @@ -0,0 +1,113 @@ |
89 | +diff -r c242bbf5fa8c src/pkg/crypto/tls/common.go |
90 | +--- a/src/pkg/crypto/tls/common.go Wed Jul 17 14:03:27 2013 -0400 |
91 | ++++ b/src/pkg/crypto/tls/common.go Thu Jul 18 13:45:43 2013 -0400 |
92 | +@@ -44,6 +44,7 @@ |
93 | + |
94 | + // TLS handshake message types. |
95 | + const ( |
96 | ++ typeHelloRequest uint8 = 0 |
97 | + typeClientHello uint8 = 1 |
98 | + typeServerHello uint8 = 2 |
99 | + typeNewSessionTicket uint8 = 4 |
100 | +diff -r c242bbf5fa8c src/pkg/crypto/tls/conn.go |
101 | +--- a/src/pkg/crypto/tls/conn.go Wed Jul 17 14:03:27 2013 -0400 |
102 | ++++ b/src/pkg/crypto/tls/conn.go Thu Jul 18 13:45:43 2013 -0400 |
103 | +@@ -146,6 +146,9 @@ |
104 | + hc.mac = hc.nextMac |
105 | + hc.nextCipher = nil |
106 | + hc.nextMac = nil |
107 | ++ for i := range hc.seq { |
108 | ++ hc.seq[i] = 0 |
109 | ++ } |
110 | + return nil |
111 | + } |
112 | + |
113 | +@@ -478,7 +481,7 @@ |
114 | + func (c *Conn) readRecord(want recordType) error { |
115 | + // Caller must be in sync with connection: |
116 | + // handshake data if handshake not yet completed, |
117 | +- // else application data. (We don't support renegotiation.) |
118 | ++ // else application data. |
119 | + switch want { |
120 | + default: |
121 | + return c.sendAlert(alertInternalError) |
122 | +@@ -611,7 +614,7 @@ |
123 | + |
124 | + case recordTypeHandshake: |
125 | + // TODO(rsc): Should at least pick off connection close. |
126 | +- if typ != want { |
127 | ++ if typ != want && !c.isClient { |
128 | + return c.sendAlert(alertNoRenegotiation) |
129 | + } |
130 | + c.hand.Write(data) |
131 | +@@ -741,6 +744,8 @@ |
132 | + data = c.hand.Next(4 + n) |
133 | + var m handshakeMessage |
134 | + switch data[0] { |
135 | ++ case typeHelloRequest: |
136 | ++ m = new(helloRequestMsg) |
137 | + case typeClientHello: |
138 | + m = new(clientHelloMsg) |
139 | + case typeServerHello: |
140 | +@@ -825,6 +830,25 @@ |
141 | + return n + m, c.setError(err) |
142 | + } |
143 | + |
144 | ++func (c *Conn) handleRenegotiation() error { |
145 | ++ c.handshakeComplete = false |
146 | ++ if !c.isClient { |
147 | ++ panic("renegotiation should only happen for a client") |
148 | ++ } |
149 | ++ |
150 | ++ msg, err := c.readHandshake() |
151 | ++ if err != nil { |
152 | ++ return err |
153 | ++ } |
154 | ++ _, ok := msg.(*helloRequestMsg) |
155 | ++ if !ok { |
156 | ++ c.sendAlert(alertUnexpectedMessage) |
157 | ++ return alertUnexpectedMessage |
158 | ++ } |
159 | ++ |
160 | ++ return c.Handshake() |
161 | ++} |
162 | ++ |
163 | + // Read can be made to time out and return a net.Error with Timeout() == true |
164 | + // after a fixed time limit; see SetDeadline and SetReadDeadline. |
165 | + func (c *Conn) Read(b []byte) (n int, err error) { |
166 | +@@ -844,6 +868,14 @@ |
167 | + // Soft error, like EAGAIN |
168 | + return 0, err |
169 | + } |
170 | ++ if c.hand.Len() > 0 { |
171 | ++ // We received handshake bytes, indicating the start of |
172 | ++ // a renegotiation. |
173 | ++ if err := c.handleRenegotiation(); err != nil { |
174 | ++ return 0, err |
175 | ++ } |
176 | ++ continue |
177 | ++ } |
178 | + } |
179 | + if err := c.error(); err != nil { |
180 | + return 0, err |
181 | +diff -r c242bbf5fa8c src/pkg/crypto/tls/handshake_messages.go |
182 | +--- a/src/pkg/crypto/tls/handshake_messages.go Wed Jul 17 14:03:27 2013 -0400 |
183 | ++++ b/src/pkg/crypto/tls/handshake_messages.go Thu Jul 18 13:45:43 2013 -0400 |
184 | +@@ -1243,6 +1243,17 @@ |
185 | + return true |
186 | + } |
187 | + |
188 | ++type helloRequestMsg struct { |
189 | ++} |
190 | ++ |
191 | ++func (*helloRequestMsg) marshal() []byte { |
192 | ++ return []byte{typeHelloRequest, 0, 0, 0} |
193 | ++} |
194 | ++ |
195 | ++func (*helloRequestMsg) unmarshal(data []byte) bool { |
196 | ++ return len(data) == 4 |
197 | ++} |
198 | ++ |
199 | + func eqUint16s(x, y []uint16) bool { |
200 | + if len(x) != len(y) { |
201 | + return false |
202 | |
203 | === added directory 'fork/http' |
204 | === added file 'fork/http/chunked.go' |
205 | --- fork/http/chunked.go 1970-01-01 00:00:00 +0000 |
206 | +++ fork/http/chunked.go 2013-07-22 14:27:42 +0000 |
207 | @@ -0,0 +1,170 @@ |
208 | +// Copyright 2009 The Go Authors. All rights reserved. |
209 | +// Use of this source code is governed by a BSD-style |
210 | +// license that can be found in the LICENSE file. |
211 | + |
212 | +// The wire protocol for HTTP's "chunked" Transfer-Encoding. |
213 | + |
214 | +// This code is duplicated in httputil/chunked.go. |
215 | +// Please make any changes in both files. |
216 | + |
217 | +package http |
218 | + |
219 | +import ( |
220 | + "bufio" |
221 | + "bytes" |
222 | + "errors" |
223 | + "io" |
224 | + "strconv" |
225 | +) |
226 | + |
227 | +const maxLineLength = 4096 // assumed <= bufio.defaultBufSize |
228 | + |
229 | +var ErrLineTooLong = errors.New("header line too long") |
230 | + |
231 | +// newChunkedReader returns a new chunkedReader that translates the data read from r |
232 | +// out of HTTP "chunked" format before returning it. |
233 | +// The chunkedReader returns io.EOF when the final 0-length chunk is read. |
234 | +// |
235 | +// newChunkedReader is not needed by normal applications. The http package |
236 | +// automatically decodes chunking when reading response bodies. |
237 | +func newChunkedReader(r io.Reader) io.Reader { |
238 | + br, ok := r.(*bufio.Reader) |
239 | + if !ok { |
240 | + br = bufio.NewReader(r) |
241 | + } |
242 | + return &chunkedReader{r: br} |
243 | +} |
244 | + |
245 | +type chunkedReader struct { |
246 | + r *bufio.Reader |
247 | + n uint64 // unread bytes in chunk |
248 | + err error |
249 | +} |
250 | + |
251 | +func (cr *chunkedReader) beginChunk() { |
252 | + // chunk-size CRLF |
253 | + var line string |
254 | + line, cr.err = readLine(cr.r) |
255 | + if cr.err != nil { |
256 | + return |
257 | + } |
258 | + cr.n, cr.err = strconv.ParseUint(line, 16, 64) |
259 | + if cr.err != nil { |
260 | + return |
261 | + } |
262 | + if cr.n == 0 { |
263 | + cr.err = io.EOF |
264 | + } |
265 | +} |
266 | + |
267 | +func (cr *chunkedReader) Read(b []uint8) (n int, err error) { |
268 | + if cr.err != nil { |
269 | + return 0, cr.err |
270 | + } |
271 | + if cr.n == 0 { |
272 | + cr.beginChunk() |
273 | + if cr.err != nil { |
274 | + return 0, cr.err |
275 | + } |
276 | + } |
277 | + if uint64(len(b)) > cr.n { |
278 | + b = b[0:cr.n] |
279 | + } |
280 | + n, cr.err = cr.r.Read(b) |
281 | + cr.n -= uint64(n) |
282 | + if cr.n == 0 && cr.err == nil { |
283 | + // end of chunk (CRLF) |
284 | + b := make([]byte, 2) |
285 | + if _, cr.err = io.ReadFull(cr.r, b); cr.err == nil { |
286 | + if b[0] != '\r' || b[1] != '\n' { |
287 | + cr.err = errors.New("malformed chunked encoding") |
288 | + } |
289 | + } |
290 | + } |
291 | + return n, cr.err |
292 | +} |
293 | + |
294 | +// Read a line of bytes (up to \n) from b. |
295 | +// Give up if the line exceeds maxLineLength. |
296 | +// The returned bytes are a pointer into storage in |
297 | +// the bufio, so they are only valid until the next bufio read. |
298 | +func readLineBytes(b *bufio.Reader) (p []byte, err error) { |
299 | + if p, err = b.ReadSlice('\n'); err != nil { |
300 | + // We always know when EOF is coming. |
301 | + // If the caller asked for a line, there should be a line. |
302 | + if err == io.EOF { |
303 | + err = io.ErrUnexpectedEOF |
304 | + } else if err == bufio.ErrBufferFull { |
305 | + err = ErrLineTooLong |
306 | + } |
307 | + return nil, err |
308 | + } |
309 | + if len(p) >= maxLineLength { |
310 | + return nil, ErrLineTooLong |
311 | + } |
312 | + |
313 | + // Chop off trailing white space. |
314 | + p = bytes.TrimRight(p, " \r\t\n") |
315 | + |
316 | + return p, nil |
317 | +} |
318 | + |
319 | +// readLineBytes, but convert the bytes into a string. |
320 | +func readLine(b *bufio.Reader) (s string, err error) { |
321 | + p, e := readLineBytes(b) |
322 | + if e != nil { |
323 | + return "", e |
324 | + } |
325 | + return string(p), nil |
326 | +} |
327 | + |
328 | +// newChunkedWriter returns a new chunkedWriter that translates writes into HTTP |
329 | +// "chunked" format before writing them to w. Closing the returned chunkedWriter |
330 | +// sends the final 0-length chunk that marks the end of the stream. |
331 | +// |
332 | +// newChunkedWriter is not needed by normal applications. The http |
333 | +// package adds chunking automatically if handlers don't set a |
334 | +// Content-Length header. Using newChunkedWriter inside a handler |
335 | +// would result in double chunking or chunking with a Content-Length |
336 | +// length, both of which are wrong. |
337 | +func newChunkedWriter(w io.Writer) io.WriteCloser { |
338 | + return &chunkedWriter{w} |
339 | +} |
340 | + |
341 | +// Writing to chunkedWriter translates to writing in HTTP chunked Transfer |
342 | +// Encoding wire format to the underlying Wire chunkedWriter. |
343 | +type chunkedWriter struct { |
344 | + Wire io.Writer |
345 | +} |
346 | + |
347 | +// Write the contents of data as one chunk to Wire. |
348 | +// NOTE: Note that the corresponding chunk-writing procedure in Conn.Write has |
349 | +// a bug since it does not check for success of io.WriteString |
350 | +func (cw *chunkedWriter) Write(data []byte) (n int, err error) { |
351 | + |
352 | + // Don't send 0-length data. It looks like EOF for chunked encoding. |
353 | + if len(data) == 0 { |
354 | + return 0, nil |
355 | + } |
356 | + |
357 | + head := strconv.FormatInt(int64(len(data)), 16) + "\r\n" |
358 | + |
359 | + if _, err = io.WriteString(cw.Wire, head); err != nil { |
360 | + return 0, err |
361 | + } |
362 | + if n, err = cw.Wire.Write(data); err != nil { |
363 | + return |
364 | + } |
365 | + if n != len(data) { |
366 | + err = io.ErrShortWrite |
367 | + return |
368 | + } |
369 | + _, err = io.WriteString(cw.Wire, "\r\n") |
370 | + |
371 | + return |
372 | +} |
373 | + |
374 | +func (cw *chunkedWriter) Close() error { |
375 | + _, err := io.WriteString(cw.Wire, "0\r\n") |
376 | + return err |
377 | +} |
378 | |
379 | === added file 'fork/http/client.go' |
380 | --- fork/http/client.go 1970-01-01 00:00:00 +0000 |
381 | +++ fork/http/client.go 2013-07-22 14:27:42 +0000 |
382 | @@ -0,0 +1,339 @@ |
383 | +// Copyright 2009 The Go Authors. All rights reserved. |
384 | +// Use of this source code is governed by a BSD-style |
385 | +// license that can be found in the LICENSE file. |
386 | + |
387 | +// HTTP client. See RFC 2616. |
388 | +// |
389 | +// This is the high-level Client interface. |
390 | +// The low-level implementation is in transport.go. |
391 | + |
392 | +package http |
393 | + |
394 | +import ( |
395 | + "encoding/base64" |
396 | + "errors" |
397 | + "fmt" |
398 | + "io" |
399 | + "net/url" |
400 | + "strings" |
401 | +) |
402 | + |
403 | +// A Client is an HTTP client. Its zero value (DefaultClient) is a usable client |
404 | +// that uses DefaultTransport. |
405 | +// |
406 | +// The Client's Transport typically has internal state (cached |
407 | +// TCP connections), so Clients should be reused instead of created as |
408 | +// needed. Clients are safe for concurrent use by multiple goroutines. |
409 | +type Client struct { |
410 | + // Transport specifies the mechanism by which individual |
411 | + // HTTP requests are made. |
412 | + // If nil, DefaultTransport is used. |
413 | + Transport RoundTripper |
414 | + |
415 | + // CheckRedirect specifies the policy for handling redirects. |
416 | + // If CheckRedirect is not nil, the client calls it before |
417 | + // following an HTTP redirect. The arguments req and via |
418 | + // are the upcoming request and the requests made already, |
419 | + // oldest first. If CheckRedirect returns an error, the client |
420 | + // returns that error instead of issue the Request req. |
421 | + // |
422 | + // If CheckRedirect is nil, the Client uses its default policy, |
423 | + // which is to stop after 10 consecutive requests. |
424 | + CheckRedirect func(req *Request, via []*Request) error |
425 | + |
426 | + // Jar specifies the cookie jar. |
427 | + // If Jar is nil, cookies are not sent in requests and ignored |
428 | + // in responses. |
429 | + Jar CookieJar |
430 | +} |
431 | + |
432 | +// DefaultClient is the default Client and is used by Get, Head, and Post. |
433 | +var DefaultClient = &Client{} |
434 | + |
435 | +// RoundTripper is an interface representing the ability to execute a |
436 | +// single HTTP transaction, obtaining the Response for a given Request. |
437 | +// |
438 | +// A RoundTripper must be safe for concurrent use by multiple |
439 | +// goroutines. |
440 | +type RoundTripper interface { |
441 | + // RoundTrip executes a single HTTP transaction, returning |
442 | + // the Response for the request req. RoundTrip should not |
443 | + // attempt to interpret the response. In particular, |
444 | + // RoundTrip must return err == nil if it obtained a response, |
445 | + // regardless of the response's HTTP status code. A non-nil |
446 | + // err should be reserved for failure to obtain a response. |
447 | + // Similarly, RoundTrip should not attempt to handle |
448 | + // higher-level protocol details such as redirects, |
449 | + // authentication, or cookies. |
450 | + // |
451 | + // RoundTrip should not modify the request, except for |
452 | + // consuming the Body. The request's URL and Header fields |
453 | + // are guaranteed to be initialized. |
454 | + RoundTrip(*Request) (*Response, error) |
455 | +} |
456 | + |
457 | +// Given a string of the form "host", "host:port", or "[ipv6::address]:port", |
458 | +// return true if the string includes a port. |
459 | +func hasPort(s string) bool { return strings.LastIndex(s, ":") > strings.LastIndex(s, "]") } |
460 | + |
461 | +// Used in Send to implement io.ReadCloser by bundling together the |
462 | +// bufio.Reader through which we read the response, and the underlying |
463 | +// network connection. |
464 | +type readClose struct { |
465 | + io.Reader |
466 | + io.Closer |
467 | +} |
468 | + |
469 | +// Do sends an HTTP request and returns an HTTP response, following |
470 | +// policy (e.g. redirects, cookies, auth) as configured on the client. |
471 | +// |
472 | +// A non-nil response always contains a non-nil resp.Body. |
473 | +// |
474 | +// Callers should close resp.Body when done reading from it. If |
475 | +// resp.Body is not closed, the Client's underlying RoundTripper |
476 | +// (typically Transport) may not be able to re-use a persistent TCP |
477 | +// connection to the server for a subsequent "keep-alive" request. |
478 | +// |
479 | +// Generally Get, Post, or PostForm will be used instead of Do. |
480 | +func (c *Client) Do(req *Request) (resp *Response, err error) { |
481 | + if req.Method == "GET" || req.Method == "HEAD" { |
482 | + return c.doFollowingRedirects(req) |
483 | + } |
484 | + return send(req, c.Transport) |
485 | +} |
486 | + |
487 | +// send issues an HTTP request. Caller should close resp.Body when done reading from it. |
488 | +func send(req *Request, t RoundTripper) (resp *Response, err error) { |
489 | + if t == nil { |
490 | + t = DefaultTransport |
491 | + if t == nil { |
492 | + err = errors.New("http: no Client.Transport or DefaultTransport") |
493 | + return |
494 | + } |
495 | + } |
496 | + |
497 | + if req.URL == nil { |
498 | + return nil, errors.New("http: nil Request.URL") |
499 | + } |
500 | + |
501 | + if req.RequestURI != "" { |
502 | + return nil, errors.New("http: Request.RequestURI can't be set in client requests.") |
503 | + } |
504 | + |
505 | + // Most the callers of send (Get, Post, et al) don't need |
506 | + // Headers, leaving it uninitialized. We guarantee to the |
507 | + // Transport that this has been initialized, though. |
508 | + if req.Header == nil { |
509 | + req.Header = make(Header) |
510 | + } |
511 | + |
512 | + if u := req.URL.User; u != nil { |
513 | + req.Header.Set("Authorization", "Basic "+base64.URLEncoding.EncodeToString([]byte(u.String()))) |
514 | + } |
515 | + return t.RoundTrip(req) |
516 | +} |
517 | + |
518 | +// True if the specified HTTP status code is one for which the Get utility should |
519 | +// automatically redirect. |
520 | +func shouldRedirect(statusCode int) bool { |
521 | + switch statusCode { |
522 | + case StatusMovedPermanently, StatusFound, StatusSeeOther, StatusTemporaryRedirect: |
523 | + return true |
524 | + } |
525 | + return false |
526 | +} |
527 | + |
528 | +// Get issues a GET to the specified URL. If the response is one of the following |
529 | +// redirect codes, Get follows the redirect, up to a maximum of 10 redirects: |
530 | +// |
531 | +// 301 (Moved Permanently) |
532 | +// 302 (Found) |
533 | +// 303 (See Other) |
534 | +// 307 (Temporary Redirect) |
535 | +// |
536 | +// Caller should close r.Body when done reading from it. |
537 | +// |
538 | +// Get is a wrapper around DefaultClient.Get. |
539 | +func Get(url string) (r *Response, err error) { |
540 | + return DefaultClient.Get(url) |
541 | +} |
542 | + |
543 | +// Get issues a GET to the specified URL. If the response is one of the |
544 | +// following redirect codes, Get follows the redirect after calling the |
545 | +// Client's CheckRedirect function. |
546 | +// |
547 | +// 301 (Moved Permanently) |
548 | +// 302 (Found) |
549 | +// 303 (See Other) |
550 | +// 307 (Temporary Redirect) |
551 | +// |
552 | +// Caller should close r.Body when done reading from it. |
553 | +func (c *Client) Get(url string) (r *Response, err error) { |
554 | + req, err := NewRequest("GET", url, nil) |
555 | + if err != nil { |
556 | + return nil, err |
557 | + } |
558 | + return c.doFollowingRedirects(req) |
559 | +} |
560 | + |
561 | +func (c *Client) doFollowingRedirects(ireq *Request) (r *Response, err error) { |
562 | + // TODO: if/when we add cookie support, the redirected request shouldn't |
563 | + // necessarily supply the same cookies as the original. |
564 | + var base *url.URL |
565 | + redirectChecker := c.CheckRedirect |
566 | + if redirectChecker == nil { |
567 | + redirectChecker = defaultCheckRedirect |
568 | + } |
569 | + var via []*Request |
570 | + |
571 | + if ireq.URL == nil { |
572 | + return nil, errors.New("http: nil Request.URL") |
573 | + } |
574 | + |
575 | + jar := c.Jar |
576 | + if jar == nil { |
577 | + jar = blackHoleJar{} |
578 | + } |
579 | + |
580 | + req := ireq |
581 | + urlStr := "" // next relative or absolute URL to fetch (after first request) |
582 | + for redirect := 0; ; redirect++ { |
583 | + if redirect != 0 { |
584 | + req = new(Request) |
585 | + req.Method = ireq.Method |
586 | + req.Header = make(Header) |
587 | + req.URL, err = base.Parse(urlStr) |
588 | + if err != nil { |
589 | + break |
590 | + } |
591 | + if len(via) > 0 { |
592 | + // Add the Referer header. |
593 | + lastReq := via[len(via)-1] |
594 | + if lastReq.URL.Scheme != "https" { |
595 | + req.Header.Set("Referer", lastReq.URL.String()) |
596 | + } |
597 | + |
598 | + err = redirectChecker(req, via) |
599 | + if err != nil { |
600 | + break |
601 | + } |
602 | + } |
603 | + } |
604 | + |
605 | + for _, cookie := range jar.Cookies(req.URL) { |
606 | + req.AddCookie(cookie) |
607 | + } |
608 | + urlStr = req.URL.String() |
609 | + if r, err = send(req, c.Transport); err != nil { |
610 | + break |
611 | + } |
612 | + if c := r.Cookies(); len(c) > 0 { |
613 | + jar.SetCookies(req.URL, c) |
614 | + } |
615 | + |
616 | + if shouldRedirect(r.StatusCode) { |
617 | + r.Body.Close() |
618 | + if urlStr = r.Header.Get("Location"); urlStr == "" { |
619 | + err = errors.New(fmt.Sprintf("%d response missing Location header", r.StatusCode)) |
620 | + break |
621 | + } |
622 | + base = req.URL |
623 | + via = append(via, req) |
624 | + continue |
625 | + } |
626 | + return |
627 | + } |
628 | + |
629 | + method := ireq.Method |
630 | + err = &url.Error{ |
631 | + Op: method[0:1] + strings.ToLower(method[1:]), |
632 | + URL: urlStr, |
633 | + Err: err, |
634 | + } |
635 | + return |
636 | +} |
637 | + |
638 | +func defaultCheckRedirect(req *Request, via []*Request) error { |
639 | + if len(via) >= 10 { |
640 | + return errors.New("stopped after 10 redirects") |
641 | + } |
642 | + return nil |
643 | +} |
644 | + |
645 | +// Post issues a POST to the specified URL. |
646 | +// |
647 | +// Caller should close r.Body when done reading from it. |
648 | +// |
649 | +// Post is a wrapper around DefaultClient.Post |
650 | +func Post(url string, bodyType string, body io.Reader) (r *Response, err error) { |
651 | + return DefaultClient.Post(url, bodyType, body) |
652 | +} |
653 | + |
654 | +// Post issues a POST to the specified URL. |
655 | +// |
656 | +// Caller should close r.Body when done reading from it. |
657 | +func (c *Client) Post(url string, bodyType string, body io.Reader) (r *Response, err error) { |
658 | + req, err := NewRequest("POST", url, body) |
659 | + if err != nil { |
660 | + return nil, err |
661 | + } |
662 | + req.Header.Set("Content-Type", bodyType) |
663 | + if c.Jar != nil { |
664 | + for _, cookie := range c.Jar.Cookies(req.URL) { |
665 | + req.AddCookie(cookie) |
666 | + } |
667 | + } |
668 | + r, err = send(req, c.Transport) |
669 | + if err == nil && c.Jar != nil { |
670 | + c.Jar.SetCookies(req.URL, r.Cookies()) |
671 | + } |
672 | + return r, err |
673 | +} |
674 | + |
675 | +// PostForm issues a POST to the specified URL, |
676 | +// with data's keys and values urlencoded as the request body. |
677 | +// |
678 | +// Caller should close r.Body when done reading from it. |
679 | +// |
680 | +// PostForm is a wrapper around DefaultClient.PostForm |
681 | +func PostForm(url string, data url.Values) (r *Response, err error) { |
682 | + return DefaultClient.PostForm(url, data) |
683 | +} |
684 | + |
685 | +// PostForm issues a POST to the specified URL, |
686 | +// with data's keys and values urlencoded as the request body. |
687 | +// |
688 | +// Caller should close r.Body when done reading from it. |
689 | +func (c *Client) PostForm(url string, data url.Values) (r *Response, err error) { |
690 | + return c.Post(url, "application/x-www-form-urlencoded", strings.NewReader(data.Encode())) |
691 | +} |
692 | + |
693 | +// Head issues a HEAD to the specified URL. If the response is one of the |
694 | +// following redirect codes, Head follows the redirect after calling the |
695 | +// Client's CheckRedirect function. |
696 | +// |
697 | +// 301 (Moved Permanently) |
698 | +// 302 (Found) |
699 | +// 303 (See Other) |
700 | +// 307 (Temporary Redirect) |
701 | +// |
702 | +// Head is a wrapper around DefaultClient.Head |
703 | +func Head(url string) (r *Response, err error) { |
704 | + return DefaultClient.Head(url) |
705 | +} |
706 | + |
707 | +// Head issues a HEAD to the specified URL. If the response is one of the |
708 | +// following redirect codes, Head follows the redirect after calling the |
709 | +// Client's CheckRedirect function. |
710 | +// |
711 | +// 301 (Moved Permanently) |
712 | +// 302 (Found) |
713 | +// 303 (See Other) |
714 | +// 307 (Temporary Redirect) |
715 | +func (c *Client) Head(url string) (r *Response, err error) { |
716 | + req, err := NewRequest("HEAD", url, nil) |
717 | + if err != nil { |
718 | + return nil, err |
719 | + } |
720 | + return c.doFollowingRedirects(req) |
721 | +} |
722 | |
723 | === added file 'fork/http/cookie.go' |
724 | --- fork/http/cookie.go 1970-01-01 00:00:00 +0000 |
725 | +++ fork/http/cookie.go 2013-07-22 14:27:42 +0000 |
726 | @@ -0,0 +1,267 @@ |
727 | +// Copyright 2009 The Go Authors. All rights reserved. |
728 | +// Use of this source code is governed by a BSD-style |
729 | +// license that can be found in the LICENSE file. |
730 | + |
731 | +package http |
732 | + |
733 | +import ( |
734 | + "bytes" |
735 | + "fmt" |
736 | + "strconv" |
737 | + "strings" |
738 | + "time" |
739 | +) |
740 | + |
741 | +// This implementation is done according to RFC 6265: |
742 | +// |
743 | +// http://tools.ietf.org/html/rfc6265 |
744 | + |
745 | +// A Cookie represents an HTTP cookie as sent in the Set-Cookie header of an |
746 | +// HTTP response or the Cookie header of an HTTP request. |
747 | +type Cookie struct { |
748 | + Name string |
749 | + Value string |
750 | + Path string |
751 | + Domain string |
752 | + Expires time.Time |
753 | + RawExpires string |
754 | + |
755 | + // MaxAge=0 means no 'Max-Age' attribute specified. |
756 | + // MaxAge<0 means delete cookie now, equivalently 'Max-Age: 0' |
757 | + // MaxAge>0 means Max-Age attribute present and given in seconds |
758 | + MaxAge int |
759 | + Secure bool |
760 | + HttpOnly bool |
761 | + Raw string |
762 | + Unparsed []string // Raw text of unparsed attribute-value pairs |
763 | +} |
764 | + |
765 | +// readSetCookies parses all "Set-Cookie" values from |
766 | +// the header h and returns the successfully parsed Cookies. |
767 | +func readSetCookies(h Header) []*Cookie { |
768 | + cookies := []*Cookie{} |
769 | + for _, line := range h["Set-Cookie"] { |
770 | + parts := strings.Split(strings.TrimSpace(line), ";") |
771 | + if len(parts) == 1 && parts[0] == "" { |
772 | + continue |
773 | + } |
774 | + parts[0] = strings.TrimSpace(parts[0]) |
775 | + j := strings.Index(parts[0], "=") |
776 | + if j < 0 { |
777 | + continue |
778 | + } |
779 | + name, value := parts[0][:j], parts[0][j+1:] |
780 | + if !isCookieNameValid(name) { |
781 | + continue |
782 | + } |
783 | + value, success := parseCookieValue(value) |
784 | + if !success { |
785 | + continue |
786 | + } |
787 | + c := &Cookie{ |
788 | + Name: name, |
789 | + Value: value, |
790 | + Raw: line, |
791 | + } |
792 | + for i := 1; i < len(parts); i++ { |
793 | + parts[i] = strings.TrimSpace(parts[i]) |
794 | + if len(parts[i]) == 0 { |
795 | + continue |
796 | + } |
797 | + |
798 | + attr, val := parts[i], "" |
799 | + if j := strings.Index(attr, "="); j >= 0 { |
800 | + attr, val = attr[:j], attr[j+1:] |
801 | + } |
802 | + lowerAttr := strings.ToLower(attr) |
803 | + parseCookieValueFn := parseCookieValue |
804 | + if lowerAttr == "expires" { |
805 | + parseCookieValueFn = parseCookieExpiresValue |
806 | + } |
807 | + val, success = parseCookieValueFn(val) |
808 | + if !success { |
809 | + c.Unparsed = append(c.Unparsed, parts[i]) |
810 | + continue |
811 | + } |
812 | + switch lowerAttr { |
813 | + case "secure": |
814 | + c.Secure = true |
815 | + continue |
816 | + case "httponly": |
817 | + c.HttpOnly = true |
818 | + continue |
819 | + case "domain": |
820 | + c.Domain = val |
821 | + // TODO: Add domain parsing |
822 | + continue |
823 | + case "max-age": |
824 | + secs, err := strconv.Atoi(val) |
825 | + if err != nil || secs != 0 && val[0] == '0' { |
826 | + break |
827 | + } |
828 | + if secs <= 0 { |
829 | + c.MaxAge = -1 |
830 | + } else { |
831 | + c.MaxAge = secs |
832 | + } |
833 | + continue |
834 | + case "expires": |
835 | + c.RawExpires = val |
836 | + exptime, err := time.Parse(time.RFC1123, val) |
837 | + if err != nil { |
838 | + exptime, err = time.Parse("Mon, 02-Jan-2006 15:04:05 MST", val) |
839 | + if err != nil { |
840 | + c.Expires = time.Time{} |
841 | + break |
842 | + } |
843 | + } |
844 | + c.Expires = exptime.UTC() |
845 | + continue |
846 | + case "path": |
847 | + c.Path = val |
848 | + // TODO: Add path parsing |
849 | + continue |
850 | + } |
851 | + c.Unparsed = append(c.Unparsed, parts[i]) |
852 | + } |
853 | + cookies = append(cookies, c) |
854 | + } |
855 | + return cookies |
856 | +} |
857 | + |
858 | +// SetCookie adds a Set-Cookie header to the provided ResponseWriter's headers. |
859 | +func SetCookie(w ResponseWriter, cookie *Cookie) { |
860 | + w.Header().Add("Set-Cookie", cookie.String()) |
861 | +} |
862 | + |
863 | +// String returns the serialization of the cookie for use in a Cookie |
864 | +// header (if only Name and Value are set) or a Set-Cookie response |
865 | +// header (if other fields are set). |
866 | +func (c *Cookie) String() string { |
867 | + var b bytes.Buffer |
868 | + fmt.Fprintf(&b, "%s=%s", sanitizeName(c.Name), sanitizeValue(c.Value)) |
869 | + if len(c.Path) > 0 { |
870 | + fmt.Fprintf(&b, "; Path=%s", sanitizeValue(c.Path)) |
871 | + } |
872 | + if len(c.Domain) > 0 { |
873 | + fmt.Fprintf(&b, "; Domain=%s", sanitizeValue(c.Domain)) |
874 | + } |
875 | + if c.Expires.Unix() > 0 { |
876 | + fmt.Fprintf(&b, "; Expires=%s", c.Expires.UTC().Format(time.RFC1123)) |
877 | + } |
878 | + if c.MaxAge > 0 { |
879 | + fmt.Fprintf(&b, "; Max-Age=%d", c.MaxAge) |
880 | + } else if c.MaxAge < 0 { |
881 | + fmt.Fprintf(&b, "; Max-Age=0") |
882 | + } |
883 | + if c.HttpOnly { |
884 | + fmt.Fprintf(&b, "; HttpOnly") |
885 | + } |
886 | + if c.Secure { |
887 | + fmt.Fprintf(&b, "; Secure") |
888 | + } |
889 | + return b.String() |
890 | +} |
891 | + |
892 | +// readCookies parses all "Cookie" values from the header h and |
893 | +// returns the successfully parsed Cookies. |
894 | +// |
895 | +// if filter isn't empty, only cookies of that name are returned |
896 | +func readCookies(h Header, filter string) []*Cookie { |
897 | + cookies := []*Cookie{} |
898 | + lines, ok := h["Cookie"] |
899 | + if !ok { |
900 | + return cookies |
901 | + } |
902 | + |
903 | + for _, line := range lines { |
904 | + parts := strings.Split(strings.TrimSpace(line), ";") |
905 | + if len(parts) == 1 && parts[0] == "" { |
906 | + continue |
907 | + } |
908 | + // Per-line attributes |
909 | + parsedPairs := 0 |
910 | + for i := 0; i < len(parts); i++ { |
911 | + parts[i] = strings.TrimSpace(parts[i]) |
912 | + if len(parts[i]) == 0 { |
913 | + continue |
914 | + } |
915 | + name, val := parts[i], "" |
916 | + if j := strings.Index(name, "="); j >= 0 { |
917 | + name, val = name[:j], name[j+1:] |
918 | + } |
919 | + if !isCookieNameValid(name) { |
920 | + continue |
921 | + } |
922 | + if filter != "" && filter != name { |
923 | + continue |
924 | + } |
925 | + val, success := parseCookieValue(val) |
926 | + if !success { |
927 | + continue |
928 | + } |
929 | + cookies = append(cookies, &Cookie{Name: name, Value: val}) |
930 | + parsedPairs++ |
931 | + } |
932 | + } |
933 | + return cookies |
934 | +} |
935 | + |
936 | +var cookieNameSanitizer = strings.NewReplacer("\n", "-", "\r", "-") |
937 | + |
938 | +func sanitizeName(n string) string { |
939 | + return cookieNameSanitizer.Replace(n) |
940 | +} |
941 | + |
942 | +var cookieValueSanitizer = strings.NewReplacer("\n", " ", "\r", " ", ";", " ") |
943 | + |
944 | +func sanitizeValue(v string) string { |
945 | + return cookieValueSanitizer.Replace(v) |
946 | +} |
947 | + |
948 | +func unquoteCookieValue(v string) string { |
949 | + if len(v) > 1 && v[0] == '"' && v[len(v)-1] == '"' { |
950 | + return v[1 : len(v)-1] |
951 | + } |
952 | + return v |
953 | +} |
954 | + |
955 | +func isCookieByte(c byte) bool { |
956 | + switch { |
957 | + case c == 0x21, 0x23 <= c && c <= 0x2b, 0x2d <= c && c <= 0x3a, |
958 | + 0x3c <= c && c <= 0x5b, 0x5d <= c && c <= 0x7e: |
959 | + return true |
960 | + } |
961 | + return false |
962 | +} |
963 | + |
964 | +func isCookieExpiresByte(c byte) (ok bool) { |
965 | + return isCookieByte(c) || c == ',' || c == ' ' |
966 | +} |
967 | + |
968 | +func parseCookieValue(raw string) (string, bool) { |
969 | + return parseCookieValueUsing(raw, isCookieByte) |
970 | +} |
971 | + |
972 | +func parseCookieExpiresValue(raw string) (string, bool) { |
973 | + return parseCookieValueUsing(raw, isCookieExpiresByte) |
974 | +} |
975 | + |
976 | +func parseCookieValueUsing(raw string, validByte func(byte) bool) (string, bool) { |
977 | + raw = unquoteCookieValue(raw) |
978 | + for i := 0; i < len(raw); i++ { |
979 | + if !validByte(raw[i]) { |
980 | + return "", false |
981 | + } |
982 | + } |
983 | + return raw, true |
984 | +} |
985 | + |
986 | +func isCookieNameValid(raw string) bool { |
987 | + for _, c := range raw { |
988 | + if !isToken(byte(c)) { |
989 | + return false |
990 | + } |
991 | + } |
992 | + return true |
993 | +} |
994 | |
995 | === added file 'fork/http/doc.go' |
996 | --- fork/http/doc.go 1970-01-01 00:00:00 +0000 |
997 | +++ fork/http/doc.go 2013-07-22 14:27:42 +0000 |
998 | @@ -0,0 +1,80 @@ |
999 | +// Copyright 2011 The Go Authors. All rights reserved. |
1000 | +// Use of this source code is governed by a BSD-style |
1001 | +// license that can be found in the LICENSE file. |
1002 | + |
1003 | +/* |
1004 | +Package http provides HTTP client and server implementations. |
1005 | + |
1006 | +Get, Head, Post, and PostForm make HTTP requests: |
1007 | + |
1008 | + resp, err := http.Get("http://example.com/") |
1009 | + ... |
1010 | + resp, err := http.Post("http://example.com/upload", "image/jpeg", &buf) |
1011 | + ... |
1012 | + resp, err := http.PostForm("http://example.com/form", |
1013 | + url.Values{"key": {"Value"}, "id": {"123"}}) |
1014 | + |
1015 | +The client must close the response body when finished with it: |
1016 | + |
1017 | + resp, err := http.Get("http://example.com/") |
1018 | + if err != nil { |
1019 | + // handle error |
1020 | + } |
1021 | + defer resp.Body.Close() |
1022 | + body, err := ioutil.ReadAll(resp.Body) |
1023 | + // ... |
1024 | + |
1025 | +For control over HTTP client headers, redirect policy, and other |
1026 | +settings, create a Client: |
1027 | + |
1028 | + client := &http.Client{ |
1029 | + CheckRedirect: redirectPolicyFunc, |
1030 | + } |
1031 | + |
1032 | + resp, err := client.Get("http://example.com") |
1033 | + // ... |
1034 | + |
1035 | + req, err := http.NewRequest("GET", "http://example.com", nil) |
1036 | + // ... |
1037 | + req.Header.Add("If-None-Match", `W/"wyzzy"`) |
1038 | + resp, err := client.Do(req) |
1039 | + // ... |
1040 | + |
1041 | +For control over proxies, TLS configuration, keep-alives, |
1042 | +compression, and other settings, create a Transport: |
1043 | + |
1044 | + tr := &http.Transport{ |
1045 | + TLSClientConfig: &tls.Config{RootCAs: pool}, |
1046 | + DisableCompression: true, |
1047 | + } |
1048 | + client := &http.Client{Transport: tr} |
1049 | + resp, err := client.Get("https://example.com") |
1050 | + |
1051 | +Clients and Transports are safe for concurrent use by multiple |
1052 | +goroutines and for efficiency should only be created once and re-used. |
1053 | + |
1054 | +ListenAndServe starts an HTTP server with a given address and handler. |
1055 | +The handler is usually nil, which means to use DefaultServeMux. |
1056 | +Handle and HandleFunc add handlers to DefaultServeMux: |
1057 | + |
1058 | + http.Handle("/foo", fooHandler) |
1059 | + |
1060 | + http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) { |
1061 | + fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path)) |
1062 | + }) |
1063 | + |
1064 | + log.Fatal(http.ListenAndServe(":8080", nil)) |
1065 | + |
1066 | +More control over the server's behavior is available by creating a |
1067 | +custom Server: |
1068 | + |
1069 | + s := &http.Server{ |
1070 | + Addr: ":8080", |
1071 | + Handler: myHandler, |
1072 | + ReadTimeout: 10 * time.Second, |
1073 | + WriteTimeout: 10 * time.Second, |
1074 | + MaxHeaderBytes: 1 << 20, |
1075 | + } |
1076 | + log.Fatal(s.ListenAndServe()) |
1077 | +*/ |
1078 | +package http |
1079 | |
1080 | === added file 'fork/http/filetransport.go' |
1081 | --- fork/http/filetransport.go 1970-01-01 00:00:00 +0000 |
1082 | +++ fork/http/filetransport.go 2013-07-22 14:27:42 +0000 |
1083 | @@ -0,0 +1,123 @@ |
1084 | +// Copyright 2011 The Go Authors. All rights reserved. |
1085 | +// Use of this source code is governed by a BSD-style |
1086 | +// license that can be found in the LICENSE file. |
1087 | + |
1088 | +package http |
1089 | + |
1090 | +import ( |
1091 | + "fmt" |
1092 | + "io" |
1093 | +) |
1094 | + |
1095 | +// fileTransport implements RoundTripper for the 'file' protocol. |
1096 | +type fileTransport struct { |
1097 | + fh fileHandler |
1098 | +} |
1099 | + |
1100 | +// NewFileTransport returns a new RoundTripper, serving the provided |
1101 | +// FileSystem. The returned RoundTripper ignores the URL host in its |
1102 | +// incoming requests, as well as most other properties of the |
1103 | +// request. |
1104 | +// |
1105 | +// The typical use case for NewFileTransport is to register the "file" |
1106 | +// protocol with a Transport, as in: |
1107 | +// |
1108 | +// t := &http.Transport{} |
1109 | +// t.RegisterProtocol("file", http.NewFileTransport(http.Dir("/"))) |
1110 | +// c := &http.Client{Transport: t} |
1111 | +// res, err := c.Get("file:///etc/passwd") |
1112 | +// ... |
1113 | +func NewFileTransport(fs FileSystem) RoundTripper { |
1114 | + return fileTransport{fileHandler{fs}} |
1115 | +} |
1116 | + |
1117 | +func (t fileTransport) RoundTrip(req *Request) (resp *Response, err error) { |
1118 | + // We start ServeHTTP in a goroutine, which may take a long |
1119 | + // time if the file is large. The newPopulateResponseWriter |
1120 | + // call returns a channel which either ServeHTTP or finish() |
1121 | + // sends our *Response on, once the *Response itself has been |
1122 | + // populated (even if the body itself is still being |
1123 | + // written to the res.Body, a pipe) |
1124 | + rw, resc := newPopulateResponseWriter() |
1125 | + go func() { |
1126 | + t.fh.ServeHTTP(rw, req) |
1127 | + rw.finish() |
1128 | + }() |
1129 | + return <-resc, nil |
1130 | +} |
1131 | + |
1132 | +func newPopulateResponseWriter() (*populateResponse, <-chan *Response) { |
1133 | + pr, pw := io.Pipe() |
1134 | + rw := &populateResponse{ |
1135 | + ch: make(chan *Response), |
1136 | + pw: pw, |
1137 | + res: &Response{ |
1138 | + Proto: "HTTP/1.0", |
1139 | + ProtoMajor: 1, |
1140 | + Header: make(Header), |
1141 | + Close: true, |
1142 | + Body: pr, |
1143 | + }, |
1144 | + } |
1145 | + return rw, rw.ch |
1146 | +} |
1147 | + |
1148 | +// populateResponse is a ResponseWriter that populates the *Response |
1149 | +// in res, and writes its body to a pipe connected to the response |
1150 | +// body. Once writes begin or finish() is called, the response is sent |
1151 | +// on ch. |
1152 | +type populateResponse struct { |
1153 | + res *Response |
1154 | + ch chan *Response |
1155 | + wroteHeader bool |
1156 | + hasContent bool |
1157 | + sentResponse bool |
1158 | + pw *io.PipeWriter |
1159 | +} |
1160 | + |
1161 | +func (pr *populateResponse) finish() { |
1162 | + if !pr.wroteHeader { |
1163 | + pr.WriteHeader(500) |
1164 | + } |
1165 | + if !pr.sentResponse { |
1166 | + pr.sendResponse() |
1167 | + } |
1168 | + pr.pw.Close() |
1169 | +} |
1170 | + |
1171 | +func (pr *populateResponse) sendResponse() { |
1172 | + if pr.sentResponse { |
1173 | + return |
1174 | + } |
1175 | + pr.sentResponse = true |
1176 | + |
1177 | + if pr.hasContent { |
1178 | + pr.res.ContentLength = -1 |
1179 | + } |
1180 | + pr.ch <- pr.res |
1181 | +} |
1182 | + |
1183 | +func (pr *populateResponse) Header() Header { |
1184 | + return pr.res.Header |
1185 | +} |
1186 | + |
1187 | +func (pr *populateResponse) WriteHeader(code int) { |
1188 | + if pr.wroteHeader { |
1189 | + return |
1190 | + } |
1191 | + pr.wroteHeader = true |
1192 | + |
1193 | + pr.res.StatusCode = code |
1194 | + pr.res.Status = fmt.Sprintf("%d %s", code, StatusText(code)) |
1195 | +} |
1196 | + |
1197 | +func (pr *populateResponse) Write(p []byte) (n int, err error) { |
1198 | + if !pr.wroteHeader { |
1199 | + pr.WriteHeader(StatusOK) |
1200 | + } |
1201 | + pr.hasContent = true |
1202 | + if !pr.sentResponse { |
1203 | + pr.sendResponse() |
1204 | + } |
1205 | + return pr.pw.Write(p) |
1206 | +} |
1207 | |
1208 | === added file 'fork/http/fs.go' |
1209 | --- fork/http/fs.go 1970-01-01 00:00:00 +0000 |
1210 | +++ fork/http/fs.go 2013-07-22 14:27:42 +0000 |
1211 | @@ -0,0 +1,367 @@ |
1212 | +// Copyright 2009 The Go Authors. All rights reserved. |
1213 | +// Use of this source code is governed by a BSD-style |
1214 | +// license that can be found in the LICENSE file. |
1215 | + |
1216 | +// HTTP file system request handler |
1217 | + |
1218 | +package http |
1219 | + |
1220 | +import ( |
1221 | + "errors" |
1222 | + "fmt" |
1223 | + "io" |
1224 | + "mime" |
1225 | + "os" |
1226 | + "path" |
1227 | + "path/filepath" |
1228 | + "strconv" |
1229 | + "strings" |
1230 | + "time" |
1231 | +) |
1232 | + |
1233 | +// A Dir implements http.FileSystem using the native file |
1234 | +// system restricted to a specific directory tree. |
1235 | +// |
1236 | +// An empty Dir is treated as ".". |
1237 | +type Dir string |
1238 | + |
1239 | +func (d Dir) Open(name string) (File, error) { |
1240 | + if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 { |
1241 | + return nil, errors.New("http: invalid character in file path") |
1242 | + } |
1243 | + dir := string(d) |
1244 | + if dir == "" { |
1245 | + dir = "." |
1246 | + } |
1247 | + f, err := os.Open(filepath.Join(dir, filepath.FromSlash(path.Clean("/"+name)))) |
1248 | + if err != nil { |
1249 | + return nil, err |
1250 | + } |
1251 | + return f, nil |
1252 | +} |
1253 | + |
1254 | +// A FileSystem implements access to a collection of named files. |
1255 | +// The elements in a file path are separated by slash ('/', U+002F) |
1256 | +// characters, regardless of host operating system convention. |
1257 | +type FileSystem interface { |
1258 | + Open(name string) (File, error) |
1259 | +} |
1260 | + |
1261 | +// A File is returned by a FileSystem's Open method and can be |
1262 | +// served by the FileServer implementation. |
1263 | +type File interface { |
1264 | + Close() error |
1265 | + Stat() (os.FileInfo, error) |
1266 | + Readdir(count int) ([]os.FileInfo, error) |
1267 | + Read([]byte) (int, error) |
1268 | + Seek(offset int64, whence int) (int64, error) |
1269 | +} |
1270 | + |
1271 | +func dirList(w ResponseWriter, f File) { |
1272 | + w.Header().Set("Content-Type", "text/html; charset=utf-8") |
1273 | + fmt.Fprintf(w, "<pre>\n") |
1274 | + for { |
1275 | + dirs, err := f.Readdir(100) |
1276 | + if err != nil || len(dirs) == 0 { |
1277 | + break |
1278 | + } |
1279 | + for _, d := range dirs { |
1280 | + name := d.Name() |
1281 | + if d.IsDir() { |
1282 | + name += "/" |
1283 | + } |
1284 | + // TODO htmlescape |
1285 | + fmt.Fprintf(w, "<a href=\"%s\">%s</a>\n", name, name) |
1286 | + } |
1287 | + } |
1288 | + fmt.Fprintf(w, "</pre>\n") |
1289 | +} |
1290 | + |
1291 | +// ServeContent replies to the request using the content in the |
1292 | +// provided ReadSeeker. The main benefit of ServeContent over io.Copy |
1293 | +// is that it handles Range requests properly, sets the MIME type, and |
1294 | +// handles If-Modified-Since requests. |
1295 | +// |
1296 | +// If the response's Content-Type header is not set, ServeContent |
1297 | +// first tries to deduce the type from name's file extension and, |
1298 | +// if that fails, falls back to reading the first block of the content |
1299 | +// and passing it to DetectContentType. |
1300 | +// The name is otherwise unused; in particular it can be empty and is |
1301 | +// never sent in the response. |
1302 | +// |
1303 | +// If modtime is not the zero time, ServeContent includes it in a |
1304 | +// Last-Modified header in the response. If the request includes an |
1305 | +// If-Modified-Since header, ServeContent uses modtime to decide |
1306 | +// whether the content needs to be sent at all. |
1307 | +// |
1308 | +// The content's Seek method must work: ServeContent uses |
1309 | +// a seek to the end of the content to determine its size. |
1310 | +// |
1311 | +// Note that *os.File implements the io.ReadSeeker interface. |
1312 | +func ServeContent(w ResponseWriter, req *Request, name string, modtime time.Time, content io.ReadSeeker) { |
1313 | + size, err := content.Seek(0, os.SEEK_END) |
1314 | + if err != nil { |
1315 | + Error(w, "seeker can't seek", StatusInternalServerError) |
1316 | + return |
1317 | + } |
1318 | + _, err = content.Seek(0, os.SEEK_SET) |
1319 | + if err != nil { |
1320 | + Error(w, "seeker can't seek", StatusInternalServerError) |
1321 | + return |
1322 | + } |
1323 | + serveContent(w, req, name, modtime, size, content) |
1324 | +} |
1325 | + |
1326 | +// if name is empty, filename is unknown. (used for mime type, before sniffing) |
1327 | +// if modtime.IsZero(), modtime is unknown. |
1328 | +// content must be seeked to the beginning of the file. |
1329 | +func serveContent(w ResponseWriter, r *Request, name string, modtime time.Time, size int64, content io.ReadSeeker) { |
1330 | + if checkLastModified(w, r, modtime) { |
1331 | + return |
1332 | + } |
1333 | + |
1334 | + code := StatusOK |
1335 | + |
1336 | + // If Content-Type isn't set, use the file's extension to find it. |
1337 | + if w.Header().Get("Content-Type") == "" { |
1338 | + ctype := mime.TypeByExtension(filepath.Ext(name)) |
1339 | + if ctype == "" { |
1340 | + // read a chunk to decide between utf-8 text and binary |
1341 | + var buf [1024]byte |
1342 | + n, _ := io.ReadFull(content, buf[:]) |
1343 | + b := buf[:n] |
1344 | + ctype = DetectContentType(b) |
1345 | + _, err := content.Seek(0, os.SEEK_SET) // rewind to output whole file |
1346 | + if err != nil { |
1347 | + Error(w, "seeker can't seek", StatusInternalServerError) |
1348 | + return |
1349 | + } |
1350 | + } |
1351 | + w.Header().Set("Content-Type", ctype) |
1352 | + } |
1353 | + |
1354 | + // handle Content-Range header. |
1355 | + // TODO(adg): handle multiple ranges |
1356 | + sendSize := size |
1357 | + if size >= 0 { |
1358 | + ranges, err := parseRange(r.Header.Get("Range"), size) |
1359 | + if err == nil && len(ranges) > 1 { |
1360 | + err = errors.New("multiple ranges not supported") |
1361 | + } |
1362 | + if err != nil { |
1363 | + Error(w, err.Error(), StatusRequestedRangeNotSatisfiable) |
1364 | + return |
1365 | + } |
1366 | + if len(ranges) == 1 { |
1367 | + ra := ranges[0] |
1368 | + if _, err := content.Seek(ra.start, os.SEEK_SET); err != nil { |
1369 | + Error(w, err.Error(), StatusRequestedRangeNotSatisfiable) |
1370 | + return |
1371 | + } |
1372 | + sendSize = ra.length |
1373 | + code = StatusPartialContent |
1374 | + w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", ra.start, ra.start+ra.length-1, size)) |
1375 | + } |
1376 | + |
1377 | + w.Header().Set("Accept-Ranges", "bytes") |
1378 | + if w.Header().Get("Content-Encoding") == "" { |
1379 | + w.Header().Set("Content-Length", strconv.FormatInt(sendSize, 10)) |
1380 | + } |
1381 | + } |
1382 | + |
1383 | + w.WriteHeader(code) |
1384 | + |
1385 | + if r.Method != "HEAD" { |
1386 | + if sendSize == -1 { |
1387 | + io.Copy(w, content) |
1388 | + } else { |
1389 | + io.CopyN(w, content, sendSize) |
1390 | + } |
1391 | + } |
1392 | +} |
1393 | + |
1394 | +// modtime is the modification time of the resource to be served, or IsZero(). |
1395 | +// return value is whether this request is now complete. |
1396 | +func checkLastModified(w ResponseWriter, r *Request, modtime time.Time) bool { |
1397 | + if modtime.IsZero() { |
1398 | + return false |
1399 | + } |
1400 | + |
1401 | + // The Date-Modified header truncates sub-second precision, so |
1402 | + // use mtime < t+1s instead of mtime <= t to check for unmodified. |
1403 | + if t, err := time.Parse(TimeFormat, r.Header.Get("If-Modified-Since")); err == nil && modtime.Before(t.Add(1*time.Second)) { |
1404 | + w.WriteHeader(StatusNotModified) |
1405 | + return true |
1406 | + } |
1407 | + w.Header().Set("Last-Modified", modtime.UTC().Format(TimeFormat)) |
1408 | + return false |
1409 | +} |
1410 | + |
1411 | +// name is '/'-separated, not filepath.Separator. |
1412 | +func serveFile(w ResponseWriter, r *Request, fs FileSystem, name string, redirect bool) { |
1413 | + const indexPage = "/index.html" |
1414 | + |
1415 | + // redirect .../index.html to .../ |
1416 | + // can't use Redirect() because that would make the path absolute, |
1417 | + // which would be a problem running under StripPrefix |
1418 | + if strings.HasSuffix(r.URL.Path, indexPage) { |
1419 | + localRedirect(w, r, "./") |
1420 | + return |
1421 | + } |
1422 | + |
1423 | + f, err := fs.Open(name) |
1424 | + if err != nil { |
1425 | + // TODO expose actual error? |
1426 | + NotFound(w, r) |
1427 | + return |
1428 | + } |
1429 | + defer f.Close() |
1430 | + |
1431 | + d, err1 := f.Stat() |
1432 | + if err1 != nil { |
1433 | + // TODO expose actual error? |
1434 | + NotFound(w, r) |
1435 | + return |
1436 | + } |
1437 | + |
1438 | + if redirect { |
1439 | + // redirect to canonical path: / at end of directory url |
1440 | + // r.URL.Path always begins with / |
1441 | + url := r.URL.Path |
1442 | + if d.IsDir() { |
1443 | + if url[len(url)-1] != '/' { |
1444 | + localRedirect(w, r, path.Base(url)+"/") |
1445 | + return |
1446 | + } |
1447 | + } else { |
1448 | + if url[len(url)-1] == '/' { |
1449 | + localRedirect(w, r, "../"+path.Base(url)) |
1450 | + return |
1451 | + } |
1452 | + } |
1453 | + } |
1454 | + |
1455 | + // use contents of index.html for directory, if present |
1456 | + if d.IsDir() { |
1457 | + if checkLastModified(w, r, d.ModTime()) { |
1458 | + return |
1459 | + } |
1460 | + index := name + indexPage |
1461 | + ff, err := fs.Open(index) |
1462 | + if err == nil { |
1463 | + defer ff.Close() |
1464 | + dd, err := ff.Stat() |
1465 | + if err == nil { |
1466 | + name = index |
1467 | + d = dd |
1468 | + f = ff |
1469 | + } |
1470 | + } |
1471 | + } |
1472 | + |
1473 | + if d.IsDir() { |
1474 | + dirList(w, f) |
1475 | + return |
1476 | + } |
1477 | + |
1478 | + serveContent(w, r, d.Name(), d.ModTime(), d.Size(), f) |
1479 | +} |
1480 | + |
1481 | +// localRedirect gives a Moved Permanently response. |
1482 | +// It does not convert relative paths to absolute paths like Redirect does. |
1483 | +func localRedirect(w ResponseWriter, r *Request, newPath string) { |
1484 | + if q := r.URL.RawQuery; q != "" { |
1485 | + newPath += "?" + q |
1486 | + } |
1487 | + w.Header().Set("Location", newPath) |
1488 | + w.WriteHeader(StatusMovedPermanently) |
1489 | +} |
1490 | + |
1491 | +// ServeFile replies to the request with the contents of the named file or directory. |
1492 | +func ServeFile(w ResponseWriter, r *Request, name string) { |
1493 | + dir, file := filepath.Split(name) |
1494 | + serveFile(w, r, Dir(dir), file, false) |
1495 | +} |
1496 | + |
1497 | +type fileHandler struct { |
1498 | + root FileSystem |
1499 | +} |
1500 | + |
1501 | +// FileServer returns a handler that serves HTTP requests |
1502 | +// with the contents of the file system rooted at root. |
1503 | +// |
1504 | +// To use the operating system's file system implementation, |
1505 | +// use http.Dir: |
1506 | +// |
1507 | +// http.Handle("/", http.FileServer(http.Dir("/tmp"))) |
1508 | +func FileServer(root FileSystem) Handler { |
1509 | + return &fileHandler{root} |
1510 | +} |
1511 | + |
1512 | +func (f *fileHandler) ServeHTTP(w ResponseWriter, r *Request) { |
1513 | + upath := r.URL.Path |
1514 | + if !strings.HasPrefix(upath, "/") { |
1515 | + upath = "/" + upath |
1516 | + r.URL.Path = upath |
1517 | + } |
1518 | + serveFile(w, r, f.root, path.Clean(upath), true) |
1519 | +} |
1520 | + |
1521 | +// httpRange specifies the byte range to be sent to the client. |
1522 | +type httpRange struct { |
1523 | + start, length int64 |
1524 | +} |
1525 | + |
1526 | +// parseRange parses a Range header string as per RFC 2616. |
1527 | +func parseRange(s string, size int64) ([]httpRange, error) { |
1528 | + if s == "" { |
1529 | + return nil, nil // header not present |
1530 | + } |
1531 | + const b = "bytes=" |
1532 | + if !strings.HasPrefix(s, b) { |
1533 | + return nil, errors.New("invalid range") |
1534 | + } |
1535 | + var ranges []httpRange |
1536 | + for _, ra := range strings.Split(s[len(b):], ",") { |
1537 | + i := strings.Index(ra, "-") |
1538 | + if i < 0 { |
1539 | + return nil, errors.New("invalid range") |
1540 | + } |
1541 | + start, end := ra[:i], ra[i+1:] |
1542 | + var r httpRange |
1543 | + if start == "" { |
1544 | + // If no start is specified, end specifies the |
1545 | + // range start relative to the end of the file. |
1546 | + i, err := strconv.ParseInt(end, 10, 64) |
1547 | + if err != nil { |
1548 | + return nil, errors.New("invalid range") |
1549 | + } |
1550 | + if i > size { |
1551 | + i = size |
1552 | + } |
1553 | + r.start = size - i |
1554 | + r.length = size - r.start |
1555 | + } else { |
1556 | + i, err := strconv.ParseInt(start, 10, 64) |
1557 | + if err != nil || i > size || i < 0 { |
1558 | + return nil, errors.New("invalid range") |
1559 | + } |
1560 | + r.start = i |
1561 | + if end == "" { |
1562 | + // If no end is specified, range extends to end of the file. |
1563 | + r.length = size - r.start |
1564 | + } else { |
1565 | + i, err := strconv.ParseInt(end, 10, 64) |
1566 | + if err != nil || r.start > i { |
1567 | + return nil, errors.New("invalid range") |
1568 | + } |
1569 | + if i >= size { |
1570 | + i = size - 1 |
1571 | + } |
1572 | + r.length = i - r.start + 1 |
1573 | + } |
1574 | + } |
1575 | + ranges = append(ranges, r) |
1576 | + } |
1577 | + return ranges, nil |
1578 | +} |
1579 | |
1580 | === added file 'fork/http/header.go' |
1581 | --- fork/http/header.go 1970-01-01 00:00:00 +0000 |
1582 | +++ fork/http/header.go 2013-07-22 14:27:42 +0000 |
1583 | @@ -0,0 +1,78 @@ |
1584 | +// Copyright 2010 The Go Authors. All rights reserved. |
1585 | +// Use of this source code is governed by a BSD-style |
1586 | +// license that can be found in the LICENSE file. |
1587 | + |
1588 | +package http |
1589 | + |
1590 | +import ( |
1591 | + "fmt" |
1592 | + "io" |
1593 | + "net/textproto" |
1594 | + "sort" |
1595 | + "strings" |
1596 | +) |
1597 | + |
1598 | +// A Header represents the key-value pairs in an HTTP header. |
1599 | +type Header map[string][]string |
1600 | + |
1601 | +// Add adds the key, value pair to the header. |
1602 | +// It appends to any existing values associated with key. |
1603 | +func (h Header) Add(key, value string) { |
1604 | + textproto.MIMEHeader(h).Add(key, value) |
1605 | +} |
1606 | + |
1607 | +// Set sets the header entries associated with key to |
1608 | +// the single element value. It replaces any existing |
1609 | +// values associated with key. |
1610 | +func (h Header) Set(key, value string) { |
1611 | + textproto.MIMEHeader(h).Set(key, value) |
1612 | +} |
1613 | + |
1614 | +// Get gets the first value associated with the given key. |
1615 | +// If there are no values associated with the key, Get returns "". |
1616 | +// To access multiple values of a key, access the map directly |
1617 | +// with CanonicalHeaderKey. |
1618 | +func (h Header) Get(key string) string { |
1619 | + return textproto.MIMEHeader(h).Get(key) |
1620 | +} |
1621 | + |
1622 | +// Del deletes the values associated with key. |
1623 | +func (h Header) Del(key string) { |
1624 | + textproto.MIMEHeader(h).Del(key) |
1625 | +} |
1626 | + |
1627 | +// Write writes a header in wire format. |
1628 | +func (h Header) Write(w io.Writer) error { |
1629 | + return h.WriteSubset(w, nil) |
1630 | +} |
1631 | + |
1632 | +var headerNewlineToSpace = strings.NewReplacer("\n", " ", "\r", " ") |
1633 | + |
1634 | +// WriteSubset writes a header in wire format. |
1635 | +// If exclude is not nil, keys where exclude[key] == true are not written. |
1636 | +func (h Header) WriteSubset(w io.Writer, exclude map[string]bool) error { |
1637 | + keys := make([]string, 0, len(h)) |
1638 | + for k := range h { |
1639 | + if exclude == nil || !exclude[k] { |
1640 | + keys = append(keys, k) |
1641 | + } |
1642 | + } |
1643 | + sort.Strings(keys) |
1644 | + for _, k := range keys { |
1645 | + for _, v := range h[k] { |
1646 | + v = headerNewlineToSpace.Replace(v) |
1647 | + v = strings.TrimSpace(v) |
1648 | + if _, err := fmt.Fprintf(w, "%s: %s\r\n", k, v); err != nil { |
1649 | + return err |
1650 | + } |
1651 | + } |
1652 | + } |
1653 | + return nil |
1654 | +} |
1655 | + |
1656 | +// CanonicalHeaderKey returns the canonical format of the |
1657 | +// header key s. The canonicalization converts the first |
1658 | +// letter and any letter following a hyphen to upper case; |
1659 | +// the rest are converted to lowercase. For example, the |
1660 | +// canonical key for "accept-encoding" is "Accept-Encoding". |
1661 | +func CanonicalHeaderKey(s string) string { return textproto.CanonicalMIMEHeaderKey(s) } |
1662 | |
1663 | === added file 'fork/http/jar.go' |
1664 | --- fork/http/jar.go 1970-01-01 00:00:00 +0000 |
1665 | +++ fork/http/jar.go 2013-07-22 14:27:42 +0000 |
1666 | @@ -0,0 +1,30 @@ |
1667 | +// Copyright 2011 The Go Authors. All rights reserved. |
1668 | +// Use of this source code is governed by a BSD-style |
1669 | +// license that can be found in the LICENSE file. |
1670 | + |
1671 | +package http |
1672 | + |
1673 | +import ( |
1674 | + "net/url" |
1675 | +) |
1676 | + |
1677 | +// A CookieJar manages storage and use of cookies in HTTP requests. |
1678 | +// |
1679 | +// Implementations of CookieJar must be safe for concurrent use by multiple |
1680 | +// goroutines. |
1681 | +type CookieJar interface { |
1682 | + // SetCookies handles the receipt of the cookies in a reply for the |
1683 | + // given URL. It may or may not choose to save the cookies, depending |
1684 | + // on the jar's policy and implementation. |
1685 | + SetCookies(u *url.URL, cookies []*Cookie) |
1686 | + |
1687 | + // Cookies returns the cookies to send in a request for the given URL. |
1688 | + // It is up to the implementation to honor the standard cookie use |
1689 | + // restrictions such as in RFC 6265. |
1690 | + Cookies(u *url.URL) []*Cookie |
1691 | +} |
1692 | + |
1693 | +type blackHoleJar struct{} |
1694 | + |
1695 | +func (blackHoleJar) SetCookies(u *url.URL, cookies []*Cookie) {} |
1696 | +func (blackHoleJar) Cookies(u *url.URL) []*Cookie { return nil } |
1697 | |
1698 | === added file 'fork/http/lex.go' |
1699 | --- fork/http/lex.go 1970-01-01 00:00:00 +0000 |
1700 | +++ fork/http/lex.go 2013-07-22 14:27:42 +0000 |
1701 | @@ -0,0 +1,136 @@ |
1702 | +// Copyright 2009 The Go Authors. All rights reserved. |
1703 | +// Use of this source code is governed by a BSD-style |
1704 | +// license that can be found in the LICENSE file. |
1705 | + |
1706 | +package http |
1707 | + |
1708 | +// This file deals with lexical matters of HTTP |
1709 | + |
1710 | +func isSeparator(c byte) bool { |
1711 | + switch c { |
1712 | + case '(', ')', '<', '>', '@', ',', ';', ':', '\\', '"', '/', '[', ']', '?', '=', '{', '}', ' ', '\t': |
1713 | + return true |
1714 | + } |
1715 | + return false |
1716 | +} |
1717 | + |
1718 | +func isCtl(c byte) bool { return (0 <= c && c <= 31) || c == 127 } |
1719 | + |
1720 | +func isChar(c byte) bool { return 0 <= c && c <= 127 } |
1721 | + |
1722 | +func isAnyText(c byte) bool { return !isCtl(c) } |
1723 | + |
1724 | +func isQdText(c byte) bool { return isAnyText(c) && c != '"' } |
1725 | + |
1726 | +func isToken(c byte) bool { return isChar(c) && !isCtl(c) && !isSeparator(c) } |
1727 | + |
1728 | +// Valid escaped sequences are not specified in RFC 2616, so for now, we assume |
1729 | +// that they coincide with the common sense ones used by Go. Malformed |

1730 | +// characters should probably not be treated as errors by a robust (forgiving) |
1731 | +// parser, so we replace them with the '?' character. |
1732 | +func httpUnquotePair(b byte) byte { |
1733 | + // skip the first byte, which should always be '\' |
1734 | + switch b { |
1735 | + case 'a': |
1736 | + return '\a' |
1737 | + case 'b': |
1738 | + return '\b' |
1739 | + case 'f': |
1740 | + return '\f' |
1741 | + case 'n': |
1742 | + return '\n' |
1743 | + case 'r': |
1744 | + return '\r' |
1745 | + case 't': |
1746 | + return '\t' |
1747 | + case 'v': |
1748 | + return '\v' |
1749 | + case '\\': |
1750 | + return '\\' |
1751 | + case '\'': |
1752 | + return '\'' |
1753 | + case '"': |
1754 | + return '"' |
1755 | + } |
1756 | + return '?' |
1757 | +} |
1758 | + |
1759 | +// raw must begin with a valid quoted string. Only the first quoted string is |
1760 | +// parsed and is unquoted in result. eaten is the number of bytes parsed, or -1 |
1761 | +// upon failure. |
1762 | +func httpUnquote(raw []byte) (eaten int, result string) { |
1763 | + buf := make([]byte, len(raw)) |
1764 | + if raw[0] != '"' { |
1765 | + return -1, "" |
1766 | + } |
1767 | + eaten = 1 |
1768 | + j := 0 // # of bytes written in buf |
1769 | + for i := 1; i < len(raw); i++ { |
1770 | + switch b := raw[i]; b { |
1771 | + case '"': |
1772 | + eaten++ |
1773 | + buf = buf[0:j] |
1774 | + return i + 1, string(buf) |
1775 | + case '\\': |
1776 | + if len(raw) < i+2 { |
1777 | + return -1, "" |
1778 | + } |
1779 | + buf[j] = httpUnquotePair(raw[i+1]) |
1780 | + eaten += 2 |
1781 | + j++ |
1782 | + i++ |
1783 | + default: |
1784 | + if isQdText(b) { |
1785 | + buf[j] = b |
1786 | + } else { |
1787 | + buf[j] = '?' |
1788 | + } |
1789 | + eaten++ |
1790 | + j++ |
1791 | + } |
1792 | + } |
1793 | + return -1, "" |
1794 | +} |
1795 | + |
1796 | +// This is a best-effort parse: errors are not returned; instead, the input |
1797 | +// string may be only partially consumed. result is always non-nil. |
1798 | +func httpSplitFieldValue(fv string) (eaten int, result []string) { |
1799 | + result = make([]string, 0, len(fv)) |
1800 | + raw := []byte(fv) |
1801 | + i := 0 |
1802 | + chunk := "" |
1803 | + for i < len(raw) { |
1804 | + b := raw[i] |
1805 | + switch { |
1806 | + case b == '"': |
1807 | + eaten, unq := httpUnquote(raw[i:]) |
1808 | + if eaten < 0 { |
1809 | + return i, result |
1810 | + } else { |
1811 | + i += eaten |
1812 | + chunk += unq |
1813 | + } |
1814 | + case isSeparator(b): |
1815 | + if chunk != "" { |
1816 | + result = result[0 : len(result)+1] |
1817 | + result[len(result)-1] = chunk |
1818 | + chunk = "" |
1819 | + } |
1820 | + i++ |
1821 | + case isToken(b): |
1822 | + chunk += string(b) |
1823 | + i++ |
1824 | + case b == '\n' || b == '\r': |
1825 | + i++ |
1826 | + default: |
1827 | + chunk += "?" |
1828 | + i++ |
1829 | + } |
1830 | + } |
1831 | + if chunk != "" { |
1832 | + result = result[0 : len(result)+1] |
1833 | + result[len(result)-1] = chunk |
1834 | + chunk = "" |
1835 | + } |
1836 | + return i, result |
1837 | +} |
1838 | |
1839 | === added file 'fork/http/request.go' |
1840 | --- fork/http/request.go 1970-01-01 00:00:00 +0000 |
1841 | +++ fork/http/request.go 2013-07-22 14:27:42 +0000 |
1842 | @@ -0,0 +1,743 @@ |
1843 | +// Copyright 2009 The Go Authors. All rights reserved. |
1844 | +// Use of this source code is governed by a BSD-style |
1845 | +// license that can be found in the LICENSE file. |
1846 | + |
1847 | +// HTTP Request reading and parsing. |
1848 | + |
1849 | +package http |
1850 | + |
1851 | +import ( |
1852 | + "bufio" |
1853 | + "bytes" |
1854 | + "crypto/tls" |
1855 | + "encoding/base64" |
1856 | + "errors" |
1857 | + "fmt" |
1858 | + "io" |
1859 | + "io/ioutil" |
1860 | + "mime" |
1861 | + "mime/multipart" |
1862 | + "net/textproto" |
1863 | + "net/url" |
1864 | + "strings" |
1865 | +) |
1866 | + |
1867 | +const ( |
1868 | + maxValueLength = 4096 |
1869 | + maxHeaderLines = 1024 |
1870 | + chunkSize = 4 << 10 // 4 KB chunks |
1871 | + defaultMaxMemory = 32 << 20 // 32 MB |
1872 | +) |
1873 | + |
1874 | +// ErrMissingFile is returned by FormFile when the provided file field name |
1875 | +// is either not present in the request or not a file field. |
1876 | +var ErrMissingFile = errors.New("http: no such file") |
1877 | + |
1878 | +// HTTP request parsing errors. |
1879 | +type ProtocolError struct { |
1880 | + ErrorString string |
1881 | +} |
1882 | + |
1883 | +func (err *ProtocolError) Error() string { return err.ErrorString } |
1884 | + |
1885 | +var ( |
1886 | + ErrHeaderTooLong = &ProtocolError{"header too long"} |
1887 | + ErrShortBody = &ProtocolError{"entity body too short"} |
1888 | + ErrNotSupported = &ProtocolError{"feature not supported"} |
1889 | + ErrUnexpectedTrailer = &ProtocolError{"trailer header without chunked transfer encoding"} |
1890 | + ErrMissingContentLength = &ProtocolError{"missing ContentLength in HEAD response"} |
1891 | + ErrNotMultipart = &ProtocolError{"request Content-Type isn't multipart/form-data"} |
1892 | +	ErrMissingBoundary      = &ProtocolError{"no multipart boundary param in Content-Type"} |
1893 | +) |
1894 | + |
1895 | +type badStringError struct { |
1896 | + what string |
1897 | + str string |
1898 | +} |
1899 | + |
1900 | +func (e *badStringError) Error() string { return fmt.Sprintf("%s %q", e.what, e.str) } |
1901 | + |
1902 | +// Headers that Request.Write handles itself and should be skipped. |
1903 | +var reqWriteExcludeHeader = map[string]bool{ |
1904 | + "Host": true, // not in Header map anyway |
1905 | + "User-Agent": true, |
1906 | + "Content-Length": true, |
1907 | + "Transfer-Encoding": true, |
1908 | + "Trailer": true, |
1909 | +} |
1910 | + |
1911 | +// A Request represents an HTTP request received by a server |
1912 | +// or to be sent by a client. |
1913 | +type Request struct { |
1914 | + Method string // GET, POST, PUT, etc. |
1915 | + URL *url.URL |
1916 | + |
1917 | + // The protocol version for incoming requests. |
1918 | + // Outgoing requests always use HTTP/1.1. |
1919 | + Proto string // "HTTP/1.0" |
1920 | + ProtoMajor int // 1 |
1921 | + ProtoMinor int // 0 |
1922 | + |
1923 | + // A header maps request lines to their values. |
1924 | + // If the header says |
1925 | + // |
1926 | + // accept-encoding: gzip, deflate |
1927 | + // Accept-Language: en-us |
1928 | + // Connection: keep-alive |
1929 | + // |
1930 | + // then |
1931 | + // |
1932 | + // Header = map[string][]string{ |
1933 | + // "Accept-Encoding": {"gzip, deflate"}, |
1934 | + // "Accept-Language": {"en-us"}, |
1935 | + // "Connection": {"keep-alive"}, |
1936 | + // } |
1937 | + // |
1938 | + // HTTP defines that header names are case-insensitive. |
1939 | + // The request parser implements this by canonicalizing the |
1940 | + // name, making the first character and any characters |
1941 | + // following a hyphen uppercase and the rest lowercase. |
1942 | + Header Header |
1943 | + |
1944 | + // The message body. |
1945 | + Body io.ReadCloser |
1946 | + |
1947 | + // ContentLength records the length of the associated content. |
1948 | + // The value -1 indicates that the length is unknown. |
1949 | + // Values >= 0 indicate that the given number of bytes may |
1950 | + // be read from Body. |
1951 | + // For outgoing requests, a value of 0 means unknown if Body is not nil. |
1952 | + ContentLength int64 |
1953 | + |
1954 | + // TransferEncoding lists the transfer encodings from outermost to |
1955 | + // innermost. An empty list denotes the "identity" encoding. |
1956 | + // TransferEncoding can usually be ignored; chunked encoding is |
1957 | + // automatically added and removed as necessary when sending and |
1958 | + // receiving requests. |
1959 | + TransferEncoding []string |
1960 | + |
1961 | + // Close indicates whether to close the connection after |
1962 | + // replying to this request. |
1963 | + Close bool |
1964 | + |
1965 | + // The host on which the URL is sought. |
1966 | + // Per RFC 2616, this is either the value of the Host: header |
1967 | + // or the host name given in the URL itself. |
1968 | + Host string |
1969 | + |
1970 | + // Form contains the parsed form data, including both the URL |
1971 | + // field's query parameters and the POST or PUT form data. |
1972 | + // This field is only available after ParseForm is called. |
1973 | + // The HTTP client ignores Form and uses Body instead. |
1974 | + Form url.Values |
1975 | + |
1976 | + // MultipartForm is the parsed multipart form, including file uploads. |
1977 | + // This field is only available after ParseMultipartForm is called. |
1978 | + // The HTTP client ignores MultipartForm and uses Body instead. |
1979 | + MultipartForm *multipart.Form |
1980 | + |
1981 | + // Trailer maps trailer keys to values. Like for Header, if the |
1982 | + // response has multiple trailer lines with the same key, they will be |
1983 | + // concatenated, delimited by commas. |
1984 | + // For server requests, Trailer is only populated after Body has been |
1985 | + // closed or fully consumed. |
1986 | + // Trailer support is only partially complete. |
1987 | + Trailer Header |
1988 | + |
1989 | + // RemoteAddr allows HTTP servers and other software to record |
1990 | + // the network address that sent the request, usually for |
1991 | + // logging. This field is not filled in by ReadRequest and |
1992 | + // has no defined format. The HTTP server in this package |
1993 | + // sets RemoteAddr to an "IP:port" address before invoking a |
1994 | + // handler. |
1995 | + // This field is ignored by the HTTP client. |
1996 | + RemoteAddr string |
1997 | + |
1998 | + // RequestURI is the unmodified Request-URI of the |
1999 | + // Request-Line (RFC 2616, Section 5.1) as sent by the client |
2000 | + // to a server. Usually the URL field should be used instead. |
2001 | + // It is an error to set this field in an HTTP client request. |
2002 | + RequestURI string |
2003 | + |
2004 | + // TLS allows HTTP servers and other software to record |
2005 | + // information about the TLS connection on which the request |
2006 | + // was received. This field is not filled in by ReadRequest. |
2007 | + // The HTTP server in this package sets the field for |
2008 | + // TLS-enabled connections before invoking a handler; |
2009 | + // otherwise it leaves the field nil. |
2010 | + // This field is ignored by the HTTP client. |
2011 | + TLS *tls.ConnectionState |
2012 | +} |
2013 | + |
2014 | +// ProtoAtLeast returns whether the HTTP protocol used |
2015 | +// in the request is at least major.minor. |
2016 | +func (r *Request) ProtoAtLeast(major, minor int) bool { |
2017 | + return r.ProtoMajor > major || |
2018 | + r.ProtoMajor == major && r.ProtoMinor >= minor |
2019 | +} |
2020 | + |
2021 | +// UserAgent returns the client's User-Agent, if sent in the request. |
2022 | +func (r *Request) UserAgent() string { |
2023 | + return r.Header.Get("User-Agent") |
2024 | +} |
2025 | + |
2026 | +// Cookies parses and returns the HTTP cookies sent with the request. |
2027 | +func (r *Request) Cookies() []*Cookie { |
2028 | + return readCookies(r.Header, "") |
2029 | +} |
2030 | + |
2031 | +var ErrNoCookie = errors.New("http: named cookie not present") |
2032 | + |
2033 | +// Cookie returns the named cookie provided in the request or |
2034 | +// ErrNoCookie if not found. |
2035 | +func (r *Request) Cookie(name string) (*Cookie, error) { |
2036 | + for _, c := range readCookies(r.Header, name) { |
2037 | + return c, nil |
2038 | + } |
2039 | + return nil, ErrNoCookie |
2040 | +} |
2041 | + |
2042 | +// AddCookie adds a cookie to the request. Per RFC 6265 section 5.4, |
2043 | +// AddCookie does not attach more than one Cookie header field. That |
2044 | +// means all cookies, if any, are written into the same line, |
2045 | +// separated by semicolon. |
2046 | +func (r *Request) AddCookie(c *Cookie) { |
2047 | + s := fmt.Sprintf("%s=%s", sanitizeName(c.Name), sanitizeValue(c.Value)) |
2048 | + if c := r.Header.Get("Cookie"); c != "" { |
2049 | + r.Header.Set("Cookie", c+"; "+s) |
2050 | + } else { |
2051 | + r.Header.Set("Cookie", s) |
2052 | + } |
2053 | +} |
2054 | + |
2055 | +// Referer returns the referring URL, if sent in the request. |
2056 | +// |
2057 | +// Referer is misspelled as in the request itself, a mistake from the |
2058 | +// earliest days of HTTP. This value can also be fetched from the |
2059 | +// Header map as Header["Referer"]; the benefit of making it available |
2060 | +// as a method is that the compiler can diagnose programs that use the |
2061 | +// alternate (correct English) spelling req.Referrer() but cannot |
2062 | +// diagnose programs that use Header["Referrer"]. |
2063 | +func (r *Request) Referer() string { |
2064 | + return r.Header.Get("Referer") |
2065 | +} |
2066 | + |
2067 | +// multipartByReader is a sentinel value. |
2068 | +// Its presence in Request.MultipartForm indicates that parsing of the request |
2069 | +// body has been handed off to a MultipartReader instead of ParseMultipartForm. |
2070 | +var multipartByReader = &multipart.Form{ |
2071 | + Value: make(map[string][]string), |
2072 | + File: make(map[string][]*multipart.FileHeader), |
2073 | +} |
2074 | + |
2075 | +// MultipartReader returns a MIME multipart reader if this is a |
2076 | +// multipart/form-data POST request, else returns nil and an error. |
2077 | +// Use this function instead of ParseMultipartForm to |
2078 | +// process the request body as a stream. |
2079 | +func (r *Request) MultipartReader() (*multipart.Reader, error) { |
2080 | + if r.MultipartForm == multipartByReader { |
2081 | + return nil, errors.New("http: MultipartReader called twice") |
2082 | + } |
2083 | + if r.MultipartForm != nil { |
2084 | + return nil, errors.New("http: multipart handled by ParseMultipartForm") |
2085 | + } |
2086 | + r.MultipartForm = multipartByReader |
2087 | + return r.multipartReader() |
2088 | +} |
2089 | + |
2090 | +func (r *Request) multipartReader() (*multipart.Reader, error) { |
2091 | + v := r.Header.Get("Content-Type") |
2092 | + if v == "" { |
2093 | + return nil, ErrNotMultipart |
2094 | + } |
2095 | + d, params, err := mime.ParseMediaType(v) |
2096 | + if err != nil || d != "multipart/form-data" { |
2097 | + return nil, ErrNotMultipart |
2098 | + } |
2099 | + boundary, ok := params["boundary"] |
2100 | + if !ok { |
2101 | + return nil, ErrMissingBoundary |
2102 | + } |
2103 | + return multipart.NewReader(r.Body, boundary), nil |
2104 | +} |
2105 | + |
2106 | +// Return value if nonempty, def otherwise. |
2107 | +func valueOrDefault(value, def string) string { |
2108 | + if value != "" { |
2109 | + return value |
2110 | + } |
2111 | + return def |
2112 | +} |
2113 | + |
2114 | +const defaultUserAgent = "Go http package" |
2115 | + |
2116 | +// Write writes an HTTP/1.1 request -- header and body -- in wire format. |
2117 | +// This method consults the following fields of the request: |
2118 | +// Host |
2119 | +// URL |
2120 | +// Method (defaults to "GET") |
2121 | +// Header |
2122 | +// ContentLength |
2123 | +// TransferEncoding |
2124 | +// Body |
2125 | +// |
2126 | +// If Body is present, Content-Length is <= 0 and TransferEncoding |
2127 | +// hasn't been set to "identity", Write adds "Transfer-Encoding: |
2128 | +// chunked" to the header. Body is closed after it is sent. |
2129 | +func (r *Request) Write(w io.Writer) error { |
2130 | + return r.write(w, false, nil) |
2131 | +} |
2132 | + |
2133 | +// WriteProxy is like Write but writes the request in the form |
2134 | +// expected by an HTTP proxy. In particular, WriteProxy writes the |
2135 | +// initial Request-URI line of the request with an absolute URI, per |
2136 | +// section 5.1.2 of RFC 2616, including the scheme and host. |
2137 | +// In either case, WriteProxy also writes a Host header, using |
2138 | +// either r.Host or r.URL.Host. |
2139 | +func (r *Request) WriteProxy(w io.Writer) error { |
2140 | + return r.write(w, true, nil) |
2141 | +} |
2142 | + |
2143 | +// extraHeaders may be nil |
2144 | +func (req *Request) write(w io.Writer, usingProxy bool, extraHeaders Header) error { |
2145 | + host := req.Host |
2146 | + if host == "" { |
2147 | + if req.URL == nil { |
2148 | + return errors.New("http: Request.Write on Request with no Host or URL set") |
2149 | + } |
2150 | + host = req.URL.Host |
2151 | + } |
2152 | + |
2153 | + ruri := req.URL.RequestURI() |
2154 | + if usingProxy && req.URL.Scheme != "" && req.URL.Opaque == "" { |
2155 | + ruri = req.URL.Scheme + "://" + host + ruri |
2156 | + } else if req.Method == "CONNECT" && req.URL.Path == "" { |
2157 | + // CONNECT requests normally give just the host and port, not a full URL. |
2158 | + ruri = host |
2159 | + } |
2160 | + // TODO(bradfitz): escape at least newlines in ruri? |
2161 | + |
2162 | + bw := bufio.NewWriter(w) |
2163 | + fmt.Fprintf(bw, "%s %s HTTP/1.1\r\n", valueOrDefault(req.Method, "GET"), ruri) |
2164 | + |
2165 | + // Header lines |
2166 | + fmt.Fprintf(bw, "Host: %s\r\n", host) |
2167 | + |
2168 | + // Use the defaultUserAgent unless the Header contains one, which |
2169 | + // may be blank to not send the header. |
2170 | + userAgent := defaultUserAgent |
2171 | + if req.Header != nil { |
2172 | + if ua := req.Header["User-Agent"]; len(ua) > 0 { |
2173 | + userAgent = ua[0] |
2174 | + } |
2175 | + } |
2176 | + if userAgent != "" { |
2177 | + fmt.Fprintf(bw, "User-Agent: %s\r\n", userAgent) |
2178 | + } |
2179 | + |
2180 | + // Process Body,ContentLength,Close,Trailer |
2181 | + tw, err := newTransferWriter(req) |
2182 | + if err != nil { |
2183 | + return err |
2184 | + } |
2185 | + err = tw.WriteHeader(bw) |
2186 | + if err != nil { |
2187 | + return err |
2188 | + } |
2189 | + |
2190 | + // TODO: split long values? (If so, should share code with Conn.Write) |
2191 | + err = req.Header.WriteSubset(bw, reqWriteExcludeHeader) |
2192 | + if err != nil { |
2193 | + return err |
2194 | + } |
2195 | + |
2196 | + if extraHeaders != nil { |
2197 | + err = extraHeaders.Write(bw) |
2198 | + if err != nil { |
2199 | + return err |
2200 | + } |
2201 | + } |
2202 | + |
2203 | + io.WriteString(bw, "\r\n") |
2204 | + |
2205 | + // Write body and trailer |
2206 | + err = tw.WriteBody(bw) |
2207 | + if err != nil { |
2208 | + return err |
2209 | + } |
2210 | + |
2211 | + return bw.Flush() |
2212 | +} |
2213 | + |
2214 | +// Convert decimal at s[i:len(s)] to integer, |
2215 | +// returning value, string position where the digits stopped, |
2216 | +// and whether there was a valid number (digits, not too big). |
2217 | +func atoi(s string, i int) (n, i1 int, ok bool) { |
2218 | + const Big = 1000000 |
2219 | + if i >= len(s) || s[i] < '0' || s[i] > '9' { |
2220 | + return 0, 0, false |
2221 | + } |
2222 | + n = 0 |
2223 | + for ; i < len(s) && '0' <= s[i] && s[i] <= '9'; i++ { |
2224 | + n = n*10 + int(s[i]-'0') |
2225 | + if n > Big { |
2226 | + return 0, 0, false |
2227 | + } |
2228 | + } |
2229 | + return n, i, true |
2230 | +} |
2231 | + |
2232 | +// ParseHTTPVersion parses a HTTP version string. |
2233 | +// "HTTP/1.0" returns (1, 0, true). |
2234 | +func ParseHTTPVersion(vers string) (major, minor int, ok bool) { |
2235 | + if len(vers) < 5 || vers[0:5] != "HTTP/" { |
2236 | + return 0, 0, false |
2237 | + } |
2238 | + major, i, ok := atoi(vers, 5) |
2239 | + if !ok || i >= len(vers) || vers[i] != '.' { |
2240 | + return 0, 0, false |
2241 | + } |
2242 | + minor, i, ok = atoi(vers, i+1) |
2243 | + if !ok || i != len(vers) { |
2244 | + return 0, 0, false |
2245 | + } |
2246 | + return major, minor, true |
2247 | +} |
2248 | + |
2249 | +// NewRequest returns a new Request given a method, URL, and optional body. |
2250 | +func NewRequest(method, urlStr string, body io.Reader) (*Request, error) { |
2251 | + u, err := url.Parse(urlStr) |
2252 | + if err != nil { |
2253 | + return nil, err |
2254 | + } |
2255 | + rc, ok := body.(io.ReadCloser) |
2256 | + if !ok && body != nil { |
2257 | + rc = ioutil.NopCloser(body) |
2258 | + } |
2259 | + req := &Request{ |
2260 | + Method: method, |
2261 | + URL: u, |
2262 | + Proto: "HTTP/1.1", |
2263 | + ProtoMajor: 1, |
2264 | + ProtoMinor: 1, |
2265 | + Header: make(Header), |
2266 | + Body: rc, |
2267 | + Host: u.Host, |
2268 | + } |
2269 | + if body != nil { |
2270 | + switch v := body.(type) { |
2271 | + case *strings.Reader: |
2272 | + req.ContentLength = int64(v.Len()) |
2273 | + case *bytes.Buffer: |
2274 | + req.ContentLength = int64(v.Len()) |
2275 | + } |
2276 | + } |
2277 | + |
2278 | + return req, nil |
2279 | +} |
2280 | + |
2281 | +// SetBasicAuth sets the request's Authorization header to use HTTP |
2282 | +// Basic Authentication with the provided username and password. |
2283 | +// |
2284 | +// With HTTP Basic Authentication the provided username and password |
2285 | +// are not encrypted. |
2286 | +func (r *Request) SetBasicAuth(username, password string) { |
2287 | + s := username + ":" + password |
2288 | + r.Header.Set("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte(s))) |
2289 | +} |
2290 | + |
2291 | +// ReadRequest reads and parses a request from b. |
2292 | +func ReadRequest(b *bufio.Reader) (req *Request, err error) { |
2293 | + |
2294 | + tp := textproto.NewReader(b) |
2295 | + req = new(Request) |
2296 | + |
2297 | + // First line: GET /index.html HTTP/1.0 |
2298 | + var s string |
2299 | + if s, err = tp.ReadLine(); err != nil { |
2300 | + return nil, err |
2301 | + } |
2302 | + defer func() { |
2303 | + if err == io.EOF { |
2304 | + err = io.ErrUnexpectedEOF |
2305 | + } |
2306 | + }() |
2307 | + |
2308 | + var f []string |
2309 | + if f = strings.SplitN(s, " ", 3); len(f) < 3 { |
2310 | + return nil, &badStringError{"malformed HTTP request", s} |
2311 | + } |
2312 | + req.Method, req.RequestURI, req.Proto = f[0], f[1], f[2] |
2313 | + rawurl := req.RequestURI |
2314 | + var ok bool |
2315 | + if req.ProtoMajor, req.ProtoMinor, ok = ParseHTTPVersion(req.Proto); !ok { |
2316 | + return nil, &badStringError{"malformed HTTP version", req.Proto} |
2317 | + } |
2318 | + |
2319 | + // CONNECT requests are used two different ways, and neither uses a full URL: |
2320 | + // The standard use is to tunnel HTTPS through an HTTP proxy. |
2321 | + // It looks like "CONNECT www.google.com:443 HTTP/1.1", and the parameter is |
2322 | + // just the authority section of a URL. This information should go in req.URL.Host. |
2323 | + // |
2324 | + // The net/rpc package also uses CONNECT, but there the parameter is a path |
2325 | + // that starts with a slash. It can be parsed with the regular URL parser, |
2326 | + // and the path will end up in req.URL.Path, where it needs to be in order for |
2327 | + // RPC to work. |
2328 | + justAuthority := req.Method == "CONNECT" && !strings.HasPrefix(rawurl, "/") |
2329 | + if justAuthority { |
2330 | + rawurl = "http://" + rawurl |
2331 | + } |
2332 | + |
2333 | + if req.URL, err = url.ParseRequestURI(rawurl); err != nil { |
2334 | + return nil, err |
2335 | + } |
2336 | + |
2337 | + if justAuthority { |
2338 | + // Strip the bogus "http://" back off. |
2339 | + req.URL.Scheme = "" |
2340 | + } |
2341 | + |
2342 | + // Subsequent lines: Key: value. |
2343 | + mimeHeader, err := tp.ReadMIMEHeader() |
2344 | + if err != nil { |
2345 | + return nil, err |
2346 | + } |
2347 | + req.Header = Header(mimeHeader) |
2348 | + |
2349 | + // RFC2616: Must treat |
2350 | + // GET /index.html HTTP/1.1 |
2351 | + // Host: www.google.com |
2352 | + // and |
2353 | + // GET http://www.google.com/index.html HTTP/1.1 |
2354 | + // Host: doesntmatter |
2355 | + // the same. In the second case, any Host line is ignored. |
2356 | + req.Host = req.URL.Host |
2357 | + if req.Host == "" { |
2358 | + req.Host = req.Header.Get("Host") |
2359 | + } |
2360 | + req.Header.Del("Host") |
2361 | + |
2362 | + fixPragmaCacheControl(req.Header) |
2363 | + |
2364 | + // TODO: Parse specific header values: |
2365 | + // Accept |
2366 | + // Accept-Encoding |
2367 | + // Accept-Language |
2368 | + // Authorization |
2369 | + // Cache-Control |
2370 | + // Connection |
2371 | + // Date |
2372 | + // Expect |
2373 | + // From |
2374 | + // If-Match |
2375 | + // If-Modified-Since |
2376 | + // If-None-Match |
2377 | + // If-Range |
2378 | + // If-Unmodified-Since |
2379 | + // Max-Forwards |
2380 | + // Proxy-Authorization |
2381 | + // Referer [sic] |
2382 | + // TE (transfer-codings) |
2383 | + // Trailer |
2384 | + // Transfer-Encoding |
2385 | + // Upgrade |
2386 | + // User-Agent |
2387 | + // Via |
2388 | + // Warning |
2389 | + |
2390 | + err = readTransfer(req, b) |
2391 | + if err != nil { |
2392 | + return nil, err |
2393 | + } |
2394 | + |
2395 | + return req, nil |
2396 | +} |
2397 | + |
2398 | +// MaxBytesReader is similar to io.LimitReader but is intended for |
2399 | +// limiting the size of incoming request bodies. In contrast to |
2400 | +// io.LimitReader, MaxBytesReader's result is a ReadCloser, returns a |
2401 | +// non-EOF error for a Read beyond the limit, and Closes the |
2402 | +// underlying reader when its Close method is called. |
2403 | +// |
2404 | +// MaxBytesReader prevents clients from accidentally or maliciously |
2405 | +// sending a large request and wasting server resources. |
2406 | +func MaxBytesReader(w ResponseWriter, r io.ReadCloser, n int64) io.ReadCloser { |
2407 | + return &maxBytesReader{w: w, r: r, n: n} |
2408 | +} |
2409 | + |
2410 | +type maxBytesReader struct { |
2411 | + w ResponseWriter |
2412 | + r io.ReadCloser // underlying reader |
2413 | + n int64 // max bytes remaining |
2414 | + stopped bool |
2415 | +} |
2416 | + |
2417 | +func (l *maxBytesReader) Read(p []byte) (n int, err error) { |
2418 | + if l.n <= 0 { |
2419 | + if !l.stopped { |
2420 | + l.stopped = true |
2421 | + if res, ok := l.w.(*response); ok { |
2422 | + res.requestTooLarge() |
2423 | + } |
2424 | + } |
2425 | + return 0, errors.New("http: request body too large") |
2426 | + } |
2427 | + if int64(len(p)) > l.n { |
2428 | + p = p[:l.n] |
2429 | + } |
2430 | + n, err = l.r.Read(p) |
2431 | + l.n -= int64(n) |
2432 | + return |
2433 | +} |
2434 | + |
2435 | +func (l *maxBytesReader) Close() error { |
2436 | + return l.r.Close() |
2437 | +} |
2438 | + |
2439 | +// ParseForm parses the raw query from the URL. |
2440 | +// |
2441 | +// For POST or PUT requests, it also parses the request body as a form. |
2442 | +// If the request Body's size has not already been limited by MaxBytesReader, |
2443 | +// the size is capped at 10MB. |
2444 | +// |
2445 | +// ParseMultipartForm calls ParseForm automatically. |
2446 | +// It is idempotent. |
2447 | +func (r *Request) ParseForm() (err error) { |
2448 | + if r.Form != nil { |
2449 | + return |
2450 | + } |
2451 | + if r.URL != nil { |
2452 | + r.Form, err = url.ParseQuery(r.URL.RawQuery) |
2453 | + } |
2454 | + if r.Method == "POST" || r.Method == "PUT" { |
2455 | + if r.Body == nil { |
2456 | + return errors.New("missing form body") |
2457 | + } |
2458 | + ct := r.Header.Get("Content-Type") |
2459 | + ct, _, err = mime.ParseMediaType(ct) |
2460 | + switch { |
2461 | + case ct == "application/x-www-form-urlencoded": |
2462 | + var reader io.Reader = r.Body |
2463 | + maxFormSize := int64(1<<63 - 1) |
2464 | + if _, ok := r.Body.(*maxBytesReader); !ok { |
2465 | + maxFormSize = int64(10 << 20) // 10 MB is a lot of text. |
2466 | + reader = io.LimitReader(r.Body, maxFormSize+1) |
2467 | + } |
2468 | + b, e := ioutil.ReadAll(reader) |
2469 | + if e != nil { |
2470 | + if err == nil { |
2471 | + err = e |
2472 | + } |
2473 | + break |
2474 | + } |
2475 | + if int64(len(b)) > maxFormSize { |
2476 | + return errors.New("http: POST too large") |
2477 | + } |
2478 | + var newValues url.Values |
2479 | + newValues, e = url.ParseQuery(string(b)) |
2480 | + if err == nil { |
2481 | + err = e |
2482 | + } |
2483 | + if r.Form == nil { |
2484 | + r.Form = make(url.Values) |
2485 | + } |
2486 | + // Copy values into r.Form. TODO: make this smoother. |
2487 | + for k, vs := range newValues { |
2488 | + for _, value := range vs { |
2489 | + r.Form.Add(k, value) |
2490 | + } |
2491 | + } |
2492 | + case ct == "multipart/form-data": |
2493 | + // handled by ParseMultipartForm (which is calling us, or should be) |
2494 | + // TODO(bradfitz): there are too many possible |
2495 | + // orders to call too many functions here. |
2496 | + // Clean this up and write more tests. |
2497 | + // request_test.go contains the start of this, |
2498 | + // in TestRequestMultipartCallOrder. |
2499 | + } |
2500 | + } |
2501 | + return err |
2502 | +} |
2503 | + |
2504 | +// ParseMultipartForm parses a request body as multipart/form-data. |
2505 | +// The whole request body is parsed and up to a total of maxMemory bytes of |
2506 | +// its file parts are stored in memory, with the remainder stored on |
2507 | +// disk in temporary files. |
2508 | +// ParseMultipartForm calls ParseForm if necessary. |
2509 | +// After one call to ParseMultipartForm, subsequent calls have no effect. |
2510 | +func (r *Request) ParseMultipartForm(maxMemory int64) error { |
2511 | + if r.MultipartForm == multipartByReader { |
2512 | + return errors.New("http: multipart handled by MultipartReader") |
2513 | + } |
2514 | + if r.Form == nil { |
2515 | + err := r.ParseForm() |
2516 | + if err != nil { |
2517 | + return err |
2518 | + } |
2519 | + } |
2520 | + if r.MultipartForm != nil { |
2521 | + return nil |
2522 | + } |
2523 | + |
2524 | + mr, err := r.multipartReader() |
2525 | + if err == ErrNotMultipart { |
2526 | + return nil |
2527 | + } else if err != nil { |
2528 | + return err |
2529 | + } |
2530 | + |
2531 | + f, err := mr.ReadForm(maxMemory) |
2532 | + if err != nil { |
2533 | + return err |
2534 | + } |
2535 | + for k, v := range f.Value { |
2536 | + r.Form[k] = append(r.Form[k], v...) |
2537 | + } |
2538 | + r.MultipartForm = f |
2539 | + |
2540 | + return nil |
2541 | +} |
2542 | + |
2543 | +// FormValue returns the first value for the named component of the query. |
2544 | +// FormValue calls ParseMultipartForm and ParseForm if necessary. |
2545 | +func (r *Request) FormValue(key string) string { |
2546 | + if r.Form == nil { |
2547 | + r.ParseMultipartForm(defaultMaxMemory) |
2548 | + } |
2549 | + if vs := r.Form[key]; len(vs) > 0 { |
2550 | + return vs[0] |
2551 | + } |
2552 | + return "" |
2553 | +} |
2554 | + |
2555 | +// FormFile returns the first file for the provided form key. |
2556 | +// FormFile calls ParseMultipartForm and ParseForm if necessary. |
2557 | +func (r *Request) FormFile(key string) (multipart.File, *multipart.FileHeader, error) { |
2558 | + if r.MultipartForm == multipartByReader { |
2559 | + return nil, nil, errors.New("http: multipart handled by MultipartReader") |
2560 | + } |
2561 | + if r.MultipartForm == nil { |
2562 | + err := r.ParseMultipartForm(defaultMaxMemory) |
2563 | + if err != nil { |
2564 | + return nil, nil, err |
2565 | + } |
2566 | + } |
2567 | + if r.MultipartForm != nil && r.MultipartForm.File != nil { |
2568 | + if fhs := r.MultipartForm.File[key]; len(fhs) > 0 { |
2569 | + f, err := fhs[0].Open() |
2570 | + return f, fhs[0], err |
2571 | + } |
2572 | + } |
2573 | + return nil, nil, ErrMissingFile |
2574 | +} |
2575 | + |
2576 | +func (r *Request) expectsContinue() bool { |
2577 | + return strings.ToLower(r.Header.Get("Expect")) == "100-continue" |
2578 | +} |
2579 | + |
2580 | +func (r *Request) wantsHttp10KeepAlive() bool { |
2581 | + if r.ProtoMajor != 1 || r.ProtoMinor != 0 { |
2582 | + return false |
2583 | + } |
2584 | + return strings.Contains(strings.ToLower(r.Header.Get("Connection")), "keep-alive") |
2585 | +} |
2586 | |
2587 | === added file 'fork/http/response.go' |
2588 | --- fork/http/response.go 1970-01-01 00:00:00 +0000 |
2589 | +++ fork/http/response.go 2013-07-22 14:27:42 +0000 |
2590 | @@ -0,0 +1,239 @@ |
2591 | +// Copyright 2009 The Go Authors. All rights reserved. |
2592 | +// Use of this source code is governed by a BSD-style |
2593 | +// license that can be found in the LICENSE file. |
2594 | + |
2595 | +// HTTP Response reading and parsing. |
2596 | + |
2597 | +package http |
2598 | + |
2599 | +import ( |
2600 | + "bufio" |
2601 | + "errors" |
2602 | + "io" |
2603 | + "net/textproto" |
2604 | + "net/url" |
2605 | + "strconv" |
2606 | + "strings" |
2607 | +) |
2608 | + |
2609 | +var respExcludeHeader = map[string]bool{ |
2610 | + "Content-Length": true, |
2611 | + "Transfer-Encoding": true, |
2612 | + "Trailer": true, |
2613 | +} |
2614 | + |
2615 | +// Response represents the response from an HTTP request. |
2616 | +// |
2617 | +type Response struct { |
2618 | + Status string // e.g. "200 OK" |
2619 | + StatusCode int // e.g. 200 |
2620 | + Proto string // e.g. "HTTP/1.0" |
2621 | + ProtoMajor int // e.g. 1 |
2622 | + ProtoMinor int // e.g. 0 |
2623 | + |
2624 | + // Header maps header keys to values. If the response had multiple |
2625 | + // headers with the same key, they will be concatenated, with comma |
2626 | + // delimiters. (Section 4.2 of RFC 2616 requires that multiple headers |
2627 | + // be semantically equivalent to a comma-delimited sequence.) Values |
2628 | + // duplicated by other fields in this struct (e.g., ContentLength) are |
2629 | + // omitted from Header. |
2630 | + // |
2631 | + // Keys in the map are canonicalized (see CanonicalHeaderKey). |
2632 | + Header Header |
2633 | + |
2634 | + // Body represents the response body. |
2635 | + // |
2636 | + // The http Client and Transport guarantee that Body is always |
2637 | + // non-nil, even on responses without a body or responses with |
2638 | + // a zero-length body. |
2639 | + Body io.ReadCloser |
2640 | + |
2641 | + // ContentLength records the length of the associated content. The |
2642 | + // value -1 indicates that the length is unknown. Unless RequestMethod |
2643 | + // is "HEAD", values >= 0 indicate that the given number of bytes may |
2644 | + // be read from Body. |
2645 | + ContentLength int64 |
2646 | + |
2647 | + // Contains transfer encodings from outer-most to inner-most. A nil |
2648 | + // value means that the "identity" encoding is used. |
2649 | + TransferEncoding []string |
2650 | + |
2651 | + // Close records whether the header directed that the connection be |
2652 | + // closed after reading Body. The value is advice for clients: neither |
2653 | + // ReadResponse nor Response.Write ever closes a connection. |
2654 | + Close bool |
2655 | + |
2656 | + // Trailer maps trailer keys to values, in the same |
2657 | + // format as the header. |
2658 | + Trailer Header |
2659 | + |
2660 | + // The Request that was sent to obtain this Response. |
2661 | + // Request's Body is nil (having already been consumed). |
2662 | + // This is only populated for Client requests. |
2663 | + Request *Request |
2664 | +} |
2665 | + |
2666 | +// Cookies parses and returns the cookies set in the Set-Cookie headers. |
2667 | +func (r *Response) Cookies() []*Cookie { |
2668 | + return readSetCookies(r.Header) |
2669 | +} |
2670 | + |
2671 | +var ErrNoLocation = errors.New("http: no Location header in response") |
2672 | + |
2673 | +// Location returns the URL of the response's "Location" header, |
2674 | +// if present. Relative redirects are resolved relative to |
2675 | +// the Response's Request. ErrNoLocation is returned if no |
2676 | +// Location header is present. |
2677 | +func (r *Response) Location() (*url.URL, error) { |
2678 | + lv := r.Header.Get("Location") |
2679 | + if lv == "" { |
2680 | + return nil, ErrNoLocation |
2681 | + } |
2682 | + if r.Request != nil && r.Request.URL != nil { |
2683 | + return r.Request.URL.Parse(lv) |
2684 | + } |
2685 | + return url.Parse(lv) |
2686 | +} |
2687 | + |
2688 | +// ReadResponse reads and returns an HTTP response from r. The |
2689 | +// req parameter specifies the Request that corresponds to |
2690 | +// this Response. Clients must call resp.Body.Close when finished |
2691 | +// reading resp.Body. After that call, clients can inspect |
2692 | +// resp.Trailer to find key/value pairs included in the response |
2693 | +// trailer. |
2694 | +func ReadResponse(r *bufio.Reader, req *Request) (resp *Response, err error) { |
2695 | + |
2696 | + tp := textproto.NewReader(r) |
2697 | + resp = new(Response) |
2698 | + |
2699 | + resp.Request = req |
2700 | + resp.Request.Method = strings.ToUpper(resp.Request.Method) |
2701 | + |
2702 | + // Parse the first line of the response. |
2703 | + line, err := tp.ReadLine() |
2704 | + if err != nil { |
2705 | + if err == io.EOF { |
2706 | + err = io.ErrUnexpectedEOF |
2707 | + } |
2708 | + return nil, err |
2709 | + } |
2710 | + f := strings.SplitN(line, " ", 3) |
2711 | + if len(f) < 2 { |
2712 | + return nil, &badStringError{"malformed HTTP response", line} |
2713 | + } |
2714 | + reasonPhrase := "" |
2715 | + if len(f) > 2 { |
2716 | + reasonPhrase = f[2] |
2717 | + } |
2718 | + resp.Status = f[1] + " " + reasonPhrase |
2719 | + resp.StatusCode, err = strconv.Atoi(f[1]) |
2720 | + if err != nil { |
2721 | + return nil, &badStringError{"malformed HTTP status code", f[1]} |
2722 | + } |
2723 | + |
2724 | + resp.Proto = f[0] |
2725 | + var ok bool |
2726 | + if resp.ProtoMajor, resp.ProtoMinor, ok = ParseHTTPVersion(resp.Proto); !ok { |
2727 | + return nil, &badStringError{"malformed HTTP version", resp.Proto} |
2728 | + } |
2729 | + |
2730 | + // Parse the response headers. |
2731 | + mimeHeader, err := tp.ReadMIMEHeader() |
2732 | + if err != nil { |
2733 | + return nil, err |
2734 | + } |
2735 | + resp.Header = Header(mimeHeader) |
2736 | + |
2737 | + fixPragmaCacheControl(resp.Header) |
2738 | + |
2739 | + err = readTransfer(resp, r) |
2740 | + if err != nil { |
2741 | + return nil, err |
2742 | + } |
2743 | + |
2744 | + return resp, nil |
2745 | +} |
2746 | + |
2747 | +// RFC2616: Should treat |
2748 | +// Pragma: no-cache |
2749 | +// like |
2750 | +// Cache-Control: no-cache |
2751 | +func fixPragmaCacheControl(header Header) { |
2752 | + if hp, ok := header["Pragma"]; ok && len(hp) > 0 && hp[0] == "no-cache" { |
2753 | + if _, presentcc := header["Cache-Control"]; !presentcc { |
2754 | + header["Cache-Control"] = []string{"no-cache"} |
2755 | + } |
2756 | + } |
2757 | +} |
2758 | + |
2759 | +// ProtoAtLeast returns whether the HTTP protocol used |
2760 | +// in the response is at least major.minor. |
2761 | +func (r *Response) ProtoAtLeast(major, minor int) bool { |
2762 | + return r.ProtoMajor > major || |
2763 | + r.ProtoMajor == major && r.ProtoMinor >= minor |
2764 | +} |
2765 | + |
2766 | +// Writes the response (header, body and trailer) in wire format. This method |
2767 | +// consults the following fields of the response: |
2768 | +// |
2769 | +// StatusCode |
2770 | +// ProtoMajor |
2771 | +// ProtoMinor |
2772 | +// RequestMethod |
2773 | +// TransferEncoding |
2774 | +// Trailer |
2775 | +// Body |
2776 | +// ContentLength |
2777 | +// Header, values for non-canonical keys will have unpredictable behavior |
2778 | +// |
2779 | +func (r *Response) Write(w io.Writer) error { |
2780 | + |
2781 | + // RequestMethod should be upper-case |
2782 | + if r.Request != nil { |
2783 | + r.Request.Method = strings.ToUpper(r.Request.Method) |
2784 | + } |
2785 | + |
2786 | + // Status line |
2787 | + text := r.Status |
2788 | + if text == "" { |
2789 | + var ok bool |
2790 | + text, ok = statusText[r.StatusCode] |
2791 | + if !ok { |
2792 | + text = "status code " + strconv.Itoa(r.StatusCode) |
2793 | + } |
2794 | + } |
2795 | + protoMajor, protoMinor := strconv.Itoa(r.ProtoMajor), strconv.Itoa(r.ProtoMinor) |
2796 | + statusCode := strconv.Itoa(r.StatusCode) + " " |
2797 | + if strings.HasPrefix(text, statusCode) { |
2798 | + text = text[len(statusCode):] |
2799 | + } |
2800 | + io.WriteString(w, "HTTP/"+protoMajor+"."+protoMinor+" "+statusCode+text+"\r\n") |
2801 | + |
2802 | + // Process Body,ContentLength,Close,Trailer |
2803 | + tw, err := newTransferWriter(r) |
2804 | + if err != nil { |
2805 | + return err |
2806 | + } |
2807 | + err = tw.WriteHeader(w) |
2808 | + if err != nil { |
2809 | + return err |
2810 | + } |
2811 | + |
2812 | + // Rest of header |
2813 | + err = r.Header.WriteSubset(w, respExcludeHeader) |
2814 | + if err != nil { |
2815 | + return err |
2816 | + } |
2817 | + |
2818 | + // End-of-header |
2819 | + io.WriteString(w, "\r\n") |
2820 | + |
2821 | + // Write body and trailer |
2822 | + err = tw.WriteBody(w) |
2823 | + if err != nil { |
2824 | + return err |
2825 | + } |
2826 | + |
2827 | + // Success |
2828 | + return nil |
2829 | +} |
2830 | |
2831 | === added file 'fork/http/server.go' |
2832 | --- fork/http/server.go 1970-01-01 00:00:00 +0000 |
2833 | +++ fork/http/server.go 2013-07-22 14:27:42 +0000 |
2834 | @@ -0,0 +1,1234 @@ |
2835 | +// Copyright 2009 The Go Authors. All rights reserved. |
2836 | +// Use of this source code is governed by a BSD-style |
2837 | +// license that can be found in the LICENSE file. |
2838 | + |
2839 | +// HTTP server. See RFC 2616. |
2840 | + |
2841 | +// TODO(rsc): |
2842 | +// logging |
2843 | + |
2844 | +package http |
2845 | + |
2846 | +import ( |
2847 | + "bufio" |
2848 | + "bytes" |
2849 | + "crypto/tls" |
2850 | + "errors" |
2851 | + "fmt" |
2852 | + "io" |
2853 | + "io/ioutil" |
2854 | + "log" |
2855 | + "net" |
2856 | + "net/url" |
2857 | + "path" |
2858 | + "runtime/debug" |
2859 | + "strconv" |
2860 | + "strings" |
2861 | + "sync" |
2862 | + "time" |
2863 | +) |
2864 | + |
2865 | +// Errors introduced by the HTTP server. |
2866 | +var ( |
2867 | + ErrWriteAfterFlush = errors.New("Conn.Write called after Flush") |
2868 | + ErrBodyNotAllowed = errors.New("http: request method or response status code does not allow body") |
2869 | + ErrHijacked = errors.New("Conn has been hijacked") |
2870 | + ErrContentLength = errors.New("Conn.Write wrote more than the declared Content-Length") |
2871 | +) |
2872 | + |
2873 | +// Objects implementing the Handler interface can be |
2874 | +// registered to serve a particular path or subtree |
2875 | +// in the HTTP server. |
2876 | +// |
2877 | +// ServeHTTP should write reply headers and data to the ResponseWriter |
2878 | +// and then return. Returning signals that the request is finished |
2879 | +// and that the HTTP server can move on to the next request on |
2880 | +// the connection. |
2881 | +type Handler interface { |
2882 | + ServeHTTP(ResponseWriter, *Request) |
2883 | +} |
2884 | + |
2885 | +// A ResponseWriter interface is used by an HTTP handler to |
2886 | +// construct an HTTP response. |
2887 | +type ResponseWriter interface { |
2888 | + // Header returns the header map that will be sent by WriteHeader. |
2889 | + // Changing the header after a call to WriteHeader (or Write) has |
2890 | + // no effect. |
2891 | + Header() Header |
2892 | + |
2893 | + // Write writes the data to the connection as part of an HTTP reply. |
2894 | + // If WriteHeader has not yet been called, Write calls WriteHeader(http.StatusOK) |
2895 | + // before writing the data. If the Header does not contain a |
2896 | + // Content-Type line, Write adds a Content-Type set to the result of passing |
2897 | + // the initial 512 bytes of written data to DetectContentType. |
2898 | + Write([]byte) (int, error) |
2899 | + |
2900 | + // WriteHeader sends an HTTP response header with status code. |
2901 | + // If WriteHeader is not called explicitly, the first call to Write |
2902 | + // will trigger an implicit WriteHeader(http.StatusOK). |
2903 | + // Thus explicit calls to WriteHeader are mainly used to |
2904 | + // send error codes. |
2905 | + WriteHeader(int) |
2906 | +} |
2907 | + |
2908 | +// The Flusher interface is implemented by ResponseWriters that allow |
2909 | +// an HTTP handler to flush buffered data to the client. |
2910 | +// |
2911 | +// Note that even for ResponseWriters that support Flush, |
2912 | +// if the client is connected through an HTTP proxy, |
2913 | +// the buffered data may not reach the client until the response |
2914 | +// completes. |
2915 | +type Flusher interface { |
2916 | + // Flush sends any buffered data to the client. |
2917 | + Flush() |
2918 | +} |
2919 | + |
2920 | +// The Hijacker interface is implemented by ResponseWriters that allow |
2921 | +// an HTTP handler to take over the connection. |
2922 | +type Hijacker interface { |
2923 | + // Hijack lets the caller take over the connection. |
2924 | + // After a call to Hijack(), the HTTP server library |
2925 | + // will not do anything else with the connection. |
2926 | + // It becomes the caller's responsibility to manage |
2927 | + // and close the connection. |
2928 | + Hijack() (net.Conn, *bufio.ReadWriter, error) |
2929 | +} |
2930 | + |
2931 | +// A conn represents the server side of an HTTP connection. |
2932 | +type conn struct { |
2933 | + remoteAddr string // network address of remote side |
2934 | + server *Server // the Server on which the connection arrived |
2935 | + rwc net.Conn // i/o connection |
2936 | + lr *io.LimitedReader // io.LimitReader(rwc) |
2937 | + buf *bufio.ReadWriter // buffered(lr,rwc), reading from bufio->limitReader->rwc |
2938 | + hijacked bool // connection has been hijacked by handler |
2939 | + tlsState *tls.ConnectionState // or nil when not using TLS |
2940 | + body []byte |
2941 | +} |
2942 | + |
2943 | +// A response represents the server side of an HTTP response. |
2944 | +type response struct { |
2945 | + conn *conn |
2946 | + req *Request // request for this response |
2947 | + chunking bool // using chunked transfer encoding for reply body |
2948 | + wroteHeader bool // reply header has been written |
2949 | + wroteContinue bool // 100 Continue response was written |
2950 | + header Header // reply header parameters |
2951 | + written int64 // number of bytes written in body |
2952 | + contentLength int64 // explicitly-declared Content-Length; or -1 |
2953 | + status int // status code passed to WriteHeader |
2954 | + needSniff bool // need to sniff to find Content-Type |
2955 | + |
2956 | + // close connection after this reply. set on request and |
2957 | + // updated after response from handler if there's a |
2958 | + // "Connection: keep-alive" response header and a |
2959 | + // Content-Length. |
2960 | + closeAfterReply bool |
2961 | + |
2962 | + // requestBodyLimitHit is set by requestTooLarge when |
2963 | + // maxBytesReader hits its max size. It is checked in |
2964 | + // WriteHeader, to make sure we don't consume the |
2965 | + // remaining request body to try to advance to the next HTTP |
2966 | + // request. Instead, when this is set, we stop doing |
2967 | + // subsequent requests on this connection and stop reading |
2968 | + // input from it. |
2969 | + requestBodyLimitHit bool |
2970 | +} |
2971 | + |
2972 | +// requestTooLarge is called by maxBytesReader when too much input has |
2973 | +// been read from the client. |
2974 | +func (w *response) requestTooLarge() { |
2975 | + w.closeAfterReply = true |
2976 | + w.requestBodyLimitHit = true |
2977 | + if !w.wroteHeader { |
2978 | + w.Header().Set("Connection", "close") |
2979 | + } |
2980 | +} |
2981 | + |
2982 | +type writerOnly struct { |
2983 | + io.Writer |
2984 | +} |
2985 | + |
2986 | +func (w *response) ReadFrom(src io.Reader) (n int64, err error) { |
2987 | + // Call WriteHeader before checking w.chunking if it hasn't |
2988 | + // been called yet, since WriteHeader is what sets w.chunking. |
2989 | + if !w.wroteHeader { |
2990 | + w.WriteHeader(StatusOK) |
2991 | + } |
2992 | + if !w.chunking && w.bodyAllowed() && !w.needSniff { |
2993 | + w.Flush() |
2994 | + if rf, ok := w.conn.rwc.(io.ReaderFrom); ok { |
2995 | + n, err = rf.ReadFrom(src) |
2996 | + w.written += n |
2997 | + return |
2998 | + } |
2999 | + } |
3000 | + // Fall back to default io.Copy implementation. |
3001 | + // Use wrapper to hide w.ReadFrom from io.Copy. |
3002 | + return io.Copy(writerOnly{w}, src) |
3003 | +} |
3004 | + |
3005 | +// noLimit is an effective infinite upper bound for io.LimitedReader |
3006 | +const noLimit int64 = (1 << 63) - 1 |
3007 | + |
3008 | +// Create new connection from rwc. |
3009 | +func (srv *Server) newConn(rwc net.Conn) (c *conn, err error) { |
3010 | + c = new(conn) |
3011 | + c.remoteAddr = rwc.RemoteAddr().String() |
3012 | + c.server = srv |
3013 | + c.rwc = rwc |
3014 | + c.body = make([]byte, sniffLen) |
3015 | + c.lr = io.LimitReader(rwc, noLimit).(*io.LimitedReader) |
3016 | + br := bufio.NewReader(c.lr) |
3017 | + bw := bufio.NewWriter(rwc) |
3018 | + c.buf = bufio.NewReadWriter(br, bw) |
3019 | + return c, nil |
3020 | +} |
3021 | + |
3022 | +// DefaultMaxHeaderBytes is the maximum permitted size of the headers |
3023 | +// in an HTTP request. |
3024 | +// This can be overridden by setting Server.MaxHeaderBytes. |
3025 | +const DefaultMaxHeaderBytes = 1 << 20 // 1 MB |
3026 | + |
3027 | +func (srv *Server) maxHeaderBytes() int { |
3028 | + if srv.MaxHeaderBytes > 0 { |
3029 | + return srv.MaxHeaderBytes |
3030 | + } |
3031 | + return DefaultMaxHeaderBytes |
3032 | +} |
3033 | + |
3034 | + // wrapper around io.ReadCloser which, on first read, sends an |
3035 | +// HTTP/1.1 100 Continue header |
3036 | +type expectContinueReader struct { |
3037 | + resp *response |
3038 | + readCloser io.ReadCloser |
3039 | + closed bool |
3040 | +} |
3041 | + |
3042 | +func (ecr *expectContinueReader) Read(p []byte) (n int, err error) { |
3043 | + if ecr.closed { |
3044 | + return 0, errors.New("http: Read after Close on request Body") |
3045 | + } |
3046 | + if !ecr.resp.wroteContinue && !ecr.resp.conn.hijacked { |
3047 | + ecr.resp.wroteContinue = true |
3048 | + io.WriteString(ecr.resp.conn.buf, "HTTP/1.1 100 Continue\r\n\r\n") |
3049 | + ecr.resp.conn.buf.Flush() |
3050 | + } |
3051 | + return ecr.readCloser.Read(p) |
3052 | +} |
3053 | + |
3054 | +func (ecr *expectContinueReader) Close() error { |
3055 | + ecr.closed = true |
3056 | + return ecr.readCloser.Close() |
3057 | +} |
3058 | + |
3059 | +// TimeFormat is the time format to use with |
3060 | +// time.Parse and time.Time.Format when parsing |
3061 | +// or generating times in HTTP headers. |
3062 | +// It is like time.RFC1123 but hard codes GMT as the time zone. |
3063 | +const TimeFormat = "Mon, 02 Jan 2006 15:04:05 GMT" |
3064 | + |
3065 | +var errTooLarge = errors.New("http: request too large") |
3066 | + |
3067 | +// Read next request from connection. |
3068 | +func (c *conn) readRequest() (w *response, err error) { |
3069 | + if c.hijacked { |
3070 | + return nil, ErrHijacked |
3071 | + } |
3072 | + c.lr.N = int64(c.server.maxHeaderBytes()) + 4096 /* bufio slop */ |
3073 | + var req *Request |
3074 | + if req, err = ReadRequest(c.buf.Reader); err != nil { |
3075 | + if c.lr.N == 0 { |
3076 | + return nil, errTooLarge |
3077 | + } |
3078 | + return nil, err |
3079 | + } |
3080 | + c.lr.N = noLimit |
3081 | + |
3082 | + req.RemoteAddr = c.remoteAddr |
3083 | + req.TLS = c.tlsState |
3084 | + |
3085 | + w = new(response) |
3086 | + w.conn = c |
3087 | + w.req = req |
3088 | + w.header = make(Header) |
3089 | + w.contentLength = -1 |
3090 | + c.body = c.body[:0] |
3091 | + return w, nil |
3092 | +} |
3093 | + |
3094 | +func (w *response) Header() Header { |
3095 | + return w.header |
3096 | +} |
3097 | + |
3098 | +// maxPostHandlerReadBytes is the max number of Request.Body bytes not |
3099 | +// consumed by a handler that the server will read from the client |
3100 | +// in order to keep a connection alive. If there are more bytes than |
3101 | +// this then the server to be paranoid instead sends a "Connection: |
3102 | +// close" response. |
3103 | +// |
3104 | +// This number is approximately what a typical machine's TCP buffer |
3105 | +// size is anyway. (if we have the bytes on the machine, we might as |
3106 | +// well read them) |
3107 | +const maxPostHandlerReadBytes = 256 << 10 |
3108 | + |
3109 | +func (w *response) WriteHeader(code int) { |
3110 | + if w.conn.hijacked { |
3111 | + log.Print("http: response.WriteHeader on hijacked connection") |
3112 | + return |
3113 | + } |
3114 | + if w.wroteHeader { |
3115 | + log.Print("http: multiple response.WriteHeader calls") |
3116 | + return |
3117 | + } |
3118 | + w.wroteHeader = true |
3119 | + w.status = code |
3120 | + |
3121 | + // Check for an explicit (and valid) Content-Length header. |
3122 | + var hasCL bool |
3123 | + var contentLength int64 |
3124 | + if clenStr := w.header.Get("Content-Length"); clenStr != "" { |
3125 | + var err error |
3126 | + contentLength, err = strconv.ParseInt(clenStr, 10, 64) |
3127 | + if err == nil { |
3128 | + hasCL = true |
3129 | + } else { |
3130 | + log.Printf("http: invalid Content-Length of %q sent", clenStr) |
3131 | + w.header.Del("Content-Length") |
3132 | + } |
3133 | + } |
3134 | + |
3135 | + if w.req.wantsHttp10KeepAlive() && (w.req.Method == "HEAD" || hasCL) { |
3136 | + _, connectionHeaderSet := w.header["Connection"] |
3137 | + if !connectionHeaderSet { |
3138 | + w.header.Set("Connection", "keep-alive") |
3139 | + } |
3140 | + } else if !w.req.ProtoAtLeast(1, 1) { |
3141 | + // Client did not ask to keep connection alive. |
3142 | + w.closeAfterReply = true |
3143 | + } |
3144 | + |
3145 | + if w.header.Get("Connection") == "close" { |
3146 | + w.closeAfterReply = true |
3147 | + } |
3148 | + |
3149 | + // Per RFC 2616, we should consume the request body before |
3150 | + // replying, if the handler hasn't already done so. But we |
3151 | + // don't want to do an unbounded amount of reading here for |
3152 | + // DoS reasons, so we only try up to a threshold. |
3153 | + if w.req.ContentLength != 0 && !w.closeAfterReply { |
3154 | + ecr, isExpecter := w.req.Body.(*expectContinueReader) |
3155 | + if !isExpecter || ecr.resp.wroteContinue { |
3156 | + n, _ := io.CopyN(ioutil.Discard, w.req.Body, maxPostHandlerReadBytes+1) |
3157 | + if n >= maxPostHandlerReadBytes { |
3158 | + w.requestTooLarge() |
3159 | + w.header.Set("Connection", "close") |
3160 | + } else { |
3161 | + w.req.Body.Close() |
3162 | + } |
3163 | + } |
3164 | + } |
3165 | + |
3166 | + if code == StatusNotModified { |
3167 | + // Must not have body. |
3168 | + for _, header := range []string{"Content-Type", "Content-Length", "Transfer-Encoding"} { |
3169 | + if w.header.Get(header) != "" { |
3170 | + // TODO: return an error if WriteHeader gets a return parameter |
3171 | + // or set a flag on w to make future Writes() write an error page? |
3172 | + // for now just log and drop the header. |
3173 | + log.Printf("http: StatusNotModified response with header %q defined", header) |
3174 | + w.header.Del(header) |
3175 | + } |
3176 | + } |
3177 | + } else { |
3178 | + // If no content type, apply sniffing algorithm to body. |
3179 | + if w.header.Get("Content-Type") == "" && w.req.Method != "HEAD" { |
3180 | + w.needSniff = true |
3181 | + } |
3182 | + } |
3183 | + |
3184 | + if _, ok := w.header["Date"]; !ok { |
3185 | + w.Header().Set("Date", time.Now().UTC().Format(TimeFormat)) |
3186 | + } |
3187 | + |
3188 | + te := w.header.Get("Transfer-Encoding") |
3189 | + hasTE := te != "" |
3190 | + if hasCL && hasTE && te != "identity" { |
3191 | + // TODO: return an error if WriteHeader gets a return parameter |
3192 | + // For now just ignore the Content-Length. |
3193 | + log.Printf("http: WriteHeader called with both Transfer-Encoding of %q and a Content-Length of %d", |
3194 | + te, contentLength) |
3195 | + w.header.Del("Content-Length") |
3196 | + hasCL = false |
3197 | + } |
3198 | + |
3199 | + if w.req.Method == "HEAD" || code == StatusNotModified { |
3200 | + // do nothing |
3201 | + } else if hasCL { |
3202 | + w.contentLength = contentLength |
3203 | + w.header.Del("Transfer-Encoding") |
3204 | + } else if w.req.ProtoAtLeast(1, 1) { |
3205 | + // HTTP/1.1 or greater: use chunked transfer encoding |
3206 | + // to avoid closing the connection at EOF. |
3207 | + // TODO: this blows away any custom or stacked Transfer-Encoding they |
3208 | + // might have set. Deal with that as need arises once we have a valid |
3209 | + // use case. |
3210 | + w.chunking = true |
3211 | + w.header.Set("Transfer-Encoding", "chunked") |
3212 | + } else { |
3213 | + // HTTP version < 1.1: cannot do chunked transfer |
3214 | + // encoding and we don't know the Content-Length so |
3215 | + // signal EOF by closing connection. |
3216 | + w.closeAfterReply = true |
3217 | + w.header.Del("Transfer-Encoding") // in case already set |
3218 | + } |
3219 | + |
3220 | + // Cannot use Content-Length with non-identity Transfer-Encoding. |
3221 | + if w.chunking { |
3222 | + w.header.Del("Content-Length") |
3223 | + } |
3224 | + if !w.req.ProtoAtLeast(1, 0) { |
3225 | + return |
3226 | + } |
3227 | + proto := "HTTP/1.0" |
3228 | + if w.req.ProtoAtLeast(1, 1) { |
3229 | + proto = "HTTP/1.1" |
3230 | + } |
3231 | + codestring := strconv.Itoa(code) |
3232 | + text, ok := statusText[code] |
3233 | + if !ok { |
3234 | + text = "status code " + codestring |
3235 | + } |
3236 | + io.WriteString(w.conn.buf, proto+" "+codestring+" "+text+"\r\n") |
3237 | + w.header.Write(w.conn.buf) |
3238 | + |
3239 | + // If we need to sniff the body, leave the header open. |
3240 | + // Otherwise, end it here. |
3241 | + if !w.needSniff { |
3242 | + io.WriteString(w.conn.buf, "\r\n") |
3243 | + } |
3244 | +} |
3245 | + |
3246 | +// sniff uses the first block of written data, |
3247 | +// stored in w.conn.body, to decide the Content-Type |
3248 | +// for the HTTP body. |
3249 | +func (w *response) sniff() { |
3250 | + if !w.needSniff { |
3251 | + return |
3252 | + } |
3253 | + w.needSniff = false |
3254 | + |
3255 | + data := w.conn.body |
3256 | + fmt.Fprintf(w.conn.buf, "Content-Type: %s\r\n\r\n", DetectContentType(data)) |
3257 | + |
3258 | + if len(data) == 0 { |
3259 | + return |
3260 | + } |
3261 | + if w.chunking { |
3262 | + fmt.Fprintf(w.conn.buf, "%x\r\n", len(data)) |
3263 | + } |
3264 | + _, err := w.conn.buf.Write(data) |
3265 | + if w.chunking && err == nil { |
3266 | + io.WriteString(w.conn.buf, "\r\n") |
3267 | + } |
3268 | +} |
3269 | + |
3270 | +// bodyAllowed returns true if a Write is allowed for this response type. |
3271 | +// It's illegal to call this before the header has been flushed. |
3272 | +func (w *response) bodyAllowed() bool { |
3273 | + if !w.wroteHeader { |
3274 | + panic("http: bodyAllowed called before WriteHeader") |
3275 | + } |
3276 | + return w.status != StatusNotModified && w.req.Method != "HEAD" |
3277 | +} |
3278 | + |
3279 | +func (w *response) Write(data []byte) (n int, err error) { |
3280 | + if w.conn.hijacked { |
3281 | + log.Print("http: response.Write on hijacked connection") |
3282 | + return 0, ErrHijacked |
3283 | + } |
3284 | + if !w.wroteHeader { |
3285 | + w.WriteHeader(StatusOK) |
3286 | + } |
3287 | + if len(data) == 0 { |
3288 | + return 0, nil |
3289 | + } |
3290 | + if !w.bodyAllowed() { |
3291 | + return 0, ErrBodyNotAllowed |
3292 | + } |
3293 | + |
3294 | + w.written += int64(len(data)) // ignoring errors, for errorKludge |
3295 | + if w.contentLength != -1 && w.written > w.contentLength { |
3296 | + return 0, ErrContentLength |
3297 | + } |
3298 | + |
3299 | + var m int |
3300 | + if w.needSniff { |
3301 | + // We need to sniff the beginning of the output to |
3302 | + // determine the content type. Accumulate the |
3303 | + // initial writes in w.conn.body. |
3304 | + // Cap m so that append won't allocate. |
3305 | + m = cap(w.conn.body) - len(w.conn.body) |
3306 | + if m > len(data) { |
3307 | + m = len(data) |
3308 | + } |
3309 | + w.conn.body = append(w.conn.body, data[:m]...) |
3310 | + data = data[m:] |
3311 | + if len(data) == 0 { |
3312 | + // Copied everything into the buffer. |
3313 | + // Wait for next write. |
3314 | + return m, nil |
3315 | + } |
3316 | + |
3317 | + // Filled the buffer; more data remains. |
3318 | + // Sniff the content (flushes the buffer) |
3319 | + // and then proceed with the remainder |
3320 | + // of the data as a normal Write. |
3321 | + // Calling sniff clears needSniff. |
3322 | + w.sniff() |
3323 | + } |
3324 | + |
3325 | + // TODO(rsc): if chunking happened after the buffering, |
3326 | + // then there would be fewer chunk headers. |
3327 | + // On the other hand, it would make hijacking more difficult. |
3328 | + if w.chunking { |
3329 | + fmt.Fprintf(w.conn.buf, "%x\r\n", len(data)) // TODO(rsc): use strconv not fmt |
3330 | + } |
3331 | + n, err = w.conn.buf.Write(data) |
3332 | + if err == nil && w.chunking { |
3333 | + if n != len(data) { |
3334 | + err = io.ErrShortWrite |
3335 | + } |
3336 | + if err == nil { |
3337 | + io.WriteString(w.conn.buf, "\r\n") |
3338 | + } |
3339 | + } |
3340 | + |
3341 | + return m + n, err |
3342 | +} |
3343 | + |
3344 | +func (w *response) finishRequest() { |
3345 | + // If this was an HTTP/1.0 request with keep-alive and we sent a Content-Length |
3346 | + // back, we can make this a keep-alive response ... |
3347 | + if w.req.wantsHttp10KeepAlive() { |
3348 | + sentLength := w.header.Get("Content-Length") != "" |
3349 | + if sentLength && w.header.Get("Connection") == "keep-alive" { |
3350 | + w.closeAfterReply = false |
3351 | + } |
3352 | + } |
3353 | + if !w.wroteHeader { |
3354 | + w.WriteHeader(StatusOK) |
3355 | + } |
3356 | + if w.needSniff { |
3357 | + w.sniff() |
3358 | + } |
3359 | + if w.chunking { |
3360 | + io.WriteString(w.conn.buf, "0\r\n") |
3361 | + // trailer key/value pairs, followed by blank line |
3362 | + io.WriteString(w.conn.buf, "\r\n") |
3363 | + } |
3364 | + w.conn.buf.Flush() |
3365 | + // Close the body, unless we're about to close the whole TCP connection |
3366 | + // anyway. |
3367 | + if !w.closeAfterReply { |
3368 | + w.req.Body.Close() |
3369 | + } |
3370 | + if w.req.MultipartForm != nil { |
3371 | + w.req.MultipartForm.RemoveAll() |
3372 | + } |
3373 | + |
3374 | + if w.contentLength != -1 && w.contentLength != w.written { |
3375 | + // Did not write enough. Avoid getting out of sync. |
3376 | + w.closeAfterReply = true |
3377 | + } |
3378 | +} |
3379 | + |
3380 | +func (w *response) Flush() { |
3381 | + if !w.wroteHeader { |
3382 | + w.WriteHeader(StatusOK) |
3383 | + } |
3384 | + w.sniff() |
3385 | + w.conn.buf.Flush() |
3386 | +} |
3387 | + |
3388 | +// Close the connection. |
3389 | +func (c *conn) close() { |
3390 | + if c.buf != nil { |
3391 | + c.buf.Flush() |
3392 | + c.buf = nil |
3393 | + } |
3394 | + if c.rwc != nil { |
3395 | + c.rwc.Close() |
3396 | + c.rwc = nil |
3397 | + } |
3398 | +} |
3399 | + |
3400 | +// Serve a new connection. |
3401 | +func (c *conn) serve() { |
3402 | + defer func() { |
3403 | + err := recover() |
3404 | + if err == nil { |
3405 | + return |
3406 | + } |
3407 | + |
3408 | + var buf bytes.Buffer |
3409 | + fmt.Fprintf(&buf, "http: panic serving %v: %v\n", c.remoteAddr, err) |
3410 | + buf.Write(debug.Stack()) |
3411 | + log.Print(buf.String()) |
3412 | + |
3413 | + if c.rwc != nil { // may be nil if connection hijacked |
3414 | + c.rwc.Close() |
3415 | + } |
3416 | + }() |
3417 | + |
3418 | + if tlsConn, ok := c.rwc.(*tls.Conn); ok { |
3419 | + if err := tlsConn.Handshake(); err != nil { |
3420 | + c.close() |
3421 | + return |
3422 | + } |
3423 | + c.tlsState = new(tls.ConnectionState) |
3424 | + *c.tlsState = tlsConn.ConnectionState() |
3425 | + } |
3426 | + |
3427 | + for { |
3428 | + w, err := c.readRequest() |
3429 | + if err != nil { |
3430 | + msg := "400 Bad Request" |
3431 | + if err == errTooLarge { |
3432 | + // Their HTTP client may or may not be |
3433 | + // able to read this if we're |
3434 | + // responding to them and hanging up |
3435 | + // while they're still writing their |
3436 | + // request. Undefined behavior. |
3437 | + msg = "413 Request Entity Too Large" |
3438 | + } else if err == io.EOF { |
3439 | + break // Don't reply |
3440 | + } else if neterr, ok := err.(net.Error); ok && neterr.Timeout() { |
3441 | + break // Don't reply |
3442 | + } |
3443 | + fmt.Fprintf(c.rwc, "HTTP/1.1 %s\r\n\r\n", msg) |
3444 | + break |
3445 | + } |
3446 | + |
3447 | + // Expect 100 Continue support |
3448 | + req := w.req |
3449 | + if req.expectsContinue() { |
3450 | + if req.ProtoAtLeast(1, 1) { |
3451 | + // Wrap the Body reader with one that replies on the connection |
3452 | + req.Body = &expectContinueReader{readCloser: req.Body, resp: w} |
3453 | + } |
3454 | + if req.ContentLength == 0 { |
3455 | + w.Header().Set("Connection", "close") |
3456 | + w.WriteHeader(StatusBadRequest) |
3457 | + w.finishRequest() |
3458 | + break |
3459 | + } |
3460 | + req.Header.Del("Expect") |
3461 | + } else if req.Header.Get("Expect") != "" { |
3462 | + // TODO(bradfitz): let ServeHTTP handlers handle |
3463 | + // requests with non-standard expectation[s]? Seems |
3464 | + // theoretical at best, and doesn't fit into the |
3465 | + // current ServeHTTP model anyway. We'd need to |
3466 | + // make the ResponseWriter an optional |
3467 | + // "ExpectReplier" interface or something. |
3468 | + // |
3469 | + // For now we'll just obey RFC 2616 14.20 which says |
3470 | + // "If a server receives a request containing an |
3471 | + // Expect field that includes an expectation- |
3472 | + // extension that it does not support, it MUST |
3473 | + // respond with a 417 (Expectation Failed) status." |
3474 | + w.Header().Set("Connection", "close") |
3475 | + w.WriteHeader(StatusExpectationFailed) |
3476 | + w.finishRequest() |
3477 | + break |
3478 | + } |
3479 | + |
3480 | + handler := c.server.Handler |
3481 | + if handler == nil { |
3482 | + handler = DefaultServeMux |
3483 | + } |
3484 | + |
3485 | + // HTTP cannot have multiple simultaneous active requests.[*] |
3486 | + // Until the server replies to this request, it can't read another, |
3487 | + // so we might as well run the handler in this goroutine. |
3488 | + // [*] Not strictly true: HTTP pipelining. We could let them all process |
3489 | + // in parallel even if their responses need to be serialized. |
3490 | + handler.ServeHTTP(w, w.req) |
3491 | + if c.hijacked { |
3492 | + return |
3493 | + } |
3494 | + w.finishRequest() |
3495 | + if w.closeAfterReply { |
3496 | + break |
3497 | + } |
3498 | + } |
3499 | + c.close() |
3500 | +} |
3501 | + |
3502 | +// Hijack implements the Hijacker.Hijack method. Our response is both a ResponseWriter |
3503 | +// and a Hijacker. |
3504 | +func (w *response) Hijack() (rwc net.Conn, buf *bufio.ReadWriter, err error) { |
3505 | + if w.conn.hijacked { |
3506 | + return nil, nil, ErrHijacked |
3507 | + } |
3508 | + w.conn.hijacked = true |
3509 | + rwc = w.conn.rwc |
3510 | + buf = w.conn.buf |
3511 | + w.conn.rwc = nil |
3512 | + w.conn.buf = nil |
3513 | + return |
3514 | +} |
3515 | + |
3516 | +// The HandlerFunc type is an adapter to allow the use of |
3517 | +// ordinary functions as HTTP handlers. If f is a function |
3518 | +// with the appropriate signature, HandlerFunc(f) is a |
3519 | +// Handler object that calls f. |
3520 | +type HandlerFunc func(ResponseWriter, *Request) |
3521 | + |
3522 | +// ServeHTTP calls f(w, r). |
3523 | +func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) { |
3524 | + f(w, r) |
3525 | +} |
3526 | + |
3527 | +// Helper handlers |
3528 | + |
3529 | +// Error replies to the request with the specified error message and HTTP code. |
3530 | +func Error(w ResponseWriter, error string, code int) { |
3531 | + w.Header().Set("Content-Type", "text/plain; charset=utf-8") |
3532 | + w.WriteHeader(code) |
3533 | + fmt.Fprintln(w, error) |
3534 | +} |
3535 | + |
3536 | +// NotFound replies to the request with an HTTP 404 not found error. |
3537 | +func NotFound(w ResponseWriter, r *Request) { Error(w, "404 page not found", StatusNotFound) } |
3538 | + |
3539 | +// NotFoundHandler returns a simple request handler |
3540 | +// that replies to each request with a ``404 page not found'' reply. |
3541 | +func NotFoundHandler() Handler { return HandlerFunc(NotFound) } |
3542 | + |
3543 | +// StripPrefix returns a handler that serves HTTP requests |
3544 | +// by removing the given prefix from the request URL's Path |
3545 | +// and invoking the handler h. StripPrefix handles a |
3546 | +// request for a path that doesn't begin with prefix by |
3547 | +// replying with an HTTP 404 not found error. |
3548 | +func StripPrefix(prefix string, h Handler) Handler { |
3549 | + return HandlerFunc(func(w ResponseWriter, r *Request) { |
3550 | + if !strings.HasPrefix(r.URL.Path, prefix) { |
3551 | + NotFound(w, r) |
3552 | + return |
3553 | + } |
3554 | + r.URL.Path = r.URL.Path[len(prefix):] |
3555 | + h.ServeHTTP(w, r) |
3556 | + }) |
3557 | +} |
3558 | + |
3559 | +// Redirect replies to the request with a redirect to url, |
3560 | +// which may be a path relative to the request path. |
3561 | +func Redirect(w ResponseWriter, r *Request, urlStr string, code int) { |
3562 | + if u, err := url.Parse(urlStr); err == nil { |
3563 | + // If url was relative, make absolute by |
3564 | + // combining with request path. |
3565 | + // The browser would probably do this for us, |
3566 | + // but doing it ourselves is more reliable. |
3567 | + |
3568 | + // NOTE(rsc): RFC 2616 says that the Location |
3569 | + // line must be an absolute URI, like |
3570 | + // "http://www.google.com/redirect/", |
3571 | + // not a path like "/redirect/". |
3572 | + // Unfortunately, we don't know what to |
3573 | + // put in the host name section to get the |
3574 | + // client to connect to us again, so we can't |
3575 | + // know the right absolute URI to send back. |
3576 | + // Because of this problem, no one pays attention |
3577 | + // to the RFC; they all send back just a new path. |
3578 | + // So do we. |
3579 | + oldpath := r.URL.Path |
3580 | + if oldpath == "" { // should not happen, but avoid a crash if it does |
3581 | + oldpath = "/" |
3582 | + } |
3583 | + if u.Scheme == "" { |
3584 | + // no leading http://server |
3585 | + if urlStr == "" || urlStr[0] != '/' { |
3586 | + // make relative path absolute |
3587 | + olddir, _ := path.Split(oldpath) |
3588 | + urlStr = olddir + urlStr |
3589 | + } |
3590 | + |
3591 | + var query string |
3592 | + if i := strings.Index(urlStr, "?"); i != -1 { |
3593 | + urlStr, query = urlStr[:i], urlStr[i:] |
3594 | + } |
3595 | + |
3596 | + // clean up but preserve trailing slash |
3597 | + trailing := urlStr[len(urlStr)-1] == '/' |
3598 | + urlStr = path.Clean(urlStr) |
3599 | + if trailing && urlStr[len(urlStr)-1] != '/' { |
3600 | + urlStr += "/" |
3601 | + } |
3602 | + urlStr += query |
3603 | + } |
3604 | + } |
3605 | + |
3606 | + w.Header().Set("Location", urlStr) |
3607 | + w.WriteHeader(code) |
3608 | + |
3609 | + // RFC2616 recommends that a short note "SHOULD" be included in the |
3610 | + // response because older user agents may not understand 301/307. |
3611 | + // Shouldn't send the response for POST or HEAD; that leaves GET. |
3612 | + if r.Method == "GET" { |
3613 | + note := "<a href=\"" + htmlEscape(urlStr) + "\">" + statusText[code] + "</a>.\n" |
3614 | + fmt.Fprintln(w, note) |
3615 | + } |
3616 | +} |
3617 | + |
3618 | +var htmlReplacer = strings.NewReplacer( |
3619 | + "&", "&amp;", |
3620 | + "<", "&lt;", |
3621 | + ">", "&gt;", |
3622 | + // "&#34;" is shorter than "&quot;". |
3623 | + `"`, "&#34;", |
3624 | + // "&#39;" is shorter than "&apos;" and apos was not in HTML until HTML5. |
3625 | + "'", "&#39;", |
3626 | +) |
3627 | + |
3628 | +func htmlEscape(s string) string { |
3629 | + return htmlReplacer.Replace(s) |
3630 | +} |
3631 | + |
3632 | +// Redirect to a fixed URL |
3633 | +type redirectHandler struct { |
3634 | + url string |
3635 | + code int |
3636 | +} |
3637 | + |
3638 | +func (rh *redirectHandler) ServeHTTP(w ResponseWriter, r *Request) { |
3639 | + Redirect(w, r, rh.url, rh.code) |
3640 | +} |
3641 | + |
3642 | +// RedirectHandler returns a request handler that redirects |
3643 | +// each request it receives to the given url using the given |
3644 | +// status code. |
3645 | +func RedirectHandler(url string, code int) Handler { |
3646 | + return &redirectHandler{url, code} |
3647 | +} |
3648 | + |
3649 | +// ServeMux is an HTTP request multiplexer. |
3650 | +// It matches the URL of each incoming request against a list of registered |
3651 | +// patterns and calls the handler for the pattern that |
3652 | +// most closely matches the URL. |
3653 | +// |
3654 | +// Patterns name fixed, rooted paths, like "/favicon.ico", |
3655 | +// or rooted subtrees, like "/images/" (note the trailing slash). |
3656 | +// Longer patterns take precedence over shorter ones, so that |
3657 | +// if there are handlers registered for both "/images/" |
3658 | +// and "/images/thumbnails/", the latter handler will be |
3659 | +// called for paths beginning "/images/thumbnails/" and the |
3660 | +// former will receive requests for any other paths in the |
3661 | +// "/images/" subtree. |
3662 | +// |
3663 | +// Patterns may optionally begin with a host name, restricting matches to |
3664 | +// URLs on that host only. Host-specific patterns take precedence over |
3665 | +// general patterns, so that a handler might register for the two patterns |
3666 | +// "/codesearch" and "codesearch.google.com/" without also taking over |
3667 | +// requests for "http://www.google.com/". |
3668 | +// |
3669 | +// ServeMux also takes care of sanitizing the URL request path, |
3670 | +// redirecting any request containing . or .. elements to an |
3671 | +// equivalent .- and ..-free URL. |
3672 | +type ServeMux struct { |
3673 | + mu sync.RWMutex |
3674 | + m map[string]muxEntry |
3675 | +} |
3676 | + |
3677 | +type muxEntry struct { |
3678 | + explicit bool |
3679 | + h Handler |
3680 | +} |
3681 | + |
3682 | +// NewServeMux allocates and returns a new ServeMux. |
3683 | +func NewServeMux() *ServeMux { return &ServeMux{m: make(map[string]muxEntry)} } |
3684 | + |
3685 | +// DefaultServeMux is the default ServeMux used by Serve. |
3686 | +var DefaultServeMux = NewServeMux() |
3687 | + |
3688 | +// Does path match pattern? |
3689 | +func pathMatch(pattern, path string) bool { |
3690 | + if len(pattern) == 0 { |
3691 | + // should not happen |
3692 | + return false |
3693 | + } |
3694 | + n := len(pattern) |
3695 | + if pattern[n-1] != '/' { |
3696 | + return pattern == path |
3697 | + } |
3698 | + return len(path) >= n && path[0:n] == pattern |
3699 | +} |
3700 | + |
3701 | +// Return the canonical path for p, eliminating . and .. elements. |
3702 | +func cleanPath(p string) string { |
3703 | + if p == "" { |
3704 | + return "/" |
3705 | + } |
3706 | + if p[0] != '/' { |
3707 | + p = "/" + p |
3708 | + } |
3709 | + np := path.Clean(p) |
3710 | + // path.Clean removes trailing slash except for root; |
3711 | + // put the trailing slash back if necessary. |
3712 | + if p[len(p)-1] == '/' && np != "/" { |
3713 | + np += "/" |
3714 | + } |
3715 | + return np |
3716 | +} |
3717 | + |
3718 | +// Find a handler on a handler map given a path string |
3719 | +// Most-specific (longest) pattern wins |
3720 | +func (mux *ServeMux) match(path string) Handler { |
3721 | + var h Handler |
3722 | + var n = 0 |
3723 | + for k, v := range mux.m { |
3724 | + if !pathMatch(k, path) { |
3725 | + continue |
3726 | + } |
3727 | + if h == nil || len(k) > n { |
3728 | + n = len(k) |
3729 | + h = v.h |
3730 | + } |
3731 | + } |
3732 | + return h |
3733 | +} |
3734 | + |
3735 | +// handler returns the handler to use for the request r. |
3736 | +func (mux *ServeMux) handler(r *Request) Handler { |
3737 | + mux.mu.RLock() |
3738 | + defer mux.mu.RUnlock() |
3739 | + |
3740 | + // Host-specific pattern takes precedence over generic ones |
3741 | + h := mux.match(r.Host + r.URL.Path) |
3742 | + if h == nil { |
3743 | + h = mux.match(r.URL.Path) |
3744 | + } |
3745 | + if h == nil { |
3746 | + h = NotFoundHandler() |
3747 | + } |
3748 | + return h |
3749 | +} |
3750 | + |
3751 | +// ServeHTTP dispatches the request to the handler whose |
3752 | +// pattern most closely matches the request URL. |
3753 | +func (mux *ServeMux) ServeHTTP(w ResponseWriter, r *Request) { |
3754 | + // Clean path to canonical form and redirect. |
3755 | + if p := cleanPath(r.URL.Path); p != r.URL.Path { |
3756 | + w.Header().Set("Location", p) |
3757 | + w.WriteHeader(StatusMovedPermanently) |
3758 | + return |
3759 | + } |
3760 | + mux.handler(r).ServeHTTP(w, r) |
3761 | +} |
3762 | + |
3763 | +// Handle registers the handler for the given pattern. |
3764 | +// If a handler already exists for pattern, Handle panics. |
3765 | +func (mux *ServeMux) Handle(pattern string, handler Handler) { |
3766 | + mux.mu.Lock() |
3767 | + defer mux.mu.Unlock() |
3768 | + |
3769 | + if pattern == "" { |
3770 | + panic("http: invalid pattern " + pattern) |
3771 | + } |
3772 | + if handler == nil { |
3773 | + panic("http: nil handler") |
3774 | + } |
3775 | + if mux.m[pattern].explicit { |
3776 | + panic("http: multiple registrations for " + pattern) |
3777 | + } |
3778 | + |
3779 | + mux.m[pattern] = muxEntry{explicit: true, h: handler} |
3780 | + |
3781 | + // Helpful behavior: |
3782 | + // If pattern is /tree/, insert an implicit permanent redirect for /tree. |
3783 | + // It can be overridden by an explicit registration. |
3784 | + n := len(pattern) |
3785 | + if n > 0 && pattern[n-1] == '/' && !mux.m[pattern[0:n-1]].explicit { |
3786 | + mux.m[pattern[0:n-1]] = muxEntry{h: RedirectHandler(pattern, StatusMovedPermanently)} |
3787 | + } |
3788 | +} |
3789 | + |
3790 | +// HandleFunc registers the handler function for the given pattern. |
3791 | +func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request)) { |
3792 | + mux.Handle(pattern, HandlerFunc(handler)) |
3793 | +} |
3794 | + |
3795 | +// Handle registers the handler for the given pattern |
3796 | +// in the DefaultServeMux. |
3797 | +// The documentation for ServeMux explains how patterns are matched. |
3798 | +func Handle(pattern string, handler Handler) { DefaultServeMux.Handle(pattern, handler) } |
3799 | + |
3800 | +// HandleFunc registers the handler function for the given pattern |
3801 | +// in the DefaultServeMux. |
3802 | +// The documentation for ServeMux explains how patterns are matched. |
3803 | +func HandleFunc(pattern string, handler func(ResponseWriter, *Request)) { |
3804 | + DefaultServeMux.HandleFunc(pattern, handler) |
3805 | +} |
3806 | + |
3807 | +// Serve accepts incoming HTTP connections on the listener l, |
3808 | +// creating a new service thread for each. The service threads |
3809 | +// read requests and then call handler to reply to them. |
3810 | +// Handler is typically nil, in which case the DefaultServeMux is used. |
3811 | +func Serve(l net.Listener, handler Handler) error { |
3812 | + srv := &Server{Handler: handler} |
3813 | + return srv.Serve(l) |
3814 | +} |
3815 | + |
3816 | +// A Server defines parameters for running an HTTP server. |
3817 | +type Server struct { |
3818 | + Addr string // TCP address to listen on, ":http" if empty |
3819 | + Handler Handler // handler to invoke, http.DefaultServeMux if nil |
3820 | + ReadTimeout time.Duration // maximum duration before timing out read of the request |
3821 | + WriteTimeout time.Duration // maximum duration before timing out write of the response |
3822 | + MaxHeaderBytes int // maximum size of request headers, DefaultMaxHeaderBytes if 0 |
3823 | + TLSConfig *tls.Config // optional TLS config, used by ListenAndServeTLS |
3824 | +} |
3825 | + |
3826 | +// ListenAndServe listens on the TCP network address srv.Addr and then |
3827 | +// calls Serve to handle requests on incoming connections. If |
3828 | +// srv.Addr is blank, ":http" is used. |
3829 | +func (srv *Server) ListenAndServe() error { |
3830 | + addr := srv.Addr |
3831 | + if addr == "" { |
3832 | + addr = ":http" |
3833 | + } |
3834 | + l, e := net.Listen("tcp", addr) |
3835 | + if e != nil { |
3836 | + return e |
3837 | + } |
3838 | + return srv.Serve(l) |
3839 | +} |
3840 | + |
3841 | +// Serve accepts incoming connections on the Listener l, creating a |
3842 | +// new service goroutine for each. The service goroutines read requests and |
3843 | +// then call srv.Handler to reply to them. |
3844 | +func (srv *Server) Serve(l net.Listener) error { |
3845 | + defer l.Close() |
3846 | + var tempDelay time.Duration // how long to sleep on accept failure |
3847 | + for { |
3848 | + rw, e := l.Accept() |
3849 | + if e != nil { |
3850 | + if ne, ok := e.(net.Error); ok && ne.Temporary() { |
3851 | + if tempDelay == 0 { |
3852 | + tempDelay = 5 * time.Millisecond |
3853 | + } else { |
3854 | + tempDelay *= 2 |
3855 | + } |
3856 | + if max := 1 * time.Second; tempDelay > max { |
3857 | + tempDelay = max |
3858 | + } |
3859 | + log.Printf("http: Accept error: %v; retrying in %v", e, tempDelay) |
3860 | + time.Sleep(tempDelay) |
3861 | + continue |
3862 | + } |
3863 | + return e |
3864 | + } |
3865 | + tempDelay = 0 |
3866 | + if srv.ReadTimeout != 0 { |
3867 | + rw.SetReadDeadline(time.Now().Add(srv.ReadTimeout)) |
3868 | + } |
3869 | + if srv.WriteTimeout != 0 { |
3870 | + rw.SetWriteDeadline(time.Now().Add(srv.WriteTimeout)) |
3871 | + } |
3872 | + c, err := srv.newConn(rw) |
3873 | + if err != nil { |
3874 | + continue |
3875 | + } |
3876 | + go c.serve() |
3877 | + } |
3878 | + panic("not reached") |
3879 | +} |
3880 | + |
3881 | +// ListenAndServe listens on the TCP network address addr |
3882 | +// and then calls Serve with handler to handle requests |
3883 | +// on incoming connections. Handler is typically nil, |
3884 | +// in which case the DefaultServeMux is used. |
3885 | +// |
3886 | +// A trivial example server is: |
3887 | +// |
3888 | +// package main |
3889 | +// |
3890 | +// import ( |
3891 | +// "io" |
3892 | +// "net/http" |
3893 | +// "log" |
3894 | +// ) |
3895 | +// |
3896 | +// // hello world, the web server |
3897 | +// func HelloServer(w http.ResponseWriter, req *http.Request) { |
3898 | +// io.WriteString(w, "hello, world!\n") |
3899 | +// } |
3900 | +// |
3901 | +// func main() { |
3902 | +// http.HandleFunc("/hello", HelloServer) |
3903 | +// err := http.ListenAndServe(":12345", nil) |
3904 | +// if err != nil { |
3905 | +// log.Fatal("ListenAndServe: ", err) |
3906 | +// } |
3907 | +// } |
3908 | +func ListenAndServe(addr string, handler Handler) error { |
3909 | + server := &Server{Addr: addr, Handler: handler} |
3910 | + return server.ListenAndServe() |
3911 | +} |
3912 | + |
3913 | +// ListenAndServeTLS acts identically to ListenAndServe, except that it |
3914 | +// expects HTTPS connections. Additionally, files containing a certificate and |
3915 | +// matching private key for the server must be provided. If the certificate |
3916 | +// is signed by a certificate authority, the certFile should be the concatenation |
3917 | +// of the server's certificate followed by the CA's certificate. |
3918 | +// |
3919 | +// A trivial example server is: |
3920 | +// |
3921 | +// import ( |
3922 | +// "log" |
3923 | +// "net/http" |
3924 | +// ) |
3925 | +// |
3926 | +// func handler(w http.ResponseWriter, req *http.Request) { |
3927 | +// w.Header().Set("Content-Type", "text/plain") |
3928 | +// w.Write([]byte("This is an example server.\n")) |
3929 | +// } |
3930 | +// |
3931 | +// func main() { |
3932 | +// http.HandleFunc("/", handler) |
3933 | +// log.Printf("About to listen on 10443. Go to https://127.0.0.1:10443/") |
3934 | +// err := http.ListenAndServeTLS(":10443", "cert.pem", "key.pem", nil) |
3935 | +// if err != nil { |
3936 | +// log.Fatal(err) |
3937 | +// } |
3938 | +// } |
3939 | +// |
3940 | +// One can use generate_cert.go in crypto/tls to generate cert.pem and key.pem. |
3941 | +func ListenAndServeTLS(addr string, certFile string, keyFile string, handler Handler) error { |
3942 | + server := &Server{Addr: addr, Handler: handler} |
3943 | + return server.ListenAndServeTLS(certFile, keyFile) |
3944 | +} |
3945 | + |
3946 | +// ListenAndServeTLS listens on the TCP network address srv.Addr and |
3947 | +// then calls Serve to handle requests on incoming TLS connections. |
3948 | +// |
3949 | +// Filenames containing a certificate and matching private key for |
3950 | +// the server must be provided. If the certificate is signed by a |
3951 | +// certificate authority, the certFile should be the concatenation |
3952 | +// of the server's certificate followed by the CA's certificate. |
3953 | +// |
3954 | +// If srv.Addr is blank, ":https" is used. |
3955 | +func (srv *Server) ListenAndServeTLS(certFile, keyFile string) error { |
3956 | + addr := srv.Addr |
3957 | + if addr == "" { |
3958 | + addr = ":https" |
3959 | + } |
3960 | + config := &tls.Config{} |
3961 | + if srv.TLSConfig != nil { |
3962 | + *config = *srv.TLSConfig |
3963 | + } |
3964 | + if config.NextProtos == nil { |
3965 | + config.NextProtos = []string{"http/1.1"} |
3966 | + } |
3967 | + |
3968 | + var err error |
3969 | + config.Certificates = make([]tls.Certificate, 1) |
3970 | + config.Certificates[0], err = tls.LoadX509KeyPair(certFile, keyFile) |
3971 | + if err != nil { |
3972 | + return err |
3973 | + } |
3974 | + |
3975 | + conn, err := net.Listen("tcp", addr) |
3976 | + if err != nil { |
3977 | + return err |
3978 | + } |
3979 | + |
3980 | + tlsListener := tls.NewListener(conn, config) |
3981 | + return srv.Serve(tlsListener) |
3982 | +} |
3983 | + |
3984 | +// TimeoutHandler returns a Handler that runs h with the given time limit. |
3985 | +// |
3986 | +// The new Handler calls h.ServeHTTP to handle each request, but if a |
3987 | +// call runs for longer than its time limit, the handler responds with |
3988 | +// a 503 Service Unavailable error and the given message in its body. |
3989 | +// (If msg is empty, a suitable default message will be sent.) |
3990 | +// After such a timeout, writes by h to its ResponseWriter will return |
3991 | +// ErrHandlerTimeout. |
3992 | +func TimeoutHandler(h Handler, dt time.Duration, msg string) Handler { |
3993 | + f := func() <-chan time.Time { |
3994 | + return time.After(dt) |
3995 | + } |
3996 | + return &timeoutHandler{h, f, msg} |
3997 | +} |
3998 | + |
3999 | +// ErrHandlerTimeout is returned on ResponseWriter Write calls |
4000 | +// in handlers which have timed out. |
4001 | +var ErrHandlerTimeout = errors.New("http: Handler timeout") |
4002 | + |
4003 | +type timeoutHandler struct { |
4004 | + handler Handler |
4005 | + timeout func() <-chan time.Time // returns channel producing a timeout |
4006 | + body string |
4007 | +} |
4008 | + |
4009 | +func (h *timeoutHandler) errorBody() string { |
4010 | + if h.body != "" { |
4011 | + return h.body |
4012 | + } |
4013 | + return "<html><head><title>Timeout</title></head><body><h1>Timeout</h1></body></html>" |
4014 | +} |
4015 | + |
4016 | +func (h *timeoutHandler) ServeHTTP(w ResponseWriter, r *Request) { |
4017 | + done := make(chan bool) |
4018 | + tw := &timeoutWriter{w: w} |
4019 | + go func() { |
4020 | + h.handler.ServeHTTP(tw, r) |
4021 | + done <- true |
4022 | + }() |
4023 | + select { |
4024 | + case <-done: |
4025 | + return |
4026 | + case <-h.timeout(): |
4027 | + tw.mu.Lock() |
4028 | + defer tw.mu.Unlock() |
4029 | + if !tw.wroteHeader { |
4030 | + tw.w.WriteHeader(StatusServiceUnavailable) |
4031 | + tw.w.Write([]byte(h.errorBody())) |
4032 | + } |
4033 | + tw.timedOut = true |
4034 | + } |
4035 | +} |
4036 | + |
4037 | +type timeoutWriter struct { |
4038 | + w ResponseWriter |
4039 | + |
4040 | + mu sync.Mutex |
4041 | + timedOut bool |
4042 | + wroteHeader bool |
4043 | +} |
4044 | + |
4045 | +func (tw *timeoutWriter) Header() Header { |
4046 | + return tw.w.Header() |
4047 | +} |
4048 | + |
4049 | +func (tw *timeoutWriter) Write(p []byte) (int, error) { |
4050 | + tw.mu.Lock() |
4051 | + timedOut := tw.timedOut |
4052 | + tw.mu.Unlock() |
4053 | + if timedOut { |
4054 | + return 0, ErrHandlerTimeout |
4055 | + } |
4056 | + return tw.w.Write(p) |
4057 | +} |
4058 | + |
4059 | +func (tw *timeoutWriter) WriteHeader(code int) { |
4060 | + tw.mu.Lock() |
4061 | + if tw.timedOut || tw.wroteHeader { |
4062 | + tw.mu.Unlock() |
4063 | + return |
4064 | + } |
4065 | + tw.wroteHeader = true |
4066 | + tw.mu.Unlock() |
4067 | + tw.w.WriteHeader(code) |
4068 | +} |
4069 | |
4070 | === added file 'fork/http/sniff.go' |
4071 | --- fork/http/sniff.go 1970-01-01 00:00:00 +0000 |
4072 | +++ fork/http/sniff.go 2013-07-22 14:27:42 +0000 |
4073 | @@ -0,0 +1,214 @@ |
4074 | +// Copyright 2011 The Go Authors. All rights reserved. |
4075 | +// Use of this source code is governed by a BSD-style |
4076 | +// license that can be found in the LICENSE file. |
4077 | + |
4078 | +package http |
4079 | + |
4080 | +import ( |
4081 | + "bytes" |
4082 | + "encoding/binary" |
4083 | +) |
4084 | + |
4085 | +// The algorithm uses at most sniffLen bytes to make its decision. |
4086 | +const sniffLen = 512 |
4087 | + |
4088 | +// DetectContentType implements the algorithm described |
4089 | +// at http://mimesniff.spec.whatwg.org/ to determine the |
4090 | +// Content-Type of the given data. It considers at most the |
4091 | +// first 512 bytes of data. DetectContentType always returns |
4092 | +// a valid MIME type: if it cannot determine a more specific one, it |
4093 | +// returns "application/octet-stream". |
4094 | +func DetectContentType(data []byte) string { |
4095 | + if len(data) > sniffLen { |
4096 | + data = data[:sniffLen] |
4097 | + } |
4098 | + |
4099 | + // Index of the first non-whitespace byte in data. |
4100 | + firstNonWS := 0 |
4101 | + for ; firstNonWS < len(data) && isWS(data[firstNonWS]); firstNonWS++ { |
4102 | + } |
4103 | + |
4104 | + for _, sig := range sniffSignatures { |
4105 | + if ct := sig.match(data, firstNonWS); ct != "" { |
4106 | + return ct |
4107 | + } |
4108 | + } |
4109 | + |
4110 | + return "application/octet-stream" // fallback |
4111 | +} |
4112 | + |
4113 | +func isWS(b byte) bool { |
4114 | + return bytes.IndexByte([]byte("\t\n\x0C\r "), b) != -1 |
4115 | +} |
4116 | + |
4117 | +type sniffSig interface { |
4118 | + // match returns the MIME type of the data, or "" if unknown. |
4119 | + match(data []byte, firstNonWS int) string |
4120 | +} |
4121 | + |
4122 | +// Data matching the table in section 6. |
4123 | +var sniffSignatures = []sniffSig{ |
4124 | + htmlSig("<!DOCTYPE HTML"), |
4125 | + htmlSig("<HTML"), |
4126 | + htmlSig("<HEAD"), |
4127 | + htmlSig("<SCRIPT"), |
4128 | + htmlSig("<IFRAME"), |
4129 | + htmlSig("<H1"), |
4130 | + htmlSig("<DIV"), |
4131 | + htmlSig("<FONT"), |
4132 | + htmlSig("<TABLE"), |
4133 | + htmlSig("<A"), |
4134 | + htmlSig("<STYLE"), |
4135 | + htmlSig("<TITLE"), |
4136 | + htmlSig("<B"), |
4137 | + htmlSig("<BODY"), |
4138 | + htmlSig("<BR"), |
4139 | + htmlSig("<P"), |
4140 | + htmlSig("<!--"), |
4141 | + |
4142 | + &maskedSig{mask: []byte("\xFF\xFF\xFF\xFF\xFF"), pat: []byte("<?xml"), skipWS: true, ct: "text/xml; charset=utf-8"}, |
4143 | + |
4144 | + &exactSig{[]byte("%PDF-"), "application/pdf"}, |
4145 | + &exactSig{[]byte("%!PS-Adobe-"), "application/postscript"}, |
4146 | + |
4147 | + // UTF BOMs. |
4148 | + &maskedSig{mask: []byte("\xFF\xFF\x00\x00"), pat: []byte("\xFE\xFF\x00\x00"), ct: "text/plain; charset=utf-16be"}, |
4149 | + &maskedSig{mask: []byte("\xFF\xFF\x00\x00"), pat: []byte("\xFF\xFE\x00\x00"), ct: "text/plain; charset=utf-16le"}, |
4150 | + &maskedSig{mask: []byte("\xFF\xFF\xFF\x00"), pat: []byte("\xEF\xBB\xBF\x00"), ct: "text/plain; charset=utf-8"}, |
4151 | + |
4152 | + &exactSig{[]byte("GIF87a"), "image/gif"}, |
4153 | + &exactSig{[]byte("GIF89a"), "image/gif"}, |
4154 | + &exactSig{[]byte("\x89\x50\x4E\x47\x0D\x0A\x1A\x0A"), "image/png"}, |
4155 | + &exactSig{[]byte("\xFF\xD8\xFF"), "image/jpeg"}, |
4156 | + &exactSig{[]byte("BM"), "image/bmp"}, |
4157 | + &maskedSig{ |
4158 | + mask: []byte("\xFF\xFF\xFF\xFF\x00\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF"), |
4159 | + pat: []byte("RIFF\x00\x00\x00\x00WEBPVP"), |
4160 | + ct: "image/webp", |
4161 | + }, |
4162 | + &exactSig{[]byte("\x00\x00\x01\x00"), "image/vnd.microsoft.icon"}, |
4163 | + &exactSig{[]byte("\x4F\x67\x67\x53\x00"), "application/ogg"}, |
4164 | + &maskedSig{ |
4165 | + mask: []byte("\xFF\xFF\xFF\xFF\x00\x00\x00\x00\xFF\xFF\xFF\xFF"), |
4166 | + pat: []byte("RIFF\x00\x00\x00\x00WAVE"), |
4167 | + ct: "audio/wave", |
4168 | + }, |
4169 | + &exactSig{[]byte("\x1A\x45\xDF\xA3"), "video/webm"}, |
4170 | + &exactSig{[]byte("\x52\x61\x72\x20\x1A\x07\x00"), "application/x-rar-compressed"}, |
4171 | + &exactSig{[]byte("\x50\x4B\x03\x04"), "application/zip"}, |
4172 | + &exactSig{[]byte("\x1F\x8B\x08"), "application/x-gzip"}, |
4173 | + |
4174 | + // TODO(dsymonds): Re-enable this when the spec is sorted w.r.t. MP4. |
4175 | + //mp4Sig(0), |
4176 | + |
4177 | + textSig(0), // should be last |
4178 | +} |
4179 | + |
4180 | +type exactSig struct { |
4181 | + sig []byte |
4182 | + ct string |
4183 | +} |
4184 | + |
4185 | +func (e *exactSig) match(data []byte, firstNonWS int) string { |
4186 | + if bytes.HasPrefix(data, e.sig) { |
4187 | + return e.ct |
4188 | + } |
4189 | + return "" |
4190 | +} |
4191 | + |
4192 | +type maskedSig struct { |
4193 | + mask, pat []byte |
4194 | + skipWS bool |
4195 | + ct string |
4196 | +} |
4197 | + |
4198 | +func (m *maskedSig) match(data []byte, firstNonWS int) string { |
4199 | + if m.skipWS { |
4200 | + data = data[firstNonWS:] |
4201 | + } |
4202 | + if len(data) < len(m.mask) { |
4203 | + return "" |
4204 | + } |
4205 | + for i, mask := range m.mask { |
4206 | + db := data[i] & mask |
4207 | + if db != m.pat[i] { |
4208 | + return "" |
4209 | + } |
4210 | + } |
4211 | + return m.ct |
4212 | +} |
4213 | + |
4214 | +type htmlSig []byte |
4215 | + |
4216 | +func (h htmlSig) match(data []byte, firstNonWS int) string { |
4217 | + data = data[firstNonWS:] |
4218 | + if len(data) < len(h)+1 { |
4219 | + return "" |
4220 | + } |
4221 | + for i, b := range h { |
4222 | + db := data[i] |
4223 | + if 'A' <= b && b <= 'Z' { |
4224 | + db &= 0xDF |
4225 | + } |
4226 | + if b != db { |
4227 | + return "" |
4228 | + } |
4229 | + } |
4230 | + // Next byte must be space or right angle bracket. |
4231 | + if db := data[len(h)]; db != ' ' && db != '>' { |
4232 | + return "" |
4233 | + } |
4234 | + return "text/html; charset=utf-8" |
4235 | +} |
4236 | + |
4237 | +type mp4Sig int |
4238 | + |
4239 | +func (mp4Sig) match(data []byte, firstNonWS int) string { |
4240 | + // c.f. section 6.1. |
4241 | + if len(data) < 8 { |
4242 | + return "" |
4243 | + } |
4244 | + boxSize := int(binary.BigEndian.Uint32(data[:4])) |
4245 | + if boxSize%4 != 0 || len(data) < boxSize { |
4246 | + return "" |
4247 | + } |
4248 | + if !bytes.Equal(data[4:8], []byte("ftyp")) { |
4249 | + return "" |
4250 | + } |
4251 | + for st := 8; st < boxSize; st += 4 { |
4252 | + if st == 12 { |
4253 | + // minor version number |
4254 | + continue |
4255 | + } |
4256 | + seg := string(data[st : st+3]) |
4257 | + switch seg { |
4258 | + case "mp4", "iso", "M4V", "M4P", "M4B": |
4259 | + return "video/mp4" |
4260 | + /* The remainder are not in the spec. |
4261 | + case "M4A": |
4262 | + return "audio/mp4" |
4263 | + case "3gp": |
4264 | + return "video/3gpp" |
4265 | + case "jp2": |
4266 | + return "image/jp2" // JPEG 2000 |
4267 | + */ |
4268 | + } |
4269 | + } |
4270 | + return "" |
4271 | +} |
4272 | + |
4273 | +type textSig int |
4274 | + |
4275 | +func (textSig) match(data []byte, firstNonWS int) string { |
4276 | + // c.f. section 5, step 4. |
4277 | + for _, b := range data[firstNonWS:] { |
4278 | + switch { |
4279 | + case 0x00 <= b && b <= 0x08, |
4280 | + b == 0x0B, |
4281 | + 0x0E <= b && b <= 0x1A, |
4282 | + 0x1C <= b && b <= 0x1F: |
4283 | + return "" |
4284 | + } |
4285 | + } |
4286 | + return "text/plain; charset=utf-8" |
4287 | +} |
4288 | |
4289 | === added file 'fork/http/status.go' |
4290 | --- fork/http/status.go 1970-01-01 00:00:00 +0000 |
4291 | +++ fork/http/status.go 2013-07-22 14:27:42 +0000 |
4292 | @@ -0,0 +1,108 @@ |
4293 | +// Copyright 2009 The Go Authors. All rights reserved. |
4294 | +// Use of this source code is governed by a BSD-style |
4295 | +// license that can be found in the LICENSE file. |
4296 | + |
4297 | +package http |
4298 | + |
4299 | +// HTTP status codes, defined in RFC 2616. |
4300 | +const ( |
4301 | + StatusContinue = 100 |
4302 | + StatusSwitchingProtocols = 101 |
4303 | + |
4304 | + StatusOK = 200 |
4305 | + StatusCreated = 201 |
4306 | + StatusAccepted = 202 |
4307 | + StatusNonAuthoritativeInfo = 203 |
4308 | + StatusNoContent = 204 |
4309 | + StatusResetContent = 205 |
4310 | + StatusPartialContent = 206 |
4311 | + |
4312 | + StatusMultipleChoices = 300 |
4313 | + StatusMovedPermanently = 301 |
4314 | + StatusFound = 302 |
4315 | + StatusSeeOther = 303 |
4316 | + StatusNotModified = 304 |
4317 | + StatusUseProxy = 305 |
4318 | + StatusTemporaryRedirect = 307 |
4319 | + |
4320 | + StatusBadRequest = 400 |
4321 | + StatusUnauthorized = 401 |
4322 | + StatusPaymentRequired = 402 |
4323 | + StatusForbidden = 403 |
4324 | + StatusNotFound = 404 |
4325 | + StatusMethodNotAllowed = 405 |
4326 | + StatusNotAcceptable = 406 |
4327 | + StatusProxyAuthRequired = 407 |
4328 | + StatusRequestTimeout = 408 |
4329 | + StatusConflict = 409 |
4330 | + StatusGone = 410 |
4331 | + StatusLengthRequired = 411 |
4332 | + StatusPreconditionFailed = 412 |
4333 | + StatusRequestEntityTooLarge = 413 |
4334 | + StatusRequestURITooLong = 414 |
4335 | + StatusUnsupportedMediaType = 415 |
4336 | + StatusRequestedRangeNotSatisfiable = 416 |
4337 | + StatusExpectationFailed = 417 |
4338 | + StatusTeapot = 418 |
4339 | + |
4340 | + StatusInternalServerError = 500 |
4341 | + StatusNotImplemented = 501 |
4342 | + StatusBadGateway = 502 |
4343 | + StatusServiceUnavailable = 503 |
4344 | + StatusGatewayTimeout = 504 |
4345 | + StatusHTTPVersionNotSupported = 505 |
4346 | +) |
4347 | + |
4348 | +var statusText = map[int]string{ |
4349 | + StatusContinue: "Continue", |
4350 | + StatusSwitchingProtocols: "Switching Protocols", |
4351 | + |
4352 | + StatusOK: "OK", |
4353 | + StatusCreated: "Created", |
4354 | + StatusAccepted: "Accepted", |
4355 | + StatusNonAuthoritativeInfo: "Non-Authoritative Information", |
4356 | + StatusNoContent: "No Content", |
4357 | + StatusResetContent: "Reset Content", |
4358 | + StatusPartialContent: "Partial Content", |
4359 | + |
4360 | + StatusMultipleChoices: "Multiple Choices", |
4361 | + StatusMovedPermanently: "Moved Permanently", |
4362 | + StatusFound: "Found", |
4363 | + StatusSeeOther: "See Other", |
4364 | + StatusNotModified: "Not Modified", |
4365 | + StatusUseProxy: "Use Proxy", |
4366 | + StatusTemporaryRedirect: "Temporary Redirect", |
4367 | + |
4368 | + StatusBadRequest: "Bad Request", |
4369 | + StatusUnauthorized: "Unauthorized", |
4370 | + StatusPaymentRequired: "Payment Required", |
4371 | + StatusForbidden: "Forbidden", |
4372 | + StatusNotFound: "Not Found", |
4373 | + StatusMethodNotAllowed: "Method Not Allowed", |
4374 | + StatusNotAcceptable: "Not Acceptable", |
4375 | + StatusProxyAuthRequired: "Proxy Authentication Required", |
4376 | + StatusRequestTimeout: "Request Timeout", |
4377 | + StatusConflict: "Conflict", |
4378 | + StatusGone: "Gone", |
4379 | + StatusLengthRequired: "Length Required", |
4380 | + StatusPreconditionFailed: "Precondition Failed", |
4381 | + StatusRequestEntityTooLarge: "Request Entity Too Large", |
4382 | + StatusRequestURITooLong: "Request URI Too Long", |
4383 | + StatusUnsupportedMediaType: "Unsupported Media Type", |
4384 | + StatusRequestedRangeNotSatisfiable: "Requested Range Not Satisfiable", |
4385 | + StatusExpectationFailed: "Expectation Failed", |
4386 | + StatusTeapot: "I'm a teapot", |
4387 | + |
4388 | + StatusInternalServerError: "Internal Server Error", |
4389 | + StatusNotImplemented: "Not Implemented", |
4390 | + StatusBadGateway: "Bad Gateway", |
4391 | + StatusServiceUnavailable: "Service Unavailable", |
4392 | + StatusGatewayTimeout: "Gateway Timeout", |
4393 | + StatusHTTPVersionNotSupported: "HTTP Version Not Supported", |
4394 | +} |
4395 | + |
4396 | +// StatusText returns a text for the HTTP status code. It returns the empty |
4397 | +// string if the code is unknown. |
4398 | +func StatusText(code int) string { |
4399 | + return statusText[code] |
4400 | +} |
4401 | |
4402 | === added file 'fork/http/transfer.go' |
4403 | --- fork/http/transfer.go 1970-01-01 00:00:00 +0000 |
4404 | +++ fork/http/transfer.go 2013-07-22 14:27:42 +0000 |
4405 | @@ -0,0 +1,632 @@ |
4406 | +// Copyright 2009 The Go Authors. All rights reserved. |
4407 | +// Use of this source code is governed by a BSD-style |
4408 | +// license that can be found in the LICENSE file. |
4409 | + |
4410 | +package http |
4411 | + |
4412 | +import ( |
4413 | + "bufio" |
4414 | + "bytes" |
4415 | + "errors" |
4416 | + "fmt" |
4417 | + "io" |
4418 | + "io/ioutil" |
4419 | + "net/textproto" |
4420 | + "strconv" |
4421 | + "strings" |
4422 | +) |
4423 | + |
4424 | +// transferWriter inspects the fields of a user-supplied Request or Response, |
4425 | +// sanitizes them without changing the user object and provides methods for |
4426 | +// writing the respective header, body and trailer in wire format. |
4427 | +type transferWriter struct { |
4428 | + Method string |
4429 | + Body io.Reader |
4430 | + BodyCloser io.Closer |
4431 | + ResponseToHEAD bool |
4432 | + ContentLength int64 // -1 means unknown, 0 means exactly none |
4433 | + Close bool |
4434 | + TransferEncoding []string |
4435 | + Trailer Header |
4436 | +} |
4437 | + |
4438 | +func newTransferWriter(r interface{}) (t *transferWriter, err error) { |
4439 | + t = &transferWriter{} |
4440 | + |
4441 | + // Extract relevant fields |
4442 | + atLeastHTTP11 := false |
4443 | + switch rr := r.(type) { |
4444 | + case *Request: |
4445 | + if rr.ContentLength != 0 && rr.Body == nil { |
4446 | + return nil, fmt.Errorf("http: Request.ContentLength=%d with nil Body", rr.ContentLength) |
4447 | + } |
4448 | + t.Method = rr.Method |
4449 | + t.Body = rr.Body |
4450 | + t.BodyCloser = rr.Body |
4451 | + t.ContentLength = rr.ContentLength |
4452 | + t.Close = rr.Close |
4453 | + t.TransferEncoding = rr.TransferEncoding |
4454 | + t.Trailer = rr.Trailer |
4455 | + atLeastHTTP11 = rr.ProtoAtLeast(1, 1) |
4456 | + if t.Body != nil && len(t.TransferEncoding) == 0 && atLeastHTTP11 { |
4457 | + if t.ContentLength == 0 { |
4458 | + // Test to see if it's actually zero or just unset. |
4459 | + var buf [1]byte |
4460 | + n, _ := io.ReadFull(t.Body, buf[:]) |
4461 | + if n == 1 { |
4462 | + // Oh, guess there is data in this Body Reader after all. |
4463 | + // The ContentLength field just wasn't set. |
4464 | + // Stitch the Body back together again, re-attaching our |
4465 | + // consumed byte. |
4466 | + t.ContentLength = -1 |
4467 | + t.Body = io.MultiReader(bytes.NewBuffer(buf[:]), t.Body) |
4468 | + } else { |
4469 | + // Body is actually empty. |
4470 | + t.Body = nil |
4471 | + t.BodyCloser = nil |
4472 | + } |
4473 | + } |
4474 | + if t.ContentLength < 0 { |
4475 | + t.TransferEncoding = []string{"chunked"} |
4476 | + } |
4477 | + } |
4478 | + case *Response: |
4479 | + if rr.Request != nil { |
4480 | + t.Method = rr.Request.Method |
4481 | + } |
4482 | + t.Body = rr.Body |
4483 | + t.BodyCloser = rr.Body |
4484 | + t.ContentLength = rr.ContentLength |
4485 | + t.Close = rr.Close |
4486 | + t.TransferEncoding = rr.TransferEncoding |
4487 | + t.Trailer = rr.Trailer |
4488 | + atLeastHTTP11 = rr.ProtoAtLeast(1, 1) |
4489 | + t.ResponseToHEAD = noBodyExpected(t.Method) |
4490 | + } |
4491 | + |
4492 | + // Sanitize Body,ContentLength,TransferEncoding |
4493 | + if t.ResponseToHEAD { |
4494 | + t.Body = nil |
4495 | + t.TransferEncoding = nil |
4496 | + // ContentLength is expected to hold Content-Length |
4497 | + if t.ContentLength < 0 { |
4498 | + return nil, ErrMissingContentLength |
4499 | + } |
4500 | + } else { |
4501 | + if !atLeastHTTP11 || t.Body == nil { |
4502 | + t.TransferEncoding = nil |
4503 | + } |
4504 | + if chunked(t.TransferEncoding) { |
4505 | + t.ContentLength = -1 |
4506 | + } else if t.Body == nil { // no chunking, no body |
4507 | + t.ContentLength = 0 |
4508 | + } |
4509 | + } |
4510 | + |
4511 | + // Sanitize Trailer |
4512 | + if !chunked(t.TransferEncoding) { |
4513 | + t.Trailer = nil |
4514 | + } |
4515 | + |
4516 | + return t, nil |
4517 | +} |
4518 | + |
4519 | +func noBodyExpected(requestMethod string) bool { |
4520 | + return requestMethod == "HEAD" |
4521 | +} |
4522 | + |
4523 | +func (t *transferWriter) shouldSendContentLength() bool { |
4524 | + if chunked(t.TransferEncoding) { |
4525 | + return false |
4526 | + } |
4527 | + if t.ContentLength > 0 { |
4528 | + return true |
4529 | + } |
4530 | + if t.ResponseToHEAD { |
4531 | + return true |
4532 | + } |
4533 | + // Many servers expect a Content-Length for these methods |
4534 | + if t.Method == "POST" || t.Method == "PUT" { |
4535 | + return true |
4536 | + } |
4537 | + if t.ContentLength == 0 && isIdentity(t.TransferEncoding) { |
4538 | + return true |
4539 | + } |
4540 | + |
4541 | + return false |
4542 | +} |
4543 | + |
4544 | +func (t *transferWriter) WriteHeader(w io.Writer) (err error) { |
4545 | + if t.Close { |
4546 | + _, err = io.WriteString(w, "Connection: close\r\n") |
4547 | + if err != nil { |
4548 | + return |
4549 | + } |
4550 | + } |
4551 | + |
4552 | + // Write Content-Length and/or Transfer-Encoding whose values are a |
4553 | + // function of the sanitized field triple (Body, ContentLength, |
4554 | + // TransferEncoding) |
4555 | + if t.shouldSendContentLength() { |
4556 | + io.WriteString(w, "Content-Length: ") |
4557 | + _, err = io.WriteString(w, strconv.FormatInt(t.ContentLength, 10)+"\r\n") |
4558 | + if err != nil { |
4559 | + return |
4560 | + } |
4561 | + } else if chunked(t.TransferEncoding) { |
4562 | + _, err = io.WriteString(w, "Transfer-Encoding: chunked\r\n") |
4563 | + if err != nil { |
4564 | + return |
4565 | + } |
4566 | + } |
4567 | + |
4568 | + // Write Trailer header |
4569 | + if t.Trailer != nil { |
4570 | + // TODO: At some point, there should be a generic mechanism for |
4571 | + // writing long headers, using HTTP line splitting |
4572 | + io.WriteString(w, "Trailer: ") |
4573 | + needComma := false |
4574 | + for k := range t.Trailer { |
4575 | + k = CanonicalHeaderKey(k) |
4576 | + switch k { |
4577 | + case "Transfer-Encoding", "Trailer", "Content-Length": |
4578 | + return &badStringError{"invalid Trailer key", k} |
4579 | + } |
4580 | + if needComma { |
4581 | + io.WriteString(w, ",") |
4582 | + } |
4583 | + io.WriteString(w, k) |
4584 | + needComma = true |
4585 | + } |
4586 | + _, err = io.WriteString(w, "\r\n") |
4587 | + } |
4588 | + |
4589 | + return |
4590 | +} |
4591 | + |
4592 | +func (t *transferWriter) WriteBody(w io.Writer) (err error) { |
4593 | + var ncopy int64 |
4594 | + |
4595 | + // Write body |
4596 | + if t.Body != nil { |
4597 | + if chunked(t.TransferEncoding) { |
4598 | + cw := newChunkedWriter(w) |
4599 | + _, err = io.Copy(cw, t.Body) |
4600 | + if err == nil { |
4601 | + err = cw.Close() |
4602 | + } |
4603 | + } else if t.ContentLength == -1 { |
4604 | + ncopy, err = io.Copy(w, t.Body) |
4605 | + } else { |
4606 | + ncopy, err = io.Copy(w, io.LimitReader(t.Body, t.ContentLength)) |
4607 | + nextra, err := io.Copy(ioutil.Discard, t.Body) |
4608 | + if err != nil { |
4609 | + return err |
4610 | + } |
4611 | + ncopy += nextra |
4612 | + } |
4613 | + if err != nil { |
4614 | + return err |
4615 | + } |
4616 | + if err = t.BodyCloser.Close(); err != nil { |
4617 | + return err |
4618 | + } |
4619 | + } |
4620 | + |
4621 | + if t.ContentLength != -1 && t.ContentLength != ncopy { |
4622 | + return fmt.Errorf("http: Request.ContentLength=%d with Body length %d", |
4623 | + t.ContentLength, ncopy) |
4624 | + } |
4625 | + |
4626 | + // TODO(petar): Place trailer writer code here. |
4627 | + if chunked(t.TransferEncoding) { |
4628 | + // Last chunk, empty trailer |
4629 | + _, err = io.WriteString(w, "\r\n") |
4630 | + } |
4631 | + |
4632 | + return |
4633 | +} |
4634 | + |
4635 | +type transferReader struct { |
4636 | + // Input |
4637 | + Header Header |
4638 | + StatusCode int |
4639 | + RequestMethod string |
4640 | + ProtoMajor int |
4641 | + ProtoMinor int |
4642 | + // Output |
4643 | + Body io.ReadCloser |
4644 | + ContentLength int64 |
4645 | + TransferEncoding []string |
4646 | + Close bool |
4647 | + Trailer Header |
4648 | +} |
4649 | + |
4650 | +// bodyAllowedForStatus returns whether a given response status code |
4651 | +// permits a body. See RFC2616, section 4.4. |
4652 | +func bodyAllowedForStatus(status int) bool { |
4653 | + switch { |
4654 | + case status >= 100 && status <= 199: |
4655 | + return false |
4656 | + case status == 204: |
4657 | + return false |
4658 | + case status == 304: |
4659 | + return false |
4660 | + } |
4661 | + return true |
4662 | +} |
4663 | + |
4664 | +// msg is *Request or *Response. |
4665 | +func readTransfer(msg interface{}, r *bufio.Reader) (err error) { |
4666 | + t := &transferReader{} |
4667 | + |
4668 | + // Unify input |
4669 | + isResponse := false |
4670 | + switch rr := msg.(type) { |
4671 | + case *Response: |
4672 | + t.Header = rr.Header |
4673 | + t.StatusCode = rr.StatusCode |
4674 | + t.RequestMethod = rr.Request.Method |
4675 | + t.ProtoMajor = rr.ProtoMajor |
4676 | + t.ProtoMinor = rr.ProtoMinor |
4677 | + t.Close = shouldClose(t.ProtoMajor, t.ProtoMinor, t.Header) |
4678 | + isResponse = true |
4679 | + case *Request: |
4680 | + t.Header = rr.Header |
4681 | + t.ProtoMajor = rr.ProtoMajor |
4682 | + t.ProtoMinor = rr.ProtoMinor |
4683 | + // Transfer semantics for Requests are exactly like those for |
4684 | + // Responses with status code 200, responding to a GET method |
4685 | + t.StatusCode = 200 |
4686 | + t.RequestMethod = "GET" |
4687 | + default: |
4688 | + panic("unexpected type") |
4689 | + } |
4690 | + |
4691 | + // Default to HTTP/1.1 |
4692 | + if t.ProtoMajor == 0 && t.ProtoMinor == 0 { |
4693 | + t.ProtoMajor, t.ProtoMinor = 1, 1 |
4694 | + } |
4695 | + |
4696 | + // Transfer encoding, content length |
4697 | + t.TransferEncoding, err = fixTransferEncoding(t.RequestMethod, t.Header) |
4698 | + if err != nil { |
4699 | + return err |
4700 | + } |
4701 | + |
4702 | + t.ContentLength, err = fixLength(isResponse, t.StatusCode, t.RequestMethod, t.Header, t.TransferEncoding) |
4703 | + if err != nil { |
4704 | + return err |
4705 | + } |
4706 | + |
4707 | + // Trailer |
4708 | + t.Trailer, err = fixTrailer(t.Header, t.TransferEncoding) |
4709 | + if err != nil { |
4710 | + return err |
4711 | + } |
4712 | + |
4713 | + // If there is no Content-Length or chunked Transfer-Encoding on a *Response |
4714 | + // and the status is not 1xx, 204 or 304, then the body is unbounded. |
4715 | + // See RFC2616, section 4.4. |
4716 | + switch msg.(type) { |
4717 | + case *Response: |
4718 | + if t.ContentLength == -1 && |
4719 | + !chunked(t.TransferEncoding) && |
4720 | + bodyAllowedForStatus(t.StatusCode) { |
4721 | + // Unbounded body. |
4722 | + t.Close = true |
4723 | + } |
4724 | + } |
4725 | + |
4726 | + // Prepare body reader. ContentLength < 0 means chunked encoding |
4727 | + // or close connection when finished, since multipart is not supported yet |
4728 | + switch { |
4729 | + case chunked(t.TransferEncoding): |
4730 | + t.Body = &body{Reader: newChunkedReader(r), hdr: msg, r: r, closing: t.Close} |
4731 | + case t.ContentLength >= 0: |
4732 | + // TODO: limit the Content-Length. This is an easy DoS vector. |
4733 | + t.Body = &body{Reader: io.LimitReader(r, t.ContentLength), closing: t.Close} |
4734 | + default: |
4735 | + // t.ContentLength < 0, i.e. "Content-Length" not mentioned in header |
4736 | + if t.Close { |
4737 | + // Close semantics (i.e. HTTP/1.0) |
4738 | + t.Body = &body{Reader: r, closing: t.Close} |
4739 | + } else { |
4740 | + // Persistent connection (i.e. HTTP/1.1) |
4741 | + t.Body = &body{Reader: io.LimitReader(r, 0), closing: t.Close} |
4742 | + } |
4743 | + } |
4744 | + |
4745 | + // Unify output |
4746 | + switch rr := msg.(type) { |
4747 | + case *Request: |
4748 | + rr.Body = t.Body |
4749 | + rr.ContentLength = t.ContentLength |
4750 | + rr.TransferEncoding = t.TransferEncoding |
4751 | + rr.Close = t.Close |
4752 | + rr.Trailer = t.Trailer |
4753 | + case *Response: |
4754 | + rr.Body = t.Body |
4755 | + rr.ContentLength = t.ContentLength |
4756 | + rr.TransferEncoding = t.TransferEncoding |
4757 | + rr.Close = t.Close |
4758 | + rr.Trailer = t.Trailer |
4759 | + } |
4760 | + |
4761 | + return nil |
4762 | +} |
4763 | + |
4764 | +// Checks whether chunked is part of the encodings stack |
4765 | +func chunked(te []string) bool { return len(te) > 0 && te[0] == "chunked" } |
4766 | + |
4767 | +// Checks whether the encoding is explicitly "identity". |
4768 | +func isIdentity(te []string) bool { return len(te) == 1 && te[0] == "identity" } |
4769 | + |
4770 | +// Sanitize transfer encoding |
4771 | +func fixTransferEncoding(requestMethod string, header Header) ([]string, error) { |
4772 | + raw, present := header["Transfer-Encoding"] |
4773 | + if !present { |
4774 | + return nil, nil |
4775 | + } |
4776 | + |
4777 | + delete(header, "Transfer-Encoding") |
4778 | + |
4779 | + // Head responses have no bodies, so the transfer encoding |
4780 | + // should be ignored. |
4781 | + if requestMethod == "HEAD" { |
4782 | + return nil, nil |
4783 | + } |
4784 | + |
4785 | + encodings := strings.Split(raw[0], ",") |
4786 | + te := make([]string, 0, len(encodings)) |
4787 | + // TODO: Even though we only support "identity" and "chunked" |
4788 | + // encodings, the loop below is designed with foresight. One |
4789 | + // invariant that must be maintained is that, if present, |
4790 | + // chunked encoding must always come first. |
4791 | + for _, encoding := range encodings { |
4792 | + encoding = strings.ToLower(strings.TrimSpace(encoding)) |
4793 | + // "identity" encoding is not recorded |
4794 | + if encoding == "identity" { |
4795 | + break |
4796 | + } |
4797 | + if encoding != "chunked" { |
4798 | + return nil, &badStringError{"unsupported transfer encoding", encoding} |
4799 | + } |
4800 | + te = te[0 : len(te)+1] |
4801 | + te[len(te)-1] = encoding |
4802 | + } |
4803 | + if len(te) > 1 { |
4804 | + return nil, &badStringError{"too many transfer encodings", strings.Join(te, ",")} |
4805 | + } |
4806 | + if len(te) > 0 { |
4807 | + // Chunked encoding trumps Content-Length. See RFC 2616 |
4808 | + // Section 4.4. Currently len(te) > 0 implies chunked |
4809 | + // encoding. |
4810 | + delete(header, "Content-Length") |
4811 | + return te, nil |
4812 | + } |
4813 | + |
4814 | + return nil, nil |
4815 | +} |
4816 | + |
4817 | +// Determine the expected body length, using RFC 2616 Section 4.4. This |
4818 | +// function is not a method, because ultimately it should be shared by |
4819 | +// ReadResponse and ReadRequest. |
4820 | +func fixLength(isResponse bool, status int, requestMethod string, header Header, te []string) (int64, error) { |
4821 | + |
4822 | + // Logic based on response type or status |
4823 | + if noBodyExpected(requestMethod) { |
4824 | + return 0, nil |
4825 | + } |
4826 | + if status/100 == 1 { |
4827 | + return 0, nil |
4828 | + } |
4829 | + switch status { |
4830 | + case 204, 304: |
4831 | + return 0, nil |
4832 | + } |
4833 | + |
4834 | + // Logic based on Transfer-Encoding |
4835 | + if chunked(te) { |
4836 | + return -1, nil |
4837 | + } |
4838 | + |
4839 | + // Logic based on Content-Length |
4840 | + cl := strings.TrimSpace(header.Get("Content-Length")) |
4841 | + if cl != "" { |
4842 | + n, err := strconv.ParseInt(cl, 10, 64) |
4843 | + if err != nil || n < 0 { |
4844 | + return -1, &badStringError{"bad Content-Length", cl} |
4845 | + } |
4846 | + return n, nil |
4847 | + } else { |
4848 | + header.Del("Content-Length") |
4849 | + } |
4850 | + |
4851 | + if !isResponse && requestMethod == "GET" { |
4852 | + // RFC 2616 doesn't explicitly permit nor forbid an |
4853 | + // entity-body on a GET request so we permit one if |
4854 | + // declared, but we default to 0 here (not -1 below) |
4855 | + // if there's no mention of a body. |
4856 | + return 0, nil |
4857 | + } |
4858 | + |
4859 | + // Logic based on media type. The purpose of the following code is just |
4860 | + // to detect whether the unsupported "multipart/byteranges" is being |
4861 | + // used. A proper Content-Type parser is needed in the future. |
4862 | + if strings.Contains(strings.ToLower(header.Get("Content-Type")), "multipart/byteranges") { |
4863 | + return -1, ErrNotSupported |
4864 | + } |
4865 | + |
4866 | + // Body-EOF logic based on other methods (like closing, or chunked coding) |
4867 | + return -1, nil |
4868 | +} |
4869 | + |
4870 | +// Determine whether to hang up after sending a request and body, or |
4871 | +// receiving a response and body |
4872 | +// 'header' is the request headers |
4873 | +func shouldClose(major, minor int, header Header) bool { |
4874 | + if major < 1 { |
4875 | + return true |
4876 | + } else if major == 1 && minor == 0 { |
4877 | + if !strings.Contains(strings.ToLower(header.Get("Connection")), "keep-alive") { |
4878 | + return true |
4879 | + } |
4880 | + return false |
4881 | + } else { |
4882 | + // TODO: Should split on commas, toss surrounding white space, |
4883 | + // and check each field. |
4884 | + if strings.ToLower(header.Get("Connection")) == "close" { |
4885 | + header.Del("Connection") |
4886 | + return true |
4887 | + } |
4888 | + } |
4889 | + return false |
4890 | +} |
4891 | + |
4892 | +// Parse the trailer header |
4893 | +func fixTrailer(header Header, te []string) (Header, error) { |
4894 | + raw := header.Get("Trailer") |
4895 | + if raw == "" { |
4896 | + return nil, nil |
4897 | + } |
4898 | + |
4899 | + header.Del("Trailer") |
4900 | + trailer := make(Header) |
4901 | + keys := strings.Split(raw, ",") |
4902 | + for _, key := range keys { |
4903 | + key = CanonicalHeaderKey(strings.TrimSpace(key)) |
4904 | + switch key { |
4905 | + case "Transfer-Encoding", "Trailer", "Content-Length": |
4906 | + return nil, &badStringError{"bad trailer key", key} |
4907 | + } |
4908 | + trailer.Del(key) |
4909 | + } |
4910 | + if len(trailer) == 0 { |
4911 | + return nil, nil |
4912 | + } |
4913 | + if !chunked(te) { |
4914 | + // Trailer and no chunking |
4915 | + return nil, ErrUnexpectedTrailer |
4916 | + } |
4917 | + return trailer, nil |
4918 | +} |
4919 | + |
4920 | +// body turns a Reader into a ReadCloser. |
4921 | +// Close ensures that the body has been fully read |
4922 | +// and then reads the trailer if necessary. |
4923 | +type body struct { |
4924 | + io.Reader |
4925 | + hdr interface{} // non-nil (Response or Request) value means read trailer |
4926 | + r *bufio.Reader // underlying wire-format reader for the trailer |
4927 | + closing bool // is the connection to be closed after reading body? |
4928 | + closed bool |
4929 | + |
4930 | + res *response // response writer for server requests, else nil |
4931 | +} |
4932 | + |
4933 | +// ErrBodyReadAfterClose is returned when reading a Request Body after |
4934 | +// the body has been closed. This typically happens when the body is |
4935 | +// read after an HTTP Handler calls WriteHeader or Write on its |
4936 | +// ResponseWriter. |
4937 | +var ErrBodyReadAfterClose = errors.New("http: invalid Read on closed request Body") |
4938 | + |
4939 | +func (b *body) Read(p []byte) (n int, err error) { |
4940 | + if b.closed { |
4941 | + return 0, ErrBodyReadAfterClose |
4942 | + } |
4943 | + n, err = b.Reader.Read(p) |
4944 | + |
4945 | + // Read the final trailer once we hit EOF. |
4946 | + if err == io.EOF && b.hdr != nil { |
4947 | + if e := b.readTrailer(); e != nil { |
4948 | + err = e |
4949 | + } |
4950 | + b.hdr = nil |
4951 | + } |
4952 | + return n, err |
4953 | +} |
4954 | + |
4955 | +var ( |
4956 | + singleCRLF = []byte("\r\n") |
4957 | + doubleCRLF = []byte("\r\n\r\n") |
4958 | +) |
4959 | + |
4960 | +func seeUpcomingDoubleCRLF(r *bufio.Reader) bool { |
4961 | + for peekSize := 4; ; peekSize++ { |
4962 | + // This loop stops when Peek returns an error, |
4963 | + // which it does when r's buffer has been filled. |
4964 | + buf, err := r.Peek(peekSize) |
4965 | + if bytes.HasSuffix(buf, doubleCRLF) { |
4966 | + return true |
4967 | + } |
4968 | + if err != nil { |
4969 | + break |
4970 | + } |
4971 | + } |
4972 | + return false |
4973 | +} |
4974 | + |
4975 | +func (b *body) readTrailer() error { |
4976 | + // The common case, since nobody uses trailers. |
4977 | + buf, _ := b.r.Peek(2) |
4978 | + if bytes.Equal(buf, singleCRLF) { |
4979 | + b.r.ReadByte() |
4980 | + b.r.ReadByte() |
4981 | + return nil |
4982 | + } |
4983 | + |
4984 | + // Make sure there's a header terminator coming up, to prevent |
4985 | + // a DoS with an unbounded size Trailer. It's not easy to |
4986 | + // slip in a LimitReader here, as textproto.NewReader requires |
4987 | + // a concrete *bufio.Reader. Also, we can't get all the way |
4988 | + // back up to our conn's LimitedReader that *might* be backing |
4989 | + // this bufio.Reader. Instead, a hack: we iteratively Peek up |
4990 | + // to the bufio.Reader's max size, looking for a double CRLF. |
4991 | + // This limits the trailer to the underlying buffer size, typically 4kB. |
4992 | + if !seeUpcomingDoubleCRLF(b.r) { |
4993 | + return errors.New("http: suspiciously long trailer after chunked body") |
4994 | + } |
4995 | + |
4996 | + hdr, err := textproto.NewReader(b.r).ReadMIMEHeader() |
4997 | + if err != nil { |
4998 | + return err |
4999 | + } |
5000 | + switch rr := b.hdr.(type) { |
Looks good!
[1]
--- fork/README 1970-01-01 00:00:00 +0000
+++ fork/README 2013-07-22 12:54:30 +0000
@@ -0,0 +1,7 @@
+This directory contains a fork of Go's standard libraries net/http and crypto/tls.
...
Please wrap lines in here at no more than 80 columns. I suggest 72.
[2]
=== added file 'fork/go-use-patched-tls.patch'
--- fork/go-use-patched-tls.patch 1970-01-01 00:00:00 +0000
+++ fork/go-use-patched-tls.patch 2013-07-22 13:11:27 +0000
@@ -0,0 +1,5 @@
+15c15
+<     "launchpad.net/gwacl/fork/tls"
+---
+>     "crypto/tls"
+
I'm not sure this is useful. Without context it's not very
informative. It's also in the wrong direction. I suggest omitting it.
[3]
+func performX509Request(session *x509Session, request *X509Request) (*x509Response, error) {
...
+ response.Header = httpResponse.Header
+ if err != nil {
+ return nil, err
+ }
I don't think that condition is needed here. Could this be one of
those Go-boilerplate-confuses-DVCS moments?
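For point [3], the usual Go idiom is to check the returned error immediately after the call that produced it, before touching any other return value; checking err after already assigning from the response is either dead code or a bug. A minimal sketch of the ordering (doRequest is a hypothetical stand-in for the HTTP call, not gwacl's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// doRequest is a hypothetical stand-in for the HTTP round trip
// inside performX509Request.
func doRequest(fail bool) (string, error) {
	if fail {
		return "", errors.New("request failed")
	}
	return "200 OK", nil
}

func main() {
	// Check err first; only then use the other return values.
	status, err := doRequest(false)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(status)
}
```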