Merge lp:~thumper/juju-core/fix-backlog-limits into lp:~go-bot/juju-core/trunk

Proposed by Tim Penhey
Status: Merged
Approved by: Tim Penhey
Approved revision: no longer in the source branch.
Merged at revision: 2623
Proposed branch: lp:~thumper/juju-core/fix-backlog-limits
Merge into: lp:~go-bot/juju-core/trunk
Prerequisite: lp:~thumper/juju-core/debug-log-api
Diff against target: 424 lines (+100/-84)
6 files modified
state/apiserver/debuglog.go (+24/-6)
state/apiserver/debuglog_internal_test.go (+30/-35)
state/apiserver/debuglog_test.go (+12/-0)
utils/tailer/export_test.go (+4/-1)
utils/tailer/tailer.go (+18/-38)
utils/tailer/tailer_test.go (+12/-4)
To merge this branch: bzr merge lp:~thumper/juju-core/fix-backlog-limits
Reviewer           Review Type   Date Requested   Status
Juju Engineering                                  Pending
Review via email: mp+215333@code.launchpad.net

Commit message

Fix the limit for the debug-log calls.

Unbeknownst to me, the filter method was being called
during the backlog iteration as well. This fix introduces
a callback that the tailer calls when it starts the forward
filtering.

This allows us to count the filter calls only when the
filter is being called to write out the results. We cannot
count the write calls because the tailer uses buffered i/o
there.

https://codereview.appspot.com/85570045/

Description of the change

Fix the limit for the debug-log calls.

Unbeknownst to me, the filter method was being called
during the backlog iteration as well. This fix introduces
a callback that the tailer calls when it starts the forward
filtering.

This allows us to count the filter calls only when the
filter is being called to write out the results. We cannot
count the write calls because the tailer uses buffered i/o
there.

https://codereview.appspot.com/85570045/
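
For orientation, here is a condensed, self-contained Go sketch (not the
shipped code) of the callback-gated counting described above. The names
started, tailStarted, maxLines and lineCount come from the first patchset
quoted further down; matchesFilters is a stand-in for the real
entity/module/level checks, and the branch as finally merged replaces this
mechanism, as the review below explains.

package sketch

import "strings"

// logStream mirrors the fields used by the first patchset.
type logStream struct {
	started   bool // set by the tailer once forward filtering begins
	maxLines  uint
	lineCount uint
}

// tailStarted is the callback handed to the tailer; it fires when the
// tailer switches from backlog positioning to forward filtering.
func (stream *logStream) tailStarted() {
	stream.started = true
}

// filterLine counts a line towards maxLines only after tailStarted has
// fired, so lines visited during the backlog iteration are not counted.
func (stream *logStream) filterLine(line []byte) bool {
	result := matchesFilters(line)
	if stream.started && result && stream.maxLines > 0 {
		stream.lineCount++
		result = stream.lineCount <= stream.maxLines
	}
	return result
}

// matchesFilters stands in for the checkIncludeEntity/checkIncludeModule/
// exclude/checkLevel checks in the real code.
func matchesFilters(line []byte) bool {
	return strings.Contains(string(line), "WARNING")
}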

Revision history for this message
Tim Penhey (thumper) wrote :

Reviewers: mp+215333_code.launchpad.net,

Message:
Please take a look.

Description:
Fix the limit for the debug-log calls.

Unbeknownst to me, the filter method was being called
during the backlog iteration as well. This fix introduces
a callback that the tailer calls when it starts the forward
filtering.

This allows us to count the filter calls only when the
filter is being called to write out the results. We cannot
count the write calls because the tailer uses buffered i/o
there.

https://code.launchpad.net/~thumper/juju-core/fix-backlog-limits/+merge/215333

Requires:
https://code.launchpad.net/~thumper/juju-core/debug-log-api/+merge/215323

(do not edit description out of merge proposal)

Please review this at https://codereview.appspot.com/85570045/

Affected files (+42, -10 lines):
   A [revision details]
   M container/lxc/clonetemplate.go
   M state/apiserver/debuglog.go
   M state/apiserver/debuglog_internal_test.go
   M state/apiserver/debuglog_test.go
   M utils/tailer/tailer.go
   M utils/tailer/tailer_test.go

Index: [revision details]
=== added file '[revision details]'
--- [revision details] 2012-01-01 00:00:00 +0000
+++ [revision details] 2012-01-01 00:00:00 +0000
@@ -0,0 +1,2 @@
+Old revision: <email address hidden>
+New revision: <email address hidden>

Index: container/lxc/clonetemplate.go
=== modified file 'container/lxc/clonetemplate.go'
--- container/lxc/clonetemplate.go 2014-04-04 03:46:13 +0000
+++ container/lxc/clonetemplate.go 2014-04-11 01:57:45 +0000
@@ -179,7 +179,7 @@
   }

   tailWriter := &logTail{tick: time.Now()}
- consoleTailer := tailer.NewTailer(console, tailWriter, nil)
+ consoleTailer := tailer.NewTailer(console, tailWriter, nil, nil)
   defer consoleTailer.Stop()

   // We should wait maybe 1 minute between output?

Index: state/apiserver/debuglog.go
=== modified file 'state/apiserver/debuglog.go'
--- state/apiserver/debuglog.go 2014-04-10 09:43:51 +0000
+++ state/apiserver/debuglog.go 2014-04-11 01:54:49 +0000
@@ -209,18 +209,23 @@
   maxLines uint
   lineCount uint
   fromTheStart bool
+ started bool
  }

  // start the tailer listening to the logFile, and sending the matching
  // lines to the writer.
  func (stream *logStream) start(logFile io.ReadSeeker, writer io.Writer) {
   if stream.fromTheStart {
- stream.logTailer = tailer.NewTailer(logFile, writer, stream.filterLine)
+ stream.logTailer = tailer.NewTailer(logFile, writer, stream.filterLine, stream.tailStarted)
   } else {
- stream.logTailer = tailer.NewTailerBacktrack(logFile, writer, stream.backlog, stream.filterLine)
+ stream.logTailer = tailer.NewTailerBacktrack(logFile, writer, stream.backlog, stream.filterLine, stream.tailStarted)
   }
  }

+func (stream *logStream) tailStarted() {
+ stream.started = true
+}
+
  // loop starts the tailer with the log file and the web socket.
  func (stream *logStream) loop() error {
   select {
@@ -239,7 +244,7 @@
    stream.checkIncludeModule(log) &&
    !stream.exclude(log) &&
    stream.checkLevel(log)
- if result && stream.maxLines > 0 {
+ if stream.started && result && stream.maxLines > 0 {
    stream...


Revision history for this message
Andrew Wilkins (axwalk) wrote :

https://codereview.appspot.com/85570045/diff/1/utils/tailer/tailer.go
File utils/tailer/tailer.go (right):

https://codereview.appspot.com/85570045/diff/1/utils/tailer/tailer.go#newcode56
utils/tailer/tailer.go:56: filter TailerFilterFunc, callback TailerFilterStartedFunc) *Tailer {
Rather than adding *another* parameter that's only relevant to
backtracking, I really think it would be best to ditch
NewTailerBacktrack and create a separate function that does the
backtracking.

https://codereview.appspot.com/85570045/diff/1/utils/tailer/tailer.go#newcode65
utils/tailer/tailer.go:65: filter TailerFilterFunc, callback TailerFilterStartedFunc) *Tailer {
Isn't having a callback fairly pointless here? It's going to be called
immediately.

https://codereview.appspot.com/85570045/
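
A minimal sketch of the split suggested above, assuming the API that the
merged preview diff at the bottom ends up with (an exported
tailer.SeekLastLines plus a forward-only NewTailer); positionThenTail is
purely an illustrative wrapper, not part of the branch.

package sketch

import (
	"io"

	"launchpad.net/juju-core/utils/tailer"
)

// positionThenTail shows the two-step flow: seek back over the last
// `backlog` matching lines first, then start a plain forward tailer.
func positionThenTail(rs io.ReadSeeker, w io.Writer, backlog uint,
	filter tailer.TailerFilterFunc) (*tailer.Tailer, error) {
	if err := tailer.SeekLastLines(rs, backlog, filter); err != nil {
		return nil, err
	}
	return tailer.NewTailer(rs, w, filter), nil
}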

Revision history for this message
Roger Peppe (rogpeppe) wrote :

As far as I can see, this turns a nice Tailer API into
a not-so-nice one, because you've made the filter
function stateful and you rely on it being called in
just the way you expect.

I don't think the filter function should be stateful.

I can see a couple of solutions that might be better:

1) (my preferred option) ditch the server-side maxLines
functionality completely. It could be implemented client-side by
simply closing the connection when the right number
of lines is received, but we've always got head(1),
so I'd suggest just losing it entirely.
2) add maxLines functionality to the Tailer itself.

1) might use a little extra bandwidth,
but I can't see that that would be a great problem
in practice. If it is, the second option could
be implemented later.

https://codereview.appspot.com/85570045/
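
A hypothetical sketch of option 1, enforcing maxLines on the client by
closing the connection once enough lines have been read. This is not what
was merged (the server-side limit was kept); readLimitedLines and the bare
io.ReadCloser connection are illustrative only.

package sketch

import (
	"bufio"
	"io"
)

// readLimitedLines reads at most maxLines lines from the streaming
// connection, then closes it and discards whatever the server keeps
// sending. maxLines == 0 means no client-side limit.
func readLimitedLines(conn io.ReadCloser, maxLines uint) ([]string, error) {
	defer conn.Close()
	var lines []string
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
		if maxLines > 0 && uint(len(lines)) >= maxLines {
			return lines, nil
		}
	}
	return lines, scanner.Err()
}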

Revision history for this message
Tim Penhey (thumper) wrote :

Due to the buffered i/o in the tailer, I'd prefer to keep the line
parsing functionality on the server side.

However, I think that axw's approach of breaking out the backtracking is
a good one, and that is the approach I'll use.

https://codereview.appspot.com/85570045/

Revision history for this message
Tim Penhey (thumper) wrote :
Revision history for this message
Andrew Wilkins (axwalk) wrote :

LGTM with a few suggestions

https://codereview.appspot.com/85570045/diff/20001/state/apiserver/debuglog.go
File state/apiserver/debuglog.go (right):

https://codereview.appspot.com/85570045/diff/20001/state/apiserver/debuglog.go#newcode225
state/apiserver/debuglog.go:225: err := tailer.SeekLastLines(logFile, stream.backlog, stream.filterLine)
return tailer.SeekLastLines(...

https://codereview.appspot.com/85570045/diff/20001/state/apiserver/debuglog.go#newcode258
state/apiserver/debuglog.go:258: if stream.started && result && stream.maxLines > 0 {
It would be better to just split filterLine into something non-counting
and counting, where the latter calls the former and adds the linecount
check. Then you don't need the boolean.

https://codereview.appspot.com/85570045/diff/20001/utils/tailer/tailer.go
File utils/tailer/tailer.go (right):

https://codereview.appspot.com/85570045/diff/20001/utils/tailer/tailer.go#newcode31
utils/tailer/tailer.go:31: // TailerFilterStartedFunc is a callback that is called when the filtering is
Delete this?

https://codereview.appspot.com/85570045/

Revision history for this message
Roger Peppe (rogpeppe) wrote :

> Due to the buffered i/o in the tailer, I'd prefer to keep the line parsing
> functionality on the server side.

I don't quite understand this remark (the buffer is flushed whenever we
get to the end of the file, so it shouldn't make any significant
difference), but splitting out the backtracking code makes me much
happier.

LGTM with a couple of trivial suggestions below.

https://codereview.appspot.com/85570045/diff/20001/state/apiserver/debuglog.go
File state/apiserver/debuglog.go (right):

https://codereview.appspot.com/85570045/diff/20001/state/apiserver/debuglog.go#newcode224
state/apiserver/debuglog.go:224: if !stream.fromTheStart {
if stream.fromTheStart {
     return nil
}
...

saves a negative and a level of indentation.

https://codereview.appspot.com/85570045/diff/20001/state/apiserver/debuglog.go#newcode258
state/apiserver/debuglog.go:258: if stream.started && result && stream.maxLines > 0 {
On 2014/04/14 03:52:40, axw wrote:
> It would be better to just split filterLine into something non-counting and
> counting, where the latter calls the former and adds the linecount check. Then
> you don't need the boolean.

+1

https://codereview.appspot.com/85570045/diff/20001/utils/tailer/tailer.go
File utils/tailer/tailer.go (right):

https://codereview.appspot.com/85570045/diff/20001/utils/tailer/tailer.go#newcode31
utils/tailer/tailer.go:31: // TailerFilterStartedFunc is a callback that is called when the filtering is
On 2014/04/14 03:52:40, axw wrote:
> Delete this?

+1

https://codereview.appspot.com/85570045/diff/20001/utils/tailer/tailer.go#newcode132
utils/tailer/tailer.go:132: // wanted number of filtered lines before the end.
// If filter is non-nil, only lines for which filter returns
// true will be counted.

?

https://codereview.appspot.com/85570045/

Preview Diff

1=== modified file 'state/apiserver/debuglog.go'
2--- state/apiserver/debuglog.go 2014-04-10 09:43:51 +0000
3+++ state/apiserver/debuglog.go 2014-04-14 04:14:21 +0000
4@@ -70,6 +70,11 @@
5 return
6 }
7 defer logFile.Close()
8+ if err := stream.positionLogFile(logFile); err != nil {
9+ h.sendError(socket, fmt.Errorf("cannot position log file: %v", err))
10+ socket.Close()
11+ return
12+ }
13
14 // If we get to here, no more errors to report, so we report a nil
15 // error. This way the first line of the socket is always a json
16@@ -211,14 +216,20 @@
17 fromTheStart bool
18 }
19
20+// positionLogFile will update the internal read position of the logFile to be
21+// at the end of the file or somewhere in the middle if backlog has been specified.
22+func (stream *logStream) positionLogFile(logFile io.ReadSeeker) error {
23+ // Seek to the end, or lines back from the end if we need to.
24+ if !stream.fromTheStart {
25+ return tailer.SeekLastLines(logFile, stream.backlog, stream.filterLine)
26+ }
27+ return nil
28+}
29+
30 // start the tailer listening to the logFile, and sending the matching
31 // lines to the writer.
32 func (stream *logStream) start(logFile io.ReadSeeker, writer io.Writer) {
33- if stream.fromTheStart {
34- stream.logTailer = tailer.NewTailer(logFile, writer, stream.filterLine)
35- } else {
36- stream.logTailer = tailer.NewTailerBacktrack(logFile, writer, stream.backlog, stream.filterLine)
37- }
38+ stream.logTailer = tailer.NewTailer(logFile, writer, stream.countedFilterLine)
39 }
40
41 // loop starts the tailer with the log file and the web socket.
42@@ -235,10 +246,17 @@
43 // filterLine checks the received line for one of the confgured tags.
44 func (stream *logStream) filterLine(line []byte) bool {
45 log := parseLogLine(string(line))
46- result := stream.checkIncludeEntity(log) &&
47+ return stream.checkIncludeEntity(log) &&
48 stream.checkIncludeModule(log) &&
49 !stream.exclude(log) &&
50 stream.checkLevel(log)
51+}
52+
53+// countedFilterLine checks the received line for one of the confgured tags,
54+// and also checks to make sure the stream doesn't send more than the
55+// specified number of lines.
56+func (stream *logStream) countedFilterLine(line []byte) bool {
57+ result := stream.filterLine(line)
58 if result && stream.maxLines > 0 {
59 stream.lineCount++
60 result = stream.lineCount <= stream.maxLines
61
62=== modified file 'state/apiserver/debuglog_internal_test.go'
63--- state/apiserver/debuglog_internal_test.go 2014-04-07 05:11:48 +0000
64+++ state/apiserver/debuglog_internal_test.go 2014-04-14 04:14:21 +0000
65@@ -7,7 +7,6 @@
66
67 import (
68 "bytes"
69- "io"
70 "net/url"
71 "os"
72 "path/filepath"
73@@ -179,36 +178,22 @@
74 "machine-0: date time WARNING juju.foo.bar")), jc.IsFalse)
75 }
76
77-func (s *debugInternalSuite) TestFilterLineWithLimit(c *gc.C) {
78+func (s *debugInternalSuite) TestCountedFilterLineWithLimit(c *gc.C) {
79 stream := &logStream{
80 filterLevel: loggo.INFO,
81 maxLines: 5,
82 }
83 line := []byte("machine-0: date time WARNING juju")
84- c.Check(stream.filterLine(line), jc.IsTrue)
85- c.Check(stream.filterLine(line), jc.IsTrue)
86- c.Check(stream.filterLine(line), jc.IsTrue)
87- c.Check(stream.filterLine(line), jc.IsTrue)
88- c.Check(stream.filterLine(line), jc.IsTrue)
89- c.Check(stream.filterLine(line), jc.IsFalse)
90- c.Check(stream.filterLine(line), jc.IsFalse)
91-}
92-
93-type seekWaitReader struct {
94- io.ReadSeeker
95- wait chan struct{}
96-}
97-
98-func (w *seekWaitReader) Seek(offset int64, whence int) (int64, error) {
99- pos, err := w.ReadSeeker.Seek(offset, whence)
100- if w.wait != nil {
101- close(w.wait)
102- w.wait = nil
103- }
104- return pos, err
105-}
106-
107-func (s *debugInternalSuite) testStreamInternal(c *gc.C, fromTheStart bool, maxLines uint, expected, errMatch string) {
108+ c.Check(stream.countedFilterLine(line), jc.IsTrue)
109+ c.Check(stream.countedFilterLine(line), jc.IsTrue)
110+ c.Check(stream.countedFilterLine(line), jc.IsTrue)
111+ c.Check(stream.countedFilterLine(line), jc.IsTrue)
112+ c.Check(stream.countedFilterLine(line), jc.IsTrue)
113+ c.Check(stream.countedFilterLine(line), jc.IsFalse)
114+ c.Check(stream.countedFilterLine(line), jc.IsFalse)
115+}
116+
117+func (s *debugInternalSuite) testStreamInternal(c *gc.C, fromTheStart bool, backlog, maxLines uint, expected, errMatch string) {
118
119 dir := c.MkDir()
120 logPath := filepath.Join(dir, "logfile.txt")
121@@ -223,17 +208,20 @@
122 line 2
123 line 3
124 `)
125- stream := &logStream{fromTheStart: fromTheStart, maxLines: maxLines}
126+ stream := &logStream{
127+ fromTheStart: fromTheStart,
128+ backlog: backlog,
129+ maxLines: maxLines,
130+ }
131+ err = stream.positionLogFile(logFileReader)
132+ c.Assert(err, gc.IsNil)
133 output := &bytes.Buffer{}
134- waitReader := &seekWaitReader{logFileReader, make(chan struct{})}
135- stream.start(waitReader, output)
136+ stream.start(logFileReader, output)
137
138 go func() {
139 defer stream.tomb.Done()
140 stream.tomb.Kill(stream.loop())
141 }()
142- // wait for the tailer to have started tailing before writing more
143- <-waitReader.wait
144
145 logFile.WriteString("line 4\n")
146 logFile.WriteString("line 5\n")
147@@ -265,7 +253,7 @@
148 line 4
149 line 5
150 `
151- s.testStreamInternal(c, true, 0, expected, "")
152+ s.testStreamInternal(c, true, 0, 0, expected, "")
153 }
154
155 func (s *debugInternalSuite) TestLogStreamLoopFromTheStartMaxLines(c *gc.C) {
156@@ -273,21 +261,28 @@
157 line 2
158 line 3
159 `
160- s.testStreamInternal(c, true, 3, expected, "max lines reached")
161+ s.testStreamInternal(c, true, 0, 3, expected, "max lines reached")
162 }
163
164 func (s *debugInternalSuite) TestLogStreamLoopJustTail(c *gc.C) {
165 expected := `line 4
166 line 5
167 `
168- s.testStreamInternal(c, false, 0, expected, "")
169+ s.testStreamInternal(c, false, 0, 0, expected, "")
170+}
171+
172+func (s *debugInternalSuite) TestLogStreamLoopBackOneLimitTwo(c *gc.C) {
173+ expected := `line 3
174+line 4
175+`
176+ s.testStreamInternal(c, false, 1, 2, expected, "max lines reached")
177 }
178
179 func (s *debugInternalSuite) TestLogStreamLoopTailMaxLinesNotYetReached(c *gc.C) {
180 expected := `line 4
181 line 5
182 `
183- s.testStreamInternal(c, false, 3, expected, "")
184+ s.testStreamInternal(c, false, 0, 3, expected, "")
185 }
186
187 func assertStreamParams(c *gc.C, obtained, expected *logStream) {
188
189=== modified file 'state/apiserver/debuglog_test.go'
190--- state/apiserver/debuglog_test.go 2014-04-10 09:43:51 +0000
191+++ state/apiserver/debuglog_test.go 2014-04-14 04:14:21 +0000
192@@ -122,6 +122,18 @@
193 s.assertWebsocketClosed(c, reader)
194 }
195
196+func (s *debugLogSuite) TestBacklogWithMaxLines(c *gc.C) {
197+ s.writeLogLines(c, 10)
198+
199+ reader := s.openWebsocket(c, url.Values{"backlog": {"5"}, "maxLines": {"10"}})
200+ s.assertLogFollowing(c, reader)
201+ s.writeLogLines(c, logLineCount)
202+
203+ linesRead := s.readLogLines(c, reader, 10)
204+ c.Assert(linesRead, jc.DeepEquals, logLines[5:15])
205+ s.assertWebsocketClosed(c, reader)
206+}
207+
208 func (s *debugLogSuite) TestFilter(c *gc.C) {
209 s.ensureLogFile(c)
210
211
212=== modified file 'utils/tailer/export_test.go'
213--- utils/tailer/export_test.go 2013-12-11 17:11:30 +0000
214+++ utils/tailer/export_test.go 2014-04-14 04:14:21 +0000
215@@ -3,4 +3,7 @@
216
217 package tailer
218
219-var NewTestTailer = newTailer
220+var (
221+ BufferSize = &bufferSize
222+ NewTestTailer = newTailer
223+)
224
225=== modified file 'utils/tailer/tailer.go'
226--- utils/tailer/tailer.go 2014-04-04 04:35:44 +0000
227+++ utils/tailer/tailer.go 2014-04-14 04:14:21 +0000
228@@ -14,12 +14,13 @@
229 )
230
231 const (
232- bufferSize = 4096
233- polltime = time.Second
234- delimiter = '\n'
235+ defaultBufferSize = 4096
236+ polltime = time.Second
237+ delimiter = '\n'
238 )
239
240 var (
241+ bufferSize = defaultBufferSize
242 delimiters = []byte{delimiter}
243 )
244
245@@ -35,20 +36,8 @@
246 reader *bufio.Reader
247 writeCloser io.WriteCloser
248 writer *bufio.Writer
249- lines uint
250 filter TailerFilterFunc
251- bufferSize int
252 polltime time.Duration
253- lookBack bool
254-}
255-
256-// NewTailerBacktrack starts a Tailer which reads strings from the passed
257-// ReadSeeker line by line. If a filter function is specified the read
258-// lines are filtered. The matching lines are written to the passed
259-// Writer. The reading begins the specified number of matching lines
260-// from the end.
261-func NewTailerBacktrack(readSeeker io.ReadSeeker, writer io.Writer, lines uint, filter TailerFilterFunc) *Tailer {
262- return newTailer(readSeeker, writer, lines, filter, bufferSize, polltime, true)
263 }
264
265 // NewTailer starts a Tailer which reads strings from the passed
266@@ -56,22 +45,19 @@
267 // lines are filtered. The matching lines are written to the passed
268 // Writer.
269 func NewTailer(readSeeker io.ReadSeeker, writer io.Writer, filter TailerFilterFunc) *Tailer {
270- return newTailer(readSeeker, writer, 0, filter, bufferSize, polltime, false)
271+ return newTailer(readSeeker, writer, filter, polltime)
272 }
273
274 // newTailer starts a Tailer like NewTailer but allows the setting of
275 // the read buffer size and the time between pollings for testing.
276-func newTailer(readSeeker io.ReadSeeker, writer io.Writer, lines uint, filter TailerFilterFunc,
277- bufferSize int, polltime time.Duration, lookBack bool) *Tailer {
278+func newTailer(readSeeker io.ReadSeeker, writer io.Writer,
279+ filter TailerFilterFunc, polltime time.Duration) *Tailer {
280 t := &Tailer{
281 readSeeker: readSeeker,
282 reader: bufio.NewReaderSize(readSeeker, bufferSize),
283 writer: bufio.NewWriter(writer),
284- lines: lines,
285 filter: filter,
286- bufferSize: bufferSize,
287 polltime: polltime,
288- lookBack: lookBack,
289 }
290 go func() {
291 defer t.tomb.Done()
292@@ -107,12 +93,6 @@
293 // writer and then polls for more data to write it to the
294 // writer too.
295 func (t *Tailer) loop() error {
296- // Position the readSeeker.
297- if t.lookBack {
298- if err := t.seekLastLines(); err != nil {
299- return err
300- }
301- }
302 // Start polling.
303 // TODO(mue) 2013-12-06
304 // Handling of read-seeker/files being truncated during
305@@ -144,26 +124,26 @@
306 }
307 }
308
309-// seekLastLines sets the read position of the ReadSeeker to the
310+// SeekLastLines sets the read position of the ReadSeeker to the
311 // wanted number of filtered lines before the end.
312-func (t *Tailer) seekLastLines() error {
313- offset, err := t.readSeeker.Seek(0, os.SEEK_END)
314+func SeekLastLines(readSeeker io.ReadSeeker, lines uint, filter TailerFilterFunc) error {
315+ offset, err := readSeeker.Seek(0, os.SEEK_END)
316 if err != nil {
317 return err
318 }
319- if t.lines == 0 {
320+ if lines == 0 {
321 // We are done, just seeking to the end is sufficient.
322 return nil
323 }
324 seekPos := int64(0)
325 found := uint(0)
326- buffer := make([]byte, t.bufferSize)
327+ buffer := make([]byte, bufferSize)
328 SeekLoop:
329 for offset > 0 {
330 // buffer contains the data left over from the
331 // previous iteration.
332 space := cap(buffer) - len(buffer)
333- if space < t.bufferSize {
334+ if space < bufferSize {
335 // Grow buffer.
336 newBuffer := make([]byte, len(buffer), cap(buffer)*2)
337 copy(newBuffer, buffer)
338@@ -180,11 +160,11 @@
339 copy(buffer[space:cap(buffer)], buffer)
340 buffer = buffer[0 : len(buffer)+space]
341 offset -= int64(space)
342- _, err := t.readSeeker.Seek(offset, os.SEEK_SET)
343+ _, err := readSeeker.Seek(offset, os.SEEK_SET)
344 if err != nil {
345 return err
346 }
347- _, err = io.ReadFull(t.readSeeker, buffer[0:space])
348+ _, err = io.ReadFull(readSeeker, buffer[0:space])
349 if err != nil {
350 return err
351 }
352@@ -207,9 +187,9 @@
353 break
354 }
355 start++
356- if t.isValid(buffer[start:end]) {
357+ if filter == nil || filter(buffer[start:end]) {
358 found++
359- if found >= t.lines {
360+ if found >= lines {
361 seekPos = offset + int64(start)
362 break SeekLoop
363 }
364@@ -221,7 +201,7 @@
365 buffer = buffer[0:end]
366 }
367 // Final positioning.
368- t.readSeeker.Seek(seekPos, os.SEEK_SET)
369+ readSeeker.Seek(seekPos, os.SEEK_SET)
370 return nil
371 }
372
373
374=== modified file 'utils/tailer/tailer_test.go'
375--- utils/tailer/tailer_test.go 2014-04-04 03:46:13 +0000
376+++ utils/tailer/tailer_test.go 2014-04-14 04:14:21 +0000
377@@ -15,6 +15,7 @@
378 gc "launchpad.net/gocheck"
379
380 "launchpad.net/juju-core/testing"
381+ "launchpad.net/juju-core/testing/testbase"
382 "launchpad.net/juju-core/utils/tailer"
383 )
384
385@@ -22,9 +23,11 @@
386 gc.TestingT(t)
387 }
388
389-type tailerSuite struct{}
390+type tailerSuite struct {
391+ testbase.LoggingSuite
392+}
393
394-var _ = gc.Suite(tailerSuite{})
395+var _ = gc.Suite(&tailerSuite{})
396
397 var alphabetData = []string{
398 "alpha alpha\n",
399@@ -326,7 +329,7 @@
400 },
401 }}
402
403-func (tailerSuite) TestTailer(c *gc.C) {
404+func (s *tailerSuite) TestTailer(c *gc.C) {
405 for i, test := range tests {
406 c.Logf("Test #%d) %s", i, test.description)
407 bufferSize := test.bufferSize
408@@ -334,10 +337,15 @@
409 // Default value.
410 bufferSize = 4096
411 }
412+ s.PatchValue(tailer.BufferSize, bufferSize)
413 reader, writer := io.Pipe()
414 sigc := make(chan struct{}, 1)
415 rs := startReadSeeker(c, test.data, test.initialLinesWritten, sigc)
416- tailer := tailer.NewTestTailer(rs, writer, test.initialLinesRequested, test.filter, bufferSize, 2*time.Millisecond, !test.fromStart)
417+ if !test.fromStart {
418+ err := tailer.SeekLastLines(rs, test.initialLinesRequested, test.filter)
419+ c.Assert(err, gc.IsNil)
420+ }
421+ tailer := tailer.NewTestTailer(rs, writer, test.filter, 2*time.Millisecond)
422 linec := startReading(c, tailer, reader, writer)
423
424 // Collect initial data.
