Status: Needs review
Proposed branch: lp:~charlesk/keeper/wtf
Merge into: lp:keeper/devel
Diff against target: 513 lines (+260/-117), 2 files modified:
  src/tar/tar-creator.cpp (+123/-64)
  tests/unit/tar/tar-creator-test.cpp (+137/-53)
To merge this branch: bzr merge lp:~charlesk/keeper/wtf
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
unity-api-1-bot | continuous-integration | | Approve
Unity API Team | | | Pending
Review via email: mp+305854@code.launchpad.net
Commit message
Another testing branch for the CI bot, do not review/approve/land
Description of the change
Another testing branch for the CI bot, do not review/approve/land
unity-api-1-bot wrote:
- 111. By Charles Kerr: split tar-creator-test out from one big test to several more focused tests
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:
Executed test runs: all SUCCESS.
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:111
Executed test runs: all SUCCESS.
- 112. By Charles Kerr: refactor tar-creator-test a bit more to remove redundant code
- 113. By Charles Kerr: in tar-creator, fix dtor issue by moving field step_buf_ ahead of step_archive_ in instantiation order
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:113
Executed test runs: all SUCCESS.
- 114. By Charles Kerr: in TarCreator::Impl::Impl(), remove redundant variable initialization
- 115. By Charles Kerr: in tar-creator, reintroduce wrapped_archive_write_new(), which is a convenience function that wraps archive_write_new() in a shared_ptr
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:115
Executed test runs: all SUCCESS.
- 116. By Charles Kerr: in TarCreator::Impl::step(), clear the step_archive_ smart_ptr after the last step is done
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:116
Executed test runs: all SUCCESS.
- 117. By Charles Kerr: in TarCreator::Impl, add wrapped_archive_write_header(), a helper that calls archive_write_header() and handles return values like ARCHIVE_RETRY and ARCHIVE_WARN
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:117
Executed test runs: all SUCCESS.
- 118. By Charles Kerr: in TarCreator::Impl, add wrapped_archive_write_data(), a helper that calls archive_write_data() and handles return values like ARCHIVE_RETRY and ARCHIVE_WARN
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:118
Executed test runs: all SUCCESS.
- 119. By Charles Kerr: in TarCreator::Impl, add wrapped_archive_write_close(), a helper that calls archive_write_close() and handles return values like ARCHIVE_RETRY and ARCHIVE_WARN
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:119
Executed test runs: all SUCCESS.
- 120. By Charles Kerr: in TarCreator::Impl::step(), fix looping bug that caused too much data to be read in a single pass
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:120
Executed test runs: all SUCCESS.
- 121. By Charles Kerr: experimental commit: re-introduce TarCreatorFixture::CreateCompressed
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:121
Executed test runs: all SUCCESS.
- 122. By Charles Kerr: sync with trunk
- 123. By Charles Kerr: in tar-creator-test, add variations of making a tar from an empty file set
unity-api-1-bot wrote:

PASSED: Continuous integration, rev:123
Executed test runs: all SUCCESS.
Unmerged revisions
- 123. By Charles Kerr: in tar-creator-test, add variations of making a tar from an empty file set
- 122. By Charles Kerr: sync with trunk
- 121. By Charles Kerr: experimental commit: re-introduce TarCreatorFixture::CreateCompressed
- 120. By Charles Kerr: in TarCreator::Impl::step(), fix looping bug that caused too much data to be read in a single pass
- 119. By Charles Kerr: in TarCreator::Impl, add wrapped_archive_write_close(), a helper that calls archive_write_close() and handles return values like ARCHIVE_RETRY and ARCHIVE_WARN
- 118. By Charles Kerr: in TarCreator::Impl, add wrapped_archive_write_data(), a helper that calls archive_write_data() and handles return values like ARCHIVE_RETRY and ARCHIVE_WARN
- 117. By Charles Kerr: in TarCreator::Impl, add wrapped_archive_write_header(), a helper that calls archive_write_header() and handles return values like ARCHIVE_RETRY and ARCHIVE_WARN
- 116. By Charles Kerr: in TarCreator::Impl::step(), clear the step_archive_ smart_ptr after the last step is done
- 115. By Charles Kerr: in tar-creator, reintroduce wrapped_archive_write_new(), which is a convenience function that wraps archive_write_new() in a shared_ptr
- 114. By Charles Kerr: in TarCreator::Impl::Impl(), remove redundant variable initialization
Preview Diff
=== modified file 'src/tar/tar-creator.cpp'
--- src/tar/tar-creator.cpp	2016-09-05 18:38:36 +0000
+++ src/tar/tar-creator.cpp	2016-09-17 00:21:32 +0000
@@ -48,10 +48,6 @@
     Impl(const QStringList& filenames, bool compress)
         : filenames_(filenames)
         , compress_(compress)
-        , step_archive_()
-        , step_filenum_(-1)
-        , step_file_()
-        , step_buf_()
     {
     }

@@ -63,15 +59,13 @@
     bool step(std::vector<char>& fillme)
     {
         step_buf_.resize(0);
-        bool success = true;
+        bool success {true};

         // if this is the first step, create an archive
         if (!step_archive_)
         {
-            step_archive_.reset(archive_write_new(), [](struct archive* a){archive_write_free(a);});
-            archive_write_set_format_pax(step_archive_.get());
-            if (compress_)
-                archive_write_add_filter_xz(step_archive_.get());
+            qDebug() << "new archive";
+            step_archive_ = wrapped_archive_write_new(compress_);
             archive_write_open(step_archive_.get(), &step_buf_, nullptr, append_bytes_write_cb, nullptr);

             step_file_.reset();
@@ -86,13 +80,15 @@
             if (step_filenum_ >= filenames_.size()) // tried to read past the end
             {
                 success = false;
+                step_archive_.reset();
                 break;
             }

             // step to next file
             if (++step_filenum_ == filenames_.size()) // we made it to the end!
             {
-                archive_write_close(step_archive_.get());
+                qDebug() << "finished last file, let's close the archive";
+                wrapped_archive_write_close(step_archive_.get());
                 break;
             }

@@ -110,28 +106,16 @@
                 const auto n = step_file_->read(inbuf, sizeof(inbuf));
                 if (n > 0) // got data
                 {
-                    for(;;) {
-                        if (archive_write_data(step_archive_.get(), inbuf, size_t(n)) != -1)
-                            break;
-                        const auto err = archive_errno(step_archive_.get());
-                        if (err == ARCHIVE_RETRY)
-                            continue;
-                        auto errstr = QString::fromUtf8("Error adding data for '%1': %2 (%3)")
-                                          .arg(step_file_->fileName())
-                                          .arg(archive_error_string(step_archive_.get()))
-                                          .arg(err);
-                        qWarning() << qPrintable(errstr);
-                        if (err != ARCHIVE_WARN)
-                            throw std::runtime_error(errstr.toStdString());
-                    }
+                    wrapped_archive_write_data(step_archive_.get(), inbuf, size_t(n), step_file_->fileName());
+                    break;
                 }
                 else if (n < 0) // read error
                 {
                     success = false;
-                    auto errstr = QStringLiteral("read()ing %1 returned %2 (%3)")
+                    auto const errstr = QStringLiteral("read()ing %1 returned %2 (%3)")
                                       .arg(step_file_->fileName())
                                       .arg(n)
                                       .arg(step_file_->errorString());
                     qWarning() << errstr;
                     throw std::runtime_error(errstr.toStdString());
                 }
@@ -154,9 +138,9 @@
                            const void * vsource,
                            size_t len)
     {
-        auto& target = *static_cast<std::vector<char>*>(vtarget);
-        const auto& source = static_cast<const char*>(vsource);
-        target.insert(target.end(), source, source+len);
+        auto target = static_cast<std::vector<char>*>(vtarget);
+        auto const source = static_cast<const char*>(vsource);
+        target->insert(target->end(), source, source+len);
         return ssize_t(len);
     }

@@ -165,12 +149,13 @@
                           const void *,
                           size_t len)
     {
-        *static_cast<ssize_t*>(userdata) += len;
-        return ssize_t(len);
+        auto const sslen = ssize_t(len);
+        *static_cast<ssize_t*>(userdata) += sslen;
+        return sslen;
     }

-    static void add_file_header_to_archive(struct archive* archive,
-                                           const QString& filename)
+    static void add_file_header_to_archive(struct archive * archive,
+                                           QString const & filename)
     {
         struct stat st;
         const auto filename_utf8 = filename.toUtf8();
@@ -180,42 +165,119 @@
         archive_entry_copy_stat(entry, &st);
         archive_entry_set_pathname(entry, filename_utf8.constData());

-        int ret;
-        do {
-            ret = archive_write_header(archive, entry);
-            if ((ret==ARCHIVE_WARN) || (ret==ARCHIVE_FAILED) || (ret==ARCHIVE_FATAL))
-            {
-                auto errstr = QString::fromUtf8("Error adding header for '%1': %2 (%3)")
-                                  .arg(filename)
-                                  .arg(archive_error_string(archive))
-                                  .arg(ret);
-                qWarning() << qPrintable(errstr);
-                if ((ret==ARCHIVE_FATAL) || (ret==ARCHIVE_FAILED))
-                    throw std::runtime_error(errstr.toStdString());
-            }
-        } while (ret == ARCHIVE_RETRY);
+        wrapped_archive_write_header(archive, entry, filename);

         archive_entry_free(entry);
     }

+    static void wrapped_archive_write_header(struct archive * archive,
+                                             struct archive_entry * entry,
+                                             QString const & source)
+    {
+        for (;;)
+        {
+            auto const err = archive_write_header(archive, entry);
+            if (err == ARCHIVE_OK)
+                break;
+
+            if (err == ARCHIVE_RETRY)
+                continue;
+
+            auto const errstr = QStringLiteral("Error adding header for '%1': %2 (%3)")
+                                    .arg(source)
+                                    .arg(archive_error_string(archive))
+                                    .arg(err);
+            qWarning() << qPrintable(errstr);
+            if (err == ARCHIVE_WARN)
+                break;
+
+            throw std::runtime_error(errstr.toStdString());
+        }
+    }
+
+    static void wrapped_archive_write_data(struct archive * archive,
+                                           void const * buf_in,
+                                           size_t bufsize_in,
+                                           QString const source)
+    {
+        auto bufsize = bufsize_in;
+        auto buf = static_cast<char const*>(buf_in);
+
+        while (bufsize > 0)
+        {
+            auto const n_written = archive_write_data(archive, buf, bufsize);
+
+            if (n_written != -1)
+            {
+                bufsize -= n_written;
+                buf += n_written;
+                continue;
+            }
+
+            auto const err = archive_errno(archive);
+            if (err == ARCHIVE_RETRY)
+                continue;
+
+            auto const errstr = QStringLiteral("Error adding data for '%1': %2 (%3)")
+                                    .arg(source)
+                                    .arg(archive_error_string(archive))
+                                    .arg(err);
+            qWarning() << qPrintable(errstr);
+            if (err == ARCHIVE_WARN)
+                continue;
+
+            throw std::runtime_error(errstr.toStdString());
+        }
+    }
+
+    static std::shared_ptr<struct archive> wrapped_archive_write_new(bool compress)
+    {
+        auto archive = archive_write_new();
+        archive_write_set_format_pax(archive);
+        archive_write_set_bytes_per_block(archive, 0);
+        if (compress)
+            archive_write_add_filter_xz(archive);
+        return std::shared_ptr<struct archive>(archive, [](struct archive* a){archive_write_free(a);});
+    }
+
+    static void wrapped_archive_write_close(struct archive* archive)
+    {
+        for (;;)
+        {
+            auto const err = archive_write_close(archive);
+            if (err == ARCHIVE_OK)
+                break;
+
+            if (err == ARCHIVE_RETRY)
+                continue;
+
+            auto const errstr = QStringLiteral("Error calling archive_write_close(): %1 (%2)")
+                                    .arg(archive_error_string(archive))
+                                    .arg(err);
+            qWarning() << qPrintable(errstr);
+            if (err == ARCHIVE_WARN)
+                break;
+
+            throw std::runtime_error(errstr.toStdString());
+        }
+    }
+
     ssize_t calculate_uncompressed_size() const
     {
         ssize_t archive_size {};

-        auto a = archive_write_new();
-        archive_write_set_format_pax(a);
-        archive_write_open(a, &archive_size, nullptr, count_bytes_write_cb, nullptr);
+        auto a = wrapped_archive_write_new(false);
+        archive_write_open(a.get(), &archive_size, nullptr, count_bytes_write_cb, nullptr);

         for (const auto& filename : filenames_)
         {
-            add_file_header_to_archive(a, filename);
+            add_file_header_to_archive(a.get(), filename);

             // libarchive pads any missing data,
             // so we don't need to call archive_write_data()
         }

-        archive_write_close(a);
-        archive_write_free(a);
+        wrapped_archive_write_close(a.get());
         return archive_size;
     }

@@ -223,14 +285,12 @@
     {
         ssize_t archive_size {};

-        auto a = archive_write_new();
-        archive_write_set_format_pax(a);
-        archive_write_add_filter_xz(a);
-        archive_write_open(a, &archive_size, nullptr, count_bytes_write_cb, nullptr);
+        auto a = wrapped_archive_write_new(true);
+        archive_write_open(a.get(), &archive_size, nullptr, count_bytes_write_cb, nullptr);

         for (const auto& filename : filenames_)
         {
-            add_file_header_to_archive(a, filename);
+            add_file_header_to_archive(a.get(), filename);

             // process the file
             QFile file(filename);
@@ -242,7 +302,7 @@
                 if (n_read == 0)
                     break;
                 if (n_read > 0)
-                    archive_write_data(a, buf, size_t(n_read));
+                    wrapped_archive_write_data(a.get(), buf, size_t(n_read), filename);
                 if (n_read < 0) {
                     auto errstr = QStringLiteral("Reading '%1' returned %2 (%3)")
                                       .arg(file.fileName())
@@ -254,18 +314,17 @@
             }
         }

-        archive_write_close(a);
-        archive_write_free(a);
+        wrapped_archive_write_close(a.get());
        return archive_size;
    }

    const QStringList filenames_;
    const bool compress_ {};

+    std::vector<char> step_buf_;
    std::shared_ptr<struct archive> step_archive_;
    int step_filenum_ {-1};
    QSharedPointer<QFile> step_file_;
-    std::vector<char> step_buf_;
 };

 /**

=== modified file 'tests/unit/tar/tar-creator-test.cpp'
--- tests/unit/tar/tar-creator-test.cpp	2016-09-12 15:28:06 +0000
+++ tests/unit/tar/tar-creator-test.cpp	2016-09-17 00:21:32 +0000
@@ -48,67 +48,151 @@
     {
     }

-};
-
-/***
-****
-***/
-
-TEST_F(TarCreatorFixture, Create)
-{
-    static constexpr int n_runs {5};
-
-    for (int i=0; i<n_runs; ++i)
+    void test_tar_creation(int min_files,
+                           int max_files,
+                           int max_filesize,
+                           int max_dirs,
+                           bool compressed,
+                           int n_runs)
     {
-        for (const auto compression_enabled : std::array<bool,2>{false, true})
+        for (int i=0; i<n_runs; ++i)
         {
-            // build a directory full of random files
             QTemporaryDir in;
             QDir indir(in.path());
-            FileUtils::fillTemporaryDirectory(in.path());
-
-            // create the tar creator
-            EXPECT_TRUE(QDir::setCurrent(in.path()));
-            QStringList files;
-            for (auto file : FileUtils::getFilesRecursively(in.path()))
-                files += indir.relativeFilePath(file);
-            TarCreator tar_creator(files, compression_enabled);
-
-            // simple sanity check on its size estimate
-            const auto estimated_size = tar_creator.calculate_size();
-            const auto filesize_sum = std::accumulate(
+            FileUtils::fillTemporaryDirectory(in.path(), min_files, max_files, max_filesize, max_dirs);
+            test_tar_creation(indir, compressed);
+        }
+    }
+
+    void test_tar_creation(QDir const& in,
+                           bool compression_enabled)
+    {
+        qDebug() << Q_FUNC_INFO;
+
+        // create the tar creator
+        EXPECT_TRUE(QDir::setCurrent(in.path()));
+        QStringList files;
+        for (auto& file : FileUtils::getFilesRecursively(in.path())) {
+            qDebug() << file;
+            files += in.relativeFilePath(file);
+        }
+        TarCreator tar_creator(files, compression_enabled);
+
+        // test that the size calculator returns a consistent value
+        const auto calculated_size = tar_creator.calculate_size();
+        for (int i=0, n=5; i<n; ++i)
+            EXPECT_EQ(calculated_size, tar_creator.calculate_size());
+
+        // if uncompressed, test that the tar is at least as large as the source files
+        if (!compression_enabled) {
+            auto const filesize_sum = std::accumulate(
                 files.begin(),
                 files.end(),
                 0,
                 [](ssize_t sum, QString const& filename){return sum + QFileInfo(filename).size();}
             );
-            if (!compression_enabled)
-                ASSERT_GT(estimated_size, filesize_sum);
+            EXPECT_GT(calculated_size, filesize_sum);
+        }

-            // does it match the actual size?
+        // create the tar
         size_t actual_size {};
         std::vector<char> contents, step;
         while (tar_creator.step(step)) {
             contents.insert(contents.end(), step.begin(), step.end());
             actual_size += step.size();
         }
-        ASSERT_EQ(estimated_size, actual_size);
+        EXPECT_EQ(calculated_size, actual_size);

         // untar it
         QTemporaryDir out;
-        QDir outdir(out.path());
+        QDir const outdir(out.path());
         QFile tarfile(outdir.filePath("tmp.tar"));
         tarfile.open(QIODevice::WriteOnly);
         tarfile.write(contents.data(), contents.size());
         tarfile.close();
         QProcess untar;
         untar.setWorkingDirectory(outdir.path());
         untar.start("tar", QStringList() << "xf" << tarfile.fileName());
         EXPECT_TRUE(untar.waitForFinished()) << qPrintable(untar.errorString());

         // compare it to the original
         EXPECT_TRUE(tarfile.remove());
         EXPECT_TRUE(FileUtils::compareDirectories(in.path(), out.path()));
-        }
     }
+};
+
+/***
+****
+***/
+
+TEST_F(TarCreatorFixture, CreateUncompressedOfNothing)
+{
+    static constexpr int min_files {0};
+    static constexpr int max_files {min_files};
+    static constexpr int max_filesize {0};
+    static constexpr int max_dirs {0};
+    static constexpr bool compressed {false};
+    static constexpr int n_runs {5};
+
+    test_tar_creation(min_files, max_files, max_filesize, max_dirs, compressed, n_runs);
+}
+
+TEST_F(TarCreatorFixture, CreateCompressedOfNothing)
+{
+    static constexpr int min_files {0};
+    static constexpr int max_files {min_files};
+    static constexpr int max_filesize {0};
+    static constexpr int max_dirs {0};
+    static constexpr bool compressed {true};
+    static constexpr int n_runs {5};
+
+    test_tar_creation(min_files, max_files, max_filesize, max_dirs, compressed, n_runs);
+}
+
+TEST_F(TarCreatorFixture, CreateUncompressedOfSingleFile)
+{
+    static constexpr int min_files {1};
+    static constexpr int max_files {min_files};
+    static constexpr int max_filesize {1024};
+    static constexpr int max_dirs {0};
+    static constexpr bool compressed {false};
+    static constexpr int n_runs {5};
+
+    test_tar_creation(min_files, max_files, max_filesize, max_dirs, compressed, n_runs);
+}
+
+TEST_F(TarCreatorFixture, CreateCompressedOfSingleFile)
+{
+    static constexpr int min_files {1};
+    static constexpr int max_files {min_files};
+    static constexpr int max_filesize {1024};
+    static constexpr int max_dirs {0};
+    static constexpr bool compressed {true};
+    static constexpr int n_runs {5};
+
+    test_tar_creation(min_files, max_files, max_filesize, max_dirs, compressed, n_runs);
+}
+
+TEST_F(TarCreatorFixture, CreateUncompressedOfTree)
+{
+    static constexpr int min_files {100};
+    static constexpr int max_files {min_files};
+    static constexpr int max_filesize {1024};
+    static constexpr int max_dirs {10};
+    static constexpr bool compressed {false};
+    static constexpr int n_runs {5};
+
+    test_tar_creation(min_files, max_files, max_filesize, max_dirs, compressed, n_runs);
+}
+
+TEST_F(TarCreatorFixture, CreateCompressedOfTree)
+{
+    static constexpr int min_files {100};
+    static constexpr int max_files {min_files};
+    static constexpr int max_filesize {1024};
+    static constexpr int max_dirs {10};
+    static constexpr bool compressed {true};
+    static constexpr int n_runs {5};
+
+    test_tar_creation(min_files, max_files, max_filesize, max_dirs, compressed, n_runs);
 }
FAILED: Continuous integration, rev:110
https://jenkins.canonical.com/unity-api-1/job/lp-keeper-ci/77/
Executed test runs:
FAILURE: https://jenkins.canonical.com/unity-api-1/job/build/663/console
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-0-fetch/669
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=amd64,release=vivid+overlay/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=amd64,release=vivid+overlay/484/artifact/output/*zip*/output.zip
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=amd64,release=xenial+overlay/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=amd64,release=xenial+overlay/484/artifact/output/*zip*/output.zip
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=amd64,release=yakkety/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=amd64,release=yakkety/484/artifact/output/*zip*/output.zip
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=armhf,release=vivid+overlay/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=armhf,release=vivid+overlay/484/artifact/output/*zip*/output.zip
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=armhf,release=xenial+overlay/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=armhf,release=xenial+overlay/484/artifact/output/*zip*/output.zip
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=armhf,release=yakkety/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=armhf,release=yakkety/484/artifact/output/*zip*/output.zip
FAILURE: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=i386,release=vivid+overlay/484/console
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=i386,release=xenial+overlay/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=i386,release=xenial+overlay/484/artifact/output/*zip*/output.zip
SUCCESS: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=i386,release=yakkety/484
deb: https://jenkins.canonical.com/unity-api-1/job/build-2-binpkg/arch=i386,release=yakkety/484/artifact/output/*zip*/output.zip
Click here to trigger a rebuild:
https://jenkins.canonical.com/unity-api-1/job/lp-keeper-ci/77/rebuild