Merge lp:~c2c-oerpscenario/oerpscenario/trunk-behave-better into lp:oerpscenario
Status: Merged
Merged at revision: 335
Proposed branch: lp:~c2c-oerpscenario/oerpscenario/trunk-behave-better
Merge into: lp:oerpscenario
Diff against target: 972 lines (+426/-284), 12 files modified:
README.md (+42/-0), Readme.rst (+8/-8), ReleaseNotes.md (+51/-0), features/environment.py (+7/-6), features/steps/company_config.py (+2/-2), features/steps/dsl.py (+10/-195), features/steps/dsl_helpers.py (+199/-0), features/steps/tools.py (+3/-2), features/steps/user_config.py (+3/-3), features/support/behave_better.py (+100/-67), features/support/tools.py (+0/-1), requires.txt (+1/-0)
To merge this branch: bzr merge lp:~c2c-oerpscenario/oerpscenario/trunk-behave-better
Related bugs: none
Reviewer | Review Type | Status
---|---|---
Nicolas Bessi - Camptocamp (community) | | Approve
Alexandre Fayolle - camptocamp | code review, no test | Approve
Review via email: mp+152625@code.launchpad.net
Commit message
Description of the change
These changes are updates for the Behave patches.
They are synchronized with the openobject-mirliton project.
New fixes:
- return an exit status greater than 0 if there are failures
- fix the --pretty formatter when the table width exceeds the screen width
- flush the output of the --plain formatter after each step (for cases where stdout is buffered)
- allow loading steps from different directories
Other changes: adapted to run with Behave 1.2.4 and added some documentation.
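The exit-status fix above can be sketched as a thin wrapper that maps the runner's result to a process exit code. This is illustrative only: `run_all_features` is a hypothetical stand-in for the behave runner entry point, not the project's actual code.

```python
import sys


def main(run_all_features):
    """Exit with a non-zero status when any scenario failed.

    ``run_all_features`` is a hypothetical callable standing in for the
    behave runner; it returns True when every feature passed.
    """
    ok = run_all_features()
    # CI systems rely on the exit code, so propagate failures explicitly.
    sys.exit(0 if ok else 1)
```

With this shape, a shell pipeline such as `bin/behave ... && deploy` stops when scenarios fail instead of silently continuing.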
- 308. By Alexandre Fayolle - camptocamp
-
[FIX] account_config: force the code_digits value, which can be overwritten by onchange_chart_template_id
- 309. By Yannick Vaucher @ Camptocamp
-
[MRG] Add phrase to create logo headers for webkit reports.
- 310. By Alexandre Fayolle - camptocamp
-
[FIX] user_config: group assignment
The ir.module.category model has no unicity constraint on the name column. In
my instance I have duplicates in this column, so I need to take this into
account when matching groups. The assertion is broken too, so I commented it
out.
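A defensive way to match groups despite duplicate category names is to filter on the (category, group) pair and tolerate multiple hits. This is a sketch with plain dicts standing in for the OpenERP records, not the actual fix in user_config.py.

```python
def match_group(groups, category_name, group_name):
    """Pick the group whose category and name both match.

    ``groups`` is a list of dicts with 'category' and 'name' keys, a
    hypothetical stand-in for res.groups records. Since the category
    name has no unicity constraint, several candidates may share it.
    """
    candidates = [g for g in groups
                  if g['category'] == category_name and g['name'] == group_name]
    # Even the (category, name) pair may match several rows when
    # categories are duplicated; take the first instead of asserting
    # uniqueness, which is what broke the original assertion.
    return candidates[0] if candidates else None
```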
Nicolas Bessi - Camptocamp (nbessi-c2c-deactivatedaccount) wrote:
- 311. By Nicolas Bessi - Camptocamp
-
[MRG] Fix global property assignment in mono company mode and when the linked entity has no company_id field
- 312. By Nicolas Bessi - Camptocamp
-
[MRG] Fix currency assignment sentence if type is null
- 313. By Nicolas Bessi - Camptocamp
-
[MRG] Add some utils to create database and login from config files
- 314. By Nicolas Bessi - Camptocamp
-
[MRG] Use new config reader features in finance setup
[REF] whitespace clean
- 315. By Florent
-
[FIX] ctx.data not initialized in some cases
- 316. By Alexandre Fayolle - camptocamp
-
[MRG] ADD delete sentence to DSL
- 317. By Alexandre Fayolle - camptocamp
-
[MRG] fix domain due to new erppeek setup
- 318. By Alexandre Fayolle - camptocamp
-
[MRG] fix company scope syntax of the DSL: incorrect use of ctx.data; company_id in the table is now taken into account
- 319. By Nicolas Bessi - Camptocamp
-
[FIX] 'by xxx' in table succeeds even if nothing is found
Alexandre Fayolle - camptocamp (alexandre-fayolle-c2c) wrote:
> LGTM, but there are some behave points that I'm not comfortable with.
> Alexandre should review this one too.
>
> Nicolas
Any specific issue in mind?
Nicolas Bessi - Camptocamp (nbessi-c2c-deactivatedaccount) wrote:
It's a little bit old, but there are some behaviours of the runner that I'm not sure about.
If we have many folders of features, how should we sort the loading of the feature files?
Must we aggregate all file paths and then sort, or should we respect the order of the arguments and do multiple sorts?
Otherwise, as the MP is a little old, maybe some fixes have already been merged in core.
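The two orderings under discussion can be made concrete. This sketch implements the second option (respect the argument order and sort within each directory); it is a plain-files approximation, not behave's actual loading code.

```python
import os


def collect_features(feature_dirs):
    """Gather .feature files, sorting inside each directory.

    Directory order follows the command-line arguments; only the files
    within a single directory are sorted (the "multiple sorts" option).
    """
    collected = []
    for directory in feature_dirs:
        names = sorted(name for name in os.listdir(directory)
                       if name.endswith('.feature'))
        collected.extend(os.path.join(directory, name) for name in names)
    return collected
```

The alternative (aggregate all paths, then one global sort) would interleave files from different directories, which changes execution order when directories are passed in a deliberate sequence.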
- 320. By Alexandre Fayolle - camptocamp
-
fix failure when linking property to company independent record
- 321. By Alexandre Fayolle - camptocamp
-
[IMP] don't do 2 lookups when not required
- 322. By Nicolas Bessi - Camptocamp
-
[MRG] FIX base finance setup to be green in V7.0 and remove credit control tests that were moved into the addon
- 323. By Florent
-
[MRG] merge the behave_better updates, pre-1.2.3
- 324. By Florent
-
[FIX] Behave now manages multiple output streams
- 325. By Florent
-
[FIX] Update patches for Behave 1.2.3
Nicolas Bessi - Camptocamp (nbessi-c2c-deactivatedaccount) wrote:
Hello,
Many thanks for the patch.
I have tested it with behave 1.2.1, 1.2.2 and 1.2.3.
It works with 1.2.3 but breaks the support of the assert helpers:
NameError: global name 'assert_equal' is not defined
The compatibility fix for 1.2.3 breaks compatibility with previous versions.
That means it has to be released with precaution and an announcement, or we may add compatibility code.
But that would be a little "fat"; I prefer to do a new release or series.
Regards
Nicolas
Alexandre Fayolle - camptocamp (alexandre-fayolle-c2c) wrote:
LGTM
Alexandre Fayolle - camptocamp (alexandre-fayolle-c2c) wrote:
> Hello,
>
> Many thanks for the patch.
>
> I have tested it with behave 1.2.1, 1.2.2, 1.2.3
>
> It works with 1.2.3 but breaks the support of assert helper:
>
> NameError: global name 'assert_equal' is not defined
From what I read in the code, assertEqual is expected.
Two options: change the code in tools/support.py to also add assert_equal, or fix the Python code to use the new spelling.
Alexandre Fayolle - camptocamp (alexandre-fayolle-c2c) wrote:
> > Hello,
> >
> > Many thanks for the patch.
> >
> > I have tested it with behave 1.2.1, 1.2.2, 1.2.3
> >
> > It works with 1.2.3 but breaks the support of assert helper:
> >
> > NameError: global name 'assert_equal' is not defined
>
>
> From what I read in the code, assertEqual is expected.
>
> 2 options : change the code tools/support.py to add also assert_equal or fix
> the python code to use the new spelling.
Sorry, I misread the code.
Actually, I think what is needed is "from support import *" in environment.py, around line 4, to import the dynamically defined symbols.
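The dynamically defined symbols can be regenerated from unittest and re-exported, which is what makes `from support import *` usable in step files. This is a sketch of the pattern, illustrative only and not the project's exact support module.

```python
import re
import unittest

# Derive snake_case assert helpers (assert_equal, assert_true, ...) from
# unittest.TestCase's camelCase methods; the support module relies on this
# kind of pattern so step definitions can call plain functions.
_tc = unittest.TestCase()


def _snake(name):
    # assertEqual -> assert_equal
    return re.sub(r'([A-Z])', lambda m: '_' + m.group(1).lower(), name)


globals().update({
    _snake(attr): getattr(_tc, attr)
    for attr in dir(_tc)
    if attr.startswith('assert') and '_' not in attr
})
```

Step files then only need `from support import *` (or explicit imports) instead of relying on behave injecting these names into their globals.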
- 326. By Alexandre Fayolle - camptocamp
-
add missing import to ensure compat with behave >= 1.2.3
- 327. By Alexandre Fayolle - camptocamp
-
behave runner: enable loading step definitions from multiple features/steps directories
monkey patch the load_step_definitions method of Runner to work around
https://github.com/behave/behave/issues/248
- 328. By Alexandre Fayolle - camptocamp
-
bring the branch on par with trunk
- 329. By Alexandre Fayolle - camptocamp
-
adapt to a change in the path handling in behave 1.2.4
- 330. By Alexandre Fayolle - camptocamp
-
specify we need behave 1.2.4
- 331. By Alexandre Fayolle - camptocamp
-
split dsl_helpers out of dsl.py to help importing functions
- 332. By Alexandre Fayolle - camptocamp
-
fixed missing import
- 333. By Alexandre Fayolle - camptocamp
-
fixed missing import
- 334. By Alexandre Fayolle - camptocamp
-
added README and Release Notes
Nicolas Bessi - Camptocamp (nbessi-c2c-deactivatedaccount) wrote:
LGTM, and steps will be more Pythonic this way
Preview Diff
1 | === added file 'README.md' |
2 | --- README.md 1970-01-01 00:00:00 +0000 |
3 | +++ README.md 2014-08-28 08:14:30 +0000 |
4 | @@ -0,0 +1,42 @@ |
5 | +============ |
6 | +OERPScenario |
7 | +============ |
8 | + |
9 | +OERPScenario is a tool to allows Business Driven Development (BDD). It allows |
10 | +non-technical people to write real business cases, that will be tested among |
11 | +OpenERP to ensure no regressions. |
12 | + |
13 | +OERPScenario will allow us to detect regressions from one version to another by |
14 | +running a test suites composed by scenario on a specified OpenERP server |
15 | +(directly on the customer replication instance, or just on the last stable |
16 | +release). |
17 | + |
18 | +We also include in this brand new version written in Python and based on |
19 | +Erppeek (http://erppeek.readthedocs.org/en/latest/) a complete DSL that allow |
20 | +you to write tests at the speed of thought. |
21 | + |
22 | +This means a business specialist can write something like the following: |
23 | + |
24 | + Scenario: SO013 CREATION |
25 | + Given I need a "sale.order" with name: SO013 and oid: scenario.anglosaxon_SO013 |
26 | + And having: |
27 | + | name | value | |
28 | + | date_order | %Y-03-15 | |
29 | + | name | SO013 | |
30 | + | partner_id | by oid: scenario.customer_1 | |
31 | + | pricelist_id | by id: 1 | |
32 | + | partner_invoice_id | by oid: scenario.customer_1_add | |
33 | + | partner_order_id | by oid: scenario.customer_1_add | |
34 | + | partner_shipping_id | by oid: scenario.customer_1_add | |
35 | + | shop_id | by id: 1 | |
36 | + Given I need a "sale.order.line" with oid: scenario.anglosaxon_SO013_line1 |
37 | + And having: |
38 | + | name | value | |
39 | + | name | SO013_line1 | |
40 | + | product_id | by oid: scenario.p5 | |
41 | + | price_unit | 450 | |
42 | + | product_uom_qty | 1.0 | |
43 | + | product_uom | by name: PCE | |
44 | + | order_id | by oid: scenario.anglosaxon_SO013 | |
45 | + |
46 | + |
47 | |
48 | === modified file 'Readme.rst' |
49 | --- Readme.rst 2013-01-25 09:13:59 +0000 |
50 | +++ Readme.rst 2014-08-28 08:14:30 +0000 |
51 | @@ -1,8 +1,8 @@ |
52 | OpenERP Scenario in Python. |
53 | ########################### |
54 | |
55 | -Integration of OpenERP scenario with Python, behave and the anybox buidlout recipe: |
56 | -`http://pypi.python.org/pypi/anybox.recipe.openerp/1.3.0 <http://pypi.python.org/pypi/anybox.recipe.openerp/1.3.0>`_ |
57 | +Integration of OpenERP scenario with Python, Behave and the Anybox buildout recipe: |
58 | +`http://pypi.python.org/pypi/anybox.recipe.openerp <http://pypi.python.org/pypi/anybox.recipe.openerp>`_ |
59 | |
60 | Installation: |
61 | Refer to Anybox recipe documentation to create your instance. |
62 | @@ -52,7 +52,7 @@ |
63 | |
64 | should be available. To run some scenario launch the following command:: |
65 | |
66 | - bin/behave -k --tags=mytag ../path_to_python_scenario/features/ path_to_my_custom_scenario/features |
67 | + bin/behave -k --tags=mytag ../path_to_python_scenario/features/ path_to_my_custom_scenario/features |
68 | |
69 | The -k option will only show executed scenario --tags will launch specific scenario. |
70 | For more information, please refer to behave documentation: |
71 | @@ -68,12 +68,12 @@ |
72 | |
73 | OERPScenario/ |
74 | ├── data |
75 | - │ ├── account_chart.csv |
76 | - │ └── logo.png |
77 | + │ ├── account_chart.csv |
78 | + │ └── logo.png |
79 | └── features |
80 | ├── setup |
81 | - │ ├── 01_installation.feature |
82 | - │ └── 02_installation_after_import.feature |
83 | + │ ├── 01_installation.feature |
84 | + │ └── 02_installation_after_import.feature |
85 | ├── addons |
86 | ├── steps |
87 | ├── stories |
88 | @@ -85,4 +85,4 @@ |
89 | * addons: contains addons specific tests, small independent scenarios. |
90 | * stories: contains user/workflow tests that are related. |
91 | * upgrade: scenario to update an instance. |
92 | -* steps: contains Python code implementing the gherkin phrases |
93 | +* steps: contains Python code implementing the Gherkin phrases |
94 | |
95 | === added file 'ReleaseNotes.md' |
96 | --- ReleaseNotes.md 1970-01-01 00:00:00 +0000 |
97 | +++ ReleaseNotes.md 2014-08-28 08:14:30 +0000 |
98 | @@ -0,0 +1,51 @@ |
99 | +Version x.x.x |
100 | +============= |
101 | + |
102 | + |
103 | +This version brings OERPScenario compatibility with behave 1.2.4. |
104 | + |
105 | +Things to change in your features and steps: |
106 | + |
107 | +1. the tools which are defined in `support/tools.py` are no longer available in |
108 | +the `globals()` of your step definitions, so you need to import them manually: |
109 | + |
110 | + from support.tools import model, puts, set_trace, assert_equal |
111 | + |
112 | +There is also a shortcut available: |
113 | + |
114 | + from support import * |
115 | + |
116 | + |
117 | + |
118 | +2. There has been a change in the way the `ctx.feature.filename` attribute is |
119 | +managed. If your step definitions use this variable, e.g. to get a path to a |
120 | +`data` directory to load files, you will need to adapt. For the record the |
121 | +patch applied to the basic step definitions is: |
122 | + |
123 | + === modified file 'features/steps/tools.py' |
124 | + --- features/steps/tools.py 2014-08-27 13:46:19 +0000 |
125 | + +++ features/steps/tools.py 2014-08-28 06:41:58 +0000 |
126 | + @@ -38,9 +38,9 @@ |
127 | + @given('"{model_name}" is imported from CSV "{csvfile}" using delimiter "{sep}"') |
128 | + def impl(ctx, model_name, csvfile, sep=","): |
129 | + tmp_path = ctx.feature.filename.split(os.path.sep) |
130 | + - tmp_path = tmp_path[1: tmp_path.index('features')] + ['data', csvfile] |
131 | + + tmp_path = tmp_path[: tmp_path.index('features')] + ['data', csvfile] |
132 | + tmp_path = [str(x) for x in tmp_path] |
133 | + - path = os.path.join('/', *tmp_path) |
134 | + + path = os.path.join(*tmp_path) |
135 | + assert os.path.exists(path) |
136 | + data = csv.reader(open(path, 'rb'), delimiter=str(sep)) |
137 | + head = data.next() |
138 | + |
139 | +3. If you step definitions used the helper functions from `dsl.py` to parse |
140 | +domain from table data for instance, you need to import them. They were |
141 | +extracted to a new module `dsl_helper` to ease things and avoid a mess with |
142 | +duplicate step definitions: |
143 | + |
144 | + from dsl_helpers import (parse_domain, |
145 | + build_search_domain, |
146 | + parse_table_values, |
147 | + ) |
148 | + |
149 | + |
150 | |
151 | === modified file 'features/environment.py' |
152 | --- features/environment.py 2014-06-16 10:25:44 +0000 |
153 | +++ features/environment.py 2014-08-28 08:14:30 +0000 |
154 | @@ -15,6 +15,10 @@ |
155 | def before_all(ctx): |
156 | server = erppeek.start_openerp_services(OPENERP_ARGS) |
157 | database = server.tools.config['db_name'] |
158 | + def _output_write(text): |
159 | + for stream in ctx.config.outputs: |
160 | + stream.open().write(text) |
161 | + ctx._output_write = _output_write |
162 | ctx._is_context = True |
163 | ctx.client = erppeek.Client(server, verbose=ctx.config.verbose) |
164 | ctx.conf = {'server': server, |
165 | @@ -33,7 +37,6 @@ |
166 | |
167 | |
168 | def before_feature(ctx, feature): |
169 | - #pdb.set_trace() |
170 | ctx.data = {} |
171 | |
172 | |
173 | @@ -45,7 +48,6 @@ |
174 | |
175 | |
176 | def before_step(ctx, step): |
177 | - #pdb.set_trace() |
178 | ctx._messages = [] |
179 | # Extra cleanup (should be fixed upstream?) |
180 | ctx.table = None |
181 | @@ -53,14 +55,13 @@ |
182 | |
183 | |
184 | def after_step(ctx, laststep): |
185 | - #pdb.set_trace() |
186 | if ctx._messages: |
187 | # Flush the messages collected with puts(...) |
188 | - output = ctx.config.output |
189 | for item in ctx._messages: |
190 | for line in str(item).splitlines(): |
191 | - output.write(u' %s\n' % (line,)) |
192 | - # output.flush() |
193 | + ctx._output_write(u' %s\n' % (line,)) |
194 | + for stream in ctx.config.outputs: |
195 | + stream.open().flush() |
196 | if laststep.status == 'failed' and ctx.config.stop: |
197 | # Enter the interactive debugger |
198 | tools.set_trace() |
199 | |
200 | === modified file 'features/steps/company_config.py' |
201 | --- features/steps/company_config.py 2013-03-28 14:19:01 +0000 |
202 | +++ features/steps/company_config.py 2014-08-28 08:14:30 +0000 |
203 | @@ -4,10 +4,10 @@ |
204 | |
205 | def get_encoded_image(ctx, image_path): |
206 | tmp_path = ctx.feature.filename.split(os.path.sep) |
207 | - tmp_path = tmp_path[1: tmp_path.index('features')] |
208 | + tmp_path = tmp_path[: tmp_path.index('features')] |
209 | tmp_path.extend(['data', image_path]) |
210 | tmp_path = [str(x) for x in tmp_path] |
211 | - path = os.path.join('/', *tmp_path) |
212 | + path = os.path.join(*tmp_path) |
213 | assert os.path.exists(path), "path not found %s" % path |
214 | with open(path, "rb") as image_file: |
215 | return base64.b64encode(image_file.read()) |
216 | |
217 | === modified file 'features/steps/dsl.py' |
218 | --- features/steps/dsl.py 2014-08-25 13:27:48 +0000 |
219 | +++ features/steps/dsl.py 2014-08-28 08:14:30 +0000 |
220 | @@ -1,163 +1,15 @@ |
221 | from ast import literal_eval |
222 | import time |
223 | -from support.tools import puts, set_trace, model |
224 | -from behave.matchers import register_type |
225 | - |
226 | - |
227 | -def parse_optional(text): |
228 | - return text.strip() |
229 | -# https://pypi.python.org/pypi/parse#custom-type-conversions |
230 | -parse_optional.pattern = r'\s?\w*\s?' |
231 | - |
232 | -register_type(optional=parse_optional) |
233 | - |
234 | - |
235 | -def parse_domain(domain): |
236 | - rv = {} |
237 | - if domain[-1:] == ':': |
238 | - domain = domain[:-1] |
239 | - for term in domain.split(' and '): |
240 | - key, value = term.split(None, 1) |
241 | - if key[-1:] == ':': |
242 | - key = key[:-1] |
243 | - try: |
244 | - value = literal_eval(value) |
245 | - except Exception: |
246 | - # Interpret the value as a string |
247 | - pass |
248 | - rv[key.lstrip()] = value |
249 | - if 'oid' in rv: |
250 | - rv['xmlid'] = rv.pop('oid') |
251 | - return rv |
252 | - |
253 | - |
254 | -def build_search_domain(ctx, obj, values, active=True): |
255 | - """ Build a search domain as expected by `search()` |
256 | - |
257 | - :param obj: name of the model as string |
258 | - :param values: search values (dict of field names with their values) |
259 | - :param active: False: only inactive records |
260 | - None: include inactive and active records |
261 | - True: only active records |
262 | - |
263 | - """ |
264 | - values = values.copy() |
265 | - xml_id = values.pop('xmlid', None) |
266 | - res_id = values.pop('id', None) |
267 | - if xml_id: |
268 | - if 'active' in model(obj).fields(): |
269 | - active = None # we must find a record by xmlid, even inactive |
270 | - module, name = xml_id.split('.') |
271 | - search_domain = [('module', '=', module), ('name', '=', name)] |
272 | - records = model('ir.model.data').browse(search_domain) |
273 | - if not records: |
274 | - return None |
275 | - res = records[0].read('model res_id') |
276 | - assert_equal(res['model'], obj) |
277 | - if res_id: |
278 | - assert_equal(res_id, res['res_id']) |
279 | - else: |
280 | - res_id = res['res_id'] |
281 | - search_domain = [(key, '=', value) for (key, value) in values.items()] |
282 | - if active in (False, None): |
283 | - if 'active' not in model(obj).fields(): |
284 | - puts("Searching inactive records on %s has no effect " |
285 | - "because it has no 'active' field." % obj) |
286 | - elif active is None: |
287 | - search_domain += ['|', ('active', '=', False), |
288 | - ('active', '=', True)] |
289 | - elif active is False: |
290 | - search_domain += [('active', '=', False)] |
291 | - if res_id: |
292 | - search_domain = [('id', '=', res_id)] + search_domain |
293 | - if hasattr(ctx, 'company_id') and \ |
294 | - 'company_id' in model(obj).fields() and \ |
295 | - not [term for term in search_domain if term[0] == 'company_id']: |
296 | - # we add a company_id domain restriction if there is one definied in ctx, |
297 | - # and there is a company_id column in the model |
298 | - # and there was no explicit company_id restriction in the domain |
299 | - # (we need this to search shared records, such as res.currencies) |
300 | - search_domain.append(('company_id', '=', ctx.company_id)) |
301 | - return search_domain |
302 | - |
303 | - |
304 | -def parse_table_values(ctx, obj, table): |
305 | - """ Parse the values of the tables in the phrases 'And having:' |
306 | - |
307 | - The relations support the following options: |
308 | - |
309 | - * by {field}: {value} |
310 | - * all by {field}: {value} |
311 | - * add all by {field}: {value} |
312 | - * inactive by {field}: {value} |
313 | - * possibly inactive by {field}: {value} |
314 | - * all inactive by {field}: {value} |
315 | - * add all inactive by {field}: {value} |
316 | - * all possibly inactive by {field}: {value} |
317 | - * add all possibly inactive by {field}: {value} |
318 | - |
319 | - """ |
320 | - fields = model(obj).fields() |
321 | - if hasattr(table, 'headings'): |
322 | - # if we have a real table, ensure it has 2 columns |
323 | - # otherwise, we will just fail during iteration |
324 | - assert_equal(len(table.headings), 2) |
325 | - assert_true(fields) |
326 | - res = {} |
327 | - for (key, value) in table: |
328 | - add_mode = False |
329 | - field_type = fields[key]['type'] |
330 | - if field_type in ('char', 'text'): |
331 | - pass |
332 | - elif value.lower() in ('false', '0', 'no', 'f', 'n', 'nil'): |
333 | - value = False |
334 | - elif field_type in ('many2one', 'one2many', 'many2many'): |
335 | - relation = fields[key]['relation'] |
336 | - active = True |
337 | - if value.startswith('add all'): |
338 | - add_mode = True |
339 | - value = value[4:] # fall back on "all by xxx" below |
340 | - else: |
341 | - add_mode = False |
342 | - if (value.startswith('inactive by ') or |
343 | - value.startswith('all inactive by ')): |
344 | - active = False |
345 | - # fall back on "by " and "all by " below |
346 | - value = value.replace('inactive ', '', 1) |
347 | - if (value.startswith('possibly inactive by ') or |
348 | - value.startswith('all possibly inactive by ')): |
349 | - active = None |
350 | - # fall back on "by " and "all by " below |
351 | - value = value.replace('possibly inactive ', '', 1) |
352 | - if value.startswith('by ') or value.startswith('all by '): |
353 | - value = value.split('by ', 1)[1] |
354 | - values = parse_domain(value) |
355 | - search_domain = build_search_domain(ctx, relation, values, active=active) |
356 | - if search_domain: |
357 | - value = model(relation).browse(search_domain).id |
358 | - assert value, "no value found for col %s domain %s" % (key, str(search_domain)) |
359 | - else: |
360 | - value = [] |
361 | - if add_mode: |
362 | - value = res.get(key, []) + value |
363 | - else: |
364 | - method = getattr(model(relation), value) |
365 | - value = method() |
366 | - if field_type == 'many2one': |
367 | - assert_true(value, msg="no item found for %s" % key) |
368 | - assert_equal(len(value), 1, |
369 | - msg="more than item found for %s" % key) |
370 | - value = value[0] |
371 | - elif field_type == 'integer': |
372 | - value = int(value) |
373 | - elif field_type == 'float': |
374 | - value = float(value) |
375 | - elif field_type == 'boolean': |
376 | - value = True |
377 | - elif field_type in ('date', 'datetime') and '%' in value: |
378 | - value = time.strftime(value) |
379 | - res[key] = value |
380 | - return res |
381 | +from support.tools import puts, set_trace, model, assert_true, assert_equal |
382 | +from dsl_helpers import (parse_domain, |
383 | + build_search_domain, |
384 | + parse_table_values, |
385 | + parse_optional, |
386 | + create_new_obj, |
387 | + get_company_property |
388 | + ) |
389 | + |
390 | + |
391 | |
392 | |
393 | @step('/^having:?$/') |
394 | @@ -185,21 +37,6 @@ |
395 | ctx.oe_context = literal_eval(oe_context_string) |
396 | |
397 | |
398 | -def create_new_obj(ctx, model_name, values): |
399 | - values = values.copy() |
400 | - xmlid = values.pop('xmlid', None) |
401 | - oe_context = getattr(ctx, 'oe_context', None) |
402 | - record = model(model_name).create(values, context=oe_context) |
403 | - if xmlid is not None: |
404 | - ModelData = model('ir.model.data') |
405 | - module, xmlid = xmlid.split('.', 1) |
406 | - _model_data = ModelData.create({ |
407 | - 'name': xmlid, |
408 | - 'model': model_name, |
409 | - 'res_id': record.id, |
410 | - 'module': module, |
411 | - }, context=oe_context) |
412 | - return record |
413 | |
414 | |
415 | @step(u'I find a{n:optional}{active_text:optional} "{model_name}" with {domain}') |
416 | @@ -269,28 +106,6 @@ |
417 | Model.write(ids, new_attrs) |
418 | |
419 | |
420 | -def get_company_property(ctx, pname, modelname, fieldname, company_oid=None): |
421 | - company = None |
422 | - if company_oid: |
423 | - c_domain = build_search_domain(ctx, 'res.company', {'xmlid': company_oid}) |
424 | - company = model('res.company').get(c_domain) |
425 | - assert company |
426 | - field = model('ir.model.fields').get([('name', '=', fieldname), ('model', '=', modelname)]) |
427 | - assert field is not None, 'no field %s in model %s' % (fieldname, modelname) |
428 | - domain = [('name', '=', pname), |
429 | - ('fields_id', '=', field.id), |
430 | - ('res_id', '=', False)] |
431 | - if company: |
432 | - domain.append(('company_id', '=', company.id)) |
433 | - ir_property = model('ir.property').get(domain) |
434 | - if ir_property is None: |
435 | - ir_property = model('ir.property').create({'fields_id': field.id, |
436 | - 'name': pname, |
437 | - 'res_id': False, |
438 | - 'type': 'many2one'}) |
439 | - if company: |
440 | - ir_property.write({'company_id': company.id}) |
441 | - ctx.ir_property = ir_property |
442 | |
443 | @given('I set global property named "{pname}" for model "{modelname}" and field "{fieldname}" for company with ref "{company_oid}"') |
444 | def impl(ctx, pname, modelname, fieldname, company_oid): |
445 | |
446 | === added file 'features/steps/dsl_helpers.py' |
447 | --- features/steps/dsl_helpers.py 1970-01-01 00:00:00 +0000 |
448 | +++ features/steps/dsl_helpers.py 2014-08-28 08:14:30 +0000 |
449 | @@ -0,0 +1,199 @@ |
450 | +''' |
451 | +helper function for dsl manipulation |
452 | +''' |
453 | +from behave.matchers import register_type |
454 | +from support import * |
455 | + |
456 | +def parse_optional(text): |
457 | + return text.strip() |
458 | +# https://pypi.python.org/pypi/parse#custom-type-conversions |
459 | +parse_optional.pattern = r'\s?\w*\s?' |
460 | + |
461 | +register_type(optional=parse_optional) |
462 | + |
463 | + |
464 | +def parse_domain(domain): |
465 | + rv = {} |
466 | + if domain[-1:] == ':': |
467 | + domain = domain[:-1] |
468 | + for term in domain.split(' and '): |
469 | + key, value = term.split(None, 1) |
470 | + if key[-1:] == ':': |
471 | + key = key[:-1] |
472 | + try: |
473 | + value = literal_eval(value) |
474 | + except Exception: |
475 | + # Interpret the value as a string |
476 | + pass |
477 | + rv[key.lstrip()] = value |
478 | + if 'oid' in rv: |
479 | + rv['xmlid'] = rv.pop('oid') |
480 | + return rv |
481 | + |
482 | + |
483 | +def build_search_domain(ctx, obj, values, active=True): |
484 | + """ Build a search domain as expected by `search()` |
485 | + |
486 | + :param obj: name of the model as string |
487 | + :param values: search values (dict of field names with their values) |
488 | + :param active: False: only inactive records |
489 | + None: include inactive and active records |
490 | + True: only active records |
491 | + |
492 | + """ |
493 | + values = values.copy() |
494 | + xml_id = values.pop('xmlid', None) |
495 | + res_id = values.pop('id', None) |
496 | + if xml_id: |
497 | + if 'active' in model(obj).fields(): |
498 | + active = None # we must find a record by xmlid, even inactive |
499 | + module, name = xml_id.split('.') |
500 | + search_domain = [('module', '=', module), ('name', '=', name)] |
501 | + records = model('ir.model.data').browse(search_domain) |
502 | + if not records: |
503 | + return None |
504 | + res = records[0].read('model res_id') |
505 | + assert_equal(res['model'], obj) |
506 | + if res_id: |
507 | + assert_equal(res_id, res['res_id']) |
508 | + else: |
509 | + res_id = res['res_id'] |
510 | + search_domain = [(key, '=', value) for (key, value) in values.items()] |
511 | + if active in (False, None): |
512 | + if 'active' not in model(obj).fields(): |
513 | + puts("Searching inactive records on %s has no effect " |
514 | + "because it has no 'active' field." % obj) |
515 | + elif active is None: |
516 | + search_domain += ['|', ('active', '=', False), |
517 | + ('active', '=', True)] |
518 | + elif active is False: |
519 | + search_domain += [('active', '=', False)] |
520 | + if res_id: |
521 | + search_domain = [('id', '=', res_id)] + search_domain |
522 | + if hasattr(ctx, 'company_id') and \ |
523 | + 'company_id' in model(obj).fields() and \ |
524 | + not [term for term in search_domain if term[0] == 'company_id']: |
525 | + # we add a company_id domain restriction if there is one definied in ctx, |
526 | + # and there is a company_id column in the model |
527 | + # and there was no explicit company_id restriction in the domain |
528 | + # (we need this to search shared records, such as res.currencies) |
529 | + search_domain.append(('company_id', '=', ctx.company_id)) |
530 | + return search_domain |
531 | + |
532 | + |
533 | +def parse_table_values(ctx, obj, table): |
534 | + """ Parse the values of the tables in the phrases 'And having:' |
535 | + |
536 | + The relations support the following options: |
537 | + |
538 | + * by {field}: {value} |
539 | + * all by {field}: {value} |
540 | + * add all by {field}: {value} |
541 | + * inactive by {field}: {value} |
542 | + * possibly inactive by {field}: {value} |
543 | + * all inactive by {field}: {value} |
544 | + * add all inactive by {field}: {value} |
545 | + * all possibly inactive by {field}: {value} |
546 | + * add all possibly inactive by {field}: {value} |
547 | + |
548 | + """ |
549 | + fields = model(obj).fields() |
550 | + if hasattr(table, 'headings'): |
551 | + # if we have a real table, ensure it has 2 columns |
552 | + # otherwise, we will just fail during iteration |
553 | + assert_equal(len(table.headings), 2) |
554 | + assert_true(fields) |
555 | + res = {} |
556 | + for (key, value) in table: |
557 | + add_mode = False |
558 | + field_type = fields[key]['type'] |
559 | + if field_type in ('char', 'text'): |
560 | + pass |
561 | + elif value.lower() in ('false', '0', 'no', 'f', 'n', 'nil'): |
562 | + value = False |
563 | + elif field_type in ('many2one', 'one2many', 'many2many'): |
564 | + relation = fields[key]['relation'] |
565 | + active = True |
566 | + if value.startswith('add all'): |
567 | + add_mode = True |
568 | + value = value[4:] # fall back on "all by xxx" below |
569 | + else: |
570 | + add_mode = False |
571 | + if (value.startswith('inactive by ') or |
572 | + value.startswith('all inactive by ')): |
573 | + active = False |
574 | + # fall back on "by " and "all by " below |
575 | + value = value.replace('inactive ', '', 1) |
576 | + if (value.startswith('possibly inactive by ') or |
577 | + value.startswith('all possibly inactive by ')): |
578 | + active = None |
579 | + # fall back on "by " and "all by " below |
580 | + value = value.replace('possibly inactive ', '', 1) |
581 | + if value.startswith('by ') or value.startswith('all by '): |
582 | + value = value.split('by ', 1)[1] |
583 | + values = parse_domain(value) |
584 | + search_domain = build_search_domain(ctx, relation, values, active=active) |
585 | + if search_domain: |
586 | + value = model(relation).browse(search_domain).id |
587 | + assert value, "no value found for col %s domain %s" % (key, str(search_domain)) |
588 | + else: |
589 | + value = [] |
590 | + if add_mode: |
591 | + value = res.get(key, []) + value |
592 | + else: |
593 | + method = getattr(model(relation), value) |
594 | + value = method() |
595 | + if field_type == 'many2one': |
596 | + assert_true(value, msg="no item found for %s" % key) |
597 | + assert_equal(len(value), 1, |
598 | + msg="more than item found for %s" % key) |
599 | + value = value[0] |
600 | + elif field_type == 'integer': |
601 | + value = int(value) |
602 | + elif field_type == 'float': |
603 | + value = float(value) |
604 | + elif field_type == 'boolean': |
605 | + value = True |
606 | + elif field_type in ('date', 'datetime') and '%' in value: |
607 | + value = time.strftime(value) |
608 | + res[key] = value |
609 | + return res |
610 | + |
611 | +def create_new_obj(ctx, model_name, values): |
612 | + values = values.copy() |
613 | + xmlid = values.pop('xmlid', None) |
614 | + oe_context = getattr(ctx, 'oe_context', None) |
615 | + record = model(model_name).create(values, context=oe_context) |
616 | + if xmlid is not None: |
617 | + ModelData = model('ir.model.data') |
618 | + module, xmlid = xmlid.split('.', 1) |
619 | + _model_data = ModelData.create({ |
620 | + 'name': xmlid, |
621 | + 'model': model_name, |
622 | + 'res_id': record.id, |
623 | + 'module': module, |
624 | + }, context=oe_context) |
625 | + return record |
626 | + |
627 | +def get_company_property(ctx, pname, modelname, fieldname, company_oid=None): |
628 | + company = None |
629 | + if company_oid: |
630 | + c_domain = build_search_domain(ctx, 'res.company', {'xmlid': company_oid}) |
631 | + company = model('res.company').get(c_domain) |
632 | + assert company |
633 | + field = model('ir.model.fields').get([('name', '=', fieldname), ('model', '=', modelname)]) |
634 | + assert field is not None, 'no field %s in model %s' % (fieldname, modelname) |
635 | + domain = [('name', '=', pname), |
636 | + ('fields_id', '=', field.id), |
637 | + ('res_id', '=', False)] |
638 | + if company: |
639 | + domain.append(('company_id', '=', company.id)) |
640 | + ir_property = model('ir.property').get(domain) |
641 | + if ir_property is None: |
642 | + ir_property = model('ir.property').create({'fields_id': field.id, |
643 | + 'name': pname, |
644 | + 'res_id': False, |
645 | + 'type': 'many2one'}) |
646 | + if company: |
647 | + ir_property.write({'company_id': company.id}) |
648 | + ctx.ir_property = ir_property |
649 | |
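The dsl_helpers hunk above converts scenario table cells into field values, using textual prefixes such as "add all", "by ", "inactive by " and "possibly inactive by " to control relational lookups. A minimal standalone sketch of that prefix handling (function name and behaviour inferred from the hunk, not the actual helper):

```python
# Hypothetical reimplementation of the prefix parsing used for
# relational columns in the dsl_helpers diff above.

def parse_relation_prefixes(value):
    """Return (value, add_mode, active) after stripping DSL prefixes.

    active is True (default), False ("inactive by ..."), or None
    ("possibly inactive by ...", meaning: do not filter on active).
    """
    add_mode = False
    active = True
    if value.startswith('add all'):
        add_mode = True
        value = value[4:]  # strip "add ", leaving "all ..."
    if (value.startswith('inactive by ') or
            value.startswith('all inactive by ')):
        active = False
        value = value.replace('inactive ', '', 1)
    if (value.startswith('possibly inactive by ') or
            value.startswith('all possibly inactive by ')):
        active = None
        value = value.replace('possibly inactive ', '', 1)
    if value.startswith('by ') or value.startswith('all by '):
        value = value.split('by ', 1)[1]
    return value, add_mode, active

print(parse_relation_prefixes('add all by name: Partner'))
print(parse_relation_prefixes('inactive by xmlid: base.user_demo'))
```

The remaining "name: Partner" part is then fed to parse_domain / build_search_domain as in the hunk.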
650 | === modified file 'features/steps/tools.py' |
651 | --- features/steps/tools.py 2013-01-21 16:02:27 +0000 |
652 | +++ features/steps/tools.py 2014-08-28 08:14:30 +0000 |
653 | @@ -1,6 +1,7 @@ |
654 | import openerp |
655 | import csv |
656 | import os |
657 | +from support import * |
658 | |
659 | @given('I execute the Python commands') |
660 | def impl(ctx): |
661 | @@ -37,9 +38,9 @@ |
662 | @given('"{model_name}" is imported from CSV "{csvfile}" using delimiter "{sep}"') |
663 | def impl(ctx, model_name, csvfile, sep=","): |
664 | tmp_path = ctx.feature.filename.split(os.path.sep) |
665 | - tmp_path = tmp_path[1: tmp_path.index('features')] + ['data', csvfile] |
666 | + tmp_path = tmp_path[: tmp_path.index('features')] + ['data', csvfile] |
667 | tmp_path = [str(x) for x in tmp_path] |
668 | - path = os.path.join('/', *tmp_path) |
669 | + path = os.path.join(*tmp_path) |
670 | assert os.path.exists(path) |
671 | data = csv.reader(open(path, 'rb'), delimiter=str(sep)) |
672 | head = data.next() |
673 | |
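The tools.py change above fixes the CSV path computation: instead of dropping the first path component and forcing a leading '/', the fixed code keeps everything up to the "features" component, so relative feature paths resolve correctly. A small sketch of the corrected logic (helper name and example path are mine):

```python
import os

def csv_path(feature_filename, csvfile):
    # Take the path components up to (excluding) "features",
    # then look for the CSV under a sibling "data" directory,
    # mirroring the fixed computation in the tools.py hunk above.
    parts = feature_filename.split(os.path.sep)
    parts = parts[: parts.index('features')] + ['data', csvfile]
    return os.path.join(*parts)

print(csv_path(os.path.join('project', 'features', 'sale.feature'),
               'partners.csv'))
# -> project/data/partners.csv (on POSIX)
```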
674 | === modified file 'features/steps/user_config.py' |
675 | --- features/steps/user_config.py 2013-03-11 14:44:28 +0000 |
676 | +++ features/steps/user_config.py 2014-08-28 08:14:30 +0000 |
677 | @@ -24,7 +24,7 @@ |
678 | groups = model('res.groups').browse([]) |
679 | for user in ctx.found_items: |
680 | assign_groups(user, groups) |
681 | - |
682 | + |
683 | |
684 | @step(u'we assign to {users} the groups below') |
685 | def impl(ctx, users): |
686 | @@ -46,7 +46,7 @@ |
687 | assert category_ids, 'no category named %s' % categ |
688 | condition = [ |
689 | '&', |
690 | - ('name', '=', name), |
691 | + ('name', '=', name), |
692 | ('category_id', 'in', category_ids) |
693 | ] |
694 | # Take the category_id to build the domain |
695 | @@ -55,7 +55,7 @@ |
696 | # ('&',('name','=','User'), ('category_id','=',47)), |
697 | # ] |
698 | full_name_cond += condition |
699 | - num_operators = len(group_full_names) - 1 |
700 | + num_operators = len(group_full_names) - 1 |
701 | or_operators = ['|'] * num_operators |
702 | search_cond = or_operators + full_name_cond |
703 | groups.extend(model('res.groups').browse(search_cond)) |
704 | |
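The user_config hunk above builds an OpenERP search domain in polish (prefix) notation: each group name becomes an '&'-joined pair of conditions, and N such fragments are OR-ed together with N - 1 leading '|' operators. A sketch of that construction (helper name is mine):

```python
# Hypothetical sketch of the OR-domain construction in the
# user_config diff above: prefix-notation domains need one '|'
# per pair being joined, i.e. len(conditions) - 1 in total.

def or_domain(conditions):
    """conditions: list of prefix-notation fragments, e.g.
    ['&', ('name', '=', 'User'), ('category_id', 'in', [47])]."""
    flat = []
    for cond in conditions:
        flat.extend(cond)
    return ['|'] * (len(conditions) - 1) + flat

domain = or_domain([
    ['&', ('name', '=', 'User'), ('category_id', 'in', [47])],
    ['&', ('name', '=', 'Manager'), ('category_id', 'in', [48])],
])
print(domain)
```

With two fragments this yields a single leading '|' followed by both '&' pairs, matching the commented example in the hunk.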
705 | === modified file 'features/support/behave_better.py' |
706 | --- features/support/behave_better.py 2013-02-08 14:03:28 +0000 |
707 | +++ features/support/behave_better.py 2014-08-28 08:14:30 +0000 |
708 | @@ -3,11 +3,17 @@ |
709 | |
710 | Some of them might be proposed upstream |
711 | """ |
712 | +import os.path |
713 | |
714 | from behave import formatter |
715 | from behave import matchers |
716 | from behave import model |
717 | from behave import runner |
718 | +from behave.formatter.ansi_escapes import up |
719 | +from behave.model_describe import escape_cell, escape_triple_quotes, indent |
720 | +# Defeat lazy import, because we need to patch the formatters |
721 | +import behave.formatter.plain |
722 | +import behave.formatter.pretty |
723 | |
724 | __all__ = ['patch_all'] |
725 | _behave_patched = False |
726 | @@ -18,13 +24,33 @@ |
727 | if not _behave_patched: |
728 | patch_matchers_get_matcher() |
729 | patch_model_Table_raw() |
730 | + formatter.formatters.register(PlainFormatter) |
731 | formatter.formatters.register(PrettyFormatter) |
732 | + patch_runner_load_step_definitions() |
733 | _behave_patched = True |
734 | |
735 | +def patch_runner_load_step_definitions(): |
736 | + """ |
737 | + Pass extra steps directories to Runner.load_step_definitions |
738 | + |
739 | + That method has an extra_step_paths kwarg defaulting to nothing, and the |
740 | + caller does not provide a value. We compute something sensible from the | 
741 | + command line paths. |
742 | + """ |
743 | + runner.Runner._load_step_definitions = runner.Runner.load_step_definitions |
744 | + def load_step_definitions(self): |
745 | + extra_step_paths = [] |
746 | + for path in self.config.paths[1:]: |
747 | + path = os.path.abspath(path) |
748 | + path = os.path.join(path, 'steps') |
749 | + if os.path.isdir(path): |
750 | + extra_step_paths.append(path) |
751 | + self._load_step_definitions(extra_step_paths) |
752 | + runner.Runner.load_step_definitions = load_step_definitions |
753 | |
754 | def patch_matchers_get_matcher(): |
755 | # Detect the regex expressions |
756 | - # https://github.com/jeamland/behave/issues/73 |
757 | + # https://github.com/behave/behave/issues/73 |
758 | def get_matcher(func, string): |
759 | if string[:1] == string[-1:] == '/': |
760 | return matchers.RegexMatcher(func, string[1:-1]) |
761 | @@ -41,17 +67,53 @@ |
762 | model.Table.raw = property(raw) |
763 | |
764 | |
765 | +# Flush the output after each scenario |
766 | +class PlainFormatter(formatter.plain.PlainFormatter): |
767 | + |
768 | + def result(self, result): |
769 | + super(PlainFormatter, self).result(result) |
770 | + self.stream.flush() |
771 | + |
772 | + def eof(self): |
773 | + if self.config.show_skipped: |
774 | + self.stream.write('\n') |
775 | + |
776 | + |
777 | +# https://github.com/behave/behave/pull/157 |
778 | +# https://github.com/behave/behave/pull/165 |
779 | +# https://github.com/behave/behave/issues/118 |
780 | +# |
781 | # Fixes: |
782 | # * colors for tags |
783 | # * colors for tables |
784 | # * colors for docstrings |
785 | class PrettyFormatter(formatter.pretty.PrettyFormatter): |
786 | |
787 | + def result(self, result): |
788 | + if not self.monochrome: |
789 | + lines = self.step_lines + 1 |
790 | + if self.show_multiline: |
791 | + if result.table: |
792 | + lines += self.table_lines |
793 | + if result.text: |
794 | + lines += self.text_lines |
795 | + self.stream.write(up(lines)) |
796 | + arguments = [] |
797 | + location = None |
798 | + if self._match: |
799 | + arguments = self._match.arguments |
800 | + location = self._match.location |
801 | + self.print_step(result.status, arguments, location, True) |
802 | + if result.error_message: |
803 | + self.stream.write(indent(result.error_message.strip(), u' ')) |
804 | + self.stream.write('\n\n') |
805 | + self.stream.flush() |
806 | + |
807 | def table(self, table, strformat=unicode): |
808 | cell_lengths = [] |
809 | all_rows = [table.headings] + table.rows |
810 | for row in all_rows: |
811 | - lengths = [len(formatter.pretty.escape_cell(c)) for c in row] |
812 | + lengths = [len(escape_cell(c)) for c in row] |
813 | cell_lengths.append(lengths) |
814 | |
815 | max_lengths = [] |
816 | @@ -70,10 +132,16 @@ |
817 | self.stream.write('\n') |
818 | self.stream.flush() |
819 | |
820 | + table_width = 7 + 3 * len(table.headings) + sum(max_lengths) |
821 | + self.table_lines = len(all_rows) * (1 + table_width // self.display_width) |
822 | + |
823 | def doc_string(self, doc_string, strformat=unicode): |
824 | triplequotes = self.format('comments').text(u'"""') |
825 | - doc_string = strformat(self.escape_triple_quotes(doc_string)) |
826 | - self.stream.write(self.indent(u'\n'.join([ |
827 | + self.text_lines = 2 + sum( |
828 | + [(1 + (6 + len(line)) // self.display_width) |
829 | + for line in doc_string.splitlines()]) |
830 | + doc_string = strformat(escape_triple_quotes(doc_string)) |
831 | + self.stream.write(indent(u'\n'.join([ |
832 | triplequotes, doc_string, triplequotes]), u' ') + u'\n') |
833 | |
834 | def print_step(self, status, arguments, location, proceed): |
835 | @@ -93,6 +161,13 @@ |
836 | |
837 | text_start = 0 |
838 | for arg in arguments: |
839 | + if arg.end <= text_start: |
840 | + # -- SKIP-OVER: Optional and nested regexp args |
841 | + # - Optional regexp args (unmatched: None). |
842 | + # - Nested regexp args that are already processed. |
843 | + continue |
844 | + # -- VALID, MATCHED ARGUMENT: |
845 | + assert arg.original is not None |
846 | text = step_name[text_start:arg.start] |
847 | self.stream.write(text_format.text(text)) |
848 | line_length += len(text) |
849 | @@ -105,13 +180,27 @@ |
850 | self.stream.write(text_format.text(text)) |
851 | line_length += (len(text)) |
852 | |
853 | - location = self.indented_location(location, proceed) |
854 | + if self.show_timings: |
855 | + if status in ('passed', 'failed'): |
856 | + timing = '%6.3fs' % step.duration |
857 | + else: |
858 | + timing = ' ' * 7 |
859 | + else: |
860 | + timing = '' |
861 | if self.show_source: |
862 | + location = unicode(location) |
863 | + if timing: |
864 | + location = location + ' ' + timing |
865 | + location = self.indented_text(location, proceed) |
866 | self.stream.write(self.format('comments').text(location)) |
867 | line_length += len(location) |
868 | + elif timing: |
869 | + timing = self.indented_text(timing, proceed) |
870 | + self.stream.write(self.format('comments').text(timing)) |
871 | + line_length += len(timing) |
872 | self.stream.write("\n") |
873 | |
874 | - self.step_lines = int((line_length - 1) / self.display_width) |
875 | + self.step_lines = int((line_length - 1) // self.display_width) |
876 | |
877 | if self.show_multiline: |
878 | if step.text: |
879 | @@ -119,70 +208,14 @@ |
880 | if step.table: |
881 | self.table(step.table, strformat=text_format.text) |
882 | |
883 | - def print_tags(self, tags, indent): |
884 | + def print_tags(self, tags, indentation): |
885 | if not tags: |
886 | return |
887 | - formatted_tags = u' '.join(self.format('tag').text('@' + tag) |
888 | - for tag in tags) |
889 | - self.stream.write(indent + formatted_tags + '\n') |
890 | + tags = u' '.join(u'@' + tag for tag in tags) |
891 | + self.stream.write(indentation + self.format('tag').text(tags) + '\n') |
892 | |
893 | def eof(self): |
894 | self.replay() |
895 | - self.stream.write('\033[A') |
896 | + if self.config.show_skipped: |
897 | + self.stream.write('\n') |
898 | self.stream.flush() |
899 | - |
900 | - |
901 | -# monkey patch Runner so that feature files are sorted |
902 | -import os, sys |
903 | -from behave.runner import exec_file |
904 | -from behave import step_registry |
905 | -def _patched_feature_files(self): |
906 | - files = [] |
907 | - for path in self.config.paths: |
908 | - if os.path.isdir(path): |
909 | - for dirpath, dirnames, filenames in os.walk(path): |
910 | - dirnames.sort() |
911 | - for filename in sorted(filenames): |
912 | - if filename.endswith('.feature'): |
913 | - files.append(os.path.join(dirpath, filename)) |
914 | - elif path.startswith('@'): |
915 | - files.extend([filename.strip() for filename in open(path)]) |
916 | - elif os.path.exists(path): |
917 | - files.append(path) |
918 | - else: |
919 | - raise Exception("Can't find path: " + path) |
920 | - return files |
921 | - |
922 | -def _patched_load_step_definitions(self, extra_step_paths=None): |
923 | - steps_dir = os.path.join(self.base_dir, 'steps') |
924 | - if extra_step_paths is None: |
925 | - extra_step_paths = [] |
926 | - for path in self.config.paths[1:]: |
927 | - dirname = os.path.abspath(path) |
928 | - for dirname, subdirs, _fnames in os.walk(dirname): |
929 | - if 'steps' in subdirs: |
930 | - extra_step_paths.append(os.path.join(dirname, 'steps')) |
931 | - subdirs.remove('steps') # prune search |
932 | - # allow steps to import other stuff from the steps dir |
933 | - sys.path.insert(0, steps_dir) |
934 | - |
935 | - step_globals = { |
936 | - 'step_matcher': matchers.step_matcher, |
937 | - } |
938 | - |
939 | - for step_type in ('given', 'when', 'then', 'step'): |
940 | - decorator = getattr(step_registry, step_type) |
941 | - step_globals[step_type] = decorator |
942 | - step_globals[step_type.title()] = decorator |
943 | - |
944 | - for path in [steps_dir] + list(extra_step_paths): |
945 | - for name in os.listdir(path): |
946 | - if name.endswith('.py'): |
947 | - exec_file(os.path.join(path, name), step_globals) |
948 | - |
949 | - # clean up the path |
950 | - sys.path.pop(0) |
951 | - |
952 | - |
953 | -runner.Runner.feature_files = _patched_feature_files |
954 | -runner.Runner.load_step_definitions = _patched_load_step_definitions |
955 | |
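The behave_better.py changes above fix the --pretty formatter when a table is wider than the terminal: to move the cursor back up over a step and recolour it, the formatter must count how many terminal rows each logical line actually occupied after wrapping. A sketch mirroring the table arithmetic from the hunk (function name and the example widths are mine):

```python
# Hypothetical sketch of the wrapped-line bookkeeping added to
# PrettyFormatter above.  7 accounts for indentation and outer
# borders, 3 per column for the " | " separators, as in the hunk.

def table_lines(num_rows, num_headings, max_lengths, display_width):
    table_width = 7 + 3 * num_headings + sum(max_lengths)
    return num_rows * (1 + table_width // display_width)

# A 3-row, 2-column table rendered 43 characters wide (7 + 6 + 30)
# fits in an 80-column terminal, so each row occupies one line:
print(table_lines(3, 2, [10, 20], 80))
```

On a narrower terminal the same table wraps, and the count grows accordingly, which is what lets result() rewrite exactly the right number of rows.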
956 | === modified file 'features/support/tools.py' |
957 | --- features/support/tools.py 2012-12-13 14:58:43 +0000 |
958 | +++ features/support/tools.py 2014-08-28 08:14:30 +0000 |
959 | @@ -4,7 +4,6 @@ |
960 | |
961 | import erppeek |
962 | |
963 | - |
964 | __all__ = ['model', 'puts', 'set_trace'] # + 20 'assert_*' helpers |
965 | |
966 | |
967 | |
968 | === added file 'requires.txt' |
969 | --- requires.txt 1970-01-01 00:00:00 +0000 |
970 | +++ requires.txt 2014-08-28 08:14:30 +0000 |
971 | @@ -0,0 +1,1 @@ |
972 | +behave==1.2.4 |
LGTM, but there are a couple of behave points I am not entirely comfortable with.
Alexandre should review this one too.
Nicolas