Merge lp:~elopio/autopilot/1079129-typo-good-tests into lp:autopilot

Proposed by Leo Arias
Status: Merged
Approved by: Thomi Richards
Approved revision: 96
Merged at revision: 97
Proposed branch: lp:~elopio/autopilot/1079129-typo-good-tests
Merge into: lp:autopilot
Diff against target: 74 lines (+8/-8)
1 file modified
docs/tutorial/good_tests.rst (+8/-8)
To merge this branch: bzr merge lp:~elopio/autopilot/1079129-typo-good-tests
Reviewer Review Type Date Requested Status
Thomi Richards (community) Approve
PS Jenkins bot continuous-integration Approve
Review via email: mp+134444@code.launchpad.net

Commit message

Fix typos in the writing good tests page.

Description of the change

Fix typos in the writing good tests page.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Thomi Richards (thomir-deactivatedaccount) :
review: Approve

Preview Diff

=== modified file 'docs/tutorial/good_tests.rst'
--- docs/tutorial/good_tests.rst 2012-10-11 04:09:36 +0000
+++ docs/tutorial/good_tests.rst 2012-11-15 11:35:25 +0000
@@ -30,7 +30,7 @@
 self.keyboard.press_and_release("Alt+F4")
 self.assertThat(self.dash.visible, Eventually(Equals(False)))

-This test tests one thing only. It's three lines match perfectly with the typical three stages of a test (see above), and it only tests for things that it's supposed to. Remember that it's fine to assume that other parts of unity work as expected, as long as they're covered by an autopilot test somewhere else - that's why we don't need to verify that the dash really did open when we called ``self.dash.ensure_visible()``.
+This test tests one thing only. Its three lines match perfectly with the typical three stages of a test (see above), and it only tests for things that it's supposed to. Remember that it's fine to assume that other parts of unity work as expected, as long as they're covered by an autopilot test somewhere else - that's why we don't need to verify that the dash really did open when we called ``self.dash.ensure_visible()``.

 Fail Well
 +++++++++
@@ -167,7 +167,7 @@
 self.launcher_instance.switcher_next()
 self.assertThat(self.launcher.key_nav_selection, Eventually(GreaterThan(0)))

- This leads to a shorter test (which we've already said is a good thing), but the test itself is incomplete. Without scrolling up to the ``setUp`` and ``tearDown`` methods, it's hard to tell how the launcher switcher is started. THe situation gets even worse when test classes derive from each other, since the code that starts the launcher switcher may not even be in the same class!
+ This leads to a shorter test (which we've already said is a good thing), but the test itself is incomplete. Without scrolling up to the ``setUp`` and ``tearDown`` methods, it's hard to tell how the launcher switcher is started. The situation gets even worse when test classes derive from each other, since the code that starts the launcher switcher may not even be in the same class!

 A much better solution in this example is to initiate the switcher explicitly, and use ``addCleanup()`` to cancel it when the test ends, like this:

@@ -251,7 +251,7 @@
 Prefer ``wait_for`` and ``Eventually`` to ``sleep``
 ++++++++++++++++++++++++++++++++++++++++++++++++++++

-Early autopilot tests relied on extensive use of the python ``sleep`` call to halt tests long enough for unity to change it's state before the test continued. Previously, an autopilot test might have looked like this:
+Early autopilot tests relied on extensive use of the python ``sleep`` call to halt tests long enough for unity to change its state before the test continued. Previously, an autopilot test might have looked like this:

 **Bad Example:**

@@ -265,7 +265,7 @@
 sleep(2)
 self.assertThat(self.dash.visible, Equals(False))

-This test uses two ``sleep`` calls. The first makes sure the dash has had time to open before the test continues, and the second make sure that the dash has had time to respond to our key presses before we start testing things.
+This test uses two ``sleep`` calls. The first makes sure the dash has had time to open before the test continues, and the second makes sure that the dash has had time to respond to our key presses before we start testing things.

 There are several issues with this approach:
 1. On slow machines (like a jenkins instance running on a virtual machine), we may not be sleeping long enough. This can lead to tests failing on jenkins that pass on developers machines.
@@ -276,7 +276,7 @@
 In Tests
 --------

-Tests should use the ``Eventually`` matcher. This cen be imported as follows:
+Tests should use the ``Eventually`` matcher. This can be imported as follows:

 .. code-block:: python

@@ -301,7 +301,7 @@
 In Emulators
 ------------

-Emulators are not test cases, and do not have access to the ``self.assertThat`` method. However, we want emylator methods to block until unity has had time to process the commands given. For example, the ``ensure_visible`` method on the Dash controller should block until the dash really is visible.
+Emulators are not test cases, and do not have access to the ``self.assertThat`` method. However, we want emulator methods to block until unity has had time to process the commands given. For example, the ``ensure_visible`` method on the Dash controller should block until the dash really is visible.

 To achieve this goal, all attributes on unity emulators have been patched with a ``wait_for`` method that takes a testtools matcher (just like ``Eventually`` - in fact, the ``Eventually`` matcher just calls wait_for under the hood). For example, previously the ``ensure_visible`` method on the Dash controller might have looked like this:

@@ -367,7 +367,7 @@
 This is a simplified version of the IBus tests. In this case, the ``test_simple_input_dash`` test will be called 5 times. Each time, the ``self.input`` and ``self.result`` attribute will be set to the values in the scenario list. The first part of the scenario tuple is the scenario name - this is appended to the test id, and can be whatever you want.

 .. Important::
- It is important to notice that the test does not change it's behavior depending on the scenario it is run under. Exactly the same steps are taken - the only difference in this case is what gets typed on the keyboard, and what result is expected.
+ It is important to notice that the test does not change its behavior depending on the scenario it is run under. Exactly the same steps are taken - the only difference in this case is what gets typed on the keyboard, and what result is expected.

 Scenarios are applied before the test's ``setUp`` or ``tearDown`` methods are called, so it's safe (and indeed encouraged) to set up the test environment based on these attributes. For example, you may wish to set certain unity options for the duration of the test based on a scenario parameter.

@@ -447,7 +447,7 @@

 There are two ways to get around this problem, and they both lead to terrible tests:

- 1. Detect these situations and skip the test. This is bad for sveeral reasons - first, skipped tests should be viewed with the same level of suspicion as commented out code. Test skips should only be used in exceptional circumstances. A test skip in the test results is just as serious as a test failure.
+ 1. Detect these situations and skip the test. This is bad for several reasons - first, skipped tests should be viewed with the same level of suspicion as commented out code. Test skips should only be used in exceptional circumstances. A test skip in the test results is just as serious as a test failure.

 2. Detect the situation in the test, and run different code using an if statement. For example, we might decode to do this:

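The core pattern the patched documentation advocates (poll a condition with a timeout instead of a fixed ``sleep``) can be sketched in plain Python without autopilot installed. ``eventually`` below is a hypothetical stand-in for autopilot's ``Eventually``/``wait_for`` machinery, not its real implementation:

```python
import time

def eventually(predicate, timeout=10.0, interval=0.1):
    """Poll predicate until it returns a truthy value or timeout elapses.

    A simplified stand-in for autopilot's Eventually matcher: instead of
    sleeping a fixed amount and hoping the state has changed, we re-check
    the condition repeatedly and only give up after the full timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())  # one final check at the deadline

# Illustrative only: a fake "dash" whose visibility flips after a delay,
# standing in for the real Dash controller mentioned in the diff.
class FakeDash:
    def __init__(self, delay=0.3):
        self._visible_at = time.monotonic() + delay

    @property
    def visible(self):
        return time.monotonic() >= self._visible_at

dash = FakeDash()
assert eventually(lambda: dash.visible, timeout=2.0)
```

This addresses both failure modes the diff lists for ``sleep``: on a slow machine the poll simply runs until the generous deadline, while on a fast machine it returns as soon as the condition holds instead of wasting the full sleep interval.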
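The scenario multiplication described in the ``@@ -367`` hunk comes from the testscenarios mechanism: one test body, run once per scenario, with the scenario name appended to the test id and its attributes applied before ``setUp``. A minimal self-contained sketch of that idea follows; ``run_with_scenarios`` and ``TestContext`` are illustrative names, not autopilot's real API:

```python
# Two example scenarios: (name, attributes) pairs, as in the diff.
scenarios = [
    ("basic_lower", {"input": "abc", "result": "abc"}),
    ("basic_upper", {"input": "ABC", "result": "ABC"}),
]

class TestContext:
    """Hypothetical stand-in for a TestCase instance."""

def run_with_scenarios(base_id, scenarios, test_body):
    """Run test_body once per scenario; return {expanded_id: outcome}.

    Scenario attributes are set on the context before the body runs,
    mirroring how real scenarios are applied before setUp is called,
    and the scenario name is appended to the test id.
    """
    outcomes = {}
    for name, attrs in scenarios:
        ctx = TestContext()
        for key, value in attrs.items():
            setattr(ctx, key, value)
        outcomes["%s(%s)" % (base_id, name)] = test_body(ctx)
    return outcomes

# The body itself is identical for every scenario, as the Important::
# note in the diff stresses - only the data differs.
outcomes = run_with_scenarios(
    "test_simple_input_dash", scenarios,
    lambda ctx: ctx.input == ctx.result)
```

Because the attributes exist before ``setUp`` would run, a real test class can configure its environment (for example, unity options) from them, which is exactly what the diff encourages.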
