On Thu, Nov 04, 2021 at 01:37:57AM -0000, Alex Murray wrote:
> Review: Approve
>
> LGTM in general but I don't love the idea of hard-coding the product
> name info stuff - and instead would prefer if we can somehow add this
> into cve_lib directly and look it up via the subprojects configuration.
>
> Also I get one failure when running the end_to_end oval lib test on
> impish (but the other oval_lib test suites all pass):

I was getting the same failure in the same way; see below for a deep dive into why, and feedback on the generated results:

> [amurray:~/ubuntu … ubuntu-cve-tracker] master(13)+* 14s
> ± PYTHONPATH=scripts pytest-3 test/test_oval_lib_end_to_end.py
> ========================== test session starts ==========================
> platform linux -- Python 3.9.7, pytest-6.0.2, py-1.10.0, pluggy-0.13.0
> rootdir: /home/amurray/ubuntu/git/ubuntu-cve-tracker
> collected 4 items
>
> test/test_oval_lib_end_to_end.py ..F.
[100%]

> =============================== FAILURES ===============================
> ____ TestOvalLibEndToEnd.test_validate_entire_oci_oval[com.ubuntu.bionic.usn.oval.xml-bionic_20180814-bionic] ____
>
> self = , dpkg_file = 'com.ubuntu.bionic.usn.oval.xml', manifest = 'bionic_20180814', release = 'bionic'
>
>     @pytest.mark.parametrize("dpkg_file,manifest,release",
>         # The timestamped gold manifest oscap output has not been manually
>         # checked but it's nice to flag changes to the results of past USNs
>         [(util.bionic_dpkg_file, "bionic_20180814", "bionic"),
>          (util.trusty_dpkg_file, "trusty_20191107", "trusty")])
>     def test_validate_entire_oci_oval(self, dpkg_file, manifest, release):
>         """Coherence check of entire generated oci OVAL"""
>
>         util.create_validate_oci(dpkg_file, "{}_full".format(release),
>                                  ["--usn-oval-release", release], manifest, release)
>
> test/test_oval_lib_end_to_end.py:31:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> cls = , output_file = 'com.ubuntu.bionic.usn.oval.xml', new_filename = 'bionic_full', oscap_args = ['--usn-oval-release', 'bionic'], manifest = 'bionic_20180814', gold_file = 'bionic'
>
>     @classmethod
>     def create_validate_oci(cls, output_file, new_filename, oscap_args,
>                             manifest, gold_file):
>         """Generate and validate oci and dpkg oval XML files for a single Ubuntu
>         release"""
>
>         new_file = cls.rel_test_path + new_filename
>
>         # generate-oval creates files with identical names per-release
>         # This setup allows running multiple tests on the same OVAL file
>         # (without regenerating an identical oval) then creating a new OVAL
>         # with an identical name and different content for following tests
>         # so the OVAL is generated only as needed (less frequently than
>         # every test run, more frequently than once per module)
>         if not os.path.exists(new_file):
>             dpkg_file = cls.rel_test_path + output_file
>             oci_file = cls.rel_test_path + "oci." + output_file
>
>             # Generate OVAL
>             if sys.version_info[0] < 3:
>                 pycov = "python-coverage"
>             else:
>                 pycov = "python3-coverage"
>             subprocess.check_output([pycov, "run", "-a",
>                 "scripts/generate-oval", "--oci", "--usn-oval",
>                 "--output-dir={}".format(cls.rel_test_path)] + oscap_args)
>
>             # Validate file structure
>             subprocess.check_output(["oscap", "oval", "validate",
>                                      dpkg_file], stderr=subprocess.STDOUT)
>
>             subprocess.check_output(["oscap", "oval", "validate",
>                                      oci_file], stderr=subprocess.STDOUT)
>
>             os.rename(oci_file, new_file)
>
>             cls.files_to_del.update([new_file, dpkg_file])
>
>         # Test the oci XML file against a manifest
>         manifest_dir = cls.rel_test_path + "manifests/" + manifest + "/"
>         cmd_output = subprocess.check_output(["oscap", "oval", "eval",
>             new_file], stderr=subprocess.STDOUT, cwd=manifest_dir)
>         # Convert to str for py 3 compatibility
>         cmd_output = cmd_output.decode("utf-8")
>
>         # Compare output to expected
>         with open(cls.rel_test_path + "gold_oci_results/" + gold_file) as f:
>             gold_output = f.readlines()
>
>         for line in gold_output:
>
>             assert(line in cmd_output)
> E   AssertionError: assert 'Definition oval:com.ubuntu.bionic:def:38401000000: true\n' in 'Definition oval:com.ubuntu.bionic:def:48781000000: false\nDefinition oval:com.ubuntu.bionic:def:48771000000: true\nDe...com.ubuntu.bionic:def:36293000000: false\nDefinition oval:com.ubuntu.bionic:def:36272000000: false\nEvaluation done.\n'

I dug into this (when I should have been sleeping).
The test runs an oscap oval eval of the OCI USN OVAL generated from the USN db (database.json [0]) in your $UCT, against the OCI manifest file in test/manifests/bionic_20180814/ [1]. It then looks in the oscap output for tests indicating that a patch still needs to be applied, i.e. definitions that return true. The definition identifiers are derived from the USN number. Thus in this case, the assert trips because it can't find 'Definition oval:com.ubuntu.bionic:def:38401000000: true\n' in the oscap output, meaning that for this manifest, the eval should report that USN 3840-1 still needs to be applied.

USN 3840-1 fixed openssl / libssl1.1 with version 1.1.0g-2ubuntu4.3 and libssl1.0.0 with 1.0.2n-1ubuntu5.2; the manifest tested against contains:

  libssl1.0.0:amd64 1.0.2n-1ubuntu5.1
  libssl1.1:amd64 1.1.0g-2ubuntu4.1

both of which should indicate that they are vulnerable. Running the oscap evaluation against the current USN OVAL served off of security-metadata.ubuntu.com results in USN 3840-1 correctly being flagged as needing to be applied, and the end_to_end tests succeed on master. That's how it's all supposed to work.

However. While reproducing the failure and trying to generate reports and output based on switching between master and Eduardo's git branch in the same cloned tree, I got an even weirder result (due to operator error), and in a fit of trying to make sure things were in a sane state, deleted scripts/__pycache__/ ... and Eduardo's branch started behaving as expected. For testing, we should thus ensure python's cached bytecode files (__pycache__) are cleared. (I had also hit earlier test failures because the .coverage file from a long-ago run was in a different format.)
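For reference, the definition ids in the output above line up with USN numbers like this (a sketch inferred purely from the ids in the oscap output; the six-zero suffix is my inference, not a documented format):

```python
def usn_to_definition_id(usn):
    """Map a USN identifier such as '3840-1' to the numeric part of the
    OVAL definition id used in com.ubuntu.<release>.usn.oval.xml:
    concatenate the USN number and revision, then pad with six zeros."""
    number, revision = usn.split("-")
    return number + revision + "000000"

# 'Definition oval:com.ubuntu.bionic:def:38401000000' corresponds to USN 3840-1
assert usn_to_definition_id("3840-1") == "38401000000"
```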
I did go ahead and generate the oscap html reports based on the trusty and bionic test manifests and pushed them to:

  https://people.canonical.com/~sbeattie/trusty-output-usn-oval-improvement.html
  https://people.canonical.com/~sbeattie/bionic-output-usn-oval-improvement.html

along with their respective base case reports generated with unpatched oval data:

  https://people.canonical.com/~sbeattie/trusty-pristine-output-usn-oval-improvement.html
  https://people.canonical.com/~sbeattie/bionic-pristine-output-usn-oval-improvement.html

Some comments on what gets generated, from looking at differences in the XML content when things are run correctly:

- there are definite bugs in CVE handling/reporting [2] being fixed by Eduardo's branch (e.g. see the reference to CVE-2021-25219 missing from the bind9 USN 5126-[12] entries).

- that said, despite the expanded usn descriptions, the oscap report doesn't surface them. Not sure that's an issue.

- what might be an issue is that the expanded usn description texts end up as all one entry, with no delineation between paragraph breaks. Not sure how parsers of the oval will handle that.

- The origin for each fix/USN is pulled from the product description and ignores the significant effort Emi and Paulo have gone through to annotate in all our USNs where each individual binary can be pulled from (security or esm-infra), which leads to USNs that were published while trusty was still in LTS status being marked as available from UA Infra.

- The origin is sprinkled through as a comment field on a lot of the tags: criterion, constant_variable, ind:textfilecontent54_state, ind:textfilecontent54_object, ind:textfilecontent54_test. That seems like overkill, and also probably not the right way to represent this information. I'm not sure what the right way is; possibly as a 'reference' entry type? Research on how something like this should show up in OVAL is needed.
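On that last point: the OVAL definitions schema does already allow multiple per-definition <reference> elements in the metadata block, so something along these lines might be a more conventional home for the origin than comment attributes. This is a hypothetical sketch, not what the branch currently generates; the source="Origin" convention and the title text are made up for illustration:

```xml
<definition id="oval:com.ubuntu.trusty:def:38401000000" version="1" class="patch">
  <metadata>
    <title>USN-3840-1 -- openssl update</title>
    <reference source="USN" ref_id="USN-3840-1"
               ref_url="https://ubuntu.com/security/notices/USN-3840-1"/>
    <!-- hypothetical: pocket/origin expressed once, as a reference -->
    <reference source="Origin" ref_id="esm-infra"/>
  </metadata>
  <!-- criteria etc. unchanged -->
</definition>
```

Whether existing consumers would actually pick this up is part of the research still needed.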
(And despite all the origin references, none is surfaced in the oscap html report, though I'm not sure if that's just a limitation of oscap's html generator.)

- I think there is a desire to include a description of the UA products and the fact that UA Infra releases are supported. Again, I don't know enough OVAL to know the right way to do this.

I want to land this branch, but I am concerned that the references to the products are not done correctly, and if we do land it, people parsing the output for consumption in other scanners will see the changes and assume that's the final format we're going with. And I don't think we want that. I'll look at what individual commits we might be able to land tomorrow.

Thanks.

[0] generate-oval.py should take a json filename as an optional argument, rather than a directory in which it expects to find database.json.

[1] the test should be updated to use a temporary directory, both to pull the latest USN db into and to output its generated xml files in, so that it doesn't clutter up your local $UCT directory.

[2] we should also surface launchpad bug reports here as well.

-- 
Steve Beattie