APT

apt:1.8.2.z

Last commit made on 2021-04-19
Get this branch:
git clone -b 1.8.2.z https://git.launchpad.net/apt

Branch information

Name: 1.8.2.z
Repository: lp:apt

Recent commits

f59f0e9... by Julian Andres Klode

Release 1.8.2.3

e340ae0... by Julian Andres Klode

Default Acquire::AllowReleaseInfoChange::Suite to "true"

Closes: #931566
(cherry picked from commit 64b45e294f0c6931a9b57ae6cc99ecded8f6a2d3)
(cherry picked from commit 96942f9180c8e19bd487027fac35a5c2e967e004)
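For illustration, a minimal sketch of how code might consult this default through apt-pkg's configuration API (_config and FindB are apt's real global configuration object and lookup method; the wrapper function here is hypothetical):

#include <apt-pkg/configuration.h>

bool SuiteChangeAllowed()
{
   // with this release the fallback default becomes true, so a changed
   // Suite field no longer aborts an update unless explicitly disallowed
   return _config->FindB("Acquire::AllowReleaseInfoChange::Suite", true);
}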

95e417c... by Julian Andres Klode

Release 1.8.2.2

0e3b54d... by Julian Andres Klode

CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB

The integer overflow was detected by DonKult, who added a check like this:

(std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))

This deals with the code as it is, but it is still a fairly large limit
and could become fragile if we change the code. Let's limit our file
sizes to 128 GiB, which should be sufficient for everyone.

Original comment by DonKult:

The code assumes that it can add sizeof(Block)-1 to the size of the item
later on, but if we are close to a 64-bit overflow this is not possible.
Fixing this seems too complex compared to simply ensuring there is enough
room left, given that we will have far bigger problems the moment we act
on files that large: if the item is that large, the (valid) tar
containing it probably doesn't fit in 64 bits either.
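For illustration, a minimal sketch of the guard described above, using the commit's stated 128 GiB ceiling (Block, Item, and SizeIsSane are simplified stand-ins, not apt's real tar structures):

#include <cstdint>

struct Block { char Data[512]; };              // classic 512-byte tar block
struct Item  { uint64_t Size; };               // parsed size of a tar member

constexpr uint64_t MaxItemSize = 128ULL << 30; // 128 GiB

bool SizeIsSane(Item const &Itm)
{
   // a fixed ceiling avoids the later Size + sizeof(Block) - 1 overflow
   // and stays robust even if the surrounding code changes
   return Itm.Size <= MaxItemSize;
}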

ed78618... by Julian Andres Klode

CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB

Like the code in arfile.cc, MemControlExtract also has buffer
overflows in the code that allocates memory for parsing control files.

Specify an upper limit of 64 MiB for control files, both to protect
against Size overflowing (we allocate Size + 2 bytes) and to protect
somewhat against control files consisting only of zeroes.
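A hedged sketch of the allocation guard this describes (MaxControlSize and AllocateControl are illustrative names, not apt's real code):

#include <cstdlib>

constexpr unsigned long long MaxControlSize = 64ULL * 1024 * 1024; // 64 MiB

char *AllocateControl(unsigned long long Size)
{
   if (Size > MaxControlSize)    // reject oversized control members outright
      return nullptr;
   // Size + 2 can no longer wrap, and an all-zero member cannot eat all memory
   return static_cast<char *>(calloc(Size + 2, 1));
}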

29581d1... by Julian Andres Klode

tarfile: OOM hardening: Limit size of long names/links to 1 MiB

Tarballs store long names and long link targets in a special tar
header with a GNU extension, followed by the actual content (padded
to 512 bytes). Essentially, think of a name as a special kind of file.

A file size in a header is limited to 12 bytes, i.e. up to 10**12 or
about 1 TB. While this works OK-ish for file content that we stream
to extractors, we need to copy file names into memory, and this
opens us up to an OOM DoS attack.

Limit the file name size to 1 MiB, as libarchive does, to make
things safer.
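As a sketch of that cap under these assumptions (ReadLongName and MaxNameSize are invented for illustration, not apt's real parser):

#include <string>

constexpr unsigned long long MaxNameSize = 1024 * 1024; // 1 MiB, as libarchive uses

bool ReadLongName(unsigned long long Size, std::string &Name)
{
   if (Size > MaxNameSize)  // a header may claim up to ~1 TB; refuse to buffer it
      return false;
   Name.resize(Size);       // names, unlike file content, must live in memory
   // ... read Size bytes (padded to a 512-byte boundary) into Name ...
   return true;
}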

66962a6... by Julian Andres Klode

CVE-2020-27350: arfile: Integer overflow in parsing

GHSL-2020-169: The first hunk adds a check that there are more bytes
left to read in the file than the size of the member, ensuring both
(a) that the number is not negative, which caused the crash here, and
(b) that we similarly avoid other issues with trying to read too
much data.

GHSL-2020-168: Long file names are encoded by a special marker in
the filename field, with the real filename stored in what is normally
the data. We did not check that the length of the file name fits
within the length of the member, which meant we got an overflow later
when subtracting the name length from the member size to compute the
remaining member size.
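A rough sketch of both checks together (MemberIsSane and its parameters are illustrative; apt's real parser operates on its own member structures):

#include <cstdint>

bool MemberIsSane(uint64_t MemberSize, uint64_t BytesLeft, uint64_t NameLength)
{
   // GHSL-2020-169: the member must fit into what remains of the archive,
   // which also rules out a wrapped ("negative") remaining size
   if (MemberSize > BytesLeft)
      return false;
   // GHSL-2020-168: a long name stored in the data area must fit inside
   // the member, or MemberSize - NameLength underflows later
   if (NameLength > MemberSize)
      return false;
   return true;
}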

The file createdeb-lp1899193.cc was provided by GitHub Security Lab
and reformatted in apt coding style for inclusion in the test case;
both issues have an automated test case in
test/integration/test-ubuntu-bug-1899193-security-issues.

LP: #1899193

bb8d03f... by Julian Andres Klode

Fix location of testdeb in added regression tests

b382030... by Julian Andres Klode

Release 1.8.2.1

9dd4e8a... by Julian Andres Klode

.gitlab-ci.yml: Point to debian:buster