Merge ~mfo/ubuntu/+source/xen:lp1956166 into ubuntu/+source/xen:ubuntu/focal-devel

Proposed by Mauricio Faria de Oliveira
Status: Merged
Merged at revision: 6ab693720b8d5475538292fce9465d42d6da8393
Proposed branch: ~mfo/ubuntu/+source/xen:lp1956166
Merge into: ubuntu/+source/xen:ubuntu/focal-devel
Diff against target: 8480 lines (+8423/-0)
8 files modified
debian/changelog (+12/-0)
debian/control (+1/-0)
debian/patches/lp1956166-0001-introduce-unaligned.h.patch (+284/-0)
debian/patches/lp1956166-0002-lib-introduce-xxhash.patch (+888/-0)
debian/patches/lp1956166-0003-x86-Dom0-support-zstd-compressed-kernels.patch (+6404/-0)
debian/patches/lp1956166-0004-libxenguest-add-get_unaligned_le32.patch (+112/-0)
debian/patches/lp1956166-0005-libxenguest-support-zstd-compressed-kernels.patch (+717/-0)
debian/patches/series (+5/-0)
Reviewer Review Type Date Requested Status
Stefan Bader Pending
Canonical Server Core Reviewers Pending
Christian Ehrhardt  Pending
Review via email: mp+426221@code.launchpad.net

Description of the change

This patchset adds support for zstd-compressed kernels on Focal,
so it can boot the 5.15 HWE kernel from Jammy in Xen Dom0 and
DomU PV modes (DomU HVM already works).

Revision history for this message
Christian Ehrhardt (paelzer) wrote:

- we agreed to ignore the few days left for Impish
- patches LGTM, apply fine, and have good headers to track where they are from
- One could argue this is a new feature, but by shipping zstd-compressed kernels we made this a bug that deserves to be fixed
- SRU template in the bug is ready
- it will grow a libzstd1 dependency in focal; xen isn't in main anyway, but libzstd1 would be fine in any case :-)

The patches are quite large, but that is their upstream size.
You already went to some effort to reduce them - thanks.
You listed the upstream references clearly in the patch headers.

You already pre-built and pre-tested the changes.

The changelog is readable and complete, and the bug reference is OK.
I double-checked with debdiff that nothing else slipped in.

Eventually it is up to the SRU team to decide, but this LGTM.
Sponsoring.

Preview Diff

diff --git a/debian/changelog b/debian/changelog
index e335912..ddb115c 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,15 @@
+xen (4.11.3+24-g14b62ab3e5-1ubuntu2.1) focal; urgency=medium
+
+ * Add support for zstd compressed kernels for Dom0/DomU on x86 (LP: #1956166)
+ - d/p/lp1956166-0001-introduce-unaligned.h.patch
+ - d/p/lp1956166-0002-lib-introduce-xxhash.patch
+ - d/p/lp1956166-0003-x86-Dom0-support-zstd-compressed-kernels.patch
+ - d/p/lp1956166-0004-libxenguest-add-get_unaligned_le32.patch
+ - d/p/lp1956166-0005-libxenguest-support-zstd-compressed-kernels.patch
+ - d/control: add libzstd-dev as build-dep
+
+ -- Mauricio Faria de Oliveira <mfo@canonical.com> Mon, 04 Jul 2022 16:02:20 -0300
+
 xen (4.11.3+24-g14b62ab3e5-1ubuntu2) focal; urgency=medium

 * Update: Building hypervisor with cf-protection enabled
diff --git a/debian/control b/debian/control
index 73c186e..a906e84 100644
--- a/debian/control
+++ b/debian/control
@@ -34,6 +34,7 @@ Build-Depends:
 ocaml-native-compilers | ocaml-nox,
 ocaml-findlib,
 lmodern,
+ libzstd-dev,
 XS-Python-Version: current
 Homepage: https://xenproject.org/
 Vcs-Browser: https://salsa.debian.org/xen-team/debian-xen
diff --git a/debian/patches/lp1956166-0001-introduce-unaligned.h.patch b/debian/patches/lp1956166-0001-introduce-unaligned.h.patch
new file mode 100644
index 0000000..e060642
--- /dev/null
+++ b/debian/patches/lp1956166-0001-introduce-unaligned.h.patch
@@ -0,0 +1,284 @@
+From 3453f57b52a84a522b864a5d01773e0911a2184e Mon Sep 17 00:00:00 2001
+From: Jan Beulich <jbeulich@suse.com>
+Date: Mon, 18 Jan 2021 12:09:13 +0100
+Subject: [PATCH 1/5] introduce unaligned.h
+
+Rather than open-coding commonly used constructs in yet more places when
+pulling in zstd decompression support (and its xxhash prereq), pull out
+the custom bits into a commonly used header (for the hypervisor build;
+the tool stack and stubdom builds of libxenguest will still remain in
+need of similarly taking care of). For now this is limited to x86, where
+custom logic isn't needed (considering this is going to be used in init
+code only, even using alternatives patching to use MOVBE doesn't seem
+worthwhile).
+
+For Arm64 with CONFIG_ACPI=y (due to efi-dom0.c's re-use of xz/crc32.c)
+drop the not really necessary inclusion of xz's private.h.
+
+No change in generated code.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
+
+Bug-Ubuntu: https://bugs.launchpad.net/bugs/1956166
+Origin: backport, http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=7c9f81687ad611515474b1c17afc2f79f19faef5
+[backport: xen/common/lzo.c: refresh 2 context lines.]
+---
+ xen/common/lz4/defs.h | 9 ++--
+ xen/common/lzo.c | 7 ++-
+ xen/common/unlzo.c | 19 ++------
+ xen/common/xz/crc32.c | 2 -
+ xen/common/xz/private.h | 23 +++-------
+ xen/include/asm-x86/unaligned.h | 6 +++
+ xen/include/xen/unaligned.h | 79 +++++++++++++++++++++++++++++++++
+ 7 files changed, 104 insertions(+), 41 deletions(-)
+ create mode 100644 xen/include/asm-x86/unaligned.h
+ create mode 100644 xen/include/xen/unaligned.h
+
+diff --git a/xen/common/lz4/defs.h b/xen/common/lz4/defs.h
+index d886a4e122b8..4fbea2ac3dd4 100644
+--- a/xen/common/lz4/defs.h
++++ b/xen/common/lz4/defs.h
+@@ -10,18 +10,21 @@
+
+ #ifdef __XEN__
+ #include <asm/byteorder.h>
+-#endif
++#include <asm/unaligned.h>
++#else
+
+-static inline u16 INIT get_unaligned_le16(const void *p)
++static inline u16 get_unaligned_le16(const void *p)
+ {
+ return le16_to_cpup(p);
+ }
+
+-static inline u32 INIT get_unaligned_le32(const void *p)
++static inline u32 get_unaligned_le32(const void *p)
+ {
+ return le32_to_cpup(p);
+ }
+
++#endif
++
+ /*
+ * Detects 64 bits mode
+ */
+diff --git a/xen/common/lzo.c b/xen/common/lzo.c
+index 74831cb26836..f1cd1b58d27f 100644
+--- a/xen/common/lzo.c
++++ b/xen/common/lzo.c
+@@ -97,13 +97,12 @@
+ #ifdef __XEN__
+ #include <xen/lib.h>
+ #include <asm/byteorder.h>
++#include <asm/unaligned.h>
++#else
++#define get_unaligned_le16(_p) (*(u16 *)(_p))
+ #endif
+
+ #include <xen/lzo.h>
+-#define get_unaligned(_p) (*(_p))
+-#define put_unaligned(_val,_p) (*(_p)=_val)
+-#define get_unaligned_le16(_p) (*(u16 *)(_p))
+-#define get_unaligned_le32(_p) (*(u32 *)(_p))
+
+ static noinline size_t
+ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+diff --git a/xen/common/unlzo.c b/xen/common/unlzo.c
+index 5ae6cf911e86..11f64fcf3b26 100644
+--- a/xen/common/unlzo.c
++++ b/xen/common/unlzo.c
+@@ -34,30 +34,19 @@
+
+ #ifdef __XEN__
+ #include <asm/byteorder.h>
+-#endif
++#include <asm/unaligned.h>
++#else
+
+-#if 1 /* ndef CONFIG_??? */
+-static inline u16 INIT get_unaligned_be16(void *p)
++static inline u16 get_unaligned_be16(const void *p)
+ {
+ return be16_to_cpup(p);
+ }
+
+-static inline u32 INIT get_unaligned_be32(void *p)
++static inline u32 get_unaligned_be32(const void *p)
+ {
+ return be32_to_cpup(p);
+ }
+-#else
+-#include <asm/unaligned.h>
+-
+-static inline u16 INIT get_unaligned_be16(void *p)
+-{
+- return be16_to_cpu(__get_unaligned(p, 2));
+-}
+
+-static inline u32 INIT get_unaligned_be32(void *p)
+-{
+- return be32_to_cpu(__get_unaligned(p, 4));
+-}
+ #endif
+
+ static const unsigned char lzop_magic[] = {
+diff --git a/xen/common/xz/crc32.c b/xen/common/xz/crc32.c
+index af08ae2cf6e2..0708b6163812 100644
+--- a/xen/common/xz/crc32.c
++++ b/xen/common/xz/crc32.c
+@@ -15,8 +15,6 @@
+ * but they are bigger and use more memory for the lookup table.
+ */
+
+-#include "private.h"
+-
+ XZ_EXTERN uint32_t INITDATA xz_crc32_table[256];
+
+ XZ_EXTERN void INIT xz_crc32_init(void)
+diff --git a/xen/common/xz/private.h b/xen/common/xz/private.h
+index 7ea24892297f..511343fcc234 100644
+--- a/xen/common/xz/private.h
++++ b/xen/common/xz/private.h
+@@ -13,34 +13,23 @@
+ #ifdef __XEN__
+ #include <xen/kernel.h>
+ #include <asm/byteorder.h>
+-#endif
+-
+-#define get_le32(p) le32_to_cpup((const uint32_t *)(p))
++#include <asm/unaligned.h>
++#else
+
+-#if 1 /* ndef CONFIG_??? */
+-static inline u32 INIT get_unaligned_le32(void *p)
++static inline u32 get_unaligned_le32(const void *p)
+ {
+ return le32_to_cpup(p);
+ }
+
+-static inline void INIT put_unaligned_le32(u32 val, void *p)
++static inline void put_unaligned_le32(u32 val, void *p)
+ {
+ *(__force __le32*)p = cpu_to_le32(val);
+ }
+-#else
+-#include <asm/unaligned.h>
+-
+-static inline u32 INIT get_unaligned_le32(void *p)
+-{
+- return le32_to_cpu(__get_unaligned(p, 4));
+-}
+
+-static inline void INIT put_unaligned_le32(u32 val, void *p)
+-{
+- __put_unaligned(cpu_to_le32(val), p, 4);
+-}
+ #endif
+
++#define get_le32(p) le32_to_cpup((const uint32_t *)(p))
++
+ #define false 0
+ #define true 1
+
+diff --git a/xen/include/asm-x86/unaligned.h b/xen/include/asm-x86/unaligned.h
+new file mode 100644
+index 000000000000..6070801d4afd
+--- /dev/null
++++ b/xen/include/asm-x86/unaligned.h
+@@ -0,0 +1,6 @@
++#ifndef __ASM_UNALIGNED_H__
++#define __ASM_UNALIGNED_H__
++
++#include <xen/unaligned.h>
++
++#endif /* __ASM_UNALIGNED_H__ */
+diff --git a/xen/include/xen/unaligned.h b/xen/include/xen/unaligned.h
+new file mode 100644
+index 000000000000..eef7ec73b658
+--- /dev/null
++++ b/xen/include/xen/unaligned.h
+@@ -0,0 +1,79 @@
++/*
++ * This header can be used by architectures where unaligned accesses work
++ * without faulting, and at least reasonably efficiently. Other architectures
++ * will need to have a custom asm/unaligned.h.
++ */
++#ifndef __ASM_UNALIGNED_H__
++#error "xen/unaligned.h should not be included directly - include asm/unaligned.h instead"
++#endif
++
++#ifndef __XEN_UNALIGNED_H__
++#define __XEN_UNALIGNED_H__
++
++#include <xen/types.h>
++#include <asm/byteorder.h>
++
++#define get_unaligned(p) (*(p))
++#define put_unaligned(val, p) (*(p) = (val))
++
++static inline uint16_t get_unaligned_be16(const void *p)
++{
++ return be16_to_cpup(p);
++}
++
++static inline void put_unaligned_be16(uint16_t val, void *p)
++{
++ *(__force __be16*)p = cpu_to_be16(val);
++}
++
++static inline uint32_t get_unaligned_be32(const void *p)
++{
++ return be32_to_cpup(p);
++}
++
++static inline void put_unaligned_be32(uint32_t val, void *p)
++{
++ *(__force __be32*)p = cpu_to_be32(val);
++}
++
++static inline uint64_t get_unaligned_be64(const void *p)
++{
++ return be64_to_cpup(p);
++}
++
++static inline void put_unaligned_be64(uint64_t val, void *p)
++{
++ *(__force __be64*)p = cpu_to_be64(val);
++}
++
++static inline uint16_t get_unaligned_le16(const void *p)
++{
++ return le16_to_cpup(p);
++}
++
++static inline void put_unaligned_le16(uint16_t val, void *p)
++{
++ *(__force __le16*)p = cpu_to_le16(val);
++}
++
++static inline uint32_t get_unaligned_le32(const void *p)
++{
++ return le32_to_cpup(p);
++}
++
++static inline void put_unaligned_le32(uint32_t val, void *p)
++{
++ *(__force __le32*)p = cpu_to_le32(val);
++}
++
++static inline uint64_t get_unaligned_le64(const void *p)
++{
++ return le64_to_cpup(p);
++}
++
++static inline void put_unaligned_le64(uint64_t val, void *p)
++{
++ *(__force __le64*)p = cpu_to_le64(val);
++}
++
++#endif /* __XEN_UNALIGNED_H__ */
+--
+2.34.1
+
diff --git a/debian/patches/lp1956166-0002-lib-introduce-xxhash.patch b/debian/patches/lp1956166-0002-lib-introduce-xxhash.patch
new file mode 100644
index 0000000..6435d6d
--- /dev/null
+++ b/debian/patches/lp1956166-0002-lib-introduce-xxhash.patch
@@ -0,0 +1,888 @@
+From 7253046d49a835c7fc13de1bd3529ff66dd2e1df Mon Sep 17 00:00:00 2001
+From: Jan Beulich <jbeulich@suse.com>
+Date: Mon, 18 Jan 2021 12:10:34 +0100
+Subject: [PATCH 2/5] lib: introduce xxhash
+
+Taken from Linux at commit d89775fc929c ("lib/: replace HTTP links with
+HTTPS ones"), but split into separate 32-bit and 64-bit sources, since
+the immediate consumer (zstd) will need only the latter.
+
+Note that the building of this code is restricted to x86 for now because
+of the need to sort asm/unaligned.h for Arm.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
+
+Bug-Ubuntu: https://bugs.launchpad.net/bugs/1956166
+Origin: backport, http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=35d2960ae65f28106fdc5c2130f5f08fadca0e4c
+[backport: additional changes: Makefile and Rules.mk,
+ based on much larger/unneeded commits, respectively:
+ commit f301f9a9e84f ("lib: collect library files in an archive")
+ commit fea2fab96356 ("libx86: introduce a libx86 shared library")
+ - xen/lib/Makefile: add objects xen/lib/xxhash{32,64}.o
+ - xen/Rules.mk: add dir xen/lib/]
+---
+ xen/Rules.mk | 1 +
+ xen/include/xen/xxhash.h | 259 ++++++++++++++++++++++++++++++++++
+ xen/lib/Makefile | 2 +
+ xen/lib/xxhash32.c | 259 ++++++++++++++++++++++++++++++++++
+ xen/lib/xxhash64.c | 294 +++++++++++++++++++++++++++++++++++++++
+ 5 files changed, 815 insertions(+)
+ create mode 100644 xen/include/xen/xxhash.h
+ create mode 100644 xen/lib/Makefile
+ create mode 100644 xen/lib/xxhash32.c
+ create mode 100644 xen/lib/xxhash64.c
+
+diff --git a/xen/Rules.mk b/xen/Rules.mk
+index 5337e206ee17..47c954425d69 100644
+--- a/xen/Rules.mk
++++ b/xen/Rules.mk
+@@ -36,6 +36,7 @@ TARGET := $(BASEDIR)/xen
+ # Note that link order matters!
+ ALL_OBJS-y += $(BASEDIR)/common/built_in.o
+ ALL_OBJS-y += $(BASEDIR)/drivers/built_in.o
++ALL_OBJS-$(CONFIG_X86) += $(BASEDIR)/lib/built_in.o
+ ALL_OBJS-y += $(BASEDIR)/xsm/built_in.o
+ ALL_OBJS-y += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
+ ALL_OBJS-$(CONFIG_CRYPTO) += $(BASEDIR)/crypto/built_in.o
+diff --git a/xen/include/xen/xxhash.h b/xen/include/xen/xxhash.h
+new file mode 100644
+index 000000000000..6f2237cbcf8e
+--- /dev/null
++++ b/xen/include/xen/xxhash.h
+@@ -0,0 +1,259 @@
++/*
++ * xxHash - Extremely Fast Hash algorithm
++ * Copyright (C) 2012-2016, Yann Collet.
++ *
++ * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are
++ * met:
++ *
++ * * Redistributions of source code must retain the above copyright
++ * notice, this list of conditions and the following disclaimer.
++ * * Redistributions in binary form must reproduce the above
++ * copyright notice, this list of conditions and the following disclaimer
++ * in the documentation and/or other materials provided with the
++ * distribution.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ *
++ * This program is free software; you can redistribute it and/or modify it under
++ * the terms of the GNU General Public License version 2 as published by the
++ * Free Software Foundation. This program is dual-licensed; you may select
++ * either version 2 of the GNU General Public License ("GPL") or BSD license
++ * ("BSD").
++ *
++ * You can contact the author at:
++ * - xxHash homepage: https://cyan4973.github.io/xxHash/
++ * - xxHash source repository: https://github.com/Cyan4973/xxHash
++ */
++
++/*
++ * Notice extracted from xxHash homepage:
++ *
++ * xxHash is an extremely fast Hash algorithm, running at RAM speed limits.
++ * It also successfully passes all tests from the SMHasher suite.
++ *
++ * Comparison (single thread, Windows Seven 32 bits, using SMHasher on a Core 2
++ * Duo @3GHz)
++ *
++ * Name Speed Q.Score Author
++ * xxHash 5.4 GB/s 10
++ * CrapWow 3.2 GB/s 2 Andrew
++ * MumurHash 3a 2.7 GB/s 10 Austin Appleby
++ * SpookyHash 2.0 GB/s 10 Bob Jenkins
++ * SBox 1.4 GB/s 9 Bret Mulvey
++ * Lookup3 1.2 GB/s 9 Bob Jenkins
++ * SuperFastHash 1.2 GB/s 1 Paul Hsieh
++ * CityHash64 1.05 GB/s 10 Pike & Alakuijala
++ * FNV 0.55 GB/s 5 Fowler, Noll, Vo
++ * CRC32 0.43 GB/s 9
++ * MD5-32 0.33 GB/s 10 Ronald L. Rivest
++ * SHA1-32 0.28 GB/s 10
++ *
++ * Q.Score is a measure of quality of the hash function.
++ * It depends on successfully passing SMHasher test set.
++ * 10 is a perfect score.
++ *
++ * A 64-bits version, named xxh64 offers much better speed,
++ * but for 64-bits applications only.
++ * Name Speed on 64 bits Speed on 32 bits
++ * xxh64 13.8 GB/s 1.9 GB/s
++ * xxh32 6.8 GB/s 6.0 GB/s
++ */
++
++#ifndef __XENXXHASH_H__
++#define __XENXXHASH_H__
++
++#include <xen/types.h>
++
++/*-****************************
++ * Simple Hash Functions
++ *****************************/
++
++/**
++ * xxh32() - calculate the 32-bit hash of the input with a given seed.
++ *
++ * @input: The data to hash.
++ * @length: The length of the data to hash.
++ * @seed: The seed can be used to alter the result predictably.
++ *
++ * Speed on Core 2 Duo @ 3 GHz (single thread, SMHasher benchmark) : 5.4 GB/s
++ *
++ * Return: The 32-bit hash of the data.
++ */
++uint32_t xxh32(const void *input, size_t length, uint32_t seed);
++
++/**
++ * xxh64() - calculate the 64-bit hash of the input with a given seed.
++ *
++ * @input: The data to hash.
++ * @length: The length of the data to hash.
++ * @seed: The seed can be used to alter the result predictably.
++ *
++ * This function runs 2x faster on 64-bit systems, but slower on 32-bit systems.
++ *
++ * Return: The 64-bit hash of the data.
++ */
++uint64_t xxh64(const void *input, size_t length, uint64_t seed);
++
++/**
++ * xxhash() - calculate wordsize hash of the input with a given seed
++ * @input: The data to hash.
++ * @length: The length of the data to hash.
++ * @seed: The seed can be used to alter the result predictably.
++ *
++ * If the hash does not need to be comparable between machines with
++ * different word sizes, this function will call whichever of xxh32()
++ * or xxh64() is faster.
++ *
++ * Return: wordsize hash of the data.
++ */
++
++static inline unsigned long xxhash(const void *input, size_t length,
++ uint64_t seed)
++{
++#if BITS_PER_LONG == 64
++ return xxh64(input, length, seed);
++#else
++ return xxh32(input, length, seed);
++#endif
++}
++
++/*-****************************
++ * Streaming Hash Functions
++ *****************************/
++
++/*
++ * These definitions are only meant to allow allocation of XXH state
++ * statically, on stack, or in a struct for example.
++ * Do not use members directly.
++ */
++
++/**
++ * struct xxh32_state - private xxh32 state, do not use members directly
++ */
++struct xxh32_state {
++ uint32_t total_len_32;
++ uint32_t large_len;
++ uint32_t v1;
++ uint32_t v2;
++ uint32_t v3;
++ uint32_t v4;
++ uint32_t mem32[4];
++ uint32_t memsize;
++};
++
++/**
++ * struct xxh32_state - private xxh64 state, do not use members directly
++ */
++struct xxh64_state {
++ uint64_t total_len;
++ uint64_t v1;
++ uint64_t v2;
++ uint64_t v3;
++ uint64_t v4;
++ uint64_t mem64[4];
++ uint32_t memsize;
++};
++
++/**
++ * xxh32_reset() - reset the xxh32 state to start a new hashing operation
++ *
++ * @state: The xxh32 state to reset.
++ * @seed: Initialize the hash state with this seed.
++ *
++ * Call this function on any xxh32_state to prepare for a new hashing operation.
++ */
++void xxh32_reset(struct xxh32_state *state, uint32_t seed);
++
++/**
++ * xxh32_update() - hash the data given and update the xxh32 state
++ *
++ * @state: The xxh32 state to update.
++ * @input: The data to hash.
++ * @length: The length of the data to hash.
++ *
++ * After calling xxh32_reset() call xxh32_update() as many times as necessary.
++ *
++ * Return: Zero on success, otherwise an error code.
++ */
++int xxh32_update(struct xxh32_state *state, const void *input, size_t length);
++
++/**
++ * xxh32_digest() - produce the current xxh32 hash
++ *
++ * @state: Produce the current xxh32 hash of this state.
++ *
++ * A hash value can be produced at any time. It is still possible to continue
++ * inserting input into the hash state after a call to xxh32_digest(), and
++ * generate new hashes later on, by calling xxh32_digest() again.
++ *
++ * Return: The xxh32 hash stored in the state.
++ */
++uint32_t xxh32_digest(const struct xxh32_state *state);
++
++/**
++ * xxh64_reset() - reset the xxh64 state to start a new hashing operation
++ *
++ * @state: The xxh64 state to reset.
++ * @seed: Initialize the hash state with this seed.
++ */
++void xxh64_reset(struct xxh64_state *state, uint64_t seed);
++
++/**
++ * xxh64_update() - hash the data given and update the xxh64 state
++ * @state: The xxh64 state to update.
++ * @input: The data to hash.
++ * @length: The length of the data to hash.
++ *
++ * After calling xxh64_reset() call xxh64_update() as many times as necessary.
++ *
++ * Return: Zero on success, otherwise an error code.
++ */
++int xxh64_update(struct xxh64_state *state, const void *input, size_t length);
++
++/**
++ * xxh64_digest() - produce the current xxh64 hash
++ *
++ * @state: Produce the current xxh64 hash of this state.
++ *
++ * A hash value can be produced at any time. It is still possible to continue
++ * inserting input into the hash state after a call to xxh64_digest(), and
++ * generate new hashes later on, by calling xxh64_digest() again.
++ *
++ * Return: The xxh64 hash stored in the state.
++ */
++uint64_t xxh64_digest(const struct xxh64_state *state);
++
++/*-**************************
++ * Utils
++ ***************************/
++
++/**
++ * xxh32_copy_state() - copy the source state into the destination state
++ *
++ * @src: The source xxh32 state.
++ * @dst: The destination xxh32 state.
++ */
++void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src);
++
++/**
++ * xxh64_copy_state() - copy the source state into the destination state
++ *
++ * @src: The source xxh64 state.
++ * @dst: The destination xxh64 state.
++ */
++void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src);
++
++#endif /* __XENXXHASH_H__ */
+diff --git a/xen/lib/Makefile b/xen/lib/Makefile
+new file mode 100644
+index 000000000000..922e09439a80
+--- /dev/null
++++ b/xen/lib/Makefile
+@@ -0,0 +1,2 @@
++obj-$(CONFIG_X86) += xxhash32.o
++obj-$(CONFIG_X86) += xxhash64.o
+diff --git a/xen/lib/xxhash32.c b/xen/lib/xxhash32.c
+new file mode 100644
+index 000000000000..e8d403e5ced6
+--- /dev/null
++++ b/xen/lib/xxhash32.c
+@@ -0,0 +1,259 @@
++/*
++ * xxHash - Extremely Fast Hash algorithm
++ * Copyright (C) 2012-2016, Yann Collet.
++ *
++ * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are
++ * met:
++ *
++ * * Redistributions of source code must retain the above copyright
++ * notice, this list of conditions and the following disclaimer.
++ * * Redistributions in binary form must reproduce the above
++ * copyright notice, this list of conditions and the following disclaimer
++ * in the documentation and/or other materials provided with the
++ * distribution.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ *
++ * This program is free software; you can redistribute it and/or modify it under
++ * the terms of the GNU General Public License version 2 as published by the
++ * Free Software Foundation. This program is dual-licensed; you may select
++ * either version 2 of the GNU General Public License ("GPL") or BSD license
++ * ("BSD").
++ *
++ * You can contact the author at:
++ * - xxHash homepage: https://cyan4973.github.io/xxHash/
++ * - xxHash source repository: https://github.com/Cyan4973/xxHash
++ */
++
++#include <xen/compiler.h>
++#include <xen/errno.h>
++#include <xen/string.h>
++#include <xen/xxhash.h>
++#include <asm/unaligned.h>
++
++/*-*************************************
++ * Macros
++ **************************************/
++#define xxh_rotl32(x, r) ((x << r) | (x >> (32 - r)))
++
++#ifdef __LITTLE_ENDIAN
++# define XXH_CPU_LITTLE_ENDIAN 1
++#else
++# define XXH_CPU_LITTLE_ENDIAN 0
++#endif
++
++/*-*************************************
++ * Constants
++ **************************************/
++static const uint32_t PRIME32_1 = 2654435761U;
++static const uint32_t PRIME32_2 = 2246822519U;
++static const uint32_t PRIME32_3 = 3266489917U;
++static const uint32_t PRIME32_4 = 668265263U;
++static const uint32_t PRIME32_5 = 374761393U;
++
++/*-**************************
++ * Utils
++ ***************************/
++void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src)
++{
++ memcpy(dst, src, sizeof(*dst));
++}
++
++/*-***************************
++ * Simple Hash Functions
++ ****************************/
++static uint32_t xxh32_round(uint32_t seed, const uint32_t input)
++{
++ seed += input * PRIME32_2;
++ seed = xxh_rotl32(seed, 13);
++ seed *= PRIME32_1;
++ return seed;
++}
++
++uint32_t xxh32(const void *input, const size_t len, const uint32_t seed)
++{
++ const uint8_t *p = (const uint8_t *)input;
++ const uint8_t *b_end = p + len;
++ uint32_t h32;
++
++ if (len >= 16) {
++ const uint8_t *const limit = b_end - 16;
++ uint32_t v1 = seed + PRIME32_1 + PRIME32_2;
++ uint32_t v2 = seed + PRIME32_2;
++ uint32_t v3 = seed + 0;
++ uint32_t v4 = seed - PRIME32_1;
++
++ do {
++ v1 = xxh32_round(v1, get_unaligned_le32(p));
++ p += 4;
++ v2 = xxh32_round(v2, get_unaligned_le32(p));
++ p += 4;
++ v3 = xxh32_round(v3, get_unaligned_le32(p));
++ p += 4;
++ v4 = xxh32_round(v4, get_unaligned_le32(p));
++ p += 4;
++ } while (p <= limit);
++
++ h32 = xxh_rotl32(v1, 1) + xxh_rotl32(v2, 7) +
++ xxh_rotl32(v3, 12) + xxh_rotl32(v4, 18);
++ } else {
++ h32 = seed + PRIME32_5;
++ }
++
++ h32 += (uint32_t)len;
++
++ while (p + 4 <= b_end) {
++ h32 += get_unaligned_le32(p) * PRIME32_3;
++ h32 = xxh_rotl32(h32, 17) * PRIME32_4;
++ p += 4;
++ }
++
++ while (p < b_end) {
++ h32 += (*p) * PRIME32_5;
++ h32 = xxh_rotl32(h32, 11) * PRIME32_1;
++ p++;
++ }
++
++ h32 ^= h32 >> 15;
++ h32 *= PRIME32_2;
++ h32 ^= h32 >> 13;
++ h32 *= PRIME32_3;
++ h32 ^= h32 >> 16;
++
++ return h32;
++}
++
++/*-**************************************************
++ * Advanced Hash Functions
++ ***************************************************/
++void xxh32_reset(struct xxh32_state *statePtr, const uint32_t seed)
++{
++ /* use a local state for memcpy() to avoid strict-aliasing warnings */
++ struct xxh32_state state;
++
++ memset(&state, 0, sizeof(state));
++ state.v1 = seed + PRIME32_1 + PRIME32_2;
++ state.v2 = seed + PRIME32_2;
++ state.v3 = seed + 0;
++ state.v4 = seed - PRIME32_1;
++ memcpy(statePtr, &state, sizeof(state));
++}
++
++int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
++{
++ const uint8_t *p = (const uint8_t *)input;
++ const uint8_t *const b_end = p + len;
++
++ if (input == NULL)
++ return -EINVAL;
++
++ state->total_len_32 += (uint32_t)len;
++ state->large_len |= (len >= 16) | (state->total_len_32 >= 16);
++
++ if (state->memsize + len < 16) { /* fill in tmp buffer */
++ memcpy((uint8_t *)(state->mem32) + state->memsize, input, len);
++ state->memsize += (uint32_t)len;
++ return 0;
++ }
++
++ if (state->memsize) { /* some data left from previous update */
++ const uint32_t *p32 = state->mem32;
++
++ memcpy((uint8_t *)(state->mem32) + state->memsize, input,
++ 16 - state->memsize);
++
++ state->v1 = xxh32_round(state->v1, get_unaligned_le32(p32));
++ p32++;
++ state->v2 = xxh32_round(state->v2, get_unaligned_le32(p32));
++ p32++;
++ state->v3 = xxh32_round(state->v3, get_unaligned_le32(p32));
++ p32++;
++ state->v4 = xxh32_round(state->v4, get_unaligned_le32(p32));
++ p32++;
++
++ p += 16-state->memsize;
++ state->memsize = 0;
++ }
++
++ if (p <= b_end - 16) {
++ const uint8_t *const limit = b_end - 16;
++ uint32_t v1 = state->v1;
++ uint32_t v2 = state->v2;
++ uint32_t v3 = state->v3;
++ uint32_t v4 = state->v4;
++
++ do {
++ v1 = xxh32_round(v1, get_unaligned_le32(p));
++ p += 4;
++ v2 = xxh32_round(v2, get_unaligned_le32(p));
++ p += 4;
++ v3 = xxh32_round(v3, get_unaligned_le32(p));
++ p += 4;
++ v4 = xxh32_round(v4, get_unaligned_le32(p));
++ p += 4;
++ } while (p <= limit);
++
++ state->v1 = v1;
++ state->v2 = v2;
++ state->v3 = v3;
++ state->v4 = v4;
++ }
++
++ if (p < b_end) {
++ memcpy(state->mem32, p, (size_t)(b_end-p));
++ state->memsize = (uint32_t)(b_end-p);
++ }
++
++ return 0;
++}
++
++uint32_t xxh32_digest(const struct xxh32_state *state)
++{
++ const uint8_t *p = (const uint8_t *)state->mem32;
++ const uint8_t *const b_end = (const uint8_t *)(state->mem32) +
++ state->memsize;
++ uint32_t h32;
++
++ if (state->large_len) {
++ h32 = xxh_rotl32(state->v1, 1) + xxh_rotl32(state->v2, 7) +
++ xxh_rotl32(state->v3, 12) + xxh_rotl32(state->v4, 18);
++ } else {
++ h32 = state->v3 /* == seed */ + PRIME32_5;
++ }
++
++ h32 += state->total_len_32;
++
++ while (p + 4 <= b_end) {
++ h32 += get_unaligned_le32(p) * PRIME32_3;
++ h32 = xxh_rotl32(h32, 17) * PRIME32_4;
++ p += 4;
++ }
++
++ while (p < b_end) {
++ h32 += (*p) * PRIME32_5;
++ h32 = xxh_rotl32(h32, 11) * PRIME32_1;
++ p++;
++ }
++
++ h32 ^= h32 >> 15;
++ h32 *= PRIME32_2;
++ h32 ^= h32 >> 13;
++ h32 *= PRIME32_3;
++ h32 ^= h32 >> 16;
++
++ return h32;
++}
++
+diff --git a/xen/lib/xxhash64.c b/xen/lib/xxhash64.c
+new file mode 100644
+index 000000000000..ba6bcf152d6f
+--- /dev/null
++++ b/xen/lib/xxhash64.c
+@@ -0,0 +1,294 @@
++/*
++ * xxHash - Extremely Fast Hash algorithm
++ * Copyright (C) 2012-2016, Yann Collet.
++ *
++ * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are
++ * met:
++ *
++ * * Redistributions of source code must retain the above copyright
++ * notice, this list of conditions and the following disclaimer.
++ * * Redistributions in binary form must reproduce the above
++ * copyright notice, this list of conditions and the following disclaimer
934++ * in the documentation and/or other materials provided with the
935++ * distribution.
936++ *
937++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
938++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
939++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
940++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
941++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
942++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
943++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
944++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
945++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
946++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
947++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
948++ *
949++ * This program is free software; you can redistribute it and/or modify it under
950++ * the terms of the GNU General Public License version 2 as published by the
951++ * Free Software Foundation. This program is dual-licensed; you may select
952++ * either version 2 of the GNU General Public License ("GPL") or BSD license
953++ * ("BSD").
954++ *
955++ * You can contact the author at:
956++ * - xxHash homepage: https://cyan4973.github.io/xxHash/
957++ * - xxHash source repository: https://github.com/Cyan4973/xxHash
958++ */
959++
960++#include <xen/compiler.h>
961++#include <xen/errno.h>
962++#include <xen/string.h>
963++#include <xen/xxhash.h>
964++#include <asm/unaligned.h>
965++
966++/*-*************************************
967++ * Macros
968++ **************************************/
969++#define xxh_rotl64(x, r) ((x << r) | (x >> (64 - r)))
970++
971++#ifdef __LITTLE_ENDIAN
972++# define XXH_CPU_LITTLE_ENDIAN 1
973++#else
974++# define XXH_CPU_LITTLE_ENDIAN 0
975++#endif
976++
977++/*-*************************************
978++ * Constants
979++ **************************************/
980++static const uint64_t PRIME64_1 = 11400714785074694791ULL;
981++static const uint64_t PRIME64_2 = 14029467366897019727ULL;
982++static const uint64_t PRIME64_3 = 1609587929392839161ULL;
983++static const uint64_t PRIME64_4 = 9650029242287828579ULL;
984++static const uint64_t PRIME64_5 = 2870177450012600261ULL;
985++
986++/*-**************************
987++ * Utils
988++ ***************************/
989++void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src)
990++{
991++ memcpy(dst, src, sizeof(*dst));
992++}
993++
994++/*-***************************
995++ * Simple Hash Functions
996++ ****************************/
997++static uint64_t xxh64_round(uint64_t acc, const uint64_t input)
998++{
999++ acc += input * PRIME64_2;
1000++ acc = xxh_rotl64(acc, 31);
1001++ acc *= PRIME64_1;
1002++ return acc;
1003++}
1004++
1005++static uint64_t xxh64_merge_round(uint64_t acc, uint64_t val)
1006++{
1007++ val = xxh64_round(0, val);
1008++ acc ^= val;
1009++ acc = acc * PRIME64_1 + PRIME64_4;
1010++ return acc;
1011++}
1012++
1013++uint64_t xxh64(const void *input, const size_t len, const uint64_t seed)
1014++{
1015++ const uint8_t *p = (const uint8_t *)input;
1016++ const uint8_t *const b_end = p + len;
1017++ uint64_t h64;
1018++
1019++ if (len >= 32) {
1020++ const uint8_t *const limit = b_end - 32;
1021++ uint64_t v1 = seed + PRIME64_1 + PRIME64_2;
1022++ uint64_t v2 = seed + PRIME64_2;
1023++ uint64_t v3 = seed + 0;
1024++ uint64_t v4 = seed - PRIME64_1;
1025++
1026++ do {
1027++ v1 = xxh64_round(v1, get_unaligned_le64(p));
1028++ p += 8;
1029++ v2 = xxh64_round(v2, get_unaligned_le64(p));
1030++ p += 8;
1031++ v3 = xxh64_round(v3, get_unaligned_le64(p));
1032++ p += 8;
1033++ v4 = xxh64_round(v4, get_unaligned_le64(p));
1034++ p += 8;
1035++ } while (p <= limit);
1036++
1037++ h64 = xxh_rotl64(v1, 1) + xxh_rotl64(v2, 7) +
1038++ xxh_rotl64(v3, 12) + xxh_rotl64(v4, 18);
1039++ h64 = xxh64_merge_round(h64, v1);
1040++ h64 = xxh64_merge_round(h64, v2);
1041++ h64 = xxh64_merge_round(h64, v3);
1042++ h64 = xxh64_merge_round(h64, v4);
1043++
1044++ } else {
1045++ h64 = seed + PRIME64_5;
1046++ }
1047++
1048++ h64 += (uint64_t)len;
1049++
1050++ while (p + 8 <= b_end) {
1051++ const uint64_t k1 = xxh64_round(0, get_unaligned_le64(p));
1052++
1053++ h64 ^= k1;
1054++ h64 = xxh_rotl64(h64, 27) * PRIME64_1 + PRIME64_4;
1055++ p += 8;
1056++ }
1057++
1058++ if (p + 4 <= b_end) {
1059++ h64 ^= (uint64_t)(get_unaligned_le32(p)) * PRIME64_1;
1060++ h64 = xxh_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
1061++ p += 4;
1062++ }
1063++
1064++ while (p < b_end) {
1065++ h64 ^= (*p) * PRIME64_5;
1066++ h64 = xxh_rotl64(h64, 11) * PRIME64_1;
1067++ p++;
1068++ }
1069++
1070++ h64 ^= h64 >> 33;
1071++ h64 *= PRIME64_2;
1072++ h64 ^= h64 >> 29;
1073++ h64 *= PRIME64_3;
1074++ h64 ^= h64 >> 32;
1075++
1076++ return h64;
1077++}
1078++
1079++/*-**************************************************
1080++ * Advanced Hash Functions
1081++ ***************************************************/
1082++void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
1083++{
1084++ /* use a local state for memcpy() to avoid strict-aliasing warnings */
1085++ struct xxh64_state state;
1086++
1087++ memset(&state, 0, sizeof(state));
1088++ state.v1 = seed + PRIME64_1 + PRIME64_2;
1089++ state.v2 = seed + PRIME64_2;
1090++ state.v3 = seed + 0;
1091++ state.v4 = seed - PRIME64_1;
1092++ memcpy(statePtr, &state, sizeof(state));
1093++}
1094++
1095++int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
1096++{
1097++ const uint8_t *p = (const uint8_t *)input;
1098++ const uint8_t *const b_end = p + len;
1099++
1100++ if (input == NULL)
1101++ return -EINVAL;
1102++
1103++ state->total_len += len;
1104++
1105++ if (state->memsize + len < 32) { /* fill in tmp buffer */
1106++ memcpy(((uint8_t *)state->mem64) + state->memsize, input, len);
1107++ state->memsize += (uint32_t)len;
1108++ return 0;
1109++ }
1110++
1111++ if (state->memsize) { /* tmp buffer is full */
1112++ uint64_t *p64 = state->mem64;
1113++
1114++ memcpy(((uint8_t *)p64) + state->memsize, input,
1115++ 32 - state->memsize);
1116++
1117++ state->v1 = xxh64_round(state->v1, get_unaligned_le64(p64));
1118++ p64++;
1119++ state->v2 = xxh64_round(state->v2, get_unaligned_le64(p64));
1120++ p64++;
1121++ state->v3 = xxh64_round(state->v3, get_unaligned_le64(p64));
1122++ p64++;
1123++ state->v4 = xxh64_round(state->v4, get_unaligned_le64(p64));
1124++
1125++ p += 32 - state->memsize;
1126++ state->memsize = 0;
1127++ }
1128++
1129++ if (p + 32 <= b_end) {
1130++ const uint8_t *const limit = b_end - 32;
1131++ uint64_t v1 = state->v1;
1132++ uint64_t v2 = state->v2;
1133++ uint64_t v3 = state->v3;
1134++ uint64_t v4 = state->v4;
1135++
1136++ do {
1137++ v1 = xxh64_round(v1, get_unaligned_le64(p));
1138++ p += 8;
1139++ v2 = xxh64_round(v2, get_unaligned_le64(p));
1140++ p += 8;
1141++ v3 = xxh64_round(v3, get_unaligned_le64(p));
1142++ p += 8;
1143++ v4 = xxh64_round(v4, get_unaligned_le64(p));
1144++ p += 8;
1145++ } while (p <= limit);
1146++
1147++ state->v1 = v1;
1148++ state->v2 = v2;
1149++ state->v3 = v3;
1150++ state->v4 = v4;
1151++ }
1152++
1153++ if (p < b_end) {
1154++ memcpy(state->mem64, p, (size_t)(b_end-p));
1155++ state->memsize = (uint32_t)(b_end - p);
1156++ }
1157++
1158++ return 0;
1159++}
1160++
1161++uint64_t xxh64_digest(const struct xxh64_state *state)
1162++{
1163++ const uint8_t *p = (const uint8_t *)state->mem64;
1164++ const uint8_t *const b_end = (const uint8_t *)state->mem64 +
1165++ state->memsize;
1166++ uint64_t h64;
1167++
1168++ if (state->total_len >= 32) {
1169++ const uint64_t v1 = state->v1;
1170++ const uint64_t v2 = state->v2;
1171++ const uint64_t v3 = state->v3;
1172++ const uint64_t v4 = state->v4;
1173++
1174++ h64 = xxh_rotl64(v1, 1) + xxh_rotl64(v2, 7) +
1175++ xxh_rotl64(v3, 12) + xxh_rotl64(v4, 18);
1176++ h64 = xxh64_merge_round(h64, v1);
1177++ h64 = xxh64_merge_round(h64, v2);
1178++ h64 = xxh64_merge_round(h64, v3);
1179++ h64 = xxh64_merge_round(h64, v4);
1180++ } else {
1181++ h64 = state->v3 + PRIME64_5;
1182++ }
1183++
1184++ h64 += (uint64_t)state->total_len;
1185++
1186++ while (p + 8 <= b_end) {
1187++ const uint64_t k1 = xxh64_round(0, get_unaligned_le64(p));
1188++
1189++ h64 ^= k1;
1190++ h64 = xxh_rotl64(h64, 27) * PRIME64_1 + PRIME64_4;
1191++ p += 8;
1192++ }
1193++
1194++ if (p + 4 <= b_end) {
1195++ h64 ^= (uint64_t)(get_unaligned_le32(p)) * PRIME64_1;
1196++ h64 = xxh_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
1197++ p += 4;
1198++ }
1199++
1200++ while (p < b_end) {
1201++ h64 ^= (*p) * PRIME64_5;
1202++ h64 = xxh_rotl64(h64, 11) * PRIME64_1;
1203++ p++;
1204++ }
1205++
1206++ h64 ^= h64 >> 33;
1207++ h64 *= PRIME64_2;
1208++ h64 ^= h64 >> 29;
1209++ h64 *= PRIME64_3;
1210++ h64 ^= h64 >> 32;
1211++
1212++ return h64;
1213++}
1214+--
1215+2.34.1
1216+
1217diff --git a/debian/patches/lp1956166-0003-x86-Dom0-support-zstd-compressed-kernels.patch b/debian/patches/lp1956166-0003-x86-Dom0-support-zstd-compressed-kernels.patch
1218new file mode 100644
1219index 0000000..cb6d5f3
1220--- /dev/null
1221+++ b/debian/patches/lp1956166-0003-x86-Dom0-support-zstd-compressed-kernels.patch
1222@@ -0,0 +1,6404 @@
1223+From 95becb20279ede2cc0b87e0311f43911997a53e7 Mon Sep 17 00:00:00 2001
1224+From: Jan Beulich <jbeulich@suse.com>
1225+Date: Mon, 18 Jan 2021 12:12:23 +0100
1226+Subject: [PATCH 3/5] x86/Dom0: support zstd compressed kernels
1227+
1228+Taken from Linux at commit 1c4dd334df3a ("lib: decompress_unzstd: Limit
1229+output size") for unzstd.c (renamed from decompress_unzstd.c) and
1230+36f9ff9e03de ("lib: Fix fall-through warnings for Clang") for zstd/,
1231+with bits from linux/zstd.h merged into suitable other headers.
1232+
1233+To limit the editing necessary, introduce ptrdiff_t.
1234+
1235+Signed-off-by: Jan Beulich <jbeulich@suse.com>
1236+Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
1237+
1238+Bug-Ubuntu: https://bugs.launchpad.net/bugs/1956166
1239+Origin: backport, http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=d6627cf1b63ce57a6a7e2c1800dbc50eed742c32
1240+[backport: xen/common/Makefile: remove 'lzo' from list,
1241+ and refresh 1 context line.]
1242+---
1243+ xen/common/Makefile | 2 +-
1244+ xen/common/decompress.c | 3 +
1245+ xen/common/unzstd.c | 308 ++++
1246+ xen/common/zstd/bitstream.h | 380 +++++
1247+ xen/common/zstd/decompress.c | 2496 ++++++++++++++++++++++++++++++
1248+ xen/common/zstd/entropy_common.c | 243 +++
1249+ xen/common/zstd/error_private.h | 110 ++
1250+ xen/common/zstd/fse.h | 575 +++++++
1251+ xen/common/zstd/fse_decompress.c | 324 ++++
1252+ xen/common/zstd/huf.h | 212 +++
1253+ xen/common/zstd/huf_decompress.c | 960 ++++++++++++
1254+ xen/common/zstd/mem.h | 151 ++
1255+ xen/common/zstd/zstd_common.c | 74 +
1256+ xen/common/zstd/zstd_internal.h | 372 +++++
1257+ xen/include/asm-arm/types.h | 6 +
1258+ xen/include/asm-x86/types.h | 6 +
1259+ xen/include/xen/decompress.h | 2 +-
1260+ 17 files changed, 6222 insertions(+), 2 deletions(-)
1261+ create mode 100644 xen/common/unzstd.c
1262+ create mode 100644 xen/common/zstd/bitstream.h
1263+ create mode 100644 xen/common/zstd/decompress.c
1264+ create mode 100644 xen/common/zstd/entropy_common.c
1265+ create mode 100644 xen/common/zstd/error_private.h
1266+ create mode 100644 xen/common/zstd/fse.h
1267+ create mode 100644 xen/common/zstd/fse_decompress.c
1268+ create mode 100644 xen/common/zstd/huf.h
1269+ create mode 100644 xen/common/zstd/huf_decompress.c
1270+ create mode 100644 xen/common/zstd/mem.h
1271+ create mode 100644 xen/common/zstd/zstd_common.c
1272+ create mode 100644 xen/common/zstd/zstd_internal.h
1273+
1274+diff --git a/xen/common/Makefile b/xen/common/Makefile
1275+index 24d4752ccc55..c4dceff97842 100644
1276+--- a/xen/common/Makefile
1277++++ b/xen/common/Makefile
1278+@@ -66,7 +66,7 @@ obj-bin-y += warning.init.o
1279+ obj-$(CONFIG_XENOPROF) += xenoprof.o
1280+ obj-y += xmalloc_tlsf.o
1281+
1282+-obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
1283++obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 unzstd earlycpio,$(n).init.o)
1284+
1285+
1286+ obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall.o xlat.o)
1287+diff --git a/xen/common/decompress.c b/xen/common/decompress.c
1288+index 9d6e0c4ab075..79e60f4802d5 100644
1289+--- a/xen/common/decompress.c
1290++++ b/xen/common/decompress.c
1291+@@ -31,5 +31,8 @@ int __init decompress(void *inbuf, unsigned int len, void *outbuf)
1292+ if ( len >= 2 && !memcmp(inbuf, "\x02\x21", 2) )
1293+ return unlz4(inbuf, len, NULL, NULL, outbuf, NULL, error);
1294+
1295++ if ( len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4) )
1296++ return unzstd(inbuf, len, NULL, NULL, outbuf, NULL, error);
1297++
1298+ return 1;
1299+ }
1300+diff --git a/xen/common/unzstd.c b/xen/common/unzstd.c
1301+new file mode 100644
1302+index 000000000000..a10761642764
1303+--- /dev/null
1304++++ b/xen/common/unzstd.c
1305+@@ -0,0 +1,308 @@
1306++// SPDX-License-Identifier: GPL-2.0
1307++
1308++/*
1309++ * Important notes about in-place decompression
1310++ *
1311++ * At least on x86, the kernel is decompressed in place: the compressed data
1312++ * is placed to the end of the output buffer, and the decompressor overwrites
1313++ * most of the compressed data. There must be enough safety margin to
1314++ * guarantee that the write position is always behind the read position.
1315++ *
1316++ * The safety margin for ZSTD with a 128 KB block size is calculated below.
1317++ * Note that the margin with ZSTD is bigger than with GZIP or XZ!
1318++ *
1319++ * The worst case for in-place decompression is that the beginning of
1320++ * the file is compressed extremely well, and the rest of the file is
1321++ * uncompressible. Thus, we must look for worst-case expansion when the
1322++ * compressor is encoding uncompressible data.
1323++ *
1324++ * The structure of the .zst file in case of a compressed kernel is as follows.
1325++ * Maximum sizes (as bytes) of the fields are in parentheses.
1326++ *
1327++ * Frame Header: (18)
1328++ * Blocks: (N)
1329++ * Checksum: (4)
1330++ *
1331++ * The frame header and checksum overhead is at most 22 bytes.
1332++ *
1333++ * ZSTD stores the data in blocks. Each block has a header whose size is
1334++ * a 3 bytes. After the block header, there is up to 128 KB of payload.
1335++ * The maximum uncompressed size of the payload is 128 KB. The minimum
1336++ * uncompressed size of the payload is never less than the payload size
1337++ * (excluding the block header).
1338++ *
1339++ * The assumption, that the uncompressed size of the payload is never
1340++ * smaller than the payload itself, is valid only when talking about
1341++ * the payload as a whole. It is possible that the payload has parts where
1342++ * the decompressor consumes more input than it produces output. Calculating
1343++ * the worst case for this would be tricky. Instead of trying to do that,
1344++ * let's simply make sure that the decompressor never overwrites any bytes
1345++ * of the payload which it is currently reading.
1346++ *
1347++ * Now we have enough information to calculate the safety margin. We need
1348++ * - 22 bytes for the .zst file format headers;
1349++ * - 3 bytes per every 128 KiB of uncompressed size (one block header per
1350++ * block); and
1351++ * - 128 KiB (biggest possible zstd block size) to make sure that the
1352++ * decompressor never overwrites anything from the block it is currently
1353++ * reading.
1354++ *
1355++ * We get the following formula:
1356++ *
1357++ * safety_margin = 22 + uncompressed_size * 3 / 131072 + 131072
1358++ * <= 22 + (uncompressed_size >> 15) + 131072
1359++ */
1360++
1361++#include "decompress.h"
1362++
1363++#include "zstd/entropy_common.c"
1364++#include "zstd/fse_decompress.c"
1365++#include "zstd/huf_decompress.c"
1366++#include "zstd/zstd_common.c"
1367++#include "zstd/decompress.c"
1368++
1369++/* 128MB is the maximum window size supported by zstd. */
1370++#define ZSTD_WINDOWSIZE_MAX (1 << ZSTD_WINDOWLOG_MAX)
1371++/*
1372++ * Size of the input and output buffers in multi-call mode.
1373++ * Pick a larger size because it isn't used during kernel decompression,
1374++ * since that is single pass, and we have to allocate a large buffer for
1375++ * zstd's window anyway. The larger size speeds up initramfs decompression.
1376++ */
1377++#define ZSTD_IOBUF_SIZE (1 << 17)
1378++
1379++static int INIT handle_zstd_error(size_t ret, void (*error)(const char *x))
1380++{
1381++ const int err = ZSTD_getErrorCode(ret);
1382++
1383++ if (!ZSTD_isError(ret))
1384++ return 0;
1385++
1386++ switch (err) {
1387++ case ZSTD_error_memory_allocation:
1388++ error("ZSTD decompressor ran out of memory");
1389++ break;
1390++ case ZSTD_error_prefix_unknown:
1391++ error("Input is not in the ZSTD format (wrong magic bytes)");
1392++ break;
1393++ case ZSTD_error_dstSize_tooSmall:
1394++ case ZSTD_error_corruption_detected:
1395++ case ZSTD_error_checksum_wrong:
1396++ error("ZSTD-compressed data is corrupt");
1397++ break;
1398++ default:
1399++ error("ZSTD-compressed data is probably corrupt");
1400++ break;
1401++ }
1402++ return -1;
1403++}
1404++
1405++/*
1406++ * Handle the case where we have the entire input and output in one segment.
1407++ * We can allocate less memory (no circular buffer for the sliding window),
1408++ * and avoid some memcpy() calls.
1409++ */
1410++static int INIT decompress_single(const u8 *in_buf, long in_len, u8 *out_buf,
1411++ long out_len, unsigned int *in_pos,
1412++ void (*error)(const char *x))
1413++{
1414++ const size_t wksp_size = ZSTD_DCtxWorkspaceBound();
1415++ void *wksp = large_malloc(wksp_size);
1416++ ZSTD_DCtx *dctx = ZSTD_initDCtx(wksp, wksp_size);
1417++ int err;
1418++ size_t ret;
1419++
1420++ if (dctx == NULL) {
1421++ error("Out of memory while allocating ZSTD_DCtx");
1422++ err = -1;
1423++ goto out;
1424++ }
1425++ /*
1426++ * Find out how large the frame actually is, there may be junk at
1427++ * the end of the frame that ZSTD_decompressDCtx() can't handle.
1428++ */
1429++ ret = ZSTD_findFrameCompressedSize(in_buf, in_len);
1430++ err = handle_zstd_error(ret, error);
1431++ if (err)
1432++ goto out;
1433++ in_len = (long)ret;
1434++
1435++ ret = ZSTD_decompressDCtx(dctx, out_buf, out_len, in_buf, in_len);
1436++ err = handle_zstd_error(ret, error);
1437++ if (err)
1438++ goto out;
1439++
1440++ if (in_pos != NULL)
1441++ *in_pos = in_len;
1442++
1443++ err = 0;
1444++out:
1445++ if (wksp != NULL)
1446++ large_free(wksp);
1447++ return err;
1448++}
1449++
1450++STATIC int INIT unzstd(unsigned char *in_buf, unsigned int in_len,
1451++ int (*fill)(void*, unsigned int),
1452++ int (*flush)(void*, unsigned int),
1453++ unsigned char *out_buf,
1454++ unsigned int *in_pos,
1455++ void (*error)(const char *x))
1456++{
1457++ ZSTD_inBuffer in;
1458++ ZSTD_outBuffer out;
1459++ ZSTD_frameParams params;
1460++ void *in_allocated = NULL;
1461++ void *out_allocated = NULL;
1462++ void *wksp = NULL;
1463++ size_t wksp_size;
1464++ ZSTD_DStream *dstream;
1465++ int err;
1466++ size_t ret;
1467++ /*
1468++ * ZSTD decompression code won't be happy if the buffer size is so big
1469++ * that its end address overflows. When the size is not provided, make
1470++ * it as big as possible without having the end address overflow.
1471++ */
1472++ unsigned long out_len = ULONG_MAX - (unsigned long)out_buf;
1473++
1474++ if (fill == NULL && flush == NULL)
1475++ /*
1476++ * We can decompress faster and with less memory when we have a
1477++ * single chunk.
1478++ */
1479++ return decompress_single(in_buf, in_len, out_buf, out_len,
1480++ in_pos, error);
1481++
1482++ /*
1483++ * If in_buf is not provided, we must be using fill(), so allocate
1484++ * a large enough buffer. If it is provided, it must be at least
1485++ * ZSTD_IOBUF_SIZE large.
1486++ */
1487++ if (in_buf == NULL) {
1488++ in_allocated = large_malloc(ZSTD_IOBUF_SIZE);
1489++ if (in_allocated == NULL) {
1490++ error("Out of memory while allocating input buffer");
1491++ err = -1;
1492++ goto out;
1493++ }
1494++ in_buf = in_allocated;
1495++ in_len = 0;
1496++ }
1497++ /* Read the first chunk, since we need to decode the frame header. */
1498++ if (fill != NULL)
1499++ in_len = fill(in_buf, ZSTD_IOBUF_SIZE);
1500++ if ((int)in_len < 0) {
1501++ error("ZSTD-compressed data is truncated");
1502++ err = -1;
1503++ goto out;
1504++ }
1505++ /* Set the first non-empty input buffer. */
1506++ in.src = in_buf;
1507++ in.pos = 0;
1508++ in.size = in_len;
1509++ /* Allocate the output buffer if we are using flush(). */
1510++ if (flush != NULL) {
1511++ out_allocated = large_malloc(ZSTD_IOBUF_SIZE);
1512++ if (out_allocated == NULL) {
1513++ error("Out of memory while allocating output buffer");
1514++ err = -1;
1515++ goto out;
1516++ }
1517++ out_buf = out_allocated;
1518++ out_len = ZSTD_IOBUF_SIZE;
1519++ }
1520++ /* Set the output buffer. */
1521++ out.dst = out_buf;
1522++ out.pos = 0;
1523++ out.size = out_len;
1524++
1525++ /*
1526++ * We need to know the window size to allocate the ZSTD_DStream.
1527++ * Since we are streaming, we need to allocate a buffer for the sliding
1528++ * window. The window size varies from 1 KB to ZSTD_WINDOWSIZE_MAX
1529++ * (8 MB), so it is important to use the actual value so as not to
1530++ * waste memory when it is smaller.
1531++ */
1532++ ret = ZSTD_getFrameParams(&params, in.src, in.size);
1533++ err = handle_zstd_error(ret, error);
1534++ if (err)
1535++ goto out;
1536++ if (ret != 0) {
1537++ error("ZSTD-compressed data has an incomplete frame header");
1538++ err = -1;
1539++ goto out;
1540++ }
1541++ if (params.windowSize > ZSTD_WINDOWSIZE_MAX) {
1542++ error("ZSTD-compressed data has too large a window size");
1543++ err = -1;
1544++ goto out;
1545++ }
1546++
1547++ /*
1548++ * Allocate the ZSTD_DStream now that we know how much memory is
1549++ * required.
1550++ */
1551++ wksp_size = ZSTD_DStreamWorkspaceBound(params.windowSize);
1552++ wksp = large_malloc(wksp_size);
1553++ dstream = ZSTD_initDStream(params.windowSize, wksp, wksp_size);
1554++ if (dstream == NULL) {
1555++ error("Out of memory while allocating ZSTD_DStream");
1556++ err = -1;
1557++ goto out;
1558++ }
1559++
1560++ /*
1561++ * Decompression loop:
1562++ * Read more data if necessary (error if no more data can be read).
1563++ * Call the decompression function, which returns 0 when finished.
1564++ * Flush any data produced if using flush().
1565++ */
1566++ if (in_pos != NULL)
1567++ *in_pos = 0;
1568++ do {
1569++ /*
1570++ * If we need to reload data, either we have fill() and can
1571++ * try to get more data, or we don't and the input is truncated.
1572++ */
1573++ if (in.pos == in.size) {
1574++ if (in_pos != NULL)
1575++ *in_pos += in.pos;
1576++ in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE) : -1;
1577++ if ((int)in_len < 0) {
1578++ error("ZSTD-compressed data is truncated");
1579++ err = -1;
1580++ goto out;
1581++ }
1582++ in.pos = 0;
1583++ in.size = in_len;
1584++ }
1585++ /* Returns zero when the frame is complete. */
1586++ ret = ZSTD_decompressStream(dstream, &out, &in);
1587++ err = handle_zstd_error(ret, error);
1588++ if (err)
1589++ goto out;
1590++ /* Flush all of the data produced if using flush(). */
1591++ if (flush != NULL && out.pos > 0) {
1592++ if (out.pos != flush(out.dst, out.pos)) {
1593++ error("Failed to flush()");
1594++ err = -1;
1595++ goto out;
1596++ }
1597++ out.pos = 0;
1598++ }
1599++ } while (ret != 0);
1600++
1601++ if (in_pos != NULL)
1602++ *in_pos += in.pos;
1603++
1604++ err = 0;
1605++out:
1606++ if (in_allocated != NULL)
1607++ large_free(in_allocated);
1608++ if (out_allocated != NULL)
1609++ large_free(out_allocated);
1610++ if (wksp != NULL)
1611++ large_free(wksp);
1612++ return err;
1613++}
1614+diff --git a/xen/common/zstd/bitstream.h b/xen/common/zstd/bitstream.h
1615+new file mode 100644
1616+index 000000000000..2b06d4551f03
1617+--- /dev/null
1618++++ b/xen/common/zstd/bitstream.h
1619+@@ -0,0 +1,380 @@
1620++/*
1621++ * bitstream
1622++ * Part of FSE library
1623++ * header file (to include)
1624++ * Copyright (C) 2013-2016, Yann Collet.
1625++ *
1626++ * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
1627++ *
1628++ * Redistribution and use in source and binary forms, with or without
1629++ * modification, are permitted provided that the following conditions are
1630++ * met:
1631++ *
1632++ * * Redistributions of source code must retain the above copyright
1633++ * notice, this list of conditions and the following disclaimer.
1634++ * * Redistributions in binary form must reproduce the above
1635++ * copyright notice, this list of conditions and the following disclaimer
1636++ * in the documentation and/or other materials provided with the
1637++ * distribution.
1638++ *
1639++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
1640++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
1641++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
1642++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
1643++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
1644++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
1645++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
1646++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
1647++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1648++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
1649++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1650++ *
1651++ * This program is free software; you can redistribute it and/or modify it under
1652++ * the terms of the GNU General Public License version 2 as published by the
1653++ * Free Software Foundation. This program is dual-licensed; you may select
1654++ * either version 2 of the GNU General Public License ("GPL") or BSD license
1655++ * ("BSD").
1656++ *
1657++ * You can contact the author at :
1658++ * - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
1659++ */
1660++#ifndef BITSTREAM_H_MODULE
1661++#define BITSTREAM_H_MODULE
1662++
1663++/*
1664++* This API consists of small unitary functions, which must be inlined for best performance.
1665++* Since link-time-optimization is not available for all compilers,
1666++* these functions are defined into a .h to be included.
1667++*/
1668++
1669++/*-****************************************
1670++* Dependencies
1671++******************************************/
1672++#include "error_private.h" /* error codes and messages */
1673++#include "mem.h" /* unaligned access routines */
1674++
1675++/*=========================================
1676++* Target specific
1677++=========================================*/
1678++#define STREAM_ACCUMULATOR_MIN_32 25
1679++#define STREAM_ACCUMULATOR_MIN_64 57
1680++#define STREAM_ACCUMULATOR_MIN ((U32)(ZSTD_32bits() ? STREAM_ACCUMULATOR_MIN_32 : STREAM_ACCUMULATOR_MIN_64))
1681++
1682++/*-******************************************
1683++* bitStream encoding API (write forward)
1684++********************************************/
1685++/* bitStream can mix input from multiple sources.
1686++* A critical property of these streams is that they encode and decode in **reverse** direction.
1687++* So the first bit sequence you add will be the last to be read, like a LIFO stack.
1688++*/
1689++typedef struct {
1690++ size_t bitContainer;
1691++ int bitPos;
1692++ char *startPtr;
1693++ char *ptr;
1694++ char *endPtr;
1695++} BIT_CStream_t;
1696++
1697++ZSTD_STATIC size_t BIT_initCStream(BIT_CStream_t *bitC, void *dstBuffer, size_t dstCapacity);
1698++ZSTD_STATIC void BIT_addBits(BIT_CStream_t *bitC, size_t value, unsigned nbBits);
1699++ZSTD_STATIC void BIT_flushBits(BIT_CStream_t *bitC);
1700++ZSTD_STATIC size_t BIT_closeCStream(BIT_CStream_t *bitC);
1701++
1702++/* Start with initCStream, providing the size of buffer to write into.
1703++* bitStream will never write outside of this buffer.
1704++* `dstCapacity` must be >= sizeof(bitD->bitContainer), otherwise @return will be an error code.
1705++*
1706++* bits are first added to a local register.
1707++* Local register is size_t, hence 64-bits on 64-bits systems, or 32-bits on 32-bits systems.
1708++* Writing data into memory is an explicit operation, performed by the flushBits function.
1709++* Hence keep track of how many bits are potentially stored into the local register to avoid register overflow.
1710++* After a flushBits, a maximum of 7 bits might still be stored into local register.
1711++*
1712++* Avoid storing elements of more than 24 bits if you want compatibility with 32-bits bitstream readers.
1713++*
1714++* Last operation is to close the bitStream.
1715++* The function returns the final size of CStream in bytes.
1716++* If data couldn't fit into `dstBuffer`, it will return a 0 ( == not storable)
1717++*/
1718++
1719++/*-********************************************
1720++* bitStream decoding API (read backward)
1721++**********************************************/
1722++typedef struct {
1723++ size_t bitContainer;
1724++ unsigned bitsConsumed;
1725++ const char *ptr;
1726++ const char *start;
1727++} BIT_DStream_t;
1728++
1729++typedef enum {
1730++ BIT_DStream_unfinished = 0,
1731++ BIT_DStream_endOfBuffer = 1,
1732++ BIT_DStream_completed = 2,
1733++ BIT_DStream_overflow = 3
1734++} BIT_DStream_status; /* result of BIT_reloadDStream() */
1735++/* 1,2,4,8 would be better for bitmap combinations, but slows down performance a bit ... :( */
1736++
1737++ZSTD_STATIC size_t BIT_initDStream(BIT_DStream_t *bitD, const void *srcBuffer, size_t srcSize);
1738++ZSTD_STATIC size_t BIT_readBits(BIT_DStream_t *bitD, unsigned nbBits);
1739++ZSTD_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t *bitD);
1740++ZSTD_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t *bitD);
1741++
1742++/* Start by invoking BIT_initDStream().
1743++* A chunk of the bitStream is then stored into a local register.
1744++* Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (size_t).
1745++* You can then retrieve bitFields stored into the local register, **in reverse order**.
1746++* Local register is explicitly reloaded from memory by the BIT_reloadDStream() method.
1747++* A reload guarantees a minimum of ((8*sizeof(bitD->bitContainer))-7) bits when its result is BIT_DStream_unfinished.
1748++* Otherwise, it can be less than that, so proceed accordingly.
1749++* Checking if DStream has reached its end can be performed with BIT_endOfDStream().
1750++*/
1751++
1752++/*-****************************************
1753++* unsafe API
1754++******************************************/
1755++ZSTD_STATIC void BIT_addBitsFast(BIT_CStream_t *bitC, size_t value, unsigned nbBits);
1756++/* faster, but works only if value is "clean", meaning all high bits above nbBits are 0 */
1757++
1758++ZSTD_STATIC void BIT_flushBitsFast(BIT_CStream_t *bitC);
1759++/* unsafe version; does not check buffer overflow */
1760++
1761++ZSTD_STATIC size_t BIT_readBitsFast(BIT_DStream_t *bitD, unsigned nbBits);
1762++/* faster, but works only if nbBits >= 1 */
1763++
1764++/*-**************************************************************
1765++* Internal functions
1766++****************************************************************/
1767++ZSTD_STATIC unsigned BIT_highbit32(register U32 val) { return 31 - __builtin_clz(val); }
1768++
1769++/*===== Local Constants =====*/
1770++static const unsigned BIT_mask[] = {0, 1, 3, 7, 0xF, 0x1F, 0x3F, 0x7F, 0xFF,
1771++ 0x1FF, 0x3FF, 0x7FF, 0xFFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF, 0x1FFFF,
1772++ 0x3FFFF, 0x7FFFF, 0xFFFFF, 0x1FFFFF, 0x3FFFFF, 0x7FFFFF, 0xFFFFFF, 0x1FFFFFF, 0x3FFFFFF}; /* up to 26 bits */
1773++
1774++/*-**************************************************************
1775++* bitStream encoding
1776++****************************************************************/
1777++/*! BIT_initCStream() :
1778++ * `dstCapacity` must be > sizeof(void*)
1779++ * @return : 0 if success,
1780++ otherwise an error code (can be tested using ERR_isError() ) */
1781++ZSTD_STATIC size_t BIT_initCStream(BIT_CStream_t *bitC, void *startPtr, size_t dstCapacity)
1782++{
1783++ bitC->bitContainer = 0;
1784++ bitC->bitPos = 0;
1785++ bitC->startPtr = (char *)startPtr;
1786++ bitC->ptr = bitC->startPtr;
1787++ bitC->endPtr = bitC->startPtr + dstCapacity - sizeof(bitC->ptr);
1788++ if (dstCapacity <= sizeof(bitC->ptr))
1789++ return ERROR(dstSize_tooSmall);
1790++ return 0;
1791++}
1792++
1793++/*! BIT_addBits() :
1794++ can add up to 26 bits into `bitC`.
1795++ Does not check for register overflow ! */
1796++ZSTD_STATIC void BIT_addBits(BIT_CStream_t *bitC, size_t value, unsigned nbBits)
1797++{
1798++ bitC->bitContainer |= (value & BIT_mask[nbBits]) << bitC->bitPos;
1799++ bitC->bitPos += nbBits;
1800++}
1801++
1802++/*! BIT_addBitsFast() :
1803++ * works only if `value` is _clean_, meaning all high bits above nbBits are 0 */
1804++ZSTD_STATIC void BIT_addBitsFast(BIT_CStream_t *bitC, size_t value, unsigned nbBits)
1805++{
1806++ bitC->bitContainer |= value << bitC->bitPos;
1807++ bitC->bitPos += nbBits;
1808++}
1809++
1810++/*! BIT_flushBitsFast() :
1811++ * unsafe version; does not check buffer overflow */
1812++ZSTD_STATIC void BIT_flushBitsFast(BIT_CStream_t *bitC)
1813++{
1814++ size_t const nbBytes = bitC->bitPos >> 3;
1815++ ZSTD_writeLEST(bitC->ptr, bitC->bitContainer);
1816++ bitC->ptr += nbBytes;
1817++ bitC->bitPos &= 7;
1818++ bitC->bitContainer >>= nbBytes * 8; /* if bitPos >= sizeof(bitContainer)*8 --> undefined behavior */
1819++}
1820++
1821++/*! BIT_flushBits() :
1822++ * safe version; checks for buffer overflow and prevents it.
1823++ * note : does not signal buffer overflow. This will be revealed later on using BIT_closeCStream() */
1824++ZSTD_STATIC void BIT_flushBits(BIT_CStream_t *bitC)
1825++{
1826++ size_t const nbBytes = bitC->bitPos >> 3;
1827++ ZSTD_writeLEST(bitC->ptr, bitC->bitContainer);
1828++ bitC->ptr += nbBytes;
1829++ if (bitC->ptr > bitC->endPtr)
1830++ bitC->ptr = bitC->endPtr;
1831++ bitC->bitPos &= 7;
1832++ bitC->bitContainer >>= nbBytes * 8; /* if bitPos >= sizeof(bitContainer)*8 --> undefined behavior */
1833++}
1834++
1835++/*! BIT_closeCStream() :
1836++ * @return : size of CStream, in bytes,
1837++ or 0 if it could not fit into dstBuffer */
1838++ZSTD_STATIC size_t BIT_closeCStream(BIT_CStream_t *bitC)
1839++{
1840++ BIT_addBitsFast(bitC, 1, 1); /* endMark */
1841++ BIT_flushBits(bitC);
1842++
1843++ if (bitC->ptr >= bitC->endPtr)
1844++ return 0; /* doesn't fit within authorized budget : cancel */
1845++
1846++ return (bitC->ptr - bitC->startPtr) + (bitC->bitPos > 0);
1847++}
1848++
1849++/*-********************************************************
1850++* bitStream decoding
1851++**********************************************************/
1852++/*! BIT_initDStream() :
1853++* Initialize a BIT_DStream_t.
1854++* `bitD` : a pointer to an already allocated BIT_DStream_t structure.
1855++* `srcSize` must be the *exact* size of the bitStream, in bytes.
1856++* @return : size of stream (== srcSize) or an errorCode if a problem is detected
1857++*/
1858++ZSTD_STATIC size_t BIT_initDStream(BIT_DStream_t *bitD, const void *srcBuffer, size_t srcSize)
1859++{
1860++ if (srcSize < 1) {
1861++ memset(bitD, 0, sizeof(*bitD));
1862++ return ERROR(srcSize_wrong);
1863++ }
1864++
1865++ if (srcSize >= sizeof(bitD->bitContainer)) { /* normal case */
1866++ bitD->start = (const char *)srcBuffer;
1867++ bitD->ptr = (const char *)srcBuffer + srcSize - sizeof(bitD->bitContainer);
1868++ bitD->bitContainer = ZSTD_readLEST(bitD->ptr);
1869++ {
1870++ BYTE const lastByte = ((const BYTE *)srcBuffer)[srcSize - 1];
1871++ bitD->bitsConsumed = lastByte ? 8 - BIT_highbit32(lastByte) : 0; /* ensures bitsConsumed is always set */
1872++ if (lastByte == 0)
1873++ return ERROR(GENERIC); /* endMark not present */
1874++ }
1875++ } else {
1876++ bitD->start = (const char *)srcBuffer;
1877++ bitD->ptr = bitD->start;
1878++ bitD->bitContainer = *(const BYTE *)(bitD->start);
1879++ switch (srcSize) {
1880++ case 7: bitD->bitContainer += (size_t)(((const BYTE *)(srcBuffer))[6]) << (sizeof(bitD->bitContainer) * 8 - 16);
1881++ /* fallthrough */
1882++ case 6: bitD->bitContainer += (size_t)(((const BYTE *)(srcBuffer))[5]) << (sizeof(bitD->bitContainer) * 8 - 24);
1883++ /* fallthrough */
1884++ case 5: bitD->bitContainer += (size_t)(((const BYTE *)(srcBuffer))[4]) << (sizeof(bitD->bitContainer) * 8 - 32);
1885++ /* fallthrough */
1886++ case 4: bitD->bitContainer += (size_t)(((const BYTE *)(srcBuffer))[3]) << 24;
1887++ /* fallthrough */
1888++ case 3: bitD->bitContainer += (size_t)(((const BYTE *)(srcBuffer))[2]) << 16;
1889++ /* fallthrough */
1890++ case 2: bitD->bitContainer += (size_t)(((const BYTE *)(srcBuffer))[1]) << 8;
1891++ /* fallthrough */
1892++ default:;
1893++ }
1894++ {
1895++ BYTE const lastByte = ((const BYTE *)srcBuffer)[srcSize - 1];
1896++ bitD->bitsConsumed = lastByte ? 8 - BIT_highbit32(lastByte) : 0;
1897++ if (lastByte == 0)
1898++ return ERROR(GENERIC); /* endMark not present */
1899++ }
1900++ bitD->bitsConsumed += (U32)(sizeof(bitD->bitContainer) - srcSize) * 8;
1901++ }
1902++
1903++ return srcSize;
1904++}
1905++
1906++ZSTD_STATIC size_t BIT_getUpperBits(size_t bitContainer, U32 const start) { return bitContainer >> start; }
1907++
1908++ZSTD_STATIC size_t BIT_getMiddleBits(size_t bitContainer, U32 const start, U32 const nbBits) { return (bitContainer >> start) & BIT_mask[nbBits]; }
1909++
1910++ZSTD_STATIC size_t BIT_getLowerBits(size_t bitContainer, U32 const nbBits) { return bitContainer & BIT_mask[nbBits]; }
1911++
1912++/*! BIT_lookBits() :
1913++ * Provides next n bits from local register.
1914++ * local register is not modified.
1915++ * On 32-bits, maxNbBits==24.
1916++ * On 64-bits, maxNbBits==56.
1917++ * @return : value extracted
1918++ */
1919++ZSTD_STATIC size_t BIT_lookBits(const BIT_DStream_t *bitD, U32 nbBits)
1920++{
1921++ U32 const bitMask = sizeof(bitD->bitContainer) * 8 - 1;
1922++ return ((bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> 1) >> ((bitMask - nbBits) & bitMask);
1923++}
1924++
1925++/*! BIT_lookBitsFast() :
1926++* unsafe version; works only if nbBits >= 1 */
1927++ZSTD_STATIC size_t BIT_lookBitsFast(const BIT_DStream_t *bitD, U32 nbBits)
1928++{
1929++ U32 const bitMask = sizeof(bitD->bitContainer) * 8 - 1;
1930++ return (bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> (((bitMask + 1) - nbBits) & bitMask);
1931++}
1932++
1933++ZSTD_STATIC void BIT_skipBits(BIT_DStream_t *bitD, U32 nbBits) { bitD->bitsConsumed += nbBits; }
1934++
1935++/*! BIT_readBits() :
1936++ * Read (consume) next n bits from local register and update.
1937++ * Pay attention not to read more bits than are contained in the local register.
1938++ * @return : extracted value.
1939++ */
1940++ZSTD_STATIC size_t BIT_readBits(BIT_DStream_t *bitD, U32 nbBits)
1941++{
1942++ size_t const value = BIT_lookBits(bitD, nbBits);
1943++ BIT_skipBits(bitD, nbBits);
1944++ return value;
1945++}
1946++
1947++/*! BIT_readBitsFast() :
1948++* unsafe version; works only if nbBits >= 1 */
1949++ZSTD_STATIC size_t BIT_readBitsFast(BIT_DStream_t *bitD, U32 nbBits)
1950++{
1951++ size_t const value = BIT_lookBitsFast(bitD, nbBits);
1952++ BIT_skipBits(bitD, nbBits);
1953++ return value;
1954++}
1955++
1956++/*! BIT_reloadDStream() :
1957++* Refill `bitD` from buffer previously set in BIT_initDStream() .
1958++* This function is safe; it guarantees it will not read beyond the src buffer.
1959++* @return : status of `BIT_DStream_t` internal register.
1960++ if status == BIT_DStream_unfinished, internal register is filled with >= (sizeof(bitD->bitContainer)*8 - 7) bits */
1961++ZSTD_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t *bitD)
1962++{
1963++ if (bitD->bitsConsumed > (sizeof(bitD->bitContainer) * 8)) /* should not happen => corruption detected */
1964++ return BIT_DStream_overflow;
1965++
1966++ if (bitD->ptr >= bitD->start + sizeof(bitD->bitContainer)) {
1967++ bitD->ptr -= bitD->bitsConsumed >> 3;
1968++ bitD->bitsConsumed &= 7;
1969++ bitD->bitContainer = ZSTD_readLEST(bitD->ptr);
1970++ return BIT_DStream_unfinished;
1971++ }
1972++ if (bitD->ptr == bitD->start) {
1973++ if (bitD->bitsConsumed < sizeof(bitD->bitContainer) * 8)
1974++ return BIT_DStream_endOfBuffer;
1975++ return BIT_DStream_completed;
1976++ }
1977++ {
1978++ U32 nbBytes = bitD->bitsConsumed >> 3;
1979++ BIT_DStream_status result = BIT_DStream_unfinished;
1980++ if (bitD->ptr - nbBytes < bitD->start) {
1981++ nbBytes = (U32)(bitD->ptr - bitD->start); /* ptr > start */
1982++ result = BIT_DStream_endOfBuffer;
1983++ }
1984++ bitD->ptr -= nbBytes;
1985++ bitD->bitsConsumed -= nbBytes * 8;
1986++ bitD->bitContainer = ZSTD_readLEST(bitD->ptr); /* reminder : srcSize > sizeof(bitD) */
1987++ return result;
1988++ }
1989++}
1990++
1991++/*! BIT_endOfDStream() :
1992++* @return Tells if DStream has exactly reached its end (all bits consumed).
1993++*/
1994++ZSTD_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t *DStream)
1995++{
1996++ return ((DStream->ptr == DStream->start) && (DStream->bitsConsumed == sizeof(DStream->bitContainer) * 8));
1997++}
1998++
1999++#endif /* BITSTREAM_H_MODULE */
2000+diff --git a/xen/common/zstd/decompress.c b/xen/common/zstd/decompress.c
2001+new file mode 100644
2002+index 000000000000..3d3ef136e5c2
2003+--- /dev/null
2004++++ b/xen/common/zstd/decompress.c
2005+@@ -0,0 +1,2496 @@
2006++/**
2007++ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
2008++ * All rights reserved.
2009++ *
2010++ * This source code is licensed under the BSD-style license found in the
2011++ * LICENSE file in the root directory of https://github.com/facebook/zstd.
2012++ * An additional grant of patent rights can be found in the PATENTS file in the
2013++ * same directory.
2014++ *
2015++ * This program is free software; you can redistribute it and/or modify it under
2016++ * the terms of the GNU General Public License version 2 as published by the
2017++ * Free Software Foundation. This program is dual-licensed; you may select
2018++ * either version 2 of the GNU General Public License ("GPL") or BSD license
2019++ * ("BSD").
2020++ */
2021++
2022++/* ***************************************************************
2023++* Tuning parameters
2024++*****************************************************************/
2025++/*!
2026++* MAXWINDOWSIZE_DEFAULT :
2027++* maximum window size accepted by DStream, by default.
2028++* Frames requiring more memory will be rejected.
2029++*/
2030++#ifndef ZSTD_MAXWINDOWSIZE_DEFAULT
2031++#define ZSTD_MAXWINDOWSIZE_DEFAULT ((1 << ZSTD_WINDOWLOG_MAX) + 1) /* defined within zstd.h */
2032++#endif
2033++
2034++/*-*******************************************************
2035++* Dependencies
2036++*********************************************************/
2037++#include "fse.h"
2038++#include "huf.h"
2039++#include "mem.h" /* low level memory routines */
2040++#include "zstd_internal.h"
2041++#include <xen/string.h> /* memcpy, memmove, memset */
2042++
2043++#define ZSTD_PREFETCH(ptr) __builtin_prefetch(ptr, 0, 0)
2044++
2045++/*-*************************************
2046++* Macros
2047++***************************************/
2048++#define ZSTD_isError ERR_isError /* for inlining */
2049++#define FSE_isError ERR_isError
2050++#define HUF_isError ERR_isError
2051++
2052++/*_*******************************************************
2053++* Memory operations
2054++**********************************************************/
2055++static void INIT ZSTD_copy4(void *dst, const void *src) { memcpy(dst, src, 4); }
2056++
2057++/*-*************************************************************
2058++* Context management
2059++***************************************************************/
2060++typedef enum {
2061++ ZSTDds_getFrameHeaderSize,
2062++ ZSTDds_decodeFrameHeader,
2063++ ZSTDds_decodeBlockHeader,
2064++ ZSTDds_decompressBlock,
2065++ ZSTDds_decompressLastBlock,
2066++ ZSTDds_checkChecksum,
2067++ ZSTDds_decodeSkippableHeader,
2068++ ZSTDds_skipFrame
2069++} ZSTD_dStage;
2070++
2071++typedef struct {
2072++ FSE_DTable LLTable[FSE_DTABLE_SIZE_U32(LLFSELog)];
2073++ FSE_DTable OFTable[FSE_DTABLE_SIZE_U32(OffFSELog)];
2074++ FSE_DTable MLTable[FSE_DTABLE_SIZE_U32(MLFSELog)];
2075++ HUF_DTable hufTable[HUF_DTABLE_SIZE(HufLog)]; /* can accommodate HUF_decompress4X */
2076++ U64 workspace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32 / 2];
2077++ U32 rep[ZSTD_REP_NUM];
2078++} ZSTD_entropyTables_t;
2079++
2080++struct ZSTD_DCtx_s {
2081++ const FSE_DTable *LLTptr;
2082++ const FSE_DTable *MLTptr;
2083++ const FSE_DTable *OFTptr;
2084++ const HUF_DTable *HUFptr;
2085++ ZSTD_entropyTables_t entropy;
2086++ const void *previousDstEnd; /* detect continuity */
2087++ const void *base; /* start of curr segment */
2088++ const void *vBase; /* virtual start of previous segment if it was just before curr one */
2089++ const void *dictEnd; /* end of previous segment */
2090++ size_t expected;
2091++ ZSTD_frameParams fParams;
2092++ blockType_e bType; /* used in ZSTD_decompressContinue(), to transfer blockType between header decoding and block decoding stages */
2093++ ZSTD_dStage stage;
2094++ U32 litEntropy;
2095++ U32 fseEntropy;
2096++ struct xxh64_state xxhState;
2097++ size_t headerSize;
2098++ U32 dictID;
2099++ const BYTE *litPtr;
2100++ ZSTD_customMem customMem;
2101++ size_t litSize;
2102++ size_t rleSize;
2103++ BYTE litBuffer[ZSTD_BLOCKSIZE_ABSOLUTEMAX + WILDCOPY_OVERLENGTH];
2104++ BYTE headerBuffer[ZSTD_FRAMEHEADERSIZE_MAX];
2105++}; /* typedef'd to ZSTD_DCtx within "zstd.h" */
2106++
2107++size_t INIT ZSTD_DCtxWorkspaceBound(void) { return ZSTD_ALIGN(sizeof(ZSTD_stack)) + ZSTD_ALIGN(sizeof(ZSTD_DCtx)); }
2108++
2109++size_t INIT ZSTD_decompressBegin(ZSTD_DCtx *dctx)
2110++{
2111++ dctx->expected = ZSTD_frameHeaderSize_prefix;
2112++ dctx->stage = ZSTDds_getFrameHeaderSize;
2113++ dctx->previousDstEnd = NULL;
2114++ dctx->base = NULL;
2115++ dctx->vBase = NULL;
2116++ dctx->dictEnd = NULL;
2117++ dctx->entropy.hufTable[0] = (HUF_DTable)((HufLog)*0x1000001); /* cover both little and big endian */
2118++ dctx->litEntropy = dctx->fseEntropy = 0;
2119++ dctx->dictID = 0;
2120++ ZSTD_STATIC_ASSERT(sizeof(dctx->entropy.rep) == sizeof(repStartValue));
2121++ memcpy(dctx->entropy.rep, repStartValue, sizeof(repStartValue)); /* initial repcodes */
2122++ dctx->LLTptr = dctx->entropy.LLTable;
2123++ dctx->MLTptr = dctx->entropy.MLTable;
2124++ dctx->OFTptr = dctx->entropy.OFTable;
2125++ dctx->HUFptr = dctx->entropy.hufTable;
2126++ return 0;
2127++}
2128++
2129++ZSTD_DCtx *INIT ZSTD_createDCtx_advanced(ZSTD_customMem customMem)
2130++{
2131++ ZSTD_DCtx *dctx;
2132++
2133++ if (!customMem.customAlloc || !customMem.customFree)
2134++ return NULL;
2135++
2136++ dctx = (ZSTD_DCtx *)ZSTD_malloc(sizeof(ZSTD_DCtx), customMem);
2137++ if (!dctx)
2138++ return NULL;
2139++ memcpy(&dctx->customMem, &customMem, sizeof(customMem));
2140++ ZSTD_decompressBegin(dctx);
2141++ return dctx;
2142++}
2143++
2144++ZSTD_DCtx *INIT ZSTD_initDCtx(void *workspace, size_t workspaceSize)
2145++{
2146++ ZSTD_customMem const stackMem = ZSTD_initStack(workspace, workspaceSize);
2147++ return ZSTD_createDCtx_advanced(stackMem);
2148++}
2149++
2150++size_t INIT ZSTD_freeDCtx(ZSTD_DCtx *dctx)
2151++{
2152++ if (dctx == NULL)
2153++ return 0; /* support free on NULL */
2154++ ZSTD_free(dctx, dctx->customMem);
2155++ return 0; /* reserved as a potential error code in the future */
2156++}
2157++
2158++void INIT ZSTD_copyDCtx(ZSTD_DCtx *dstDCtx, const ZSTD_DCtx *srcDCtx)
2159++{
2160++ size_t const workSpaceSize = (ZSTD_BLOCKSIZE_ABSOLUTEMAX + WILDCOPY_OVERLENGTH) + ZSTD_frameHeaderSize_max;
2161++ memcpy(dstDCtx, srcDCtx, sizeof(ZSTD_DCtx) - workSpaceSize); /* no need to copy workspace */
2162++}
2163++
2164++STATIC size_t ZSTD_findFrameCompressedSize(const void *src, size_t srcSize);
2165++STATIC size_t ZSTD_decompressBegin_usingDict(ZSTD_DCtx *dctx, const void *dict,
2166++ size_t dictSize);
2167++
2168++static void ZSTD_refDDict(ZSTD_DCtx *dstDCtx, const ZSTD_DDict *ddict);
2169++
2170++/*-*************************************************************
2171++* Decompression section
2172++***************************************************************/
2173++
2174++/*! ZSTD_isFrame() :
2175++ * Tells if the content of `buffer` starts with a valid Frame Identifier.
2176++ * Note : Frame Identifier is 4 bytes. If `size < 4`, @return will always be 0.
2177++ * Note 2 : Legacy Frame Identifiers are considered valid only if Legacy Support is enabled.
2178++ * Note 3 : Skippable Frame Identifiers are considered valid. */
2179++unsigned INIT ZSTD_isFrame(const void *buffer, size_t size)
2180++{
2181++ if (size < 4)
2182++ return 0;
2183++ {
2184++ U32 const magic = ZSTD_readLE32(buffer);
2185++ if (magic == ZSTD_MAGICNUMBER)
2186++ return 1;
2187++ if ((magic & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START)
2188++ return 1;
2189++ }
2190++ return 0;
2191++}
2192++
2193++/** ZSTD_frameHeaderSize() :
2194++* srcSize must be >= ZSTD_frameHeaderSize_prefix.
2195++* @return : size of the Frame Header */
2196++static size_t INIT ZSTD_frameHeaderSize(const void *src, size_t srcSize)
2197++{
2198++ if (srcSize < ZSTD_frameHeaderSize_prefix)
2199++ return ERROR(srcSize_wrong);
2200++ {
2201++ BYTE const fhd = ((const BYTE *)src)[4];
2202++ U32 const dictID = fhd & 3;
2203++ U32 const singleSegment = (fhd >> 5) & 1;
2204++ U32 const fcsId = fhd >> 6;
2205++ return ZSTD_frameHeaderSize_prefix + !singleSegment + ZSTD_did_fieldSize[dictID] + ZSTD_fcs_fieldSize[fcsId] + (singleSegment && !fcsId);
2206++ }
2207++}
2208++
2209++/** ZSTD_getFrameParams() :
2210++* decode Frame Header, or require larger `srcSize`.
2211++* @return : 0, `fparamsPtr` is correctly filled,
2212++* >0, `srcSize` is too small, result is expected `srcSize`,
2213++* or an error code, which can be tested using ZSTD_isError() */
2214++size_t INIT ZSTD_getFrameParams(ZSTD_frameParams *fparamsPtr, const void *src, size_t srcSize)
2215++{
2216++ const BYTE *ip = (const BYTE *)src;
2217++
2218++ if (srcSize < ZSTD_frameHeaderSize_prefix)
2219++ return ZSTD_frameHeaderSize_prefix;
2220++ if (ZSTD_readLE32(src) != ZSTD_MAGICNUMBER) {
2221++ if ((ZSTD_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
2222++ if (srcSize < ZSTD_skippableHeaderSize)
2223++ return ZSTD_skippableHeaderSize; /* magic number + skippable frame length */
2224++ memset(fparamsPtr, 0, sizeof(*fparamsPtr));
2225++ fparamsPtr->frameContentSize = ZSTD_readLE32((const char *)src + 4);
2226++ fparamsPtr->windowSize = 0; /* windowSize==0 means a frame is skippable */
2227++ return 0;
2228++ }
2229++ return ERROR(prefix_unknown);
2230++ }
2231++
2232++ /* ensure there is enough `srcSize` to fully read/decode frame header */
2233++ {
2234++ size_t const fhsize = ZSTD_frameHeaderSize(src, srcSize);
2235++ if (srcSize < fhsize)
2236++ return fhsize;
2237++ }
2238++
2239++ {
2240++ BYTE const fhdByte = ip[4];
2241++ size_t pos = 5;
2242++ U32 const dictIDSizeCode = fhdByte & 3;
2243++ U32 const checksumFlag = (fhdByte >> 2) & 1;
2244++ U32 const singleSegment = (fhdByte >> 5) & 1;
2245++ U32 const fcsID = fhdByte >> 6;
2246++ U32 const windowSizeMax = 1U << ZSTD_WINDOWLOG_MAX;
2247++ U32 windowSize = 0;
2248++ U32 dictID = 0;
2249++ U64 frameContentSize = 0;
2250++ if ((fhdByte & 0x08) != 0)
2251++ return ERROR(frameParameter_unsupported); /* reserved bits, which must be zero */
2252++ if (!singleSegment) {
2253++ BYTE const wlByte = ip[pos++];
2254++ U32 const windowLog = (wlByte >> 3) + ZSTD_WINDOWLOG_ABSOLUTEMIN;
2255++ if (windowLog > ZSTD_WINDOWLOG_MAX)
2256++ return ERROR(frameParameter_windowTooLarge); /* avoids issue with 1 << windowLog */
2257++ windowSize = (1U << windowLog);
2258++ windowSize += (windowSize >> 3) * (wlByte & 7);
2259++ }
2260++
2261++ switch (dictIDSizeCode) {
2262++ default: /* impossible */
2263++ case 0: break;
2264++ case 1:
2265++ dictID = ip[pos];
2266++ pos++;
2267++ break;
2268++ case 2:
2269++ dictID = ZSTD_readLE16(ip + pos);
2270++ pos += 2;
2271++ break;
2272++ case 3:
2273++ dictID = ZSTD_readLE32(ip + pos);
2274++ pos += 4;
2275++ break;
2276++ }
2277++ switch (fcsID) {
2278++ default: /* impossible */
2279++ case 0:
2280++ if (singleSegment)
2281++ frameContentSize = ip[pos];
2282++ break;
2283++ case 1: frameContentSize = ZSTD_readLE16(ip + pos) + 256; break;
2284++ case 2: frameContentSize = ZSTD_readLE32(ip + pos); break;
2285++ case 3: frameContentSize = ZSTD_readLE64(ip + pos); break;
2286++ }
2287++ if (!windowSize)
2288++ windowSize = (U32)frameContentSize;
2289++ if (windowSize > windowSizeMax)
2290++ return ERROR(frameParameter_windowTooLarge);
2291++ fparamsPtr->frameContentSize = frameContentSize;
2292++ fparamsPtr->windowSize = windowSize;
2293++ fparamsPtr->dictID = dictID;
2294++ fparamsPtr->checksumFlag = checksumFlag;
2295++ }
2296++ return 0;
2297++}
2298++
2299++/** ZSTD_getFrameContentSize() :
2300++* compatible with legacy mode
2301++* @return : decompressed size of the single frame pointed to by `src` if known, otherwise
2302++* - ZSTD_CONTENTSIZE_UNKNOWN if the size cannot be determined
2303++* - ZSTD_CONTENTSIZE_ERROR if an error occurred (e.g. invalid magic number, srcSize too small) */
2304++unsigned long long INIT ZSTD_getFrameContentSize(const void *src, size_t srcSize)
2305++{
2306++ {
2307++ ZSTD_frameParams fParams;
2308++ if (ZSTD_getFrameParams(&fParams, src, srcSize) != 0)
2309++ return ZSTD_CONTENTSIZE_ERROR;
2310++ if (fParams.windowSize == 0) {
2311++ /* Either skippable or empty frame, size == 0 either way */
2312++ return 0;
2313++ } else if (fParams.frameContentSize != 0) {
2314++ return fParams.frameContentSize;
2315++ } else {
2316++ return ZSTD_CONTENTSIZE_UNKNOWN;
2317++ }
2318++ }
2319++}
2320++
2321++/** ZSTD_findDecompressedSize() :
2322++ * compatible with legacy mode
2323++ * `srcSize` must be the exact length of some number of ZSTD compressed and/or
2324++ * skippable frames
2325++ * @return : decompressed size of the frames contained */
2326++unsigned long long INIT ZSTD_findDecompressedSize(const void *src, size_t srcSize)
2327++{
2328++ {
2329++ unsigned long long totalDstSize = 0;
2330++ while (srcSize >= ZSTD_frameHeaderSize_prefix) {
2331++ const U32 magicNumber = ZSTD_readLE32(src);
2332++
2333++ if ((magicNumber & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
2334++ size_t skippableSize;
2335++ if (srcSize < ZSTD_skippableHeaderSize)
2336++ return ERROR(srcSize_wrong);
2337++ skippableSize = ZSTD_readLE32((const BYTE *)src + 4) + ZSTD_skippableHeaderSize;
2338++ if (srcSize < skippableSize) {
2339++ return ZSTD_CONTENTSIZE_ERROR;
2340++ }
2341++
2342++ src = (const BYTE *)src + skippableSize;
2343++ srcSize -= skippableSize;
2344++ continue;
2345++ }
2346++
2347++ {
2348++ unsigned long long const ret = ZSTD_getFrameContentSize(src, srcSize);
2349++ if (ret >= ZSTD_CONTENTSIZE_ERROR)
2350++ return ret;
2351++
2352++ /* check for overflow */
2353++ if (totalDstSize + ret < totalDstSize)
2354++ return ZSTD_CONTENTSIZE_ERROR;
2355++ totalDstSize += ret;
2356++ }
2357++ {
2358++ size_t const frameSrcSize = ZSTD_findFrameCompressedSize(src, srcSize);
2359++ if (ZSTD_isError(frameSrcSize)) {
2360++ return ZSTD_CONTENTSIZE_ERROR;
2361++ }
2362++
2363++ src = (const BYTE *)src + frameSrcSize;
2364++ srcSize -= frameSrcSize;
2365++ }
2366++ }
2367++
2368++ if (srcSize) {
2369++ return ZSTD_CONTENTSIZE_ERROR;
2370++ }
2371++
2372++ return totalDstSize;
2373++ }
2374++}
2375++
2376++/** ZSTD_decodeFrameHeader() :
2377++* `headerSize` must be the size provided by ZSTD_frameHeaderSize().
2378++* @return : 0 if success, or an error code, which can be tested using ZSTD_isError() */
2379++static size_t INIT ZSTD_decodeFrameHeader(ZSTD_DCtx *dctx, const void *src, size_t headerSize)
2380++{
2381++ size_t const result = ZSTD_getFrameParams(&(dctx->fParams), src, headerSize);
2382++ if (ZSTD_isError(result))
2383++ return result; /* invalid header */
2384++ if (result > 0)
2385++ return ERROR(srcSize_wrong); /* headerSize too small */
2386++ if (dctx->fParams.dictID && (dctx->dictID != dctx->fParams.dictID))
2387++ return ERROR(dictionary_wrong);
2388++ if (dctx->fParams.checksumFlag)
2389++ xxh64_reset(&dctx->xxhState, 0);
2390++ return 0;
2391++}
2392++
2393++typedef struct {
2394++ blockType_e blockType;
2395++ U32 lastBlock;
2396++ U32 origSize;
2397++} blockProperties_t;
2398++
2399++/*! ZSTD_getcBlockSize() :
2400++* Provides the size of compressed block from block header `src` */
2401++size_t INIT ZSTD_getcBlockSize(const void *src, size_t srcSize, blockProperties_t *bpPtr)
2402++{
2403++ if (srcSize < ZSTD_blockHeaderSize)
2404++ return ERROR(srcSize_wrong);
2405++ {
2406++ U32 const cBlockHeader = ZSTD_readLE24(src);
2407++ U32 const cSize = cBlockHeader >> 3;
2408++ bpPtr->lastBlock = cBlockHeader & 1;
2409++ bpPtr->blockType = (blockType_e)((cBlockHeader >> 1) & 3);
2410++ bpPtr->origSize = cSize; /* only useful for RLE */
2411++ if (bpPtr->blockType == bt_rle)
2412++ return 1;
2413++ if (bpPtr->blockType == bt_reserved)
2414++ return ERROR(corruption_detected);
2415++ return cSize;
2416++ }
2417++}
2418++
2419++static size_t INIT ZSTD_copyRawBlock(void *dst, size_t dstCapacity, const void *src, size_t srcSize)
2420++{
2421++ if (srcSize > dstCapacity)
2422++ return ERROR(dstSize_tooSmall);
2423++ memcpy(dst, src, srcSize);
2424++ return srcSize;
2425++}
2426++
2427++static size_t INIT ZSTD_setRleBlock(void *dst, size_t dstCapacity, const void *src, size_t srcSize, size_t regenSize)
2428++{
2429++ if (srcSize != 1)
2430++ return ERROR(srcSize_wrong);
2431++ if (regenSize > dstCapacity)
2432++ return ERROR(dstSize_tooSmall);
2433++ memset(dst, *(const BYTE *)src, regenSize);
2434++ return regenSize;
2435++}
2436++
2437++/*! ZSTD_decodeLiteralsBlock() :
2438++ @return : nb of bytes read from src (< srcSize ) */
2439++size_t INIT ZSTD_decodeLiteralsBlock(ZSTD_DCtx *dctx, const void *src, size_t srcSize) /* note : srcSize < BLOCKSIZE */
2440++{
2441++ if (srcSize < MIN_CBLOCK_SIZE)
2442++ return ERROR(corruption_detected);
2443++
2444++ {
2445++ const BYTE *const istart = (const BYTE *)src;
2446++ symbolEncodingType_e const litEncType = (symbolEncodingType_e)(istart[0] & 3);
2447++
2448++ switch (litEncType) {
2449++ case set_repeat:
2450++ if (dctx->litEntropy == 0)
2451++ return ERROR(dictionary_corrupted);
2452++ /* fallthrough */
2453++ case set_compressed:
2454++ if (srcSize < 5)
2455++ return ERROR(corruption_detected); /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need up to 5 for case 3 */
2456++ {
2457++ size_t lhSize, litSize, litCSize;
2458++ U32 singleStream = 0;
2459++ U32 const lhlCode = (istart[0] >> 2) & 3;
2460++ U32 const lhc = ZSTD_readLE32(istart);
2461++ switch (lhlCode) {
2462++ case 0:
2463++ case 1:
2464++ default: /* note : default is impossible, since lhlCode into [0..3] */
2465++ /* 2 - 2 - 10 - 10 */
2466++ singleStream = !lhlCode;
2467++ lhSize = 3;
2468++ litSize = (lhc >> 4) & 0x3FF;
2469++ litCSize = (lhc >> 14) & 0x3FF;
2470++ break;
2471++ case 2:
2472++ /* 2 - 2 - 14 - 14 */
2473++ lhSize = 4;
2474++ litSize = (lhc >> 4) & 0x3FFF;
2475++ litCSize = lhc >> 18;
2476++ break;
2477++ case 3:
2478++ /* 2 - 2 - 18 - 18 */
2479++ lhSize = 5;
2480++ litSize = (lhc >> 4) & 0x3FFFF;
2481++ litCSize = (lhc >> 22) + (istart[4] << 10);
2482++ break;
2483++ }
2484++ if (litSize > ZSTD_BLOCKSIZE_ABSOLUTEMAX)
2485++ return ERROR(corruption_detected);
2486++ if (litCSize + lhSize > srcSize)
2487++ return ERROR(corruption_detected);
2488++
2489++ if (HUF_isError(
2490++ (litEncType == set_repeat)
2491++ ? (singleStream ? HUF_decompress1X_usingDTable(dctx->litBuffer, litSize, istart + lhSize, litCSize, dctx->HUFptr)
2492++ : HUF_decompress4X_usingDTable(dctx->litBuffer, litSize, istart + lhSize, litCSize, dctx->HUFptr))
2493++ : (singleStream
2494++ ? HUF_decompress1X2_DCtx_wksp(dctx->entropy.hufTable, dctx->litBuffer, litSize, istart + lhSize, litCSize,
2495++ dctx->entropy.workspace, sizeof(dctx->entropy.workspace))
2496++ : HUF_decompress4X_hufOnly_wksp(dctx->entropy.hufTable, dctx->litBuffer, litSize, istart + lhSize, litCSize,
2497++ dctx->entropy.workspace, sizeof(dctx->entropy.workspace)))))
2498++ return ERROR(corruption_detected);
2499++
2500++ dctx->litPtr = dctx->litBuffer;
2501++ dctx->litSize = litSize;
2502++ dctx->litEntropy = 1;
2503++ if (litEncType == set_compressed)
2504++ dctx->HUFptr = dctx->entropy.hufTable;
2505++ memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
2506++ return litCSize + lhSize;
2507++ }
2508++
2509++ case set_basic: {
2510++ size_t litSize, lhSize;
2511++ U32 const lhlCode = ((istart[0]) >> 2) & 3;
2512++ switch (lhlCode) {
2513++ case 0:
2514++ case 2:
2515++ default: /* note : default is impossible, since lhlCode into [0..3] */
2516++ lhSize = 1;
2517++ litSize = istart[0] >> 3;
2518++ break;
2519++ case 1:
2520++ lhSize = 2;
2521++ litSize = ZSTD_readLE16(istart) >> 4;
2522++ break;
2523++ case 3:
2524++ lhSize = 3;
2525++ litSize = ZSTD_readLE24(istart) >> 4;
2526++ break;
2527++ }
2528++
2529++ if (lhSize + litSize + WILDCOPY_OVERLENGTH > srcSize) { /* risk reading beyond src buffer with wildcopy */
2530++ if (litSize + lhSize > srcSize)
2531++ return ERROR(corruption_detected);
2532++ memcpy(dctx->litBuffer, istart + lhSize, litSize);
2533++ dctx->litPtr = dctx->litBuffer;
2534++ dctx->litSize = litSize;
2535++ memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
2536++ return lhSize + litSize;
2537++ }
2538++ /* direct reference into compressed stream */
2539++ dctx->litPtr = istart + lhSize;
2540++ dctx->litSize = litSize;
2541++ return lhSize + litSize;
2542++ }
2543++
2544++ case set_rle: {
2545++ U32 const lhlCode = ((istart[0]) >> 2) & 3;
2546++ size_t litSize, lhSize;
2547++ switch (lhlCode) {
2548++ case 0:
2549++ case 2:
2550++ default: /* note : default is impossible, since lhlCode into [0..3] */
2551++ lhSize = 1;
2552++ litSize = istart[0] >> 3;
2553++ break;
2554++ case 1:
2555++ lhSize = 2;
2556++ litSize = ZSTD_readLE16(istart) >> 4;
2557++ break;
2558++ case 3:
2559++ lhSize = 3;
2560++ litSize = ZSTD_readLE24(istart) >> 4;
2561++ if (srcSize < 4)
2562++ return ERROR(corruption_detected); /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need lhSize+1 = 4 */
2563++ break;
2564++ }
2565++ if (litSize > ZSTD_BLOCKSIZE_ABSOLUTEMAX)
2566++ return ERROR(corruption_detected);
2567++ memset(dctx->litBuffer, istart[lhSize], litSize + WILDCOPY_OVERLENGTH);
2568++ dctx->litPtr = dctx->litBuffer;
2569++ dctx->litSize = litSize;
2570++ return lhSize + 1;
2571++ }
2572++ default:
2573++ return ERROR(corruption_detected); /* impossible */
2574++ }
2575++ }
2576++}
2577++
2578++typedef union {
2579++ FSE_decode_t realData;
2580++ U32 alignedBy4;
2581++} FSE_decode_t4;
2582++
2583++static const FSE_decode_t4 LL_defaultDTable[(1 << LL_DEFAULTNORMLOG) + 1] = {
2584++ {{LL_DEFAULTNORMLOG, 1, 1}}, /* header : tableLog, fastMode, fastMode */
2585++ {{0, 0, 4}}, /* 0 : base, symbol, bits */
2586++ {{16, 0, 4}},
2587++ {{32, 1, 5}},
2588++ {{0, 3, 5}},
2589++ {{0, 4, 5}},
2590++ {{0, 6, 5}},
2591++ {{0, 7, 5}},
2592++ {{0, 9, 5}},
2593++ {{0, 10, 5}},
2594++ {{0, 12, 5}},
2595++ {{0, 14, 6}},
2596++ {{0, 16, 5}},
2597++ {{0, 18, 5}},
2598++ {{0, 19, 5}},
2599++ {{0, 21, 5}},
2600++ {{0, 22, 5}},
2601++ {{0, 24, 5}},
2602++ {{32, 25, 5}},
2603++ {{0, 26, 5}},
2604++ {{0, 27, 6}},
2605++ {{0, 29, 6}},
2606++ {{0, 31, 6}},
2607++ {{32, 0, 4}},
2608++ {{0, 1, 4}},
2609++ {{0, 2, 5}},
2610++ {{32, 4, 5}},
2611++ {{0, 5, 5}},
2612++ {{32, 7, 5}},
2613++ {{0, 8, 5}},
2614++ {{32, 10, 5}},
2615++ {{0, 11, 5}},
2616++ {{0, 13, 6}},
2617++ {{32, 16, 5}},
2618++ {{0, 17, 5}},
2619++ {{32, 19, 5}},
2620++ {{0, 20, 5}},
2621++ {{32, 22, 5}},
2622++ {{0, 23, 5}},
2623++ {{0, 25, 4}},
2624++ {{16, 25, 4}},
2625++ {{32, 26, 5}},
2626++ {{0, 28, 6}},
2627++ {{0, 30, 6}},
2628++ {{48, 0, 4}},
2629++ {{16, 1, 4}},
2630++ {{32, 2, 5}},
2631++ {{32, 3, 5}},
2632++ {{32, 5, 5}},
2633++ {{32, 6, 5}},
2634++ {{32, 8, 5}},
2635++ {{32, 9, 5}},
2636++ {{32, 11, 5}},
2637++ {{32, 12, 5}},
2638++ {{0, 15, 6}},
2639++ {{32, 17, 5}},
2640++ {{32, 18, 5}},
2641++ {{32, 20, 5}},
2642++ {{32, 21, 5}},
2643++ {{32, 23, 5}},
2644++ {{32, 24, 5}},
2645++ {{0, 35, 6}},
2646++ {{0, 34, 6}},
2647++ {{0, 33, 6}},
2648++ {{0, 32, 6}},
2649++}; /* LL_defaultDTable */
2650++
2651++static const FSE_decode_t4 ML_defaultDTable[(1 << ML_DEFAULTNORMLOG) + 1] = {
2652++ {{ML_DEFAULTNORMLOG, 1, 1}}, /* header : tableLog, fastMode, fastMode */
2653++ {{0, 0, 6}}, /* 0 : base, symbol, bits */
2654++ {{0, 1, 4}},
2655++ {{32, 2, 5}},
2656++ {{0, 3, 5}},
2657++ {{0, 5, 5}},
2658++ {{0, 6, 5}},
2659++ {{0, 8, 5}},
2660++ {{0, 10, 6}},
2661++ {{0, 13, 6}},
2662++ {{0, 16, 6}},
2663++ {{0, 19, 6}},
2664++ {{0, 22, 6}},
2665++ {{0, 25, 6}},
2666++ {{0, 28, 6}},
2667++ {{0, 31, 6}},
2668++ {{0, 33, 6}},
2669++ {{0, 35, 6}},
2670++ {{0, 37, 6}},
2671++ {{0, 39, 6}},
2672++ {{0, 41, 6}},
2673++ {{0, 43, 6}},
2674++ {{0, 45, 6}},
2675++ {{16, 1, 4}},
2676++ {{0, 2, 4}},
2677++ {{32, 3, 5}},
2678++ {{0, 4, 5}},
2679++ {{32, 6, 5}},
2680++ {{0, 7, 5}},
2681++ {{0, 9, 6}},
2682++ {{0, 12, 6}},
2683++ {{0, 15, 6}},
2684++ {{0, 18, 6}},
2685++ {{0, 21, 6}},
2686++ {{0, 24, 6}},
2687++ {{0, 27, 6}},
2688++ {{0, 30, 6}},
2689++ {{0, 32, 6}},
2690++ {{0, 34, 6}},
2691++ {{0, 36, 6}},
2692++ {{0, 38, 6}},
2693++ {{0, 40, 6}},
2694++ {{0, 42, 6}},
2695++ {{0, 44, 6}},
2696++ {{32, 1, 4}},
2697++ {{48, 1, 4}},
2698++ {{16, 2, 4}},
2699++ {{32, 4, 5}},
2700++ {{32, 5, 5}},
2701++ {{32, 7, 5}},
2702++ {{32, 8, 5}},
2703++ {{0, 11, 6}},
2704++ {{0, 14, 6}},
2705++ {{0, 17, 6}},
2706++ {{0, 20, 6}},
2707++ {{0, 23, 6}},
2708++ {{0, 26, 6}},
2709++ {{0, 29, 6}},
2710++ {{0, 52, 6}},
2711++ {{0, 51, 6}},
2712++ {{0, 50, 6}},
2713++ {{0, 49, 6}},
2714++ {{0, 48, 6}},
2715++ {{0, 47, 6}},
2716++ {{0, 46, 6}},
2717++}; /* ML_defaultDTable */
2718++
2719++static const FSE_decode_t4 OF_defaultDTable[(1 << OF_DEFAULTNORMLOG) + 1] = {
2720++ {{OF_DEFAULTNORMLOG, 1, 1}}, /* header : tableLog, fastMode, fastMode */
2721++ {{0, 0, 5}}, /* 0 : base, symbol, bits */
2722++ {{0, 6, 4}},
2723++ {{0, 9, 5}},
2724++ {{0, 15, 5}},
2725++ {{0, 21, 5}},
2726++ {{0, 3, 5}},
2727++ {{0, 7, 4}},
2728++ {{0, 12, 5}},
2729++ {{0, 18, 5}},
2730++ {{0, 23, 5}},
2731++ {{0, 5, 5}},
2732++ {{0, 8, 4}},
2733++ {{0, 14, 5}},
2734++ {{0, 20, 5}},
2735++ {{0, 2, 5}},
2736++ {{16, 7, 4}},
2737++ {{0, 11, 5}},
2738++ {{0, 17, 5}},
2739++ {{0, 22, 5}},
2740++ {{0, 4, 5}},
2741++ {{16, 8, 4}},
2742++ {{0, 13, 5}},
2743++ {{0, 19, 5}},
2744++ {{0, 1, 5}},
2745++ {{16, 6, 4}},
2746++ {{0, 10, 5}},
2747++ {{0, 16, 5}},
2748++ {{0, 28, 5}},
2749++ {{0, 27, 5}},
2750++ {{0, 26, 5}},
2751++ {{0, 25, 5}},
2752++ {{0, 24, 5}},
2753++}; /* OF_defaultDTable */
2754++
2755++/*! ZSTD_buildSeqTable() :
2756++ @return : nb bytes read from src,
2757++ or an error code if it fails, testable with ZSTD_isError()
2758++*/
2759++static size_t INIT ZSTD_buildSeqTable(FSE_DTable *DTableSpace, const FSE_DTable **DTablePtr,
2760++ symbolEncodingType_e type, U32 max, U32 maxLog, const void *src,
2761++ size_t srcSize, const FSE_decode_t4 *defaultTable,
2762++ U32 flagRepeatTable, void *workspace, size_t workspaceSize)
2763++{
2764++ const void *const tmpPtr = defaultTable; /* bypass strict aliasing */
2765++ switch (type) {
2766++ case set_rle:
2767++ if (!srcSize)
2768++ return ERROR(srcSize_wrong);
2769++ if ((*(const BYTE *)src) > max)
2770++ return ERROR(corruption_detected);
2771++ FSE_buildDTable_rle(DTableSpace, *(const BYTE *)src);
2772++ *DTablePtr = DTableSpace;
2773++ return 1;
2774++ case set_basic: *DTablePtr = (const FSE_DTable *)tmpPtr; return 0;
2775++ case set_repeat:
2776++ if (!flagRepeatTable)
2777++ return ERROR(corruption_detected);
2778++ return 0;
2779++ default: /* impossible */
2780++ case set_compressed: {
2781++ U32 tableLog;
2782++ S16 *norm = (S16 *)workspace;
2783++ size_t const spaceUsed32 = ALIGN(sizeof(S16) * (MaxSeq + 1), sizeof(U32)) >> 2;
2784++
2785++ if ((spaceUsed32 << 2) > workspaceSize)
2786++ return ERROR(GENERIC);
2787++ workspace = (U32 *)workspace + spaceUsed32;
2788++ workspaceSize -= (spaceUsed32 << 2);
2789++ {
2790++ size_t const headerSize = FSE_readNCount(norm, &max, &tableLog, src, srcSize);
2791++ if (FSE_isError(headerSize))
2792++ return ERROR(corruption_detected);
2793++ if (tableLog > maxLog)
2794++ return ERROR(corruption_detected);
2795++ FSE_buildDTable_wksp(DTableSpace, norm, max, tableLog, workspace, workspaceSize);
2796++ *DTablePtr = DTableSpace;
2797++ return headerSize;
2798++ }
2799++ }
2800++ }
2801++}
2802++
2803++size_t INIT ZSTD_decodeSeqHeaders(ZSTD_DCtx *dctx, int *nbSeqPtr, const void *src, size_t srcSize)
2804++{
2805++ const BYTE *const istart = (const BYTE *const)src;
2806++ const BYTE *const iend = istart + srcSize;
2807++ const BYTE *ip = istart;
2808++
2809++ /* check */
2810++ if (srcSize < MIN_SEQUENCES_SIZE)
2811++ return ERROR(srcSize_wrong);
2812++
2813++ /* SeqHead */
2814++ {
2815++ int nbSeq = *ip++;
2816++ if (!nbSeq) {
2817++ *nbSeqPtr = 0;
2818++ return 1;
2819++ }
2820++ if (nbSeq > 0x7F) {
2821++ if (nbSeq == 0xFF) {
2822++ if (ip + 2 > iend)
2823++ return ERROR(srcSize_wrong);
2824++ nbSeq = ZSTD_readLE16(ip) + LONGNBSEQ, ip += 2;
2825++ } else {
2826++ if (ip >= iend)
2827++ return ERROR(srcSize_wrong);
2828++ nbSeq = ((nbSeq - 0x80) << 8) + *ip++;
2829++ }
2830++ }
2831++ *nbSeqPtr = nbSeq;
2832++ }
2833++
2834++ /* FSE table descriptors */
2835++ if (ip + 4 > iend)
2836++ return ERROR(srcSize_wrong); /* minimum possible size */
2837++ {
2838++ symbolEncodingType_e const LLtype = (symbolEncodingType_e)(*ip >> 6);
2839++ symbolEncodingType_e const OFtype = (symbolEncodingType_e)((*ip >> 4) & 3);
2840++ symbolEncodingType_e const MLtype = (symbolEncodingType_e)((*ip >> 2) & 3);
2841++ ip++;
2842++
2843++ /* Build DTables */
2844++ {
2845++ size_t const llhSize = ZSTD_buildSeqTable(dctx->entropy.LLTable, &dctx->LLTptr, LLtype, MaxLL, LLFSELog, ip, iend - ip,
2846++ LL_defaultDTable, dctx->fseEntropy, dctx->entropy.workspace, sizeof(dctx->entropy.workspace));
2847++ if (ZSTD_isError(llhSize))
2848++ return ERROR(corruption_detected);
2849++ ip += llhSize;
2850++ }
2851++ {
2852++ size_t const ofhSize = ZSTD_buildSeqTable(dctx->entropy.OFTable, &dctx->OFTptr, OFtype, MaxOff, OffFSELog, ip, iend - ip,
2853++ OF_defaultDTable, dctx->fseEntropy, dctx->entropy.workspace, sizeof(dctx->entropy.workspace));
2854++ if (ZSTD_isError(ofhSize))
2855++ return ERROR(corruption_detected);
2856++ ip += ofhSize;
2857++ }
2858++ {
2859++ size_t const mlhSize = ZSTD_buildSeqTable(dctx->entropy.MLTable, &dctx->MLTptr, MLtype, MaxML, MLFSELog, ip, iend - ip,
2860++ ML_defaultDTable, dctx->fseEntropy, dctx->entropy.workspace, sizeof(dctx->entropy.workspace));
2861++ if (ZSTD_isError(mlhSize))
2862++ return ERROR(corruption_detected);
2863++ ip += mlhSize;
2864++ }
2865++ }
2866++
2867++ return ip - istart;
2868++}
2869++
2870++typedef struct {
2871++ size_t litLength;
2872++ size_t matchLength;
2873++ size_t offset;
2874++ const BYTE *match;
2875++} seq_t;
2876++
2877++typedef struct {
2878++ BIT_DStream_t DStream;
2879++ FSE_DState_t stateLL;
2880++ FSE_DState_t stateOffb;
2881++ FSE_DState_t stateML;
2882++ size_t prevOffset[ZSTD_REP_NUM];
2883++ const BYTE *base;
2884++ size_t pos;
2885++ uPtrDiff gotoDict;
2886++} seqState_t;
2887++
2888++FORCE_NOINLINE
2889++size_t ZSTD_execSequenceLast7(BYTE *op, BYTE *const oend, seq_t sequence, const BYTE **litPtr, const BYTE *const litLimit, const BYTE *const base,
2890++ const BYTE *const vBase, const BYTE *const dictEnd)
2891++{
2892++ BYTE *const oLitEnd = op + sequence.litLength;
2893++ size_t const sequenceLength = sequence.litLength + sequence.matchLength;
2894++ BYTE *const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */
2895++ BYTE *const oend_w = oend - WILDCOPY_OVERLENGTH;
2896++ const BYTE *const iLitEnd = *litPtr + sequence.litLength;
2897++ const BYTE *match = oLitEnd - sequence.offset;
2898++
2899++ /* check */
2900++ if (oMatchEnd > oend)
2901++ return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
2902++ if (iLitEnd > litLimit)
2903++ return ERROR(corruption_detected); /* over-read beyond lit buffer */
2904++ if (oLitEnd <= oend_w)
2905++ return ERROR(GENERIC); /* Precondition */
2906++
2907++ /* copy literals */
2908++ if (op < oend_w) {
2909++ ZSTD_wildcopy(op, *litPtr, oend_w - op);
2910++ *litPtr += oend_w - op;
2911++ op = oend_w;
2912++ }
2913++ while (op < oLitEnd)
2914++ *op++ = *(*litPtr)++;
2915++
2916++ /* copy Match */
2917++ if (sequence.offset > (size_t)(oLitEnd - base)) {
2918++ /* offset beyond prefix */
2919++ if (sequence.offset > (size_t)(oLitEnd - vBase))
2920++ return ERROR(corruption_detected);
2921++ match = dictEnd - (base - match);
2922++ if (match + sequence.matchLength <= dictEnd) {
2923++ memmove(oLitEnd, match, sequence.matchLength);
2924++ return sequenceLength;
2925++ }
2926++ /* span extDict & currPrefixSegment */
2927++ {
2928++ size_t const length1 = dictEnd - match;
2929++ memmove(oLitEnd, match, length1);
2930++ op = oLitEnd + length1;
2931++ sequence.matchLength -= length1;
2932++ match = base;
2933++ }
2934++ }
2935++ while (op < oMatchEnd)
2936++ *op++ = *match++;
2937++ return sequenceLength;
2938++}
2939++
2940++static seq_t INIT ZSTD_decodeSequence(seqState_t *seqState)
2941++{
2942++ seq_t seq;
2943++
2944++ U32 const llCode = FSE_peekSymbol(&seqState->stateLL);
2945++ U32 const mlCode = FSE_peekSymbol(&seqState->stateML);
2946++ U32 const ofCode = FSE_peekSymbol(&seqState->stateOffb); /* <= maxOff, by table construction */
2947++
2948++ U32 const llBits = LL_bits[llCode];
2949++ U32 const mlBits = ML_bits[mlCode];
2950++ U32 const ofBits = ofCode;
2951++ U32 const totalBits = llBits + mlBits + ofBits;
2952++
2953++ static const U32 LL_base[MaxLL + 1] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18,
2954++ 20, 22, 24, 28, 32, 40, 48, 64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, 0x4000, 0x8000, 0x10000};
2955++
2956++ static const U32 ML_base[MaxML + 1] = {3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
2957++ 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 37, 39, 41,
2958++ 43, 47, 51, 59, 67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803, 0x1003, 0x2003, 0x4003, 0x8003, 0x10003};
2959++
2960++ static const U32 OF_base[MaxOff + 1] = {0, 1, 1, 5, 0xD, 0x1D, 0x3D, 0x7D, 0xFD, 0x1FD,
2961++ 0x3FD, 0x7FD, 0xFFD, 0x1FFD, 0x3FFD, 0x7FFD, 0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD,
2962++ 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD, 0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD};
2963++
2964++ /* sequence */
2965++ {
2966++ size_t offset;
2967++ if (!ofCode)
2968++ offset = 0;
2969++ else {
2970++ offset = OF_base[ofCode] + BIT_readBitsFast(&seqState->DStream, ofBits); /* <= (ZSTD_WINDOWLOG_MAX-1) bits */
2971++ if (ZSTD_32bits())
2972++ BIT_reloadDStream(&seqState->DStream);
2973++ }
2974++
2975++ if (ofCode <= 1) {
2976++ offset += (llCode == 0);
2977++ if (offset) {
2978++ size_t temp = (offset == 3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
2979++ temp += !temp; /* 0 is not valid; input is corrupted; force offset to 1 */
2980++ if (offset != 1)
2981++ seqState->prevOffset[2] = seqState->prevOffset[1];
2982++ seqState->prevOffset[1] = seqState->prevOffset[0];
2983++ seqState->prevOffset[0] = offset = temp;
2984++ } else {
2985++ offset = seqState->prevOffset[0];
2986++ }
2987++ } else {
2988++ seqState->prevOffset[2] = seqState->prevOffset[1];
2989++ seqState->prevOffset[1] = seqState->prevOffset[0];
2990++ seqState->prevOffset[0] = offset;
2991++ }
2992++ seq.offset = offset;
2993++ }
2994++
2995++ seq.matchLength = ML_base[mlCode] + ((mlCode > 31) ? BIT_readBitsFast(&seqState->DStream, mlBits) : 0); /* <= 16 bits */
2996++ if (ZSTD_32bits() && (mlBits + llBits > 24))
2997++ BIT_reloadDStream(&seqState->DStream);
2998++
2999++ seq.litLength = LL_base[llCode] + ((llCode > 15) ? BIT_readBitsFast(&seqState->DStream, llBits) : 0); /* <= 16 bits */
3000++ if (ZSTD_32bits() || (totalBits > 64 - 7 - (LLFSELog + MLFSELog + OffFSELog)))
3001++ BIT_reloadDStream(&seqState->DStream);
3002++
3003++ /* ANS state update */
3004++ FSE_updateState(&seqState->stateLL, &seqState->DStream); /* <= 9 bits */
3005++ FSE_updateState(&seqState->stateML, &seqState->DStream); /* <= 9 bits */
3006++ if (ZSTD_32bits())
3007++ BIT_reloadDStream(&seqState->DStream); /* <= 18 bits */
3008++ FSE_updateState(&seqState->stateOffb, &seqState->DStream); /* <= 8 bits */
3009++
3010++ seq.match = NULL;
3011++
3012++ return seq;
3013++}
3014++
3015++FORCE_INLINE
3016++size_t ZSTD_execSequence(BYTE *op, BYTE *const oend, seq_t sequence, const BYTE **litPtr, const BYTE *const litLimit, const BYTE *const base,
3017++ const BYTE *const vBase, const BYTE *const dictEnd)
3018++{
3019++ BYTE *const oLitEnd = op + sequence.litLength;
3020++ size_t const sequenceLength = sequence.litLength + sequence.matchLength;
3021++ BYTE *const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */
3022++ BYTE *const oend_w = oend - WILDCOPY_OVERLENGTH;
3023++ const BYTE *const iLitEnd = *litPtr + sequence.litLength;
3024++ const BYTE *match = oLitEnd - sequence.offset;
3025++
3026++ /* check */
3027++ if (oMatchEnd > oend)
3028++ return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
3029++ if (iLitEnd > litLimit)
3030++ return ERROR(corruption_detected); /* over-read beyond lit buffer */
3031++ if (oLitEnd > oend_w)
3032++ return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, base, vBase, dictEnd);
3033++
3034++ /* copy Literals */
3035++ ZSTD_copy8(op, *litPtr);
3036++ if (sequence.litLength > 8)
3037++ ZSTD_wildcopy(op + 8, (*litPtr) + 8,
3038++ sequence.litLength - 8); /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
3039++ op = oLitEnd;
3040++ *litPtr = iLitEnd; /* update for next sequence */
3041++
3042++ /* copy Match */
3043++ if (sequence.offset > (size_t)(oLitEnd - base)) {
3044++ /* offset beyond prefix */
3045++ if (sequence.offset > (size_t)(oLitEnd - vBase))
3046++ return ERROR(corruption_detected);
3047++ match = dictEnd + (match - base);
3048++ if (match + sequence.matchLength <= dictEnd) {
3049++ memmove(oLitEnd, match, sequence.matchLength);
3050++ return sequenceLength;
3051++ }
3052++ /* span extDict & currPrefixSegment */
3053++ {
3054++ size_t const length1 = dictEnd - match;
3055++ memmove(oLitEnd, match, length1);
3056++ op = oLitEnd + length1;
3057++ sequence.matchLength -= length1;
3058++ match = base;
3059++ if (op > oend_w || sequence.matchLength < MINMATCH) {
3060++ U32 i;
3061++ for (i = 0; i < sequence.matchLength; ++i)
3062++ op[i] = match[i];
3063++ return sequenceLength;
3064++ }
3065++ }
3066++ }
3067++ /* Requirement: op <= oend_w && sequence.matchLength >= MINMATCH */
3068++
3069++ /* match within prefix */
3070++ if (sequence.offset < 8) {
3071++ /* close range match, overlap */
3072++ static const U32 dec32table[] = {0, 1, 2, 1, 4, 4, 4, 4}; /* added */
3073++ static const int dec64table[] = {8, 8, 8, 7, 8, 9, 10, 11}; /* subtracted */
3074++ int const sub2 = dec64table[sequence.offset];
3075++ op[0] = match[0];
3076++ op[1] = match[1];
3077++ op[2] = match[2];
3078++ op[3] = match[3];
3079++ match += dec32table[sequence.offset];
3080++ ZSTD_copy4(op + 4, match);
3081++ match -= sub2;
3082++ } else {
3083++ ZSTD_copy8(op, match);
3084++ }
3085++ op += 8;
3086++ match += 8;
3087++
3088++ if (oMatchEnd > oend - (16 - MINMATCH)) {
3089++ if (op < oend_w) {
3090++ ZSTD_wildcopy(op, match, oend_w - op);
3091++ match += oend_w - op;
3092++ op = oend_w;
3093++ }
3094++ while (op < oMatchEnd)
3095++ *op++ = *match++;
3096++ } else {
3097++ ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength - 8); /* works even if matchLength < 8 */
3098++ }
3099++ return sequenceLength;
3100++}
3101++
3102++static size_t INIT ZSTD_decompressSequences(ZSTD_DCtx *dctx, void *dst, size_t maxDstSize, const void *seqStart, size_t seqSize)
3103++{
3104++ const BYTE *ip = (const BYTE *)seqStart;
3105++ const BYTE *const iend = ip + seqSize;
3106++ BYTE *const ostart = (BYTE * const)dst;
3107++ BYTE *const oend = ostart + maxDstSize;
3108++ BYTE *op = ostart;
3109++ const BYTE *litPtr = dctx->litPtr;
3110++ const BYTE *const litEnd = litPtr + dctx->litSize;
3111++ const BYTE *const base = (const BYTE *)(dctx->base);
3112++ const BYTE *const vBase = (const BYTE *)(dctx->vBase);
3113++ const BYTE *const dictEnd = (const BYTE *)(dctx->dictEnd);
3114++ int nbSeq;
3115++
3116++ /* Build Decoding Tables */
3117++ {
3118++ size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, seqSize);
3119++ if (ZSTD_isError(seqHSize))
3120++ return seqHSize;
3121++ ip += seqHSize;
3122++ }
3123++
3124++ /* Regen sequences */
3125++ if (nbSeq) {
3126++ seqState_t seqState;
3127++ dctx->fseEntropy = 1;
3128++ {
3129++ U32 i;
3130++ for (i = 0; i < ZSTD_REP_NUM; i++)
3131++ seqState.prevOffset[i] = dctx->entropy.rep[i];
3132++ }
3133++ CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend - ip), corruption_detected);
3134++ FSE_initDState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
3135++ FSE_initDState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
3136++ FSE_initDState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
3137++
3138++ for (; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && nbSeq;) {
3139++ nbSeq--;
3140++ {
3141++ seq_t const sequence = ZSTD_decodeSequence(&seqState);
3142++ size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litEnd, base, vBase, dictEnd);
3143++ if (ZSTD_isError(oneSeqSize))
3144++ return oneSeqSize;
3145++ op += oneSeqSize;
3146++ }
3147++ }
3148++
3149++ /* check if reached exact end */
3150++ if (nbSeq)
3151++ return ERROR(corruption_detected);
3152++ /* save reps for next block */
3153++ {
3154++ U32 i;
3155++ for (i = 0; i < ZSTD_REP_NUM; i++)
3156++ dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]);
3157++ }
3158++ }
3159++
3160++ /* last literal segment */
3161++ {
3162++ size_t const lastLLSize = litEnd - litPtr;
3163++ if (lastLLSize > (size_t)(oend - op))
3164++ return ERROR(dstSize_tooSmall);
3165++ memcpy(op, litPtr, lastLLSize);
3166++ op += lastLLSize;
3167++ }
3168++
3169++ return op - ostart;
3170++}
3171++
3172++FORCE_INLINE seq_t ZSTD_decodeSequenceLong_generic(seqState_t *seqState, int const longOffsets)
3173++{
3174++ seq_t seq;
3175++
3176++ U32 const llCode = FSE_peekSymbol(&seqState->stateLL);
3177++ U32 const mlCode = FSE_peekSymbol(&seqState->stateML);
3178++ U32 const ofCode = FSE_peekSymbol(&seqState->stateOffb); /* <= maxOff, by table construction */
3179++
3180++ U32 const llBits = LL_bits[llCode];
3181++ U32 const mlBits = ML_bits[mlCode];
3182++ U32 const ofBits = ofCode;
3183++ U32 const totalBits = llBits + mlBits + ofBits;
3184++
3185++ static const U32 LL_base[MaxLL + 1] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18,
3186++ 20, 22, 24, 28, 32, 40, 48, 64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, 0x4000, 0x8000, 0x10000};
3187++
3188++ static const U32 ML_base[MaxML + 1] = {3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
3189++ 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 37, 39, 41,
3190++ 43, 47, 51, 59, 67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803, 0x1003, 0x2003, 0x4003, 0x8003, 0x10003};
3191++
3192++ static const U32 OF_base[MaxOff + 1] = {0, 1, 1, 5, 0xD, 0x1D, 0x3D, 0x7D, 0xFD, 0x1FD,
3193++ 0x3FD, 0x7FD, 0xFFD, 0x1FFD, 0x3FFD, 0x7FFD, 0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD,
3194++ 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD, 0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD};
3195++
3196++ /* sequence */
3197++ {
3198++ size_t offset;
3199++ if (!ofCode)
3200++ offset = 0;
3201++ else {
3202++ if (longOffsets) {
3203++ int const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN);
3204++ offset = OF_base[ofCode] + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
3205++ if (ZSTD_32bits() || extraBits)
3206++ BIT_reloadDStream(&seqState->DStream);
3207++ if (extraBits)
3208++ offset += BIT_readBitsFast(&seqState->DStream, extraBits);
3209++ } else {
3210++ offset = OF_base[ofCode] + BIT_readBitsFast(&seqState->DStream, ofBits); /* <= (ZSTD_WINDOWLOG_MAX-1) bits */
3211++ if (ZSTD_32bits())
3212++ BIT_reloadDStream(&seqState->DStream);
3213++ }
3214++ }
3215++
3216++ if (ofCode <= 1) {
3217++ offset += (llCode == 0);
3218++ if (offset) {
3219++ size_t temp = (offset == 3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
3220++ temp += !temp; /* 0 is not valid; input is corrupted; force offset to 1 */
3221++ if (offset != 1)
3222++ seqState->prevOffset[2] = seqState->prevOffset[1];
3223++ seqState->prevOffset[1] = seqState->prevOffset[0];
3224++ seqState->prevOffset[0] = offset = temp;
3225++ } else {
3226++ offset = seqState->prevOffset[0];
3227++ }
3228++ } else {
3229++ seqState->prevOffset[2] = seqState->prevOffset[1];
3230++ seqState->prevOffset[1] = seqState->prevOffset[0];
3231++ seqState->prevOffset[0] = offset;
3232++ }
3233++ seq.offset = offset;
3234++ }
3235++
3236++ seq.matchLength = ML_base[mlCode] + ((mlCode > 31) ? BIT_readBitsFast(&seqState->DStream, mlBits) : 0); /* <= 16 bits */
3237++ if (ZSTD_32bits() && (mlBits + llBits > 24))
3238++ BIT_reloadDStream(&seqState->DStream);
3239++
3240++ seq.litLength = LL_base[llCode] + ((llCode > 15) ? BIT_readBitsFast(&seqState->DStream, llBits) : 0); /* <= 16 bits */
3241++ if (ZSTD_32bits() || (totalBits > 64 - 7 - (LLFSELog + MLFSELog + OffFSELog)))
3242++ BIT_reloadDStream(&seqState->DStream);
3243++
3244++ {
3245++ size_t const pos = seqState->pos + seq.litLength;
3246++ seq.match = seqState->base + pos - seq.offset; /* single memory segment */
3247++ if (seq.offset > pos)
3248++ seq.match += seqState->gotoDict; /* separate memory segment */
3249++ seqState->pos = pos + seq.matchLength;
3250++ }
3251++
3252++ /* ANS state update */
3253++ FSE_updateState(&seqState->stateLL, &seqState->DStream); /* <= 9 bits */
3254++ FSE_updateState(&seqState->stateML, &seqState->DStream); /* <= 9 bits */
3255++ if (ZSTD_32bits())
3256++ BIT_reloadDStream(&seqState->DStream); /* <= 18 bits */
3257++ FSE_updateState(&seqState->stateOffb, &seqState->DStream); /* <= 8 bits */
3258++
3259++ return seq;
3260++}
3261++
3262++static seq_t INIT ZSTD_decodeSequenceLong(seqState_t *seqState, unsigned const windowSize)
3263++{
3264++ if (ZSTD_highbit32(windowSize) > STREAM_ACCUMULATOR_MIN) {
3265++ return ZSTD_decodeSequenceLong_generic(seqState, 1);
3266++ } else {
3267++ return ZSTD_decodeSequenceLong_generic(seqState, 0);
3268++ }
3269++}
3270++
3271++FORCE_INLINE
3272++size_t INIT ZSTD_execSequenceLong(BYTE *op, BYTE *const oend, seq_t sequence, const BYTE **litPtr,
3273++ const BYTE *const litLimit, const BYTE *const base,
3274++ const BYTE *const vBase, const BYTE *const dictEnd)
3275++{
3276++ BYTE *const oLitEnd = op + sequence.litLength;
3277++ size_t const sequenceLength = sequence.litLength + sequence.matchLength;
3278++ BYTE *const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */
3279++ BYTE *const oend_w = oend - WILDCOPY_OVERLENGTH;
3280++ const BYTE *const iLitEnd = *litPtr + sequence.litLength;
3281++ const BYTE *match = sequence.match;
3282++
3283++ /* check */
3284++ if (oMatchEnd > oend)
3285++ return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
3286++ if (iLitEnd > litLimit)
3287++ return ERROR(corruption_detected); /* over-read beyond lit buffer */
3288++ if (oLitEnd > oend_w)
3289++ return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, base, vBase, dictEnd);
3290++
3291++ /* copy Literals */
3292++ ZSTD_copy8(op, *litPtr);
3293++ if (sequence.litLength > 8)
3294++ ZSTD_wildcopy(op + 8, (*litPtr) + 8,
3295++ sequence.litLength - 8); /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
3296++ op = oLitEnd;
3297++ *litPtr = iLitEnd; /* update for next sequence */
3298++
3299++ /* copy Match */
3300++ if (sequence.offset > (size_t)(oLitEnd - base)) {
3301++ /* offset beyond prefix */
3302++ if (sequence.offset > (size_t)(oLitEnd - vBase))
3303++ return ERROR(corruption_detected);
3304++ if (match + sequence.matchLength <= dictEnd) {
3305++ memmove(oLitEnd, match, sequence.matchLength);
3306++ return sequenceLength;
3307++ }
3308++ /* span extDict & currPrefixSegment */
3309++ {
3310++ size_t const length1 = dictEnd - match;
3311++ memmove(oLitEnd, match, length1);
3312++ op = oLitEnd + length1;
3313++ sequence.matchLength -= length1;
3314++ match = base;
3315++ if (op > oend_w || sequence.matchLength < MINMATCH) {
3316++ U32 i;
3317++ for (i = 0; i < sequence.matchLength; ++i)
3318++ op[i] = match[i];
3319++ return sequenceLength;
3320++ }
3321++ }
3322++ }
3323++ /* Requirement: op <= oend_w && sequence.matchLength >= MINMATCH */
3324++
3325++ /* match within prefix */
3326++ if (sequence.offset < 8) {
3327++ /* close range match, overlap */
3328++ static const U32 dec32table[] = {0, 1, 2, 1, 4, 4, 4, 4}; /* added */
3329++ static const int dec64table[] = {8, 8, 8, 7, 8, 9, 10, 11}; /* subtracted */
3330++ int const sub2 = dec64table[sequence.offset];
3331++ op[0] = match[0];
3332++ op[1] = match[1];
3333++ op[2] = match[2];
3334++ op[3] = match[3];
3335++ match += dec32table[sequence.offset];
3336++ ZSTD_copy4(op + 4, match);
3337++ match -= sub2;
3338++ } else {
3339++ ZSTD_copy8(op, match);
3340++ }
3341++ op += 8;
3342++ match += 8;
3343++
3344++ if (oMatchEnd > oend - (16 - MINMATCH)) {
3345++ if (op < oend_w) {
3346++ ZSTD_wildcopy(op, match, oend_w - op);
3347++ match += oend_w - op;
3348++ op = oend_w;
3349++ }
3350++ while (op < oMatchEnd)
3351++ *op++ = *match++;
3352++ } else {
3353++ ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength - 8); /* works even if matchLength < 8 */
3354++ }
3355++ return sequenceLength;
3356++}
3357++
3358++static size_t INIT ZSTD_decompressSequencesLong(ZSTD_DCtx *dctx, void *dst, size_t maxDstSize, const void *seqStart, size_t seqSize)
3359++{
3360++ const BYTE *ip = (const BYTE *)seqStart;
3361++ const BYTE *const iend = ip + seqSize;
3362++ BYTE *const ostart = (BYTE * const)dst;
3363++ BYTE *const oend = ostart + maxDstSize;
3364++ BYTE *op = ostart;
3365++ const BYTE *litPtr = dctx->litPtr;
3366++ const BYTE *const litEnd = litPtr + dctx->litSize;
3367++ const BYTE *const base = (const BYTE *)(dctx->base);
3368++ const BYTE *const vBase = (const BYTE *)(dctx->vBase);
3369++ const BYTE *const dictEnd = (const BYTE *)(dctx->dictEnd);
3370++ unsigned const windowSize = dctx->fParams.windowSize;
3371++ int nbSeq;
3372++
3373++ /* Build Decoding Tables */
3374++ {
3375++ size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, seqSize);
3376++ if (ZSTD_isError(seqHSize))
3377++ return seqHSize;
3378++ ip += seqHSize;
3379++ }
3380++
3381++ /* Regen sequences */
3382++ if (nbSeq) {
3383++#define STORED_SEQS 4
3384++#define STOSEQ_MASK (STORED_SEQS - 1)
3385++#define ADVANCED_SEQS 4
3386++ seq_t *sequences = (seq_t *)dctx->entropy.workspace;
3387++ int const seqAdvance = MIN(nbSeq, ADVANCED_SEQS);
3388++ seqState_t seqState;
3389++ int seqNb;
3390++ ZSTD_STATIC_ASSERT(sizeof(dctx->entropy.workspace) >= sizeof(seq_t) * STORED_SEQS);
3391++ dctx->fseEntropy = 1;
3392++ {
3393++ U32 i;
3394++ for (i = 0; i < ZSTD_REP_NUM; i++)
3395++ seqState.prevOffset[i] = dctx->entropy.rep[i];
3396++ }
3397++ seqState.base = base;
3398++ seqState.pos = (size_t)(op - base);
3399++ seqState.gotoDict = (uPtrDiff)dictEnd - (uPtrDiff)base; /* cast to avoid undefined behaviour */
3400++ CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend - ip), corruption_detected);
3401++ FSE_initDState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
3402++ FSE_initDState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
3403++ FSE_initDState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
3404++
3405++ /* prepare in advance */
3406++ for (seqNb = 0; (BIT_reloadDStream(&seqState.DStream) <= BIT_DStream_completed) && seqNb < seqAdvance; seqNb++) {
3407++ sequences[seqNb] = ZSTD_decodeSequenceLong(&seqState, windowSize);
3408++ }
3409++ if (seqNb < seqAdvance)
3410++ return ERROR(corruption_detected);
3411++
3412++ /* decode and decompress */
3413++ for (; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && seqNb < nbSeq; seqNb++) {
3414++ seq_t const sequence = ZSTD_decodeSequenceLong(&seqState, windowSize);
3415++ size_t const oneSeqSize =
3416++ ZSTD_execSequenceLong(op, oend, sequences[(seqNb - ADVANCED_SEQS) & STOSEQ_MASK], &litPtr, litEnd, base, vBase, dictEnd);
3417++ if (ZSTD_isError(oneSeqSize))
3418++ return oneSeqSize;
3419++ ZSTD_PREFETCH(sequence.match);
3420++ sequences[seqNb & STOSEQ_MASK] = sequence;
3421++ op += oneSeqSize;
3422++ }
3423++ if (seqNb < nbSeq)
3424++ return ERROR(corruption_detected);
3425++
3426++ /* finish queue */
3427++ seqNb -= seqAdvance;
3428++ for (; seqNb < nbSeq; seqNb++) {
3429++ size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[seqNb & STOSEQ_MASK], &litPtr, litEnd, base, vBase, dictEnd);
3430++ if (ZSTD_isError(oneSeqSize))
3431++ return oneSeqSize;
3432++ op += oneSeqSize;
3433++ }
3434++
3435++ /* save reps for next block */
3436++ {
3437++ U32 i;
3438++ for (i = 0; i < ZSTD_REP_NUM; i++)
3439++ dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]);
3440++ }
3441++ }
3442++
3443++ /* last literal segment */
3444++ {
3445++ size_t const lastLLSize = litEnd - litPtr;
3446++ if (lastLLSize > (size_t)(oend - op))
3447++ return ERROR(dstSize_tooSmall);
3448++ memcpy(op, litPtr, lastLLSize);
3449++ op += lastLLSize;
3450++ }
3451++
3452++ return op - ostart;
3453++}
3454++
3455++static size_t INIT ZSTD_decompressBlock_internal(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize)
3456++{ /* blockType == blockCompressed */
3457++ const BYTE *ip = (const BYTE *)src;
3458++
3459++ if (srcSize >= ZSTD_BLOCKSIZE_ABSOLUTEMAX)
3460++ return ERROR(srcSize_wrong);
3461++
3462++ /* Decode literals section */
3463++ {
3464++ size_t const litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize);
3465++ if (ZSTD_isError(litCSize))
3466++ return litCSize;
3467++ ip += litCSize;
3468++ srcSize -= litCSize;
3469++ }
3470++	if (sizeof(size_t) > 4) /* do not enable prefetching on 32-bit x86, as it's detrimental to performance */
3471++ /* likely because of register pressure */
3472++ /* if that's the correct cause, then 32-bits ARM should be affected differently */
3473++ /* it would be good to test this on ARM real hardware, to see if prefetch version improves speed */
3474++ if (dctx->fParams.windowSize > (1 << 23))
3475++ return ZSTD_decompressSequencesLong(dctx, dst, dstCapacity, ip, srcSize);
3476++ return ZSTD_decompressSequences(dctx, dst, dstCapacity, ip, srcSize);
3477++}
3478++
3479++static void INIT ZSTD_checkContinuity(ZSTD_DCtx *dctx, const void *dst)
3480++{
3481++ if (dst != dctx->previousDstEnd) { /* not contiguous */
3482++ dctx->dictEnd = dctx->previousDstEnd;
3483++ dctx->vBase = (const char *)dst - ((const char *)(dctx->previousDstEnd) - (const char *)(dctx->base));
3484++ dctx->base = dst;
3485++ dctx->previousDstEnd = dst;
3486++ }
3487++}
3488++
3489++size_t INIT ZSTD_decompressBlock(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize)
3490++{
3491++ size_t dSize;
3492++ ZSTD_checkContinuity(dctx, dst);
3493++ dSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize);
3494++ dctx->previousDstEnd = (char *)dst + dSize;
3495++ return dSize;
3496++}
3497++
3498++/** ZSTD_insertBlock() :
3499++ insert `src` block into `dctx` history. Useful to track uncompressed blocks. */
3500++size_t INIT ZSTD_insertBlock(ZSTD_DCtx *dctx, const void *blockStart, size_t blockSize)
3501++{
3502++ ZSTD_checkContinuity(dctx, blockStart);
3503++ dctx->previousDstEnd = (const char *)blockStart + blockSize;
3504++ return blockSize;
3505++}
3506++
3507++size_t INIT ZSTD_generateNxBytes(void *dst, size_t dstCapacity, BYTE byte, size_t length)
3508++{
3509++ if (length > dstCapacity)
3510++ return ERROR(dstSize_tooSmall);
3511++ memset(dst, byte, length);
3512++ return length;
3513++}
3514++
3515++/** ZSTD_findFrameCompressedSize() :
3516++ * compatible with legacy mode
3517++ * `src` must point to the start of a ZSTD frame, ZSTD legacy frame, or skippable frame
3518++ * `srcSize` must be at least as large as the frame contained
3519++ * @return : the compressed size of the frame starting at `src` */
3520++size_t INIT ZSTD_findFrameCompressedSize(const void *src, size_t srcSize)
3521++{
3522++ if (srcSize >= ZSTD_skippableHeaderSize && (ZSTD_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
3523++ return ZSTD_skippableHeaderSize + ZSTD_readLE32((const BYTE *)src + 4);
3524++ } else {
3525++ const BYTE *ip = (const BYTE *)src;
3526++ const BYTE *const ipstart = ip;
3527++ size_t remainingSize = srcSize;
3528++ ZSTD_frameParams fParams;
3529++
3530++ size_t const headerSize = ZSTD_frameHeaderSize(ip, remainingSize);
3531++ if (ZSTD_isError(headerSize))
3532++ return headerSize;
3533++
3534++ /* Frame Header */
3535++ {
3536++ size_t const ret = ZSTD_getFrameParams(&fParams, ip, remainingSize);
3537++ if (ZSTD_isError(ret))
3538++ return ret;
3539++ if (ret > 0)
3540++ return ERROR(srcSize_wrong);
3541++ }
3542++
3543++ ip += headerSize;
3544++ remainingSize -= headerSize;
3545++
3546++ /* Loop on each block */
3547++ while (1) {
3548++ blockProperties_t blockProperties;
3549++ size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties);
3550++ if (ZSTD_isError(cBlockSize))
3551++ return cBlockSize;
3552++
3553++ if (ZSTD_blockHeaderSize + cBlockSize > remainingSize)
3554++ return ERROR(srcSize_wrong);
3555++
3556++ ip += ZSTD_blockHeaderSize + cBlockSize;
3557++ remainingSize -= ZSTD_blockHeaderSize + cBlockSize;
3558++
3559++ if (blockProperties.lastBlock)
3560++ break;
3561++ }
3562++
3563++ if (fParams.checksumFlag) { /* Frame content checksum */
3564++ if (remainingSize < 4)
3565++ return ERROR(srcSize_wrong);
3566++ ip += 4;
3567++ remainingSize -= 4;
3568++ }
3569++
3570++ return ip - ipstart;
3571++ }
3572++}
3573++
3574++/*! ZSTD_decompressFrame() :
3575++* @dctx must be properly initialized */
3576++static size_t INIT ZSTD_decompressFrame(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void **srcPtr, size_t *srcSizePtr)
3577++{
3578++ const BYTE *ip = (const BYTE *)(*srcPtr);
3579++ BYTE *const ostart = (BYTE * const)dst;
3580++ BYTE *const oend = ostart + dstCapacity;
3581++ BYTE *op = ostart;
3582++ size_t remainingSize = *srcSizePtr;
3583++
3584++ /* check */
3585++ if (remainingSize < ZSTD_frameHeaderSize_min + ZSTD_blockHeaderSize)
3586++ return ERROR(srcSize_wrong);
3587++
3588++ /* Frame Header */
3589++ {
3590++ size_t const frameHeaderSize = ZSTD_frameHeaderSize(ip, ZSTD_frameHeaderSize_prefix);
3591++ if (ZSTD_isError(frameHeaderSize))
3592++ return frameHeaderSize;
3593++ if (remainingSize < frameHeaderSize + ZSTD_blockHeaderSize)
3594++ return ERROR(srcSize_wrong);
3595++ CHECK_F(ZSTD_decodeFrameHeader(dctx, ip, frameHeaderSize));
3596++ ip += frameHeaderSize;
3597++ remainingSize -= frameHeaderSize;
3598++ }
3599++
3600++ /* Loop on each block */
3601++ while (1) {
3602++ size_t decodedSize;
3603++ blockProperties_t blockProperties;
3604++ size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties);
3605++ if (ZSTD_isError(cBlockSize))
3606++ return cBlockSize;
3607++
3608++ ip += ZSTD_blockHeaderSize;
3609++ remainingSize -= ZSTD_blockHeaderSize;
3610++ if (cBlockSize > remainingSize)
3611++ return ERROR(srcSize_wrong);
3612++
3613++ switch (blockProperties.blockType) {
3614++ case bt_compressed: decodedSize = ZSTD_decompressBlock_internal(dctx, op, oend - op, ip, cBlockSize); break;
3615++ case bt_raw: decodedSize = ZSTD_copyRawBlock(op, oend - op, ip, cBlockSize); break;
3616++ case bt_rle: decodedSize = ZSTD_generateNxBytes(op, oend - op, *ip, blockProperties.origSize); break;
3617++ case bt_reserved:
3618++ default: return ERROR(corruption_detected);
3619++ }
3620++
3621++ if (ZSTD_isError(decodedSize))
3622++ return decodedSize;
3623++ if (dctx->fParams.checksumFlag)
3624++ xxh64_update(&dctx->xxhState, op, decodedSize);
3625++ op += decodedSize;
3626++ ip += cBlockSize;
3627++ remainingSize -= cBlockSize;
3628++ if (blockProperties.lastBlock)
3629++ break;
3630++ }
3631++
3632++ if (dctx->fParams.checksumFlag) { /* Frame content checksum verification */
3633++ U32 const checkCalc = (U32)xxh64_digest(&dctx->xxhState);
3634++ U32 checkRead;
3635++ if (remainingSize < 4)
3636++ return ERROR(checksum_wrong);
3637++ checkRead = ZSTD_readLE32(ip);
3638++ if (checkRead != checkCalc)
3639++ return ERROR(checksum_wrong);
3640++ ip += 4;
3641++ remainingSize -= 4;
3642++ }
3643++
3644++ /* Allow caller to get size read */
3645++ *srcPtr = ip;
3646++ *srcSizePtr = remainingSize;
3647++ return op - ostart;
3648++}
3649++
3650++static const void *ZSTD_DDictDictContent(const ZSTD_DDict *ddict);
3651++static size_t ZSTD_DDictDictSize(const ZSTD_DDict *ddict);
3652++
3653++static size_t INIT ZSTD_decompressMultiFrame(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const void *dict, size_t dictSize,
3654++ const ZSTD_DDict *ddict)
3655++{
3656++ void *const dststart = dst;
3657++
3658++ if (ddict) {
3659++ if (dict) {
3660++ /* programmer error, these two cases should be mutually exclusive */
3661++ return ERROR(GENERIC);
3662++ }
3663++
3664++ dict = ZSTD_DDictDictContent(ddict);
3665++ dictSize = ZSTD_DDictDictSize(ddict);
3666++ }
3667++
3668++ while (srcSize >= ZSTD_frameHeaderSize_prefix) {
3669++ U32 magicNumber;
3670++
3671++ magicNumber = ZSTD_readLE32(src);
3672++ if (magicNumber != ZSTD_MAGICNUMBER) {
3673++ if ((magicNumber & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
3674++ size_t skippableSize;
3675++ if (srcSize < ZSTD_skippableHeaderSize)
3676++ return ERROR(srcSize_wrong);
3677++ skippableSize = ZSTD_readLE32((const BYTE *)src + 4) + ZSTD_skippableHeaderSize;
3678++ if (srcSize < skippableSize) {
3679++ return ERROR(srcSize_wrong);
3680++ }
3681++
3682++ src = (const BYTE *)src + skippableSize;
3683++ srcSize -= skippableSize;
3684++ continue;
3685++ } else {
3686++ return ERROR(prefix_unknown);
3687++ }
3688++ }
3689++
3690++ if (ddict) {
3691++ /* we were called from ZSTD_decompress_usingDDict */
3692++ ZSTD_refDDict(dctx, ddict);
3693++ } else {
3694++ /* this will initialize correctly with no dict if dict == NULL, so
3695++ * use this in all cases but ddict */
3696++ CHECK_F(ZSTD_decompressBegin_usingDict(dctx, dict, dictSize));
3697++ }
3698++ ZSTD_checkContinuity(dctx, dst);
3699++
3700++ {
3701++ const size_t res = ZSTD_decompressFrame(dctx, dst, dstCapacity, &src, &srcSize);
3702++ if (ZSTD_isError(res))
3703++ return res;
3704++ /* don't need to bounds check this, ZSTD_decompressFrame will have
3705++ * already */
3706++ dst = (BYTE *)dst + res;
3707++ dstCapacity -= res;
3708++ }
3709++ }
3710++
3711++ if (srcSize)
3712++ return ERROR(srcSize_wrong); /* input not entirely consumed */
3713++
3714++ return (BYTE *)dst - (BYTE *)dststart;
3715++}
3716++
3717++size_t INIT ZSTD_decompress_usingDict(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const void *dict, size_t dictSize)
3718++{
3719++ return ZSTD_decompressMultiFrame(dctx, dst, dstCapacity, src, srcSize, dict, dictSize, NULL);
3720++}
3721++
3722++size_t INIT ZSTD_decompressDCtx(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize)
3723++{
3724++ return ZSTD_decompress_usingDict(dctx, dst, dstCapacity, src, srcSize, NULL, 0);
3725++}
3726++
3727++/*-**************************************
3728++* Advanced Streaming Decompression API
3729++* Bufferless and synchronous
3730++****************************************/
3731++size_t INIT ZSTD_nextSrcSizeToDecompress(ZSTD_DCtx *dctx) { return dctx->expected; }
3732++
3733++ZSTD_nextInputType_e INIT ZSTD_nextInputType(ZSTD_DCtx *dctx)
3734++{
3735++ switch (dctx->stage) {
3736++ default: /* should not happen */
3737++ case ZSTDds_getFrameHeaderSize:
3738++ case ZSTDds_decodeFrameHeader: return ZSTDnit_frameHeader;
3739++ case ZSTDds_decodeBlockHeader: return ZSTDnit_blockHeader;
3740++ case ZSTDds_decompressBlock: return ZSTDnit_block;
3741++ case ZSTDds_decompressLastBlock: return ZSTDnit_lastBlock;
3742++ case ZSTDds_checkChecksum: return ZSTDnit_checksum;
3743++ case ZSTDds_decodeSkippableHeader:
3744++ case ZSTDds_skipFrame: return ZSTDnit_skippableFrame;
3745++ }
3746++}
3747++
3748++int INIT ZSTD_isSkipFrame(ZSTD_DCtx *dctx) { return dctx->stage == ZSTDds_skipFrame; } /* for zbuff */
3749++
3750++/** ZSTD_decompressContinue() :
3751++*   @return : nb of bytes generated into `dst` (necessarily <= `dstCapacity`)
3752++* or an error code, which can be tested using ZSTD_isError() */
3753++size_t INIT ZSTD_decompressContinue(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize)
3754++{
3755++ /* Sanity check */
3756++ if (srcSize != dctx->expected)
3757++ return ERROR(srcSize_wrong);
3758++ if (dstCapacity)
3759++ ZSTD_checkContinuity(dctx, dst);
3760++
3761++ switch (dctx->stage) {
3762++ case ZSTDds_getFrameHeaderSize:
3763++ if (srcSize != ZSTD_frameHeaderSize_prefix)
3764++ return ERROR(srcSize_wrong); /* impossible */
3765++ if ((ZSTD_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) { /* skippable frame */
3766++ memcpy(dctx->headerBuffer, src, ZSTD_frameHeaderSize_prefix);
3767++ dctx->expected = ZSTD_skippableHeaderSize - ZSTD_frameHeaderSize_prefix; /* magic number + skippable frame length */
3768++ dctx->stage = ZSTDds_decodeSkippableHeader;
3769++ return 0;
3770++ }
3771++ dctx->headerSize = ZSTD_frameHeaderSize(src, ZSTD_frameHeaderSize_prefix);
3772++ if (ZSTD_isError(dctx->headerSize))
3773++ return dctx->headerSize;
3774++ memcpy(dctx->headerBuffer, src, ZSTD_frameHeaderSize_prefix);
3775++ if (dctx->headerSize > ZSTD_frameHeaderSize_prefix) {
3776++ dctx->expected = dctx->headerSize - ZSTD_frameHeaderSize_prefix;
3777++ dctx->stage = ZSTDds_decodeFrameHeader;
3778++ return 0;
3779++ }
3780++ dctx->expected = 0; /* not necessary to copy more */
3781++ /* fallthrough */
3782++
3783++ case ZSTDds_decodeFrameHeader:
3784++ memcpy(dctx->headerBuffer + ZSTD_frameHeaderSize_prefix, src, dctx->expected);
3785++ CHECK_F(ZSTD_decodeFrameHeader(dctx, dctx->headerBuffer, dctx->headerSize));
3786++ dctx->expected = ZSTD_blockHeaderSize;
3787++ dctx->stage = ZSTDds_decodeBlockHeader;
3788++ return 0;
3789++
3790++ case ZSTDds_decodeBlockHeader: {
3791++ blockProperties_t bp;
3792++ size_t const cBlockSize = ZSTD_getcBlockSize(src, ZSTD_blockHeaderSize, &bp);
3793++ if (ZSTD_isError(cBlockSize))
3794++ return cBlockSize;
3795++ dctx->expected = cBlockSize;
3796++ dctx->bType = bp.blockType;
3797++ dctx->rleSize = bp.origSize;
3798++ if (cBlockSize) {
3799++ dctx->stage = bp.lastBlock ? ZSTDds_decompressLastBlock : ZSTDds_decompressBlock;
3800++ return 0;
3801++ }
3802++ /* empty block */
3803++ if (bp.lastBlock) {
3804++ if (dctx->fParams.checksumFlag) {
3805++ dctx->expected = 4;
3806++ dctx->stage = ZSTDds_checkChecksum;
3807++ } else {
3808++ dctx->expected = 0; /* end of frame */
3809++ dctx->stage = ZSTDds_getFrameHeaderSize;
3810++ }
3811++ } else {
3812++ dctx->expected = 3; /* go directly to next header */
3813++ dctx->stage = ZSTDds_decodeBlockHeader;
3814++ }
3815++ return 0;
3816++ }
3817++ case ZSTDds_decompressLastBlock:
3818++ case ZSTDds_decompressBlock: {
3819++ size_t rSize;
3820++ switch (dctx->bType) {
3821++ case bt_compressed: rSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize); break;
3822++ case bt_raw: rSize = ZSTD_copyRawBlock(dst, dstCapacity, src, srcSize); break;
3823++ case bt_rle: rSize = ZSTD_setRleBlock(dst, dstCapacity, src, srcSize, dctx->rleSize); break;
3824++ case bt_reserved: /* should never happen */
3825++ default: return ERROR(corruption_detected);
3826++ }
3827++ if (ZSTD_isError(rSize))
3828++ return rSize;
3829++ if (dctx->fParams.checksumFlag)
3830++ xxh64_update(&dctx->xxhState, dst, rSize);
3831++
3832++ if (dctx->stage == ZSTDds_decompressLastBlock) { /* end of frame */
3833++ if (dctx->fParams.checksumFlag) { /* another round for frame checksum */
3834++ dctx->expected = 4;
3835++ dctx->stage = ZSTDds_checkChecksum;
3836++ } else {
3837++ dctx->expected = 0; /* ends here */
3838++ dctx->stage = ZSTDds_getFrameHeaderSize;
3839++ }
3840++ } else {
3841++ dctx->stage = ZSTDds_decodeBlockHeader;
3842++ dctx->expected = ZSTD_blockHeaderSize;
3843++ dctx->previousDstEnd = (char *)dst + rSize;
3844++ }
3845++ return rSize;
3846++ }
3847++ case ZSTDds_checkChecksum: {
3848++ U32 const h32 = (U32)xxh64_digest(&dctx->xxhState);
3849++ U32 const check32 = ZSTD_readLE32(src); /* srcSize == 4, guaranteed by dctx->expected */
3850++ if (check32 != h32)
3851++ return ERROR(checksum_wrong);
3852++ dctx->expected = 0;
3853++ dctx->stage = ZSTDds_getFrameHeaderSize;
3854++ return 0;
3855++ }
3856++ case ZSTDds_decodeSkippableHeader: {
3857++ memcpy(dctx->headerBuffer + ZSTD_frameHeaderSize_prefix, src, dctx->expected);
3858++ dctx->expected = ZSTD_readLE32(dctx->headerBuffer + 4);
3859++ dctx->stage = ZSTDds_skipFrame;
3860++ return 0;
3861++ }
3862++ case ZSTDds_skipFrame: {
3863++ dctx->expected = 0;
3864++ dctx->stage = ZSTDds_getFrameHeaderSize;
3865++ return 0;
3866++ }
3867++ default:
3868++ return ERROR(GENERIC); /* impossible */
3869++ }
3870++}
3871++
3872++static size_t INIT ZSTD_refDictContent(ZSTD_DCtx *dctx, const void *dict, size_t dictSize)
3873++{
3874++ dctx->dictEnd = dctx->previousDstEnd;
3875++ dctx->vBase = (const char *)dict - ((const char *)(dctx->previousDstEnd) - (const char *)(dctx->base));
3876++ dctx->base = dict;
3877++ dctx->previousDstEnd = (const char *)dict + dictSize;
3878++ return 0;
3879++}
3880++
3881++/* ZSTD_loadEntropy() :
3882++ * dict : must point at beginning of a valid zstd dictionary
3883++ * @return : size of entropy tables read */
3884++static size_t INIT ZSTD_loadEntropy(ZSTD_entropyTables_t *entropy, const void *const dict, size_t const dictSize)
3885++{
3886++ const BYTE *dictPtr = (const BYTE *)dict;
3887++ const BYTE *const dictEnd = dictPtr + dictSize;
3888++
3889++ if (dictSize <= 8)
3890++ return ERROR(dictionary_corrupted);
3891++ dictPtr += 8; /* skip header = magic + dictID */
3892++
3893++ {
3894++ size_t const hSize = HUF_readDTableX4_wksp(entropy->hufTable, dictPtr, dictEnd - dictPtr, entropy->workspace, sizeof(entropy->workspace));
3895++ if (HUF_isError(hSize))
3896++ return ERROR(dictionary_corrupted);
3897++ dictPtr += hSize;
3898++ }
3899++
3900++ {
3901++ short offcodeNCount[MaxOff + 1];
3902++ U32 offcodeMaxValue = MaxOff, offcodeLog;
3903++ size_t const offcodeHeaderSize = FSE_readNCount(offcodeNCount, &offcodeMaxValue, &offcodeLog, dictPtr, dictEnd - dictPtr);
3904++ if (FSE_isError(offcodeHeaderSize))
3905++ return ERROR(dictionary_corrupted);
3906++ if (offcodeLog > OffFSELog)
3907++ return ERROR(dictionary_corrupted);
3908++ CHECK_E(FSE_buildDTable_wksp(entropy->OFTable, offcodeNCount, offcodeMaxValue, offcodeLog, entropy->workspace, sizeof(entropy->workspace)), dictionary_corrupted);
3909++ dictPtr += offcodeHeaderSize;
3910++ }
3911++
3912++ {
3913++ short matchlengthNCount[MaxML + 1];
3914++ unsigned matchlengthMaxValue = MaxML, matchlengthLog;
3915++ size_t const matchlengthHeaderSize = FSE_readNCount(matchlengthNCount, &matchlengthMaxValue, &matchlengthLog, dictPtr, dictEnd - dictPtr);
3916++ if (FSE_isError(matchlengthHeaderSize))
3917++ return ERROR(dictionary_corrupted);
3918++ if (matchlengthLog > MLFSELog)
3919++ return ERROR(dictionary_corrupted);
3920++ CHECK_E(FSE_buildDTable_wksp(entropy->MLTable, matchlengthNCount, matchlengthMaxValue, matchlengthLog, entropy->workspace, sizeof(entropy->workspace)), dictionary_corrupted);
3921++ dictPtr += matchlengthHeaderSize;
3922++ }
3923++
3924++ {
3925++ short litlengthNCount[MaxLL + 1];
3926++ unsigned litlengthMaxValue = MaxLL, litlengthLog;
3927++ size_t const litlengthHeaderSize = FSE_readNCount(litlengthNCount, &litlengthMaxValue, &litlengthLog, dictPtr, dictEnd - dictPtr);
3928++ if (FSE_isError(litlengthHeaderSize))
3929++ return ERROR(dictionary_corrupted);
3930++ if (litlengthLog > LLFSELog)
3931++ return ERROR(dictionary_corrupted);
3932++ CHECK_E(FSE_buildDTable_wksp(entropy->LLTable, litlengthNCount, litlengthMaxValue, litlengthLog, entropy->workspace, sizeof(entropy->workspace)), dictionary_corrupted);
3933++ dictPtr += litlengthHeaderSize;
3934++ }
3935++
3936++ if (dictPtr + 12 > dictEnd)
3937++ return ERROR(dictionary_corrupted);
3938++ {
3939++ int i;
3940++ size_t const dictContentSize = (size_t)(dictEnd - (dictPtr + 12));
3941++ for (i = 0; i < 3; i++) {
3942++ U32 const rep = ZSTD_readLE32(dictPtr);
3943++ dictPtr += 4;
3944++ if (rep == 0 || rep >= dictContentSize)
3945++ return ERROR(dictionary_corrupted);
3946++ entropy->rep[i] = rep;
3947++ }
3948++ }
3949++
3950++ return dictPtr - (const BYTE *)dict;
3951++}
3952++
3953++static size_t INIT ZSTD_decompress_insertDictionary(ZSTD_DCtx *dctx, const void *dict, size_t dictSize)
3954++{
3955++ if (dictSize < 8)
3956++ return ZSTD_refDictContent(dctx, dict, dictSize);
3957++ {
3958++ U32 const magic = ZSTD_readLE32(dict);
3959++ if (magic != ZSTD_DICT_MAGIC) {
3960++ return ZSTD_refDictContent(dctx, dict, dictSize); /* pure content mode */
3961++ }
3962++ }
3963++ dctx->dictID = ZSTD_readLE32((const char *)dict + 4);
3964++
3965++ /* load entropy tables */
3966++ {
3967++ size_t const eSize = ZSTD_loadEntropy(&dctx->entropy, dict, dictSize);
3968++ if (ZSTD_isError(eSize))
3969++ return ERROR(dictionary_corrupted);
3970++ dict = (const char *)dict + eSize;
3971++ dictSize -= eSize;
3972++ }
3973++ dctx->litEntropy = dctx->fseEntropy = 1;
3974++
3975++ /* reference dictionary content */
3976++ return ZSTD_refDictContent(dctx, dict, dictSize);
3977++}
3978++
3979++size_t INIT ZSTD_decompressBegin_usingDict(ZSTD_DCtx *dctx, const void *dict, size_t dictSize)
3980++{
3981++ CHECK_F(ZSTD_decompressBegin(dctx));
3982++ if (dict && dictSize)
3983++ CHECK_E(ZSTD_decompress_insertDictionary(dctx, dict, dictSize), dictionary_corrupted);
3984++ return 0;
3985++}
3986++
3987++/* ====== ZSTD_DDict ====== */
3988++
3989++struct ZSTD_DDict_s {
3990++ void *dictBuffer;
3991++ const void *dictContent;
3992++ size_t dictSize;
3993++ ZSTD_entropyTables_t entropy;
3994++ U32 dictID;
3995++ U32 entropyPresent;
3996++ ZSTD_customMem cMem;
3997++}; /* typedef'd to ZSTD_DDict within "zstd.h" */
3998++
3999++size_t INIT ZSTD_DDictWorkspaceBound(void) { return ZSTD_ALIGN(sizeof(ZSTD_stack)) + ZSTD_ALIGN(sizeof(ZSTD_DDict)); }
4000++
4001++static const void *INIT ZSTD_DDictDictContent(const ZSTD_DDict *ddict) { return ddict->dictContent; }
4002++
4003++static size_t INIT ZSTD_DDictDictSize(const ZSTD_DDict *ddict) { return ddict->dictSize; }
4004++
4005++static void INIT ZSTD_refDDict(ZSTD_DCtx *dstDCtx, const ZSTD_DDict *ddict)
4006++{
4007++ ZSTD_decompressBegin(dstDCtx); /* init */
4008++ if (ddict) { /* support refDDict on NULL */
4009++ dstDCtx->dictID = ddict->dictID;
4010++ dstDCtx->base = ddict->dictContent;
4011++ dstDCtx->vBase = ddict->dictContent;
4012++ dstDCtx->dictEnd = (const BYTE *)ddict->dictContent + ddict->dictSize;
4013++ dstDCtx->previousDstEnd = dstDCtx->dictEnd;
4014++ if (ddict->entropyPresent) {
4015++ dstDCtx->litEntropy = 1;
4016++ dstDCtx->fseEntropy = 1;
4017++ dstDCtx->LLTptr = ddict->entropy.LLTable;
4018++ dstDCtx->MLTptr = ddict->entropy.MLTable;
4019++ dstDCtx->OFTptr = ddict->entropy.OFTable;
4020++ dstDCtx->HUFptr = ddict->entropy.hufTable;
4021++ dstDCtx->entropy.rep[0] = ddict->entropy.rep[0];
4022++ dstDCtx->entropy.rep[1] = ddict->entropy.rep[1];
4023++ dstDCtx->entropy.rep[2] = ddict->entropy.rep[2];
4024++ } else {
4025++ dstDCtx->litEntropy = 0;
4026++ dstDCtx->fseEntropy = 0;
4027++ }
4028++ }
4029++}
4030++
4031++static size_t INIT ZSTD_loadEntropy_inDDict(ZSTD_DDict *ddict)
4032++{
4033++ ddict->dictID = 0;
4034++ ddict->entropyPresent = 0;
4035++ if (ddict->dictSize < 8)
4036++ return 0;
4037++ {
4038++ U32 const magic = ZSTD_readLE32(ddict->dictContent);
4039++ if (magic != ZSTD_DICT_MAGIC)
4040++ return 0; /* pure content mode */
4041++ }
4042++ ddict->dictID = ZSTD_readLE32((const char *)ddict->dictContent + 4);
4043++
4044++ /* load entropy tables */
4045++ CHECK_E(ZSTD_loadEntropy(&ddict->entropy, ddict->dictContent, ddict->dictSize), dictionary_corrupted);
4046++ ddict->entropyPresent = 1;
4047++ return 0;
4048++}
4049++
4050++static ZSTD_DDict *INIT ZSTD_createDDict_advanced(const void *dict, size_t dictSize, unsigned byReference, ZSTD_customMem customMem)
4051++{
4052++ if (!customMem.customAlloc || !customMem.customFree)
4053++ return NULL;
4054++
4055++ {
4056++ ZSTD_DDict *const ddict = (ZSTD_DDict *)ZSTD_malloc(sizeof(ZSTD_DDict), customMem);
4057++ if (!ddict)
4058++ return NULL;
4059++ ddict->cMem = customMem;
4060++
4061++ if ((byReference) || (!dict) || (!dictSize)) {
4062++ ddict->dictBuffer = NULL;
4063++ ddict->dictContent = dict;
4064++ } else {
4065++ void *const internalBuffer = ZSTD_malloc(dictSize, customMem);
4066++ if (!internalBuffer) {
4067++ ZSTD_freeDDict(ddict);
4068++ return NULL;
4069++ }
4070++ memcpy(internalBuffer, dict, dictSize);
4071++ ddict->dictBuffer = internalBuffer;
4072++ ddict->dictContent = internalBuffer;
4073++ }
4074++ ddict->dictSize = dictSize;
4075++ ddict->entropy.hufTable[0] = (HUF_DTable)((HufLog)*0x1000001); /* cover both little and big endian */
4076++ /* parse dictionary content */
4077++ {
4078++ size_t const errorCode = ZSTD_loadEntropy_inDDict(ddict);
4079++ if (ZSTD_isError(errorCode)) {
4080++ ZSTD_freeDDict(ddict);
4081++ return NULL;
4082++ }
4083++ }
4084++
4085++ return ddict;
4086++ }
4087++}
4088++
4089++/*! ZSTD_initDDict() :
4090++* Create a digested dictionary, to start decompression without startup delay.
4091++* `dict` content is copied inside DDict.
4092++* Consequently, `dict` can be released after `ZSTD_DDict` creation */
4093++ZSTD_DDict *INIT ZSTD_initDDict(const void *dict, size_t dictSize, void *workspace, size_t workspaceSize)
4094++{
4095++ ZSTD_customMem const stackMem = ZSTD_initStack(workspace, workspaceSize);
4096++ return ZSTD_createDDict_advanced(dict, dictSize, 1, stackMem);
4097++}
4098++
4099++size_t INIT ZSTD_freeDDict(ZSTD_DDict *ddict)
4100++{
4101++ if (ddict == NULL)
4102++ return 0; /* support free on NULL */
4103++ {
4104++ ZSTD_customMem const cMem = ddict->cMem;
4105++ ZSTD_free(ddict->dictBuffer, cMem);
4106++ ZSTD_free(ddict, cMem);
4107++ return 0;
4108++ }
4109++}
4110++
4111++/*! ZSTD_getDictID_fromDict() :
4112++ * Provides the dictID stored within dictionary.
4113++ * if @return == 0, the dictionary is not conformant with Zstandard specification.
4114++ * It can still be loaded, but as a content-only dictionary. */
4115++unsigned INIT ZSTD_getDictID_fromDict(const void *dict, size_t dictSize)
4116++{
4117++ if (dictSize < 8)
4118++ return 0;
4119++ if (ZSTD_readLE32(dict) != ZSTD_DICT_MAGIC)
4120++ return 0;
4121++ return ZSTD_readLE32((const char *)dict + 4);
4122++}
4123++
4124++/*! ZSTD_getDictID_fromDDict() :
4125++ * Provides the dictID of the dictionary loaded into `ddict`.
4126++ * If @return == 0, the dictionary is not conformant to Zstandard specification, or empty.
4127++ * Non-conformant dictionaries can still be loaded, but as content-only dictionaries. */
4128++unsigned INIT ZSTD_getDictID_fromDDict(const ZSTD_DDict *ddict)
4129++{
4130++ if (ddict == NULL)
4131++ return 0;
4132++ return ZSTD_getDictID_fromDict(ddict->dictContent, ddict->dictSize);
4133++}
4134++
4135++/*! ZSTD_getDictID_fromFrame() :
4136++ *  Provides the dictID required to decompress the frame stored within `src`.
4137++ * If @return == 0, the dictID could not be decoded.
4138++ *  This could be for one of the following reasons :
4139++ *  - The frame does not require a dictionary to be decoded (most common case).
4140++ *  - The frame was built with dictID intentionally removed. Whatever dictionary is necessary is hidden information.
4141++ * Note : this use case also happens when using a non-conformant dictionary.
4142++ * - `srcSize` is too small, and as a result, the frame header could not be decoded (only possible if `srcSize < ZSTD_FRAMEHEADERSIZE_MAX`).
4143++ * - This is not a Zstandard frame.
4144++ *  When identifying the exact failure cause, it's possible to use ZSTD_getFrameParams(), which will provide a more precise error code. */
4145++unsigned INIT ZSTD_getDictID_fromFrame(const void *src, size_t srcSize)
4146++{
4147++ ZSTD_frameParams zfp = {0, 0, 0, 0};
4148++ size_t const hError = ZSTD_getFrameParams(&zfp, src, srcSize);
4149++ if (ZSTD_isError(hError))
4150++ return 0;
4151++ return zfp.dictID;
4152++}
4153++
4154++/*! ZSTD_decompress_usingDDict() :
4155++* Decompression using a pre-digested Dictionary
4156++* Use dictionary without significant overhead. */
4157++size_t INIT ZSTD_decompress_usingDDict(ZSTD_DCtx *dctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const ZSTD_DDict *ddict)
4158++{
4159++ /* pass content and size in case legacy frames are encountered */
4160++ return ZSTD_decompressMultiFrame(dctx, dst, dstCapacity, src, srcSize, NULL, 0, ddict);
4161++}
4162++
4163++/*=====================================
4164++* Streaming decompression
4165++*====================================*/
4166++
4167++typedef enum { zdss_init, zdss_loadHeader, zdss_read, zdss_load, zdss_flush } ZSTD_dStreamStage;
4168++
4169++/* *** Resource management *** */
4170++struct ZSTD_DStream_s {
4171++ ZSTD_DCtx *dctx;
4172++ ZSTD_DDict *ddictLocal;
4173++ const ZSTD_DDict *ddict;
4174++ ZSTD_frameParams fParams;
4175++ ZSTD_dStreamStage stage;
4176++ char *inBuff;
4177++ size_t inBuffSize;
4178++ size_t inPos;
4179++ size_t maxWindowSize;
4180++ char *outBuff;
4181++ size_t outBuffSize;
4182++ size_t outStart;
4183++ size_t outEnd;
4184++ size_t blockSize;
4185++ BYTE headerBuffer[ZSTD_FRAMEHEADERSIZE_MAX]; /* tmp buffer to store frame header */
4186++ size_t lhSize;
4187++ ZSTD_customMem customMem;
4188++ void *legacyContext;
4189++ U32 previousLegacyVersion;
4190++ U32 legacyVersion;
4191++ U32 hostageByte;
4192++}; /* typedef'd to ZSTD_DStream within "zstd.h" */
4193++
4194++size_t INIT ZSTD_DStreamWorkspaceBound(size_t maxWindowSize)
4195++{
4196++ size_t const blockSize = MIN(maxWindowSize, ZSTD_BLOCKSIZE_ABSOLUTEMAX);
4197++ size_t const inBuffSize = blockSize;
4198++ size_t const outBuffSize = maxWindowSize + blockSize + WILDCOPY_OVERLENGTH * 2;
4199++ return ZSTD_DCtxWorkspaceBound() + ZSTD_ALIGN(sizeof(ZSTD_DStream)) + ZSTD_ALIGN(inBuffSize) + ZSTD_ALIGN(outBuffSize);
4200++}
4201++
4202++static ZSTD_DStream *INIT ZSTD_createDStream_advanced(ZSTD_customMem customMem)
4203++{
4204++ ZSTD_DStream *zds;
4205++
4206++ if (!customMem.customAlloc || !customMem.customFree)
4207++ return NULL;
4208++
4209++ zds = (ZSTD_DStream *)ZSTD_malloc(sizeof(ZSTD_DStream), customMem);
4210++ if (zds == NULL)
4211++ return NULL;
4212++ memset(zds, 0, sizeof(ZSTD_DStream));
4213++ memcpy(&zds->customMem, &customMem, sizeof(ZSTD_customMem));
4214++ zds->dctx = ZSTD_createDCtx_advanced(customMem);
4215++ if (zds->dctx == NULL) {
4216++ ZSTD_freeDStream(zds);
4217++ return NULL;
4218++ }
4219++ zds->stage = zdss_init;
4220++ zds->maxWindowSize = ZSTD_MAXWINDOWSIZE_DEFAULT;
4221++ return zds;
4222++}
4223++
4224++ZSTD_DStream *INIT ZSTD_initDStream(size_t maxWindowSize, void *workspace, size_t workspaceSize)
4225++{
4226++ ZSTD_customMem const stackMem = ZSTD_initStack(workspace, workspaceSize);
4227++ ZSTD_DStream *zds = ZSTD_createDStream_advanced(stackMem);
4228++ if (!zds) {
4229++ return NULL;
4230++ }
4231++
4232++ zds->maxWindowSize = maxWindowSize;
4233++ zds->stage = zdss_loadHeader;
4234++ zds->lhSize = zds->inPos = zds->outStart = zds->outEnd = 0;
4235++ ZSTD_freeDDict(zds->ddictLocal);
4236++ zds->ddictLocal = NULL;
4237++ zds->ddict = zds->ddictLocal;
4238++ zds->legacyVersion = 0;
4239++ zds->hostageByte = 0;
4240++
4241++ {
4242++ size_t const blockSize = MIN(zds->maxWindowSize, ZSTD_BLOCKSIZE_ABSOLUTEMAX);
4243++ size_t const neededOutSize = zds->maxWindowSize + blockSize + WILDCOPY_OVERLENGTH * 2;
4244++
4245++ zds->inBuff = (char *)ZSTD_malloc(blockSize, zds->customMem);
4246++ zds->inBuffSize = blockSize;
4247++ zds->outBuff = (char *)ZSTD_malloc(neededOutSize, zds->customMem);
4248++ zds->outBuffSize = neededOutSize;
4249++ if (zds->inBuff == NULL || zds->outBuff == NULL) {
4250++ ZSTD_freeDStream(zds);
4251++ return NULL;
4252++ }
4253++ }
4254++ return zds;
4255++}
4256++
4257++ZSTD_DStream *INIT ZSTD_initDStream_usingDDict(size_t maxWindowSize, const ZSTD_DDict *ddict, void *workspace, size_t workspaceSize)
4258++{
4259++ ZSTD_DStream *zds = ZSTD_initDStream(maxWindowSize, workspace, workspaceSize);
4260++ if (zds) {
4261++ zds->ddict = ddict;
4262++ }
4263++ return zds;
4264++}
4265++
4266++size_t INIT ZSTD_freeDStream(ZSTD_DStream *zds)
4267++{
4268++ if (zds == NULL)
4269++ return 0; /* support free on null */
4270++ {
4271++ ZSTD_customMem const cMem = zds->customMem;
4272++ ZSTD_freeDCtx(zds->dctx);
4273++ zds->dctx = NULL;
4274++ ZSTD_freeDDict(zds->ddictLocal);
4275++ zds->ddictLocal = NULL;
4276++ ZSTD_free(zds->inBuff, cMem);
4277++ zds->inBuff = NULL;
4278++ ZSTD_free(zds->outBuff, cMem);
4279++ zds->outBuff = NULL;
4280++ ZSTD_free(zds, cMem);
4281++ return 0;
4282++ }
4283++}
4284++
4285++/* *** Initialization *** */
4286++
4287++size_t INIT ZSTD_DStreamInSize(void) { return ZSTD_BLOCKSIZE_ABSOLUTEMAX + ZSTD_blockHeaderSize; }
4288++size_t INIT ZSTD_DStreamOutSize(void) { return ZSTD_BLOCKSIZE_ABSOLUTEMAX; }
4289++
4290++size_t INIT ZSTD_resetDStream(ZSTD_DStream *zds)
4291++{
4292++ zds->stage = zdss_loadHeader;
4293++ zds->lhSize = zds->inPos = zds->outStart = zds->outEnd = 0;
4294++ zds->legacyVersion = 0;
4295++ zds->hostageByte = 0;
4296++ return ZSTD_frameHeaderSize_prefix;
4297++}
4298++
4299++/* ***** Decompression ***** */
4300++
4301++ZSTD_STATIC size_t INIT ZSTD_limitCopy(void *dst, size_t dstCapacity, const void *src, size_t srcSize)
4302++{
4303++ size_t const length = MIN(dstCapacity, srcSize);
4304++ memcpy(dst, src, length);
4305++ return length;
4306++}
4307++
4308++size_t INIT ZSTD_decompressStream(ZSTD_DStream *zds, ZSTD_outBuffer *output, ZSTD_inBuffer *input)
4309++{
4310++ const char *const istart = (const char *)(input->src) + input->pos;
4311++ const char *const iend = (const char *)(input->src) + input->size;
4312++ const char *ip = istart;
4313++ char *const ostart = (char *)(output->dst) + output->pos;
4314++ char *const oend = (char *)(output->dst) + output->size;
4315++ char *op = ostart;
4316++ U32 someMoreWork = 1;
4317++
4318++ while (someMoreWork) {
4319++ switch (zds->stage) {
4320++ case zdss_init:
4321++ ZSTD_resetDStream(zds); /* transparent reset on starting decoding a new frame */
4322++ /* fallthrough */
4323++
4324++ case zdss_loadHeader: {
4325++ size_t const hSize = ZSTD_getFrameParams(&zds->fParams, zds->headerBuffer, zds->lhSize);
4326++ if (ZSTD_isError(hSize))
4327++ return hSize;
4328++ if (hSize != 0) { /* need more input */
4329++ size_t const toLoad = hSize - zds->lhSize; /* if hSize!=0, hSize > zds->lhSize */
4330++ if (toLoad > (size_t)(iend - ip)) { /* not enough input to load full header */
4331++ memcpy(zds->headerBuffer + zds->lhSize, ip, iend - ip);
4332++ zds->lhSize += iend - ip;
4333++ input->pos = input->size;
4334++ return (MAX(ZSTD_frameHeaderSize_min, hSize) - zds->lhSize) +
4335++ ZSTD_blockHeaderSize; /* remaining header bytes + next block header */
4336++ }
4337++ memcpy(zds->headerBuffer + zds->lhSize, ip, toLoad);
4338++ zds->lhSize = hSize;
4339++ ip += toLoad;
4340++ break;
4341++ }
4342++
4343++ /* check for single-pass mode opportunity */
4344++ if (zds->fParams.frameContentSize && zds->fParams.windowSize /* skippable frame if == 0 */
4345++ && (U64)(size_t)(oend - op) >= zds->fParams.frameContentSize) {
4346++ size_t const cSize = ZSTD_findFrameCompressedSize(istart, iend - istart);
4347++ if (cSize <= (size_t)(iend - istart)) {
4348++ size_t const decompressedSize = ZSTD_decompress_usingDDict(zds->dctx, op, oend - op, istart, cSize, zds->ddict);
4349++ if (ZSTD_isError(decompressedSize))
4350++ return decompressedSize;
4351++ ip = istart + cSize;
4352++ op += decompressedSize;
4353++ zds->dctx->expected = 0;
4354++ zds->stage = zdss_init;
4355++ someMoreWork = 0;
4356++ break;
4357++ }
4358++ }
4359++
4360++ /* Consume header */
4361++ ZSTD_refDDict(zds->dctx, zds->ddict);
4362++ {
4363++ size_t const h1Size = ZSTD_nextSrcSizeToDecompress(zds->dctx); /* == ZSTD_frameHeaderSize_prefix */
4364++ CHECK_F(ZSTD_decompressContinue(zds->dctx, NULL, 0, zds->headerBuffer, h1Size));
4365++ {
4366++ size_t const h2Size = ZSTD_nextSrcSizeToDecompress(zds->dctx);
4367++ CHECK_F(ZSTD_decompressContinue(zds->dctx, NULL, 0, zds->headerBuffer + h1Size, h2Size));
4368++ }
4369++ }
4370++
4371++ zds->fParams.windowSize = MAX(zds->fParams.windowSize, 1U << ZSTD_WINDOWLOG_ABSOLUTEMIN);
4372++ if (zds->fParams.windowSize > zds->maxWindowSize)
4373++ return ERROR(frameParameter_windowTooLarge);
4374++
4375++ /* Buffers are preallocated, but double check */
4376++ {
4377++ size_t const blockSize = MIN(zds->maxWindowSize, ZSTD_BLOCKSIZE_ABSOLUTEMAX);
4378++ size_t const neededOutSize = zds->maxWindowSize + blockSize + WILDCOPY_OVERLENGTH * 2;
4379++ if (zds->inBuffSize < blockSize) {
4380++ return ERROR(GENERIC);
4381++ }
4382++ if (zds->outBuffSize < neededOutSize) {
4383++ return ERROR(GENERIC);
4384++ }
4385++ zds->blockSize = blockSize;
4386++ }
4387++ zds->stage = zdss_read;
4388++ }
4389++ /* fallthrough */
4390++
4391++ case zdss_read: {
4392++ size_t const neededInSize = ZSTD_nextSrcSizeToDecompress(zds->dctx);
4393++ if (neededInSize == 0) { /* end of frame */
4394++ zds->stage = zdss_init;
4395++ someMoreWork = 0;
4396++ break;
4397++ }
4398++ if ((size_t)(iend - ip) >= neededInSize) { /* decode directly from src */
4399++ const int isSkipFrame = ZSTD_isSkipFrame(zds->dctx);
4400++ size_t const decodedSize = ZSTD_decompressContinue(zds->dctx, zds->outBuff + zds->outStart,
4401++ (isSkipFrame ? 0 : zds->outBuffSize - zds->outStart), ip, neededInSize);
4402++ if (ZSTD_isError(decodedSize))
4403++ return decodedSize;
4404++ ip += neededInSize;
4405++ if (!decodedSize && !isSkipFrame)
4406++ break; /* this was just a header */
4407++ zds->outEnd = zds->outStart + decodedSize;
4408++ zds->stage = zdss_flush;
4409++ break;
4410++ }
4411++ if (ip == iend) {
4412++ someMoreWork = 0;
4413++ break;
4414++ } /* no more input */
4415++ zds->stage = zdss_load;
4416++ /* pass-through */
4417++ }
4418++ /* fallthrough */
4419++
4420++ case zdss_load: {
4421++ size_t const neededInSize = ZSTD_nextSrcSizeToDecompress(zds->dctx);
4422++ size_t const toLoad = neededInSize - zds->inPos; /* should always be <= remaining space within inBuff */
4423++ size_t loadedSize;
4424++ if (toLoad > zds->inBuffSize - zds->inPos)
4425++ return ERROR(corruption_detected); /* should never happen */
4426++ loadedSize = ZSTD_limitCopy(zds->inBuff + zds->inPos, toLoad, ip, iend - ip);
4427++ ip += loadedSize;
4428++ zds->inPos += loadedSize;
4429++ if (loadedSize < toLoad) {
4430++ someMoreWork = 0;
4431++ break;
4432++ } /* not enough input, wait for more */
4433++
4434++ /* decode loaded input */
4435++ {
4436++ const int isSkipFrame = ZSTD_isSkipFrame(zds->dctx);
4437++ size_t const decodedSize = ZSTD_decompressContinue(zds->dctx, zds->outBuff + zds->outStart, zds->outBuffSize - zds->outStart,
4438++ zds->inBuff, neededInSize);
4439++ if (ZSTD_isError(decodedSize))
4440++ return decodedSize;
4441++ zds->inPos = 0; /* input is consumed */
4442++ if (!decodedSize && !isSkipFrame) {
4443++ zds->stage = zdss_read;
4444++ break;
4445++ } /* this was just a header */
4446++ zds->outEnd = zds->outStart + decodedSize;
4447++ zds->stage = zdss_flush;
4448++ /* pass-through */
4449++ }
4450++ }
4451++ /* fallthrough */
4452++
4453++ case zdss_flush: {
4454++ size_t const toFlushSize = zds->outEnd - zds->outStart;
4455++ size_t const flushedSize = ZSTD_limitCopy(op, oend - op, zds->outBuff + zds->outStart, toFlushSize);
4456++ op += flushedSize;
4457++ zds->outStart += flushedSize;
4458++ if (flushedSize == toFlushSize) { /* flush completed */
4459++ zds->stage = zdss_read;
4460++ if (zds->outStart + zds->blockSize > zds->outBuffSize)
4461++ zds->outStart = zds->outEnd = 0;
4462++ break;
4463++ }
4464++ /* cannot complete flush */
4465++ someMoreWork = 0;
4466++ break;
4467++ }
4468++ default:
4469++ return ERROR(GENERIC); /* impossible */
4470++ }
4471++ }
4472++
4473++ /* result */
4474++ input->pos += (size_t)(ip - istart);
4475++ output->pos += (size_t)(op - ostart);
4476++ {
4477++ size_t nextSrcSizeHint = ZSTD_nextSrcSizeToDecompress(zds->dctx);
4478++ if (!nextSrcSizeHint) { /* frame fully decoded */
4479++ if (zds->outEnd == zds->outStart) { /* output fully flushed */
4480++ if (zds->hostageByte) {
4481++ if (input->pos >= input->size) {
4482++ zds->stage = zdss_read;
4483++ return 1;
4484++ } /* can't release hostage (not present) */
4485++ input->pos++; /* release hostage */
4486++ }
4487++ return 0;
4488++ }
4489++ if (!zds->hostageByte) { /* output not fully flushed; keep last byte as hostage; will be released when all output is flushed */
4490++ input->pos--; /* note : pos > 0, otherwise, impossible to finish reading last block */
4491++ zds->hostageByte = 1;
4492++ }
4493++ return 1;
4494++ }
4495++ nextSrcSizeHint += ZSTD_blockHeaderSize * (ZSTD_nextInputType(zds->dctx) == ZSTDnit_block); /* preload header of next block */
4496++ if (zds->inPos > nextSrcSizeHint)
4497++ return ERROR(GENERIC); /* should never happen */
4498++ nextSrcSizeHint -= zds->inPos; /* already loaded*/
4499++ return nextSrcSizeHint;
4500++ }
4501++}
4502+diff --git a/xen/common/zstd/entropy_common.c b/xen/common/zstd/entropy_common.c
4503+new file mode 100644
4504+index 000000000000..bcdb57982ba5
4505+--- /dev/null
4506++++ b/xen/common/zstd/entropy_common.c
4507+@@ -0,0 +1,243 @@
4508++/*
4509++ * Common functions of New Generation Entropy library
4510++ * Copyright (C) 2016, Yann Collet.
4511++ *
4512++ * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
4513++ *
4514++ * Redistribution and use in source and binary forms, with or without
4515++ * modification, are permitted provided that the following conditions are
4516++ * met:
4517++ *
4518++ * * Redistributions of source code must retain the above copyright
4519++ * notice, this list of conditions and the following disclaimer.
4520++ * * Redistributions in binary form must reproduce the above
4521++ * copyright notice, this list of conditions and the following disclaimer
4522++ * in the documentation and/or other materials provided with the
4523++ * distribution.
4524++ *
4525++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
4526++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
4527++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
4528++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
4529++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
4530++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
4531++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
4532++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
4533++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4534++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
4535++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4536++ *
4537++ * This program is free software; you can redistribute it and/or modify it under
4538++ * the terms of the GNU General Public License version 2 as published by the
4539++ * Free Software Foundation. This program is dual-licensed; you may select
4540++ * either version 2 of the GNU General Public License ("GPL") or BSD license
4541++ * ("BSD").
4542++ *
4543++ * You can contact the author at :
4544++ * - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
4545++ */
4546++
4547++/* *************************************
4548++* Dependencies
4549++***************************************/
4550++#include "error_private.h" /* ERR_*, ERROR */
4551++#include "fse.h"
4552++#include "huf.h"
4553++#include "mem.h"
4554++
4555++/*=== Version ===*/
4556++unsigned INIT FSE_versionNumber(void) { return FSE_VERSION_NUMBER; }
4557++
4558++/*=== Error Management ===*/
4559++unsigned INIT FSE_isError(size_t code) { return ERR_isError(code); }
4560++
4561++unsigned INIT HUF_isError(size_t code) { return ERR_isError(code); }
4562++
4563++/*-**************************************************************
4564++* FSE NCount encoding-decoding
4565++****************************************************************/
4566++size_t INIT FSE_readNCount(short *normalizedCounter, unsigned *maxSVPtr, unsigned *tableLogPtr, const void *headerBuffer, size_t hbSize)
4567++{
4568++ const BYTE *const istart = (const BYTE *)headerBuffer;
4569++ const BYTE *const iend = istart + hbSize;
4570++ const BYTE *ip = istart;
4571++ int nbBits;
4572++ int remaining;
4573++ int threshold;
4574++ U32 bitStream;
4575++ int bitCount;
4576++ unsigned charnum = 0;
4577++ int previous0 = 0;
4578++
4579++ if (hbSize < 4)
4580++ return ERROR(srcSize_wrong);
4581++ bitStream = ZSTD_readLE32(ip);
4582++ nbBits = (bitStream & 0xF) + FSE_MIN_TABLELOG; /* extract tableLog */
4583++ if (nbBits > FSE_TABLELOG_ABSOLUTE_MAX)
4584++ return ERROR(tableLog_tooLarge);
4585++ bitStream >>= 4;
4586++ bitCount = 4;
4587++ *tableLogPtr = nbBits;
4588++ remaining = (1 << nbBits) + 1;
4589++ threshold = 1 << nbBits;
4590++ nbBits++;
4591++
4592++ while ((remaining > 1) & (charnum <= *maxSVPtr)) {
4593++ if (previous0) {
4594++ unsigned n0 = charnum;
4595++ while ((bitStream & 0xFFFF) == 0xFFFF) {
4596++ n0 += 24;
4597++ if (ip < iend - 5) {
4598++ ip += 2;
4599++ bitStream = ZSTD_readLE32(ip) >> bitCount;
4600++ } else {
4601++ bitStream >>= 16;
4602++ bitCount += 16;
4603++ }
4604++ }
4605++ while ((bitStream & 3) == 3) {
4606++ n0 += 3;
4607++ bitStream >>= 2;
4608++ bitCount += 2;
4609++ }
4610++ n0 += bitStream & 3;
4611++ bitCount += 2;
4612++ if (n0 > *maxSVPtr)
4613++ return ERROR(maxSymbolValue_tooSmall);
4614++ while (charnum < n0)
4615++ normalizedCounter[charnum++] = 0;
4616++ if ((ip <= iend - 7) || (ip + (bitCount >> 3) <= iend - 4)) {
4617++ ip += bitCount >> 3;
4618++ bitCount &= 7;
4619++ bitStream = ZSTD_readLE32(ip) >> bitCount;
4620++ } else {
4621++ bitStream >>= 2;
4622++ }
4623++ }
4624++ {
4625++ int const max = (2 * threshold - 1) - remaining;
4626++ int count;
4627++
4628++ if ((bitStream & (threshold - 1)) < (U32)max) {
4629++ count = bitStream & (threshold - 1);
4630++ bitCount += nbBits - 1;
4631++ } else {
4632++ count = bitStream & (2 * threshold - 1);
4633++ if (count >= threshold)
4634++ count -= max;
4635++ bitCount += nbBits;
4636++ }
4637++
4638++ count--; /* extra accuracy */
4639++ remaining -= count < 0 ? -count : count; /* -1 means +1 */
4640++ normalizedCounter[charnum++] = (short)count;
4641++ previous0 = !count;
4642++ while (remaining < threshold) {
4643++ nbBits--;
4644++ threshold >>= 1;
4645++ }
4646++
4647++ if ((ip <= iend - 7) || (ip + (bitCount >> 3) <= iend - 4)) {
4648++ ip += bitCount >> 3;
4649++ bitCount &= 7;
4650++ } else {
4651++ bitCount -= (int)(8 * (iend - 4 - ip));
4652++ ip = iend - 4;
4653++ }
4654++ bitStream = ZSTD_readLE32(ip) >> (bitCount & 31);
4655++ }
4656++ } /* while ((remaining>1) & (charnum<=*maxSVPtr)) */
4657++ if (remaining != 1)
4658++ return ERROR(corruption_detected);
4659++ if (bitCount > 32)
4660++ return ERROR(corruption_detected);
4661++ *maxSVPtr = charnum - 1;
4662++
4663++ ip += (bitCount + 7) >> 3;
4664++ return ip - istart;
4665++}
4666++
4667++/*! HUF_readStats() :
4668++ Read compact Huffman tree, saved by HUF_writeCTable().
4669++ `huffWeight` is destination buffer.
4670++ `rankStats` is assumed to be a table of at least HUF_TABLELOG_MAX U32.
4671++ @return : size read from `src` , or an error Code .
4672++ Note : Needed by HUF_readCTable() and HUF_readDTableX?() .
4673++*/
4674++size_t INIT HUF_readStats_wksp(BYTE *huffWeight, size_t hwSize, U32 *rankStats, U32 *nbSymbolsPtr, U32 *tableLogPtr, const void *src, size_t srcSize, void *workspace, size_t workspaceSize)
4675++{
4676++ U32 weightTotal;
4677++ const BYTE *ip = (const BYTE *)src;
4678++ size_t iSize;
4679++ size_t oSize;
4680++
4681++ if (!srcSize)
4682++ return ERROR(srcSize_wrong);
4683++ iSize = ip[0];
4684++ /* memset(huffWeight, 0, hwSize); */ /* is not necessary, even though some analyzer complain ... */
4685++
4686++ if (iSize >= 128) { /* special header */
4687++ oSize = iSize - 127;
4688++ iSize = ((oSize + 1) / 2);
4689++ if (iSize + 1 > srcSize)
4690++ return ERROR(srcSize_wrong);
4691++ if (oSize >= hwSize)
4692++ return ERROR(corruption_detected);
4693++ ip += 1;
4694++ {
4695++ U32 n;
4696++ for (n = 0; n < oSize; n += 2) {
4697++ huffWeight[n] = ip[n / 2] >> 4;
4698++ huffWeight[n + 1] = ip[n / 2] & 15;
4699++ }
4700++ }
4701++ } else { /* header compressed with FSE (normal case) */
4702++ if (iSize + 1 > srcSize)
4703++ return ERROR(srcSize_wrong);
4704++ oSize = FSE_decompress_wksp(huffWeight, hwSize - 1, ip + 1, iSize, 6, workspace, workspaceSize); /* max (hwSize-1) values decoded, as last one is implied */
4705++ if (FSE_isError(oSize))
4706++ return oSize;
4707++ }
4708++
4709++ /* collect weight stats */
4710++ memset(rankStats, 0, (HUF_TABLELOG_MAX + 1) * sizeof(U32));
4711++ weightTotal = 0;
4712++ {
4713++ U32 n;
4714++ for (n = 0; n < oSize; n++) {
4715++ if (huffWeight[n] >= HUF_TABLELOG_MAX)
4716++ return ERROR(corruption_detected);
4717++ rankStats[huffWeight[n]]++;
4718++ weightTotal += (1 << huffWeight[n]) >> 1;
4719++ }
4720++ }
4721++ if (weightTotal == 0)
4722++ return ERROR(corruption_detected);
4723++
4724++ /* get last non-null symbol weight (implied, total must be 2^n) */
4725++ {
4726++ U32 const tableLog = BIT_highbit32(weightTotal) + 1;
4727++ if (tableLog > HUF_TABLELOG_MAX)
4728++ return ERROR(corruption_detected);
4729++ *tableLogPtr = tableLog;
4730++ /* determine last weight */
4731++ {
4732++ U32 const total = 1 << tableLog;
4733++ U32 const rest = total - weightTotal;
4734++ U32 const verif = 1 << BIT_highbit32(rest);
4735++ U32 const lastWeight = BIT_highbit32(rest) + 1;
4736++ if (verif != rest)
4737++ return ERROR(corruption_detected); /* last value must be a clean power of 2 */
4738++ huffWeight[oSize] = (BYTE)lastWeight;
4739++ rankStats[lastWeight]++;
4740++ }
4741++ }
4742++
4743++ /* check tree construction validity */
4744++ if ((rankStats[1] < 2) || (rankStats[1] & 1))
4745++ return ERROR(corruption_detected); /* by construction : at least 2 elts of rank 1, must be even */
4746++
4747++ /* results */
4748++ *nbSymbolsPtr = (U32)(oSize + 1);
4749++ return iSize + 1;
4750++}
4751+diff --git a/xen/common/zstd/error_private.h b/xen/common/zstd/error_private.h
4752+new file mode 100644
4753+index 000000000000..d07bf3cb9b55
4754+--- /dev/null
4755++++ b/xen/common/zstd/error_private.h
4756+@@ -0,0 +1,110 @@
4757++/**
4758++ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
4759++ * All rights reserved.
4760++ *
4761++ * This source code is licensed under the BSD-style license found in the
4762++ * LICENSE file in the root directory of https://github.com/facebook/zstd.
4763++ * An additional grant of patent rights can be found in the PATENTS file in the
4764++ * same directory.
4765++ *
4766++ * This program is free software; you can redistribute it and/or modify it under
4767++ * the terms of the GNU General Public License version 2 as published by the
4768++ * Free Software Foundation. This program is dual-licensed; you may select
4769++ * either version 2 of the GNU General Public License ("GPL") or BSD license
4770++ * ("BSD").
4771++ */
4772++
4773++/* Note : this module is expected to remain private, do not expose it */
4774++
4775++#ifndef ERROR_H_MODULE
4776++#define ERROR_H_MODULE
4777++
4778++/* ****************************************
4779++* Dependencies
4780++******************************************/
4781++#include <xen/types.h> /* size_t */
4782++
4783++/**
4784++ * enum ZSTD_ErrorCode - zstd error codes
4785++ *
4786++ * Functions that return size_t can be checked for errors using ZSTD_isError()
4787++ * and the ZSTD_ErrorCode can be extracted using ZSTD_getErrorCode().
4788++ */
4789++typedef enum {
4790++ ZSTD_error_no_error,
4791++ ZSTD_error_GENERIC,
4792++ ZSTD_error_prefix_unknown,
4793++ ZSTD_error_version_unsupported,
4794++ ZSTD_error_parameter_unknown,
4795++ ZSTD_error_frameParameter_unsupported,
4796++ ZSTD_error_frameParameter_unsupportedBy32bits,
4797++ ZSTD_error_frameParameter_windowTooLarge,
4798++ ZSTD_error_compressionParameter_unsupported,
4799++ ZSTD_error_init_missing,
4800++ ZSTD_error_memory_allocation,
4801++ ZSTD_error_stage_wrong,
4802++ ZSTD_error_dstSize_tooSmall,
4803++ ZSTD_error_srcSize_wrong,
4804++ ZSTD_error_corruption_detected,
4805++ ZSTD_error_checksum_wrong,
4806++ ZSTD_error_tableLog_tooLarge,
4807++ ZSTD_error_maxSymbolValue_tooLarge,
4808++ ZSTD_error_maxSymbolValue_tooSmall,
4809++ ZSTD_error_dictionary_corrupted,
4810++ ZSTD_error_dictionary_wrong,
4811++ ZSTD_error_dictionaryCreation_failed,
4812++ ZSTD_error_maxCode
4813++} ZSTD_ErrorCode;
4814++
4815++/* ****************************************
4816++* Compiler-specific
4817++******************************************/
4818++#define ERR_STATIC static __attribute__((unused))
4819++
4820++/*-****************************************
4821++* Customization (error_public.h)
4822++******************************************/
4823++typedef ZSTD_ErrorCode ERR_enum;
4824++#define PREFIX(name) ZSTD_error_##name
4825++
4826++/*-****************************************
4827++* Error codes handling
4828++******************************************/
4829++#define ERROR(name) ((size_t)-PREFIX(name))
4830++
4831++ERR_STATIC unsigned INIT ERR_isError(size_t code) { return (code > ERROR(maxCode)); }
4832++
4833++ERR_STATIC ERR_enum INIT ERR_getErrorCode(size_t code)
4834++{
4835++ if (!ERR_isError(code))
4836++ return (ERR_enum)0;
4837++ return (ERR_enum)(0 - code);
4838++}
4839++
4840++/**
4841++ * ZSTD_isError() - tells if a size_t function result is an error code
4842++ * @code: The function result to check for error.
4843++ *
4844++ * Return: Non-zero iff the code is an error.
4845++ */
4846++static __attribute__((unused)) unsigned int INIT ZSTD_isError(size_t code)
4847++{
4848++ return code > (size_t)-ZSTD_error_maxCode;
4849++}
4850++
4851++/**
4852++ * ZSTD_getErrorCode() - translates an error function result to a ZSTD_ErrorCode
4853++ * @functionResult: The result of a function for which ZSTD_isError() is true.
4854++ *
4855++ * Return: The ZSTD_ErrorCode corresponding to the functionResult or 0
4856++ * if the functionResult isn't an error.
4857++ */
4858++static __attribute__((unused)) ZSTD_ErrorCode INIT ZSTD_getErrorCode(
4859++ size_t functionResult)
4860++{
4861++ if (!ZSTD_isError(functionResult))
4862++ return (ZSTD_ErrorCode)0;
4863++ return (ZSTD_ErrorCode)(0 - functionResult);
4864++}
4865++
4866++#endif /* ERROR_H_MODULE */
4867+diff --git a/xen/common/zstd/fse.h b/xen/common/zstd/fse.h
4868+new file mode 100644
4869+index 000000000000..b86717c34d0f
4870+--- /dev/null
4871++++ b/xen/common/zstd/fse.h
4872+@@ -0,0 +1,575 @@
4873++/*
4874++ * FSE : Finite State Entropy codec
4875++ * Public Prototypes declaration
4876++ * Copyright (C) 2013-2016, Yann Collet.
4877++ *
4878++ * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
4879++ *
4880++ * Redistribution and use in source and binary forms, with or without
4881++ * modification, are permitted provided that the following conditions are
4882++ * met:
4883++ *
4884++ * * Redistributions of source code must retain the above copyright
4885++ * notice, this list of conditions and the following disclaimer.
4886++ * * Redistributions in binary form must reproduce the above
4887++ * copyright notice, this list of conditions and the following disclaimer
4888++ * in the documentation and/or other materials provided with the
4889++ * distribution.
4890++ *
4891++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
4892++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
4893++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
4894++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
4895++ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
4896++ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
4897++ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
4898++ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
4899++ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4900++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
4901++ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4902++ *
4903++ * This program is free software; you can redistribute it and/or modify it under
4904++ * the terms of the GNU General Public License version 2 as published by the
4905++ * Free Software Foundation. This program is dual-licensed; you may select
4906++ * either version 2 of the GNU General Public License ("GPL") or BSD license
4907++ * ("BSD").
4908++ *
4909++ * You can contact the author at :
4910++ * - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
4911++ */
4912++#ifndef FSE_H
4913++#define FSE_H
4914++
4915++/*-*****************************************
4916++* Dependencies
4917++******************************************/
4918++#include <xen/types.h> /* size_t, ptrdiff_t */
4919++
4920++/*-*****************************************
4921++* FSE_PUBLIC_API : control library symbols visibility
4922++******************************************/
4923++#define FSE_PUBLIC_API
4924++
4925++/*------ Version ------*/
4926++#define FSE_VERSION_MAJOR 0
4927++#define FSE_VERSION_MINOR 9
4928++#define FSE_VERSION_RELEASE 0
4929++
4930++#define FSE_LIB_VERSION FSE_VERSION_MAJOR.FSE_VERSION_MINOR.FSE_VERSION_RELEASE
4931++#define FSE_QUOTE(str) #str
4932++#define FSE_EXPAND_AND_QUOTE(str) FSE_QUOTE(str)
4933++#define FSE_VERSION_STRING FSE_EXPAND_AND_QUOTE(FSE_LIB_VERSION)
4934++
4935++#define FSE_VERSION_NUMBER (FSE_VERSION_MAJOR * 100 * 100 + FSE_VERSION_MINOR * 100 + FSE_VERSION_RELEASE)
4936++FSE_PUBLIC_API unsigned FSE_versionNumber(void); /**< library version number; to be used when checking dll version */
4937++
4938++/*-*****************************************
4939++* Tool functions
4940++******************************************/
4941++FSE_PUBLIC_API size_t FSE_compressBound(size_t size); /* maximum compressed size */
4942++
4943++/* Error Management */
4944++FSE_PUBLIC_API unsigned FSE_isError(size_t code); /* tells if a return value is an error code */
4945++
4946++/*-*****************************************
4947++* FSE detailed API
4948++******************************************/
4949++/*!
4950++FSE_compress() does the following:
4951++1. count symbol occurrence from source[] into table count[]
4952++2. normalize counters so that sum(count[]) == Power_of_2 (2^tableLog)
4953++3. save normalized counters to memory buffer using writeNCount()
4954++4. build encoding table 'CTable' from normalized counters
4955++5. encode the data stream using encoding table 'CTable'
4956++
4957++FSE_decompress() does the following:
4958++1. read normalized counters with readNCount()
4959++2. build decoding table 'DTable' from normalized counters
4960++3. decode the data stream using decoding table 'DTable'
4961++
4962++The following API allows targeting specific sub-functions for advanced tasks.
4963++For example, it's possible to compress several blocks using the same 'CTable',
4964++or to save and provide normalized distribution using external method.
4965++*/
4966++
4967++/* *** COMPRESSION *** */
4968++/*! FSE_optimalTableLog():
4969++ dynamically downsize 'tableLog' when conditions are met.
4970++ It saves CPU time, by using smaller tables, while preserving or even improving compression ratio.
4971++ @return : recommended tableLog (necessarily <= 'maxTableLog') */
4972++FSE_PUBLIC_API unsigned FSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue);
4973++
4974++/*! FSE_normalizeCount():
4975++ normalize counts so that sum(count[]) == Power_of_2 (2^tableLog)
4976++ 'normalizedCounter' is a table of short, of minimum size (maxSymbolValue+1).
4977++ @return : tableLog,
4978++ or an errorCode, which can be tested using FSE_isError() */
4979++FSE_PUBLIC_API size_t FSE_normalizeCount(short *normalizedCounter, unsigned tableLog, const unsigned *count, size_t srcSize, unsigned maxSymbolValue);
4980++
4981++/*! FSE_NCountWriteBound():
4982++ Provides the maximum possible size of an FSE normalized table, given 'maxSymbolValue' and 'tableLog'.
4983++ Typically useful for allocation purpose. */
4984++FSE_PUBLIC_API size_t FSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog);
4985++
4986++/*! FSE_writeNCount():
4987++ Compactly save 'normalizedCounter' into 'buffer'.
4988++ @return : size of the compressed table,
4989++ or an errorCode, which can be tested using FSE_isError(). */
4990++FSE_PUBLIC_API size_t FSE_writeNCount(void *buffer, size_t bufferSize, const short *normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
4991++
4992++/*! Constructor and Destructor of FSE_CTable.
4993++ Note that FSE_CTable size depends on 'tableLog' and 'maxSymbolValue' */
4994++typedef unsigned FSE_CTable; /* don't allocate that. It's only meant to be more restrictive than void* */
4995++
4996++/*! FSE_compress_usingCTable():
4997++ Compress `src` using `ct` into `dst` which must be already allocated.
4998++ @return : size of compressed data (<= `dstCapacity`),
4999++ or 0 if compressed data could not fit into `dst`,
5000++ or an errorCode, which can be tested using FSE_isError() */
The diff has been truncated for viewing.
