# Compression Followup

# Keyboard Navigation

## Global Keys

[, < / ], > Jump to previous / next episode

W, K, P / S, J, N Jump to previous / next timestamp

t / T Toggle theatre / SUPERtheatre mode

V Revert filter to original state

Y Select link (requires manual Ctrl-c)

## Menu toggling

q Quotes
r References
f Filter
y Link
c Credits

## In-Menu and Index Controls

a, w, s, d / h, j, k, l / ←, ↑, ↓, → Navigate

Esc Close menu / unfocus timestamp

## Quotes and References Menus and Index

Enter Jump to timestamp

## Quotes, References and Credits Menus

o Open URL (in new tab)

## Filter Menu

x, Space Toggle category and focus next

X, Shift-Space Toggle category and focus previous

v Invert topics / media as per focus

## Filter and Link Menus

z Toggle filter / linking mode

## Credits Menu

Enter Open URL (in new tab)

Previous: 'Compression'

0:02 : Introducing Charles

1:18 : Origins in data compression as a hobby before working on 3D-on-the-web at Eclipse

3:11 : What happened between Eclipse and RAD?

3:18 : Working on 3D rendering in the game industry

3:56 : How do you think about compression?

4:36 : Compressors: Theoretical and Algorithmic aspects

5:44 : What do you mean by the "theoretical model"?

5:52 : Theoretical model of compressors

6:51 : Could you expand on this idea that "-log2(P) is the correct bit-length"?

7:00 : Variable code-length assignment, and Kraft inequality^{1}
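
For reference, the two ideas raised at 6:51 and 7:00 can be stated compactly (my notation, not a transcript of the episode):

```latex
% Ideal code length (6:51): a symbol of probability P should be
% coded in
\ell(P) = -\log_2 P \ \text{bits}

% Kraft inequality (7:00): a prefix-free code with lengths
% \ell_1, \dots, \ell_n exists if and only if
\sum_{i=1}^{n} 2^{-\ell_i} \le 1

% Example: P = (1/2, 1/4, 1/4) gives lengths (1, 2, 2), and
% 2^{-1} + 2^{-2} + 2^{-2} = 1, so the budget is used exactly.
```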

8:32 : Fractional bit-lengths?

8:49 : Optimal vs minimally biased, wasteful practical entropy coder

9:15 : So code-lengths must sum up to 1 bit?

9:48 : If one symbol is assigned a 0.5-bit code, the other must get 2 bits
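
The 9:48 claim can be checked directly against the Kraft budget; a sketch of the arithmetic for a two-symbol alphabet:

```latex
% A 0.5-bit code word consumes 2^{-1/2} \approx 0.707 of the Kraft
% budget, leaving at most 1 - 2^{-1/2} \approx 0.293 for the other
% symbol, whose length must therefore satisfy
\ell \ge -\log_2\left(1 - 2^{-1/2}\right) \approx 1.77 \ \text{bits},
% which rounds up to 2 bits for an integer-length code.
```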

10:42 : So you break it down as Model and Coder?

11:23 : Theoretical compression model, reducing bits based on assumptions and knowledge

13:51 : Why the split between prediction and entropy encoding?

15:20 : The theoretical possibility of doing without an entropy encoder

17:57 : Enumerating every possible image

18:26 : Incremental estimation

19:46 : Expressing probability based on previous symbols

21:06 : Word estimation, e.g. "t", "h", "e"

21:53 : But you'd still need arithmetic encoding for fractional code-lengths?

22:00 : You don't need fractional code-lengths once you've buffered up the whole file

22:36 : Understanding code-length assignment and storage

24:20 : Order-0 data
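
To make "order-0" concrete (each symbol modelled by its frequency alone, independent of its neighbours), here is a small sketch that measures the order-0 information content of a message; the function name is my own:

```python
from collections import Counter
from math import log2

def order0_entropy_bits(data: bytes) -> float:
    """Total of -log2(P(symbol)) over the message, where P comes from
    the order-0 (context-free) symbol frequencies of the message itself."""
    counts = Counter(data)
    n = len(data)
    return sum(-c * log2(c / n) for c in counts.values())

# b"aaab": P(a) = 3/4, P(b) = 1/4, so the total is
# 3 * 0.415 + 1 * 2.0, roughly 3.245 bits.
```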

26:18 : Encoding choices: 1) Enumerative coding^{2}

27:55 : Encoding choices: 2) Huffman coding^{3}
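
A minimal sketch of the Huffman construction mentioned at 27:55 (my own illustration, not code from the episode; `huffman_code_lengths` is an invented name): repeatedly merge the two least frequent subtrees, deepening every symbol inside them by one bit.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs: dict) -> dict:
    """Return an optimal prefix-code length per symbol via Huffman's
    algorithm: merge the two least frequent subtrees until one remains."""
    # Heap entries: (frequency, tiebreak, {symbol: depth_so_far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Example: huffman_code_lengths(Counter("abracadabra"))
# gives 'a' a 1-bit code and the four rarer symbols 3-bit codes.
```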

28:22 : Encoding choices: 3) Run-length, Golomb coding^{4}

29:17 : Does the theory suggest the Model / Entropy split?

30:01 : It's a way to express the problem clearly

30:56 : What's a more heuristic data compressor?

31:08 : LZ77^{5} and block sort^{6} are models that we have difficulty understanding as such
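
To make the LZ77^{5} discussion concrete, a toy greedy parse (my own sketch, under the assumption of a simple sliding window): each step either copies a run from already-seen data or emits a literal.

```python
def lz77_parse(data: str, window: int = 32) -> list:
    """Greedy LZ77 parse: emit ('match', offset, length) for the longest
    copy from the already-seen window, or ('literal', symbol) otherwise."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # Matches may overlap the current position (classic LZ77 RLE trick).
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 2:
            out.append(("match", best_off, best_len))
            i += best_len
        else:
            out.append(("literal", data[i]))
            i += 1
    return out
```

Note how the parse *is* a model in disguise: the choice of matches implicitly assigns high probability to recently seen strings, which is what makes it hard to discuss in the clean model/coder vocabulary.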

31:49 : So entropy coding seems to be much better solved than data modelling?

32:46 : Asymmetric numeral systems (ANS)^{7}

37:35 : Yann Collet's inspirational implementation of ANS

38:43 : ANS mathematics suits computer instructions

38:57 : ANS as a LIFO coder^{8}

40:03 : Why is the flush-order so important?

40:57 : Stable code-word size in rANS
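
To make the LIFO behaviour discussed from 38:57 onward concrete, here is a toy rANS coder with an unbounded-integer state (my own sketch: no streaming or renormalisation, so it sidesteps the flush-order questions above). The decoder pops symbols in the reverse of encode order.

```python
def _cumulative_starts(freqs: dict) -> dict:
    """Start of each symbol's slot range within [0, total)."""
    starts, acc = {}, 0
    for s, f in freqs.items():
        starts[s], acc = acc, acc + f
    return starts

def rans_encode(msg: str, freqs: dict) -> int:
    total, starts = sum(freqs.values()), _cumulative_starts(freqs)
    x = 1  # initial state
    for s in msg:
        f = freqs[s]
        x = (x // f) * total + starts[s] + (x % f)  # push symbol onto state
    return x

def rans_decode(x: int, n: int, freqs: dict) -> str:
    total, starts = sum(freqs.values()), _cumulative_starts(freqs)
    out = []
    for _ in range(n):
        slot = x % total  # the state's low "digit" identifies the symbol
        s = next(sym for sym, st in starts.items()
                 if st <= slot < st + freqs[sym])
        out.append(s)
        x = freqs[s] * (x // total) + slot - starts[s]  # pop it back off
    return "".join(out)

# LIFO: the last symbol encoded is the first decoded, so
# rans_decode(rans_encode("abac", f), 4, f) returns "caba".
```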

43:15 : How do you flush, if you don't know the lower bits?

43:57 : Encoder / decoder lockstep streaming in ANS

46:05 : Streaming ANS tries to approximate, without size or time constraints, a whole-file non-flushed arithmetic encode

46:55 : Why do the ANS encoder and decoder run in opposite directions?

48:20 : ANS encoder / decoder streaming

50:32 : Shouldn't all arithmetic entropy coders work like this?

51:00 : Flushing from the top vs the bottom

51:27 : Arithmetic encoder / decoder streaming

55:35 : Why does ANS flush from the low bits?

57:01 : Flushing the low bits as damage limitation

1:00:12 : Why do we care about incremental compression?

1:01:54 : Incremental compression for localised accuracy and efficient memory use

1:06:22 : Block-based Oodle^{9} compression

1:08:46 : Why haven't we moved past Huffman, arithmetic and LZ compressors?

1:11:08 : Good algorithms

1:16:01 : LZ delineates symbols using previously seen ones, but arithmetic encoders do not. Why don't we have encoders that work with both?

1:19:13 : LZ and Brotli symbol delineation, and dictionary completion

1:30:13 : Could you help distinguish LZ, as a modelling encoder, from arithmetic encoders?

1:34:02 : Turning a compressor into a model

1:36:25 : A 1-guess model, with a "correct" symbol and combined "incorrect and literal" symbol

1:37:29 : 1-guess combined alphabet

1:38:29 : Imagining a fractional-bit LZ compressor

1:40:37 : Reducing redundancy of LZ takes work

1:45:38 : Thank you, Charles^{10}

Next: 'Asset Systems and Scalability'