Compression Followup
0:02 Casey Muratori: Introducing Charles
1:18 Charles Bloom: Origins in data compression as a hobby before working on 3D-on-the-web at Eclipse
3:11 CM: What happened between Eclipse and RAD?
3:18 CB: Working on 3D rendering in the game industry
3:56 CM: How do you think about compression?
4:36 CB: Compressors: theoretical and algorithmic aspects
5:44 CM: What do you mean by the "theoretical model"?
5:52 CB: Theoretical model of compressors
6:51 CM: Could you expand on this idea that "-log2(P) is the correct bit-length"?
7:00 CB: Variable code-length assignment, and Kraft inequality [1]
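The ideas at 6:51 and 7:00 can be sketched in a few lines: the ideal code length for a symbol of probability P is -log2(P) bits, and any prefix-free assignment of integer lengths must satisfy the Kraft inequality. The alphabet and probabilities below are invented for illustration.

```python
import math

# Illustrative sketch: -log2(p) is the ideal bit-length for each symbol,
# and a prefix code with integer lengths len_i exists iff
# sum(2 ** -len_i) <= 1 (the Kraft inequality).
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

ideal = {s: -math.log2(p) for s, p in probs.items()}  # a: 1.0, b: 2.0, c: 3.0, d: 3.0

lengths = {s: math.ceil(b) for s, b in ideal.items()}
kraft_sum = sum(2 ** -l for l in lengths.values())
assert kraft_sum <= 1  # so a prefix code with these lengths exists

# The 9:48 point: in a two-symbol alphabet, giving one symbol a 0.5-bit
# code leaves 1 - 2**-0.5 ~= 0.293 of the Kraft budget, forcing the
# other symbol's integer length up to 2 bits (2**-2 = 0.25 <= 0.293).
```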
8:32 CM: Fractional bit-lengths?
8:49 CB: Optimal vs minimally biased, wasteful practical entropy coder
9:15 CM: So code-lengths must sum up to 1 bit?
9:48 CB: If one bit is assigned a 0.5-bit code, the other must get 2 bits
10:42 CM: So you break it down as Model and Coder?
11:23 CB: Theoretical compression model, reducing bits based on assumptions and knowledge
13:51 CM: Why the split between prediction and entropy encoding?
15:20 CB: The theoretical possibility of doing without an entropy encoder
17:57 CM: Enumerating every possible image
18:26 CB: Incremental estimation
19:46 CB: Expressing probability based on previous symbols
21:06 CB: Word estimation, e.g. "t", "h", "e"
21:53 CM: But you'd still need arithmetic encoding for fractional code-lengths?
22:00 CB: You don't need fractional code-lengths once you've buffered up the whole file
22:36 CM: Understanding code-length assignment and storage
24:20 CB: Order-0 data
26:18 CB: Encoding choices: 1) Enumerative coding [2]
27:55 CB: Encoding choices: 2) Huffman coding [3]
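The Huffman option mentioned at 27:55 can be illustrated with a minimal construction (a sketch, not any particular production implementation): repeatedly merge the two least-probable nodes; each symbol's code length is its final depth in the tree. The frequencies below are invented.

```python
import heapq

# Minimal Huffman sketch: track only {symbol: depth} per tree, since
# code lengths (not the codes themselves) are what entropy cares about.
def huffman_lengths(freqs):
    # Heap entries: (weight, tiebreak, {symbol: depth}); the unique
    # tiebreak keeps the dicts from ever being compared.
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {next(iter(freqs)): 1}
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_lengths({"a": 4, "b": 2, "c": 1, "d": 1})
# Dyadic frequencies, so lengths match -log2(p) exactly: a:1, b:2, c:3, d:3
```

With non-dyadic probabilities the integer lengths deviate from -log2(p), which is exactly the waste the fractional-bit discussion (8:32) is about.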
28:22 CB: Encoding choices: 3) Run-length, Golomb coding [4]
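The run-length/Golomb option at 28:22 can be sketched with the power-of-two special case (Rice coding): the quotient goes out in unary, the remainder in k fixed bits. This suits run lengths drawn from a roughly geometric distribution. The bit-string representation here is purely illustrative.

```python
# Rice coding sketch (Golomb with parameter m = 2**k):
# unary quotient, then k-bit binary remainder.
def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1 : q + 1 + k], 2)  # skip the terminating 0
    return (q << k) | r

# 13 with k=2: quotient 3 ("111"), stop bit "0", remainder 1 ("01")
assert rice_encode(13, 2) == "111001"
assert rice_decode("111001", 2) == 13
```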
29:17 CM: Does the theory suggest the Model / Entropy split?
30:01 CB: It's a way to express the problem clearly
30:56 CM: What's a more heuristic data compressor?
31:08 CB: LZ77 [5] and block sort [6] are models that are difficult to recognise as such
31:49 CM: So entropy coding seems to be much better solved than data modelling?
32:46 CB: Asymmetric numeral systems (ANS) [7]
37:35 CB: Yann Collet's inspirational implementation of ANS
38:43 CM: ANS mathematics suits computer instructions
38:57 CB: ANS as a LIFO coder [8]
40:03 CM: Why is the flush-order so important?
40:57 CB: Stable code-word size in rANS
43:15 CM: How do you flush, if you don't know the lower bits?
43:57 CB: Encoder / decoder lockstep streaming in ANS
46:05 CB: Streaming ANS tries to approximate, without size or time constraints, a whole-file non-flushed arithmetic encode
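The LIFO behaviour discussed from 38:57 onward can be shown with a toy rANS coder (a sketch with an unbounded-integer state and no streaming renormalization, which is what the real discussion of flushing and stable code-word sizes is about). Encoding pushes symbols onto the state like a stack, so the decoder pops them back in reverse. The alphabet and frequencies are invented.

```python
# Toy rANS sketch: frequencies sum to M, state is an unbounded int.
M = 8                               # total frequency (eighths)
freq  = {"a": 4, "b": 2, "c": 2}    # per-symbol frequency
start = {"a": 0, "b": 4, "c": 6}    # cumulative frequency

def encode(symbols):
    x = 0
    for s in symbols:
        f, c = freq[s], start[s]
        x = (x // f) * M + c + (x % f)   # push s onto the state
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        slot = x % M                     # which frequency slot are we in?
        s = next(s for s in freq if start[s] <= slot < start[s] + freq[s])
        x = freq[s] * (x // M) + slot - start[s]   # pop s off the state
        out.append(s)
    return out   # symbols come back in reverse encode order (LIFO)

msg = ["a", "b", "a", "c"]
assert decode(encode(msg), len(msg)) == msg[::-1]
```

A production coder would keep the state in a machine word and flush low bits to the output stream as the state grows, which is where the flush-order questions above come in.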
46:55 CM: Why do the ANS encoder and decoder run in opposite directions?
48:20 CB: ANS encoder / decoder streaming
50:32 CM: Shouldn't all arithmetic entropy coders work like this?
51:00 CB: Flushing from the top vs the bottom
51:27 CB: Arithmetic encoder / decoder streaming
55:35 CM: Why does ANS flush from the low bits?
57:01 CB: Flushing the low bits as damage limitation
1:00:12 CM: Why do we care about incremental compression?
1:01:54 CB: Incremental compression for localised accuracy and efficient memory use
1:06:22 CB: Block-based Oodle [9] compression
1:08:46 CM: Why haven't we moved past Huffman, arithmetic and LZ compressors?
1:11:08 CB: Good algorithms
1:16:01 CM: LZ delineates symbols using previously seen ones, but arithmetic encoders do not. Why don't we have encoders that work with both?
1:19:13 CB: LZ and Brotli symbol delineation, and dictionary completion
1:30:13 CM: Could you help distinguish LZ, as a modelling encoder, from arithmetic encoders?
1:34:02 CB: Turning a compressor into a model
1:36:25 CB: A 1-guess model, with a "correct" symbol and combined "incorrect and literal" symbol
1:37:29 CB: 1-guess combined alphabet
1:38:29 CM: Imagining a fractional-bit LZ compressor
1:40:37 CB: Reducing the redundancy of LZ takes work
1:45:38 CM: Thank you, Charles [10]