Welcome to 2016
0:04 Casey Muratori: Introducing Jeff and Fabian from RAD Game Tools
1:36 Jeff Roberts: RAD's first product, pack.exe
3:10 Fabian Giesen: Compressing known data (i.e. fudging the numbers)
3:25 JR: Compression zero-point energy
3:57 FG: Perpetual motion machines
4:10 JR: Recursive compression
4:17 FG: More on compressing known data (i.e. fudging the numbers)
5:18 CM: Fudged compression discussions
5:28 JR: Preventing data leakage and cheating
6:00 CM: Pure form of compression (the compressor size is included in the measurement)
6:21 FG: Cheating compression by exploiting channels, e.g. NTFS file streams, UDP packet length and zero-padding
8:27 CM: Cheating, but back to the background
8:56 JR: RAD Game Tools' origins as a hardware and software company, and data compression as pure and enthralling
10:42 CM: How did you, Fabian, get interested in compression?
10:56 FG: Discovering theories of data compression and Huffman coding [1]
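The Huffman coding mentioned at 10:56 is the standard prefix-code construction. As a quick illustration (a minimal sketch of the textbook algorithm, not anything from RAD's actual codecs), here is one way to build a code table from symbol frequencies in Python:

```python
# Minimal Huffman code construction from symbol frequencies.
# Illustrative only; real decoders use table-driven, canonical codes.
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    freq = Counter(data)
    # Heap items: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate case: one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        # Merging prepends one bit to every code in each subtree.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code(b"abracadabra")
print(codes)  # byte value -> bit string; the most frequent byte, ord('a'), gets the shortest code
```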
11:44 CM: What is the way to think about the compression family?
13:02 JR: What compression is about: Modelling (or Prediction, from David Stafford) and Residual Cleanup
15:23 CM: Modelling the possible input before fixing up to match the actual input
16:05 JR: Multiple prediction passes
16:33 CM: Lossy vs Lossless compression?
17:12 FG: Lossy compression as an early-stopping scheme
20:35 CM: Defining error
20:49 FG: Networking errors, and lossless as general-purpose
22:21 CM: What is your mental model of compression?
22:35 FG: Compression as modelling
23:31 JR: Measuring compression performance
24:17 FG: Undecidable minimum file size
24:39 CM: You're asking "how small can this set of things get?"
25:04 FG: Deterministic but impractical optimal LZ parse
26:12 CM: "How small in general?" is impossible to answer
26:41 FG: Encoding the compressor in the general space, and Kolmogorov complexity [2]
28:19 FG: Feature trade-offs
29:26 CM: The decoder is the description, and compressor complexity may prevent optimal solutions
30:12 FG: The decoder is the specification, e.g. MPEG-1 and MPEG-2
32:03 JR: File size variance, and optimisable routines
33:41 FG: Bink 2 format dictated by Xbox 360
33:57 CM: No sponsors
34:14 CM: Modelling vs Statistical?
34:48 JR: Modelling vs Statistical
35:43 CM: Internalising the idea of prediction
37:11 FG: Modelling examples: 1) Similar adjacent pixels in JPEG images
39:00 FG: Modelling examples: 2) Dictionary methods in text compression, e.g. LZ77 and LZ78 [3]
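The dictionary methods named at 39:00 can be illustrated with a toy LZ77-style parse: repeated substrings are replaced by (distance, length) references into a window of already-seen bytes. The sketch below is a deliberately naive greedy matcher, not any production scheme:

```python
# Toy LZ77-style tokenizer: emits literal bytes and (distance, length) matches
# against a sliding window of previously seen data. Greedy and unoptimised.
def lz77_tokens(data: bytes, window: int = 4096, min_match: int = 3):
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # Overlapping matches (length > i - j) are allowed, as in real LZ77.
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= min_match:
            out.append((best_dist, best_len))  # copy best_len bytes from best_dist back
            i += best_len
        else:
            out.append(data[i])                # literal byte
            i += 1
    return out

print(lz77_tokens(b"abcabcabcabd"))  # [97, 98, 99, (3, 8), 100]
```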
39:54 JR: LZ77 [4] and LZ78 [5] were a watershed moment
40:26 FG: Statistical compression with stochastic modelling, e.g. Huffman [6]
42:12 CM: How would you define the split between LZ and Huffman?
42:41 FG: LZ understandability using a Markov model [7]
44:15 JR: Prediction vs encoding example: 0-255 gradient image
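The 0-255 gradient example at 44:15 separates cleanly into prediction and residual cleanup: predict each pixel from its left neighbour and the ramp's residual collapses to a constant. A small sketch of that idea (the prefix-sum reconstruction is my own framing, not a quote from the episode):

```python
# Left-neighbour ("delta") prediction on a 0..255 horizontal gradient.
# The raw row has 256 distinct values; the residual is almost all 1s,
# which an entropy coder can store in a fraction of a bit per pixel.
from itertools import accumulate

row = list(range(256))                                        # 0, 1, 2, ..., 255
residual = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

print(set(residual[1:]))             # {1}: the model captured all the structure
recon = list(accumulate(residual))   # undo the prediction with a running (prefix) sum
assert recon == row
```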
45:15 CM: The predictor tries to regularise the data?
46:18 FG: Predicted structure, and random residual
47:35 JR: Separating prediction and cleanup
48:46 CM: Science vs art?
48:56 JR: Science vs art
50:16 CM: Charles is in Hawaii [8]
51:23 JR: Lumpy compressors, e.g. LZ4
52:20 CM: What do you mean by bit packing?
52:37 JR: LZ77 symbol decode, in terms of bit packing
53:50 FG: Decode efficiency
54:30 CM: Hybrid command data stream
55:00 FG: Rich Geldreich thinks of compression as specifying a virtual machine
55:38 CM: Caring about the executable size
55:56 FG: Compile optimisation
56:29 JR: Improvements found by Charles in Oodle's parse
57:17 FG: LZ parse
59:11 JR: Speed choices
59:35 FG: Having multiple ways to say the same thing is an underestimation of the probability
1:00:00 CM: What do you mean by getting the probabilities right?
1:00:24 FG: A Markov model [9] has no ambiguity, and multiple ways to say the same thing are wasteful
1:02:38 FG: Useful redundancy
1:04:52 CM: Optimal parse
1:05:30 FG: Incremental compression
1:06:02 CM: So there is no such thing as a search for the optimal parse if the compressor is perfect?
1:06:26 FG: High-end statistical symmetric compression vs asymmetric LZ [10]
1:08:14 CM: So arithmetic encoding is a solved problem?
1:08:45 FG: Arithmetic encoding is mathematical
1:09:50 JR: Art vs Science
1:10:50 FG: Evidence of improvement potential, e.g. prepending a byte to an input file
1:12:43 CM: So searches can get stuck in local maxima?
1:13:24 JR: Bugs in complex but correct systems
1:14:15 CM: Do you gauge how close you're getting to optimal size?
1:14:40 FG: Oodle [11] compression levels
1:15:52 JR: Compression decoder optimisation
1:17:10 CM: Instruction-level parallelism
1:17:51 JR: Port blockage
1:18:22 FG: Instruction-level parallelism, and dependency
1:21:03 FG: Huffman serial dependency
1:23:52 CM: Increasing instruction-level parallelism by doing multiple decodes concurrently
1:24:16 JR: Unrolling a loop in data-order
1:24:31 FG: Six streams for six dependent instructions, plus SIMD
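The multi-stream idea at 1:24:31 can be sketched as follows: a single entropy-coded stream forces a serial dependency chain per symbol, so splitting the data into several independent streams and decoding them round-robin gives the CPU (or SIMD lanes) independent work each iteration. The toy below substitutes varint decoding for the real entropy decode; the stream count and layout are illustrative, not Oodle's or Bink's actual format:

```python
# Interleaved decoding of N independent streams, round-robin, one symbol from
# each stream per loop iteration. On real hardware each stream's serial
# dependency chain then overlaps with the others'.
# Toy "codec": unsigned LEB128-style varints stand in for entropy-coded symbols.

def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def decode_streams(streams: list[bytes]) -> list[list[int]]:
    pos = [0] * len(streams)                  # one read cursor per stream
    out = [[] for _ in streams]
    exhausted = 0
    while exhausted < len(streams):
        exhausted = 0
        for s, data in enumerate(streams):    # round-robin across streams
            if pos[s] >= len(data):
                exhausted += 1
                continue
            value, shift = 0, 0
            while True:                       # the serial part: decode one varint
                b = data[pos[s]]
                pos[s] += 1
                value |= (b & 0x7F) << shift
                if not b & 0x80:
                    break
                shift += 7
            out[s].append(value)
    return out

# Six streams, echoing the episode's example of six dependent instructions.
streams = [b"".join(encode_varint(v) for v in range(s, 30, 6)) for s in range(6)]
print(decode_streams(streams))
```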