<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="generator" content="pandoc">
<meta name="author" content="Tom Read Cutting">
<title>How Information Works</title>
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no, minimal-ui">
<link rel="stylesheet" href="reveal.js/css/reset.css">
<link rel="stylesheet" href="reveal.js/css/reveal.css">
<style>
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
ul.task-list{list-style: none;}
</style>
<link rel="stylesheet" href="reveal.js/css/theme/black.css" id="theme">
<link rel="stylesheet" href="css/style.css"/>
<!-- Printing and PDF exports -->
<script>
var link = document.createElement( 'link' );
link.rel = 'stylesheet';
link.type = 'text/css';
link.href = window.location.search.match( /print-pdf/gi ) ? 'reveal.js/css/print/pdf.css' : 'reveal.js/css/print/paper.css';
document.getElementsByTagName( 'head' )[0].appendChild( link );
</script>
<!--[if lt IE 9]>
<script src="reveal.js/lib/js/html5shiv.js"></script>
<![endif]-->
</head>
<body>
<div class="reveal">
<div class="slides">
<section id="title" class="slide level3" data-background-image="img/Information_Slide.png" data-background-size="contain" data-background-transition="none">
<h3 data-background-image="img/Information_Slide.png" data-background-size="contain" data-background-transition="none"></h3>
</section>
<section id="title-video" class="slide level3" data-background-video="img/Information_Slide.webm" data-background-size="contain" data-background-video-loop="loop">
<h3 data-background-video="img/Information_Slide.webm" data-background-size="contain" data-background-video-loop="loop"></h3>
</section>
<section id="what-is" class="slide level3">
<h3>What Is?</h3>
<p>Understanding the movement and transformation of information through mathematical and physical laws, addressing and answering two fundamental questions:</p>
<div class="fragment">
<ol type="1">
<li><strong>How much can you compress data? (The entropy of the data, H).</strong></li>
</ol>
</div>
<div class="fragment">
<ol start="2" type="1">
<li>At what rate can you reliably communicate through a channel? (The channel capacity, C).</li>
</ol>
<aside class="notes">
<p>What is information theory?</p>
<p>We will only cover the first question. Question two can be saved for further reading, or a later talk…</p>
</aside>
</div>
</section>
<section id="why-do" class="slide level3">
<h3>Why Do?</h3>
<p>Widely Applicable! (Sneak Peek):</p>
<div class="fragment">
<ul>
<li>Compression (duh!)</li>
<li>Communications and Networking (duh!)</li>
<li>Data-Oriented Design</li>
<li>Security</li>
<li>Machine Learning (huh?)</li>
<li>Computer Vision (huh?)</li>
<li>Computer Graphics (huh!?)</li>
<li>^^ Almost Everything in Computer Science</li>
</ul>
<aside class="notes">
<p>Information Theory has some pretty obvious applications, but hopefully some here will surprise you!</p>
<p><em>click</em></p>
<p>We will delve into the more interesting links later, and go in depth on these.</p>
<p>Outside of Computer Science, it is relevant to subjects from linguistics to physics, and to how the universe itself works!</p>
</aside>
</div>
</section>
<section id="what-contains" class="slide level3">
<h3>What Contains?</h3>
<ul>
<li>Foundations: Intro, Bayes, Entropy, Shannon’s Source Coding Theorem</li>
<li>Applications: Codes, Compression</li>
<li>Relations: DoD, Security, ML, Graphics</li>
</ul>
<div class="fragment">
<p><em>Not</em> mathematically rigorous! Arguments rely on intuition, <em>not</em> formal proof. User-friendly, <em>not</em> technically precise.</p>
<aside class="notes">
<p>Very high-level overview; it will give you a taste of the doors Information Theory opens up.</p>
<p>It is applicable outside of Computer Science, and is important to understanding reality itself!</p>
<p><em>click</em></p>
<p>Not maths accurate!</p>
</aside>
</div>
</section>
<section>
<section id="foundations" class="title-slide slide level2">
<h2>Foundations</h2>
</section>
<section id="data-is-not-information" class="slide level3">
<h3>Data Is Not Information</h3>
<p><strong>Intuition:</strong> A new hard drive has 1,000,000,000,000 bits of data, but not 1,000,000,000,000 bits <em>of information</em>.</p>
<p>Is there a difference between a 0-initialized hard drive, and a randomly-initialized hard drive in terms of information?</p>
<aside class="notes">
<p>This concept makes sense to us intuitively.</p>
<p>The answer to the question is yes, if you care about the values of the bits on the randomly-initialized hard drive.</p>
<p>But this does hint at something interesting regarding information…</p>
</aside>
</section>
<section id="probabilities-matter" class="slide level3">
<h3>Probabilities Matter</h3>
<p>The less probable an event is, the more information it contains when it happens.</p>
<aside class="notes">
<p>Intuition: if I tell you my name is Tom, the fact that you were expecting that means the information content of learning that I go by that name isn’t very high.</p>
<p>However, if I were to tell you that my name is now Geoffrey, that would be more information.</p>
</aside>
</section>
<section id="bit-1-bit" class="slide level3">
<h3>1 Bit ≠ 1 Bit</h3>
<p>1 Bit of Data is not 1 Bit of Information.</p>
<div class="fragment">
<p>We can say that 1 Bit of Data contains 1 Bit of Information if the probability of that Bit being 1 or 0 is 0.5.</p>
<aside class="notes">
<p>Clickbait title! 1 Bit is not 1 Bit?</p>
<p><em>click</em></p>
<p>We will expand on what that means later. But this is the core of things like compression algorithms: if the probability of any bit in a bit stream being a given value isn’t 0.5, what does that mean?</p>
<p>You will understand this later.</p>
</aside>
</div>
</section>
<section id="knowledge-affects-information" class="slide level3">
<h3>Knowledge Affects Information</h3>
<p>Intuitively, past events affect the probabilities by which we predict future events.</p>
<div class="fragment">
<p>In othr wrds, yo cn rd ths sntnce evn wth mssng lttrs.</p>
<aside class="notes">
<p>Here’s intuitively why probability is important</p>
<p><em>click</em></p>
<p>The reason you can read this sentence is that there are probabilities associated with what the missing letters could be, and your brain automatically fills in the gaps with the most likely options.</p>
<p>I find it awesome that our brains can run super-hardcore Bayesian inference like that without us even thinking about it.</p>
</aside>
</div>
</section></section>
<section>
<section id="some-probability" class="title-slide slide level2">
<h2>Some Probability</h2>
<aside class="notes">
<p>We are going to delve into some probability basics.</p>
<p>Bear with me, as this is important to understand; I’m going to go over it thoroughly as it helps a lot. But if you do get lost, I will stick to intuitive explanations for the rest of the talk.</p>
<p>Information Theory is <em>defined by</em> probability, because as we will discover, information <em>is</em> entropy <em>is</em> randomness.</p>
</aside>
</section>
<section id="basic-syntax---pa" class="slide level3">
<h3>Basic Syntax - <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A)</annotation></semantics></math></h3>
<p>For some event <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>A</mi><annotation encoding="application/x-tex">A</annotation></semantics></math>, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A)</annotation></semantics></math> says how likely that event is to occur.</p>
<div class="fragment">
<p>In other words, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A)</annotation></semantics></math> represents the probability that <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>A</mi><annotation encoding="application/x-tex">A</annotation></semantics></math> will happen.</p>
</div>
<div class="fragment">
<p><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>u</mi><mi>m</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mn>1</mn></mrow><annotation encoding="application/x-tex">p(um)=1</annotation></semantics></math></p>
<aside class="notes">
<p>Read Slide</p>
<p><em>click</em></p>
<p>Read Slide</p>
<p><em>click</em></p>
<p>If the event is that I will say “umm” during this talk, then we can say that the probability of “umm”, p(um), is 1.</p>
</aside>
</div>
</section>
<section id="basic-syntax---pab" class="slide level3">
<h3>Basic Syntax - <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A,B)</annotation></semantics></math></h3>
<p>For events <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>A</mi><annotation encoding="application/x-tex">A</annotation></semantics></math> and <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>B</mi><annotation encoding="application/x-tex">B</annotation></semantics></math>, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A,B)</annotation></semantics></math> is how likely both events are to happen.</p>
<div class="fragment">
<p>Hopefully <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>t</mi><mi>a</mi><mi>l</mi><mi>k</mi><mo>,</mo><mi>s</mi><mi>w</mi><mi>e</mi><mi>a</mi><mi>r</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(talk,swear)</annotation></semantics></math> is low.</p>
<aside class="notes">
<p><em>click</em></p>
<p>Probability I will give this talk <em>and</em> swear, is low.</p>
</aside>
</div>
</section>
<section id="basic-syntax---pab-1" class="slide level3">
<h3>Basic Syntax - <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="prefix">|</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A|B)</annotation></semantics></math></h3>
<p>For events <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>A</mi><annotation encoding="application/x-tex">A</annotation></semantics></math> and <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>B</mi><annotation encoding="application/x-tex">B</annotation></semantics></math>, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="prefix">|</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A|B)</annotation></semantics></math> is how likely <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>A</mi><annotation encoding="application/x-tex">A</annotation></semantics></math> is to happen, if <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>B</mi><annotation encoding="application/x-tex">B</annotation></semantics></math> has happened.</p>
<div class="fragment">
<p><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>s</mi><mi>w</mi><mi>e</mi><mi>a</mi><mi>r</mi><mo stretchy="false" form="prefix">|</mo><mi>s</mi><mi>t</mi><mi>u</mi><mi>b</mi><mi>t</mi><mi>o</mi><mi>e</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(swear|stub toe)</annotation></semantics></math> is very high.</p>
<aside class="notes">
<p><em>click</em></p>
<p>Let’s hope I don’t stub my toe during this talk.</p>
</aside>
</div>
</section>
<section id="product-rule" class="slide level3">
<h3>Product Rule</h3>
<p>The probability that both <em>A</em> and <em>B</em> will happen:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="prefix">|</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="prefix">|</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A,B) = p(A|B)p(B) = p(B|A)p(A)</annotation></semantics></math></p>
<div class="fragment">
<p>Example: The Probability that Alice will buy a hot dog <em>and</em> ketchup?</p>
<aside class="notes">
<p><em>click</em></p>
<p>If we know the probability of Alice buying ketchup given that she’s bought a hot dog. <em>And</em> we know how likely she is to buy a hot dog. Then we know how likely <em>both</em> are to happen.</p>
</aside>
</div>
</section>
<section id="sum-rule" class="slide level3">
<h3>Sum Rule</h3>
<p>If the probability of A is affected by the outcomes of a number of events <em>B</em>:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><munder><mo>∑</mo><mi>B</mi></munder><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><munder><mo>∑</mo><mi>B</mi></munder><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="prefix">|</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A) = \sum\limits_{B} p(A,B) = \sum_{B} p(A|B)p(B)</annotation></semantics></math></p>
<div class="fragment">
<p>Example: The Probability that Bob will beat Alice at chess.</p>
<aside class="notes">
<p><em>click</em></p>
<p>If we know the probability that Bob will beat Alice when playing white, and the probability that Bob will beat Alice when playing black - then we know the probability that Bob will beat Alice, provided we know how likely he is to play either colour.</p>
</aside>
</div>
</section>
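<section id="rules-in-code" class="slide level3">
<h3>Rules in Code</h3>
<p>A minimal Python sketch of both rules, using a made-up joint distribution for the chess example (the numbers are purely illustrative):</p>
<pre><code>ّ# Hypothetical joint distribution over A (does Bob win?) and B (colour Bob plays).
p_joint = {
    ("win", "white"): 0.30, ("lose", "white"): 0.20,
    ("win", "black"): 0.15, ("lose", "black"): 0.35,
}

# Sum rule: p(A) = sum over B of p(A, B)
p_win = sum(p for (a, b), p in p_joint.items() if a == "win")  # 0.45

# Product rule: p(A, B) = p(A|B) p(B)
p_white = p_joint[("win", "white")] + p_joint[("lose", "white")]
p_win_given_white = p_joint[("win", "white")] / p_white
assert abs(p_win_given_white * p_white - p_joint[("win", "white")]) &lt; 1e-12
</code></pre>
</section>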
<section id="bayes-theorem" class="slide level3">
<h3>Bayes’ Theorem</h3>
<p>The Product Rule:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo>,</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="prefix">|</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="prefix">|</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(A,B) = p(A|B)p(B) = p(B|A)p(A)</annotation></semantics></math></p>
<div class="fragment">
<p>Gives us…</p>
</div>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="prefix">|</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="prefix">|</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo></mrow><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo></mrow></mfrac></mrow><annotation encoding="application/x-tex">p(B|A) = \frac{p(A|B)p(B)}{p(A)}</annotation></semantics></math></p>
<aside class="notes">
<p>Refresh on product rule</p>
<p><em>click</em></p>
<p><em>click</em></p>
<p>This is very powerful, because it allows us to reverse the conditions of events.</p>
</aside>
</div>
</section>
<section id="close-to-home-example" class="slide level3">
<h3>Close to Home Example</h3>
<p>Imagine a 90% accurate “virus immunity” test.</p>
<div class="fragment">
<p>Imagine 1% of population is <em>actually</em> immune to the virus.</p>
</div>
<div class="fragment">
<p>What is the probability you are immune, if the test is positive?</p>
</div>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="prefix">|</mo><mi>T</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(I|T)</annotation></semantics></math></p>
<aside class="notes">
<p><em>click</em></p>
<p><em>click</em></p>
<p>Write down your guess in the comments, your gut feeling.</p>
</aside>
</div>
</section>
<section id="test-variables" class="slide level3">
<h3>Test Variables</h3>
<ul>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mn>0.01</mn></mrow><annotation encoding="application/x-tex">p(I)=0.01</annotation></semantics></math> - Chance you are immune.</li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mover><mi>I</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mn>0.99</mn></mrow><annotation encoding="application/x-tex">p(\overline{I})=0.99</annotation></semantics></math> - Chance you are at risk.</li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="prefix">|</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mover><mi>T</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="prefix">|</mo><mover><mi>I</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mn>0.9</mn></mrow><annotation encoding="application/x-tex">p(T|I)=p(\overline{T}|\overline{I})=0.9</annotation></semantics></math> - Chance tests succeed.</li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mover><mi>T</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="prefix">|</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="prefix">|</mo><mover><mi>I</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mn>0.1</mn></mrow><annotation encoding="application/x-tex">p(\overline{T}|I)=p(T|\overline{I})=0.1</annotation></semantics></math> - Chance tests fail.</li>
</ul>
<aside class="notes">
<p>Stating chances we already know.</p>
</aside>
</section>
<section id="applying-bayes" class="slide level3">
<h3>Applying Bayes’</h3>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="prefix">|</mo><mi>T</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="prefix">|</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo></mrow><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="postfix">)</mo></mrow></mfrac></mrow><annotation encoding="application/x-tex">p(I|T)=\frac{p(T|I)p(I)}{p(T)}</annotation></semantics></math></p>
<div class="fragment">
<ul>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mi>.</mi><mi>.</mi><mi>.</mi></mrow><annotation encoding="application/x-tex">p(T)=...</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="prefix">|</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="prefix">|</mo><mover><mi>I</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mover><mi>I</mi><mo accent="true">¯</mo></mover><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">=p(T|I)p(I)+p(T|\overline{I})p(\overline{I})</annotation></semantics></math> (sum rule)</li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mo stretchy="false" form="prefix">(</mo><mn>0.9</mn><mo stretchy="false" form="postfix">)</mo><mo stretchy="false" form="prefix">(</mo><mn>0.01</mn><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mo stretchy="false" form="prefix">(</mo><mn>0.1</mn><mo stretchy="false" form="postfix">)</mo><mo stretchy="false" form="prefix">(</mo><mn>0.99</mn><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">=(0.9)(0.01)+(0.1)(0.99)</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mn>0.108</mn></mrow><annotation encoding="application/x-tex">=0.108</annotation></semantics></math></li>
</ul>
<aside class="notes">
<p>We need p(T) then…</p>
<p><em>click</em></p>
</aside>
</div>
</section>
<section id="are-you-immune" class="slide level3">
<h3>Are You Immune?</h3>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="prefix">|</mo><mi>T</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="prefix">|</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>I</mi><mo stretchy="false" form="postfix">)</mo></mrow><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>T</mi><mo stretchy="false" form="postfix">)</mo></mrow></mfrac><mo>=</mo><mfrac><mrow><mo stretchy="false" form="prefix">(</mo><mn>0.9</mn><mo stretchy="false" form="postfix">)</mo><mo stretchy="false" form="prefix">(</mo><mn>0.01</mn><mo stretchy="false" form="postfix">)</mo></mrow><mn>0.108</mn></mfrac><mo>=</mo><mi>.</mi><mi>.</mi><mi>.</mi></mrow><annotation encoding="application/x-tex">p(I|T)=\frac{p(T|I)p(I)}{p(T)}=\frac{(0.9)(0.01)}{0.108}=...</annotation></semantics></math></p>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mn>0.0833</mn><annotation encoding="application/x-tex">0.0833</annotation></semantics></math></p>
<aside class="notes">
<p>Are you immune? Here’s the math!</p>
<p><em>click</em></p>
<p>You have less than 10% chance of being immune. Even though your test was 90% accurate.</p>
</aside>
</div>
</section>
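<section id="bayes-in-code" class="slide level3">
<h3>Bayes’ in Code</h3>
<p>The same immunity-test calculation as a minimal Python sketch, plugging in the numbers from the previous slides:</p>
<pre><code>p_I = 0.01            # p(I): chance you are immune
p_T_given_I = 0.9     # p(T|I): test is positive when you are immune
p_T_given_notI = 0.1  # p(T|not I): test is positive when you are not

# Sum rule: p(T) = p(T|I)p(I) + p(T|not I)p(not I)
p_T = p_T_given_I * p_I + p_T_given_notI * (1 - p_I)

# Bayes' Theorem: p(I|T) = p(T|I)p(I) / p(T)
p_I_given_T = p_T_given_I * p_I / p_T

print(round(p_T, 3), round(p_I_given_T, 4))  # 0.108 0.0833
</code></pre>
</section>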
<section id="relation-to-information-theory" class="slide level3">
<h3>Relation to Information Theory</h3>
<p>Bayes’ Theorem can be applied recursively, letting us use the latest posterior as a new <em>prior</em> to interpret the next set of data.</p>
<div class="fragment">
<p>Information Theory is about quantitatively analysing the amount of information gained (via analysing reduced uncertainty) using Bayes’ Theorem.</p>
<aside class="notes">
<p>Bayes’ Theorem lets us estimate the probabilities of incoming bits of information.</p>
<p><em>click</em></p>
<p>Information Theory is all about quantitatively analysing that.</p>
</aside>
</div>
</section></section>
<section>
<section id="entropy" class="title-slide slide level2">
<h2>Entropy</h2>
<aside class="notes">
<p>This section will be mostly statements, the intuition and tying this all together will come in the coding section.</p>
</aside>
</section>
<section id="event-information" class="slide level3">
<h3>Event Information</h3>
<p>The information <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>I</mi><annotation encoding="application/x-tex">I</annotation></semantics></math> contained within an event <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>E</mi><annotation encoding="application/x-tex">E</annotation></semantics></math> is:</p>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>I</mi><mo>=</mo><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>E</mi><mo stretchy="false" form="postfix">)</mo><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">I = \log_2(p(E))</annotation></semantics></math></p>
<p>Where <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>E</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(E)</annotation></semantics></math> is the probability of that event occurring.</p>
</div>
<div class="fragment">
<p>Entropy, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>H</mi><mo>=</mo><mo>−</mo><mi>I</mi></mrow><annotation encoding="application/x-tex">H = -I</annotation></semantics></math> is the amount of uncertainty.</p>
<aside class="notes">
<p>So, there is an equation for information, and this…</p>
<p><em>click</em></p>
<p>So, it seems pretty arbitrary to have this. And also it’s negative. We also talk about Entropy though, which, intuitively, is anti-information so…</p>
<p><em>click</em></p>
<p>However, these numbers mean very real things. The next slide will explain why log is useful, but eventually the magic will be revealed…</p>
</aside>
</div>
</section>
<section id="adding-information" class="slide level3">
<h3>Adding Information</h3>
<p>For independent <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>a</mi><annotation encoding="application/x-tex">a</annotation></semantics></math> and <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>b</mi><annotation encoding="application/x-tex">b</annotation></semantics></math>:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>I</mi><mrow><mi>a</mi><mi>b</mi></mrow></msub><mo>=</mo><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><msub><mi>p</mi><mi>a</mi></msub><msub><mi>p</mi><mi>b</mi></msub><mo stretchy="false" form="postfix">)</mo><mo>=</mo><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><msub><mi>p</mi><mi>a</mi></msub><mo stretchy="false" form="postfix">)</mo><mo>+</mo><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><msub><mi>p</mi><mi>b</mi></msub><mo stretchy="false" form="postfix">)</mo><mo>=</mo><msub><mi>I</mi><mi>a</mi></msub><mo>+</mo><msub><mi>I</mi><mi>b</mi></msub></mrow><annotation encoding="application/x-tex">I_{ab} = \log_2(p_a p_b) = \log_2(p_a) + \log_2(p_b) = I_a + I_b</annotation></semantics></math></p>
<aside class="notes">
<p>By defining information in terms of the logarithms of the underlying probabilities involved, we can “add” information together to get the total information gain of two events.</p>
</aside>
</section>
<section id="entropy-of-ensembles" class="slide level3">
<h3>Entropy of Ensembles</h3>
<p>If you have a non-uniform ensemble of probabilities such that:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><munder><mo>∑</mo><mi>i</mi></munder><msub><mi>p</mi><mi>i</mi></msub><mo>=</mo><mn>1</mn></mrow><annotation encoding="application/x-tex">\sum\limits_i p_i = 1 </annotation></semantics></math></p>
<p>Then:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>H</mi><mo>=</mo><mo>−</mo><munder><mo>∑</mo><mi>i</mi></munder><msub><mi>p</mi><mi>i</mi></msub><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><msub><mi>p</mi><mi>i</mi></msub><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">H = - \sum\limits_i p_i \log_2(p_i)</annotation></semantics></math></p>
<aside class="notes">
<p>So if you have a probability distribution, we can find out the entropy associated with that distribution.</p>
<p>Remember, entropy is measured in bits! This will tie together nicely, I promise!</p>
</aside>
</section>
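<section id="entropy-in-code" class="slide level3">
<h3>Entropy in Code</h3>
<p>The ensemble entropy formula is nearly a one-liner; here is a minimal Python sketch (the second example anticipates the source-coding section):</p>
<pre><code>from math import log2

def entropy(probabilities):
    """H = -sum(p * log2(p)), measured in bits; p = 0 terms contribute nothing."""
    return -sum(p * log2(p) for p in probabilities if p)

print(entropy([0.5, 0.5]))                 # 1.0 bit: one fair coin flip
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
</code></pre>
</section>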
<section id="intuition-of-entropy" class="slide level3">
<h3>Intuition of Entropy</h3>
<div style="font-size: 0.7em">
<p>Bit, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>b</mi><annotation encoding="application/x-tex">b</annotation></semantics></math> with <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mrow><mi>b</mi><mo>=</mo><mn>1</mn></mrow></msub><mo>=</mo><mn>1</mn><mo>−</mo><msub><mi>p</mi><mrow><mi>b</mi><mo>=</mo><mn>0</mn></mrow></msub></mrow><annotation encoding="application/x-tex">p_{b=1}=1-p_{b=0}</annotation></semantics></math></p>
</div>
<div class="fragment">
<div style="font-size: 0.7em">
<figure>
<img data-src="plots/5547008444886439909.svg" class="matplotlib" height="330" alt="" /><figcaption><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>H</mi><mo stretchy="false" form="prefix">(</mo><mi>b</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mo>−</mo><msub><mi>p</mi><mrow><mi>b</mi><mo>=</mo><mn>1</mn></mrow></msub><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><msub><mi>p</mi><mrow><mi>b</mi><mo>=</mo><mn>1</mn></mrow></msub><mo stretchy="false" form="postfix">)</mo><mo>−</mo><mo stretchy="false" form="prefix">(</mo><mn>1</mn><mo>−</mo><msub><mi>p</mi><mrow><mi>b</mi><mo>=</mo><mn>1</mn></mrow></msub><mo stretchy="false" form="postfix">)</mo><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><mn>1</mn><mo>−</mo><msub><mi>p</mi><mrow><mi>b</mi><mo>=</mo><mn>1</mn></mrow></msub><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">H(b)=-p_{b=1}\log_2(p_{b=1})-(1-p_{b=1})\log_2(1-p_{b=1})</annotation></semantics></math></figcaption>
</figure>
<p>When <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo>=</mo><mn>0.5</mn></mrow><annotation encoding="application/x-tex">p=0.5</annotation></semantics></math>, the Entropy maxes-out at 1.</p>
</div>
<aside class="notes">
<p>Excuse the inconsistent syntax, it’s denser this way.</p>
<p>We are saying that we have a bit <em>of data</em> b, that can either be one or zero. And it can only have two values.</p>
<p><em>click</em></p>
<p>Bring that back to earlier when we said that 1 bit of data contains 1 bit of information if the probability of it being 1 or 0 was 0.5, this is why!</p>
</aside>
</div>
</section>
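<section id="plotting-binary-entropy" class="slide level3">
<h3>Plotting Binary Entropy</h3>
<p>A curve like the one above can be reproduced in a few lines of matplotlib; this is a sketch, not necessarily how the original plot was generated:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# Binary entropy: H(b) = -p*log2(p) - (1-p)*log2(1-p), for p strictly in (0, 1).
p = np.linspace(0.001, 0.999, 500)
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)

plt.plot(p, H)
plt.xlabel("p(b = 1)")
plt.ylabel("H(b) / bits")
plt.show()  # maxes out at H = 1 when p = 0.5
</code></pre>
</section>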
<section id="further-entropy-reading" class="slide level3">
<h3>Further Entropy Reading</h3>
<ul>
<li>Joint Entropy</li>
<li>Conditional Entropy of Ensembles</li>
<li>Chain Rule for Entropy</li>
<li>Mutual Information</li>
<li>Kullback-Leibler Distance and Fano’s Inequality</li>
</ul>
<aside class="notes">
<p>We can calculate different kinds of entropy under other conditions, such as the entropy of two independent random variables and the entropy of variables which are conditional on each other.</p>
<p>We can also do this for ensembles of variables!</p>
<p>Unfortunately we don’t have enough time for this.</p>
<p>Mutual Information takes this further by letting us know how much information one variable gives us about another.</p>
</aside>
</section></section>
<section>
<section id="source-coding" class="title-slide slide level2">
<h2>Source-Coding</h2>
<aside class="notes">
<p>This section will explain why we call 1 bit of information a ‘bit’.</p>
</aside>
</section>
<section id="codes" class="slide level3">
<h3>Codes</h3>
<p>Imagine Alice sending Bob <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>A</mi><annotation encoding="application/x-tex">A</annotation></semantics></math>, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>B</mi><annotation encoding="application/x-tex">B</annotation></semantics></math>, <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>C</mi><annotation encoding="application/x-tex">C</annotation></semantics></math> and <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>D</mi><annotation encoding="application/x-tex">D</annotation></semantics></math>, with:</p>
<div>
<ul>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mn>1</mn><mn>2</mn></mfrac></mrow><annotation encoding="application/x-tex">p(A)=\frac{1}{2}</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mn>1</mn><mn>4</mn></mfrac></mrow><annotation encoding="application/x-tex">p(B)=\frac{1}{4}</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>C</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mn>1</mn><mn>8</mn></mfrac></mrow><annotation encoding="application/x-tex">p(C)=\frac{1}{8}</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>D</mi><mo stretchy="false" form="postfix">)</mo><mo>=</mo><mfrac><mn>1</mn><mn>8</mn></mfrac></mrow><annotation encoding="application/x-tex">p(D)=\frac{1}{8}</annotation></semantics></math></li>
</ul>
</div>
<div class="fragment">
<p>Example: DADDBBADAABBAACBDABCAAADC</p>
<aside class="notes">
<p>Let’s say Alice is sending Bob a sequence of letters, “A, B, C & D”.</p>
<p><em>click</em></p>
<p>Where the probability she will send A is a half.</p>
<p><em>click</em></p>
<p>B is a quarter</p>
<p><em>click</em></p>
<p>C is 1/8</p>
<p><em>click</em></p>
<p>D is 1/8</p>
<p><em>click</em></p>
<p>With an example (generated randomly!)</p>
</aside>
</div>
</section>
<section id="encoding-as-binary" class="slide level3">
<h3>Encoding as Binary</h3>
<div class="fragment">
<p>A naive code might look like this:</p>
<ul>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>A</mi><mo>=</mo><mn>00</mn></mrow><annotation encoding="application/x-tex">A = 00</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>B</mi><mo>=</mo><mn>01</mn></mrow><annotation encoding="application/x-tex">B = 01</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>C</mi><mo>=</mo><mn>10</mn></mrow><annotation encoding="application/x-tex">C = 10</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>D</mi><mo>=</mo><mn>11</mn></mrow><annotation encoding="application/x-tex">D = 11</annotation></semantics></math></li>
</ul>
</div>
<div class="fragment">
<p>This has a fixed <em>code rate</em> (the mean number of bits transmitted per symbol), <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi><mo>=</mo><mn>2</mn></mrow><annotation encoding="application/x-tex">R=2</annotation></semantics></math>.</p>
<aside class="notes">
<p>How would we encode this as binary?</p>
<p><em>click</em></p>
<p>We have four letters, so we can use two bits per letter, right?</p>
<p><em>click</em></p>
<p>So this has what we call a “Fixed Code Rate”, that is, on average, for each letter transmitted, we will send two bits and always send two bits! We call this R.</p>
</aside>
</div>
</section>
<section id="entropy-of-the-system" class="slide level3">
<h3>Entropy of the system</h3>
<p>Remember:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>H</mi><mo>=</mo><mo>−</mo><munder><mo>∑</mo><mi>i</mi></munder><msub><mi>p</mi><mi>i</mi></msub><msub><mo>log</mo><mn>2</mn></msub><mo stretchy="false" form="prefix">(</mo><msub><mi>p</mi><mi>i</mi></msub><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">H = - \sum\limits_i p_i \log_2(p_i)</annotation></semantics></math></p>
<div>
<ul>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>H</mi><mo>=</mo><mi>.</mi><mi>.</mi><mi>.</mi></mrow><annotation encoding="application/x-tex">H=...</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mi>H</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mi>H</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mi>H</mi><mo stretchy="false" form="prefix">(</mo><mi>C</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mi>H</mi><mo stretchy="false" form="prefix">(</mo><mi>D</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">=H(A)+H(B)+H(C)+H(D)</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mfrac><mn>1</mn><mn>2</mn></mfrac><mo>+</mo><mfrac><mn>1</mn><mn>4</mn></mfrac><mn>2</mn><mo>+</mo><mfrac><mn>1</mn><mn>8</mn></mfrac><mn>3</mn><mo>+</mo><mfrac><mn>1</mn><mn>8</mn></mfrac><mn>3</mn></mrow><annotation encoding="application/x-tex">=\frac{1}{2}+\frac{1}{4}2+\frac{1}{8}3+\frac{1}{8}3</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mfrac><mn>1</mn><mn>2</mn></mfrac><mo>+</mo><mfrac><mn>1</mn><mn>2</mn></mfrac><mo>+</mo><mfrac><mn>3</mn><mn>8</mn></mfrac><mo>+</mo><mfrac><mn>3</mn><mn>8</mn></mfrac></mrow><annotation encoding="application/x-tex">=\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{3}{8}</annotation></semantics></math></li>
<li class="fragment"><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mn>1.75</mn></mrow><annotation encoding="application/x-tex">=1.75</annotation></semantics></math></li>
</ul>
</div>
<aside class="notes">
<p>So what is the entropy of this system then? Although it still seems like we are plugging arbitrary numbers into equations, it will all make sense!</p>
<p>So we are calculating the entropy of an ensemble here, so here we go!</p>
<p><em>click</em></p>
<p>(once all clicked)</p>
<p>So what does this mean? It means that on average, in our system, each symbol carries 1.75 bits of information.</p>
<p>So what does that tell us about our code?</p>
</aside>
</section>
<section id="coding-efficiency" class="slide level3">
<h3>Coding Efficiency</h3>
<p>The efficiency <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>μ</mi><annotation encoding="application/x-tex">\mu</annotation></semantics></math> of our coding is <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>μ</mi><mo>=</mo><mfrac><mi>H</mi><mi>R</mi></mfrac></mrow><annotation encoding="application/x-tex">\mu=\frac{H}{R}</annotation></semantics></math>:</p>
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>μ</mi><mo>=</mo><mn>1.75</mn><mi>/</mi><mn>2</mn><mo>=</mo><mn>0.875</mn></mrow><annotation encoding="application/x-tex">\mu=1.75/2=0.875</annotation></semantics></math></p>
<aside class="notes">
<p>The implication is that a code <em>should</em> exist that has a code rate <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi><mo>=</mo><mi>H</mi></mrow><annotation encoding="application/x-tex">R=H</annotation></semantics></math>, and if we can find it, it will be optimal.</p>
<p>So yeah, maybe there is a different code with a code rate R of 1.75?</p>
</aside>
</section>
<section id="variable-length-coding" class="slide level3">
<h3>Variable-Length Coding</h3>
<p>Now imagine:</p>
<ul>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>A</mi><mo>=</mo><mn>0</mn></mrow><annotation encoding="application/x-tex">A = 0</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>B</mi><mo>=</mo><mn>10</mn></mrow><annotation encoding="application/x-tex">B = 10</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>C</mi><mo>=</mo><mn>110</mn></mrow><annotation encoding="application/x-tex">C = 110</annotation></semantics></math></li>
<li><math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>D</mi><mo>=</mo><mn>111</mn></mrow><annotation encoding="application/x-tex">D = 111</annotation></semantics></math></li>
</ul>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi><mo>=</mo><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>A</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mn>2</mn><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>B</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mn>3</mn><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>C</mi><mo stretchy="false" form="postfix">)</mo><mo>+</mo><mn>3</mn><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>D</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">R=p(A)+2p(B)+3p(C)+3p(D)</annotation></semantics></math></p>
</div>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo>=</mo><mn>1.75</mn></mrow><annotation encoding="application/x-tex">=1.75</annotation></semantics></math></p>
<aside class="notes">
<p>So here’s one I made earlier.</p>
<p>Explain variable length coding, and useful properties (each symbol uniquely and instantaneously decodable).</p>
<p>Now, if we calculate how likely each codeword is to appear, weighted by its length, what is the code rate now?</p>
<p><em>click</em></p>
<p>Explain probabilities calculation.</p>
<p><em>click</em></p>
<p>Look, R matches the entropy of this system!</p>
<p>Aside: So, this code was plucked out of thin air, but it has some important properties that make it work.</p>
<p>Firstly, if we tried to find a code with a lower code rate, we couldn’t: that’s impossible without losing information, so we know this is the maximally efficient code.</p>
<p>It’s also instantly decodable: no codeword is a prefix of another, so we don’t need to wait for further bits.</p>
<p>These things can be easy to mess up in more complex situations, and there are systems for constructing codes like this (such as Huffman trees).</p>
</aside>
</div>
</section>
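<section id="measuring-the-code-rate" class="slide level3">
<h3>Measuring the Code Rate</h3>
<p>A minimal Python sketch of this variable-length code; encoding a long random stream shows the empirical rate approaching R = 1.75:</p>
<pre><code>import random

code = {"A": "0", "B": "10", "C": "110", "D": "111"}
probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Expected code rate: R = sum over symbols of p(s) * len(code[s])
R = sum(probs[s] * len(code[s]) for s in code)  # 1.75

# Encode a random stream and measure the bits actually used per symbol.
stream = random.choices(list(probs), weights=list(probs.values()), k=100_000)
bits = "".join(code[s] for s in stream)
print(R, len(bits) / len(stream))  # 1.75, and empirically roughly 1.75
</code></pre>
</section>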
<section id="shannons-source-coding-theorem" class="slide level3">
<h3>Shannon’s Source-Coding Theorem</h3>
<p><em>You can compress a stream of data with entropy <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>H</mi><annotation encoding="application/x-tex">H</annotation></semantics></math> into a code whose rate <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>R</mi><annotation encoding="application/x-tex">R</annotation></semantics></math> approaches <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mi>H</mi><annotation encoding="application/x-tex">H</annotation></semantics></math> in the limit, but you can’t have a code rate <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi><mo><</mo><mi>H</mi></mrow><annotation encoding="application/x-tex">R < H</annotation></semantics></math> without loss of information.</em></p>
</section>
<section id="on-fixed-probabilities" class="slide level3">
<h3>On Fixed Probabilities</h3>
<p>Probabilities in symbol streams are rarely fixed:</p>
<ul>
<li>Could be affected by previous symbol (<math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><mi>U</mi><mo stretchy="false" form="prefix">|</mo><mi>Q</mi><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(U|Q)</annotation></semantics></math> is high!)</li>
<li>Can be dependent on context, the <em>type</em> of data: photos vs cartoons.</li>
<li>Can depend on the recipient: do they know what you are going to send?</li>
</ul>
<aside class="notes">
<p>The goal of encryption is to transmit data in such a way that it contains no information for anyone except the intended receiver.</p>
<p>This is unfortunately as far as we will get with lossless compression, but you can see the foundations of a mathematical system for understanding this stuff.</p>
<p>Good compression systems are largely about trying to accurately <em>discern</em> the entropy of each symbol for the target recipient.</p>
</aside>
</section>
<section id="further-coding-reading" class="slide level3">
<h3>Further Coding Reading</h3>
<ul>
<li>Huffman Codes & Huffman Trees</li>
<li>Kraft-McMillan Inequality</li>
<li>Markov Chains</li>
</ul>
<aside class="notes">
<p>Huffman Trees let us construct these codes.</p>
<p>The Kraft-McMillan inequality deals with the limits on which instantaneous codes can exist.</p>
<p>Fascinatingly, you can code an infinite alphabet this way, whilst still having 2 bits per symbol on average!</p>
<p>Markov Chains deal with probabilities changing based on previous events using a state machine.</p>
</aside>
</section></section>
<section>
<section id="compression" class="title-slide slide level2">
<h2>Compression</h2>
<aside class="notes">
<p>Coding covers how different ways of coding the same data can make it more or less efficient, but how does this apply to general-purpose compression?</p>
</aside>
</section>
<section id="compression-is-hard" class="slide level3">
<h3>Compression is… hard</h3>
<ul>
<li>We want to find the best estimate of <math display="inline" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>p</mi><mo stretchy="false" form="prefix">(</mo><msub><mi>B</mi><mrow><mi>n</mi><mo>+</mo><mn>1</mn></mrow></msub><mo stretchy="false" form="prefix">|</mo><msub><mi>B</mi><mrow><mn>0</mn><mi>.</mi><mi>.</mi><mi>n</mi></mrow></msub><mo stretchy="false" form="postfix">)</mo></mrow><annotation encoding="application/x-tex">p(B_{n+1}|B_{0..n})</annotation></semantics></math> for the recipient…</li>
<li>Context is key: the more specific you can be, the more information you already have.
<ul>
<li>General purpose?</li>
<li>Images Only?</li>
<li>Cartoons Only?</li>
<li>Only Simpsons Characters?</li>
</ul></li>
</ul>
<aside class="notes">
<ul>
<li>About finding and encoding probabilities</li>
<li>Dictionary-Based, assume what comes before will come again.</li>
<li>Run-length encoding, assume things won’t change</li>
<li>Tom Scott Video Compression</li>
<li>Learning Compressors, Give example data, they compress it.</li>
</ul>
</aside>
</section>
<section id="dictionary-method---assume-repeated-patterns" class="slide level3" data-background-image="img/dictionary.jpg" style="color:black; text-shadow: 0px 0px 4px white;">
<h3 data-background-image="img/dictionary.jpg" style="color:black; text-shadow: 0px 0px 4px white;">Dictionary Method - Assume Repeated Patterns</h3>
<p>(LZW, gif)</p>
<p>Every time a new “word” is encountered, put it in a dictionary. Next time you encounter it, refer to the dictionary entry.</p>
<div class="fragment">
<p>Constructing the “best” dictionary is hard.</p>
</div>
<div class="fragment">
<p>Image Source: <a href="https://www.pexels.com/@freestocks">freestocks.org</a></p>
</div>
</section>
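<section id="dictionary-method-sketch" class="slide level3">
<h3>Dictionary Method - Sketch</h3>
<p>A minimal LZW-style encoder in Python, to show the dictionary-building idea (a sketch of the technique, not the exact variant used by gif):</p>
<pre><code>def lzw_encode(data):
    # Start with every single character already in the dictionary.
    dictionary = {chr(i): i for i in range(256)}
    word, out = "", []
    for ch in data:
        if word + ch in dictionary:
            word += ch  # keep extending a known "word"
        else:
            out.append(dictionary[word])             # emit the known word...
            dictionary[word + ch] = len(dictionary)  # ...and learn a new one
            word = ch
    if word:
        out.append(dictionary[word])
    return out

print(lzw_encode("DADDBBADAABBAACBDABCAAADC"))
</code></pre>
</section>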
<section id="run-length-encoding" class="slide level3">
<h3>Run-Length Encoding</h3>
<p>Assume data will often be repeated, so count the number of repeated bytes and store that count along with the first instance.</p>
<div class="fragment">
<p><math display="block" xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>000000000001111000</mn><mo>=</mo><mo stretchy="false" form="prefix">(</mo><mn>11</mn><mo stretchy="false" form="postfix">)</mo><mn>0</mn><mo>,</mo><mo stretchy="false" form="prefix">(</mo><mn>4</mn><mo stretchy="false" form="postfix">)</mo><mn>1</mn><mo>,</mo><mo stretchy="false" form="prefix">(</mo><mn>3</mn><mo stretchy="false" form="postfix">)</mo><mn>0</mn></mrow><annotation encoding="application/x-tex">000000000001111000=(11)0, (4)1, (3)0</annotation></semantics></math></p>
</div>
</section>
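<section id="run-length-encoding-sketch" class="slide level3">
<h3>Run-Length Encoding - Sketch</h3>
<p>A minimal Python sketch of the idea on the previous slide:</p>
<pre><code>from itertools import groupby

def rle_encode(data):
    """Store each run of repeated symbols as a (count, symbol) pair."""
    return [(len(list(run)), symbol) for symbol, run in groupby(data)]

print(rle_encode("000000000001111000"))  # [(11, '0'), (4, '1'), (3, '0')]
</code></pre>
</section>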
<section id="learning-compressors" class="slide level3">
<h3>Learning Compressors</h3>
<p>Train a compressor for a specific use case.</p>
<div class="fragment">
<p><a href="http://www.radgametools.com/oodlenetwork.htm">Oodle Network Compression</a> does this by building a dictionary for network packets ahead of time, which is shipped with the game.</p>
</div>
</section>
<section id="lossy-compression" class="slide level3" data-background-image="img/compress_cat.jpg" style="color:white; text-shadow: 0px 0px 4px black;">
<h3 data-background-image="img/compress_cat.jpg" style="color:white; text-shadow: 0px 0px 4px black;">Lossy Compression</h3>
<p>Really good for images/videos - the goal is to throw away information that our eyes tend to naturally discard anyway.</p>
<div class="fragment">
<p>JPEG uses the discrete cosine transform (DCT) to achieve this, for example.</p>
</div>
<div class="fragment">
<p>A broad topic that can be expanded on at a later date.</p>
</div>
<div class="fragment">
<p>Image Source: <a href="https://en.wikipedia.org/wiki/Lossy_compression#/media/File:Ruby-HighCompression-Tiny.jpg">Wikipedia</a></p>
</div>
</section></section>
<section>
<section id="relations" class="title-slide slide level2">
<h2>Relations</h2>
</section>
<section id="original-list" class="slide level3">
<h3>Original List</h3>
<p>Case Studies</p>
<ul>
<li>Compression (duh!)</li>
<li>Communications and Networking (duh!) (next time)</li>
<li>Data-Oriented Design</li>
<li>Security</li>
<li>Machine Learning (huh?)</li>
<li>Computer Vision (huh?)</li>
<li>Computer Graphics (huh!?)</li>
<li>^^ Almost Everything in Computer Science</li>
</ul>
</section>
<section id="data-oriented-design-actonmike2014" class="slide level3">
<h3>Data-Oriented Design <span class="citation" data-cites="ActonMike2014">(Acton 2014)</span></h3>
<p>A strongly recommended talk from CppCon in 2014.</p>
<div class="fragment">
<p><img data-src="img/cpp_con_info_density_action_2014.jpg" width="450" /></p>
<p>Information Density in context is important for Data-Oriented Design!</p>
<aside class="notes">
<p>Mike Acton gave a really good talk in 2014, strongly advocating software engineering that focuses on “solving the problem you have to solve” - identifying that all software problems are ultimately problems of data transformation on a concrete set of hardware.</p>
<p>With Modern CPUs, this means the focus is often on transforming data in a way that makes effective use of CPU caches.</p>
<p>Being able to reason about information density is an important part of that.</p>
<p><em>click</em></p>
<p>Not only do you have the problem of the cache misses caused by computing a single bit each frame, but the information density here is incredibly low on top of that!</p>
<p>I would strongly recommend watching the full talk if you want to learn more.</p>
</aside>
</div>
</section>
<section id="security---encryption" class="slide level3">
<h3>Security - Encryption</h3>
<figure>
<img data-src="img/keys_george_becker.jpg" width="600" alt="" /><figcaption>Image Source: <a href="https://www.pexels.com/@eye4dtail">George Becker</a></figcaption>
</figure>
<p>Goal: Meaningful <em>only</em> to the intended recipient.</p>
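<div class="fragment">
<p>A toy sketch of the public-key idea in Python, with deliberately tiny primes (illustrative only - real RSA keys are 2048+ bits, which makes the brute-force factoring step at the end infeasible):</p>
<pre><code class="python"># Toy RSA - never use numbers this small in practice.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)  # n = 3233 is public; p and q are not
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent (2753), Python 3.8+

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == message  # only d decrypts

# The public key *does* determine the private key - but extracting it
# means factoring n, which at real key sizes costs astronomical energy.
p_found = next(i for i in range(2, n) if n % i == 0)
d_found = pow(e, -1, (p_found - 1) * (n // p_found - 1))
assert d_found == d</code></pre>
</div>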
<aside class="notes">
<p>Encryption is all about encoding information so that only someone with the right <em>priors</em> can extract meaning from it. From an information-theory point of view, it is still the same <em>amount</em> of information.</p>
<p>Public-key cryptography is a really interesting case.</p>
<p>When you broadcast information encrypted using someone’s public key, everyone is receiving that information with the same density. However, only the person with the right corresponding private key can decrypt it.</p>
<p>What’s fascinating here is that the public key <em>is</em> an encoding of the private key - but extracting that information requires an enormous amount of work and energy. All the information you might need to acquire the private key is there; you just can’t affordably access it.</p>
</aside>
</section>
<section id="machine-learning-computer-graphics---dlss-nvidiadlss2020" class="slide level3">
<h3>Machine Learning + Computer Graphics - DLSS <span class="citation" data-cites="NvidiaDlss2020">(NVIDIA 2020)</span></h3>
<p><img data-src="img/nvidia-dlss-2-0-architecture.png" /></p>
<aside class="notes">
<p>Machine Learning Image Upscaling works because a 4x increase in image resolution does not make for a 4x increase in information - so computing that data every frame is wasteful.</p>
<p>By training a neural network known as a “Convolutional Autoencoder” on examples of low-resolution and high-resolution images, we can make pretty good-looking reconstructions of high-resolution images from low-resolution ones.</p>
<p>By feeding in historical data and motion vectors from previous frames, we can even reconstruct some higher-frequency data that would otherwise be aliased.</p>
</aside>
</section>
<section id="computer-vision---how-the-eye-workswikiblindspot" class="slide level3">
<h3><del>Computer</del> Vision - How the eye works <span class="citation" data-cites="WikiBlindSpot">(Wikipedia 2020)</span></h3>
<div class="fragment">
<figure>
<img data-src="img/blind_spot_demonstration.png" data-fontsize="10" alt="" /><figcaption>Instructions: Close one eye and focus appropriate letter (R for right or L for left). Place eye 3x distance between R and L from screen. Move back-and-forth until opposite letter dissapears.</figcaption>
</figure>
<aside class="notes">
<p>Human vision is a broad topic, but our brains and retinas exploit information theory <em>a lot</em> in an attempt to let us <em>see</em> informationally rich images from relatively low-resolution data sources which contain a lot of noise, missing pixels, limited bandwidth, the wrong colours, etc.</p>
<p><em>click</em></p>
<p>Where it gets interesting is when the process goes wrong. Everyone, follow these instructions:</p>
<p>Close one eye and focus the other on the appropriate letter (R for right or L for left). Place your eye a distance from the screen approximately equal to three times the distance between the R and the L. Move your eye towards or away from the screen until you notice the other letter disappear. For example, close your right eye, look at the “L” with your left eye, and the “R” will disappear.</p>
<p>It’s amazing: our brain fills in the data with whatever it “expects” to be there with the highest probability - only here, it gets it wrong.</p>
</aside>
</div>
</section>
<section id="bonus-the-universe-itself" class="slide level3">
<h3>Bonus: The Universe Itself!?</h3>
<figure>
<img data-src="img/universe_miriam_espacio.jpg" width="600" alt="" /><figcaption>Image Source: <a href="https://www.pexels.com/@miriamespacio">Miriam Espacio</a></figcaption>
</figure>
<aside class="notes">
<p>One of the things that makes Information Theory hold a special place in my heart is the fact that it gives us a really interesting insight into the laws of the Universe itself.</p>
<p>We live in a world that (as far as we know) is trending towards a state of entropy. We have established that a completely uniform, fully expected signal carries no information at all, so it seems like more and more information is constantly being added to the Universe within which we live.</p>
<p>Yet, information needs meaning, and in the march towards more entropy, towards more information, it takes work to keep information around. And with channel coding, it takes work to transmit it.</p>
<p>It’s fascinating that packing data into the smallest number of bits needed to store useful information takes work, energy, CPU time, electricity. It’s even more fascinating that unpacking information from that form requires just as much work - to extract meaning we need to reintroduce redundancy.</p>
<p>This is the foundation for efficiently distributing videos of cats around the internet, yet it’s hard not to get lost in this when your mind wanders onto the topic.</p>
</aside>
</section></section>
<section>
<section id="the-end" class="title-slide slide level2">
<h2>The End</h2>
<aside class="notes">
<p>I hope you found this interesting and that it has piqued your interest in information theory. There are links in the description for further watching and reading.</p>
</aside>
</section>
<section id="special-thanks" class="slide level3">
<h3>Special Thanks</h3>
<ul>
<li>Professor John Daugman for teaching this course at University.</li>
<li>Thomas Van Nuffel for the amazing title slide.</li>
<li>Henry Ryder for design feedback and assistance.</li>
<li>Alastair Toft &amp; AJ Weeks for bouncing around ideas and for feedback.</li>
<li>Huw Bowles for organising these talks and providing invaluable feedback.</li>
</ul>
</section>
<section id="social-media" class="slide level3">
<h3>Social Media</h3>
<p>Subscribe to our <a href="https://www.youtube.com/channel/UCahevy2N_tj_ZOdsByl9L-A">YouTube Channel</a>!</p>
<p>More talks available! Chips! Git!</p>
</section>
<section id="further-watching" class="slide level3">
<h3>Further Watching</h3>
<ul>
<li><a href="https://www.youtube.com/watch?v=sMb00lz-IfE">What is NOT Random?</a> - Veritasium</li>
<li><a href="https://www.youtube.com/watch?v=yWO-cvGETRQ">Why Black Holes Could Delete The Universe</a> - Kurzgesagt</li>
<li><a href="https://www.youtube.com/watch?v=_PG-jJKB_do">Intro to Information Theory</a> - Up and Atom</li>
<li><a href="https://www.youtube.com/watch?v=r6Rp-uo6HmI">Why Snow and Confetti Ruin YouTube Video Quality</a> - Tom Scott</li>
</ul>
</section>
<section id="references" class="slide level3">
<h3>References</h3>
<div id="refs" class="references hanging-indent" style="font-size: 0.5em" role="doc-bibliography">
<div id="ref-ActonMike2014">
<p>Acton, Mike. 2014. “CppCon 2014: Mike Acton ‘Data-Oriented Design and C++’.” <a href="https://youtu.be/rX0ItVEVjHc?t=3064">https://youtu.be/rX0ItVEVjHc?t=3064</a>.</p>
</div>
<div id="ref-DaugmanJohn2016">
<p>Daugman, John. 2016. “Information Theory.” <a href="https://www.cl.cam.ac.uk/teaching/1617/InfoTheory/materials.html">https://www.cl.cam.ac.uk/teaching/1617/InfoTheory/materials.html</a>.</p>
</div>
<div id="ref-NvidiaDlss2020">
<p>NVIDIA. 2020. “NVIDIA DLSS 2.0: A Big Leap in AI Rendering.” <a href="https://www.nvidia.com/en-gb/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/">https://www.nvidia.com/en-gb/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/</a>.</p>
</div>
<div id="ref-WikiBlindSpot">
<p>Wikipedia. 2020. “Blind Spot (Vision).” <a href="https://en.wikipedia.org/wiki/Blind_spot_(vision)">https://en.wikipedia.org/wiki/Blind_spot_(vision)</a>.</p>
</div>
</div>
</section>
<section id="careers" class="slide level3" data-background-color="#000">
<h3 data-background-color="#000">Careers</h3>
<div style="font-size: 0.7em">
<p>Electric Square welcomes ambition and talent at every level. With a focus on collaboration, we ensure that everyone benefits from a diverse range of skills and experience.</p>
<ul>
<li>150+ staff across 4 projects</li>
<li>Brighton, Leamington Spa, Singapore</li>
<li>Expertise in Free-To-Play and Live Ops</li>
<li>Track record for quality &amp; innovation</li>
<li>Experience managing top IP</li>
</ul>
<p><a href="mailto:careers@electricsquare.com">careers@electricsquare.com</a></p>
<p><a href="https://www.electricsquare.com/careers/">https://www.electricsquare.com/careers/</a></p>
</div>
<aside class="notes">
<p>If you have watched this online and found it interesting, please do consider applying for a job at Electric Square! It’s a great place to work.</p>
<p>We have positions open for all levels of programmer, from Junior through to Technical Director, including research and development roles.</p>
<p>Electric Square has studios in Brighton, Leamington Spa, and Singapore.</p>
</aside>
</section>
<section id="qa" class="slide level3">
<h3>Q&A</h3>
</section></section>
</div>
</div>
<script src="reveal.js/js/reveal.js"></script>
<script>
// Full list of configuration options available at:
// https://github.com/hakimel/reveal.js#configuration
Reveal.initialize({
// Push each slide change to the browser history
history: true,
// Optional reveal.js plugins
dependencies: [
{ src: 'reveal.js/lib/js/classList.js', condition: function() { return !document.body.classList; } },
{ src: 'reveal.js/plugin/zoom-js/zoom.js', async: true },
{ src: 'reveal.js/plugin/notes/notes.js', async: true }
]
});
</script>
</body>
</html>