*"X person in Y year proved this equation. It's actually quite intuitive, if you think about it... but it turns out that if you do a ton of complex math (given in 2-3 research papers and/or a fat reference textbook), the theory works out rather nicely."*

This is the standard form of a lecture in a graduate CS theory course, and it's the reason such courses can get so hard. The idea is that grad students should be able to handle more detailed theory, so it all gets thrown at us. The reality is that sometimes even the professors cannot fully understand the theory and its derivation, which makes going through the course feel like skating on extremely thin ice.

To me, a theoretical subject is interesting if I can appreciate the theory from first principles, but that seems to be impossible in graduate theory courses. What we're left to do is take as our basis the theory that has developed over the years, understand how it's implemented in systems today, and learn how to implement it ourselves if that becomes necessary.

Few people know exactly how to decide how a dataset of a million 100-dimensional vectors should be handled, but computational learning theory comes to the rescue and tells you (somehow) whether to use ridge or lasso regression, support vector machines, naive Bayes estimators, or some other esoteric method written down somewhere. You're like a magician with a book of spells (a few select research papers) who can use it to solve previously unsolved problems. The few people who actually understand, write, and improve the spell-book are the masters and gods we all aspire to become, somewhere and someday. Right now, it feels like we're just first-years at Hogwarts.
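To make the ridge-regression spell a little less magical, here is a minimal sketch (my own illustration, not from any course) of its closed form on synthetic high-dimensional data, assuming only numpy. The penalty term is what the theory tells you to add when features are plentiful and you fear overfitting.

```python
import numpy as np

# Hypothetical illustration: closed-form ridge regression on synthetic data
# with many features (d = 100) relative to what a naive fit would like.
rng = np.random.default_rng(0)
n, d = 200, 100
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = X @ true_w + 0.1 * rng.standard_normal(n)

def ridge_fit(X, y, lam):
    """Solve (X^T X + lam * I) w = X^T y for the ridge estimator."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_small = ridge_fit(X, y, lam=0.01)
w_large = ridge_fit(X, y, lam=100.0)

# Heavier regularization shrinks the weight vector toward zero,
# trading a little bias for a lot less variance.
print(np.linalg.norm(w_large) < np.linalg.norm(w_small))  # True
```

Lasso swaps the squared penalty for an absolute-value one, which has no closed form but zeroes out whole coordinates; that qualitative difference is exactly the kind of thing the theory lets you predict before running anything.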

As Arthur C. Clarke once said,

*"Any sufficiently advanced technology is indistinguishable from magic"*.