Looking back at the MITx 6.00x course, one concept that I continued to struggle with even in the final exam was Big O notation. The overall concept doesn’t give me any trouble. Professor Grimson is a superb lecturer who explained the idea clearly: Big O notation is used to compare the complexity of algorithms in terms of how their run time or memory use grows with the size of the input, in the worst-case scenario (i.e. when you have ginormous input).
As I was going through the course, I watched the lectures on edX and an optional recitation posted on the MIT OpenCourseWare website. I got it … sorta. I definitely understood constant-time or O(1) functions, which have the same run-time regardless of the input. And linear or O(n) functions were also fairly clear: they run through the entire input, so the run-time grows at the same rate as the size of the input. I even understood quadratic or O(n²) functions: whereas a linear function goes through the input once, these functions go through the input once for every element of the input. (That sounds complicated, but it’s often the result of nested loops.)
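To make those three classes concrete, here are some toy functions of my own (not from the course) that illustrate each one:

```python
def get_first(items):
    # O(1): a single operation, no matter how long the list is.
    return items[0]

def linear_search(items, target):
    # O(n): in the worst case (target absent or at the end),
    # the loop touches every element once.
    for item in items:
        if item == target:
            return True
    return False

def has_duplicates(items):
    # O(n^2): for each element, the inner loop scans the whole
    # list again -- the classic nested-loop pattern.
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```

Doubling the input doesn’t change `get_first` at all, roughly doubles the work in `linear_search`, and roughly quadruples it in `has_duplicates`.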
But as the math gets more complicated, things go a little fuzzy. For instance, I only understood logarithmic or O(log n) functions as being the complexity of a binary search. And I connected the idea of loglinear or O(n log n) complexity with merge sort algorithms, but I’m not sure I could identify it in any other context. It has been more than ten years since I last worked with logarithms, and they remain in the “Oh, I vaguely remember calculating that” part of my brain. I was pretty happy that I had grasped the basic concepts.
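For my own reference, here is a sketch (my code, not the course’s) of the two examples I keep coming back to, with comments on where the logarithm comes from:

```python
def binary_search(sorted_items, target):
    # O(log n): each comparison discards half of the remaining
    # range, so a list of n items takes at most ~log2(n) + 1 steps.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

def merge_sort(items):
    # O(n log n): the list is halved ~log2(n) times, and each
    # level of recursion does O(n) work merging halves back together.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The common thread is "repeated halving": whenever an algorithm cuts the problem in half at each step, a log shows up in its complexity.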
So yesterday I went back and watched those lectures again. Let me just say that Sarina Canelake (who led the recitation and did much of the legwork for the 6.00x course that I took) is a total rock star. I still need to review some math and practice identifying the complexity of more algorithms, but I feel way more confident about Big O notation after watching this recitation a second time: