Mastering Big O Notation

Looking back at the MITx 6.00x course, one concept I continued to struggle with even in the final exam was Big O notation. The overall concept isn't what gives me trouble. Professor Grimson is a superb lecturer who explained the idea clearly: Big O notation compares algorithms by how their running time or memory use grows in the worst case, i.e. when the input gets ginormous.
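
To make that a little more concrete, here's a quick sketch of my own in Python (the course's language); the function names are just illustrative, not from the lectures. A single pass over a list grows linearly, O(n), while comparing every pair of elements grows quadratically, O(n²), and the difference only really bites once the input is large.

```python
def linear_search(items, target):
    """O(n): in the worst case (target not present), every element is checked once."""
    for item in items:
        if item == target:
            return True
    return False


def has_duplicate(items):
    """O(n^2): in the worst case (no duplicates), every pair of elements is compared."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input roughly doubles the work for `linear_search`, but roughly quadruples it for `has_duplicate`; that growth rate, not the exact timings, is what Big O captures.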

As I was going through the course, I watched the lectures on edX and an optional recitation posted on the MIT OpenCourseWare website. I got it … sorta.
