Looking back at the MITx 6.00x course, one concept that I continued to struggle with even in the final exam was Big O notation. The overall concept doesn’t give me any trouble. Professor Grimson is a superb lecturer who explained the idea clearly: Big O notation is used to compare the complexity of algorithms in terms of the relative time or memory they take to run in a worst-case scenario (i.e., when you have ginormous input).
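The idea is easiest to see by counting operations rather than timing anything. As a sketch of my own (not an example from the course): linear search over n items makes O(n) comparisons in the worst case, while binary search on sorted input makes only O(log n), and the gap only becomes dramatic as n grows.

```python
# Count worst-case comparisons for linear search, O(n), versus
# binary search, O(log n), on sorted input of increasing size.

def linear_search_steps(items, target):
    """Return the number of comparisons linear search makes."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    """Return the number of comparisons binary search makes (items sorted)."""
    steps = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (10, 1000, 100000):
    data = list(range(n))
    worst = n - 1  # worst case for linear search: target is the last element
    print(n, linear_search_steps(data, worst), binary_search_steps(data, worst))
```

With n = 100,000, linear search needs 100,000 comparisons while binary search needs fewer than 20, which is exactly the kind of asymptotic difference Big O is meant to capture.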
I just finished the final exam for 6.00x (Intro to CS and Programming), the MITx course I was taking in the fall. I have to admit that my stamina took a nosedive near the end of this course.
It’s really hard to keep up coursework over the holidays, and some hiccups in the course administration — mostly delays in getting new assignments posted — messed up my rhythm. So I put off doing the final problem set and watching the last couple sets of lectures until last week. And I have to admit something possibly scandalous: I didn’t bother finishing the final exam.