Thursday 5 December 2013

Sorting and Efficiency

Here's a quick review of three types of array sorting algorithms:

  • Selection sort:
    • First, the algorithm finds the smallest item in the array and places it in the front. Then, it finds the second smallest item in the array and places it next to that, and so on. Pretty simple.
      • O(n^2) efficiency
  • Quicksort:
    • A random element, called the pivot, is chosen, and then we create two lists: one that contains all elements smaller than the pivot and one that contains all elements larger than it. Then we quicksort those two lists and join them back together around the pivot, and so on.
      • O(n log(n)) efficiency on average (O(n^2) in the unlucky worst case)
  • Merge sort:
    • Merge sort relies on a process called "merging", where two sorted arrays are compared to see which front value is smaller, and that value is appended onto a new list. In merge sort, the list is split in half, we recursively run mergesort on each half, and then the two sorted halves are merged.
      • O(n log(n)) efficiency
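The three algorithms above can be sketched in Python roughly like this (these are my own minimal versions for illustration, not the course's code):

```python
import random

def selection_sort(lst):
    """Sort lst in place by repeatedly moving the smallest
    remaining item to the front."""
    for i in range(len(lst)):
        # find the index of the smallest item in lst[i:]
        smallest = min(range(i, len(lst)), key=lambda j: lst[j])
        lst[i], lst[smallest] = lst[smallest], lst[i]
    return lst

def quicksort(lst):
    """Return a sorted copy: pick a random pivot, sort the smaller
    and larger items separately, and join the pieces around the pivot."""
    if len(lst) <= 1:
        return lst
    pivot = random.choice(lst)
    smaller = [x for x in lst if x < pivot]
    equal = [x for x in lst if x == pivot]
    larger = [x for x in lst if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

def merge(left, right):
    """Merge two sorted lists by repeatedly appending the
    smaller front value onto a new list."""
    result = []
    while left and right:
        if left[0] <= right[0]:
            result.append(left.pop(0))
        else:
            result.append(right.pop(0))
    return result + left + right

def mergesort(lst):
    """Return a sorted copy: split in half, sort each half, merge."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    return merge(mergesort(lst[:mid]), mergesort(lst[mid:]))
```

The O(n^2) vs O(n log n) difference comes from selection sort scanning the whole remainder of the list on every pass, while the other two only ever split and recombine.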
Here's a graph we made in lab 9 on the efficiencies of various sorting algorithms.

http://puu.sh/5Dbo1.png

As we can see, selection sort's performance tapers off quadratically compared to quicksort and mergesort when we apply it to a larger data set.

Wednesday 27 November 2013

Test 2 Retrospective

Test 2 was quite fun, as it was essentially an exercise on recursion on abstract data types. Although I don't feel like I've mastered recursion, I felt like my logic was sound when writing my answers. I did alright, but I feel like I still need practice "programming" on paper. Mostly because I just wish I could "run" my code to see if it would work. Guess I gotta stop using my IDE as a crutch, and just work on visualizing the logic in my code.

Sunday 3 November 2013

Exercise 6

Exercise 6 was really easy for me, though I'm not sure I solved it the "intended" way.

For Part A, I used a while loop to append the item of each node to a temporary list, and then iterated over the list to see if there were any matches.
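A minimal sketch of that Part A approach, assuming a simple singly linked node class (the names here are my own guesses, not necessarily the exercise's):

```python
class Node:
    """A bare-bones singly linked list node (hypothetical names)."""
    def __init__(self, item, next=None):
        self.item = item
        self.next = next

def has_duplicates(front):
    """Walk the list with a while loop, appending each node's item
    to a temporary list, then check that list for matches."""
    items = []
    node = front
    while node is not None:
        items.append(node.item)
        node = node.next
    # any item appearing more than once counts as a match
    return any(items.count(x) > 1 for x in items)
```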

Part B was also quite straightforward, or so I thought, until I ran into a problem. Initially, I assumed we were to return the ITEM of the maximum node, i.e. for BST([1,2,44,3]) I had mistakenly expected the output to be 44. However, we were actually supposed to return a reference to the node object that held that maximum item. This was an unfamiliar problem to me: how was I supposed to find the reference to the object after finding the max value?

I ended up brute forcing the problem: I appended all the node objects of the tree into a list, sorted it by item in descending order, and then returned index 0 of that list.
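That brute-force idea might look something like this sketch (the node class and attribute names are assumptions on my part):

```python
class BSTNode:
    """A minimal BST node (hypothetical attribute names)."""
    def __init__(self, item, left=None, right=None):
        self.item = item
        self.left = left
        self.right = right

def max_node(root):
    """Brute force: gather every node into a list, sort the list
    by item in descending order so the largest node sits at index 0,
    and return that node object (not just its item)."""
    nodes = []
    def gather(node):
        if node is not None:
            nodes.append(node)
            gather(node.left)
            gather(node.right)
    gather(root)
    nodes.sort(key=lambda n: n.item, reverse=True)
    return nodes[0]
```

A neater trick the exercise may have wanted: in a BST the maximum is always the rightmost node, so you can just follow .right links until they run out.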

Sunday 27 October 2013

Exercise 5

I wanted to write about some of the problems I ran into while doing Exercise 5 earlier this week, but felt like I should wait until the due date of the assignment before posting solutions or anything.

e5a.py was relatively simple. I'm sure there were many different ways to solve the problem, but I worked under the assumption that Danny wanted us to do it recursively. The "base case" here is just setting the depth attribute of the current node to depth, and then we reuse the function on each child with a value of depth + 1.
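A rough sketch of that recursive idea, with made-up node and attribute names (the exercise's actual class may differ):

```python
class TreeNode:
    """A minimal tree node; the names here are my own guesses."""
    def __init__(self, children=None):
        self.children = children or []
        self.depth = None

def set_depth(node, depth=0):
    """Record the current depth on this node, then reuse the
    function on each child with depth + 1."""
    node.depth = depth
    for child in node.children:
        set_depth(child, depth + 1)
```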

e5b.py was surprisingly difficult, but after having solved it, I don't know why I had so much trouble with it. I think the problem I had was that I tried to go in without a plan. But breaking down the problem makes it seem very straightforward:

-Is the cargo of the node a leaf? If so, append it to a list.
    -What properties does a node have to have to be considered a leaf?
-Is the cargo of the node an internal node? If so, append it to a separate list.
    -What properties does a node have to have to be considered an internal node?

The main issue I had was actually implementing the code: the lists I created kept reinitializing every time the function ran. I fixed this by experimenting on my own and looking up sample list functions on StackOverflow.
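A sketch of that plan, including one way to stop the lists from reinitializing: create them once at the top-level call and pass the same two lists down through every recursive call (all names here are hypothetical):

```python
class TreeNode:
    """A minimal tree node with a cargo value (hypothetical names)."""
    def __init__(self, cargo, children=None):
        self.cargo = cargo
        self.children = children or []

def classify(node, leaves=None, internal=None):
    """Split a tree's cargo values into leaves and internal nodes.
    The lists are created once on the first call and shared by all
    recursive calls, so they aren't reinitialized each time."""
    if leaves is None:
        leaves, internal = [], []
    if not node.children:
        # no children -> this node is a leaf
        leaves.append(node.cargo)
    else:
        # has children -> this node is internal
        internal.append(node.cargo)
        for child in node.children:
            classify(child, leaves, internal)
    return leaves, internal
```

Passing the accumulators as parameters (or building them in a wrapper function) avoids the classic bug where a list defined inside the recursive function starts over empty on every call.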

Wednesday 16 October 2013

On Recursion

Foreword: Before I talk about recursion, please keep in mind that recursion is still a concept I'm struggling to grasp. In the coming weeks I'll probably be talking about recursion more in an attempt to understand it better.

Recursion is a way of solving a problem that contains smaller, similar instances of itself. A common example of this is a function that calls itself. Generally, a recursive method deals with a "base case" directly, and then calls itself on smaller instances of the problem until everything reduces to something the base case was designed to solve.
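The classic illustration of a base case plus a self-call is factorial:

```python
def factorial(n):
    """n! computed recursively: the base case handles the smallest
    input, and each recursive call shrinks the problem toward it."""
    if n <= 1:
        return 1                      # base case: nothing left to multiply
    return n * factorial(n - 1)       # smaller instance of the same problem
```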

Recursion seems to be a very powerful tool when used correctly. Once you are familiar with implementing recursive functions, it lets you divide a problem into smaller, easier-to-manage cases. Hopefully I become familiar with recursion soon.

On Object-Oriented Programming

Object-Oriented Programming is, in the eyes of a novice such as myself, the use of "objects": instances built from a class, so each object has the same properties (referred to by most programmers as "attributes") and the same functions (or "methods"). Another important thing to note is that you can create subclasses, which share a parent class's attributes and methods but can be specialized further. Obviously, this is an extremely reductionist summary, as there are probably a ton of intricacies that I am still unaware of.
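A toy sketch of what I mean by a class and a subclass (all names invented for illustration):

```python
class Animal:
    """A class bundles attributes (here, name) with methods."""
    def __init__(self, name):
        self.name = name

    def speak(self):
        return "..."

class Dog(Animal):
    """A subclass shares Animal's attributes and methods,
    but specializes speak further."""
    def speak(self):
        return self.name + " says woof"
```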

At first, I had a lot of trouble understanding objects and subclasses, but I found a video that really helped me visualize and understand the practical usage of object-oriented programming:

http://youtu.be/yJunB_tV1QI

I think that object-oriented programming is extremely useful, as it allows the programmer to cut down on a lot of repeated code, especially when dealing with a problem that requires code to be reused. It makes the code look a lot "cleaner", in my experience.