More Big O

The runtime of an algorithm depends on many things:

– The size of the input
– The content/structure of the input
– The type of computer you are using (huge workstation vs laptop)
– The amount of memory the computer has (space available)
– How the program was implemented
– The programming language used
The biggest factor we are considering is how the program was implemented. Even with the most efficient implementation, any algorithm with exponential runtime is essentially said to be an impractical algorithm, and that is where Big O notation comes into analyzing algorithms.

The Big O of an algorithm is determined by analyzing the algorithm and working out how many time steps it will require, as a function of the input size of the problem. “Simple” operations such as arithmetic, evaluating an IF condition, a return statement, etc. take 1 time step. Executing a for or while loop takes n steps, 1 per iteration. We count steps as a way to mathematically express the amount of work required to solve the problem. Since the exact number of steps does not matter, with Big O notation we can ignore “unnecessary details” such as constant steps or single operations that don’t have a big effect on runtime, and focus on the dominant process.

I also learned that to most accurately analyze an algorithm’s efficiency we focus on the “worst possible case”; to do this we look at the worst possible inputs. By knowing the Big O of an algorithm we can guarantee that for any input of size n, completing the process will take at most the worst possible time. For example, consider selection sort: its best, worst and average cases all grow as O(n^2) with the input size, because no matter what it always has to scan the entire remaining list, so its runtime is very predictable.
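To make the step counting concrete, here is a minimal selection sort sketch in Python (my own version, not necessarily the code from lecture). The nested loops always perform about n(n−1)/2 comparisons no matter how the input is ordered, which is why every case is O(n^2):

```python
def selection_sort(lst):
    """Sort lst in place; always makes about n*(n-1)/2 comparisons."""
    n = len(lst)
    for i in range(n):
        # find the index of the smallest remaining value
        smallest = i
        for j in range(i + 1, n):
            if lst[j] < lst[smallest]:
                smallest = j
        # swap it into position i
        lst[i], lst[smallest] = lst[smallest], lst[i]

data = [5, 2, 9, 1, 7]
selection_sort(data)
print(data)  # [1, 2, 5, 7, 9]
```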

I’ve learned that the goal of Big O notation is to show us how the algorithm’s execution time grows with respect to the size of the problem (the input). Professor Heap told me to ask myself: does the runtime of a program grow proportionally to the size of the input?

Here is a table of the Big O of the sorting algorithms we discussed in class and their worst, average and best case performances, but remember there are other complexities such as O(N^2), O(N^3), O(N log N), O(N!) or even O(∞) (which would require an infinite amount of time, and that’s where we reach computational limits).

[Screenshot: table of worst, average and best case running times for the sorting algorithms from class]
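To get a rough feel for how differently these classes grow, here is a quick Python snippet of my own (not from class) comparing a few of them:

```python
import math

# Compare how common complexity classes grow as n increases.
for n in [10, 100, 1000]:
    print(f"n = {n:>4}:  n log n ≈ {n * math.log2(n):>10,.0f}   "
          f"n^2 = {n**2:>9,}   2^n has {len(str(2**n))} digits")
```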

Next time I will tell you about proving Big O.

Until next time, yours truly, – CodeShark



Some neat stuff

In class we had a challenge to determine an algorithm for finding the pattern of the creases when you fold a strip of paper a certain number of times. Each time you fold, the thickness doubles and the number of creases goes from k to 2k + 1. So, unintuitively, if you fold a piece of paper in half 42 times it would be thick enough to span the distance to the moon. That’s some crazy exponential growth I wouldn’t expect.
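Here is a little sketch of one way to generate the crease pattern (my own take, not necessarily the class solution): after each fold, the new pattern is the old pattern, then a valley crease, then the old pattern reversed with valleys and mountains swapped.

```python
def creases(folds):
    """Return the crease pattern (V = valley, M = mountain) after folding."""
    pattern = []
    for _ in range(folds):
        flipped = ["V" if c == "M" else "M" for c in reversed(pattern)]
        pattern = pattern + ["V"] + flipped
    return pattern

for n in range(1, 4):
    print(n, "".join(creases(n)))  # 1 V / 2 VVM / 3 VVMVVMM — count goes k -> 2k + 1
```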

I decided to do some more research into this topic and, given what I’ve found below, the math seems correct, which is hard to believe! haha!

Typical piece of paper = 0.097 mm thick
Distance to the moon = 384,400 km
(0.097 × 2^42) / 1,000,000 ≈ 426,610.5 km
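Here’s a quick Python sanity check of that arithmetic (my own, just to verify the numbers):

```python
THICKNESS_MM = 0.097   # typical sheet of paper, in mm
MOON_KM = 384_400      # average distance to the moon, in km

folds = 42
thickness_km = THICKNESS_MM * 2 ** folds / 1_000_000  # mm -> km
print(f"After {folds} folds: {thickness_km:,.1f} km")  # ~426,610.5 km
print("Past the moon?", thickness_km > MOON_KM)        # True
```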

The math seems quite accurate, but as humans it’s hard for us to imagine anything of an exponential nature. Considering I’m taking linear algebra, we are inclined to think in a linear way, with systems of linear equations, haha. A more personal example of linear thinking is linear progression, or linear periodization, in terms of weight and strength gained at the gym. You slowly and consistently progress linearly in the total amount of weight pushed, but when you look at the bigger picture the strength gains look almost exponential. That’s just my example of the world where math and science collide.

Until next time, yours truly,

CodeShark


Week 3

This week we learned about:

– AND and OR / Conjunction and Disjunction
– NOT / Negation, and how to push negation into the statement as far as possible
– Truth tables and equivalence (see the little check after this list)
– Standard Equivalencies and Laws
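As a quick illustration of checking an equivalence with a truth table (my own example in Python, not something from class), here is one of De Morgan’s laws, which is also what lets us push a negation inward:

```python
from itertools import product

# Verify De Morgan's law: not (p and q)  ==  (not p) or (not q)
print("p      q      lhs    rhs    equal")
for p, q in product([True, False], repeat=2):
    lhs = not (p and q)
    rhs = (not p) or (not q)
    print(f"{p!s:<6} {q!s:<6} {lhs!s:<6} {rhs!s:<6} {lhs == rhs}")
```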

My professor said these are the fundamentals of logic that we will need to master for when we learn proofs. In this week’s tutorial exercises we had to represent English statements in logic notation. The last 2 questions seemed to be the most difficult to translate correctly:

f) No course has more than two prerequisites.
g) Some courses have the same prerequisites.
Since it’s rather difficult to translate these directly into logic notation, my TA suggested we rephrase statement f): if a course has more than two prerequisites, then two of them are equivalent courses.
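For the record, here is my own attempt at the symbolic translations (this is just my notation, not the official solution; assume P(x, c) means “x is a prerequisite of course c”):

f) ∀c, ∀x, ∀y, ∀z, (P(x, c) ∧ P(y, c) ∧ P(z, c)) ⇒ (x = y ∨ x = z ∨ y = z)
g) ∃c₁, ∃c₂, c₁ ≠ c₂ ∧ ∀x, (P(x, c₁) ⇔ P(x, c₂))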
[Screenshot: standard equivalences]
We are getting our first assignment next week, so stay tuned to hear about it!
Until next time,
Yours Truly, CodeShark

Week 9 | Binary Search Trees!

We learned about Binary Search Trees this week. You may ask, what in the world is a binary search tree? There are 4 basic conditions for a binary search tree:

1. The left subtree of a node contains only nodes with values less than the node’s value.
2. The right subtree of a node contains only nodes with values greater than the node’s value.
3. The left and right subtree each must also be a binary search tree.
4. There must be no duplicate nodes.

Intuitively this is a natural way to construct and maintain ordered data. A real world use of a BST (Binary Search Tree) would be for a company that wants to keep track of the date they first started doing business with each customer and have the ability to search through this database quickly. This can be done with a BST because it keeps its data ordered: smaller values go into the left subtree and greater values into the right, making it much faster to search the tree structure for said company to access information. It’s also good to know that the rightmost node holds the largest value in the entire tree, so finding the maximum is as simple as walking right. Searching a reasonably balanced Binary Search Tree takes O(log n), which is very similar to binary search, in that with every step we halve the number of values we have to search through.
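To tie this together, here is a minimal BST sketch in Python (my own illustration, not code from lecture) showing how insertion and search follow the four conditions above:

```python
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None   # subtree of values less than self.value
        self.right = None  # subtree of values greater than self.value

def insert(node, value):
    """Insert value into the subtree rooted at node; return the new root."""
    if node is None:
        return BSTNode(value)
    if value < node.value:
        node.left = insert(node.left, value)
    elif value > node.value:
        node.right = insert(node.right, value)
    return node  # duplicates are ignored (condition 4)

def search(node, value):
    """Return True iff value is in the subtree rooted at node."""
    while node is not None:
        if value == node.value:
            return True
        # like binary search, each step discards one whole subtree
        node = node.left if value < node.value else node.right
    return False

root = None
for v in [8, 3, 10, 1, 6, 14]:
    root = insert(root, v)
print(search(root, 6), search(root, 7))  # True False
```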