Week 12 | Wrap up and reflection

Looks like another great course is coming to an end! I really enjoyed my endeavours in such abstract material; as crazy as it sounds, it really taught me how to think.

During our last week we were presented with some interesting new material that we didn't have much opportunity to explore, such as set countability, diagonalization, and proof by induction. Induction is a really powerful proof technique; I've seen it in my linear algebra textbook, where the author used it to prove many theorems, such as showing that the determinant of the identity matrix equals 1.

[screenshot: induction proof that det(Iₙ) = 1]

I know this induction proof is valid because, if you look closely, the determinant of a triangular matrix is equal to the product of its diagonal entries, and the identity matrix is triangular with 1s on the diagonal.
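I don't have the textbook in front of me, but the induction probably runs along these lines (my own sketch, expanding along the first row):

Base case: det(I₁) = det([1]) = 1.
Inductive step: assume det(Iₙ) = 1. Expanding det(Iₙ₊₁) along its first row, the only nonzero entry is the 1 in position (1,1), and deleting its row and column leaves Iₙ, so det(Iₙ₊₁) = 1 · det(Iₙ) = 1.
Conclusion: by induction, det(Iₙ) = 1 for every n ≥ 1.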

Here is the general proof structure using induction:

[screenshot: general structure of a proof by induction]
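In symbols, the recipe (my own summary rather than the exact slide) is: to prove ∀n ∈ ℕ, P(n), you prove

1. Base case: P(0) holds.
2. Inductive step: ∀n ∈ ℕ, P(n) ⇒ P(n+1), i.e. assuming P(n) (the induction hypothesis), derive P(n+1).

Then the principle of induction lets you conclude ∀n ∈ ℕ, P(n).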

I'm going to be exploring this proof technique much more next semester, so stay tuned!

Check out my classmates' blogs for another perspective on our experiences in this course: http://nanalelfecsc165slog.wordpress.com/ and http://zanecsc165.blogspot.ca/


Computational Limits | B00M

Computers solve problems using algorithms, in a systematic, repeatable way. However, you may ask: are there limits to what a computer can compute? More specifically, in class we discussed the Halting Problem: the problem of determining whether a given program, on a given input, will at some point stop running (halt) or continue to loop forever (not halt).
Crazily enough, in 1936, before computers even existed, Alan Turing proved that an algorithm that computes whether a program will halt or not halt for all possible program-input pairs CANNOT exist. There is simply no way of computing this function. At first I was stubborn and had a hard time believing this is true, given that computers didn't exist when Turing produced his proof. Then I decided to look into it further and discovered Gödel's incompleteness theorems (which, roughly, concern the limits of what can be proved within a formal axiomatic system of arithmetic, and which also involve undecidable propositions). That gave me some background knowledge, but I was still on the edge of fully believing that the halting problem is non-computable. After thinking about it for days, the conclusion finally became clear.
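The argument that finally convinced me is the diagonalization trick. Here is my own rough sketch of it in Python (halt and confused are just names I made up, not anything from class):

#HaltingSketch
def halt(f, i):
    # Pretend this returns True exactly when f(i) eventually stops.
    # Turing's result is that no such function can actually be written.
    ...

def confused(f):
    if halt(f, f):       # if f(f) would halt...
        while True:      # ...then loop forever
            pass
    else:                # otherwise stop right away
        return

# Now ask: does confused(confused) halt?
# If it halts, then halt(confused, confused) was True, so it loops forever.
# If it loops forever, then halt(confused, confused) was False, so it returns.
# Either way we get a contradiction, so halt cannot exist.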

Now you might be thinking: what about artificial intelligence, Watson for example; can't these supercomputers solve problems such as the halting problem? Well, some people are concerned that we are on the road to a technological singularity in the computing world, but let's have some faith that Turing's proof of the undecidability of the halting problem is strong enough to prevent Skynet from ever happening, haha!

Computability Terminology:

Definition 1:  Let f:I→O be a function.  Then f is well-defined ⟺ for each x∈I, there is a unique y∈O such that f(x)=y.

Definition 2: Let g, h : I→O be well-defined functions.

Then g is computable ⟺ there is a program P such that ∀x∈I, g(x)=P(x).

Reductions: g reduces to h ⟺ g can be implemented using h.

My professor showed us a proof technique: to prove that a function is computable, all we have to do is show that it reduces to a computable function. The contrapositive is just as useful: if a non-computable function reduces to h, then h cannot be computable either.
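Here is a toy example of a reduction that I made up myself (not one from class), just to make the definition concrete:

#ReductionToy
def add(x, y):        # assume add is computable (it clearly is)
    return x + y

def double(x):        # double reduces to add: it is implemented using add,
    return add(x, x)  # so double must be computable too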

Until next time,
Yours Truly, CodeShark

Assignment Wrap Up

Last week was really busy: our proofs problem set was due, and I ended up doing really well! When I look back at this assignment, one question in particular stands out.

[screenshot: the claim from the problem set, an ε-δ statement about the floor function]

This claim was not true, because the floor function has jump discontinuities at the integers.

[screenshot: the negation of the claim and my proof of it]

Given the ε-δ limit definition, we know intuitively that the discontinuous floor function cannot produce a two-sided limit at every value of x. After many attempts to find values that would construct a counterexample and prove the negation, I eventually got this proof correct, after countless pieces of paper were crumpled up. By choosing x = 2 and ε = 1, I was able to prove the negation of the statement.
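Roughly, assuming the original claim was stated in the usual ε-δ style (something like ∀x ∈ ℝ, ∀ε > 0, ∃δ > 0, ∀y ∈ ℝ, |y − x| < δ ⇒ |⌊y⌋ − ⌊x⌋| < ε; this is my reconstruction, not the exact assignment wording), the negation is:

∃x ∈ ℝ, ∃ε > 0, ∀δ > 0, ∃y ∈ ℝ, |y − x| < δ ∧ |⌊y⌋ − ⌊x⌋| ≥ ε.

Picking x = 2 and ε = 1: for any δ > 0, let y = 2 − min(δ, 1)/2. Then |y − 2| < δ, but ⌊y⌋ = 1 while ⌊2⌋ = 2, so |⌊y⌋ − ⌊2⌋| = 1 ≥ ε.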

 

Until next time,

CodeShark


More Big O

The runtime of an algorithm depends on many things:

The size of the input
The content/structure of the input
The type of computer you are using (huge workstation vs. laptop)
The amount of memory the computer has (space available)
How the program was implemented
The programming language used.
The biggest factor we considered is how the program was implemented. Even with the most efficient implementation, any algorithm with exponential runtime is essentially considered impractical, and that is where Big O notation comes into analyzing algorithms.

The Big O of an algorithm is determined by analyzing the algorithm and counting how many time steps it requires as a function of the input size. "Simple" operations such as arithmetic, evaluating an if condition, or a return statement take 1 time step each. A for or while loop over n items takes about n steps, 1 per iteration. Counting steps gives us a mathematical way to express the work required to solve the problem. Since the exact number of steps does not matter, Big O notation lets us ignore "unnecessary details" such as constant factors or single operations that don't have a big effect on runtime, and focus on the dominant term. I also learned that to analyze an algorithm's efficiency most honestly we focus on the "worst possible case", so we look at the worst possible inputs. By knowing the Big O we can guarantee that, no matter the input of size n, completing the process will never take longer than the worst-case bound. For example, consider selection sort: its best, worst and average cases all grow as O(n²) in the input size, because no matter what, it always has to scan the rest of the list on every pass, so its runtime is very predictable.
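To make the step counting concrete, here is a rough selection sort in Python (my own version, not Professor Heap's code). The nested loops perform about n·(n−1)/2 comparisons regardless of the input, which is where the O(n²) comes from:

#SelectionSort
def selection_sort(L):
    for i in range(len(L)):            # outer loop: n iterations
        smallest = i
        for j in range(i + 1, len(L)): # inner loop: about n - i steps
            if L[j] < L[smallest]:     # one "simple" comparison step
                smallest = j
        L[i], L[smallest] = L[smallest], L[i]
    return L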

I've learned that the goal of Big O notation is to show us how an algorithm's execution time grows with respect to the size of the problem (the input). Professor Heap told me to ask myself: does the runtime of a program grow in proportion to the size of the input?

Here is a table of the Big O of the sorting algorithms we discussed in class, with their worst, average and best case performances. But remember there are other complexities too, such as O(N²), O(N³), O(N log N), O(2^N), O(N!) or even O(∞) (which would require an infinite amount of time, and that's where we reach computational limits).

[screenshot: table of worst, average and best case complexities of the sorting algorithms discussed in class]
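From memory (and from the standard references), the entries for the sorts we covered were roughly:

Selection sort: best O(n²), average O(n²), worst O(n²)
Insertion sort: best O(n), average O(n²), worst O(n²)
Merge sort: best O(n log n), average O(n log n), worst O(n log n)
Quicksort: best O(n log n), average O(n log n), worst O(n²)
Timsort: best O(n), average O(n log n), worst O(n log n)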

Next time I will tell you about proving Big O statements!

Until next time,
yours truly, – CodeShark

 

Asymptotic Analysis

Over the past few weeks we have been exploring sorting algorithms and the efficiency of functions!

A sorting algorithm is an algorithm that rearranges the items in a list into ascending order (i.e. smallest items leftmost and largest items rightmost). In class we looked at many different types of sorting algorithms and the drawbacks of choosing one versus another. We also learned how to compare their efficiency.

For example, the sorting algorithms we discussed in these past 2 weeks include Quicksort, Mergesort and Timsort.

#QuickSort
def quick(L):
    if len(L) > 1:
        pivot = L[0]  # there are much better ways of choosing the pivot!
        smaller_than_pivot = [x for x in L[1:] if x < pivot]
        larger_than_pivot = [x for x in L[1:] if x >= pivot]
        return quick(smaller_than_pivot) + [pivot] + quick(larger_than_pivot)
    else:
        return L

Quicksort works by picking a pivot from the list (it can be random, or simply the first element) and reordering the list so that values less than the pivot come before it and values greater than the pivot come after it. This process is repeated recursively on the sublists until the list is fully sorted.

Merge sort works by splitting the list in half, recursively sorting each half, and then merging the two sorted halves back together until there is one sorted list. Professor Danny said merge sort is very predictable, since its average, best and worst case runtime is O(n log n).
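Here is a minimal sketch of the idea in Python (my own version, not the code from lecture):

#MergeSort
def merge_sort(L):
    if len(L) <= 1:
        return L
    mid = len(L) // 2
    left = merge_sort(L[:mid])    # sort each half recursively...
    right = merge_sort(L[mid:])
    return merge(left, right)     # ...then merge the sorted halves

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]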

The last sorting algorithm we looked at this week is Timsort, which uses a combination of merge sort and insertion sort depending on the situation. This algorithm is very complex, and I can't properly explain what factors it uses to decide which sort to apply to a given input. Think of Timsort as an adaptive merge sort with crazy good performance.

Furthermore, I have noticed that it is very common for computer programs to look very similar, especially the simple ones. This poses an interesting question: when multiple implementations solve the same task but look different, how can we tell which program is better or more efficient than the other? It seems reasonable that the time required to solve a larger case would be greater than for a smaller case. Now you may be thinking: why would we want to solve the same problem in different ways? That might raise an Exception!

I can’t believe this fantastic course is almost over! I hope you now have an understanding of how computer scientists measure algorithm performance with regards to input size and time.

Until next time,
yours truly, – CodeShark

Some neat stuff

In class we had a challenge to determine an algorithm for finding the pattern of creases when you fold a strip of paper a certain number of times. Each time you fold, the number of creases doubles. Surprisingly, if you could fold a piece of paper in half 42 times, it would be thick enough to span the distance to the moon. That's some crazy exponential growth I would not have expected.

I decided to do some more research into this topic, and given what I've found below, the math seems correct, which is hard to believe! haha!

Typical piece of paper = 0.097 mm thick
Distance to the moon = 384,400 km
(0.097 × 2^42) / 1,000,000 = 426,610.5 km
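A quick sanity check in Python (1 km is 1,000,000 mm, so dividing the thickness in mm by 10^6 gives km):

#PaperFolding
thickness_mm = 0.097
folds = 42
print(thickness_mm * 2 ** folds / 1000000)  # about 426610.5 km, past the moon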

The math seems quite accurate, but as humans it's hard for us to imagine anything exponential in nature. Since I'm taking linear algebra, I'm inclined to think linearly, in terms of systems of linear equations, haha. A more personal example of linear thinking is linear progression (linear periodization) of the weight you lift at the gym: you progress slowly and consistently, adding a little weight at a time, but when you step back and look at the bigger picture the strength gains feel almost exponential. That's just my example of where math and science collide.

Until next time, yours truly,

CodeShark


Week 3 | Logic and Equivalences

This week we learned about:

– AND and OR / conjunction and disjunction
– NOT / negation, and how to push a negation as far into a statement as possible
– Truth tables and equivalence
– Standard equivalences and laws

My professor said these are the fundamentals of logic that we will need to master for when we learn proofs. In this week's tutorial exercises we had to represent English statements in logic notation. The last 2 questions seemed to be the most difficult to translate correctly:

f) No course has more than two prerequisites.
g) Some courses have the same prerequisites.

Since f) is rather difficult to translate directly into logic notation, my TA suggested we rephrase the statement: if a course has more than two prerequisites, then two of them are the same course. A translation along those lines is sketched below.
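Here is one way it might be written out, using my own made-up predicate name Prereq(p, c) for "p is a prerequisite of c" over the set C of courses (so this is just my attempt, not the official answer):

∀c ∈ C, ∀p₁, p₂, p₃ ∈ C, (Prereq(p₁, c) ∧ Prereq(p₂, c) ∧ Prereq(p₃, c)) ⇒ (p₁ = p₂ ∨ p₁ = p₃ ∨ p₂ = p₃)

In words: any three prerequisites you pick for a course cannot all be distinct, which is exactly "no more than two prerequisites."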
[image: table of standard equivalences from class]
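From memory, the usual ones on that table include:

¬(P ∧ Q) ⟺ ¬P ∨ ¬Q (De Morgan's law)
¬(P ∨ Q) ⟺ ¬P ∧ ¬Q (De Morgan's law)
P ⇒ Q ⟺ ¬P ∨ Q (implication)
P ⇒ Q ⟺ ¬Q ⇒ ¬P (contrapositive)
¬¬P ⟺ P (double negation)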
We are getting our first assignment next week, so stay tuned to hear about it!
Until next time,
Yours Truly, CodeShark

Discrete Mathematics Week 1 and 2: Introduction

Hello World, this is my first slog for CSC165, a discrete mathematics course I am taking this semester. Those of you new to my blog may ask what a slog is; well, a slog is a course log used to document my experience in this course, as well as to tell some funny jokes!

CSC165 takes a very mathematical approach to computer science. This course is more about communication and problem solving; not your typical math problem solving, but rather how to communicate our ideas to others. This week we discussed the basic principles of logic, such as the right amounts of ambiguity and precision, and the balance between them, needed for a clear and concise logical argument. We also learned how to represent statements in logic notation, using quantifiers to make claims about a set. I believe we use symbols to make our arguments precise, and also because logic notation (symbols) is much more convenient and faster for doing mathematical work on paper.

I am a math person, so learning new areas like logic in computer science is fascinating to me. I really like how logic fills the gap between what humans understand and how computers think. Let's be honest, computers are dumb.

Here is an example of using Venn diagrams to express mathematical statements with arbitrary P and Q.

[Venn diagram illustrating P and Q]

To hear more about my endeavors with Discrete Mathematics stay tuned for more!

Until next time, Peace out!

– yours truly, "Code Shark"

 

Week 9 | Binary Search Trees!

We learned about binary search trees this week. You may ask: what in the world is a binary search tree? There are 4 basic conditions for a binary search tree:

1. The left subtree of a node contains only nodes with values less than the node's value.
2. The right subtree of a node contains only nodes with values greater than the node's value.
3. The left and right subtree each must also be a binary search tree.
4. There must be no duplicate nodes.

Intuitively this is a natural way to construct and maintain an ordered collection. A real-world use of a BST (binary search tree) would be for a company that wants to keep track of the date they first started doing business with each customer and have the ability to search through this database quickly. A BST works well here because it stores smaller values in left subtrees and larger values in right subtrees, which makes it much faster to search the tree structure for the information the company needs. It's also good to know that the rightmost node holds the largest value in the entire tree. Searching a balanced binary search tree is O(log n), which is very similar to binary search, in the sense that every comparison lets us throw away half of the remaining values (although if the tree becomes badly unbalanced, the search can degrade towards O(n)).
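Here is a minimal BST search sketch in Python (my own toy version, not something from lecture) showing how each comparison discards one whole subtree:

#BSTSearch
class BSTNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def bst_search(node, value):
    if node is None:              # fell off the tree: the value isn't there
        return False
    if value == node.value:
        return True
    if value < node.value:        # smaller values can only be on the left
        return bst_search(node.left, value)
    return bst_search(node.right, value)  # larger values only on the right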

Week 8 | Recursive Structures | Linked Lists

This week we learned about a recursive structure called the linked list. The basic building block of a linked list is a sequence of nodes, each with a head (a value) and a reference to the rest (its successor). When we say "linked" we mean that each node holds a reference in memory to the next node, i.e. they are linked!

A linked list is either: the empty list, represented by None, or a node that contains an object and a reference to the next node of the linked list.
[diagram of a linked list]
“Searching for a value in a linked list implementation of an unordered list also uses the traversal technique. As we visit each node in the linked list we will ask whether the data stored there matches the item we are looking for.”
I find linked lists pretty interesting, considering they can grow dynamically and there is no restriction on their size! Sounds so efficient, right?
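To tie this together with the quote above, here is a minimal linked list and traversal search sketch in Python (my own toy code, not from class):

#LinkedListSearch
class Node:
    def __init__(self, value, rest=None):
        self.value = value   # the "head" of this node
        self.rest = rest     # reference to the next node (or None at the end)

def search(node, item):
    # Visit each node in turn and ask whether its value matches the item.
    while node is not None:
        if node.value == item:
            return True
        node = node.rest
    return False

example = Node(3, Node(1, Node(4)))  # the list 3 -> 1 -> 4
print(search(example, 4))            # True
print(search(example, 2))            # False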