Week 12 | Wrap up and reflection

Looks like another great course is coming to an end! I really enjoyed my endeavors in such abstract material; it really helped teach me how to think, as crazy as that sounds.

During our last week we were presented with some interesting new material we didn’t have much opportunity to explore, such as set countability, diagonalization, and proof by induction. Induction is a really powerful proof technique; I’ve seen it in my linear algebra textbook, where the author used it to prove many theorems, such as that the determinant of the identity matrix equals 1.

[Screenshot: induction proof that det(I) = 1]

I know this induction proof is valid because, if you look closely, the determinant of a triangular matrix is equal to the product of its diagonal entries.

Here is the general proof structure using induction:

[Screenshot: general structure of a proof by induction]
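The structure boils down to this (a generic sketch I wrote out in LaTeX, using the determinant claim as the running example):

```latex
% Generic shape of a proof by induction, using det(I_n) = 1 as the claim
\textbf{Base case:} $\det(I_1) = \det([1]) = 1$. \\
\textbf{Inductive step:} assume $\det(I_k) = 1$ for some $k \ge 1$.
Expanding $\det(I_{k+1})$ along its first row leaves only the entry $1$
times its minor, so $\det(I_{k+1}) = 1 \cdot \det(I_k) = 1$. \\
\textbf{Conclusion:} by induction, $\det(I_n) = 1$ for all $n \ge 1$.
```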

I’m going to be exploring this proof technique much more next semester, so stay tuned!

Check out my classmates’ blogs for another perspective on our experiences in this course: http://nanalelfecsc165slog.wordpress.com/ and http://zanecsc165.blogspot.ca/


Computational Limits | B00M

Computers solve problems using algorithms in a systematic, repeatable way. However, you may ask: are there limits to what a computer can compute? More specifically, in class we discussed the Halting problem: the problem of determining whether a program, run on an arbitrary input, will at some point stop running (halt) or continue to loop forever (not halt).
Crazily enough, in 1936, before the existence of computers, Alan Turing proved that an algorithm that computes whether a program will halt or not halt for all possible program-input pairs CANNOT exist. There is no way of computing such a function. At first I was stubborn and had a hard time believing this is true, since computers didn’t exist when Turing deduced his proof. Then I decided to look into it further and discovered Gödel’s incompleteness theorems (which, roughly, concern the limits of provability in formal axiomatic systems strong enough to express number theory, including undecidable propositions). That gave me some background knowledge, but I was still on the edge of fully believing the Halting problem is non-computable. After thinking about it for days, the conclusion finally became clear.
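The shape of Turing’s argument can actually be sketched in Python (this only shows the structure of the contradiction; the halts stub is hypothetical, since no real implementation can exist):

```python
def halts(prog, arg):
    """Hypothetical decider: True iff prog(arg) eventually halts.
    Turing proved no such function can exist; this stub only
    shows where it would sit in the argument."""
    raise NotImplementedError("no algorithm can implement this")

def paradox(prog):
    # Do the opposite of whatever halts() predicts about prog run on itself.
    if halts(prog, prog):
        while True:      # halts() said "halts", so loop forever
            pass
    return "halted"      # halts() said "loops", so halt immediately

# Feeding paradox to itself: paradox(paradox) halts if and only if it
# doesn't halt -- a contradiction, so halts() cannot exist.
```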

Now you might be thinking: what about Artificial Intelligence, Watson for example; can’t these supercomputers solve problems such as the Halting problem? Well, some people are concerned that we are on the road to a technological singularity in the computer world, but let’s just have some faith that Turing’s proof that the Halting problem is undecidable is strong enough to prevent Skynet from ever happening, haha!

Computability Terminology:

Definition 1:  Let f:I→O be a function.  Then f is well-defined ⟺ for each x∈I, there is a unique y∈O such that f(x)=y.

Let g, h: I→O be well-defined functions.

Then g is computable ⟺ there is a program P such that ∀x∈I, g(x)=P(x).

Reductions:  g reduces to h ⟺ g can be implemented with h.

My professor showed us a proof technique: to prove a function computable, all we have to do is show it reduces to a computable function. Conversely, to prove a function non-computable, we show that a known non-computable function reduces to it.
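To make the reduction idea concrete, here is a toy sketch (hypothetical functions of my own, not from lecture): doubling reduces to addition, because double can be implemented entirely with add.

```python
def add(a, b):
    return a + b          # pretend add is our known-computable function h

def double(x):
    # double reduces to add: implemented entirely in terms of add,
    # so if add is computable, double must be computable too
    return add(x, x)
```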

Until next time,
Yours Truly, CodeShark

Assignment Wrap Up

Last week was really busy: our proofs problem set was due, and I ended up doing really well! When I look back at this assignment, one question in particular stands out.

[Screenshot: the claim from the problem set, about the limit of the floor function]

This claim was not true because the floor function has a jump discontinuity.

[Screenshot: my counterexample proof]

Given the delta-epsilon limit definition, we know intuitively that the discontinuous floor function cannot produce a two-sided limit for all values of x. After many attempts to find values that would construct a counterexample, I eventually got this proof correct, after countless pieces of paper were crumpled up. By choosing x = 2 and ε = 1, I was able to prove the negation of the statement.
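A quick numeric sanity check of that counterexample, using Python’s math.floor (this is just an illustration, not the written proof):

```python
import math

x = 2          # the point chosen in my proof
epsilon = 1    # the epsilon chosen in my proof

# However small delta gets, values just left of 2 floor to 1 and values
# just right floor to 2, so no single limit L keeps both sides strictly
# within epsilon of L.
for delta in (0.5, 0.05, 0.005):
    assert math.floor(x - delta) == 1
    assert math.floor(x + delta) == 2
```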


Until next time,




Asymptotic Analysis

Over the past few weeks we have been exploring sorting algorithms and the efficiency of functions!

A sorting algorithm is an algorithm that rearranges items in a list into ascending order (i.e. smallest items leftmost and largest items rightmost). In class we looked at many different types of sorting algorithms and the drawbacks of choosing one versus another. We also learned how to compare their efficiency.

For example, the sorting algorithms we discussed in these past two weeks include Quicksort, Mergesort, and Timsort.

def quick(L):
    if len(L) <= 1:  # base case: lists of 0 or 1 items are already sorted
        return L
    pivot = L[0] # there are much better ways of choosing the pivot!
    smaller_than_pivot = [x for x in L[1:] if x < pivot]
    larger_than_pivot = [x for x in L[1:] if x >= pivot]
    return quick(smaller_than_pivot) + [pivot] + quick(larger_than_pivot)

Quicksort works by picking a pivot from the list (it can be random, or simply the first element) and reordering the list so that values less than the pivot come before it and values greater than the pivot come after it. This process is repeated recursively on the sublists until the list is fully sorted.

Mergesort works by dividing the list in half, recursively sorting each half, and then merging the two sorted halves until there is one sorted list. Professor Danny said Mergesort is very predictable, since its average, best, and worst case runtime is O(n log n).
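Here is a minimal Mergesort sketch along those lines (my own version, not necessarily the one from lecture):

```python
def merge_sort(L):
    # base case: lists of 0 or 1 items are already sorted
    if len(L) <= 1:
        return L
    mid = len(L) // 2
    left = merge_sort(L[:mid])     # recursively sort each half...
    right = merge_sort(L[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # ...then merge the halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # one half may have leftovers
```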

The last sorting algorithm we looked at this week is Timsort, which works by using either merge sort or insertion sort depending on the situation. This algorithm is very complex, and I can’t properly explain what factors are used to decide which sort should be used for a given input. Think of Timsort as an adaptive merge sort with crazy good performance.
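Conveniently, Timsort is the algorithm behind Python’s own sorted() and list.sort(), so we have all been using it without knowing:

```python
data = [5, 2, 9, 1, 5]
assert sorted(data) == [1, 2, 5, 5, 9]   # sorted() runs Timsort under the hood
assert data == [5, 2, 9, 1, 5]           # sorted() leaves the original alone
data.sort()                              # list.sort() is Timsort in place
assert data == [1, 2, 5, 5, 9]
```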

Furthermore, I have noticed that it is very common for computer programs to look very similar, especially simple ones. This poses an interesting question: when multiple implementations solve the same task but look different, how can we tell which program is better or more efficient than the other? It might seem reasonable that the time required to solve a larger case would be greater than for a smaller case. Now you may be thinking: why would we want to solve the same problem different ways? That might raise an Exception!

I can’t believe this fantastic course is almost over! I hope you now have an understanding of how computer scientists measure algorithm performance with regards to input size and time.

Until next time,
yours truly, – CodeShark

Discrete Mathematics Week 1 and 2: Introduction

Hello World, this is my first slog for CSC165, a Discrete Mathematics course I am taking this semester. Those of you new to my blog may ask what a slog is: a slog is a course log used to document my experience in this course, as well as tell some funny jokes!

CSC165 has a very mathematical approach to computer science. This course is about communication and problem solving, but not your typical math problem solving; rather, it is about how to communicate our ideas to others. This week we discussed the basic principles of logic, such as the balance between ambiguity and precision needed to make a clear and concise logical argument. We also learned how to represent statements in logic notation, using quantifiers to make claims about a set. I believe we use symbols to emphasize our arguments, and also because logic notation is much more convenient and faster for performing mathematical computations on paper.
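As a small aside, Python can mimic quantifier claims over a finite set with its built-in all() and any() (my own illustration, not from lecture):

```python
X = range(1, 11)  # the set {1, ..., 10}

# "for all x in X, x > 0"  (universal quantifier)
assert all(x > 0 for x in X)

# "there exists x in X such that 7 divides x"  (existential quantifier)
assert any(x % 7 == 0 for x in X)

# "for all x in X, x is even" is False -- x = 1 is a counterexample
assert not all(x % 2 == 0 for x in X)
```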

I am a math person, so learning new areas like logic in computer science is fascinating to me. I really like how logic fills the gap between what humans understand and how computers think. Let’s be honest, computers are dumb.

Here is an example of using Venn diagrams to express mathematical statements with arbitrary P and Q.

[Venn diagram for arbitrary P and Q]

To hear more about my endeavors with Discrete Mathematics stay tuned for more!

Until next time, Peace out!

– yours truly,
“Code Shark”


Week 8 | Recursive Structures | Linked Lists

This week we learned about a recursive structure called linked lists. The basic building blocks of a linked list are a sequence of nodes, each with a head (a value) and a reference to the rest (its successors). When we use the term linked, we refer to each node having a reference in memory to the next node, i.e. linked!

A linked list is either: the empty list, represented by None, or a node that contains an  object and a reference to the next node of the linked list.
“Searching for a value in a linked list implementation of an unordered list also uses the traversal technique. As we visit each node in the linked list we will ask whether the data stored there matches the item we are looking for.”
I find Linked Lists pretty interesting considering they can grow dynamically and there is no restriction on its size! Sounds so efficient right!
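Here is a minimal sketch of the node idea plus the traversal-based search described in the quote above (the class and function names are my own):

```python
class Node:
    """One node of a linked list: a head (value) and a reference to the rest."""
    def __init__(self, head, rest=None):
        self.head = head
        self.rest = rest   # None marks the end of the list

def contains(node, item):
    # traversal: visit each node, asking whether its value matches
    while node is not None:
        if node.head == item:
            return True
        node = node.rest
    return False

lst = Node(1, Node(2, Node(3)))   # the linked list 1 -> 2 -> 3 -> None
```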

Week 7 – Hardcore Recursion | My Impressions So Far!

You may ask yourself: what the hell is this recursion he talks about? Well, recursion is a method of solving problems that involves breaking a problem down into smaller and smaller subproblems until you get to a task small enough to be accomplished directly. Recursion involves a function calling itself, as opposed to iteration using for or while loops. You’re probably now thinking: when would I use recursion? While it may not seem like much off the bat, recursion allows us to write efficient algorithms with much less code to solve problems that may otherwise be very difficult, or even impossible, to program. There are 3 rules of recursive functions:

  1. A recursive function must have a base case (when to stop)
  2. A recursive function must change its state and move toward the base case. (series of simplifications)
  3. A recursive function must call itself, recursively

In simpler terms recursion is “defining a problem in terms of itself”.  Recursive calls solve smaller problems of the same “flavour” and then combine those solutions to get the solution to the main problem.

For example, let’s consider taking the sum of a list of integers.

def sum_list(lst: list)-> int:
    total = 0
    for i in lst:
        total += i
    return total

I’ve broken down the steps the non-recursive algorithm performs:

total = (1+(3+(5+(7+9))))
total = (1+(3+(5+16)))
total = (1+(3+21))
total = (1+24)
total = 25

Now I can solve the same problem with a recursive algorithm.

def sum_list(numlst: list) -> int:
    """
    >>> sum_list([1, 3, 5, 7, 9])
    25
    """
    # base case: the empty list sums to 0
    if len(numlst) == 0:
        return 0
    return numlst[0] + sum_list(numlst[1:])

Now I can get really fancy on you and make it even more elegant-looking using list comprehension! #AlphaStatus

def sum_list(lst: list) -> int:
    # bonus: this version even recurses into nested lists
    return sum([sum_list(i) if isinstance(i, list) else i for i in lst])

We didn’t need recursion for that, did we? But we could use it anyway! It’s important to remember that recursion is simply a different method of coding an algorithm. Some functions don’t even need recursion, such as the one above. With that being said, however, a recursive approach comes in very handy for a programmer who needs to solve much more complex problems, like the 4-stool Tower of Hanoi I wrote the algorithm for a few weeks ago. Some programmers may choose a recursive approach because it is just easier for them to understand, while some algorithms go hand in hand with recursion because of the way they are designed, such as traversing a tree-like structure, which we learned about in class last week. Recursion is a very puzzling concept to understand at first; it took me hours of practice coding to finally feel comfortable implementing it.

“The human cost of writing, and maintaining, an algorithm that is naturally recursive in an iterative form is far higher than any cost a particular compiler or interpreter might add for running recursive functions.” – Danny Heap

Professor Heap taught me that you gotta just trust that the recursive calls work after tracing the simple cases. I would say the hardest part of the problem is recognizing a recursive pattern. The wonderful thing is that you can use it in almost any language!

It’s been another great week in the life of a Computer Science major! Stay Hungry, Stay Foolish!

yours truly,
– “CodeShark”

Week 5 – Wrap up of A1 !

This week in CSC148 we spent time looking at the details of our first assignment of the semester: an advanced version of the classic Tower of Hanoi, where the goal is to move a stack of disks of descending sizes from one peg to another, using three pegs. The only rules are:

1. You’re only allowed to move one disk at a time, and,
2. You cannot stack a bigger disk on top of a smaller one.

Here’s where the excitement comes in: Professor Danny changed the traditional Tower of Hanoi story around, so we had to figure out the algorithm to optimally transfer Anne Hoy’s stacks of cheese, rather than discs, from one stool to another, using four stools (the extra being her recent investment) to reduce the number of steps necessary for moving larger and larger piles of cheese. The optimal solution for the traditional Tower of Hanoi problem requires 2^n – 1 moves.
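For reference, the classic three-stool solution can be sketched recursively (a minimal sketch of my own, not the assignment’s four-stool code); it makes exactly 2^n – 1 moves:

```python
def hanoi(n, source, dest, spare, moves):
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, dest, moves)   # clear n-1 disks out of the way
    moves.append((source, dest))               # move the biggest disk
    hanoi(n - 1, spare, dest, source, moves)   # put the n-1 disks back on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
assert len(moves) == 2 ** 3 - 1   # 7 moves, as the formula predicts
```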

“Ann Hoy kept her high-quality cheese stacked on stools, the largest rounds underneath the smaller rounds, to stop rats and mice from getting at them. Occasionally the stool holding the cheese would need some maintenance (for example, the legs would start to buckle under the weight), and Anne would shift the entire stack from one stool to another. Since she could only move a single (hundred-pound plus) round of cheese at one time, and she refused to stack a larger-diameter cheese round on a smaller one (the outer part of the larger cheese would droop) she used three stools: one was her destination for her entire stack of cheese, one was the source (which likely needed its legs reinforced), and the third was for intermediate stacking. Chaucer immortalized the complicated routine Anne endured, lugging rounds of cheese from stool to stool as ‘The tour of Anne Hoy.’”

Prof Danny really emphasized the importance of using recursive functions for the expected solution, as this assignment was a means to teach us how to use recursion. After days of frustrating coding sessions I couldn’t wrap my head around how a simply solvable problem with 3 pegs could be made dramatically more difficult by slightly loosening one of the constraints (i.e. 4 pegs). After writing pages and pages of pseudocode it suddenly hit me! I was thinking about how to implement the recursion too complexly; recursion is an art where simple thinking is highly regarded. So pretty much my four-stool function can call itself over and over again until it reaches a certain threshold (the base case), where all the cheeses are moved to the destination stool, which allows the operation to terminate. This 4-stool solution required me to find the least number of moves required to move any given number of cheeses using 4 stools, and pass that value into the recursive function. Implementing this part was certainly one of the more challenging aspects of this assignment. When I finally figured out the logic behind it I felt a great deal of satisfaction! I am now confident in my ability to implement recursive functions. Who’s excited for assignment 2? I know I am!
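One way to compute that least-move count is the following recurrence (my hypothetical reconstruction of the idea, not necessarily the assignment’s exact code): move i cheeses aside using all four stools, move the remaining n – i cheeses with the classic three-stool method (2^(n-i) – 1 moves), then move the i cheeses back on top, and take the best choice of i.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_moves_4_stools(n):
    """Least number of moves for n cheeses on 4 stools (sketch)."""
    if n == 0:
        return 0
    # try every split: i cheeses set aside with 4 stools, the rest with 3
    return min(2 * min_moves_4_stools(i) + 2 ** (n - i) - 1
               for i in range(n))

# e.g. min_moves_4_stools(4) == 9
```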

[Screenshots: my four-stool solution and its output]

9 moves is said to be the least number of moves to move 4 cheeses with 4 stools.

That is all for this week folks, Stay Hungry. Stay Foolish!

yours truly,
– “CodeShark”

Week 4 – More recursion yay!

After reading the instructions for Assignment 1, I see that it will involve some hardcore recursion. I felt it would be a smart idea to further my understanding of the concept by working through some exercises on Codecademy before I try to solve a 4-stool version of the Tower of Hanoi recursively.

I also learned from Professor Danny, this week that Python has a built-in module called Turtle that draws simple line graphics that can be manipulated on a set plane depending on what direction you’d like the Turtle object to point. We can also get fancy and all by changing the color of the Turtle or be mean and hide its tail, haha!  I like the Turtle class as I feel like it was designed solely for teaching recursion.

The recursion allowed us to pick an arbitrary number of levels (for i in range(n):) and draw lines recursively; from each endpoint of the previous level, it bursts out 3 more times.
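Just to get a feel for how fast those bursts multiply, here is a hypothetical little counter of my own (not the class’s turtle code), assuming each level draws one line per branch and then bursts three new branches from its endpoint:

```python
def burst(depth, branches=3):
    """Count the line segments a depth-`depth` recursive burst draws."""
    if depth == 0:
        return 0
    # each branch draws one segment, then recursively bursts again
    return branches * (1 + burst(depth - 1, branches))
```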

[Screenshot: the recursive turtle drawing from class]

But after class I decided to mess with the program and spice it up a little bit. I added a few new features: 2 more colors to the list, a range of 5, and a different position on the plane where the bursts happen. And voila, I get this beauty down below. ^_^

[Screenshot: my modified turtle drawing]

That, my friends, is the magic of the mighty recursion! I don’t always use recursion, but when I do, it’s beautiful!

That is all for this week folks, stay humble!

yours truly,
– “CodeShark”

Week 3: I’m learning about Objects wow!

I am gonna give you guys an introduction to Object Oriented Programming (OOP) with some bonus talk about Exceptions!

[OOP meme]

I’d like to start my first blog ever by letting you know that Computer Science is my true passion. Every day I remind myself of how privileged I am to have the opportunity to study not just something I love, but something I could potentially change the world with.

In my third week of CSC148 I have learned more about a fascinating concept called Object Oriented Programming (OOP). I really like how OOP makes it easy for us to maintain and modify existing code, as new objects can be created with small differences from existing ones. I also learned about the abstraction that everything is an object, and that objects serve two purposes: they know things (have attributes) and they perform tasks. Pretty much anything you can think of can be modeled as a class and have instances of it created. The possibilities are endless!

In my lab this week we learned how to implement a concept called Abstract Data Types (ADTs): classes of data structures that have similar behavior and attributes. We made a basic ADT, a Stack class, using OOP that can push new items onto the top of the stack, pop items from the top of the stack, report what the top item is, as well as tell whether the stack is empty or not.
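Here is a minimal sketch of that Stack class (my own reconstruction of the lab’s described operations; the method names are my choice):

```python
class Stack:
    """A minimal last-in-first-out ADT."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # new items go on top

    def pop(self):
        return self._items.pop()      # remove and return the top item

    def top(self):
        return self._items[-1]        # peek at the top without removing

    def is_empty(self):
        return len(self._items) == 0
```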

[inheritance meme]

Inheritance

After reading an article about inheritance on learnpythonthehardway.com, I wonder why we are being taught inheritance if the reading says it’s something you should avoid (especially multiple inheritance) at all costs. Well, basically, inheritance is used to indicate that a Child Class (Sub Class) will get most or all of its features from a Parent Class (Base Class).

There are multiple ways that the Parent-Class and the Child-Class can interact.

1) Actions on the child class imply an action on the parent (a.k.a. Implicit Inheritance)

You can put functions in the super class (i.e. the parent class), and the child class will “INHERIT” those features, saving you time by not having to write two functions that perform the same task.

2) The child class can override the action(s) of the parent (Override Explicitly)

The child can override a pre-existing function it inherited from its parent class by defining its own version of that function with more functionality. No hard feelings Dad, my algorithm has better runtime! Lastly, you could do the exact same thing in a nicer-looking way by calling Python’s built-in function super().
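A tiny sketch of explicit overriding with super(), using hypothetical Parent/Child classes of my own:

```python
class Parent:
    def greet(self):
        return "Hello from Parent"

class Child(Parent):
    def greet(self):
        # override, but still reuse the parent's version via super()
        return super().greet() + " (and Child)"
```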

We are going to be diving into 2 more parts of OOP in the coming weeks such as Encapsulation and Polymorphism.

Exceptions

This week I was exposed to a new programming term: exceptions. In other words, “Stop screwing with my code!” I found understanding how to implement Python exceptions difficult to grasp right off the bat, which was frustrating; but after spending time reviewing the concepts, starting with the basic exceptions, I now feel more confident implementing them in my code. It is neat that we can develop custom exceptions that best fit our program and that will best help the user understand their errors. I could make a custom exception that says “Hey you, stop inheriting my money (bitcoins), I got it from object in the first place! ;)”
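And a minimal sketch of a custom exception along those lines (the class name and message are made up for illustration):

```python
class InheritanceError(Exception):
    """A made-up custom exception, just to show the pattern."""

try:
    raise InheritanceError("Hey you, stop inheriting my bitcoins!")
except InheritanceError as error:
    message = str(error)   # the user sees our custom message
```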

To hear more about my endeavors with CSC148 and Python stay tuned for more!

Until next time, Peace out!

yours truly,
– “Code Shark”