Computational Limits | B00M

Computers solve problems using algorithms in a systematic, repeatable way. However, you may ask: are there limits to what a computer can compute? More specifically, in class we discussed the Halting problem: the problem of determining whether an arbitrary program, given some input, will at some point stop running (halt) or continue to loop forever (not halt).
Crazy enough, in 1936, before modern computers even existed, Alan Turing proved that an algorithm that computes whether a program will halt or not halt for all possible program-input pairs CANNOT exist. There is no way of computing such a function. At first I was stubborn and had a hard time believing this was true, since computers didn’t exist when Turing’s proof was deduced. Then I decided to look into it further and discovered Gödel’s incompleteness theorems (which, roughly, concern the limits of provability in formal axiomatic systems of number theory, including undecidable propositions). They gave me some background knowledge, but I was still on the edge of fully believing the halting problem is non-computable. After thinking about it for days, the conclusion finally became clear.
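To convince myself, I wrote a little Python sketch of the diagonalization idea behind Turing’s argument. The names (`make_contrarian`, `says_everything_loops`) are my own invention, not from class; the point is that any claimed halting decider can be handed a program built specifically to contradict its answer.

```python
def make_contrarian(halts):
    """Given a claimed decider halts(program) -> bool,
    build a program that does the opposite of what halts predicts."""
    def contrarian():
        if halts(contrarian):
            while True:   # halts said we halt, so loop forever
                pass
        # halts said we loop forever, so halt immediately
    return contrarian

# Whatever a decider answers about its own contrarian, it is wrong.
def says_everything_loops(program):
    return False

c = make_contrarian(says_everything_loops)
c()  # returns immediately, even though the decider said it loops forever
```

If the decider had answered True instead, `contrarian` would loop forever, so no single `halts` can ever be right about the program built from it.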

Now you might be thinking: what about artificial intelligence, Watson for example; can’t these supercomputers solve problems such as the halting problem? No: Turing’s proof applies to anything that runs algorithms, no matter how powerful. Some people are concerned that we are on the road to technological singularity in the computer world, but let’s just have some faith that Turing’s proof of the Halting problem’s undecidability is strong enough to prevent Skynet from ever happening, haha!

Computability Terminology:

Definition 1:  Let f:I→O be a function.  Then f is well-defined ⟺ for each x∈I, there is a unique y∈O such that f(x)=y.

Definition 2: Let g,h:I→O be well-defined functions.

Then g is computable ⟺ there is a program P such that ∀x∈I, g(x)=P(x).
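For instance (my own toy example, not from class), the doubling function g(x) = 2x is computable because the one-line program P below returns g(x) for every x:

```python
def P(x):
    # a program witnessing that g(x) = 2x is computable
    return 2 * x
```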

Reductions:  g reduces to h ⟺ g can be implemented with h (that is, you can compute g using h as a subroutine).

My professor showed us a proof technique: to prove a function computable, all we have to do is show that it reduces to a computable function. Conversely, to prove a function non-computable, we show that a known non-computable function reduces to it.
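Here is a toy illustration of the computable direction (again my own example): finding the maximum of a list reduces to sorting it, so if sorting is computable, then finding the maximum must be too.

```python
def sort_list(L):
    # pretend this is our known-computable function h
    return sorted(L)

def maximum(L):
    # g reduces to h: we implement maximum using only sorting
    return sort_list(L)[-1]
```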

Until next time,
Yours Truly, CodeShark

Assignment Wrap Up

Last week was really busy: our proofs problem set was due, and I ended up doing really well! When I look back at this assignment, one question in particular stands out.

[Screenshot of the claim from the problem set.]

This claim was not true, because the floor function has jump discontinuities.

[Screenshot of the statement in delta-epsilon form.]

Given the delta-epsilon limit definition, we know intuitively that the discontinuous floor function cannot have a two-sided limit at every value of x. After many attempts to find values that would construct a counterexample, I eventually got the proof of the negation correct, after countless sheets of paper were scrapped. By choosing x = 2 and ε = 1, I was able to prove the negation of the statement.
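Here is a quick sanity check of that counterexample in Python (my own re-creation, not the submitted proof): with x = 2 and ε = 1, there are points within any δ of 2 whose floor values stay a full unit away from floor(2), so no δ can ever work.

```python
from math import floor

x, epsilon = 2, 1
for delta in [1.0, 0.5, 0.1, 0.001]:
    x_prime = x - delta / 2               # within delta of x, just below 2
    assert abs(x_prime - x) < delta
    # floor jumps from 1 to 2 at x = 2, so the gap never drops below epsilon
    assert abs(floor(x_prime) - floor(x)) >= epsilon
```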

 

until next time,

CodeShark

 

 

More Big O

The runtime of an algorithm depends on many things:

The size of the input
The content/structure of the input
The type of computer you are using (huge workstation vs. laptop)
The amount of memory the computer has (space available)
How the program was implemented
The programming language used.
The biggest factor we consider is how the program was implemented: even with the most efficient implementation, any algorithm with exponential runtime is essentially an impractical algorithm. That is where Big O notation comes into analyzing algorithms.

The Big-O of an algorithm is determined by analyzing the algorithm and counting how many time steps it requires as a function of the input size. “Simple” operations such as arithmetic, evaluating an if condition, or a return statement take 1 time step. A for or while loop takes 1 step per iteration, so n steps for n iterations. We count steps as a way to mathematically express the work required to solve the problem.

Since the exact number of steps does not matter, Big O notation lets us ignore “unnecessary details” such as constant factors or single operations that don’t have a big effect on runtime, and focus on the dominant term. I also learned that to most accurately analyze an algorithm’s efficiency, we focus on the worst possible case: we look at the worst possible inputs. By knowing the Big O of an algorithm, we can guarantee that no matter the input of size n, completing the process will take at most the worst-case time. For example, consider selection sort: its best, worst, and average cases all grow as O(n * n) in the input size, because no matter what, it always has to scan the rest of the list on every pass, so it is very predictable.
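To see that step counting in action, here is a small instrumented selection sort I wrote (the counter is my own addition) that tallies comparisons: it makes exactly n*(n-1)/2 of them for a list of length n, whether the input is sorted, reversed, or random.

```python
def selection_sort_steps(L):
    """Sort a copy of L, counting comparisons to show the n*(n-1)/2 pattern."""
    L = L[:]
    comparisons = 0
    for i in range(len(L)):
        smallest = i
        # scan the unsorted tail for the minimum, 1 comparison per item
        for j in range(i + 1, len(L)):
            comparisons += 1
            if L[j] < L[smallest]:
                smallest = j
        L[i], L[smallest] = L[smallest], L[i]
    return L, comparisons
```

For n = 10 the count is 9 + 8 + … + 1 = 45 comparisons, regardless of the input’s order, which is why selection sort is Θ(n²) in every case.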

I’ve learned that the goal of Big O notation is to show how an algorithm’s execution time grows with respect to the size of the problem (input). Professor Heap told me to ask myself: does the runtime of a program grow proportionally to the size of the input?

Here is a table of the Big O of the sorting algorithms we discussed in class and their worst, average, and best case performances. But remember there are other complexities too, such as O(n^2), O(n^3), O(n log n), O(n!), or even O(∞) (which would require an infinite amount of time, and that’s where we reach computational limits).

[Screenshot: table of worst, average, and best case complexities for the sorting algorithms discussed in class.]

Next time I will tell you about proving Big O statements.

Until next time,
Yours truly, CodeShark

 

Asymptotic Analysis

Over the past few weeks we have been exploring sorting lists and the efficiency of functions!

A sorting algorithm is an algorithm that rearranges items in a list into ascending order (i.e. smallest items leftmost and largest items rightmost). In class we looked at many different types of sorting algorithms and the drawbacks of choosing one versus another. We also learned how to compare their efficiency.

For example, the sorting algorithms we discussed these past 2 weeks include Quicksort, Mergesort and Timsort.

#QuickSort
def quick(L):
    if len(L) > 1:
        pivot = L[0] # there are much better ways of choosing the pivot!
        smaller_than_pivot = [x for x in L[1:] if x <= pivot]
        larger_than_pivot = [x for x in L[1:] if x > pivot]
        return quick(smaller_than_pivot) + [pivot] + quick(larger_than_pivot)
    else:
        return L

Quicksort works by picking a pivot from the list (it can be random or simply the first element) and reordering the list so that values less than the pivot come before it and values greater than the pivot come after it. This process is repeated recursively on the sublists until the list is fully sorted.

MergeSort works by recursively dividing the list into halves until each sublist has at most one element, then merging pairs of sorted sublists back together until there is one sorted list. Professor Danny said merge sort is very predictable, since its average, best and worst case runtimes are all O(n log n).
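A minimal sketch of merge sort in the same style as the quicksort above (my own version, following the standard textbook formulation):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one of the slices is empty, so this keeps the result sorted
    return result + left[i:] + right[j:]

def mergesort(L):
    if len(L) <= 1:
        return L
    mid = len(L) // 2
    return merge(mergesort(L[:mid]), mergesort(L[mid:]))
```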

The last sorting algorithm we looked at this week is Timsort, which uses both merge sort and insertion sort depending on the situation. The algorithm is very complex and I can’t properly explain all the factors used to decide which sort is applied to a given input. Think of Timsort as an adaptive merge sort with crazy good performance; it is also the algorithm behind Python’s built-in sorted() and list.sort().
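I can’t reproduce the real thing, but here is a greatly simplified toy sketch of the adaptive idea (fixed-size runs instead of detected natural runs, and no galloping): insertion-sort small chunks, then merge them.

```python
def tiny_timsort(L):
    """Toy sketch of Timsort's idea: insertion-sort small runs, then merge.
    The real algorithm detects natural runs, picks run lengths around 32-64,
    and merges far more cleverly."""
    RUN = 4
    runs = []
    for start in range(0, len(L), RUN):
        chunk = L[start:start + RUN]
        # insertion sort each small chunk (fast on short/partially sorted data)
        for i in range(1, len(chunk)):
            key, j = chunk[i], i - 1
            while j >= 0 and chunk[j] > key:
                chunk[j + 1] = chunk[j]
                j -= 1
            chunk[j + 1] = key
        runs.append(chunk)
    # merge the sorted runs pairwise until one remains
    while len(runs) > 1:
        a, b = runs.pop(), runs.pop()
        merged, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                merged.append(a[i])
                i += 1
            else:
                merged.append(b[j])
                j += 1
        runs.append(merged + a[i:] + b[j:])
    return runs[0] if runs else []
```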

Furthermore, I have noticed that it is very common for computer programs to look very similar, especially the simple ones. This poses an interesting question: when multiple implementations solve the same task but look different, how can we tell which program is better or more efficient? It might seem reasonable that the time required to solve a larger case would be greater than for a smaller case. Now you may be thinking: why would we want to solve the same problem in different ways? That might raise an Exception!

I can’t believe this fantastic course is almost over! I hope you now have an understanding of how computer scientists measure algorithm performance with regard to input size and running time.

Until next time,
Yours truly, CodeShark