Last week was really busy; our proofs problem set was due, and I ended up doing really well! When I look back at this assignment, one question in particular stands out.
The claim was not true because the floor function has jump discontinuities.
Given the epsilon-delta limit definition, we know intuitively that the discontinuous floor function cannot produce a two-sided limit at every value of x. After many attempts to find values that would build a counterexample and prove the negation, I eventually got this proof correct, after countless sheets of paper were scrunched up. By choosing x = 2 and epsilon = 1, I was able to prove the negation of the statement.
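The way I set it up (assuming the claim was that the limit of floor at a point equals the floor of that point), the key is that points just to the left of 2 have floor 1 while floor(2) = 2, so with epsilon = 1 no delta can ever work. Here is a tiny Python check I wrote for myself to see it numerically; it is just a sketch, not part of the actual proof:

import math

# my own sanity check, not the assignment: at x = 2 with epsilon = 1,
# no matter how small delta gets, y = 2 - delta/2 satisfies |y - 2| < delta
# but |floor(y) - floor(2)| is still >= 1 = epsilon
for delta in [0.5, 0.1, 0.01, 0.001]:
    y = 2 - delta / 2
    gap = abs(math.floor(y) - math.floor(2))
    print(f"delta = {delta}: floor({y}) = {math.floor(y)}, gap = {gap}")
    assert gap >= 1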
Until next time, yours truly, – CodeShark
The runtime of an algorithm depends on many things:
The size of the input
The content/structure of the input
The type of computer you are using (huge workstation vs. laptop)
The amount of memory the computer has (space available)
How the program was implemented
The programming language used.
The biggest factor we are considering is how the program was implemented. Even with the most efficient implementation, any algorithm with exponential runtime is essentially an impractical algorithm, and that is where Big O notation comes into analyzing algorithms.
The Big O of an algorithm is determined by analyzing the algorithm and working out how many time steps it will require, as a function of the input size of the problem. “Simple” operations such as arithmetic, evaluating an if condition, a return statement, etc. take 1 time step. Executing for and while loops takes n steps, 1 per iteration. We count steps as a way to mathematically express the amount of work required to solve the problem. Since the exact number of steps does not matter, with Big O notation we can ignore “unnecessary details” such as constant steps or single operations that don’t have a big effect on runtime, and focus on the dominant term.

I also learned that to most accurately analyze an algorithm’s efficiency we focus on the worst possible case; to do this we look at the worst possible inputs. By knowing the Big O of an algorithm we can guarantee that, for an input of size n, completing the process will take at most that worst possible time. For example, consider selection sort: its best, worst, and average cases will always grow as O(n^2) with the input size, because no matter what, it always has to scan the rest of the list on every pass, so its behaviour is very predictable.
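To make the step-counting idea concrete, here is a little selection sort I wrote with a comparison counter (my own sketch, not something from lecture). Whatever order the input starts in, a list of length n always costs n*(n-1)/2 comparisons, which is exactly why the growth is quadratic:

def selection_sort_count(L):
    """Sort L in place and return the number of comparisons made."""
    comparisons = 0
    n = len(L)
    for i in range(n):
        smallest = i
        # scan the unsorted tail for the smallest remaining item
        for j in range(i + 1, n):
            comparisons += 1
            if L[j] < L[smallest]:
                smallest = j
        L[i], L[smallest] = L[smallest], L[i]
    return comparisons

# sorted, reversed, or shuffled input: the count is the same (10 when n = 5)
for data in ([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [3, 1, 4, 1, 5]):
    print(selection_sort_count(list(data)))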
I’ve learned that the goal of Big O notation is to show us how an algorithm’s execution time grows with respect to the size of the problem (the input). Professor Heap told me to ask myself: does the runtime of a program grow proportionally to the size of the input?
Here is a table of the Big O of the sorting algorithms we discussed in class and their worst, average, and best case performances. But remember there are other complexities too, such as O(N^2), O(N^3), O(N log N), O(N!), or even O(∞) (which would require an infinite amount of time, and that’s where we reach computational limits).
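Here are the standard numbers as I remember them (double-check against your own notes):

Algorithm        Best          Average       Worst
Selection sort   O(n^2)        O(n^2)        O(n^2)
Mergesort        O(n log n)    O(n log n)    O(n log n)
Quicksort        O(n log n)    O(n log n)    O(n^2)
Timsort          O(n)          O(n log n)    O(n log n)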
Next time I will tell you about proving Big O.
Until next time, yours truly, – CodeShark
Over the past few weeks we have been exploring sorting lists and the efficiency of functions!
A sorting algorithm is an algorithm that rearranges the items in a list into ascending order (i.e. smallest items leftmost and largest items rightmost). In class we looked at many different types of sorting algorithms and the drawbacks of choosing one versus another. We also learned how to compare their efficiency.
For example, the sorting algorithms we discussed in these past 2 weeks include Quicksort, Mergesort, and Timsort.
def quick(L):
    # base case: a list with fewer than 2 items is already sorted
    if len(L) < 2:
        return L
    pivot = L[0]  # there are much better ways of choosing the pivot!
    smaller_than_pivot = [x for x in L[1:] if x < pivot]
    larger_than_pivot = [x for x in L[1:] if x >= pivot]
    return quick(smaller_than_pivot) + [pivot] + quick(larger_than_pivot)
Quicksort works by picking a pivot from the list (it can be random or just the first element) and reordering the list so that values less than the pivot come before it and values greater than the pivot come after it. This process is repeated recursively to sort the sublists, until the whole list is sorted.
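To see it in action, here is a quick call to the quick function from above (my own test, nothing official). One thing the pivot comment hints at: with the first element as the pivot, an already-sorted list is actually the worst case, because one side of every split comes back empty.

print(quick([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
# with the first element as pivot, an already-sorted input like
# list(range(100)) makes every smaller_than_pivot list empty,
# so the recursion degrades to the O(n^2) worst case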
Mergesort works by dividing the list into halves, recursively sorting each half, and then merging the sorted halves back together until there is one sorted list. Professor Danny said mergesort is very predictable, since its average, best, and worst case runtime is O(n log n).
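Here is a rough mergesort I wrote to make sure I understood the splitting and merging (my own plain recursive version, so it skips all the clever optimizations):

def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one list is exhausted; tack on whatever is left of the other
    return result + left[i:] + right[j:]

def mergesort(L):
    if len(L) < 2:  # base case: 0 or 1 items are already sorted
        return L
    mid = len(L) // 2
    return merge(mergesort(L[:mid]), mergesort(L[mid:]))

print(mergesort([5, 2, 4, 6, 1, 3]))  # -> [1, 2, 3, 4, 5, 6]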
The last sorting algorithm we looked at this week is Timsort, which switches between merge sort and insertion sort depending on the situation. This algorithm is very complex and I can’t properly explain which factors are used to decide which sort should be used for a given input. Think of Timsort as an adaptive merge sort with crazy good performance.
Furthermore, I have noticed that it is very common for computer programs to look very similar, especially simple ones. This poses an interesting question: when multiple implementations solve the same task but look different, how can we tell which program is better or more efficient than the other? It might seem reasonable that the time required to solve a larger case would be greater than for a smaller case. Now you may be thinking: why would we want to solve the same problem in different ways? That might raise an Exception!
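As a made-up example of what I mean (nothing from class), here are two functions that both answer "does this list contain a duplicate?" but grow very differently as the list gets longer, one quadratically and one linearly:

def has_duplicate_slow(L):
    # compare every pair: roughly n * n steps in the worst case
    for i in range(len(L)):
        for j in range(i + 1, len(L)):
            if L[i] == L[j]:
                return True
    return False

def has_duplicate_fast(L):
    # one pass with a set: roughly n steps
    seen = set()
    for x in L:
        if x in seen:
            return True
        seen.add(x)
    return False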
I can’t believe this fantastic course is almost over! I hope you now have an understanding of how computer scientists measure algorithm performance with regard to input size and time.
Until next time, yours truly, – CodeShark