

ID: 3597977 • Letter: A

Question

Analyzing sorting algorithms for time and space complexity: a) What is the time complexity of an algorithm? b) What is the space complexity of an algorithm? c) Suppose you take an array of size n (assume that n = 2^m), and keep dividing the array into halves until you have one element at the bottom level. Show that the tree structure created by this process has height m, and that m = log₂(n). d) Compare and contrast the above three sorting algorithms based on time and space complexity. Consider the best-case, worst-case, and average-case scenarios.

Explanation / Answer

a) and b)

There is often more than one way to solve a problem, so we need to learn how to compare the performance of different algorithms and choose the best one for a particular problem. While analyzing an algorithm, we mostly consider time complexity and space complexity. The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the input. Similarly, the space complexity of an algorithm quantifies the amount of space or memory taken by the algorithm to run as a function of the length of the input.

Actual running time and memory usage depend on many factors, such as hardware, operating system, and processor. However, we don't consider any of these factors while analyzing the algorithm; we consider only the number of basic operations the algorithm executes as a function of the input size.

Let's start with a simple example. Suppose you are given an array A and an integer x, and you have to find whether x exists in array A.

A simple solution to this problem is to traverse the whole array A and check whether any element is equal to x.
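As a sketch, the traversal described above might look like this in Python (the function and variable names here are illustrative, not from the original):

```python
def linear_search(A, x):
    """Return True if x exists in array A, scanning left to right."""
    for element in A:
        if element == x:  # this comparison runs up to N times
            return True
    return False  # reached only in the worst case, when x is absent
```

For example, `linear_search([4, 8, 15, 16], 15)` returns True, while `linear_search([4, 8, 15, 16], 7)` scans the entire array before returning False.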

Each basic operation in a computer takes approximately constant time; let each operation take time c. The number of operations executed actually depends on the value of x. When analyzing an algorithm, we mostly consider the worst-case scenario, i.e., when x is not present in the array A. In the worst case, the if condition will run N times, where N is the length of the array A. So in the worst case, the total execution time will be N·c + c: N·c for the if condition and c for the return statement (ignoring some operations, like the assignment of i).

As we can see, the total time depends on the length of the array A. If the length of the array increases, the execution time also increases.

Order of growth describes how the execution time depends on the length of the input. In the above example, we can clearly see that the execution time depends linearly on the length of the array. Order of growth helps us reason about running time with ease: we ignore the lower-order terms, since they are relatively insignificant for large inputs. We use different notations to describe the limiting behavior of a function.
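To see why lower-order terms can be ignored, consider a hypothetical running time f(n) = 3n² + 5n + 7 (chosen here only for illustration). A quick numeric check shows the leading term dominates as n grows:

```python
def f(n):
    # hypothetical running time with a dominant n^2 term
    return 3 * n**2 + 5 * n + 7

# The ratio of f(n) to its leading term 3n^2 approaches 1 as n grows,
# so the lower-order terms 5n + 7 become relatively insignificant.
for n in (10, 1000, 100000):
    print(n, f(n) / (3 * n**2))
```

At n = 10 the lower-order terms still contribute noticeably, but by n = 100000 the ratio is essentially 1, which is why we summarize f(n) as order n².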

O-notation:
To denote an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced “big-oh of g of n”) the set of functions:
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
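For instance, f(n) = 2n + 3 is O(n): the constants c = 5 and n₀ = 1 (illustrative choices, not the only valid ones) satisfy the definition, which a quick numeric check confirms over a range of n:

```python
def f(n):
    return 2 * n + 3  # a hypothetical running time

# Candidate constants for the big-O definition: with c = 5 and n0 = 1,
# 0 <= f(n) <= c*n holds for every n >= n0, so f(n) is O(n).
c, n0 = 5, 1
assert all(0 <= f(n) <= c * n for n in range(n0, 10000))
```

Any c > 2 works with a suitable n₀; the definition only requires that some pair of constants exists.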

Ω-notation:
To denote an asymptotic lower bound, we use Ω-notation. For a given function g(n), we denote by Ω(g(n)) (pronounced “big-omega of g of n”) the set of functions:
Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }

Θ-notation:
To denote an asymptotically tight bound, we use Θ-notation. For a given function g(n), we denote by Θ(g(n)) (pronounced “big-theta of g of n”) the set of functions:
Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }
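As an example of a tight bound, f(n) = 3n² + 5n is Θ(n²): the constants c₁ = 3, c₂ = 4, n₀ = 5 (again, illustrative choices) sandwich f(n) between c₁·g(n) and c₂·g(n):

```python
def f(n):
    return 3 * n**2 + 5 * n  # hypothetical running time

def g(n):
    return n**2

# Candidate constants for the big-Theta definition: c1 = 3, c2 = 4, n0 = 5
# give 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0, so f(n) is Theta(n^2).
c1, c2, n0 = 3, 4, 5
assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10000))
```

The upper inequality 3n² + 5n ≤ 4n² simplifies to 5n ≤ n², which holds exactly when n ≥ 5, motivating the choice n₀ = 5.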