Question
Part:1
This part is all about runtime and space complexity analysis. When computing runtime or space bounds, make them as tight as possible. You will not receive any points if your bound is not tight enough.
Consider the following function, . What is the tightest upper bound (big-O notation)? Explain why you chose that particular function as your answer.
To do that, you have to show that there exist two positive constants such that
What are the values of ? Explain.
Rank the following functions by order of growth rate (that is, list them such that ). Partition your list into equivalence classes such that and are in the same growth class or .
You have to prove your answer:
Let’s assume a program starts with some data. At each iteration, the program divides the data into three equal parts and discards two of them. It then divides the remaining part into three equal parts again and discards two of them. The program continues until the remaining part contains only one data item.
What is the upper bound for the running time of this program?
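The repeated-discarding process above can be sketched as a short loop. This is a hypothetical illustration, not part of the question: it assumes each step keeps exactly one third of the remaining data (using integer division to model the split), and simply counts how many steps occur before one item remains.

```python
def iterations_to_one(n):
    """Count the steps of the process described above: repeatedly
    keep one of three equal parts until a single item remains."""
    steps = 0
    while n > 1:
        n //= 3   # keep one third, discard the other two thirds
        steps += 1
    return steps

# For n = 81 the data shrinks 81 -> 27 -> 9 -> 3 -> 1, i.e. 4 steps,
# which matches log base 3 of 81.
```

Counting the loop iterations for a few inputs is a quick way to conjecture the growth rate before proving it.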
An illegal pyramid scheme starts with one person (level 0), who has to recruit 4 people under him. Then those 4 people (level 1) each need to find 4 people, as shown in the figure. Now the 16 people at level 2 each need to find 4 people. If the recruitment process goes on this way, how many levels would it require to recruit all the people in the world? (Assume that the world population is 7.5 billion.)
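The level count asked for above can be checked numerically. The following sketch (my own illustration, with assumed parameter names) accumulates the geometric series 1 + 4 + 16 + ... level by level until the running total covers the assumed population of 7.5 billion.

```python
def levels_needed(population=7_500_000_000, branching=4):
    """Smallest level L such that everyone recruited through
    level L (1 + 4 + 16 + ... + 4**L) covers the population."""
    total = 0
    level = 0
    people_at_level = 1   # level 0 has a single person
    while True:
        total += people_at_level
        if total >= population:
            return level
        level += 1
        people_at_level *= branching
```

Since the cumulative count through level L is (4^(L+1) - 1)/3, the loop is just a direct way of locating the smallest L for which that expression reaches 7.5 billion.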
Explanation / Answer
Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. We define complexity as a numerical function T(n) - time versus the input size n. We want to describe the time taken by an algorithm without depending on implementation details. But you will agree that T(n) does depend on the implementation! A given algorithm will take different amounts of time on the same inputs depending on such factors as processor speed, instruction set, disk speed, brand of compiler, and so on. The way around this is to estimate the efficiency of each algorithm asymptotically. We will measure time T(n) as the number of elementary "steps" (defined in any way), provided each such step takes constant time.
Let us consider a classical example: addition of two integers. We will add two integers digit by digit (or bit by bit), and this will define a "step" in our computational model. Therefore, we say that addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c * n, where c is the time taken by the addition of two bits. On different computers, addition of two bits might take different times, say c1 and c2; thus the addition of two n-bit integers takes T(n) = c1 * n and T(n) = c2 * n respectively. This shows that different machines result in different slopes, but time T(n) grows linearly as input size increases.
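The bit-by-bit addition model above can be sketched as follows. This is a minimal illustration (the function and representation are my own, assuming bits are stored least-significant first): each single-bit addition is counted as one elementary step, so the step count comes out equal to n, matching T(n) = c * n.

```python
def add_bits(a_bits, b_bits):
    """Add two n-bit integers given as equal-length lists of bits
    (least significant bit first), counting each single-bit
    addition as one elementary step."""
    result = []
    carry = 0
    steps = 0
    for i in range(len(a_bits)):
        s = a_bits[i] + b_bits[i] + carry   # one elementary step
        result.append(s % 2)                # sum bit
        carry = s // 2                      # carry into next position
        steps += 1
    if carry:
        result.append(carry)                # final carry-out bit
    return result, steps

# Adding 5 (101) and 3 (011) as 3-bit numbers takes exactly 3 steps.
```

The exact constant per step differs between machines, but the step count - and hence the linear growth - does not.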
The process of abstracting away details and determining the rate of resource usage in terms of the input size is one of the fundamental ideas in computer science.