Question
Consider the searching algorithm shown below. Analyze the worst, best, and average case run times of this algorithm. Count the number of times that you access the array.

    public static int search(int[] sortedArray, int toFind) {
        // Returns index of toFind in sortedArray, or -1 if not found
        int low = 0;
        int high = sortedArray.length - 1;
        int mid;
        while (sortedArray[low] < toFind) {
            mid = low + ((toFind - sortedArray[low]) * (high - low))
                      / (sortedArray[high] - sortedArray[low]);
            if (sortedArray[mid] < toFind)
                low = mid + 1;
            else if (sortedArray[mid] > toFind)
                high = mid - 1;
            else
                return mid;
        }
        if (sortedArray[low] == toFind)
            return low;
        else
            return -1; // Not found
    }

Best case. Describe a case in which the best case occurs or give an example. Either give formulae for the run times of the operations you count or provide a good argument for the run times. Express the run times using big-oh notation.

Worst case. Describe a case in which the worst case occurs or give an example. Either give formulae for the run times of the operations you count or provide a good argument for the run times. Express the run times using big-oh notation.

Average case. Provide and justify the average case big-oh running time.

Explanation / Answer
The algorithm above is an interpolation search, a refinement of binary search: instead of always probing the middle of the interval, it uses the values sortedArray[low] and sortedArray[high] to estimate, by linear interpolation, where toFind should lie, and probes that position. This changes the run-time analysis from the familiar binary-search bounds.
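To make the routine concrete, here is a self-contained version of the code from the question with a small driver; the class name InterpolationSearchDemo and the sample data are illustrative assumptions, not part of the original question.

    public class InterpolationSearchDemo {

        // Interpolation search: the probe position is estimated from the
        // value of toFind, not fixed at the midpoint.
        // Returns the index of toFind in sortedArray, or -1 if not found.
        public static int search(int[] sortedArray, int toFind) {
            int low = 0;
            int high = sortedArray.length - 1;
            int mid;
            while (sortedArray[low] < toFind) {
                mid = low + ((toFind - sortedArray[low]) * (high - low))
                          / (sortedArray[high] - sortedArray[low]);
                if (sortedArray[mid] < toFind)
                    low = mid + 1;
                else if (sortedArray[mid] > toFind)
                    high = mid - 1;
                else
                    return mid;
            }
            if (sortedArray[low] == toFind)
                return low;
            return -1; // Not found
        }

        public static void main(String[] args) {
            int[] keys = {10, 20, 30, 40, 50, 60, 70, 80};
            System.out.println(search(keys, 50)); // prints 4 (first probe hits)
            System.out.println(search(keys, 55)); // prints -1
        }
    }

Note that, as written, the routine assumes toFind is no larger than sortedArray[high]; a value above the maximum can push the interpolated index out of bounds, so production versions add a range check (and use long arithmetic so the numerator cannot overflow).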
Average case:
Because the probe position depends on the key values, the average case of interpolation search depends on their distribution. For keys drawn from a uniform (or nearly uniform) distribution, each probe reduces the expected size of the remaining interval from n to roughly sqrt(n), so the expected number of probes in a successful search is O(log log N). That is asymptotically faster than binary search, whose average successful search costs about log2(N) probes and runs in O(log N) time; in most cases both are considerably faster than a linear search. The routine can be implemented using iteration or recursion; in some C-based languages tail recursion is not eliminated, and the recursive version requires more stack space. (Note also that, as written, the code reads sortedArray[low] before checking for an empty array, so an empty input throws an exception rather than making zero probes.)
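One way to make the O(log log N) claim tangible is to count probes empirically. The sketch below is an illustrative experiment, not part of the question: the helper names, the seed, and the uniformly random test data are all assumptions. On data like this the interpolation counter typically reports a small single-digit average, while the binary counter reports roughly log2 of a million, about 20.

    import java.util.Arrays;
    import java.util.Random;

    public class ProbeCount {

        // Number of interpolated probes in a successful search for key.
        static int interpolationProbes(int[] a, int key) {
            int low = 0, high = a.length - 1, probes = 0;
            while (a[low] < key) {
                probes++;
                // long arithmetic so (key - a[low]) * (high - low) cannot overflow
                int mid = low + (int) (((long) (key - a[low]) * (high - low))
                                       / (a[high] - a[low]));
                if (a[mid] < key)       low = mid + 1;
                else if (a[mid] > key)  high = mid - 1;
                else                    return probes;
            }
            return probes + 1; // final a[low] == key check
        }

        // Number of midpoint probes made by classic binary search.
        static int binaryProbes(int[] a, int key) {
            int low = 0, high = a.length - 1, probes = 0;
            while (low <= high) {
                probes++;
                int mid = (low + high) >>> 1;
                if (a[mid] < key)       low = mid + 1;
                else if (a[mid] > key)  high = mid - 1;
                else                    return probes;
            }
            return probes;
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            int n = 1_000_000;
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = rng.nextInt(Integer.MAX_VALUE);
            Arrays.sort(a); // uniformly distributed sorted keys
            long ip = 0, bp = 0;
            int trials = 10_000;
            for (int t = 0; t < trials; t++) {
                int key = a[rng.nextInt(n)]; // successful searches only
                ip += interpolationProbes(a, key);
                bp += binaryProbes(a, key);
            }
            System.out.printf("avg interpolation probes: %.2f%n", ip / (double) trials);
            System.out.printf("avg binary probes:        %.2f%n", bp / (double) trials);
        }
    }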
Best and worst cases:
Best case. The interpolation formula can land on toFind with the very first probe, so the best case is O(1) array accesses. This happens, for example, when toFind equals sortedArray[low] at the start (the loop body never runs), or when the keys are evenly spaced, say sortedArray[i] = 2*i, in which case the interpolated index is exactly the position of toFind.

Worst case. Unlike binary search, interpolation search is not guaranteed to cut the interval in half on a failed probe. (Binary search is: if the number of items N is odd, both sub-intervals contain (N-1)/2 elements, and if N is even they contain N/2 - 1 and N/2 elements.) When the key values are badly skewed, the interpolated position can land right next to low on every probe, so the interval shrinks by only one element at a time and the search degenerates into a linear scan. A concrete bad input is the array 1, 2, 3, ..., N-1 followed by a single huge value, searched for N-1; see the sketch below. The worst case is therefore O(N) array accesses.
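To see the linear worst case concretely, the following sketch builds the bad input just described; the array contents, the size, and the probe-counting helper (repeated from the previous sketch) are illustrative assumptions. Every interpolated probe lands on low, so low advances by exactly one element per probe, whereas binary search on the same array would need at most floor(log2(40000)) + 1 = 16 probes.

    public class WorstCaseDemo {

        // Same probe counter as in the previous sketch.
        static int interpolationProbes(int[] a, int key) {
            int low = 0, high = a.length - 1, probes = 0;
            while (a[low] < key) {
                probes++;
                int mid = low + (int) (((long) (key - a[low]) * (high - low))
                                       / (a[high] - a[low]));
                if (a[mid] < key)       low = mid + 1;
                else if (a[mid] > key)  high = mid - 1;
                else                    return probes;
            }
            return probes + 1; // final a[low] == key check
        }

        public static void main(String[] args) {
            int n = 40_000;
            int[] a = new int[n];
            for (int i = 0; i < n - 1; i++) a[i] = i + 1; // 1, 2, ..., n-1
            a[n - 1] = Integer.MAX_VALUE;                 // one huge outlier
            // The numerator (key - a[low]) * (high - low) stays below the
            // denominator, so mid == low on every probe: a linear scan.
            System.out.println(interpolationProbes(a, n - 1)); // prints 39999
        }
    }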
By contrast, binary search halves the interval on every failed probe: if the original number of items is N, then after the first iteration at most N/2 items remain, then at most N/4, at most N/8, and so on. In its worst case, when the value is not in the list, it iterates until the span is empty, which takes at most ⌊log2(N)⌋ + 1 iterations, where ⌊·⌋ denotes the floor function that rounds its argument down to an integer; this bound is tight, since for any N there is a query that takes exactly ⌊log2(N)⌋ + 1 iterations. Compared with linear search, whose worst-case behaviour is N iterations, this is substantially faster as N grows large: searching a list of one million items can take as many as one million iterations with linear search, but never more than twenty with binary search. Both binary and interpolation search, however, can only be performed if the list is in sorted order.
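For reference, here is what the classic binary search described above looks like; this is a textbook sketch (the class name and sample data are assumptions), not part of the question's code.

    public class BinarySearchDemo {

        // Always probes the midpoint, so each failed probe halves the
        // interval: at most floor(log2(N)) + 1 probes.
        public static int binarySearch(int[] sortedArray, int toFind) {
            int low = 0;
            int high = sortedArray.length - 1;
            while (low <= high) {
                int mid = (low + high) >>> 1; // unsigned shift avoids overflow in (low + high) / 2
                if (sortedArray[mid] < toFind)
                    low = mid + 1;
                else if (sortedArray[mid] > toFind)
                    high = mid - 1;
                else
                    return mid;
            }
            return -1; // Not found
        }

        public static void main(String[] args) {
            int[] keys = {10, 20, 30, 40, 50, 60, 70, 80};
            System.out.println(binarySearch(keys, 30)); // prints 2
            System.out.println(binarySearch(keys, 35)); // prints -1
        }
    }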