The complexity of an algorithm is O(log n) if it cuts the problem size down by a constant fraction (usually by half) at each step.
As an example, let us consider the following for loop:
for (i = 1; i <= n; i = i * 2)
The value of i doubles on every iteration. Assume n = 16. Initially i = 1; after the first doubling i = 2, and in subsequent iterations i = 4, 8, 16.
Let us assume that the loop executes k times. Here, when k is 4, n is 16. So, at the kth step, 2^k = n.
We can take logarithm on both sides:
log(2^k) = log n
k log 2 = log n
k = log n // if we assume the base of the logarithm is 2.
So, the total number of iterations = O(log n).
Some people may argue that, since the loop starts with i = 1, that should be counted as the first iteration, making the values of i: 1, 2, 4, 8, 16.
In this case, 2^(k-1) = n.
When we take log on both sides:
log(2^(k-1)) = log n
(k - 1) log 2 = log n
k - 1 = log n
k = log n + 1
The total complexity is still O(log n), since we ignore constants and lower-order terms.
Now consider the following nested loops (the product below implies an outer loop of n/2 iterations, a middle loop of n/2 iterations, and an inner doubling loop of log n iterations):

for (i = n/2; i <= n; i++)
    for (j = 1; j <= n/2; j++)
        for (k = 1; k <= n; k = k * 2)
            // constant-time statement

Time complexity = n/2 * n/2 * log n = (n^2 log n)/4 = O(n^2 log n).