The complexity of an algorithm is O(log n) if it cuts the problem size down by a constant fraction at each step (usually by half).

As an example, let us consider the following for loop:

```c
for (i = 1; i <= n; )
    i = i * 2;
```

The value of i doubles every time. Assume n = 16. Initially i = 1; after the first iteration i = 2, and in subsequent iterations i = 4, 8, 16.
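A minimal trace of the loop above, written in Python for illustration, shows the values i takes for n = 16:

```python
# Trace the values i takes in: for (i = 1; i <= n; ) i = i * 2;
n = 16
i = 1
values = []
while i <= n:
    values.append(i)  # record i at the start of each iteration
    i *= 2
print(values)  # → [1, 2, 4, 8, 16]
```

The loop body executes five times in total, once for each recorded value of i.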

Let us assume that the loop executes k times.

So when k = 4, n = 16. That is, at the kth step, 2^k = n.

We can take the logarithm of both sides:

log(2^k) = log n

k log 2 = log n

k = log n // if we assume base 2

So the total number of iterations = O(log n).

Some people may argue that since the loop starts with i = 1, that should be counted as the first iteration, making the sequence of values: 1, 2, 4, 8, 16.

In this case, 2^(k-1) = n.

When we take the log of both sides:

log(2^(k-1)) = log n

(k - 1) log 2 = log n

k - 1 = log n

k = log n + 1

The total complexity is still O(log n), since we ignore constants and lower-order terms.
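As a quick check of the k = log n + 1 count, a short Python sketch (the function name doubling_iterations is mine) compares the simulated loop against the closed form for powers of two:

```python
import math

def doubling_iterations(n):
    """Count body executions of: for (i = 1; i <= n; ) i = i * 2;"""
    i, k = 1, 0
    while i <= n:
        i *= 2
        k += 1
    return k

# For n a power of two, the count is exactly log2(n) + 1; the "+1" is
# the constant that big-O notation discards.
for exp in range(1, 20):
    n = 2 ** exp
    assert doubling_iterations(n) == int(math.log2(n)) + 1
```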

**Question 1:**

```c
for (i = n/2; i <= n; i++)
    for (j = 1; j + n/2 <= n; j++)
        for (k = 1; k <= n; k = k*2)
            count++;
```

Time complexity = n/2 * n/2 * log n = (n^2 log n)/4 = O(n^2 log n). The outer loop runs about n/2 times (i from n/2 to n), the middle loop n/2 times (j from 1 while j <= n/2), and the inner doubling loop log n times.
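To sanity-check this count, the three nested loops can be simulated directly. This Python sketch (the function name question1_count is mine) mirrors the C-style loops above, assuming n is a power of two:

```python
def question1_count(n):
    """Simulate the three nested loops from Question 1 (n assumed even)."""
    count = 0
    i = n // 2
    while i <= n:                 # outer loop: n/2 + 1 iterations
        j = 1
        while j + n // 2 <= n:    # middle loop: n/2 iterations
            k = 1
            while k <= n:         # inner loop: floor(log2 n) + 1 iterations
                count += 1
                k *= 2
            j += 1
        i += 1
    return count

# For n = 16: (16/2 + 1) * (16/2) * (log2(16) + 1) = 9 * 8 * 5 = 360
print(question1_count(16))  # → 360
```

The exact count (n/2 + 1) * (n/2) * (log2(n) + 1) grows as (n^2 log n)/4, consistent with O(n^2 log n).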
