Problem: Given a non-negative integer n, for every number i in the range 0 ≤ i ≤ n, calculate the number of 1's (set bits) in its binary representation and return the counts as an array.
Example:
Input: 6
Output: [0,1,1,2,1,2,2]
Approach: One easy way is to iterate through every number from 0 to n, count the set bits in each number, and store the count in the result array. This takes O(n * k) time, where k is the number of bits in an integer. Let's see how we can do better.
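To make that baseline concrete, here is a minimal sketch of the brute-force approach (the method name CountBitsNaive is just for illustration):

public int[] CountBitsNaive(int n)
{
    int[] result = new int[n + 1];
    for (int i = 0; i <= n; ++i)
    {
        int count = 0;
        // Shift the number right bit by bit, counting every set bit.
        for (int x = i; x != 0; x >>= 1)
        {
            count += x & 1;
        }
        result[i] = count;
    }
    return result;
}

The inner loop runs once per bit of i, which is where the extra factor of k comes from.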
We can use DP. We maintain a 1D table where Table[i] stores the number of set bits in the number i. Let's see how we can fill this table for every i:
- i is even (i % 2 == 0): Table[i] = Table[i >> 1]. Let's see why: if i % 2 == 0, the rightmost bit of i is 0. Take any example: 2 (10), 4 (100), 6 (110), etc. Since the rightmost bit is 0, it contributes nothing to the count of set bits, so i and i >> 1 have the same number of set bits.
- i is odd (i % 2 == 1): Table[i] = Table[i >> 1] + 1. For any odd number the rightmost bit is 1, and i >> 1 drops that set bit, so the number of set bits in i equals the number of set bits in i >> 1 plus 1.
Table[i] = ((i & 1) == 0) ? Table[i >> 1] : Table[i >> 1] + 1
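To see the recurrence in action, here is how the table fills for the example input n = 6:

Table[0] = 0
Table[1] = Table[0] + 1 = 1
Table[2] = Table[1]     = 1
Table[3] = Table[1] + 1 = 2
Table[4] = Table[2]     = 1
Table[5] = Table[2] + 1 = 2
Table[6] = Table[3]     = 2

This matches the expected output [0,1,1,2,1,2,2].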
Implementation in C#:
public int[] CountBits(int n)
{
    int[] result = new int[n + 1];
    // result[0] stays 0: zero has no set bits.
    for (int i = 1; i <= n; ++i)
    {
        // i >> 1 drops the rightmost bit; (i & 1) adds it back if it was set.
        result[i] = result[i >> 1] + (i & 1);
    }
    return result;
}
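For example, calling the method with the input above (assuming the method lives in a class named Solution, which is our naming for illustration):

int[] bits = new Solution().CountBits(6);
Console.WriteLine(string.Join(",", bits)); // prints 0,1,1,2,1,2,2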
Complexity: O(n) time, since each Table[i] is computed in O(1) from an already-filled entry, and O(n) space for the result array.