SR
I can think of an n*n*log n solution with some optimization.
1. Find the maximum of a[i][0] over all rows i in {0, n-1}. Let this be curMax.
2. Clearly, any value less than curMax cannot be in the common set.
3. Search for curMax in every row using a binary search that, when curMax is absent, returns the index of the next higher value. <This is the caveat: we need an auxiliary array nextBeginIndex[n] to store these index values. If extra storage is not allowed, this optimization is not possible.> Store the next index for each row in nextBeginIndex[n].
4. If the search succeeds in every row, curMax is common to all rows.
5. Otherwise, find the maximum of a[i][nextBeginIndex[i]] over all i in {0, n-1} as the new curMax.
6. Repeat the same steps.
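The steps above might be sketched as follows (the function name, the -1 "not found" sentinel, and the vector-of-vectors representation are my own choices; lower_bound plays the role of the binary search that returns the next higher value's index):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Find a value common to every sorted row of the matrix, or -1 if none.
int findCommon(const std::vector<std::vector<int>>& a) {
    const int n = (int)a.size();
    std::vector<int> nextBegin(n, 0);             // nextBeginIndex from step 3
    while (true) {
        // Steps 1 and 5: the candidate is the max of each row's next value.
        int curMax = INT_MIN;
        for (int i = 0; i < n; ++i) {
            if (nextBegin[i] >= (int)a[i].size()) return -1;  // a row ran out
            curMax = std::max(curMax, a[i][nextBegin[i]]);
        }
        // Step 3: binary-search each row for curMax; lower_bound gives the
        // index of curMax, or of the next higher value if curMax is absent.
        bool inAll = true;
        for (int i = 0; i < n; ++i) {
            auto it = std::lower_bound(a[i].begin() + nextBegin[i],
                                       a[i].end(), curMax);
            if (it == a[i].end()) return -1;      // nothing >= curMax remains
            if (*it != curMax) inAll = false;
            nextBegin[i] = (int)(it - a[i].begin());
        }
        if (inAll) return curMax;                 // step 4: common to all rows
    }
}
```

Each row is scanned at most once across all iterations, with a log n binary search per step, matching the claimed n*n*log n bound.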
Since we need to preserve the nodes while traversing, instead of using a queue to hold the nodes, let's use a vector and take care of the indexes when we change levels. I am using an "empty" sentinel node to separate two levels.
// Assumes Node has members lc (left child), rc (right child), data,
// and a default constructor usable for the sentinel.
void PrintLevelOrderList(Node * root)
{
    if (!root)
        return;
    static Node * empty = new Node(); // sentinel node separating two levels
    std::vector<Node *> levelOrderList;
    levelOrderList.push_back(root);
    levelOrderList.push_back(empty);
    size_t index = 0;
    while (true)
    {
        Node * curr = levelOrderList[index];
        if (curr != empty)
        {
            if (curr->lc)
                levelOrderList.push_back(curr->lc);
            if (curr->rc)
                levelOrderList.push_back(curr->rc);
        }
        else
        {
            if (index == levelOrderList.size() - 1) // last element, so terminate
                break;
            // We have more elements, but we are done with one level.
            levelOrderList.push_back(empty);
        } // else block
        ++index;
    } // while (true)
    // We now have the list in level-order format.
    // Use reverse iterators to print the nodes.
    std::vector<Node *>::reverse_iterator iter = levelOrderList.rbegin();
    while (iter != levelOrderList.rend())
    {
        if (*iter != empty)
            std::cout << (*iter)->data << ' ';
        else
            std::cout << std::endl;
        ++iter;
    }
}
I guess the problem statement is to create a Huffman code for the string "AAAAAABBCCDDEEFFFFF".
Solution: <well-known method, just adding it for completeness' sake>
1. Find the list of all the symbols and their respective frequencies:
A 6, B 2, C 2, D 2, E 2, F 5.
2. The aim is to create a binary tree in which A, B, C, D, E, F are leaf nodes. The code for any leaf node is the path from the root, expressed as the left or right turn taken at every node; denote left by 0 and right by 1. For example, root -> lc -> rc -> lc -> leafnode gives the code 010.
Now Huffman coding dictates that the symbol with the maximum frequency should have the shortest code. Hence the binary tree we create should be such that A, with the maximum frequency, gets the minimum code length.
3. How do we do it?
a. Store all the leaf nodes in a min heap keyed on frequency.
b. Extract the two minimum-frequency nodes from the heap.
c. Create a binary tree from the two nodes, with a new internal node as root, such that rootNode.freq = node1.freq + node2.freq.
d. Add that rootNode back to the min heap.
<Continue this process until only one element remains, and the resulting tree is what is being asked for.>
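The heap-based construction above can be sketched like this (struct and function names are mine; internal nodes are marked with the '\0' symbol, and the sketch leaks the tree nodes for brevity):

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

struct HNode {
    char sym;        // '\0' for internal nodes
    int freq;
    HNode *left = nullptr, *right = nullptr;
};

// Min-heap ordering on frequency, as step (a) requires.
struct ByFreq {
    bool operator()(const HNode* a, const HNode* b) const { return a->freq > b->freq; }
};

// Walk the finished tree: a left edge appends '0', a right edge appends '1'.
void collectCodes(const HNode* n, const std::string& path,
                  std::map<char, std::string>& codes) {
    if (!n) return;
    if (!n->left && !n->right) { codes[n->sym] = path; return; }
    collectCodes(n->left, path + "0", codes);
    collectCodes(n->right, path + "1", codes);
}

std::map<char, std::string> huffmanCodes(const std::string& text) {
    std::map<char, int> freq;
    for (char c : text) ++freq[c];                  // step 1: frequencies
    std::priority_queue<HNode*, std::vector<HNode*>, ByFreq> heap;
    for (const auto& [sym, f] : freq) heap.push(new HNode{sym, f});
    while (heap.size() > 1) {                       // steps b-d
        HNode* a = heap.top(); heap.pop();
        HNode* b = heap.top(); heap.pop();
        heap.push(new HNode{'\0', a->freq + b->freq, a, b});
    }
    std::map<char, std::string> codes;
    collectCodes(heap.top(), "", codes);
    return codes;
}
```

For the sample string, A (frequency 6) and F (frequency 5) end up with 2-bit codes and the four frequency-2 symbols with 3-bit codes.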
The question can be broken down into two parts, as most of the comments above suggest.
1. Find the nodes that can be reached in k hops below the given node, assuming only pointers to children are available.
2. The second part is the nodes in the upper part of the tree that are k hops away from the given node (B). The parent (A) of the given node (B) is 1 hop away. Say the given node is the left child of its parent; then every node in the parent's right subtree that is k - 1 hops from the parent (A) is k - 1 + 1 = k hops from the given node B. Similarly, the grandparent of B, call it D, is 2 hops away, and every node at distance k - 2 from D (in the subtree not containing A) is k - 2 + 2 = k hops from B.
Function 1: returns the list of all the descendants that are k hops below the node s.
List<Node *> FindNodesAtDistance(Node * s, int k)
Algorithm:
1. Starting from the root, store the nodes on the path to the given node in a queue of maximum size k. The queue holds at most k elements and eventually contains a (sub)path to the given node.
2. Call FindNodesAtDistance from the given node to get all the nodes k hops away in the child direction.
3. Call FindNodesAtDistance on each node in the queue to get the nodes at distance k - i from it (avoiding the subtree that contains the given node), where i is that node's distance from the given node.
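The two parts above can be sketched as follows. I use recursion on the root-to-target path instead of an explicit queue, which is equivalent: each ancestor learns its distance i from the target on the way back up and searches its other subtree. The Node struct and helper names are my own.

```cpp
#include <vector>

struct Node {
    int data;
    Node *lc = nullptr, *rc = nullptr;
};

// Part 1: collect descendants of s that are exactly k edges below it.
void FindNodesAtDistance(Node* s, int k, std::vector<Node*>& out) {
    if (!s || k < 0) return;
    if (k == 0) { out.push_back(s); return; }
    FindNodesAtDistance(s->lc, k - 1, out);
    FindNodesAtDistance(s->rc, k - 1, out);
}

// Part 2: an ancestor i edges above the target contributes the nodes
// k - i edges below it in its *other* subtree (k - i - 1 below that child).
// Returns the distance from `root` up to `target`, or -1 if target is absent.
int Collect(Node* root, Node* target, int k, std::vector<Node*>& out) {
    if (!root) return -1;
    if (root == target) {
        FindNodesAtDistance(root, k, out);        // part 1 from the target
        return 0;
    }
    int dl = Collect(root->lc, target, k, out);
    if (dl >= 0) {
        int i = dl + 1;                           // root is i edges above target
        if (i == k) out.push_back(root);
        else FindNodesAtDistance(root->rc, k - i - 1, out);
        return i;
    }
    int dr = Collect(root->rc, target, k, out);
    if (dr >= 0) {
        int i = dr + 1;
        if (i == k) out.push_back(root);
        else FindNodesAtDistance(root->lc, k - i - 1, out);
        return i;
    }
    return -1;
}
```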
I think the logic is a little convoluted (for me). What we are trying to achieve is: total time spent in funA = (endA - startA) - sum of the time spent in any other function it calls.
We use a stack to track the active functions. Whenever you pop an element from the stack, you check whether it is the given function. If yes, add the difference; if it is some other function, subtract the difference.
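One concrete way to realize the stack idea, assuming a log format of my own invention ("id:start:ts" / "id:end:ts", with inclusive end timestamps): whenever a nested call starts, the caller is credited with the time elapsed so far, so nested time is automatically excluded.

```cpp
#include <sstream>
#include <stack>
#include <string>
#include <vector>

// Exclusive time per function id, computed from start/end log lines.
std::vector<int> exclusiveTime(int n, const std::vector<std::string>& logs) {
    std::vector<int> res(n, 0);
    std::stack<int> st;               // function ids currently on the call stack
    int prev = 0;                     // timestamp of the previous event
    for (const auto& log : logs) {
        std::stringstream ss(log);
        std::string idStr, type, tsStr;
        std::getline(ss, idStr, ':');
        std::getline(ss, type, ':');
        std::getline(ss, tsStr, ':');
        int id = std::stoi(idStr), ts = std::stoi(tsStr);
        if (type == "start") {
            if (!st.empty()) res[st.top()] += ts - prev;  // credit the caller
            st.push(id);
            prev = ts;
        } else {
            res[st.top()] += ts - prev + 1;   // end timestamp is inclusive
            st.pop();
            prev = ts + 1;
        }
    }
    return res;
}
```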
The assumption is that it is always possible to provide change. (For that, we can assume a 1 unit coin is available.)
We will have to define one objective for dispensing the coins:
"The machine would like to dispense as few coins as possible."
Then this problem can be solved either by a greedy approach or by a dynamic programming approach.
Greedy approach: at any given stage, use the highest-valued coin possible. Suppose you have to make change for 98 units, and you have two 50 unit coins, one 20 unit coin, four 10 unit coins, and unlimited 1 unit coins.
1st stage: use one 50 unit coin (we cannot use two, as that would exceed the given amount). Remaining amount is 48.
2nd stage: we could have used two 20 unit coins, but we have only one, so use it. Remaining amount is 28.
3rd stage: use two 10 unit coins. Remaining amount is 8.
4th stage: use eight 1 unit coins.
Another approach is dynamic programming.
Assume the coins have values v1, v2, ..., vn and we have to make change for value j. With MinNoOfCoins(0) = 0,
MinNoOfCoins(j) = min{ MinNoOfCoins(j - v) : v in {v1, ..., vn}, v <= j } + 1
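A bottom-up version of the recurrence might look like this. Note it assumes an unlimited supply of each denomination (unlike the greedy example's limited coin counts) and returns -1 when no change is possible:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// dp[j] = minimum number of coins that make change for value j.
int MinNoOfCoins(const std::vector<int>& coins, int amount) {
    const int INF = INT_MAX / 2;                // "unreachable", overflow-safe
    std::vector<int> dp(amount + 1, INF);
    dp[0] = 0;                                  // zero coins make zero change
    for (int j = 1; j <= amount; ++j)
        for (int v : coins)
            if (v <= j)
                dp[j] = std::min(dp[j], dp[j - v] + 1);
    return dp[amount] >= INF ? -1 : dp[amount];
}
```

With unlimited coins of values {1, 10, 20, 50}, the DP makes 98 as 50 + 20 + 20 + 8x1 = 11 coins, beating the greedy-with-limited-stock answer of 12 above.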
I am assuming the question can be interpreted as an M x N grid with 1 in each cell the character's strokes cover, and 0 otherwise. Assuming the character's dots are connected, all the cells marked 1 form one connected region. We need to figure out the maximum extent of the connected cells.
Approach
Travel in the diagonal direction until you hit a cell with value 1. If you do not hit any such cell, then one half of the matrix does not contain any part of the character, and the set of cells to be tested is reduced (not necessarily halved).
Once you find a cell with value 1, use it as a seed and do a traversal of the connected cells, keeping an eye on the yMin and yMax of the visited cells.
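The seed-and-traverse step could be sketched with an explicit-stack flood fill (the function name and 4-connectivity are my assumptions; the grid is taken by value so visited cells can be cleared without touching the caller's copy):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Flood-fill from a seed 1-cell, returning the vertical extent {yMin, yMax}
// of the connected region of 1-cells.
std::pair<int, int> VerticalExtent(std::vector<std::vector<int>> grid,
                                   int seedR, int seedC) {
    int yMin = seedR, yMax = seedR;
    std::vector<std::pair<int, int>> stack{{seedR, seedC}};
    grid[seedR][seedC] = 0;                       // mark visited
    while (!stack.empty()) {
        auto [r, c] = stack.back();
        stack.pop_back();
        yMin = std::min(yMin, r);
        yMax = std::max(yMax, r);
        const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {
            int nr = r + dr[d], nc = c + dc[d];
            if (nr >= 0 && nr < (int)grid.size() &&
                nc >= 0 && nc < (int)grid[0].size() && grid[nr][nc] == 1) {
                grid[nr][nc] = 0;                 // visit each cell once
                stack.push_back({nr, nc});
            }
        }
    }
    return {yMin, yMax};
}
```

Tracking xMin and xMax in the same loop would give the full bounding box of the character.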
This seems like a DP problem. Here is my take at it.
1. Let removeCount be the maximum number of items that can be removed.
2. Sort the elements by their frequencies.
3. This function takes elemArray and the maximum number of elements that can be removed, and returns the minimum possible variation in the data:
int MinVariation(ElementFrequencyArray elemArray, int removeCount )
4. To reduce the most frequent element, we need to check for the case where it stops being the most frequent element. Suppose removeCount is more than the difference between the top two frequencies; then ReduceHigherFrequentNode will reduce the top element's frequency only until it equals the second-highest frequency in the array.
int ReduceHigherFrequentNode(ElementFrequencyArray& elemArray, int removeCount )
{
if (Max(elemArray) - SecondMax(elemArray) > removeCount)
{
// Reduce the frequency of maximum elem by remove count
// remove count = 0;
}
else
{
// Reduce max element till its equal to second max element count
// Reduce the removeCount by the difference
}
return removeCount;
}
5. To remove elements from the low-frequency side:
int ReduceLowerFrequentNode(ElementFrequencyArray& elemArray, int removeCount )
{
if (Min(elemArray) < removeCount)
{
// Reduce freq of min frequent elem to 0.
// removeCount -= Min(elemArray)
}
return removeCount;
}
6. Given these functions, at any point we have the following options:
a1. If removeCount > Min(elemArray), there is a possibility that removing the least frequent element entirely gives the desired result. Call ReduceLowerFrequentNode, and then, with the changed elemArray and removeCount, find the minimum deviation. Call this result LowFreqRemovalResult.
a2. Even when the above condition is true, we may instead decide to reduce the highest-frequency element: call ReduceHigherFrequentNode and get the new elemArray and removeCount.
b. If the above condition is false, call ReduceHigherFrequentNode repeatedly.
int MinVariation(ElementFrequencyArray elemArray, int removeCount )
{
    if (removeCount == 0)
        return Max(elemArray) - Min(elemArray);
    if (Min(elemArray) < removeCount)
    {
        lowElemArray = copy(elemArray)
        lowRemCount = ReduceLowerFrequentNode(lowElemArray, removeCount);
        lowFreq = MinVariation(lowElemArray, lowRemCount);
        highElemArray = copy(elemArray)
        highRemCount = ReduceHigherFrequentNode(highElemArray, removeCount);
        highFreq = MinVariation(highElemArray, highRemCount);
        return Min(lowFreq, highFreq);
    }
    highElemArray = copy(elemArray)
    highRemCount = ReduceHigherFrequentNode(highElemArray, removeCount);
    return MinVariation(highElemArray, highRemCount);
}
Consider the board as a directed graph.
1. Node k is linked to k + 1, k + 2, k + 3, k + 4, k + 5, k + 6, because from k you can reach these nodes in one throw of the die.
2. If any of these neighbors of k has a ladder that takes you to j, then j becomes the neighbor instead of the base of the ladder. Say node k + 3 takes you to node j; then j, not k + 3, is a neighbor of k.
3. Similarly, if any of the neighbors of k has a snake that takes you to l, then l is a neighbor of k.
Using these conditions, build the graph. The problem is now transformed into finding the shortest path between two nodes in a directed graph.
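Since every edge costs one throw, the shortest path can be found with a plain BFS. A sketch under my own board encoding (board[i] == i for a plain cell; board[i] == j when a snake or ladder at i moves you to j):

```cpp
#include <queue>
#include <vector>

// Minimum number of die throws from cell 0 to the last cell, or -1.
int MinThrows(const std::vector<int>& board) {
    const int n = (int)board.size();
    std::vector<int> dist(n, -1);                 // -1 marks "not visited yet"
    std::queue<int> q;
    dist[0] = 0;
    q.push(0);
    while (!q.empty()) {
        int k = q.front();
        q.pop();
        for (int d = 1; d <= 6 && k + d < n; ++d) {
            int next = board[k + d];              // follow any snake or ladder
            if (dist[next] == -1) {
                dist[next] = dist[k] + 1;
                q.push(next);
            }
        }
    }
    return dist[n - 1];
}
```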
Here is my take on the problem.
Logic:
a. We need to track an index into each of the two arrays.
b. Start merging from the end of the two arrays.
c. Compare FirstArray[i] and SecondArray[j]; if FirstArray[i] <= SecondArray[j], then SecondArray[j] is already in its final place and we just move j.
That was easy. Now consider the opposite case, when FirstArray[i] > SecondArray[j]. We will swap. But now FirstArray is no longer a sorted array, as after swapping it may happen that FirstArray[i] < FirstArray[i - 1].
And we do not want to bubble this element down, as that would only increase the time complexity. What we will do instead is keep the element where it is, and if FirstArray[i] < FirstArray[i - 1], we decrement i, so that FirstArray[i - 1] becomes the candidate for merging at the end of the array.
If we continue this process, FirstArray ends up as a concatenation of two sorted runs. We then apply the same algorithm to FirstArray.
Let's look at how this works on a sample.
If you look at FirstArray [1 4 5 2 3], it is again a mix of sorted runs. We do the same steps on FirstArray:
[1 4 5 2 3]
[1 4 3 2 5]
[1 2 3 4 5]
And we will get the sorted array.
- SR March 18, 2013