zortlord
Simple O(n)-complexity, O(n)-memory, non-recursive BFS traversal:
public void printTree(Node root){
if(root == null){
return;
}
LinkedList<Node> list = new LinkedList<Node>();
LinkedList<Node> children = new LinkedList<Node>();
LinkedList<Node> temp;
StringBuilder line = new StringBuilder();
Node node, child;
list.add(root);
while(!list.isEmpty()){
while(!list.isEmpty()){
node = list.removeFirst();
line.append(node.value);
child = node.left;
if(child != null){
children.add(child);
}
child = node.right;
if(child != null){
children.add(child);
}
}
java.lang.System.out.println(line.toString());
line.setLength(0);
temp = children;
children = list;
list = temp;
}
}
How about a plain old greedy map-coloring approach? Complexity is O(nm) and memory is O(n), where n is the number of students and m is the number of known friends. Depending on how the set of known students is determined, the complexity may be closer to O(n^2).
Assumptions:
-Students are treated as Ids
-the lookup table is accessible via a function "int[] getKnown(int studentId)"
public static int getNumColors(int[] students){
if(students == null){
throw new NullPointerException();
}
HashMap<Integer, Integer> studentToTableMap = new HashMap<Integer, Integer>(students.length);
int numTables = 0;
for(int studentId : students){
int[] known = getKnown(studentId);
HashSet<Integer> peerTables = new HashSet<Integer>();
for(int knownId : known){
Integer table = studentToTableMap.get(knownId);
if(table != null){
peerTables.add(table);
}
}
int tableCount = 1;
while(peerTables.contains(tableCount)){
tableCount++;
}
studentToTableMap.put(studentId, tableCount);
if(tableCount > numTables){
numTables = tableCount;
}
}
return numTables;
}
Has the benefit of not creating extra interim objects at every level of the function call stack. Runtime: O(2^k * n) where k is the number of '?' characters; memory O(n), consumed only by the function call stack (one frame per '?').
public static void getPerms(String str){
if(str == null){
throw new NullPointerException();
}
Worker worker = new Worker(str);
worker.work();
}
public static class Worker{
private char[] arr;
private StringBuilder line;
private int index;
public Worker(String str){
this.arr = str.toCharArray();
this.line = new StringBuilder();
this.index = 0;
}
public void work(){
while(this.index < this.arr.length){
if(this.arr[this.index] == '?'){
//remember where this '?' was so the index can be restored after the recursive calls
int pos = this.index;
this.arr[pos] = '0';
this.index = pos + 1;
this.work();
this.arr[pos] = '1';
this.index = pos + 1;
this.work();
this.index = pos;
this.arr[pos] = '?';
return;
}
this.index++;
}
this.print();
}
private void print(){
this.line.setLength(0);
for(int i = 0; i < this.arr.length; i++){
line.append(this.arr[i]);
}
java.lang.System.out.println(line.toString());
}
}
Use a binary tree of some type (pick your favorite flavor). Each node of the tree stores a single photo and a flag indicating whether the photo is a favorite. The tree is sorted on the time the photographs were taken.
Insert(x, t, m): inserts a new tree node at a location 't' containing photo 'x' and flagged favorite = 'm'==1. This operation is O(log n) complexity and takes O(1) memory where n is the number of stored photos
Search(t): do a search for the first node that occurs immediately after 't'. This operation is O(log n) in complexity and O(1) memory where n is the number of stored photos
NextFavorite(t): do a search for the first node with the flag == true and time after 't'. This operation is also O(log n) in complexity and O(1) memory where n is the number of stored photos
For the most part, this is the 90% immediate solution. There are naturally other things that could be done like bucketing chunks of time or maintaining separate trees for the favorites vs all the photos.
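As a concrete sketch, a `TreeMap` keyed on capture time covers all three operations (the `Photo` type and method names here are assumptions for illustration; note that NextFavorite against a single tree degrades to O(n) in the worst case, which is exactly why the separate favorites tree mentioned above helps):

```java
import java.util.Map;
import java.util.TreeMap;

class PhotoTimeline {
    static class Photo {
        final String name;
        final boolean favorite;
        Photo(String name, boolean favorite){ this.name = name; this.favorite = favorite; }
    }

    private final TreeMap<Long, Photo> byTime = new TreeMap<Long, Photo>();

    // Insert(x, t, m): O(log n)
    void insert(Photo x, long t){ byTime.put(t, x); }

    // Search(t): first photo taken at or after time t, O(log n)
    Photo search(long t){
        Map.Entry<Long, Photo> e = byTime.ceilingEntry(t);
        return e == null ? null : e.getValue();
    }

    // NextFavorite(t): walk forward until a favorite is found.
    // Worst case O(n) with one tree; a second tree of favorites restores O(log n).
    Photo nextFavorite(long t){
        for(Map.Entry<Long, Photo> e : byTime.tailMap(t, true).entrySet()){
            if(e.getValue().favorite){ return e.getValue(); }
        }
        return null;
    }
}
```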
This can be done in average O(n) complexity using O(n) memory with a pivot-and-partition (quickselect) strategy:
String find(List<Item> items, int ith){
if(items == null){
throw new NullPointerException();
}
if(ith < 1 || ith > items.size()){
throw new IllegalArgumentException();
}
int offset = 0;
ArrayList<Item> moreSold = null;
ArrayList<Item> lessSold = null;
while(items.size() > 0){
moreSold = new ArrayList<Item>();
lessSold = new ArrayList<Item>();
Iterator<Item> iterator = items.iterator();
//get the pivot point
Item pivot = iterator.next();
//split the items up into larger and smaller groups
while(iterator.hasNext()){
Item temp = iterator.next();
if(temp.quantitySold > pivot.quantitySold){
moreSold.add(temp);
}
else{
lessSold.add(temp);
}
}
//now, figure out which list is which
if(offset + moreSold.size() >= ith){
items = moreSold;
}
else if(offset + moreSold.size() + 1 == ith){
return pivot.itemId;
}
else{
offset += (moreSold.size() + 1);
items = lessSold;
}
}
return null;
}
Non-recursive DFS starting at the start position. Operates in O(nm) complexity and O(nm) memory (no extra call-stack frames either), where n and m are the dimensions of the binary array. This could potentially be sped up using something like A*, but that would take a bit more implementation.
public static boolean hasPath(int[][] arr, int[] start, int[] end){
if(arr == null || start == null || end == null){
throw new NullPointerException();
}
if(arr.length == 0 || arr[0].length == 0 || start.length !=2 || end.length != 2){
throw new IllegalArgumentException();
}
boolean[][] alreadyChecked = new boolean[arr.length][arr[0].length];
Stack<int[]> positionsToCheck = new Stack<int[]>();
positionsToCheck.add(start);
while(!positionsToCheck.isEmpty()){
int[] pos = positionsToCheck.pop();
if(Arrays.equals(pos, end)){
return true;
}
if(pos[0] < 0 || pos[0] >= arr.length || pos[1] < 0 || pos[1] >= arr[0].length || arr[pos[0]][pos[1]] == 1 || alreadyChecked[pos[0]][pos[1]]){
continue;
}
alreadyChecked[pos[0]][pos[1]] = true;
positionsToCheck.add(new int[]{pos[0] - 1, pos[1]});
positionsToCheck.add(new int[]{pos[0], pos[1] - 1});
positionsToCheck.add(new int[]{pos[0] + 1, pos[1]});
positionsToCheck.add(new int[]{pos[0], pos[1] + 1});
}
return false;
}
Assuming the dictionary is an array of strings, the best structure to test with would be a Trie. Using a Trie, the search effectively operates at O(n*m) complexity to construct the Trie, where n is the number of words and m is the 'length' of the words, and O(n^2*m) to re-search the word list for the longest compound word.
static class TrieNode<T>{
HashMap<T,TrieNode<T>> children = new HashMap<T, TrieNode<T>>();
boolean isTerminated;
}
static void addWordToTrie(String word, TrieNode<Character> head){
TrieNode<Character> temp = head;
int i = 0;
while(i < word.length()){
Character c = word.charAt(i);
TrieNode<Character> tempChild = temp.children.get(c);
if(tempChild == null){
tempChild = new TrieNode<Character>();
temp.children.put(c, tempChild);
}
temp = tempChild;
i++;
}
temp.isTerminated = true;
}
public static String getLargestCompoundWord(String[] dict){
if(dict == null){
throw new NullPointerException("\"dict\" may not be null");
}
//build the Trie
TrieNode<Character> head = new TrieNode<Character>();
for(String str : dict){
addWordToTrie(str, head);
}
//complete the search
Worker worker = new Worker(dict, head);
worker.work();
return worker.getResults();
}
static class Worker{
TrieNode<Character> head;
String[] dict;
String result;
Worker(String[] dict, TrieNode<Character> head){
this.dict = dict;
this.head = head;
this.result = "";
}
void work(){
for(String word : this.dict){
if(word.length() > this.result.length()){
if(this.compoundCount(word, 0, 0)){
this.result = word;
}
}
}
}
boolean compoundCount(String str, int start, int countedWords){
if(start >= str.length()){
return countedWords > 1;
}
TrieNode<Character> temp = head;
while(start < str.length()){
Character c = str.charAt(start);
start++;
TrieNode<Character> tempChild = temp.children.get(c);
if(tempChild == null){
return false;
}
if(tempChild.isTerminated){
if(compoundCount(str, start, countedWords + 1)){
return true;
}
}
//advance into the Trie before consuming the next character
temp = tempChild;
}
return false;
}
String getResults(){
return this.result;
}
}
Operates with O(nm) complexity and O(1) memory where n is the number of groups and m is the number of trips.
public static int getNumberMoved(int[] groups, int busSize, int trips){
int position = 0;
int total = 0;
while(trips > 0){
int thisTripPosition = position;
int totalForThisTrip = groups[thisTripPosition];
thisTripPosition = (thisTripPosition + 1) % groups.length;
while(totalForThisTrip + groups[thisTripPosition] <= busSize && thisTripPosition != position){
totalForThisTrip += groups[thisTripPosition];
thisTripPosition = (thisTripPosition + 1) % groups.length;
}
total += totalForThisTrip;
trips--;
}
return total;
}
public static boolean solveSudoku(int[][] board){
if(board == null){
throw new NullPointerException("\"board\" may not be null");
}
Worker worker = new Worker(board);
return worker.work();
}
static class Worker{
int[][] board;
HashSet<Integer>[] rowSets;
HashSet<Integer>[] colSets;
HashSet<Integer>[][] sqrSets;
Worker(int[][] board){
this.board = board;
this.init();
}
void init(){
this.rowSets = (HashSet<Integer>[])new HashSet[9];
this.colSets = (HashSet<Integer>[])new HashSet[9];
for(int i = 0; i < 9; i++){
this.rowSets[i] = new HashSet<Integer>();
this.colSets[i] = new HashSet<Integer>();
for(int j = 1; j < 10; j++){
this.rowSets[i].add(j);
this.colSets[i].add(j);
}
}
this.sqrSets = (HashSet<Integer>[][])new HashSet[3][3];
for(int x = 0; x < 3; x++){
for(int y = 0; y < 3; y++){
this.sqrSets[x][y] = new HashSet<Integer>();
for(int i = 1; i < 10; i++){
this.sqrSets[x][y].add(i);
}
}
}
for(int x = 0; x < this.board.length; x++){
for(int y = 0; y < this.board[x].length; y++){
if(this.board[x][y] > 0){
this.setValue(x, y, this.board[x][y]);
}
}
}
}
void setValue(int x, int y, int val){
this.board[x][y] = val;
Integer integerVal = Integer.valueOf(val);
this.rowSets[y].remove(integerVal);
this.colSets[x].remove(integerVal);
this.sqrSets[x/3][y/3].remove(integerVal);
}
void unsetValue(int x, int y){
Integer integerVal = Integer.valueOf(this.board[x][y]);
this.board[x][y] = 0;
this.rowSets[y].add(integerVal);
this.colSets[x].add(integerVal);
this.sqrSets[x/3][y/3].add(integerVal);
}
HashSet<Integer> getPossibles(int x, int y){
HashSet<Integer> possibles = new HashSet<Integer>();
for(Integer i : this.sqrSets[x/3][y/3]){
if(this.colSets[x].contains(i) && this.rowSets[y].contains(i)){
possibles.add(i);
}
}
return possibles;
}
boolean work(){
int minX = -1;
int minY = -1;
HashSet<Integer> bestSolutions = null;
int min = Integer.MAX_VALUE;
outer: for(int x = 0; x < 9; x++){
for(int y = 0; y < 9; y++){
if(this.board[x][y] == 0){
HashSet<Integer> possibleSolutions = getPossibles(x, y);
if(possibleSolutions.size() == 0){
return false;
}
else if(possibleSolutions.size() == 1){
bestSolutions = possibleSolutions;
minX = x;
minY = y;
break outer;
}
else if(possibleSolutions.size() < min){
minX = x;
minY = y;
min = possibleSolutions.size();
bestSolutions = possibleSolutions;
}
}
}
}
if(minX == -1 && minY == -1){
return true;
}
Iterator<Integer> iterator = bestSolutions.iterator();
while(iterator.hasNext()){
Integer val = iterator.next();
this.setValue(minX, minY, val);
if(this.work()){
return true;
}
this.unsetValue(minX, minY);
}
return false;
}
}
public static String expand(String str){
int loc = 0;
StringBuilder output = new StringBuilder();
while(loc < str.length()){
//get the next character to repeat
char c= str.charAt(loc);
loc++;
//get the next number
int num = 0;
while(loc < str.length()){
char n = str.charAt(loc);
if(!isDigit(n)){
break;
}
num = num * 10 + (n - '0');
loc++;
}
//add character to output
while(num > 0){
output.append(c);
num --;
}
}
return output.toString();
}
private static boolean isDigit(char c){
return c >= '0' && c <= '9';
}
Assuming we're allowed to change the array. Alternately, change the array back afterwards or use a separate 2d boolean array. Runtime O(nm) with O(nm) memory, where n and m are the dimensions of the 2d array:
static class Worker{
private int[][] arr;
private ArrayList<ArrayList<int[]>> results;
Worker(int[][] arr){
this.arr = arr;
this.results = new ArrayList<ArrayList<int[]>>();
}
void execute(){
ArrayList<int[]> resultList = new ArrayList<int[]>();
for(int x = 0; x < arr.length; x++){
for(int y = 0; y < arr[x].length; y++){
execute(x, y, resultList);
if(resultList.size() > 0){
this.results.add(resultList);
resultList = new ArrayList<int[]>();
}
}
}
}
void execute(int x, int y, ArrayList<int[]> list){
if(x < 0 || x > this.arr.length-1 || y < 0 || y > this.arr[x].length -1 || this.arr[x][y] < 1){
return;
}
list.add(new int[]{x, y});
this.arr[x][y] = -this.arr[x][y];
this.execute(x-1, y-1, list);
this.execute(x-1, y, list);
this.execute(x-1, y+1, list);
this.execute(x, y-1, list);
this.execute(x, y+1, list);
this.execute(x+1, y-1, list);
this.execute(x+1, y, list);
this.execute(x+1, y+1, list);
}
ArrayList<ArrayList<int[]>> getResults(){
return this.results;
}
}
public static void printPixelSets(int[][] arr){
if(arr == null){
throw new NullPointerException();
}
Worker worker = new Worker(arr);
worker.execute();
ArrayList<ArrayList<int[]>> results = worker.getResults();
StringBuilder row = new StringBuilder();
for(ArrayList<int[]> set : results){
row.append('{');
for(int[] point : set){
row.append('(');
row.append(point[0]);
row.append(", ");
row.append(point[1]);
row.append(')');
}
row.append('}');
java.lang.System.out.println(row.toString());
row.setLength(0);
}
}
public static void rearrange(int[] arr){
if(arr == null){
throw new NullPointerException();
}
int oddIndex = 0;
int evenIndex = 1;
while(evenIndex < arr.length && oddIndex < arr.length){
//find the next odd value to move
while(evenIndex < arr.length && arr[evenIndex] % 2 == 0){
evenIndex+=2;
}
//find the next even value to move
while(oddIndex < arr.length && arr[oddIndex] % 2 == 1){
oddIndex += 2;
}
//swap positions
if(oddIndex < arr.length && evenIndex < arr.length){
int temp = arr[evenIndex];
arr[evenIndex] = arr[oddIndex];
arr[oddIndex] = temp;
}
}
}
How about maintaining a HashSet that covers the values that have already been encountered along with the minimal value found so far? O(n) memory and O(n) runtime:
public static int getSmallestUnfoundInInterval(int[] arr){
if(arr == null){
throw new NullPointerException();
}
if(arr.length == 0){
throw new IllegalArgumentException();
}
//build the HashSet and get the min
HashSet<Integer> set = new HashSet<Integer>();
int minVal = Integer.MAX_VALUE;
for(int i : arr){
if(i < minVal){
minVal = i;
}
set.add(i);
}
//search the HashSet for the first value counting up from the min that is NOT in the set
while(set.contains(minVal)){
minVal++;
}
return minVal;
}
Alternately, the following could be done, which would be much faster (O(n log m) instead of O(nm)):
public static class LinkNode<T extends Comparable<T>>{
LinkNode<T> next;
T value;
}
public static <T extends Comparable<T>> LinkNode<T> merge(LinkNode<LinkNode<T>> lists){
LinkNode<T> resultHead = null;
LinkNode<T> resultTail = null;
//build a heap of the contents
PriorityQueue<LinkNode<LinkNode<T>>> heap = new PriorityQueue<LinkNode<LinkNode<T>>>(new Comparator<LinkNode<LinkNode<T>>>(){
@Override
public int compare(LinkNode<LinkNode<T>> list1, LinkNode<LinkNode<T>> list2){
return list1.value.value.compareTo(list2.value.value);
}
});
while(lists != null){
if(lists.value != null){
heap.add(lists);
}
LinkNode<LinkNode<T>> next = lists.next;
lists.next = null;
lists = next;
}
//if there is stuff in the heap
if(!heap.isEmpty()){
//build the head of the results
lists = heap.poll();
LinkNode<T> node = lists.value;
lists.value = node.next;
resultHead = node;
resultTail = node;
//don't add back an empty list
if(lists.value != null){
heap.add(lists);
}
//while there is still content in the heap, keep working
while(!heap.isEmpty()){
lists = heap.poll();
node = lists.value;
lists.value = node.next;
resultTail.next = node;
resultTail = node;
//don't add back an empty list
if(lists.value != null){
heap.add(lists);
}
}
//the last node appended may still point into its old list
resultTail.next = null;
}
return resultHead;
}
If the question is to make 1 singly linked list out of K lists, then it could be done thusly:
public static class ListNode<T extends Comparable<T>>{
ListNode<T> next;
T value;
}
public static <T extends Comparable<T>> ListNode<T> mergeLists(ListNode<ListNode<T>> lists){
if(lists == null){
return null;
}
//build the head of the results
Object[] minObj = getMin(lists);
ListNode<T> resultHead = (ListNode<T>)minObj[1];
ListNode<T> resultTail = resultHead;
while(resultTail != null){
lists = (ListNode<ListNode<T>>)minObj[0];
minObj = getMin(lists);
resultTail.next = (ListNode<T>)minObj[1];
resultTail = resultTail.next;
}
return resultHead;
}
private static <T extends Comparable<T>> Object[] getMin(ListNode<ListNode<T>> lists){
//[0] will be the modified list
//[1] will be the min
Object[] results = new Object[2];
if(lists == null){
return results;
}
ListNode<ListNode<T>> newListsHead = null;
ListNode<ListNode<T>> newListsTail = null;
ListNode<ListNode<T>> temp = lists;
ListNode<ListNode<T>> minListHolder = null;
//set the new lists head and the first min
while(temp != null){
if(temp.value != null){
if(minListHolder == null || minListHolder.value.value.compareTo(temp.value.value) > 0){
minListHolder = temp;
}
//since the value is not null, keep it around for the next search
if(newListsHead == null){
newListsHead = temp;
newListsTail = temp;
}
else{
newListsTail.next = temp;
newListsTail = temp;
}
}
temp = temp.next;
}
//remove any dangling empty lists
if(newListsTail != null){
newListsTail.next = null;
}
ListNode<T> minNode = null;
if(minListHolder != null){
minNode = minListHolder.value;
minListHolder.value = minNode.next;
}
results[0] = newListsHead;
results[1] = minNode;
return results;
}
public static int searchRotArr(int[] arr, int val){
int lo = 0, hi = arr.length - 1;
while(lo <= hi){
int mid = (lo + hi) >>> 1;
if(val == arr[mid]){
return mid;
}
if(arr[lo] <= arr[mid]){
//left half is sorted
if(val >= arr[lo] && val < arr[mid]){
hi = mid - 1;
}
else{
lo = mid + 1;
}
}
else{
//right half is sorted
if(val > arr[mid] && val <= arr[hi]){
lo = mid + 1;
}
else{
hi = mid - 1;
}
}
}
return -1;
}
There are 2 ways to consider this depending on how the function operates:
1. If the function simply sorts based on preexisting data that effectively ranks the two teams, then this is a basic sort (this matches what is normally done for a 'tournament'). The best runtime would be O(n log n) using something like a merge sort, or O(n log n) on average with a quicksort.
2. Alternatively, this could be a situation where every team has played every other team and the ranking is done on overall win/loss record ('round-robin'). This would require O(n^2) run time.
Tournament implementation:
public static Team[] rank(Team[] teams){
Comparator<Team> teamComparator = new Comparator<Team>(){
@Override
public int compare(Team t1, Team t2){
if(match(t1, t2) == t1){
return -1;
}
return 1;
}
};
Team[] results = new Team[teams.length];
System.arraycopy(teams, 0, results, 0, teams.length);
Arrays.sort(results, teamComparator);
return results;
}
Round Robin approach:
public static Team[] rank(Team[] teams){
Team[] results = new Team[teams.length];
//make a list of all unassigned positions in the results
ArrayList<Integer> unassigned = new ArrayList<Integer>(teams.length);
for(int i = 0; i < teams.length; i++){
unassigned.add(i);
}
//now place the teams
for(int i = 0; i < teams.length; i++){
//count the losses of each team compared to the other remaining teams.
int loseRecord= 0;
for(int j = i+1; j < teams.length; j++){
if(match(teams[i], teams[j]) != teams[i]){
loseRecord++;
}
}
//the losses indicates the position in the remaining results where this team should fall
int index = unassigned.remove(loseRecord);
results[index] = teams[i];
}
return results;
}
public static class TreeNode{
TreeNode left, right;
int val;
}
public static TreeNode rebuild(int[] inOrder, int[] preOrder){
return rebuildRecur(inOrder, 0, inOrder.length, preOrder, 0);
}
private static TreeNode rebuildRecur(int[] inOrder, int inStart, int inEnd, int[] preOrder, int preIndex){
if(inStart >= inEnd){
return null;
}
TreeNode node = new TreeNode();
node.val = preOrder[preIndex];
int splitIndex = -1;
for(int i = inStart; i < inEnd; i++){
if(inOrder[i] == node.val){
splitIndex = i;
break;
}
}
//the left subtree holds (splitIndex - inStart) nodes, so the right subtree's root
//sits that many positions past the current preorder index, plus one
node.left = rebuildRecur(inOrder, inStart, splitIndex, preOrder, preIndex + 1);
node.right = rebuildRecur(inOrder, splitIndex + 1, inEnd, preOrder, preIndex + (splitIndex - inStart) + 1);
return node;
}
public static void printSizedDivision(int numerator, int divisor, int digits){
StringBuilder line = new StringBuilder();
line.append(numerator / divisor);
line.append('.');
numerator %= divisor;
while(numerator > 0 && digits > 0){
numerator *= 10;
line.append(numerator / divisor);
numerator %= divisor;
digits--;
}
System.out.println(line.toString());
}
I'm going to assume that your tree is supposed to be the following because I can't read yours:
_________1
_______/___\
_____2______3
___/___\____/__\
__4____5__6___7
_/__\___/______/
8___9_10_____11
___/
__12
public static void printBottom(TreeNode node){
Worker worker = new Worker(node);
worker.execute();
System.out.println(worker.getResults());
}
private static class Worker{
TreeNode head;
StringBuilder resultHolder;
private Worker(TreeNode head){
this.head = head;
}
private void execute(){
this.resultHolder = new StringBuilder();
this.executeRecur(this.head);
}
private void executeRecur(TreeNode node){
if(node.left != null){
this.executeRecur(node.left);
}
if(node.left == null || node.right == null){
this.resultHolder.append(node.val);
this.resultHolder.append(' ');
}
if(node.right != null){
this.executeRecur(node.right);
}
}
}
private String getResults(){
if(this.resultHolder != null){
return this.resultHolder.toString();
}
return null;
}
}
Keep the data in a 2d array of ints so it can act as a pseudo-graph. Then run a DFS from each position toward a Guard, using previously computed values to short-circuit the computation.
public static final int WALL=Integer.MAX_VALUE;
public static final int UNCHECKED = -2;
public static final int OPEN = -1;
public static final int GUARD = 0;
public static int[][] computeDistance(int[][] room){
class Worker{
int[][] area;
int[][] results;
private Worker(int[][] area){
this.area = area;
this.results = new int[area.length][area[0].length];
for(int y = 0; y < this.area.length; y++){
for(int x = 0; x < this.area[0].length; x++){
this.results[y][x] =UNCHECKED;
}
}
}
private void work(){
for(int y = 0; y < this.area.length; y++){
for(int x = 0; x < this.area[0].length; x++){
this.work(x, y);
}
}
}
private int work(int x, int y){
if(y < 0 || y >= this.area.length){
return WALL;
}
if(x < 0 || x >= this.area[0].length){
return WALL;
}
int returnVal = this.results[y][x];
if(returnVal != UNCHECKED){
return returnVal ;
}
int areaVal = this.area[y][x];
if(areaVal == WALL){
returnVal = WALL;
}
else if(areaVal == GUARD){
returnVal = GUARD;
}
else if(areaVal == OPEN){
//temporarily mark this cell as a WALL so mutually recursive neighbors terminate
this.results[y][x] = WALL;
returnVal = Math.min(work(x, y+1), Math.min(work(x+1, y), Math.min(work(x, y-1), work(x-1, y))));
if(returnVal == WALL){
returnVal = OPEN;
}
else if(returnVal < WALL && returnVal > OPEN){
returnVal++;
}
}
this.results[y][x] = returnVal;
return returnVal;
}
}
Worker worker = new Worker(room);
worker.work();
return worker.results;
}
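Since memoized DFS can return inexact distances when the best path winds around obstacles, an alternative worth noting is a multi-source BFS seeded with every Guard, which yields exact nearest-guard distances in one O(nm) pass. A minimal sketch, assuming the same WALL/OPEN/GUARD encoding as above:

```java
import java.util.ArrayDeque;

class GuardDistance {
    static final int WALL = Integer.MAX_VALUE;
    static final int OPEN = -1;
    static final int GUARD = 0;

    // Multi-source BFS: seed the queue with every guard, then expand outward.
    // Each open cell receives the exact distance to its nearest guard;
    // unreachable open cells stay at OPEN.
    static int[][] compute(int[][] room){
        int h = room.length, w = room[0].length;
        int[][] dist = new int[h][w];
        ArrayDeque<int[]> queue = new ArrayDeque<int[]>();
        for(int y = 0; y < h; y++){
            for(int x = 0; x < w; x++){
                dist[y][x] = room[y][x] == GUARD ? 0 : (room[y][x] == WALL ? WALL : OPEN);
                if(room[y][x] == GUARD){
                    queue.add(new int[]{y, x});
                }
            }
        }
        int[][] moves = {{1,0},{-1,0},{0,1},{0,-1}};
        while(!queue.isEmpty()){
            int[] pos = queue.poll();
            for(int[] m : moves){
                int ny = pos[0] + m[0], nx = pos[1] + m[1];
                if(ny >= 0 && ny < h && nx >= 0 && nx < w && dist[ny][nx] == OPEN){
                    dist[ny][nx] = dist[pos[0]][pos[1]] + 1;
                    queue.add(new int[]{ny, nx});
                }
            }
        }
        return dist;
    }
}
```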
This would be a simple iterative simulation...
public class Car{
public static final double HANDLING_FACTOR= .8d;
public int team;
public double speed;
public double nextSpeed;
public double topSpeed;
public double accel;
public boolean hasNitro;
public double positionFromEnd;
public double nextPositionFromEnd;
public Car(int i, double trackLength){
this.team = i;
//meters per 2 sec
this.topSpeed = (150d + 10d * (double)i)/1.8d;
// meters per 2 sec^2
this.accel = 2d * (double)i;
this.accel *= this.accel;
this.hasNitro = true;
this.positionFromEnd = trackLength + 200d * (double)i;
}
public void update(){
this.speed = this.nextSpeed;
this.positionFromEnd = this.nextPositionFromEnd;
}
}
public class FinishInformation{
public int team;
public int finishTime;
public double finalSpeed;
public FinishInformation(int team, int finishTime, double finalSpeed){
this.team = team;
this.finishTime = finishTime;
this.finalSpeed = finalSpeed;
}
}
public static Collection<FinishInformation> getCompletionData(int numTeams, double trackLength){
//initialize the teams
ArrayList<Car> cars = new ArrayList<Car>(numTeams);
for(int i = 0; i < numTeams; i++){
cars.add(new Car(i, trackLength));
}
//initialize the output
ArrayList<FinishInformation> results = new ArrayList<FinishInformation>();
Comparator<Car> carSorter = new Comparator<Car>(){
@Override
public int compare(Car c1, Car c2){
return Double.compare(c1.positionFromEnd, c2.positionFromEnd);
}
};
//while there are cars still racing
int iterations = 0;
while(cars.size() > 0){
//sort the list based on the positions.
Collections.sort(cars, carSorter);
//now, nearby cars are used for the Handling checks and the last car will always use nitro
for(int i = 0; i < cars.size(); i++){
Car car = cars.get(i);
//check for nearby-ness
boolean notNearby = true;
if(i > 0){
if(Math.abs(cars.get(i-1).positionFromEnd - car.positionFromEnd) <= 10d){
notNearby = false;
}
}
if(i < cars.size() -1){
if(Math.abs(cars.get(i+1).positionFromEnd - car.positionFromEnd) <= 10d){
notNearby = false;
}
}
//compute next speed
if(notNearby){
car.nextSpeed = car.speed + car.accel;
if(car.nextSpeed > car.topSpeed){
car.nextSpeed = car.topSpeed;
}
}
else{
car.nextSpeed = car.speed * Car.HANDLING_FACTOR;
}
//compute next position
car.nextPositionFromEnd = car.positionFromEnd - car.nextSpeed;
}
//compute usage of nitro
Car lastCar = cars.get(cars.size() -1);
if(lastCar.hasNitro){
lastCar.hasNitro = false;
lastCar.nextSpeed = lastCar.speed * 2d;
if(lastCar.nextSpeed > lastCar.topSpeed){
lastCar.nextSpeed = lastCar.topSpeed;
}
lastCar.nextPositionFromEnd = lastCar.positionFromEnd - lastCar.nextSpeed;
}
//complete the movements
iterations++;
for(int i = 0; i < cars.size(); i++){
cars.get(i).update();
}
//process finishers
while(!cars.isEmpty() && cars.get(0).positionFromEnd <= 0d){
Car car = cars.get(0);
results.add(new FinishInformation(car.team, iterations << 1, car.speed));
cars.remove(0);
}
}
return results;
}
}
- zortlord May 20, 2015
private static void reverseSection(int[] arr, int start, int end){
int temp;
while(start < end){
temp = arr[start];
arr[start] = arr[end];
arr[end] = temp;
start++;
end--;
}
}
public static void reverseSections(int[] arr, int sectionSize){
if(arr == null){
throw new NullPointerException("\"arr\" may not be null");
}
if(sectionSize < 1){
throw new IllegalArgumentException("\"sectionSize\" must be greater than 0");
}
int index = 0;
while(index < arr.length - sectionSize){
reverseSection(arr, index, index+sectionSize -1);
index += sectionSize;
}
if(index < arr.length - 1){
reverseSection(arr, index, arr.length -1);
}
}
and testing with JUnit
@Test(expected=NullPointerException.class)
public void test_reverseSections_arg1_null(){
reverseSections(null, -1);
}
@Test(expected=IllegalArgumentException.class)
public void test_reverseSections_arg2_illegal(){
reverseSections(new int[]{1, 2, 3}, -1);
}
@Test
public void test_reverseSections_arg1_empty(){
//just verifying that there are no exceptions
reverseSections(new int[]{}, 3);
}
@Test
public void test_reverseSections_arg1_1int(){
int[] vals = new int[]{ 1 };
reverseSections(vals, 1);
assertTrue(vals[0] == 1);
vals[0] = 7;
reverseSections(vals, 13);
assertTrue(vals[0] == 7);
}
@Test
public void test_reverseSections_arg1_5int(){
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 1, new int[]{1, 2, 3, 4, 5});
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 2, new int[]{2, 1, 4, 3, 5});
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 3, new int[]{3, 2, 1, 5, 4});
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 4, new int[]{4, 3, 2, 1, 5});
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 5, new int[]{5, 4, 3, 2, 1});
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 6, new int[]{5, 4, 3, 2, 1});
assertReverseSections(new int[]{1, 2, 3, 4, 5}, 13, new int[]{5, 4, 3, 2, 1});
}
private void assertReverseSections(int[] arr, int size, int[] expected){
reverseSections(arr, size);
assertTrue(Arrays.equals(arr, expected));
}
Runtime O(n), Memory O(n)
public static int[] getSumPair(int[] arr, int sum){
HashSet<Integer> contains = new HashSet<Integer>();
for(int i : arr){
int needed = sum - i;
if(contains.contains(needed)){
return new int[]{i, needed};
}
contains.add(i);
}
return null;
}
static class NaryTreeNode<T>{
ArrayList<NaryTreeNode<T>> children;
T value;
public NaryTreeNode(T value){
this.children = new ArrayList<NaryTreeNode<T>>();
this.value = value;
}
}
public static void printZigZag(NaryTreeNode<String> node){
Stack<NaryTreeNode<String>> thisLevel = new Stack<NaryTreeNode<String>>();
Stack<NaryTreeNode<String>> nextLevel = new Stack<NaryTreeNode<String>>();
Stack<NaryTreeNode<String>> temp;
boolean isLeftToRight = false;
StringBuilder line = new StringBuilder();
thisLevel.push(node);
while(!thisLevel.isEmpty()){
while(!thisLevel.isEmpty()){
node = thisLevel.pop();
line.append(node.value);
line.append(' ');
if(isLeftToRight){
for(int i = 0; i < node.children.size(); i++){
nextLevel.push(node.children.get(i));
}
}
else{
for(int i = node.children.size() -1; i > -1; i--){
nextLevel.push(node.children.get(i));
}
}
}
java.lang.System.out.println(line.toString());
line.setLength(0);
temp = thisLevel;
thisLevel = nextLevel;
nextLevel = temp;
isLeftToRight = !isLeftToRight;
}
}
I completely agree. Pure random data would not be losslessly compressible. However, I hadn't thought the data was random because that was never mentioned in the subject. But if it were perfectly random, then I'm not sure someone would really want to send it. I can only think of a few things pure random data could be: a one-time pad or some observed natural phenomenon... ?
- zortlord May 19, 2015
This could be done in O(n*m*n) time, where n is the size of the smaller dimension and m is the size of the larger dimension, by simply quantizing the space and checking how large a rectangle could be made at each point with that point as a corner.
However, I think it could probably be improved by doing something like DP somehow.
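For instance, one standard DP formulation (sketched here under the assumption that the input is a 2d binary array with 1 marking usable cells) treats each row as the base of a histogram of consecutive 1s and finds the largest rectangle in each histogram with a stack, for O(nm) overall:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class MaxRectangle {
    // DP over rows: heights[x] = number of consecutive 1s ending at the current row.
    // A stack-based largest-rectangle-in-histogram pass per row gives O(n*m) total.
    static int largest(int[][] grid){
        if(grid.length == 0){ return 0; }
        int[] heights = new int[grid[0].length];
        int best = 0;
        for(int[] row : grid){
            for(int x = 0; x < row.length; x++){
                heights[x] = row[x] == 1 ? heights[x] + 1 : 0;
            }
            best = Math.max(best, largestInHistogram(heights));
        }
        return best;
    }

    private static int largestInHistogram(int[] heights){
        Deque<Integer> stack = new ArrayDeque<Integer>(); // indices with increasing heights
        int best = 0;
        for(int i = 0; i <= heights.length; i++){
            int h = (i == heights.length) ? 0 : heights[i]; // sentinel flushes the stack
            while(!stack.isEmpty() && heights[stack.peek()] >= h){
                int height = heights[stack.pop()];
                int width = stack.isEmpty() ? i : i - stack.peek() - 1;
                best = Math.max(best, height * width);
            }
            stack.push(i);
        }
        return best;
    }
}
```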
O(n) time and O(n) memory. Non-recursive too, so it's faster.
static class Node<T>{
Node<T> next;
T val;
}
public static <T> boolean isPalindrome(Node<T> head){
Node<T> runner = head;
Node<T> endRunner = head;
Stack<Node<T>> nodesEncountered = new Stack<Node<T>>();
while(endRunner != null){
endRunner = endRunner.next;
if(endRunner == null){
runner = runner.next;
break;
}
nodesEncountered.push(runner);
runner = runner.next;
endRunner = endRunner.next;
}
while(!nodesEncountered.isEmpty()){
Node<T> old = nodesEncountered.pop();
if(!old.val.equals(runner.val)){
return false;
}
runner = runner.next;
}
return true;
}
Cleaned it up some (it will run now):
public static void printSums(int target, int[] vals){
class Worker{
String[] signs;
int sum = 0;
int target;
int[] vals;
int[] min;
int[] max;
public Worker(int target, int[] vals){
this.target = target;
this.vals = vals;
this.signs = new String[this.vals.length];
//compute the short circuits
this.computeMinMax();
}
private void computeMinMax(){
this.min = new int[this.vals.length];
this.max = new int[this.vals.length];
int val = Math.abs(this.vals[this.vals.length-1]);
this.min[this.vals.length-1] = -val;
this.max[this.vals.length-1] = val;
for(int i = this.vals.length-2; i >= 0; i--){
val = Math.abs(this.vals[i]);
this.min[i] = this.min[i+1] - val;
this.max[i] = this.max[i+1] + val;
}
}
private void work(int index){
if(index == this.vals.length){
if(this.sum == this.target){
this.printSolution();
}
}
else{
//if it's impossible to get the remaining value from this, don't attempt
int remaining = this.target - this.sum;
if(remaining < this.min[index] || remaining > this.max[index]){
return;
}
this.sum += this.vals[index];
this.signs[index] = " + ";
this.work(index+1);
this.sum -= (this.vals[index] << 1);
this.signs[index] = " - ";
this.work(index+1);
this.sum += this.vals[index];
}
}
private void printSolution(){
StringBuilder line = new StringBuilder();
for(int i = 0; i < this.vals.length; i++){
line.append(this.signs[i]);
line.append(Integer.toString(this.vals[i]));
}
java.lang.System.out.println(line.toString());
}
}
Worker worker = new Worker(target, vals);
worker.work(0);
}
A blind DFS search should be relatively fast given the amount of content. I would expect a maximum computation of O(2^n) (2 because you must add or subtract each value). This could potentially be pared down by precomputing the maximum and minimum values reachable from each position (an optimization, though not necessarily one that drops the time below O(2^n)), so you can terminate early as appropriate.
public static void printSums(int target, int[] vals){
class Worker{
String[] signs;
int sum = 0;
int target;
int[] vals;
int[] min;
int[] max;
public Worker(int target, int[] vals){
this.target = target;
this.vals = vals;
this.signs = new String[this.vals.length];
//compute the short circuits
this.computeMinMax();
}
private void computeMinMax(){
this.min = new int[this.vals.length];
this.max = new int[this.vals.length];
int val = Math.abs(this.vals[this.vals.length-1]);
this.min[this.vals.length-1] = -val;
this.max[this.vals.length-1] = val;
for(int i = this.vals.length-2; i >= 0; i--){
val = Math.abs(this.vals[i]);
this.min[i] = this.min[i+1] - val;
this.max[i] = this.max[i+1] + val;
}
}
private void work(int index){
if(index == this.vals.length){
if(this.sum == this.target){
this.printSolution();
}
}
else{
//if it's impossible to get the remaining value from this, don't attempt
int remaining = this.target - this.sum;
if(remaining < this.min[index] || remaining > this.max[index]){
return;
}
this.sum += this.vals[index];
this.signs[index] = " + ";
this.work(index+1);
this.sum -= (this.vals[index] << 1);
this.signs[index] = " - ";
this.work(index+1);
this.sum += this.vals[index];
}
}
private void printSolution(){
StringBuilder line = new StringBuilder();
for(int i = 0; i < this.vals.length; i++){
line.append(this.signs[i]);
line.append(Integer.toString(this.vals[i]));
}
java.lang.System.out.println(line.toString());
}
}
Worker worker = new Worker(target, vals);
worker.work(0);
}
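To make the pruning concrete, here is a minimal standalone sketch of just the min/max precomputation (the class and method names are mine, not from the post). From each index it records the most negative and most positive sum still reachable, scanning right to left:

```java
public class PruneBounds {
    // bounds[0][i] / bounds[1][i] hold the minimum and maximum sum that
    // can still be produced from vals[i..n-1] by adding or subtracting.
    public static int[][] computeBounds(int[] vals) {
        int n = vals.length;
        int[][] bounds = new int[2][n];
        int val = Math.abs(vals[n - 1]);
        bounds[0][n - 1] = -val;
        bounds[1][n - 1] = val;
        for (int i = n - 2; i >= 0; i--) {
            val = Math.abs(vals[i]);
            bounds[0][i] = bounds[0][i + 1] - val; // most negative reachable sum
            bounds[1][i] = bounds[1][i + 1] + val; // most positive reachable sum
        }
        return bounds;
    }

    public static void main(String[] args) {
        int[][] b = computeBounds(new int[]{1, 2, 4});
        // from index 0 the reachable sums span [-7, 7]
        System.out.println(b[0][0] + " " + b[1][0]);
    }
}
```

During the DFS, a branch can be abandoned as soon as the remaining target falls outside these bounds.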
Won't work.
Weights: 35 (small kid), 110, 111, 111, 114, 115, 149, 150
should take 7 canoes
Your algorithm:
minw = 35
maxw = 115
count1 = 4 ( 110, 111, 111, 114)
count2 = 1 ( 149)
Answer = 4 / 2 + 1 = 3 ... not correct.
Adjusting your "between" statements to do what I think you meant:
minw = 35
maxw = 115
count1 = 6 (35, 110, 111, 111, 114, 115)
count2 = 2 (149, 150)
Answer = 6 / 2 + 2 = 5 ... still incorrect.
Operates in O(n log n) average time with O(n) memory
Assumptions: All individual weights are <= the max weight specified (what do you do with kids larger than the max weight? Leave them at the shore all sad?)
public static int countCanoes(int maxWeight, int[] weights){
//sort the weights
Arrays.sort(weights);
//now pair off the weights that are collectively under maxWeight
int loIndex = 0, hiIndex = weights.length - 1;
int canoes = 0;
while(loIndex <= hiIndex){
int loWeight = weights[loIndex];
int hiWeight = weights[hiIndex];
if(loWeight + hiWeight <= maxWeight){
loIndex++;
}
hiIndex--;
canoes++;
}
return canoes;
}
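As a sanity check, here is the method above wrapped in a class so it compiles on its own (the wrapper class name is mine), run against the weight list from the earlier counter-example with a 150 lb limit:

```java
import java.util.Arrays;

public class Canoes {
    // Greedy two-pointer pairing: sort, then put the lightest remaining
    // person in with the heaviest whenever the pair fits under the limit.
    public static int countCanoes(int maxWeight, int[] weights) {
        Arrays.sort(weights);
        int loIndex = 0, hiIndex = weights.length - 1;
        int canoes = 0;
        while (loIndex <= hiIndex) {
            if (weights[loIndex] + weights[hiIndex] <= maxWeight) {
                loIndex++; // the lightest shares the heaviest's canoe
            }
            hiIndex--;     // the heaviest always occupies this canoe
            canoes++;
        }
        return canoes;
    }

    public static void main(String[] args) {
        int[] weights = {35, 110, 111, 111, 114, 115, 149, 150};
        System.out.println(countCanoes(150, weights)); // prints 7
    }
}
```

Only the 35 lb kid pairs with anyone (the 115), so seven canoes come out, matching the expected answer.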
Operates in O(n) time; more specifically, it touches the data only about 2N times, as opposed to 3N if the head or tail of the itinerary were located first. Uses approximately 2N memory.
public static List<String> getItinerary(String[][] tickets){
//data maps
HashMap<String, String> sourceToDestMap = new HashMap<String, String>();
HashMap<String, String> destToSourceMap = new HashMap<String, String>();
//map-ify the data
for(int i = 1; i < tickets.length; i++){
String[] ticket = tickets[i];
sourceToDestMap.put(ticket[0], ticket[1]);
destToSourceMap.put(ticket[1], ticket[0]);
}
//build the results
LinkedList<String> itinerary = new LinkedList<String>();
itinerary.add(tickets[0][0]);
itinerary.add(tickets[0][1]);
while(sourceToDestMap.size() > 0){
//if there is a destination after the current destination, add it to the itinerary end
String temp = sourceToDestMap.remove(itinerary.getLast());
if(temp != null){
itinerary.add(temp);
destToSourceMap.remove(temp);
}
//if there is a source before this one, add it to the itinerary beginning
temp = destToSourceMap.remove(itinerary.getFirst());
if(temp != null){
itinerary.addFirst(temp);
sourceToDestMap.remove(temp);
}
}
return itinerary;
}
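A quick worked example of this two-map approach (self-contained sketch; the ticket data is invented): seeded with the first ticket B->C, the itinerary grows forward from its last stop and backward from its first stop until all tickets are consumed.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;

public class Itinerary {
    // Extend the itinerary in both directions: forward via source->dest
    // lookups on the last stop, backward via dest->source on the first.
    public static List<String> getItinerary(String[][] tickets) {
        HashMap<String, String> sourceToDestMap = new HashMap<String, String>();
        HashMap<String, String> destToSourceMap = new HashMap<String, String>();
        for (int i = 1; i < tickets.length; i++) {
            sourceToDestMap.put(tickets[i][0], tickets[i][1]);
            destToSourceMap.put(tickets[i][1], tickets[i][0]);
        }
        LinkedList<String> itinerary = new LinkedList<String>();
        itinerary.add(tickets[0][0]);
        itinerary.add(tickets[0][1]);
        while (sourceToDestMap.size() > 0) {
            String temp = sourceToDestMap.remove(itinerary.getLast());
            if (temp != null) {
                itinerary.add(temp);           // found the next destination
                destToSourceMap.remove(temp);
            }
            temp = destToSourceMap.remove(itinerary.getFirst());
            if (temp != null) {
                itinerary.addFirst(temp);      // found an earlier source
                sourceToDestMap.remove(temp);
            }
        }
        return itinerary;
    }

    public static void main(String[] args) {
        String[][] tickets = {{"B", "C"}, {"A", "B"}, {"C", "D"}};
        System.out.println(getItinerary(tickets)); // [A, B, C, D]
    }
}
```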
private static class Group{
LinkedList<Integer> setIndices = new LinkedList<Integer>();
int min;
int max;
void merge(Group g){
if(g.min < min){
min = g.min;
}
if(g.max > max){
max = g.max;
}
setIndices.addAll(g.setIndices);
}
boolean overlaps(Group g){
return !( max < g.min || min > g.max);
}
int[] getSets(){
int[] results = new int[setIndices.size()];
Iterator<Integer> iterator = setIndices.iterator();
for(int i = 0; i < results.length; i++){
results[i] = iterator.next();
}
return results;
}
}
public static int[][] getOverlaps(int[][] schedules){
//if feeling froggy, sort the schedules somehow here
LinkedList<Group> groups = new LinkedList<Group>();
for(int i = 0; i < schedules.length; i++){
int[] sched = schedules[i];
Group g = new Group();
g.setIndices.add(i);
g.min = sched[0];
g.max = sched[1];
groups.add(g);
}
LinkedList<Group> complete = new LinkedList<Group>();
while(groups.size() > 0){
Group check = groups.removeFirst();
boolean notMerged = true;
//compare check against each remaining group exactly once
int remaining = groups.size();
for(int i = 0; i < remaining; i++){
Group temp = groups.removeFirst();
if(check.overlaps(temp)){
check.merge(temp);
notMerged = false;
}
else{
groups.add(temp);
}
}
if(notMerged){
//only report groups containing more than one schedule
if(check.setIndices.size() > 1){
complete.add(check);
}
}
else{
//something merged in, so re-queue to catch transitive overlaps
groups.add(check);
}
}
int[][] results = new int[complete.size()][];
for(int i = 0; i < results.length; i++){
results[i] = complete.removeFirst().getSets();
}
return results;
}
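The same grouping can be cross-checked with a simpler sort-and-sweep variant (my sketch, not the original's repeated-merge approach; the class name and sample schedules are invented): sort the intervals by start time, sweep once while tracking the group's running maximum end, and emit any group that absorbed more than one schedule.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class OverlapGroups {
    // Sort-and-sweep grouping: an interval joins the current group iff it
    // starts no later than the group's running max end; otherwise the
    // group is closed and a new one begins.
    public static List<List<Integer>> getOverlaps(int[][] schedules) {
        Integer[] order = new Integer[schedules.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Integer.compare(schedules[a][0], schedules[b][0]));

        List<List<Integer>> results = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int groupMax = Integer.MIN_VALUE;
        for (int idx : order) {
            if (!current.isEmpty() && schedules[idx][0] <= groupMax) {
                current.add(idx); // overlaps the running group
                groupMax = Math.max(groupMax, schedules[idx][1]);
            } else {
                if (current.size() > 1) results.add(current); // keep real overlaps only
                current = new ArrayList<>();
                current.add(idx);
                groupMax = schedules[idx][1];
            }
        }
        if (current.size() > 1) results.add(current);
        return results;
    }

    public static void main(String[] args) {
        int[][] schedules = {{1, 3}, {2, 5}, {8, 10}, {9, 12}, {20, 21}};
        System.out.println(getOverlaps(schedules)); // [[0, 1], [2, 3]]
    }
}
```

The sort makes this O(n log n) rather than the O(n^2) of repeated pairwise merging, at the cost of requiring intervals rather than arbitrary overlap predicates.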
Assuming that there is some API in the background that allows access to the different elements, the problem can easily be solved using an external merge sort:
//this is part of the assumption regarding the API- that there are API handles for access and handling the content
public static class Reader{
InputStream in;
public Reader(InputStream in){
this.in = in;
}
public SuperObject read(){
//need some implementation here
}
}
public static class Writer{
OutputStream out;
public Writer(OutputStream out){
this.out = out;
}
public void write(SuperObject obj){
//need some implementation here
}
}
private static Comparator<SuperObject> COMPARE = new Comparator<SuperObject>(){
public int compare(SuperObject obj1, SuperObject obj2){
//need some implementation here
}
};
public static File sort(Reader reader) throws IOException {
//break up the input into sorted runs of 10 objects
SuperObject object = null;
LinkedList<File> files = new LinkedList<File>();
ArrayList<SuperObject> objs = new ArrayList<SuperObject>();
while((object = reader.read()) != null){
objs.add(object);
if(objs.size() == 10){
files.add(writeRun(objs));
}
}
//flush any leftover objects into a final run
if(objs.size() > 0){
files.add(writeRun(objs));
}
//repeatedly merge pairs of runs until a single file remains
while(files.size() > 1){
LinkedList<File> nextFiles = new LinkedList<File>();
while(files.size() > 1){
nextFiles.add(merge(files.removeFirst(), files.removeFirst()));
}
//an odd file out is simply carried into the next round
if(files.size() > 0){
nextFiles.add(files.removeFirst());
}
files = nextFiles;
}
return files.removeFirst();
}
private static File writeRun(ArrayList<SuperObject> objs) throws IOException{
sort_10(objs);
File tempFile = File.createTempFile("temp", ".txt");
OutputStream out = new FileOutputStream(tempFile);
Writer writer = new Writer(out);
for(SuperObject obj : objs){
writer.write(obj);
}
out.close();
objs.clear();
return tempFile;
}
private static void sort_10(ArrayList<SuperObject> arr){
Collections.sort(arr, COMPARE);
}
private static File merge(File file1, File file2) throws IOException{
File retFile = File.createTempFile("temp", ".txt");
OutputStream out = new FileOutputStream(retFile);
Writer writer = new Writer(out);
InputStream in1 = new FileInputStream(file1);
InputStream in2 = new FileInputStream(file2);
Reader reader1 = new Reader(in1);
Reader reader2 = new Reader(in2);
LinkedList<SuperObject> list1 = read_5(reader1);
LinkedList<SuperObject> list2 = read_5(reader2);
while(list1 != null && list2 != null){
SuperObject obj1 = list1.getFirst();
SuperObject obj2 = list2.getFirst();
int compare = COMPARE.compare(obj1, obj2);
if(compare < 0){
writer.write(obj1);
list1.removeFirst();
}
else{
writer.write(obj2);
list2.removeFirst();
}
if(list1.size() == 0){
list1 = read_5(reader1);
}
if(list2.size() == 0){
list2 = read_5(reader2);
}
}
while(list1 != null){
SuperObject obj = list1.removeFirst();
writer.write(obj);
if(list1.size() == 0){
list1 = read_5(reader1);
}
}
while(list2 != null){
SuperObject obj = list2.removeFirst();
writer.write(obj);
if(list2.size() == 0){
list2 = read_5(reader2);
}
}
out.close();
in1.close();
in2.close();
return retFile;
}
public static LinkedList<SuperObject> read_5(Reader reader){
LinkedList<SuperObject> list = new LinkedList<SuperObject>();
SuperObject obj = null;
//read at most 5 objects so memory use stays bounded
while(list.size() < 5 && (obj = reader.read()) != null){
list.add(obj);
}
if(list.size() == 0){
list = null;
}
return list;
}
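Since Reader, Writer, and SuperObject above are placeholders, here is the core streaming-merge step instantiated concretely for files holding one integer per line (everything here, including the class name and sample contents, is my illustration, not the original API):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class FileMerge {
    // Streaming merge of two individually sorted files, one int per line.
    // Only a single line from each input is held in memory at a time.
    public static Path merge(Path a, Path b) throws IOException {
        Path out = Files.createTempFile("merge", ".txt");
        try (BufferedReader r1 = Files.newBufferedReader(a);
             BufferedReader r2 = Files.newBufferedReader(b);
             BufferedWriter w = Files.newBufferedWriter(out)) {
            String l1 = r1.readLine(), l2 = r2.readLine();
            while (l1 != null && l2 != null) {
                if (Integer.parseInt(l1) <= Integer.parseInt(l2)) {
                    w.write(l1); w.newLine(); l1 = r1.readLine();
                } else {
                    w.write(l2); w.newLine(); l2 = r2.readLine();
                }
            }
            // drain whichever side still has content
            while (l1 != null) { w.write(l1); w.newLine(); l1 = r1.readLine(); }
            while (l2 != null) { w.write(l2); w.newLine(); l2 = r2.readLine(); }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("run1", ".txt");
        Path b = Files.createTempFile("run2", ".txt");
        Files.write(a, Arrays.asList("1", "3", "5"));
        Files.write(b, Arrays.asList("2", "4", "6"));
        System.out.println(Files.readAllLines(merge(a, b))); // [1, 2, 3, 4, 5, 6]
    }
}
```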
How about implementing a series of compression algorithms (Huffman, chunking, etc.), each prefixing its encoding with header information in a parseable way (i.e., encoding 'Message' with chunking -> [header:chunking]+chunking('Message'))? Using each of these algorithms, you could effectively search the space of all possible encoding applications to minimize the size of the message, converting the problem into a search problem. True, it would be EXTREMELY computationally expensive, but it would probably produce the smallest message.
- zortlord May 12, 2015
A functional O(n^2) approach:
private static class Chunk{
Chunk next;
ArrayList<Character> charArr;
int count;
private Chunk(){
this.charArr = new ArrayList<Character>();
}
}
public static String encode(String input){
//handle simple outputs
if(input == null || input.isEmpty()){
return "";
}
//get the input
char[] arr = input.toCharArray();
//build a head for the Chunking chain
Chunk head = new Chunk();
head.charArr.add(arr[0]);
head.count++;
//grab a pointer to the current chunk being processed
Chunk currentChunk = head;
int inputIndex = 1;
//while there is content to be processed
while(inputIndex < arr.length){
int chunkLocation = 0;
int inputIndexRunner = inputIndex;
//send a runner to see if the content of the string matches the
//current chunk
boolean matches = true;
while(chunkLocation < currentChunk.charArr.size()){
//stop if the input runs out before the chunk is fully matched
if(inputIndexRunner >= arr.length
|| arr[inputIndexRunner] != currentChunk.charArr.get(chunkLocation)){
matches = false;
break;
}
chunkLocation++;
inputIndexRunner++;
}
//if it matches, then just up the count on this chunk
if(matches){
currentChunk.count++;
inputIndex += currentChunk.charArr.size();
}
else{
//if the current chunk only passes through once,
//add the character to the chunk
if(currentChunk.count == 1){
currentChunk.charArr.add(arr[inputIndex]);
}
//otherwise this has to be a new chunk
else{
Chunk next = new Chunk();
currentChunk.next = next;
currentChunk = currentChunk.next;
currentChunk.charArr.add(arr[inputIndex]);
currentChunk.count = 1;
}
inputIndex++;
}
}
//process the chunks to output
return processChunksToString(head);
}
private static String processChunksToString(Chunk chunk){
StringBuilder builder = new StringBuilder();
while(chunk != null){
builder.append(chunk.count);
for(Character c : chunk.charArr){
builder.append(c);
}
chunk = chunk.next;
}
return builder.toString();
}
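For completeness, a decoder for this count-prefixed format might look like the sketch below (my addition, not part of the original; it assumes the source text contains no digit characters, since digits are what delimit the chunks):

```java
public class ChunkDecode {
    // Inverse of the chunk encoding: read a decimal count, then the chunk's
    // characters (everything up to the next digit), and emit the chunk
    // `count` times. Assumes the original text contains no digits.
    public static String decode(String encoded) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < encoded.length()) {
            int count = 0;
            while (i < encoded.length() && Character.isDigit(encoded.charAt(i))) {
                count = count * 10 + (encoded.charAt(i++) - '0');
            }
            int start = i;
            while (i < encoded.length() && !Character.isDigit(encoded.charAt(i))) {
                i++;
            }
            String chunk = encoded.substring(start, i);
            for (int k = 0; k < count; k++) {
                out.append(chunk);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("3a2b")); // aaabb
        System.out.println(decode("3ab"));  // ababab
    }
}
```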
- zortlord June 02, 2015