algorithm-interview

Strategies for Algorithm Puzzles

Every algorithm has an input, and the input defines an instance of the puzzle. An instance can be either specific or general. When dealing with a specific instance, the solver has no obligation beyond solving the instance given; other instances of the puzzle may not have the same solution, or any solution at all. On the other hand, the specific numbers in a puzzle's statement may be of no significance whatsoever, and solving the general instance can be not only more satisfying but even easier. Whether the instance is general or specific, it is almost always beneficial to work through several instances, which often provides insight into the puzzle.

Algorithm problems essentially come in three types:

  1. Questions that ask for some minimum or maximum: Optimization

  2. Questions about whether or not something is possible: Find the invariant, the property that remains true no matter how you change the problem

  3. Questions about counting and enumeration: for counting, look to collapse identical cases and count only what is necessary rather than collecting everything and then counting what matters

Invariant: An invariant is a property that is preserved by any algorithm that solves the problem. For puzzle problems, the invariant is often used to show that the problem has no solution: the invariant property holds for the initial state of the puzzle but fails for the required final state. Even-odd parity and board coloring are two of the most widely used ways to exploit the invariant idea and are often the key to finding one.
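
To make the parity idea concrete, here is a minimal sketch. The glass-flipping puzzle used here is an illustrative example, not one posed in this README: seven glasses start upside down and each move flips exactly two of them, so the parity of the upside-down count never changes and the all-upright goal state is unreachable.

```python
import random

def flip_two(glasses):
    # Flip exactly two glasses chosen at random (one legal move).
    i, j = random.sample(range(len(glasses)), 2)
    glasses[i] = not glasses[i]
    glasses[j] = not glasses[j]

# Seven glasses, all upside down (True = upside down).
glasses = [True] * 7
parity = sum(glasses) % 2              # invariant: parity of upside-down glasses

for _ in range(1000):
    flip_two(glasses)
    assert sum(glasses) % 2 == parity  # the parity never changes

# The goal state (all upright) has even parity but the start has odd parity,
# so no sequence of moves can reach it.
print("parity preserved after 1000 random moves:", sum(glasses) % 2 == parity)
```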

Backtracking: Backtracking is the first improvement over exhaustive search. It constructs a solution one component at a time and evaluates each partially constructed candidate: if the partial solution can be extended without violating the problem's constraints, the first legitimate option for the next component is added; if not, the algorithm backtracks to the previous component and tries its next option. Backtracking effectively explores a state-space tree and prunes non-promising nodes (dead ends).
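
A minimal backtracking sketch in Python, using the classic N-queens puzzle as an illustrative example (not a puzzle posed in this README):

```python
def solve_n_queens(n):
    """Backtracking: place queens column by column, pruning any
    partial solution that violates the constraints."""
    solutions = []

    def safe(queens, row):
        col = len(queens)
        return all(r != row and abs(r - row) != abs(c - col)
                   for c, r in enumerate(queens))

    def extend(queens):
        if len(queens) == n:              # complete solution reached
            solutions.append(list(queens))
            return
        for row in range(n):              # try each option for the next component
            if safe(queens, row):         # prune non-promising nodes
                queens.append(row)
                extend(queens)
                queens.pop()              # backtrack

    extend([])
    return solutions

print(len(solve_n_queens(6)))  # 4 solutions for the 6-queens puzzle
```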

Strategy 1: Decrease and Conquer

The decrease and conquer strategy is about finding a relationship between the solution to the larger problem and the solution to a smaller instance of the problem. This relationship naturally leads to a recursive algorithm where the instances decrease until the problem can be solved directly.

Consider binary search. Finding a number in a sorted list takes linear time if we scan the list, but we can instead solve a smaller instance of the problem by repeatedly splitting the list in half until we are left with a directly solvable instance. This is an example of decrease by a constant factor, here a factor of 2. Another common form of decrease and conquer is decrease by one:
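
A sketch of binary search as described above:

```python
def binary_search(sorted_list, target):
    """Decrease and conquer by a constant factor of 2: each step
    discards half of the remaining instance."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```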

A detachment of 25 soldiers must cross a wide and deep river with no bridge in sight. They notice two 12-year-old boys playing in a rowboat by the shore. The boat is so tiny, however, that it can only hold two boys or one soldier. How can the soldiers get across the river and leave the boys in joint possession of the boat? How many times does the boat pass from shore to shore in your algorithm?

Solving this problem is as easy as considering the solution when the number of soldiers (n) is 1. To move one soldier across the river, the two children cross first, one child rows the boat back, the soldier crosses, and then the other child brings the boat back. That is four trips per soldier, so 25 soldiers need 4n = 100 trips.
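
The same decrease-by-one relationship written as a recurrence, as a sketch:

```python
def trips(soldiers):
    """Decrease-by-one recurrence for the river-crossing puzzle:
    ferrying one soldier takes four boat trips, after which the same
    puzzle remains with one fewer soldier."""
    if soldiers == 0:
        return 0
    return 4 + trips(soldiers - 1)   # trips(n) = trips(n - 1) + 4

print(trips(25))  # 100
```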

Strategy 2: Divide and Conquer

The divide and conquer strategy is to partition a problem into several smaller subproblems, usually of the same or a related type and ideally of about the same size. Solve each subproblem and then, if necessary, combine the solutions to obtain a solution to the original problem. This strategy underlies many efficient solutions to important problems in computer science, such as merge sort, quicksort, and the Tower of Hanoi. Divide and conquer solutions often involve recursion.
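
A sketch of merge sort, the textbook divide-and-conquer example mentioned above:

```python
def merge_sort(items):
    """Divide and conquer: split into halves, sort each half
    recursively, then combine (merge) the sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```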

Strategy 3: Transform and Conquer

The transform and conquer strategy has two stages. First the problem is transformed into a form that is more amenable to solution; then, in the second stage, it is conquered.

There are three variations on this strategy: instance simplification, representation change, and problem reduction.

Instance simplification - a transformation that produces a special property which makes the instance easier to solve. For example, in an anagram solver, sorting the letters of each word first makes anagrams identical.
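
A sketch of the anagram idea; the helper name group_anagrams is just an illustrative choice:

```python
from collections import defaultdict

def group_anagrams(words):
    """Instance simplification: sorting the letters gives every group
    of anagrams an identical key."""
    groups = defaultdict(list)
    for word in words:
        groups["".join(sorted(word))].append(word)
    return list(groups.values())

print(group_anagrams(["listen", "silent", "enlist", "google"]))
# [['listen', 'silent', 'enlist'], ['google']]
```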

Representation change - a transformation into another representation that is more conducive to an efficient algorithmic solution. One common form is transforming the problem input into a binary or ternary representation.

Problem reduction - a transformation into another problem altogether. Common forms include binary and ternary transforms and state-space graph transforms.

Binary and ternary transforms help with questions that are very difficult to solve otherwise. In a binary or ternary number system an integer is represented as a sum of powers of 2 or 3, respectively.

Cash Envelopes: You have one thousand $1 bills. How can you distribute them among 10 envelopes so that any amount between $1 and $1000, inclusive, can be given as some combination of these envelopes? No change is allowed, of course.
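
One well-known answer uses the representation-change idea above: fill nine envelopes with powers of two and put the remainder in the tenth. A sketch that checks this:

```python
# Nine envelopes hold powers of two (1 + 2 + ... + 256 = 511);
# the tenth holds the remaining 1000 - 511 = 489 dollars.
envelopes = [1, 2, 4, 8, 16, 32, 64, 128, 256, 489]

def pick(amount):
    """Greedily choose envelopes (largest first) that add up to `amount`."""
    chosen = []
    for value in sorted(envelopes, reverse=True):
        if value <= amount:
            chosen.append(value)
            amount -= value
    return chosen if amount == 0 else None

# Every amount from $1 to $1000 can be handed over exactly.
assert all(pick(amount) is not None for amount in range(1, 1001))
print(pick(777))  # [489, 256, 32]
```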

The state-space graph transform turns the puzzle into a graph of possible states: each vertex represents a state, with the initial state as the root and edges leading to the states reachable from it. With this transformation, solving the problem is simply finding a path from the initial-state vertex to the goal-state vertex. How the vertices and edges are laid out on the plane can provide important insight into possible future states or optimal configurations; a star-shaped network, for example, has important implications for how states interact.
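
A generic state-space search sketch: breadth-first search over the graph of states, using a hypothetical toy state space (integers with moves +1 and *2) purely for illustration:

```python
from collections import deque

def shortest_path(initial, goal, neighbors):
    """Breadth-first search from the initial state to the goal state
    over the graph of reachable states."""
    frontier = deque([[initial]])
    seen = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# Toy example: from 1, each move either adds 1 or doubles the number.
print(shortest_path(1, 10, lambda s: [s + 1, s * 2] if s < 10 else []))
# [1, 2, 4, 5, 10]
```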

Strategy 4: Greedy Approach

The greedy approach solves an optimization problem by a sequence of steps, each expanding a partially constructed solution until a complete solution is reached. At each step, and this is the central point of the strategy, the choice made is the one that produces the largest immediate gain without violating the problem's constraints.

Greedy algorithms are often used to solve optimization problems: you want to maximize or minimize some quantity subject to a set of constraints. For example:

  • Maximize the number of events you can attend, but do not attend any overlapping events.
  • Minimize the number of jumps required to cross the pond, but do not fall into the water.
  • Minimize the cost of all edges chosen, but do not disconnect the graph.

Greedy algorithms are also often used for combinatorial optimization, the task of finding an optimal object from a finite set of objects.

In general, greedy algorithms have five components:

  1. A candidate set, from which a solution is created
  2. A selection function, which chooses the best candidate to be added to the solution
  3. A feasibility function, which determines whether a candidate can contribute to a solution
  4. An objective function, which assigns a value to a solution, or a partial solution, and
  5. A solution function, which will indicate when we have discovered a complete solution

Greedy algorithms make the locally optimal choice at each step in the hope that it leads to a globally optimal solution.

  1. The activity selection problem is characteristic of this class, where the goal is to pick the maximum number of activities that do not clash with each other (see the sketch after this list).

  2. Malfatti's problem: inscribe three circles of the largest possible total area in a triangle. The greedy answer is to draw the largest circle that fits, then repeat in the remaining space.

  3. A greedy algorithm is used to construct a Huffman tree during Huffman coding where it finds an optimal solution.

  4. Greedy algorithms are commonly used in decision tree learning; however, they are not guaranteed to find the optimal solution.
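
A sketch of the activity selection greedy mentioned in item 1, assuming activities are given as (start, finish) pairs:

```python
def select_activities(activities):
    """Greedy activity selection: repeatedly pick the activity that
    finishes earliest and does not clash with those already chosen."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # feasible: no overlap
            chosen.append((start, finish))
            last_finish = finish
    return chosen

events = [(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]
print(select_activities(events))  # [(1, 4), (5, 7), (8, 9)]
```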

Strategy 5: Iterative Improvement

While a greedy algorithm constructs a solution piece by piece, an iterative improvement algorithm starts with some easily obtainable approximation to a solution and improves upon it by repeated applications of some simple step. Make sure the algorithm actually terminates: it must take a finite number of steps, and the final approximation must solve the puzzle.
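
An illustrative iterative-improvement sketch (Heron's method for square roots, not an example from this README): start from a crude guess and repeat a simple improving step until the approximation stabilizes.

```python
def heron_sqrt(n, tolerance=1e-10):
    """Iterative improvement: start from an easily obtainable
    approximation and apply a simple improving step until it is good
    enough. Assumes n > 0."""
    guess = n                              # crude first approximation
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2    # one simple improving step
    return guess

print(heron_sqrt(2))  # 1.4142135...
```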

Strategy 6: Dynamic Programming

This is a technique for solving problems with overlapping subproblems. Rather than solving the overlapping subproblems again and again, it solves each of the smaller subproblems only once and records the results in a table from which a solution to the original problem can then be obtained. A common application of dynamic programming is counting paths, where the number of paths to each vertex is the sum of the path counts of the vertices feeding into it.
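
A sketch of the path-counting idea, using a rectangular grid where moves go only right or down (an illustrative choice):

```python
def count_paths(rows, cols):
    """Dynamic programming: the number of paths to each cell is the sum
    of the counts at the cells feeding into it (above and to the left)."""
    table = [[1] * cols for _ in range(rows)]   # one path along each border
    for r in range(1, rows):
        for c in range(1, cols):
            table[r][c] = table[r - 1][c] + table[r][c - 1]
    return table[-1][-1]

print(count_paths(3, 3))  # 6 paths across a 3x3 grid moving only right/down
```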
