- Algorithms and Problems
An algorithm is a finite sequence of well-defined steps for solving a problem.
The general process of understanding a problem and designing an algorithm is as follows:
When facing a problem, first consider the brute force method: it can solve almost any problem, even if inefficiently. Once a brute-force solution works, consider other algorithmic ideas to optimize it. If even the brute force method is hard to apply, re-examine the problem, look for patterns, and solve it based on those patterns.
The brute force method usually traverses all possibilities from start to finish. A problem may admit several traversal strategies, some of which lead to a solution while others do not, so the choice of traversal strategy is often important. A good traversal strategy can play a role similar to a greedy or dynamic programming strategy; in this sense, greedy, dynamic programming, backtracking, and branch and bound can all be viewed as clever forms of brute force.
There are three directions for optimization:
- Brute force method -> Greedy, dynamic programming -> Backtracking, branch and bound
- Divide and conquer, decrease and conquer, transform and conquer
- Time-space trade-off

- Brute Force Method
The brute force method is a simple and intuitive way to solve problems, usually based directly on the problem statement and the definitions of the concepts involved. It applies to the widest range of problems, but its efficiency is often low.
Algorithms that use the brute force method include:
- Bubble sort, selection sort
- Sequential matching
- Exhaustive search
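As a concrete illustration of this definition-driven style, here is a minimal Python sketch of brute-force (sequential) string matching; the function name and sample strings are my own, not from the original text:

```python
def brute_force_match(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1.

    Tries every possible alignment of the pattern against the text,
    which is the brute-force approach: O(n * m) in the worst case.
    """
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # every possible starting position
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                      # the whole pattern matched
            return i
    return -1


print(brute_force_match("NOBODY_NOTICED_HIM", "NOT"))  # 7
```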
- Divide and Conquer
The steps for designing a divide and conquer algorithm are as follows:
- Divide the problem instance into several smaller instances of the same problem, preferably of the same size.
- Solve the smaller instances, usually recursively.
- Merge the solutions of the smaller instances to obtain a solution of the original instance.
Algorithms for divide and conquer include:
- Merge sort, quicksort
- Binary search
- Binary tree traversal (preorder, inorder, postorder)
- Divide and conquer for large integer multiplication and matrix multiplication
- Divide and conquer for the closest-pair and convex-hull problems
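Merge sort shows the three design steps directly; here is a minimal sketch (names and test data are my own):

```python
def merge_sort(a: list) -> list:
    """Sort a list by the three divide-and-conquer steps:
    divide into halves, solve each half recursively, merge the results."""
    if len(a) <= 1:                     # a trivially small instance
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])          # solve the smaller instances
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0             # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```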
- Decrease and Conquer
Decrease and conquer exploits the relationship between the solution to a problem and the solution to a smaller instance of the same problem. There are three main types (compared side by side in the sketch after this list):
- Decrease by a constant: reduce the instance size by the same constant each time, usually 1 or 2. For example, computing the power a^n can be expressed as f(n) = f(n-1) * a.
- Decrease by a constant factor: reduce the instance size by the same constant factor each time, typically by half. For example, a^n can also be computed as (a^(n/2))^2 when n is even, halving the problem size at each step.
- Decrease by a variable size: the size reduction varies from one step to the next. For example, Euclid's algorithm for the greatest common divisor, gcd(m, n) = gcd(n, m mod n), or finding the kth largest number based on the quicksort partition.
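A short sketch of the three types side by side, using the power and gcd examples mentioned above (function names are my own):

```python
def power_by_one(a: int, n: int) -> int:
    """Decrease by a constant: a^n = a^(n-1) * a."""
    return 1 if n == 0 else power_by_one(a, n - 1) * a


def power_by_half(a: int, n: int) -> int:
    """Decrease by a constant factor: a^n = (a^(n//2))^2,
    times an extra factor of a when n is odd."""
    if n == 0:
        return 1
    half = power_by_half(a, n // 2)
    return half * half * (a if n % 2 else 1)


def gcd(m: int, n: int) -> int:
    """Decrease by a variable size: gcd(m, n) = gcd(n, m mod n)."""
    return m if n == 0 else gcd(n, m % n)


print(power_by_one(2, 10), power_by_half(2, 10), gcd(60, 24))  # 1024 1024 12
```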
Algorithms for decrease and conquer (decrease by a constant):
- Insertion sort, depth-first search, breadth-first search, topological sort, generating permutations and combinations, generating subsets
Algorithms for decrease by a constant factor:
- Fake coin problem, Josephus problem
Algorithms for decrease by a variable size:
- Finding the kth largest number based on quicksort, interpolation search, insertion and search in a binary search tree
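Selecting the kth element via the quicksort partition is commonly called quickselect. A minimal sketch with a random pivot follows; this version returns the kth smallest element (the kth largest is symmetric), and the names and test data are my own:

```python
import random


def quickselect(a: list, k: int):
    """Return the k-th smallest element (1-based) of a, in expected O(n) time.

    Partitioning around a pivot either finds the answer or reduces the
    problem to one side -- a decrease-by-a-variable-size step.
    """
    pivot = random.choice(a)
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    if k <= len(smaller):                       # answer lies in the left part
        return quickselect(smaller, k)
    if k <= len(smaller) + len(equal):          # the pivot itself is the answer
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))


print(quickselect([9, 1, 5, 7, 3], 2))  # 3 (the 2nd smallest)
```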
- Transform and Conquer
Transform and conquer works by transforming problem instances, in one of three ways:
- Instance simplification: transforming an instance into a simpler or more convenient instance of the same problem.
- Change of representation: transforming an instance into a different representation of the same instance.
- Problem reduction: transforming an instance into an instance of a different problem altogether.
Examples of transform and conquer:
- Least common multiple: reducing the problem of finding the least common multiple to finding the greatest common divisor: lcm(m, n) = m * n / gcd(m, n)
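A minimal sketch of this reduction, using the standard library's gcd (sample values are my own):

```python
from math import gcd


def lcm(m: int, n: int) -> int:
    """Problem reduction: compute lcm via the simpler gcd problem."""
    return m * n // gcd(m, n)


print(lcm(24, 60))  # 120
```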
- Time-Space Trade-off
The time-space trade-off refers to trading extra space for reduced running time. Dynamic programming and hashing are also examples of this trade-off.
Examples of time-space trade-off:
- Counting sort
- String matching algorithms such as Horspool, Boyer-Moore, and KMP
- Hashing
- B-tree indexing
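Counting sort is perhaps the simplest of these trades: it spends O(max_value) extra space on a table of counts to avoid comparisons entirely. A minimal sketch, assuming non-negative integer keys with a known upper bound (names are my own):

```python
def counting_sort(a: list, max_value: int) -> list:
    """Sort non-negative integers by counting occurrences of each value,
    spending O(max_value) extra space to achieve O(n + max_value) time."""
    counts = [0] * (max_value + 1)
    for x in a:                         # count occurrences of each value
        counts[x] += 1
    result = []
    for value, c in enumerate(counts):  # emit each value as many times as counted
        result.extend([value] * c)
    return result


print(counting_sort([3, 1, 4, 1, 5], 5))  # [1, 1, 3, 4, 5]
```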
- Dynamic Programming
Dynamic programming is used to solve problems with overlapping subproblems. The process generally involves two steps:
- Determine the recurrence relation.
- Choose an evaluation strategy suited to the problem: bottom-up dynamic programming or top-down memoized search. Memoized search also maintains a state table, but before computing a value it first checks whether that value is already in the table; if so, it uses the stored value directly, otherwise it computes the value recursively and records it in the table.
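To make the two strategies concrete, here is a sketch that computes binomial coefficients (the first example in the list below) both bottom-up and with top-down memoization; function names are my own:

```python
from functools import lru_cache


def binomial_bottom_up(n: int, k: int) -> int:
    """Bottom-up DP: fill Pascal's triangle row by row.
    Recurrence: C(i, j) = C(i-1, j-1) + C(i-1, j)."""
    table = [[1] * (i + 1) for i in range(n + 1)]   # borders are all 1
    for i in range(2, n + 1):
        for j in range(1, i):
            table[i][j] = table[i - 1][j - 1] + table[i - 1][j]
    return table[n][k]


@lru_cache(maxsize=None)
def binomial_memo(n: int, k: int) -> int:
    """Top-down memoized search: recurse, but cache each computed value."""
    if k == 0 or k == n:
        return 1
    return binomial_memo(n - 1, k - 1) + binomial_memo(n - 1, k)


print(binomial_bottom_up(5, 2), binomial_memo(5, 2))  # 10 10
```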
Examples of dynamic programming algorithms:
- Computing binomial coefficients
- Warshall's algorithm for computing transitive closure
- Floyd's algorithm for computing all-pairs shortest paths
- Optimal binary search tree construction
- Knapsack problem
- Greedy
The greedy-choice property means that a globally optimal solution can be reached through a sequence of locally optimal choices, i.e., greedy choices. This is the first basic requirement for a greedy algorithm to be applicable, and it is the main difference between greedy algorithms and dynamic programming.

In dynamic programming, the choice at each step typically depends on the solutions to related subproblems, so a choice can be made only after those subproblems have been solved. In a greedy algorithm, the best choice in the current state, the locally optimal one, is made first, and only then is the subproblem generated by that choice solved. Greedy choices may depend on choices made in the past, but never on future choices or on the solutions to subproblems. Because of this difference, dynamic programming usually solves subproblems bottom-up, while greedy algorithms usually work top-down: they make successive greedy choices iteratively, reducing the problem to a smaller subproblem each time.
Examples of greedy algorithms:
- Prim's and Kruskal's algorithms for finding minimum spanning trees
- Dijkstra's algorithm
- Huffman coding
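As one worked example, here is a compact Huffman coding sketch: at every step it greedily merges the two least frequent subtrees, which is exactly a locally optimal choice that never looks ahead. The representation (carrying partial codes in dictionaries) and the sample frequencies are my own, and the exact codes produced can vary with tie-breaking:

```python
import heapq


def huffman_codes(freq: dict) -> dict:
    """Greedy Huffman coding: repeatedly merge the two least frequent
    subtrees until one tree remains; return symbol -> bit string."""
    # Heap entries: (weight, tie_breaker, {symbol: code_so_far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}  # left branch
        merged.update({s: "1" + code for s, code in c2.items()})  # right branch
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]


print(huffman_codes({"A": 35, "B": 10, "C": 20, "D": 20, "E": 15}))
```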
- Iterative Improvement
Iterative improvement starts from a feasible solution and repeatedly applies simple steps to improve it:
- Construct an initial feasible solution.
- Check whether the current solution is optimal; if not, replace it with a better feasible solution and repeat.
Examples of iterative improvement algorithms:
- Simplex method for linear programming
- Maximum flow
- Gradient descent
- Stable marriage
- Expectation-Maximization (EM)
- Bipartite matching
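As one worked example, here is a sketch of the Gale-Shapley algorithm for stable marriage: it starts from the empty matching and iteratively improves it through proposals until no one is left unmatched. The input format and names are my own:

```python
def stable_marriage(men_prefs: dict, women_prefs: dict) -> dict:
    """Gale-Shapley: improve the matching by repeated proposals
    until every man is matched; returns {woman: man}."""
    free_men = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged_to = {}                           # woman -> current partner
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]      # m's best woman not yet tried
        next_choice[m] += 1
        if w not in engaged_to:               # w accepts her first proposal
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:   # w trades up to m
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:                                 # w rejects m; he stays free
            free_men.append(m)
    return engaged_to


prefs_m = {"X": ["A", "B"], "Y": ["B", "A"]}
prefs_w = {"A": ["Y", "X"], "B": ["X", "Y"]}
print(stable_marriage(prefs_m, prefs_w))  # {'B': 'Y', 'A': 'X'} -- a stable matching
```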
- Backtracking and Branch and Bound
Both backtracking and branch and bound explore a state space tree. This so-called state space tree is just a decision tree: each node represents one decision, and each level one step of the construction. By repeatedly growing the tree and pruning hopeless branches, the final result is derived.
The application process is as follows:
- Determine the traversal pattern, which is the growth order of the decision tree.
- Grow the tree node by node, pruning branches that cannot lead to a solution: backtracking prunes a branch as soon as the partial solution becomes infeasible, while branch and bound additionally prunes branches whose bound shows they cannot beat the best solution found so far.
Examples of backtracking algorithms:
- N-queens problem
- Hamiltonian cycle
- Generating subsets
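A minimal backtracking sketch for the n-queens problem, which walks the state space tree level by level (one queen per row) and prunes any branch whose partial placement already conflicts; names are my own:

```python
def n_queens(n: int) -> list:
    """Return all solutions; each solution lists the queen's column per row."""
    solutions = []

    def place(queens: list) -> None:
        row = len(queens)
        if row == n:                          # all queens placed: a leaf solution
            solutions.append(queens[:])
            return
        for col in range(n):
            # Prune: skip columns that share a column or diagonal
            # with any queen already placed.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(queens)):
                queens.append(col)            # grow the tree one level
                place(queens)
                queens.pop()                  # backtrack to try the next column

    place([])
    return solutions


print(len(n_queens(8)))  # 92 solutions for the classic 8-queens puzzle
```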
Examples of branch and bound algorithms:
- Knapsack problem
- Traveling salesman problem