Hill climbing (M2.1)

  •  Hill climbing is a mathematical optimization technique. 
  • It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found. 
  • It is a heuristic search (a search algorithm that may not find the optimal solution to the problem, but will give a good solution in a reasonable time). Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem. This solution may not be the global optimum. (It is a local search algorithm.)
A heuristic function is a function that ranks all the possible alternatives at any branching step of a search algorithm based on the available information. It helps the algorithm select the best route out of the possible routes.
  • Hill climbing finds optimal solutions for convex problems – for other problems it will find only local optima (solutions that cannot be improved upon by any neighboring configurations), which are not necessarily the best possible solution (the global optimum) out of all possible solutions (the search space). 

For example, hill climbing can be applied to the traveling salesman problem. It is easy to find an initial solution that visits all the cities but will likely be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.  
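
A minimal Python sketch of this idea, assuming cities are given as (x, y) coordinate pairs and using a random initial tour with pairwise city swaps as the incremental change (the move and the stopping rule here are illustrative choices, not the only possible ones):

import math
import random

def tour_length(tour, cities):
    """Total length of the closed tour over the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def hill_climb_tsp(cities, max_no_improve=1000):
    """Start from a random tour and keep any city swap that shortens it."""
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best = tour_length(tour, cities)
    stale = 0
    while stale < max_no_improve:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]          # incremental change: swap two cities
        new = tour_length(tour, cities)
        if new < best:                                # keep the change only if it improves
            best, stale = new, 0
        else:
            tour[i], tour[j] = tour[j], tour[i]      # undo the swap
            stale += 1
    return tour, best

if __name__ == "__main__":
    cities = [(random.random(), random.random()) for _ in range(20)]
    tour, length = hill_climb_tsp(cities)
    print("tour length:", round(length, 3))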

Features of Hill Climbing

1. Variant of the generate-and-test algorithm: Hill climbing is a variant of the generate-and-test algorithm. The generate-and-test algorithm is as follows:

1. Generate possible solutions. 
2. Test to see if this is the expected solution. 
3. If the solution has been found quit else go to step 1.

Hence we call hill climbing a variant of the generate-and-test algorithm, as it takes feedback from the test procedure. This feedback is then used by the generator to decide the next move in the search space (a compact sketch of the generate-and-test loop appears after this list of features).

2. Uses the greedy approach: At any point in the state space, the search moves only in the direction that optimizes the cost function, with the hope of finding the optimal solution at the end.

3. No backtracking: It does not backtrack in the search space, as it does not remember previous states.
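
Returning to feature 1, a compact Python sketch of the generate-and-test loop, with hypothetical generate() and is_goal() callbacks supplied by the caller; hill climbing specializes this loop by letting the evaluation from the test step guide what the generator produces next:

def generate_and_test(generate, is_goal, max_iterations=10_000):
    """Generic generate-and-test loop: keep producing candidates until one passes the test."""
    for _ in range(max_iterations):
        candidate = generate()        # step 1: generate a possible solution
        if is_goal(candidate):        # step 2: test whether it is the expected solution
            return candidate          # step 3: quit if the solution has been found
    return None                       # otherwise give up after the iteration budget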

Mathematical description

Hill climbing attempts to maximize (or minimize) a target function f(x), where x is a vector of continuous and/or discrete values. At each iteration, hill climbing will adjust a single element in x and determine whether the change improves the value of f(x). (Note that this differs from gradient descent methods, which adjust all of the values in x at each iteration according to the gradient of the hill.) With hill climbing, any change that improves f(x) is accepted, and the process continues until no change can be found to improve the value of f(x). Then x is said to be "locally optimal".

In discrete vector spaces, each possible value for x may be visualized as a vertex in a graph. Hill climbing will follow the graph from vertex to vertex, always locally increasing (or decreasing) the value of f(x), until a local maximum (or local minimum) is reached.
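
A hedged Python sketch of this coordinate-wise search, assuming a real-valued vector x, a fixed step size, and maximization of f (the function names and the +/- step move are illustrative assumptions, not part of any standard library):

def hill_climb(f, x, step=0.1, max_iters=10_000):
    """Maximize f by adjusting one element of x at a time by +/- step.

    Any single-coordinate change that improves f(x) is accepted; the search
    stops when no such change can be found, i.e. x is locally optimal."""
    x = list(x)
    best = f(x)
    for _ in range(max_iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                x[i] += delta
                value = f(x)
                if value > best:              # accept any improving change
                    best, improved = value, True
                else:
                    x[i] -= delta             # revert the change
        if not improved:                      # no single-element change helps
            break
    return x, best

# Example: maximize a concave quadratic; its single local optimum is also global.
if __name__ == "__main__":
    f = lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2
    print(hill_climb(f, [0.0, 0.0]))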

 

Types of Hill Climbing 

1. In simple hill climbing, the first neighboring node that is closer to the solution is chosen.

2. In steepest ascent hill climbing all successors are compared and the closest to the solution is chosen. Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one.

(Both forms fail if there is no closer node, which may happen if there are local maxima in the search space which are not solutions.)

3. Stochastic hill climbing does not examine all neighbors before deciding how to move. Rather, it selects a neighbor at random, and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.

 

#Simple Hill climbing 

It examines the neighboring nodes one by one and selects the first neighboring node that improves the current cost as the next node.

It checks only one successor state at a time; if that state is better than the current state, it moves there, otherwise it stays in the same state.

  • Less time consuming
  • Gives a less optimal solution, and the solution is not guaranteed

 Algorithm for Simple Hill climbing

Step 1: Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial state the current state.

Step 2: Loop until a solution state is found or there are no new operators left that can be applied to the current state.

a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.

b) Perform these steps to evaluate the new state:

  • If the new state is a goal state, then stop and return success.
  • If it is better than the current state, then make it the current state and proceed further.
  • If it is not better than the current state, then continue in the loop until a solution is found.

Step 3 : Exit.  
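
A minimal Python sketch of the steps above, assuming hypothetical caller-supplied neighbours(state), value(state) and is_goal(state) functions, with a higher value meaning a better state:

def simple_hill_climbing(start, neighbours, value, is_goal):
    """Move to the first neighbour that improves on the current state."""
    current = start
    if is_goal(current):
        return current
    while True:
        for candidate in neighbours(current):      # examine neighbours one by one
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(current):
                current = candidate                # first improving neighbour becomes current
                break                              # restart the scan from the new state
        else:
            return current                         # no neighbour improves: local optimum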

 

#Steepest-Ascent Hill climbing

It first examines all the neighboring nodes and then selects the node closest to the solution state as the next node.

Algorithm

Step 1: Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial state the current state.

Step 2: Repeat these steps until a solution is found or the current state does not change:

a) Initialize a 'best state' equal to the current state.

b) Apply each operator that has not yet been applied to the current state to produce a new state.

c) Perform these steps to evaluate each new state:
  • If it is a goal state, then stop and return success.
  • If it is better than the best state, then make it the best state; else continue the loop with another new state.

d) If the best state is better than the current state, make the best state the current state and go to Step 2.

Step 3 : Exit
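
A corresponding sketch of steepest-ascent hill climbing, using the same hypothetical neighbours/value/is_goal helpers as in the simple version above:

def steepest_ascent_hill_climbing(start, neighbours, value, is_goal):
    """Examine all successors and move to the best one, as long as it improves."""
    current = start
    while True:
        if is_goal(current):
            return current
        best = current
        for candidate in neighbours(current):      # compare every successor
            if value(candidate) > value(best):
                best = candidate                   # track the best successor seen so far
        if best is current:                        # no successor improves: local optimum
            return current
        current = best                             # move to the best successor and repeat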

 

#Stochastic hill climbing 

It does not examine all the neighboring nodes before deciding which node to select. It just selects a neighboring node at random and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.

Algorithm  

Step 1: Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial state the current state.

Step 2: Repeat these steps until a solution is found or the current state does not change.

       a) Apply the successor function to the current state and generate the neighbor states.

       b) Among the generated neighbor states that are better than the current state, choose one at random (or based on some probability function of the amount of improvement).

       c) If the chosen state is a goal state, then return success; else make it the current state and repeat Step 2.

Step 3: Exit
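
A sketch of these steps with the same hypothetical neighbours/value/is_goal helpers; random.choice could be replaced by a weighted choice whose probability grows with the amount of improvement, as step b) allows:

import random

def stochastic_hill_climbing(start, neighbours, value, is_goal, max_steps=10_000):
    """Choose at random among the neighbours that improve on the current state."""
    current = start
    for _ in range(max_steps):
        if is_goal(current):
            return current
        better = [n for n in neighbours(current) if value(n) > value(current)]
        if not better:
            return current                    # no uphill move left: local optimum
        current = random.choice(better)       # pick one improving neighbour at random
    return current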

State Space diagram for Hill Climbing

A state space diagram is a graphical representation of the set of states our search algorithm can reach versus the value of our objective function (the function which we wish to maximize).


X-axis: denotes the state space, i.e. the states or configurations our algorithm may reach.
Y-axis: denotes the value of the objective function corresponding to a particular state.


The best solution will be the state at which the objective function has its maximum value (the global maximum).

[Figure: State space diagram for hill climbing]



Different regions in the State Space Diagram: 

  1. Local maximum: It is a state which is better than its neighboring states; however, there exists a state which is better than it (the global maximum). This state is better because here the value of the objective function is higher than at its neighbors.
  2. Global maximum: It is the best possible state in the state space diagram. This is because at this state the objective function has its highest value.
  3. Plateau/flat local maximum: It is a flat region of the state space where neighboring states have the same value.
  4. Ridge: It is a region which is higher than its neighbors but itself has a slope. It is a special kind of local maximum.
  5. Current state: The region of the state space diagram where we are currently present during the search.
  6. Shoulder: It is a plateau that has an uphill edge.

Problems in different regions in Hill climbing

Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of the following regions:

#Local maximum 

At a local maximum, all neighboring states have values which are worse than the current state. Since hill climbing uses a greedy approach, it will not move to a worse state, and it terminates. The process ends even though a better solution may exist.

To overcome the local maximum problem: a backtracking technique can be a solution to the local maximum in the state space landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
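
One possible sketch of this remedy, keeping a list of promising states to fall back on when the climb gets stuck; the helper names are hypothetical and the bookkeeping shown is only one of several reasonable designs:

def hill_climbing_with_backtracking(start, neighbours, value, max_steps=10_000):
    """Climb to the best neighbour, remember the improving neighbours not taken,
    and backtrack to one of them when a local maximum is reached."""
    current, promising, best_seen = start, [], start
    for _ in range(max_steps):
        better = [n for n in neighbours(current) if value(n) > value(current)]
        if better:
            better.sort(key=value, reverse=True)
            current = better[0]                  # climb to the best neighbour
            promising.extend(better[1:])         # remember the paths not taken
        elif promising:
            current = promising.pop()            # stuck: backtrack to a promising state
        else:
            break                                # nothing left to explore
        if value(current) > value(best_seen):
            best_seen = current
    return best_seen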

#Plateau 

(On a plateau, all neighbors have the same value. Hence, it is not possible to select the best direction.)

Another problem that sometimes occurs with hill climbing is that of a plateau. 

A plateau is encountered when the search space is flat, or sufficiently flat that the value returned by the target function is indistinguishable from the value returned for nearby regions due to the precision used by the machine to represent its value. In such cases, the hill climber may not be able to determine in which direction it should step, and may wander in a direction that never leads to improvement.

To overcome plateaus: take a big jump or big steps. Randomly select a state far away from the current state. Chances are that we will land in a non-plateau region.
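
A small sketch of this "big jump" remedy, assuming a hypothetical random_distant_state(current) helper that samples a state far from the current one; accepting the first state whose value differs from the plateau value is just one illustrative rule:

def escape_plateau(current, value, random_distant_state, max_jumps=100):
    """Take big random jumps away from the current state until the objective
    value changes, i.e. until we have (probably) left the flat region."""
    plateau_value = value(current)
    for _ in range(max_jumps):
        candidate = random_distant_state(current)    # big jump far from the current state
        if value(candidate) != plateau_value:        # landed off the plateau
            return candidate
    return current                                   # still flat; give up after the budget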

#Ridge 

Any point on a ridge can look like a peak because movement in all possible directions is downward. Hence the algorithm stops when it reaches this state.

Ridges are a challenging problem for hill climbers that optimize in continuous spaces. Because hill climbers only adjust one element in the vector at a time, each step will move in an axis-aligned direction. If the target function creates a narrow ridge that ascends in a non-axis-aligned direction (or if the goal is to minimize, a narrow alley that descends in a non-axis-aligned direction), then the hill climber can only ascend the ridge (or descend the alley) by zig-zagging. If the sides of the ridge (or alley) are very steep, then the hill climber may be forced to take very tiny steps as it zig-zags toward a better position. Thus, it may take an unreasonable length of time for it to ascend the ridge (or descend the alley).

By contrast, gradient descent methods can move in any direction that the ridge or alley may ascend or descend. Hence, gradient descent or the conjugate gradient method is generally preferred over hill climbing when the target function is differentiable. Hill climbers, however, have the advantage of not requiring the target function to be differentiable, so hill climbers may be preferred when the target function is complex.

To overcome ridges: apply two or more rules before testing, which amounts to moving in several directions at once. With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.

Applications of Hill Climbing Technique

The hill climbing technique can be used to solve many problems where the current state allows for an accurate evaluation function, such as network flow, the traveling salesman problem, the 8-queens problem, integrated circuit design, etc.

The hill-climbing algorithm can be applied in the following areas:

Marketing

A hill-climbing algorithm can help a marketing manager develop the best marketing plans. It is widely used in solving traveling-salesman problems: it can optimize the distance covered and improve the travel time of sales team members, helping to find a good (locally optimal) route efficiently.

Robotics

Hill climbing is useful in the effective operation of robotics. It enhances the coordination of different systems and components in robots.

Job Scheduling

The hill climbing algorithm has also been applied in job scheduling. This is a process in which system resources are allocated to different tasks within a computer system. Job scheduling is achieved through the migration of jobs from one node to a neighboring node. A hill-climbing technique helps establish the right migration route.

 

Hill Climbing is used in inductive learning methods too. This technique is used in robotics for coordination among multiple robots in a team. There are many other problems where this technique is used.
