Generalized Assignment Problem Heuristics

QAP formulation

This section introduces the classical formulation of the QAP considered in this research:

$$\min \; Z = \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{n} c_{ijkl}\, x_{ik}\, x_{jl} \qquad (1)$$

$$\text{subject to} \quad \sum_{i=1}^{n} x_{ik} = 1, \quad k = 1, \dots, n \qquad (2)$$

$$\sum_{k=1}^{n} x_{ik} = 1, \quad i = 1, \dots, n \qquad (3)$$

$$x_{ik} \in \{0, 1\}, \quad i, k = 1, \dots, n \qquad (4)$$

where $c_{ijkl}$ is the cost of assigning facility i to location k while facility j is simultaneously assigned to location l, and $x_{ik} = 1$ if location k is assigned to facility i and $x_{ik} = 0$ otherwise; likewise, $x_{jl} = 1$ if location l is assigned to facility j and $x_{jl} = 0$ otherwise. The objective function (Equation 1) of this model must be minimized. Equation 2 states that each location must be assigned to exactly one facility, and Equation 3 states that each facility must be assigned to exactly one location. The numbers of facilities and locations are the same and equal to n, and all variables in the model are binary (Equation 4).
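To make the formulation concrete, the following minimal Python sketch evaluates the objective of Equation 1 for the common Koopmans-Beckmann special case, in which the cost factors as $c_{ijkl} = f_{ij} d_{kl}$ for a flow matrix f and a distance matrix d; this factorization, the helper name qap_objective, and the tiny instance are illustrative assumptions, not part of the model above.

```python
import numpy as np

def qap_objective(flow, dist, perm):
    """Objective of Equation 1 when c_ijkl = flow[i, j] * dist[k, l].

    perm[i] = k means facility i is assigned to location k (x_ik = 1);
    a permutation automatically satisfies Equations 2 to 4.
    """
    perm = np.asarray(perm)
    # dist[perm][:, perm] re-indexes the distances so that entry (i, j)
    # is the distance between the locations of facilities i and j.
    return float(np.sum(flow * dist[perm][:, perm]))

# Tiny hypothetical instance with n = 3.
flow = np.array([[0, 2, 1], [2, 0, 3], [1, 3, 0]])
dist = np.array([[0, 5, 4], [5, 0, 6], [4, 6, 0]])
print(qap_objective(flow, dist, [2, 0, 1]))  # OFV of one assignment
```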

Heuristic methods

Heuristic algorithms do not guarantee an optimal solution to the problem; they are approximation methods, with the additional property that their worst-case behavior is known. In this section, several heuristic methods, used as procedures to search for better solutions, are considered: 2-opt, greedy 2-opt, 3-opt, greedy 3-opt, and Vollman, Nugent, Zartler (VNZ).

2-Opt algorithm

Among simple local search algorithms, the most famous are 2-opt and 3-opt. The 2-opt algorithm was first proposed by Croes ([1958]) for the TSP. For four locations and four facilities, the transpositions of facility locations in the 2-opt method are as in Figure 4. The figure illustrates that in the first transposition, the facility in location one is exchanged with the facility in location two; in the second transposition, the facility in location one is exchanged with the facility in location three; and so on, so that if the number of locations and facilities is denoted by n, the number of transpositions in each iteration is n(n − 1)/2.

Initially, the algorithm considers the transposition of facilities 1 and 2. If the resulting solution's objective function value (OFV) is smaller than that of the initial solution, it is stored as a candidate for future consideration. Otherwise, it is discarded, and the algorithm considers transposing facilities 1 and 3. If this exchange generates a better solution, it is stored as a candidate; otherwise, it is discarded, and so on. Thus, whenever a better solution is found, the algorithm discards the previous best solution. This procedure continues until all pair-wise exchanges have been considered.

For n locations in the QAP, the 2-opt algorithm consists of three steps:

Step 1. Let S be the initial feasible solution and Z its objective function value; then, set S* = S, Z* = Z, i = 1 and j = i + 1 = 2.

Step 2. Consider the exchange of the facilities in positions i and j. If the exchange results in a solution S′ with OFV Z′ < Z*, set Z* = Z′ and S* = S′. Set j = j + 1; if j ≤ n, repeat step 2. Otherwise, set i = i + 1 and j = i + 1; if i < n, repeat step 2; otherwise, proceed to step 3.

Step 3. If S ≠ S*, set S = S*, Z = Z*, i = 1, j = i + 1 = 2 and go to step 2. Otherwise, output S* as the best solution and terminate the process.
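A minimal Python sketch of these three steps follows; it reuses the hypothetical qap_objective helper from the formulation section, and the variable names simply mirror the step descriptions.

```python
from itertools import combinations

def two_opt(flow, dist, s0):
    """2-opt: scan all pair-wise exchanges, then restart from the best."""
    s = list(s0)
    s_star, z_star = list(s), qap_objective(flow, dist, s)   # step 1
    while True:
        for i, j in combinations(range(len(s)), 2):          # step 2
            cand = list(s)
            cand[i], cand[j] = cand[j], cand[i]              # exchange i and j
            z = qap_objective(flow, dist, cand)
            if z < z_star:                                   # keep the best found
                s_star, z_star = cand, z
        if s_star == s:                                      # step 3: no change
            return s_star, z_star
        s = list(s_star)                                     # restart from S*
```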

Greedy 2-opt algorithm

The greedy 2-opt algorithm is a variant of the 2-opt algorithm. The difference lies in how the best transposition is selected: greedy 2-opt transposes two facilities as soon as the resulting OFV is better than the current OFV, fixes this assignment, and then restarts the pair-wise scan from the beginning. Like 2-opt, it considers pair-wise exchanges and consists of three steps. Initially, it considers transposing facilities 1 and 2. If the resulting OFV is less than the previous one, the two facilities are immediately transposed. Otherwise, the algorithm moves on to facility 3 and evaluates that exchange, and so on until an improvement is found. Once two facilities are transposed, the algorithm takes the result as its new initial solution and repeats until no further improvement is possible. Because it makes an exchange permanent whenever an improvement is found, greedy 2-opt consumes less computational time than the 2-opt algorithm; on the other hand, it tends to produce slightly worse solutions.
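Under the same assumptions as the previous sketch, the greedy variant differs only in committing the first improving exchange and rescanning:

```python
def greedy_two_opt(flow, dist, s0):
    """Greedy 2-opt: commit the first improving exchange and rescan."""
    s = list(s0)
    z = qap_objective(flow, dist, s)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(s)), 2):
            cand = list(s)
            cand[i], cand[j] = cand[j], cand[i]
            zc = qap_objective(flow, dist, cand)
            if zc < z:
                s, z = cand, zc     # make the exchange permanent at once
                improved = True
                break               # rescan from the new solution
    return s, z
```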

3-Opt algorithm

The 3-opt algorithm is similar to the 2-opt algorithm except that it considers transposing three facilities at a time. The algorithm was originally applied to the TSP by Bock ([1958]). For three facilities, two types of transposition can be used in the 3-opt algorithm; these types are shown in Figure 5. Type (2) is applied in this research.
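As an illustrative sketch (the exact correspondence to type (2) in Figure 5 is an assumption, since the figure is not reproduced here), a 3-opt move can be written as a cyclic rotation of three facilities:

```python
def three_opt_move(s, i, j, k):
    """Cyclically rotate the facilities at positions i, j, and k."""
    cand = list(s)
    cand[i], cand[j], cand[k] = s[k], s[i], s[j]   # one of the two 3-cycles
    return cand
```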

Greedy 3-opt algorithm

The greedy 3-opt algorithm is similar to the greedy 2-opt algorithm, but it makes a three-facility exchange permanent whenever the resulting OFV is better than the current OFV and repeats the algorithm with the new transposition as the initial solution. The transposition in this method is the same as in 3-opt.

VNZ method

The VNZ method was introduced by Vollman et al. ([1968]). This method uses less storage space than 2-opt.

These heuristics involve no randomization, and they cannot steer the current solution toward the optimal solution within a limited time. Meta-heuristic methods, in contrast, provide a good search approach within a reasonable time; they are considered in the next subsection.

Meta-heuristic methods

In the original definition, meta-heuristics are solution methods that manage an interaction between local improvement procedures and higher-level strategies to create a process capable of escaping from local optima and performing a robust search of the solution space. The term has also come to include any procedure that employs strategies for overcoming the trap of local optimality in complex solution spaces, especially procedures that use one or more neighborhood structures to define the moves that transform one solution into another. In this research, TS, SA, and PSO are applied to the QAP and compared on the selected data sets.

Tabu search

Classical methods often face great difficulty when confronted with the hard optimization problems that arise in real situations. Tabu search (TS) was first proposed by Glover ([1989, 1990]). This meta-heuristic approach is dramatically changing our ability to solve problems of practical significance. The pseudo code of the TS applied in this research is as follows:

Step 1. Let S be the initial feasible solution and Z its objective function value; then set S* = S, Z* = Z, max short-term memory (STM) size = 5, max iteration = 1,000, and iter = 1.

Step 2. Compute Random(i, j) = rand / LTM(i, j) for every pair (i, j), where LTM is the long-term memory (move-frequency) matrix, and let (n1, n2) be the indices of the maximum value in Random.

Step 3. If (n1, n2) is not in the STM matrix, exchange the facilities in locations n1 and n2; otherwise, repeat step 2.

Step 4. Insert (n1, n2) into the STM and release the oldest pair (say, (m1, m2)) from the STM; set LTM(m1, m2) = LTM(m1, m2) + 1.

Step 5. Calculate the objective function value (Z) of the new permutation.

Step 6. If Z < Z*, then set Z* = Z and S* = S. Set iter = iter + 1.

Step 7. If iter ≤ max iteration, then repeat step 2; otherwise, print Z* and S*.
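A Python sketch of this pseudo code follows, again reusing the hypothetical qap_objective helper. The LTM matrix is initialized to ones to avoid division by zero in step 2, and tie-breaking details are likewise assumptions.

```python
import random

def tabu_search(flow, dist, s0, max_stm=5, max_iter=1000):
    n = len(s0)
    s = list(s0)
    s_star, z_star = list(s), qap_objective(flow, dist, s)       # step 1
    stm = []                                  # short-term memory: recent moves
    ltm = [[1.0] * n for _ in range(n)]       # long-term memory: move frequencies
    for _ in range(max_iter):
        while True:                           # steps 2 and 3: pick a non-tabu move
            scores = {(i, j): random.random() / ltm[i][j]
                      for i in range(n) for j in range(i + 1, n)}
            n1, n2 = max(scores, key=scores.get)
            if (n1, n2) not in stm:
                break
        s[n1], s[n2] = s[n2], s[n1]           # exchange the two locations
        stm.append((n1, n2))                  # step 4: update the memories
        if len(stm) > max_stm:
            m1, m2 = stm.pop(0)
            ltm[m1][m2] += 1
        z = qap_objective(flow, dist, s)      # step 5
        if z < z_star:                        # step 6
            s_star, z_star = list(s), z
    return s_star, z_star                     # step 7
```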

Simulated annealing

Simulated annealing is a famous and popular local search meta-heuristic applied to discrete and continuous optimization problems. Like other meta-heuristic methods, it can escape from local optima. Simulated annealing is so named because of its similarity to the physical annealing of solids, in which a crystalline solid is heated and then allowed to cool very slowly until it reaches its most regular possible crystal lattice configuration and is thus free of crystal defects; if the cooling schedule is sufficiently slow, the final configuration is a solid with superior structural integrity. Equation 5 shows the Metropolis acceptance criterion for each move in the cooling process (Metropolis et al. [1953]):

$$P = \exp\!\left(-\,\frac{OFV - OFV_B}{T}\right) \qquad (5)$$

where OFV is the objective function value of the current iteration and $OFV_B$ is the best value computed up to this iteration, T is the temperature of the algorithm at that iteration, and P is the probability of accepting the move in the annealing process. The proposed SA pseudo code for the QAP is as follows:

Step 1. Let S be the initial feasible solution and Z its objective function value; then, set S* = S, Z* = Z. T = 100, T0 = 0.1, r = 0.95, n limit max = 5 and n over max = 10.

Step 2. n limit = 0 and n over = 0.

Step 3. Transpose two facilities in the current layout randomly and calculate the objective function value (Z).

Step 4. If Z ≤ Z*, accept the transposition; that is, set Z* = Z and S* = S, and then n limit = n limit + 1 and n over = n over + 1. If n over = n over max or n limit = n limit max, proceed to step 6; otherwise, repeat step 3. If Z > Z*, proceed to step 5.

Step 5. Calculate P using Equation 5. If P ≥ rand(0, 1), then set Z* = Z and S* = S, and then n limit = n limit + 1 and n over = n over + 1; if n over = n over max or n limit = n limit max, proceed to step 6; otherwise, repeat step 3. If P < rand(0, 1), then n over = n over + 1; if n over = n over max, proceed to step 6; otherwise, repeat step 3.

Step 6. Set T = r × T, where r is the cooling rate. If T ≤ T0, proceed to step 7; otherwise, repeat step 2.

Step 7. Print S* and Z*.
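A sketch of the SA steps under the same assumptions follows; note that, as in the pseudo code above, the incumbent and the best solution coincide, so an accepted uphill move overwrites S* and Z*.

```python
import math
import random

def simulated_annealing(flow, dist, s0, t=100.0, t0=0.1, r=0.95,
                        n_limit_max=5, n_over_max=10):
    s_star = list(s0)                                     # step 1
    z_star = qap_objective(flow, dist, s_star)
    while t > t0:                                         # cooling loop
        n_limit = n_over = 0                              # step 2
        while n_over < n_over_max and n_limit < n_limit_max:
            i, j = random.sample(range(len(s_star)), 2)   # step 3
            cand = list(s_star)
            cand[i], cand[j] = cand[j], cand[i]
            z = qap_objective(flow, dist, cand)
            if z <= z_star:                               # step 4: improvement
                s_star, z_star = cand, z
                n_limit += 1
            else:                                         # step 5: Metropolis
                p = math.exp(-(z - z_star) / t)           # Equation 5
                if p >= random.random():
                    s_star, z_star = cand, z
                    n_limit += 1
            n_over += 1
        t *= r                                            # step 6: cool down
    return s_star, z_star                                 # step 7
```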

Particle swarm optimization

Particle swarm optimization (PSO) is a population-based search algorithm founded on a simulation of the social behavior of bees, birds, or schooling fish, and it is a significant member of the swarm intelligence family. It was proposed as an optimization method by Eberhart and Kennedy (Eberhart and Kennedy [1995]; Kennedy and Eberhart [1995]). Each individual within the swarm is represented by a vector in the multidimensional search space. This vector has an assigned companion vector, called the velocity vector, that determines the next movement of the particle. The technique also specifies how to update the velocity of a particle: each particle updates its velocity based on its current velocity, the best position it has explored so far (p best), and the global best position explored by the swarm (g best). The movement of each particle is shown in Figure 6 and follows Equations 6 and 7. Equation 6 shows that the velocity vector is updated using the global best position, the personal best position, and the current position of each particle, and Equation 7 shows that each particle moves by its velocity.

$$v_i(t+1) = w\,v_i(t) + b_1\,\mathrm{rand}\,\big(p_{best,i} - x_i(t)\big) + b_2\,\mathrm{rand}\,\big(g_{best} - x_i(t)\big) \qquad (6)$$

$$x_i(t+1) = x_i(t) + v_i(t+1) \qquad (7)$$

where i is the index of the particle, t is the iteration index, $v_i$ is the velocity vector, $x_i$ is the position, w is the weight of the current velocity, $b_1$ is the weight of the difference between the personal best and current positions, $b_2$ is the weight of the difference between the global best and current positions, and rand is a random factor. The PSO pseudo code presented for the QAP in this research is as follows:

Step 1. Set max iteration = 100, number of particles = 15, w = n − 1, b1 = n/2, and b2 = (n/2) + 2, where n is the dimension of the problem. Generate 15 permutations as the initial solutions; set Z* = min(OFV), S* = the corresponding solution, and iter = 1.

Step 2. For i = 1 to w, transpose two facilities. Do this step for each particle.

Step 3. Calculate objective function value (Z) for each new particle; find the personal best OFV for each particle (p best), and find the global best OFV (g best).

Step 4. Generate a discrete random number between 0 and b1, and for i = 1 to this random number, move each particle toward its p best.

Step 5. Generate a discrete random number between 0 and b2, and for i = 1 to this random number, move each particle toward the g best.

Step 6. Set iter = iter + 1; if iter < max iteration, repeat step 2; otherwise, proceed to step 7.

Step 7. Print S* and Z*.
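Because QAP solutions are permutations, the velocity update of Equation 6 cannot be applied literally. The sketch below interprets "moving a particle toward p best or g best" in steps 4 and 5 as applying swaps that copy assignments from the guiding permutation; this discretization, and the helper move_toward, are assumptions on our part rather than the paper's stated operator.

```python
import random

def move_toward(perm, guide, max_swaps):
    """Apply up to max_swaps swaps that make perm more similar to guide."""
    perm = list(perm)
    for _ in range(random.randint(0, max_swaps)):
        diff = [i for i in range(len(perm)) if perm[i] != guide[i]]
        if not diff:
            break
        i = random.choice(diff)
        j = perm.index(guide[i])          # bring guide's assignment into place i
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def pso_qap(flow, dist, n_particles=15, max_iter=100):
    n = len(flow)
    w, b1, b2 = n - 1, n // 2, n // 2 + 2                 # step 1
    swarm = [random.sample(range(n), n) for _ in range(n_particles)]
    p_best = [list(p) for p in swarm]
    p_best_z = [qap_objective(flow, dist, p) for p in swarm]
    g = min(range(n_particles), key=lambda k: p_best_z[k])
    g_best, g_best_z = list(p_best[g]), p_best_z[g]
    for _ in range(max_iter):                             # step 6 loop
        for k in range(n_particles):
            s = list(swarm[k])
            for _ in range(random.randint(0, w)):         # step 2: random swaps
                i, j = random.sample(range(n), 2)
                s[i], s[j] = s[j], s[i]
            s = move_toward(s, p_best[k], b1)             # step 4
            s = move_toward(s, g_best, b2)                # step 5
            z = qap_objective(flow, dist, s)              # step 3
            swarm[k] = s
            if z < p_best_z[k]:                           # update p best, g best
                p_best[k], p_best_z[k] = list(s), z
                if z < g_best_z:
                    g_best, g_best_z = list(s), z
    return g_best, g_best_z                               # step 7
```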

Tuning method

The parameters of the meta-heuristic methods introduced above, such as max STM and max iteration, were tuned by carrying out a general factorial design. To achieve this, several levels were first defined for each parameter. These levels are chosen based on the logic of the meta-heuristics; for instance, increasing the number of iterations in PSO lengthens the search time, which is just one characteristic of this meta-heuristic. After the levels are defined, an experimental design is laid out as a general factorial design, the meta-heuristic is run for these experiments, and the results are analyzed; the best parameter levels can thus be found. Figure 7 illustrates this tuning method.
For example, for the ‘Nug’ instances taken from the Quadratic Assignment Problem Library (QAPLIB), the analysis is done on ‘Nug27’ for the TS method. The proposed levels for max STM are 5, 7, 9, and 11, and for max iteration they are 1,000, 1,500, and 2,000. We consider two replicates for each combination of factor levels; hence, for this instance, 24 experimental runs are arranged. Figure 8 shows the interaction plot for the OFV and the best combination of these factors: the best max STM is 5, and the best max iteration is 1,000. Hence, we tune the TS parameters based on this experiment. Parameters such as n over max, n limit max, T, and T0 were set by analyzing a general factorial design in the same way.
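The tuning procedure can be sketched as a full enumeration of the factor-level combinations with replicates; run_tabu_search below is a hypothetical stand-in for the tuned meta-heuristic (returning the OFV of one run), and the levels mirror the Nug27 example.

```python
from itertools import product

def factorial_tuning(run_tabu_search, instance, replicates=2):
    """General factorial design: try every combination of factor levels."""
    levels = {"max_stm": [5, 7, 9, 11],        # levels proposed for Nug27
              "max_iter": [1000, 1500, 2000]}
    mean_ofv = {}
    for stm, iters in product(levels["max_stm"], levels["max_iter"]):
        runs = [run_tabu_search(instance, max_stm=stm, max_iter=iters)
                for _ in range(replicates)]   # two replicates per combination
        mean_ofv[(stm, iters)] = sum(runs) / replicates
    best = min(mean_ofv, key=mean_ofv.get)    # e.g., (5, 1000) for Nug27
    return best, mean_ofv
```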

In applied mathematics, the maximum generalized assignment problem is a problem in combinatorial optimization. This problem is a generalization of the assignment problem in which both tasks and agents have a size. Moreover, the size of each task might vary from one agent to the other.

This problem in its most general form is as follows:

There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost and profit that may vary depending on the agent-task assignment. Moreover, each agent has a budget, and the sum of the costs of the tasks assigned to it cannot exceed this budget. It is required to find an assignment in which no agent exceeds its budget and the total profit of the assignment is maximized.

Special cases

In the special case in which all the agents' budgets and all tasks' costs are equal to 1, this problem reduces to the assignment problem. When the costs and profits of all agent-task assignments are equal, the problem reduces to the multiple knapsack problem. If there is a single agent, the problem reduces to the knapsack problem.

Explanation of definition

In the following, we have n items, $a_1$ through $a_n$, and m bins, $b_1$ through $b_m$. Each bin $b_i$ is associated with a budget $t_i$. For a bin $b_i$, each item $a_j$ has a profit $p_{ij}$ and a weight $w_{ij}$. A solution is an assignment from items to bins. A feasible solution is a solution in which, for each bin $b_i$, the total weight of the assigned items is at most $t_i$. The solution's profit is the sum of the profits for each item-bin assignment. The goal is to find a maximum-profit feasible solution.

Mathematically, the generalized assignment problem can be formulated as an integer program:

$$\max \sum_{i=1}^{m}\sum_{j=1}^{n} p_{ij}\, x_{ij}$$

$$\text{subject to} \quad \sum_{j=1}^{n} w_{ij}\, x_{ij} \le t_i, \quad i = 1, \dots, m;$$

$$\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, \dots, n;$$

$$x_{ij} \in \{0, 1\}, \quad i = 1, \dots, m, \; j = 1, \dots, n.$$
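As an illustration, this integer program can be stated almost verbatim with an off-the-shelf modeling library; the sketch below uses PuLP, and the data names p, w, t, m, and n, as well as the tiny instance, are assumptions matching the notation above.

```python
import pulp  # assumed available: pip install pulp

# Tiny hypothetical instance: m = 2 agents (bins), n = 3 tasks (items).
m, n = 2, 3
p = [[4, 2, 5], [3, 6, 4]]   # p[i][j]: profit of task j on agent i
w = [[2, 3, 4], [3, 2, 3]]   # w[i][j]: cost of task j on agent i
t = [5, 5]                   # t[i]: budget of agent i

prob = pulp.LpProblem("GAP", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", [(i, j) for i in range(m) for j in range(n)],
                          cat="Binary")
prob += pulp.lpSum(p[i][j] * x[i, j] for i in range(m) for j in range(n))
for i in range(m):           # budget constraint per agent
    prob += pulp.lpSum(w[i][j] * x[i, j] for j in range(n)) <= t[i]
for j in range(n):           # each task assigned to at most one agent
    prob += pulp.lpSum(x[i, j] for i in range(m)) <= 1
prob.solve()
print([(i, j) for i in range(m) for j in range(n) if x[i, j].value() == 1])
```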

Complexity

The generalized assignment problem is NP-hard,[1] and it is even APX-hard to approximate. Recently it was shown that an extension of it is also hard to approximate.

Greedy approximation algorithm

Using any $\alpha$-approximation algorithm ALG for the knapsack problem, it is possible to construct a $(1 + \alpha)$-approximation for the generalized assignment problem in a greedy manner using a residual profit concept. The algorithm constructs a schedule in iterations, where during iteration $i$ a tentative selection of items for bin $b_i$ is made. The selection for bin $b_i$ may change later, as items may be reselected in a subsequent iteration for other bins. The residual profit of an item $a_j$ for bin $b_i$ is $p_{ij}$ if $a_j$ is not selected for any other bin, or $p_{ij} - p_{kj}$ if $a_j$ is currently selected for bin $b_k$.

We use a vector $T$ to indicate the tentative schedule during the algorithm. Specifically, $T[j] = i$ means that item $a_j$ is scheduled on bin $b_i$, and $T[j] = -1$ means that item $a_j$ is not scheduled. The residual profit in iteration $i$ is denoted by $P^i$, where $P^i_j = p_{ij}$ if item $a_j$ is not scheduled (i.e., $T[j] = -1$), and $P^i_j = p_{ij} - p_{kj}$ if item $a_j$ is scheduled on bin $b_k$ (i.e., $T[j] = k$).

Formally:

Set $T[j] = -1$ for all $j = 1, \dots, n$.
For $i = 1, \dots, m$ do:
    Call ALG to find a solution for bin $b_i$ using the residual profit function $P^i$. Denote the selected items by $S_i$.
    Update $T$ using $S_i$, i.e., set $T[j] = i$ for all $a_j \in S_i$.
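A Python sketch of this greedy scheme follows; the inner ALG is instantiated with an exact dynamic-programming knapsack (so $\alpha = 1$), which is our choice for illustration, and integral weights and budgets are assumed for the DP.

```python
def knapsack(profits, weights, capacity):
    """Exact 0/1 knapsack by DP; plays the role of ALG with alpha = 1.

    Weights and capacity are assumed integral. Returns the set of
    selected item indices (relative to the input lists)."""
    best = [(0, frozenset())] * (capacity + 1)
    for j, (p, w) in enumerate(zip(profits, weights)):
        new = list(best)
        for c in range(w, capacity + 1):
            cand_p, cand_items = best[c - w]
            if cand_p + p > new[c][0]:
                new[c] = (cand_p + p, cand_items | {j})
        best = new
    return best[capacity][1]

def greedy_gap(profit, weight, budget):
    """Greedy (1 + alpha)-approximation for GAP via residual profits.

    profit[i][j] and weight[i][j] refer to item j in bin i; budget[i]
    is bin i's capacity. Returns T with T[j] = bin of item j, or -1."""
    m, n = len(budget), len(profit[0])
    T = [-1] * n                              # tentative schedule
    for i in range(m):                        # one knapsack call per bin
        residual = [profit[i][j] if T[j] == -1
                    else profit[i][j] - profit[T[j]][j]
                    for j in range(n)]
        cand = [j for j in range(n) if residual[j] > 0]   # useful items only
        chosen = knapsack([residual[j] for j in cand],
                          [weight[i][j] for j in cand], budget[i])
        for idx in chosen:
            T[cand[idx]] = i                  # (re)select items for bin i
    return T
```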


References

  • Reuven Cohen, Liran Katzir, and Danny Raz, "An Efficient Approximation for the Generalized Assignment Problem", Information Processing Letters, Vol. 100, Issue 4, pp. 162–166, November 2006.
  • Lisa Fleischer, Michel X. Goemans, Vahab S. Mirrokni, and Maxim Sviridenko, "Tight Approximation Algorithms for Maximum General Assignment Problems", SODA 2006, pp. 611–620.
  • Hans Kellerer, Ulrich Pferschy, and David Pisinger, Knapsack Problems, Springer, 2005. ISBN 3-540-40286-1.
