where c_ijkl is the cost of assigning facility i to location k and, simultaneously, facility j to location l; x_ik = 1 if facility i is assigned to location k and x_ik = 0 otherwise; likewise, x_jl = 1 if facility j is assigned to location l and x_jl = 0 otherwise. The objective function (Equation 1) of this model must be minimized. Each location must be assigned to exactly one facility, as Equation 2 shows, and Equation 3 states that each facility must be assigned to exactly one location. The numbers of facilities and locations are the same, both equal to n, and the variables in this model are binary.
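As a concrete illustration of the objective function, the following sketch evaluates Equation 1 for a given assignment, assuming the common Koopmans-Beckmann factorization c_ijkl = f_ij * d_kl (flow between facilities times distance between locations); this factored form and the helper name `qap_ofv` are assumptions for illustration, not part of the model statement above.

```python
def qap_ofv(flow, dist, perm):
    """Objective function value (Equation 1) for an assignment, where
    perm[i] = k means facility i is placed at location k and the cost
    is taken as c_ijkl = flow[i][j] * dist[k][l]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))
```

For instance, with two facilities exchanging flow 3 over locations 2 units apart, the OFV is 3*2 counted in both directions, i.e., 12.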
Heuristic algorithms do not guarantee an optimal solution to the problem; they are approximation methods. They have the additional property that their worst-case behavior is known. In this section, several heuristic procedures for searching for better solutions are considered: 2-opt, greedy 2-opt, 3-opt, greedy 3-opt, and the Vollman, Nugent, Zartler (VNZ) method.
Initially, the algorithm considers the transposition of facilities 1 and 2. If the resulting solution's objective function value (OFV) is smaller than that of the initial solution, it is stored as a candidate for future consideration; otherwise, it is discarded, and the algorithm considers transposing facilities 1 and 3. If this exchange generates a better solution, it is stored as a candidate; otherwise, it is discarded, and so on. Thus, whenever a better solution is found, the algorithm discards the previous best solution. This procedure continues until all pair-wise exchanges have been considered.
For n locations in the QAP, the 2-opt algorithm consists of three steps:
Step 1. Let S be the initial feasible solution and Z its objective function value; then, set S* = S, Z* = Z, i = 1 and j = i + 1 = 2.
Step 2. Exchange facilities i and j. If the exchange results in a solution S′ with OFV Z′ < Z*, set Z* = Z′ and S* = S′. If j < n, set j = j + 1 and repeat step 2; otherwise, set i = i + 1 and j = i + 1. If i < n, repeat step 2; otherwise, proceed to step 3.
Step 3. If S ≠ S*, set S = S*, Z = Z*, i = 1, j = i + 1 = 2 and go to step 2. Otherwise, output S* as the best solution and terminate.
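The three steps above can be sketched as follows, again assuming the Koopmans-Beckmann cost c_ijkl = f_ij * d_kl (an assumption for illustration); the scan keeps the best improving pair-wise exchange and restarts until no exchange improves the OFV.

```python
def ofv(flow, dist, perm):
    # OFV under the assumed cost c_ijkl = flow[i][j] * dist[k][l]
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def two_opt(flow, dist, perm):
    """Best-improvement 2-opt (steps 1-3): scan every pair (i, j),
    remember the best improving exchange, apply it, and rescan until
    no pair-wise exchange improves the objective."""
    s = list(perm)
    z = ofv(flow, dist, s)
    improved = True
    while improved:
        improved = False
        best_s, best_z = s, z
        for i in range(len(s) - 1):
            for j in range(i + 1, len(s)):
                cand = list(s)
                cand[i], cand[j] = cand[j], cand[i]
                cz = ofv(flow, dist, cand)
                if cz < best_z:
                    best_s, best_z = cand, cz
        if best_z < z:          # step 3: restart from the improved layout
            s, z = best_s, best_z
            improved = True
    return s, z
```
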
Greedy 2-opt algorithm
The greedy 2-opt algorithm is a variant of the 2-opt algorithm; the difference lies in how the transposition to apply is selected. Like 2-opt, it considers pair-wise exchanges and consists of three steps. Initially, it considers transposing facilities 1 and 2. If the resulting OFV is less than the current one, the two facilities are transposed immediately and the exchange is made permanent; the algorithm then takes the new layout as the initial solution and restarts the scan from the beginning. Otherwise, it goes on to evaluate the exchange with facility 3, and so on, until an improvement is found. The algorithm repeats until the solution cannot be improved any further. Because it makes an exchange permanent as soon as an improvement is found, the greedy 2-opt algorithm consumes less computational time than the 2-opt algorithm; on the other hand, it produces slightly worse solutions.
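A first-improvement sketch of greedy 2-opt, under the same assumed Koopmans-Beckmann cost as above: an improving swap is made permanent immediately and the pair-wise scan restarts.

```python
def ofv(flow, dist, perm):
    # OFV under the assumed cost c_ijkl = flow[i][j] * dist[k][l]
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def greedy_two_opt(flow, dist, perm):
    """First-improvement 2-opt: commit an improving exchange as soon
    as it is found, then restart the scan from facilities (1, 2)."""
    s = list(perm)
    z = ofv(flow, dist, s)
    restart = True
    while restart:
        restart = False
        for i in range(len(s) - 1):
            for j in range(i + 1, len(s)):
                s[i], s[j] = s[j], s[i]
                cz = ofv(flow, dist, s)
                if cz < z:
                    z = cz
                    restart = True      # keep the swap, rescan
                    break
                s[i], s[j] = s[j], s[i]  # undo a non-improving swap
            if restart:
                break
    return s, z
```
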
Greedy 3-opt algorithm
The greedy 3-opt algorithm is similar to the greedy 2-opt algorithm, but it makes a three-facility exchange permanent whenever the resulting OFV is better than the current OFV and repeats the algorithm with the new transposition as the initial solution. The transpositions used are the same as in 3-opt.
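A minimal sketch of the greedy 3-exchange under the same assumed cost structure: each triple of facilities is rotated cyclically in both directions, and the first improving exchange is made permanent.

```python
from itertools import combinations

def ofv(flow, dist, perm):
    # OFV under the assumed cost c_ijkl = flow[i][j] * dist[k][l]
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def greedy_three_opt(flow, dist, perm):
    """First-improvement 3-exchange: try both cyclic rotations of each
    facility triple; commit an improving one immediately and rescan."""
    s = list(perm)
    z = ofv(flow, dist, s)
    improved = True
    while improved:
        improved = False
        for i, j, k in combinations(range(len(s)), 3):
            # the two cyclic rotations of the triple (i, j, k)
            for rotated in ((s[k], s[i], s[j]), (s[j], s[k], s[i])):
                cand = list(s)
                cand[i], cand[j], cand[k] = rotated
                cz = ofv(flow, dist, cand)
                if cz < z:
                    s, z = cand, cz
                    improved = True
                    break
            if improved:
                break
    return s, z
```
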
The VNZ method was introduced by Vollman et al. It uses less storage space than 2-opt.
These methods involve no randomization, and they cannot guide the current solution toward the optimum within a limited time. Meta-heuristic methods, in contrast, provide a good search of the solution space in reasonable time. These methods are considered in the next subsection.
In the original definition, meta-heuristics are solution methods that manage an interaction between local improvement procedures and higher-level strategies to create a process capable of escaping from local optima and performing a good search of the solution space. These methods have also come to include any procedures that employ strategies for overcoming the trap of local optimality in complex solution spaces, especially procedures that take advantage of one or more neighborhood structures as a means of defining acceptable moves to transform one solution into another. In this research, TS, SA, and PSO are applied to the QAP and compared on the selected data sets.
Classical methods often face great difficulty when confronted with hard optimization problems arising in real situations. Tabu search (TS) was first proposed by Glover (1989, 1990). This meta-heuristic approach is dramatically changing our ability to solve problems of practical significance. The pseudo code of TS, as applied in this research, is as follows:
Step 1. Let S be the initial feasible solution and Z its objective function value; then, set S* = S, Z* = Z, maximum short-term memory (STM) size = 5, max iteration = 1,000, and iter = 1.
Step 2. For each pair (i, j), set random(i, j) = rand / LTM(i, j), where LTM is the long-term memory of swap frequencies; let (n1, n2) be the indices of the maximum value in random.
Step 3. If (n1, n2) is not in the STM matrix, exchange locations n1 and n2; otherwise, repeat step 2.
Step 4. Insert (n1, n2) into STM and release the oldest entry from STM (say, (m1, m2)); set LTM(m1, m2) = LTM(m1, m2) + 1.
Step 5. Calculate the objective function value (Z) of the new permutation.
Step 6. If Z ≤ Z*, then set Z* = Z and S* = S. Set iter = iter + 1.
Step 7. If iter ≤ max iteration, then repeat step 2; otherwise, print Z* and S*.
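The TS steps above can be sketched as follows; the cost model (Koopmans-Beckmann c_ijkl = f_ij * d_kl) and the concrete data structures for STM (a FIFO list of recent swaps) and LTM (a matrix of swap frequencies, started at 1 to avoid division by zero) are illustrative assumptions.

```python
import random

def ofv(flow, dist, perm):
    # OFV under the assumed cost c_ijkl = flow[i][j] * dist[k][l]
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_search(flow, dist, perm, max_stm=5, max_iter=1000, seed=0):
    """Swap-based tabu search (steps 1-7): each iteration scores every
    non-tabu swap by rand / LTM(i, j), applies the highest-scoring one,
    and keeps the best solution seen. max_stm must stay below the
    number of swap pairs, or every swap becomes tabu."""
    rng = random.Random(seed)
    n = len(perm)
    s = list(perm)
    best_s, best_z = list(s), ofv(flow, dist, s)
    stm = []                                  # short-term memory of recent swaps
    ltm = [[1] * n for _ in range(n)]         # long-term swap frequencies
    for _ in range(max_iter):
        best_pair, best_score = None, -1.0    # step 2: score candidate swaps
        for i in range(n - 1):
            for j in range(i + 1, n):
                if (i, j) in stm:             # step 3: tabu swap, skip it
                    continue
                score = rng.random() / ltm[i][j]
                if score > best_score:
                    best_pair, best_score = (i, j), score
        i, j = best_pair
        s[i], s[j] = s[j], s[i]               # apply the chosen exchange
        stm.append((i, j))                    # step 4: update memories
        if len(stm) > max_stm:
            m1, m2 = stm.pop(0)
            ltm[m1][m2] += 1
        z = ofv(flow, dist, s)                # steps 5-6
        if z <= best_z:
            best_s, best_z = list(s), z
    return best_s, best_z
```
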
A worse move is accepted with probability P = exp(-(OFV - OFVB) / T), where OFV and OFVB are the objective function value of the current iteration and the best value computed up to this iteration, T is the temperature of the algorithm at that iteration, and P is the probability of accepting each move in the annealing process. The proposed SA pseudo code for the QAP is as follows:
Step 1. Let S be the initial feasible solution and Z its objective function value; then, set S* = S, Z* = Z, T = 100, T0 = 0.1, r = 0.95, n limit max = 5, and n over max = 10.
Step 2. n limit = 0 and n over = 0.
Step 3. Transpose two facilities in the current layout randomly and calculate the objective function value (Z).
Step 4. If Z ≤ Z*, accept the transposition; that is, set Z* = Z and S* = S, and then n limit = n limit + 1 and n over = n over + 1. If n over = n over max or n limit = n limit max, proceed to step 6; otherwise, repeat step 3. If Z > Z*, proceed to step 5.
Step 5. Calculate the acceptance probability P. If P ≥ rand(0, 1), then set Z* = Z and S* = S, and then n limit = n limit + 1 and n over = n over + 1; if n over = n over max or n limit = n limit max, proceed to step 6; otherwise, repeat step 3. If P < rand(0, 1), then set n over = n over + 1; if n over = n over max, proceed to step 6; otherwise, repeat step 3.
Step 6. T = r × T, where r is the rate of cooling. If T ≤ T0, then proceed to step 7; otherwise, repeat step 2.
Step 7. Print S* and Z*.
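The SA steps above can be sketched as follows, again under the assumed Koopmans-Beckmann cost. Following the pseudo code, an accepted move (improving or probabilistic) updates S* and Z*, and a rejected move is undone.

```python
import math
import random

def ofv(flow, dist, perm):
    # OFV under the assumed cost c_ijkl = flow[i][j] * dist[k][l]
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def simulated_annealing(flow, dist, perm, t=100.0, t0=0.1, r=0.95,
                        n_limit_max=5, n_over_max=10, seed=0):
    """Steps 1-7: random pair swaps, acceptance P = exp(-(Z - Z*)/T)
    for worse moves, geometric cooling T = r * T until T <= T0."""
    rng = random.Random(seed)
    s = list(perm)
    s_star, z_star = list(s), ofv(flow, dist, s)
    while t > t0:
        n_limit = n_over = 0                      # step 2
        while True:
            i, j = rng.sample(range(len(s)), 2)   # step 3: random swap
            s[i], s[j] = s[j], s[i]
            z = ofv(flow, dist, s)
            if z <= z_star:                       # step 4: accept improvement
                s_star, z_star = list(s), z
                n_limit += 1
                n_over += 1
            elif math.exp(-(z - z_star) / t) >= rng.random():
                s_star, z_star = list(s), z       # step 5: accept worse move
                n_limit += 1
                n_over += 1
            else:
                s[i], s[j] = s[j], s[i]           # undo the rejected move
                n_over += 1
            if n_over >= n_over_max or n_limit >= n_limit_max:
                break
        t *= r                                    # step 6: cooling
    return s_star, z_star                         # step 7
```
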
Particle swarm optimization
In PSO, the velocity and position of each particle are updated as v_i(t + 1) = w·v_i(t) + b1·rand·(p best_i − x_i(t)) + b2·rand·(g best − x_i(t)) and x_i(t + 1) = x_i(t) + v_i(t + 1), where i is the index of the particle, t is the index of the iteration, v_i is the velocity vector, x_i is the position, w is the weight of the current velocity, b1 is the weight of the difference between the personal best and current positions, b2 is the weight of the difference between the global best and current positions, and rand is used for randomization. The PSO pseudo code presented for the QAP in this research is as follows:
Step 1. Set max iteration = 100, number of particles = 15, w = n − 1, b1 = n/2, and b2 = (n/2) + 2, where n is the dimension of the problem. Generate 15 permutations as the initial solutions; set Z* = min(OFV), let S* be the corresponding permutation, and set iter = 1.
Step 2. For i = 1 to w, transpose two facilities. Do this step for each particle.
Step 3. Calculate objective function value (Z) for each new particle; find the personal best OFV for each particle (p best), and find the global best OFV (g best).
Step 4. Generate a random discrete number between 0 and b1, and for i = 1 to this random number, move each particle toward its p best.
Step 5. Generate a random discrete number between 0 and b2, and for i = 1 to this random number, move each particle toward g best.
Step 6. iter = iter + 1; if iter < max iteration, repeat step 2; otherwise, proceed to step 7.
Step 7. Print S* and Z*.
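The discrete PSO above can be sketched as follows. The cost model and the `move_toward` operator (moving a permutation toward a target by corrective swaps, a common discrete analogue of the velocity update) are illustrative assumptions.

```python
import random

def ofv(flow, dist, perm):
    # OFV under the assumed cost c_ijkl = flow[i][j] * dist[k][l]
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def move_toward(perm, target, max_swaps, rng):
    """Apply at most a random number (0..max_swaps) of swaps, each of
    which makes one more position of perm agree with target."""
    p = list(perm)
    for _ in range(rng.randint(0, max_swaps)):
        diff = [k for k in range(len(p)) if p[k] != target[k]]
        if not diff:
            break
        k = diff[0]
        m = p.index(target[k])
        p[k], p[m] = p[m], p[k]
    return p

def pso(flow, dist, n, particles=15, max_iter=100, seed=0):
    """Steps 1-7: random-swap inertia (w swaps), then moves toward the
    personal best (up to b1 swaps) and the global best (up to b2)."""
    rng = random.Random(seed)
    w, b1, b2 = n - 1, n // 2, n // 2 + 2
    swarm = [rng.sample(range(n), n) for _ in range(particles)]
    pbest = [list(p) for p in swarm]
    pbest_z = [ofv(flow, dist, p) for p in swarm]
    g = pbest_z.index(min(pbest_z))
    gbest, gbest_z = list(pbest[g]), pbest_z[g]
    for _ in range(max_iter):
        for idx in range(particles):
            p = list(swarm[idx])
            for _ in range(w):                        # step 2: inertia swaps
                i, j = rng.sample(range(n), 2)
                p[i], p[j] = p[j], p[i]
            p = move_toward(p, pbest[idx], b1, rng)   # step 4
            p = move_toward(p, gbest, b2, rng)        # step 5
            swarm[idx] = p
            z = ofv(flow, dist, p)                    # step 3
            if z < pbest_z[idx]:
                pbest[idx], pbest_z[idx] = list(p), z
                if z < gbest_z:
                    gbest, gbest_z = list(p), z
    return gbest, gbest_z
```
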
In applied mathematics, the maximum generalized assignment problem is a problem in combinatorial optimization. It is a generalization of the assignment problem in which both tasks and agents have a size; moreover, the size of each task might vary from one agent to another.
This problem in its most general form is as follows:
There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost and profit that may vary depending on the agent-task assignment. Moreover, each agent has a budget, and the sum of the costs of the tasks assigned to it cannot exceed this budget. It is required to find an assignment in which no agent exceeds its budget and the total profit of the assignment is maximized.
Special cases
In the special case in which all the agents' budgets and all tasks' costs are equal to 1, this problem reduces to the assignment problem. When the costs and profits of all agent-task assignments are equal, this problem reduces to the multiple knapsack problem. If there is a single agent, this problem reduces to the knapsack problem.
Explanation of definition
In the following, we have n items, a1 through an, and m bins, b1 through bm. Each bin bj is associated with a budget tj. For a bin bj, each item ai has a profit p(i, j) and a weight w(i, j). A solution is an assignment from items to bins. A feasible solution is a solution in which, for each bin bj, the total weight of assigned items is at most tj. The solution's profit is the sum of profits for each item-bin assignment. The goal is to find a maximum-profit feasible solution.
Mathematically, the generalized assignment problem can be formulated as the integer program

maximize Σ_{j=1..m} Σ_{i=1..n} p(i, j) x_ij
subject to Σ_{i=1..n} w(i, j) x_ij ≤ tj for j = 1, …, m;
Σ_{j=1..m} x_ij ≤ 1 for i = 1, …, n;
x_ij ∈ {0, 1} for i = 1, …, n and j = 1, …, m,

where x_ij = 1 if item ai is assigned to bin bj and x_ij = 0 otherwise.
The generalized assignment problem is NP-hard, and it is even APX-hard to approximate. Recently it was shown that an extension of it is hard to approximate to within 1 − 1/e + ε for every ε > 0.
Greedy approximation algorithm
Using any α-approximation algorithm ALG for the knapsack problem, it is possible to construct a (1 + α)-approximation for the generalized assignment problem in a greedy manner using a residual profit concept. The algorithm constructs a schedule in iterations, where during iteration j a tentative selection of items for bin bj is made. The selection for bin bj might change, as items might be reselected in a later iteration for other bins. The residual profit of an item ai for bin bj is p(i, j) if ai is not selected for any other bin, or p(i, j) − p(i, j′) if ai is currently selected for bin bj′.
Formally: We use a vector T to indicate the tentative schedule during the algorithm. Specifically, T[i] = j means that item i is scheduled on bin j, and T[i] = −1 means that item i is not scheduled. The residual profit in iteration j is denoted by Pj, where Pj[i] = p(i, j) if item i is not scheduled (i.e., T[i] = −1) and Pj[i] = p(i, j) − p(i, T[i]) if item i is scheduled on bin T[i].
- For j = 1, …, m do:
- Call ALG to find a solution to bin bj using the residual profit function Pj. Denote the selected items by Sj.
- Update T using Sj, i.e., T[i] = j for all i in Sj.
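A runnable sketch of this greedy scheme, assuming integer weights and using an exact dynamic-programming knapsack as ALG (so α = 1, giving a (1 + α) = 2-approximation); the function names and data layout are illustrative, not taken from the cited paper.

```python
def knapsack(profit, weight, capacity, items):
    """Exact 0/1 knapsack by DP over integer capacities (the role of
    ALG). Returns the chosen subset of items as a list of indices."""
    dp = [(0, [])] * (capacity + 1)   # dp[c] = (best profit, chosen items)
    for i in items:
        if profit[i] <= 0 or weight[i] > capacity:
            continue                  # a non-positive residual profit never helps
        for c in range(capacity, weight[i] - 1, -1):
            p0, chosen = dp[c - weight[i]]
            if p0 + profit[i] > dp[c][0]:
                dp[c] = (p0 + profit[i], chosen + [i])
    return max(dp, key=lambda entry: entry[0])[1]

def greedy_gap(p, w, t):
    """Greedy residual-profit scheme above: one knapsack per bin on
    residual profits; items may be stolen from earlier bins.
    p[i][j], w[i][j]: profit and weight of item i in bin j; t[j]:
    budget of bin j. Returns T with T[i] = bin of item i, or -1."""
    n, m = len(p), len(t)
    T = [-1] * n
    for j in range(m):
        # residual profit P_j[i]: full profit if unscheduled, else the gain
        # over the bin currently holding item i
        resid = [p[i][j] - (p[i][T[i]] if T[i] != -1 else 0) for i in range(n)]
        wj = [w[i][j] for i in range(n)]
        for i in knapsack(resid, wj, t[j], range(n)):
            T[i] = j
    return T
```

On a small instance with two unit-capacity-style bins, the scheme assigns the most profitable item to each bin in turn, leaving an item unassigned when both budgets are exhausted.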
- Reuven Cohen, Liran Katzir, and Danny Raz, "An Efficient Approximation for the Generalized Assignment Problem", Information Processing Letters, Vol. 100, Issue 4, pp. 162–166, November 2006.
- Lisa Fleischer, Michel X. Goemans, Vahab S. Mirrokni, and Maxim Sviridenko, "Tight Approximation Algorithms for Maximum General Assignment Problems", SODA 2006, pp. 611–620.
- Hans Kellerer, Ulrich Pferschy, and David Pisinger, Knapsack Problems, Springer Verlag, 2005. ISBN 3-540-40286-1.