INTRODUCTION
Many optimization problems in engineering are highly complex and must be solved subject to various, sometimes complicated, constraints. As a consequence, obtaining an optimal solution is often difficult. In addition, such problems are frequently described by a large number of mixed variables and by differential and non-linear equations. As a result, in many cases, classical, derivative-based optimization procedures cannot be used. Metaheuristic techniques bridge this gap, as they can explore the search space for optimal and feasible solutions in a less restrictive (derivative-free) framework. Since in continuous problems the set of feasible solutions is infinite, metaheuristic algorithms employ an empirical, iterative search, mostly inspired by natural intelligence, that guides the candidates so that the solution found is expected to keep improving.
Among the various approaches adopted by metaheuristic algorithms, swarm intelligence uses the social features of a group to define the behavior of its individuals, guiding them toward (locally) optimal solutions [1]. In general, the position of an individual represents a candidate solution and is scored by the objective function of interest. First, the initial positions of the individuals are set randomly, which determines their initial scores. Each individual then evolves according to: i) its own experience, i.e., the best performance it has ever achieved, and ii) the group experience, i.e., the best solution ever found by any individual in the group. Each algorithm has its own rules to express this social behavior. Additional random coefficients are used to promote exploration of the search space, especially in the early iterations, and they are adjusted over the course of the run [2] to favor exploitation towards the end of the search.
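For concreteness, the sketch below shows the canonical velocity and position update of PSO [6], one instance of this personal-best/group-best mechanism; the inertia weight w and the acceleration coefficients c1 and c2 are illustrative values, not parameters used later in this work.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO-style update combining personal and group experience.

    x, v      : (n_individuals, n_vars) positions and velocities
    pbest     : best position each individual has reached so far
    gbest     : best position found by any individual in the group
    w, c1, c2 : illustrative inertia and acceleration coefficients
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)  # random exploration factors
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

After each step, pbest and gbest are refreshed whenever an individual reaches a better score than the best it had recorded so far.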
Examples include algorithms based on ants [3] and bees [4] foraging for food, the breeding behavior of cuckoos [5], the flocking behavior of birds [6], and multi-agent theory [7], among many others.
One of the major issues with metaheuristic algorithms is that there is no guarantee that the global optimum is reached. Despite the randomness used to build the initial solutions and during the iterations, the search can easily become biased, and the final result may be merely a point close to a local optimum [8]. In addition, to handle the constraints of the problem in a single-objective setting, penalty functions have to be used, adding a term to the objective function that discourages infeasible solutions [9]. If the added term is too large, optimal solutions lying on the boundaries may be missed, because of the steep path this creates for the individuals and the strong deformation of the objective function near the boundary. Conversely, if the penalty is too mild, infeasible solutions may be accepted as optimal [10]. The randomness of the process and the penalties added can lead to inconsistent results, so multiple runs of the algorithm are needed to verify the quality of the solution.
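As a minimal illustration, the static penalty sketched below (with constraints written as g(x) <= 0 and a weight rho, both illustrative assumptions rather than a scheme prescribed by [9] or [10]) adds to the objective a term proportional to the squared constraint violation; the dilemma described above is precisely the choice of rho.

```python
import numpy as np

def penalized_objective(f, constraints, rho):
    """Static penalty: add rho times the squared violation of each g_i(x) <= 0.

    A large rho strongly deforms the objective near the boundary; a small
    rho may let infeasible points appear optimal.
    """
    def f_pen(x):
        violations = np.array([max(0.0, g(x)) for g in constraints])
        return f(x) + rho * np.sum(violations ** 2)
    return f_pen

# Hypothetical usage: minimize x1^2 + x2^2 subject to x1 + x2 >= 1
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - (x[0] + x[1])  # rewritten so that g(x) <= 0 means feasible
score = penalized_objective(f, [g], rho=100.0)(np.array([0.2, 0.3]))
```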
Another crucial issue with these methods is the computational effort they require. Multiple solutions (individuals) must be evaluated in each iteration, and their number grows with the number of variables of the problem. In general, the number of individuals has to be at least equal to the number of variables [11], so in complex problems the number of objective-function evaluations can be very large. In addition, many engineering problems rely on external models to compute parameters of the objective function and the constraints, as is the case for hydraulic [12], thermal [13], and chemical [14] models, which increases the computational effort even further.
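To convey the order of magnitude involved, the snippet below estimates the evaluation budget implied by this rule of thumb; the iteration count and the per-evaluation cost of an external model call are hypothetical figures used only for illustration.

```python
# Hypothetical figures, for illustration only.
n_vars = 20_000                       # decision variables
n_individuals = n_vars                # at least one individual per variable [11]
n_iterations = 1_000
evals = n_individuals * n_iterations  # 20,000,000 objective evaluations
seconds_per_eval = 0.01               # e.g., a fast call to an external simulator
print(f"{evals:,} evaluations, about {evals * seconds_per_eval / 3600:.0f} h of model calls")
```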
A final key point is that there is no such thing as the best metaheuristic algorithm. Each one has advantages and disadvantages with respect to three major aspects [15]: i) ease of implementation, related to the number of parameters to be adjusted (whose fine-tuning can be costly); ii) computational complexity, which is reflected in the convergence speed; and iii) reliability of the results, i.e., whether consistent optimal values are obtained across a series of tests.
To help address these problems, this paper proposes a new swarm-based algorithm that uses the metaphor of cyclists' behavior in a peloton, which we have named the Grand Tour Algorithm (GTA). The main features of the algorithm are: i) the drag, defined according to the distance to the leading cyclist (who embodies the best solution), and ii) the speed, defined from the difference between two consecutive objective-function evaluations. These two elements are used to calculate the coefficients that determine the route of each cyclist. As a result, the peloton tends to follow both the cyclist closest to the finish line, i.e., the optimal solution, and the cyclist who is currently moving fastest. We describe the methodological aspects in the next section. Then, sixteen benchmark functions, most of them with 20,000 decision variables, are used to evaluate the performance of the algorithm against the classical Particle Swarm Optimization (PSO) algorithm [6]. Finally, various sensitivity analyses assess the influence of the algorithm parameters on convergence and stability.
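Purely to fix ideas before the formal presentation, the sketch below shows, in schematic form, how a distance-based drag term and an improvement-based speed term could be combined into a coefficient that steers each cyclist towards the leader; the expressions used here are simplified placeholders, and the actual GTA update rule is the one defined in the next section.

```python
import numpy as np

def peloton_step(x, f_prev, f_curr, x_best, step=0.1, eps=1e-12, rng=None):
    """Schematic update with placeholder drag and speed terms.

    x              : (n_cyclists, n_vars) current positions
    f_prev, f_curr : objective values from the two most recent evaluations
    x_best         : position of the leading cyclist (best solution so far)
    The formulas below are simplified placeholders, not the GTA update.
    """
    rng = rng or np.random.default_rng()
    dist = np.linalg.norm(x - x_best, axis=1) + eps      # distance to the leader
    drag = dist / dist.max()                             # placeholder drag term
    speed = (f_prev - f_curr) / (np.abs(f_prev) + eps)   # placeholder relative improvement
    coeff = (drag + np.clip(speed, 0.0, None))[:, None]  # combined route coefficient
    r = rng.random(x.shape)                              # random exploration factor
    return x + step * coeff * r * (x_best - x)           # move each cyclist towards the leader
```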