
Basins of Attraction and Metaoptimization for Particle Swarm Optimization Methods

Date: 2024-01-01

Creator: David Ma

Access: Open access

Particle swarm optimization (PSO) is a metaheuristic optimization method that finds near-optima by spawning particles which explore a given search space while exploiting the best candidate solutions of the swarm. PSO algorithms emulate the behavior of, say, a flock of birds or a school of fish, and encapsulate the randomness present in natural processes. In this paper, we discuss different initialization schemes and meta-optimizations for PSO, its performance on various multi-minima functions, and the unique intricacies and obstacles the method faces when attempting to produce images of basins of attraction, which are the sets of initial points mapped to the same minimum by the method. This project compares the relative strengths and weaknesses of the particle swarm with other optimization methods, namely gradient descent, in the context of basin mapping and other metrics. It was found that with proper parameterization, PSO can amply explore the search space regardless of initialization. For all functions, the swarm was capable of finding, within some tolerance, the global minimum or minima in fewer than 60 iterations, given sufficiently well-chosen parameters and parameterization schemes. The shortcomings of the particle swarm method, however, are that its parameters often require fine-tuning to different search spaces to optimize most efficiently, and that the swarm cannot produce the analytical minimum. Overall, PSO is a highly adaptive and computationally efficient method with few initial constraints that can readily serve as the first step of any optimization task.
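The explore/exploit dynamic described above can be sketched with the standard textbook PSO update, in which each particle's velocity blends inertia, attraction to its own best position, and attraction to the swarm's best. This is a minimal generic sketch with illustrative parameter values (`w`, `c1`, `c2`), not the specific parameterization schemes studied in the thesis.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [bounds[0], bounds[1]]^dim with a basic particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive (own best) + social (swarm best) terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the particle to the search space.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: the 2-d sphere function, whose global minimum is 0 at the origin.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))
```

With these settings the swarm typically lands very close to the origin well within the 60 iterations cited in the abstract, which is consistent with the paper's observation that well-chosen parameters matter more than initialization.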


GREEN-PSO: Conserving function evaluations in Particle Swarm Optimization

Date: 2013-11-18

Creator: Stephen M. Majercik

Access: Open access

In the Particle Swarm Optimization (PSO) algorithm, the expense of evaluating the objective function can make it difficult, or impossible, to use this approach effectively; reducing the number of necessary function evaluations would make it possible to apply the PSO algorithm more widely. Many function approximation techniques have been developed that address this issue, but an alternative to function approximation is function conservation. We describe GREEN-PSO (GR-PSO), an algorithm that, given a fixed number of function evaluations, conserves those function evaluations by probabilistically choosing a subset of particles smaller than the entire swarm on each iteration and allowing only those particles to perform function evaluations. The "surplus" of function evaluations thus created allows a greater number of particles and/or iterations. In spite of the loss of information resulting from this more parsimonious use of function evaluations, GR-PSO performs as well as, or better than, the standard PSO algorithm on a set of six benchmark functions, both in terms of the rate of error reduction and the quality of the final solution.
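The core idea of conserving evaluations can be sketched as a gating step inside the PSO loop: each iteration, only a probabilistically chosen subset of particles spends an evaluation, while skipped particles keep their previous personal bests. The fixed selection probability and dictionary layout here are illustrative assumptions, not GR-PSO's exact selection scheme.

```python
import random

def frugal_evaluate(swarm, f, eval_prob, budget):
    """Spend objective-function evaluations on only a random subset of particles.

    swarm: list of dicts with keys 'pos', 'best_pos', 'best_val'.
    eval_prob: probability that a given particle evaluates this iteration.
    budget: maximum evaluations allowed this call.
    Returns the number of evaluations actually used; particles that are
    skipped simply retain their previous personal best.
    """
    used = 0
    for p in swarm:
        if used >= budget:
            break
        if random.random() < eval_prob:   # probabilistic selection
            val = f(p['pos'])
            used += 1
            if val < p['best_val']:
                p['best_val'] = val
                p['best_pos'] = list(p['pos'])
    return used
```

The evaluations saved when `eval_prob < 1` are the "surplus" the abstract describes: they can be reinvested in a larger swarm or more iterations under the same fixed evaluation budget.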


GEM-PSO: Particle Swarm Optimization Guided by Enhanced Memory

Date: 2019-05-01

Creator: Kevin Fakai Chen

Access: Open access

Particle Swarm Optimization (PSO) is a widely used nature-inspired optimization technique in which a swarm of virtual particles works together, with limited communication, to find a global minimum or optimum. PSO has been successfully applied to a wide variety of practical problems, such as optimization in engineering fields, hybridization with other nature-inspired algorithms, and general optimization problems. However, PSO suffers from a phenomenon known as premature convergence, in which the algorithm's particles all converge on a local optimum instead of the global optimum and cannot improve their solution any further. We seek to improve upon the standard PSO algorithm by correcting this premature convergence behavior. We do so by storing and exploiting increased information in the form of past bests, which we deem enhanced memory. We introduce three types of modifications to each new algorithm (which we call a GEM-PSO: Particle Swarm Optimization Guided by Enhanced Memory, because our modifications all deal with enhancing the memory of each particle): procedures for saving a found best, for removing a best from memory when a new one is to be added, and for selecting one (or more) bests to be used from those saved in memory. By using different combinations of these modifications, we can create many GEM-PSO variants with a wide variety of behaviors and qualities. We analyze the performance of GEM-PSO, discuss the impact of PSO's parameters on the algorithms' performance, isolate individual modifications to closely study their impact on any given GEM-PSO variant, and finally examine how multiple modifications perform in combination. We then draw conclusions about the efficacy and potential of GEM-PSO variants and provide ideas for further exploration in this area of study.
Many GEM-PSO variants consistently outperform standard PSO on specific functions, and GEM-PSO variants show promise for both general and specific use cases.
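The three modification types the abstract names (saving, removing, and selecting bests) can be sketched as pluggable policies on a bounded per-particle archive. The specific policies below (keep every new best, evict the worst, select uniformly at random) are illustrative stand-ins; the thesis explores many combinations, and its actual procedures may differ.

```python
import random

class BestMemory:
    """Bounded archive of past personal bests, with three policy hooks
    corresponding to GEM-PSO's save / remove / select modifications."""

    def __init__(self, capacity=5):
        self.items = []            # list of (value, position) pairs
        self.capacity = capacity

    def save(self, value, position):
        # Save policy (illustrative): archive every newly found best.
        self.items.append((value, list(position)))
        if len(self.items) > self.capacity:
            self.remove()

    def remove(self):
        # Removal policy (illustrative): evict the worst stored best.
        self.items.remove(max(self.items, key=lambda t: t[0]))

    def select(self):
        # Selection policy (illustrative): guide the velocity update with a
        # uniformly random stored best instead of only the single latest one.
        return random.choice(self.items)[1]
```

In a GEM-PSO-style update, `select()` would replace the single `pbest` term in the velocity equation, so the particle is pulled toward a varied history of good positions rather than one point, which is one plausible way to resist premature convergence.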