In a population-based optimization algorithm, a set of particles in the domain of the objective function evolves over time and eventually concentrates at a single point.
In an ideal population-based optimization algorithm, the limiting point would be the minimizer of the objective function.
Most of these algorithms do not require gradient evaluations of the objective function, so they can be applied in situations where gradients are difficult to evaluate, e.g., hyperparameter tuning of artificial neural networks.
Despite the popularity of such algorithms, e.g., Particle Swarm Optimization and Consensus-Based Optimization, their convergence properties are not well understood.
I am studying convergence analyses of these algorithms.
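As a concrete illustration, below is a minimal sketch of Particle Swarm Optimization in Python; the particle count, inertia weight w, and acceleration coefficients c1 and c2 are illustrative choices rather than canonical values. The point is that the algorithm queries only function values, never gradients.

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal Particle Swarm Optimization sketch: gradient-free and population-based."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # each particle's best position so far
    pbest_val = np.apply_along_axis(objective, 1, x)   # corresponding best values
    gbest = pbest[np.argmin(pbest_val)].copy()         # best position seen by the swarm

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia plus attraction toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    return gbest, objective(gbest)

# Example: minimize a shifted quadratic without ever computing its gradient.
f = lambda z: np.sum((z - 1.0) ** 2)
x_star, f_star = pso(f, dim=5)
```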
When information about the objective function is distributed among multiple agents, a distributed optimization algorithm helps the agents cooperate to find the minimizer of the objective function.
To be more specific, these algorithms address the setting where the objective function is the sum of multiple local functions, and information about each local function (such as its value or gradient) can be accessed by only one agent.
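In symbols (the notation n, d, and f_i is introduced here only for illustration), this setting can be written as

```latex
\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{n} f_i(x),
\qquad \text{where only agent } i \text{ can query } f_i \text{ (its value, gradient, etc.).}
```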
This philosophy is used in distributed training, where the training data are distributed among several agents.
Since 2008, many distributed optimization algorithms have been proposed, and I am studying their convergence analyses.
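As one concrete example of this class, below is a minimal sketch of decentralized gradient descent, in which each agent takes a gradient step on its own local function and averages its iterate with its neighbors' iterates through a mixing matrix W; the network, weights, and step size here are illustrative assumptions, not taken from the text above.

```python
import numpy as np

def decentralized_gradient_descent(grads, W, dim, step=0.05, n_iters=500, seed=0):
    """Sketch of decentralized gradient descent (DGD): agent i only evaluates the
    gradient of its own local function f_i and mixes its iterate with neighbors via W."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    X = rng.standard_normal((n, dim))          # one row per agent's current iterate
    for _ in range(n_iters):
        mixed = W @ X                          # consensus (averaging) step over the network
        local = np.stack([g(X[i]) for i, g in enumerate(grads)])
        X = mixed - step * local               # local gradient step using only f_i's gradient
    return X

# Example: f(x) = sum_i ||x - b_i||^2 split across 3 agents on a small network.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
grads = [lambda x, b=b: 2.0 * (x - b) for b in targets]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])             # doubly stochastic mixing matrix
X_final = decentralized_gradient_descent(grads, W, dim=2)
```

With a constant step size, the agents' iterates are only expected to settle in a neighborhood of the common minimizer (here, the mean of the b_i); characterizing exactly how such schemes converge is the kind of question the convergence analyses above address.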