Let f(x) be a global optimization problem, where x is a state in the problem space S. In SCO, each state x is called a knowledge point, and the function f is the goodness function. In SCO, a population of N_c cognitive agents solves the problem in parallel, supported by a social sharing library. Each agent holds a private memory containing one knowledge point, and the social sharing library contains a set of N_L knowledge points. The algorithm runs in T iterative learning cycles. Running as a Markov chain process, the system behavior in the t-th cycle depends only on the system status in the (t-1)-th cycle. The process flow is as follows:
* Initialization: Initialize the private knowledge point x_i in the memory of each agent i, and all knowledge points in the social sharing library X, normally at random within the problem space S.
** Observational learning: Select a model point x_M from the social sharing library X, normally by tournament selection. Compare the private knowledge point x_i and the model point x_M, and return the one with higher quality as the base point x_B and the other as the reference point x_R.
** Combination: Combine x_B and x_R to generate a new knowledge point x_i'. Normally x_i' should lie around x_B, with its distance from x_B related to the distance between x_B and x_R, and a boundary handling mechanism should be incorporated here to ensure that x_i' is within the problem space S.
** Knowledge sharing: Share a knowledge point, normally x_i', with the social sharing library X.
** Individual updating: Update the private knowledge of agent i, normally by replacing x_i with x_i'. Some Monte Carlo acceptance schemes might also be considered.
* Library refreshment: The social sharing library X uses all knowledge points submitted by the agents to update itself. A simple way is one-by-one tournament selection: for each knowledge point submitted by an agent, replace the worse one among two points randomly selected from X.
* Termination: Return the best knowledge point found by the agents.
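The process flow above can be sketched in a few dozen lines of Python. The concrete combination rule (a uniform coefficient on the base-reference difference), the tournament size of two, and all parameter defaults below are illustrative assumptions, not a canonical SCO specification:

```python
import random

def sco_minimize(f, bounds, n_agents=4, lib_size=40, cycles=200, seed=0):
    """Minimal SCO sketch: minimize goodness function f over a box."""
    rng = random.Random(seed)

    def random_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):  # boundary handling: keep the point inside the problem space
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # Initialization: one private point per agent, lib_size points in library X
    agents = [random_point() for _ in range(n_agents)]
    library = [random_point() for _ in range(lib_size)]
    best = min(agents + library, key=f)  # track the best point ever seen

    for _ in range(cycles):
        submitted = []
        for i in range(n_agents):
            # Observational learning: model point by tournament of two from X
            a, b = rng.sample(library, 2)
            model = min(a, b, key=f)
            # The better of private/model is the base, the other the reference
            base, ref = sorted((agents[i], model), key=f)
            # Combination: new point around the base; step size scales with
            # the base-reference distance (coefficient range is an assumption)
            new = clip([bv + rng.uniform(-1.0, 2.0) * (bv - rv)
                        for bv, rv in zip(base, ref)])
            submitted.append(new)   # knowledge sharing
            agents[i] = new         # individual updating
            if f(new) < f(best):
                best = new
        # Library refreshment: each submitted point replaces the worse of
        # two randomly chosen library points (one-by-one tournament)
        for x in submitted:
            j, k = rng.sample(range(lib_size), 2)
            worse = max(j, k, key=lambda idx: f(library[idx]))
            library[worse] = x

    return best
```

Note that the refreshment step replaces the worse of two sampled library points unconditionally, so the library improves only statistically; the sketch therefore tracks the best point seen so far separately.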
SCO has three main parameters: the number of agents N_c, the size of the social sharing library N_L, and the number of learning cycles T. With the initialization process included, the total number of knowledge points to be generated is N_c * (T + 1) + N_L, which does not depend much on N_L when T is large. Compared to traditional swarm algorithms, e.g. particle swarm optimization, SCO can achieve high-quality solutions even when N_c is small, even for N_c = 1. Nevertheless, smaller N_c and N_L might lead to premature convergence. Some variants have been proposed to guarantee global convergence. One can also build a hybrid optimization method by combining SCO with other optimizers. For example, SCO has been hybridized with differential evolution to obtain better results than either individual algorithm on a common set of benchmark problems.
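The budget formula N_c * (T + 1) + N_L can be checked with a few lines of arithmetic (the helper name `sco_budget` is ours, chosen for illustration):

```python
def sco_budget(n_agents, lib_size, cycles):
    """Total knowledge points generated by an SCO run:
    n_agents + lib_size at initialization, plus n_agents per cycle."""
    return n_agents * (cycles + 1) + lib_size

# For large T the budget is dominated by N_c * T, so N_L matters little,
# and a single agent can spend roughly the same budget as ten agents:
print(sco_budget(1, 50, 10_000))   # -> 10051 (one agent, many cycles)
print(sco_budget(10, 50, 1_000))   # -> 10060 (ten agents, fewer cycles)
```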