6 Comments

  1. Very interesting and well explained!

    1. Author

      Thank you for the nice workshop, Antonia! I am really grateful for your quick feedback and help :)

  2. Thank you, Ilyas! I find your explanation very neat and informative. I wonder how quickly the PS method converges and how hard the computations are. What I mean is: what can you say about the computational complexity of PS?

    1. Author

      Thank you, Dima! That is a tough question, because it really depends on the version of the algorithm, but for the basic one we can say that it is O(swarmsize*n + swarmsize*F_cost) per iteration, where n is the dimension of the problem and F_cost is the cost of one evaluation of the objective function. That seems quite fast, but one should take into account that in applications F_cost may play the key role, so we have to reduce the number of function evaluations by any means.
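      To make that concrete, here is a minimal Python sketch of one iteration of the basic (global-best) variant, using the usual inertia weight w and acceleration coefficients c1, c2; the concrete values and array layout are just an illustration, not something fixed by the method:

      ```python
      import numpy as np

      def pso_step(x, v, pbest_x, pbest_f, gbest_x, f, w=0.7, c1=1.5, c2=1.5):
          """One iteration of basic (global-best) PSO.

          x, v, pbest_x : (swarmsize, n) positions, velocities, personal bests
          pbest_f       : (swarmsize,) personal-best objective values
          gbest_x       : (n,) global-best position
          f             : objective function mapping an (n,) point to a float
          """
          swarmsize, n = x.shape
          r1 = np.random.rand(swarmsize, n)
          r2 = np.random.rand(swarmsize, n)

          # O(swarmsize * n): velocity and position updates
          v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
          x = x + v

          # O(swarmsize * F_cost): one objective evaluation per particle
          fx = np.array([f(xi) for xi in x])

          # Update personal and global bests (minimisation)
          improved = fx < pbest_f
          pbest_x = np.where(improved[:, None], x, pbest_x)
          pbest_f = np.where(improved, fx, pbest_f)
          gbest_x = pbest_x[np.argmin(pbest_f)]
          return x, v, pbest_x, pbest_f, gbest_x
      ```

      The second term dominates when the objective is expensive, which is exactly why reducing the number of evaluations matters so much in practice.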

      What may be really interesting for you in Swarm Intelligence is the way particles interact with each other, forming different so-called topologies. For example, a particle may receive information from the whole swarm, or it may compare its best only with its closest neighbours. By changing the topology you significantly affect the behavior of the swarm and its convergence rate, as the sketch below illustrates.
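      Here is a rough sketch of the topology idea (the function names are mine, just for illustration): in the global "star" topology every particle is guided by the single swarm-wide best, while in a ring topology each particle only looks at its two immediate neighbours.

      ```python
      import numpy as np

      def global_guides(pbest_x, pbest_f):
          # Star (global) topology: everyone follows the single swarm-wide best.
          best = pbest_x[np.argmin(pbest_f)]
          return np.tile(best, (len(pbest_f), 1))

      def ring_guides(pbest_x, pbest_f):
          # Ring topology: particle i only sees neighbours i-1 and i+1 (wrapping around).
          s = len(pbest_f)
          guides = np.empty_like(pbest_x)
          for i in range(s):
              neigh = [(i - 1) % s, i, (i + 1) % s]
              guides[i] = pbest_x[neigh[int(np.argmin(pbest_f[neigh]))]]
          return guides
      ```

      Whichever of these guides replaces the global best in the velocity update changes how fast information spreads through the swarm, which is exactly the effect on convergence mentioned above.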

  3. Thanks for the simple explanation; I always find “animal” algorithms interesting to study. Can you explain the contour graph on the right?

    1. Author

      Thank you for your question, Alaa. On this contour plot the yellow crosses represent the particles (or “bees”) and the vectors represent their current velocities. The color bar on the right helps you read off the value of each contour line.
      This is only the beginning of the algorithm, so the particles are still spread quite evenly over the search space, but you can already see a general tendency towards the global minimum in the center.
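      If you would like to reproduce a similar picture, a rough matplotlib sketch looks like this (the bowl-shaped objective and the random particles are placeholders, not the data behind the actual figure):

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      # Placeholder objective: a simple bowl with its minimum at the origin.
      f = lambda X, Y: X**2 + Y**2

      # Contour landscape with a color bar for the level values.
      gx, gy = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
      plt.contourf(gx, gy, f(gx, gy), levels=30)
      plt.colorbar(label="objective value")

      # Randomly placed particles (yellow crosses) and their velocity vectors.
      rng = np.random.default_rng(0)
      x = rng.uniform(-5, 5, size=(20, 2))
      v = rng.uniform(-1, 1, size=(20, 2))
      plt.scatter(x[:, 0], x[:, 1], marker="x", color="yellow")
      plt.quiver(x[:, 0], x[:, 1], v[:, 0], v[:, 1], color="white",
                 angles="xy", scale_units="xy", scale=1)
      plt.show()
      ```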
