Adding more processors does not necessarily speed up a computation. A naive view holds that adding a second processor core doubles the computational power, but things are not quite that rosy: many factors come into play. One classical result used to illustrate this fallacy is Amdahl’s Law. Although incomplete, it serves as an initial estimate of how much speed-up is achievable with a given number of cores.
Simply stated, Amdahl’s Law is:
speedup = 1 / (serial + parallel/cores)

where serial is the fraction of the work that must run sequentially, parallel = 1 − serial is the fraction that can be split across cores, and cores is the number of processor cores.
What this simple expression illustrates is that if the bulk of the computation is serial, little speed-up will be observed even with an infinite number of cores: as cores grows, the speed-up can never exceed 1/serial. If a task is highly parallel, however, a near-linear speed-up can be obtained by using multiple computational cores. Therefore, when someone needs to reduce a computation’s running time, throwing more cores at the problem may not be the right solution.
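To make the limit concrete, here is a minimal sketch of the formula in Python. The function name and the example serial fraction of 0.5 are illustrative choices, not anything prescribed by the Law itself:

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's Law: speedup = 1 / (s + (1 - s) / N)."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# With half the work serial, even 1024 cores barely approach 2x speed-up.
for cores in (1, 2, 4, 8, 1024):
    print(cores, round(amdahl_speedup(0.5, cores), 3))
```

Running this shows the speed-up creeping towards, but never reaching, 1/0.5 = 2, no matter how many cores are added.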
When I said that the Law was incomplete, that’s because it does not capture the various other factors that come into play. As we increase the number of cores, coordination overhead also increases, because the cores need to communicate with each other in order to synchronise their tasks. This synchronisation cost grows rapidly with the number of cores (the number of pairwise communication channels, for instance, grows quadratically), and it includes both process synchronisation and memory synchronisation. Memory, slow as it already is, may only worsen the situation.
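One way to see the effect of this overhead is to add a synchronisation-cost term to the formula. The model below is a hypothetical sketch, not a standard result: the overhead_per_core coefficient is an assumed parameter, and real overheads depend on the hardware and workload.

```python
def speedup_with_overhead(serial, cores, overhead_per_core=0.001):
    """Amdahl's Law extended with an assumed per-core synchronisation cost.

    Unlike the pure formula, the speed-up now peaks at some core count
    and then *declines* as the synchronisation term dominates.
    """
    parallel = 1.0 - serial
    sync_cost = overhead_per_core * cores  # assumed to grow with core count
    return 1.0 / (serial + parallel / cores + sync_cost)

# For a 5% serial fraction, more cores eventually makes things worse.
for cores in (8, 32, 256):
    print(cores, round(speedup_with_overhead(0.05, cores), 2))
```

Under these assumed numbers, 32 cores outperform both 8 and 256, which is exactly the "throwing more cores at the problem may not help" point made above.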