New paper investigates topological limits to the parallel processing capability of network architectures

Humans can effortlessly perform many kinds of task at the same time, such as walking, talking and responding to their surroundings, all of which presumably involve extensive simultaneous computations. On the other hand, they are radically constrained in their ability to perform other kinds of task concurrently, such as planning a grocery list while simultaneously carrying out multidigit mental arithmetic. As recent work has suggested, these fundamental features of human cognitive function may be explained by a tradeoff between two kinds of use of parallel distributed computing in network architectures: the first focuses on incorporating a variety of interacting constraints in the learning and processing of complex representations (“interactive parallelism”); the second kind, in contrast, focuses on the capacity of a network to carry out multiple processes independently (“independent parallelism”).

In “Topological limits to the parallel processing capability of network architectures”, a new paper out in Nature Physics authored by an international scientific team led by Giovanni Petri of the ISI Foundation, the researchers investigate this fundamental tradeoff between interactive parallelism, which supports learning and generalization, and independent parallelism, which supports processing efficiency through concurrent multitasking.

Understanding the topological limits to parallel distributed computing in network architectures may have profound consequences for understanding the human brain’s mix of sequential and parallel capabilities, as well as for the development of artificial intelligence systems that can optimally manage the tradeoff between learning and processing efficiency, the scientists say. In the article they provide a formal analysis of the problem, combining graph theory with the statistical mechanics of frustrated systems. They illustrate the mechanism by which even a modest degree of shared representation imposes strong constraints on the number of tasks that can be performed simultaneously without risk of interference from crosstalk between tasks. The results highlight a fundamental tension in network architectures between the benefits that accrue from shared representations and their cost in terms of processing efficiency.
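The core intuition can be sketched in a few lines of code. In this toy illustration (not the paper’s actual formalism), each task maps an input representation to an output representation, two tasks interfere when they share either one, and the parallel processing capacity is the size of the largest set of mutually non-interfering tasks; the task names and representation labels here are hypothetical:

```python
from itertools import combinations

# Hypothetical toy tasks: each task maps an input representation to an
# output representation. Two tasks "crosstalk" if they share either one.
tasks = {
    "A": ("color", "verbal"),
    "B": ("color", "motor"),
    "C": ("shape", "verbal"),
    "D": ("shape", "spatial"),
    "E": ("sound", "motor"),
}

def interferes(t1, t2):
    """Tasks interfere when they share an input or an output representation."""
    (i1, o1), (i2, o2) = tasks[t1], tasks[t2]
    return i1 == i2 or o1 == o2

def parallel_capacity(tasks):
    """Size of the largest set of pairwise non-interfering tasks
    (a maximum independent set of the task interference graph),
    found by brute force -- feasible only for toy examples."""
    names = list(tasks)
    for size in range(len(names), 0, -1):
        for subset in combinations(names, size):
            if all(not interferes(a, b) for a, b in combinations(subset, 2)):
                return size
    return 0

print(parallel_capacity(tasks))  # 3: e.g. {A, D, E} can run concurrently
```

Even though five tasks are defined, only three can run at once, because the shared "color" and "shape" inputs and the shared "verbal" and "motor" outputs make the remaining pairings interfere; this is the sense in which shared representations cap multitasking capacity.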

“Topological limits to the parallel processing capability of network architectures”, Giovanni Petri, Sebastian Musslick, Biswadip Dey, Kayhan Özcimder, David Turner, Nesreen K. Ahmed, Theodore L. Willke and Jonathan D. Cohen. Nature Physics, 18 February 2021.