by Peter B. Nichol

What do cognitive science and swarm intelligence have in common?

May 02, 2017
Analytics | Artificial Intelligence | IT Strategy

The future of artificial intelligence is self-organizing software. Multi-agent coordination and stigmergy will be useful in our quest to discover dynamic environments with decentralized intelligence.

In every field, there's a pioneer, a prototype, an individual or group that blazed the path forward to uncover previously hidden value. Observing the giants of artificial intelligence allows us to revisit the concepts that were instrumental in the development and maturation of the field. Biological principles are the roots of swarm intelligence, and self-organizing collective behavior is its organizing principle. A better understanding of these foundational principles lets you accelerate the development of your business applications.

The movers and shakers of artificial intelligence

Four pioneers shaped artificial intelligence as we know it today.

Allen Newell was a researcher in computer science and cognitive psychology at the RAND Corporation and Carnegie Mellon University's School of Computer Science. His primary contributions to information processing, in collaboration with Herbert A. Simon, were two early AI programs: the Logic Theory Machine (1956) and the General Problem Solver (1957).

Herb Simon was an economist, sociologist, psychologist and computer scientist with specialties in cognitive psychology and cognitive science, among many other fields. He coined the terms bounded rationality and satisficing. Bounded rationality is the idea that when individuals make decisions, their rationality is limited by the tractability of the decision problem, the cognitive limitations of their minds and the time available to make the decision. Satisficing (as opposed to maximizing or optimizing) is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met. Simon also proposed the concept of the preferential attachment process, in which, typically, some form of wealth or credit is distributed among individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not.
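Satisficing is easy to see in code. The sketch below is a minimal, hypothetical illustration (the function name `satisfice` and the vendor-quote scenario are my own, not Simon's): it scans alternatives in order and stops at the first one that clears an acceptability threshold, rather than searching for the global optimum.

```python
def satisfice(options, score, threshold):
    """Return the first option whose score meets the threshold.

    No attempt is made to find the best option overall; the search
    stops as soon as an acceptable alternative is found.
    """
    for opt in options:
        if score(opt) >= threshold:
            return opt
    return None  # no alternative was acceptable

# Example: accept the first vendor quote that saves at least 30
# against a list price of 100, rather than hunting for the cheapest.
quotes = [90, 75, 60, 40]
print(satisfice(quotes, lambda price: 100 - price, 30))  # → 60
```

Note that the cheapest quote (40) is never considered: the search ends at 60, the first acceptable alternative, which is exactly the difference between satisficing and optimizing.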

John McCarthy was a computer scientist and cognitive scientist who coined the term artificial intelligence. He developed the LISP programming language family, which heavily influenced ALGOL, an early programming language developed in the late 1950s, and he championed the value of timesharing. Timesharing today is more commonly known as multiprogramming or multitasking, where multiple users share computing resources. McCarthy envisioned this interaction in the 1950s, which is nothing short of unbelievable.

Marvin Minsky, a cognitive scientist, was the co-founder of MIT’s artificial intelligence laboratory. In 1963, Minsky invented the head-mounted graphical display that’s widely used today by aviators, gamers, engineers and doctors. He also invented the confocal microscope, an early version of the modern laser scanning microscope.

Together these framers laid the foundation for artificial intelligence as we know it today.

The design for mass collaboration

Do we understand collaboration? Thanks to Kurt Lewin and his research on group dynamics, we understand how groups interact far better than we once did. Still, I ask again: do we understand group interactions? Is there an ideal group size? How much independence should group members have? Is group interaction better or worse when we design in patterns for group activities?

We have defined paradigms of productive and unproductive group interactions. Our challenge comes from the fact that these models don't scale, which is also why the suggested agile team size is seven people, plus or minus two. As group size increases, so does the complexity of the lines of communication: a group of n members produces n(n-1)/2 lines, so a team of six people has 15 lines of communication, a team of seven has 21, and a team of nine has 36. Yet, in spite of this complexity, colonies of ants reaching 306 million workers interact fine, as does a mayfly swarm of 8,000 flies. Both groups organize around common goals.
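The n(n-1)/2 formula is worth making concrete. A minimal sketch (the function name `lines_of_communication` is my own):

```python
def lines_of_communication(n: int) -> int:
    """Pairwise communication channels in a fully connected group of n members."""
    return n * (n - 1) // 2

for size in (6, 7, 9):
    print(size, lines_of_communication(size))  # → 6 15, 7 21, 9 36
```

The quadratic growth is the point: doubling a team from six to twelve doesn't double the channels, it more than quadruples them (15 to 66).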

How is this possible if this line-of-communication principle is absolute? To state it simply, it's not absolute. We can change the lines of communication by adjusting how the group interacts, and the same concept can be applied to swarms of drones and self-organizing software. The limit that logically prevents us from adding agents, the complexity of communication, is defined by our communication systems, and those are systems we as innovators can simply redesign.
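One illustrative redesign, assuming a hub-and-spoke (star) topology in which every agent communicates only with a central coordinator rather than with every peer, shows how a different interaction pattern changes the math (the function names here are hypothetical):

```python
def mesh_links(n: int) -> int:
    """Fully connected group: every member talks to every other member."""
    return n * (n - 1) // 2

def star_links(n: int) -> int:
    """Hub-and-spoke group: every member talks only to a central hub."""
    return n - 1

# At 50 agents, the fully connected design needs 1,225 channels;
# routing through a hub needs only 49.
print(mesh_links(50), star_links(50))  # → 1225 49
```

The star topology trades a bottleneck (and a single point of failure) for linear rather than quadratic growth; stigmergic designs go further still, replacing direct channels with signals left in a shared environment.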

Psychologist Norman Triplett concluded that bicyclists performed better when riding with others. He found a similar result in a study of children: pairs performed better than children working alone.

Lewin, Lippitt and White later studied what happened to the behavior of young boys (10-11 years old) when an adult male joined the group. The group adopted one of three behavior styles, which the authors named autocratic, democratic and laissez-faire. The results were surprising. The autocratic style worked only while the leader was watching the boys. The democratic style continued to work even when the leader wasn't present with the team. The laissez-faire style was found to be least effective. Does democratic mass collaboration result when the leader is absent?

Group dynamics of biology and computer science

Sociometry is the quantitative study and measurement of relationships within a group of people. Does sociometry apply to swarm interactions?

A swarm is simply a group, right? What if we could design intelligent systems to optimize learning? These systems wouldn't only exemplify stigmergic environmental properties; they would also build on properties of traditional group dynamics. If you're in the gym and notice people staring at you, you can bike a little harder, run a little faster or lift a little more. What if we could design artificial intelligence systems intelligent enough to embrace these same feelings? Sure, we're talking less about feelings and more about procedures or rules that we apply in context, but the term "feelings" sounds better to me.

Collective behaviors contribute to solving a variety of complex tasks. These four principles are found in insects that organize collectively, and they should also be found in the artificial intelligence systems we create:

  1. Coordination: organizing using time and space to solve a specific problem.
  2. Cooperation: agents or individuals achieve a task that couldn’t be done individually.
  3. Deliberation: the mechanisms by which a colony or team chooses among multiple opportunities.
  4. Collaboration: different activities performed simultaneously by individuals or groups.
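Stigmergic coordination, the environmental mechanism behind several of these principles, can be sketched in a few lines. The simulation below is a deliberately simplified, hypothetical model (the route names, deposit rule and constants are my own): ants choose between two routes to food in proportion to the pheromone on each, deposit pheromone on the route they took (more per round on the shorter route, since it allows more trips), and the trails evaporate. No ant ever messages another ant; the coordination signal lives entirely in the environment.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

pheromone = {"short": 1.0, "long": 1.0}   # initial trail strengths
ROUTE_COST = {"short": 1, "long": 2}       # longer route => fewer trips per round
EVAPORATION = 0.9                          # fraction of pheromone that survives a round

def choose_route() -> str:
    """Pick a route with probability proportional to its pheromone level."""
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for _round in range(100):
    for _ant in range(20):                 # 20 ants forage each round
        route = choose_route()
        # Deposit inversely to cost: the short route accumulates pheromone faster.
        pheromone[route] += 1.0 / ROUTE_COST[route]
    for r in pheromone:                    # evaporation keeps the trails adaptive
        pheromone[r] *= EVAPORATION

# Positive feedback concentrates the colony on the shorter route.
print(pheromone["short"] > pheromone["long"])
```

The interesting design property is that evaporation is as important as deposit: without decay, an early accident could lock the colony onto the long route forever, which is the same deliberation-versus-exploitation tension the four principles above describe.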

Whether we’re adding blocks to a blockchain or changing the rights individuals have to shared content, the study of interactions might hold the key to unlock the next generation of artificial intelligence. Before exploring the benefits of dynamic systems and chaos theory, we must apply the principles of artificial intelligence, mass collaboration and group dynamics to expand our knowledge of how systems self-organize.