By Dr. Lance Eliot, the AI Trends Insider
There was a dog on the freeway the other day.
I’ve seen a lot of items scattered on the freeways during my daily commute, including lawn chairs, ladders, pumpkins (a truck carrying Halloween pumpkins had gotten into an accident and spilled its load of pumpkin patch pumpkins), and whatever else might drop onto, spill into, or wander along the freeway. A live animal is always an especially big concern on the freeway. Besides the danger to the animal, there is also usually heightened danger to the freeway drivers. The likely erratic behavior of the animal can cause drivers to make mistakes and ram into other cars. Also, invariably some good Samaritans try to get out of their cars and corral the animal on the freeway. This puts those well-intentioned humans into danger too from errant car drivers.
Anyway, in this case, I watched in amazement as my fellow drivers all seemed to work cooperatively with each other. Cars nearest to the dog were careful to give it some distance so that it would not be scared into bolting further along on the freeway. Cars next to those cars were trying to run interference by moving into positions that would force other cars to go widely around the protected pocket. Cars at the outer layers had turned on their emergency flashers and were essentially directing other cars to flow into the outermost lanes. In the end, fortunately, the dog opted to run toward a freeway exit and was last seen happily getting off the freeway and into a nearby neighborhood.
Let’s review for a moment what happened in this case of the saved dog.
Did all of us drivers get onto our cell phones and talk with each other about what to do? Nope. Did an authority figure such as a policeman enter into the fray and direct us to provide a safe zone for the dog? Nope. So, in other words, we somehow miraculously all worked together, despite not directly speaking with each other and without anyone coordinating our activities for us. We spontaneously were able to work as a group, even though we had never met and carried on no direct communication with each other per se.
Miracle? Well, maybe, maybe not. Have you ever watched a flock of birds? They seem to coordinate their movements and do so perhaps without having to directly communicate with each other. Same goes for a school of fish. Same goes for a colony of ants, bees, wasps, and those darned pesky termites. Generally, there are numerous examples in nature of how animals essentially self-organize themselves and exhibit collective aggregated behavior that provides a useful outcome for the group and provides benefits for the members of the group too. This collective behavior is typically characterized by a decentralized governance, meaning that there is not one centralized authority that directs the activities of the group, but instead the control of the group and the individuals is dispersed.
Swarm Intelligence (SI). That’s what this kind of behavior is called, at least that’s what we call it within the field of AI and robotics. If you prefer, you can call it swarm behavior. The swarm behaviorists are prone to studying how animals end up being able to act as a flock, school, colony, or any other such grouping. Those of us studying swarm intelligence are more focused on getting technology to do the same kind of swarm behavior that we see occurring in animals. Some also don’t like to say that swarm intelligence is appropriate for things like, say, termites, since they argue that termites are not “intelligent” and so it is better to simply refer to them as having swarm behavior. We could debate at some length whether termites are “intelligent” or at least have intelligent-like characteristics – I’m going to avoid that acrimonious debate herein and save it for another day.
Swarm intelligence is a pretty hot topic these days. Many have been working for a long time on individual robots and individual drones, trying to get AI to appear in those individualized things. Others want to leverage the individualized thing and have it do wondrous acts by coming together as a swarm. Imagine a swarm of a hundred drones and how they might be able to deliver packages to your door, either each flying your ordered new pair of pants, or maybe working together to carry a refrigerator to you (able to handle the weight of the refrigerator by having many drones each bear some of the load). You can also imagine the military applications for swarming, such as having an army of robots to fight battles.
One of the major questions in swarming is how much intelligence the individual member of the swarm needs to have. If you believe that ants are pretty ignorant, and yet they are able as a group to accomplish amazing feats, you would argue that members of a swarm don’t need to have much intelligence at all. You could even say that if the swarm members have too much intelligence, they might not swarm so well. The self-thinking members might decide that they don’t want to do the swarm. If instead they are rather non-intelligent and are just acting on instinct, they presumably won’t question the swarm and will mindlessly go along with the swarm.
The swarm participants do need to coordinate in certain kinds of ways, regardless of how intelligent or not they each are. In the 1980s, there were studies done of birds in flocks, and a researcher named Craig Reynolds developed a computer-based simulation that involved bird-oid objects, meaning bird-like simulated agents, and this came to be known as boids. Thus, you can refer to each individual member of a swarm as a boid. The birds in a flock are boids in that swarm, while the ants in a colony are the boids in that swarm.
In the boids simulation, there were three crucial rules governing the swarm: separation, alignment, and cohesion.
In the case of separation, each boid needs to keep away from each other boid, just enough as a minimum that they don’t collide with each other. A bird in a flock needs to stay far enough away from the birds next to it that they won’t accidentally run into each other. This distance will depend on how fast they are moving in the swarm and how much the swarm shifts in direction. The separation distance can vary at times during the swarm. The relative distance will also vary by type of boid, such as fish versus birds versus ants. If the distance between the boids gets overly large, it can also impact the swarm, such as the swarm losing its formation and becoming more like a seemingly random and chaotic collection rather than a self-organized one. On the other hand, you can have boids that actually link physically with each other, such that there is no distance between them at all (this is considered an intentional act rather than an accidental collision of the boids).
In the case of alignment, each boid aligns with the other boids in order to proceed in some direction. There has been much study done about why flocks or colonies go in particular directions. It can be driven at times by sunlight, or by earth magnetism, or by veering away from predators, or by veering toward food, and so on. The key here is that they align individually in order to steer toward some direction. They collectively go in that direction. The direction is not usually static, in the sense that the direction will change over time. They might go in one direction for a long time and then suddenly shift to another direction, or they might continually be shifting their direction.
In the case of cohesion, this refers to the individuals having a collective center of mass. You might have some members that are not necessarily going in exactly the same direction as others, but they overall are all exhibiting cohesion in that they still remain together in a flock, colony, or whatever. You’ve likely seen birds that have joined in a flock and can see splintering factions that appear nearly ready to go off on their own, but in the end they continue to go along with the rest of the flock. As such, this swarm would be said to have strong cohesion.
Overall, any given swarm will have either strong or weak separation, strong or weak alignment, and strong or weak cohesion. There are other factors involved in depicting and developing swarms, but these three factors of separation, alignment, and cohesion are especially at the core of swarm principles.
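The three rules above can be sketched in a few lines of code. What follows is a minimal, illustrative 2-D boids update; the class name, the weights, and the separation distance are all my own hypothetical choices, not part of Reynolds’ original work, and a real simulation would tune them and limit neighbor lookups to a local radius.

```python
import math

# Toy 2-D boids sketch. Each boid steers by the three classic rules:
#   separation - push away from neighbors that are too close
#   alignment  - nudge velocity toward the neighbors' average velocity
#   cohesion   - nudge velocity toward the neighbors' center of mass
# All weights are illustrative guesses, not tuned values.

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def step(boids, sep_dist=2.0, sep_w=0.5, align_w=0.05, coh_w=0.01):
    """Advance the whole flock one tick."""
    updates = []
    for b in boids:
        others = [o for o in boids if o is not b]
        # Separation: repel from any neighbor closer than sep_dist.
        sx = sy = 0.0
        for o in others:
            dx, dy = b.x - o.x, b.y - o.y
            d = math.hypot(dx, dy)
            if 0 < d < sep_dist:
                sx += dx / d
                sy += dy / d
        # Alignment: average velocity of the other boids.
        avx = sum(o.vx for o in others) / len(others)
        avy = sum(o.vy for o in others) / len(others)
        # Cohesion: center of mass of the other boids.
        cx = sum(o.x for o in others) / len(others)
        cy = sum(o.y for o in others) / len(others)
        nvx = b.vx + sep_w * sx + align_w * (avx - b.vx) + coh_w * (cx - b.x)
        nvy = b.vy + sep_w * sy + align_w * (avy - b.vy) + coh_w * (cy - b.y)
        updates.append((b.x + nvx, b.y + nvy, nvx, nvy))
    # Apply all updates at once so every boid reacts to the same snapshot.
    for b, (x, y, vx, vy) in zip(boids, updates):
        b.x, b.y, b.vx, b.vy = x, y, vx, vy

flock = [Boid(0, 0, 1, 0), Boid(5, 0, 1, 0.2), Boid(2.5, 4, 1, -0.1)]
for _ in range(50):
    step(flock)
```

Note that no boid talks to any other boid; each reacts only to what it can observe about its neighbors, and the flock-like motion emerges from that.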
I will though add one other important factor to this swarm discussion, namely stigmergy. Stigmergy is the aspect that embodies the self-organizing element of the swarm. It presupposes that one action of the swarm leads to the next action of the swarm. The spontaneous coming together of the boids turns into an emergent systematic activity, and for each act there is a next act that follows. A flock of birds turns left and rises, which then leads to the birds turning to the right and going lower, which leads to the birds flying level and straight ahead. One action stimulates the performance of the next action.
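A classic way to picture stigmergy is ant pheromone trails: each agent leaves a mark in the shared environment, and that mark shapes the next agent’s action. Here is a toy sketch, assuming a made-up one-dimensional track and arbitrary deposit amounts; it is only meant to show action-begets-action through the environment, with no direct messages between agents.

```python
import random

random.seed(42)  # deterministic toy run

# Toy stigmergy sketch: agents coordinate only through "pheromone"
# marks left on a shared 1-D track, never by direct communication.
TRACK_LEN = 20
pheromone = [0.0] * TRACK_LEN
pheromone[15] = 5.0  # a prior trail, e.g. left earlier by a scout

def move(pos):
    """Step to the adjacent cell with more pheromone; break ties randomly."""
    left = pos - 1 if pos > 0 else pos
    right = pos + 1 if pos < TRACK_LEN - 1 else pos
    if pheromone[left] > pheromone[right]:
        nxt = left
    elif pheromone[right] > pheromone[left]:
        nxt = right
    else:
        nxt = random.choice([left, right])
    # The action leaves a mark that influences the next action.
    pheromone[nxt] += 1.0
    return nxt

agents = [0, 5, 10]
for _ in range(30):
    agents = [move(p) for p in agents]
```

Each agent’s move reinforces a trail that later moves will tend to follow, which is stigmergy in miniature: the environment itself is the coordination medium.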
Notice that there are some factors that aren’t being mentioned and so by default are not encompassed by traditional swarms. There is no control of the entire swarm. There is no planning by the swarm. There is no direct communication among the members of the swarm. This is what makes swarms so interesting as a subject matter. We usually spend much of our time assuming that to get intelligent group behavior you must have direct communication between members of the group, they must have some form of centralized control, and they must have some form of planning. This would seem to be the case for our governmental bodies such as a congress or similar, and the same for companies and how they turn individual workers into a collective that involves direct communication, planning, and uses executive centralized control. Not so with swarms.
Remember my story about the dog on the freeway? In that story, I purposely pointed out that none of the drivers directly communicated with each other. We did not call each other on our cell phones. I purposely mentioned that the police had not shown up to direct us toward working together (thus, there was in this case no centralized control). We had not prearranged a plan of how to protect the dog. Instead, it all happened spontaneously.
We essentially acted as a swarm.
The cars all kept a distance from each other to avoid hitting each other (separation). We shaped ourselves to help protect the dog and force other traffic around the dog (alignment). We were all moving together, at a slow speed, and remained tied together in a virtual manner (cohesion). Maybe I should get a T-shirt that says “I was a boid today and saved a dog!”
What do swarms have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are developing AI systems that make use of SI (Swarm Intelligence) for self-driving cars.
You’ve probably read or heard that one of the claimed great advantages of self-driving cars will be that there won’t be any more traffic tie-ups on the highways. Those proponents are saying that self-driving cars will collectively work together to ensure that we don’t have bogged down bumper-to-bumper traffic like we do today. The claim is that human drivers of today are not able to adequately coordinate with each other and therefore the emergent group behavior is that we are stymied in traffic.
You’ve maybe seen that trucking companies are aiming towards having “fleets” of AI self-driving trucks that work in unison, acting as a coordinated convoy. Self-driving truck after self-driving truck will be aligned with each other, and a lead self-driving truck will guide them to where they need to go. It is almost like a train, involving self-driving trucks that are akin to railcars that hook together to form a long train, but rather than physically being connected these self-driving trucks will be virtually connected to each other.
There are going to be a number of issues around these kinds of arrangements.
One issue is the aspect of free will.
If you are in a self-driving car, and it is being somehow coordinated as part of overall traffic on the freeway, will you have any say over what your self-driving car does? Those that are proponents of the self-driving car as a freeway clogging solution would tend to say that you won’t have any free will per se. Your self-driving car will become part of the collective for the time you are on the freeway. It will obey whatever it is commanded to do by the collective. They tell you that this is good for you, since you, an occupant but no longer a driver, won’t need to worry about which lane to be in, nor how fast to go. This will all be done for you, somehow.
One wonders that if this is indeed to be the case, if this is our future, whether it even matters that the self-driving car has much if any AI capabilities. In other words, if the self-driving car is going to be an all-obedient order taker, why does the self-driving car need any AI at all? You could just have a car that basically is driven by some other aspect, like a centralized control mechanism. No need for the self-driving car to do much itself.
Some say that the self-driving car will have and needs to have robust AI, and that it will be communicating with other self-driving cars, using V2V (vehicle-to-vehicle communications) to achieve coordinated group behavior. Therefore, when your self-driving car is on the freeway, it will discuss the freeway conditions with other self-driving cars that are there, and they will agree to what should be done. Your self-driving car might say to another one, hey, let me pass you to the left in the fast lane. And the other self-driving car says, okay, that seems good, go for it.
We don’t though know how these self-driving car discussions are going to be refereed. Suppose that I am in a hurry, and so I want my self-driving car to get to work right away. I instruct my self-driving car to bully other self-driving cars. But, suppose all the other self-driving cars are also in the bullying mode. How will this work? We might end up back in the same freeway snarls that we already have today. There are some that argue that we’ll need to have a points system. When my self-driving car gets onto the freeway, maybe my self-driving car says it is willing to give up 100 points in order to get ahead of the other self-driving cars. Those other self-driving cars then earn points by allowing my self-driving car to do this. They, in turn, at some later point, can use their earned points to get preferential treatment.
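The hypothetical points system above can be sketched as a tiny ledger. Everything here is my own illustrative construction: the class, the function name, and the even split among yielding cars are assumptions, and nothing resembles an actual V2V protocol.

```python
# Toy sketch of a hypothetical priority-points ledger for self-driving cars.
# A car in a hurry offers points to merge ahead; cars that yield bank the
# points and can spend them later for preferential treatment.

class Car:
    def __init__(self, name, points=0):
        self.name = name
        self.points = points

def request_priority(requester, yielders, offer):
    """Requester pays `offer` points, split evenly among the yielding cars.

    Returns True if the request was granted, False otherwise.
    """
    if offer <= 0 or requester.points < offer or not yielders:
        return False  # can't afford the offer, or nobody available to yield
    requester.points -= offer
    share = offer / len(yielders)
    for car in yielders:
        car.points += share  # yielding now buys priority later
    return True

me = Car("mine", points=100)
others = [Car("a"), Car("b")]
granted = request_priority(me, others, offer=100)
```

One open design question such a scheme leaves unanswered is exactly the refereeing problem from the text: who audits the ledger, and what stops every car from simply hoarding or inflating points.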
Now, all of this covers the situation wherein the self-driving cars are communicating with each other. They either directly communicate with each other, via the V2V, or maybe they are under some kind of centralized control. There is the V2I (vehicle to infrastructure), which involves cars communicating with the roadways, and some believe this will allow for centralized control of cars.
Suppose though that we say that the self-driving cars aren’t going to directly communicate with each other. They might have the capability to do so, but let’s say that they don’t need to do so. We then are heading into the realm of the swarm.
We are working on swarm algorithms and software that allow AI self-driving cars to act together and yet do so without having to do any pre-planning, without having any centralized control, and without having to directly communicate with each other. The self-driving cars become the equivalent of boids. They are like birds in a flock, or ants in a colony, or fish in a school.
This makes sense as a means to gain collective value from having self-driving cars. This also does away with the requirement of the self-driving cars having to negotiate with each other, and also allows them “free will” with respect to the driving task.
I’ll toss into the mix a wrinkle that makes this harder than it might seem at first glance. It is easiest to envision a swarm of AI self-driving cars that act in unison based on emergent behaviors when you have exclusively AI self-driving cars. The problem becomes more difficult once you add human drivers into the swarm. I know that some have a utopian view that we are going to have all and only self-driving cars and that we’ll ban the use of human drivers, but I’d say that’s a long, long, long ways in the future (if ever).
For now, it is more realistic to realize that we are going to have self-driving cars that are driving in the same roadways as human drivers.
With our software for the self-driving cars, the self-driving cars will know how to become part of a swarm. The question will be how will human drivers impact the swarm. It is like having a school of fish in which some of the fish aren’t necessarily of a mind to be part of the school. Now, that being said, when you look closely at a school of fish, you will see that other fish will at times enter into the swarm and either join it, disrupt it, or pass through it. We are assuming that human drivers will do likewise when encountering an AI self-driving car swarm.
What would have happened if self-driving cars had encountered a dog on the freeway? Right now, most of the auto makers and tech companies are programming the AI self-driving cars to pretty much come to a halt when they come upon a moving animal. There is no provision for the self-driving cars to act together to deal with the situation. We believe that robust self-driving cars should be able to act together, doing so without necessarily needing direct communication and without needing any centralized control. A swarm of AI self-driving cars that has swarm intelligence would have done the same as we humans did, forming an emergent behavior that sought to save the dog and avoid any car accidents in doing so. That’s really good Swarm Intelligence to augment Artificial Intelligence (and, by the way, I do have a nifty T-shirt that says “I Love AI+SI!”).
This content is originally posted on AI Trends.