Barrier to AI in the Enterprise: Access to High Quality Data

According to a recent Teradata study, 80% of IT and business decision-makers have already implemented some form of artificial intelligence (AI) in their business.
The study also found that companies have a desire to increase AI spending. Forty-two percent of respondents to the Teradata study said they thought there was more room for AI implementation across the business, and 30% said their organizations weren’t investing enough in AI.
Forrester recently released its 2018 Predictions and also found that firms have an interest in investing in AI. Fifty-one percent of its 2017 respondents said their firms were investing in AI, up from 40% in 2016, and 70% of respondents said their firms will have implemented AI within the next 12 months.
While the interest to invest in and grow AI implementation is there, 91% of respondents to the Teradata survey said they expect to see barriers get in the way of investing in and implementing AI.
Forty percent of respondents to the Teradata study said a lack of IT infrastructure was preventing AI implementation, making it their number one barrier to AI. The second most cited challenge, noted by 30% of Teradata respondents, was lack of access to talent and understanding.
“A lot of the survey results were in alignment with what we’ve experienced with our customers and what we’re seeing across all industries — talent continues to be a challenge in an emerging space,” says Atif Kureishy, Global Vice President of Emerging Practices at Think Big Analytics, a Teradata company.
When it comes to barriers to AI, Kureishy thinks that the greatest obstacles to AI are actually found much farther down the list noted by respondents.
“The biggest challenge [organizations] need to overcome is getting access to data. It’s the seventh barrier [on the list], but it’s the one they need to overcome the most,” says Kureishy.
Kureishy believes that because AI has the eye of the C-suite, organizations are going to find the money, infrastructure, and talent. “But you need access to high-quality data that drives the training of these [AI] models,” he says.
Michele Goetz, principal analyst at Forrester and co-author of the Forrester report, “Predictions 2018: The Honeymoon For AI Is Over,” also says that data could be the greatest barrier to AI adoption.
“It all comes down to, how do you make sure you have the right data and you’ve prepared it for your AI algorithm to digest,” she says.
Read the source article at
Source: AI Trends

MIT Looks at How Humans Sorta Drive in Sorta Self-Driving Cars

ALMOST HALF OF Americans will hop in their cars for a Thanksgiving trip this year. But if you were being very precise—if you were a team of Massachusetts Institute of Technology researchers who study human-machine interactions—you wouldn’t say that all those Americans are “driving,” exactly. The new driver assistance systems on the market—like Tesla’s Autopilot, Volvo’s Pilot Assist, and Jaguar Land Rover’s InControl Driver Assistance—mean that some of those travelers are doing an entirely new thing, participating in a novel, fluid dance. The human handles the wheel in some situations, and the machine handles it in others: changing lanes, parking, monitoring blind spots, warning when the car is about to crash. Call it…piloting? Shepherding? Conducting? We might need a new word.
Fully autonomous cars won’t swarm the roads en masse for decades, and in the meantime, we’ll have these semiautonomous systems. And scientists need to figure out how humans interact with them. Well, actually, the first thing to know is that most humans don’t: Preliminary research by the Insurance Institute for Highway Safety noted that, of nearly 1,000 semiautonomous vehicles studied, 49 percent had their systems turned off. The warnings were annoying, owners said.
If you could actually watch those drivers—sit inside the car and eyeball them while they drive—you might get a better understanding of how these systems are helpful and how they’re not. Maybe drivers find one kind of warning sound frustrating, but another (a bloop instead of a bleep?) helpful. Maybe they get more comfortable with the system over time, or stay mystified even as the odometer rolls over. That spying would be really helpful for people who build and design semiautonomous systems; for those who want to regulate them; and for those expected to evaluate the risks of using these systems, like insurers.
That’s why MIT researchers are announcing this week a gigantic effort to collect data on how human drivers work with their driver assistance systems. They outfitted the cars of Boston-area Tesla, Volvo, and Range Rover drivers with cameras and sensors to capture how humans cooperate with the new technology. They want to understand what parts of these systems are actually helping people—keeping them from crashing, for example—and what parts aren’t.
Read the source article at Wired.
Source: AI Trends

What AI Trends Marketers Should Look for at AI World

AI is coming to Boston December 11-13. If you’re only planning to send your engineers, you should probably think again. AI tools are making huge strides in the martech space and revolutionizing how marketers spend their day. Can you imagine spending 80% less time scheduling meetings and building lists? How about seeing a 4x increase in overall lift/LTV? Yeah, you should go to AI World.
If you’re intrigued, read on for a roundup of some AI trends you should be looking out for at AI World.
Virtual Personal Assistants:
Marketers know how much time they waste on manual labor—be it email management, social media posting, or just trying to coordinate meetings. While it may not be the sexiest application of AI out there, these time savers are freeing up marketers to do more marketing and less project management.
Customer Data & Insights Platforms:
This is what we do at Zylotech. Companies are building automated systems to identify, unify, cleanse, and enrich your data from both first- and third-party sources. Beyond the data curation, smart platforms can now use that AI-enabled data to power deep insights and predictive/prescriptive analytics. The best part? You don’t have to learn a new marketing platform. We push lists, segments, and recommendations into whichever delivery platform you already use.
The average marketer generally uses about 15% of available customer data, so unlocking the full data stack and feeding it into an AI application can yield huge insights in a fraction of the time that traditional approaches take.
A major benefit here is that the feedback loop of an integrated data/decisioning platform lends itself very well to AI optimization. Think about a cross-sell engine. It has near-real-time validation of how effective its recommendations are and, due to the self-adjusting nature of AI, it can quickly validate and improve its recommendations for your next campaign.
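As a rough illustration of that self-adjusting loop, a cross-sell engine can be sketched as a simple epsilon-greedy recommender that updates offer scores as campaign feedback arrives. This is a toy sketch under illustrative assumptions, not any vendor's actual system; the class and method names are invented:

```python
import random

class CrossSellEngine:
    """Toy epsilon-greedy recommender: each offer's score is its
    observed conversion rate, updated as campaign feedback arrives."""

    def __init__(self, offers, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {o: {"shown": 0, "converted": 0} for o in offers}

    def recommend(self):
        # Explore occasionally; otherwise exploit the best-performing offer.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, offer, converted):
        # The near-real-time feedback loop: every impression updates the model.
        self.stats[offer]["shown"] += 1
        self.stats[offer]["converted"] += int(converted)

    def _rate(self, offer):
        s = self.stats[offer]
        return s["converted"] / s["shown"] if s["shown"] else 0.0
```

Because every recommendation is validated almost immediately, the engine's scores shift between campaigns without anyone retuning it by hand.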
Companies like Zylotech, ActionIQ, and Agilone are pushing boundaries here and worth looking into if you’re a customer marketer with big data available.
Image Analysis with Qualitative Reporting & Insights:
One major area that AI and ML are revolutionizing for marketers is image recognition, categorization, and reporting. Images are quickly becoming the de facto communication medium for consumers, so marketers must be able to track and report on trends. There are lots of applications here. Some image marketplaces are implementing AI to curate and surface the perfect image for customers. Some marketers are using image recognition to spot logos in customer photos to build brand affinity models.
There are too many use cases to cover them all, but two vendors worth taking a look at are Clarifai, a multipurpose API, and LogoGrab, a logo recognition and analysis API.
Content Marketing & Targeting Tools:
For content marketers, a good editor is indispensable and can make or break a program. But what if an AI system could take over some, or all, of the tasks we rely on human editors for? With the major advances in NLP, writing, editing, and targeting tools are smarter than ever.
From building brand personas of your content to real-time editing and suggestions as we write, AI-infused content marketing tools are very quickly becoming more than a novelty. Speaking from experience, they probably can’t replace a trusted editor quite yet, but they are getting there.
Here are a few interesting tools in the space: Acrolinx is like Grammarly for marketing writing with a scoring and recommendation engine. Lucy is powered by Watson and is a persona building and media planning AI application that looks like it could be very useful for a marketing manager who juggles a lot of tasks.
Advertising Tools:
Ads were the first place marketers and data scientists started to work together as a tight team, and it only makes sense that there are now a ton of new AI tools built to help businesses more intelligently, and quickly, make complex decisions around big ad data.
I remember when I was first starting out, I had an excel sheet with several significance calculators where I could test audience sizes, results, etc. to figure out what my ad data was telling me. Needless to say, it was clumsy and pretty inefficient. Now marketers can lean on machine learning based systems that do all of that, and more, in a fraction of the time.
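For readers curious what those spreadsheet significance calculators actually computed, the standard check is a two-proportion z-test: did variant B's conversion rate differ from variant A's by more than chance would explain? A minimal sketch, with made-up numbers:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: compare conversion rates of two ad variants.
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the rates under the null hypothesis that the variants are equal.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative campaign: 120/2400 conversions vs. 165/2500 conversions.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2500)
```

The ML-based platforms mentioned above are, at bottom, running this kind of test continuously and across many variables at once, rather than one spreadsheet column at a time.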
Here are a few interesting tools in the space. Albert bills itself as an all-in-one solution for marketing delivery and has a few big clients, including Harley-Davidson. Sizmek is an AI-recommendation-focused ad platform, with a focus on transparency into its algorithms and how it makes its decisions. This might be a good tool for marketers who aren’t yet sold on a full black-box solution.
Testing & Optimization Tools:
One last major category for marketers to keep an eye on is testing & optimization (T&O). T&O is a natural progression for ML in marketing, as multivariate testing for big brands can become very complex. With a good data source, a smart platform can test and optimize around any number of factors. Who would have thought that people in Georgia with 2 sessions go crazy for blue text? A smart platform can move quickly and utilize a deep spread of data, and it’s reasonable to imagine that in the next 10 years, most savvy companies will be running nearly autonomous platforms that personalize and shift their sites for each customer. Amazon already does this with its powerful recommendation engines.
There’s a lot of noise in this space, so take these recommendations as no more than a starting point. Sentient Ascend seems to be the most fleshed-out and market-ready player here. One feature that stands out is A/B funnel testing, rather than simply testing one page at a time. Strike Social is another player in the space, but looks to be mostly focused on YouTube ads and optimization.
At AI World
Most of the vendors I mentioned above are not going to be exhibiting (we are though!), but if you happen to be walking the show floor looking to chat, there will be plenty of AI applications that are being built to empower marketers to do better work faster. Consider this a primer on what marketing domains are being pushed with AI/ML tools.
Source: AI Trends

AI Trends Weekly Brief: AI World 2017 a Cross-Section of AI Marketplace

Exhibitors at the AI World Conference & Expo happening Dec. 11-13 in Boston represent a cross-section of the emerging AI marketplace, companies seeking growth and development by riding atop the AI wave. Here is an account of a selection of AI World exhibitors.
Coveo Offers Cognitive Search and Knowledge Discovery
Coveo combines unified search, analytics and machine learning to deliver relevant information and recommendations across every business interaction, including websites, ecommerce, contact centers and intranets. Coveo partners with the world’s largest enterprise technology players and has more than 1,500 activations in mid-to-large sized global organizations across multiple industries.
Coveo recently announced the Early Access of Coveo on Elasticsearch. This index-agnostic version of Coveo’s AI-powered search platform delivers the same out-of-the-box relevance and insight powered by best of breed machine learning and usage analytics, with the added ability of being deployed on top of the open source elasticsearch index, fully managed or self-hosted.
“One of the reasons many companies and integrators are drawn to using open source technology is the ability to build virtually any solution on top of publicly available assets,” said Gauthier Robe, Coveo VP of Products, in a press release. “With Coveo on Elasticsearch, Coveo has done much of the work to make that possible by decoupling our proprietary index from the critical search experience components, such as machine learning, usage analytics, customizable user interface, query engine and connectors. We are very excited to see what the Elasticsearch community is able to build utilizing these two powerful technologies.”
Coveo was named a leader in The Forrester Wave: Cognitive Search and Knowledge Discovery Solutions, Q2 2017. The report evaluates 9 vendors on 23 criteria, grouped by Current Offering, Strategy and Market Presence. Coveo received the top score in the strategy category.
According to the Forrester Wave Report: “Coveo focuses on the key to relevancy — context. Search is successful when the results are relevant to the person querying for them. Coveo’s R&D focuses on using advanced analytics and machine learning to automatically learn the behaviors of individual users and return the results most relevant to them.”
Learn more at Coveo.
DataRobot Offers Enterprise Machine Learning Platform
DataRobot offers an enterprise machine learning platform that empowers users to make better predictions faster. Incorporating a library of hundreds of open source machine learning algorithms, the DataRobot platform automates, trains and evaluates predictive models in parallel, delivering more accurate predictions at scale.
DataRobot recently announced that it has achieved Amazon Web Services (AWS) ML Competency status. The designation recognizes DataRobot for providing business analysts, data scientists and machine learning practitioners with an automated, cutting-edge solution that enables predictive capabilities within customer applications.
Achieving the AWS ML Competency distinguishes DataRobot as an AWS Partner Network (APN) member that streamlines machine learning and data science workflows, and is an indication that the company has demonstrated extensive expertise in AI and ML on AWS. Thousands of DataRobot users run on AWS, having built more than 300 million machine learning models.
“Since day one, we have demonstrated a fierce commitment to making the AI-driven enterprise a reality,” said Jeremy Achin, CEO of DataRobot. “Achieving AWS ML Competency status recognizes our track record of excellence in automated machine learning, as well as our dedication to our users, many of whom rely on AWS to power their data-driven initiatives.”
The DataRobot automated machine learning platform puts the power of ML into the hands of any business user. DataRobot automates the data science workflow, enabling users to build and deploy highly accurate predictive models in a fraction of the time of traditional methods. Developers building applications on AWS can leverage DataRobot’s APIs to power the machine learning in these applications.
Learn more at DataRobot.
Expert System Inc. On the Power of Social Signals
Expert System Inc. is a leading provider of cognitive computing and text analytics software based on the proprietary, patented, multilingual semantic technology of Cogito. Using Expert System’s products, enterprise companies and government agencies can go beyond traditional keyword approaches for making sense of their structured and unstructured data. Its technology has been deployed as solutions for a range of business requirements such as semantic search, open source intelligence, multilingual text analytics, natural language processing, and the development and management of taxonomies and ontologies.
Prior to the Black Friday and Cyber Monday shopping days, Expert System analyzed a sample of 120,000 tweets in English, French, German, Spanish and Italian, posted online from Oct. 20 to Nov. 20, 2017. The analysis showed 75% of the tweets were focused on Black Friday deals, while 25% were focused on Cyber Monday offers. Amazon was the most frequently mentioned retailer.
As a product category, high-tech products dominated the tweets, and Apple was the most-cited brand. In the battle between iPhone and Galaxy, iPhone wins: 69% of English tweets on the subject focused on iPhone, and 31% on Galaxy.
The origin of Expert System’s cognitive technology “Cogito” (Latin ‘I think’) dates back to the nineties, a time when the convergence of linguistics and technology was something only being talked about in research institutions or in academia. After licensing its early technology to Microsoft, Expert System was able to fully extend the vision to developing software that could understand the meaning and context of language. The effort produced one of the first semantic analysis platforms and led to Expert System’s patented Cogito technology.
Cogito’s technology has been deployed within hundreds of organizations in differing industries, from banking to publishing to healthcare to insurance. Thousands of interactions have been analyzed, generating millions of data points to enhance the effectiveness of Cogito’s behavioral models.
Learn more at Expert System.
UiPath Offers Robotic Process Automation for Managing the Robotic Workforce
UiPath is a leading provider of Robotic Process Automation technology enabling global enterprises to design, deploy and manage a full-fledged robotic workforce. This workforce mimics employees in administering rules-based tasks and frees them from the daily routine of rote work. The UiPath RPA computing platform is dedicated to automating business processes. It provides process modelling, change management, deployment management, access control, remote execution and scheduling. It also provides execution monitoring, auditing and analytics in full compliance with enterprise security and governance best practices.
In recent news, UiPath announced partnerships with five companies who will provide UiPath-accredited training for clients and partners globally. EY Romania, Machina Automation, Roboyo, SMFL Capital Japan and Symphony Ventures will have their expert trainers undergo advanced training and testing in the UiPath RPA platform. These experts will teach the RPA Developer Advanced Training course, enabling graduates to implement the UiPath RPA platform within their own or their clients’ organizations. Machina Automation will also offer the RPA Business Analyst training.
From January 2018 onwards, the five partners are organizing onsite training, starting with the RPA Developer Advanced Course and the RPA Business Analyst Course, and expanding the curriculum over the coming months. EY, Roboyo, and Symphony will be conducting training on a global scale, while Machina Automation will be active in North America, and SMFL Capital in Japan.
UiPath also recently announced a strategic partnership with Enate, the provider of Robotic Service Orchestration (RSO). The partnership will look to drive accelerated automation success for the companies’ mutual partners at any stage of their digital journey.
Enate comes with custom-built Activity Libraries for UiPath Studio, allowing for the seamless integration of UiPath robots into the orchestration platform. This is the cornerstone of the world’s first environment that allows digital and human teams to work together seamlessly, and the partnership is already bringing business benefit to clients such as insurance giant Generali.
Learn more at UiPath.
VoiceBase Provides APIs for Speech Recognition
VoiceBase provides APIs for speech recognition, speech analytics and predictive analytics to surface the insights every business needs.  Enterprises utilize VoiceBase’s deep learning neural network technology to automatically transcribe audio and video, score contact center calls, and predict customer behavior. Privately-held, VoiceBase is based in San Francisco.
A member of the Amazon Web Services (AWS) Partner Network, VoiceBase recently announced an integration for Amazon Connect customers. The integration is designed to ingest call recordings from Amazon Connect, transcribe and analyze the content and publish the results on AWS. This integration makes it easy for Amazon Connect users to surface valuable insights from calls and make better decisions using data from their contact center. VoiceBase was one of the initial APN Partners to support Amazon Connect to deliver advanced speech analytics to a growing cloud contact center customer base.
The VoiceBase API features include machine transcription and keyword and phrase spotting, PCI redaction, instant custom vocabulary and predictive insights. Predictive Insights was a product born from years of data science research and the idea of combining artificial intelligence and spoken information to detect complex events and future customer behavior in sales and service calls. With this integration, these services power many sought-after enterprise use cases such as agent quality monitoring, auto call scoring, compliance, and sales optimization.
“We are excited to expand our collaboration with AWS and their customers to offer customized speech analytics and predictive analytics services,” said Walter Bachtiger, Founder and CEO of VoiceBase, in a press release.  “AWS provides the ideal framework for VoiceBase to layer on its speech analytics API and unlock valuable insights for the enterprise.”
VoiceBase’s customers include Amazon Web Services, Twilio, Nasdaq, HireVue and Veritone.
Learn more at VoiceBase.
Pegasystems Targets Intelligent Business Process Management
Pegasystems Inc., a leader in software for customer engagement and operational excellence, offers its adaptive, cloud-architected software – built on its unified Pega® Platform – supporting rapid deployment and the ability to extend and change applications to meet strategic business needs. Over its 30-year history, Pega has delivered award-winning capabilities in CRM and BPM, powered by advanced artificial intelligence and robotic automation, to help the world’s leading brands achieve breakthrough business results.
Pegasystems recently announced the availability of Pega® Deployment Manager, a no-code, model-driven capability that enables businesses to accelerate the deployment of new applications and software updates. Businesses are turning to DevOps (software development and operations) methodologies to catch up to more nimble competitors and transform into real-time application deployment machines.
But most development organizations quickly become overwhelmed with the numerous tools, specialized skills, and cultural shifts needed to be DevOps-proficient. As a result, they remain stuck in the early stages of DevOps maturity or don’t know how or where to start. Meanwhile, more agile enterprises continuously release new software and features to meet the latest customer demands, leaving the competition behind.
Pega Deployment Manager aims to guide teams through all stages of agile deployment – from unit testing and packaging, to staging and testing – into a consolidated visual roadmap driven by proven best practices. Without any coding, DevOps-enabled teams can progress new apps and capabilities to the next stage in the pipeline with a single click, making it simple and easy to bring software into production.
In other recent news, Pegasystems was named a Leader in the Gartner Magic Quadrant for Intelligent Business Process Management Suites. Pega has been recognized as a Leader in this report every year since its inception in 2003.
In the report, Gartner evaluated 19 intelligent business process management suite (iBPMS) vendors on their ability to execute and completeness of vision. Gartner assessed Pega® Platform, which combines case management, BPM, robotic automation, AI and decisioning, mobile, and omni-channel UX on a unified platform.
Learn more at Pegasystems.
CognitiveScale Targeting Financial Services; USAA Invests, Becomes a Customer
CognitiveScale builds industry-specific augmented intelligence solutions for financial services, healthcare, and digital commerce markets that emulate and extend human cognitive functions by pairing people and machines. Built on its CORTEX augmented intelligence platform, the company’s industry-specific solutions help large enterprises drive change by increasing user engagement, improving decision-making, and delivering self-learning and self-assuring business processes.
In recent news, CognitiveScale announced that a USAA affiliate has made a strategic investment in the company and become a customer. USAA will implement the CognitiveScale Financial Services augmented intelligence products for delivering contextual customer engagement and improving advisor productivity. By using CognitiveScale, USAA is positioning itself to provide its more than 12 million members predictive, data-driven banking and insurance services while learning continuously from user interactions and data.
Artificial intelligence (AI) is a major disruptive force in banks, insurance companies and financial services organizations. According to IDC, the global cognitive systems spending market will grow to $47 billion by the end of 2020, with the banking industry accounting for 19 percent of that projected spend.
“USAA has a long history of using emerging technologies to develop innovative ways to serve our members,” said Nathan McKinley, VP and head of corporate development for USAA. “Our work with CognitiveScale allows us to support such innovation through our investment while also leveraging the AI products they have today to find ways to better serve our members.”
Elsewhere, CognitiveScale recently announced the addition of Dr. Joydeep Ghosh as the company’s first Chief Scientific Officer. An internationally recognized authority on machine learning, data-web mining and related artificial intelligence (AI) approaches, Dr. Ghosh joins the team with more than 30 years of experience applying these technologies to complex real-world problems.
As CognitiveScale’s Chief Scientific Officer, Dr. Ghosh will focus on aligning and tightly integrating the company’s Cognitive Cloud software with industry-specific data models and the latest algorithmic sciences efforts; recruiting the best and the brightest minds in AI while supporting those already at CognitiveScale; and educating the market about the power and value of augmented intelligence and enterprise-grade AI.
Learn more at
Zylotech Offers Customer Analytics Platform
Zylotech is an MIT spin-off offering the AI Customer Insights Platform which combines customer data management with a deep-learning driven decisioning engine. Zylotech uncovers probabilistic customer behavior patterns from all data sources to enable real time customer marketing with a high success rate.
Zylotech’s Customer Analytics Platform uses automated machine learning to identify, unify, cleanse, and enrich customer data to power AI-driven, real-time customer insights for marketing teams to execute upon.
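As a rough illustration of the "identify and unify" step described above (a toy sketch, not Zylotech's actual method), merging customer records that share a normalized identifier looks something like this; the field names are invented for the example:

```python
def unify(records):
    """Toy identity resolution: merge customer records that share
    a normalized email address, keeping the first non-empty value
    seen for each field."""
    merged = {}
    for rec in records:
        # Normalize the identifier so "Ann@X.com " and "ann@x.com" match.
        key = rec.get("email", "").strip().lower()
        if not key:
            continue  # no usable identifier; a real system would try others
        base = merged.setdefault(key, {})
        for field, value in rec.items():
            if value and not base.get(field):
                base[field] = value
    return list(merged.values())
```

A production platform would of course match on many signals (name, phone, device IDs) probabilistically rather than on a single exact key, but the shape of the problem is the same.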
Zylotech offers an ebook, “Retailers Guide to Customer Retention & Monetization”, covering strategies retailers are using to tame their data and move from big data to big insights.
Learn more at
— Written and compiled by John P. Desmond
Source: AI Trends

NVIDIA GPU Cloud Now Available to Thousands of AI Researchers Using NVIDIA Desktop GPUs

NVIDIA this week announced that hundreds of thousands of AI researchers using desktop GPUs can now tap into the power of NVIDIA GPU Cloud (NGC) as the company has extended NGC support to NVIDIA TITAN.
NVIDIA also announced expanded NGC capabilities — adding new software and other key updates to the NGC container registry — to provide researchers a broader, more powerful set of tools to advance their AI and high performance computing research and development efforts.
Customers using NVIDIA® Pascal architecture-powered TITAN GPUs can sign up immediately for a no-charge NGC account and gain full access to a comprehensive catalog of GPU-optimized deep learning and HPC software and tools. Other supported computing platforms include NVIDIA DGX-1, DGX Station and NVIDIA Volta-enabled instances on Amazon EC2.
Software available through NGC’s rapidly expanding container registry includes NVIDIA optimized deep learning frameworks such as TensorFlow and PyTorch, third-party managed HPC applications, NVIDIA HPC visualization tools, and NVIDIA’s programmable inference accelerator, NVIDIA TensorRT 3.0.
“We built NVIDIA GPU Cloud to give AI developers easy access to the software they need to do groundbreaking work,” said Jim McHugh, vice president and general manager of enterprise systems at NVIDIA. “With GPU-optimized software now available to hundreds of thousands of researchers using NVIDIA desktop GPUs, NGC will be a catalyst for AI breakthroughs and a go-to resource for developers worldwide.”
An early adopter of NGC is GE Healthcare. The first medical device maker to use NGC, the company is tapping the deep learning software in NGC’s container registry to accelerate bringing the most sophisticated AI to its 500,000 imaging devices globally with the goal of improving patient care.
Read the full press release at
Source: AI Trends

Swarm Intelligence and AI Self-Driving Cars: Stigmergy and Boids

By Dr. Lance Eliot, the AI Trends Insider
There was a dog on the freeway the other day.
I’ve seen a lot of items scattered on the freeways during my daily commute, including lawn chairs, ladders, pumpkins (a truck carrying Halloween pumpkins had gotten into an accident and spilled its load of pumpkin patch pumpkins), and whatever else can seem to drop onto, spill into, or wander along on the freeway. A live animal is always an especially big concern on the freeway. Besides the danger to the animal, there is usually heightened danger to the freeway drivers. The likely erratic behavior of the animal can cause drivers to make mistakes and ram into other cars. Also, invariably some good Samaritans try to get out of their cars and corral the animal on the freeway. This puts those well-intentioned humans into danger too, from errant car drivers.
Anyway, in this case, I watched in amazement as my fellow drivers all seemed to work cooperatively with each other. Cars nearest to the dog were careful to give it some distance so that it would not be scared into bolting further along on the freeway. Cars next to those cars were trying to run interference by moving into positions that would force other cars to go widely around the protected pocket. Cars at the outer layers had turned on their emergency flashers and were essentially directing other cars to flow into the outermost lanes. In the end, fortunately, the dog opted to run toward a freeway exit and was last seen happily getting off the freeway and into a nearby neighborhood.
Let’s review for a moment what happened in this case of the saved dog.
Did all of us drivers get onto our cell phones and talk with each other about what to do? Nope. Did an authority figure such as a policeman enter into the fray and direct us to provide a safe zone for the dog? Nope. So, in other words, we somehow miraculously all worked together, without directly speaking with each other and without anyone coordinating our activities for us. We spontaneously were able to work as a group, even though we had never met each other and carried on no direct communication with each other per se.
Miracle?  Well, maybe, maybe not. Have you ever watched a flock of birds? They seem to coordinate their movements and do so perhaps without having to directly communicate with each other. Same goes for a school of fish. Same goes for a colony of ants, bees, wasps, and those darned pesky termites. Generally, there are numerous examples in nature of how animals essentially self-organize themselves and exhibit collective aggregated behavior that provides a useful outcome for the group and provides benefits for the members of the group too. This collective behavior is typically characterized by a decentralized governance, meaning that there is not one centralized authority that directs the activities of the group, but instead the control of the group and the individuals is dispersed.
Swarm Intelligence (SI). That’s what this kind of behavior is called, at least within the field of AI and robotics. If you prefer, you can call it swarm behavior. The swarm behaviorists are prone to studying how animals end up being able to act as a flock, school, colony, or any other such grouping. Those of us studying swarm intelligence are more focused on getting technology to exhibit the same kind of swarm behavior that we see occurring in animals. Some also don’t like to apply the term swarm intelligence to things like, say, termites, since they argue that termites are not “intelligent” and so it is better to simply refer to them as having swarm behavior. We could debate at some length whether termites are “intelligent” or at least have intelligent-like characteristics – I’m going to avoid that acrimonious debate herein and save it for another day.
Swarm intelligence is a pretty hot topic these days. There have been many working on individual robots and individual drones for a long time, trying to get AI to appear in those individualized things. There are others who want to leverage the individualized thing and have it do wondrous acts by coming together as a swarm. Imagine a swarm of a hundred drones and how they might deliver packages to your door, either with each drone flying your newly ordered pair of pants, or with many working together to carry a refrigerator to you (able to handle the weight of the refrigerator by having many drones each bearing some of it). You can also imagine the military applications for swarming, such as having an army of robots to fight battles.
One of the major questions in swarming is how much intelligence the individual member of the swarm needs to have. If you believe that ants are pretty ignorant, and yet they are able as a group to accomplish amazing feats, you would argue that members of a swarm don’t need to have much intelligence at all. You could even say that if the swarm members have too much intelligence, they might not swarm so well. The self-thinking members might decide that they don’t want to join the swarm. If instead they are rather non-intelligent and are just acting on instinct, they presumably won’t question the swarm and will mindlessly go along with it.
The swarm participants do need to coordinate in certain kinds of ways, regardless of how intelligent or not they each are. In the 1980s, there were studies done of birds in flocks, and a researcher named Craig Reynolds developed a computer-based simulation that involved bird-oid objects, meaning bird-like simulations, and this came to be known as boids. Thus, you can refer to each individual member of a swarm as a boid. The birds in a flock are the boids in that swarm, while the ants in a colony are the boids in that swarm.
In the boids simulation, there were three crucial rules about aspects of a swarm:
–          Separation
–          Alignment
–          Cohesion
In the case of separation, each boid needs to keep away from each other boid, just enough as a minimum that they don’t collide with each other. A bird in a flock needs to stay far enough away from the birds next to it that they won’t accidentally run into each other. This distance will depend on how fast they are moving in the swarm and how much the swarm shifts in direction. The separation distance can vary at times during the swarm. The relative distance will also vary by type of boid, such as fish versus birds versus ants. If the distance between the boids gets overly large, it can also impact the swarm, such as the swarm losing its formation and becoming more like a seemingly random and chaotic collection rather than a self-organized one. On the other hand, you can have boids that actually link physically with each other, such that there is no distance between them at all (this is considered an intentional act rather than an accidental collision of the boids).
In the case of alignment, each boid aligns with the other boids in order to proceed in some direction. There has been much study done about why flocks or colonies go in particular directions. It can be driven at times by sunlight, or by earth magnetism, or by veering away from predators, or by veering toward food, and so on. The key here is that they align individually in order to steer toward some direction. They collectively go in that direction. The direction is not usually static, in that it will change over time. They might go in one direction for a long time and then suddenly shift to another direction, or they might continually be shifting their direction.
In the case of cohesion, this refers to the individuals having a collective center of mass. You might have some members that are not necessarily going in exactly the same direction as others, but they overall are all exhibiting cohesion in that they still remain together in a flock, colony, or whatever. You’ve likely seen birds that have joined in a flock and can see splintering factions that appear to nearly want to go off on their own, but in the end they continue to go along with the rest of the flock. As such, this swarm would be said to have strong cohesion.
Overall, any given swarm will have either strong or weak separation, strong or weak alignment, and strong or weak cohesion. There are other factors involved in depicting and developing swarms, but these three factors of separation, alignment, and cohesion are especially at the core of swarm principles.
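The three rules can be expressed quite compactly in code. Here is a minimal sketch of Reynolds-style boid steering in 2D; the weights, separation radius, and class layout are illustrative assumptions of mine, not values from Reynolds’ original simulation.

```python
import math

class Boid:
    """A minimal 2D boid with a position and a velocity."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def steer(boid, neighbors, sep_radius=1.0, w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """Return a (dvx, dvy) velocity adjustment from the three swarm rules."""
    if not neighbors:
        return (0.0, 0.0)
    sep_x = sep_y = 0.0
    avg_vx = avg_vy = 0.0
    cx = cy = 0.0
    for other in neighbors:
        dx, dy = boid.x - other.x, boid.y - other.y
        dist = math.hypot(dx, dy)
        if 0 < dist < sep_radius:   # separation: push away from too-close neighbors
            sep_x += dx / dist
            sep_y += dy / dist
        avg_vx += other.vx          # alignment: head toward the average velocity
        avg_vy += other.vy
        cx += other.x               # cohesion: drift toward the center of mass
        cy += other.y
    n = len(neighbors)
    ali_x, ali_y = avg_vx / n - boid.vx, avg_vy / n - boid.vy
    coh_x, coh_y = cx / n - boid.x, cy / n - boid.y
    return (w_sep * sep_x + w_ali * ali_x + w_coh * coh_x,
            w_sep * sep_y + w_ali * ali_y + w_coh * coh_y)
```

Notice that each boid uses only local information about its neighbors; there is no central controller and no messaging, which is precisely the point of the swarm model.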
I will though add one other important factor to this swarm discussion, namely stigmergy. Stigmergy is the aspect that embodies the self-organizing element of the swarm. It presupposes that one action of the swarm leads to the next action of the swarm. The spontaneous coming together of the boids turns into an emergent systematic activity, and for each act there is a next act that follows. A flock of birds turns left and rises, which then leads to the birds turning to the right and going lower, which leads to the birds flying level and straight ahead. One action stimulates the performance of the next action.
Notice that there are some factors that aren’t being mentioned and so by default are not encompassed by traditional swarms. There is no control of the entire swarm. There is no planning by the swarm. There is no direct communication among the members of the swarm. This is what makes swarms so interesting as a subject matter. We usually spend much of our time assuming that to get intelligent group behavior you must have direct communication between members of the group, they must have some form of centralized control, and they must have some form of planning. This would seem to be the case for our governmental bodies such as a congress or similar, and the same for companies, which turn individual workers into a collective through direct communication, planning, and centralized executive control. Not so with swarms.
Remember my story about the dog on the freeway? In that story, I purposely pointed out that none of the drivers directly communicated with each other. We did not call each other on our cell phones. I purposely mentioned that the police had not shown up to direct us toward working together (thus, there was in this case no centralized control). We had not prearranged a plan of how to protect the dog. Instead, it all happened spontaneously.
We essentially acted as a swarm.
The cars all kept a distance from each other to avoid hitting each other (separation). We shaped ourselves to help protect the dog and force other traffic around the dog (alignment). We were all moving together, at a slow speed, and remained tied together in a virtual manner (cohesion). Maybe I should get a T-shirt that says “I was a boid today and saved a dog!”
What do swarms have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are developing AI systems that make use of SI (Swarm Intelligence) for self-driving cars.
You’ve probably read or heard that one of the claimed great advantages of self-driving cars will be that there won’t be any more traffic tie-ups on the highways. Those proponents are saying that self-driving cars will collectively work together to ensure that we don’t have bogged down bumper-to-bumper traffic like we do today. The claim is that human drivers of today are not able to adequately coordinate with each other and therefore the emergent group behavior is that we are stymied in traffic.
You’ve maybe seen that trucking companies are aiming towards having “fleets” of AI self-driving trucks that work in unison, acting as a coordinated convoy. Self-driving truck after self-driving truck will be aligned with each other, and a lead self-driving truck will guide them to where they need to go. It is almost like a train, involving self-driving trucks that are akin to railcars that hook together to form a long train, but rather than physically being connected these self-driving trucks will be virtually connected to each other.
There are going to be a number of issues around these kinds of arrangements.
One issue is the aspect of free will.
If you are in a self-driving car, and it is being somehow coordinated as part of overall traffic on the freeway, will you have any say over what your self-driving car does? Those that tout the self-driving car as a solution to freeway clogging would tend to say that you won’t have any free will per se. Your self-driving car will become part of the collective for the time you are on the freeway. It will obey whatever it is commanded to do by the collective. They tell you that this is good for you, since you, an occupant but no longer a driver, won’t need to worry about which lane to be in, nor how fast to go. This will all be done for you, somehow.
One wonders, if this is indeed to be the case, if this is our future, whether it even matters that the self-driving car has much if any AI capabilities. In other words, if the self-driving car is going to be an all-obedient order taker, why does the self-driving car need any AI at all? You could just have a car that basically is driven by some other aspect, like a centralized control mechanism. No need for the self-driving car to do much itself.
Some say that the self-driving car will have and needs to have robust AI, and that it will be communicating with other self-driving cars, using V2V (vehicle-to-vehicle communications) to achieve coordinated group behavior. Therefore, when your self-driving car is on the freeway, it will discuss the freeway conditions with other self-driving cars that are there, and they will agree to what should be done. Your self-driving car might say to another one, hey, let me pass you to the left in the fast lane. And the other self-driving car says, okay, that seems good, go for it.
We don’t know, though, how these self-driving car discussions are going to be refereed. Suppose that I am in a hurry, and so I want my self-driving car to get to work right away. I instruct my self-driving car to bully other self-driving cars. But, suppose all the other self-driving cars are also in the bullying mode. How will this work? We might end up back in the same freeway snarls that we already have today. There are some that argue that we’ll need to have a points system. When my self-driving car gets onto the freeway, maybe my self-driving car says it is willing to give up 100 points in order to get ahead of the other self-driving cars. Those other self-driving cars then earn points by allowing my self-driving car to do this. They, in turn, at some later point, can use their earned points to get preferential treatment.
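To make the points idea concrete, here is a minimal sketch of how such a priority exchange might be resolved. Bear in mind that no such protocol actually exists for self-driving cars today; the bidding rule, the even split of the payment, and the car names are all invented for illustration.

```python
# Hypothetical sketch of the points-based priority idea: each car offers
# points for the right to go first, the highest bidder wins and pays its
# bid, and the yielding cars bank an even share for future use.

def resolve_priority(bids):
    """Given {car_id: points_offered}, pick the winner and transfer points."""
    winner = max(bids, key=bids.get)       # highest bidder gets priority
    payment = bids[winner]
    others = [car for car in bids if car != winner]
    earned = payment / len(others) if others else 0
    ledger = {car: earned for car in others}
    ledger[winner] = -payment              # the winner spends its points
    return winner, ledger

winner, ledger = resolve_priority({"car_a": 100, "car_b": 40, "car_c": 10})
# car_a wins priority; car_b and car_c each earn 50 points
```

Even a toy version like this surfaces the open questions: who issues the points, how ties are broken, and what stops a wealthy rider from permanently buying the fast lane.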
Now, all of this covers the situation wherein the self-driving cars are communicating with each other. They either directly communicate with each other, via the V2V, or maybe they are under some kind of centralized control. There is the V2I (vehicle to infrastructure), which involves cars communicating with the roadways, and some believe this will allow for centralized control of cars.
Suppose though that we say that the self-driving cars aren’t going to directly communicate with each other. They might have the capability to do so, but let’s say that they don’t need to do so. We then are heading into the realm of the swarm.
We are working on swarm algorithms and software that allow AI self-driving cars to act together and yet do so without having to do any pre-planning, without having any centralized control, and without having to communicate directly with each other. The self-driving cars become the equivalent of boids. They are like birds in a flock, ants in a colony, or fish in a school.
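As a sketch of what boid-like driving could look like, consider a car that adjusts its speed using only what its own sensors report about nearby traffic, with no V2V messages and no central controller. This is an illustrative toy of mine, not our actual production software; the thresholds and the alignment/separation rules are invented for illustration.

```python
# A self-driving car choosing its speed from locally sensed neighbors only.
# sensed_neighbors is a list of (gap_in_meters, speed_in_mps) tuples that
# the car's own sensors would produce -- no communication is involved.

def choose_speed(own_speed, sensed_neighbors, min_gap=20.0):
    if not sensed_neighbors:
        return own_speed
    # Alignment: drift toward the average speed of the surrounding traffic.
    avg_speed = sum(s for _, s in sensed_neighbors) / len(sensed_neighbors)
    target = (own_speed + avg_speed) / 2
    # Separation: if any sensed gap is below the safe minimum, slow down
    # in proportion to how badly the gap is violated.
    closest_gap = min(g for g, _ in sensed_neighbors)
    if closest_gap < min_gap:
        target = min(target, avg_speed * (closest_gap / min_gap))
    return target
```

Run across many cars simultaneously, rules of this flavor can yield smoother collective flow without any car ever negotiating with another, which is the swarm premise in a nutshell.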
This makes sense as a means to gain collective value from having self-driving cars. This also does away with the requirement of the self-driving cars having to negotiate with each other, and also allows them “free will” with respect to the driving task.
I’ll toss into the mix a wrinkle that makes this harder than it might seem at first glance. It is easiest to envision a swarm of AI self-driving cars that act in unison based on emergent behaviors when you have exclusively AI self-driving cars. The problem becomes more difficult once you add human drivers into the swarm. I know that some have a utopian view that we are going to have all and only self-driving cars and that we’ll ban the use of human drivers, but I’d say that’s a long, long, long ways in the future (if ever).
For now, it is more realistic to realize that we are going to have self-driving cars that are driving in the same roadways as human drivers.
With our software for the self-driving cars, the self-driving cars will know how to become part of a swarm. The question will be how will human drivers impact the swarm. It is like having a school of fish in which some of the fish aren’t necessarily of a mind to be part of the school. Now, that being said, when you look closely at a school of fish, you will see that other fish will at times enter into the swarm and either join it, disrupt it, or pass through it. We are assuming that human drivers will do likewise when encountering an AI self-driving car swarm.
What would have happened if self-driving cars had encountered a dog on the freeway? Right now, most of the auto makers and tech companies are programming the AI self-driving cars to pretty much come to a halt when they come upon a moving animal. There is no provision for the self-driving cars to act together to deal with the situation. We believe that robust self-driving cars should be able to act together, doing so without necessarily needing direct communication and without needing any centralized control. A swarm of AI self-driving cars that has swarm intelligence would have done the same as we humans did, forming an emergent behavior that sought to save the dog and avoid any car accidents in doing so. That’s really good Swarm Intelligence to augment Artificial Intelligence (and, by the way, I do have a nifty T-shirt that says “I Love AI+SI!”).
This content is originally posted on AI Trends.