Biomimicry and Robomimicry for AI Self-Driving Cars: Machine Learning from Nature

By Dr. Lance Eliot, the AI Trends Insider
The fish in the aquarium tank were going round and round the inner edges of the glass surface that encased them in water. Humans watching the fish were likely wondering whether or not the fish knew they were in water. There is an ongoing philosophical debate about whether or not fish can comprehend that they are immersed in water. Maybe they take it for granted just like we take for granted that we are surrounded by air. Maybe they are actually deep thinkers and know they are in water and that they must stay in water to survive. Or, maybe they have no capacity to think per se and so the question of whether they are in water is not even something that they can entertain.
Regardless of the broader question about whether fish know they are in water, the humans watching the fish were looking for something else. The humans were researchers that wanted to see if the fish would demonstrate certain kinds of behaviors. For you see, the experimenters had created a robotic version of a fish and were waiting eagerly to see if the regular fish would accept the robo-fish as one of their own. If the living fish swam along with the robo-fish, it would tend to imply that the real fish were not frightened or otherwise taken aback by the robo-fish.
Indeed, the fish were swimming right along with the robo-fish, all of them going round and round in the tank. Success for robo-fish!
The researchers also wondered whether the robo-fish could get the fish to change their behavior somewhat, such as convincing the fish to follow the direction of the robo-fish. Up until now, the robo-fish had been swimming in the same direction as the fish, which seems to suggest that the fish had accepted that the robo-fish was safe to be near. Would they also be willing to change their own behavior and end up following the robo-fish? In some mild respects, yes, it turns out that the fish would follow along with the robo-fish. More success for robo-fish!
Now, don’t go too far on this. It’s not as though the robo-fish got the living fish to do the macarena dance in the middle of the water tank. Instead, it was more akin to slightly altering the direction they were already headed, and so a very modest impact on their behavior. But, who knows, maybe one day we’ll be creating robo-fish that are the king of the fishes. All fish hail to the robo-fish! It could become a takeover of all fish by the robots, which presumably (hopefully!) are being controlled by the humans. So, humans control the robo-fish, which in turn control the fish. I know this might seem quite untoward and maniacal. Maybe another version of the future is that robo-fish will live in harmony with regular fish, and they will all help each other. Robo-fish and regular fish will become blood brothers, though I guess without the blood part of it.
One of the research studies about robo-fish that caught my attention involves the study of zebrafish and the development of a modular robotic system that mimics this small fish’s locomotion and body movements. The work is being done at the Robotic Systems Lab in the School of Engineering at the Ecole Polytechnique Federale de Lausanne in Switzerland, and with the Paris Interdisciplinary Energy Research Institute at the University Paris Diderot.  
Let me point out that trying to create a robot that is as small as a zebrafish and that has the same motion pattern and look as a zebrafish is a hard problem. The system is known as the Fish Control Actuator Sensor Unit, or Fish-CASU, and it attempts not only to look like a zebrafish but also to swim at the same linear speed and acceleration as the real fish. There are two main components, the FishBot and the RiBot; the system uses the popular Raspberry Pi processor, along with a computer that communicates with the robo-fish via Bluetooth and infrared.
By first carefully studying the zebrafish, the researchers were able to determine that the fish follow a particular sequence while moving in the tank. In the first step, the zebrafish gain their orientation, bending at the caudal peduncle to start their propulsion. Next, the fish go into a high linear acceleration mode. Third, in the relaxation step, they stop their tail beating and begin to glide in the water, their linear speed gradually decreasing. Generally, the zebrafish then repeat those three steps, over and over. The researchers opted to develop a finite-state machine that would get the robo-fish to do roughly the same, namely the orientation, acceleration, and then relaxation steps.
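To make that concrete, here is a minimal sketch of such a three-state cycle in Python. The state names follow the sequence described above; the transition logic and the per-state timing values are my own illustrative assumptions, not details taken from the published Fish-CASU system.

```python
from enum import Enum, auto

class SwimState(Enum):
    ORIENTATION = auto()   # turn toward a heading, begin the caudal peduncle bend
    ACCELERATION = auto()  # rapid tail beating, high linear acceleration
    RELAXATION = auto()    # tail beating stops, the fish glides and decelerates

class RoboFishFSM:
    """Cycles through the three observed zebrafish locomotion phases."""

    TRANSITIONS = {
        SwimState.ORIENTATION: SwimState.ACCELERATION,
        SwimState.ACCELERATION: SwimState.RELAXATION,
        SwimState.RELAXATION: SwimState.ORIENTATION,
    }

    def __init__(self, durations):
        self.state = SwimState.ORIENTATION
        self.durations = durations  # seconds per state (assumed values)
        self.elapsed = 0.0

    def step(self, dt):
        """Advance the clock by dt; move to the next phase when this one expires."""
        self.elapsed += dt
        if self.elapsed >= self.durations[self.state]:
            self.state = self.TRANSITIONS[self.state]
            self.elapsed = 0.0
        return self.state

# Simulate two full swim cycles at a 50 Hz control loop.
fsm = RoboFishFSM({
    SwimState.ORIENTATION: 0.2,
    SwimState.ACCELERATION: 0.3,
    SwimState.RELAXATION: 0.5,
})
for tick in range(100):
    state = fsm.step(dt=0.02)  # the robot's actuators would be driven per state here
```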
The idea of building machines that mimic the behavior of animals is of course a notion that has been with us for a very long time. Biomimicry is the study and attempt at trying to mimic the behavior of biological creatures. If you look at the work of Leonardo da Vinci, you can see that he was fascinated by birds and hoped to someday develop a machine that would allow man to fly like birds do. Even the Wright Brothers likewise used biomimicry to help get mankind off the ground and flying into the air.
As they say, imitation is the sincerest form of flattery. If animals can do something, perhaps we can create machines to do the same.
One twist to this topic involves the aspect of potentially changing the behavior of the mimicked creature. In other words, it’s one thing for us to be able to fly in airplanes, and another to have us use biomimicry-inspired robots to change the behavior of the birds. Suppose we created a robo-eagle and had it fly along with eagles. Maybe the robo-eagle could warn real eagles when a hunter was trying to shoot at the eagles, or maybe keep the eagles from running into the wall of a building or into the engine of a jet plane. You could say that the biomimicry could be used for purposes of good, augmenting the true creatures and aiding them. As with anything that involves good, there’s the chance too of the bad, such as maybe using the robo-eagle to lure the eagles into a trap of some kind, leading to their destruction or extinction.
Anyway, the overall point is that we can study living creatures and try to create robo-like versions of them and then use those robo-versions to possibly change the behaviors of the creatures themselves. The part in which we try to create robo-like versions is what I call biomimicry. The part about using the robo-like version to then change the behavior of the living creature I call robomimicry. In essence, the living thing begins to mimic the robot thing.
What does this have to do with AI self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are using the techniques of biomimicry and robomimicry to understand and enhance the AI of self-driving cars. This will be important along the path toward achieving true self-driving cars, those at level 5. A level 5 self-driving car is one whose AI can drive the car in whatever manner a human could. To date, we’ve seen mainly level 2 and level 3 self-driving cars, and some automakers and tech firms are just getting to the edges of level 4. We still have a long way to go before we get to a true level 5.
From a biomimicry perspective, you could say that we are already trying to mimic the biological creature at the heart of the car, namely the human driver. I realize this seems a bit odd in that usually you think of biomimicry as trying to mimic perhaps a horse, or a bird, or a fish. A car is already a type of machine, but there is a biological component essential to that machine, which is the human that drives it. Therefore, it makes sense that we would want to mimic the human driver when trying to create a “robot” that can do the same thing (an AI self-driving car).
Allow me a moment to give an example of how biomimicry can be subtly but demonstrably applied.
Recently, the Nissan 2018 Rogue SL AWD was released. The car has a limited version of self-driving capabilities, including the ProPilot smart adaptive cruise control. Like similar systems from other automakers, it allows the system to steer and drive the car in a constrained highway driving situation. The human driver must still remain attentive to the driving task. The driver’s hands are to remain on the steering wheel, and the system then prompts the driver to periodically nudge the steering wheel to prove that they (the human driver) are presumably still paying attention to the road. Similar kinds of adaptive cruise controls are found on the Tesla Autopilot, the Mercedes-Benz Distronic Plus, and the Cadillac Super Cruise.
In the case of the ProPilot, it often appears to move back and forth within the lane. It veers toward the leftmost part of the lane, then corrects itself toward the center, and then tends to veer toward the rightmost part of the lane. Many would not notice the car doing this; it takes a keen eye and an awareness of driving behaviors to readily realize this aspect. In some respects, this is the same as a novice driver; imagine a student learning to drive, over-correcting in one direction and then the other. The ProPilot also tended at times to brake sharply in traffic, seemingly as though it was belated in recognizing that it was time to apply the brakes. The acceleration would do the same, at times jerking forward and rapidly accelerating when a more gradual increase in speed would do.
A human driver that is a novice might do all of those things. They would be over-correcting within a lane and tend to “weave” rather than be able to keep a steady center-lane approach. They would tend to brake suddenly rather than gradually. They would tend to accelerate rapidly rather than gradually. A more seasoned and experienced driver would be able to generally keep to the center of the lane. They would be able to gauge when to apply the brakes and do so without a sense of dramatics to it. They would be able to accelerate in a smooth manner that would not have the occupants in the car feel like they are in a rocket that is zooming into outer space.
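To see the difference in control terms, consider a toy lane-keeping loop. This is purely a hypothetical sketch of my own, not Nissan’s actual control law: a proportional-derivative (PD) controller on lateral offset, where a high-gain, lightly damped tuning weaves back and forth like the novice, while a well-damped tuning settles smoothly to the lane center like the seasoned driver.

```python
def steering_command(offset, prev_offset, dt, kp, kd):
    """PD control: steer against the lateral offset and its rate of change."""
    d_offset = (offset - prev_offset) / dt
    return -(kp * offset + kd * d_offset)

def simulate(kp, kd, steps=200, dt=0.05):
    """Crude 1-D lane model: the steering command acts on lateral acceleration."""
    offset, velocity, prev = 1.0, 0.0, 1.0  # start 1 meter off-center
    trace = []
    for _ in range(steps):
        steer = steering_command(offset, prev, dt, kp, kd)
        prev = offset
        velocity += steer * dt
        offset += velocity * dt
        trace.append(offset)
    return trace

novice = simulate(kp=8.0, kd=0.1)    # high gain, little damping: weaves in the lane
seasoned = simulate(kp=2.0, kd=2.5)  # well damped: converges with little overshoot
```

Plot the two traces and the “novice” tuning overshoots the lane center again and again, the weaving described above, while the “seasoned” tuning glides to the center and stays there.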
This behavior of the ProPilot could be enhanced by using biomimicry of human drivers, particularly seasoned human drivers. The smoother version of driving is what the self-driving car should attempt to achieve. The odds are that the ProPilot was programmed to consider the angles and torque and other driving factors and to mathematically calculate what to do. By also seeing how human drivers drive, the self-driving capability can become more like human drivers. This is one of the advantages of using machine learning as part of the AI development for self-driving cars. Machine learning based on large data sets of human driving is able to “mimic” human driving behavior, even if the system itself does not necessarily have any logical reason for what it does; it often uses neural networks, which mainly find a pattern and then mimic that pattern.
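As a minimal sketch of that learning-based mimicry, the following trains a small neural network to reproduce steering corrections from logged examples. Everything here is an illustrative assumption: the features, the network size, and especially the data, which is synthetic stand-in data shaped to resemble smooth human corrections; a real effort would train on logs from instrumented human driving.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for logged human driving: features are
# [lateral offset from lane center (m), speed (m/s)], target is the
# steering correction (rad) the human applied.
rng = np.random.default_rng(0)
offset = rng.uniform(-0.5, 0.5, size=2000)
speed = rng.uniform(10.0, 30.0, size=2000)
steer = -0.4 * offset / (1.0 + 0.05 * speed) + rng.normal(0.0, 0.005, size=2000)

X = np.column_stack([offset, speed])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, steer)

# The learned policy has no explicit lane-keeping rule; it simply reproduces
# the pattern in the data, which is the point about pattern mimicry above.
print(model.predict([[0.3, 20.0]]))  # a gentle correction back toward center
```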
Improvements in AI self-driving cars will occur as the AI becomes more biomimetic of how humans drive.
There is an additional twist to this. Right now, the biomimicry is based on how humans drive today. But, keep in mind that once AI self-driving cars become more prevalent on the roadways, we are likely to see a change in the driving behavior of humans.
Say, what?
Yes, we will begin to see human drivers changing their behavior due to the behaviors of the AI self-driving cars. In a sense, we’ll see robomimicry.
Let’s first look at what is going to happen as AI self-driving cars become somewhat common place on our roadways.
Here’s the human reaction:

      Awe
      Wide Berth
      Acceptance
      Treat Like Second-Class Citizen
      Begin to Ignore or Disdain

At first, human drivers in their cars will tend to look at the AI self-driving cars in awe. Look, there goes a self-driving car! Let’s follow it to see where it goes. Oh my gosh, did you see it come up to that red light? It made a perfectly good stop at the red light. And so on.
Most of the human drivers will opt to give a wide berth to the self-driving car. It will be the same kind of reaction that seasoned human drivers give to novice drivers. When you see a human driven car that has a sign “Student Driver” you usually give that car a wide berth. You figure that the human driver might do something untoward and will likely be driving in a very timid way. So, you switch lanes to go around it, or you give it extra distance from your car. Human drivers will tend to do the same with the first round of AI self-driving cars.
Gradually, we’ll begin to see acceptance of the self-driving cars. They will be gradually improving in their AI driving capabilities. Rather than giving them a wide berth, we’ll see a lot of human drivers that have lost the sense of awe and instead are irritated or frustrated at the self-driving cars. Why is that darned self-driving car going so slowly? Why is it waiting so long at the stop sign? Human drivers will begin to see the self-driving car as a kind of second-class citizen.
We’ll begin to see human drivers trying to trick or exploit the AI of the self-driving car.
Imagine these kinds of human driving behavior:
I know that the self-driving car waits a long time to make a right on a red light, so I’ll swing around the self-driving car and sneak in front of it, allowing me (as a human) to make the turn without having to wait for the AI self-driving car to do so.
I’ll outrace the self-driving car, since I am willing to zip through a yellow light while the self-driving cars are all being cautious, coming to a halt as soon as they see the yellow light and unwilling to race through an intersection.
Up ahead there is a self-driving car, and I can use it to block traffic for me. By getting in front of it, knowing that it will try to maintain the proper driving distance, I can exploit it to prevent traffic from catching up with me.
These are examples of how human driving behavior will change, due to the introduction of AI self-driving cars. Those examples tend toward workarounds regarding the AI self-driving cars. We might say that those human driving behavior changes are “bad” because they are tending toward worse driving behavior by the humans.
Oddly enough, there is a chance that the changes in human driving behavior will be for the good. The robomimicry of human drivers mimicking the self-driving cars could actually get human drivers to be better drivers. If the AI self-driving cars are all tending toward the proper driving distances on the highways, it might get the human drivers to do likewise. If the AI self-driving cars exhibit minimal lane changes and it leads to faster traffic flow, perhaps human drivers will do the same. Whether the human drivers will do this because they mentally see the connection between how the AI self-driving cars are driving and their own driving behavior is an open question. It could be that the human drivers will just witness what is going on and tend to follow along, rather than overtly opting to drive differently.
Not all human drivers will be driving the same way. Some human drivers will more quickly adapt to the AI self-driving cars, while others will take longer to do so. Some human drivers will try to exploit the AI self-driving cars, while others won’t. It will be a mix. Overall though, we need to realize that the introduction of AI self-driving cars onto the roadways will have an impact on human drivers. Currently, most researchers and auto makers are assuming people will drive as they do. It is assumed that the driving behavior of humans is static. The reality is that human driving behavior is dynamic. Humans will change as they see other facets of the roadways and how AI self-driving cars are driving. Biomimicry leads to robomimicry, which will lead to more biomimicry, and so on.
What will human drivers think of AI self-driving cars that eventually can drive as well and perhaps even better than humans?  It reminds me of this famous quote by Immanuel Kant: “Even a man’s exact imitation of the song of the nightingale displeases us when we discover that it is a mimicry, and not the nightingale.”
This content is originally posted on AI Trends.
Source: AI Trends

Barrier to AI in the Enterprise: Access to High Quality Data

According to a recent Teradata study, 80% of IT and business decision-makers have already implemented some form of artificial intelligence (AI) in their business.
The study also found that companies have a desire to increase AI spending. Forty-two percent of respondents to the Teradata study said they thought there was more room for AI implementation across the business, and 30% said their organizations weren’t investing enough in AI.
Forrester recently released their 2018 Predictions and also found that firms have an interest in investing in AI. Fifty-one percent of their 2017 respondents said their firms were investing in AI, up from 40% in 2016, and 70% of respondents said their firms will have implemented AI within the next 12 months.
While the interest to invest in and grow AI implementation is there, 91% of respondents to the Teradata survey said they expect to see barriers get in the way of investing in and implementing AI.
Forty percent of respondents to the Teradata study said a lack of IT infrastructure was preventing AI implementation, making it their number one barrier to AI. The second most cited challenge, noted by 30% of Teradata respondents, was lack of access to talent and understanding.
“A lot of the survey results were in alignment with what we’ve experienced with our customers and what we’re seeing across all industries — talent continues to be a challenge in an emerging space,” says Atif Kureishy, Global Vice President of Emerging Practices at Think Big Analytics, a Teradata company.
When it comes to barriers to AI, Kureishy thinks that the greatest obstacles to AI are actually found much farther down the list noted by respondents.
“The biggest challenge [organizations] need to overcome is getting access to data. It’s the seventh barrier [on the list], but it’s the one they need to overcome the most,” says Kureishy.
Kureishy believes that because AI has the eye of the C-suite, organizations are going to find the money and infrastructure and talent. “But you need access to high-quality data, that drives training of these [AI] models,” he says.
Michele Goetz, principal analyst at Forrester and co-author of the Forrester report, “Predictions 2018: The Honeymoon For AI Is Over,” also says that data could be the greatest barrier to AI adoption.
“It all comes down to, how do you make sure you have the right data and you’ve prepared it for your AI algorithm to digest,” she says.
Read the source article at InformationWeek.com.
 
Source: AI Trends

MIT Looks at How Humans Sorta Drive in Sorta Self-Driving Cars

Almost half of Americans will hop in their cars for a Thanksgiving trip this year. But if you were being very precise, if you were a team of Massachusetts Institute of Technology researchers who study human-machine interactions, you wouldn’t say that all those Americans are “driving,” exactly. The new driver assistance systems on the market, like Tesla’s Autopilot, Volvo’s Pilot Assist, and Jaguar Land Rover’s InControl Driver Assistance, mean that some of those travelers are doing an entirely new thing, participating in a novel, fluid dance. The human handles the wheel in some situations, and the machine handles it in others: changing lanes, parking, monitoring blind spots, warning when the car is about to crash. Call it…piloting? Shepherding? Conducting? We might need a new word.
Fully autonomous cars won’t swarm the roads en masse for decades, and in the meantime, we’ll have these semiautonomous systems. And scientists need to figure out how humans interact with them. Well, actually, the first thing to know is that most humans don’t: preliminary research by the Insurance Institute for Highway Safety noted that, of nearly 1,000 semiautonomous vehicles studied, 49 percent had their systems turned off. The warnings were annoying, owners said.
If you could actually watch those drivers, sit inside the car and eyeball them while they drive, you might get a better understanding of how these systems are helpful and how they’re not. Maybe drivers find one kind of warning sound frustrating, but another (a bloop instead of a bleep?) helpful. Maybe they get more comfortable with the system over time, or stay mystified even as the odometer rolls over. That spying would be really helpful for people who build and design semi-autonomous systems; for those who want to regulate them; and for those expected to evaluate the risks of using these systems, like insurers.
That’s why MIT researchers are announcing this week a gigantic effort to collect data on how human drivers work with their driver assistance systems. They outfitted the cars of Boston-area Tesla, Volvo, and Range Rover drivers with cameras and sensors to capture how humans cooperate with the new technology. They want to understand what parts of these systems are actually helping people, keeping them from crashing, for example, and what parts aren’t.
Read the source article at Wired.
Source: AI Trends

What AI Trends Marketers Should Look for at AI World

AI is coming to Boston December 11-13. If you’re only planning to send your engineers, you should probably think again. AI tools are making huge strides in the martech space and revolutionizing how a marketer spends their day. Can you imagine being able to spend 80% less time scheduling meetings and building lists? How about seeing a 4x increase in overall lift/LTV? Yeah, you should go to AI World.
If you’re intrigued, read on for a roundup of some AI trends you should be looking out for at AI World.
Virtual Personal Assistants:
Marketers know how much time they waste on manual labor—be it email management, social media posting, or just trying to coordinate meetings. While it may not be the sexiest application of AI out there, these time savers are freeing up marketers to do more marketing and less project management.
Think email management, social media posting, meeting scheduling.
Customer Data & Insights Platforms:
This is what we do at Zylotech. Companies are building automated systems to identify, unify, cleanse, and enrich your data from both 1st- and 3rd-party sources. Beyond the data curation, smart platforms can now use that AI-enabled data to power deep insights and predictive/prescriptive analytics. The best part? You don’t have to learn a new marketing platform. We push lists, segments, and recommendations into whichever delivery platform you already use.
The average marketer generally uses about 15% of available customer data, so unlocking the full data stack and feeding it into an AI application can yield huge insights in a fraction of the time that traditional approaches take.
A major benefit here is that the feedback loop of an integrated data/decisioning platform lends itself very well to AI optimization. Think about a cross-sell engine. It has near-real-time validation of how effective its recommendations are and, due to the self-adjusting nature of AI, it can quickly validate and improve its recommendations for your next campaign.
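As a toy illustration of that feedback loop (my own sketch, not Zylotech’s implementation), an epsilon-greedy bandit captures the basic mechanic: every campaign outcome is fed straight back in, shifting traffic toward the better-converting recommendation.

```python
import random

class EpsilonGreedyOptimizer:
    """Toy self-adjusting engine: shifts traffic toward the better-performing variant."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}
        self.wins = {v: 0 for v in variants}

    def choose(self):
        if random.random() < self.epsilon:  # keep exploring occasionally
            return random.choice(list(self.shows))
        # Exploit the variant with the best observed conversion rate so far.
        return max(self.shows,
                   key=lambda v: self.wins[v] / self.shows[v] if self.shows[v] else 0.0)

    def record(self, variant, converted):
        self.shows[variant] += 1
        self.wins[variant] += int(converted)

# Simulated campaign: offer_b truly converts better, so traffic drifts toward it.
opt = EpsilonGreedyOptimizer(["offer_a", "offer_b"])
for _ in range(1000):
    v = opt.choose()
    opt.record(v, converted=random.random() < (0.05 if v == "offer_a" else 0.08))
print(opt.shows)  # most impressions end up on offer_b
```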
Companies like Zylotech, ActionIQ, and AgilOne are pushing boundaries here and are worth looking into if you’re a customer marketer with big data available.
Image Analysis with Qualitative Reporting & Insights:
One major area that AI and ML are revolutionizing for marketers is image recognition, categorization, and reporting. Images are quickly becoming the de facto communication medium for consumers, so marketers must be able to track and report on trends. There are lots of applications here. Some image marketplaces are implementing AI to curate and surface the perfect image for customers. Some marketers are using image recognition to spot logos in customer photos to build brand affinity models.
There are too many use cases to cover them all, but two vendors worth a look are Clarifai, a multipurpose API, and LogoGrab, a logo recognition and analysis API.
Content Marketing & Targeting Tools:
For content marketers, a good editor is indispensable and can make or break a program. But what if an AI system could take over some, or all, of the tasks we rely on human editors for? With the major advances in NLP, writing, editing, and targeting tools are smarter than ever.
From building brand personas of your content, to real-time editing and suggestions as we write, AI-infused content marketing tools are very quickly becoming more than a novelty. Speaking from experience, they probably can’t replace a trusted editor quite yet, but they are getting there.
Here are a few interesting tools in the space: Acrolinx is like Grammarly for marketing writing with a scoring and recommendation engine. Lucy is powered by Watson and is a persona building and media planning AI application that looks like it could be very useful for a marketing manager who juggles a lot of tasks.
Advertising Tools:
Ads were the first place marketers and data scientists started to work together as a tight team, and it only makes sense that there are now a ton of new AI tools built to help businesses more intelligently, and quickly, make complex decisions around big ad data.
I remember when I was first starting out, I had an Excel sheet with several significance calculators where I could test audience sizes, results, etc. to figure out what my ad data was telling me. Needless to say, it was clumsy and pretty inefficient. Now marketers can lean on machine-learning-based systems that do all of that, and more, in a fraction of the time.
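For anyone curious what those spreadsheet significance calculators were actually computing, here is the same idea as a short Python function: a two-proportion z-test comparing conversion rates between two ad variants. The sample counts are made-up numbers for illustration.

```python
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variant B convert at a different rate than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: 120/4000 conversions on variant A vs. 152/4000 on variant B.
z, p = two_proportion_ztest(120, 4000, 152, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference
```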
Here are a few interesting tools in the space. Albert bills itself as an all-in-one solution for marketing delivery and has a few big clients, including Harley-Davidson. Sizmek is an AI recommendation-focused ad platform that emphasizes transparency into its algorithms and how it makes its decisions. This might be a good tool for marketers who aren’t yet sold on a full black-box solution.
Testing & Optimization Tools:
One last major category for marketers to keep an eye on is testing & optimization (T&O). T&O is a natural progression for ML in marketing, as multivariate testing for big brands can become very complex. With a good data source, a smart platform can test and optimize around any number of factors. Who would have thought that people in Georgia with 2 sessions go crazy for blue text? A smart platform can move quickly and utilize a deep spread of data, and it’s reasonable to imagine that in the next 10 years, most savvy companies will be running nearly autonomous platforms that personalize and shift their site for each customer. Amazon already does this with its powerful recommendation engines.
There’s a lot of noise in this space, so take these recommendations as no more than a starting point. Sentient Ascend seems to be the most fleshed-out and market-ready player here. One feature that stands out is A/B funnel testing, rather than simply testing one page at a time. Strike Social is another player in the space, but looks to be mostly focused on YouTube ads & optimization.
At AI World
Most of the vendors I mentioned above are not going to be exhibiting (we are though!), but if you happen to be walking the show floor looking to chat, there will be plenty of AI applications that are being built to empower marketers to do better work faster. Consider this a primer on what marketing domains are being pushed with AI/ML tools.
Source: AI Trends

AI Trends Weekly Brief: AI World 2017 a Cross-Section of AI Marketplace

Exhibitors at the AI World Conference & Expo happening Dec. 11-13 in Boston represent a cross-section of the emerging AI marketplace, companies seeking growth and development by riding atop the AI wave. Here is an account of a selection of AI World exhibitors.
Coveo Offers Cognitive Search and Knowledge Discovery
Coveo combines unified search, analytics and machine learning to deliver relevant information and recommendations across every business interaction, including websites, ecommerce, contact centers and intranets. Coveo partners with the world’s largest enterprise technology players and has more than 1,500 activations in mid-to-large sized global organizations across multiple industries.
Coveo recently announced the Early Access of Coveo on Elasticsearch. This index-agnostic version of Coveo’s AI-powered search platform delivers the same out-of-the-box relevance and insight powered by best-of-breed machine learning and usage analytics, with the added ability of being deployed on top of the open-source Elasticsearch index, fully managed or self-hosted.
“One of the reasons many companies and integrators are drawn to using open source technology is the ability to build virtually any solution on top of publicly available assets,” said Gauthier Robe, Coveo VP of Products, in a press release. “With Coveo on Elasticsearch, Coveo has done much of the work to make that possible by decoupling our proprietary index from the critical search experience components, such as machine learning, usage analytics, customizable user interface, query engine and connectors. We are very excited to see what the Elasticsearch community is able to build utilizing these two powerful technologies.”
Coveo was named a leader in The Forrester Wave: Cognitive Search and Knowledge Discovery Solutions, Q2 2017. The report evaluates 9 vendors on 23 criteria, grouped by Current Offering, Strategy and Market Presence. Coveo received the top score in the strategy category.
According to the Forrester Wave Report: “Coveo focuses on the key to relevancy — context. Search is successful when the results are relevant to the person querying for them. Coveo’s R&D focuses on using advanced analytics and machine learning to automatically learn the behaviors of individual users and return the results most relevant to them.”
Learn more at Coveo.
DataRobot Offers Enterprise Machine Learning Platform
DataRobot offers an enterprise machine learning platform that empowers users to make better predictions faster. Incorporating a library of hundreds of open source machine learning algorithms, the DataRobot platform automates, trains and evaluates predictive models in parallel, delivering more accurate predictions at scale.
DataRobot recently announced that it has achieved Amazon Web Services (AWS) ML Competency status. The designation recognizes DataRobot for providing business analysts, data scientists and machine learning practitioners with an automated, cutting-edge solution that enables predictive capabilities within customer applications.
Achieving the AWS ML Competency distinguishes DataRobot as an AWS Partner Network (APN) member that streamlines machine learning and data science workflows, and is an indication that the company has demonstrated extensive expertise in AI and ML on AWS. Thousands of DataRobot users run on AWS, having built more than 300 million machine learning models.
“Since day one, we have demonstrated a fierce commitment to making the AI-driven enterprise a reality,” said Jeremy Achin, CEO of DataRobot. “Achieving AWS ML Competency status recognizes our track record of excellence in automated machine learning, as well as our dedication to our users, many of whom rely on AWS to power their data-driven initiatives.”
The DataRobot automated machine learning platform puts the power of ML into the hands of any business user. DataRobot automates the data science workflow, enabling users to build and deploy highly accurate predictive models in a fraction of the time of traditional methods. Developers building applications on AWS can leverage DataRobot’s APIs to power the machine learning in these applications.
Learn more at DataRobot.
Expert System Inc. On the Power of Social Signals
Expert System Inc. is a leading provider of cognitive computing and text analytics software based on the proprietary, patented, multilingual semantic technology of Cogito. Using Expert System’s products, enterprise companies and government agencies can go beyond traditional keyword approaches for making sense of their structured and unstructured data. Our technology has been deployed as solutions for a range of business requirements such as semantic search, open source intelligence, multilingual text analytics, natural language processing and the development and management of taxonomies and ontologies.
Prior to the Black Friday and Cyber Monday shopping days, Expert System analyzed a sample of 120,000 tweets in English, French, German, Spanish and Italian, posted online from Oct. 20 to Nov. 20, 2017. The analysis showed 75% of the tweets were focused on Black Friday deals, while 25% were focused on Cyber Monday offers. Amazon was the most frequently mentioned retailer.
As a product category, high-tech products dominated the tweets, and Apple was the most-cited brand. In the battle between the iPhone and the Galaxy, the iPhone wins: 69% of English tweets on the smartphone subject focused on the iPhone, and 31% on the Galaxy.
The origin of Expert System’s cognitive technology “Cogito” (Latin ‘I think’) dates back to the nineties, a time when the convergence of linguistics and technology was something only being talked about in research institutions or in academia. After licensing its early technology to Microsoft, Expert System was able to fully extend the vision to developing software that could understand the meaning and context of language. The effort produced one of the first semantic analysis platforms and led to Expert System’s patented Cogito technology.
Cogito’s technology has been deployed within hundreds of organizations in differing industries, from banking to publishing to healthcare to insurance. Thousands of interactions have been analyzed, generating millions of data points to enhance the effectiveness of Cogito’s behavioral models.
Learn more at Expert System.
UiPath Offers Robotic Process Automation for Managing the Robotic Workforce
UiPath is a leading provider of Robotic Process Automation technology enabling global enterprises to design, deploy and manage a full-fledged robotic workforce. This workforce mimics employees in administering rules-based tasks and frees them from the daily routine of rote work. The UiPath RPA computing platform is dedicated to automating business processes. It provides process modelling, change management, deployment management, access control, remote execution and scheduling. It also provides execution monitoring, auditing and analytics in full compliance with enterprise security and governance best practices.
In recent news, UiPath announced partnerships with five companies that will provide UiPath-accredited training for clients and partners globally. EY Romania, Machina Automation, Roboyo, SMFL Capital Japan and Symphony Ventures will have their expert trainers undergo advanced training and testing in the UiPath RPA platform. These experts will teach the RPA Developer Advanced Training course, enabling graduates to implement the UiPath RPA platform within their own or their clients’ organizations. Machina Automation will also offer the RPA Business Analyst training.
From January 2018 onwards, the five partners are organizing onsite training, starting with the RPA Developer Advanced Course and the RPA Business Analyst Course, and expanding the curriculum over the coming months. EY, Roboyo, and Symphony will be conducting training on a global scale, while Machina Automation will be active in North America, and SMFL Capital in Japan.
UiPath also recently announced a strategic partnership with Enate, the provider of Robotic Service Orchestration (RSO). The partnership will look to drive accelerated automation success for the companies’ mutual partners at any stage of their digital journey.
Enate comes with custom-built Activity Libraries for UiPath Studio, allowing for the seamless integration of UiPath robots into the orchestration platform. This is the cornerstone of the world’s first environment that allows digital and human teams to work together seamlessly, and the partnership is already bringing business benefit to clients such as insurance giant Generali.
Learn more at UiPath.
VoiceBase Provides APIs for Speech Recognition
VoiceBase provides APIs for speech recognition, speech analytics and predictive analytics to surface the insights every business needs.  Enterprises utilize VoiceBase’s deep learning neural network technology to automatically transcribe audio and video, score contact center calls, and predict customer behavior. Privately-held, VoiceBase is based in San Francisco.
A member of  the Amazon Web Services (AWS) Partner Network,  VoiceBase recently announced an integration for Amazon Connect customers. The integration is designed to ingest call recordings from Amazon Connect, transcribe and analyze the content and publish the results on AWS. This integration makes it easy for Amazon Connect users to surface valuable insights from calls and make better decisions using data from their contact center. VoiceBase was one of the initial APN Partners to support Amazon Connect to deliver advanced speech analytics to a growing cloud contact center customer base.
The VoiceBase API features include machine transcription and keyword and phrase spotting, PCI redaction, instant custom vocabulary and predictive insights. Predictive Insights was a product born from years of data science research and the idea of combining artificial intelligence and spoken information to detect complex events and future customer behavior in sales and service calls. With this integration, these services power many sought-after enterprise use cases such as agent quality monitoring, auto call scoring, compliance, and sales optimization.
“We are excited to expand our collaboration with AWS and their customers to offer customized speech analytics and predictive analytics services,” said Walter Bachtiger, Founder and CEO of VoiceBase, in a press release.  “AWS provides the ideal framework for VoiceBase to layer on its speech analytics API and unlock valuable insights for the enterprise.”
VoiceBase’s customers include Amazon Web Services, Twilio, Nasdaq, HireVue and Veritone.
Learn more at VoiceBase.
Pegasystems Targets Intelligent Business Process Management
Pegasystems Inc., a leader in software for customer engagement and operational excellence, offers its adaptive, cloud-architected software, built on its unified Pega® Platform, supporting rapid deployment and the ability to extend and change applications to meet strategic business needs. Over its 30-year history, Pega has delivered award-winning capabilities in CRM and BPM, powered by advanced artificial intelligence and robotic automation, to help the world’s leading brands achieve breakthrough business results.
Pegasystems recently announced the availability of Pega® Deployment Manager, a no-code, model-driven capability that enables businesses to accelerate the deployment of new applications and software updates. Businesses are turning to DevOps (software development and operations) methodologies to catch up to more nimble competitors and transform into real-time application deployment machines.
But most development organizations quickly become overwhelmed with the numerous tools, specialized skills, and cultural shifts needed to be DevOps-proficient. As a result, they remain stuck in the early stages of DevOps maturity or don’t know how or where to start. Meanwhile, more agile enterprises continuously release new software and features to meet the latest customer demands, leaving the competition behind.
Pega Deployment Manager aims to guide teams through all stages of agile deployment, from unit testing and packaging to staging and testing, in a consolidated visual roadmap driven by proven best practices. Without any coding, DevOps-enabled teams can progress new apps and capabilities to the next stage in the pipeline with a single click, making it simple and easy to bring software into production.
In other recent news, Pegasystems was named a Leader in the Gartner Magic Quadrant for Intelligent Business Process Management Suites. Pega has been recognized as a Leader in this report every year since its inception in 2003.
In the report, Gartner evaluated 19 intelligent business process management suite (iBPMS) vendors on their ability to execute and completeness of vision. Gartner assessed Pega® Platform, which combines case management, BPM, robotic automation, AI and decisioning, mobile, and omni-channel UX on a unified platform.
Learn more at Pegasystems.
CognitiveScale Targeting Financial Services; USAA Invests, Becomes a Customer
CognitiveScale builds industry-specific augmented intelligence solutions for financial services, healthcare, and digital commerce markets that emulate and extend human cognitive functions by pairing people and machines. Built on its CORTEX augmented intelligence platform, the company’s industry-specific solutions help large enterprises drive change by increasing user engagement, improving decision-making, and delivering self-learning and self-assuring business processes.
In recent news, CognitiveScale announced that a USAA affiliate has made a strategic investment in the company and became a customer. USAA will implement the CognitiveScale Financial Services augmented intelligence products for delivering contextual customer engagement and improving advisor productivity. By using CognitiveScale, USAA is positioning to provide its more than 12 million members predictive, data-driven banking and insurance services while learning continuously from user interactions and data.
Artificial intelligence (AI) is a major disruptive force in banks, insurance companies and financial services organizations. According to IDC, the global cognitive systems spending market will grow to $47 billion by the end of 2020, with the banking industry accounting for 19 percent of that projected market spend.
“USAA has a long history of using emerging technologies to develop innovative ways to serve our members,” said Nathan McKinley, VP and head of corporate development for USAA. “Our work with CognitiveScale allows us to support such innovation through our investment while also leveraging the AI products they have today to find ways to better serve our members.”
Elsewhere, CognitiveScale recently announced the addition of Dr. Joydeep Ghosh as the company’s first Chief Scientific Officer. An internationally recognized authority on machine learning, data and web mining, and related artificial intelligence (AI) approaches, Dr. Ghosh joins the team with more than 30 years of experience applying these technologies to complex real-world problems.
As CognitiveScale’s Chief Scientific Officer, Dr. Ghosh will focus on aligning and tightly integrating the company’s Cognitive Cloud software with industry-specific data models and the latest algorithmic sciences efforts; recruiting the best and the brightest minds in AI while supporting those already at CognitiveScale; and educating the market about the power and value of augmented intelligence and enterprise-grade AI.
Learn more at CognitiveScale.com.
Zylotech Offers Customer Analytics Platform
Zylotech is an MIT spin-off offering the AI Customer Insights Platform which combines customer data management with a deep-learning driven decisioning engine. Zylotech uncovers probabilistic customer behavior patterns from all data sources to enable real time customer marketing with a high success rate.
Zylotech’s Customer Analytics Platform uses automated machine learning to identify, unify, cleanse, and enrich customer data to power AI-driven, real-time customer insights for marketing teams to execute upon.
Zylotech offers an ebook, “Retailers Guide to Customer Retention & Monetization”, covering strategies retailers are using to tame their data and move from big data to big insights.
Learn more at Zylotech.com.
— Written and compiled by John P. Desmond
Source: AI Trends