
AVIOS
Conversational Interaction
Conference
CONNECTING WITH COMPUTERS - SPEECH, TEXT OR BOTH?

In the Heart of Silicon Valley
SAVE THE DATE - March 11-12, 2019
WHY ATTEND?
Keynote: Social Applications of Conversational Interfaces
Michael McTear, Professor, Ulster University
Conversational interfaces (also known as chatbots, virtual assistants, or digital assistants) have been used widely in business applications such as marketing and customer service. Increasingly, attention is being directed to social and humanitarian uses of conversational interfaces, such as providing companionship and advanced home monitoring for the elderly, healthcare applications, dealing with harassment at work, providing people in developing countries with new forms of support, and many others. This talk presents some chatbot applications currently under development at Ulster University in the areas of monitoring the elderly at home, mental health support, and bibliotherapy for disadvantaged users, and examines the issues and technical challenges that these types of conversational applications encounter. The talk concludes with a discussion of the current state of conversational interfaces and suggestions for future developments.
The state of conversational technology
Speech recognition and natural language understanding are developing technologies. What is their status today and how will they evolve?
Conversational AI: Separating Hype from Reality, Julian Paolino, Worldwide Leader of Cognitive Innovation Group, Nuance Communications
There's a lot of hype around how humans and machines engage with one another: the storied age of AI-powered robots and devices that can have full-fledged intelligent conversations with people is seemingly now upon us. Is it? The answer is "not quite." We've made huge advancements in computing and natural language processing, and machines can understand humans at accuracy rates in the 90s, but automating complex conversations between humans and machines is still not solved. Today, it's a heavily manual process, taking teams of designers and coders to build the decision trees and neural networks that help the machine know what to do based on what the human said. That's the area where AI truly needs innovation. So how do we mimic and automate the human mind and all of its intricacies? That's what I'm here to explore.
Jimmy's World: Making sense of everyday life references, Eugene Joseph, CEO, North Side Inc
Common wisdom has it that successful chatbots have to be specialized (banking, movies, wine, and Siri web services are some examples). We'll present our work on Jimmy's World, a chatbot that makes sense of everyday-life utterances and generates context for them. Our massively rule-based NLU technology can provide useful context information to vertical, ML-based chatbots, helping improve their performance.
Best kept secrets to deploying Assistants in the real world, Nico Acosta, Director of Product - Conversational AI, Twilio
Today, it's really hard for companies to decide how much to rely on bots and assistants to serve their customers while still providing a great customer experience. In this talk we will go through a framework for deploying omnichannel assistants in the enterprise, improving scale and efficiency while also improving the customer experience.
The voice and personality of your application
The quality, naturalness, and "personality" of speech from an automated system can be a major determinant of its effectiveness.
Behavioral Considerations in Developing Assistants - an Industrial Application View, Dan Bagley, CEO, Cepstral
The best behavior of a voice assistant can depend on the type of application.
Do as I Do, Not as I Say, Lisa Falkson, Senior VUI Designer, Amazon
Mirroring is defined as "the behavior in which one person subconsciously imitates the gesture, speech pattern, or attitude of another," according to Wikipedia. Humans perform this behavior unconsciously to build rapport with others during conversation. How can virtual assistants use mirroring to mimic human behavior in positive ways? In this talk, I will discuss paralinguistic features (such as Amazon's recently announced "whisper mode") and their application to automated conversations.
Give your Digital Agent a Branded Voice with Emotional Intelligence and Personality – Quickly and Cost Effectively, Meir Friedlander, EVP, Product, Operation and Business Development, Speech Morphing
Your consumer-facing digital agent is an integral part of your customer care team and your company brand. A uniquely branded and contextually aware voice can improve your brand image as well as your customer experience; it is the personality and tone of your business. And only a humanized, emotionally aware voice can spark natural dialogue between your brand and your consumers. By leveraging advanced speech technology, neural networks, deep learning and natural language voice generation, Speech Morphing is reinventing speech synthesis. In this talk, we will introduce this new technology, a true game changer for developers and designers of conversational AI, voice interaction and modern IVR. We will demonstrate how you can use our user-friendly Smorph™ Voice-on-Demand service and powerful tools, with as little as 3-30 minutes of voice recordings, to create a dynamic branded voice that is highly customizable. From tone and demeanor to pronunciation and lingo, granular controls allow you to adjust the mood, volume, pitch, speed, intonation and human sound gestures. Never before has high-quality, expressive and customized voice creation been so easy, quick and cost-effective. Speech Morphing has the potential to improve human-machine communications and speed the proliferation of synthesized speech for real-world applications.
Case studies 1: Experience with deployed conversational systems
Examples often make clear the advantages and challenges of a conversational system. This session contains such examples of deployed systems and their results.
Ten Things Every Voice Application Should Do, Jeff Blankenburg, Senior Alexa Evangelist, Amazon Alexa
In my experience building dozens of skills for Alexa, and working with hundreds of developers on their own voice applications, I've identified ten specific patterns that are common to the most successful of the bunch. This presentation will cover these ten topics, giving you the insight and acceleration your voice app needs to get to the next level.
Conversational AI Best Practices – Real-world learnings working with the world's largest financial institutions, Dror Oren, Chief Product Officer & Co-Founder, Kasisto
What to consider when choosing a conversational AI platform, and best practices for preparing your organization. Based on years of experience working with financial institutions, Dror will share first-hand what it takes to bring a virtual assistant to market and to keep the system learning and improving beyond launch day, from fine-tuning accuracy and performance to adding new channels, markets, and languages. Expect to hear how to define measurable goals that everyone in your organization can get behind, whether that's improving how you service, engage, or acquire customers, what makes a solid go-to-market strategy, and how you can evolve your virtual assistant's roadmap to drive more business results. He'll dive into designing conversational experiences, how to customize for channels, what you need to know to ensure a smooth integration into your data and infrastructure, and some of the pitfalls and resource drains to avoid. All of Dror's insights are based on Kasisto's years of work with the world's largest financial institutions, including JP Morgan, Wells Fargo, TD Bank, Mastercard, DBS Bank and Standard Chartered.
Advanced features in conversational systems
Conversational systems can have advanced features such as security and dealing with more than one speaker in a conversation.
Security and Fraud Prevention Using Biometrics, Mia Puzo, SME Manager, Biometrics and Security, Nuance Communications
As enterprises have transitioned to becoming more digital, efforts to fight fraud have followed suit, leaving the voice channel especially vulnerable: call center fraud is growing by 10% every year, and 60% of enterprises today are falling victim to social engineering, in which individuals impersonate legitimate customers to convince agents to change information that then enables fraudulent activity. As fraud continues to be a challenge across both digital and traditional channels, it is critical that the tools used to authenticate consumers upon first contact are effective and accurate, no matter which entry point they choose. An expert from Nuance's biometrics team will shed light on how today's advanced technologies across voice and other means of authentication can combat fraud and data breaches in the call center and beyond. In this session, Nuance will:
• Provide an overview of biometrics, including how the technology uniquely balances security and convenience while bringing a new level of personalization to customer service
• Explain the ways in which AI can help create biometric watchlists that spot abnormal activities as they are happening to stop social engineering in its tracks
• Compare the features and benefits of various types of biometrics (from voice and facial to behavioral methods) as they relate to fraud detection
• Explore use cases for biometrics and real-world examples of deployments from large financial institutions, telecom providers, government organizations, and more
Preserving Personal Privacy in Conversational Interactions with Digital Agents, Gerard Chollet, VP Research, Intelligent Voice
Businesses, government agencies, and associations offer server-based digital assistants accessible from web browsers or downloadable applications, as well as telephone access to call centers. Some of these applications allow spoken interactions with users. In many cases, the speech signal is sent to the operator and processed on a centralized server, and there is a risk that the operator could use this information for purposes the user did not intend. This presentation discusses a number of proposals to preserve the personal privacy of the user, including an embedded pre-processing interface on the user terminal that transforms the user's voice, as well as anonymization techniques for internet access.
Customer service: Automation with natural language
Customers are frustrated with IVR menus. Improve self-service by letting them simply say or type why they are contacting customer service.
The role of AI in financial services – how humans and machines will work together, Dr. Catriona Wallace, CEO and Founder, Flamingo AI
tbd
How Conversational Intelligence Drives Exceptional Customer Service, Somya Kapoor, Head of Product, Aisera
Conversational intelligence helps organizations direct conversations toward optimal customer outcomes. Top vendors and startups alike are building chatbots that are easy to create but difficult to maintain. What is the journey from chatbots to conversational AI, and what does it take to maintain and scale them while providing an optimal customer experience at an efficient cost? What is the recipe for success for intelligent assistance and precise customer service?
Options in user interface design
Natural language interaction doesn't stand alone. It can be supported by GUIs and other options.
Affect, the ultimate differentiator - The Engineering of Emotion, Wolf Paulus, Principal Engineer, Intuit Futures, Intuit
We pride ourselves on creating delightful user experiences, where designers obsess over shades of color, font types, and padding around text, and use terms like "design language." But here is the problem: voice-first or voice-only experiences don't have a traditional UI. In this new environment of ambient computing, where neither form factor nor looks matter, likability may become the ultimate differentiator. Not only what a virtual assistant says, but how it says it, will determine success. We need to turn engineers, designers, and content writers into emotion-aware wordsmiths who deeply care about every word and every pause, what to emphasize, and how to respond empathically. This talk explores and demonstrates the possibilities of more personalized, contextual, and likable customer engagement using affective computing technologies and emotion analytics (e.g., expressive SSML, sentiment analysis, WordNet and SentiWordNet, the Dictionary of Affect in Language (DAL), emotional prosody, Tone in Text and Engagement Tone in Text, and text statistics and readability). Emotion recognition and emotion synthesis may play an important role in creating a likable bot for text and/or voice interactions: correctly perceiving a user's emotional state, e.g., by observing user behavior (voice, photo/video, touch, click, etc.) in real time, synthesizing appropriate and thoughtful responses, and creating a believable illusion that the bot concerns itself with the user's situation. All with the simple goal of putting emotional intelligence into the experiences we want to create for our users and customers.
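To make the emotion-analytics idea concrete, here is a toy, hypothetical sketch (not Intuit's system): a tiny hand-made valence lexicon in the spirit of SentiWordNet or the Dictionary of Affect in Language, used to pick a response tone. The words, scores, and thresholds are illustrative only; real lexicons have tens of thousands of scored entries.

```python
# Toy sketch (not Intuit's system): a tiny valence lexicon used to
# choose a response tone. Words, scores, and thresholds are made up.

VALENCE = {
    "love": 0.9, "great": 0.8, "thanks": 0.6,
    "slow": -0.4, "broken": -0.7, "hate": -0.9,
}

def sentiment(text: str) -> float:
    """Average valence of known words; 0.0 when nothing matches."""
    scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0

def tone(text: str) -> str:
    """Pick a response tone from the user's estimated emotional state."""
    s = sentiment(text)
    if s < -0.3:
        return "empathetic"  # acknowledge frustration before helping
    if s > 0.3:
        return "upbeat"
    return "neutral"
```

A response generator could then select empathetic wording, or expressive SSML prosody, whenever `tone` reports a negative state.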
Chat vs. Voice: The Future of Conversational AI, Justina Nguyen, Dashbot
This session will cover the differences, benefits, and challenges of messaging and voice interfaces. The natural evolution of human computer interaction is conversation, but which medium has the potential to be the most productive, delightful, and efficient? We will dive into use cases and best practices for each conversational platform and how to develop a unique tone and voice. We will also cover industry data insights on conversational platforms, end-user satisfaction and engagement, and developer experiences and challenges.
Digital assistants and chatbots: The options
What are the options in interacting with digital systems with human ("natural") language? How does one judge the tradeoffs and platforms?
Testing Bots with Bots: Conversational CX Gives Rise to a New Breed of Testers, Mike Monegan, Vice President Product Management, Cyara
Conversational interfaces are being supercharged with AI, enabling a more intuitive and adaptive customer experience (CX). These AI-powered conversational interfaces are being applied to textual chat on the web, asynchronous messaging in apps, and natural language in IVRs. While the benefits of such a sophisticated, data-driven, learning interface are clear, the ways to test and assure perfect execution are not. Testing conversational chatbots that enable customer self-service across many customer intents and journeys, which might require escalation to a live human agent at any point, is a tremendous challenge. Couple that with the myriad technologies required to support interaction channels such as IVR, live agent voice, chatbot, live chat, web, SMS, CTI, and agent desktop, and you have a herculean task to achieve end-to-end testing. Testers of conversational CX need great linguistic sensitivity and deep domain expertise in the realm of the enterprise providing customer service. So how do you approach testing an area as complex as customer experience with AI? Fight fire with fire: test bots with bots. This session will discuss new research in using AI to validate AI in the realm of outside-in CX testing. The strategy for synthesizing "virtual bot testers" from linguistic and machine learning algorithms is closer than you think. This session will cover:
- Machine learning algorithms for scoring bot response accuracy and maintaining proper conversational context
- Conversational scenario generation to stretch the limits of NLU models
- Configuring bots to execute regression testing in an agile, iterative delivery cycle
- Customer usage examples
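As a minimal illustration of the "test bots with bots" idea, the sketch below replays a scripted regression suite against a stand-in bot and scores intent accuracy. The bot, the suite, and the intent names are all hypothetical, not Cyara's product.

```python
# Minimal "testing bots with bots" sketch: replay scripted utterances
# against a bot and score intent accuracy. Everything here is a
# hypothetical stand-in for a real conversational endpoint and suite.

def toy_bot(utterance: str) -> str:
    """Stand-in for a real bot: trivial keyword-based intent NLU."""
    u = utterance.lower()
    if "balance" in u:
        return "check_balance"
    if "transfer" in u or "send" in u:
        return "transfer_funds"
    return "fallback"

# Each case pairs a synthetic tester utterance with the expected intent.
REGRESSION_SUITE = [
    ("what's my balance?", "check_balance"),
    ("send $50 to Alex", "transfer_funds"),
    ("open a new account", "open_account"),  # known gap: should fail
]

def run_suite(bot, suite):
    """Return (accuracy, failures); each failure is (utterance, expected, got)."""
    failures = [(utt, want, bot(utt))
                for utt, want in suite if bot(utt) != want]
    return 1 - len(failures) / len(suite), failures
```

Run in a CI loop, a harness like this catches regressions whenever the bot's NLU model or dialog logic changes; real virtual bot testers would also generate paraphrases to stretch the NLU model.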
Connect with Customers: Messaging That Works, Rob Lawson, Global Partnerships, Google
Businesses in all industries are being forced to evaluate how they communicate with their customers. How can they reach them in a way that's both instant and personalized? How can the brand be easy to engage with at each touchpoint? Conversational interfaces are gaining popularity and in this session Google will outline how you can connect with customers directly through branded, interactive mobile experiences, right to the default messaging app.
Issues in creating conversational applications
Talking or texting with a user of an application in a way that doesn't require instruction or detailed prompts is clearly a potentially powerful way of interacting with an automated system. But there are challenges created by today's state of technology.
Clarification and Error Recovery Dialogs, Marie Meteer, Senior Research Scientist, Pryon
The challenge of conversational interactions is not just answering questions correctly, but also knowing when the question is not understood and providing the most useful clarifying question or response. In this talk I explore multiple recovery strategies, such as implicit, explicit and reprise clarifications. I'll then describe our experience implementing clarification dialogs using tools from Microsoft, IBM, and others.
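As a rough sketch of the strategies above (not Pryon's implementation, and with purely illustrative thresholds), a dialog manager might choose among proceeding, implicit confirmation, explicit confirmation, and a reprise clarification based on NLU confidence:

```python
# Hypothetical recovery-strategy selector; the thresholds and prompts
# are illustrative, not any vendor's implementation.

def recovery_strategy(confidence: float) -> str:
    """Map an NLU confidence score in [0, 1] to a dialog move."""
    if confidence >= 0.90:
        return "proceed"                # act on the interpretation
    if confidence >= 0.70:
        return "implicit_confirmation"  # confirm while moving on
    if confidence >= 0.40:
        return "explicit_confirmation"  # ask a yes/no check
    return "reprise_clarification"      # re-ask the original question

def respond(slot: str, confidence: float) -> str:
    """Render the chosen strategy as a (hypothetical) system prompt."""
    prompts = {
        "proceed": f"Okay, booking {slot}.",
        "implicit_confirmation": f"{slot.capitalize()}... and for what date?",
        "explicit_confirmation": f"Did you mean {slot}?",
        "reprise_clarification": "Sorry, I didn't catch that. Which city?",
    }
    return prompts[recovery_strategy(confidence)]
```

The key design point is that each band trades off dialog length against error risk: implicit confirmation keeps the conversation moving but makes corrections harder, while a reprise question costs a turn but avoids compounding a misrecognition.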
Hearing in smart devices, Alexander Goldin, Founder & CEO, Alango Technologies
If we want smart devices to interact with us in our language, we need to give them a sense of hearing, and our expectation is that this sense will at least match our own. We want smart devices to distinguish our voices from the voices of other people, understand us in noisy environments and, while playing music or reading a weather forecast, be responsive to our barge-in commands. The device's sense of hearing must be capable of ignoring its own "voice" while choosing a specific sound direction and listening to the signal from that direction only. With these requirements, one microphone is not enough. Smart hearing requires multiple microphones configured for a specific application, plus digital signal processing software capable of enhancing and recognizing voice from a specific direction. I will present different use cases and discuss what needs to be done, from both hardware and software points of view, to achieve our goal of enabling a true sense of hearing in smart devices. I will outline the principles of voice enhancement as well as the trade-offs that must be made to meet cost, performance, ergonomics, power consumption and other practical limitations.
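One classic building block behind such multi-microphone processing is delay-and-sum beamforming. The sketch below is a simplified, generic illustration (not Alango's technology): it computes integer-sample steering delays for a linear array and averages the aligned channels, reinforcing sound from the chosen direction. The 16 kHz sample rate, array geometry, and integer-delay simplification are assumptions; production systems use fractional delays and adaptive filters.

```python
import math

# Generic delay-and-sum beamforming sketch. Assumes a linear array,
# 16 kHz sampling, a far-field source, and integer-sample delays.

def steering_delays(mic_positions_m, angle_deg, fs=16000, c=343.0):
    """Integer sample delays that align a far-field source at angle_deg
    (0 = along the array axis, 90 = broadside)."""
    rad = math.radians(angle_deg)
    raw = [(p - mic_positions_m[0]) * math.cos(rad) / c * fs
           for p in mic_positions_m]
    lo = min(raw)                        # shift so all delays are >= 0
    return [round(d - lo) for d in raw]

def delay_and_sum(channels, delays):
    """Average the delay-aligned channels; coherent sound from the
    steered direction adds up while other directions partially cancel."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[i + d] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]
```

For two microphones 8 cm apart, a broadside source (90°) needs no delay, while an end-fire source (0°) arrives about four samples later at the far microphone at 16 kHz.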
Digital Assistants in the Enterprise
Digital assistants can make employees more efficient, making the use of enterprise software and operations easier and thus more fully utilized.
Chatbots/Digital Assistants in the Enterprise, Tin Kashyap, Product Manager, Kelly Services
How chatbots are assisting employees and making them more efficient; how chatbots have changed the way we communicate with employees; and how we are using chatbots to shortlist candidates.
Cisco e-Commerce: AI powered Chatbot, Gaurav Goyal, Principal Architect, Cisco Systems
Cisco's e-Commerce platform is one of the world's largest in terms of revenue, responsible for booking more than $40 billion worth of orders annually. Cisco's commerce platform is unique in the industry: unlike the platforms of other enterprises, it is a single platform responsible for selling all of Cisco's 40,000+ products across all business entities. It is used by 140,000 unique users, consisting of Cisco account managers, partners, end customers and distributors, in 16 languages across 138 countries. Our user base varies from novice to seasoned, but even for seasoned users it is difficult to keep up with all of our product offerings and all the functionality changes the platform goes through. The result is users opening support cases when they do not understand how to transact on our platform; we get hundreds of thousands of cases every year. To solve these productivity problems and give our customers a better experience, we have rolled out an AI-powered chatbot on our platform. We are using machine learning behind the scenes to understand user behavior and collecting more information to provide a personalized experience.
Collecting Data in the Food Supply Chain using voice interaction: Updated Case Study, John Swansey, CoFounder and Chief Design Officer, AgVoice
Consumer demand for more transparency in the food supply has led many large food production companies to drastically increase the amount of data they collect in the field for compliance, yield optimization, source verification, and breeding and inspection of plants and livestock. The outdoor environment is harsh, and users’ hands are busy manipulating soil, samples, and tools, and their eyes may be focused on the task. In this session, we show advances in our pioneering work incorporating mobile and voice technology in a data-collection service optimized around the challenges of these specialized outdoor workers. Highly specialized vocabularies, and the need for real-time confirmation and error handling drive our development of an innovative solution to meet the needs of large companies in the food and agriculture industry.
Fitting the application: Differences driven by the environment and the user
Particular cases, such as hands- and eyes-free options in automobiles or home speakers and the availability of interface options other than speech, create specialized demands on a conversational system.
Conversation with cars: Present and future of in-car voice interfaces, Jay Goluguri, Lead Product Owner, Toyota Connected North America
Advances in AI and conversational technology are fundamentally impacting the automotive voice interfaces. This talk will explore the current state of automotive voice, challenges, and where we are going next.
Always on Voice - The Key for Conversational Interactions, Tali Chen, President, US and Chief Evangelist, DSP Group
Voice is quickly becoming the preferred method of communication between consumers and their smart devices, with products like the Alexa-enabled Amazon Echo and Google Home creating high levels of user engagement and bringing computing to significantly more people and situations. By 2020, over 24 billion internet-connected devices will be installed worldwide, more than four for each human on earth. Furthermore, roughly 44% of U.S. broadband households use voice controls on internet-connected devices, and 46% of U.S. millennials with smartphones use voice recognition software. The total volume of data generated by IoT devices will reach 600 ZB per year by 2020.
Developers require always-on, small form factor, low-energy technologies to tame and channel that data into useful applications. Whether battery- or AC-powered, the staggering number of devices and volume of data dictate that the electrical power they require be kept to a minimum. Ultra-low-power chipsets and careful internal design give these devices the stamina to run for years without overloading the power grid. ULE (Ultra Low Energy) offers an ideal standard for the emerging Internet of Things (IoT) market. ULE features extremely low cost, low power consumption, long range (full-house coverage with a simple star topology), an interference-free allocated spectrum, highly suitable bit rates, and value-added complementary voice and video capabilities. While voice user interfaces (VUIs) are growing, they face reliability, cost and security issues created by subpar smart devices. Moreover, clear communication and consistent accuracy have been hard to achieve. For voice to become the dominant user interface, it must attain an extensive communication range and an interference-free spectrum band that enables devices to listen at scale, all at an affordable price.
DSP Group leverages 30 years of voice processing expertise and a dominant market presence, and Tali Chen has led DSP Group's corporate marketing, communications, and EU business development for over 10 years, with her finger on the pulse of IoT in healthcare, smart home, and security. Her experience allows her to describe voice-enabled technologies' successes and shortfalls, as well as identify challenges the emerging IoT industry must overcome and areas for growth. The talk will focus on how the ULE protocol, versus competing ZigBee and Z-Wave standards, will help the smart home industry overcome obstacles and achieve mass adoption.
Communications Next, Catelyn Orsini, Voice Interface Architect, Plantronics
Together, IoT, AI, AR, ubiquitous connectivity, and analytics are enabling all-new types of communications and engagement experiences. Will they turn humans into extensions of the network, used to do the things that technology can't? Or will we take charge and wield the technology to make life better, make people more productive and happier, and empower us in ways that make more space for our humanity? With the individual in the center, it becomes increasingly important to fully understand how best to integrate vision, touch, hearing, and speech to ensure they feel as natural as possible and give us control of our attention and focus. Some of these scenarios and use cases will be mundane and others profound, but there's an expectation that all will appear seamless as people move throughout their day. One key ingredient for success will be to continuously think beyond individual, discrete engagements and instead strive to orchestrate the user experience across a range of platforms, devices, and scenarios in a holistic fashion that speaks to the human story.
Robots and social systems
Science fiction is coming closer to reality as limited-function robots reach commercial feasibility. Robots and other devices that simulate human traits must not only interact naturally, but display social awareness.
Designing Valuable Experiences for Social Robots, Matt Willis, Lead UX Designer, SoftBank Robotics America
Creating experiences for social robots presents many challenges that span multiple facets of social interaction and conversational design. As designers of these experiences, we need to consider multiple channels of communication, support pre-existing cognitive models and introduce new methods of communication as robot capabilities increase. Above all of this, we need to ensure that the experiences we create with our robots bring value to humans and help them to achieve their goals. At SoftBank Robotics, we apply a human-centered design methodology to the design of experiences for our social robots. This talk will introduce some of the tools we use to build these experiences, and share some of the challenges we have when designing these interactions. Examples provided will cover the application of various tools we use to ensure we place the human at the center of our experiences, and some of the techniques we have applied to support social interaction with Pepper, a humanoid robot.
Talking to Robots: Making it Work!, Charles Jankowski, Director, AI and Robotics, Cloudminds
More and more, we see public social robots having conversational interactions. How does that differ from what we're used to with phones, chatbots, or Alexa? We'll talk about what's different about these deployments and what has to be done to get them to work, and discuss Cloudminds' deployments of social robots in the field.
Building conversational systems that engage users
"Conversational" interfaces raise expectations. How can we best meet those expectations?
Like My Style: Conversation Style 101 for Bots, Sondra Ahlén, Principal VUI Consultant/Owner, SAVIC
How does the way bots talk impact whether people want to keep talking or listening, and the decisions they make? Conversational style varies by culture and is a major factor in human interaction, in making friends, building trust, influencing purchases, and much more. As bots become more fluent in human language, conversational style plays an ever-increasing role in human-bot interaction. This talk presents a variety of conversation styles, such as "Bowling", "Basketball", and "Ping Pong", and "Auditory" vs. "Visual" vs. "Kinesthetic", that can be used for various purposes by bots that speak or text with humans regularly.
Second dates with Alexa: Designing voice experiences that users come back to, Neha Javalagi, Lead, UX Research and Design, Witlingo
In this presentation, I use the analogy of 'bad first dates' and 'failed engagements' to highlight with examples the common pitfalls designers must avoid to design voice experiences that people enjoy, and most importantly, come back to. I will share fictional stories of frictional interactions to discuss a new fun framework for evaluating the usability of conversational experiences. Imagine the encounter with a skill for the first time to be a first date. Just like we want our first date to lead to a second and a third date, likewise with skills: we want users to come back for seconds and thirds. After all, it's hard to undo a bad first impression. So, is the first encounter with your skill like my encounter with "Skillip," who goes on and on and doesn't let me get a word in edgewise? Or is it like "Skilliam," who bores me by acting as if he were following a contrived, predictable Hollywood romance script? Or perhaps it is more like "Skillsie," who is hung up on the ghost of interactions past? Or maybe it is like "Skilly," who wants to hold my hand at every turn and tell me what to say? We as designers often spend our time thinking about 'user engagement'. With a literal spin on this term, in this presentation we will discuss why so many voice interactions don't go beyond the 'first date'. Using the intimate dating experience as a metaphor, I will explore how the subtle conventions of human-human interactions help us outline some key, guiding principles for designing delightful and meaningful voice interactions.
Best Practices in developing conversational systems
What practices will make a conversational system most effective?
We Are All Conversation Natives, Susan Hura, CEO, Banter Technology
Digital native refers to the generation born in the digital age, surrounded by technologies that older generations encountered only as adults. Digital natives never knew the world without digital technologies, and their high expectations raise the bar for utility and usability. Users have similarly high expectations for conversations with smart speakers, and the underlying reason is the same: we are all conversation natives. Conversation is the innate mode of human interaction and the overarching influence on users' perceptions and opinions of automated conversations. This session explores the factors that will help you create interactions that meet the high expectations of conversation natives.
Chatbots use cases: Follow the 4 F's framework, Pedro Andrade, Head of New Product Development, WIT Software
Designing a good conversation takes more than good technology. As a technology company, we learned that from dozens of experiences and trials, collecting and analyzing feedback from real customers. All that knowledge can be summarized in one formula, the 4 F's framework: Fast, Friendly, Freedom, Fun.
Virtual Assistants are both marvelous and god awful!, Alexandros Potamianos, CEO & co-Founder, Behavioral Signals
Four and a half years ago, Amazon's Alexa entered our homes, and virtual assistants quickly became part of our everyday life. Breakthroughs in signal capture and processing, as well as context-dependent speech understanding, made Alexa an overnight success. "It works!" declared experts and laymen alike. Today, five years since the project was declared a success, customers are getting restless: there has been almost no progress in the user experience, interaction remains brief and transactional, and the (admittedly fun) easter eggs have been added manually, a Sisyphean task. The Alexa division acknowledged this and created the Alexa Prize, where top universities compete to create a sticky user experience. The results are mixed at best. What are we doing wrong? The answer is simple: we are ignoring the very basic principles of human voice communication: common grounding (the Alexa team calls this context), recursive mind reading, trust, joint attention and, most importantly, joint intentionality. In this talk we explain how affective, behavioral and social awareness and intelligence can lead to the much-needed breakthrough in user experience and to quality, emotionally fulfilling interactions with our virtual assistants.
Technology challenges in conversational systems
The technology supporting conversational systems has passed a "tipping point" of practical deployment, but is still evolving, and there are many challenges.
Wrangling the conversational Long Tail, Anish Mathur, Senior Offering Manager, IBM Watson
Conversational systems used in applications like customer service or technical assistance interact with users across a distribution of inputs. Systems today can be built to effectively handle the head of this distribution, addressing FAQs and common inputs. But in many cases this covers only a portion of the questions that users have; questions that are less frequent or not easily categorized make up the "long tail" and can pose a challenge for existing technology and affect how users perceive the interaction. Answers for the long tail are often buried in a large corpus of content, and this talk will focus on how to combine conversational technology with discovery and insight capabilities to find and present answers for the long tail. It will also describe use cases and examples across industries where this pattern has been successfully applied and how it can enable unique customer experiences.
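The head-versus-long-tail pattern described in this abstract can be sketched in a few lines. The sketch below is purely illustrative: `classify_intent` and `search_corpus` are invented stand-ins for a trained intent classifier and a document-retrieval service, not IBM Watson APIs, and the confidence threshold is an arbitrary example value.

```python
CONFIDENCE_THRESHOLD = 0.7  # below this, treat the input as long-tail

def classify_intent(utterance: str):
    """Toy classifier: recognizes a couple of 'head' FAQs with high confidence."""
    faqs = {"reset my password": "password_reset", "store hours": "store_hours"}
    for phrase, intent in faqs.items():
        if phrase in utterance.lower():
            return intent, 0.95
    return "unknown", 0.2

def search_corpus(utterance: str) -> str:
    """Stand-in for a discovery/search service over a large content corpus."""
    return f"Top passages retrieved for: {utterance!r}"

def answer(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Head of the distribution: a curated, scripted answer exists.
        return f"Scripted answer for intent '{intent}'"
    # Long tail: fall back to retrieval over the corpus instead of failing.
    return search_corpus(utterance)

print(answer("How do I reset my password?"))            # head: scripted answer
print(answer("Does the warranty cover water damage?"))  # tail: retrieval
```

The key design choice is that low-confidence inputs route to discovery rather than a generic "I didn't understand" response.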
Internationalizing a conversational platform, Joseph Tyler, Senior Conversation Designer, Sensely
If a conversational interface is built initially for a single language, perhaps English, then extending it to other languages and locales can raise new challenges. In this talk, I will discuss some of the issues that may arise when building out a conversational platform to support many languages, writing systems, text-to-speech and speech recognition models, and more.
Beyond Chat Bots: Towards Truly Intelligent Conversational Agents, Peter Voss, CEO/ Chief Scientist, Aigo.ai
Alexa and other chat bots have taken the world by storm. However, the current reality is that natural language applications have severe limitations, failing on tasks that even a child could easily handle. They typically have no memory of what was said earlier, don't reason about their task, and have no common sense. Moreover, they are usually engineered to perform just one specific task -- i.e., they do not have any general intelligence. Originally (the first wave), conversational agents were programmed manually using logic flow. Everyone is well aware of the limitations in scaling and maintaining such systems. More recently (in the second wave), big data and machine learning have automated task classification (intent identification) and some key parameter extraction. However, we now also understand the difficulty of obtaining massive amounts of manually tagged training data, and the problems with lack of transparency, reasoning, and real-time adaptability in these systems. This talk will explain how a cognitive architecture approach can overcome many of these limitations and provide a much more intelligent platform for conversational agents. A practical example of this 'third wave' technology will be demonstrated.
Keynote panel: Building a digital assistant or chatbot: What works and what doesn't
Experienced experts will discuss the issues in building effective conversational systems. Is this a major trend or a fad? What is the status of the core technology and development tools? What are best practices? What are potential mistakes to avoid?
Dan Bagley, CEO, Cepstral; Diego Ventura, CEO, noHold; David Nahamoo, CTO, Pryon; additional panelist to be announced
Digital assistants in customer service
Customer service is a key area where conversational systems can have an impact, getting away from a frustrating series of menus to a prompt such as "Please say why you are calling."
A rose by any other name: the complexity of conversational design at scale, Editt Gonen-Friedman, Conversational AI at CX Mobile, and Samrat Baul, Principal Conversation Design Lead, Oracle
Dozens of names per item, thousands of items in dozens of languages, and a multitude of classification systems; all this for one enterprise bot. Can one such bot then fit all of your customers? When you build conversational user interfaces for complex and growing systems, how do you monitor, maintain and improve a bot which needs to understand each company's context, but still leverage a central services approach? In this talk, we explore the solutions to building a bot that scales.
Accelerating Customer Response with Natural Language Search, Praful Krishna, CEO, Coseer
The focus will be a case study of wildly successful NLS deployment at a large SaaS company with a highly technical product. Across all industries (use cases span much more than just tech) customers want better information faster, and customer experience has never been more important. Through NLS, we've helped our client take ticket deflection from just 10% to 60% - translating into tens of millions in savings, more efficient operations, and happier customers. Relevant white paper may be referenced here: https://coseer.com/content/accelerating-customer-response-with-natural-language-search/
Options in developing a conversational assistant
Involvement in building an intelligent virtual assistant can range from simply providing guidance and examples to a solutions provider to getting into the details. What are the options and tradeoffs?
Meet the Ultra Assistant, David Colleen, CEO, SapientX
First there were chatbots, then command-based systems like Alexa, but... people wanted more. They wanted assistants that better understood them and could do more for them. Our team at SapientX has been busy working on a new generation of AIs that we call Ultra Assistants. They understand your conversations with them, they learn all about you, and they can perform complex tasks for you. Ultra Assistants are helping companies that build cars, robots, TVs, and medical applications to improve their products and to make happier customers.
Fake it till you make it: Prototyping for Voice User Interfaces, Obaid Ahmed, CEO & Founder, Botmock
The current process for building voice applications is broken. Most teams go through the classic pattern: ideate, build, launch, measure, iterate, repeat. Even when executed efficiently, this process hinders innovation under the burden of engineering and product launching. In this talk I will show how to test your ideas more quickly and increase the overall effectiveness of your whole team in delivering great VUI experiences.
Understanding your customer: A key to effective conversational systems
There are many aspects to building a conversational system. But perhaps the most fundamental is understanding what the customer may say.
Unsupervised Speech and Text Analytics based on 'NLP Architect' by Intel AI-Lab, Moshe Wasserblat, NLP and DL Research Manager, INTEL
AI provides competitive advantages, but enterprise adoption rates remain low. In this talk I will present "NLP Architect", an open-source library by Intel's AI Lab, and demonstrate how to deploy unsupervised text and speech analytics solutions in an enterprise environment. These solutions augment human operations and allow flexible, low-cost adaptation to new domains.
Analytics Powered by Conversation, Adrien Schmidt, CEO and Co-Founder, Bouquet.ai
AI is continually integrating into and impacting industries outside of the tech world. From government to education to agriculture, its omnipresence implies that it's an AI world and we're just living in it. How has AI shaped the way business is done, and how will voice command continue to change the future of business? There is a growing need to make big data more accessible and intuitive for enterprises. By applying artificial intelligence to power smart conversations about data, chatbots will transform the way companies rely on intelligent conversations to make decisions. Having a personal analytics assistant makes it as simple as having a conversation to get all the powerful answers you need for success.
- AI (artificial intelligence), BI (business intelligence), and CI (conversational interfaces) are coming together to create new, magical experiences for users
- Interact with chatbots using natural language and voice support
- Get answers from chatbots in the most helpful format: text, data visualizations, emails, reports, etc.
- Personalized preferences and custom triggers produce relevant, smart answers at a faster rate
- Use case: enterprise sales
Aristotle ...
- will change how enterprises receive and understand information by providing new interfaces powered by conversations
- is always on, delivering smart and flexible analytics through conversations on mobile devices using standard messaging tools and voice-activated interfaces
- dramatically reduces the time to data for all business users, thereby accelerating the pace of business
- streamlines the way analysts use their time, preparing companies for the growing data needs of the near future
Natural language interaction in complex customer service
Some customer service automation using speech or text interaction is complex due to the wide variation in the way requests are posed.
The Use of Chatbots and Machine Learning in the Travel Industry, Maryna Shumaieva, Co-founder and CTO, CruiseBe, Inc.
There are many types of chatbots and ways of implementing them, from small businesses to big corporations:
- what a chatbot is
- the history of chatbots (when the first chatbot was developed)
- four types of chatbots
- what type of chatbot suits your company best
- how to integrate NLP into your chatbot
- Facebook as a platform for building your first chatbot
- chatbots as a support and marketing tool
The Future of Voice-based Intelligent Conversational Agents for the Enterprise, Itamar Arel, CEO, Apprente
The needs and requirements of speech-based conversational agents in customer service automation are fundamentally different from those of consumer-facing products, such as virtual personal assistants. The talk will highlight the conceptual differences between the two and outline some technical approaches that can be taken in order to optimize future solutions for enterprise-facing applications. Apprente is developing advanced conversational AI systems aimed at automating a wide range of customer service applications, leveraging a technological pipeline customized for enterprise-facing domains such as drive-thru stations, kiosks, and mobile devices.
Making workers more efficient
Speech recognition and natural language understanding can be used to help workers do their jobs more efficiently.
The Rise Of Voice-Activated Assistants In The Workplace, Omar Tawakol, CEO & Founder, Voicera
We've already mastered voice at home with the help of Alexa and Google Home, but what about in the enterprise? In his talk, Omar will explain his thesis on why voice-activated AI is the most important development to come to the workplace. He will pull from his experiences creating Eva, an enterprise voice assistant focused on making meetings more actionable, and will dive specifically into the challenges of automatic speech recognition, natural language processing and neural networks that make it difficult to create these kinds of voice-activated assistants. More importantly, he will speak to how he and his team have overcome these challenges.
How AI enables dynamic information access to a truly mobile retail workforce, Jesse Montgomery, Sr. Speech Technologist, Theatro
As brick-and-mortar retailers continue leveraging their physical presence to better compete against e-commerce disruption, store associates are becoming an increasingly competitive asset. Today's informed and savvy shoppers expect associates to provide them with instant, useful information and excellent customer service. Jesse Montgomery will explain how the AI capabilities behind voice-controlled wearables like Theatro's not only help associates access store systems and connect with each other in real time, but also deliver far more dynamic, supportive, and valuable information to associates based on experience level – ultimately creating a far better shopping experience for customers. Theatro's AI-driven communication platform adapts to individual retail associates and dynamically customizes their experience. For example, a new and not-very-accomplished associate might dynamically receive additional training or reminders on best practices, whereas a more advanced user might receive tips on using features that they haven't accessed often (or at all) or be introduced to newly released features. This talk will cover some of the hottest topics in business today – AI, IoT, wearables, workforce enablement, digital augmentation of physical assets – and discuss them in the context of the power of voice.
Using Conversational AI to get the most out of your Enterprise Database, Diego Ventura, CEO, noHold
Most enterprises rely on powerful databases to manage customer relationships, inventory, HR information, etc. But gathering information from them is not always easy and often requires the use of predefined reports or the help of IT specialists. What if you could simply say: "show me customers in California who have spent at least $500K in the last 12 months and that are non-profit"? It is now possible to create simple, straightforward conversational interfaces on top of the databases that govern our lives. The application of this technology is not limited to the enterprise; it can also be used by individuals while shopping or booking their next vacation: "what laptops do you have with an i7 processor, that cost less than $1250 and with a 2.8 GHz clock?"
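One common way such an interface works is to have an NLU layer extract a slot frame from the utterance and then compile that frame into a parameterized SQL query. The sketch below illustrates only the compilation step under assumed names: the `customers` table, its column names, and the slot keys are all invented for this example, and are not noHold's actual schema or method.

```python
def build_query(slots: dict):
    """Compile an extracted slot frame into a parameterized SQL query."""
    clauses, params = [], []
    if "state" in slots:
        clauses.append("state = ?")
        params.append(slots["state"])
    if "min_spend" in slots:
        clauses.append("total_spend_12mo >= ?")
        params.append(slots["min_spend"])
    if "org_type" in slots:
        clauses.append("org_type = ?")
        params.append(slots["org_type"])
    sql = "SELECT name FROM customers"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

# A slot frame an NLU layer might produce for the utterance quoted above:
slots = {"state": "CA", "min_spend": 500_000, "org_type": "non-profit"}
sql, params = build_query(slots)
print(sql)
print(params)  # ['CA', 500000, 'non-profit']
```

Using placeholders (`?`) with a parameter list, rather than interpolating user text into the SQL string, keeps the generated query safe from injection.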
Naturally! Not so easy
When a digital system uses human language - "natural language" - it creates expectations. How well can today's technology meet those expectations?
An inter-lingual approach to dialogue understanding, Lisa Michaud, Director, Natural Language R&D, Aspect Software
The users of any conversational system are likely to come from diverse language backgrounds. Many such systems are deployed in a single language, but forcing a user to engage in a dialogue in a less-well-known language raises the likelihood that their inputs will be highly divergent from language models built on the productions of native speakers. This may confound many natural language understanding engines. This talk illustrates an alternative: an interlingual architecture where dialogue logic, intent classification, and slot-filling are language-independent, empowering a design approach that builds for multiple language groups in a single design effort.
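The interlingual idea can be illustrated with a toy sketch: language-specific understanding modules all emit the same language-independent intent/slot frame, so the dialogue logic is written once. The phrase tables, intent names, and action labels below are invented examples, not Aspect Software's implementation.

```python
# Per-language surface phrases mapping to a shared, language-independent frame.
LEXICONS = {
    "en": {"check my balance": ("check_balance", {})},
    "es": {"consultar mi saldo": ("check_balance", {})},
}

def understand(utterance: str, lang: str):
    """Language-specific front end: map an utterance to the shared frame."""
    return LEXICONS.get(lang, {}).get(utterance.lower(), ("unknown", {}))

def dialogue_policy(frame) -> str:
    """Language-independent: operates only on the shared intent frame,
    so it is written once for all supported languages."""
    intent, _slots = frame
    return "FETCH_BALANCE" if intent == "check_balance" else "CLARIFY"

# The same dialogue logic serves both languages:
print(dialogue_policy(understand("Check my balance", "en")))    # FETCH_BALANCE
print(dialogue_policy(understand("Consultar mi saldo", "es")))  # FETCH_BALANCE
```

Adding a language then means adding a front-end lexicon or model, not redesigning the dialogue.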
Resources and tools for natural language design: intents and entities, Deborah Dahl, Principal, Conversational Technologies
Modern natural language application development tools (Alexa, LUIS, Dialogflow) allow developers to define user goals (intents) and specific information that the user is looking for (entities) with easy-to-use graphical interfaces. But where do those intents and entities come from in the first place? The tools don't provide any guidance on this critical initial step, and errors at this stage will result in unmaintainable systems that require significant rework as the application matures. This talk covers the process of designing intents and entities from a platform-independent perspective. We will discuss user studies, existing software tools, and the overall design process.
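A platform-independent design might capture intents and entities in a neutral data model before exporting to any vendor tool. The sketch below is one possible shape for such a model; the dataclass fields and the pizza-ordering example are illustrative assumptions, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    values: list                      # canonical values
    synonyms: dict = field(default_factory=dict)  # surface form -> canonical value

@dataclass
class Intent:
    name: str
    sample_utterances: list           # ideally drawn from user studies
    entities: list                    # names of entities this intent fills

# A toy design, defined once and exportable to Alexa, LUIS, or Dialogflow:
size = Entity("pizza_size", ["small", "medium", "large"],
              synonyms={"big": "large", "regular": "medium"})
order = Intent("order_pizza",
               ["I'd like a {pizza_size} pizza",
                "get me a {pizza_size} pepperoni"],
               entities=["pizza_size"])

print(order.name, "->", order.entities)  # order_pizza -> ['pizza_size']
```

Keeping the design in a neutral format like this lets the team rework intent boundaries early, before committing to one platform's tooling.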
Design and Evaluation of Multi-party Conversational Systems, Heloisa Candello, Research Scientist, IBM Research
Recent advances in artificial intelligence, natural language processing, and mobile computing, together with the rising popularity of chat and messaging environments, have enabled a boom in the deployment of interactive systems based on conversation and dialogue. This talk explores the design and evaluation of conversational interfaces, focusing on design and evaluation methods that address the specific challenges of interfaces based on multi-party dialogue. I will show two projects. Café com os Santiagos is an artwork where visitors conversed with three chatbots portraying characters from a book, in a scenographic space recreating a 19th-century coffee table; it was accessed by more than 10,000 users in a public space, yielding insights to further improve the field of conversational systems. The second is an experiment with a cognitive investment adviser called Finch, whose interface made a state-of-the-art conversational governance system accessible for regular users to assist in financial decisions.
Case studies 2: Experience with deployed conversational systems
Examples often make clear the advantages and challenges of a conversational system. This session contains such examples of deployed systems and their results.
Lessons Learned in Building a Conversational Chatbot using Open Source Technologies, Saurav Chatterjee, Sr. Director, Asurion
In this presentation, we will talk about our experience in building a chatbot for the insurance domain, specifically the challenges in building conversational dialogs for scale, A/B feature testing, analytics and testing.
IBM Travel Concierge: Personalize Trip Planning through Natural Conversation, Guang-Jie Ren, Manager, re*THINK Enterprise, IBM Research
Planning a trip nowadays tends to go to one of two extremes: making endless queries on search engines to try to find the best deal, or taking good, old-fashioned advice from only a few people you know well but with limited options. The result: a vacation that's not really what you want. IBM Travel Concierge strikes the balance by interacting with you through natural conversation to understand your preferences and helping you narrow down the top choices with personalized recommendations, from destinations and attractions to flights and hotels. This novel combination has been beta tested by a major international travel company. In this talk, we will demonstrate how it works and the lessons we've learned from the test, including app performance, user experience, and business impact.
Natural language tools: A wide range of options
When "machine learning" is used to develop natural language applications, it can sound intimidating. This session presents insights that make those technologies more accessible.
A data-driven approach to building AI-powered customer service virtual agents, Ofer Ronen, General Manager, Chatbase (Area 120 product at Google)
More and more companies are adopting AI-powered virtual agents (aka bots) to augment live agents and improve the customer experience. But for contact centers building a bot or virtual agent, the critical question is: which bot to build and why? In this session, attendees will learn ML techniques for faster, more efficient intent discovery.
Deep Reinforcement Learning for Conversational AI, Dr. Sid J Reddy, Chief Scientist, Conversica
"Deep learning" – more specifically "deep reinforcement learning" – has become a hot topic in the general rush to launch AI products. Some practitioners of deep reinforcement learning are at one end of the pendulum, prone to the "Maslow's hammer" cognitive bias: they try to solve their problem with deep reinforcement learning because they are fascinated by it or know it is good for marketing. Other practitioners, such as small startups, are prone to fear of the unknown. Since its implementation in autonomous robots and self-driving cars is not well understood, they equate deep reinforcement learning to something that should be applied only after traditional deterministic AI and shallow machine learning models have been tried and have failed. Drawing upon the experience at Conversica, where we applied deep reinforcement learning so that our AI assistants could hold autonomous multi-turn conversations with millions of humans to fulfill business objectives, we will explain how to avoid the hype, decide which use cases are best suited for its application, and avoid common pitfalls such as the curse of dimensionality. We will go over the basics of the Markov decision process with conversational AI as an example and explain how to optimally set up the environment, states, agent actions, transition probabilities, reward functions, and end states. Attendees will also learn when to use end-to-end reinforcement learning and when it is more effective to use deep reinforcement learning as a component. We will also discuss best practices in deep reinforcement learning and highlight techniques that have been successful in various applications of artificial intelligence.
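The Markov-decision-process framing mentioned in this abstract can be made concrete with a toy dialogue: states, actions, transitions, and rewards are enumerated explicitly. Everything below is invented for illustration (a real system would learn the policy rather than hand-write it), and none of it reflects Conversica's implementation.

```python
# transitions[state][action] -> (next_state, reward)
TRANSITIONS = {
    "greeting":   {"ask_need": ("need_known", 0.0)},
    "need_known": {"propose_solution": ("done", 1.0)},  # success reward
}

def step(state, action):
    """Environment dynamics: unknown (state, action) pairs cost a small penalty."""
    return TRANSITIONS.get(state, {}).get(action, (state, -0.1))

# A hand-written policy standing in for one learned by reinforcement learning:
POLICY = {"greeting": "ask_need", "need_known": "propose_solution"}

state, total_reward = "greeting", 0.0
for _ in range(10):  # bounded episode length
    if state == "done":
        break
    state, reward = step(state, POLICY.get(state, "end_call"))
    total_reward += reward
print(state, total_reward)  # done 1.0
```

The per-turn penalty for unproductive actions and the terminal success reward are the levers that push a learned policy toward short, goal-completing conversations.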
Building digital assistants for smart speakers
Interaction with devices in the home by voice is a rapidly increasing trend. Check how you can connect with customers effectively through this channel and avoid customer frustration.
Voice in Journalism - Product Managing the launch of a Voice Skill, Navya Nayaki Yelloji, Product Manager - Voice Platforms, Gannett - USA TODAY NETWORK
Following the Web and Mobile revolutions, the news industry is racing head first into the Voice revolution. So, what does it take to launch a highly engaging and enjoyable Alexa Skill or Google Action for news? Navya walks through the steps that take the product manager from engaging with field and market researchers and identifying the initial functional requirements for the skill, to working with UX designers, developers, voice-experience testers, beta testers, and then marketers through the launch and post-launch of the skill. She provides specific examples and shares learnings from real deployments of conversational experiences.
Ten Lessons Learned: Launching Voice Experiences for Clients, Brielle Nickoloff, Voice User Interface Designer, Witlingo
Voice experience designers -- whether designing for traditional voice telephony or the new channel of smart voice assistants -- have always faced a curious double challenge: we need to design for an exacting medium (pure voice), while at the same time managing a client who, by the mere fact that they are a competent conversationalist in their native language, believes that they have what it takes to design for voice -- or at least that they should have a say. How does one deal with this double challenge? How does one engage in the delicate crafting of a design that requires skill, knowledge, and experience, while negotiating with a client who believes that they can re-write a prompt or restructure the flow just because they feel like it? In this talk, Brielle Nickoloff draws from her experience designing Alexa skills for real clients and shares some hard-earned lessons.
Creativity Under Constraint: Overcoming Technical Limitations for Great Conversational UX, Rebecca Evanhoe, VUI Designer, Mobiquity
For intuitive, flexible conversational interactions, users need a balance of freedom and guidance. It's hard enough to deliver the right level of instruction at the right time so that a user intuitively engages in completing her task. What's more, platforms like Amazon's Alexa Skills Kit layer on additional challenges. Sometimes, the best UX for a customer and the limitations of a platform might seem at odds. But when designers and developers work together, they uncover solutions that obey technical constraints while delivering efficient and effective conversational user experiences. We've been there, and we will share successes and pitfalls. This talk covers four case studies of building voice skills for Alexa where our team encountered tension between ideal UX and technical constraints. For each case, you'll learn about the challenge, our solution (and its pros and cons), lessons learned, and the resulting best practices for design.
Challenges in conversational interaction
Conversational interaction is a young technology. Some things you should consider…
The Hitchhiker's Guide to High-Density Natural Language Understanding, Yi Ma, Principal Scientist, b4.ai
Within 9 months, we built a voice bot, Pizza Pal*, that can take orders for pizzas and drinks through natural language. We made the bot from scratch, without any existing data to start with. Pizza Pal can understand users' requests to order multiple items in one turn, ask for recommendations, change and add toppings, customize the sauce and crust, edit and remove items from the order, specify preferences on food allergies, inquire about coupons… Yes, you can do all that by talking naturally to Pizza Pal, as if you were talking to a real person who works at the pizza restaurant. For the NLU component, in addition to the classic intent/slot processes, we created a set of new design principles and technologies to give the user total freedom during natural interaction with the machine. More importantly, the same conversational platform can be reused with little adaptation to create many more voice-enabled apps, within weeks each. We call this set of new design principles and technologies that allow for natural, seamless voice experiences between people and machines High-density Conversational AI. Through the case study of Pizza Pal, Yi will share his ongoing adventures at b4.ai toward building ultimate intelligent agents with human-level conversational skills. *To try our skill/action, Pizza Pal Lite, yourself: it's available on Alexa and Google Assistant. Just say "Start Pizza Pal Lite" on Google Assistant, or search for and enable "Pizza Pal Lite" on Alexa, to experience how high-density can make ordering pizza with voice a better experience. See some demos of our current voice bot here: https://www.youtube.com/channel/UCofLeQENzRDnnZZBFrKh1iQ? Also, check out our Medium series for more insight into how we're implementing high-density conversational AI to create frictionless conversational experiences: https://medium.com/adventures-in-high-density
Spoken, Knowledge-Grounded, End-to-End Dialog, Lazaros Polymenakos, Manager, IBM Research, Implicit Learning for Dialog
Deep learning for conversational systems is getting increasingly "deeper": instead of learning dialog from textual dialog examples, we will present AI approaches that learn directly from spoken interactions and integrate relevant external knowledge, appropriately represented to the neural network. This allows for tightly coupled systems where context from the conversation improves speech recognition performance and, vice versa, speech recognition errors are repaired in dialog understanding. We will present novel systems and benchmarks on spoken dialog data.
The Foresee Conversational Framework, Andy Tarczon, Director of Strategy & Insights, VMLY&R
Conversational interfaces require more than just intents, utterances, and entities. They require the ability to solve real-world issues where the user is controlling the conversation. In this discussion, you will learn the 4Cs of the Foresee Conversational Framework and how to create the strategic guardrails that guide a user to a positive outcome. The presentation will explore:
• How to choose the right use cases to align business goals with user challenges
• How to look beyond the simple chatbot to build an interface that delivers relevancy and value in each interaction
• Identifying the needs of each of your audiences
• A checklist for the starting point of the first channel for your conversational interface
Case studies 3: Experience with deployed conversational systems
More examples of deployed conversational systems and their results.
Working Voice into the Editorial and Product Workflow for News Companies, Kevin Goff, Technical Product Director, Gannett - USA TODAY NETWORKS
Newsrooms have complex workflows to produce content to the highest journalistic standards; Gannett does this efficiently and at massive scale across its 100+ news properties. Newsroom workflows changed from print to web, then to mobile, and then once more to social. The Voice revolution is calling for another metamorphosis of the newsroom workflow. As a veteran of the news industry who helped Gannett through the digital and social ages, Kevin speaks from experience as he walks through how he is helping to transform the newsroom workflow for voice, with lessons learned from the web, mobile, and social revolutions.
Dialog Management as a Key Technology for Conversational Systems, Nathan Ziv, VP, Product Management, Invoca
With consumers interacting with brands online, offline, and across devices, maintaining context throughout the customer journey has become more complex than ever before. The key to solving this critical problem lies in a brand's ability to develop a solid dialog management system that enables marketers to have continuous, context-rich conversations with consumers while simultaneously using these opportunities to analyze calls to better understand consumer behavior, improve customer experience, and increase ROI. Part of this is understanding why digital context is key to voice interactions as it relates to all moments of a customer's journey, from the beginning -- understanding what drove the call -- to understanding a customer's evolving needs: are they looking to buy, are they a returning customer who can be upsold, or are they having issues? The speaker will talk about the technologies that companies can implement today to tie what happens during a conversation to the rest of the digital journey - ultimately creating a consistent and seamless experience for the consumer.
Creative improvement to the user experience
Conversational technology is evolving rapidly as we learn how to make it more effective in all situations.
Defining Multimodal Interactions: One Size Does Not Fit All, Jared Strawderman, UX Designer/Head of UX Framework, Google Assistant, Google
Multimodal interactions are coming to life on a wide range of surfaces and operate on a set of rules defined in your interaction model. But the tenets of a multimodal interaction vary wildly depending on whether you're designing for a mobile device, a TV, a car, etc. We'll delve into some of the things you need to consider when building a model for various surfaces.
How In-Queue Music and Messaging Slays the “On-Hold” Problem, Marcus Graham, Founder & CEO, GM Voices
“On-hold” has been the black eye on the caller experience for years. While recent innovations in CX technologies have changed call handling, the on-hold portion of the customer interaction has been largely the same—REALLY BAD. But very soon, personalization and streaming will be the in-queue standard. While the caller holds the line, give them control through a unique on-hold experience that includes the caller’s choice of licensed music (pop, rock, country, etc.) and relevant messaging based on their customer profile and history. Let’s make the “on-hold” time enjoyable and useful.
Issues in conversational systems
Computer interaction with humans raises many issues such as privacy and how ambitious our goals should be.
The Turing test holds no value in assessing conversational AI, Puneet Mehta, Founder / CEO, msg.ai
We're measuring AI all wrong. For years, AI enthusiasts have used the Turing test as a guide for developing conversational AI. Developed in 1950, the Turing test focuses on believability, analyzing a machine's ability to behave indistinguishably from a human; researchers have long considered passing the test as the holy grail of AI. In the application of modern AI, the number one goal is to solve problems. Reproducing human characteristics is only one ingredient in a complex concoction of an effective AI, and many human characteristics are even counterproductive. Yet we still see engineers building things like time delays in conversational AI responses to make it appear as though a bot is "thinking" and similar tactics to contort technology into passing the Turing test. One of the biggest opportunities for AI lies in customer service: not simply answering repeatable questions, but also giving AI the authority to make decisions, like accepting a return or waiving a change fee for an airline ticket. Companies can't treat and measure AI like an employee. AI will make a different set of mistakes than humans do and will also learn from these mistakes differently. This means we need to measure success for machines differently than we do for humans. When the washing machine was invented, its creators didn't try to replicate hand washing. They created an entirely new machine. The washing machine's success was not measured the same way as washing clothing by hand, but rather a new set of standards was created: efficiency, quality of the wash and ability to remove stains, power, ease of use. So how do we update the Turing test for practical applications of conversational AI? We need to get away from how "advanced" it feels and focus on the primary goal: efficiency. We should regard AI as providing a significantly better alternative to how we solve problems today. 
As we move forward, we also need to widen the scope to encompass all intelligent behavior that could be useful to the end user. In this session, Puneet will explore how researchers could more accurately measure the success of AI.
Preserving Personal Privacy in Conversational Interactions with Digital Agents, Gerard Chollet, VP Research, Intelligent Voice
Businesses, government agencies, and associations offer server-based digital assistants accessible from web browsers or dedicated downloadable applications, as well as telephone access to call centers. Some of these applications allow spoken interaction with users. In many cases, the speech signal is sent to the operator and processed on a centralized server, and there is a risk that the operator could use this information for purposes the user never intended. This presentation discusses a number of proposals to preserve the personal privacy of the user, including an embedded pre-processing interface on the user's terminal that transforms the user's voice, as well as anonymization techniques for internet access.
What we Believe About the Mind and How We Build AI, Robert Harris, President, Communications Advantage, Inc.
How much does our view of consciousness, spirituality, and natural selection influence how we approach the development of artificial intelligence systems? Join Robert Lee Harris for an interactive review of contrasting theories of the mind and their practical application in computer development, along with current trends in creating the "brains" behind interactive, conversational systems. In summary: how human-like can, and should, these systems actually be?
Managing the conversation
Simple interchanges such as question-and-answer sessions serve their purpose, but more is often necessary.
The future of messaging as a platform, Mike Gozzo, CTO, Smooch
Payments, Cards, Images, Buttons, Chatbots – messaging platforms have brought in a wide variety of new end-user experiences and opened up new use cases. Despite these awesome capabilities, the future of messaging platforms is not about any of these things. Instead, it's deeply tied to the very nature of our human experience. Join Mike, co-founder of Smooch.io – the platform that connects some of the largest brands and enterprises to messaging channels, as he explores the concepts and technology that will drive the next wave of the messaging revolution.
How digital assistants can positively impact small to medium-sized businesses, Ketan Shah, CEO, Agentz
In this early phase of AI, most companies' focus is on larger enterprises. But AI needs to be accessible to SMBs at an affordable price, where it can have a huge impact on the bottom line.