
AVIOS Conversational Interaction Conference
CONNECTING WITH COMPUTERS - SPEECH, TEXT OR BOTH?
In the Heart of Silicon Valley

NEWS
Interactions expands fraud protection with Next Caller partnership
The Interactions customer service platform includes many levels of authentication
Interactions, a provider of Intelligent Virtual Assistants (IVAs) for enterprise brands, announced a partnership with Next Caller, which provides phone fraud detection and call verification. The connection expands Interactions’ breadth of security capabilities for IVA clients. Partnering with Next Caller will offer an additional layer of security to keep fraudulent callers at bay.
Next Caller’s VeriCall is designed to assess the threat level of inbound calls, serving customers in large financial, government, and insurance institutions. With Next Caller’s technology, Interactions IVA will be able to assess the threat level of an incoming call using the caller’s automatic number identification (ANI) and network info, and produce a ‘risk score.’ This score will allow the IVA to determine how to handle the call, taking into account a client’s business protocols. For example, callers flagged as high risk may require additional authentication, or may be routed to a high-risk live agent. Low risk scores may require fewer customer-facing hurdles during authentication. VeriCall processes risk in a matter of milliseconds, so there is no latency in the interaction.
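To make the routing logic concrete, here is a minimal sketch of how an IVA might branch on such a risk score. The thresholds, score range, and function names are illustrative assumptions, not Interactions' or Next Caller's actual API.

```python
# Illustrative sketch only: branching on a VeriCall-style risk score.
# The score range (0.0-1.0), thresholds, and names are assumptions.

def route_call(risk_score: float, business_protocol: dict) -> str:
    """Map a fraud risk score to a call-handling path per client protocol."""
    high = business_protocol.get("high_risk_threshold", 0.8)
    low = business_protocol.get("low_risk_threshold", 0.2)
    if risk_score >= high:
        # Flagged callers get extra authentication or a high-risk agent.
        return "additional_authentication"
    if risk_score <= low:
        # Low-risk callers face fewer customer-facing hurdles.
        return "streamlined_authentication"
    return "standard_authentication"

print(route_call(0.9, {}))  # -> additional_authentication
```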
Next Caller’s technology joins Interactions’ existing suite of security features, which includes multi-factor authentication methods, voice biometrics technology, and ANI and Voiceprint Blacklists. Interactions’ Voice Biometrics creates voiceprints of callers with consent, and the voiceprints are securely encrypted and stored. ANI Blacklists cross-reference a caller’s ANI with existing databases of blacklisted numbers. Similarly, Voiceprint Blacklists use the caller’s voiceprint to search an existing database of fraudster voiceprints.
“At Interactions, security is a top priority,” said Mary McKenna, Director of Product Management at Interactions. “Customer service interactions generate massive amounts of data that are often sensitive, so being able to provide our clients with state-of-the-art security is a must. With this enhanced security offering, we strengthen our commitment to protect our clients and their customers from security threats.”
Interactions also sponsored a Harris Poll on the acceptability of virtual assistants to consumers. The downloadable study asked questions about consumers' preferences and comfort levels when dealing with AI, that is, automated natural-language services. The overall conclusion was that the majority of consumers are comfortable interacting with AI solutions and that, in some cases, AI is actually preferred over human interactions. Beyond that, the study showed that companies can realize significant benefits from AI capabilities that consumers themselves view as beneficial.
When asked about AI capabilities that enable positive customer experiences, 85% think 24/7 availability is useful. The same percentage also cited the ability to connect directly to a virtual assistant instead of going through a menu of choices. Additionally, about 79% of consumers said that AI solutions provide positive customer experiences when they enable consumers to interact on the channel of their choice and to speak conversationally—as if talking to a human—instead of forcing them to use “robot speak.”
Roughly 3 in 5 consumers are also comfortable with AI using personal and historical information to personalize interactions, and nearly 60% are comfortable with AI using the information to predict why they are contacting the company. Additionally, nearly 3 in 5 consumers (59%) would support companies using invasive AI capabilities if it could help them solve issues more efficiently. Boomers appear to be least receptive to this idea, with only 51% in support, compared to 64% of Millennials and 63% of Gen Xers.
There are even some situations where consumers would prefer a virtual assistant over a human agent. Half of consumers would prefer to interact with a virtual assistant over a human agent if they are dealing with an embarrassing customer service situation, and more than 2 in 5 (44%) prefer a virtual assistant when they are upset or in a bad mood. This sentiment is even stronger among Millennials at 65%.
Consumers were asked to evaluate how useful they think a variety of AI capabilities are when it comes to providing a positive customer experience. The most useful capabilities indicated were:
- Directly connecting you with a virtual assistant (eliminating the need to go through menu choices) (85%);
- Ability to use conversational words/phrases, as if they were talking to a human, rather than speaking “robot talk” (79%); and
- Interacting with a virtual assistant that has a human-like voice/personality as opposed to a computer-generated voice (70%).
The survey was conducted online within the United States by The Harris Poll on behalf of Interactions between August 14-16, 2018, among 2,022 adults ages 18+.
IBM adds a digital assistant to its enterprise endpoint security software
Also, Watson Natural Language Classifier now available
IBM is adding new chatbot capabilities to its MaaS360 Unified Endpoint Management (UEM) security platform, delivered as Software-as-a-Service (SaaS). The company also made generally available its Watson Natural Language Classifier, which lets developers create custom classifiers through machine learning.
Unified Endpoint Management
Endpoint management is designed to help an organization secure the devices (endpoints) that access enterprise software, potentially including smartphones, tablets, laptops, desktops, wearables, and Internet of Things (IoT) devices. Unified endpoint management manages devices across all enterprise software with a single solution.
In October 2017, IBM added Watson to MaaS360, as the administration-oriented “Insights Advisor,” renaming it MaaS360 with Watson. The new MaaS360 Assistant is limited to smartphones and is a chatbot that works with both text chat and voice. It enables users to make natural language queries to find emails, discover documents, and schedule meetings, among other capabilities. For the voice option on smartphones, the Assistant currently uses the native speech recognition in iOS and Android.
Apparently, the objective is to allow digital assistant functionality without leaving the monitored security system. This implies that the Assistant will evolve over time in an attempt to make it unnecessary to use other options.
Watson Natural Language Classifier
From classifying financial risk and compliance to categorizing service queries and messages, Watson Natural Language Classifier enables a developer to create, train, and evaluate custom classifiers through machine learning. IBM announced the general availability of the Watson Natural Language Classifier tooling in Watson Studio. Features include:
- Classify natural language text: Classify strings of text into custom categories.
- Leverage IBM Deep Learning as a Service: Train a classifier with up to 20,000 rows of training data.
- Multi-lingual support: Support is available in English, Arabic, French, German, Italian, Japanese, Korean, Portuguese (Brazilian), and Spanish.
- Multi-phrase classification: Classify up to 30 separate text inputs in a single API request.
New tooling provides an improved user experience for training classifiers.
In one environment, you train, build, and test your classifiers. Additionally, with Watson Studio, you can import other services, notebooks, frameworks, and models into your projects.
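As a concrete illustration, below is a minimal sketch of calling a trained classifier over the service's REST interface from Python. The endpoint, authentication style, and response fields follow IBM's public documentation of the period, but treat the details as assumptions and verify them against current docs; the classifier ID and API key are placeholders.

```python
# Minimal sketch: classify one text with a trained Watson NLC classifier.
# Endpoint/auth per 2018-era IBM docs; verify before relying on them.
import requests

API_KEY = "your-iam-api-key"          # placeholder
CLASSIFIER_ID = "your-classifier-id"  # placeholder
URL = ("https://gateway.watsonplatform.net/natural-language-classifier"
       f"/api/v1/classifiers/{CLASSIFIER_ID}/classify")

resp = requests.post(URL, auth=("apikey", API_KEY),
                     json={"text": "I want to cancel my subscription"})
resp.raise_for_status()
result = resp.json()
# The service returns ranked classes with confidence scores.
print(result["top_class"])
for c in result["classes"]:
    print(c["class_name"], c["confidence"])
```

The multi-phrase feature in the list above corresponds to a batch request (a classify-collection call) that accepts up to 30 texts in one round trip, which amortizes network latency across inputs.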
Google Assistant adds optimistic news option
“Hey Google, tell me something good”
A recent book by Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, argues that things are better than most perceive and getting better. One reason Pinker believes we are more pessimistic than we should be is that the news media favors bad news, assuming it is more interesting than good news. (We don’t see a headline that there were no murders in a city all day, for example.)
Google now gives an option for people who would like a more optimistic view of what’s going on. You can ask Google Assistant, “Hey Google, tell me something good.” Examples include how Georgia State University coupled empathy with data to double its graduation rate and eliminate achievement gaps between white and black students, how backyard beekeepers in East Detroit are bringing back the dwindling bee population while boosting the local economy, and how Iceland curbed teen drinking with nightly curfews and coupons for kids to enroll in extracurricular activities.
The stories come from a wide range of media outlets, curated and summarized by the Solutions Journalism Network, a nonpartisan nonprofit dedicated to spreading the practice of solutions journalism, which highlights how problems are solvable and that doing better is possible.
Apple’s Siri to get better at recognition of local destinations
Three million HomePod speakers sold
There were developments regarding Apple’s Siri:
- Apple HomePods are estimated to have reached three million in the US; and
- Siri will do better with recognition of local destinations.
HomePods
A new report by Consumer Intelligence Research Partners (CIRP) says that Apple has sold three million HomePods in the US and nearly doubled its smart speaker market share in the second quarter of 2018. This equates to roughly $1 billion in revenue. Separately, Aftrex Market Research estimated that the global digital bots market will reach about $5.3 billion in revenue by 2026.
Siri’s location-based recognition
Researchers at Apple published a paper describing how Siri uses regionally specific language models for speech recognition, allowing Siri to be better at recognizing small local businesses. The authors note that the improvement in speech recognition over recent years has been due to deep learning techniques, with improvements largely in the recognition of general speech. They note that accurately recognizing named entities, like small local businesses (Points of Interest, POIs), has remained a performance bottleneck.
Siri’s ability to recognize names of local POIs was improved by incorporating knowledge of the user’s location into Apple’s speech recognition system, using customized language models that take the user’s location into account.
Statistical Language Models (SLMs) in general consider the probability of specific sequences of words, going beyond the recognition of specific sounds such as phonemes. They are critical, of course, in general speech recognition for distinguishing words that are pronounced identically, such as “to,” “too,” and “two.”
An SLM created from a large data set will likely be able to recognize “Starbucks,” but less likely to recognize “Molly’s Pastry Place.” Entity names that occur only once, or never, in the training data for an SLM are likely to be misrecognized, even when the individual words are rated the most likely based on phonetic analysis, due to bias in the SLM against them because of their lack of representation in more general training data.
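The following toy sketch (not Apple's implementation) illustrates the bias just described, and how interpolating a general model with a location-specific one, in the spirit of the Geo-LM approach, can recover an unseen local name. The counts and interpolation weight are invented.

```python
# Toy sketch: a general SLM assigns zero probability to a name it has
# never seen, while linear interpolation with a geo-specific model
# (lam * local + (1 - lam) * general) gives it nonzero probability.
from collections import Counter

general = Counter({"starbucks": 5000, "directions": 9000})
local   = Counter({"molly's pastry place": 40, "starbucks": 30})

def prob(model: Counter, phrase: str) -> float:
    total = sum(model.values())
    return model[phrase] / total if total else 0.0

def interpolated(phrase: str, lam: float = 0.5) -> float:
    return lam * prob(local, phrase) + (1 - lam) * prob(general, phrase)

for name in ["starbucks", "molly's pastry place"]:
    print(name, prob(general, name), interpolated(name))
```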
The method Apple uses assumes that users are more likely to search for nearby local POIs with mobile devices than with Macs, for instance, and therefore uses geolocation information from mobile devices to improve POI recognition. The authors indicate they have been able to significantly improve the accuracy of local POI recognition and understanding by incorporating users’ geolocation information into Siri’s speech recognition system. The specific methodology is presented in more detail in the paper.
The paper compares the geographical SLM (Geo-LM) to the general SLM for the task of POI recognition in the United States. The test used eight major US metropolitan regions and selected the top 1,000 most popular POIs for each region based on Yelp reviews. For each POI, the test data included three recorded utterances, with and without the carrier phrase “directions to,” from three different speakers. The results showed that Geo-LM reduces word error rate (WER) by 18.7% on the geographically specific data, with no degradation in accuracy in general recognition. The paper reports other comparative results.
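For reference, WER is the standard metric here: the word-level edit distance between the reference transcript and the recognizer's hypothesis, divided by the number of reference words. A minimal implementation (a standard definition, not Apple's code):

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("directions to molly's pastry place",
          "directions to mollys pastry place"))  # 0.2 (1 error / 5 words)
```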
Samsung personal assistant Bixby gets more conversational and personal
Samsung’s Galaxy Home smart speaker with Bixby introduced
Samsung made several announcements at a company event in August. The headline announcement was the Galaxy Note 9, a large mobile phone that almost delivers laptop functionality. The premium device starts at $1,000 with a new S Pen, AI camera, and 128GB of storage.
Other announcements more related to this newsletter’s focus included improvements in Bixby, Samsung’s alternative to digital assistants such as Apple’s Siri, Google Assistant, Microsoft’s Cortana, and Amazon’s Alexa. The company also announced the Galaxy Home smart speaker, which includes Bixby. Apparently, Samsung considers a direct connection with its customers a strategic priority. Separately, Ticketmaster and Live Nation added Bixby integration.
Bixby improvements
Ji Soo Yi, vice president of AI strategy, showed off some of Bixby’s new capabilities at the Samsung event. He began by saying, “From the beginning Bixby was designed to help you get things done.”
A demo of Bixby at the introduction of the Note 9 featured its being more “conversational.” This was demonstrated by its ability to understand the range of days implied by “Labor Day weekend.” A follow-on question was understood to refer to the Labor Day weekend without the phrase being repeated. More generally, the feature is the retention of context within an interaction.
Bixby also retained context from previous interactions. In the demo, the digital assistant remembered the demonstrator’s preferences for French food and the number of people he typically made a reservation with. Rather than simply listing restaurant web sites, the assistant showed visuals of food at a restaurant. A reservation could be made by simply pressing a “Bixby button.”
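A minimal sketch of what such in-conversation context retention amounts to follows; the session structure, dates, and handler names are illustrative assumptions, not Samsung's implementation.

```python
# Sketch: resolve a phrase once, store it in session context, and let
# a follow-up refer back to it without repetition.
from datetime import date

context = {}  # persists across turns within a session

def handle(utterance: str) -> str:
    text = utterance.lower()
    if "labor day weekend" in text:
        # Resolve the phrase to a concrete date range and remember it.
        context["date_range"] = (date(2018, 9, 1), date(2018, 9, 3))
        return f"Labor Day weekend is {context['date_range']}."
    if "that weekend" in text and "date_range" in context:
        # The follow-up refers back to the stored range.
        start, end = context["date_range"]
        return f"Searching restaurants for {start} to {end}."
    return "Could you clarify the dates?"

print(handle("What days are Labor Day weekend?"))
print(handle("Find French restaurants that weekend."))
```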
Bixby also provides access to outside services. A request for a ride to a particular location returned a price quote and the option to request the ride. There was no mention, however, of developer tools for outside applications.
Samsung is, however, not betting entirely on Bixby. One can say, “Hey, Google” to launch the Google Assistant. And Bixby uses Google Maps.
Galaxy Home smart speaker
Samsung’s Galaxy Home smart speaker with Bixby was announced, but is not yet available. It will feature an integrated SmartThings hub, enabling it to control smart home devices in the Samsung SmartThings ecosystem; that ecosystem’s connectivity capability is available in a number of products from other manufacturers as well as Samsung’s own.
The Galaxy Home will feature high-fidelity sound, with six built-in speakers and a subwoofer. The audio development was supported by AKG, the high-end headphone and microphone division of Harman International that Samsung acquired in 2016. The speaker has eight microphones that will deliver far-field speech recognition.
The Galaxy Home has a feature called “sound steering,” which attempts to direct audio to where you are in a room after you say, “Hi Bixby, Sound Steer.” The speaker determines your location based on input from its eight microphones, which also help with noise cancellation to provide more accurate speech recognition.
Spotify’s CEO Daniel Ek announced a partnership with Samsung at the event that will mean the smart speaker will be able to use the music streaming service. The deal will give both Samsung and Spotify a chance to challenge the Apple Music service for users. Samsung added that users will soon get a multi-device experience, switching Spotify music from the Galaxy Home to a smart TV or listening on a smartphone. Spotify will also work with Bixby for voice control. “We are talking about hundreds of millions of devices,” said Ek.
Ticketmaster
Samsung has partnered with Ticketmaster and Live Nation to let Bixby users find concerts and events. Ticketmaster/Live Nation and Bixby marketing information says, “Fans will have a personalized experience from event discovery to seat selection using voice technology.” However, a purchase would have to be completed online.
Amazon provides new features for Alexa, including a customer profile API
Also, research on small-footprint natural language understanding
Amazon made a number of announcements:
- The availability of a Customer Profile API, which allows certain personalization of an Alexa skill.
- A new Alexa feature, Answer Updates, that gives users the option to receive a later answer to a question that Alexa can’t immediately answer.
- The release of a database that provides transliteration of names in multiple scripts to aid speech recognition where data used may be written in different languages.
- A research report on on-device (small-footprint) natural language understanding.
- A “Brief mode” that can shorten Alexa responses in some cases.
- Launch of the Alexa Skills Challenge: Tech for Good.
- Dunkin’ Donuts has added an Alexa skill for ordering.
Customer Profile API
Amazon announced the availability of the Customer Profile API. The new feature enables Alexa skill publishers to (1) learn a user’s first name to personalize the interaction, or (2) obtain the user’s email address or mobile number to provide interactivity through messaging. Alexa skills can now request user permission to use contact information as part of the skill experience. Previously, Alexa skills had no direct method for linking to Amazon user profile information. Google Assistant has had a similar feature.
Amazon policy guidelines include requirements for using the Customer Profile API (a sketch of a profile lookup follows the list), including:
- A link to the privacy policy that applies to the skill;
- The skill must request permission from the user, and contact details may be used only when features require the information, in accordance with both the privacy policy and applicable law;
- The skill may not associate an Amazon customer and their profile information with other customer account information the developer maintains outside of Alexa;
- The skill must request the data from the Alexa Customer Profile API whenever it is needed and may not call it from a stored location; and
- The Customer Profile API may not be used in any skill designed for use by children.
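As an illustration, a skill's backend might read a granted profile field roughly as follows. The request path and token location follow Amazon's published documentation for the Customer Profile API at the time, but verify the details against current docs before relying on them.

```python
# Sketch: fetch the user's email inside a skill's request handler.
# Paths/token location per Amazon's docs of the era; verify currency.
import requests

def get_customer_email(request_envelope: dict) -> str:
    system = request_envelope["context"]["System"]
    token = system["apiAccessToken"]   # present once permission is granted
    endpoint = system["apiEndpoint"]   # e.g. https://api.amazonalexa.com
    resp = requests.get(
        f"{endpoint}/v2/accounts/~current/settings/Profile.email",
        headers={"Authorization": f"Bearer {token}"},
    )
    if resp.status_code == 403:
        # Permission not granted: the skill should send a permissions card.
        raise PermissionError("Ask the user for the email permission.")
    resp.raise_for_status()
    return resp.json()  # the API returns the value as a JSON string
```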
Answer Updates
With an update to the Echo, Alexa will be able to remember a question it can’t answer and respond to the user later if it finds an appropriate response. Users are asked if they want Answer Updates when they ask a question that qualifies. The standard response when Alexa can’t answer is something like, “Sorry, I didn't understand the question.”
Amazon database of names
One issue in building an application for Alexa that accesses data from web sites or databases is that there may be mixed languages in the database, including different scripts, making it difficult to recognize the items by speech recognition. For example, a Japanese music catalogue may contain names written in English or the various scripts used in Japanese—Kanji, Katakana, or Hiragana.
In August, Amazon AI researchers publicly released a dataset of almost 400,000 transliterated names, containing a phonetic representation of the names. This will aid the development of natural-language-understanding systems that can search across databases that use different scripts. The researchers describe the dataset’s creation in a paper, together with experiments using the dataset to train different types of machine learning models. The authors review typical approaches to this problem, describe their own approach, and discuss why it is superior.
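A minimal sketch of how such a transliteration table could be applied at query time to match a recognized name against a mixed-script catalogue follows; the entries and catalogue are invented examples, not Amazon's data.

```python
# Sketch: try every known written form of a spoken name against a
# catalogue whose keys may be in any script. Entries are invented.
from typing import Optional

TRANSLITERATIONS = {
    "utada hikaru": ["宇多田ヒカル", "うただひかる", "Utada Hikaru"],
}
CATALOGUE = {"宇多田ヒカル": "album-123"}

def lookup(spoken_name: str) -> Optional[str]:
    """Return the catalogue entry for any written form of the name."""
    forms = TRANSLITERATIONS.get(spoken_name.lower(), [spoken_name])
    for written in forms:
        if written in CATALOGUE:
            return CATALOGUE[written]
    return None

print(lookup("Utada Hikaru"))  # -> album-123
```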
Research on small-footprint NLU
Amazon researchers developed a technique for doing natural language understanding on-device. The results of their research (“Statistical Model Compression for Small-Footprint Natural Language Understanding”) will be presented at this year’s Interspeech conference in Hyderabad, India.
Brief mode
Amazon added “Brief Mode” to Alexa. You can switch some Alexa responses to simple beeps instead of verbal confirmation by going to Settings, General, then Alexa Voice Responses in the Alexa app.
Alexa Skills Challenge
Amazon has recently launched the Alexa Skills Challenge: Tech for Good, an online competition for building Alexa skills that have a positive impact on the environment, a local community, and the world. Participants who submit an eligible certified skill will receive a participation prize, up to 10 finalists will win $5,000 and promotion in the Alexa Skills Store, one grand prize winner will receive $10,000, and there will be a $20,000 donation prize to benefit a charity. Submissions are open through September 17.
Dunkin' Donuts
Dunkin’ Donuts announced that On-the-Go Mobile Ordering is now available through Alexa. To start their order, guests say, “Alexa, order from Dunkin’ Donuts.”
With this new integration, DD Perks Rewards members can place a mobile order for Dunkin’ Donuts coffee, beverages, baked goods, and breakfast sandwiches on an Alexa-enabled device, and then speed past the line for pick-up.
Guests who have a DD Perks account and an Amazon account can link them together in the Alexa app once they open the skill, with all ordering and payments happening within Dunkin’ Donuts’ mobile platform. When a guest places and submits an order through Amazon Alexa, they will be asked which location they would like it to be sent to and the time their order will be ready. Guests can order from saved Favorites they have previously ordered via the Dunkin’ Mobile App.
Too much technology?
William Meisel, Executive Director, AVIOS, and President, TMA Associates
It seems that successful technology always draws warnings of its dangers. A Swiss scientist, Conrad Gessner, warned about the effects of information overload, describing in a book how the modern world overwhelmed people with data, “confusing and harmful” to the mind.
Gessner died in 1565. He was referring to the dangers of the printing press.
Socrates feared that writing would cause our memories to deteriorate. And TV supposedly created “couch potatoes.”
Today we also worry about the overuse of smartphones, not just for being too tied to them for communication, but for overuse in playing games and watching video. There are warnings that they might destroy our social skills by reducing human interaction.
Conversing with Computers: What’s behind this major trend?
William Meisel, Executive Director, AVIOS, and President, TMA Associates
Dealing with computers through human language used to be science fiction. It’s not anymore. What is driving this trend? How far can it go?
The general digital assistants
The general digital assistants (GDAs) such as Amazon’s Alexa, Google Assistant, Apple’s Siri, and Microsoft’s Cortana are leading the “talking to computers” trend. They are becoming available through more and more channels, from mobile phones and smartwatches to personal computers, home devices, and automobiles. The companies behind them are making huge investments to continually improve the quality of the speech understanding, the quality of the synthetic voices responding, and the ability to deal with a request properly. Read more
noHold’s knowledge management technology supports virtual assistants
The company’s platform supports multi-turn natural language conversations
noHold provides a cloud-based “Knowledge Management Platform” called SICURA. The company’s technology focuses on providing answers in natural language interactions. The knowledge hosted in the company’s platform can be delivered through an interactive Virtual Agent or through Search++, noHold’s natural language search engine.
noHold announced that its technology will enable Virtual Assistants to handle “multi-turn interactions with expansions.” Multi-turn interaction refers to the back-and-forth exchange within a conversation. Expansions refer to the various directions a conversation can take.
The company provided examples of expansions (a sketch of how such expansions might be dispatched follows the list):
- Opening: Understanding various greetings such as “how are you” and “what is your name.”
- Screen: Handling questions such as “what do you know about” as well as the explicit “do you know about routers?”
- Closing: Handling various ending statements such as “bye” or “see ya.”
- Escape: Moving on when a user types “never mind.”
- Repeat: Repeating itself if end users ask things such as “Can you say that again?”
- Elicit: End users can elicit information from the Virtual Assistant, such as providing options when a user can’t directly answer a question from the virtual agent.
- Paraphrase: End users can ask for an explanation or elaboration of a virtual agent request for information.
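Here is the promised sketch of dispatching such expansions ahead of the knowledge base; the phrase lists and fall-through behavior are illustrative assumptions, not SICURA's implementation.

```python
# Sketch: classify an utterance into an expansion type before falling
# through to the Virtual Agent's knowledge base.
EXPANSIONS = {
    "opening": ["how are you", "what is your name"],
    "closing": ["bye", "see ya"],
    "escape":  ["never mind"],
    "repeat":  ["can you say that again"],
}

def classify_expansion(utterance: str) -> str:
    text = utterance.lower().strip("?!. ")
    for kind, phrases in EXPANSIONS.items():
        if any(p in text for p in phrases):
            return kind
    return "knowledge_query"  # fall through to the knowledge base

for u in ["How are you?", "Never mind.", "Do you know about routers?"]:
    print(u, "->", classify_expansion(u))
```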
Other features that help create a more dynamic Virtual Assistant include:
- Asking questions to confirm understanding;
- Managing clarifying questions;
- Handling a change of subject;
- Dealing with interruptions;
- Inferring the meaning, when possible, from an incomplete response;
- Handling exceptions; and
- Retaining context.
Diego Ventura, CEO and Founder of noHold, said, “Incorporating these features into the SICURA platform is important…Providing end users with the most natural experience helps decrease frustration, and at the same time increase satisfaction.”
Source: LUI News, June 2018 issue
Bank of America launches virtual assistant to its 25 million mobile clients
Clients can interact with “Erica” by texting, talking, or tapping options on the screen
Bank of America is rolling out what it characterizes as “the first widely available AI-driven virtual assistant of its kind in financial services.” The virtual assistant “Erica” is available to its 25 million mobile clients through the Bank of America mobile app, with a staged rollout throughout June. Erica can interact in natural language using voice or text.
Michelle Moore, head of digital banking at Bank of America, said in a statement that Erica will make it easier for clients to find what they are looking for, providing new and interactive ways to do their banking using voice, text, or gesture. She added, “Through Erica, we are also delivering personalized solutions at scale by providing insights, such as how you can improve your credit score or create a budget.”
Currently, clients can ask Erica to:
- Search for past transactions, such as checks written or shopping activity, across any one of their accounts;
- Provide information about their credit scores and connect them to information that will help them learn about money management through “Better Money Habits”;
- Navigate the app and access key information, such as routing numbers or the closest ATM or financial center;
- Schedule face-to-face meetings with more than 25,000 specialists in the bank’s financial centers;
- View bills and schedule payments;
- Lock and unlock debit cards; and
- Transfer money between accounts or send money to friends and family with Zelle, a US-based digital payments network owned by Early Warning Services, in which BofA is an investor.
Erica is designed to learn from clients’ behaviors over time, helping them accomplish simple to complex tasks within the mobile banking app with easy-to-follow prompts. Clients can interact with Erica any way they choose, including texting, talking, or tapping options on their screen.
“Erica’s knowledge of banking and financial services increases with every client interaction,” said Aditya Bhasin, head of consumer and wealth management technology at Bank of America. “In time, Erica will have the insights to not only help pay a friend or list your transactions at a specific merchant, but also help you make better financial decisions by analyzing your habits and providing guidance.”
According to Bank of America, in the coming months, Erica will be able to tackle more complex tasks, such as:
- Sending proactive notifications to clients about upcoming bills and payments;
- Displaying key client spending and budgeting information and advice on ways to save;
- Identifying ways for clients to save more;
- Managing credit and debit cards to help notify clients of card changes; and
- Showing upcoming subscription charges and monitoring transaction history and changes.
Bank of America began piloting this technology with its employees in late 2017 and started rolling it out to clients in March of this year. Since its development, the bank has:
- Integrated more than 200,000 different ways for clients to ask financial questions;
- Added new functionality based upon client patterns and behaviors;
- Expanded Erica’s conversational knowledge, including the ability to engage clients with salutations and well wishes, such as “happy birthday”; and
- Implemented a real-time feedback capture to inform future enhancements.
Source: LUI News, June 2018 issue
Global smart speaker shipments reached 9.2 million units in Q1 2018
According to the latest quarterly research from Strategy Analytics, global smart speaker shipments reached 9.2 million units in Q1 2018. Market leader Amazon is estimated to have shipped 4 million smart speakers during the quarter, though its global market share was nearly halved from the same period last year. Google and Alibaba consolidated their number two and three rankings, while Apple became the fourth largest smart speaker brand worldwide following the launch of the HomePod in February 2018.
David Watkins, Director at Strategy Analytics commented, “Amazon and Google accounted for a dominant 70% share of global smart speaker shipments in Q1 2018 although their combined share has fallen from 84% in Q4 2017 and 94% in the year ago quarter. This is partly as a result of strong growth in the Chinese market for smart speakers where both Amazon and Google are currently absent. Alibaba and Xiaomi are leading the way in China and their strength in the domestic market alone is proving enough to propel them into the global top five.”
Intelligent Virtual Assistant Market projected to grow at a CAGR of 38.3% from 2017 to 2025
MarketInsightsReports estimated that the global Intelligent Virtual Assistant Market was valued at $1.4 billion in 2016 and is projected to reach $26.8 billion by 2025, growing at a CAGR of 38.3% from 2017 to 2025. The report defines an Intelligent Virtual Assistant as an application program that understands natural language voice commands and completes tasks for the user. An Intelligent Virtual Assistant can improve the online customer service experience, increase sales and reduce costs, the firm indicated.
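For readers who want to check the arithmetic, the CAGR figure follows from the standard compound-growth relation between the endpoint values; the small discrepancy with the reported rate is within the rounding of the dollar figures.

```latex
% Compound growth check (endpoint values as reported, rounded):
% start V_s = \$1.4B (2016), end V_e = \$26.8B (2025), n = 9 periods.
\mathrm{CAGR} = \left(\frac{V_e}{V_s}\right)^{1/n} - 1
             = \left(\frac{26.8}{1.4}\right)^{1/9} - 1 \approx 0.39
```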
Smartphone shopping to increase, with a digital shopping assistant desired as help
Ericsson provides solutions for communications service providers. The company’s Consumer & Industry Lab produced a report on smartphone shoppers. The report is based on a survey of advanced internet users in ten influential cities globally. The report found:
- Smartphone shopping is expected to peak globally in the coming few years, with 43% of those surveyed already making purchases on their phone weekly;
- The majority of smartphone shoppers expect most people to have a personal shopping advisor within 3 years, and there is emerging demand for digital shopping assistants to help with purchase decisions; and
- 69% of Augmented Reality and Virtual Reality users think these technologies will give smartphones all the benefits of physical stores within 3 years.
Selecting the type of shopping assistant for home and personal purchases will soon be more important than the actual purchase decision, the report concluded. For example, 63% of smartphone shoppers want help with price comparisons and 48% want help making shopping decisions easy.
The report presents insights based on an online survey (carried out in January 2018) of 5,048 advanced internet users in Johannesburg, London, Mexico City, Moscow, New York, San Francisco, São Paulo, Shanghai, Sydney, and Tokyo. Respondents were aged 15 to 69 and fit the profile of urban early adopters.
Press Release: Conversational Interaction Conference 2019 announced by AVIOS
SAN JOSE, Calif., April 3, 2018 /PRNewswire/ -- The Conversational Interaction Conference will be held March 11-12, 2019, in San Jose, California. The conference covers commercial activities that use automated computer processing of human-language speech or text (“natural language”), one of the most rapidly growing areas of Artificial Intelligence.
Read More
Perhaps we should be talking about Computer Intelligence (CI), not AI
Artificial Intelligence—AI—is obviously “hot.” Execs of leading companies have talked about their companies being “AI-First,” or otherwise emphasized the importance of the technology to their company’s long-term growth. This newsletter has emphasized the importance of the Language User Interface in connecting humans to technology to maintain the usability of that technology. Certainly, speech recognition and natural language processing are part of what most people consider AI.
But not all the attention to AI is celebratory. Brilliant men such as Elon Musk have warned that AI poses a risk to humanity when computers get smarter than humans, even to the extent of saying we need to colonize Mars to escape their domination. In a somewhat strange approach to the problem, he even helped found an AI startup, OpenAI, that may somehow address the challenge by giving more people the “weapon.”
Voice technology recognized for its potential revolutionary role
Speech and natural language technology have evolved to the point where they are a viable supplement, even an alternative, to the over-stretched visual user interface. This “tipping point” in utility was highlighted in January by the many announcements of its use at the Consumer Electronics Show in Las Vegas. The mainstream media are also noting this breakthrough; for example, it made the cover of The Economist, a thoughtful journalistic weekly published in the UK since 1843.
The Economist notes, as an example, that even before Christmas the Echo was already resident in about 4% of American households. (Amazon announced after the article was written that sales of the Echo were up nine times over the previous holiday season, with “millions” sold.) The article said that Apple’s Siri handles over two billion commands a week, and 20% of Google searches on Android-powered handsets in America are input by voice. Further, the Economist article said, “dictating e-mails and text messages now works reliably enough to be useful.”
The article summarizes: “This is a huge shift. Simple though it may seem, voice has the power to transform computing, by providing a natural means of interaction…The arrival of the touchscreen was the last big shift in the way humans interact with computers. The leap to speech matters more.”
Just the answer, please!
By Bill Meisel, CI Conference organizer; President TMA Associates
Natural Language Processing (NLP) technology generally focuses on interpreting the “intent” (basic information or action being requested) of text or speech and any “entities” (further information) required to address that intent. The next step is, of course, creating an action that satisfies that request. That may be a “simple matter of software,” e.g., using an Application Programming Interface (API) that turns up the temperature on your thermostat or places a call to a contact in your smartphone. In general, the action step is considered outside of the NLP realm.
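A minimal sketch of that action step follows: the NLP layer emits an intent plus entities, and a dispatcher maps them onto an API call. The handler names and the thermostat function are hypothetical.

```python
# Sketch: dispatch an NLP result {intent, entities} to an action handler.
def set_thermostat(temperature: int) -> str:
    # Stand-in for a real device API call.
    return f"Thermostat set to {temperature} degrees."

ACTIONS = {
    "set_temperature": lambda e: set_thermostat(int(e["temperature"])),
}

def act(nlp_result: dict) -> str:
    intent = nlp_result["intent"]
    if intent not in ACTIONS:
        return "Sorry, I can't do that yet."
    return ACTIONS[intent](nlp_result.get("entities", {}))

print(act({"intent": "set_temperature", "entities": {"temperature": "72"}}))
```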
Cepstral demos of text-to-speech available online
Cepstral Voices can speak any text they are given with whatever voice you choose. Try out a sample of some of the voices that the company currently has available and experience the quality of today’s technology.
What’s all this about bots?
By Bill Meisel, CI Conference organizer
There have been a number of announcements from messaging service vendors about support for “bots.” The idea is simply that you, in effect, text a company (send a text message from within the messaging application) to get its services. No navigating to a web site, no downloading of an app. The bots interact by text in human language (“natural language”). Microsoft CEO Satya Nadella summarized the trend at a recent Microsoft developers’ conference: “Bots are the next applications.”
—> More