Cracked Conversations: What to Do When Chatbots Aren’t Enough

Enterprise Search to Complement Your Chatbot Experience

By: Robert Smith, Sales Engineer, and John Finneran, Product Marketing

Conversational AI, or chatbot, vendors are everywhere, deafening customers with the promise of AI-powered solutions for their customer service needs. According to Capterra, 158 companies currently offer chatbot software. In its Q2 2019 evaluation of the emerging market for conversational AI for customer service, Forrester identified the 14 most significant providers in the category, among them [24]7.ai, Avaamo, Cognigy, eGain, Inbenta Technologies, Interactions, IPsoft, LogMeIn, Nuance Communications, Omilia, Salesforce, and Verint.

All this noise makes it hard to understand what actually works best to improve customer experience.

Chatbots work best guiding users along straightforward, well-defined conversational paths. If a customer asks new, unpredicted questions, the typical chatbot gets confused. More complex questions require complementary solutions.

Sinequa offers one such complementary solution – Enterprise Search that can work with chatbots to help customers and employees find what they need.

We have spoken with a number of companies, ranging from those considering the technology, to those building prototypes, to those deploying chatbots in customer-facing applications.

Several concerns about the value produced by chatbot deployments come up repeatedly:

  • Slow conversation speeds
  • Conversation path-sets that grow ever larger and longer
  • Low accuracy, because the chatbot cannot answer a question and cannot keep the chat on track
  • High development effort, with too many expert hours spent conceiving, designing, deploying, and maintaining conversational paths

Some Reasons Why

Chatbots work best when guiding a well-defined type of user through a set of preconceived conversational paths.

The typical chatbot’s tooling provides a graphical interface and some testing capabilities; conceiving, designing, deploying, and maintaining the conversational paths themselves is up to you.

  • When you consider how many paths a user might take, multiplied by the number of user types, the work can grow to an astonishing amount.
  • When chatbots have many of these paths to evaluate, they tend to slow down, compromising the chat experience.
  • Most requests for information are ad hoc and therefore not well suited to a pre-planned, pre-built conversation flow.
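To make the first point concrete, here is a toy back-of-the-envelope calculation (all numbers are hypothetical, chosen only to illustrate how multiplicatively the path count grows):

```python
# Toy illustration (hypothetical numbers): the design and maintenance burden
# grows multiplicatively with user types, intents, and conversation depth.
user_types = 4         # e.g. retail, business, premium, prospect
intents = 12           # distinct tasks the bot supports
branches_per_step = 3  # clarifying choices offered at each turn
steps = 4              # average depth of a conversation

paths_per_intent = branches_per_step ** steps  # 3^4 = 81
total_paths = user_types * intents * paths_per_intent

print(total_paths)  # 4 * 12 * 81 = 3888 paths to design, test, and maintain
```

Even these modest assumptions yield thousands of paths, each of which someone has to conceive, test, and keep current.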

When Do Chatbots Make Sense?

Consider, for example, a chatbot at your local bank:

  • They have a limited set of offerings for users to choose from
    • E.g., checking, savings, mortgages, lines of credit
  • Those offerings have a limited number of actions
    • E.g., check deposit, transfer, bill pay, balance inquiry
  • The site is often for reference, not as much for execution
    • To actually open an account, you typically have to apply in person
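A bot with a scope this narrow can be sketched in a few lines. The offerings and actions below are illustrative, mirroring the bank example above, not any real product:

```python
# Minimal sketch of a narrowly scoped banking chatbot: a fixed menu of
# offerings, each with a fixed set of supported actions (illustrative names).
OFFERINGS = {
    "checking": ["deposit", "transfer", "bill pay", "balance inquiry"],
    "savings": ["deposit", "transfer", "balance inquiry"],
    "mortgage": ["balance inquiry"],
    "line of credit": ["balance inquiry"],
}

def handle(offering: str, action: str) -> str:
    actions = OFFERINGS.get(offering)
    if actions is None:
        # Out of scope: the bot deflects gracefully instead of guessing.
        return "Sorry, I can only help with: " + ", ".join(OFFERINGS)
    if action not in actions:
        return f"For {offering} I can help with: " + ", ".join(actions)
    return f"OK - starting '{action}' for your {offering} account."

print(handle("checking", "transfer"))
print(handle("crypto", "trade"))  # unknown offering: polite deflection
```

Because the menu is closed, every possible conversation is enumerable up front, which is exactly why chatbots succeed in settings like this.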

If you can’t narrow the scope to specific user types and paths like these, then the outcome of multi-step “chats” is, by definition, less predictable, leading to a higher failure rate.

This also makes it difficult for some chatbots to get a PTO (Permit to Operate), because companies will not let applications go into production that can’t guarantee outcomes. This is to avoid “rogue AI” situations, among other things.

Addressing the Challenge

Enterprise search, like Sinequa’s, leverages natural language processing (NLP) to get users the most relevant content, without the chatbot’s requirement that every conversational path be designed, built, and maintained.

Where chatbot interactions are helpful, the chatbot can connect to enterprise search: when the chatbot receives a user’s request for information, it can refine and forward the request to the underlying Sinequa search engine, then channel the results back into the user’s conversation.
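The hand-off can be sketched as a simple fallback: scripted intents answer first, and anything ad hoc is routed to the search backend. This is a hypothetical illustration; `search_backend` stands in for a real search platform API, and the intent and answer text are invented for the example:

```python
# Hypothetical sketch of a chatbot that falls back to enterprise search.
# A real integration would call the search platform's REST API; the stub
# below only imitates the shape of such a call.
def search_backend(query: str) -> list[str]:
    # Placeholder for a real search client returning ranked results.
    return [f"Top result for: {query}"]

# Scripted paths the chatbot was explicitly designed for (illustrative).
SCRIPTED_INTENTS = {
    "reset password": "Go to Settings > Security > Reset.",
}

def answer(user_message: str) -> str:
    scripted = SCRIPTED_INTENTS.get(user_message.lower().strip())
    if scripted is not None:
        return scripted                     # stay on the designed path
    results = search_backend(user_message)  # ad-hoc question: search instead
    return "Here is what I found:\n" + "\n".join(results[:3])

print(answer("reset password"))
print(answer("What is our travel reimbursement policy?"))
```

The design choice is that the chatbot never has to say “I don’t understand”: unscripted questions degrade gracefully into a ranked list of relevant content rather than a dead-end conversation.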

In Short

By using chatbots and a powerful enterprise search platform together for the jobs they were designed for, you can deliver profitable and productive solutions that enhance both customer and employee experiences.



NEW YORK, NY – May 29, 2019 – Sinequa, recognized as a leader in cognitive search and analytics, has been named a Leader in “The Forrester Wave™: Cognitive Search, Q2 2019.”¹

In the report, independent research firm Forrester Research writes: “Sinequa augments intelligence. Sinequa’s aim is to transform organizations into ‘information-driven’ enterprises by giving every employee fast access to the right information, using cognitive search technology to surface insights and enable expert search in specific domains. Sinequa combines the latest open source ML technology with its own NLU technology in a finely balanced engine, and its solution adapts flexibly to specialized fields such as life sciences, financial services, and manufacturing.”

The Forrester Wave is a report that evaluates vendors and products in a technology market to help customers make purchasing decisions. In this report, a total of 12 vendors were evaluated across three categories: Current Offering, Strategy, and Market Presence. Sinequa received the highest scores on criteria in each category: advanced information handling in Current Offering; ability to execute, solution roadmap, and customer service in Strategy; and market awareness in Market Presence.

Commenting on the evaluation, Alexandre Bilger, President and CEO of Sinequa, said: “We are honored to be recognized as a Leader in Forrester’s report. By ingesting and analyzing huge volumes of enterprise data, our customers can extract contextual, actionable information in a timely manner, leading to better insights, decision-making, and productivity. We are confident in the strength of our platform and in our ability to support organizations in financial services, manufacturing, and pharmaceuticals.”

The report further states: “Sinequa’s strengths are its data connectors, ingestion intelligence (indexing across data sources), intent intelligence (accurate answers to searches), and tuning tools. Customers deploying the most advanced cognitive search applications will appreciate the breadth and depth of Sinequa’s ingestion intelligence.” It concludes: “Sinequa also has a potent roadmap that includes using the latest open source AI technology.”

The Sinequa platform was recently enhanced with a responsive user-interface design framework based on Angular 7 for presenting insights to end users. Machine learning models based on Spark or TensorFlow can now be applied directly in the indexing pipeline, and significant new capabilities have been added to further automate the interpretation of intent and meaning in queries and language.


The report “The Forrester Wave: Cognitive Search, Q2 2019” can be downloaded below.

1 Forrester Research, Inc., “The Forrester Wave™: Cognitive Search, Q2 2019” by Mike Gualtieri, with Srividya Sridharan and Elizabeth Hoberman.




For more information, please visit .


The Heat in The Trend Point: June 3 to June 7

In a globalized world where business operates with an evolving set of practices and norms, enterprise technology is affected in many areas. Several recent articles that speak to this idea caught our attention in The Trend Point over the past week.

There is a pressing need for solutions geared toward large businesses that operate on a global scale and have to deal with large amounts of data. “Getting a Global Perspective on Enterprise Search” echoes this idea:

The first day of the conference started with Ed Dale of Ernst & Young talking about implementing enterprise search for a truly global organisation. E&Y’s search is over a surprisingly small number of documents (only 2 million or so) but they are lucky enough to have a relatively large and experienced team running their search as an ongoing operation – no ‘fire and forget’ here (an approach often taken and seldom successfully).

It is no surprise to see an article like “Sage Advice for Data Storage and Analytics” call for attention to scalability and searchability. The post relays the following:

The repository should be highly scalable with respect to the storage capacity and amount of requests it can handle. Because of ever generating digital content out of various business processes, size of the stored content can grow rapidly and the storage limit should not be a roadblock for any content repository. Similarly, the architecture should be capable enough of handling a varying number of user requests.

Terms like semantic search, natural language processing, and text analysis are popping up everywhere in regard to enterprise software. The following summary in “SAP HANA Project Addresses Text Analysis” breaks down some of these definitions:

The two terms are used interchangeably by a lot of people. There is a lot of gray area in defining ‘Text Analysis’ and differentiating it from ‘Text Mining.’ But from the SAP perspective, ‘Text Analysis,’ refers to the ability to do Natural Language Processing, linguistically understand the text and apply statistical techniques to refine the results. Text Mining is applying algorithms, like predictive analytics, for post-processing of data (akin to data mining).

As successful businesses go global, they often need greater scalability and text analysis capabilities. One feature unique to Sinequa’s Unified Information Access is that, in addition to semantic search and text analysis (“natural language processing”), the solution can interpret text in multiple languages and scales to very large volumes. An enterprise search solution is not prepared to enter the globalized market without such technology. Sinequa is particularly well poised to address this because its research continues every day: just as language naturally evolves, Sinequa’s methods evolve to mirror those changes.

Jane Smith, June 12, 2013

Sponsored by, developer of Beyond Search


Big Data: Marketing Nirvana or the Next Big Bubble to Burst?

Everyone is surfing the Big Data wave, redefining it so that their role in this new “hot” market is maximized. Some journalists have already started blacklisting press releases on the subject, since they receive too much fanciful nonsense.

That is a pity for the companies that really have something to offer in this market. If you don’t agree that Big Data defines a real market, let us take a simple approach: we are talking about enterprises and government agencies that have to deal with vast amounts of data coming from very varied sources, in wildly different formats, and flowing into their storage at great speed. This market is addressed by products and services that help these large organizations not just cope with the deluge of data but extract the useful information it contains.

Some of the players in the Big Data market must be reminded of Molière’s Monsieur Jourdain, who learnt that he had been “speaking in prose” before knowing what prose was: they had been serving the Big Data market before they knew it would be called that.

At Sinequa, we have been dealing with Big Data (in the above sense) for quite some time: our Unified Information Access solution has been used by large enterprises and government agencies to plough through billions of database records, business transactions, and unstructured data of all sorts, like documents, emails, and social network data. Our semantic analysis and natural language processing have served to make sense of this mass of data and to create structure where there was none, all in order to find sense in chaos. The challenge for us was to combine deep analysis with high performance on big volumes. We have invested a lot of energy, and dare I say brain power, in our solution to satisfy big customers like Siemens, Crédit Agricole, Mercer, and Atos in their quest to extract information from their big data volumes that is useful and relevant for their employees and customers.

The Grail of the Structured Universe

For many years, IT professionals have chased the grail of “all-structured” enterprise data. This is how engineers were educated: you must structure the world to get a grip on it; if you need to search, you haven’t done your homework. For many of them, it is thus painful to give up on this goal, and on years of work and huge investments, in order to turn to search technology that can cope with the unstructured world far more easily while demanding an order of magnitude less time and money. Thus search technology has evolved and is now used at the core of Unified Information Access platforms.

It’s not all or nothing

Now let’s not fall into the trap of claiming that Big Data is all about search and our kind of content analytics, just because we have been in Big Data up to our ears long before the people who invented the name. There are many approaches to Big Data and many useful tools and solutions to deal with it. But Unified Information Access platforms and semantic technologies are certainly part of any complete solution set. And our customers benefit from the fact that we have been in Big Data quite some time before the concept entered the hype cycle: Our solutions have matured over time.

Is Big Data a bubble that will burst?

If you tie it inseparably to its name, “Big Data,” then it might well disappear. But the very real problems sketched above will not go away: heterogeneous and continuously changing big data volumes will increase rather than diminish.

