Les Misérables and the Digital Workplace

When Optimizing Data Access Shows Soft and Hard ROI


Les Misérables? Ring a bell?

Of course! It is a famous novel by Victor Hugo, and the story is amazing! But what does it have to do with the digital workplace? Let me focus on a specific quotation and draw some parallels. It occurs in the chapter where Jean Valjean and Cosette are residing in a house with a garden, in which Victor Hugo explores the multiple dimensions of nature. What caught my attention is the following question: “Where the telescope ends, the microscope begins. Which of the two has a grander view?” The quotation stayed with me because it evokes the digital workplace, particularly data access. When content is large and diverse, having relevant and timely information is critical to companies. There are different methods to query that data, and the kind of ROI that can be expected varies by orders of magnitude.

The telescope – see far into the universe

What does it mean for the digital workplace? This means breaking internal data silos and opening up global information to your entire organization (any information shared by all, such as policies, procedures, HR information, compliance, etc.). Having a digital workplace that includes an enterprise search layer that connects people to corporate content is therefore critical. Every employee can see beyond their reach and access data spread over a wide range of repositories. This data is made available to everyone, and everyone stays informed.

Such use of enterprise search does not bring a high degree of business specificity. This is typically a Google-like experience with a simplified interface that is used indifferently by marketing, sales, engineering, or accounting people – any employee. Because it works across business units to address multiple audiences (a horizontal approach), its value comes from helping a large number of employees find information; the ROI (Return on Investment) is based on an overall improvement of the company’s productivity. According to McKinsey, employees spend close to two hours per day searching for information. In addition to increased productivity, such employee empowerment also has positive impacts on a company’s culture and employees’ wellbeing. This is what we call a soft ROI. A soft ROI is not easy to measure or to rely on in a business case; its benefits are indirect. Having said that, some dollar savings can be estimated through productivity gains. The main assumptions include the number of employees, the average salary, and the percentage of working time saved thanks to a simple information finder. A summary of an ROI that was calculated for a company of 30,000 employees can be seen below.

ROI of Search for Digital Workplace

Assumptions were made regarding user adoption ramp-up schedules, with a greater number of users and a higher efficiency over time.  The ROI in this example is close to 13 million dollars over 3 years.
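To make the mechanics concrete, here is a minimal sketch of how such a productivity-based estimate can be put together. The average salary, adoption ramp, and time-saved figures are hypothetical placeholders, not the actual assumptions behind the example above.

```python
# Hypothetical back-of-the-envelope soft-ROI estimate for enterprise search.
# All figures below are illustrative placeholders, not the case study inputs.

EMPLOYEES = 30_000                            # company size from the example above
AVG_SALARY = 60_000                           # hypothetical fully loaded salary (USD/year)
ADOPTION_BY_YEAR = [0.30, 0.60, 0.80]         # hypothetical user-adoption ramp-up
TIME_SAVED_BY_YEAR = [0.002, 0.004, 0.006]    # hypothetical share of working time saved

total_savings = 0.0
for year, (adoption, time_saved) in enumerate(zip(ADOPTION_BY_YEAR, TIME_SAVED_BY_YEAR), start=1):
    yearly = EMPLOYEES * adoption * AVG_SALARY * time_saved
    total_savings += yearly
    print(f"Year {year}: ~${yearly:,.0f} in productivity gains")

print(f"Total over {len(ADOPTION_BY_YEAR)} years: ~${total_savings:,.0f}")
```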

The microscope – explore what is next to you

How would this translate for the digital workplace? This ability would be very helpful in assisting knowledge workers with their knowledge-intensive daily tasks. The term “knowledge worker” was first coined by Peter Drucker, who defined knowledge workers as high-level workers who use advanced data collection techniques, statistics, complex correlations, case studies, and much more. Data is key in helping them perform their jobs. And guess what? Enterprise search technology can also help in this context.

As opposed to the simple Google-like experience, the objective here is to design a “Search-based application” customized with business-specific knowledge. The value resides in the ability to follow a targeted business function along the key phases of its work. Only enterprise search can index and aggregate very diverse data coming from both structured and unstructured content in order to extract the nuggets of information and provide a unified view on a specific topic (product, customer, company…). For example, for a bank advisor, it is critical to aggregate internal data such as payments, information from the CRM, and transaction history, as well as external data, such as market analysis and news, to recommend the most relevant products to a customer. The ROI is no longer related to a high number of people but to clear business-process improvements. To do so, we target a precise group of knowledge workers on a designated use case in a specific vertical – a triptych of “industry, use case, persona.”

Let’s take the example of clinical trials at a large pharmaceutical company. Clinical trials are research studies aimed at evaluating a new drug; they are the primary way that researchers find out whether a new treatment is safe and effective. In that case, the triptych mentioned previously would be “pharmaceutical, clinical trials, researchers.” A specific “Search-based application” was designed to dive into clinical data dispersed across millions of files and multiple systems and applications, surfacing insights to support the evaluation of new drugs. The enterprise search technology has increased speed to market for new drugs. Knowing that, in the pharma industry, the average cost of new drug development is $1.0 billion, any slight improvement in the global process immediately gives better margins, leading to bottom-line improvement. This is what we call a hard ROI. This type of ROI refers to clear measures that can be quantified in hard dollars. To give you a flavor of the way the above pharmaceutical company calculated the ROI, you’ll find below some of the assumptions that were made (for your information, clinical trials include three main phases):

  • 10% to 14% of all drugs that make it to phase 1 succeed
  • 31% of all drugs that make it to phase 2 succeed
  • 50% of all drugs that make it to phase 3 succeed
  • 32% of drugs make it to phase 3
  • Average trial costs – phase 1: $170m; phase 2: $400m; phase 3: $530m
  • The cost of a trial is between $800m and $1.8b
  • The cost of patient/site recruitment averages $40k per patient/site

Locating key data and deriving insights are key success factors for researchers. The “Search-based application” has increased efficiency, shaving months off the drug development timeline. According to this large pharmaceutical corporation, the ROI realized is 25 million dollars per drug.
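As a purely illustrative sketch, the assumptions listed above can be combined into a back-of-the-envelope trial cost. The phase 1 to phase 2 transition rate is not stated in the list and is added here as a hypothetical value, so the output is indicative only and is not the company’s actual calculation.

```python
# Illustrative combination of the published trial assumptions above.
# The phase 1 -> phase 2 transition rate is a hypothetical assumption
# (it is not given in the list); all outputs are indicative only.

PHASE_COSTS = {"phase 1": 170e6, "phase 2": 400e6, "phase 3": 530e6}  # from the list above
P_REACH_PHASE2 = 0.40  # hypothetical share of phase-1 candidates reaching phase 2
P_REACH_PHASE3 = 0.32  # from the list above

# Cost of a candidate that runs all three phases
full_trial_cost = sum(PHASE_COSTS.values())

# Expected spend per candidate entering phase 1, once attrition is taken into account
expected_spend = (PHASE_COSTS["phase 1"]
                  + P_REACH_PHASE2 * PHASE_COSTS["phase 2"]
                  + P_REACH_PHASE3 * PHASE_COSTS["phase 3"])

print(f"Full three-phase trial cost: ${full_trial_cost / 1e9:.2f}B")
print(f"Expected trial spend per phase-1 candidate: ${expected_spend / 1e6:.0f}M")
```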

So, which has the grander view: the telescope or the microscope?

Both reveal worlds that are normally hidden from view. For the digital workplace and data access, you need them both. Accessing the right information at the right time is becoming ever more complex, and there are many factors with the potential to make it even more complicated. Whether for corporate content or business-specific data, enterprise search can help with both dimensions. The ability to retrieve a company’s data assets and provide actionable insights in order to make informed decisions is vital for business efficiency. By applying the right methods and technologies, you can be sure that “Even the darkest of night will end and the sun will rise” – another quote from Les Misérables.



Mind the Information Gap

The following was originally published on the Benelux Intelligence Community website.

Over the last several years, data analytics has become a driving force for organizations wanting to make informed decisions about their businesses and their customers.  With further advancements in open source analytic tools, faster storage and database performance and the advent of sensors and IoT, IDC predicts the big data analytics market is on track to become a $200 billion industry by the end of this decade.

Many organizations now understand the value of extracting relevant information from their enterprise data and using it for better decision-making, superior customer service and more efficient management. But to realize their highest potential in this space, organizations will have to evolve from being “data-driven” to being “information-driven.” While these two categories might sound similar, they’re actually quite different.

In order to make a data-driven decision, a user must somehow find the data relevant to a query and then interpret it to resolve that query. The problem with this approach is that there is no reliable way to know how complete and accurate the data found actually is.

Being information-driven means having all of the relevant content and data from across the enterprise intelligently and securely processed into information that is contextual to the task at hand and aligned with the user’s goals.

An information-driven approach is ideal for organizations in knowledge-intensive industries such as life sciences and finance, where data sets are growing in number and volume and arriving from diverse sources. The approach has repeatedly proven to help research and development organizations within large pharmaceutical companies connect experts with other experts and knowledge across the organization to accelerate research, lab tests and clinical trials and be first to market with new drugs.

Or think of maintenance engineers working at an airline manufacturer trying to address questions over an unexpected test procedure result. For this, they need to know immediately the particular equipment configuration, the relevant maintenance procedures for that aircraft and whether other cases with the same anomaly are known and how they were treated. They don’t have time to “go hunting” for information. The information-driven approach draws data from multiple locations, formats and languages for a complete picture of the issue at hand.

In the recent report, “Insights-Driven Businesses Set the Pace for Global Growth,” Forrester Research notes organizations that use better data to gain business insights will create a competitive advantage for future success. They are expected to grow at an average of more than 30 percent each year, and by 2020 are predicted to take $1.8 trillion annually from their less-informed peers.

To achieve this level of insight, here are several ways to evolve into an information-driven organization.

Understand the meaning of multi-sourced data

To be information-driven, organizations must have a comprehensive view of information and understand its meaning. If it were only about fielding queries and matching on keywords, a simple indexing approach would suffice.

The best results are obtained when multiple indexes are combined, each contributing a different perspective or emphasis. Indexes are designed to work in concert to provide the best results such as a full-text index for key terms and descriptions, a structured index for metadata and a semantic index that focuses on the meaning of the information.
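A minimal sketch of that blending idea follows; the toy scoring functions and weights are illustrative assumptions, not a description of any particular product’s ranking.

```python
# Minimal sketch: blend scores from three kinds of indexes into one ranking.
# The toy scoring functions and weights are illustrative, not a real product's behavior.

DOCS = {
    "doc1": {"text": "travel expense reimbursement policy", "dept": "HR"},
    "doc2": {"text": "quarterly revenue report and forecast", "dept": "Finance"},
}

def fulltext_score(doc, query):
    """Toy full-text score: fraction of query terms found in the text."""
    terms = query.lower().split()
    return sum(t in doc["text"] for t in terms) / len(terms)

def structured_score(doc, dept_filter):
    """Toy structured score: exact match on a metadata field."""
    return 1.0 if doc["dept"] == dept_filter else 0.0

def semantic_score(doc, query):
    """Placeholder for an embedding-based similarity (hypothetical)."""
    return 0.5  # a real system would compare query and document embeddings here

WEIGHTS = {"fulltext": 0.5, "structured": 0.2, "semantic": 0.3}

def blended(doc, query, dept_filter):
    return (WEIGHTS["fulltext"] * fulltext_score(doc, query)
            + WEIGHTS["structured"] * structured_score(doc, dept_filter)
            + WEIGHTS["semantic"] * semantic_score(doc, query))

ranked = sorted(DOCS, key=lambda d: blended(DOCS[d], "expense policy", "HR"), reverse=True)
print(ranked)
```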

Maintain strong security controls and develop contextual abilities

Being information-driven also requires an enterprise-grade tool with strong security controls to support complex, multi-layered security requirements, and with contextual enrichment to learn an organization’s vernacular and language.

Capture and leverage relevant feedback from searches

As queries are performed, information is captured about how end users interact with the system and leveraged in all subsequent searches. This approach ensures the quality of information improves as the system learns which documents are used and valued the most.
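Here is a minimal sketch of that feedback loop, assuming a hypothetical click log; the boost formula is an illustrative choice, not a product formula.

```python
import math
from collections import Counter

# Hypothetical click log: which documents users opened from past result lists
click_log = ["doc_42", "doc_42", "doc_7", "doc_42", "doc_13"]
clicks = Counter(click_log)

def boosted_score(doc_id, base_score):
    """Boost the base relevance score for documents users open most often.
    The log-based boost is an illustrative choice, not a product formula."""
    return base_score * (1.0 + 0.1 * math.log1p(clicks[doc_id]))

print(boosted_score("doc_42", 2.0))  # frequently clicked -> slightly higher score
print(boosted_score("doc_99", 2.0))  # never clicked -> unchanged
```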

Connect information along topical lines

Connecting information along topical lines across all repositories allows information-driven organizations to expose and leverage their collective expertise. This is especially valuable in large organizations that are geographically distributed.

As more people are connected, the overall organization becomes more responsive, drawing in research and development, service and support, and marketing and sales as needed. Everyone has the potential to become proficient in less time as new and existing employees learn new skills and gain access to the expertise to take their work to the next level.

By connecting related information across dispersed applications and repositories, employees can leverage 360-degree views and have more confidence they are getting holistic information about the topic they are interested in, whether it be a specific customer, a service that is provided, a sales opportunity or any other business entity critical to driving the business.

Leverage natural language processing

A key to connecting information is natural language processing (NLP), which performs essential functions, including automated language detection and lexical analysis for part-of-speech tagging and compound word detection.

NLP also provides the ability to automatically extract dozens of entity types, including concepts and named entities such as people, places and companies. It also enables text-mining agents integrated into the indexing engine that detect regular expressions and complex “shapes” that describe the likely meaning of specific terms and phrases and then normalize them for use across the enterprise.
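To give a feel for these building blocks, here is a minimal sketch using the open-source spaCy and langdetect libraries, assuming spaCy’s small English model is installed; a production pipeline would of course be far richer.

```python
# pip install spacy langdetect && python -m spacy download en_core_web_sm
import spacy
from langdetect import detect

text = "Sinequa presented its platform at INFORM 2019 in Paris."

print(detect(text))                 # automated language detection -> 'en'

nlp = spacy.load("en_core_web_sm")  # small English pipeline
doc = nlp(text)

for token in doc:
    print(token.text, token.pos_)   # lexical analysis: part-of-speech tagging

for ent in doc.ents:
    print(ent.text, ent.label_)     # named entities: people, places, companies...
```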

Put Machine Learning to work

Machine learning (ML) is becoming increasingly critical to enhancing and improving search results and relevancy. This is done during ingestion but also constantly in the background as humans interact with the system. The reason ML has become essential in recent years is that it can handle complexity beyond what’s possible with rules.

ML helps organizations become information-driven by analyzing and structuring content to both enrich and extract concepts such as entities and relationships. It can modify results through usage, incorporating human behavior into the calculation of relevance. And it can provide recommendations based on what is in the content (content-based) and by examining users’ interactions (collaborative filtering).
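As a small sketch of the content-based half of that idea, text similarity alone can already drive recommendations; the documents below are made up, and collaborative filtering would layer usage signals on top.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus; a real deployment would index enterprise documents instead.
docs = [
    "clinical trial protocol for oncology drug",
    "quarterly financial report and revenue forecast",
    "oncology drug safety and adverse event analysis",
]

tfidf = TfidfVectorizer().fit_transform(docs)
similarity = cosine_similarity(tfidf)

# Content-based recommendation: documents most similar to the one a user is reading
current = 0
recs = similarity[current].argsort()[::-1][1:]  # skip the document itself
print([docs[i] for i in recs])
```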

Taking these steps will help organizations become information-driven by connecting people with the relevant information, knowledge, expertise and insights necessary to ensure positive business outcomes.

 


Cracked Conversations: What to Do When Chatbots Aren’t Enough

Enterprise Search to Complement Your Chatbot Experience
By: Robert Smith, Sales Engineer, and John Finneran, Product Marketing

Conversational AI, or chatbot, vendors are everywhere, deafening customers with the promise of AI-powered solutions for their customer service needs. According to Capterra, 158 companies currently offer chatbot software. In Forrester’s evaluation of the emerging market for conversational AI for customer service for Q2 2019, the analyst firm identified the 14 most significant providers in the category – [24]7.ai, Avaamo, Cognigy, eGain, Inbenta Technologies, Interactions, IPsoft, Kore.ai, LogMeIn, Nuance Communications, Omilia, Salesforce and Verint.

This makes understanding what works best to improve customer experience hard.

Chatbots work best guiding users along straightforward, well-defined conversational paths. If a customer asks new, unpredicted questions, the typical chatbot gets confused. More complex questions require complementary solutions.

Sinequa offers one such complementary solution – Enterprise Search that can work with chatbots to help customers and employees find what they need.

We have spoken with a number of companies ranging from those considering the technology, to building prototypes, to deploying chatbots in customer-facing applications.

Several concerns about the value produced by chatbot deployments came up in those conversations:

  • Slow conversation speeds
  • Conversation path-sets that grow larger and longer
  • Low accuracy, with the chatbot unable to answer or to maintain the chat
  • High development effort, with too many expert hours spent conceiving, designing, deploying, and maintaining conversational paths

Some Reasons Why?

Chatbots work best when guiding a well-defined type of user through a set of preconceived conversational paths.

The typical chatbot’s tooling provides a graphical interface, and some testing capabilities; conceiving, designing, deploying, and maintaining those conversational paths will be up to you.

  • When you consider how many paths a user might take, multiplied by the number of user types, it can grow to an astonishing amount of work.
  • When chatbots have a lot of this work to do, they tend to slow down, compromising the chat experience
  • Most requests for information are ‘ad-hoc’ and therefore not well-suited for a pre-planned and pre-built conversation flow.

When Do Chatbots Make Sense?

An example is a chatbot at your local bank:

  • They have a limited set of offerings for users to choose from
    • E.g. checking, savings, mortgages, lines of credit
  • Those offerings have a limited number of actions
    • Checking deposit, transfer, bill pay, balance inquiry
  • The site is often for reference, not as much for execution
    • To actually open an account, you typically have to apply in person

If you can’t narrow the scope to specific user types and paths like these, then the outcome of multi-step “chats” is, by definition, less predictable, leading to a higher failure rate.

This also makes it difficult for some chatbots to get a PTO (Permit to Operate), because companies will not put applications into production that cannot guarantee outcomes. This is to avoid “Rogue AI” situations, among other things.

Addressing the Challenge

Enterprise Search, like Sinequa’s, leverages natural language processing (NLP) to get users the most relevant content, without the chatbot’s requirement that the conversational path be designed, built and maintained.

Where chatbot interactions are helpful, the chatbot can connect to enterprise search: when it receives a user’s request for information, it can refine and forward the request to the underlying Sinequa search engine, then channel the results back into the conversation.
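A minimal sketch of that handoff pattern is shown below; the endpoint, payload, and response shape are hypothetical placeholders rather than Sinequa’s actual API.

```python
import requests

SEARCH_URL = "https://search.example.com/api/query"  # hypothetical search endpoint

def chatbot_fallback(user_utterance: str, top_k: int = 3) -> str:
    """When the chatbot cannot match a conversational path, forward the request
    to the search engine and turn the top results into a chat reply."""
    response = requests.post(SEARCH_URL, json={"text": user_utterance, "pageSize": top_k})
    response.raise_for_status()
    results = response.json().get("results", [])  # hypothetical response shape
    if not results:
        return "I couldn't find anything on that. Could you rephrase?"
    links = "\n".join(f"- {r['title']}: {r['url']}" for r in results)
    return f"I found these documents that may help:\n{links}"
```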

In Short

By using chatbots and a powerful enterprise search platform together for the jobs they were designed for, you can deliver profitable and productive solutions that enhance both customer and employee experiences.


Enterprise Search – Then and Now

The following post was originally published on emerj.com and is based on a presentation by Daniel Faggella for Sinequa’s INFORM 2019 client event.

Traditional Search – Then

Older search applications would usually search through structured documents, such as loan application forms. They emphasized predictable formats and matching keywords directly to their appearances in enterprise documents. Also, at the time, only natively digital text was searchable, as opposed to scanned print and handwriting. It would take some years before scanned documents and other unstructured data types became searchable.

Daniel Faggella speaking at Sinequa’s INFORM 2019 event in Paris.

Before machine learning, “intelligent” search applications could not handle as much metadata as current systems, which made searching for complex topics difficult. In addition, metadata was applied to documents manually – a time-consuming process required for any documents that a company wished to be able to search in the future. In many cases, it still is.

Intelligent Search – Now
Current search applications can now handle all kinds of structured and unstructured content in various file types, with an emphasis on classification for further accessibility. These applications can also enrich documents with metadata, allowing for concept searching and automatic document organization.

Past Difficulties Persist Today
Artificial intelligence and machine learning are not the solution to every search-related business problem. Despite how much search applications have developed over the years, companies still face some of the same difficulties as in the past. The difficulties with adopting an intelligent search application include integration, defining metadata, and determining what data is needed to search the documents a bank or financial institution cares about.

AI startups and other vendors that are new to the intelligent search space often underestimate the difficulties their clients are likely to face with adoption. Overcoming these challenges can be hard work, and we find that many companies that are just starting out with intelligent search do not consider the commitment required to do so.

These companies often market their AI applications as easy to deploy within the enterprise. However, it is likely that they do this because they have not finished the thorough process of bringing an AI application into the enterprise. They may not have run into the common problems with data infrastructure (an ML problem that almost every enterprise data science leader struggles with) or defining their use cases (easier said than done, requires lots of business context from subject-matter experts).

What AI and ML Bring to Enterprise Search

The potential influence of artificial intelligence and machine learning on enterprise search can be understood as two important capabilities:

Making more information accessible – Making data digitally accessible using techniques such as optical character recognition, machine vision, scanning documents, and analyzing more data types. An AI application can also accomplish this by automatically adding metadata to backlogs of enterprise data.

Enabling companies to ask deeper questions – Enabling the capability of searching for broader concepts as opposed to strict keywords. This is helpful for finding insights on a general topic instead of simply every document including a few terms. Employees could search for documents and information beyond what directly pertains to a single keyword.

When observing the differences between search applications of the past and those of the present, one can see that artificial intelligence could help broaden a bank’s access to data. At the same time, the technology could transform the way in which employees search for that data, thus capitalizing on that access even more.

Use-Case Overview

Enrichment and Classification
One use case of intelligent search for banks and financial institutions is in data enrichment and classification. Documents need to be tagged with metadata, or data that describes the data within those documents. Metadata is what allows employees to search for documents using search queries with keywords and filters.

Traditionally, these documents needed to be manually tagged with metadata, ideally when they were uploaded or created. But that doesn’t always happen, and as a result, a bank’s digital ecosystem can end up very disorganized. Employees forget to tag documents or tag them incorrectly, making them difficult to find when needed.

Artificial intelligence could improve this process, but leaders at the bank will still need to decide what kind of metadata they want documents tagged with. For example, leaders at the customer service department may want to tag call center logs with metadata about the kind of problem the customer is facing and the emotional state of the caller.

Once they determine categories of metadata, subject matter experts at the department can start tagging documents with this metadata, and once this is complete, they can feed these tagged documents into the machine learning algorithm that will power the intelligent search engine. The bank will then be left with a search application that could automate and improve two parts of the search and discovery process:

Enrichment – When employees upload or create a document, the intelligent search application could automatically tag the documents with metadata, immediately preparing them for search. The application could also run through older documents and automatically add metadata to them as well.
Classification – The machine learning algorithm could also cluster the metadata into broader categories. As a result, documents that are uploaded and created could be automatically organized into folders and allow for easier search with keywords.
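A minimal sketch of the training step described above, using scikit-learn with made-up call-center snippets and labels; the categories are placeholders chosen to illustrate the workflow, not a production model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: call-center log snippets tagged by subject matter experts.
texts = [
    "customer cannot log in to online banking portal",
    "caller upset about unexpected overdraft fee",
    "question about mortgage refinancing rates",
    "password reset link not arriving by email",
]
labels = ["login issue", "fees complaint", "mortgage", "login issue"]

# Train a simple text classifier to auto-tag new documents with metadata.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["I was charged an overdraft fee twice this month"]))
```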

Example: Data Confidentiality
Banks and financial institutions could use an intelligent search application to restrict access to enterprise data based on different levels of confidentiality.

They could use these levels as thresholds for documents, so that the higher one’s clearance, the more access they have. The top level would be the most confidential, where nearly no one has access unless it is specifically granted.

The middle level might allow certain categories of people to access certain documents based on what they need to do their job. For example, an account executive for financial services may not have access to the bank’s profit and loss information. The bottom level would allow most or all employees, such as customer service agents, to access openly accessible data.
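A minimal sketch of the threshold idea: each document carries a confidentiality level, each user a clearance level, and results are filtered accordingly. The levels and documents below are hypothetical.

```python
# Hypothetical confidentiality levels: higher number = more restricted.
LEVELS = {"public": 0, "internal": 1, "restricted": 2}

documents = [
    {"title": "Branch opening hours",      "level": "public"},
    {"title": "Customer onboarding guide", "level": "internal"},
    {"title": "Profit and loss statement", "level": "restricted"},
]

def visible_documents(user_clearance: str):
    """Return only documents at or below the user's clearance threshold."""
    threshold = LEVELS[user_clearance]
    return [d for d in documents if LEVELS[d["level"]] <= threshold]

print([d["title"] for d in visible_documents("internal")])
# -> an internal user sees public and internal documents, but not the P&L
```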

Once thresholds are decided, the company’s subject matter experts and data scientists can begin to label various documents in the database according to their level of confidentiality. The company can then use that labeled data to train an algorithm to go through the rest of the database and find commonalities between all of the documents labeled under a certain threshold. The algorithm could then determine which other documents fit those patterns or involve similar topics.

Unified View of the Customer
Another use-case for intelligent search is gaining what vendors market as a unified view of customers. Customer data is often scattered across various data silos and in structured and unstructured formats, such as a history of transactions or a mortgage application respectively.

This makes it difficult for company employees, especially those that deal with customers every day, to know whether or not they have all of the information a company has on a customer when dealing with them. A wealth manager, for example, may have trouble finding all of the information about a client they need to make the best decision for their portfolio.

When we studied the vendor landscape of intelligent search applications in the banking industry, we found that 75% of the products in the space included capabilities for customer information retrieval. The unified view seems to be a point of resonance for banks and financial institutions in customer service and wealth management use-cases.

Example: Call Centers
A unified view of a customer may allow a call center agent to not only pull up a customer’s contact record in a CRM, but also their past emails with the company, call logs on their past phone calls with the company, and, in some cases, sentiment analysis information on these conversations.

As a result, the call center agent would have a better idea of how to deal with the customer; they may learn that an angry customer has been calling in frequently about overdraft fees and decide it’s better to refund the customer for those fees than to allow them to keep calling in to the support line and take up agent time.

In the future, this use-case may evolve into automated coaching for call center or live chat employees. Employees would get recommendations for how to best handle the customer and even what to sell them on. Instead of deciding for themselves whether or not to refund the irate customer, the AI software might recommend this to the employee.

Concept and Advanced Entity Search
A third use-case for intelligent search is the capability to search for broader concepts and phrases as opposed to individual words or entities. Employees could search for documents with more contextual natural language phrases, as opposed to just searching for specific keywords.

For example, an employee could search “angry customers with an account login issue between June and August” into the search application, and the software could present a list of call logs for customers fitting the criteria. Such a capability is useful for finding more information relating to concepts that could appear in various documents scattered throughout a database, especially when those concepts are discussed in tangential ways.
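A minimal sketch of concept-level matching using open-source sentence embeddings (the sentence-transformers library); the call logs are made up, and a real application would combine this with filters such as date ranges and sentiment.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

call_logs = [
    "Caller furious that she was locked out of her account for a week",
    "Customer asked about branch opening hours over the holidays",
    "User reports repeated password failures when signing in to the portal",
]

query = "angry customers with an account login issue"

log_embeddings = model.encode(call_logs, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, log_embeddings)[0]
for log, score in sorted(zip(call_logs, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {log}")
```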

Example: Searching For Documents Related to LIBOR
In banking, the 2021 sunset of LIBOR may have compliance departments scrambling to search for contracts that reference it so that they might update or manage them for a post-LIBOR state of affairs. In many cases, it may still be very simple to find all LIBOR-related documents and update them via strict keyword searches.

However, there may be many documents within a database that contain LIBOR-related discussions that don’t specifically mention any keywords one might normally associate with LIBOR. Employees using traditional keyword-based search software might miss these documents.

Intelligent enterprise search software could help employees find these documents. Subject matter experts could first find documents that only allude to LIBOR-related discussion without naming it directly, and label these documents.

Data scientists could then run this labeled data through the machine learning algorithm behind the search software, and this would train the software to pick up on the patterns that tend to constitute LIBOR-related discussion within a document. As a result, employees could type “LIBOR” into the search application, and the software would return LIBOR-related documents that compliance officers would want to stay on top of.

This way, employees do not have to guess which of the results actually reference LIBOR without mentioning it directly, manually reading through documents to find LIBOR-related discussion. Instead, they would search for LIBOR as a concept, and the algorithm would search the enterprise database for entities/phrases related to that concept.


Data Doesn’t Drive Finance

Your people need information, not data. On average, they waste a day a week searching across silos, systems, and clouds for information. It’s pre-digital-age work. Learn how AI-Powered Search gives your employees the information and intelligence they need.

View the white paper

