This article was originally published on Information Management.
This article was originally published in Database Trends & Applications.
The deadline looms on the horizon. On May 25, 2018, the European Union will begin enforcing some of the most stringent data privacy regulations the world has ever seen. These regulations will impact thousands of companies around the world, not only EU-based organizations but any company that collects or processes personal data on EU residents. The General Data Protection Regulation (GDPR) recognizes the “fundamental right” of people to control what data is stored about them and how it is used.
Organizations must be ready for this date, since the fines for non-compliance can reach 4% of annual global revenue or €20 million (about $21 million), whichever is higher. To put this in perspective, a fine of that size could put a small company out of business, and for a company with revenue of $10 billion, the fine could be a staggering $400 million.
Most organizations find it challenging to identify personal data quickly and accurately, and no organization with large datasets can sift through them manually to find personal data and judge its GDPR compliance. Companies need sophisticated technology that enables them to search, discover, and review their data effectively.
Under GDPR guidelines, people can request to be informed about the data that organizations store about them and can demand rectification, erasure, or the restriction of how their data is used. They can also ask to receive their personal data in a common format that allows them to transfer it to another organization.
The impending deadline and the fear of painful fines put organizations under a great deal of pressure, so much so that they may overlook the potential business benefits of compliance measures. For example, the prospect of thousands or even millions of people demanding to know what data is stored about them may seem daunting. Since an organization is obliged to answer within 30 days, this could mean thousands of cases per day landing on customer service.
On the other hand, many large enterprises with millions of individual customers—banks, wireless providers, etc.—need to provide a 360-degree view of a customer to their sales and service personnel—in seconds, not in a month. This is a business requirement independent of GDPR compliance. When customers contact the company, they expect the sales or service reps to know them and give them knowledgeable recommendations and advice.
One way of providing such a 360-degree customer view is using cognitive technologies that can ingest structured data from enterprise applications such as CRM and billing and unstructured data such as emails and other correspondence. Companies often have hundreds of such data sources. Cognitive capabilities, such as natural language processing and machine learning, are necessary to extract relevant information from structured and unstructured data: what kinds of contracts the organization has with customers; service and payment history; whether the latest exchanges were friendly or aggressive; suggestions from past experience with other customers to help solve the current customer’s problems; etc.
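As a toy illustration of the kind of entity extraction such a pipeline performs on unstructured correspondence, here is a minimal sketch. The regexes and field names are invented for illustration; real cognitive platforms use trained NLP models, not hand-written patterns.

```python
import re

def extract_entities(text):
    """Toy extractor: pulls money amounts and email addresses from
    free-form correspondence. Only illustrates the idea of turning
    unstructured text into structured fields."""
    return {
        "amounts": re.findall(r"\$\d[\d,]*(?:\.\d+)?", text),
        "emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text),
    }

note = "Customer jane.doe@example.com disputed a $49.99 charge."
print(extract_entities(note))
# {'amounts': ['$49.99'], 'emails': ['jane.doe@example.com']}
```

A production system would also classify sentiment, link the extracted entities to CRM records, and merge the result with structured billing data to build the 360-degree view.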
In a call center, operators need a complete picture of the person on the line in under two seconds, according to industry standards. If a company has 20 million customers, more than 200 enterprise applications with customer data, and 10,000 call center agents, that is a daunting challenge—yet one that companies have successfully overcome.
ROI: BUSINESS BENEFITS—NOT JUST COMPLIANCE
Gartner estimates that European companies will each spend an average of 1.3 million euros to comply with GDPR personal data protection requirements while U.S. businesses are setting aside at least $1 million for GDPR readiness, with some assigning up to $10 million. What do they get for it, apart from avoiding fines?
Let us look at a concrete example of a wireless telecom company that implemented a 360-degree view strategy using cognitive technologies. The primary objectives of the project were to reduce average call handling time, increase customer satisfaction and loyalty, and increase up- and cross-selling. All these goals have been achieved, but another aspect of the project delivered massive savings: call center employees now have a single, intuitive user interface for accessing customer data.
They no longer need to master the roughly 30 enterprise applications they previously had to navigate to access this data, which cuts the required training from 30 days to one. With 10,000 employees and a turnover rate that often approaches 50%, that means 5,000 x 29 workdays saved per year, i.e., 145,000 workdays or 29,000 person-weeks. The company can certainly offer a lot of customer service during that time! The overall ROI of the project comes to approximately 60 million euros over three years.
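The training-time savings follow directly from the figures above and can be reproduced with a few lines of arithmetic:

```python
# Back-of-the-envelope calculation using the article's own figures.
employees = 10_000
turnover_rate = 0.50       # annual turnover often approaches 50%
training_before = 30       # workdays of training per new hire, before
training_after = 1         # workdays of training per new hire, after

new_hires = int(employees * turnover_rate)            # 5,000 per year
days_saved = new_hires * (training_before - training_after)

print(days_saved)          # 145000 workdays per year
print(days_saved // 5)     # 29000 person-weeks (5-day weeks)
```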
NEW PARADIGM: CUSTOMER SELF-SERVICE FOR INFORMATION RETRIEVAL
One of the 10 biggest banks in the world has implemented a similar project to provide a 360-degree view of customers to its customer-facing employees. Its objective from the outset was also to give customers a 360-degree view of their own dealings with the bank: accounts, share deposits, insurance contracts, etc. It is easy to extend this interface to answer the question, “What data does the company have on me?” In this way, the company improves its service to customers and fulfills its GDPR obligations without a single employee being involved.
GDPR is coming, but instead of seeing it only as a costly burden, organizations should view the regulation as an opportunity. By implementing advanced cognitive technologies to derive deep customer insights, organizations can ensure compliance while reaping the business benefits of greatly improved customer service that can have a tremendous impact on the bottom line.
The quest for actionable insights and answers from within vast troves of data is never-ending within the modern enterprise. There’s good reason for that – it is the end goal of all information work – but the process is anything but optimized. Global research firm Forrester revealed as much in a 2017 report, which found that 54% of global information workers are interrupted a few times or more per month by time wasted trying to gain access to information, insights, and answers.
It’s a problem that goes far beyond the limitations of conventional enterprise search technology – it’s a Sisyphean challenge, thanks to the sheer volume of data being created every single second.
“As organizations in data-intensive industries strive to create value, enhance customer experiences, and differentiate themselves from their competition, they are placing demands on their knowledge workers in unprecedented ways,” explains Laurent Fanichet, VP of Marketing for Sinequa. “Frequently, the data and knowledge they are looking for is isolated, segmented, and fractured. It’s difficult to surface the right information at the right time to see the patterns in the data.”
Fanichet has a clear grasp on the key problem Sinequa, an independent software vendor specializing in cognitive search and analytics, is trying to address. In its recent report, The Forrester Wave: Cognitive Search and Knowledge Discovery Solutions, Q2 2017, Forrester defines cognitive search as ‘the new generation of enterprise search that employs AI technologies such as natural language processing and machine learning to ingest, understand, organize, and query digital content’ – and, in the same report, goes on to highlight Sinequa for the application of its NLP technology in enterprise search.
The kind of cognitive search and analytics platform Sinequa offers, Fanichet explains, refers to an information system that is capable of automatically extracting relevant insights from diverse enterprise datasets for users within a specific work context. “Cognitive search brings the power of AI to enterprise search,” he says. “It helps organizations in data-intensive industries to become information driven.”
A recent IBM Watson report highlights the applications of cognitive search in the aerospace sector. One company uses these enhanced search capabilities “to improve supply chain visibility and reduce cycle time, saving millions of dollars on critical parts deliveries.” Furthermore, the system enables aircraft technicians to search through “reams” of maintenance records and technical documentation. “Now, if a worker needs to know what’s causing high hydraulic oil temperatures, the [cognitive solution] identifies historical cases with similar circumstances, finding patterns that point to the root cause of the overheating.” The report goes on to note that the solution in question saves the airline manufacturer up to $36 million per year.
Cognitive search and analytics likewise has its applications in the health and pharma sector. AI Business recently spoke to Karenann Terrell, GlaxoSmithKline’s first ever Chief Data and Analytics Officer, and former CIO of Walmart. She explained that a big component of what it takes to develop medicine can benefit from next-generation computing and machine learning. “Approximately 1/3 of the total cost of developing a medicine (>$2.5bn) is spent during the time it takes from identifying your target (the process in the body that you want to affect) to testing your molecule in humans for the first time,” she explained. “This process can take around five years. [GSK’s] goal with artificial intelligence is to reduce this time to just one year in future.”
“These are just a few of the many business areas where surfacing the information from within their data can drive better decisions,” Fanichet argues. He explains that cognitive search and analytics also have a range of powerful potential applications within customer service, enabling organizations to:
- Provide personalized and highly relevant communication to their customers
- Nurture customer relationships and prevent customer churn
- Improve productivity, reduce operating expenses, and gain operational efficiencies
- Minimize customer service representative turnover and knowledge loss
The Challenges Ahead for Cognitive Search
The potential use cases speak for themselves, but that doesn’t mean there aren’t challenges ahead for enterprises looking to incorporate cognitive search technology into their work. While working with clients, Fanichet explains, Sinequa helps them understand that a set of common machine learning challenges lies along the path ahead. Expertise is often the first hurdle, but he maintains that there are many different types of AI implementation challenges. “Assuming that enterprises are able to resolve a dearth of expertise, there are still other challenges – most of which are specific to the type of AI being pursued.”
Take supervised machine learning, where the system learns to recognize patterns by observing ‘correct’ patterns provided by humans. “The greatest challenge is around providing sufficiently labelled training datasets from which the system can learn,” Fanichet explains. This is something Matt Buskell highlights in his ‘10 keys to AI implementation’, recommending that following the initial loading of data and knowledge base, the system needs to go through a phase of refinement once the software has launched. “During this phase, things like gain and variance for Machine Learning, or intent training for NLP and maybe model refinement to cognitive reasoning need to be improved. During this phase, it is essential to carefully release the software and measure how well it’s performing over a 6-12 week period, at the least.”
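To make the dependence on labeled data concrete, here is a minimal supervised learner: a nearest-centroid classifier trained on a handful of labeled 2-D points. The features, labels, and points are all invented for illustration; real systems use far richer models, but the pattern is the same — the system can only learn what the labels teach it.

```python
# Minimal supervised learner: nearest-centroid classification.
def train(examples):
    # examples: list of ((x, y), label); compute one centroid per label
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    # assign the label whose centroid is closest to the point
    px, py = point
    return min(centroids, key=lambda l: (centroids[l][0] - px) ** 2
                                      + (centroids[l][1] - py) ** 2)

labeled = [((0, 0), "spam"), ((1, 0), "spam"),
           ((9, 9), "ham"), ((8, 10), "ham")]
model = train(labeled)
print(predict(model, (0.5, 1)))   # spam
```

With too few or mislabeled examples, the centroids land in the wrong places and every downstream prediction suffers — which is exactly the training-data challenge Fanichet describes.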
Fanichet likewise highlights the obstacles unique to unsupervised machine learning, in which the system identifies existing patterns and a human determines their usefulness. “The greatest challenge is balancing the system’s need for sufficient data with the proper human guidance and interpretation needed to train the system,” Fanichet argues. This is as much an issue of skills and process culture as it is technical expertise, and is reflected in a recent Genpact survey of over 300 senior executives, which argues “AI cannot be implemented piecemeal. It must be part of the organisation’s overall business plan, along with aligned resources, structures, and processes.” Collaboration is therefore key.
Finally, there’s a need to formulate clear goals and outcomes, Fanichet says. “When pursuing reinforcement learning, where the system makes many attempts and learns from the outcome to take better actions, the greatest challenge is providing the system with a defined goal and sufficient practice in a dynamic environment so that the system can effectively learn from trial and error.”
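The trial-and-error loop Fanichet describes can be sketched with a two-armed bandit, one of the simplest reinforcement learning settings. The arms, payoff probabilities, and exploration rate below are invented for illustration; the point is only that the system needs a defined goal (maximize reward) and enough practice to learn from outcomes.

```python
import random

random.seed(0)

# Two-armed bandit: arm 1 pays off more often, but the agent does not
# know that. It learns from outcomes alone via epsilon-greedy trials.
true_payoff = [0.3, 0.8]
estimates, counts = [0.0, 0.0], [0, 0]

for step in range(2000):
    if random.random() < 0.1:                       # explore 10% of the time
        arm = random.randrange(2)
    else:                                           # otherwise exploit
        arm = max((0, 1), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

# after enough practice, the estimate for arm 1 dominates
print(estimates)
```

Without sufficient practice (too few steps) or a well-defined reward, the estimates never converge — the “defined goal and sufficient practice” challenge in miniature.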
Sinequa Brings the Power of AI to Enterprise Search
Fanichet believes Sinequa offers a range of unique intelligent capabilities within the analytics space:
- Robust Indexing Engine: “If cognitive search was all about matching a keyword, a single index would suffice. The best results are obtained when multiple indexes are combined, each providing a different perspective or emphasis, yielding a comprehensive overview of the available information and the best possible understanding of the meaning it carries.”
- Enterprise Grade: “Sinequa was designed from the start to support the complexities and multiple security layers of today’s enterprises. It was also designed to be immersed in diverse enterprise environments and can operate within the context of a specific industry and the language of the specific organization.”
- Topically Aware: “Connecting information along topical lines across all repositories surfaces the collective expertise of the organization and makes it transparent. This is especially valuable in large organizations that are geographically distributed. By connecting people with expertise, the overall responsiveness of the organization increases.”
- Natural Language Processing: “Sinequa’s world-class NLP offers automated language detection; lexical and syntactical analysis; and automatic extraction of dozens of entity types, including concepts and named entities like people, places, companies, etc. It also supports text mining agents integrated into the indexing engine. This enables the extraction of virtually any function, relationship, or complex concept from the content.”
- Machine Learning: “Sinequa leverages ML to enhance and improve search results and relevancy. This is done during ingestion but also constantly in the background as humans interact with Sinequa. It has become an essential part of the platform since it can handle complexity beyond what’s possible with rules.”
- Well Designed User Experience: “Sinequa’s front-end serves as an intelligent agent that employees can consult for institutional knowledge that can be readily applied to the task or situation at hand. The experience is well designed in the sense that it is aesthetically pleasing, it is understandable in that it makes use of the user’s intuition, it is unobtrusive, and perhaps most importantly, it is contextual to the user’s goals.”
- Ubiquitous Connectivity: “Sinequa’s product comes with over 160 ready-to-use connectors, all of which were developed in-house, thus ensuring consistency, quality control, and high performance.”
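The multi-index idea from the first capability above can be sketched as a toy: several scoring functions stand in for separate indexes, and a weighted blend produces the final ranking. The documents, scorers, and weights are invented for illustration and are not Sinequa internals.

```python
# Toy ranking that blends two "indexes" (scoring functions).
docs = ["annual GDPR compliance report",
        "call center training guide",
        "customer churn analysis"]

def keyword_score(doc, query):
    # how many query words appear in the document title
    return sum(word in doc for word in query.split())

def length_score(doc, query):
    # a second perspective: favor concise documents
    return 1.0 / len(doc.split())

def rank(query, weights=(1.0, 0.5)):
    scored = [(weights[0] * keyword_score(d, query) +
               weights[1] * length_score(d, query), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)]

print(rank("GDPR compliance")[0])   # annual GDPR compliance report
```

In a real engine each index is a full data structure over the corpus rather than a function, but the principle — combining perspectives into one relevance score — is the same.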
This post was originally published as an article on Future of Everything.
There continue to be all kinds of stories about the promise of artificial intelligence (AI) in the media, many of which discuss the idea that robots are going to take over the world and put everyone out of a job. While AI is something we must monitor and control as its capabilities continue to expand, there are tangible applications of this powerful technology happening today. So what are some realistic opportunities for AI in the near future, as opposed to all the sci-fi hype we typically hear about?
As organizations strive to create value, enhance customer experiences and differentiate themselves from their competition, they place tremendous demands on their R&D departments. From accelerating the delivery of innovative products to improving compliance to understanding consumer demands and improving responsiveness to gain and keep customer trust, R&D has a lot on its plate.
Global competition, narrow margins, higher product development costs, and tenuous holds on exclusivity drive organizations to push innovation, seek cost-cutting strategies, and go to market as quickly as possible. Consumer demands change frequently while regulatory and compliance standards become even more stringent. Organizations must keep up, and the pressure on R&D never stops. R&D is the epicenter of an organization, whether within a large aircraft manufacturer, a leading automobile company looking to develop cutting-edge products and services, or a pharmaceutical company accelerating time-to-market for new drugs.
R&D thrives on information: customer information, expert information, product information, scientific information, market information, regulatory information and competitive information. To be at the forefront of innovation, R&D departments need complete visibility into both new and historical information across the entire enterprise as well as access to research from external public and premium information services. This is no easy feat in today’s world where we are inundated with data — more data, more opportunities and more challenges. As a result, many companies depend on machine learning solutions to harness insightful, high-quality information and fuel innovation within their product and solution portfolios.
Here are six examples of how R&D departments are leveraging machine learning to improve their effectiveness and create competitive differentiation for their organizations:
- Machine learning algorithms objectively connect researchers and developers based on the work they do, which at a minimum results in greater efficiency and, at best, streamlines the path from academic innovation to product development.
- Machine learning techniques are the only effective means currently available to adapt security countermeasures based on historical hacking techniques to deal with sophisticated cybersecurity threats aimed at stealing trade secrets and intellectual property.
- Machine learning algorithms are revolutionizing product and service quality by determining which factors impact quality enterprise-wide and to what extent. For example, machine learning can yield much greater manufacturing intelligence by predicting how quality and sourcing decisions contribute to greater Six Sigma performance within the Define, Measure, Analyze, Improve, and Control (DMAIC) framework.
- Cognitive search and analytics solutions powered by machine learning amplify the expertise of R&D departments by surfacing insights from data across the enterprise, regardless of location and format. From a single, secure access point, these solutions enable R&D professionals to unlock relevant and timely product research from internal and external sources that helps make informed decisions.
- Healthcare prediction and prevention are being revitalized and reinforced by machine learning. The pace of machine learning-powered prediction and prevention research is now faster than that of research that does not utilize the technology. From patient wellness scores to risk scores, machine learning is transforming the healthcare landscape.
- Open source software libraries like Google’s TensorFlow are enabling researchers to leverage machine learning for everything from language translation to early cancer detection to preventing blindness in diabetics.
Machine learning can leverage and build on relevant customer and market information to give R&D organizations insight and the ability to react quickly to demands. Teams are utilizing this technology to eliminate data silos and deliver increasingly relevant information from data to users in their business context, such that they can make better decisions, drive innovation, reduce risk and be more efficient. This in turn enables forward-thinking R&D departments that thrive on continuous product improvements and introductions to amplify the collective expertise of the organization.