Becoming Information-Driven Begins with Pragmatic AI

Written by guest blogger, David Schubmehl, IDC Research Director, Cognitive/Artificial Intelligence Systems. Sponsored by Sinequa.

Over the last several years, I’ve spoken to many organizations that have all asked the same question: How can we most effectively make use of all of the research, documents, email, customer records, and other information that our employees have collected over the years, especially those who are now retiring? In the past, organizations had corporate libraries and corporate librarians whose job it was to help collect, organize, and disseminate information to employees and staff when and where they needed it. Those departments and positions are long gone from most organizations today. Why? The volume of data and documents (including research papers, contracts, and even emails) has exploded, making the task impossible. But let’s be honest: even before today’s information explosion, no classification system could ever keep up with the fast pace of change in the economy. No one could have foreseen today’s most important questions, which fall into content categories that did not exist until now. And with baby boomers retiring at an ever-increasing rate, an urgent question must be asked: How do organizations get the most value from the vast amounts of information and knowledge they’ve accumulated over decades?

IDC has identified the characteristics of organizations that are able to extract more value out of the information and the data available to them. Leader organizations make use of information access and analysis technologies to facilitate information access, retrieval, location, discovery, and sharing among their employees and other stakeholders. These insight leaders are characterized by:

  • Strategic use of information extracted from both content and data assets
  • Unified and efficient access to information
  • Effective query capabilities (including dashboards)
  • Effective sharing and reuse of information among employees and other stakeholders
  • Access to subject matter experts and to the accumulated expertise of the organization
  • Effective leverage of relationships between information from different content and data sources

So how can artificial intelligence (AI) and machine learning affect information access and retrieval? The types of questions that are best answered by AI-enabled information access and retrieval tools are those that require input from many different data sources and often don’t have simple yes/no answers. In many cases, these types of questions rely on semantic reasoning, where AI makes connections across an aggregated corpus of data and uses reasoning strategies to surface insights about entities and relationships. This is often done by building a broad-based searchable information index covering structured, unstructured, and semi-structured data across a range of topics (commonly called a knowledge base) and then using a knowledge graph that supports the AI-based reasoning.
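As a rough illustration of the idea (not Sinequa’s actual implementation), a knowledge graph of entity–relationship triples can be traversed to chain together facts drawn from separate sources. The entities and relations below are invented for the sketch:

```python
from collections import defaultdict, deque

# Toy knowledge graph: (subject, relation, object) triples aggregated
# from hypothetical structured and unstructured sources.
triples = [
    ("DrugX", "inhibits", "ProteinA"),
    ("ProteinA", "regulates", "PathwayB"),
    ("PathwayB", "implicated_in", "DiseaseC"),
]

graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def find_paths(start, goal):
    """Breadth-first search for reasoning chains linking two entities."""
    queue = deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if node == goal:
            yield path
            continue
        for rel, nxt in graph[node]:
            if nxt not in path:  # avoid revisiting entities (cycles)
                queue.append((nxt, path + [f"--{rel}-->", nxt]))

print(" ".join(next(find_paths("DrugX", "DiseaseC"))))
# DrugX --inhibits--> ProteinA --regulates--> PathwayB --implicated_in--> DiseaseC
```

No single source states a link between DrugX and DiseaseC; the connection only emerges by reasoning across the aggregated triples, which is the point of pairing an index with a knowledge graph.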

AI-enabled search systems facilitate discovery, use, and informed collaboration during analysis and decision making. These technologies use information curation, machine learning, information retrieval, knowledge graphs, relevancy training, anomaly detection, and numerous other components to help workers answer questions, predict future events, surface unseen relationships and trends, provide recommendations, and take actions to fix issues.

Content analytics, natural language processing, and entity and relationship extraction are key components in dealing with enterprise information. According to IDC’s Global DataSphere model developed in 2018, of the 29 ZB of data creation, 88% is unstructured content that needs the aforementioned technologies to understand and extract the value from it. In addition, most of this content is stored in dozens, if not hundreds of individual silos, so repository connectors and content aggregation capabilities are also highly desired.

AI and machine learning provide actionable insights and can enable intelligent automation and decision making. Key technology and process considerations include:

  • Gleaning insights from unstructured data and helping to “connect the dots” between previously unrelated data points
  • Presenting actionable information in context to surface insights, inform decisions, and elevate productivity with an easy-to-use application
  • Utilizing information handling technologies that can be used in large scale deployments in complex, heterogeneous, and data-sensitive environments
  • Enriching content automatically and at scale
  • Improving relevancy continuously over time, based on user actions driven by machine learning
  • Improving understanding by intelligently analyzing unstructured content
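The relevancy bullet above can be sketched as a simple click-feedback loop. This is a toy stand-in for the machine-learned ranking such systems actually use; the document IDs and scores are invented:

```python
from collections import defaultdict

# Base relevance scores from text matching (hypothetical values).
base_scores = {"doc_a": 0.8, "doc_b": 0.7, "doc_c": 0.6}

clicks = defaultdict(int)  # user actions observed over time

def record_click(doc_id):
    clicks[doc_id] += 1

def rank(alpha=0.1):
    """Blend text-match relevance with a click-derived boost."""
    return sorted(base_scores,
                  key=lambda d: base_scores[d] + alpha * clicks[d],
                  reverse=True)

# Users repeatedly choose doc_c, so it climbs past higher text matches.
for _ in range(5):
    record_click("doc_c")
print(rank())  # ['doc_c', 'doc_a', 'doc_b']
```

The design point is that relevancy is not frozen at deployment: every user action feeds back into the ranking, so results improve continuously over time.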

IDC believes that the future for AI-based information access and retrieval systems is very bright, because the use of AI and machine learning coupled with next-generation content analysis technologies enable search systems to empower knowledge workers with the right information at the right time.

The bottom line is this: enabled by machine learning–based automation, there will be a massive change in the way data and content are managed and analyzed to provide advisory services and support or automate decision making across the enterprise. Using information-driven technologies and processes, the scope of knowledge work, advisory services, and decisions that will benefit from automation will expand exponentially based on intelligent AI-driven systems like those that Sinequa is offering.

For more information on using AI to be an information leader, I invite you to read the IDC Infographic, Become Information Driven, sponsored by Sinequa at https://www.sinequa.com/become-information-driven-sinequa/


How Biopharmaceutical Companies Can Fish Relevant Information From A Sea Of Data

This article originally appeared in Bio-IT World

Content and data in the biopharmaceutical industry are complex and growing at an exponential rate. Terabytes from research and development, testing, lab reports, and patients reside in sources such as databases, emails, scientific publications, and medical records. Information that could be crucial to research can be found in emails, videos, recorded patient interviews, and social media.


Extracting usable information from what’s available represents a tremendous opportunity, but the sheer volume presents a challenge as well. Add to that challenge the size of biopharmaceutical companies, with tens of thousands of R&D experts often distributed around the world, and the plethora of regulations that the industry must adhere to—and it’s difficult to see how anyone could bring all of that content and data together to make sense of it.

Information instrumental to developing the next blockbuster drug might be hidden anywhere, buried in a multitude of silos throughout the organization.

Companies that leverage automation to sift through all their content and data, in all its complexity and volume, to find relevant information have an edge in researching and developing new drugs and conducting clinical trials.

This is simply not a task that can be tackled by humans alone—there is just too much to go through. And common keyword searches are not enough: they won’t surface a relevant paper if the search terms don’t appear in it, nor a video that holds the answer unless the keywords appear in its metadata.

Today, companies can get help from insight engines, which leverage a combination of sophisticated indexing, artificial intelligence, and natural language processing for linguistic and semantic analyses to identify what a text is about, look for synonyms, and extract related concepts. Gartner notes that insight engines “enable richer indexes, more complex queries, elaborated relevancy methods, and multiple touchpoints for the delivery of data (for machines) and information (for people).” A proper insight engine does this at speed, across languages, and in all kinds of media.
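A minimal sketch of one ingredient—synonym-based query expansion over an inverted lookup—shows why this beats plain keyword matching. The documents and thesaurus here are hand-built assumptions; a real insight engine derives them with NLP and machine learning at far larger scale:

```python
# Toy corpus: a keyword search for "heart attack" would miss doc 1,
# which uses the clinical term instead.
documents = {
    1: "myocardial infarction risk factors in trial cohorts",
    2: "heart attack outcomes after statin therapy",
    3: "quarterly budget report",
}

# Hand-built synonym table standing in for learned semantic analysis.
thesaurus = {"heart attack": ["myocardial infarction"]}

def expand(query):
    """Return the query plus any known synonymous phrasings."""
    return [query] + thesaurus.get(query, [])

def search(query):
    matches = set()
    for term in expand(query):
        for doc_id, text in documents.items():
            if term in text:
                matches.add(doc_id)
    return sorted(matches)

print(search("heart attack"))  # [1, 2] — both phrasings are found
```

A plain substring search for "heart attack" returns only document 2; the expanded query also retrieves the paper that never uses those words, which is the gap semantic analysis closes.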

For biopharmaceuticals, this is particularly powerful, allowing them to correlate and share research in all forms over widely distributed research teams. Here are several ways biopharma companies can use insight engines to accelerate their research.

Find A Network Of Experts

Many companies struggle to create the best teams for new projects because expertise is hidden in large, geographically distributed organizations with multiple divisions. A drug repositioning project might require a range of experts on related drugs, molecules, and their mechanisms of action, as well as medical experts, geneticists, and biochemists. Identifying those experts within a vast organization can be challenging. But insight engines can analyze thousands of documents and other digital artifacts to see who has experience with relevant projects.

The technology can go further, identifying which experts’ work is connected. If they appear together in a document, interact within a forum, or even communicate significantly via email, an insight engine can see that connection and deduce that the work is related. Companies can then create an “expert graph” of people whose work intersects to build future teams.
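The expert-graph idea reduces to counting co-occurrences of people across documents, forum threads, and email. A minimal sketch, with invented names and artifacts, might look like this:

```python
from itertools import combinations
from collections import Counter

# Hypothetical records: each artifact maps to the people appearing in it.
doc_people = {
    "trial_report_07.pdf": ["Alice", "Bob"],
    "pathway_slides.pptx": ["Bob", "Chen"],
    "grant_email_thread": ["Alice", "Bob", "Chen"],
}

# Weight each pair of people by how often they co-occur in artifacts;
# heavier edges suggest whose work is most closely connected.
edge_weights = Counter()
for people in doc_people.values():
    for pair in combinations(sorted(set(people)), 2):
        edge_weights[pair] += 1

for (a, b), weight in edge_weights.most_common():
    print(f"{a} -- {b}: co-occur in {weight} artifacts")
```

The resulting weighted edges are exactly the “expert graph”: a production system would add relation types (co-author vs. email contact) and richer signals, but the graph structure is the same.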

This technique can extend beyond the borders of the company, helping to identify the most promising collaboration partners outside the company in a given field, based on publicly available data, such as trial reports, patent filings and reports from previous collaboration projects.

Generate R&D News Alerts

Biopharma companies can also use insight engines to watch for new developments in drug research and stay on top of the latest trends. These news alerts can go beyond typical media sources to include scientific publications, clinical trial reports, and patent filings.

This capability can be used on SharePoint, Documentum, or other sources within a large company to surface relevant information. An insight engine ensures the right information gets to the right people in the right context, and in a timely way.

Optimize Clinical Trials

Clinical trials that stretch over many years generate millions of datasets for every drug and study, providing a treasure trove of data. Biostatisticians can compile a comprehensive list of patients with certain diseases within trials on a drug, something nearly impossible with traditional methods.

They can also search and analyze across many drugs and studies, across content and data silos. Over time, this allows biopharmaceutical companies’ growing number of clinical trials to become a valuable asset that can be easily leveraged across a growing number of use cases.

All of these uses can lead to biopharma companies developing new drugs more quickly and getting them to market faster—necessary as these companies face tremendous pressure to innovate quickly and develop new promising drugs as patents for older drugs expire. With insight engines, they can make every part of the journey more efficient, from research, to clinical trials, to regulatory processes, presenting incredible opportunities for everyone in this field.


Sinequa Snags Three Key Industry Award Wins in September


We’re off to a busy September here at Sinequa! We’re excited and humbled to have received a few different awards for our Cognitive Search & Analytics Platform and company as a whole this month. Our recognition includes the following awards from leading industry publications:

KMWorld Trend-Setting Products 2018

KMWorld’s 2018 list of Trend-Setting Products features not only emerging software directed toward human-like functionality but also more traditional offerings impressively refined. It encompasses AI, machine learning, cognitive computing and the Internet of Things, as well as enterprise content management, collaboration, text analytics, compliance and customer service. Read more.

DBTA’s Cool Companies in Cognitive Computing for 2018

DBTA and Big Data Quarterly presented the 2018 list of Cool Companies in Cognitive Computing to help increase understanding about the important area of information technology and how it is being leveraged in solutions and platforms to provide business advantages. Read more.

Datanami Readers’ Choice Award Winner

Sinequa won the Readers’ Choice – Best Big Data Product or Technology: Machine Learning category.

The Datanami Readers’ and Editors’ Choice Awards are determined through a nomination and voting process with input from the global big data community, as well as selections from the Datanami editors, to highlight key trends, shine a spotlight on technological breakthroughs and capture a critical cross-section of the state of the industry. Read more.

Looking forward to continuing the momentum for the rest of the year!

For more information on Sinequa’s cognitive search and analytics platform visit: https://www.sinequa.com/insight-platform-2/


3 Ways to Use Data to Fight Terrorism and Money Laundering

This article was originally published on Nextgov.

Cognitive search and analytics technologies are all about accessing the right information at the right time.

The increased severity of domestic security breaches due to terrorist threats and cybercrime poses a strategic challenge for federal and state security services. Strengthening human resources, now widely deployed around the world, is not enough to meet the challenge alone. Increasing efficiency and speed, monitoring the means of communication used by terrorists, and, above all, anticipating the lead-up to such actions are challenges that persist.


How organizations can evolve from data-driven to information-driven

This article was originally published on Information Management.

Over the last several years, data analytics has become a driving force for organizations wanting to make informed decisions about their businesses and their customers.
With further advancements in open source analytic tools, faster storage and database performance and the advent of sensors and IoT, IDC predicts the big data analytics market is on track to become a $200 billion industry by the end of this decade.