The Heat in The Trend Point: June 10 to June 14

Big data is usually mentioned at least a bit in The Trend Point, and last week was no exception. We noticed that many of the articles seemed to be pointing towards going beyond information retrieval.

Value must be added through the technologies of an information management, search, and analytics system. The article quoted in “Big Data without Value is Just a Lot of Data” states the following:

Relying solely on the information gathered by Big Data is like watching a group of people from a relatively far distance. It’s possible to see what they’re doing while they interact with each other and engage in conversations, but it’s virtually impossible to understand why they’re holding those conversations, what are they feeling that drives their actions, what is the emotion underpinning those conversations, and most importantly, how they’ll determine the future behaviour of each individual and the group at large.

We heard a similar sentiment repeated in “Data Visualization Key to Data Understanding” with an emphasis on the end goal being easy access to actionable information. This post relayed the following:

It’s typical for an analyst who has been working on a project for more than two months to show all the frequency or statistical results with a presentation deck consisting of hundreds of slides. Stop! A few charts with great data visualization are worth 1,000 slides. Actionable visualizations such as Price or Attrition Alerts can help sales teams better engage with customers instead of analyzing a plethora of reports. The key: reports should be easy to understand as well as recommend the next actionable step for business leaders.

In another post, we saw more grumbling that big data is a misnomer better represented as big content. We noted some of the thoughts that followed, namely the necessity of extracting value from unstructured content, in the article “Big Data or Big Content”:

Unstructured content is often included almost as an afterthought, with extraction and enrichment applied on-the-fly, from scratch on a case-by-case basis. This undermines the potential of Big Data in several ways. It raises the cost of incorporating unstructured content while also increasing the opportunities for the introduction of inconsistencies and errors reducing the quality of the final product. Most importantly, the ad hoc approach also reduces the potential of Big Data by obscuring the extent of available raw materials.

It is refreshing to see that these media sources are no longer discussing simply mashing up raw data from different sources. The important piece is fusion of structured and unstructured data, and that comes through strong analytics that can detect what belongs to the same semantic category. A system like Unified Information Access from Sinequa can then “fuse” results with other data, such as geographic position or customer history.
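As a toy illustration of the fusion idea (not Sinequa's actual product, whose internals are proprietary), one can picture structured records and facts extracted from unstructured text being grouped under a shared semantic key. Everything below, including the sample data and field names, is invented for illustration:

```python
# Hypothetical sketch: "fusing" structured records with facts extracted
# from unstructured text, keyed on a shared semantic category (here, a
# customer name). All data and field names are invented.
from collections import defaultdict

structured = [
    {"customer": "Acme Corp", "region": "EMEA", "lifetime_value": 120000},
]

# Facts an entity-extraction step might pull out of emails or reports.
extracted = [
    {"customer": "Acme Corp", "fact": "mentioned churn risk in May email"},
    {"customer": "Acme Corp", "fact": "asked about volume pricing"},
]

# Merge both kinds of records under the shared key.
fused = defaultdict(dict)
for record in structured:
    fused[record["customer"]].update(record)
for record in extracted:
    fused[record["customer"]].setdefault("facts", []).append(record["fact"])

print(fused["Acme Corp"])
```

The point of the sketch is only that the hard work lives upstream: deciding that two records belong to the same semantic category is the analytics problem; the merge itself is trivial once that decision is made.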

Jane Smith, June 19, 2013

Sponsored by, developer of Beyond Search


One thought on “The Heat in The Trend Point: June 10 to June 14”

  1. Big Data – Value – Visualization
    It goes without saying that no one in his right mind would assemble large volumes of data without value. (Does it? I hope it does.) That obviates the question whether we should talk about “Big Content” rather than “Big Data”: you don’t need to distinguish “Big Content” and “Big Data” to imply value.
    The value is nevertheless different for different populations, and it varies over time. That is why we should continue to talk about data as the raw material with many facets of value.

    On the other hand, I don’t think we want to visualize “data”: we want to visualize concepts that emerge from data via analysis. No wonder that analysts trying to present “data” end up with “presentation decks consisting of hundreds of slides”. If you are interested in a particular “angle” of the big data that you collected over time, you want to grasp the relevant aspects concerning that angle, and not “visualize data”.
    So the “value chain” could be
    Big Data -> Analysis -> Visualization -> Value.
    But most often, the process of value extraction involves iterative man-machine interaction:
    Big Data -> Analysis -> Visualization -> Analysis -> Visualization -> Value.
    And keep in mind that value changes over time for each user category. No use mining gold and storing it in a vault. When you want to take it out of the vault it might have turned into lead. Value needs to be mined each time you look for it. Why is that not just a philosophical question? Because you need a high-performance architecture to extract value from (very) large amounts of data whenever you need it. No compromise on analytic depth for dealing with lots of requests on Big Data!
    Hans-Josef Jeanrond
