Preparing for Analytics 3.0




Summary: Analytics are not a new idea. The tools have been used in business since the mid-1950s. To be sure, there has been an explosion of interest in the topic, but for the first half-century of activity, the way analytics were pursued in most organizations didn’t change that much. Let’s call the initial era Analytics 1.0. This period, which stretched 55 years from 1954 (when UPS initiated the first corporate analytics group) to about 2009, was characterized by the following attributes:

- Data sources were relatively small and structured, and came from internal sources;
- Data had to be stored in enterprise warehouses or marts before analysis;
- The great majority of analytical activity was descriptive analytics, or reporting;
- Creating analytical models was a “batch” process often requiring several months;
- Quantitative analysts were segregated from business people and decisions in “back rooms”;
- Very few organizations “competed on analytics”; for most, analytics were marginal to their strategy.

It was in 2010 that the world began to take notice of “big data,” and we’ll call that the beginning of Analytics 2.0. Big data analytics were quite different from the 1.0 era in many ways. Data was often externally sourced and, as the big data term suggests, was either very large or unstructured. The fast flow of data meant that it had to be stored and processed rapidly, often with parallel servers running Hadoop. The overall speed of analysis was much faster. Visual analytics, a form of descriptive analytics, still crowded out predictive and prescriptive techniques. The new generation of quantitative analysts was called “data scientists,” and many were not content with working in the back room. Big data and analytics not only informed internal decisions, but also formed the basis for customer-facing products and processes.

Big data, of course, is still a popular concept, and one might think that we’re still in the 2.0 period. However, there is considerable evidence that organizations are entering the Analytics 3.0 world. It’s an environment that combines the best of 1.0 and 2.0: a blend of big data and traditional analytics that yields insights and offerings with speed and impact.
Although it’s early days for this new model, the traits of Analytics 3.0 are already apparent:

- Organizations are combining large and small volumes of data, internal and external sources, and structured and unstructured formats to yield new insights in predictive and prescriptive models (see the sketch after this list);
- Analytics are supporting both internal decisions and data-based products and services for customers;
- The Hadoopalooza continues, but often as a way to provide fast and cheap warehousing or persistence, and structuring of data before analysis; we’re entering a post-warehousing world;
- Faster technologies such as in-database and in-memory analytics are being coupled with “agile” analytical methods and machine learning techniques that produce insights at a much faster rate;
- Many analytical models are being embedded into operational and decision processes, dramatically increasing their speed and impact;
- Data scientists, who excel at extracting and structuring data, are working with conventional quantitative analysts, who excel at modeling it; the combined teams are doing whatever is necessary to get the analytical job done;
- Companies are beginning to create “Chief Analytics Officer” roles or equivalent titles to oversee the building of analytical capabilities;
- Tools that support particular decisions are being pushed to the point of decision-making in highly targeted and mobile “analytical apps”;
- Analytics are now central to many organizations’ strategies; a survey I recently worked on with Deloitte found that 44% of executives feel that analytics are strongly supporting or driving their companies’ strategies.

Even though it hasn’t been long since the advent of big data, I believe these attributes add up to a new era.
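To make the first and fifth traits a little more concrete, here is a minimal, hypothetical sketch in Python of what such a blended model can look like: structured fields and unstructured text are combined in one predictive pipeline, and the resulting score is used inside an operational decision step. The column names (spend, visits, comments), the churn flag, and the data are invented for illustration, and the scikit-learn pipeline shown is just one common way to assemble such features, not a prescribed Analytics 3.0 stack.

```python
# Hypothetical sketch: blend structured and unstructured data in one
# predictive model, then embed the score in an operational decision.
# All column names and data are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Small, structured "internal" fields plus unstructured customer comments.
history = pd.DataFrame({
    "spend":    [120.0, 40.0, 310.0, 15.0, 95.0, 220.0],
    "visits":   [14, 3, 22, 1, 8, 17],
    "comments": [
        "love the service", "too expensive for me", "great support team",
        "thinking of cancelling", "works fine", "renewed without issues",
    ],
    "churned":  [0, 1, 0, 1, 0, 0],   # what we want to predict
})

# One pipeline handles both kinds of input: scale the numeric columns,
# turn the free text into TF-IDF features, then fit a classifier.
features = ColumnTransformer([
    ("numeric", StandardScaler(), ["spend", "visits"]),
    ("text", TfidfVectorizer(), "comments"),
])
model = Pipeline([("features", features),
                  ("classifier", LogisticRegression(max_iter=1000))])
model.fit(history.drop(columns="churned"), history["churned"])

# "Embedded" use: score a single incoming record at the point of decision
# and turn the probability into an operational action.
new_customer = pd.DataFrame({
    "spend": [25.0], "visits": [2],
    "comments": ["thinking of cancelling soon"],
})
risk = model.predict_proba(new_customer)[0, 1]
action = "route to retention offer" if risk > 0.5 else "no action"
print(f"churn risk {risk:.2f} -> {action}")
```

The point of the sketch is the shape of the workflow rather than the toy data: the same trained pipeline object could sit behind a service or a mobile “analytical app” and return a score and a recommended action for each record as it arrives, which is what embedding a model in an operational process amounts to in practice.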