

Analytics Magazine

Realizing Value: Building a high-performance big data analytics organization

September/October 2014

“From Twitter feeds to photo streams to RFID pings, the big data universe is rapidly expanding, providing unprecedented opportunities to understand the present and peer into the future. Tapping its potential while avoiding its pitfalls doesn’t take magic; it takes a roadmap.”  — Chris Berdik, author of “Mind over Mind”


By Pramod Singh, Ritin Mathur and Srujana H.M.

The digital universe is expected to expand tenfold between 2013 and 2020, from 4.4 trillion gigabytes to 44 trillion gigabytes a year [1], and this massive amount of data will significantly impact global industries. In this new context, analyzing big data is not simply about managing more, or more diverse, data. Rather, it is about asking new questions, building new capabilities and technological environments, and devising holistic communication strategies that address the nuances and complexities of data at this volume.

Numerous opportunities, as well as challenges, are associated with big data. Real-time monitoring and forecasting of events that affect business performance or operations can yield significant time savings, while Hadoop clusters running on commodity hardware can deliver significant cost savings over traditional analytical infrastructure.

While companies with business models predicated on the Internet have been the pioneers of developing big data analytics, other firms with more established non-Internet-based models are also rapidly adopting big data analytics practices, typically in response to consumer and technology trends. With the emphatic big data explosion, it becomes imperative for organizations to assess and adopt big data analytics practices into their decision-making process.

Big Data Tools and Technologies

When considering an organization’s needs for big data tools and technologies, it is useful to think of them in four dimensions.

1. Structured data management: Tools for managing high-volume structured data (for instance, clickstream data or machine/sensor data) are an important part of any big data technology stack.

2. Unstructured data management: The explosion in data volumes has been, to a large extent, a result of the rise in human information, which typically comprises social media data, videos, pictures and even text data from customer support logs. Tools and technologies to manage, analyze and make sense of this data stream are critical for building understanding and for correlating it with structured data.

3. Analytics environment: Combining both structured and unstructured data, at scale, requires specialized tools and technologies to be able to merge these data sets and to be able to run analytical algorithms. Concepts such as in-database and in-memory analytics have greatly enhanced the ability to use large data sets for analysis at near real-time speeds and to combine the analytics environment within, for example, structured data management tools.

4. Visualization: Intuitive representation of data and analysis results is a critical final component of the big data technology stack; it speeds the rate at which results are understood and insights are derived. Tools and technologies that allow for quick drill-down, investigative analysis are now pervasive and easily integrated into the analytics stack [2].

Most tools designed for data mining or conventional statistical analysis are not optimal for large data sets. A common hurdle for analytics organizations trying to leverage big data analytics is the availability of big data technologies and platforms. Organizations usually start with open source technologies to gain experience and expertise, and the big data analytics space, thankfully, provides many open source options.

For example, Hadoop is a good starting place for managing large data at scale. Combining it with NoSQL databases such as HBase or MongoDB provides a good first step toward handling large data sets. Hadoop ecosystem tools such as Hive, Pig and Sqoop let data scientists get a feel for querying and analyzing large data sets. R is an open source programming language and software environment designed for statistical computing and visualization [3]. For visualization, tools like d3.js allow for creative and varied visualizations that help data scientists present results intuitively.
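The canonical first exercise for getting a feel for Hadoop-style processing is a MapReduce word count. The sketch below uses plain Python stand-ins for the map and reduce phases; it is illustrative only, not actual Hadoop code, and the function names are our own:

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop mapper would
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def reduce_phase(pairs):
    # Sum the counts per word, as a Hadoop reducer would
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    docs = ["big data tools", "big data platforms"]
    print(reduce_phase(map_phase(docs)))
```

On a real cluster the framework shuffles the mapper output across nodes before the reduce step, but the mapper/reducer contract is exactly this simple.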

The challenge with using open source technologies, though, is two-fold. First, integrating them with a legacy enterprise stack is not easy, and most IT organizations do not yet allow for easy integration. This integration quickly becomes critical when one moves beyond experimentation into solving real-world business problems that require multi-dimensional data, some of which might reside in legacy enterprise data warehouse (EDW) environments [4]. Second, while strong user communities exist around open source technologies, the learning curve can be longer given their often less-than-user-friendly nature. Learning on open source requires a certain level of existing expertise, and beginners may find an open source-based learning approach harder.

Integrated Big Data Analytics Platform

Most analytics for business use cases rely on bringing together diverse data sets to analyze. With big data, these data sets are no longer limited to just structured data; they increasingly leverage unstructured data as well. This calls for a big data environment that allows data scientists to work seamlessly across data streams.

An integrated environment provides a quicker, more scalable approach to analytics and gives data scientists a user-friendly setting in which to learn new skills and run analytics on large data sets. HP HAVEn, for example, brings together Hadoop, Autonomy, Vertica, HP Enterprise Security and any number of applications.
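As a simplified illustration of working across data streams, structured transaction records can be enriched with a signal mined from unstructured support-log text. This is a minimal sketch in plain Python with made-up records and keywords, standing in for what an integrated platform would do at scale:

```python
# Structured records: customer purchase data
purchases = [
    {"customer": "C1", "spend": 120.0},
    {"customer": "C2", "spend": 45.0},
]

# Unstructured records: free-text support logs
support_logs = {
    "C1": "device overheating, asked for refund",
    "C2": "happy with the product",
}

def keyword_flag(text, keywords=("refund", "overheating")):
    # Crude text-mining step: flag logs mentioning churn-risk keywords
    return any(k in text.lower() for k in keywords)

# Merge step: attach the unstructured-data signal to each structured record
enriched = [
    {**p, "at_risk": keyword_flag(support_logs.get(p["customer"], ""))}
    for p in purchases
]
print(enriched)
```

In practice the text side would involve real natural language processing and the join would run in a distributed engine, but the pattern of deriving features from unstructured data and merging them onto structured keys is the same.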

Building Organizational Skills

Providing big data technologies and platforms sets the baseline for an organization. The next step is a focused, organization-wide effort to build skills in these technologies.

In contrast to traditional analytical organizations, big data organizations need to augment existing analytical staffs with data scientists who possess a higher level of technical capabilities, as well as the ability to manipulate big data technologies. These capabilities might include natural language processing and text mining skills; video, image and visual analytics experience; as well as the ability to code in scripting languages such as Python, Pig and Hive. A data scientist in a big data analytics organization typically needs skills in three core areas: 1. business intelligence-related skills to get to the data quickly; 2. statistics and analytical techniques to analyze it; and 3. business skills to interpret analysis results in business terms.

The time an analytics organization has to respond to a business need is shrinking, creating demand for all three skill sets in one person, a combination that is hard to find.

To guide skill development among the existing analyst community, HP developed competency centers aligned to each of the key technologies – Vertica for structured data analytics, Autonomy for unstructured data analytics and Hadoop as a data lake. The competency centers foster focused competency development through collaboration, training and live projects. Composed of data scientists from across the organization, they created a skills framework and a big data curriculum to guide the skill development effort.

Re-thinking Business Analytics

With the right tools, technologies and skill sets in place, an organization's next step is deploying big data analytics across different application areas. A challenge some analytics organizations face is getting their teams to think about how big data analytics applies to their business areas. Given the relative maturity of analytics solutions across most domains, teams sometimes have difficulty assessing how big data could help.

At HP, our belief is that big data analytics impacts analytics in two ways: 1. helping answer existing/legacy questions in newer ways and, 2. addressing a range of newer questions and decisions organizations face today.

An example of the existing/legacy question is the area of segmentation, a well-understood area in marketing and customer analytics. The challenge for most business analytics teams, though, is to look at segmentation with the fresh lens of big data analytics: How can big data help make segmentation better?
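One concrete answer is to feed big data-derived features, such as clickstream intensity, into the same clustering step that traditionally used only purchase variables. Below is a minimal sketch of k-means clustering (Lloyd's algorithm) in pure Python on invented two-feature customer data; a production system would use a distributed or in-database implementation:

```python
import random

def kmeans(points, k, iters=20, seed=42):
    # Lloyd's algorithm: assign each point to its nearest centroid,
    # recompute centroids as cluster means, repeat a fixed number of rounds.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

if __name__ == "__main__":
    # Hypothetical features per customer: (monthly spend, monthly site visits)
    customers = [(10, 1), (12, 2), (11, 1), (90, 20), (95, 22), (88, 19)]
    centroids, clusters = kmeans(customers, k=2)
    print(centroids)
```

The fresh-lens question is then which behavioral features, unavailable before big data, sharpen the segments.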

How big data analytics can address new questions is perhaps best exemplified by social media analytics and machine/sensor data analytics. These diverse areas are important aspects of business decision-making, impacting functions such as marketing, manufacturing, customer service and R&D. Both require new analytical approaches to manage the large streams of data that get generated.
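A flavor of the streaming computation sensor data demands: flag readings that deviate sharply from a running mean. The sketch below uses hypothetical temperature readings and a simple threshold in plain Python, in place of a production stream processor:

```python
def detect_anomalies(readings, window=5, threshold=2.0):
    # Flag readings that deviate from the mean of the previous
    # `window` values by more than `threshold` units.
    anomalies = []
    for i, r in enumerate(readings):
        history = readings[max(0, i - window):i]
        if history:
            mean = sum(history) / len(history)
            if abs(r - mean) > threshold:
                anomalies.append((i, r))
    return anomalies

if __name__ == "__main__":
    temps = [20.1, 20.3, 20.2, 27.5, 20.4, 20.2]
    print(detect_anomalies(temps))  # the spike at index 3 is flagged
```

Real deployments would run this logic continuously over unbounded streams and use more robust statistics, but the shape of the problem, per-event scoring against rolling state, is the same.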


To realize value from big data analytics, organizations need to integrate technology, tools and practices with existing analytics ecosystems. The choices of which tools to select and which skills to develop require careful consideration, as they have a long-term impact on an organization's ability to integrate big data analytics.

Analytics organizations should start by considering four key questions:

  1. What technology and tools are needed?
  2. What platform is best for integrating these technology choices with each other, as well as with legacy environments?
  3. Which skills do we need to develop and how do we develop them?
  4. How do we integrate all of the above into the business decision-making process?

These questions require senior management time and attention. Addressing these issues comprehensively can reduce the barriers to success for analytics organizations looking to incorporate big data analytics.

Pramod Singh is director of Digital and Big Data Analytics at Hewlett-Packard (HP) and a member of INFORMS. He has a Ph.D. in mathematics from the University of Arkansas and an MBA in marketing. Ritin Mathur is a senior manager of Big Data Analytics at HP. Srujana H.M. is a data scientist working on big data technology platforms at HP. All three are based in Bangalore, India.

Notes & References

1. Source:

2. “Big Data Meets Big Data Analytics,” white paper, SAS Institute Inc., 2012.

3. Source:, last accessed on July 3, 2014.

4. Philip Russom, “Big Data Analytics,” TDWI Best Practices Report, Fourth Quarter, 2011.

5. Thomas H. Davenport and Jill Dyche, “Big Data in Big Companies,” paper, International Institute of Analytics, May 2013.


