Intersection Between Big Data Analytics and Software Development

Collecting raw software project data and analyzing it to create actionable, easily digested KPIs has long been valuable. Yet many firms struggle to extract, analyze, and organize that data into scorecards, dashboards, and reports, and only a few have managed to implement an effective, streamlined reporting system.

Many teams assume that because their various reports provide interesting information, they have gone the extra mile. In reality, time-consuming processes such as manual data collection and report generation bog them down so thoroughly that they cannot envision anything beyond the next report. Something as basic as creating an end-of-day testing report for management remains a tedious, time-consuming manual task.

This hurts productivity across the board. Not only are testers mired in generating manual reports, but in many cases the reports aren’t comprehensive enough to provide meaningful decision-making information. This can cause reverberations across the enterprise.

If reports aren’t sufficiently insightful, then management might not realize defect rates are rising. No one takes action to pinpoint and correct the problem, and defects are discovered too late for developers to fix them before the release goes into production. User frustration ensues, and no one is happy.

Although producing accurate, accessible information such as defect reports won’t guarantee software quality, without it quality is nearly impossible to achieve. To improve the situation, numerous firms have developed functionality, or incorporated it into existing platforms, that tracks and organizes (at a basic level) data related to software team activities.

Now, teams frustrated with the tedium and complexity of reporting are finding help from an unexpected quarter—technologies developed to harness complex datasets that traditional data-processing applications cannot effectively process.

Big Data Versus Software Data

Processing datasets of more than five petabytes—especially those incorporating unstructured data from myriad sources—requires incredibly fast processors, paired with sophisticated analytics software to identify the patterns and associations that provide meaningful feedback. Also, organizations must have a mechanism to visualize this information.

A cadre of innovative enterprises have developed the technologies to accomplish these goals and wrapped them into analytics platforms that are already revolutionizing fact-based decision making in other areas. So, what does this have to do with reporting for software activities?

Leadership in progressive, quality-focused organizations recognized that the challenges of harnessing and visualizing big data closely parallel the hurdles experienced by software teams. Although software project activity data might appear to be a world away from big data, it shares a few notable similarities with unstructured big data: it is not natively easy to access, it is not organized for analysis and reporting, and it is often unstructured as well.

The obstacles previously preventing big data from being useful are similar to those keeping software teams mired in inadequate, time-consuming, manual reporting. As such, the requirements for developing a functional, automated analytics and visualization/reporting solution for software project data are similar, as well.

Envisioning a Better System

If only organizations could harness these technologies, they could drive near-real-time analysis and reporting to support informed decision making during the software development and testing process. This information would be especially valuable for continuous integration and agile efforts, as well as for DevOps teams having a hard time achieving integrated communication and collaboration—a major stumbling block in adopting this approach.

However, as with big data analytics, it wouldn’t be enough to analyze one data stream—from one tester or even one team. Similarly, the results wouldn’t be reliable or conclusive if team members were to cherry-pick the metrics that were easy to pull. To provide full value, the solution would need to automatically extract all available data from tools like HPE Application Lifecycle Management or JIRA Software (preferably both), as well as from any other tools in use.
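
As a rough illustration, an extractor for one of these tools might look like the sketch below, which pages through defect records using JIRA's REST search endpoint. The instance URL, credentials, project key, and field list are placeholders, and a collector for HPE ALM would follow the same pattern against that tool's own REST API.

```python
# Hypothetical sketch: pull defect records from a JIRA project via the
# REST search endpoint. URL, credentials, and project key are placeholders.
import requests

JIRA_URL = "https://jira.example.com"   # placeholder instance
AUTH = ("reporting-bot", "api-token")   # placeholder credentials

def fetch_defects(project_key="APP", batch_size=100):
    """Page through every issue of type Bug for one project."""
    issues, start = [], 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={
                "jql": f"project = {project_key} AND issuetype = Bug",
                "fields": "status,priority,created,resolutiondate",
                "startAt": start,
                "maxResults": batch_size,
            },
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start += batch_size
        if start >= page["total"]:
            return issues
```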

The consolidated data extraction would ensure sufficient information to cover everything: teams, projects, and other activities. Then, there would be a means to normalize the data to remove inconsistencies between values, fields, and other disparate naming conventions prior to analytics and visualization. The resulting KPIs would be both broad and deep, ranging from the number of tests that passed, failed, or were blocked in a day to defect distribution by severity, status, or root cause, defect trending, and beyond.
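
A minimal sketch of that normalize-then-aggregate step, under assumed field names and status labels (each tool names these differently), might look like this:

```python
# Hypothetical sketch: map tool-specific records onto one shared vocabulary,
# then compute simple daily KPIs. All field and status names are assumptions.
from collections import Counter

STATUS_MAP = {
    "Passed": "passed", "PASS": "passed",
    "Failed": "failed", "FAIL": "failed",
    "Blocked": "blocked", "No Run": "blocked",
}

def normalize_run(record):
    """Reduce a raw test-run record to the fields the KPIs need."""
    return {
        "day": record["executed_on"][:10],                # assumes ISO timestamp
        "status": STATUS_MAP.get(record["status"], "other"),
    }

def daily_test_kpis(test_runs):
    """Count passed, failed, and blocked runs per day across all tools."""
    counts = Counter()
    for run in map(normalize_run, test_runs):
        counts[(run["day"], run["status"])] += 1
    return counts

def defect_distribution(defects):
    """Distribution of defects by severity, e.g. for a dashboard chart."""
    return Counter(d.get("severity", "unknown").lower() for d in defects)
```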

At the visualization end, users would be able to consume a variety of formats (reports, dashboards, and scorecards, for instance) and create their own views. Furthermore, users would be able to see high-level results or drill down into the most detailed specifics.

A handful of enterprises are working at varying levels to help software teams better analyze and visualize their test activity data using big data technologies.

Developers and testers can identify workload peaks and troughs that might cause problems—and track completion statuses across the entire software lifecycle. 

Project managers can spot problems earlier and reallocate resources as needed to maintain quality. They can also monitor trends to stop problems in their tracks. 

Executives can track project health and maintain visibility across the entire project portfolio.

It’s an exciting time, especially for quality-focused organizations such as ours, and we look forward to seeing how these technologies will allow software project (and quality) management to evolve.

New Horizons for Software Delivery

Processing and visualizing software project data in near-real time has incredible potential to facilitate decisions that expedite release cycles and boost quality. However, it isn’t the only way big data can make its way into software development and testing—and even quality assurance. A powerful synergy is building that will enable software teams to act more quickly and proactively upon a wide variety of data, including the big data that drove the technological transformation we described earlier.

Monitoring unstructured data feeds such as social media has already become commonplace among customer satisfaction teams. If a key influencer with millions of followers—a celebrity, for instance—pronounces he or she doesn’t like a product, the effect will be felt at a retailer’s cash registers within days, if not sooner. Near-real-time big data analytics are enabling companies around the globe to learn of and address user dissatisfaction issues right away rather than weeks or months later.

It shouldn’t be long before this type of information makes its way into software user experience efforts—because it must. If a power celebrity such as Taylor Swift tells four million fans that she doesn’t like a new music app because it lacks a certain feature, it will do the organization little good to integrate that feature a year later. The app will be dying or dead by then. 

It’s entirely possible that “user activity” monitoring tools could soon be integrated with production monitoring platforms, enabling firms to receive near-real-time information, not only when users are experiencing functionality problems with their apps but also when they are complaining to their friends about them on Facebook.

Such functionality will be a final piece in the quality puzzle, because savvy organizations will already have used the type of analytics we described earlier to better manage their software processes, minimizing defects and watching user satisfaction climb.
