The following is an excerpt of a thesis research proposal submitted for the 2016 Teradata Partners Conference and the University of North Carolina at Greensboro. Throughout 2018, I will be posting and presenting findings from this research.
With faster networks, the growth of Internet-enabled devices, and the integration of computer systems into the Internet of Things (IoT), businesses continue to build data collection systems for downstream billing, reporting, analytics, and warehousing. This research develops performance and predictive analytics for the IT management of IoT data streams, using regression modeling, probability density function modeling, and statistical process control. Results include system sizing, statistical process control, health checks, and endpoint communication monitoring. Recommendations include quantitative and qualitative analysis of systems used in utilities and cloud services that helps businesses determine scale, performance, and availability.
PROBLEM AND MOTIVATION
Rapid advancements in information technology and the proliferation of data have created many opportunities in system performance measurement of multi-tiered architectures. These architectures combine physical, virtual, web, and cloud platforms into a complete end-to-end solution. With the advent of Internet-enabled devices and IoT, information technology will become a crucial part of collecting, aggregating, and analyzing data from new endpoints. IT and business will need to align their corporate practices and strategies around IoT. IT managers, in turn, will need to rely on analytics based on system performance models that demonstrate system capabilities and satisfy technical and business requirements.
The industry focus of this project is utilities with multi-tiered enterprise solutions and cloud-based services. These industries rely heavily on analytical systems to improve revenue streams. With the large volume of data collected from smart meters, IoT sensors, and wearables, maintaining well-performing enterprise IT solutions that take advantage of this data will become critical.
One of the biggest challenges in designing and engineering new IT solutions is determining whether acquired assets are sized and optimized for changing demand. An assessment of IT budgets must determine the appropriate return on investment for acquiring or provisioning new resources. This research will propose methods to analyze resource metrics using statistical and advanced analytical techniques, and will show how businesses can communicate the resulting conclusions.
BACKGROUND AND RELATED WORK
Performance metrics are collected on everything from disks to processors to memory. Large hardware and software companies such as IBM, Cisco, and Oracle publish best practices for tuning hardware with the configurations that yield optimal performance and capacity. Related work in performance engineering and industrial systems engineering has been practiced throughout the history of manufacturing, using statistical control to reduce deviations in engineering processes for better manufacturing quality and efficiency. With new cloud-services trends, evaluating and modeling performance overhead has been studied to provide new benchmarks for evaluating virtualization platforms.
The latest effort is to provide multi-parameter models that extend beyond the basic metrics of CPU, memory, and storage. For multi-tiered architectures with time-dependent variables, this project examines granular areas of those resources, such as logical and physical partitioning of resources, database engine performance metrics, and physical and virtual versus native memory resource management. The number of potential inputs exceeds current modeling techniques. Mathematical and statistical testing includes mostly non-linear methods, such as exponential, logarithmic, and higher-order polynomial models for fit testing with hypothesis testing, along with logistic regression methods for predictability; the normal probability distribution is used for statistical process control. This project also demonstrates trending on over one terabyte of system performance data for segmentation, predictability, variable worth, and associations.
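As a rough illustration of the fit-testing idea, the sketch below compares a polynomial and a logarithmic least-squares model against a small set of made-up workload measurements and keeps whichever has the lower squared error. The metric names and data values are hypothetical, not drawn from the research data set.

```python
import numpy as np

# Hypothetical CPU-utilization samples (%) vs. concurrent sessions;
# both series are illustrative placeholders for real collected metrics.
sessions = np.array([10, 20, 40, 80, 160, 320], dtype=float)
cpu_util = np.array([5.1, 9.8, 18.0, 33.5, 58.0, 91.0])

def sse(pred, actual):
    """Sum of squared errors, used here as a simple goodness-of-fit score."""
    return float(np.sum((pred - actual) ** 2))

# Candidate 1: second-order polynomial model fit by least squares.
poly_coef = np.polyfit(sessions, cpu_util, 2)
poly_sse = sse(np.polyval(poly_coef, sessions), cpu_util)

# Candidate 2: logarithmic model cpu = a*ln(sessions) + b, also least squares.
log_coef = np.polyfit(np.log(sessions), cpu_util, 1)
log_sse = sse(np.polyval(log_coef, np.log(sessions)), cpu_util)

# Keep the candidate with the smaller residual error.
best = "polynomial" if poly_sse < log_sse else "logarithmic"
print(best, round(poly_sse, 2), round(log_sse, 2))
```

In practice, a model comparison like this would also use hypothesis tests and held-out data rather than in-sample error alone, as the proposal's fit-testing and predictability goals suggest.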
APPROACH AND UNIQUENESS
IT system performance data is collected from large multi-tiered and cloud systems. Data is also collected from multiple sources, including network monitoring tools, database monitoring tools, web logs, and file system logs, as well as data from sensing and Internet-enabled devices.
Approaches used to analyze the data include benchmarks, control charts, cluster and segmentation analysis, least-squares fitting, best-fit testing, predictive analytics, and regression modeling.
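The control-chart approach can be sketched as a Shewhart-style individuals chart: a center line at the sample mean and control limits at plus or minus three standard deviations, assuming approximately normal data. The response-time values below are invented for the sketch, not taken from the project's measurements.

```python
import statistics

# Illustrative response-time samples (ms) from one monitored tier;
# values are assumptions for the sketch, not real collected data.
samples = [102, 98, 105, 110, 95, 101, 99, 104, 97, 103]

# Center line and 3-sigma control limits (normality assumed).
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

# Flag any observation outside the control limits as out of control.
out_of_control = [x for x in samples if x > ucl or x < lcl]
print(round(mean, 1), round(lcl, 1), round(ucl, 1), out_of_control)
```

A production implementation would typically estimate sigma from moving ranges and add run rules, but the same out-of-limits test underlies the process-control application named above.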
The main applications include:
- Scaling, sizing and provisioning IT resources
- Process control
- Distributed parallel processing and control
- IT system modeling
- Endpoint Communication Analysis
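For the endpoint communication analysis item, one simple monitoring check is counting missed reporting intervals between consecutive reads from a device. The timestamps and the 15-minute reporting interval below are assumptions made for this sketch, not parameters from the project.

```python
from datetime import datetime, timedelta

# Hypothetical meter-read timestamps for one endpoint; the endpoint is
# assumed to report every 15 minutes (both are illustrative choices).
reads = [
    datetime(2016, 5, 1, 8, 0),
    datetime(2016, 5, 1, 8, 15),
    datetime(2016, 5, 1, 8, 30),
    datetime(2016, 5, 1, 9, 30),   # one-hour gap: three reads missed
    datetime(2016, 5, 1, 9, 45),
]
expected = timedelta(minutes=15)

# For each pair of consecutive reads, count how many expected intervals
# elapsed minus the one read that actually arrived.
missed = sum(int((b - a) / expected) - 1 for a, b in zip(reads, reads[1:]))
print(missed)
```

Aggregating a count like this per endpoint over time gives a basic health-check signal for the communication-monitoring results described in the abstract.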
Understanding the benefits of data analysis for data management and IT systems will help accelerate the collection and analysis of sensor data in high-scale deployments.