Comparing Machine Learning Models in Determining Credit Worthiness for Bank Loans

The R language has a number of libraries for both supervised and unsupervised machine learning. These include techniques such as linear and logistic regression, decision trees, random forests, and generalized boosted regression modeling, among others. I strongly recommend learning how these models work and how they can be used for predictive analytics.

Part of the Machine Learning process includes the following:

  1. Sample: Create a sample set of data either through random sampling or top tier sampling.  Create a test, training and validation set of data.
  2. Explore: Use exploratory methods on the data.  This includes descriptive statistics, scatter plots, histograms, etc.
  3. Modify:  Clean, prepare, impute or filter data.  Perform cluster analysis, association and segmentation.
  4. Model:  Model the data using logistic or linear regression, neural networks, and decision trees.
  5. Assess:  Assess the model by comparing it to other model types and against real data. Determine how close your model is to reality.  Test the data using hypothesis testing.

When creating machine learning models for any application, it is wise to follow a process flow such as the one outlined above.

In the following example, we use machine learning to determine the credit worthiness of prospective borrowers for a bank loan.

The loan data consists of the following inputs:

  1. Loan amount
  2. Interest rate
  3. Grade of credit
  4. Employment length of borrower
  5. Home ownership status
  6. Annual Income
  7. Age of borrower

The response variable, used to predict the default rate, is:

  1. Loan status (0 or 1).

After loading the data into R, we partition the data into training and test sets.

loan <- read.csv("loan.csv", stringsAsFactors = TRUE)

str(loan)

## Split the data into 70% training and 30% test datasets

library(rsample)
set.seed(634)

loan_split <- initial_split(loan, prop = 0.7)

loan_training <- training(loan_split)
loan_test <- testing(loan_split)

Create a balanced training dataset with the ROSE library, which combines over- and under-sampling so that defaults and non-defaults are roughly equally represented in the training data.

str(loan_training)

table(loan_training$loan_status)

library(ROSE)

loan_training_both <- ovun.sample(loan_status ~ ., data = loan_training, method = "both", p = 0.5)$data

table(loan_training_both$loan_status)

Build a logistic regression model and a classification tree to predict loan default.

loan_logistic <- glm(loan_status ~ . , data = loan_training_both, family = "binomial")

library(rpart)

loan_ctree <- rpart(loan_status ~ . , data = loan_training_both, method = "class")

library(rpart.plot)

rpart.plot(loan_ctree, cex=1)

Build the ensemble models (random forest, gradient boosting) to predict loan default.

library(randomForest)

loan_rf <- randomForest(as.factor(loan_status) ~ ., data = loan_training_both, ntree = 200, importance=TRUE)

plot(loan_rf)

varImpPlot(loan_rf)

library(gbm)

Build the gradient boosting model and summarize it.

loan_gbm <- gbm(loan_status ~ ., data = loan_training_both, n.trees = 200, distribution = "bernoulli")
summary(loan_gbm)

Use the ROC (receiver operating characteristic) curve and compute the AUC (area under the curve) to check the specificity and sensitivity of the models.

# Step 1. Predicting on test data

predicted_logistic <- loan_logistic %>% 
  predict(newdata = loan_test, type = "response")

predicted_ctree <- loan_ctree %>% 
  predict(newdata = loan_test, type = "prob")

predicted_rf <- loan_rf %>% 
  predict(newdata = loan_test, type = "prob")

predicted_gbm <- loan_gbm %>% 
  predict(newdata = loan_test, n.trees = 200, type = "response")
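The roc() calls below expect each model's fitted probability to be a column of loan_test. A short bridging step (a sketch; the .fitted_* column names simply match the ones used in the roc() calls, and the positive class is assumed to be the second column of the tree and forest probability matrices):

# Step 2. Attach the predicted probabilities to the test set

library(dplyr)

loan_test <- loan_test %>% 
  mutate(.fitted_logistic = predicted_logistic,
         .fitted_ctree    = predicted_ctree[, 2],
         .fitted_rf       = predicted_rf[, 2],
         .fitted_gbm      = predicted_gbm)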

# Step 3. Create ROC and Compute AUC

library(cutpointr)

roc_logistic <- roc(loan_test, x= .fitted_logistic, class = loan_status, pos_class = 1 , neg_class = 0)
roc_ctree<- roc(loan_test, x= .fitted_ctree, class = loan_status, pos_class = 1 , neg_class = 0)
roc_rf<- roc(loan_test, x= .fitted_rf, class = loan_status, pos_class = 1 , neg_class = 0)
roc_gbm<- roc(loan_test, x= .fitted_gbm, class = loan_status, pos_class = 1 , neg_class = 0)

plot(roc_logistic) + 
  geom_line(data = roc_logistic, color = "red") + 
  geom_line(data = roc_ctree, color = "blue") + 
  geom_line(data = roc_rf, color = "green") + 
  geom_line(data = roc_gbm, color = "black")

auc(roc_logistic)
auc(roc_ctree)
auc(roc_rf)
auc(roc_gbm)

These help you compare and score which model works best for the data in the test set. Looking at the ROC chart, the gradient boosting model has the best performance of all the models, as its curve is closest to the top-left corner and its AUC is closest to 1.00. Classifiers whose curves approach the top-left corner, where sensitivity is 1.00 and the false positive rate (1 - specificity) is near 0.00, have the best performance.

Creating Twitter Sentiment Association Analysis using the Association Rules and Recommender System Methods

Contextual text mining methods extract information from documents, live data streams and social media.  In this project, thousands of tweets by users were extracted to generate  sentiment analysis scores.

Sentiment analysis is a common text classification tool that analyzes streams of text data in order to ascertain the sentiment (subjective context) of the text, which is typically classified as positive, negative or neutral.

In the R sentiment analysis engine our team built, the sentiment score has a range of -5 to 5. Numbers within this range indicate the degree of sentiment.

Sentiment    Score
Negative     -5
Neutral       0
Positive      5

Sentiment scores are determined by a text file of key words and scores called the AFINN lexicon.  It’s a popular, simple lexicon used in sentiment analysis.

New versions of the file are released in source repositories and contain over 3,300 words, each with a score based on its level of positivity or negativity.
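As a rough sketch of how AFINN scoring can be applied in R with the tidytext package (the data frame tweets and its columns id and text are illustrative names, not our production engine; get_sentiments("afinn") pulls the lexicon via the textdata package):

library(dplyr)
library(tidytext)

afinn <- get_sentiments("afinn")            # AFINN lexicon: word + value (-5 to 5)

tweet_scores <- tweets %>%
  unnest_tokens(word, text) %>%             # split each tweet into words
  inner_join(afinn, by = "word") %>%        # keep only words found in the lexicon
  group_by(id) %>%
  summarise(sentiment_score = sum(value))   # net sentiment per tweet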

Twitter is an excellent source of data for sentiment analysis.

Below is an example of exploring the sentiment scores of retweets filtered on the keywords:

  1. Trump
  2. Biden
  3. Republican
  4. Democrat
  5. Election

The data was created using a sentiment engine built in R.  It is mostly based on the political climate in the United States leading up to and after the 2020 United States election.

Each bubble’s size represents the follower count of the user who retweeted, which gives a sense of the influence (impact) of that user. The Y-axis is the sentiment score; the X-axis represents the retweet count for the named account.

“Impact” is a measure of how often a twitter user is retweeted by users with high follower counts.
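A bubble chart like the one described can be sketched with ggplot2 along these lines (the data frame retweets and its columns retweet_count, sentiment_score, followers_count and retweeted_screen_name are assumed names for illustration):

library(ggplot2)

ggplot(retweets,
       aes(x = retweet_count, y = sentiment_score,
           size = followers_count, label = retweeted_screen_name)) +
  geom_point(alpha = 0.5) +                       # bubble size ~ follower count (impact)
  geom_text(check_overlap = TRUE, size = 3) +
  labs(x = "Retweet count", y = "Sentiment score", size = "Followers")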

Using the Apriori Algorithm, you can build a sentiment association analysis in R. See my article on Apriori Association Analysis in R.

Applying the Apriori algorithm using the single format, we assigned the sentiment score as the transaction and retweeted_screen_name as the item ID.

This measures the association between highly retweeted accounts based on their sentiment scores (negative, neutral, positive).  Support is the minimum support for an itemset; minimum support was set to 0.02.

The majority of the highly retweeted accounts had high-confidence associations based on sentiment values. We then focused on the highest-confidence associations that provided lift above 1.  After removing redundancy, we were able to see the accounts whose sentiment values are strongly associated with one another.
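A hedged sketch of this step with the arules and arulesViz packages (the data frame tweet_sentiment and its columns are illustrative, and the confidence threshold of 0.8 is an assumption; only the 0.02 support value comes from our analysis):

library(arules)
library(arulesViz)

# Single (long) format: one row per transaction id / item pair,
# with sentiment_score as the transaction and the account as the item
trans <- as(split(tweet_sentiment$retweeted_screen_name,
                  tweet_sentiment$sentiment_score), "transactions")

rules <- apriori(trans, parameter = list(support = 0.02, confidence = 0.8))

rules <- rules[!is.redundant(rules)]           # drop redundant rules
inspect(head(sort(rules, by = "lift"), 10))    # strongest associations by lift
plot(rules)                                    # support/confidence scatter, colored by lift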

According to the scatter plot above, most of the rules overlap but have very good lift due to strong associations; this is also a result of the limited number of transactions and the redundancy in the rules.

The analysis showed a large amount of redundancy, but this was mostly due to the nearly nominal level of the sentiment values.  Requiring high lift, using a larger minimum support and removing redundancy help find the most valuable rules.

Using R to Create Decision Tree Classification

R is a great language for creating decision tree classifiers for a wide array of applications. A decision tree is a tree-like model in machine learning commonly used in decision analysis. The technique is commonly used in creating strategies for reaching a particular goal based on multi-dimensional datasets.

Decision trees are commonly used for applications such as determining which type of consumer is at higher risk of defaulting on a loan, which factors impact whether a company can retain customers, and which students are more at risk of dropping out and require remediation, based on school attendance, grades, family structure, etc.

Below are the typical libraries for building machine learning analyses, including decision trees, linear regression and logistic regression.

# Install packages (only needed once)
install.packages("rsample")
install.packages("caTools")
install.packages("ROSE")
install.packages("rpart")
install.packages("rpart.plot")
install.packages("yardstick")
install.packages("DescTools")
install.packages("margins")
install.packages("cutpointr")

library(tidyverse)
library(dplyr)
library(broom)
library(yardstick)
library(DescTools)
library(margins)
library(cutpointr)
library(caTools)
library(rsample)
library(ROSE)
library(rpart)
library(rpart.plot)
library(caret)

The following code block creates regression and decision tree analyses of customer churn predictions.

# Import the customer_churn.csv and explore it.
# Drop all observations with NAs (missing values)


customers <- read.csv('customer_churn.csv')
summary(customers)
str(customers)
customers$customerID <- NULL
customers$tenure <- NULL
sapply(customers, function(x) sum(is.na(x)))
customers <- customers[complete.cases(customers),]


#===================================================================



#  Build a logistic regression model to predict customer churn by using predictor variables (You determine which ones will be included).
# Calculate the Pseudo R2 for the logistic regression.


# Build a logistic regression model to predict customer churn by using predictor variables (You determine which ones will be included).

customers <- customers %>%
 mutate(Churn=if_else(Churn=="No", 0, 1))


str(customers)
         
regression1 <- glm(Churn ~ Contract + MonthlyCharges + TotalCharges + TechSupport + MultipleLines + InternetService, data=customers, family="binomial")


# Calculate the Pseudo R2 for the logistic regression.


regression1 %>%
  PseudoR2()

#  Split data into 70% train and 30% test datasets.
#  Train the same logistic regression on only "train" data.

#  Split data into 70% train and 30% test datasets.


set.seed(645)

customer_split <- initial_split(customers, prop=0.7)
train_customers <- training(customer_split)
test_customers <- testing(customer_split)

# Train the same logistic regression on only "train" data.

regression_train <- glm(Churn ~ Contract + MonthlyCharges + TotalCharges + TechSupport + MultipleLines + InternetService, data=train_customers, family="binomial")
#regression_test <- glm(Churn ~ Contract + MonthlyCharges + TotalCharges + tenure + TechSupport, data=test_customers)



#  For "test" data, make prediction using the logistic regression trained in Question 2.
#  With the cutoff of 0.5, predict a binary classification ("Yes" or "No") based on the cutoff value, 
#  Create a confusion matrix using the prediction result.

#. For "test" data, make prediction using the logistic regression trained in Question 2.

str(regression_train)
prediction <- regression_train %>%
  predict(newdata = test_customers, type = "response")
 
 


# With the cutoff of 0.5, predict a binary classification ("Yes" or "No") based on the cutoff value, 


#train_customers <- train_customers %>%
#  mutate(Churn=if_else(Churn=="0", "No", "Yes"))
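One way to finish this step is to apply the 0.5 cutoff to the predicted probabilities and build a confusion matrix with caret (a minimal sketch; the object name predicted_churn is ours, and it assumes the prediction vector created above):

predicted_churn <- if_else(prediction > 0.5, 1, 0)   # apply the 0.5 cutoff

# Confusion matrix for the logistic regression on the test data
confusionMatrix(as.factor(predicted_churn),
                as.factor(test_customers$Churn),
                positive = "1")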



The next code block creates the decision tree.


train_cust_dtree <- rpart(Churn ~ ., data=train_customers, method = "class")
rpart.plot(train_cust_dtree, cex=0.8)

To check the sensitivity and specificity of the classification tree, we create a confusion matrix and ROC charts. ROC charts are receiver operating characteristic curves, which show the diagnostic ability of a binary classifier as its discrimination threshold is varied.

set.seed(1304)

train_cust_dtree_over <- ovun.sample(Churn ~., data=train_customers, method="over", p = 0.5)$data
train_cust_dtree_under <- ovun.sample(Churn ~., data=train_customers,  method="under", p=0.5)$data
train_cust_dtree_both <- ovun.sample(Churn ~., data=train_customers, method="both", p=0.5)$data

table(train_customers$Churn)
table(train_cust_dtree_over$Churn)
table(train_cust_dtree_under$Churn)
table(train_cust_dtree_both$Churn)

train_cust_dtree_over_A <- rpart(Churn ~ ., data = train_cust_dtree_over, method="class")
rpart.plot(train_cust_dtree_over_A, cex=0.8)

customers_dtree_prob <- train_cust_dtree_over_A %>%
  predict(newdata = test_customers, type = "prob")

#  Create a confusion matrix using the prediction result.


head(customers_dtree_prob)

table(customers_dtree_prob)

customers_dtree_class <- train_cust_dtree_over_A %>%
  predict(newdata = test_customers, type = "class")

table(customers_dtree_class)

test_customers <- test_customers %>%
    mutate(.fitted = customers_dtree_prob[, 2]) %>%
    mutate(predicted_class = customers_dtree_class)


confusionMatrix(as.factor(test_customers$predicted_class), as.factor(test_customers$Churn), positive = "1")
#===================================================================


#  Based on prediction results in Question 3, draw a ROC curve and calculate AUC.


roc <- roc(test_customers, x=.fitted, class=Churn, pos_class=1, neg_class=0)

plot(roc)
auc(roc)

plot(roc) +
    geom_line(data = roc, color = "red") +
    geom_abline(slope = 1) +
    labs(title = "ROC Curve for Classification Tree")

Machine Learning with Azure ML Studio

Directions on How to Build the Predictive Model In Microsoft Azure ML

  • Sign in to Microsoft Azure using your login credentials in the  Azure portal 
  • Create a workspace for you to store your work
    • In the upper-left corner of Azure portal, select + Create a resource.
    • Use the search bar to type Machine Learning.
    • Select Machine Learning.
    • In the Machine Learning pane, select Create to begin.
    • You will provide the following information below to configure your new workspace:
      • Subscription – Select the Azure subscription that you would like to use.
      • Resource group – Create a name for your resource group which will hold resources for your Azure solution.
      • Workspace name – Create a unique name that identifies your workspace.
      • Region – select the region closest to the users to reduce latency
      • Storage account – created by default
      • Key Vault – created by default
      • Application insights – created by default
    • When you have completed configuring the workspace, select Review + Create.
    • Review the settings and make any additional changes or corrections. Lastly, select Create. When deployment of the workspace has completed, you will see the message “Your deployment is complete”.
  • To Launch your workspace, click Go to resource
  • Next, click the blue Launch Studio button, which is under Manage your Machine Learning Lifecycle. Now you are ready to begin!
  • Click on Experiments in the left panel
  • Click on NEW in the lower left corner 
  • Select Blank Experiment. The new experiment is created with a default name. You can change the name at the top of the page. 
  • Upload the data above into Ml studio
    • Drag the datasets onto the experiment canvas. (We uploaded preprocessed data.)
    • If you would like to see what the data looks like, click on the output port at the bottom of the dataset and select Visualize. Given this data, we are going to try to predict whether the IoT sensors have communication errors. 
  • Next, prepare the data
    • Remove unnecessary columns /data
      • Type “Select Columns” in the Search box and select the Select Columns in Dataset module, then drag and drop it on the canvas. This allows you to exclude any columns that you do not want in the model. 
      • Connect Select Columns in Dataset to the Data on the canvas.
    • Choose and Apply a Learning Algorithm
      • Click on Data Transformation in the left column 
        • Next, click on the drop down Manipulation 
        • Drag Edit Metadata onto the canvas (use this to change the metadata that is associated with columns inside the dataset; this tells the downstream components how to use the selected columns).
      • Split the data 
        • Then, click on the drop down Sample and Split.
        • Choose Split Data and add it to the canvas and connect it to Edit the Metadata.
        • Click on Split Data and find the Fraction of rows in the output dataset and set it to .80. You are splitting the data to train the model using 80% of the data and test the model using 20% of the data.
  • Then you train the data 
    • Choose the drop down under Machine Learning
    • Choose the drop down under Initialize Model
    • Choose the drop down under Anomaly Detection 
    • Click on PCA- Based Anomaly Detection and add this to the canvas and connect with the Split data.  
    • Choose the drop down under Machine Learning
    • Choose the drop down under Initialize Model
    • Choose the drop down under Anomaly Detection 
    • Click on One-Class Support Vector machine and add this to the canvas and connect with the Split data.  
    • Choose the drop down under Machine Learning
    • Then, choose the drop down under Train
    • Click on Tune Model Hyperparameters and add this to the canvas and connect with the Split Data.
    • Choose the drop down under Machine Learning
    • Then, choose the drop down under Train
    • Click on Train Anomaly Detection Model
  • Then score the model 
    • Choose the drop down under Machine Learning
    • Then, choose the drop down – Score
    • Click on Score Model
  • Normalize the data
    • Choose the drop down under Data Transformation
    • Then, choose the drop down under Scale and Reduce
    • Click on Normalize Data
  • Evaluate the model – this will compare the one-class SVM and PCA – based anomaly detectors.
    • Choose the drop down under Machine Learning
    • Then, choose the drop down under Evaluate
    • Click on Evaluate Model
  • Click Run at the bottom of the screen to run the experiment. Below is how the model should look. Please click on the link to use our experiment (Experiment Name: IOT Anomaly Detection) for further reference.  This link requires that you have an Azure ML account.  To access the gallery, click the following public link:  https://gallery.cortanaintelligence.com/Experiment/IOT-Anomaly-Detection

Derek Moore, Erica Davis, and Hank Galbraith, authors.

Anomaly and Intrusion Detection in IoT Networks with Enterprise Scale Endpoint Communication – Pt 2

Derek Moore, Erica Davis, and Hank Galbraith, authors.

Part two of a series of LinkedIn articles based on Cognitive Computing and Artificial Intelligence Applications

Background

Several high-profile ransomware attacks have called attention to IoT network security. Assessing security vulnerabilities and penetration testing have become increasingly important to sound design. Most of this assessment and testing takes place at the software and hardware level, but a broader approach is vital to the security of IoT networks. Protocol and traffic analysis is particularly important for structured, dedicated IoT networks, since communication and endpoints are tracked and managed. Understanding all the risks posed to these types of networks allows for a more complete risk management plan and strategy. Besides network challenges, there are challenges to scalability, operability, channels and the information being transmitted and collected by such networks. In IoT networks, looking for vulnerabilities spans the network architecture, endpoint devices and services, where services include the hardware, software and processes that make up an overall IoT architecture. Building a threat assessment or map as part of an overall security plan, and updating it on a scheduled basis, allows security professionals and stakeholders to manage all possible threats to the architecture. Whenever possible, creating simulations of possible attack vectors, understanding the behavior of such attacks and then creating models will help build an overall security management plan.

Open ports, SQL injection flaws, unencrypted services, insecure network interfaces, buffer overflow risks, lack of firewall protocols, weak authorization settings and insecure web interfaces are among the types of vulnerabilities found in IoT networks and devices.

Where is the location of an impending attack? Is it occurring at the device, server or service? Is it occurring where the data is stored or while the data is in transit? What types of attacks can be identified? They include distributed denial of service, man-in-the-middle, ransomware, botnets, spoofing, account penetration, etc.

Business Use Case

For this business use case research study, a fictional company was created. The company is a national farmland and agricultural cooperative that supplies food to local and state markets. Part of the company’s IT infrastructure is an IoT network that uses endpoint devices for monitoring and controlling temperature, humidity and moisture across the company’s large agricultural farmlands. This network has over 2,000 IoT devices in operation on 800 acres. Any intrusion into the network by a rogue service or bad actor could compromise the on-time delivery of quality fresh produce. The network design in the simulation below is a concept of this agricultural network. Our team created a simulation network using Cisco Packet Tracer, a tool which allows users to create and simulate packet traffic throughout a computerized network at multiple OSI layers.

Simulated data was generated using the Packet Tracer simulator. The simulation network below uses multiple routers, switches, servers and IoT devices, generating packets such as TCP, UDP, RIPv4 and ICMP.

Network Simulation

Below is a simulation of packet routing throughout the IoT network.

Cisco Packet Tracer Simulation for IoT network.  Packet logging to test anomaly detection deep learning models.

Problem Statement

Our fictional company will be the basis of our team’s mock network for monitoring for intrusions and anomalies. Being a simulated IoT network, it contains only a few dozen IoT-enabled sensors and devices such as sprinklers, temperature and water level sensors, and drains. Since our model will be designed for large-scale IoT deployment, it will be trained on publicly available data, while the simulated data will serve as a way to score the accuracy of the model. The simulation has the ability to generate the types of threats that would create anomalies. It is important to distinguish between an attack and a known issue or event (see part one of this research for IoT communication issues). The company is aware of those miscommunications and has open work orders for them. The goal is for our model to detect an actual attack on the IP network by a bad actor. Although a miscommunication is technically an anomaly, it is known by the IT staff and should not raise an alarm. Miscommunicating devices are fairly easy for a person to spot, but for a machine learning or deep learning model it can be a bit trickier. Raising a security alarm for daily miscommunication issues that originate from the endpoints would produce a prevalence of false positives (FP) in a machine learning confusion matrix.


A running simulation

Project Significance and Implementation

In today’s age of modern technology and the internet, it is becoming increasingly difficult to protect enterprise networks against malicious attacks. Not only are malicious actors becoming more advanced in their attack methodologies, but the number of IoT devices that live and operate in a business environment is ever increasing. It needs to be a top priority for any business to create an IT strategy that protects the company’s technical architecture and core intellectual property. When assessing all potential security weaknesses, you must decompose the network model and define trust zones within the IoT architecture.

This application was designed to use Microsoft Azure Machine Learning to analyze and detect anomalies in large data sets collected from all devices on the business’s network. In an actual implementation of this application, there would be a constant data flow running through our predictive model to classify traffic as Normal, Incorrect Setup, Distributed Denial of Service (DDOS attack), Data Type Probing, Scan Attack, or Man in the Middle. Using a supervised learning method to iteratively train our model, the application would grow increasingly more cognitive and accurate at identifying these network traffic patterns correctly. If this system were fully implemented, there would also need to be actions for each of these classification patterns. For instance, if the model detected a DDOS attack coming from a certain device, the application would automatically send shutdown commands to the device, thus isolating it from the network and containing the attack. When these actions occur, logs would be taken and notifications automatically sent to the appropriate IT administrators and management teams, so that quick and effective action could be taken.

Applications such as the one we have designed are already being used throughout the world by companies in all sectors. Crowdstrike, for instance, is a cyber technology company that produces information security applications with machine learning capabilities. Cyber technology companies such as Crowdstrike have grown ever more popular over the past few years as the number of cyber attacks has increased. We have seen firsthand how advanced these attacks can be with data breaches at the US federal government, Equifax, Facebook, and more. The need for advanced information security applications is increasing daily, not just for large companies but for small- to mid-sized companies as well. While outsourcing information security is an easy choice for some companies, others may not have the budget to afford such technology. That is where our application gives an example of the low barrier to entry that can be attained when using machine learning platforms such as Microsoft Azure ML or IBM Watson. Products such as these create relatively easy interfaces for IT security administrators to take action into their own hands and design their own anomaly detection applications. In conclusion, our IoT Network Anomaly Detection Application is an example of how a company could design and implement its own advanced cyber security defense applications. This would better enable any company to protect its network devices and intellectual property against ever-growing malicious attacks.

Methodology

For this project, our team acquired public data from Google, Kaggle and Amazon. For the IoT model, preprocessed data was selected for the anomaly detection model. Preprocessed data from the Google open data repository was collected to test and train the models. R programming in RStudio served as the initial data analysis and analytics step, used to compute the Receiver Operating Characteristic (ROC) curve and Area Under the Curve (AUC) and to evaluate the sensitivity and specificity of the models for scoring the predictability of the response variable. In R, predictability was compared between logistic regression, random forest and gradient boosting models. In the preprocessed data, a predictor (normality) variable was used for training and testing purposes. After the initial data discovery stage, the data was processed by a machine learning model in Azure ML using support vector machine and principal component analysis pipelines for anomaly detection. The response variable has the following values:

  • Normal – 0
  • Wrong Setup – 1
  • DDOS – 2
  • Scan Attack – 4
  • Man in the Middle – 5

The preprocessed dataset for intrusion detection for network-based IoT devices includes ultrasonic sensors using Arduino microcontrollers and Node MCU, a low-cost open source IoT platform that can run on the ESP8266 Wi-Fi Module used to send data.

The following table represents data from the Ethernet frame, which is part of the TCP/IP packet transmitted from a source device to a destination device for network communication.  The dataset below is preprocessed for a network-based intrusion detection system.


Source:  Google.com


In the next article, we’ll be exploring the R code and Azure ML trained anomaly detection models in greater depth.

Anomaly and Intrusion Detection in IoT Networks with Enterprise Scale Endpoint Communication

This is part one of a series of articles to be published on LinkedIn based on a classroom project for ISM 647: Cognitive Computing and Artificial Intelligence Applications taught by Dr. Hamid R. Nemati at the University of North Carolina at Greensboro Bryan School of Business and Economics.

The Internet of Things (IoT) continues to be one of the most innovative and exciting areas of technology in the last decade. IoT refers to collections of devices that collect data from the environment around them through mechanical, electrical, thermodynamic or hydrological processes. These environments could be the human body, geological areas, the atmosphere, etc. The networking of IoT devices has been prevalent in many industries for years, including the gas, oil and utilities industries. As companies demand higher sample read rates from sensors, meters and other IoT devices, and as bad actors from foreign and domestic sources become more prevalent and brazen, these networks have become vulnerable to security threats due to their increasing ubiquity and evolving role in industry. In addition, these networks are prone to read rate fluctuations that can produce false positives for anomaly and intrusion detection systems when you have enterprise-scale deployments of devices sending TCP/IP transmissions of data upstream to central office locations. This paper focuses on developing an application for anomaly detection using cognitive computing and artificial intelligence as a way to get better anomaly and intrusion detection in enterprise-scale IoT applications.

This project uses automated machine learning to develop a cognitive application that addresses possible security threats in high-volume IoT networks such as utility, smart city and manufacturing networks. These are networks that have high communication read success rates, with hundreds of thousands to millions of IoT sensors; however, they may still have issues such as:

  1. Noncommunication or missing/gap communication.
  2. Maintenance Work Orders
  3. Alarm Events (Tamper/Power outages)

In large-scale IoT networks, such interruptions are normal to business operations. Noncommunication is typically experienced because devices fail or get swapped out due to a legitimate work order. Weather events and people can also cause issues with the endpoint device itself: power outages can cause connected routers to fail, and devices can be tampered with, such as people trying to do a hardwire bypass or removing a meter.

The scope of this project is to build machine learning models that address IP-specific attacks on the IoT network, such as DDoS from within and external to the networking infrastructure. These models should be intelligent enough to distinguish network attacks (true positives) from communication issues (true negatives). Network communications typical for such an IoT network include:

  1. Short range: Wi-Fi, Zigbee, Bluetooth, Z-Wave, NFC.
  2. Long range: 2G, 3G, 4G, LTE, 5G.
  3. Protocols: IPv4/IPv6, SLIP, uIP, RLP, TCP/UDP.

Eventually, as such machine learning and deep learning models expand, these types of communications will also be monitored.

Scope of Project

This project will focus on complex IoT systems typical in multi-tier architectures within corporations. As part of the research into the analytical properties of IT systems, this project will focus primarily on the characteristics of operations that begin with the collection of data through transactions or data sensing, and end with storage in data warehouses, repositories, billing, auditing and other systems of record. Examples include:

  1. Building a simulator application in Cisco Packet Tracer for a mock IoT network.
  2. Creating a Machine Learning anomaly detection model in Azure.
  3. Generating and collecting simulated and actual TCP/IP network traffic data from open data repositories in order to train and score the team machine learning model.

Other characteristics of the IT systems that will be researched as part of this project include systems that perform the following:

  1. Collect, store, aggregate and transport large data sets
  2. Require application integration, such as web services, remote API calls, etc.
  3. Are beyond a single stack solution.

Next: Business Use Cases and IoT security

Derek Moore, Erica Davis, and Hank Galbraith, authors.

Top Data Analytics and Data Science Resources

Here are my favorite Data Science/Data Analytics Resources

 Curriculum

1) MIT Open Courseware 

A great MOOC (Massive Open Online Course) platform for learning the math and statistical fundamentals of Data Science, such as linear algebra, statistics and probability, from a great university. This will give you some of the fundamentals of data science. https://ocw.mit.edu

2) https://kaggle.com

A website that sets up data analytics and data science competitions. It also provides a lot of free data that you can use to build your skills.

3) https://Datasciencemasters.org

An open-source curriculum for learning Data Science, including foundations in theory and technologies. You can download code and use it to build projects to improve your skills.

4) https://superdatascience.com/pages/machine-learning

I really like this website because it teaches you all the fundamentals of machine learning from A to Z.

5) https://Coursera.org

A MOOC platform that can teach you anything you need to know about Data Science, Machine Learning and Data Analytics.

6) https://Udemy.com

Another good MOOC

7) https://EdX.org

Another good MOOC

8) Stanford Online (https://online.stanford.edu/)

A lot like MIT Open Courseware. Free and from a renowned university.

Programming

1) https://Anaconda.com

This is my favorite Data Science/Data Analytics platform. Python is very hot right now for Data Science, and Anaconda is a great platform for learning and programming in Python. I strongly recommend learning Python and/or R; SQL is the other popular language worth knowing.

2) scikit-learn (https://scikit-learn.org)

This is my favorite library for machine learning. It has a lot of great features for classification, anomaly detection and more.

3) https://Github.com

GitHub is a community code-hosting platform for projects in Python, C++, C#, Java, JavaScript and many other languages. You should create your own GitHub account and start being active on it. There are a lot of tutorials.

4) https://www.w3schools.com/

You can learn almost any coding language here, e.g., C++ and Python.

There are a lot of good books from Amazon to learn Python.

Datasets

My Favorite Publicly Available Datasets (also see https://rtpopendata.com/2019/02/03/my-favorite-publicly-available-datasets/)


Deep Learning, Oracle Database Performance and the Future of Autonomous Databases

“The goal is to have databases in the Cloud run autonomously.  The Cloud should be about scale, elasticity, statelessness and ease of operation and interoperability.  Cloud infrastructures are about moving processes into microservices and the agile deployment of business services.  Deep Learning has the potential to give databases an innovative and powerful level of autonomy in a multitenant environment, allowing DBAs the freedom to offer expertise in system architecture and design…”

Introduction

This article details initial research performed using deep learning algorithms to detect anomalies in Oracle performance.  It does not serve as a “deep” dive into deep learning and machine learning algorithms.  Currently, there are many really good resources available from experts on the subject matter, and I strongly recommend that those who are interested in learning more about these topics check out the list of references at the end of this article.  Mathematical terminology is used throughout this article (it’s almost impossible to avoid), but I have attempted to keep the descriptions brief, as it’s best that people interested in these topics seek out the rich resources available online to get a better breadth of information on individual subjects.

 

In this final article on Oracle performance tuning and machine learning, I will discuss the application of deep learning models to predicting performance and detecting anomalies in Oracle.  Deep Learning is a branch of Machine Learning that uses intensive Artificial Intelligence (AI) techniques to learn iteratively from data while deploying optimization and minimization functions. Applications for these techniques include natural language processing, image recognition, self-driving cars, and anomaly and fraud detection.  With the number of applications for deep learning models growing substantially in the last few years, it was only a matter of time before they would find their way into relational databases.  Relational databases have become the workhorses of the IT industry and still generate massive amounts of revenue.  Many data-driven applications still use some type of relational database, even with the growth of Hadoop and NoSQL databases.  It has been a business goal of Oracle Corporation, one of the largest relational database software companies in the world, to create database services that are easier to manage, secure and operate.

As I mentioned in my previous article, Oracle Enterprise Edition has a workload data repository that it already uses to produce great analysis of performance and workload.  Microsoft SQL Server also has a warehouse that can store performance data, but I’ve decided to devote my research to Oracle.

For this analysis, the focus was specifically on the Oracle Program Global Area (PGA).

Oracle Program Global Area

 

The Program Global Area (PGA) is a private memory region in the database that contains information for server processes.  Each user session gets a private memory region within the PGA.  Oracle reads and writes information to the PGA based on requests from server processes.  The PGA performance metrics accessed for this article are based on Oracle Automatic Shared Memory Management (ASMM).

As a DBA, when troubleshooting PGA performance, I typically look at the PGA advisor, a series of modules that collects monitoring and performance data from the PGA.  It recommends how large the PGA should be in order to fulfill process requests for private memory, based on the Cache Hit Percentage value.
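For readers who want to pull the same PGA statistics into R for analysis, a minimal sketch using DBI (assuming an existing connection con to the database, e.g. via ROracle or odbc; the v$ views queried here are the standard Oracle performance views):

library(DBI)

# Current PGA statistics
pga_stats <- dbGetQuery(con, "SELECT name, value, unit FROM v$pgastat")

# PGA advisor: estimated cache hit percentage at different PGA target sizes
pga_advice <- dbGetQuery(con, "
  SELECT pga_target_for_estimate / 1024 / 1024 AS target_mb,
         estd_pga_cache_hit_percentage,
         estd_overalloc_count
  FROM v$pga_target_advice")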

 

Methodology

 

The database was staged in a Microsoft Azure virtual machine processing large-scale data from a data generator.  Other data was compiled from public portals such as the EIA (U.S. Energy Information Administration) and PJM Interconnection, an eastern regional transmission organization.

Tools used to perform the analysis include SAS Enterprise Miner, Azure Machine Learning Studio and the scikit-learn and TensorFlow machine learning libraries.  I’ve focused my research on a few popular techniques that I continuously study.  These include:

  • Recurrent Neural Networks
  • Autoencoders
  • K-Nearest Neighbors
  • Naïve Bayes
  • Principal Component Analysis
  • Decision Trees
  • Support Vector Machines
  • Convolutional Neural Network
  • Random Forest

For this research into databases, I focused primarily on SVM, PCA and CNN. The first step was to look at variable worth (the variables that had the greatest weight on the model) for the data points per sample.

 

[Figure: variable worth of the PGA performance variables]

 

 

The analysis covers Oracle performance data on process memory within the Program Global Area (PGA) of the database.

Once the data was collected, cleaned, imputed and partitioned, Azure ML studio was used to build two types of classifiers for anomaly detection.

 

Support Vector Machine (SVM):  Implements a one-class classifier where the training data consists of examples of only one class (normal data).  The model attempts to separate the collection of training data from the origin with maximum margin.

 

Principal Component Analysis (PCA): Creates a subspace spanned by orthonormal eigenvectors associated with the top eigenvalues of the data covariance matrix, which is used to approximate normal behavior and flag deviations from it.
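Outside Azure ML, roughly equivalent detectors can be sketched in R (an illustration of the two techniques, not the exact Azure ML pipeline; x_train is assumed to be a numeric matrix of normal-only observations and x_test the data to score):

library(e1071)

# One-class SVM trained on normal data only
oc_svm   <- svm(x_train, y = NULL, type = "one-classification", nu = 0.05)
svm_flag <- predict(oc_svm, x_test)            # TRUE = normal, FALSE = anomaly

# PCA-based detector: score by reconstruction error in the top-k subspace
pca <- prcomp(x_train, center = TRUE, scale. = TRUE)
k   <- 3                                       # number of components kept (illustrative)

x_test_scaled <- scale(x_test, center = pca$center, scale = pca$scale)
scores  <- x_test_scaled %*% pca$rotation[, 1:k]
recon   <- scores %*% t(pca$rotation[, 1:k])
pca_err <- rowSums((x_test_scaled - recon)^2)  # large error = likely anomaly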

 

For prediction, I compared Artificial Neural Networks and Regression models.  For Deep Learning, I researched the use of CNN specifically for anomaly detection.

 

Deep Learning and Oracle Database Performance Tuning

My article Using Machine Learning and Data Science for Performance Tuning in Oracle discusses the use of Oracle’s Automatic Workload Repository, a data warehouse which stores snapshots of views for SQL, O/S and system state, and active session history, among many other areas of system performance.  Standard data science methods require having a strong understanding of business processes through qualitative and quantitative methods, cleaning data to find outliers and missing values, and applying data partitioning strategies to get better data validation and scoring of models.  As a final step, a review of the results would be required to determine the model’s accuracy through hypothesis testing.

 

Deep Learning has changed these methodologies a bit by applying artificial intelligence to building models.  These models learn by training iteratively as data moves through hidden layers with activation functions from input to output.  The hidden layers in this article are convolutional and are specific to spatial approximations such as convolution, pooling and fully connected layers (FCL).  This has opened many opportunities to automate a lot of the steps used in typical data science models.  If data is generated which would require interpretation by a human operator, it can now be interpreted using deep neural networks at much higher rates than a human operator could possibly achieve.

 

Deep Learning is a subset of Machine Learning which is loosely based on how neurons learn in the brain.  Neural networks have been around for decades but have only recently gained popularity in information technology for their ability to identify and classify images.  Image data has exploded with the increase in social media platforms, digital images and image data storage.  Image data, along with text data, has a multitude of applications in the real world, so there is no shortage of work being done in this area.  The latest popularity of neural networks can be attributed to AlexNet, a deep neural network that won the ImageNet classification challenge by achieving low error rates on the ImageNet dataset.

 

With anomaly detection, the idea is to train a deep learning model to detect anomalies without overfitting the data.  As the model iterates through the layers of a deep neural network, cost functions help to determine how closely it is classifying real-world data.  The model should have no prior knowledge of the processes and should be iteratively trained on the data, with the cost functions computed from the input arrays and the activation functions of previous layers [7].

 

Anomaly detection is the process of detecting outliers in data streams such as financial transactions and network traffic. For the purpose of this article, it is applied to deviations in system performance.

 

Predictive Analysis versus Anomaly Detection

Using predictive analytics to model targets through supervised learning techniques is most useful in planning for capacity and performing aggregated analysis of resource consumption and database performance.  For the model, we analyzed regression and neural network models to determine how well each one scored based on inputs from PGA metrics.

Predictive analysis requires cleansing of data, supervised and unsupervised classification, imputation and variable worth selection to create the model. Most applications can be scored well with linear or logistic regression.  In the analysis of PGA performance, I found that a logistic regression model scored better than an artificial neural network for predictive ability.

 

[Figure: comparison of the logistic regression and neural network model scores]

 

In my previous article, I mentioned the role that machine learning and data science can play in Oracle performance data.

  1. Capacity Planning and IT Asset Planning.
  2. Performance Management
  3. Business Process Analysis

The fourth application for data science and machine learning in Oracle is anomaly detection, which here means applying artificial intelligence techniques mostly used in image recognition, language processing and credit fraud detection to the training of algorithms.  It is also a potentially less efficient way of detecting performance problems in Oracle.  Chasing accuracy in the algorithm presents a risk in itself, since such models could result in the overfitting and high dimensionality that you want to avoid in deep neural networks.  Getting accuracy that is comparable to what a human operator can do works better, because you don’t want the process to overthink things.  The result of an overfitting model is a lot of false positives; you want the most accurate signs of an anomaly, not a model that is oversensitive.  Deep Learning techniques also consume intense resources to generate output in a neural network, and most business-scale applications require GPUs to build them efficiently.

Convolutional Neural Networks

 

Convolutional Neural Networks (CNN) are designed for high-dimensional data such as images and signals.  They are used for computer vision as well as network intrusion detection and anomaly detection.  Oracle performance data is plain text (ASCII) and contains many different ranges of metrics, like seconds versus bytes of memory. Using a mathematical normalization formula, the text data can be converted into vector arrays that can be mapped, pooled and compressed.  Convolutional Neural Networks are good at distinguishing features in an image matrix, and computationally it is efficient to represent images as multi-dimensional arrays.

 

The first step is to normalize the PGA data, which contains multiple scales and features.  Below is a sample of the data.

[Figure: sample of the raw PGA performance data]

 

Normalizing the data can be done with the following formula[8]:

[Image: normalization formula from reference 8]
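The formula image did not survive the export; the scaling described in [8] is, as far as I can tell, standard min-max normalization, which maps each feature x into the [0, 1] range:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$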

 

The second step is to convert this data into image format.  This requires building a multidimensional array of all the features.  Filtering the array can be done by removing small variances and nonlinear features to generate an overall neutral vector.  The goal is to normalize the data and arrange it into a multidimensional array.

 

CNNs are often used to classify the MNIST data, which is a set of handwritten digits.  It contains 60,000 training images and 10,000 testing images.  Researchers have used CNNs to get an error rate on the MNIST data of less than 1%.

 

Convolutional Neural Networks have five basic components: an input layer, convolution layers, pooling layers, fully connected layers and an output layer.  Below is a visual of how a CNN works to recognize an image of a bird versus an image of a cat.

 

[Figure: CNN layers classifying an image of a bird versus an image of a cat]
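As an illustration of those five components in code, a sketch with the keras package for R (the 28×28 input shape and layer sizes are arbitrary examples, not a tuned architecture for the PGA data):

library(keras)

model <- keras_model_sequential() %>%
  # input + convolution layer
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  # pooling layer
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  # fully connected layer
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  # output layer
  layer_dense(units = 2, activation = "softmax")

model %>% compile(optimizer = "adam",
                  loss = "categorical_crossentropy",
                  metrics = "accuracy")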

The activation function used is the popular rectified linear unit (ReLU), which is typical for CNNs.  Other popular activation functions include the logistic sigmoid and the hyperbolic tangent.  ReLU is defined as linear (y = x) for positive values and zero (y = 0) for negative values.  It’s a great activation function for CNNs due to its simplicity and because it reduces the time it takes to iterate in the neural network.
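Written out, the ReLU activation is simply:

$$\mathrm{ReLU}(x) = \max(0, x)$$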

 

 

 

Comparing Support Vector Machines (SVM) and Principal Component Analysis (PCA)

Support Vector Machines (SVM) are good for finding large-margin classifications and identifying vectors of data that are related.  The nice thing about SVM is that it has features for dealing with outliers built into it. An SVM is a feature-rich supervised machine learning technique used for classification of observations by their coordinates.  I compared the SVM with principal component analysis (PCA) as an approximation.  PCA creates subspaces spanned by orthonormal eigenvectors associated with the top eigenvalues of the data covariance matrix.  PCA-based methods help to remove the redundancy and reduce the dimensionality that is persistent in performance data.  Once the data was split into training and testing sets, we used SVM and PCA to optimize multiple dimensions in the data.

 

 

Evaluation of Machine Learning Models for Oracle Workloads

For this test, we compared regression models and artificial neural networks (ANN).  Deep learning of patterns concerned with anomalies within a database requires AI-style learning techniques.  Finding the correct classifier for performance metrics to improve the accuracy of an Oracle anomaly detection system can involve ANNs, naive Bayes, k-nearest neighbors and other algorithms.

 

There are several classification methods that can be used when evaluating anomaly detection models

 

  • ROC curve
  • Area under the ROC curve (AUC)
  • Precision-Recall Curve
  • Mean average precision (mAP)
  • Accuracy of classification

 

Below is a ROC chart used to score the PCA and SVM models.  ROC charts plot false positive rates against true positive rates.  When comparing the PCA and the SVM models, PCA had a higher true positive rate.

[Figure: ROC curves comparing the PCA and SVM models]

Summary:  The Future of Autonomous Databases

Oracle has released its first deep learning database, marketed as “The world’s first self-driving database”.  Oracle has announced 18c as a new autonomous database that requires no human labor for daily operational tasks, can provide more security, and automates most database processes.  The database will self-tune, self-upgrade and self-patch – all while maintaining 99.995% availability with machine learning.  For many companies, especially those working on cloud and PaaS infrastructures, this will mean lower costs.  With Exadata, this would include compression techniques that would add further benefits to very large and enterprise-level workloads.

 

Will there be more databases that will be completely run by Artificial Intelligence and Deep Learning algorithms?  As a DBA, maintaining a database can be arduous, but many of my DBA colleagues enjoy the respect and prestige of database management and database tuning.  With the role of the DBA evolving rapidly, autonomous databases may provide the freedom for DBAs to contribute database design and development expertise to corporate teams.

 

It remains to be seen whether databases as a service (DBaaS) will reach the reality of full autonomy.  It’s bound to happen before automobiles become level 5 autonomous.  Selecting the service on this platform could require only minimal configuration – and you’re done.  Everything else is taken care of.  There would be no operator, either in the hosted environment or on premises, nor would anyone ever touch the database for any reason except application and software development.

 

In summary, this is a very high-level article on techniques for using deep learning and machine learning on Oracle performance data.  I hope that this cursory introduction will inspire DBAs and operators to do their own research and apply it to their toolbox.

 

References

 

1. http://deeplearning.net/reading-list/
2. https://www.analyticsvidhya.com/
3. http://www.kdnuggets.com/
4. http://www.ieee.org/
5. https://www.computer.org/
6. https://www.udacity.com/course/deep-learning-nanodegree--nd101
7. https://www.fast.ai
8. Wu, Kehe; Chen, Zuge; Li, Wei. “A Novel Intrusion Detection Model for Massive Network Using Convolutional Neural Networks.” IEEE Access, received July 29, 2018.
9. Naseer, Sheraz; Saleem, Yasir; Khalid, Shezad; Bashir, Muhammad Khawar; Han, Jihun; Iqbal, Muhammad Munwar; Han, Kijun. “Enhanced Network Anomaly Detection Based on Deep Neural Networks.” IEEE Access, received June 3, 2018; accepted July 16, 2018.
10. https://www.pyimagesearch.com, Dr. Adrian Rosebrock
11. U.S. Energy Information Administration. https://www.eia.gov/
12. PJM Interconnection. https://www.pjm.com/markets-and-operations.aspx
13. Oracle Corporation. https://www.oracle.com/index.html

My Favorite Publicly Available Datasets

I’ve been working with data for decades, searching for insights, converting it, managing it, and now performing data analytics. We have access to unbelievable treasure troves of public data to analyze.  Many of the blogs I write are based on these datasets, as I don’t have access to large computing systems.  Here is a list of my favorite publicly available datasets.  Enjoy!

  1. PJM Interconnection Data Dictionary for electrical grids, distribution and transmission.  https://www.pjm.com/markets-and-operations/data-dictionary.aspx
  2. University of California Irvine (UCI) has a huge machine learning repository to practice techniques.  This repository can be accessed at archive.ics.uci.edu/ml/index.php
  3. Amazon Web Services datasets are available to the public.  https://aws.amazon.com/datasets/.
  4. Kaggle is a data science competition website that rewards prizes to teams for the best ML models. Datasets are located at https://www.kaggle.com/datasets
  5. University of Michigan Sentiment Data.
  6. The time series data repositories are located at  https://fred.stlouisfed.org/categories.
  7. Canadian Institute of Cyber Security. https://www.unb.ca/cic/datasets/nsl.html.
  8. Datasets for “The Elements of Statistical Learning”.  https://web.stanford.edu/~hastie/ElemStatLearn/.
  9. Government Open Data Portal.  https://data.gov