To better help you understand how AI systems like SunView Willow AI™ work, we will give you a behind-the-scenes look at the way AI systems are constructed. What follows is a high-level breakdown of the components of an AI system, an explanation of how the AI system learns, and how that learning is implemented.
Machine Learning (ML) is a branch of Artificial Intelligence (AI) that involves building software applications that learn from data and improve their accuracy over time without being explicitly reprogrammed.
Machine learning algorithms are snippets of code that help people explore, analyze, and find meaning in complex datasets. Each algorithm contains a set of step-by-step instructions that a computer can follow to achieve a specific goal. In a machine learning model, the goal is to establish or discover patterns that people can use to make predictions or categorize information. Machine learning algorithms use parameters that are learned by training on a subset of data that represents a larger dataset. As the training data expands to represent the “world” more realistically, the algorithm calculates more accurate results. Artificial Intelligence is like a child with an appetite for learning. The more you teach it, the smarter it becomes.
The “learning” part of machine learning refers to a process in which computers review existing data and learn new skills and knowledge from that data. Machine learning systems use algorithms to find patterns in datasets, which may include text, numbers, and even rich media files such as audio clips, images, and videos. Machine learning algorithms are computationally intensive, requiring specialized hardware and software to operate on a large scale. Different algorithms analyze data in different ways, so they are typically grouped by the machine learning technique that they use: supervised learning, unsupervised learning, or reinforcement learning.
Supervised learning is used when you know, in advance, what you want to teach a machine. Suppose, for example, you provide the machine with a dataset that includes employee names along with all the support requests that they have submitted over the past year. You want to determine (predict) what types of requests are most likely to be submitted over the next month and by whom. The answer will be delivered to you using labels that already exist in the dataset (e.g., employee name, ticket type, and date). This technique typically requires exposing the algorithm to a very large set of training data, allowing you to examine the model's output and adjust the parameters until you achieve the desired results. You can then test the machine by letting it make predictions for a “validation dataset,” which consists of new data that the computer has never seen before. Common supervised learning tasks typically implement prediction, regression, and classification AI models, which are explained later in this newsletter.
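To make this concrete, here is a minimal supervised-learning sketch in Python: a nearest-neighbor classifier is "trained" on a handful of labeled tickets, then checked against a small validation set it has never seen. The features, labels, and numbers are invented for illustration and are far smaller than any real training set.

```python
# A minimal supervised-learning sketch: learn from labeled tickets,
# then measure accuracy on a held-out "validation dataset".
# All features and labels below are invented for illustration.

def distance(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbor_predict(training, features):
    """Return the label of the closest labeled training example (1-NN)."""
    return min(training, key=lambda ex: distance(ex[0], features))[1]

# (hours_since_last_request, requests_last_month) -> ticket type
training = [
    ((2, 9), "password-reset"),
    ((3, 8), "password-reset"),
    ((40, 1), "hardware"),
    ((50, 2), "hardware"),
]

# New data the "machine" has never seen before
validation = [
    ((4, 7), "password-reset"),
    ((45, 1), "hardware"),
]

correct = sum(
    nearest_neighbor_predict(training, f) == label for f, label in validation
)
print(f"validation accuracy: {correct / len(validation):.0%}")
```

In practice the training set would contain thousands of examples and many more features, but the workflow is the same: fit on labeled data, then measure accuracy on data held back from training.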
Unsupervised learning enables a machine to explore a set of data. After the initial exploration, the machine tries to identify hidden patterns that connect different variables. This type of learning can help turn data into groups, based only on statistical properties. Unsupervised learning does not require labeled training data, so it is often faster and easier to deploy than a supervised learning model.
In unsupervised learning, the data points are not labeled because the algorithm labels them for you by organizing the data or describing its structure. This technique is useful when you do not know what the outcome is supposed to look like. Suppose, for example, you provide employee data and want to create segments of employees who have reported similar computer issues. Since the data that you are providing is not labeled, the labels in the outcome are computer-generated based on the similarities that are discovered between data points.
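The idea can be sketched with a tiny k-means clustering example: given only unlabeled numbers (a hypothetical "issues reported per month" figure for each employee), the algorithm discovers two segments on its own. The data and the two-cluster choice are ours, purely for illustration.

```python
# A bare-bones k-means sketch (k = 2) on unlabeled 1-D data.
# The algorithm creates the groups itself -- no labels are provided.

def kmeans_1d(points, iters=10):
    """Cluster 1-D points into two groups (k = 2)."""
    centers = [min(points), max(points)]  # deterministic starting centers
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            # assign each point to its nearest center
            idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[idx].append(p)
        # move each center to the mean of its group
        # (assumes both groups stay non-empty, which holds for this data)
        centers = [sum(g) / len(g) for g in groups]
    return groups, centers

# Hypothetical "issues reported per month" per employee, no labels
issues = [1, 2, 2, 3, 14, 15, 16, 17]
groups, centers = kmeans_1d(issues)
print(groups)
```

The two segments that emerge (employees with few issues versus many) were never named anywhere in the input; the grouping comes entirely from the statistical structure of the data.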
Reinforcement learning enables a computer to interact with an environment by using algorithms that learn from outcomes and decide what action to take next. After each action, the algorithm receives feedback that helps it determine whether the choice it made was correct, neutral, or incorrect. This is a good technique to use for automated systems that need to make many small decisions without human guidance. Suppose, for example, you want to ensure that the performance of a server stays within a specific set of parameters such as CPU and memory utilization, free disk space, process and thread counts, and network performance. As the AI model gains experience monitoring various servers in your organization and compiles a history of reinforcement, it learns what needs to be done to keep these servers running at optimal levels.
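A toy version of that feedback loop can be written in a few lines: the agent observes a server state, tries an action, receives a reward of +1 or -1, and nudges its value estimates accordingly (a bare-bones Q-learning-style update). The states, actions, and reward rule here are invented for illustration.

```python
import random

# A toy reinforcement-learning loop in the spirit of the server example.
# The environment and reward rule are invented for illustration.

def reward(action, cpu_load):
    """+1 if the action suits the current CPU load, -1 otherwise."""
    if cpu_load > 80 and action == "scale_up":
        return 1
    if cpu_load < 30 and action == "scale_down":
        return 1
    if 30 <= cpu_load <= 80 and action == "hold":
        return 1
    return -1

q = {}  # (state, action) -> learned value estimate
actions = ["scale_up", "scale_down", "hold"]
random.seed(0)
for step in range(2000):
    cpu = random.randint(0, 100)
    state = "high" if cpu > 80 else "low" if cpu < 30 else "ok"
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.2:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q.get((state, a), 0.0))
    r = reward(action, cpu)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + 0.1 * (r - old)  # move estimate toward reward

best = {s: max(actions, key=lambda a: q.get((s, a), 0.0))
        for s in ("high", "low", "ok")}
print(best)
```

After enough trials, the highest-valued action in each state converges to the sensible choice: scale up under high load, scale down when idle, and hold otherwise, learned entirely from reward feedback rather than explicit rules.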
A machine learning algorithm is a procedure that is executed using a dataset to create a machine learning “model.” Machine learning algorithms perform pattern recognition, which allows these algorithms to be “fitted” onto a dataset as they learn from that data. There are many kinds of machine learning algorithms that are used for many different applications. For example, there are algorithms for classification, such as k-Nearest Neighbors; algorithms for regression, such as Linear Regression; and algorithms for clustering, such as k-Means.
In the realm of AI, there are hundreds of algorithms that software developers can choose from—but the trick is to choose the one that best addresses the challenges that your company faces. In some cases, the best approach is to deploy several algorithms simultaneously. This approach, known as ensemble modeling, yields better predictive performance than can be obtained from any one of the individual learning algorithms working alone.
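In miniature, ensemble modeling looks like this: three deliberately weak "models" (here just keyword rules standing in for real trained algorithms) each cast a vote, and the majority label wins. Any single rule is easy to fool, but the combination is harder to fool.

```python
# A miniature ensemble: three stand-in "models" vote on a ticket's
# category and the majority wins. Real ensembles combine trained
# algorithms; these keyword rules are invented for illustration.
from collections import Counter

def model_a(ticket): return "hardware" if "laptop" in ticket else "software"
def model_b(ticket): return "hardware" if "screen" in ticket else "software"
def model_c(ticket): return "software"  # always wrong for hardware tickets

def ensemble_predict(ticket):
    votes = [m(ticket) for m in (model_a, model_b, model_c)]
    return Counter(votes).most_common(1)[0][0]

print(ensemble_predict("my laptop screen is cracked"))
print(ensemble_predict("excel keeps crashing"))
```

Even though model_c misclassifies every hardware ticket, the majority vote still labels "my laptop screen is cracked" as hardware, which is the intuition behind ensembles outperforming their individual members.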
A machine learning model is the output of a machine learning algorithm that was run against a specific dataset; it represents the knowledge that was gained (i.e., learned) by that algorithm. Saved after training completes, the model contains the rules, numbers, and any other algorithm-specific data structures required to make predictions. An easy way to understand a machine learning model is to think of it as a computer program that comprises both data and a procedure for using that data to make a prediction. Ideally, a model should also reveal the rationale (via a written description) behind its decision to help interpret the decision process. The following example describes a machine learning model that uses a linear regression algorithm:
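As a sketch of that idea, the snippet below fits a least-squares line to invented data (say, open tickets versus hours needed to clear the queue). The learned "data" is just two numbers, a slope and an intercept, and the "procedure" is the line equation used to make predictions.

```python
# A linear regression "model" as data plus a procedure:
# the data is (slope, intercept); the procedure is y = slope * x + intercept.
# The training pairs below are invented for illustration.

def fit_linear(xs, ys):
    """Least-squares fit; returns the learned (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """The model's procedure: apply the learned line to new input."""
    slope, intercept = model
    return slope * x + intercept

# open tickets vs. hours to clear the queue (invented)
xs, ys = [1, 2, 3, 4], [2.0, 4.1, 5.9, 8.0]
model = fit_linear(xs, ys)
print(model, predict(model, 5))
```

Everything the algorithm learned is captured in those two saved numbers; discard the training data and the model can still make predictions, which is exactly what "the model is the output that is saved" means.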
Matching the right AI model with the right data is one of the biggest challenges that an organization may encounter during its digital transformation process. Selecting the right model not only requires a thorough understanding of what your organization wants to accomplish, but the process also involves balancing requirements like model performance, accuracy, interpretability, and computing power among other factors. You also need to have the right kind of data to use certain AI models.
Luckily, you do not have to perform any of these tasks manually! Selecting the right AI model is where SunView Software outshines the competition. Our Advanced Intelligence Pack contains the “perfect blend” (a proprietary combination) of Text Similarity, Chatbot, Regression, Predictive, and Classification AI models that can be applied to a variety of business types and industries. The most commonly used models employ regression and classification to predict target categories and values, find unusual data points, and discover similarities.
Text Similarity is an algorithm that enables an AI application to understand semantically similar queries (i.e., different questions that mean the same thing) from users and provide uniform responses. The purpose of this algorithm is not only to improve the quality of the responses made by an artificial intelligence, such as SunView Willow AI™, but also to make the Q&A interactions between the human and computer feel more natural.
Suppose, for example, a user asks, “What kind of laptop can I order?” or “What are my choices for a laptop?” The user should expect (and receive) the same response, regardless of how many ways the question is phrased. This emphasis on semantic similarity strives to create a system that recognizes language and word patterns so it can generate responses that sound like they belong in a “normal” human conversation. This is accomplished through a series of interconnected systems and processes that together create an AI application that understands language patterns.
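One classic building block for this is cosine similarity over word counts. The sketch below is a deliberate simplification (it measures word overlap rather than true semantics, and production systems use far richer representations), but it shows how two differently phrased laptop questions score as near-duplicates while an unrelated request does not.

```python
# Cosine similarity over bag-of-words vectors: a simplified stand-in
# for the text-similarity idea. Real systems use semantic embeddings.
import math
from collections import Counter

def cosine_similarity(a, b):
    """Similarity of two texts based on shared word counts (0.0 to 1.0)."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm_a = math.sqrt(sum(v * v for v in wa.values()))
    norm_b = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (norm_a * norm_b)

q1 = "what kind of laptop can i order"
q2 = "what laptop can i order today"
q3 = "reset my email password"

print(cosine_similarity(q1, q2))  # high: same question, different phrasing
print(cosine_similarity(q1, q3))  # zero: no words in common
```

A system using this signal can route both laptop questions to the same uniform answer, which is the behavior the text describes.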
To generate authentic and natural response patterns, SunView Willow AI™ uses a specialized system architecture that can recognize text and predict sentences. Our advanced AI platform performs tasks such as custom text classification, paraphrase detection, and clustering—all of which contribute to the high-quality results you see in our Self-Service Portal.
Natural Language Processing (NLP) is used to split the user input into sentences and words. NLP also standardizes the text using a series of techniques that convert everything to lowercase, correct spelling mistakes, determine if a word is an adjective or verb, and perform many more tasks. Other factors, such as sentiment, are also considered during this stage of AI analysis.
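The splitting and standardizing steps can be sketched in a few lines of Python. Real NLP pipelines add spell correction, part-of-speech tagging, sentiment scoring, and much more; this fragment shows only the lowercasing and tokenization idea.

```python
# A simplified text-normalization pass: split input into words,
# lowercase them, and strip surrounding punctuation. Real NLP
# pipelines layer many more steps on top of this.
import string

def normalize(text):
    """Return a list of lowercased words with edge punctuation removed."""
    words = text.lower().split()
    stripped = (w.strip(string.punctuation) for w in words)
    return [w for w in stripped if w]  # drop tokens that were all punctuation

print(normalize("My laptop WON'T boot!!"))
```

Note that internal apostrophes survive ("won't") while trailing punctuation is removed ("boot!!" becomes "boot"), giving downstream algorithms a consistent view of the text.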
Also called conversational AI bots, AI chatbots, AI assistants, virtual assistants, digital assistants, virtual agents, and conversational agents among other names, chatbots are growing in popularity. But just as chatbots are known by many different names, they also have varying degrees of intelligence and are used in a wide variety of applications.
On a basic level, a chatbot is a computer program that allows humans to interact with technology using various input methods such as voice and text, while providing users with 24/7/365 access to information. If someone calls your service desk and leaves a voicemail describing an issue, our Smart Voice technology turns the voice input into text. The chatbot then analyzes the text from that message, considers the best response, and delivers that response back to the end-user via text message.
Natural Language Understanding (NLU) helps the chatbot understand what the user said using programming language objects such as lexicons, synonyms, and themes. These objects are then used in conjunction with algorithms to construct dialogue flows that tell the chatbot how to respond. Delivering a meaningful, personalized experience beyond pre-scripted responses requires NLP. This enables the chatbot to interrogate data repositories, including integrated knowledge bases, your CMDB, and other back-end systems and use that information to generate an appropriate response. Conversational AI technology takes NLP and NLU to the next level. It allows companies to create advanced dialogue systems that utilize historical data, personal preferences, and contextual understanding to deliver a realistic and engaging natural language interface.
Regression analysis is a statistical modeling technique used for predicting the occurrence of an event or the value of a continuous variable (i.e., the dependent variable), based on the value of one or more independent variables. Suppose, for example, you decide to drive to a faraway city (dependent variable). There are several factors that affect the amount of time it will take you to reach your destination: the start time, distance, real-time traffic conditions, construction activities on the road, and weather conditions. All these factors have the potential to impact the actual time it will take you to reach your destination, but some factors will have a greater impact than others on the value of the dependent variable. Using regression analysis, a computer can mathematically sort out which variables will impact the outcome and by how much. This analysis helps you understand which factors matter most, which factors have little or no impact, and how these factors relate to each other.
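The travel-time example can be sketched with a small gradient-descent regression. The trips below are synthetic, generated from known weights, so we can watch the algorithm recover how much each factor (distance and a traffic index) contributes to the total minutes.

```python
# Multi-variable regression via gradient descent for the travel-time
# example. The trip data is synthetic, generated from known weights,
# so the fitted coefficients should recover those weights.

def fit(trips, lr=0.02, epochs=5000):
    """Learn weights for distance and traffic plus a base travel time."""
    w_dist = w_traffic = bias = 0.0
    for _ in range(epochs):
        for dist, traffic, minutes in trips:
            pred = w_dist * dist + w_traffic * traffic + bias
            err = pred - minutes
            # nudge each weight against its contribution to the error
            w_dist -= lr * err * dist
            w_traffic -= lr * err * traffic
            bias -= lr * err
    return w_dist, w_traffic, bias

# (distance in tens of km, traffic index 1-5, observed minutes);
# distance is scaled so both features have similar magnitudes,
# which helps gradient descent converge. Minutes were generated
# as 12 * distance + 5 * traffic + 10.
trips = [(1.0, 1, 27), (2.0, 3, 49), (3.0, 2, 56), (4.0, 5, 83), (1.5, 4, 48)]
w_dist, w_traffic, bias = fit(trips)
print(w_dist, w_traffic, bias)
```

The learned coefficients answer the question the text poses: each weight states exactly how much one unit of that factor moves the predicted travel time, separating the factors that matter most from those that matter little.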
Regression analysis is also a fundamental concept in machine learning. It falls under supervised learning where the algorithm is trained using input features and output labels, which helps establish a relationship among the variables by estimating how one variable affects the others. In the context of machine learning and ITSM, regression specifically refers to the estimation of a continuous dependent variable or response from a list of input variables. The regression model used by ChangeGear ITSM with Willow AI™ can help identify problems and events that happen due to fluctuating measurements, such as capacity issues that arise when server traffic increases.
There are a variety of regression techniques, ranging from the simplest (e.g., Linear and Logistic Regression), to more sophisticated classic statistical regression models (e.g., Ridge, Lasso, Elastic Net), to even more complex techniques such as neural networks. However, neural networks are reducible to regression models, which means that a neural network can “pretend” to be any type of regression model that ChangeGear wants it to be.
Predictive analytics makes forecasts about future outcomes using historical data that is combined with statistical modeling, data mining, and machine learning. Basically, you can analyze past data to identify trends that can help you make informed business decisions. While machine learning and predictive analytics were once viewed as two entirely different and unrelated concepts, they are now intertwined. Today, the field of predictive analytics makes extensive use of machine learning for data modeling due to its ability to accurately process large amounts of data and recognize patterns.
ChangeGear ITSM with Willow AI™ uses historical data to build a mathematical model that can capture important trends. The predictive analytics model is then applied to current data to project what will happen next, or to suggest actions to take for optimal outcomes. ChangeGear can predict future trends such as the number of tickets that will be opened, closed, or assigned on certain days. IntellAssign ensures that tickets are assigned to the right team and staff members, maximizing the efficiency of your operations.
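In its simplest form, such a projection can be a trailing average of recent history. The sketch below is purely illustrative (the daily counts are invented, and real predictive models account for trend, seasonality, and much more), but it shows the shape of the idea: learn from the past, project the next value.

```python
# A bare-bones forecast in the spirit of predictive analytics:
# project tomorrow's ticket count from a trailing average of recent
# days. The daily counts are invented for illustration.

def forecast_next(history, window=3):
    """Predict the next value as the mean of the last `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

daily_tickets = [42, 45, 44, 50, 48, 52]  # tickets opened per day
print(forecast_next(daily_tickets))
```

Swapping this trailing average for a trained regression or time-series model is what turns a naive projection into genuine predictive analytics, but the interface stays the same: historical data in, forecast out.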
Based on prior history and outcomes, your organization can gain deeper insight into trends and patterns regarding your employees, customers, and competitors. You can also mitigate risks and predict success by capturing and analyzing current data from multiple sources including emails, files, instant messages, relational databases, and collaboration tools like Microsoft Teams and Slack.
When an issue or support ticket comes into your service desk, the first step is to review and assign it to a category so it can be routed to the correct team member. This process involves reading the ticket, so your support technicians know which category to choose. Unfortunately, manual classification systems are often complicated and cluttered with too many categories to choose from. After spending endless hours reading through tickets, support agents often end up assigning tickets to the “other” category to sort them faster and avoid spending precious time searching for the correct category.
Ticket classification with machine learning avoids this problem by using predictive analytics to automatically assign data to preset categories. Instead of relying on people to interpret the content and categorize it correctly, automatic ticket classification uses NLP, which helps the computer parse, understand, and efficiently generate human language responses.
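A miniature version of automatic ticket classification: count word frequencies per category on a few labeled training tickets, then score new tickets against each category (with add-one smoothing so unseen words do not zero out a score). The tickets and categories are invented, and real classifiers train on far more data with a full NLP pipeline.

```python
# A tiny word-frequency ticket classifier. Training tickets and
# categories are invented; real systems use full NLP pipelines
# and much larger training sets.
from collections import Counter, defaultdict

def train(tickets):
    """Count word frequencies per category from labeled tickets."""
    counts = defaultdict(Counter)
    for text, category in tickets:
        counts[category].update(text.lower().split())
    return counts

def classify(counts, text):
    """Assign the category whose word profile best matches the ticket."""
    words = text.lower().split()
    def score(cat):
        total = sum(counts[cat].values())
        # add-one smoothing so unseen words don't zero out a category
        return sum((counts[cat][w] + 1) / (total + 1) for w in words)
    return max(counts, key=score)

training = [
    ("printer is jammed again", "hardware"),
    ("laptop screen flickers", "hardware"),
    ("cannot log into email", "account"),
    ("password reset needed for email", "account"),
]
counts = train(training)
print(classify(counts, "email password expired"))
```

A ticket never seen during training still lands in the right category because its words overlap the learned profile, which is exactly what spares agents from hand-sorting tickets into the "other" bucket.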
Ticket classification by sentiment is another way that tickets can be automatically sorted. The Sentiment Analysis model, included in ChangeGear’s Advanced Intelligence Pack, allows you to classify the sentiment (i.e., polarity) of each ticket as positive, negative, or neutral. In the same way that IntellAssign prioritizes tickets based on expressions of urgency, our Sentiment Analysis model can be used to prioritize issues based on expressions that indicate negativity. Classification algorithms are trained on input data, which can be used to answer questions like, “What is the sentiment (positive, negative, or neutral) of a given sentence?”
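The positive/negative/neutral output can be illustrated with a crude lexicon-based scorer. Production sentiment models are trained classifiers rather than hand-picked word lists, so treat this purely as a sketch of the three-way polarity decision; the word lists and tickets are invented.

```python
# A lexicon-based sketch of sentiment polarity: compare counts of
# positive and negative words. The word lists are invented stand-ins
# for a trained sentiment classifier.
POSITIVE = {"great", "thanks", "resolved", "happy", "working"}
NEGATIVE = {"broken", "angry", "unacceptable", "failed", "slow"}

def polarity(text):
    """Classify a ticket's sentiment as positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("this is unacceptable my laptop is broken"))
```

A prioritization rule can then bump negatively scored tickets to the front of the queue, mirroring how the Sentiment Analysis model flags expressions of negativity.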
ChangeGear, powered by SunView Willow AI™, accurately represents the next generation of ITSM because it leverages the perfect blend of AI Models that target specific problems across every department in an organization. ITSM has a tremendous potential to benefit from AI as service desk technicians perform a wider variety of transactional tasks each day.
However, it is important to have a clear understanding of the problems that you want to solve (and obtain support for solving them) before throwing a bunch of AI models against the wall and waiting to see which ones stick. The concept of AI must be embraced, with buy-in at all levels, for its potential impact on ITSM to be fully realized. Without that buy-in, the benefits that artificial intelligence brings will remain unrealized potential.