Artificial Intelligence (AI) is the theory and development of computer systems able to perform tasks that have traditionally required human intelligence. AI is a vast field, of which machine learning is a subdomain. Machine learning can be described as a method of designing sequences of actions to solve a problem, known as algorithms, which automatically optimise through experience with limited or no human arbitration. These methods can be used to find patterns in large sets of data (big data analytics) from increasingly diverse and innovative sources. The figure below provides an overview.

From the very beginning of interest in the 1950s, ever smaller subsets of artificial intelligence (first machine learning, then deep learning, a subset of machine learning) have created ever larger disruptions.
The simplest analogy for their connection is to visualize them as concentric circles, with AI (the idea that came first) as the largest, followed by machine learning (which flourished later), and finally deep learning (which is driving the current AI expansion) fitting inside both.

Machine learning, quite plainly, is the use of algorithms to parse data, learn from it, and then make a deduction or forecast about something in the world. Rather than manually coding software with a defined sequence of commands to achieve a particular goal, the system is "trained" using large sets of data and specific algorithms that give it the capability to learn how to execute the assignment.
Machine learning is the brainchild of the early AI community, and the algorithmic undertakings over the decades have included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among other techniques. Machine learning has several categories, classified by the amount of human guidance needed to label the input training data. These categories are:

Supervised Learning: The algorithm is provided a training data set in which some segments carry labels. For example, some data points in a data set of financial transactions may have labels identifying those that are fraudulent versus those that are genuine. Over the course of training, the algorithm 'learns' a high-level method of classification, which it uses to forecast the labels for the remaining entries in the data set.

Unsupervised Learning: In this category, the input data fed to the algorithm has no labels.
The algorithm is asked to detect patterns in the data by identifying groups of observations with comparable underlying features. For instance, an unsupervised machine learning algorithm could be set to look for securities with features comparable to an illiquid security that is difficult to value. If it finds a cluster similar to the illiquid security, valuations of the other securities in that cluster can help estimate the value of the illiquid security.

Reinforcement Learning: This technique sits somewhere between supervised and unsupervised learning. The algorithm is given an unlabelled data set, chooses an action for each data point, and receives feedback (possibly from a human) that helps it learn. Reinforcement learning is particularly useful in self-driving cars, game theory, and robotics.

Deep Learning: This is a type of machine learning that employs algorithms functioning in 'levels', influenced by the structure and function of the human brain. Deep learning algorithms and their structures, called artificial neural networks, can be used for supervised, unsupervised, or reinforcement learning.
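To make the supervised case concrete, here is a toy sketch of "learning" a classification rule from labelled transactions and applying it to unlabelled ones. The data, the amounts, and the single-threshold rule are all invented for illustration; real systems learn far richer decision functions.

```python
# Toy supervised learning: fit a fraud-score threshold from labelled
# transactions, then predict labels for unlabelled ones.
# All data and the decision rule are invented for this sketch.

def fit_threshold(labelled):
    """Pick the midpoint between the largest genuine amount and the
    smallest fraudulent amount seen in the labelled training data."""
    genuine = [amt for amt, label in labelled if label == "genuine"]
    fraud = [amt for amt, label in labelled if label == "fraud"]
    return (max(genuine) + min(fraud)) / 2

def predict(threshold, amounts):
    """Label each unlabelled amount using the learned threshold."""
    return ["fraud" if amt > threshold else "genuine" for amt in amounts]

# Labelled segment of the data set: (amount, label) pairs.
training = [(20, "genuine"), (35, "genuine"), (50, "genuine"),
            (900, "fraud"), (1200, "fraud")]

threshold = fit_threshold(training)
labels = predict(threshold, [30, 1000, 60])
```

The "training" step here is trivial, but the shape is the same as in practice: the labelled segment drives the choice of decision rule, and that rule then labels the rest of the data.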
Neural network algorithms were created decades ago. Deep learning algorithms expand on neural networks by working with many levels, which helps solve complex problems at a higher degree of abstraction. We also now have large amounts of data and enhanced processing capability (through GPUs).
All of this propels the use of deep learning algorithms. While neural networks have long been used as non-linear classifiers, the real potential of deep learning lies in automatic feature engineering, which requires extensive labelled data. There is therefore a case for employing AI/deep learning side by side with other machine learning techniques for fraud detection.

Fraud prevention is a kind of anomaly detection: its aim is to detect transactions that do not behave in the normal manner, i.e. anomalies.
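The "levels" that deep learning stacks can be sketched as repeated layer transformations: each level takes the previous level's output, forms weighted sums, and applies a non-linearity. The weights below are arbitrary placeholders chosen only to show the layered structure, not a trained model.

```python
# Minimal sketch of a two-level ("deep") forward pass in pure Python.
# Weights and biases are arbitrary placeholders, not a trained model.

def relu(x):
    """A common non-linearity: pass positive values, zero out the rest."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One level: weighted sums of the inputs followed by a ReLU."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Two stacked levels: 3 inputs -> 2 hidden units -> 1 output.
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0]]
b2 = [0.2]

x = [1.0, 2.0, 3.0]
hidden = layer(x, w1, b1)        # first level of abstraction
output = layer(hidden, w2, b2)   # second level builds on the first
```

Training would adjust `w1`, `b1`, `w2`, `b2` from data; stacking more such levels is what gives deep networks their higher degrees of abstraction.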
A variety of traditional machine learning techniques, such as logistic regression, decision trees, random forests, neural networks, and clustering, can be employed to identify anomalies.
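As a toy illustration of the anomaly-detection framing, the sketch below flags transactions whose amount deviates sharply from the rest. The z-score rule, the cutoff `k`, and the data are invented for illustration; it stands in for any of the techniques above, not a specific library's method.

```python
import statistics

# Toy anomaly detection: flag transactions whose amount lies more than
# k standard deviations from the mean. The data and the cutoff k are
# invented for this sketch.

def find_anomalies(amounts, k=2.0):
    mean = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts)
    return [a for a in amounts if abs(a - mean) > k * sd]

transactions = [20, 22, 19, 25, 21, 23, 500]  # one obvious outlier
anomalies = find_anomalies(transactions)
```

Real fraud systems replace this one-dimensional rule with learned models over many transaction features, but the goal is the same: isolate observations that do not behave in the normal manner.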