
Part 1: Demystifying Artificial Intelligence


Are you intrigued by Artificial Intelligence (AI) but find it confusing or intimidating?  

In this blog post, the first in a series focused on AI, we will demystify the terminology and technology behind AI and break everything down into easy-to-understand concepts.  

 

What is Data Science? 

Data science is an interdisciplinary academic field that uses statistics, computing, scientific methods, processes, algorithms, and systems to extract knowledge and insights – in the form of patterns, anomalies, and models – from noisy, structured (tables, databases, logs) and unstructured (web text, images) data. 

 

What is Artificial Intelligence?  

Artificial Intelligence is the science of programming machines to imitate human intelligence, allowing them to reason, plan, decide, learn, adapt, and act like a human would.  

It encompasses a wide range of algorithms (computer code) and techniques that allow computers and software to solve problems, make decisions, understand and generate natural language, navigate the real world, and learn directly from human domain experts.  

From the perspective of knowledge and expertise, we can distinguish two types of AI:  

  • Artificial Narrow Intelligence – Designed for specific tasks or functions. For example, filtering email spam from your inbox, detecting and reading road signs, etc.  
  • Artificial General Intelligence – A theoretical concept where machines can do anything a human, or many humans, can do. Essentially, a machine that can competently accomplish tasks across different cognitive areas and skill sets. 

 

What is Machine Learning?  

Machine Learning (ML) is a branch of AI and an essential component of the growing field of data science. As the name implies, it focuses on the problem of learning. Models are developed to perform a specific task or skill exclusively from training data instead of being programmed with explicit rules and decision logic. 

Here’s how it works: 

A machine is fed input (I) and output (O) data (namely, training data), but no rules or explanations for any particular I→O relation.  

Optimization algorithms are coded to iterate over every I→O pair and produce a model or a “map” that can then be used to predict the most probable output for any new and future input.  

For example, a machine could learn to detect cancer by being exposed to X-ray plates (I) and the corresponding expert diagnoses (O). This allows the machine to replicate expert decisions that it has learned from the data alone.  

Optimization algorithms are mathematical and computational techniques that systematically and gradually adjust the variables of a model to maximize accuracy or minimize errors in its predictions. In simpler terms, they teach the model to get the right (O) for every (I). 
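
To make that concrete, here is a toy sketch in Python of such an optimization loop (gradient descent). The numbers, the single model variable, and the learning rate are invented purely for illustration: the variable is nudged, step by step, until the predicted outputs match the known outputs.

```python
# Bare-bones illustration of an optimization loop (gradient descent).
# We fit a single variable w so that predictions w * I match the outputs O.
# The data and learning rate below are made up purely for illustration.

inputs  = [1.0, 2.0, 3.0, 4.0]   # I
outputs = [2.0, 4.0, 6.0, 8.0]   # O (here, exactly 2 * I)

w = 0.0                # the model's variable, adjusted gradually
learning_rate = 0.01

for step in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * i - o) * i for i, o in zip(inputs, outputs)) / len(inputs)
    w -= learning_rate * grad      # nudge w to reduce the error

print(round(w, 3))     # converges to ~2.0, i.e. the right (O) for every (I)
```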

Good optimization strategies, and the models they produce, can perform well on both the training data and new data that was never seen during learning. However, models must be trained on good-quality, representative data from the start to avoid bias. 
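
Here is a minimal sketch of that whole workflow using the scikit-learn library, with a synthetic dataset standing in for real training data: the model is fitted on (I, O) pairs and then evaluated on data it never saw during learning.

```python
# Minimal illustration of "learning from I/O pairs" with scikit-learn.
# The data here is synthetic -- a stand-in for real features and expert labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# I = input features, O = known outputs (labels)
I, O = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold back some data the model never sees during training
I_train, I_test, O_train, O_test = train_test_split(I, O, test_size=0.2, random_state=42)

# The optimization algorithm iterates over the training pairs and adjusts
# the model's variables to minimize prediction error
model = LogisticRegression(max_iter=1000).fit(I_train, O_train)

# The fitted model ("map") predicts the most probable output for new input
print("Accuracy on unseen data:", model.score(I_test, O_test))
```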

 

What is Deep Learning? 

Deep Learning is the name given to architecting and training very large artificial neural networks. You could simply call them large networks, but that wouldn’t do justice to the kind of results they can produce. 

The power of these networks goes far beyond their scale. They allow specialized sub-networks to learn and master particular abilities, such as feature selection, memorization, attention to saliency, temporal connections, and context encoding.  

They can learn from enormous amounts of data – which would quickly saturate a shallow network – and capture intricate patterns and relationships in data for deeper and more powerful learning.  

This has been shown to significantly improve nuanced decision-making, complex reasoning, and task adaptation, and even to give machines the ability to be creative.  
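
For a rough sense of what “stacking layers” looks like in code, here is a hedged sketch using scikit-learn and synthetic data. Real deep learning models are vastly larger and are built with dedicated frameworks and hardware; this only illustrates the idea of multiple hidden layers.

```python
# Sketch of a small multi-layer ("deeper") neural network.
# Real deep learning models are far larger and trained on specialized
# frameworks and hardware; this only illustrates the idea of stacked layers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

I, O = make_classification(n_samples=2000, n_features=20, random_state=0)
I_train, I_test, O_train, O_test = train_test_split(I, O, test_size=0.2, random_state=0)

# Three hidden layers: each layer can capture increasingly abstract patterns
deep_model = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0)
deep_model.fit(I_train, O_train)

print("Accuracy on unseen data:", deep_model.score(I_test, O_test))
```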

Undoubtedly, Deep Learning is responsible for supercharging AI and its modern applications. However, neural networks still require large amounts of quality data and lengthy, heavy-duty training, forcing scientists to rely on cloud infrastructure and services like Azure and AWS.  

The choice, therefore, between deep learning and traditional machine learning models will ultimately depend on the problem, available data, budget, access to resources, and the team’s skillset. 

 

Why do banks and financial institutions need data-driven solutions?  

Current methods for combating money laundering (AML) and terrorist financing (TF) are no longer fully effective.  

The sheer volume and complexity of data produced by financial institutions and their customers are outpacing legacy systems and human capabilities. False positives have skyrocketed, false negatives continue to slip under the radar, and compliance teams are overwhelmed. In addition, bad actors’ creativity and their access to modern tools and technology show no signs of being exhausted.  

We need a better approach. 

With the right tools and the right people, banks and financial institutions can combat financial crime more effectively.  

Automation and advanced analytics can significantly enhance the ability to detect suspicious activity and reduce false positives. Human oversight and judgment will then contextualize these findings and make critical decisions for better risk management. 

Combining the power of data-driven solutions with human expertise and ingenuity could produce truly remarkable results. 

 

To learn more about data science, AI, Machine Learning, and Deep Learning, listen to our podcast featuring Leandro Loss, Principal Data Scientist at AML RightSource.  

Or, if you would like to find out more about how we currently use the latest and finest technology to fight financial crime, simply fill out our contact form, and we’ll start the conversation.