What Is Machine Learning? Types And Examples
Read an introduction to machine learning, its types, and its role in cybersecurity.
Machine learning involves enabling computers to learn without being explicitly programmed for each task. In this way, the machine does the learning, gathering its own pertinent data instead of relying on someone else to supply it.
Machine learning plays a central role in the development of artificial intelligence (AI), deep learning, and neural networks—all of which involve machine learning’s pattern-recognition capabilities.
Modern machine learning has its roots in Boolean logic. George Boole came up with a kind of algebra in which all values could be reduced to binary values. As a result, the binary systems modern computing is based on can be applied to complex, nuanced things.
Then, in 1952, Arthur Samuel created a program that enabled an IBM computer to get better at checkers the more it played. Fast forward to 1985, when Terry Sejnowski and Charles Rosenberg created a neural network that taught itself how to pronounce words properly, learning 20,000 of them in a single week. In 2016, LipNet, a visual speech recognition AI, was able to read lips in video with 93.4% accuracy.
Machine learning has come a long way, and its applications impact the daily lives of nearly everyone, especially those concerned with cybersecurity.
All types of machine learning, including machine learning in cybersecurity, depend on a common set of terminology. Machine learning, as discussed in this article, will refer to the following terms.
A model, also referred to as a hypothesis, is the representation of a real-world process that the machine-learning algorithm learns from data.
A feature is a parameter or property within the dataset that can be measured.
A feature vector is a set of multiple numerical features. It is used as an input, entered into the machine-learning model to generate predictions and to train the system.
When an algorithm examines a set of data and finds patterns, the system is being “trained” and the resulting output is the machine-learning model.
After the machine-learning model has been trained, it can receive an input and then provide a prediction regarding the output.
The target is the value the machine-learning model is charged with predicting.
When a machine-learning model learns its training data too closely, including its noise and inaccuracies, it performs poorly on new data. This is called “overfitting” the system.
In an underfitting situation, the machine-learning model is not able to find the underlying trend of the input data. This makes the machine learning model inaccurate.
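To make these terms concrete, here is a minimal sketch using scikit-learn. The library choice and the numbers are illustrative assumptions, not part of the discussion above.

```python
# Minimal sketch of the terms above (illustrative data; scikit-learn assumed available).
from sklearn.linear_model import LogisticRegression

# Each row is a feature vector; each column is one measurable feature.
X = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]
# The target is the value the model is charged with predicting.
y = [0, 0, 1, 1]

# Training: the algorithm examines the data and finds patterns; the result is the model.
model = LogisticRegression()
model.fit(X, y)

# Prediction: after training, the model receives an input and predicts the target.
print(model.predict([[5.0, 3.4]]))  # prints the predicted target, e.g. [0]
```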
Machine learning is based on the discovery of patterns and makes use of the following processes:
The decision process involves the machine-learning model making a classification or prediction based on input data, producing an estimate of the pattern it has found in that data.
With error determination, an error function assesses how accurate the model is. The error function compares the model’s output with known examples and can thus judge whether the algorithm is finding the right patterns.
In the model optimization process, the model’s predictions are compared to the points in a dataset, and the algorithm’s weighting factors are adjusted based on how closely the output matches the data, honing the model’s predictive abilities.
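These three steps can be seen in a toy training loop. The sketch below uses plain NumPy and an invented set of points roughly following y = 2x; it illustrates the idea rather than a production training routine.

```python
# Minimal sketch of the three steps: predict, measure error, adjust weights.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])     # inputs
y = np.array([2.1, 4.0, 6.2, 7.9])     # known examples (roughly y = 2x)

w, b = 0.0, 0.0                        # weighting factors to be optimized
lr = 0.01                              # learning rate

for _ in range(2000):
    pred = w * X + b                   # decision process: the model makes predictions
    error = pred - y
    mse = (error ** 2).mean()          # error function: how far off the model is
    grad_w = 2 * (error * X).mean()    # model optimization: nudge the weights
    grad_b = 2 * error.mean()
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))        # settles near the underlying trend (w close to 2)
```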
There are a few different types of machine learning, including supervised, unsupervised, semi-supervised, and reinforcement learning.
With supervised learning, the datasets are labeled, and the labels train the algorithms, enabling them to classify the data they come across accurately and predict outcomes better. Because the correct answers are already known, the model’s mistakes can be measured and corrected during training.
In unsupervised learning, the algorithms cluster and analyze datasets without labels. They then use this clustering to discover patterns in the data without any human help.
In semi-supervised learning, a smaller set of labeled data is input into the system, and the algorithms then use these to find patterns in a larger dataset. This is useful when there is not enough labeled data because even a reduced amount of data can still be used to train the system.
In reinforcement machine learning, the algorithm learns as it goes using trial and error. The system is provided with input regarding whether an outcome was successful or unsuccessful.
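To make the contrast between supervised and unsupervised learning concrete, here is a short sketch using scikit-learn and made-up points; the choice of logistic regression and k-means is purely illustrative.

```python
# Supervised vs. unsupervised learning on the same toy data (illustrative only).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[1, 1], [1, 2], [8, 8], [9, 8]]

# Supervised: labels accompany the data and train the classifier.
labels = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([[2, 1]]))           # predicts a label for a new input

# Unsupervised: no labels; the algorithm clusters the data on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                      # groupings discovered without human help
```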
Machine learning is already playing an important role in cybersecurity. Its predictive and pattern-recognition capabilities make it ideal for addressing several cybersecurity challenges. It can collect, structure, and organize data and then find patterns that can be used to better inform decisions.
For example, a machine-learning model can take a stream of data from a factory floor and use it to predict when assembly line components may fail. It can also predict the likelihood of certain errors happening in the finished product. An engineer can then use this information to adjust the settings of the machines on the factory floor to enhance the likelihood the finished product will come out as desired.
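As a rough illustration of that factory-floor scenario, the sketch below trains a classifier on hypothetical sensor readings; the feature names, values, and failure labels are invented for the example.

```python
# Hypothetical sketch: predicting assembly-line component failure from sensor readings.
from sklearn.ensemble import RandomForestClassifier

# Each row: [temperature_C, vibration_mm_s, hours_since_service]  (invented values)
readings = [
    [65, 1.2, 120], [70, 1.4, 300], [92, 4.8, 950],
    [60, 1.0, 80],  [88, 5.1, 1000], [95, 6.0, 1100],
]
failed_soon = [0, 0, 1, 0, 1, 1]       # 1 = the component failed shortly afterward

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(readings, failed_soon)

# Probability that a new reading precedes a failure; an engineer could use this
# signal to adjust machine settings or schedule maintenance.
print(model.predict_proba([[90, 5.5, 1020]])[0][1])
```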
Machine learning can also help decision-makers figure out which questions to ask as they seek to improve processes. For example, sales managers may be investing time in figuring out what sales reps should be saying to potential customers. However, machine learning may identify a completely different parameter, such as the color scheme of an item or its position within a display, that has a greater impact on the rates of sales. Given the right datasets, a machine-learning model can make these and other predictions that may escape human notice.
Machine learning is already playing a significant role in the lives of everyday people, yet many of its capabilities remain relatively untapped.
Speech recognition is used when a computer transcribes speech into text or tries to understand verbal inputs by users. Speech recognition analyzes speech patterns and uses feedback as to whether or not the output is accurate. In this way, a speech recognition machine-learning model can tell the difference between similar sounds, such as those associated with “f” and “s.”
For example, when someone asks Siri a question, Siri uses speech recognition to decipher the query. In many cases, you can use words like “sell” and “fell” and Siri can tell the difference, thanks to its speech-recognition machine learning. Speech recognition also plays a role in the development of natural language processing (NLP) models, which help computers interact with humans.
Customer service bots have become increasingly common, and these depend on machine learning. For example, even if you do not phrase a query perfectly when asking a customer service bot a question, it can still recognize the general purpose of your query, thanks to machine-learning pattern recognition.
Computers are able to “look” at things and categorize them. They can then use these categories to make decisions. Using machine vision, a computer can, for example, see a small boy crossing the street, identify what it sees as a person, and force a car to stop. Similarly, a machine-learning model can distinguish an object in its view, such as a guardrail, from a line running parallel to a highway. It can then use that information to steer a vehicle.
Recommendation engines can analyze past datasets and then make recommendations accordingly. This machine-learning application depends on regression models. A regression model uses a set of data to predict what will happen in the future.
For example, a company invested $20,000 in advertising every year for five years. Each year, sales went up by 10%. With all other factors being equal, a regression model may indicate that a $20,000 investment in the following year may also produce a 10% increase in sales.
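One way to sketch that regression idea is a simple time-trend fit. In the example below, sales are indexed to 100 in year one and grow 10% per year, mirroring the scenario above; the numbers and the use of scikit-learn are illustrative assumptions.

```python
# Sketch of the regression idea using the scenario above (illustrative figures).
from sklearn.linear_model import LinearRegression

years = [[1], [2], [3], [4], [5]]
sales = [100.0, 110.0, 121.0, 133.1, 146.4]   # roughly 10% growth per year

model = LinearRegression().fit(years, sales)
projected = model.predict([[6]])[0]
print(round(projected, 1))   # a linear fit projects continued growth into year 6
```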
With the help of AI, automated stock traders can make millions of trades in one day. The systems use data from the markets to decide which trades are most likely to be profitable. They can then execute trades in less than a second.
Business applications from inventory management to search engines use machine-learning algorithms to identify common data types and structures and label them for use. Some uses include organizing libraries of files such as videos, documents, and images.
Large language models are used in translation systems, document analysis, and generative AI tools for email, document composition, image labeling, and search engine results annotation.
Machine learning, like most technologies, comes with significant challenges. Some of these impact the day-to-day lives of people, while others have a more tangible effect on the world of cybersecurity.
Many people are concerned that machine learning may do the work humans are supposed to do so well that machines will ultimately supplant humans in several job sectors. In some ways, this has already happened, although the effect has been relatively limited.
For example, the car industry has robots on assembly lines that use machine learning to properly assemble components. In some cases, these robots perform tasks that humans could do if given the opportunity. However, the fallibility of human decisions and physical movement makes machine-learning-guided robots a better and safer alternative.
Also, a machine-learning model does not have to sleep or take lunch breaks. It also will not call in sick or get into disputes with others. Some manufacturers have capitalized on this to replace humans with machine learning algorithms.
However, the fear may be somewhat overblown. While machine learning can do things humans cannot, it also does jobs that humans would rather not do. The same human resources that machine learning “replaced” can, in many cases, be used to accomplish other tasks that machines cannot do, such as making managerial decisions on the fly or serving as mentors, teachers, and artists, roles where human discretion is paramount.
Technological singularity refers to the concept that machines may eventually learn to outperform humans in the vast majority of thinking-dependent tasks, including those involving scientific discovery and creative thinking. This is the premise behind cinematic inventions such as “Skynet” in the Terminator movies.
However, not only is this possibility a long way off, but it may also be slowed by the ways in which people limit the use of machine learning technologies. The ability to make situation-sensitive decisions that factor in human emotions, imagination, and social skills is still not on the horizon. Further, as machine learning takes center stage in some day-to-day activities such as driving, people are constantly looking for ways to limit the amount of “freedom” given to machines.
Because these debates happen not only in people’s kitchens but also on legislative floors and within courtrooms, it is unlikely that machines will be given free rein even when it comes to certain autonomous vehicles. Even if cars that drive themselves completely, with no human inside, became commonplace, machine-learning technology would still be many years away from organizing revolts against humans, overthrowing governments, or attacking important societal institutions.
Since machine learning can analyze objects and people’s faces, it is possible for human privacy to be invaded by the machines that collect and store this data, including data about people’s belongings and the objects within their homes.
For example, if machine learning is used to find a criminal through facial recognition technology, the faces of other people may be scanned and their data logged in a data center without their knowledge. In most cases, because the person is not guilty of wrongdoing, nothing comes of this type of scanning. However, if a government or police force abuses this technology, they can use it to find and arrest people simply by locating them through publicly positioned cameras. For many, this kind of privacy invasion is unacceptable.
On the other hand, machine learning can also help protect people's privacy, particularly their personal data. It can, for instance, help companies stay in compliance with standards such as the General Data Protection Regulation (GDPR), which safeguards the data of people in the European Union. Machine learning can analyze the data entered into a system it oversees and instantly decide how it should be categorized, sending it to storage servers protected with the appropriate kinds of cybersecurity.
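As a rough sketch of that kind of data categorization, the example below trains a tiny text classifier that flags records likely to contain personal data so they can be routed to more tightly protected storage. The training examples, routing targets, and use of scikit-learn are invented for illustration and are not a compliance mechanism on their own.

```python
# Hypothetical sketch: classify incoming records and route them to a storage tier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

records = [
    "customer name jane doe email jane@example.com",
    "shipping address 12 main street phone 555 0101",
    "sensor temperature reading 42.7 C",
    "inventory count for warehouse B is 300 units",
]
labels = ["personal", "personal", "general", "general"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(records, labels)

new_record = "billing contact phone 555 0199 for account 4471"
tier = classifier.predict([new_record])[0]
print("route to:", "restricted-storage" if tier == "personal" else "general-storage")
```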
Because machine-learning models recognize patterns, they are as susceptible to forming biases as humans are. Suppose, for example, a machine-learning algorithm studies the social media accounts of millions of people and concludes that a certain race or ethnicity is more likely to vote for a politician. This politician then caters their campaign, as well as their services after they are elected, to that specific group. In this way, the other groups will have been effectively marginalized by the machine-learning algorithm.
Similarly, bias and discrimination arising from the application of machine learning can inadvertently limit the success of a company’s products. If the algorithm studies the usage habits of people in a certain city and finds that they are more likely to take advantage of a product’s features, the company may choose to target that particular market. However, a group of people in a completely different area may use the product as much as, if not more than, those in that city. They simply have not experienced anything like it and are therefore unlikely to be identified by the algorithm as individuals attracted to its features.
The future of machine learning lies in hybrid AI, which combines symbolic AI and machine learning. Symbolic AI is a rule-based methodology for the processing of data, and it defines semantic relationships between different things to better grasp higher-level concepts. This enables an AI system to comprehend language instead of merely reading data.