A Deep Look at the Three Kinds of Machine Learning Methods

In the field of machine learning, there are three main types of learning methods: supervised learning, unsupervised learning, and semi-supervised learning.

Supervised learning: from known pairs of input data and corresponding output data, generate a function that maps inputs to appropriate outputs, as in classification.

Unsupervised learning: model the input data directly, as in clustering.

Semi-supervised learning: use both labeled and unlabeled data to generate an appropriate classification function.


First, supervised learning

1. Supervised learning is a machine learning method that learns or builds a model from training data and uses that model to predict new instances.

The training data consist of input objects (usually vectors) and their expected outputs. The output of the learned function can be a continuous value (called regression) or a predicted class label (called classification).
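As a minimal sketch of this idea, the toy classifier below learns nothing more than a lookup over (input vector, expected output) pairs and predicts the label of the nearest training point. The data and function names are illustrative, not from the text:

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier
# built purely from (input vector, class label) training pairs.

def predict_1nn(train, x):
    """Return the label of the training point closest to x (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda pair: dist(pair[0], x))[1]

# Training data: input vectors paired with their expected outputs.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

print(predict_1nn(train, (1.1, 0.9)))  # near the "A" examples -> A
print(predict_1nn(train, (5.1, 4.9)))  # near the "B" examples -> B
```

Replacing the discrete labels with real values and the `min` lookup with, say, a fitted line would turn the same setup into regression.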

2. The task of a supervised learner is, after observing some training examples (inputs and their expected outputs), to predict the output of the function for any possible input. To achieve this, the learner must generalize from the existing data to unobserved situations in a way that is "reasonable" (see inductive bias).

In human and animal learning, this is often referred to as concept learning.

3. There are two kinds of models in supervised learning.

Most generally, supervised learning produces a global model that maps input objects to expected outputs; the alternative is to work with local models (e.g., case-based reasoning and nearest neighbors). To solve a given supervised learning problem (say, handwriting recognition), the following steps must be considered:

1) Determine the form of the training examples.
Before doing anything else, the engineer should decide what kind of data to use as an example. It might be a single handwritten character, an entire handwritten word, or a line of handwritten text.

2) Collect the training data. The data need to reflect the real world, so the input objects and their corresponding outputs can be obtained from human experts or from measurements (by machines or sensors).

3) Decide how to represent the input features of the learned function.
The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is converted into a feature vector containing a number of features that describe the object. Because of the curse of dimensionality, the number of features should not be too large, but it must be large enough to predict the output accurately.

4) Decide the structure of the function to be learned and the corresponding learning algorithm. For example, the engineer may choose artificial neural networks or decision trees.

5) Complete the design. The engineer then runs the learning algorithm on the collected data. The parameters of the learning algorithm can be tuned by running it on a subset of the data (called a validation set) or by cross-validation. After the parameters are tuned, the algorithm's performance can be measured on a test set separate from the training set. In supervised learning, the learned functions used for classification are called classifiers. There are many kinds of classifiers, each with its own strengths and weaknesses, and a classifier's performance depends heavily on the characteristics of the data to be classified.
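Steps 4 and 5 above can be sketched end to end: pick a hyperparameter on a held-out validation set, then report accuracy on a separate test set. The k-nearest-neighbour learner and the synthetic two-cluster data below are illustrative assumptions, chosen only to keep the sketch self-contained:

```python
# Sketch of steps 4-5: tune a hyperparameter (k) on a validation set,
# then evaluate the chosen model on a held-out test set.
import random

def knn_predict(train, x, k):
    """Majority label among the k training points nearest to x."""
    nearest = sorted(train,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

random.seed(0)
# Hypothetical toy data: two well-separated clusters of labelled points.
data = [((random.gauss(0, 1), random.gauss(0, 1)), "A") for _ in range(30)] + \
       [((random.gauss(4, 1), random.gauss(4, 1)), "B") for _ in range(30)]
random.shuffle(data)
train, valid, test = data[:30], data[30:45], data[45:]

def accuracy(k, examples):
    return sum(knn_predict(train, x, k) == y for x, y in examples) / len(examples)

best_k = max([1, 3, 5], key=lambda k: accuracy(k, valid))  # tuned on validation set
print("chosen k:", best_k, "| test accuracy:", accuracy(best_k, test))
```

The key discipline the text describes is visible in the last two lines: the test set is touched only once, after all tuning decisions have been made on the validation set.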

No single classifier performs best on all given problems; this is known as the "no free lunch" theorem.

Various empirical rules are used to compare the performance of classifiers and to find the data characteristics that determine that performance. Choosing a classifier that fits a given problem is still an art rather than a science.

The most widely used classifiers today are artificial neural networks, support vector machines, nearest-neighbor methods, Gaussian mixture models, naive Bayes, decision trees, and radial basis function classifiers.

Second, unsupervised learning

1. Unsupervised learning is a class of algorithms for artificial neural networks. Its purpose is to classify raw data so as to reveal the internal structure of the data. Unlike a supervised learning network, an unsupervised learning network is not told during learning whether its classifications are correct; that is, it receives no supervised reinforcement (no signal about which answers are right). Its characteristic is that only input examples are provided to the network, which then actively identifies the latent class structure from those examples. Once learning is complete and tested, the network can also be applied to new cases.

2. The typical example of unsupervised learning is clustering. The purpose of clustering is to group similar things together, without caring what each group represents. Therefore, a clustering algorithm usually only needs to know how to compute similarity in order to work.
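As a minimal sketch of this, the k-means algorithm below groups points purely by Euclidean distance, with no labels involved; the toy points and starting centers are illustrative assumptions:

```python
# Minimal sketch of clustering (unsupervised learning): k-means groups
# points by similarity alone -- no class labels are ever provided.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

points = [(0.0, 0.0), (0.5, 0.2), (0.1, 0.4),   # one tight group
          (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]   # another tight group
centers, clusters = kmeans(points, centers=[(0.0, 0.0), (1.0, 1.0)])
print(sorted(len(c) for c in clusters))  # -> [3, 3]: the two groups are recovered
```

Note that the algorithm only ever computes distances between points; everything else follows from that similarity measure, just as the text says.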

Third, semi-supervised learning

1. The basic idea of semi-supervised learning is to build a learner using a model of the data distribution, and then use that learner to label the unlabeled samples.
The formal description is:

Given a sample set S = L ∪ U drawn from an unknown distribution, where L = {(x1, y1), (x2, y2), ..., (x|L|, y|L|)} is the labeled sample set and U = {x'1, x'2, ..., x'|U|} is the unlabeled sample set, we want a function f: X → Y that accurately predicts the label y for a sample x. Here the xi and x'j are d-dimensional vectors, yi ∈ Y is the label of sample xi, and |L| and |U| are the sizes of L and U, i.e., the number of samples each contains. The function may be parametric, as in maximum likelihood methods; non-parametric, as in nearest-neighbor, neural network, or support vector machine methods; or non-numeric, as in decision tree classification. Semi-supervised learning seeks the optimal learner on the sample set S. How to make full use of the labeled and unlabeled examples is the central problem semi-supervised learning must solve.
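The idea of having a learner label the unlabeled set U can be sketched with a simple self-training loop. The 1-nearest-neighbour learner and the distance-based "confidence" threshold below are illustrative assumptions, not a specific algorithm from the text:

```python
# Sketch of self-training on S = L (labelled) union U (unlabelled):
# repeatedly move points from U into L whenever the current learner
# (here, 1-nearest-neighbour) is "confident", i.e. the point lies close
# to an already-labelled point.

def nearest(labelled, x):
    """Return (squared distance, label) of the labelled point nearest to x."""
    return min((sum((a - b) ** 2 for a, b in zip(p, x)), y) for p, y in labelled)

def self_train(L, U, threshold=1.0):
    while True:
        scored = [(x, nearest(L, x)) for x in U]
        confident = [(x, y) for x, (d, y) in scored if d < threshold]
        if not confident:          # no unlabelled point is close enough: stop
            return L
        for x, y in confident:     # adopt confident predictions as labels
            L.append((x, y))
            U.remove(x)

L = [((0.0, 0.0), "A"), ((5.0, 5.0), "B")]            # labelled set L
U = [(0.5, 0.5), (1.0, 1.0), (4.5, 4.5), (4.0, 4.0)]  # unlabelled set U
L = self_train(L, U)
print(sorted(y for _, y in L))  # -> ['A', 'A', 'A', 'B', 'B', 'B']
```

Note that (1.0, 1.0) and (4.0, 4.0) are labeled only on the second pass, after nearer points have joined L: the unlabeled data themselves propagate the labels outward, which is exactly the leverage semi-supervised learning seeks.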

2. From the sample perspective, semi-supervised learning performs machine learning using a small number of labeled samples and a large number of unlabeled samples. From the perspective of probabilistic learning, it studies how to exploit the connection between the marginal probability P(x) of the training samples and the conditional probability P(y|x) of the outputs to design classifiers with good performance. The existence of such a connection rests on certain assumptions, namely the cluster assumption and the manifold assumption.
