====== Glossary ======
===== C =====
==== class ====

One of a set of target values for a [[:
==== classification ====

The [[:
==== clustering ====

Grouping of data, particularly during [[:
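As a toy illustration of grouping data by similarity, the sketch below clusters one-dimensional points with a simple gap rule. The `gap` threshold and the sample points are hypothetical choices, not part of this glossary:

```python
def cluster_1d(points, gap=1.0):
    """Group sorted 1-D points: start a new cluster whenever the
    distance to the previous point exceeds `gap`."""
    pts = sorted(points)
    clusters = [[pts[0]]]
    for p in pts[1:]:
        if p - clusters[-1][-1] <= gap:
            clusters[-1].append(p)   # close enough: same group
        else:
            clusters.append([p])     # large gap: new group
    return clusters

print(cluster_1d([0.2, 5.3, 0.5, 9.9, 5.1]))  # [[0.2, 0.5], [5.1, 5.3], [9.9]]
```

Real clustering algorithms (k-means, DBSCAN, hierarchical methods) generalise this idea to many dimensions and more robust notions of similarity.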
==== convolutional layer ====

A layer in a [[:
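The core operation of a convolutional layer can be sketched in one dimension. Real layers convolve 2-D images with many learned filters; the hand-picked difference kernel here is only illustrative:

```python
def conv1d(signal, kernel):
    """'Valid' 1-D convolution: slide the kernel across the signal and
    take a weighted sum at each position (no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [1, -1] kernel responds wherever neighbouring values differ (an "edge").
print(conv1d([0, 0, 1, 1, 1, 0], [1, -1]))  # [0, -1, 0, 0, 1]
```

In a trained network the kernel values are learned parameters rather than chosen by hand, and the same kernel is reused at every position, which is what makes convolutional layers parameter-efficient.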
==== convolutional neural network (CNN) ====

A neural network in which at least one layer is a [[:
==== cross-validation ====

A method to estimate how well a model will generalise to new data. In cross-validation,
===== D =====

==== data imbalance ====

When the [[:
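A quick way to spot imbalance is to count how often each class occurs in the labels. The fraud/ok labels below are a made-up example:

```python
from collections import Counter

labels = ["ok"] * 95 + ["fraud"] * 5   # hypothetical, heavily imbalanced labels
counts = Counter(labels)
ratio = counts["ok"] / counts["fraud"]

print(counts)                            # Counter({'ok': 95, 'fraud': 5})
print(f"imbalance ratio {ratio:.0f}:1")  # imbalance ratio 19:1
```

With a split like this, a model that always predicts "ok" is 95% accurate while detecting no fraud at all, which is why imbalanced data usually calls for resampling, class weighting, or metrics other than accuracy.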
==== deep learning ====
===== F =====
==== feature engineering ====

The process of converting data into useful [[:
==== feature selection ====
===== H =====
==== hidden layer ====

Artificial layer in a [[:
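A hidden layer sits between a network's inputs and its output. The tiny forward pass below makes this concrete; the weights are hand-picked for illustration, not trained values:

```python
import math

def forward(x, w_hidden, w_out):
    """One hidden layer: each hidden unit takes a weighted sum of the
    inputs and applies a nonlinearity (tanh); the output unit then
    combines the hidden activations linearly."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

# 2 inputs -> 2 hidden units -> 1 output
y = forward([1.0, 0.5],
            w_hidden=[[0.4, -0.2], [0.1, 0.9]],
            w_out=[1.0, -1.0])
```

The hidden units are "hidden" because their activations are neither the observed inputs nor the final output; training adjusts their weights so they learn useful intermediate representations.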
==== hierarchical agglomerative clustering ====
==== hyperparameters ====

Higher-level properties of a model, such as the learning rate (how fast it can learn) or the number of [[:
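To make the distinction concrete, the sketch below fits a single learned parameter `w` by gradient descent, while `learning_rate` and `epochs` are hyperparameters fixed before training starts. The data and the specific values are illustrative:

```python
def train(xs, ys, learning_rate, epochs):
    """Fit w in y ≈ w * x by gradient descent on squared error.
    learning_rate and epochs are hyperparameters; w is the learned parameter."""
    w = 0.0
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], learning_rate=0.05, epochs=200)
print(round(w, 3))  # close to the true slope 2.0
```

Choosing good hyperparameter values (for example by grid search over the validation error) is itself a separate step from training, since the model cannot learn them from the data directly.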
===== I =====
===== K =====
==== k-fold cross-validation ====

The training set is split into k smaller subsets (folds). The model is trained on (k-1) of the folds and validated on the remaining fold. This is repeated for all k folds, and the performance measure reported by k-fold cross-validation is the average of the results over all k folds.
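The fold bookkeeping can be sketched in plain Python. The round-robin assignment of samples to folds is one simple choice; library implementations typically shuffle the data first:

```python
def k_fold_splits(n, k):
    """Yield (train_indices, val_indices) for each of the k folds."""
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin folds
    for i in range(k):
        train = [j for fi, fold in enumerate(folds) if fi != i for j in fold]
        yield train, folds[i]

# 6 samples, 3 folds: every sample is used for validation exactly once,
# and each split trains on the other two folds.
for train, val in k_fold_splits(6, 3):
    print(train, val)
```

Averaging the validation score over all k splits uses every sample for both training and validation, which gives a more stable estimate than a single train/validation split.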
===== L =====
===== S =====
==== supervised learning ====

A [[:
===== T =====
glossary.1662978297.txt.gz · Last modified: 2022/09/12 12:24 by admin