The complexity of learning concept machine learning in genomics #1
How much data do we need to use a ML algorithm?

Although this is the most common question, it is hard to answer because the amount of data needed depends mainly on how complex the learning concept is. In Machine Learning (ML), learning complexity can be broken down into informational and computational complexity. Informational complexity covers two aspects: how many training examples are needed (sample complexity) and how fast a learner/model's estimate converges to the true population parameters (rate of convergence). Computational complexity refers to the types of algorithms and the computational resources required to extract the learner/model's prediction within a reasonable time. As you can guess by now, this blog will focus on informational complexity to answer the question.

Learn from an example – ‘To be or Not to be banana’

Let’s try to learn what a banana is. In this example, banana is the learning concept (one hypothesis, that is, ‘to be’ or ‘not to be banana’), and the various descriptions associated with banana, such as colors and shapes, are its features[i]. Unlike a human, who does not need non-banana information to classify a banana, a typical machine learning algorithm requires counter-examples. There is One-Class Classification (OCC), which has been widely used for outlier and anomaly detection, but it is a harder problem than conventional binary/multi-class classification.

Let’s place another concept, ‘apple’, into this example and turn the exercise into a binary classification. By doing this, we just made the learning concept simpler: ‘to be banana = not apple’ and ‘not to be banana = apple’. This is a little counter-intuitive, since adding another learning concept to a model makes the model simpler; however, OCC is essentially one versus all others, and the number of ‘all other’ cases is practically infinite. This is where we are in ML: one of the simplest learning activities for a human is among the most difficult problems to solve in ML. Before generating some data for banana, we need to define some terms.

  • Instances[ii] X describe banana with features such as color (f1 = yellow, green or red, |f1| = 3), shape (f2 = cylinder or sphere, |f2| = 2) and a class label (C → {banana, apple}, |C| = 2). The values for color and shape need to be enumerated. For example, we can assign integers to each value: (Yellow=1, Green=2, Red=3), (Cylinder=1, Sphere=2) and (Banana=0, Apple=1) (Table 1).
  • The target function t generates a prediction for ‘is this banana or apple’ as a number in the range 0 ≤ t(xi) ≤ 1. Typically, we want a prediction t(xi) as close as possible to c(xi), for 1 ≤ i ≤ n, where n is the total number of samples.
  • The hypothesis space H can be defined by the conjunction of the features and the target function: h(xi) = (f1i, f2i, t(xi)).
  • Training examples S must contain roughly the same number of banana (0) and apple (1) examples. A sample is described as s(xi) = (f1i, f2i, c(xi)).
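The definitions above can be sketched in a few lines of code. This is a minimal illustration (not from the original article) that encodes the five samples of Table 1 as integer tuples s(x) = (f1, f2, c(x)), using the integer codes defined in the bullet list:

```python
# Integer codes from the text:
# Color: Yellow=1, Green=2, Red=3; Shape: Cylinder=1, Sphere=2;
# Class: banana=0, apple=1.
COLOR = {"yellow": 1, "green": 2, "red": 3}
SHAPE = {"cylinder": 1, "sphere": 2}
CLASS = {"banana": 0, "apple": 1}

def encode(color, shape, label):
    """Return a training sample s(x) = (f1, f2, c(x)) as an integer tuple."""
    return (COLOR[color], SHAPE[shape], CLASS[label])

# The five instances from Table 1
S = [
    encode("yellow", "cylinder", "banana"),  # x1
    encode("green",  "cylinder", "banana"),  # x2
    encode("yellow", "sphere",   "apple"),   # x3
    encode("green",  "sphere",   "apple"),   # x4
    encode("red",    "sphere",   "apple"),   # x5
]
print(S)  # → [(1, 1, 0), (2, 1, 0), (1, 2, 1), (2, 2, 1), (3, 2, 1)]
```

The names `encode`, `COLOR`, `SHAPE` and `CLASS` are ours; any representation that maps each instance to a tuple of enumerated feature values plus a label would serve the same purpose.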

Sample complexity – estimate the size of the training data set in a quick and dirty way

Ideally, we want the training sample set S to cover all possible combinations of features with respect to t, as you can see in Table 1. There are three possible values for f1 and two possible values for f2, and there are two classes in this example. Therefore, the number of all possible instances is |X| = |f1| x |f2| x |C| = 3 x 2 x 2 = 12. However, f2 is a lucky feature[iii] that is mutually exclusive between banana and apple, so |f2| is effectively 1 in this case. In addition, we can subtract one case because there is no red banana. For this example, only 5 instances exhaust the entire instance space X. In general, the number of features (columns) in a data set is exponentially related to the required number of training samples (|S| = n, where n is the number of unique samples in the set). If we assume that all features are binary, with simple values such as yes or no, then |X| = 2 x 2 x 2 = 2^3 = 8. Two to the power of the number of columns is the minimum n in the simplest case. This reasoning only works when all feature values are discrete. If we used gradient color values for Color (the RGB color space ranges from 0 to 16777215 in decimal), the required number of training samples would increase dramatically, because you would now need to multiply by 16777216 for f1 if all possible colors exist in X.
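The counting argument above is easy to verify mechanically. This sketch (our illustration, not part of the article) enumerates the full instance space with `itertools.product` and shows how the count doubles with every additional binary column:

```python
from itertools import product

# |X| is the product of the feature cardinalities and the number of classes.
f1 = [1, 2, 3]   # Color: Yellow, Green, Red
f2 = [1, 2]      # Shape: Cylinder, Sphere
C  = [0, 1]      # Class: banana, apple

X = list(product(f1, f2, C))
print(len(X))    # → 12, i.e. |f1| x |f2| x |C| = 3 x 2 x 2

def instance_count(num_binary_features):
    """With k binary features plus a binary class, |X| = 2^(k+1)."""
    return 2 ** (num_binary_features + 1)

print(instance_count(2))  # → 8, the 2 x 2 x 2 case from the text
```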

It is worth noting that the number of instances calculated here does not guarantee that a learner/model will converge properly. If the amount of data is at or below this number, it is simply too small for most algorithms, except for ML algorithms that evaluate one feature at a time, such as a decision tree. As a rough rule of thumb, many statisticians say that a sample size of 30 is large enough. This rule applies to regression-based ML algorithms that assume one smooth linear decision boundary. Although the optimal n differs case by case, it is not a bad idea to start from a total sample size of N = |X| x 30.
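Putting the rule of thumb into a small helper (again, our illustrative sketch; the function name and signature are our own choices):

```python
def rule_of_thumb_n(feature_cardinalities, num_classes=2, per_cell=30):
    """Starting estimate for the training set size: N = |X| * 30,
    where |X| is the product of all feature cardinalities and the
    number of classes."""
    x = num_classes
    for card in feature_cardinalities:
        x *= card
    return x * per_cell

# Banana/apple example: |X| = 3 * 2 * 2 = 12, so N = 360
print(rule_of_thumb_n([3, 2]))  # → 360
```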

Table 1: Training sample set S

| Instance   | Color (Yellow=1, Green=2, Red=3) | Shape (Cylinder=1, Sphere=2) | Class (Banana=0, Apple=1) |
|------------|----------------------------------|------------------------------|---------------------------|
| x1, Banana | 1                                | 1                            | 0                         |
| x2, Banana | 2                                | 1                            | 0                         |
| x3, Apple  | 1[iv]                            | 2                            | 1                         |
| x4, Apple  | 2                                | 2                            | 1                         |
| x5, Apple  | 3                                | 2                            | 1                         |

Learning curve – a more accurate way to estimate the size of the training data set

In ML, a learning curve is a plot of classification performance against training set size. This is not an estimation method; it requires building a classification model multiple times with different training set sizes. It is a good sanity-check technique for a model (underfitting, i.e. high bias, and overfitting, i.e. high variance), and it can also be used to optimize or improve performance.

Figure 1 shows two examples of learning curves, representing underfitting (left plot) and overfitting (right plot). These are plots of the training and cross-validation error (root mean square error in this example) as the training set size increases. In the underfitting case (left plot), both the training and cross-validation errors are very high, and increasing the training set size does not help. This indicates that the features in the data set are not particularly relevant to the target learning concept; the examples are confusing the model. The right plot shows an overfitting case. Here the cross-validation error is much higher than the training error: as more examples are added, the training error climbs slowly while the cross-validation error continues to decrease, but a large gap remains between the two. The performance of an overfitted model usually looks good during training; however, it will fail miserably on real-life data.
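The procedure behind such a plot can be sketched without any ML library. The following is our toy illustration, not the article's experiment: a trivial one-feature threshold classifier is refit on growing subsets of synthetic, noisily labeled data, and the training and validation errors are recorded at each size (in practice you would plot these pairs; with scikit-learn, `sklearn.model_selection.learning_curve` automates the same loop):

```python
import random

random.seed(0)

def make_data(n):
    """1-D points in [0, 1); true label is 1 if x > 0.5, with 10% label noise."""
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.1:
            y = 1 - y
        data.append((x, y))
    return data

def error(data, t):
    """Fraction of samples misclassified by threshold t."""
    return sum(int(x > t) != y for x, y in data) / len(data)

def fit_threshold(train):
    """Pick the threshold (over a small grid) that minimizes training error."""
    grid = [i / 20 for i in range(21)]
    return min(grid, key=lambda t: error(train, t))

train_pool, valid = make_data(400), make_data(200)
for size in (10, 50, 100, 200, 400):
    t = fit_threshold(train_pool[:size])
    # One point of the learning curve: (size, training error, validation error)
    print(size, round(error(train_pool[:size], t), 3), round(error(valid, t), 3))
```

With enough data the two error curves should converge toward the 10% noise floor; a persistent gap between them at large sizes is the overfitting signature described above.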

Back in the day, not a single ML paper was accepted without a learning curve. Without this simple plot, the entire performance claim would be unverifiable.


Kihoon Yoon
Sr. Principal Systems Dev Eng
+1 512 728 4191

[i] Or attributes. A feature is an individual measurable property or characteristic of a phenomenon being observed.

[ii] An instance is also referred to as an example or a sample.

[iii] If a feature like f2 exists in a data set, we could make a 100% accurate prediction simply by looking at f2.

[iv] Is there a yellow apple? Yes, Golden Delicious is yellow.

Article ID: SLN312364

Last Date Modified: 08/14/2018 07:37 AM
