# My first VC bound

## In words...

The symmetrization Lemma bounds the probability of a large deviation between the empirical and expected risks via the probability of a large deviation between the empirical risk computed on the training sample and the one computed on a ghost sample. An error bound can then be obtained by bounding the latter probability via the growth function. Such bounds are called Vapnik-Chervonenkis bounds, or VC bounds.

The bound obtained here can be difficult to use due to the dependence of the growth function on the training sample size. However, the bound can be reformulated in terms of the VC-dimension, which is independent of the sample size.

## In maths...

In a binary classification setting with a given function class $\F$, we have the following bound on the risk $R(f)$ of any classifier $f\in\F$.

For all $\delta > 0$, the bound $$\forall f\in\F,\quad R(f) \leq R_{emp}(f) + 2\sqrt{2\frac{\log \Pi_{\F}(2N) + \log\frac{2}{\delta}}{N}}$$ holds with probability at least $1-\delta$.
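As a quick numerical illustration, the bound can be evaluated for a hypothetical finite class of $M$ functions, for which $\Pi_{\F}(2N)\leq M$; the class size $M$ and the values of $N$ and $\delta$ below are arbitrary choices, not part of the statement.

```python
import math

def vc_confidence_term(log_growth, N, delta):
    """Confidence term 2 * sqrt(2 * (log Pi_F(2N) + log(2/delta)) / N)."""
    return 2 * math.sqrt(2 * (log_growth + math.log(2 / delta)) / N)

# Hypothetical finite class of M functions, so that Pi_F(2N) <= M.
M, delta = 1000, 0.05
for N in (100, 1000, 10000):
    gap = vc_confidence_term(math.log(M), N, delta)
    print(f"N = {N:6d}:  R(f) <= R_emp(f) + {gap:.3f}")
```

The confidence term decreases as $O(1/\sqrt{N})$ whenever the growth function grows at most polynomially in $N$, which is the typical regime for classes of finite VC-dimension.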

Proof: Introducing a ghost sample $D^\prime$ and applying the symmetrization Lemma yields $$P^N\left\{ \sup_{f\in\F} ( R(f) - R_{emp}(f) ) \geq \epsilon\right\} \leq 2 P^{2N}\left\{ \sup_{f\in\F} ( R_{emp}^\prime(f) - R_{emp}(f) )\geq \frac{\epsilon}{2} \right\} ,$$ which holds provided that $N\epsilon^2 \geq 2$. Note that the quantity $$R_{emp}^\prime(f) - R_{emp}(f) = \frac{1}{N}\sum_{i=1}^N \I{f(X_i^\prime)\neq Y_i^\prime} - \frac{1}{N}\sum_{i=1}^N \I{f(X_i)\neq Y_i} = \frac{1}{N}\sum_{i=1}^N ( \I{f(X_i^\prime)\neq Y_i^\prime} - \I{f(X_i)\neq Y_i} )$$ only depends on the values $\mathbf{f}=(f_1,\dots,f_N, f_1^\prime,\dots,f_N^\prime)$ taken by the function $f$ on the training sample $D$ and the ghost sample $D^\prime$. Thus, by considering the projection $\F_{DD^\prime}$ of $\F$ onto the concatenation $DD^\prime$ of the two samples, we have $$\sup_{f\in\F} ( R_{emp}^\prime(f) - R_{emp}(f) ) = \sup_{\mathbf{f} \in\F_{DD^\prime}} \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } )$$ and \begin{align*} P^{2N} \left\{ \sup_{f\in\F} (R_{emp}^\prime(f) - R_{emp}(f))\geq \frac{\epsilon}{2}\right\} &= P^{2N} \left\{ \sup_{\mathbf{f} \in\F_{DD^\prime}} \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} \\ & = P^{2N} \left\{ \exists \mathbf{f}\in \F_{DD^\prime} \ :\ \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} \end{align*} because the supremum is taken over the finite set $\F_{DD^\prime}$ and is therefore at least $\frac{\epsilon}{2}$ if and only if $\frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}$ for at least one $\mathbf{f}\in \F_{DD^\prime}$. 
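To make the projection step concrete, here is a small sketch with the (uncountable) class of threshold classifiers on the real line, $f_a(x) = \I{x > a}$: projecting it onto a finite sample leaves only finitely many distinct label vectors. The sample points below are arbitrary.

```python
# Projection of the threshold class f_a(x) = 1{x > a} onto a finite sample:
# although there are uncountably many thresholds a, only finitely many
# distinct label vectors (projections) can be realized on the sample.
points = sorted([0.1, 0.4, 0.5, 0.9, 1.3, 2.0])  # stands in for the concatenation D D'
# One representative threshold per "gap": below all points, then just above each point.
thresholds = [points[0] - 1.0] + [x + 1e-9 for x in points]
projections = {tuple(int(x > a) for x in points) for a in thresholds}
print(len(projections))  # → 7 (= len(points) + 1)
```

Here the growth function of the threshold class is $\Pi_{\F}(m) = m + 1$, so the projection set is always far smaller than the $2^m$ conceivable labelings.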
Since $\F_{DD^\prime}$ is a finite (hence countable) set, we can express the probability that such an $\mathbf{f}$ exists as the probability of a union of simple events: $$P^{2N} \left\{ \exists \mathbf{f}\in \F_{DD^\prime} \ :\ \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} = P^{2N} \left(\bigcup_{\mathbf{f}\in \F_{DD^\prime}} \left\{ \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} \right)$$ and use the union bound to obtain $$P^{2N} \left\{ \sup_{\mathbf{f} \in\F_{DD^\prime}} \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} \leq |\F_{DD^\prime}| \ \max_{\mathbf{f} \in\F_{DD^\prime}} P^{2N} \left\{ \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} .$$ Using the definition of the growth function, we have $|\F_{DD^\prime}|\leq \Pi_{\F}(2N)$ and $$P^{2N} \left\{ \sup_{\mathbf{f} \in\F_{DD^\prime}} \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} \leq \Pi_{\F}(2N)\ \max_{\mathbf{f} \in\F_{DD^\prime}} P^{2N} \left\{ \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) \geq \frac{\epsilon}{2}\right\} .$$
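The union bound step can also be checked empirically with a small Monte Carlo sketch; the i.i.d. $\pm 1$ averages below merely play the role of the averages of indicator differences, and all sizes and thresholds are arbitrary choices.

```python
import random

random.seed(0)
N, K, t, trials = 50, 10, 0.18, 5000  # K plays the role of |F_{DD'}|

def avg_signs(n):
    # Average of n i.i.d. +/-1 variables, standing in for the average of the
    # indicator differences associated with one fixed f in F_{DD'}.
    return sum(random.choice((-1, 1)) for _ in range(n)) / n

# Compare P{ max over K averages >= t } with the union bound K * P{ one average >= t }.
union = sum(max(avg_signs(N) for _ in range(K)) >= t for _ in range(trials)) / trials
single = sum(avg_signs(N) >= t for _ in range(trials)) / trials
print(union, "<=", K * single)
```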

Here, we introduce the random variable $L_i = \I{f_i^\prime \neq Y_i^\prime} - \I{f_i\neq Y_i}$, whose expectation satisfies $$\E L_i = \E[ \I{f_i^\prime \neq Y_i^\prime} - \I{f_i\neq Y_i}] = \E[\I{f_i^\prime \neq Y_i^\prime}] - \E[ \I{f_i\neq Y_i}] = R(f) - R(f) = 0$$ and which allows us to express $R_{emp}^\prime(f) - R_{emp}(f)$ as $$R_{emp}^\prime(f) - R_{emp}(f) = \frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } ) = \frac{1}{N}\sum_{i=1}^N L_i = \frac{1}{N}\sum_{i=1}^N L_i - \E L_i .$$ In addition, note that $L_i$ is a bounded random variable taking values in $[-1,1]$. Therefore, we can apply Hoeffding's inequality, $$P^{2N} \left\{\frac{1}{N}\sum_{i=1}^N L_i - \E L_i\geq t\right\} \leq \exp \left(\frac{-Nt^2}{2}\right),$$ with $t = \epsilon / 2$ to obtain $$P^{2N} \left\{\frac{1}{N}\sum_{i=1}^N ( \I{f_i^\prime\neq Y_i^\prime} - \I{f_i \neq Y_i } )\geq \frac{\epsilon}{2}\right\} \leq \exp \left(\frac{-N\epsilon^2}{8}\right).$$
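As a sanity check, Hoeffding's bound for these $[-1,1]$-valued variables can be compared with the empirical tail frequency of a simulation; the Bernoulli parameter and the sizes below are arbitrary choices.

```python
import math
import random

random.seed(0)
N, p, t, trials = 100, 0.3, 0.2, 20000

# Simulate L_i = B'_i - B_i with B_i, B'_i i.i.d. Bernoulli(p), so that
# L_i takes values in {-1, 0, 1} and E[L_i] = 0.
def tail_frequency():
    hits = 0
    for _ in range(trials):
        s = sum((random.random() < p) - (random.random() < p) for _ in range(N))
        if s / N >= t:
            hits += 1
    return hits / trials

freq = tail_frequency()
hoeffding = math.exp(-N * t**2 / 2)  # Hoeffding bound for range-[-1,1] variables
print(freq, "<=", hoeffding)
```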

Gathering the results, we have $$P\left\{ \sup_{f\in\F} ( R(f) - R_{emp}(f) ) \geq \epsilon\right\} \leq 2 \Pi_{\F}(2N)\exp \left(\frac{-N\epsilon^2}{8}\right).$$ Now, define $$\delta = 2 \Pi_{\F}(2N)\exp \left(\frac{-N\epsilon^2}{8}\right) .$$ This leads to $\log \frac{\delta}{2\Pi_{\F}(2N)} = \frac{-N\epsilon^2}{8}$ and $$\epsilon = \sqrt{8\frac{\log \frac{2\Pi_{\F}(2N)}{\delta}}{N}} = 2 \sqrt{2\frac{\log \Pi_{\F}(2N) + \log \frac{2}{\delta}}{N}}.$$ Introducing this value of $\epsilon$ in the last bound implies that $$P\left\{ \sup_{f\in\F} ( R(f) - R_{emp}(f) ) \geq 2 \sqrt{2\frac{\log \Pi_{\F}(2N) + \log \frac{2}{\delta}}{N}} \right\} \leq \delta ,$$ which corresponds to the desired result by the probability inversion principle and the fact that for all $f\in\F$, $$R(f) \leq R_{emp}(f) + \sup_{f\in\F} ( R(f) - R_{emp}(f) ).$$
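The inversion between $\epsilon$ and $\delta$ can be verified numerically: substituting the derived $\epsilon$ back into the definition of $\delta$ must recover the original value exactly. The growth-function value below is an arbitrary placeholder.

```python
import math

Pi, N, delta = 500.0, 2000, 0.01  # Pi stands in for Pi_F(2N) (illustrative value)
eps = 2 * math.sqrt(2 * (math.log(Pi) + math.log(2 / delta)) / N)
delta_back = 2 * Pi * math.exp(-N * eps**2 / 8)
print(delta_back)  # recovers delta = 0.01 (up to floating-point error)
```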

To conclude, we have to check the condition $N\epsilon^2 \geq 2$ that allowed us to apply the symmetrization Lemma. Since $N\epsilon^2 = 8 \log \frac{2\Pi_{\F}(2N)}{\delta}$, this condition boils down to a condition on $\delta$: $$\delta \leq 2 \Pi_{\F}(2N)\exp \left(\frac{-1}{4}\right)$$ which trivially holds for $\delta\leq 1$ due to $$2 \Pi_{\F}(2N)\exp \left(\frac{-1}{4}\right) \geq \Pi_{\F}(2N) \geq 1$$ (unless $\F$ is empty).
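The closing claim rests on the numerical fact $2 e^{-1/4} > 1$, which is easy to confirm:

```python
import math

# 2 * exp(-1/4) ≈ 1.5576 >= 1, hence 2 * Pi_F(2N) * exp(-1/4) >= Pi_F(2N) >= 1,
# so any delta <= 1 satisfies the condition.
print(2 * math.exp(-0.25))
```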