Distance-based Losses

This section lists all the subtypes of DistanceLoss that are implemented in this package.

LPDistLoss

class LPDistLoss

The \(p\)-th power absolute distance loss. It is Lipschitz continuous if and only if \(p = 1\), convex if and only if \(p \ge 1\), and strictly convex if and only if \(p > 1\).

Loss function
\[L(r) = | r | ^p\]
Derivative
\[L'(r) = p \cdot r \cdot | r | ^{p-2}\]
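
A minimal Julia sketch of the two formulas above; the names value_lp and deriv_lp are illustrative, not part of the package's API:

    # Direct transcription of L(r) = |r|^p and its derivative (hypothetical names).
    value_lp(r, p) = abs(r)^p
    deriv_lp(r, p) = p * r * abs(r)^(p - 2)   # not defined at r = 0 for p < 2

    value_lp(0.5, 3)   # 0.125
    deriv_lp(0.5, 3)   # 0.75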

L1DistLoss

class L1DistLoss

The absolute distance loss. Special case of the LPDistLoss with \(p = 1\). It is Lipschitz continuous and convex, but not strictly convex.

Loss function
\[L(r) = | r |\]
Derivative
\[L'(r) = \textrm{sign}(r)\]
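
For completeness, the same kind of sketch for the absolute distance loss (hypothetical names, not the package's API):

    value_l1(r) = abs(r)       # L(r) = |r|
    deriv_l1(r) = sign(r)      # L'(r) = sign(r)

    value_l1(-2.0), deriv_l1(-2.0)   # (2.0, -1.0)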

L2DistLoss

class L2DistLoss

The least squares loss. Special case of the LPDistLoss with \(p = 2\). It is strictly convex.

Loss function
\[L(r) = | r | ^2\]
Derivative
\[L'(r) = 2 r\]
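
And for the least squares loss (again, illustrative names only):

    value_l2(r) = abs2(r)      # L(r) = |r|^2
    deriv_l2(r) = 2r           # L'(r) = 2r

    value_l2(3.0), deriv_l2(3.0)     # (9.0, 6.0)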

LogitDistLoss

class LogitDistLoss

The distance-based logistic loss for regression. It is strictly convex and Lipschitz continuous.

Loss function
\[L(r) = - \ln \frac{4 e^r}{(1 + e^r)^2}\]
Derivative
\[L'(r) = \tanh \left( \frac{r}{2} \right)\]
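
A naive Julia transcription of the logistic distance loss (hypothetical names; a real implementation would guard exp against overflow for large |r|):

    value_logit(r) = -log(4 * exp(r) / (1 + exp(r))^2)
    deriv_logit(r) = tanh(r / 2)

    value_logit(2.0), deriv_logit(2.0)   # ≈ (0.868, 0.762)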

HuberLoss

class HuberLoss(α)

Loss function commonly used for robustness to outliers. It behaves like the L2DistLoss for small residuals (\(|r| \le \alpha\)) and like a scaled L1DistLoss for large residuals, which limits the influence of outliers. It is Lipschitz continuous and convex, but not strictly convex.

Loss function
\[\begin{split}L(r) = \begin{cases} \frac{r^2}{2} & \quad \text{if } | r | \le \alpha \\ \alpha | r | - \frac{\alpha^2}{2} & \quad \text{otherwise}\\ \end{cases}\end{split}\]
Derivative
\[\begin{split}L'(r) = \begin{cases} r & \quad \text{if } | r | \le \alpha \\ \alpha \cdot \textrm{sign}(r) & \quad \text{otherwise}\\ \end{cases}\end{split}\]
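
A piecewise Julia sketch of the Huber loss and its derivative (illustrative names, not the package's API):

    value_huber(r, α) = abs(r) <= α ? r^2 / 2 : α * abs(r) - α^2 / 2
    deriv_huber(r, α) = abs(r) <= α ? r : α * sign(r)

    value_huber(0.5, 1.0), deriv_huber(0.5, 1.0)   # (0.125, 0.5) — quadratic branch
    value_huber(3.0, 1.0), deriv_huber(3.0, 1.0)   # (2.5, 1.0)   — linear branch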

L1EpsilonInsLoss

class L1EpsilonInsLoss(ϵ)

The \(\epsilon\)-insensitive loss. Typically used in linear support vector regression. It ignores deviations smaller than \(\epsilon\), but penalizes larger deviations linearly. It is Lipschitz continuous and convex, but not strictly convex.

Loss function
\[L(r) = \max \{ 0, | r | - \epsilon \}\]
Derivative
\[\begin{split}L'(r) = \begin{cases} \frac{r}{ | r | } & \quad \text{if } \epsilon \le | r | \\ 0 & \quad \text{otherwise}\\ \end{cases}\end{split}\]
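
The \(\epsilon\)-insensitive loss translates to Julia just as directly (hypothetical names):

    value_l1eps(r, ϵ) = max(0, abs(r) - ϵ)
    deriv_l1eps(r, ϵ) = abs(r) >= ϵ ? sign(r) : zero(r)

    value_l1eps(0.3, 0.5), deriv_l1eps(0.3, 0.5)   # (0.0, 0.0) — inside the insensitive band
    value_l1eps(1.5, 0.5), deriv_l1eps(1.5, 0.5)   # (1.0, 1.0)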

L2EpsilonInsLoss

class L2EpsilonInsLoss(ϵ)

The squared \(\epsilon\)-insensitive loss. Typically used in linear support vector regression. It ignores deviations smaller than \(\epsilon\), but penalizes larger deviations quadratically. It is convex, but not strictly convex.

Loss function
\[L(r) = \max \{ 0, | r | - \epsilon \}^2\]
Derivative
\[\begin{split}L'(r) = \begin{cases} 2 \cdot \textrm{sign}(r) \cdot \left( | r | - \epsilon \right) & \quad \text{if } \epsilon \le | r | \\ 0 & \quad \text{otherwise}\\ \end{cases}\end{split}\]
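
And the squared variant, again just a transcription of the formulas above with illustrative names:

    value_l2eps(r, ϵ) = max(0, abs(r) - ϵ)^2
    deriv_l2eps(r, ϵ) = abs(r) >= ϵ ? 2 * sign(r) * (abs(r) - ϵ) : zero(r)

    value_l2eps(2.0, 0.5), deriv_l2eps(2.0, 0.5)   # (2.25, 3.0)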

PeriodicLoss

class PeriodicLoss(c)

Measures distance on a circle of specified circumference \(c\).

Loss function
\[L(r) = 1 - \cos \left ( \frac{2 r \pi}{c} \right )\]
Derivative
\[L'(r) = \frac{2 \pi}{c} \cdot \sin \left( \frac{2r \pi}{c} \right)\]
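
A quick Julia sketch that makes the periodicity visible (illustrative names; c is the circumference):

    value_per(r, c) = 1 - cos(2r * π / c)
    deriv_per(r, c) = (2π / c) * sin(2r * π / c)

    value_per(1.0, 4.0)   # ≈ 1.0 — a quarter of the circumference
    value_per(4.0, 4.0)   # ≈ 0.0 — a full revolution costs nothing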

QuantileLoss

class QuantileLoss(τ)

The quantile loss, also known as the pinball loss. Typically used to estimate conditional \(\tau\)-quantiles. It is Lipschitz continuous and convex, but not strictly convex.

Loss function
\[\begin{split}L(r) = \begin{cases} \left( 1 - \tau \right) r & \quad \text{if } r \ge 0 \\ - \tau r & \quad \text{otherwise} \\ \end{cases}\end{split}\]
Derivative
\[\begin{split}L'(r) = \begin{cases} 1 - \tau & \quad \text{if } r \ge 0 \\ - \tau & \quad \text{otherwise} \\ \end{cases}\end{split}\]
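
A direct Julia transcription of the pinball loss (hypothetical names, not the package's API):

    value_quant(r, τ) = r >= 0 ? (1 - τ) * r : -τ * r
    deriv_quant(r, τ) = r >= 0 ? 1 - τ : -τ

    value_quant( 2.0, 0.7)   # ≈ 0.6 — overshooting is cheap for a high quantile
    value_quant(-2.0, 0.7)   # ≈ 1.4 — undershooting is penalized more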

Note

You may note that our definition of the QuantileLoss looks different from what one usually sees in the literature. The reason is that we define the residual as \(r = \hat{y} - y\) instead of \(r_{\textrm{usual}} = y - \hat{y}\), so our definition is related to the usual one by \(r = - r_{\textrm{usual}}\).
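
A short numeric check of this relationship (all names here are purely for illustration):

    τ = 0.3
    y, ŷ = 2.0, 3.0          # the prediction overshoots the target by 1
    r = ŷ - y                # residual as defined in this package:  1.0
    r_usual = y - ŷ          # residual as commonly defined:        -1.0
    ours(r)  = r >= 0 ? (1 - τ) * r : -τ * r
    usual(r) = r >= 0 ? τ * r : (τ - 1) * r
    ours(r), usual(r_usual)  # both ≈ 0.7 — the two conventions agree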