## Gaussian Blur

(From Wikipedia)

**Gaussian blur** is the result of blurring an image by a
Gaussian function. It is a widely
used effect in graphics software, typically to reduce image noise and reduce
detail. The visual effect of this blurring technique is a smooth
blur resembling that of viewing the image through a translucent screen, distinctly
different from the bokeh
effect produced by an out-of-focus lens or the shadow of an object
under usual illumination. Gaussian smoothing is also used as a
pre-processing stage in computer vision algorithms in order to
enhance image structures at different scales—see scale-space representation and scale-space
implementation.

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function; this is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle (i.e., a circular box blur) would more accurately reproduce the bokeh effect. Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low pass filter.

### Mechanics

The Gaussian blur is a type of image-blurring filter that uses a Gaussian function (which is also used for the normal distribution in statistics) for calculating the transformation to apply to each pixel in the image. The equation of a Gaussian function in one dimension is

- $G\left(x\right)=\frac{1}{\sqrt{2\pi}\sigma}{e}^{-\frac{{x}^{2}}{2{\sigma}^{2}}}$

In two dimensions, it is the product of two such Gaussians, one per direction:

- $G\left(x,y\right)=\frac{1}{2\pi {\sigma}^{2}}{e}^{-\frac{{x}^{2}+{y}^{2}}{2{\sigma}^{2}}}$

where *x* is the distance from the origin in the
horizontal axis, *y* is the distance from the origin in the
vertical axis, and *σ* is the standard deviation of the
Gaussian distribution. When applied in two dimensions, this formula
produces a surface whose contours are concentric
circles with a Gaussian distribution from the center point.
Values from this distribution are used to build a convolution matrix which
is applied to the original image. Each pixel's new value is set to
a weighted average of that pixel's
neighborhood. The original pixel's value receives the heaviest
weight (having the highest Gaussian value) and neighboring pixels
receive smaller weights as their distance to the original pixel
increases. This results in a blur that preserves boundaries and
edges better than other, more uniform blurring filters; see also
scale-space
implementation.
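The kernel construction described above can be sketched in plain Python; the function names `gaussian_2d` and `gaussian_kernel` are illustrative choices of this sketch, not from any particular library:

```python
import math

def gaussian_2d(x, y, sigma):
    """Value of the 2-D Gaussian G(x, y) at an offset (x, y) from the center."""
    return math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

def gaussian_kernel(sigma, radius):
    """Square convolution matrix of side 2*radius + 1, normalized to sum to 1."""
    kernel = [[gaussian_2d(x, y, sigma)
               for x in range(-radius, radius + 1)]
              for y in range(-radius, radius + 1)]
    total = sum(sum(row) for row in kernel)
    # Normalize so the weighted average preserves overall brightness.
    return [[v / total for v in row] for row in kernel]

k = gaussian_kernel(sigma=1.0, radius=2)  # a 5x5 matrix
```

As the text says, the center entry of the matrix carries the heaviest weight, and the weights fall off symmetrically with distance from the center.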

In theory, the Gaussian function at every point on the image
will be non-zero, meaning that the entire image would need to be
included in the calculations for each pixel. In practice, when
computing a discrete approximation of the Gaussian function, the
weights assigned to pixels at a distance of more than 3*σ* from the
center are small enough to be considered effectively zero. Thus
contributions from pixels outside
that range can be ignored. Typically, an image processing program
need only calculate a matrix with dimensions $\lceil 6\sigma \rceil \times \lceil 6\sigma \rceil $ (where $\lceil \rceil $ is the ceiling function) to ensure
a result sufficiently close to that obtained by the entire Gaussian
distribution.
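Under the 3*σ* truncation rule above, the kernel radius and side length can be computed as follows (a minimal sketch; `kernel_radius` and `kernel_size` are hypothetical helper names). Forcing an odd size so the kernel has a center pixel can make the side slightly larger than $\lceil 6\sigma \rceil$:

```python
import math

def kernel_radius(sigma):
    # Truncate the Gaussian at 3*sigma on each side of the center pixel.
    return math.ceil(3 * sigma)

def kernel_size(sigma):
    # Roughly ceil(6*sigma) taps, made odd so the kernel has a center.
    return 2 * kernel_radius(sigma) + 1
```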

In addition to being circularly symmetric, the Gaussian blur can
be applied to a two-dimensional image as two independent
one-dimensional calculations, and so is termed *linearly
separable*. That is, the effect of applying the two-dimensional
matrix can also be achieved by applying a series of
single-dimensional Gaussian matrices in the horizontal direction,
then repeating the process in the vertical direction. In
computational terms, this is a useful property, since the
calculation can be performed in $O\left({\mathrm{w}}_{\mathrm{kernel}}{\mathrm{w}}_{\mathrm{image}}{\mathrm{h}}_{\mathrm{image}}\right)+O\left({\mathrm{h}}_{\mathrm{kernel}}{\mathrm{w}}_{\mathrm{image}}{\mathrm{h}}_{\mathrm{image}}\right)$ time (where *h* is height and *w* is
width; see Big O notation), as opposed to $O\left({\mathrm{w}}_{\mathrm{kernel}}{\mathrm{h}}_{\mathrm{kernel}}{\mathrm{w}}_{\mathrm{image}}{\mathrm{h}}_{\mathrm{image}}\right)$ for a non-separable kernel.

Applying multiple, successive Gaussian blurs to an image has the same effect as applying a single, larger Gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. For example, applying successive Gaussian blurs with radii of 6 and 8 gives the same result as applying a single Gaussian blur of radius 10, since $\sqrt{{6}^{2}+{8}^{2}}=10$. Because of this relationship, processing time cannot be saved by simulating a Gaussian blur with successive, smaller blurs; the time required will be at least as great as performing the single large blur.
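This quadrature rule for combining blur radii is easy to verify numerically (`combined_sigma` is an illustrative name, not a standard function):

```python
import math

def combined_sigma(*sigmas):
    """Effective radius of several successive Gaussian blurs:
    the square root of the sum of the squared radii."""
    return math.sqrt(sum(s * s for s in sigmas))

print(combined_sigma(6, 8))  # 10.0, matching the example in the text
```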

Gaussian blurring is commonly used when reducing the size of an image. When downsampling an image, it is common to apply a low-pass filter to the image prior to resampling. This is to ensure that spurious high-frequency information does not appear in the downsampled image (aliasing). Gaussian blurs have nice properties, such as having no sharp edges, and thus do not introduce ringing into the filtered image.

### Low-pass filter

Gaussian blur is a low-pass filter, attenuating high-frequency signals.

Its amplitude Bode plot (log amplitude against frequency) is a parabola.

### Implementation

A Gaussian blur effect is typically generated by convolving an image with a kernel of Gaussian values. In practice, it is best to take advantage of the Gaussian blur's linearly separable property by dividing the process into two passes. In the first pass, a one-dimensional kernel is used to blur the image in only the horizontal or vertical direction. In the second pass, another one-dimensional kernel is used to blur in the remaining direction. The resulting effect is the same as convolving with a two-dimensional kernel in a single pass, but requires fewer calculations.
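The two-pass approach can be sketched in pure Python on a grayscale image stored as a list of rows. This is a minimal illustration: the clamp-to-edge boundary handling and all function names are choices of this sketch, not a fixed convention.

```python
import math

def gaussian_kernel_1d(sigma):
    """Normalized 1-D Gaussian kernel, truncated at 3*sigma."""
    radius = math.ceil(3 * sigma)
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k], radius

def convolve_rows(img, kernel, radius):
    """Convolve each row with the 1-D kernel (horizontal pass)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i, kv in enumerate(kernel):
                # Clamp to the edge so border pixels still get full weight.
                xx = min(max(x + i - radius, 0), w - 1)
                acc += kv * img[y][xx]
            out[y][x] = acc
    return out

def transpose(img):
    return [list(row) for row in zip(*img)]

def gaussian_blur(img, sigma):
    kernel, radius = gaussian_kernel_1d(sigma)
    out = convolve_rows(img, kernel, radius)              # horizontal pass
    out = transpose(convolve_rows(transpose(out), kernel, radius))  # vertical pass
    return out
```

Each pass touches every pixel once with a kernel of length $2\lceil 3\sigma \rceil + 1$, which is where the $O(w_{\mathrm{kernel}} \cdot w_{\mathrm{image}} \cdot h_{\mathrm{image}})$ per-pass cost quoted earlier comes from.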

Discretisation is typically achieved by sampling the Gaussian filter kernel at discrete points, normally at positions corresponding to the midpoints of each pixel. This reduces the computational cost but, for very small filter kernels, point sampling the Gaussian function with very few samples leads to a large error. In these cases, accuracy is maintained (at a slight computational cost) by integration of the Gaussian function over each pixel's area.
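One way to implement the per-pixel integration mentioned above is to difference the Gaussian's cumulative distribution function, which is available through the error function `math.erf` in Python's standard library (a sketch; the helper name is hypothetical):

```python
import math

def integrated_gaussian_kernel_1d(sigma):
    """1-D kernel in which each tap is the integral of the Gaussian over
    that pixel's extent [x - 0.5, x + 0.5], rather than a point sample."""
    radius = math.ceil(3 * sigma)

    def cdf(t):
        # Cumulative distribution function of the zero-mean Gaussian.
        return 0.5 * (1.0 + math.erf(t / (sigma * math.sqrt(2.0))))

    k = [cdf(x + 0.5) - cdf(x - 0.5) for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]
```

For large *σ* this converges to the point-sampled kernel; the difference matters mainly when *σ* is below about one pixel, where point sampling misses much of the curve.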

When converting the Gaussian’s continuous values into the discrete values needed for a kernel, the sum of the values will be different from 1. This will cause a darkening or brightening of the image. To remedy this, the values can be normalized by dividing each term in the kernel by the sum of all terms in the kernel.
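The normalization step can be expressed directly (a one-function sketch for a 2-D kernel stored as a list of rows):

```python
def normalize(kernel):
    """Scale kernel entries so they sum to 1, preserving image brightness."""
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```

Applied to any raw kernel of sampled Gaussian values, this guarantees the weighted average neither darkens nor brightens the image overall.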