Convolution Calculator
Our free sequence calculator solves discrete convolution problems. Get worked examples, visual aids, and downloadable results.
Formula
(a*b)[n] = Σ a[k] × b[n-k]  (sum over all valid k)
For each output position n, the convolution sums the products of overlapping elements from sequence a and the reversed, shifted sequence b. The output length equals len(a) + len(b) - 1. This operation is equivalent to polynomial multiplication when sequences represent polynomial coefficients.
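The definition above translates directly into code. Below is a minimal pure-Python sketch (the function name `convolve` is illustrative, not from any library):

```python
# Direct discrete convolution from the definition:
# y[n] = sum over k of a[k] * b[n-k], wherever both indices are valid.

def convolve(a, b):
    n_out = len(a) + len(b) - 1          # output length = len(a) + len(b) - 1
    y = [0.0] * n_out
    for n in range(n_out):
        for k in range(len(a)):
            if 0 <= n - k < len(b):      # keep only overlapping terms
                y[n] += a[k] * b[n - k]
    return y

print(convolve([1, 2, 3], [1, 1]))       # [1.0, 3.0, 5.0, 3.0]
```

This runs in O(len(a) × len(b)) time; for long signals, FFT-based methods (discussed below) are much faster.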
Worked Examples
Example 1: Moving Average Filter
Problem: Convolve the signal [2, 4, 6, 8, 10] with the averaging kernel [1/3, 1/3, 1/3] to smooth the signal.
Solution: Signal a = [2, 4, 6, 8, 10], Kernel b = [1/3, 1/3, 1/3]
Output length = 5 + 3 - 1 = 7
y[0] = 2 × 1/3 ≈ 0.667
y[1] = 2 × 1/3 + 4 × 1/3 = 2.000
y[2] = 2 × 1/3 + 4 × 1/3 + 6 × 1/3 = 4.000
y[3] = 4 × 1/3 + 6 × 1/3 + 8 × 1/3 = 6.000
y[4] = 6 × 1/3 + 8 × 1/3 + 10 × 1/3 = 8.000
y[5] = 8 × 1/3 + 10 × 1/3 = 6.000
y[6] = 10 × 1/3 ≈ 3.333
Result: Output: [0.667, 2.000, 4.000, 6.000, 8.000, 6.000, 3.333]
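Example 1 can be checked in a few lines with NumPy (assuming NumPy is installed):

```python
# Reproducing Example 1 with NumPy's convolve.
import numpy as np

signal = np.array([2, 4, 6, 8, 10], dtype=float)
kernel = np.ones(3) / 3                  # averaging kernel [1/3, 1/3, 1/3]
smoothed = np.convolve(signal, kernel)   # default 'full' mode: length 5 + 3 - 1 = 7
print(np.round(smoothed, 3))             # ≈ [0.667, 2, 4, 6, 8, 6, 3.333]
```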
Example 2: Polynomial Multiplication
Problem: Multiply (1 + 2x + 3x^2) by (1 + x) using convolution of coefficients [1,2,3] and [1,1].
Solution: a = [1, 2, 3] (coefficients of 1 + 2x + 3x^2)
b = [1, 1] (coefficients of 1 + x)
Output length = 3 + 2 - 1 = 4
y[0] = 1 × 1 = 1
y[1] = 1 × 1 + 2 × 1 = 3
y[2] = 2 × 1 + 3 × 1 = 5
y[3] = 3 × 1 = 3
Result polynomial: 1 + 3x + 5x^2 + 3x^3
Result: Convolution: [1, 3, 5, 3] = polynomial 1 + 3x + 5x^2 + 3x^3
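Example 2 can be verified two ways with NumPy (an assumption, not required by the math): once with `np.convolve` and once with `np.polymul`, which multiplies polynomials directly:

```python
# Example 2 checked via convolution and via polynomial multiplication.
import numpy as np

a = [1, 2, 3]                          # coefficients of 1 + 2x + 3x^2
b = [1, 1]                             # coefficients of 1 + x
conv = np.convolve(a, b)
print(conv.tolist())                   # [1, 3, 5, 3]

# np.polymul expects highest-degree coefficient first, so reverse both inputs
poly = np.polymul(a[::-1], b[::-1])[::-1]
print(poly.tolist())                   # [1, 3, 5, 3]
```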
Frequently Asked Questions
What is convolution and what does it represent mathematically?
Convolution is a mathematical operation that combines two sequences (or functions) to produce a third sequence expressing how the shape of one is modified by the other. For discrete sequences, the convolution of a[n] and b[n] is defined as (a*b)[n] = Σ a[k] × b[n-k] over all valid k. Intuitively, convolution slides one sequence across the other, multiplying overlapping elements and summing the products at each position. It measures the overlap between one function and a reversed, shifted copy of another. Convolution is commutative (a*b = b*a), associative, and distributive over addition, making it algebraically well-behaved.
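The three algebraic properties can be checked numerically in a few lines (this sketch assumes NumPy; the specific sequences are arbitrary):

```python
# Numerically verifying commutativity, associativity, and distributivity.
import numpy as np

a = np.array([1., 2.])
b = np.array([3., 4., 5.])
c = np.array([6., 7., 8.])                      # same length as b so b + c is defined

assert np.allclose(np.convolve(a, b), np.convolve(b, a))              # commutative
assert np.allclose(np.convolve(np.convolve(a, b), c),
                   np.convolve(a, np.convolve(b, c)))                 # associative
assert np.allclose(np.convolve(a, b + c),
                   np.convolve(a, b) + np.convolve(a, c))             # distributive
```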
How is discrete convolution computed step by step?
Discrete convolution follows a systematic slide-multiply-sum process. Given sequences a = [a0, a1, ..., am] (length m+1) and b = [b0, b1, ..., bn] (length n+1), the output has length m+n+1, i.e. len(a) + len(b) - 1. For each output index k, reverse sequence b, shift it by k positions, multiply element-wise with a where the two overlap, and sum the products. At position k=0, only a[0]*b[0] contributes. At position k=1, both a[0]*b[1] and a[1]*b[0] contribute. This continues until the final position, where only the last elements overlap. The process is equivalent to polynomial multiplication when the sequences represent polynomial coefficients.
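The slide-multiply-sum procedure can be written out literally: reverse b once, then for each output index slide the reversed copy and accumulate the overlapping products. A small illustrative sketch (function name is made up for this example):

```python
# Convolution via the explicit reverse-shift-multiply-sum view.

def convolve_by_sliding(a, b):
    b_rev = b[::-1]                              # reverse b once up front
    out = []
    for k in range(len(a) + len(b) - 1):
        total = 0
        for i, bv in enumerate(b_rev):
            j = k - len(b) + 1 + i               # index into a after shifting by k
            if 0 <= j < len(a):                  # only where the sequences overlap
                total += a[j] * bv
        out.append(total)
    return out

print(convolve_by_sliding([2, 4, 6], [1, 1]))    # [2, 6, 10, 6]
```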
What is the relationship between convolution and polynomial multiplication?
Convolution of two sequences is mathematically identical to multiplying two polynomials whose coefficients are those sequences. If a = [1, 2, 3] represents the polynomial 1 + 2x + 3x^2, and b = [1, 1] represents 1 + x, then their convolution [1, 3, 5, 3] represents the product polynomial 1 + 3x + 5x^2 + 3x^3. This connection makes convolution fundamental in algebra and computer science. The Fast Fourier Transform (FFT) exploits this relationship by converting sequences to frequency domain representations where convolution becomes simple element-wise multiplication, reducing complexity from O(n^2) to O(n log n).
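The FFT route mentioned above can be sketched in a few lines (assuming NumPy): zero-pad both sequences to the full output length, transform, multiply element-wise, and transform back.

```python
# FFT-based convolution: pad, transform, multiply, invert.
import numpy as np

def fft_convolve(a, b):
    n = len(a) + len(b) - 1               # full convolution length
    A = np.fft.rfft(a, n)                 # rfft zero-pads to length n
    B = np.fft.rfft(b, n)
    return np.fft.irfft(A * B, n)         # element-wise product back to time domain

print(np.round(fft_convolve([1, 2, 3], [1, 1])))     # ≈ [1, 3, 5, 3]
```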
How is convolution used in signal processing?
In signal processing, convolution is the fundamental operation for applying filters to signals. When a digital signal passes through a Linear Time-Invariant (LTI) system, the output equals the convolution of the input signal with the system impulse response. Low-pass filters smooth signals by convolving with averaging kernels like [1,1,1]/3. High-pass filters detect edges by convolving with difference kernels like [-1,2,-1]. Audio effects like reverb simulate room acoustics by convolving dry audio with a recorded room impulse response. Equalization, noise reduction, and echo cancellation all rely on convolution operations.
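The low-pass and high-pass kernels mentioned above can be tried on a simple step signal (assuming NumPy; `mode='same'` keeps the output the same length as the input, at the cost of boundary effects at both ends):

```python
# Low-pass (smoothing) vs. high-pass (difference) filtering of a step signal.
import numpy as np

signal = np.array([0., 0., 0., 1., 1., 1.])                 # a step edge

smooth = np.convolve(signal, np.ones(3) / 3, mode='same')   # averaging kernel
edges  = np.convolve(signal, [-1, 2, -1], mode='same')      # difference kernel

print(smooth)   # the sharp step is spread across neighboring samples
print(edges)    # responds mainly near the transition (plus boundary effects)
```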
What is the convolution theorem and why is it important?
The convolution theorem states that convolution in the time domain corresponds to multiplication in the frequency domain, and vice versa. Mathematically: FFT(a*b) = FFT(a) × FFT(b), where FFT is the Fast Fourier Transform and × denotes element-wise multiplication; for this identity to reproduce linear convolution, both sequences must be zero-padded to the full output length. This theorem is enormously important because direct convolution requires O(n^2) operations, while FFT-based convolution requires only O(n log n). For large signals the speedup is dramatic: convolving two signals of 1 million samples each would take roughly 10^12 operations directly but only about 4 × 10^7 operations using the FFT, a speedup factor of about 25,000.
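The identity can be verified numerically (assuming NumPy; the sequences are arbitrary examples):

```python
# Verifying FFT(a*b) == FFT(a) x FFT(b) with zero-padding to the full length.
import numpy as np

a = np.array([1., 2., 3., 4.])
b = np.array([5., 6., 7.])
n = len(a) + len(b) - 1                    # full convolution length

lhs = np.fft.fft(np.convolve(a, b))        # transform of the time-domain convolution
rhs = np.fft.fft(a, n) * np.fft.fft(b, n)  # element-wise product of padded spectra
print(np.allclose(lhs, rhs))               # True
```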
How does convolution work in image processing and convolutional neural networks?
In image processing, 2D convolution slides a small kernel (filter matrix) across an image, computing weighted sums at each position. Common kernels include blur (averaging nearby pixels), sharpen (emphasizing center pixel), edge detection (Sobel, Prewitt operators), and emboss filters. Convolutional Neural Networks (CNNs) learn optimal kernel values from training data rather than using hand-designed filters. A CNN might have dozens of layers, each with multiple learned kernels that detect progressively more abstract features, from edges and textures in early layers to complex shapes and objects in deeper layers. This hierarchical feature learning makes CNNs exceptionally powerful for image recognition.
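The 2D sliding-kernel operation can be sketched in pure Python for clarity (real code would use SciPy, OpenCV, or a deep-learning framework; the function name and test values here are illustrative):

```python
# Minimal 2D 'valid'-mode convolution: the kernel only visits positions
# where it fits entirely inside the image.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    # flip the kernel in both axes (true convolution; CNN libraries
    # usually skip the flip and compute cross-correlation instead)
    k = [row[::-1] for row in kernel[::-1]]
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * k[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# 3x3 box-sum kernel (an unnormalized blur) over a 4x4 patch
box = [[1, 1, 1]] * 3
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(conv2d_valid(patch, box))       # [[27, 54], [27, 54]]
```

Dividing the box-sum result by 9 would give the averaging blur; a CNN layer applies the same sliding computation but learns the kernel entries during training.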