Partial Derivative Calculator

Solve partial derivative problems step-by-step with our free calculator. See formulas, worked examples, and clear explanations.

Formula

df/dx = lim(h->0) [f(x+h,y) - f(x,y)] / h

The partial derivative with respect to x differentiates while holding y constant. For f(x,y) = ax^m*y^n + bx^p + cy^q + dxy + e, df/dx = amx^(m-1)y^n + bpx^(p-1) + dy. The gradient vector (df/dx, df/dy) points in the direction of steepest ascent.
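The limit definition above can be checked numerically with a central difference, which perturbs one variable while holding the other fixed. A minimal sketch, using the function from Example 1 below:

```python
# Numeric check of the limit definition of a partial derivative,
# using the Example 1 function f(x, y) = 3x^2*y + 2x^3 - y^2 + 4xy + 5.

def f(x, y):
    return 3 * x**2 * y + 2 * x**3 - y**2 + 4 * x * y + 5

def partial_x(f, x, y, h=1e-6):
    # Central difference in x: y is held constant, as the definition requires.
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central difference in y: x is held constant.
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

print(partial_x(f, 2, 1))  # exact value from the formula: 6xy + 6x^2 + 4y = 40
print(partial_y(f, 2, 1))  # exact value from the formula: 3x^2 - 2y + 4x = 18
```

The central difference is used instead of the one-sided quotient from the definition because its error shrinks quadratically in h, so a modest step size already matches the exact answers to several decimal places.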

Worked Examples

Example 1: Surface Analysis and Gradient

Problem: For f(x,y) = 3x^2*y + 2x^3 - y^2 + 4xy + 5, find partial derivatives and gradient at (2, 1).

Solution:
f(2,1) = 3(4)(1) + 2(8) - 1 + 4(2)(1) + 5 = 12 + 16 - 1 + 8 + 5 = 40
df/dx = 6xy + 6x^2 + 4y = 6(2)(1) + 6(4) + 4(1) = 12 + 24 + 4 = 40
df/dy = 3x^2 - 2y + 4x = 3(4) - 2(1) + 4(2) = 12 - 2 + 8 = 18
Gradient = (40, 18), magnitude = sqrt(1600 + 324) = sqrt(1924) = 43.86
Direction = arctan(18/40) = 24.23 degrees
Laplacian = d2f/dx2 + d2f/dy2 = (6y + 12x) + (-2) = 6 + 24 - 2 = 28

Result: Gradient: (40, 18) | |grad| = 43.86 | Steepest ascent at 24.23 deg | Laplacian = 28
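The magnitude, direction, and Laplacian figures in this result can be reproduced in a few lines, taking the gradient components (40, 18) from the solution as given:

```python
import math

# Reproduce the Example 1 summary numbers from the gradient (40, 18) at (2, 1).
fx, fy = 40, 18

magnitude = math.hypot(fx, fy)                 # sqrt(40^2 + 18^2) = sqrt(1924)
direction = math.degrees(math.atan2(fy, fx))   # angle of steepest ascent
laplacian = (6 * 1 + 12 * 2) + (-2)            # d2f/dx2 + d2f/dy2 at (2, 1)

print(round(magnitude, 2))  # 43.86
print(round(direction, 2))  # 24.23
print(laplacian)            # 28
```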

Example 2: Critical Point Classification

Problem: For f(x,y) = x^2 - y^2 (saddle surface), analyze the critical point at (0, 0).

Solution:
f(0,0) = 0
df/dx = 2x = 0, df/dy = -2y = 0: gradient is (0, 0), confirmed critical point
d2f/dx2 = 2, d2f/dy2 = -2, d2f/dxdy = 0
Hessian = [[2, 0], [0, -2]]
Hessian det = 2*(-2) - 0 = -4 < 0
Since det < 0: SADDLE POINT
The surface curves up in the x-direction and down in the y-direction

Result: Critical point at origin | Hessian det = -4 < 0 | Saddle point confirmed | Eigenvalues: 2, -2
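The second-derivative test used here is mechanical enough to sketch as a small helper, assuming the standard discriminant D = fxx*fyy - fxy^2:

```python
def classify_critical_point(fxx, fyy, fxy):
    # Second-derivative test: D = fxx*fyy - fxy^2 at the critical point.
    d = fxx * fyy - fxy**2
    if d < 0:
        return "saddle point"
    if d > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    return "inconclusive"  # D = 0: the test gives no information

# f(x, y) = x^2 - y^2 at (0, 0): fxx = 2, fyy = -2, fxy = 0.
print(classify_critical_point(2, -2, 0))  # saddle point

# For comparison, f(x, y) = x^2 + y^2 at (0, 0): fxx = 2, fyy = 2, fxy = 0.
print(classify_critical_point(2, 2, 0))   # local minimum
```

Note that for a diagonal Hessian like [[2, 0], [0, -2]], the eigenvalues are simply the diagonal entries, which is why the result lists them as 2 and -2.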

Frequently Asked Questions

What is a partial derivative and how is it different from a regular derivative?

A partial derivative measures the rate of change of a multivariable function with respect to one variable while holding all other variables constant. For f(x,y), the partial derivative with respect to x (written df/dx or fx) treats y as a constant and differentiates only with respect to x. This is different from the ordinary derivative of a single-variable function, which captures the total rate of change. Partial derivatives are the building blocks of multivariable calculus, appearing in gradient vectors, divergence, curl, and all the major theorems. They answer questions like: how does temperature change if you move only east (holding north position fixed)?

What is a directional derivative and how is it computed?

The directional derivative measures the rate of change of a function in any specified direction, not just along the coordinate axes. For f(x,y) in the direction of unit vector u = (cos theta, sin theta), the directional derivative is Duf = (df/dx)*cos(theta) + (df/dy)*sin(theta) = grad(f) dot u. This is the dot product of the gradient with the direction vector. The maximum directional derivative occurs in the gradient direction and equals |grad(f)|. The minimum occurs in the opposite direction and equals -|grad(f)|. In any direction perpendicular to the gradient, the directional derivative is zero, corresponding to movement along a level curve where the function value does not change.
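The dot-product formula above can be verified with the Example 1 gradient (40, 18): along the gradient direction the directional derivative equals the gradient magnitude, and perpendicular to it the derivative vanishes.

```python
import math

# Directional derivative Duf = fx*cos(theta) + fy*sin(theta),
# evaluated for the Example 1 gradient (40, 18).
def directional_derivative(fx, fy, theta_deg):
    t = math.radians(theta_deg)
    return fx * math.cos(t) + fy * math.sin(t)

grad = (40, 18)
steepest = math.degrees(math.atan2(grad[1], grad[0]))  # direction of grad(f)

# Along the gradient: equals |grad(f)| = sqrt(1924) = 43.86.
print(round(directional_derivative(*grad, steepest), 2))       # 43.86

# Perpendicular to the gradient: movement along a level curve, derivative is 0.
print(abs(round(directional_derivative(*grad, steepest + 90), 6)))  # 0.0
```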

What is the tangent plane to a surface and how is it related to partial derivatives?

The tangent plane to the surface z = f(x,y) at a point (x0, y0, f(x0,y0)) is the best linear approximation to the surface near that point. Its equation is z = f(x0,y0) + fx(x0,y0)*(x-x0) + fy(x0,y0)*(y-y0), where fx and fy are partial derivatives evaluated at the point. This is the multivariable generalization of the tangent line in single-variable calculus. The tangent plane contains both tangent lines obtained by slicing the surface with planes parallel to the xz and yz planes. Its normal vector is (-fx, -fy, 1), which is proportional to the gradient of g(x,y,z) = f(x,y) - z. Tangent planes are used for linearization, error estimation, and constructing differential approximations.
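As a concrete check, the tangent plane to the Example 1 surface at (2, 1) is z = 40 + 40(x-2) + 18(y-1), and near the point of tangency it tracks the surface closely:

```python
# Tangent-plane linearization of the Example 1 surface at (2, 1),
# where f(2,1) = 40, fx(2,1) = 40, fy(2,1) = 18.

def f(x, y):
    return 3 * x**2 * y + 2 * x**3 - y**2 + 4 * x * y + 5

def tangent_plane(x, y):
    # z = f(x0,y0) + fx(x0,y0)*(x-x0) + fy(x0,y0)*(y-y0) at (x0, y0) = (2, 1)
    return 40 + 40 * (x - 2) + 18 * (y - 1)

# At the point of tangency the two agree exactly; nearby, the linear
# approximation is off only by higher-order terms.
print(f(2, 1), tangent_plane(2, 1))        # 40 40.0 (exact agreement)
print(abs(f(2.01, 1.01) - tangent_plane(2.01, 1.01)) < 0.01)  # True
```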

How do partial derivatives appear in optimization and machine learning?

Partial derivatives are the computational engine behind modern optimization algorithms used in machine learning. Gradient descent updates parameters by subtracting a step proportional to the gradient: theta_new = theta_old - learning_rate * grad(Loss). For neural networks with millions of parameters, backpropagation efficiently computes partial derivatives of the loss function with respect to every weight using the chain rule. Second-order methods like Newton's method use the Hessian matrix for faster convergence but require computing and inverting the Hessian, which is expensive for high-dimensional problems. Stochastic gradient descent, Adam, and RMSprop are practical variants that approximate the gradient using mini-batches of data.
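The gradient-descent update rule above can be sketched in a few lines. This is a hypothetical toy example on the convex loss L(x, y) = x^2 + y^2, not tied to any particular ML framework:

```python
# Toy gradient descent on L(x, y) = x^2 + y^2, whose partial derivatives
# are dL/dx = 2x and dL/dy = 2y, with minimum at (0, 0).

def grad_loss(x, y):
    return (2 * x, 2 * y)

x, y = 3.0, -4.0       # arbitrary starting point
learning_rate = 0.1

for _ in range(100):
    gx, gy = grad_loss(x, y)
    x -= learning_rate * gx   # theta_new = theta_old - learning_rate * grad(Loss)
    y -= learning_rate * gy

print(x, y)  # both coordinates converge toward the minimum at (0, 0)
```

Each step shrinks the coordinates by a constant factor (here 0.8), so the iterates decay geometrically toward the minimizer; in practice the learning rate must be chosen small enough that this contraction holds.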

What is the chain rule for partial derivatives?

The multivariable chain rule extends the single-variable chain rule to compositions involving multiple variables. If z = f(x,y) where x = g(t) and y = h(t), then dz/dt = (df/dx)(dx/dt) + (df/dy)(dy/dt). For functions of multiple intermediate variables, the rule generalizes: if z = f(x,y) and x = g(s,t), y = h(s,t), then dz/ds = (df/dx)(dx/ds) + (df/dy)(dy/ds). This is often visualized using a tree diagram showing dependencies between variables. The chain rule is essential for implicit differentiation, coordinate transformations (Cartesian to polar or spherical), and backpropagation in neural networks where compositions of many functions are differentiated layer by layer.
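The single-parameter form dz/dt = (df/dx)(dx/dt) + (df/dy)(dy/dt) can be verified against a direct numeric derivative. A sketch using the hypothetical composition z = x^2*y with x = cos(t), y = sin(t):

```python
import math

# Chain rule for z = f(x, y) = x^2 * y with x = cos(t), y = sin(t):
# dz/dt = (df/dx)(dx/dt) + (df/dy)(dy/dt) = (2xy)(-sin t) + (x^2)(cos t).

def dz_dt_chain(t):
    x, y = math.cos(t), math.sin(t)
    return (2 * x * y) * (-math.sin(t)) + (x**2) * math.cos(t)

def dz_dt_numeric(t, h=1e-6):
    # Direct central difference on the composed function z(t) = cos^2(t)*sin(t).
    z = lambda s: math.cos(s)**2 * math.sin(s)
    return (z(t + h) - z(t - h)) / (2 * h)

t = 0.7
print(round(dz_dt_chain(t), 6), round(dz_dt_numeric(t), 6))  # the two agree
```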

Is my data stored or sent to a server?

No. All calculations run entirely in your browser using JavaScript. No data you enter is ever transmitted to any server or stored anywhere. Your inputs remain completely private.
