{
|
|
"cells": [
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "Ic1reB4s6vu1"
|
|
},
|
|
"source": [
|
|
"# The Autodiff Cookbook\n",
|
|
"\n",
|
|
"<!--* freshness: { reviewed: '2024-04-08' } *-->\n",
|
|
"\n",
|
|
"[](https://colab.research.google.com/github/google/jax/blob/main/docs/notebooks/autodiff_cookbook.ipynb) [](https://kaggle.com/kernels/welcome?src=https://github.com/google/jax/blob/main/docs/notebooks/autodiff_cookbook.ipynb)\n",
|
|
"\n",
|
|
"*alexbw@, mattjj@* \n",
|
|
"\n",
|
|
"JAX has a pretty general automatic differentiation system. In this notebook, we'll go through a whole bunch of neat autodiff ideas that you can cherry pick for your own work, starting with the basics."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"metadata": {
|
|
"id": "JTYyZkSO6vuy"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"import jax.numpy as jnp\n",
|
|
"from jax import grad, jit, vmap\n",
|
|
"from jax import random\n",
|
|
"\n",
|
|
"key = random.key(0)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "YxnjtAGN6vu2"
|
|
},
|
|
"source": [
|
|
"## Gradients"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "zqwpfr2vAsvt"
|
|
},
|
|
"source": [
|
|
"### Starting with `grad`\n",
|
|
"\n",
|
|
"You can differentiate a function with `grad`:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 2,
|
|
"metadata": {
|
|
"id": "0NLO4Wfknzmk",
|
|
"outputId": "ec6f5fe3-3d90-4ec9-a405-f3191b6099da"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"0.070650935\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"grad_tanh = grad(jnp.tanh)\n",
|
|
"print(grad_tanh(2.0))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "LGcNfDntoBZI"
|
|
},
|
|
"source": [
|
|
"`grad` takes a function and returns a function. If you have a Python function `f` that evaluates the mathematical function $f$, then `grad(f)` is a Python function that evaluates the mathematical function $\\nabla f$. That means `grad(f)(x)` represents the value $\\nabla f(x)$.\n",
|
|
"\n",
|
|
"Since `grad` operates on functions, you can apply it to its own output to differentiate as many times as you like:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 3,
|
|
"metadata": {
|
|
"id": "RDGk1GDsoawu",
|
|
"outputId": "157bab60-52a8-4ca9-a298-b57561b30032"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"-0.13621889\n",
|
|
"0.2526544\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"print(grad(grad(jnp.tanh))(2.0))\n",
|
|
"print(grad(grad(grad(jnp.tanh)))(2.0))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "2rcnpTiinqi8"
|
|
},
|
|
"source": [
|
|
"Let's look at computing gradients with `grad` in a linear logistic regression model. First, the setup:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 4,
|
|
"metadata": {
|
|
"id": "27TcOT2i6vu5"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"def sigmoid(x):\n",
|
|
" return 0.5 * (jnp.tanh(x / 2) + 1)\n",
|
|
"\n",
|
|
"# Outputs probability of a label being true.\n",
|
|
"def predict(W, b, inputs):\n",
|
|
" return sigmoid(jnp.dot(inputs, W) + b)\n",
|
|
"\n",
|
|
"# Build a toy dataset.\n",
|
|
"inputs = jnp.array([[0.52, 1.12, 0.77],\n",
|
|
" [0.88, -1.08, 0.15],\n",
|
|
" [0.52, 0.06, -1.30],\n",
|
|
" [0.74, -2.49, 1.39]])\n",
|
|
"targets = jnp.array([True, True, False, True])\n",
|
|
"\n",
|
|
"# Training loss is the negative log-likelihood of the training examples.\n",
|
|
"def loss(W, b):\n",
|
|
" preds = predict(W, b, inputs)\n",
|
|
" label_probs = preds * targets + (1 - preds) * (1 - targets)\n",
|
|
" return -jnp.sum(jnp.log(label_probs))\n",
|
|
"\n",
|
|
"# Initialize random model coefficients\n",
|
|
"key, W_key, b_key = random.split(key, 3)\n",
|
|
"W = random.normal(W_key, (3,))\n",
|
|
"b = random.normal(b_key, ())"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "8Wk-Yai7ooh1"
|
|
},
|
|
"source": [
|
|
"Use the `grad` function with its `argnums` argument to differentiate a function with respect to positional arguments."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 5,
|
|
"metadata": {
|
|
"id": "bpmd8W8-6vu6",
|
|
"outputId": "5faafcc6-e9c5-4a2d-fc35-5c23e0be2d6d"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"W_grad [-0.16965576 -0.8774645 -1.4901344 ]\n",
|
|
"W_grad [-0.16965576 -0.8774645 -1.4901344 ]\n",
|
|
"b_grad -0.29227236\n",
|
|
"W_grad [-0.16965576 -0.8774645 -1.4901344 ]\n",
|
|
"b_grad -0.29227236\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Differentiate `loss` with respect to the first positional argument:\n",
|
|
"W_grad = grad(loss, argnums=0)(W, b)\n",
|
|
"print('W_grad', W_grad)\n",
|
|
"\n",
|
|
"# Since argnums=0 is the default, this does the same thing:\n",
|
|
"W_grad = grad(loss)(W, b)\n",
|
|
"print('W_grad', W_grad)\n",
|
|
"\n",
|
|
"# But we can choose different values too, and drop the keyword:\n",
|
|
"b_grad = grad(loss, 1)(W, b)\n",
|
|
"print('b_grad', b_grad)\n",
|
|
"\n",
|
|
"# Including tuple values\n",
|
|
"W_grad, b_grad = grad(loss, (0, 1))(W, b)\n",
|
|
"print('W_grad', W_grad)\n",
|
|
"print('b_grad', b_grad)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "MDl5UZl4oyzB"
|
|
},
|
|
"source": [
|
|
"This `grad` API has a direct correspondence to the excellent notation in Spivak's classic *Calculus on Manifolds* (1965), also used in Sussman and Wisdom's [*Structure and Interpretation of Classical Mechanics*](https://mitpress.mit.edu/9780262028967/structure-and-interpretation-of-classical-mechanics) (2015) and their [*Functional Differential Geometry*](https://mitpress.mit.edu/9780262019347/functional-differential-geometry) (2013). Both books are open-access. See in particular the \"Prologue\" section of *Functional Differential Geometry* for a defense of this notation.\n",
|
|
"\n",
|
|
"Essentially, when using the `argnums` argument, if `f` is a Python function for evaluating the mathematical function $f$, then the Python expression `grad(f, i)` evaluates to a Python function for evaluating $\\partial_i f$."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "fuz9E2vzro5E"
|
|
},
|
|
"source": [
|
|
"### Differentiating with respect to nested lists, tuples, and dicts"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "QQaPja7puMKi"
|
|
},
|
|
"source": [
|
|
"Differentiating with respect to standard Python containers just works, so use tuples, lists, and dicts (and arbitrary nesting) however you like."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 6,
|
|
"metadata": {
|
|
"id": "IY82kdAe6vu_",
|
|
"outputId": "d4004d2d-97ed-4ddc-bb1d-fc4af21edd23"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"{'W': Array([-0.16965576, -0.8774645 , -1.4901344 ], dtype=float32), 'b': Array(-0.29227236, dtype=float32)}\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"def loss2(params_dict):\n",
|
|
" preds = predict(params_dict['W'], params_dict['b'], inputs)\n",
|
|
" label_probs = preds * targets + (1 - preds) * (1 - targets)\n",
|
|
" return -jnp.sum(jnp.log(label_probs))\n",
|
|
"\n",
|
|
"print(grad(loss2)({'W': W, 'b': b}))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "cJ2NxiN58bfI"
|
|
},
|
|
"source": [
|
|
"You can [register your own container types](https://github.com/google/jax/issues/446#issuecomment-467105048) to work with not just `grad` but all the JAX transformations (`jit`, `vmap`, etc.)."
|
|
]
|
|
},
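{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of what that registration can look like (assuming a hypothetical `Point` container; the class and helper names here are made up for illustration, and the full API lives in `jax.tree_util`):\n",
"\n",
"```python\n",
"from jax import tree_util\n",
"\n",
"class Point:\n",
"    def __init__(self, x, y):\n",
"        self.x, self.y = x, y\n",
"\n",
"def flatten_point(p):\n",
"    # children JAX should trace and differentiate, plus static aux data\n",
"    return (p.x, p.y), None\n",
"\n",
"def unflatten_point(aux, children):\n",
"    return Point(*children)\n",
"\n",
"tree_util.register_pytree_node(Point, flatten_point, unflatten_point)\n",
"\n",
"def dist_from_origin(p):\n",
"    return jnp.sqrt(p.x**2 + p.y**2)\n",
"\n",
"g = grad(dist_from_origin)(Point(3., 4.))\n",
"print(g.x, g.y)  # 0.6 0.8\n",
"```\n",
"\n",
"Once registered, `Point` values flow through `grad`, `jit`, and `vmap` just like the built-in containers above."
]
},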
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "PaCHzAtGruBz"
|
|
},
|
|
"source": [
|
|
"### Evaluate a function and its gradient using `value_and_grad`"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "CSgCjjo-ssnA"
|
|
},
|
|
"source": [
|
|
"Another convenient function is `value_and_grad` for efficiently computing both a function's value as well as its gradient's value:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 7,
|
|
"metadata": {
|
|
"id": "RsQSyT5p7OJW",
|
|
"outputId": "c2502c2e-091e-4e9c-ca20-1e0c2670fd7a"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"loss value 3.051939\n",
|
|
"loss value 3.051939\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from jax import value_and_grad\n",
|
|
"loss_value, Wb_grad = value_and_grad(loss, (0, 1))(W, b)\n",
|
|
"print('loss value', loss_value)\n",
|
|
"print('loss value', loss(W, b))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "rYTrH5tKllC_"
|
|
},
|
|
"source": [
|
|
"### Checking against numerical differences\n",
|
|
"\n",
|
|
"A great thing about derivatives is that they're straightforward to check with finite differences:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 8,
|
|
"metadata": {
|
|
"id": "R8q5RiY3l7Fw",
|
|
"outputId": "4f2ccbe9-da9f-438e-9f3e-ad03e3c3e247"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"b_grad_numerical -0.29563904\n",
|
|
"b_grad_autodiff -0.29227236\n",
|
|
"W_dirderiv_numerical -0.19788742\n",
|
|
"W_dirderiv_autodiff -0.19909099\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Set a step size for finite differences calculations\n",
|
|
"eps = 1e-4\n",
|
|
"\n",
|
|
"# Check b_grad with scalar finite differences\n",
|
|
"b_grad_numerical = (loss(W, b + eps / 2.) - loss(W, b - eps / 2.)) / eps\n",
|
|
"print('b_grad_numerical', b_grad_numerical)\n",
|
|
"print('b_grad_autodiff', grad(loss, 1)(W, b))\n",
|
|
"\n",
|
|
"# Check W_grad with finite differences in a random direction\n",
|
|
"key, subkey = random.split(key)\n",
|
|
"vec = random.normal(subkey, W.shape)\n",
|
|
"unitvec = vec / jnp.sqrt(jnp.vdot(vec, vec))\n",
|
|
"W_grad_numerical = (loss(W + eps / 2. * unitvec, b) - loss(W - eps / 2. * unitvec, b)) / eps\n",
|
|
"print('W_dirderiv_numerical', W_grad_numerical)\n",
|
|
"print('W_dirderiv_autodiff', jnp.vdot(grad(loss)(W, b), unitvec))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "InzB-iiJpVcx"
|
|
},
|
|
"source": [
|
|
"JAX provides a simple convenience function that does essentially the same thing, but checks up to any order of differentiation that you like:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 9,
|
|
"metadata": {
|
|
"id": "6Ok2LEfQmOuy"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"from jax.test_util import check_grads\n",
|
|
"check_grads(loss, (W, b), order=2) # check up to 2nd order derivatives"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "id0DXxwt3VJi"
|
|
},
|
|
"source": [
|
|
"### Hessian-vector products with `grad`-of-`grad`\n",
|
|
"\n",
|
|
"One thing we can do with higher-order `grad` is build a Hessian-vector product function. (Later on we'll write an even more efficient implementation that mixes both forward- and reverse-mode, but this one will use pure reverse-mode.)\n",
|
|
"\n",
|
|
"A Hessian-vector product function can be useful in a [truncated Newton Conjugate-Gradient algorithm](https://en.wikipedia.org/wiki/Truncated_Newton_method) for minimizing smooth convex functions, or for studying the curvature of neural network training objectives (e.g. [1](https://arxiv.org/abs/1406.2572), [2](https://arxiv.org/abs/1811.07062), [3](https://arxiv.org/abs/1706.04454), [4](https://arxiv.org/abs/1802.03451)).\n",
|
|
"\n",
|
|
"For a scalar-valued function $f : \\mathbb{R}^n \\to \\mathbb{R}$ with continuous second derivatives (so that the Hessian matrix is symmetric), the Hessian at a point $x \\in \\mathbb{R}^n$ is written as $\\partial^2 f(x)$. A Hessian-vector product function is then able to evaluate\n",
|
|
"\n",
|
|
"$\\qquad v \\mapsto \\partial^2 f(x) \\cdot v$\n",
|
|
"\n",
|
|
"for any $v \\in \\mathbb{R}^n$.\n",
|
|
"\n",
|
|
"The trick is not to instantiate the full Hessian matrix: if $n$ is large, perhaps in the millions or billions in the context of neural networks, then that might be impossible to store.\n",
|
|
"\n",
|
|
"Luckily, `grad` already gives us a way to write an efficient Hessian-vector product function. We just have to use the identity\n",
|
|
"\n",
|
|
"$\\qquad \\partial^2 f (x) v = \\partial [x \\mapsto \\partial f(x) \\cdot v] = \\partial g(x)$,\n",
|
|
"\n",
|
|
"where $g(x) = \\partial f(x) \\cdot v$ is a new scalar-valued function that dots the gradient of $f$ at $x$ with the vector $v$. Notice that we're only ever differentiating scalar-valued functions of vector-valued arguments, which is exactly where we know `grad` is efficient.\n",
|
|
"\n",
|
|
"In JAX code, we can just write this:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 10,
|
|
"metadata": {
|
|
"id": "Ou5OU-gU9epm"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"def hvp(f, x, v):\n",
|
|
" return grad(lambda x: jnp.vdot(grad(f)(x), v))(x)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "Rb1-5Hpv-ZV0"
|
|
},
|
|
"source": [
|
|
"This example shows that you can freely use lexical closure, and JAX will never get perturbed or confused.\n",
|
|
"\n",
|
|
"We'll check this implementation a few cells down, once we see how to compute dense Hessian matrices. We'll also write an even better version that uses both forward-mode and reverse-mode."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "5A_akvtp8UTu"
|
|
},
|
|
"source": [
|
|
"### Jacobians and Hessians using `jacfwd` and `jacrev`"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "UP5BbmSm8ZwK"
|
|
},
|
|
"source": [
|
|
"You can compute full Jacobian matrices using the `jacfwd` and `jacrev` functions:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 11,
|
|
"metadata": {
|
|
"id": "cbETzAvKvf5I",
|
|
"outputId": "7c8d2361-cc68-4139-9f1f-afa2431b3cd2"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"jacfwd result, with shape (4, 3)\n",
|
|
"[[ 0.05981756 0.12883782 0.088576 ]\n",
|
|
" [ 0.04015914 -0.04928622 0.00684531]\n",
|
|
" [ 0.12188289 0.01406341 -0.3047072 ]\n",
|
|
" [ 0.00140428 -0.00472522 0.00263777]]\n",
|
|
"jacrev result, with shape (4, 3)\n",
|
|
"[[ 0.05981756 0.12883782 0.088576 ]\n",
|
|
" [ 0.04015914 -0.04928622 0.00684531]\n",
|
|
" [ 0.12188289 0.01406341 -0.3047072 ]\n",
|
|
" [ 0.00140428 -0.00472522 0.00263777]]\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from jax import jacfwd, jacrev\n",
|
|
"\n",
|
|
"# Isolate the function from the weight matrix to the predictions\n",
|
|
"f = lambda W: predict(W, b, inputs)\n",
|
|
"\n",
|
|
"J = jacfwd(f)(W)\n",
|
|
"print(\"jacfwd result, with shape\", J.shape)\n",
|
|
"print(J)\n",
|
|
"\n",
|
|
"J = jacrev(f)(W)\n",
|
|
"print(\"jacrev result, with shape\", J.shape)\n",
|
|
"print(J)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "iZDL-n_AvgBt"
|
|
},
|
|
"source": [
|
|
"These two functions compute the same values (up to machine numerics), but differ in their implementation: `jacfwd` uses forward-mode automatic differentiation, which is more efficient for \"tall\" Jacobian matrices, while `jacrev` uses reverse-mode, which is more efficient for \"wide\" Jacobian matrices. For matrices that are near-square, `jacfwd` probably has an edge over `jacrev`."
|
|
]
|
|
},
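{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see that trade-off concretely, here is a quick, unscientific check you could paste into a cell; the matrix `B`, the function names, and the shapes are made up for illustration, and exact timings depend on your backend:\n",
"\n",
"```python\n",
"B = random.normal(random.key(1), (1000, 10))\n",
"tall_f = lambda x: jnp.dot(B, x)    # R^10 -> R^1000: a \"tall\" Jacobian\n",
"wide_f = lambda x: jnp.dot(B.T, x)  # R^1000 -> R^10: a \"wide\" Jacobian\n",
"\n",
"x_small = jnp.ones(10)\n",
"x_big = jnp.ones(1000)\n",
"\n",
"# forward-mode pushes one basis vector per *input* dimension,\n",
"# reverse-mode pulls back one per *output* dimension\n",
"%timeit -n10 -r3 jacfwd(tall_f)(x_small).block_until_ready()\n",
"%timeit -n10 -r3 jacrev(tall_f)(x_small).block_until_ready()\n",
"%timeit -n10 -r3 jacrev(wide_f)(x_big).block_until_ready()\n",
"%timeit -n10 -r3 jacfwd(wide_f)(x_big).block_until_ready()\n",
"```\n",
"\n",
"The mode matched to the Jacobian's shape (the first and third lines) should generally come out ahead, though at these small sizes dispatch overhead can blur the difference."
]
},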
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "zeKlr7Xz8bfm"
|
|
},
|
|
"source": [
|
|
"You can also use `jacfwd` and `jacrev` with container types:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 12,
|
|
"metadata": {
|
|
"id": "eH46Xnm88bfm",
|
|
"outputId": "ab1f5dce-926e-40b9-9664-5bd5e628e0b5"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Jacobian from W to logits is\n",
|
|
"[[ 0.05981756 0.12883782 0.088576 ]\n",
|
|
" [ 0.04015914 -0.04928622 0.00684531]\n",
|
|
" [ 0.12188289 0.01406341 -0.3047072 ]\n",
|
|
" [ 0.00140428 -0.00472522 0.00263777]]\n",
|
|
"Jacobian from b to logits is\n",
|
|
"[0.11503378 0.04563539 0.23439017 0.00189768]\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"def predict_dict(params, inputs):\n",
|
|
" return predict(params['W'], params['b'], inputs)\n",
|
|
"\n",
|
|
"J_dict = jacrev(predict_dict)({'W': W, 'b': b}, inputs)\n",
|
|
"for k, v in J_dict.items():\n",
|
|
" print(\"Jacobian from {} to logits is\".format(k))\n",
|
|
" print(v)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "yH34zjV88bfp"
|
|
},
|
|
"source": [
|
|
"For more details on forward- and reverse-mode, as well as how to implement `jacfwd` and `jacrev` as efficiently as possible, read on!"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "K6Mpw_7K8bfp"
|
|
},
|
|
"source": [
|
|
"Using a composition of two of these functions gives us a way to compute dense Hessian matrices:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 13,
|
|
"metadata": {
|
|
"id": "n155ypD9rfIZ",
|
|
"outputId": "69622bc4-9a8d-47f1-aab6-40ab21d450d9"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"hessian, with shape (4, 3, 3)\n",
|
|
"[[[ 0.02285465 0.0492254 0.03384246]\n",
|
|
" [ 0.0492254 0.10602394 0.07289147]\n",
|
|
" [ 0.03384246 0.07289146 0.05011288]]\n",
|
|
"\n",
|
|
" [[-0.03195214 0.03921399 -0.00544639]\n",
|
|
" [ 0.03921399 -0.04812626 0.0066842 ]\n",
|
|
" [-0.00544639 0.0066842 -0.00092836]]\n",
|
|
"\n",
|
|
" [[-0.01583708 -0.00182736 0.03959271]\n",
|
|
" [-0.00182736 -0.00021085 0.00456839]\n",
|
|
" [ 0.03959271 0.00456839 -0.09898177]]\n",
|
|
"\n",
|
|
" [[-0.00103522 0.00348336 -0.00194453]\n",
|
|
" [ 0.00348336 -0.01172105 0.00654308]\n",
|
|
" [-0.00194453 0.00654308 -0.00365256]]]\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"def hessian(f):\n",
|
|
" return jacfwd(jacrev(f))\n",
|
|
"\n",
|
|
"H = hessian(f)(W)\n",
|
|
"print(\"hessian, with shape\", H.shape)\n",
|
|
"print(H)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "wvkk82R6uRoM"
|
|
},
|
|
"source": [
|
|
"This shape makes sense: if we start with a function $f : \\mathbb{R}^n \\to \\mathbb{R}^m$, then at a point $x \\in \\mathbb{R}^n$ we expect to get the shapes\n",
|
|
"\n",
|
|
"* $f(x) \\in \\mathbb{R}^m$, the value of $f$ at $x$,\n",
|
|
"* $\\partial f(x) \\in \\mathbb{R}^{m \\times n}$, the Jacobian matrix at $x$,\n",
|
|
"* $\\partial^2 f(x) \\in \\mathbb{R}^{m \\times n \\times n}$, the Hessian at $x$,\n",
|
|
"\n",
|
|
"and so on.\n",
|
|
"\n",
|
|
"To implement `hessian`, we could have used `jacfwd(jacrev(f))` or `jacrev(jacfwd(f))` or any other composition of the two. But forward-over-reverse is typically the most efficient. That's because in the inner Jacobian computation we're often differentiating a function wide Jacobian (maybe like a loss function $f : \\mathbb{R}^n \\to \\mathbb{R}$), while in the outer Jacobian computation we're differentiating a function with a square Jacobian (since $\\nabla f : \\mathbb{R}^n \\to \\mathbb{R}^n$), which is where forward-mode wins out."
|
|
]
|
|
},
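{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of that claim (a minimal sketch; the helper names below are just local definitions), the two composition orders agree up to floating-point error on the `f` and `W` from above, with forward-over-reverse typically being the cheaper one:\n",
"\n",
"```python\n",
"def hessian_fwd_over_rev(f):\n",
"    return jacfwd(jacrev(f))\n",
"\n",
"def hessian_rev_over_fwd(f):\n",
"    return jacrev(jacfwd(f))\n",
"\n",
"# Both produce the same (4, 3, 3) array of second derivatives.\n",
"print(jnp.allclose(hessian_fwd_over_rev(f)(W),\n",
"                   hessian_rev_over_fwd(f)(W),\n",
"                   atol=1e-5))\n",
"```"
]
},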
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "OMmi9cyhs1bj"
|
|
},
|
|
"source": [
|
|
"## How it's made: two foundational autodiff functions"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "mtSRvouV6vvG"
|
|
},
|
|
"source": [
|
|
"(jacobian-vector-product)=\n",
|
|
"\n",
|
|
"### Jacobian-Vector products (JVPs, aka forward-mode autodiff)\n",
|
|
"\n",
|
|
"JAX includes efficient and general implementations of both forward- and reverse-mode automatic differentiation. The familiar `grad` function is built on reverse-mode, but to explain the difference in the two modes, and when each can be useful, we need a bit of math background.\n",
|
|
"\n",
|
|
"#### JVPs in math\n",
|
|
"\n",
|
|
"Mathematically, given a function $f : \\mathbb{R}^n \\to \\mathbb{R}^m$, the Jacobian of $f$ evaluated at an input point $x \\in \\mathbb{R}^n$, denoted $\\partial f(x)$, is often thought of as a matrix in $\\mathbb{R}^m \\times \\mathbb{R}^n$:\n",
|
|
"\n",
|
|
"$\\qquad \\partial f(x) \\in \\mathbb{R}^{m \\times n}$.\n",
|
|
"\n",
|
|
"But we can also think of $\\partial f(x)$ as a linear map, which maps the tangent space of the domain of $f$ at the point $x$ (which is just another copy of $\\mathbb{R}^n$) to the tangent space of the codomain of $f$ at the point $f(x)$ (a copy of $\\mathbb{R}^m$):\n",
|
|
"\n",
|
|
"$\\qquad \\partial f(x) : \\mathbb{R}^n \\to \\mathbb{R}^m$.\n",
|
|
"\n",
|
|
"This map is called the [pushforward map](https://en.wikipedia.org/wiki/Pushforward_(differential)) of $f$ at $x$. The Jacobian matrix is just the matrix for this linear map in a standard basis.\n",
|
|
"\n",
|
|
"If we don't commit to one specific input point $x$, then we can think of the function $\\partial f$ as first taking an input point and returning the Jacobian linear map at that input point:\n",
|
|
"\n",
|
|
"$\\qquad \\partial f : \\mathbb{R}^n \\to \\mathbb{R}^n \\to \\mathbb{R}^m$.\n",
|
|
"\n",
|
|
"In particular, we can uncurry things so that given input point $x \\in \\mathbb{R}^n$ and a tangent vector $v \\in \\mathbb{R}^n$, we get back an output tangent vector in $\\mathbb{R}^m$. We call that mapping, from $(x, v)$ pairs to output tangent vectors, the *Jacobian-vector product*, and write it as\n",
|
|
"\n",
|
|
"$\\qquad (x, v) \\mapsto \\partial f(x) v$\n",
|
|
"\n",
|
|
"#### JVPs in JAX code\n",
|
|
"\n",
|
|
"Back in Python code, JAX's `jvp` function models this transformation. Given a Python function that evaluates $f$, JAX's `jvp` is a way to get a Python function for evaluating $(x, v) \\mapsto (f(x), \\partial f(x) v)$."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 14,
|
|
"metadata": {
|
|
"id": "pTncYR6F6vvG"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"from jax import jvp\n",
|
|
"\n",
|
|
"# Isolate the function from the weight matrix to the predictions\n",
|
|
"f = lambda W: predict(W, b, inputs)\n",
|
|
"\n",
|
|
"key, subkey = random.split(key)\n",
|
|
"v = random.normal(subkey, W.shape)\n",
|
|
"\n",
|
|
"# Push forward the vector `v` along `f` evaluated at `W`\n",
|
|
"y, u = jvp(f, (W,), (v,))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "m1VJgJYQGfCK"
|
|
},
|
|
"source": [
|
|
"In terms of [Haskell-like type signatures](https://wiki.haskell.org/Type_signature),\n",
|
|
"we could write\n",
|
|
"\n",
|
|
"```haskell\n",
|
|
"jvp :: (a -> b) -> a -> T a -> (b, T b)\n",
|
|
"```\n",
|
|
"\n",
|
|
"where we use `T a` to denote the type of the tangent space for `a`. In words, `jvp` takes as arguments a function of type `a -> b`, a value of type `a`, and a tangent vector value of type `T a`. It gives back a pair consisting of a value of type `b` and an output tangent vector of type `T b`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "3RpbiasHGD3X"
|
|
},
|
|
"source": [
|
|
"The `jvp`-transformed function is evaluated much like the original function, but paired up with each primal value of type `a` it pushes along tangent values of type `T a`. For each primitive numerical operation that the original function would have applied, the `jvp`-transformed function executes a \"JVP rule\" for that primitive that both evaluates the primitive on the primals and applies the primitive's JVP at those primal values.\n",
|
|
"\n",
|
|
"That evaluation strategy has some immediate implications about computational complexity: since we evaluate JVPs as we go, we don't need to store anything for later, and so the memory cost is independent of the depth of the computation. In addition, the FLOP cost of the `jvp`-transformed function is about 3x the cost of just evaluating the function (one unit of work for evaluating the original function, for example `sin(x)`; one unit for linearizing, like `cos(x)`; and one unit for applying the linearized function to a vector, like `cos_x * v`). Put another way, for a fixed primal point $x$, we can evaluate $v \\mapsto \\partial f(x) \\cdot v$ for about the same marginal cost as evaluating $f$.\n",
|
|
"\n",
|
|
"That memory complexity sounds pretty compelling! So why don't we see forward-mode very often in machine learning?\n",
|
|
"\n",
|
|
"To answer that, first think about how you could use a JVP to build a full Jacobian matrix. If we apply a JVP to a one-hot tangent vector, it reveals one column of the Jacobian matrix, corresponding to the nonzero entry we fed in. So we can build a full Jacobian one column at a time, and to get each column costs about the same as one function evaluation. That will be efficient for functions with \"tall\" Jacobians, but inefficient for \"wide\" Jacobians.\n",
|
|
"\n",
|
|
"If you're doing gradient-based optimization in machine learning, you probably want to minimize a loss function from parameters in $\\mathbb{R}^n$ to a scalar loss value in $\\mathbb{R}$. That means the Jacobian of this function is a very wide matrix: $\\partial f(x) \\in \\mathbb{R}^{1 \\times n}$, which we often identify with the Gradient vector $\\nabla f(x) \\in \\mathbb{R}^n$. Building that matrix one column at a time, with each call taking a similar number of FLOPs to evaluate the original function, sure seems inefficient! In particular, for training neural networks, where $f$ is a training loss function and $n$ can be in the millions or billions, this approach just won't scale.\n",
|
|
"\n",
|
|
"To do better for functions like this, we just need to use reverse-mode."
|
|
]
|
|
},
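{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a tiny illustration of that \"one column at a time\" idea, here is a naive sketch (the helper name is made up) that builds the Jacobian of the `f` above by pushing each standard-basis tangent vector through `jvp` in a Python loop; later sections show how `vmap` and `jacfwd` do this much more efficiently:\n",
"\n",
"```python\n",
"def jacobian_by_columns(f, x):\n",
"    # One JVP per input dimension: each one-hot tangent reveals one column.\n",
"    columns = [jvp(f, (x,), (e,))[1] for e in jnp.eye(len(x))]\n",
"    return jnp.stack(columns, axis=-1)\n",
"\n",
"print(jacobian_by_columns(f, W).shape)  # (4, 3): one column per entry of W\n",
"```"
]
},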
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "PhkvkZazdXu1"
|
|
},
|
|
"source": [
|
|
"(vector-jacobian-product)=\n",
|
|
"\n",
|
|
"### Vector-Jacobian products (VJPs, aka reverse-mode autodiff)\n",
|
|
"\n",
|
|
"Where forward-mode gives us back a function for evaluating Jacobian-vector products, which we can then use to build Jacobian matrices one column at a time, reverse-mode is a way to get back a function for evaluating vector-Jacobian products (equivalently Jacobian-transpose-vector products), which we can use to build Jacobian matrices one row at a time.\n",
|
|
"\n",
|
|
"#### VJPs in math\n",
|
|
"\n",
|
|
"Let's again consider a function $f : \\mathbb{R}^n \\to \\mathbb{R}^m$.\n",
|
|
"Starting from our notation for JVPs, the notation for VJPs is pretty simple:\n",
|
|
"\n",
|
|
"$\\qquad (x, v) \\mapsto v \\partial f(x)$,\n",
|
|
"\n",
|
|
"where $v$ is an element of the cotangent space of $f$ at $x$ (isomorphic to another copy of $\\mathbb{R}^m$). When being rigorous, we should think of $v$ as a linear map $v : \\mathbb{R}^m \\to \\mathbb{R}$, and when we write $v \\partial f(x)$ we mean function composition $v \\circ \\partial f(x)$, where the types work out because $\\partial f(x) : \\mathbb{R}^n \\to \\mathbb{R}^m$. But in the common case we can identify $v$ with a vector in $\\mathbb{R}^m$ and use the two almost interchangeably, just like we might sometimes flip between \"column vectors\" and \"row vectors\" without much comment.\n",
|
|
"\n",
|
|
"With that identification, we can alternatively think of the linear part of a VJP as the transpose (or adjoint conjugate) of the linear part of a JVP:\n",
|
|
"\n",
|
|
"$\\qquad (x, v) \\mapsto \\partial f(x)^\\mathsf{T} v$.\n",
|
|
"\n",
|
|
"For a given point $x$, we can write the signature as\n",
|
|
"\n",
|
|
"$\\qquad \\partial f(x)^\\mathsf{T} : \\mathbb{R}^m \\to \\mathbb{R}^n$.\n",
|
|
"\n",
|
|
"The corresponding map on cotangent spaces is often called the [pullback](https://en.wikipedia.org/wiki/Pullback_(differential_geometry))\n",
|
|
"of $f$ at $x$. The key for our purposes is that it goes from something that looks like the output of $f$ to something that looks like the input of $f$, just like we might expect from a transposed linear function.\n",
|
|
"\n",
|
|
"#### VJPs in JAX code\n",
|
|
"\n",
|
|
"Switching from math back to Python, the JAX function `vjp` can take a Python function for evaluating $f$ and give us back a Python function for evaluating the VJP $(x, v) \\mapsto (f(x), v^\\mathsf{T} \\partial f(x))$."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 15,
|
|
"metadata": {
|
|
"id": "1tFcRuEzkGRR"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"from jax import vjp\n",
|
|
"\n",
|
|
"# Isolate the function from the weight matrix to the predictions\n",
|
|
"f = lambda W: predict(W, b, inputs)\n",
|
|
"\n",
|
|
"y, vjp_fun = vjp(f, W)\n",
|
|
"\n",
|
|
"key, subkey = random.split(key)\n",
|
|
"u = random.normal(subkey, y.shape)\n",
|
|
"\n",
|
|
"# Pull back the covector `u` along `f` evaluated at `W`\n",
|
|
"v = vjp_fun(u)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "oVOZexCEkvv3"
|
|
},
|
|
"source": [
|
|
"In terms of [Haskell-like type signatures](https://wiki.haskell.org/Type_signature),\n",
|
|
"we could write\n",
|
|
"\n",
|
|
"```haskell\n",
|
|
"vjp :: (a -> b) -> a -> (b, CT b -> CT a)\n",
|
|
"```\n",
|
|
"\n",
|
|
"where we use `CT a` to denote the type for the cotangent space for `a`. In words, `vjp` takes as arguments a function of type `a -> b` and a point of type `a`, and gives back a pair consisting of a value of type `b` and a linear map of type `CT b -> CT a`.\n",
|
|
"\n",
|
|
"This is great because it lets us build Jacobian matrices one row at a time, and the FLOP cost for evaluating $(x, v) \\mapsto (f(x), v^\\mathsf{T} \\partial f(x))$ is only about three times the cost of evaluating $f$. In particular, if we want the gradient of a function $f : \\mathbb{R}^n \\to \\mathbb{R}$, we can do it in just one call. That's how `grad` is efficient for gradient-based optimization, even for objectives like neural network training loss functions on millions or billions of parameters.\n",
|
|
"\n",
|
|
"There's a cost, though: though the FLOPs are friendly, memory scales with the depth of the computation. Also, the implementation is traditionally more complex than that of forward-mode, though JAX has some tricks up its sleeve (that's a story for a future notebook!).\n",
|
|
"\n",
|
|
"For more on how reverse-mode works, see [this tutorial video from the Deep Learning Summer School in 2017](http://videolectures.net/deeplearning2017_johnson_automatic_differentiation/)."
|
|
]
|
|
},
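{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, here is a minimal sketch of how a `grad`-like function can be built from `vjp` for scalar-output functions, by pulling back a cotangent of one in a single reverse pass (the name `our_grad` is just for illustration):\n",
"\n",
"```python\n",
"def our_grad(f):\n",
"    def gradfun(x):\n",
"        y, vjp_fn = vjp(f, x)\n",
"        # For scalar-output f, one pullback gives the whole gradient.\n",
"        return vjp_fn(jnp.ones_like(y))[0]\n",
"    return gradfun\n",
"\n",
"print(our_grad(lambda W: loss(W, b))(W))\n",
"print(grad(loss)(W, b))  # should match\n",
"```"
]
},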
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Vector-valued gradients with VJPs\n",
|
|
"\n",
|
|
"If you're interested in taking vector-valued gradients (like `tf.gradients`):"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 4,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"[[6. 6.]\n",
|
|
" [6. 6.]]\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from jax import vjp\n",
|
|
"\n",
|
|
"def vgrad(f, x):\n",
|
|
" y, vjp_fn = vjp(f, x)\n",
|
|
" return vjp_fn(jnp.ones(y.shape))[0]\n",
|
|
"\n",
|
|
"print(vgrad(lambda x: 3*x**2, jnp.ones((2, 2))))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "MWHcAPqLdJFn"
|
|
},
|
|
"source": [
|
|
"### Hessian-vector products using both forward- and reverse-mode"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "YG3g5C3KdW7H"
|
|
},
|
|
"source": [
|
|
"In a previous section, we implemented a Hessian-vector product function just using reverse-mode (assuming continuous second derivatives):"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 16,
|
|
"metadata": {
|
|
"id": "C70CA-7wdelL"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"def hvp(f, x, v):\n",
|
|
" return grad(lambda x: jnp.vdot(grad(f)(x), v))(x)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "zJlJbFKCdfd0"
|
|
},
|
|
"source": [
|
|
"That's efficient, but we can do even better and save some memory by using forward-mode together with reverse-mode.\n",
|
|
"\n",
|
|
"Mathematically, given a function $f : \\mathbb{R}^n \\to \\mathbb{R}$ to differentiate, a point $x \\in \\mathbb{R}^n$ at which to linearize the function, and a vector $v \\in \\mathbb{R}^n$, the Hessian-vector product function we want is\n",
|
|
"\n",
|
|
"$(x, v) \\mapsto \\partial^2 f(x) v$\n",
|
|
"\n",
|
|
"Consider the helper function $g : \\mathbb{R}^n \\to \\mathbb{R}^n$ defined to be the derivative (or gradient) of $f$, namely $g(x) = \\partial f(x)$. All we need is its JVP, since that will give us\n",
|
|
"\n",
|
|
"$(x, v) \\mapsto \\partial g(x) v = \\partial^2 f(x) v$.\n",
|
|
"\n",
|
|
"We can translate that almost directly into code:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 17,
|
|
"metadata": {
|
|
"id": "rq3C0reVfAaI"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"from jax import jvp, grad\n",
|
|
"\n",
|
|
"# forward-over-reverse\n",
|
|
"def hvp(f, primals, tangents):\n",
|
|
" return jvp(grad(f), primals, tangents)[1]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "XUsye1SwfSFm"
|
|
},
|
|
"source": [
|
|
"Even better, since we didn't have to call `jnp.dot` directly, this `hvp` function works with arrays of any shape and with arbitrary container types (like vectors stored as nested lists/dicts/tuples), and doesn't even have a dependence on `jax.numpy`.\n",
|
|
"\n",
|
|
"Here's an example of how to use it:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 18,
|
|
"metadata": {
|
|
"id": "bmpuQa5_f1Al",
|
|
"outputId": "20ef2514-0ab7-4071-c2f4-77b59b013ffc"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"True\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"def f(X):\n",
|
|
" return jnp.sum(jnp.tanh(X)**2)\n",
|
|
"\n",
|
|
"key, subkey1, subkey2 = random.split(key, 3)\n",
|
|
"X = random.normal(subkey1, (30, 40))\n",
|
|
"V = random.normal(subkey2, (30, 40))\n",
|
|
"\n",
|
|
"ans1 = hvp(f, (X,), (V,))\n",
|
|
"ans2 = jnp.tensordot(hessian(f)(X), V, 2)\n",
|
|
"\n",
|
|
"print(jnp.allclose(ans1, ans2, 1e-4, 1e-4))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "aWTii5TyXL5C"
|
|
},
|
|
"source": [
|
|
"Another way you might consider writing this is using reverse-over-forward:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 19,
|
|
"metadata": {
|
|
"id": "YxwmXZH2XQrw"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"# reverse-over-forward\n",
|
|
"def hvp_revfwd(f, primals, tangents):\n",
|
|
" g = lambda primals: jvp(f, primals, tangents)[1]\n",
|
|
" return grad(g)(primals)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "8z-QG_xTXR4I"
|
|
},
|
|
"source": [
|
|
"That's not quite as good, though, because forward-mode has less overhead than reverse-mode, and since the outer differentiation operator here has to differentiate a larger computation than the inner one, keeping forward-mode on the outside works best:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 20,
|
|
"metadata": {
|
|
"id": "lxfv25qTX5gZ",
|
|
"outputId": "b88dae03-7bd1-4836-f994-880c57bc4714"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Forward over reverse\n",
|
|
"10 loops, best of 3: 6.75 ms per loop\n",
|
|
"Reverse over forward\n",
|
|
"10 loops, best of 3: 8.4 ms per loop\n",
|
|
"Reverse over reverse\n",
|
|
"10 loops, best of 3: 9.98 ms per loop\n",
|
|
"Naive full Hessian materialization\n",
|
|
"10 loops, best of 3: 15.9 ms per loop\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# reverse-over-reverse, only works for single arguments\n",
|
|
"def hvp_revrev(f, primals, tangents):\n",
|
|
" x, = primals\n",
|
|
" v, = tangents\n",
|
|
" return grad(lambda x: jnp.vdot(grad(f)(x), v))(x)\n",
|
|
"\n",
|
|
"\n",
|
|
"print(\"Forward over reverse\")\n",
|
|
"%timeit -n10 -r3 hvp(f, (X,), (V,))\n",
|
|
"print(\"Reverse over forward\")\n",
|
|
"%timeit -n10 -r3 hvp_revfwd(f, (X,), (V,))\n",
|
|
"print(\"Reverse over reverse\")\n",
|
|
"%timeit -n10 -r3 hvp_revrev(f, (X,), (V,))\n",
|
|
"\n",
|
|
"print(\"Naive full Hessian materialization\")\n",
|
|
"%timeit -n10 -r3 jnp.tensordot(hessian(f)(X), V, 2)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "xtqSUJgzwQXO"
|
|
},
|
|
"source": [
|
|
"## Composing VJPs, JVPs, and `vmap`"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "PSL1TciM6vvI"
|
|
},
|
|
"source": [
|
|
"### Jacobian-Matrix and Matrix-Jacobian products\n",
|
|
"\n",
|
|
"Now that we have `jvp` and `vjp` transformations that give us functions to push-forward or pull-back single vectors at a time, we can use JAX's `vmap` [transformation](https://github.com/google/jax#auto-vectorization-with-vmap) to push and pull entire bases at once. In particular, we can use that to write fast matrix-Jacobian and Jacobian-matrix products."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 21,
|
|
"metadata": {
|
|
"id": "asAWvxVaCmsx",
|
|
"outputId": "05d3b5f9-f526-42a4-ea2b-6163b267db26"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Non-vmapped Matrix-Jacobian product\n",
|
|
"10 loops, best of 3: 291 ms per loop\n",
|
|
"\n",
|
|
"Vmapped Matrix-Jacobian product\n",
|
|
"10 loops, best of 3: 6.85 ms per loop\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Isolate the function from the weight matrix to the predictions\n",
|
|
"f = lambda W: predict(W, b, inputs)\n",
|
|
"\n",
|
|
"# Pull back the covectors `m_i` along `f`, evaluated at `W`, for all `i`.\n",
|
|
"# First, use a list comprehension to loop over rows in the matrix M.\n",
|
|
"def loop_mjp(f, x, M):\n",
|
|
" y, vjp_fun = vjp(f, x)\n",
|
|
" return jnp.vstack([vjp_fun(mi) for mi in M])\n",
|
|
"\n",
|
|
"# Now, use vmap to build a computation that does a single fast matrix-matrix\n",
|
|
"# multiply, rather than an outer loop over vector-matrix multiplies.\n",
|
|
"def vmap_mjp(f, x, M):\n",
|
|
" y, vjp_fun = vjp(f, x)\n",
|
|
" outs, = vmap(vjp_fun)(M)\n",
|
|
" return outs\n",
|
|
"\n",
|
|
"key = random.key(0)\n",
|
|
"num_covecs = 128\n",
|
|
"U = random.normal(key, (num_covecs,) + y.shape)\n",
|
|
"\n",
|
|
"loop_vs = loop_mjp(f, W, M=U)\n",
|
|
"print('Non-vmapped Matrix-Jacobian product')\n",
|
|
"%timeit -n10 -r3 loop_mjp(f, W, M=U)\n",
|
|
"\n",
|
|
"print('\\nVmapped Matrix-Jacobian product')\n",
|
|
"vmap_vs = vmap_mjp(f, W, M=U)\n",
|
|
"%timeit -n10 -r3 vmap_mjp(f, W, M=U)\n",
|
|
"\n",
|
|
"assert jnp.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Matrix-Jacobian Products should be identical'"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 22,
|
|
"metadata": {
|
|
"id": "TDaxsJrlDraK",
|
|
"outputId": "99a7591c-643d-4b91-c0fc-c7a3c73b0de6"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Non-vmapped Jacobian-Matrix product\n",
|
|
"10 loops, best of 3: 693 ms per loop\n",
|
|
"\n",
|
|
"Vmapped Jacobian-Matrix product\n",
|
|
"10 loops, best of 3: 5.97 ms per loop\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"def loop_jmp(f, W, M):\n",
|
|
" # jvp immediately returns the primal and tangent values as a tuple,\n",
|
|
" # so we'll compute and select the tangents in a list comprehension\n",
|
|
" return jnp.vstack([jvp(f, (W,), (mi,))[1] for mi in M])\n",
|
|
"\n",
|
|
"def vmap_jmp(f, W, M):\n",
|
|
" _jvp = lambda s: jvp(f, (W,), (s,))[1]\n",
|
|
" return vmap(_jvp)(M)\n",
|
|
"\n",
|
|
"num_vecs = 128\n",
|
|
"S = random.normal(key, (num_vecs,) + W.shape)\n",
|
|
"\n",
|
|
"loop_vs = loop_jmp(f, W, M=S)\n",
|
|
"print('Non-vmapped Jacobian-Matrix product')\n",
|
|
"%timeit -n10 -r3 loop_jmp(f, W, M=S)\n",
|
|
"vmap_vs = vmap_jmp(f, W, M=S)\n",
|
|
"print('\\nVmapped Jacobian-Matrix product')\n",
|
|
"%timeit -n10 -r3 vmap_jmp(f, W, M=S)\n",
|
|
"\n",
|
|
"assert jnp.allclose(loop_vs, vmap_vs), 'Vmap and non-vmapped Jacobian-Matrix products should be identical'"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "MXFEFBDz6vvL"
|
|
},
|
|
"source": [
|
|
"### The implementation of `jacfwd` and `jacrev`"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "ZAgUb6sp8bf7"
|
|
},
|
|
"source": [
|
|
"Now that we've seen fast Jacobian-matrix and matrix-Jacobian products, it's not hard to guess how to write `jacfwd` and `jacrev`. We just use the same technique to push-forward or pull-back an entire standard basis (isomorphic to an identity matrix) at once."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 23,
|
|
"metadata": {
|
|
"id": "HBEzsDH1U5_4"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"from jax import jacrev as builtin_jacrev\n",
|
|
"\n",
|
|
"def our_jacrev(f):\n",
|
|
" def jacfun(x):\n",
|
|
" y, vjp_fun = vjp(f, x)\n",
|
|
" # Use vmap to do a matrix-Jacobian product.\n",
|
|
" # Here, the matrix is the Euclidean basis, so we get all\n",
|
|
" # entries in the Jacobian at once. \n",
|
|
" J, = vmap(vjp_fun, in_axes=0)(jnp.eye(len(y)))\n",
|
|
" return J\n",
|
|
" return jacfun\n",
|
|
"\n",
|
|
"assert jnp.allclose(builtin_jacrev(f)(W), our_jacrev(f)(W)), 'Incorrect reverse-mode Jacobian results!'"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 24,
|
|
"metadata": {
|
|
"id": "Qd9gVZ5t6vvP"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"from jax import jacfwd as builtin_jacfwd\n",
|
|
"\n",
|
|
"def our_jacfwd(f):\n",
|
|
" def jacfun(x):\n",
|
|
" _jvp = lambda s: jvp(f, (x,), (s,))[1]\n",
|
|
" Jt =vmap(_jvp, in_axes=1)(jnp.eye(len(x)))\n",
|
|
" return jnp.transpose(Jt)\n",
|
|
" return jacfun\n",
|
|
"\n",
|
|
"assert jnp.allclose(builtin_jacfwd(f)(W), our_jacfwd(f)(W)), 'Incorrect forward-mode Jacobian results!'"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "7r5_m9Y68bf_"
|
|
},
|
|
"source": [
|
|
"Interestingly, [Autograd](https://github.com/hips/autograd) couldn't do this. Our [implementation](https://github.com/HIPS/autograd/blob/96a03f44da43cd7044c61ac945c483955deba957/autograd/differential_operators.py#L60) of reverse-mode `jacobian` in Autograd had to pull back one vector at a time with an outer-loop `map`. Pushing one vector at a time through the computation is much less efficient than batching it all together with `vmap`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "9maev0Nd8bf_"
|
|
},
|
|
"source": [
|
|
"Another thing that Autograd couldn't do is `jit`. Interestingly, no matter how much Python dynamism you use in your function to be differentiated, we could always use `jit` on the linear part of the computation. For example:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 25,
|
|
"metadata": {
|
|
"id": "_5jDflC08bgB",
|
|
"outputId": "7d37cca8-3954-40b9-bea8-937ac655387c"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"(Array(3.1415927, dtype=float32),)\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"def f(x):\n",
|
|
" try:\n",
|
|
" if x < 3:\n",
|
|
" return 2 * x ** 3\n",
|
|
" else:\n",
|
|
" raise ValueError\n",
|
|
" except ValueError:\n",
|
|
" return jnp.pi * x\n",
|
|
"\n",
|
|
"y, f_vjp = vjp(f, 4.)\n",
|
|
"print(jit(f_vjp)(1.))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "3fPWLrxK8bgD"
|
|
},
|
|
"source": [
|
|
"## Complex numbers and differentiation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "2pZOHvrm8bgE"
|
|
},
|
|
"source": [
|
|
"JAX is great at complex numbers and differentiation. To support both [holomorphic and non-holomorphic differentiation](https://en.wikipedia.org/wiki/Holomorphic_function), it helps to think in terms of JVPs and VJPs.\n",
|
|
"\n",
|
|
"Consider a complex-to-complex function $f: \\mathbb{C} \\to \\mathbb{C}$ and identify it with a corresponding function $g: \\mathbb{R}^2 \\to \\mathbb{R}^2$,"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 26,
|
|
"metadata": {
|
|
"id": "OaqZ2MuP8bgF"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"def f(z):\n",
|
|
" x, y = jnp.real(z), jnp.imag(z)\n",
|
|
" return u(x, y) + v(x, y) * 1j\n",
|
|
"\n",
|
|
"def g(x, y):\n",
|
|
" return (u(x, y), v(x, y))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "3XB5oGxl8bgH"
|
|
},
|
|
"source": [
|
|
"That is, we've decomposed $f(z) = u(x, y) + v(x, y) i$ where $z = x + y i$, and identified $\\mathbb{C}$ with $\\mathbb{R}^2$ to get $g$."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "6fBBMxqpiVjF"
|
|
},
|
|
"source": [
|
|
"Since $g$ only involves real inputs and outputs, we already know how to write a Jacobian-vector product for it, say given a tangent vector $(c, d) \\in \\mathbb{R}^2$, namely\n",
|
|
"\n",
|
|
"$\\begin{bmatrix} \\partial_0 u(x, y) & \\partial_1 u(x, y) \\\\ \\partial_0 v(x, y) & \\partial_1 v(x, y) \\end{bmatrix}\n",
|
|
"\\begin{bmatrix} c \\\\ d \\end{bmatrix}$.\n",
|
|
"\n",
|
|
"To get a JVP for the original function $f$ applied to a tangent vector $c + di \\in \\mathbb{C}$, we just use the same definition and identify the result as another complex number, \n",
|
|
"\n",
|
|
"$\\partial f(x + y i)(c + d i) =\n",
|
|
"\\begin{matrix} \\begin{bmatrix} 1 & i \\end{bmatrix} \\\\ ~ \\end{matrix}\n",
|
|
"\\begin{bmatrix} \\partial_0 u(x, y) & \\partial_1 u(x, y) \\\\ \\partial_0 v(x, y) & \\partial_1 v(x, y) \\end{bmatrix}\n",
|
|
"\\begin{bmatrix} c \\\\ d \\end{bmatrix}$.\n",
|
|
"\n",
|
|
"That's our definition of the JVP of a $\\mathbb{C} \\to \\mathbb{C}$ function! Notice it doesn't matter whether or not $f$ is holomorphic: the JVP is unambiguous."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "6SL6dWtFpBUr"
|
|
},
|
|
"source": [
|
|
"Here's a check:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 27,
|
|
"metadata": {
|
|
"id": "BGZV__zupIMS"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"def check(seed):\n",
|
|
" key = random.key(seed)\n",
|
|
"\n",
|
|
" # random coeffs for u and v\n",
|
|
" key, subkey = random.split(key)\n",
|
|
" a, b, c, d = random.uniform(subkey, (4,))\n",
|
|
"\n",
|
|
" def fun(z):\n",
|
|
" x, y = jnp.real(z), jnp.imag(z)\n",
|
|
" return u(x, y) + v(x, y) * 1j\n",
|
|
"\n",
|
|
" def u(x, y):\n",
|
|
" return a * x + b * y\n",
|
|
"\n",
|
|
" def v(x, y):\n",
|
|
" return c * x + d * y\n",
|
|
"\n",
|
|
" # primal point\n",
|
|
" key, subkey = random.split(key)\n",
|
|
" x, y = random.uniform(subkey, (2,))\n",
|
|
" z = x + y * 1j\n",
|
|
"\n",
|
|
" # tangent vector\n",
|
|
" key, subkey = random.split(key)\n",
|
|
" c, d = random.uniform(subkey, (2,))\n",
|
|
" z_dot = c + d * 1j\n",
|
|
"\n",
|
|
" # check jvp\n",
|
|
" _, ans = jvp(fun, (z,), (z_dot,))\n",
|
|
" expected = (grad(u, 0)(x, y) * c +\n",
|
|
" grad(u, 1)(x, y) * d +\n",
|
|
" grad(v, 0)(x, y) * c * 1j+\n",
|
|
" grad(v, 1)(x, y) * d * 1j)\n",
|
|
" print(jnp.allclose(ans, expected))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 28,
|
|
"metadata": {
|
|
"id": "I2OBU3OGp-CY",
|
|
"outputId": "28ae844b-0c25-4255-ca9b-598b0dbeb404"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"True\n",
|
|
"True\n",
|
|
"True\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"check(0)\n",
|
|
"check(1)\n",
|
|
"check(2)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "XjWMgDOimUcU"
|
|
},
|
|
"source": [
|
|
"What about VJPs? We do something pretty similar: for a cotangent vector $c + di \\in \\mathbb{C}$ we define the VJP of $f$ as\n",
|
|
"\n",
|
|
"$(c + di)^* \\; \\partial f(x + y i) =\n",
|
|
"\\begin{matrix} \\begin{bmatrix} c & -d \\end{bmatrix} \\\\ ~ \\end{matrix}\n",
|
|
"\\begin{bmatrix} \\partial_0 u(x, y) & \\partial_1 u(x, y) \\\\ \\partial_0 v(x, y) & \\partial_1 v(x, y) \\end{bmatrix}\n",
|
|
"\\begin{bmatrix} 1 \\\\ -i \\end{bmatrix}$.\n",
|
|
"\n",
|
|
"What's with the negatives? They're just to take care of complex conjugation, and the fact that we're working with covectors."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "oRu2VRjmtrgB"
|
|
},
|
|
"source": [
|
|
"Here's a check of the VJP rules:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 29,
|
|
"metadata": {
|
|
"id": "4J7edvIBttcU"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"def check(seed):\n",
|
|
" key = random.key(seed)\n",
|
|
"\n",
|
|
" # random coeffs for u and v\n",
|
|
" key, subkey = random.split(key)\n",
|
|
" a, b, c, d = random.uniform(subkey, (4,))\n",
|
|
"\n",
|
|
" def fun(z):\n",
|
|
" x, y = jnp.real(z), jnp.imag(z)\n",
|
|
" return u(x, y) + v(x, y) * 1j\n",
|
|
"\n",
|
|
" def u(x, y):\n",
|
|
" return a * x + b * y\n",
|
|
"\n",
|
|
" def v(x, y):\n",
|
|
" return c * x + d * y\n",
|
|
"\n",
|
|
" # primal point\n",
|
|
" key, subkey = random.split(key)\n",
|
|
" x, y = random.uniform(subkey, (2,))\n",
|
|
" z = x + y * 1j\n",
|
|
"\n",
|
|
" # cotangent vector\n",
|
|
" key, subkey = random.split(key)\n",
|
|
" c, d = random.uniform(subkey, (2,))\n",
|
|
" z_bar = jnp.array(c + d * 1j) # for dtype control\n",
|
|
"\n",
|
|
" # check vjp\n",
|
|
" _, fun_vjp = vjp(fun, z)\n",
|
|
" ans, = fun_vjp(z_bar)\n",
|
|
" expected = (grad(u, 0)(x, y) * c +\n",
|
|
" grad(v, 0)(x, y) * (-d) +\n",
|
|
" grad(u, 1)(x, y) * c * (-1j) +\n",
|
|
" grad(v, 1)(x, y) * (-d) * (-1j))\n",
|
|
" assert jnp.allclose(ans, expected, atol=1e-5, rtol=1e-5)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 30,
|
|
"metadata": {
|
|
"id": "RieNCdXgtzs7"
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"check(0)\n",
|
|
"check(1)\n",
|
|
"check(2)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "7I6A19Myt3qN"
|
|
},
|
|
"source": [
|
|
"What about convenience wrappers like `grad`, `jacfwd`, and `jacrev`?\n",
|
|
"\n",
|
|
"For $\\mathbb{R} \\to \\mathbb{R}$ functions, recall we defined `grad(f)(x)` as being `vjp(f, x)[1](1.0)`, which works because applying a VJP to a `1.0` value reveals the gradient (i.e. Jacobian, or derivative). We can do the same thing for $\\mathbb{C} \\to \\mathbb{R}$ functions: we can still use `1.0` as the cotangent vector, and we just get out a complex number result summarizing the full Jacobian:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 31,
|
|
"metadata": {
|
|
"id": "xz_9lK61wGdm",
|
|
"outputId": "96693583-2a36-48fd-d811-cd1a0f1a50c5"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"Array(6.-8.j, dtype=complex64)"
|
|
]
|
|
},
|
|
"execution_count": 31,
|
|
"metadata": {
|
|
"tags": []
|
|
},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"def f(z):\n",
|
|
" x, y = jnp.real(z), jnp.imag(z)\n",
|
|
" return x**2 + y**2\n",
|
|
"\n",
|
|
"z = 3. + 4j\n",
|
|
"grad(f)(z)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "jqCvEE8qwGw7"
|
|
},
|
|
"source": [
|
|
"For general $\\mathbb{C} \\to \\mathbb{C}$ functions, the Jacobian has 4 real-valued degrees of freedom (as in the 2x2 Jacobian matrices above), so we can't hope to represent all of them within a complex number. But we can for holomorphic functions! A holomorphic function is precisely a $\\mathbb{C} \\to \\mathbb{C}$ function with the special property that its derivative can be represented as a single complex number. (The [Cauchy-Riemann equations](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann_equations) ensure that the above 2x2 Jacobians have the special form of a scale-and-rotate matrix in the complex plane, i.e. the action of a single complex number under multiplication.) And we can reveal that one complex number using a single call to `vjp` with a covector of `1.0`.\n",
|
|
"\n",
|
|
"Because this only works for holomorphic functions, to use this trick we need to promise JAX that our function is holomorphic; otherwise, JAX will raise an error when `grad` is used for a complex-output function:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 32,
|
|
"metadata": {
|
|
"id": "Y3n9hPVrwvXx",
|
|
"outputId": "20f72dfe-7083-47f8-b086-4a16426ba97b"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"Array(-27.034945-3.8511531j, dtype=complex64)"
|
|
]
|
|
},
|
|
"execution_count": 32,
|
|
"metadata": {
|
|
"tags": []
|
|
},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"def f(z):\n",
|
|
" return jnp.sin(z)\n",
|
|
"\n",
|
|
"z = 3. + 4j\n",
|
|
"grad(f, holomorphic=True)(z)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "LjIbDxX-w9Qf"
|
|
},
|
|
"source": [
|
|
"All the `holomorphic=True` promise does is disable the error when the output is complex-valued. We can still write `holomorphic=True` when the function isn't holomorphic, but the answer we get out won't represent the full Jacobian. Instead, it'll be the Jacobian of the function where we just discard the imaginary part of the output:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 33,
|
|
"metadata": {
|
|
"id": "th9xhwp2xaeU",
|
|
"outputId": "2dfebe3a-bbfe-46a1-b282-f5d59d53c772"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"Array(1.-0.j, dtype=complex64)"
|
|
]
|
|
},
|
|
"execution_count": 33,
|
|
"metadata": {
|
|
"tags": []
|
|
},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"def f(z):\n",
|
|
" return jnp.conjugate(z)\n",
|
|
"\n",
|
|
"z = 3. + 4j\n",
|
|
"grad(f, holomorphic=True)(z) # f is not actually holomorphic!"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "R8ytpfeXyBu2"
|
|
},
|
|
"source": [
|
|
"There are some useful upshots for how `grad` works here:\n",
|
|
"\n",
|
|
"1. We can use `grad` on holomorphic $\\mathbb{C} \\to \\mathbb{C}$ functions.\n",
|
|
"2. We can use `grad` to optimize $f : \\mathbb{C} \\to \\mathbb{R}$ functions, like real-valued loss functions of complex parameters `x`, by taking steps in the direction of the conjugate of `grad(f)(x)`.\n",
|
|
"3. If we have an $\\mathbb{R} \\to \\mathbb{R}$ function that just happens to use some complex-valued operations internally (some of which must be non-holomorphic, e.g. FFTs used in convolutions) then `grad` still works and we get the same result that an implementation using only real values would have given.\n",
|
|
"\n",
|
|
"In any case, JVPs and VJPs are always unambiguous. And if we wanted to compute the full Jacobian matrix of a non-holomorphic $\\mathbb{C} \\to \\mathbb{C}$ function, we can do it with JVPs or VJPs!"
|
|
]
|
|
},
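{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch (the helper name is made up for illustration) that recovers the full 2x2 real Jacobian of a possibly non-holomorphic $\\mathbb{C} \\to \\mathbb{C}$ function by pushing the two real coordinate directions through `jvp`:\n",
"\n",
"```python\n",
"def complex_jacobian(f, z):\n",
"    # Tangents (dx, dy) = (1, 0) and (0, 1), encoded as complex numbers.\n",
"    _, df_dx = jvp(f, (z,), (jnp.ones_like(z),))\n",
"    _, df_dy = jvp(f, (z,), (1j * jnp.ones_like(z),))\n",
"    return jnp.array([[jnp.real(df_dx), jnp.real(df_dy)],\n",
"                      [jnp.imag(df_dx), jnp.imag(df_dy)]])\n",
"\n",
"z = jnp.array(3. + 4j, dtype=jnp.complex64)\n",
"print(complex_jacobian(jnp.conjugate, z))  # expect [[1, 0], [0, -1]] since u = x, v = -y\n",
"```"
]
},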
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "qmXkI37T8bgL"
|
|
},
|
|
"source": [
|
|
"You should expect complex numbers to work everywhere in JAX. Here's differentiating through a Cholesky decomposition of a complex matrix:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 34,
|
|
"metadata": {
|
|
"id": "WrDHHfKI8bgM",
|
|
"outputId": "c04baa8a-2408-4a76-e3dc-4522782d1bc5"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"Array([[-0.75342447 +0.j , -3.0509021 -10.940544j ,\n",
|
|
" 5.989684 +3.5422976j],\n",
|
|
" [-3.0509021 +10.940544j , -8.904487 +0.j ,\n",
|
|
" -5.1351547 -6.5593696j],\n",
|
|
" [ 5.989684 -3.5422976j, -5.1351547 +6.5593696j,\n",
|
|
" 0.01320434 +0.j ]], dtype=complex64)"
|
|
]
|
|
},
|
|
"execution_count": 34,
|
|
"metadata": {
|
|
"tags": []
|
|
},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"A = jnp.array([[5., 2.+3j, 5j],\n",
|
|
" [2.-3j, 7., 1.+7j],\n",
|
|
" [-5j, 1.-7j, 12.]])\n",
|
|
"\n",
|
|
"def f(X):\n",
|
|
" L = jnp.linalg.cholesky(X)\n",
|
|
" return jnp.sum((L - jnp.sin(L))**2)\n",
|
|
"\n",
|
|
"grad(f, holomorphic=True)(A)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"id": "Pgr2A60q9gl1"
|
|
},
|
|
"source": [
|
|
"## More advanced autodiff\n",
|
|
"\n",
|
|
"In this notebook, we worked through some easy, and then progressively more complicated, applications of automatic differentiation in JAX. We hope you now feel that taking derivatives in JAX is easy and powerful. \n",
|
|
"\n",
|
|
"There's a whole world of other autodiff tricks and functionality out there. Topics we didn't cover, but hope to in an \"Advanced Autodiff Cookbook\" include:\n",
|
|
"\n",
|
|
" - Gauss-Newton Vector Products, linearizing once\n",
|
|
" - Custom VJPs and JVPs\n",
|
|
" - Efficient derivatives at fixed-points\n",
|
|
" - Estimating the trace of a Hessian using random Hessian-vector products.\n",
|
|
" - Forward-mode autodiff using only reverse-mode autodiff.\n",
|
|
" - Taking derivatives with respect to custom data types.\n",
|
|
" - Checkpointing (binomial checkpointing for efficient reverse-mode, not model snapshotting).\n",
|
|
" - Optimizing VJPs with Jacobian pre-accumulation."
|
|
]
|
|
}
|
|
],
|
|
"metadata": {
|
|
"accelerator": "GPU",
|
|
"colab": {
|
|
"collapsed_sections": [],
|
|
"name": "Autodiff Cookbook.ipynb",
|
|
"provenance": []
|
|
},
|
|
"jupytext": {
|
|
"formats": "ipynb,md:myst"
|
|
},
|
|
"kernelspec": {
|
|
"display_name": "Python 3",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.7.3"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 0
|
|
}
|