* Minor update to docs; trigger readthedocs
* Updated Common Gotchas notebook
Handle errors explicitly; otherwise it is too hard to test the notebook with 'Run all'.
* Added a section about pure functions to Common Gotchas
Long, long ago, when JAX was first born, we realized that we couldn't
transpose this jaxpr:
{ lambda ; a.
    let b = reduce_sum[ axes=(0,) ] a
    in b }
The problem was that the transpose of a reduce-sum is a broadcast, but
because jaxprs didn't have shape information available, we didn't know
what input shape to broadcast to!
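For illustration, here is the reduce-sum/broadcast transpose pair as seen
through today's public vjp API (a minimal sketch, not this commit's
internals):

    import jax
    import jax.numpy as jnp

    x = jnp.arange(3.0)
    y, pullback = jax.vjp(jnp.sum, x)  # forward: reduce-sum to a scalar
    (ct_x,) = pullback(1.0)            # transpose: broadcast the cotangent
    print(ct_x)                        # [1. 1. 1.], i.e. x's shape (3,)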
Our hack was to have the primitives that required shape information for
transposition carry it in their parameters, so that we'd produce
jaxprs like this one:
{ lambda ; a.
    let b = reduce_sum[ axes=(0,)
                        input_shape=(3,) ] a
    in b }
That's not only aesthetically unpleasant; it also meant we were
limiting an (unused) capability of the system: ideally we should be able
to trace a reduce-sum jaxpr without specializing on shape information
(e.g. at the Unshaped level) and only require shape specialization for
transposition. (Good thing no one actually traces at Unshaped...)
But at long last @chr1sj0nes in #2299 added avals to jaxprs, so that
shape information (or whatever information with which the jaxpr was
specialized out of Python) is in the jaxpr itself. So we could finally
remove these shapes-in-params warts!
That's exactly what this commit does!
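To see the effect, one can print a jaxpr and find the shape information in
the avals rather than in primitive params (a sketch; the exact printed form
varies across JAX versions):

    import jax
    import jax.numpy as jnp

    print(jax.make_jaxpr(lambda a: jnp.sum(a, axis=0))(jnp.ones(3)))
    # { lambda ; a:f32[3]. let b:f32[] = reduce_sum[axes=(0,)] a in (b,) }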
Co-authored-by: Roy Frostig <frostig@google.com>
* Moved all notebooks to docs/notebooks.
Now all notebooks are in the same place, so all are subject
to auto-doc generation at readthedocs.io and to automated testing
with Travis.
Some notebooks are too slow to execute, so they are excluded via
exclude_patterns in docs/conf.py (see the sketch below).
Cleaned up the section headings in the notebooks so that they render
well on readthedocs.io.
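The exclusion mechanism is standard Sphinx configuration (the notebook
names below are placeholders, not the actual excluded set):

    # docs/conf.py
    exclude_patterns = [
        "notebooks/some_slow_notebook.ipynb",     # hypothetical entry
        "notebooks/another_slow_notebook.ipynb",  # hypothetical entry
    ]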
* Increase the cell timeout for executing notebooks
* Also exclude the neural network notebook from auto-generation (it times out)
* Disable the score_matching notebook from auto-doc (Travis does not have sklearn)
Testing is done by running "jupyter nbconvert --to notebook" and
then parsing the resulting notebook to look for errors.
One can declare expected errors, and the test will fail if those
are missing.
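A sketch of that test logic (the helper and its names are hypothetical,
not the actual test code): execute the notebook with nbconvert, then scan
its cell outputs for error entries and compare against the declared
expectations.

    import subprocess
    import nbformat

    def run_and_check(path, expected_errors=()):
        executed = path.replace(".ipynb", ".executed.ipynb")
        # Execute the notebook, keeping going past errors so we can
        # inspect all of them afterwards.
        subprocess.run(
            ["jupyter", "nbconvert", "--to", "notebook", "--execute",
             "--allow-errors", "--output", executed, path],
            check=True,
        )
        nb = nbformat.read(executed, as_version=4)
        found = [out["ename"]
                 for cell in nb.cells if cell.cell_type == "code"
                 for out in cell.get("outputs", [])
                 if out.get("output_type") == "error"]
        unexpected = [e for e in found if e not in expected_errors]
        missing = [e for e in expected_errors if e not in found]
        assert not unexpected, f"unexpected errors: {unexpected}"
        assert not missing, f"expected errors missing: {missing}"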
In the process of doing this, found and fixed a bug in the autodiff_cookbook
notebook.
* Cleaned up use of section levels
* Renamed ma to multiply_add and sq_add to square_add
* Other minor clarifications
* Separated the Colabs into Tutorials and Advanced Tutorials