All initial-style primitives currently use `batch_jaxpr` in their
batching rules, but that function wasn't updated to support
`axis_name` when I added support for vmap collectives.
Prior to this it was possible, e.g., for code that contains a Literal to
result in FLOPs during jaxpr checking.
Many tests break the assertion unless we apply `raise_to_shape` to
Literals.
I have timed the checks on my laptop and I do not see a reduction in the
total test time.
The main change is that we use `core.new_base_main` to install an
omnistaging-based tracer. This has the benefit that we can
convert to TF even functions with no arguments (previously
they would be constant-folded by JAX prior to the conversion).
We also add an explicit error if the jax2tf.convert transformation
is nested under other JAX transformations.
- Add float0 and set up `at_least_vspace` to return float0
values for int/bool primals
- Use Zero to wrap float0 tangents so they're correctly ignored in jvp
rules
- Add float0 handlers to XLA to support jit
- Fix convert_element_type and tie_in jvp rules
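As a sketch of how float0 shows up in practice, using today's public `jax.jvp` API (exact behavior at the time of this commit may have differed):

```python
import numpy as np
import jax

# The tangent supplied for an integer primal must have dtype float0,
# since int/bool values have a trivial tangent space.
primals_out, tangents_out = jax.jvp(
    lambda x, i: x * i,
    (2.0, 3),                                      # float and int primals
    (1.0, np.zeros((), dtype=jax.dtypes.float0)),  # float0 tangent for the int
)
```

The float0 tangent is wrapped as a Zero internally, so the jvp rule for multiplication only sees the contribution from the float primal.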
The primitive was moved to `lax_parallel.py` some time ago, so the one
in `core` should no longer be used. This is probably a result of a
botched rebase.
Previously, given this function:
```python
@jax.jit
def f(x, y):
  if x > y:
    return x
  else:
    return y
```
we'd get an error message like this (after #4038, improved to help with
omnistaging debugging):
```
...
While tracing the function f at tim.py:3, this value became a tracer due to JAX operations on these lines:
operation c:bool[] = gt a:int32[] b:int32[]
from line tim.py:5 (f)
...
```
But this message is buggy! In this case, the value is a tracer because
it has a data dependence on arguments to a jitted function.
After this change, we instead produce this error message:
```
...
While tracing the function f at tim.py:3, this concrete value was not available in Python because it depends on the value of the arguments to f at tim.py:3 at positions [0, 1], and the computation of these values is being staged out.
...
```
I'm eager to iterate with further improvements, but for now I want to
fix this buggy message.
rename and simplify TypedJaxpr -> ClosedJaxpr
This change:
* simplifies code that constructs TypedJaxprs/ClosedJaxprs (because
in_avals / out_avals no longer need to be constructed), making them
easier to work with;
* correspondingly rules out a class of errors (mismatches between
invars/outvars and in_avals/out_avals);
* provides a more descriptive class name (ClosedJaxprs are like jaxprs
but they're closed in that they are packaged with their constant
values).
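The idea can be sketched with a toy stand-in (hypothetical names, not JAX's actual classes): the avals are computed from the jaxpr rather than stored alongside it, so they can never fall out of sync with invars/outvars.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Var:
    name: str
    aval: Any          # abstract value, e.g. a (dtype, shape) pair

@dataclass
class Jaxpr:
    invars: List[Var]
    outvars: List[Var]

@dataclass
class ClosedJaxpr:
    """Toy sketch: a jaxpr packaged ("closed") with its constant values."""
    jaxpr: Jaxpr
    consts: List[Any]

    # avals are derived, not stored, so they cannot mismatch the vars
    @property
    def in_avals(self):
        return [v.aval for v in self.jaxpr.invars]

    @property
    def out_avals(self):
        return [v.aval for v in self.jaxpr.outvars]
```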
This is part 1 of an attempt to remove TypedJaxprs completely, or at
least significantly reduce our use of them. However, I'm not getting rid
of them entirely in this first step because it'd require bigger changes
(basically allowing all constants to be represented as literals, rather
than only scalars) that would not only touch a lot more code (jaxpr
formation, jaxpr-to-jaxpr transformations, control flow, XLA lowering)
but also might affect XLA lowering right before a conference deadline
(ICLR). Plus I'm trying to make big changes in smaller steps :)
Co-authored-by: George Necula <gcnecula@gmail.com>
Changes:
- Fix unnecessary generator
- Iterate dictionary directly instead of calling .keys()
- Remove global statement at the module level
- Use list() instead of a list comprehension
- Use with statement to open the file
- Merge isinstance calls
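A few of these cleanups, sketched on hypothetical code (not the actual diff):

```python
def read_config(path):
    # with statement: the file is closed even if an exception is raised
    with open(path) as f:
        return f.read()

def is_number(x):
    # merged isinstance calls: one call with a tuple of types
    return isinstance(x, (int, float))

def first_key(d):
    # iterate the dictionary directly instead of calling .keys()
    for key in d:
        return key
```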
* improve an escaped tracer error message
Before this commit, encountering an escaped tracer in a specific way
would lead to a bad internal error. This change
1. raises an UnexpectedTracerError instead, and
2. includes in the error message the user source line which created the
tracer.
* deflake
* replace _live property with _assert_live method
Thanks @jekbradbury !
This is normally unnecessary, because the XLA translation usually
doesn't bind any of the primitives in the jaxpr, but this is not true in
case of scan! Its translation rule reevaluates the jaxpr as a function,
and if it contains collectives such as `axis_index` it can fail due to
the axis being missing.
Some of the vmap and gmap collective tests have been failing on master
and I can't seem to reproduce them locally. Hopefully, if
this happens again, this extra bit of information will be useful in
debugging the problem.
* applied simple find+sed for 'master' -> 'main'
* Rename master->main in JAX API and internals (#4178)
* Started with #4174
* Renamed Trace.master to Trace.main
* Renamed core.new_master and core.new_base_master
Co-authored-by: George Necula <gcnecula@gmail.com>
This allows executing collectives over the gmapped axes. This requires
some extra manipulation of the gmapped jaxpr, since gmap exposes a
single logical axis name, but evaluates the program using multiple
"physical" axes.
This also fixes some bugs around handling `multiple_returns` in the
vmap collective implementation.
Before this change, there were two versions, one used with omnistaging
and one without. But that made bookkeeping hard and buggy. This change
defines the axis_index_p primitive in core.py. Some of its rules are
still changed when omnistaging is enabled.
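With today's public API, `axis_index` under vmap resolves to each element's position along the named axis (the internal plumbing differed across the omnistaging transition):

```python
import jax
import jax.numpy as jnp

# Each vmapped element observes its own index along the named axis 'i'.
idx = jax.vmap(lambda _: jax.lax.axis_index('i'), axis_name='i')(jnp.zeros(3))
```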
In the original usage of TypedJaxpr, literals could not be tracers
because they were only produced by initial-style transformations of
jaxprs. But now TypedJaxpr is used in several other ways, e.g. in
make_jaxpr, and moreover its avals are redundant. It should probably be
renamed ClosedJaxpr since it mainly serves to package a jaxpr together
with its constant arrays. This check was limiting the utility of
TypedJaxpr, and it was only added relatively recently anyway.
* Add experimental __array_module__ method
xref https://github.com/google/jax/issues/1565
`__array_module__` (see [NEP 37](https://numpy.org/neps/nep-0037-array-module.html))
is an experimental alternative to `__array_function__` and `__array_ufunc__`
for "duck array" compatibility with NumPy that promises to be much less
invasive.
Example usage:
```python
import numpy as np

def duckarray_stack(arrays):
    """This "stack" function should work with any array library, including JAX."""
    npx = np.get_array_module(*arrays)
    arrays = [npx.asarray(arr) for arr in arrays]
    shapes = {arr.shape for arr in arrays}
    if len(shapes) != 1:
        raise ValueError('all input arrays must have the same shape')
    expanded_arrays = [arr[npx.newaxis, ...] for arr in arrays]
    return npx.concatenate(expanded_arrays, axis=0)
```
Support for this protocol has *not* yet been implemented in NumPy, but it can
be tested with https://github.com/seberg/numpy-dispatch.
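The dispatch mechanism can be sketched as follows; this is a simplified stand-in for NEP 37's proposed `np.get_array_module` (NumPy itself does not ship this function, and all names here are illustrative):

```python
import numpy as np

def get_array_module(*arrays, default=np):
    """Toy NEP 37 dispatcher: ask each argument's type for a module."""
    types = tuple({type(a) for a in arrays})
    for arg in arrays:
        method = getattr(type(arg), '__array_module__', None)
        if method is not None:
            module = method(arg, types)
            if module is not NotImplemented:
                return module
    return default  # plain ndarrays and scalars fall back to NumPy

class UnitArray:
    """Hypothetical duck array that routes dispatch back to NumPy."""
    def __array_module__(self, types):
        # claim dispatch only when every participating type is one we know
        if all(issubclass(t, (UnitArray, np.ndarray)) for t in types):
            return np
        return NotImplemented
```

A real duck-array library would return its own module (e.g. `jax.numpy`) instead of `np`.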
My reasoning for merging it into JAX (on an experimental basis with no
guarantees, of course) is that:
1. It's not invasive -- the implementation is small and self-contained.
2. No backwards compatibility issues. Unlike `__array_function__` and
`__array_ufunc__`, `__array_module__` will always require an explicit
opt-in by libraries that use it by calling `get_array_module()`.
3. Other NumPy developers
[want evidence](https://github.com/numpy/numpy/pull/16935#issuecomment-673951287)
that this is actually feasible.
4. Scikit-Learn developers like @thomasjpfan are interested in exploring
support for scikit-learn on top of NumPy-like libraries like JAX, and
experimental support for this protocol will make that easier.
Note: this PR does add `numpy-dispatch` as an optional testing requirement in
order to verify that this works. If desired, we could remove this from CI, but
installing numpy-dispatch (and its build requirement Cython) appears to only
add a few seconds of build time.
* don't explicitly list cython
* remove UnshapedArray from _JAX_ARRAY_TYPES
* Remove incorrect note about metaclasses
* remove unnecessary numpy_dispatch.ensure_dispatching()
This adds support for the basic (associative and commutative)
collectives to vmap. Supporting more complex collectives will
require some more complicated rules. Also, at the moment it is not
possible to use collectives inside `custom_vjp` rules, which we might
want to fix in the future.
This feature is also omnistaging-only.
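For instance, with today's public API, `psum` (a basic associative and commutative collective) can reduce over a vmapped named axis:

```python
import jax
import jax.numpy as jnp

# Normalize each element by the sum over the vmapped axis 'i'.
normalize = jax.vmap(lambda x: x / jax.lax.psum(x, 'i'), axis_name='i')
out = normalize(jnp.arange(1.0, 5.0))  # entries sum to 1
```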
Co-authored-by: Matthew Johnson <mattjj@google.com>