8184 Commits

Peter Hawkins
b130257ee1 Drop support for NumPy 1.16. 2021-06-11 09:03:09 -04:00
jax authors
87a533e4ea Merge pull request #6947 from gnecula:tf_call_tf
PiperOrigin-RevId: 378837807
2021-06-11 03:05:14 -07:00
George Necula
1994f6df4a [jax2tf] Fix the round-trip call_tf(convert)
Also cleaned the handling of global state in jax2tf.
2021-06-11 11:57:27 +03:00
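A minimal sketch of the round trip being fixed, assuming the public `jax2tf.convert` and `jax2tf.call_tf` entry points:

```python
import jax.numpy as jnp
from jax.experimental import jax2tf

f = lambda x: jnp.sin(x) * 2.0
# JAX -> TF (convert) -> back into a JAX-callable (call_tf).
f_roundtrip = jax2tf.call_tf(jax2tf.convert(f))
print(f_roundtrip(jnp.array(1.0)))
```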
jax authors
3d1a6a308e Merge pull request #6945 from skye:version
PiperOrigin-RevId: 378756818
jax-v0.2.14
2021-06-10 16:11:49 -07:00
jax authors
e8068c0802 Merge pull request #6943 from jakevdp:bcoo-todense-fix
PiperOrigin-RevId: 378722733
2021-06-10 13:31:09 -07:00
Skye Wanderman-Milne
063401f3ef Update jax version to 0.2.14 2021-06-10 13:15:53 -07:00
jax authors
42b540c2f4 Merge pull request #6942 from skye:tpu_driver_version
PiperOrigin-RevId: 378703719
2021-06-10 12:02:54 -07:00
Jake VanderPlas
72fe3babee bcoo_todense: fix corner case 2021-06-10 12:02:04 -07:00
Skye Wanderman-Milne
4abac4f170 Pin the tpu_driver version used for Cloud TPU Colabs, instead of using nightly.
There have been some recent breakages affecting the nightly driver,
causing JAX operations to fail on Cloud TPU Colabs. Pinning to a
specific version will alleviate these problems. This version may need
to be updated if there are breaking changes to the tpu_driver
client/server boundary, but that doesn't happen very often.
2021-06-10 10:51:01 -07:00
jax authors
d622d5c824 Merge pull request #6939 from 8bitmp3:patch-1
PiperOrigin-RevId: 378675118
2021-06-10 10:03:40 -07:00
George Necula
888db31ede [jax2tf] Fix passing of indices_are_sorted to the TF XlaGather op
PiperOrigin-RevId: 378660840
2021-06-10 08:54:17 -07:00
8bitmp3
8568aee800 Add missing backticks to fix jax2tf README Markdown rendering in "Different 64-bit precision in JAX and TensorFlow" 2021-06-10 16:09:10 +01:00
jax authors
f67acb9379 Merge pull request #6937 from mariosasko:specify-zip-safe
PiperOrigin-RevId: 378647488
2021-06-10 07:34:18 -07:00
mariosasko
55b421ff36 Specify zip_safe for mypy 2021-06-10 16:06:11 +02:00
jax authors
7540690157 Merge pull request #6897 from hawkinsp:indexunique
PiperOrigin-RevId: 378550369
2021-06-09 18:49:00 -07:00
jax authors
69f6d5e3d2 Merge pull request #6781 from lukepfister:resize_weight
PiperOrigin-RevId: 378550280
2021-06-09 18:45:49 -07:00
Peter Hawkins
1ff12f05b3 Add unique/sorted annotations to gather().
XLA itself does not consume these, but they can be propagated onto scatter() when computing gradients.

Compute unique/sorted information on indexed accesses and indexed updates. Non-advanced indexes are always sorted and unique.
2021-06-09 21:05:41 -04:00
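The new annotations surface as keyword arguments on `lax.gather`; a minimal sketch (the dimension-number setup here is illustrative, not part of this change):

```python
import jax.numpy as jnp
from jax import lax

x = jnp.arange(8.0)
idx = jnp.array([[1], [3], [5]])  # sorted and unique start indices

dnums = lax.GatherDimensionNumbers(
    offset_dims=(),             # each gathered slice is a scalar
    collapsed_slice_dims=(0,),  # collapse the length-1 slice dimension
    start_index_map=(0,),
)
out = lax.gather(x, idx, dimension_numbers=dnums, slice_sizes=(1,),
                 indices_are_sorted=True, unique_indices=True)
print(out)  # [1. 3. 5.]
```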
jax authors
7d0bda604a Merge pull request #6930 from jakevdp:union1d-size
PiperOrigin-RevId: 378530731
2021-06-09 16:51:19 -07:00
Peter Hawkins
73c47dce6e [XLA] Revert to using the textbook algorithm to construct the 2x2 Jacobi rotations in Eigendecomposition.
The current version is causing wrong outputs when the diagonal elements are exactly zero.

https://github.com/tensorflow/tensorflow/issues/50017

PiperOrigin-RevId: 378506479
2021-06-09 14:56:45 -07:00
Jake VanderPlas
79d0852145 Add optional size argument to jnp.union1d for JIT compatibility 2021-06-09 11:36:34 -07:00
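A minimal sketch of the new `size` argument: with a static `size` the output shape no longer depends on the data, so the call is traceable under `jit` (assuming unused slots are padded with an existing value):

```python
import jax
import jax.numpy as jnp

@jax.jit
def union_padded(a, b):
    # `size=6` fixes the output shape; if the true union is smaller,
    # the result is padded out to length 6.
    return jnp.union1d(a, b, size=6)

print(union_padded(jnp.array([1, 2, 3]), jnp.array([3, 4])))
```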
jax authors
2a1936e6f9 Merge pull request #6824 from jakevdp:sparse-bcoo
PiperOrigin-RevId: 378437750
2021-06-09 10:25:28 -07:00
jax authors
8362db6ef8 Merge pull request #6925 from jakevdp:nonzero-test
PiperOrigin-RevId: 378420194
2021-06-09 09:10:59 -07:00
Jake VanderPlas
69f29a631a Add experimental batched COO sparse format.
This is an implementation of a batch-friendly multidimensional COO format for JAX. It contains implementations of four primitives (bcoo_todense, bcoo_fromdense, bcoo_extract, bcoo_dot_general), as well as batching, JVP, and transpose rules for each.

For convenience, this also adds the BCOO class, a pytree wrapper around these primitives.
2021-06-09 09:10:53 -07:00
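A minimal sketch of the dense round trip, assuming the module path where BCOO now lives (`jax.experimental.sparse`):

```python
import jax.numpy as jnp
from jax.experimental import sparse

M = jnp.array([[0., 1., 0.],
               [2., 0., 3.]])
M_sp = sparse.BCOO.fromdense(M)  # bcoo_fromdense under the hood
print(M_sp.todense())            # bcoo_todense: back to the dense array
```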
Marc van Zee
b749e78d2c Adds support for enable_xla=False for shape polymorphism tests and adds such tests for dynamic_slice.
It turned out that, in jax2tf._dynamic_slice, tf.constant doesn't work with polymorphic shapes, so I replaced it with a tf.cast.

PiperOrigin-RevId: 378392273
2021-06-09 06:35:07 -07:00
jax authors
86d2da44c0 Merge pull request #6919 from marcvanzee:patch-3
PiperOrigin-RevId: 378352041
2021-06-09 01:53:27 -07:00
jax authors
aac8d7434c Merge pull request #6860 from gnecula:tf_source
PiperOrigin-RevId: 378351073
2021-06-09 01:45:19 -07:00
George Necula
59ae45a83c [jax2tf] Add support for generating HLO OpMetadata in the TF graph
The goal is to ensure that the HLO that
jax2tf->TF/XLA generates has the same metadata as what JAX generates.
This includes `op_type`, `op_name`, and source information, which are
used for debugging and profiling.

In order to ensure that this metadata is carried from the JAX tracing
time to TF/XLA, we save the metadata in custom TF op attributes. These
attributes are automatically preserved through SavedModel. This relies
on a separate change in TF/XLA to look for these custom attributes
and override its default metadata.

For the source information, we use pretty much the same code that
xla.py uses. HLO OpMetadata has room for only one source location.
JAX (xla.py) picks the top-most user frame, which is obtained by
filtering out the stack frames in the JAX source tree. When used
with jax2tf we also need to filter out stack frames in the
TensorFlow source tree.

The hardest part is to generate the `op_name`, which is a hierarchical
name with components separated by '/', e.g., `jax2tf(top_func)/while/cond/le`.
We carry the current `name_stack` in thread-local state. Unfortunately, there
is no easy way to share the exact code that achieves this in xla.py. At the
same time it is not crucial that we have exactly identical name stacks as in
JAX.

I attempted to also carry this state in the JAX `MainTrace`, but could not
fully control the name stack. E.g., when calling a jitted function we
have to reuse the current `MainTrace` although we want to push an element
on the name stack.

For now this option is disabled, until we make the necessary
changes in TensorFlow.
2021-06-09 08:08:42 +02:00
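For context, a minimal sketch of the conversion path this metadata flows through (standard `jax2tf.convert` usage; the metadata itself is attached internally):

```python
import jax.numpy as jnp
import tensorflow as tf
from jax.experimental import jax2tf

f_jax = lambda x: jnp.sin(jnp.cos(x))
# The converted function carries the custom TF op attributes described
# above and can be wrapped in tf.function and saved as a SavedModel.
f_tf = tf.function(jax2tf.convert(f_jax), autograph=False)
print(f_tf(tf.constant(1.0)))
```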
Jake VanderPlas
0f4f4102ce Add more complete test for jnp.nonzero size argument 2021-06-08 16:40:53 -07:00
jax authors
30b00095a9 Merge pull request #6915 from jakevdp:argwhere-size
PiperOrigin-RevId: 378275722
2021-06-08 16:39:57 -07:00
jax authors
c6d389387e Merge pull request #6926 from jakevdp:fix-random-validation
PiperOrigin-RevId: 378271745
2021-06-08 16:21:18 -07:00
Jake VanderPlas
f97e2f945f jnp.argwhere: add optional size parameter for JIT compatibility 2021-06-08 16:17:37 -07:00
jax authors
0383d7b312 Merge pull request #6910 from jakevdp:where-size
PiperOrigin-RevId: 378270750
2021-06-08 16:16:13 -07:00
Jake VanderPlas
022464b91b jnp.where: add optional size argument 2021-06-08 15:53:12 -07:00
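A minimal sketch of the one-argument form with the new `size` argument:

```python
import jax
import jax.numpy as jnp

@jax.jit
def first_three_positive(x):
    # Without `size`, the output shape would depend on the values in x,
    # which jit cannot trace; `size=3` makes the shape static.
    (idx,) = jnp.where(x > 0, size=3)
    return idx

print(first_three_positive(jnp.array([0, 5, 0, 7, 2])))  # [1 3 4]
```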
Allen Lavoie
e7fe44e9fd Fix jax2tf._is_tfval after tf.constant placement changes
complex128 isn't supported on TPUs in TF, and tf.constant now places on TPU by default, so _is_tfval saw the resulting exception and assumed the value wasn't convertible to a TF type.

PiperOrigin-RevId: 378240447
2021-06-08 14:06:22 -07:00
jax authors
17c2d8a50b Merge pull request #6913 from jakevdp:flatnonzero-size
PiperOrigin-RevId: 378235152
2021-06-08 13:43:55 -07:00
Jake VanderPlas
119c9bc0dd jax.random: improve input validation (fixes #6922) 2021-06-08 13:37:21 -07:00
Jake VanderPlas
1296dc3f1e jnp.flatnonzero: add optional size argument for JIT compatibility 2021-06-08 13:16:51 -07:00
jax authors
72cd6d0072 Merge pull request #6912 from jakevdp:jittable-unique
PiperOrigin-RevId: 378217815
2021-06-08 12:34:24 -07:00
jax authors
d38def4660 Merge pull request #6923 from jakevdp:gamma-doc
PiperOrigin-RevId: 378217732
2021-06-08 12:31:06 -07:00
jax authors
648b5d3265 Merge pull request #6066 from apaszke:xmap-no-mesh-slicing
PiperOrigin-RevId: 378209333
2021-06-08 11:54:05 -07:00
Jake VanderPlas
d198ad0ac1 jnp.unique: add optional size argument for JIT compatibility 2021-06-08 11:31:42 -07:00
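A minimal sketch, assuming the padding convention (by default, padded entries repeat the minimum value):

```python
import jax
import jax.numpy as jnp

@jax.jit
def unique_padded(x):
    # `size=5` gives a static output shape; the unique values are
    # padded out to length 5.
    return jnp.unique(x, size=5)

print(unique_padded(jnp.array([3, 1, 3, 2, 1])))
```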
Jake VanderPlas
22dbe80255 DOC: state that digamma only accepts float 2021-06-08 10:47:27 -07:00
jax authors
3927946456 Merge pull request #6920 from apaszke:xmap-loops
PiperOrigin-RevId: 378147047
2021-06-08 07:15:57 -07:00
Adam Paszke
54ba051631 Always run xmap over all mesh axes
Automatic mesh slicing might be surprising, and can always be
performed manually.
2021-06-08 13:36:13 +00:00
Adam Paszke
490f9778c8 Raise a friendlier error message when using loop axes in collectives 2021-06-08 11:55:03 +00:00
Marc van Zee
80e69d456e Update README.md 2021-06-08 07:21:40 +02:00
jax authors
7a3a160b26 Merge pull request #6869 from colemanliyah:file_system_cache
PiperOrigin-RevId: 378012890
2021-06-07 14:59:49 -07:00
Peter Hawkins
e9611eb090 Move jax.ad_util to jax._src.ad_util.
Expose ad_util.stop_gradient_p as jax.lax.stop_gradient_p. stop_gradient() is already under the external lax namespace.

PiperOrigin-RevId: 378011152
2021-06-07 14:51:34 -07:00
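A minimal sketch of the already-public wrapper around this primitive:

```python
import jax
import jax.numpy as jnp

def f(x):
    # stop_gradient blocks differentiation through its argument.
    return jnp.sum(jax.lax.stop_gradient(x) * x)

print(jax.grad(f)(jnp.array([1.0, 2.0])))  # [1. 2.], not [2. 4.]
```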
jax authors
7591e16c98 Merge pull request #6917 from hawkinsp:token
PiperOrigin-RevId: 377997455
2021-06-07 13:50:56 -07:00
jax authors
737efb5908 Merge pull request #6916 from jakevdp:doc-typo
PiperOrigin-RevId: 377990039
2021-06-07 13:19:26 -07:00