mirror of https://github.com/ROCm/jax.git (synced 2025-04-19 05:16:06 +00:00)

docs: more sentence case

This commit is contained in:
parent 734ebd5708
commit 09e73118bf
@@ -258,7 +258,7 @@
     "id": "oBdKtkVW8Lha"
    },
    "source": [
-    "## 🔪 In-Place Updates"
+    "## 🔪 In-place updates"
    ]
   },
   {
@@ -533,7 +533,7 @@
     "id": "oZ_jE2WAypdL"
    },
    "source": [
-    "## 🔪 Out-of-Bounds Indexing"
+    "## 🔪 Out-of-bounds indexing"
    ]
   },
   {
@@ -868,7 +868,7 @@
     "id": "MUycRNh6e50W"
    },
    "source": [
-    "## 🔪 Random Numbers"
+    "## 🔪 Random numbers"
    ]
   },
   {
@@ -888,7 +888,7 @@
     "id": "Qikt9pPW9L5K"
    },
    "source": [
-    "### RNGs and State\n",
+    "### RNGs and state\n",
     "You're used to _stateful_ pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness:"
    ]
   },
@@ -1183,7 +1183,7 @@
     "id": "rg4CpMZ8c3ri"
    },
    "source": [
-    "## 🔪 Control Flow"
+    "## 🔪 Control flow"
    ]
   },
   {
@@ -1192,7 +1192,7 @@
     "id": "izLTvT24dAq0"
    },
    "source": [
-    "### ✔ python control_flow + autodiff ✔\n",
+    "### ✔ Python control_flow + autodiff ✔\n",
     "\n",
     "If you just want to apply `grad` to your python functions, you can use regular python control-flow constructs with no problems, as if you were using [Autograd](https://github.com/hips/autograd) (or Pytorch or TF Eager)."
    ]
@@ -1231,7 +1231,7 @@
     "id": "hIfPT7WMmZ2H"
    },
    "source": [
-    "### python control flow + JIT\n",
+    "### Python control flow + JIT\n",
     "\n",
     "Using control flow with `jit` is more complicated, and by default it has more constraints.\n",
     "\n",
@@ -1791,7 +1791,7 @@
     "id": "OxLsZUyRt_kF"
    },
    "source": [
-    "## 🔪 Dynamic Shapes"
+    "## 🔪 Dynamic shapes"
    ]
   },
   {
@@ -2194,7 +2194,7 @@
     "id": "WAHjmL0E2XwO"
    },
    "source": [
-    "## 🔪 Miscellaneous Divergences from NumPy\n",
+    "## 🔪 Miscellaneous divergences from NumPy\n",
     "\n",
     "While `jax.numpy` makes every attempt to replicate the behavior of numpy's API, there do exist corner cases where the behaviors differ.\n",
     "Many such cases are discussed in detail in the sections above; here we list several other known places where the APIs diverge.\n",
@@ -158,7 +158,7 @@ iter_operand = iter(range(10))
 
 +++ {"id": "oBdKtkVW8Lha"}
 
-## 🔪 In-Place Updates
+## 🔪 In-place updates
 
 +++ {"id": "JffAqnEW4JEb"}
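For context, the section renamed above documents JAX's functional update syntax. A minimal sketch of the pattern it covers, assuming only `jax.numpy` (the array values are illustrative):

```python
import jax.numpy as jnp

x = jnp.zeros(5)
# JAX arrays are immutable, so `x[0] = 1.0` raises a TypeError.
# The functional alternative returns an updated copy instead:
y = x.at[0].set(1.0)
print(x)  # [0. 0. 0. 0. 0.] -- the original array is unchanged
print(y)  # [1. 0. 0. 0. 0.]
```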
@@ -268,7 +268,7 @@ For more details on indexed array updates, see the [documentation for the `.at`
 
 +++ {"id": "oZ_jE2WAypdL"}
 
-## 🔪 Out-of-Bounds Indexing
+## 🔪 Out-of-bounds indexing
 
 +++ {"id": "btRFwEVzypdN"}
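Similarly for context here: because JAX cannot raise errors from compiled device code, out-of-bounds array reads are clamped rather than raising as in NumPy. A small sketch of the default behavior (illustrative, not the notebook's own cell):

```python
import jax.numpy as jnp

x = jnp.arange(10)
# NumPy would raise an IndexError; JAX clamps the read index
# into the valid range instead.
print(x[11])  # 9, the last element
```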
@@ -385,7 +385,7 @@ jnp.sum(jnp.array(x))
 
 +++ {"id": "MUycRNh6e50W"}
 
-## 🔪 Random Numbers
+## 🔪 Random numbers
 
 +++ {"id": "O8vvaVt3MRG2"}
@@ -395,7 +395,7 @@ jnp.sum(jnp.array(x))
 
 +++ {"id": "Qikt9pPW9L5K"}
 
-### RNGs and State
+### RNGs and state
 You're used to _stateful_ pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness:
 
 ```{code-cell} ipython3
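The hunk above ends where the notebook's code begins. A sketch of the contrast it draws between stateful and explicit-key randomness, using the real `jax.random` API (seed values illustrative):

```python
import numpy as np
from jax import random

# Stateful, NumPy-style: hidden global state advances on each call.
np.random.seed(0)
print(np.random.uniform())

# JAX-style: randomness is a pure function of an explicit key,
# and fresh randomness comes from explicitly splitting keys.
key = random.PRNGKey(0)
key, subkey = random.split(key)
print(random.uniform(subkey))
```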
@@ -538,11 +538,11 @@ for subkey in subkeys:
 
 +++ {"id": "rg4CpMZ8c3ri"}
 
-## 🔪 Control Flow
+## 🔪 Control flow
 
 +++ {"id": "izLTvT24dAq0"}
 
-### ✔ python control_flow + autodiff ✔
+### ✔ Python control_flow + autodiff ✔
 
 If you just want to apply `grad` to your python functions, you can use regular python control-flow constructs with no problems, as if you were using [Autograd](https://github.com/hips/autograd) (or Pytorch or TF Eager).
 
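The next hunk's header points into the notebook's example of this; a sketch along the same lines (not necessarily the verbatim cell):

```python
from jax import grad

def f(x):
  # Ordinary Python branching works under grad, since grad
  # traces the function at concrete argument values.
  if x < 3:
    return 3. * x ** 2
  else:
    return -4 * x

print(grad(f)(2.))  # 12.0, via the first branch
print(grad(f)(4.))  # -4.0, via the second branch
```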
@@ -562,7 +562,7 @@ print(grad(f)(4.)) # ok!
 
 +++ {"id": "hIfPT7WMmZ2H"}
 
-### python control flow + JIT
+### Python control flow + JIT
 
 Using control flow with `jit` is more complicated, and by default it has more constraints.
 
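A sketch of the constraint that sentence refers to, with one standard workaround; `static_argnums` is real `jax.jit` API, the function is illustrative:

```python
from jax import jit

def f(x):
  if x < 3:
    return 3. * x ** 2
  else:
    return -4 * x

# jit(f)(2.) fails at trace time: under jit, x is an abstract
# tracer, so `x < 3` has no concrete True/False value.

# Workaround: mark the argument static, trading one recompile
# per distinct value for ordinary Python branching.
f_jit = jit(f, static_argnums=(0,))
print(f_jit(2.))  # 12.0
print(f_jit(4.))  # -16.0
```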
@@ -865,7 +865,7 @@ $\ast$ = argument-<b>value</b>-independent loop condition - unrolls the loop
 
 +++ {"id": "OxLsZUyRt_kF"}
 
-## 🔪 Dynamic Shapes
+## 🔪 Dynamic shapes
 
 +++ {"id": "1tKXcAMduDR1"}
@@ -1130,7 +1130,7 @@ x.dtype # --> dtype('float64')
 
 +++ {"id": "WAHjmL0E2XwO"}
 
-## 🔪 Miscellaneous Divergences from NumPy
+## 🔪 Miscellaneous divergences from NumPy
 
 While `jax.numpy` makes every attempt to replicate the behavior of numpy's API, there do exist corner cases where the behaviors differ.
 Many such cases are discussed in detail in the sections above; here we list several other known places where the APIs diverge.
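One concrete instance of such a divergence, sketched for context (behavior depends on the `jax_enable_x64` flag):

```python
import numpy as np
import jax.numpy as jnp

# NumPy defaults to 64-bit floats; JAX defaults to 32-bit
# unless the jax_enable_x64 flag is set.
print(np.arange(3.0).dtype)   # float64
print(jnp.arange(3.0).dtype)  # float32
```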