409 Commits

Author SHA1 Message Date
Jacques Pienaar
1ea7ced7ee
[mlir][py] Enable disabling loading all registered (#117643)
There is a pending TODO about whether to always load eagerly or not. Make this
behavior optional and give control to the user in a backwards-compatible
manner. It is kept optional, with the backwards-compatible behavior as the
default, since there were arguments for both forms.
2024-11-25 15:39:55 -08:00
Ingo Müller
56feea7307
[mlir][python] Update minimal version of pybind11 to 2.10. (#117314)
This PR updates the minimum required version of pybind11 from 2.9.0 to
2.10.0. The new version is almost 2.5 years old, about half a year younger
than the previous minimum. This change is necessary to support the changes
introduced in #115307, which do not compile with pybind11 v2.9.

Signed-off-by: Ingo Müller <ingomueller@google.com>
2024-11-23 12:48:59 +01:00
annuasd
47ef5c4b7f
[mlir][Bindings] Fix missing return value of functions and incorrect type hint in pyi. (#116731)
The zero points of UniformQuantizedPerAxisType should be List[int].
Also, two methods were missing return values.

Co-authored-by: 牛奕博 <niuyibo@niuyibodeMacBook-Pro.local>
2024-11-19 15:24:39 -06:00
Matthias Springer
e17c91341b
[mlir][python] Add T.tf32 and missing tests for tf32 (#116725) 2024-11-19 11:00:35 +09:00
Jinyun (Joey) Ye
618f231a6d
[MLIR][Transform] Consolidate result of structured.split into one list (#111171)
Follow-up to a review comment from
https://github.com/llvm/llvm-project/pull/82792#discussion_r1604925239,
split out as a separate PR:

	E.g.:
	```
	%0:2 = transform.structured.split
	```
	is changed to
	```
	%t = transform.structured.split
	%0:2 = transform.split_handle %t
	```
2024-11-15 10:53:34 +08:00
Amy Wang
d50fbe43c9
[MLIR][Python] Python binding support for AffineIfOp (#108323)
Fix the AffineIfOp's default builder so that it takes in an
IntegerSetAttr. AffineIfOp has skipDefaultBuilders=1, which effectively
skips the creation of the default AffineIfOp::builder on the C++ side.
(AffineIfOp has two custom OpBuilders defined in the
extraClassDeclaration.) However, on the Python side, _affine_ops_gen.py
shows that the default builder is still being created, but it does not accept
an IntegerSet and is thus useless. This fix at line 411 makes the default
Python AffineIfOp builder take in an IntegerSet input and does not
impact the C++ side of things.
2024-11-13 16:27:46 -05:00
Ingo Müller
2a448da6e6
[mlir][python] Make types in register_(dialect|operation) more narrow. (#115307)
This PR makes the `pyClass`/`dialectClass` arguments of the pybind11
functions `register_dialect` and `register_operation`, as well as their
return types, narrower: concretely, a `py::type` instead of a
`py::object`. As the names of the arguments indicate, they have to be
called with a type instance (a "class"). The PR also updates the typing
stubs of these functions (in the corresponding `.pyi` file), such that
static type checkers are aware of the changed type. With the previous
typing information, `pyright` raised errors on code generated by
tablegen.

Signed-off-by: Ingo Müller <ingomueller@google.com>
2024-11-11 09:26:15 +01:00
stefankoncarevic
39358f846d
[mlir][linalg] Add Grouped Convolution Ops: conv_2d_nhwgc_gfhwc and conv_2d_nhwgc_gfhwc_q (#108192)
This patch adds two new ops: linalg::Conv2DNhwgcGfhwcOp and
linalg::Conv2DNhwgcGfhwcQOp, and uses them to convert tosa group conv2d
Ops.
- Added linalg::Conv2DNhwgcGfhwcOp and linalg::Conv2DNhwgcGfhwcQOp.
- Updated the conversion process to use these new ops for tosa group
conv2d operations.
2024-11-08 09:23:17 -08:00
Md Asghar Ahmad Shahid
3ad0148020
[MLIR][Linalg] Re-land linalg.matmul move to ODS. + Remove/update failing obsolete OpDSL tests. (#115319)
The earlier PR (https://github.com/llvm/llvm-project/pull/104783), which
introduced transpose and broadcast semantics to linalg.matmul, was reverted
due to two failing OpDSL tests for linalg.matmul.

Since linalg.matmul is now defined using TableGen ODS instead of
Python-based OpDSL, these tests started failing and needed to be
removed/updated.

This commit removes/updates the failing obsolete tests in the files below.
All other changes were part of the earlier PR and are just cherry-picked.
    "mlir/test/python/integration/dialects/linalg/opsrun.py"
    "mlir/test/python/integration/dialects/transform.py"

---------

Co-authored-by: Renato Golin <rengolin@systemcall.eu>
2024-11-07 14:51:02 +00:00
Marius Brehler
f5e6c8e0b7
[mlir][python] Raise maximum allowed version (#114050)
Raises the maximum allowed versions to more recent releases, a basic
enabler for installing the packages in a venv using Python 3.13.
2024-10-31 17:39:26 +01:00
Felix Schneider
02bf3b54c0
[mlir][linalg] Add quantized conv2d operator with FCHW,NCHW order (#107740)
This patch adds a quantized version of the `linalg.conv2d_nchw_fchw` Op.
This is the "channel-first" ordering typically used by PyTorch and
others.
2024-10-19 18:25:27 +02:00
Andrzej Warzyński
a758bcdbd9
[mlir][td] Rename pack_paddings in structured.pad (#111036)
The pack_paddings attribute in the structured.pad TD op is used to set
the `nofold` attribute in the generated tensor.pad op. The current name
is confusing and suggests a relation with the tensor.pack op. This patch
renames it to `nofold_flags` to better match the actual usage.
2024-10-15 19:24:43 +01:00
Emilio Cota
1276ce9e97 Revert "[mlir][linalg] Introduce transpose semantic to 'linalg.matmul' ops. (#104783)"
This reverts commit 03483737a7a2d72a257a5ab6ff01748ad9cf0f75 and
99c8557, which is a fix-up on top of the former.

I'm reverting because this commit broke two tests:
  mlir/test/python/integration/dialects/linalg/opsrun.py
  mlir/test/python/integration/dialects/transform.py
See https://lab.llvm.org/buildbot/#/builders/138/builds/4872

I'm not familiar with the tests, so I'm leaving it to the original author
to either remove or adapt the broken tests, as discussed here:
  https://github.com/llvm/llvm-project/pull/104783#issuecomment-2406390905
2024-10-11 05:22:56 -04:00
Md Asghar Ahmad Shahid
03483737a7
[mlir][linalg] Introduce transpose semantic to 'linalg.matmul' ops. (#104783)
The main goal of this patch is to extend the semantics of the 'linalg.matmul'
named op to include per-operand transpose semantics, while also laying out
a way to move op definitions from OpDSL to TableGen. Hence, it is
implemented in TableGen. The transpose semantics are as follows.

By default, 'linalg.matmul' behavior remains as is. Transpose
semantics can be applied per input operand by explicitly specifying the
optional permutation attributes (namely 'permutationA' for the 1st input
and 'permutationB' for the 2nd input) for each operand as needed. By
default, no transpose is mandated for any input operand.

    Example:
    ```
    %val = linalg.matmul ins(%arg0, %arg1 : memref<5x3xf32>, memref<5x7xf32>)
                         outs(%arg2: memref<3x7xf32>)
                         permutationA = [1, 0]
                         permutationB = [0, 1]
    ```
2024-10-10 17:00:58 +01:00
Sergey Kozub
3f9cabae00
[MLIR] Add f8E8M0FNU type (#111028)
This PR adds `f8E8M0FNU` type to MLIR.

`f8E8M0FNU` type is proposed in [OpenCompute MX
Specification](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
It defines an 8-bit floating point number with bit layout S0E8M0. Unlike
IEEE-754 types, there are no infinities, denormals, zeros, or negative
values.

```c
f8E8M0FNU
- Exponent bias: 127
- Maximum stored exponent value: 254 (binary 1111'1110)
- Maximum unbiased exponent value: 254 - 127 = 127
- Minimum stored exponent value: 0 (binary 0000'0000)
- Minimum unbiased exponent value: 0 − 127 = -127
- Doesn't have zero
- Doesn't have infinity
- NaN is encoded as binary 1111'1111

Additional details:
- Zeros cannot be represented
- Negative values cannot be represented
- Mantissa is always 1
```
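
A minimal Python sketch of using the new type from the bindings; the class
name `Float8E8M0FNUType` is an assumption that mirrors the C++ type name.

```python
from mlir import ir

with ir.Context():
    # Assumed binding name, mirroring the C++ Float8E8M0FNUType.
    f8 = ir.Float8E8M0FNUType.get()
    vec = ir.VectorType.get([32], f8)  # usable as an element type
    print(vec)  # e.g. vector<32xf8E8M0FNU>
```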

Related PRs:
- [PR-107127](https://github.com/llvm/llvm-project/pull/107127)
[APFloat] Add APFloat support for E8M0 type
- [PR-105573](https://github.com/llvm/llvm-project/pull/105573) [MLIR]
Add f6E3M2FN type - was used as a template for this PR
- [PR-107999](https://github.com/llvm/llvm-project/pull/107999) [MLIR]
Add f6E2M3FN type
- [PR-108877](https://github.com/llvm/llvm-project/pull/108877) [MLIR]
Add f4E2M1FN type
2024-10-04 09:23:12 +02:00
Mateusz Sokół
a9746675a5
[MLIR][Python] Add encoding argument to tensor.empty Python function (#110656)
Hi @xurui1995 @makslevental,

I think https://github.com/llvm/llvm-project/pull/103087 introduced an
unintended regression where users can no longer create sparse tensors
with `tensor.empty`.

Previously I could pass:
```python
out = tensor.empty(tensor_type, [])
```
where `tensor_type` contained `shape`, `dtype`, and `encoding`.

With the latest 
```python
tensor.empty(sizes: Sequence[Union[int, Value]], element_type: Type, *, loc=None, ip=None)
```
it's no longer possible.

I propose adding an `encoding` argument which is passed to
`RankedTensorType.get(static_sizes, element_type, encoding)` (I updated
one of the tests to check it).
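
A minimal sketch of the proposed usage, assuming the keyword is named
`encoding` as described above; the sparse encoding attribute below is only
illustrative.

```python
from mlir import ir
from mlir.dialects import tensor

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    f32 = ir.F32Type.get()
    with ir.InsertionPoint(module.body):
        # Dense case: behavior unchanged.
        dense = tensor.empty((4, 8), f32)
        # Sparse case: the encoding is forwarded to RankedTensorType.get.
        enc = ir.Attribute.parse(
            "#sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>"
        )
        sparse = tensor.empty((4, 8), f32, encoding=enc)
```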
2024-10-01 16:48:00 -04:00
Sergei Lebedev
91ef1f7caa
A few tweaks to the MLIR .pyi files (#110488) 2024-10-01 07:49:18 -04:00
Sergey Kozub
2c58063435
[MLIR] Add f4E2M1FN type (#108877)
This PR adds `f4E2M1FN` type to mlir.

`f4E2M1FN` type is proposed in [OpenCompute MX
Specification](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
It defines a 4-bit floating point number with bit layout S1E2M1. Unlike
IEEE-754 types, there are no infinity or NaN values.

```c
f4E2M1FN
- Exponent bias: 1
- Maximum stored exponent value: 3 (binary 11)
- Maximum unbiased exponent value: 3 - 1 = 2
- Minimum stored exponent value: 1 (binary 01)
- Minimum unbiased exponent value: 1 − 1 = 0
- Has Positive and Negative zero
- Doesn't have infinity
- Doesn't have NaNs

Additional details:
- Zeros (+/-): S.00.0
- Max normal number: S.11.1 = ±2^(2) x (1 + 0.5) = ±6.0
- Min normal number: S.01.0 = ±2^(0) = ±1.0
- Min subnormal number: S.00.1 = ±2^(0) x 0.5 = ±0.5
```

Related PRs:
- [PR-95392](https://github.com/llvm/llvm-project/pull/95392) [APFloat]
Add APFloat support for FP4 data type
- [PR-105573](https://github.com/llvm/llvm-project/pull/105573) [MLIR]
Add f6E3M2FN type - was used as a template for this PR
- [PR-107999](https://github.com/llvm/llvm-project/pull/107999) [MLIR]
Add f6E2M3FN type
2024-09-24 08:22:48 +02:00
Bimo
f8eceb45d0
[MLIR] [Python] align python ir printing with mlir-print-ir-after-all (#107522)
When using the `enable_ir_printing` API from Python, it invokes IR
printing with default args, printing the IR before each pass and printing
the IR after a pass only if there have been changes. This PR aligns the
`enable_ir_printing` API with the documentation.
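
A minimal sketch of the API in question; the pipeline and module are
placeholders.

```python
from mlir import ir
from mlir.passmanager import PassManager

with ir.Context():
    module = ir.Module.parse("func.func @f() { return }")
    pm = PassManager.parse("builtin.module(canonicalize)")
    # After this change, the defaults are meant to match -mlir-print-ir-after-all,
    # i.e. the IR is printed after each pass rather than only when it changed.
    pm.enable_ir_printing()
    pm.run(module.operation)
```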
2024-09-18 11:54:16 +08:00
Sergey Kozub
73d83f20c9
[MLIR] Add f6E2M3FN type (#107999)
This PR adds `f6E2M3FN` type to mlir.

`f6E2M3FN` type is proposed in [OpenCompute MX
Specification](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
It defines a 6-bit floating point number with bit layout S1E2M3. Unlike
IEEE-754 types, there are no infinity or NaN values.

```c
f6E2M3FN
- Exponent bias: 1
- Maximum stored exponent value: 3 (binary 11)
- Maximum unbiased exponent value: 3 - 1 = 2
- Minimum stored exponent value: 1 (binary 01)
- Minimum unbiased exponent value: 1 − 1 = 0
- Has Positive and Negative zero
- Doesn't have infinity
- Doesn't have NaNs

Additional details:
- Zeros (+/-): S.00.000
- Max normal number: S.11.111 = ±2^(2) x (1 + 0.875) = ±7.5
- Min normal number: S.01.000 = ±2^(0) = ±1.0
- Max subnormal number: S.00.111 = ±2^(0) x 0.875 = ±0.875
- Min subnormal number: S.00.001 = ±2^(0) x 0.125 = ±0.125
```

Related PRs:
- [PR-94735](https://github.com/llvm/llvm-project/pull/94735) [APFloat]
Add APFloat support for FP6 data types
- [PR-105573](https://github.com/llvm/llvm-project/pull/105573) [MLIR]
Add f6E3M2FN type - was used as a template for this PR
2024-09-16 21:09:27 +02:00
Amy Wang
334873fe2d
[MLIR][Python] Python binding support for IntegerSet attribute (#107640)
Support the IntegerSet attribute Python binding.
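
A minimal sketch, assuming the new attribute binding is exposed as
`IntegerSetAttr.get` on top of the existing `IntegerSet` bindings.

```python
from mlir import ir

with ir.Context():
    d0 = ir.AffineDimExpr.get(0)
    s0 = ir.AffineSymbolExpr.get(0)
    # The set { d0 : d0 - s0 == 0 }.
    iset = ir.IntegerSet.get(1, 1, [d0 - s0], [True])
    attr = ir.IntegerSetAttr.get(iset)  # assumed API added by this commit
    print(attr)
```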
2024-09-11 07:37:35 -04:00
Sergey Kozub
918222ba43
[MLIR] Add f6E3M2FN type (#105573)
This PR adds `f6E3M2FN` type to mlir.

`f6E3M2FN` type is proposed in [OpenCompute MX
Specification](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
It defines a 6-bit floating point number with bit layout S1E3M2. Unlike
IEEE-754 types, there are no infinity or NaN values.

```c
f6E3M2FN
- Exponent bias: 3
- Maximum stored exponent value: 7 (binary 111)
- Maximum unbiased exponent value: 7 - 3 = 4
- Minimum stored exponent value: 1 (binary 001)
- Minimum unbiased exponent value: 1 − 3 = −2
- Has Positive and Negative zero
- Doesn't have infinity
- Doesn't have NaNs

Additional details:
- Zeros (+/-): S.000.00
- Max normal number: S.111.11 = ±2^(4) x (1 + 0.75) = ±28
- Min normal number: S.001.00 = ±2^(-2) = ±0.25
- Max subnormal number: S.000.11 = ±2^(-2) x 0.75 = ±0.1875
- Min subnormal number: S.000.01 = ±2^(-2) x 0.25 = ±0.0625
```

Related PRs:
- [PR-94735](https://github.com/llvm/llvm-project/pull/94735) [APFloat]
Add APFloat support for FP6 data types
- [PR-97118](https://github.com/llvm/llvm-project/pull/97118) [MLIR] Add
f8E4M3 type - was used as a template for this PR
2024-09-10 10:41:05 +02:00
Matt Hofmann
ad89e617c7
[MLIR][Python] Fix detached operation coming from IfOp constructor (#107286)
Without this fix, `scf.if` operations would be created without a parent.
Since `scf.if` operations often have no results, this caused silent bugs
where the generated code was straight-up missing the operation.
2024-09-05 00:12:03 -04:00
Kasper Nielsen
3766ba44a8
[mlir][python] Fix how the mlir variadic Python accessor _ods_equally_sized_accessor is used (#101132) (#106003)
As reported in https://github.com/llvm/llvm-project/issues/101132, this
fixes two bugs:

1. When accessing variadic operands inside an operation, they must be
accessed as `self.operation.operands` instead of `operation.operands`.
2. The implementation of the `equally_sized_accessor` function does the
wrong arithmetic when calculating the resulting index and group sizes.

I have added a test for the `equally_sized_accessor` function, which did
not have a test previously.
2024-08-31 03:17:33 -04:00
PhrygianGates
c8cac33ad2
[MLIR][Python] add f8E5M2 and tests for np_to_memref (#106028)
add f8E5M2 and tests for np_to_memref

---------

Co-authored-by: Zhicheng Xiong <zhichengx@dc2-sim-c01-215.nvidia.com>
2024-08-26 21:40:38 -05:00
Christopher Bate
92e00af432
[mlir] NFC: add missing 'FloatType' to core Python stub file (#105554)
The stub class for `FloatType` is present in `ir.pyi`, but it is missing
from the `__all__` export list.
2024-08-25 18:51:22 -06:00
Bimo
f5f0055a77
[MLIR][Python] remove unused init python file (#104890)
remove unused `__init__.py` under `mlir/python/mlir/extras`
2024-08-20 16:13:08 +08:00
Bimo
4eefc8d4ce
[MLIR][Python] enhance python api for tensor.empty (#103087)
Since we have extended `EmptyOp`, maybe we should also provide a
corresponding `tensor.empty` method. In the downstream usage, I tend to
use APIs with all lowercase letters to create ops, so having a
`tensor.empty` to replace the extended `tensor.EmptyOp` would keep my
code style consistent.
2024-08-19 09:06:48 +08:00
zhicong zhong
558d7adaae
[mlir][linalg] fix linalg.batch_reduce_matmul auto cast (#102585)
Fix the auto-cast of `linalg.batch_reduce_matmul` from `cast_to_T(A *
cast_to_T(B)) + C` to `cast_to_T(A) * cast_to_T(B) + C`.
2024-08-12 14:37:57 +08:00
Nhat Nguyen
d60008d861
[mlir] [python] Update PyYAML minimum version to 5.4 and limit ml_dtypes to 0.4.0 (#102178)
PyYAML 5.3.1 has a security vulnerability as described here:
https://nvd.nist.gov/vuln/detail/CVE-2020-14343. Update the minimum
PyYAML version to 5.4. Also limit ml_dtypes version to 0.4.0.
2024-08-07 08:24:15 -07:00
Alexander Pivovarov
eef1d7e377
[MLIR] Add f8E3M4 IEEE 754 type (#101230)
This PR adds `f8E3M4` type to mlir.

`f8E3M4` type follows the IEEE 754 convention.

```c
f8E3M4 (IEEE 754)
- Exponent bias: 3
- Maximum stored exponent value: 6 (binary 110)
- Maximum unbiased exponent value: 6 - 3 = 3
- Minimum stored exponent value: 1 (binary 001)
- Minimum unbiased exponent value: 1 − 3 = −2
- Precision specifies the total number of bits used for the significand (mantissa), 
    including implicit leading integer bit = 4 + 1 = 5
- Follows IEEE 754 conventions for representation of special values
- Has Positive and Negative zero
- Has Positive and Negative infinity
- Has NaNs

Additional details:
- Max exp (unbiased): 3
- Min exp (unbiased): -2
- Infinities (+/-): S.111.0000
- Zeros (+/-): S.000.0000
- NaNs: S.111.{0,1}⁴ except S.111.0000
- Max normal number: S.110.1111 = +/-2^(6-3) x (1 + 15/16) = +/-2^3 x 31 x 2^(-4) = +/-15.5
- Min normal number: S.001.0000 = +/-2^(1-3) x (1 + 0) = +/-2^(-2)
- Max subnormal number: S.000.1111 = +/-2^(-2) x 15/16 = +/-2^(-2) x 15 x 2^(-4) = +/-15 x 2^(-6)
- Min subnormal number: S.000.0001 = +/-2^(-2) x 1/16 =  +/-2^(-2) x 2^(-4) = +/-2^(-6)
```

Related PRs:
- [PR-99698](https://github.com/llvm/llvm-project/pull/99698) [APFloat]
Add support for f8E3M4 IEEE 754 type
- [PR-97118](https://github.com/llvm/llvm-project/pull/97118) [MLIR] Add
f8E4M3 IEEE 754 type
2024-08-02 00:22:11 -07:00
Bimo
5ef087b705
Reapply "[MLIR][Python] add ctype python binding support for bf16" (#101271)
Reapply the PR, which was reverted due to build-bot failures; the bots
have now been updated.
https://discourse.llvm.org/t/need-a-help-with-the-built-bots/79437
original PR: https://github.com/llvm/llvm-project/pull/92489, reverted
in https://github.com/llvm/llvm-project/pull/93771
2024-07-31 10:24:27 +02:00
Alexander Pivovarov
019136e30f
[MLIR] Add f8E4M3 IEEE 754 type (#97118)
This PR adds `f8E4M3` type to mlir.

`f8E4M3` type follows the IEEE 754 convention.

```c
f8E4M3 (IEEE 754)
- Exponent bias: 7
- Maximum stored exponent value: 14 (binary 1110)
- Maximum unbiased exponent value: 14 - 7 = 7
- Minimum stored exponent value: 1 (binary 0001)
- Minimum unbiased exponent value: 1 − 7 = −6
- Precision specifies the total number of bits used for the significand (mantissa),
    including the implicit leading integer bit = 3 + 1 = 4
- Follows IEEE 754 conventions for representation of special values
- Has Positive and Negative zero
- Has Positive and Negative infinity
- Has NaNs

Additional details:
- Max exp (unbiased): 7
- Min exp (unbiased): -6
- Infinities (+/-): S.1111.000
- Zeros (+/-): S.0000.000
- NaNs: S.1111.{001, 010, 011, 100, 101, 110, 111}
- Max normal number: S.1110.111 = +/-2^(7) x (1 + 0.875) = +/-240
- Min normal number: S.0001.000 = +/-2^(-6)
- Max subnormal number: S.0000.111 = +/-2^(-6) x 0.875 = +/-2^(-9) x 7
- Min subnormal number: S.0000.001 = +/-2^(-6) x 0.125 = +/-2^(-9)
```

Related PRs:
- [PR-97179](https://github.com/llvm/llvm-project/pull/97179) [APFloat]
Add support for f8E4M3 IEEE 754 type
2024-07-22 23:20:28 -07:00
Renato Golin
de3e9d4138
[MLIR][Linalg] Fix named structured ops yaml file (#98865)
Added missing reciprocal to Python file and fixed ErfOp name in yaml
file. Now running the bash script yields the same output.
2024-07-15 09:30:56 +01:00
Renato Golin
80e61e3842
Remove redundant linalg.matmul_signed (#98615)
`linalg.matmul` already has an attribute for casts, which defaults to signed
but allows unsigned, so the operation `linalg.matmul_unsigned` is
redundant. The generalization test has an example of how to lower to an
unsigned matmul in linalg.

This is the first PR in a list of many that will simplify the linalg
operations by using similar attributes.

Ref:
https://discourse.llvm.org/t/rfc-transpose-attribute-for-linalg-matmul-operations/80092
2024-07-13 14:46:44 +01:00
Bimo
bfa762a5a5
[MLIR][Python] fix class name of powf and negf in linalg (#97696)
The following logic can lead to a class name mismatch when using
`linalg.powf` in Python. This PR fixes the issue and also renames
`NegfOp` to `NegFOp` in linalg to adhere to the naming convention, as
exemplified by `arith::NegFOp`.


173514d58e/mlir/python/mlir/dialects/linalg/opdsl/lang/dsl.py (L140-L143)
```
# linalg.powf(arg0, arg1, outs=[init_result.result])
NotImplementedError: Unknown named op_name / op_class_name: powf / PowfOp
```
2024-07-05 09:23:12 +08:00
Alexander Pivovarov
2628a5fd24
Rename f8E4M3 to f8E4M3FN in mlir.extras.types py package (#97102)
Currently `f8E4M3` is mapped to `Float8E4M3FNType`.

This PR renames `f8E4M3` to `f8E4M3FN` to accurately reflect the actual
type.

This PR is needed to avoid names conflict in upcoming PR which will add
IEEE 754 `Float8E4M3Type`.
https://github.com/llvm/llvm-project/pull/97118 Add f8E4M3 IEEE 754 type

Maksim, can you review this PR? @makslevental ?
2024-06-29 12:48:11 -07:00
muneebkhan85
a9efcbf490
[MLIR] Add continuous tiling to transform dialect (#82792)
This patch enables continuous tiling of a target structured op using
diminishing tile sizes. In cases where the tensor dimensions are not
exactly divisible by the tile size, we are left with leftover tensor
chunks that are irregularly tiled. This approach enables tiling of the
leftover chunk with a smaller tile size and repeats this process
recursively using exponentially diminishing tile sizes. This eventually
generates a chain of loops that apply tiling using diminishing tile
sizes.

Adds `continuous_tile_sizes` op to the transform dialect. This op, when
given a tile size and a dimension, computes a series of diminishing tile
sizes that can be used to tile the target along the given dimension.
Additionally, this op also generates a series of chunk sizes that the
corresponding tile sizes should be applied to along the given dimension.

Adds `multiway` attribute to `transform.structured.split` that enables
multiway splitting of a single target op along the given dimension, as
specified in a list enumerating the chunk sizes.
2024-06-21 16:39:43 +02:00
Jonas Rickert
abad8455ab
[mlir] Expose skipRegions option for Op printing in the C and Python bindings (#96150)
The MLIR C and Python bindings expose various methods from
`mlir::OpPrintingFlags`. This PR adds a binding for the `skipRegions`
method, which allows skipping the printing of regions when printing ops.
It also exposes this option as a parameter in the Python `get_asm` and
`print` methods.
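
A minimal sketch; the Python keyword name `skip_regions` is assumed from the
option described above.

```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse("func.func @f() { return }")
    func_op = module.body.operations[0]
    # Print the function with its region body elided.
    print(func_op.get_asm(skip_regions=True))
```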
2024-06-20 10:15:08 -05:00
Maksim Levental
91175313d4
[MLIR][python] include Rewrite.h (#95226) 2024-06-12 07:17:13 -05:00
Jacques Pienaar
18cf1cd92b
[mlir] Add PDL C & Python usage (#94714)
Following a rather direct approach to expose PDL usage from C and then
Python. This doesn't yet plumb through support for adding custom
matchers through this interface, so it is constrained to the basics initially.

This also exposes the greedy rewrite driver. Currently the only way to define
patterns is via PDL (just to keep it small). The creation of the PDL
pattern module could be improved to avoid folks potentially accessing
the module used to construct it post-construction. No ergonomic work
done yet.

---------

Signed-off-by: Jacques Pienaar <jpienaar@google.com>
2024-06-11 07:45:12 -07:00
Sergei Lebedev
bc5ced54cc
Updated the annotations of Python bindings (#92733) 2024-06-11 08:46:43 -05:00
Egor Ospadov
367d50278a
[mlir][python] Fix attribute registration in ir.py (#94615)
This PR fixes attribute registration for `SI8Attr` and `UI8Attr` in
`ir.py`.
2024-06-09 22:06:46 -05:00
Mehdi Amini
e6821dd8c8
Revert "[MLIR][Python] add ctype python binding support for bf16" (#93771)
Reverts llvm/llvm-project#92489

This broke the bots.
2024-05-29 23:21:04 -06:00
Bimo
89801c74c3
[MLIR][Python] add ctype python binding support for bf16 (#92489)
Since bf16 is supported by MLIR, similar to
complex128/complex64/float16, we need an implementation of a bf16 ctype in
the Python bindings. Furthermore, to work around the absence of bf16 support
in NumPy, the third-party package
[ml_dtypes](https://github.com/jax-ml/ml_dtypes) is introduced to add a bf16
extension; the same approach is used in the `torch-mlir` project.

See motivation and discussion in:
https://discourse.llvm.org/t/how-to-run-executionengine-with-bf16-dtype-in-mlir-python-bindings/79025
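
A minimal sketch of exercising the new ctype mapping, assuming ml_dtypes is
installed and that the mapping is reached through the memref descriptor
helpers in `mlir.runtime.np_to_memref`.

```python
import numpy as np
import ml_dtypes  # third-party package providing the bf16 NumPy extension
from mlir.runtime.np_to_memref import get_ranked_memref_descriptor

arr = np.arange(8, dtype=np.float32).astype(ml_dtypes.bfloat16)
desc = get_ranked_memref_descriptor(arr)  # ctypes descriptor for the ExecutionEngine
print(list(desc.shape), list(desc.strides))
```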
2024-05-29 22:01:40 -07:00
zjgarvey
42a0fb2333
[mlir][linalg] Add linalg.conv_2d_ngchw_gfchw_q to named ops (#92136)
Adds a named op: linalg.conv_2d_ngchw_gfchw_q. This op is similar to
linalg.conv_2d_ngchw_gfchw, but additionally incorporates zero point
offset corrections.
2024-05-29 12:55:05 +02:00
Guray Ozen
7f58ffd09b
[mlir][python] Yield results of scf.for_ (#93610)
Using `for_` is very handy with the Python bindings. Currently, it doesn't
support results, so we had to fall back to a two-line scf.for.

This PR yields the results of scf.for in `for_`.

---------

Co-authored-by: Maksim Levental <maksim.levental@gmail.com>
2024-05-29 08:43:13 +02:00
Oleksandr "Alex" Zinenko
04c94ba65f
[mlir][py] NFC: remove exception-based isa from linalg module (#92556)
When this code was written, we didn't have proper isinstance support for
operation classes in Python. Now we do, so there is no reason to keep
the expensive exception-based flow.
2024-05-24 09:02:17 +02:00
Petr Kurapov
e7d09cecc9
[MLIR][Linalg] Ternary Op & Linalg select (#91461)
Following #90236, this adds `select` to linalg as `arith.select`. No
implicit type casting.
OpDSL doesn't expose a type restriction for bool, but I saw no reason to
add one (that would mean a separate symbolic type and checking the
semantics in the builder).

---------

Co-authored-by: Renato Golin <rengolin@systemcall.eu>
Co-authored-by: Maksim Levental <maksim.levental@gmail.com>
2024-05-14 10:50:35 +01:00
tyb0807
baa5beecc0
[NFC] Make NVGPU casing consistent (#91903) 2024-05-13 09:08:04 +02:00