`omp.cancel` and `omp.cancellationpoint` contain an attribute describing the
type of parent construct which should be cancelled. For example,
```
!$omp cancel do
```
Must be inside a wsloop. Previously, the verifier required the
immediate parent to be this operation. This is not quite right because
something like the following is valid:
```
!$omp parallel do
do i = 1, N
if (cond) then
!$omp cancel do
endif
enddo
```
This patch relaxes the verifier to only require that some parent
operation matches (not necessarily the immediate parent).
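A minimal sketch of the relaxed check (assuming the `omp::WsloopOp` class name and a hypothetical helper), walking the whole parent chain rather than inspecting only the immediate parent:
```cpp
// Sketch only: accept any enclosing worksharing loop, not just the immediate
// parent, when the cancellation kind is the worksharing-loop kind.
static bool hasWsloopAncestor(Operation *op) {
  for (Operation *parent = op->getParentOp(); parent;
       parent = parent->getParentOp())
    if (isa<omp::WsloopOp>(parent))
      return true;
  return false;
}
```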
In propagate mode, NaNs compare equal to each other, so when there are
several NaNs the index of the first one needs to be returned. This
commit changes the index update condition to check that the current
index is not that of a NaN.
The commit also simplifies argmax NaN-ignore lowering to use only OGT.
This prevents any update in the case of a NaN. The only case where the
index of a NaN is returned is when all values are NaN; this is covered
by the fact that the initial index value is 0, so never updating it
results in 0 being returned.
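As an illustration (a hedged sketch with assumed variable names, not the exact lowering code), using the OGT predicate means the comparison is false whenever the candidate value is NaN, so the running index is never updated by it:
```cpp
// Sketch: in NaN-ignore mode, OGT is false if either operand is NaN, so a NaN
// candidate never replaces the running maximum or its index.
Value isGreater = rewriter.create<arith::CmpFOp>(
    loc, arith::CmpFPredicate::OGT, candidateValue, runningMax);
Value newMax =
    rewriter.create<arith::SelectOp>(loc, isGreater, candidateValue, runningMax);
Value newIdx =
    rewriter.create<arith::SelectOp>(loc, isGreater, candidateIndex, runningIndex);
```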
This commit improves the `EnumProp` class so that it wraps around an
`EnumInfo` just like `EnumAttr` does. This `EnumProp` also has logic for
converting to/from an integer attribute and for being read and written
as bitcode.
The following variants of `EnumProp` are provided:
- `EnumPropWithAttrForm` - an `EnumProp` that can be constructed from (and
will be converted to, if `storeInCustomAttribute` is true) a custom
attribute, like an `EnumAttr`, instead of a plain integer. This is meant
for backwards compatibility with code that uses enum attributes.
- `NamedEnumProp` - adds a "`mnemonic` `<` $enum `>`" syntax around the
enum, replicating a common pattern seen in MLIR printers and allowing
for reduced ambiguity.
- `NamedEnumPropWithAttrForm` - combines both of these extensions.
(Sadly, bytecode auto-upgrade is hampered by the lack of the ability to
optionally parse an attribute.)
Depends on #132148
Removed the calls to `sizeOp` after replacing `SliceOp`:
```
// Remove const_shape size op when it no longer has use point.
Operation *sizeConstShape = sliceOp.getSize().getDefiningOp();
```
It turns out that, as part of canonicalization, trivially dead ops are removed
anyway, so the above piece of code isn't actually needed.
Reverts llvm/llvm-project#135429 due to buildbot breakage:
https://lab.llvm.org/buildbot/#/builders/169/builds/10405
Based on the ASan output, I think after the replaceOp on line 775, it's
no longer valid to do getSize() on sliceOp:
```
775 rewriter.replaceOp(sliceOp, newSliceOp.getResult());
776
777 // Remove const_shape size op when it no longer has use point.
778 Operation *sizeConstShape = sliceOp.getSize().getDefiningOp();
```
This reverts commit 54e70ac7650f1c22f687937d1a082e4152f97b22 which
itself fixed an [asan
leak](https://lab.llvm.org/buildbot/#/builders/55/builds/9761) from the
original upstreaming commit. The leak was due to op allocations not
being `free`ed.
~~The necessary change was to explicitly `->destroy()` the ops at the
end of the tests. I believe this is because the rewriter used in the
tests doesn't actually insert them into a module and so without an
explicit `->destroy()` no bookkeeping process is able to take care of
them.~~
The necessary change was to use `OwningOpRef` which calls `op->erase()`
in its [own
destructor](89cfae41ec/mlir/include/mlir/IR/OwningOpRef.h (L39)).
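For reference, a minimal sketch (op type and test setup assumed) of the `OwningOpRef` usage pattern in such tests:
```cpp
// Sketch: OwningOpRef erases the wrapped op in its destructor, so ops that are
// never inserted into a parent module still get cleaned up when the test ends.
MLIRContext context;
OwningOpRef<ModuleOp> module = ModuleOp::create(UnknownLoc::get(&context));
// ... build ops inside module->getBody() and run the rewriter ...
// No explicit op->destroy() is needed; ~OwningOpRef() calls op->erase().
```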
This patch fixes an issue in the FoldContiguousGather pattern which was
incorrectly folding vector.gather operations with contiguous indices
into vector.maskedload operations regardless of the base operand type.
While vector.gather operations can work on both tensor and memref types,
vector.maskedload operations are only valid for memref types. The
pattern was incorrectly lowering a tensor-based gather into a
masked-load, which is invalid.
This fix adds a type check to ensure the pattern only applies to
memref-based gather operations.
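A hedged sketch of the added guard (variable names assumed):
```cpp
// Sketch: only rewrite gathers whose base is a memref, since
// vector.maskedload cannot take a tensor base.
if (!isa<MemRefType>(gatherOp.getBase().getType()))
  return rewriter.notifyMatchFailure(gatherOp, "base must be a memref");
```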
Co-authored-by: Sagar Kulkarni <sagar@rain.ai>
This PR upstreams the `SMT` dialect from the CIRCT project. Here we only
check in the dialect/op/types/attributes and lit tests. Follow-up PRs
will add conversions in and out, etc.
Co-authored-by: Bea Healy <beahealy22@gmail.com>
Co-authored-by: Martin Erhart <maerhart@outlook.com>
Co-authored-by: Mike Urbach <mikeurbach@gmail.com>
Co-authored-by: Will Dietz <will.dietz@sifive.com>
Co-authored-by: fzi-hielscher <hielscher@fzi.de>
Co-authored-by: Fehr Mathieu <mathieu.fehr@gmail.com>
This change refines the verifier for `vector.load` and `vector.store` to
disallow the use of vectors with higher rank than the source or
destination memref. For example, the following is now rejected:
```mlir
%0 = vector.load %src[%c0] : memref<?xi8>, vector<16x16xi8>
vector.store %vec, %dest[%c0] : memref<?xi8>, vector<16x16xi8>
```
This pattern was previously used in SME end-to-end tests and "happened"
to work by implicitly assuming row-major memory layout. However, there
is no guarantee that such an assumption will always hold, and we should
avoid relying on it unless it can be enforced deterministically.
Notably, production ArmSME lowering pipelines do not rely on this
behavior. Instead, the expected usage (illustrated here with scalable
vector syntax) would be:
```mlir
%0 = vector.load %src[%c0, %c0] : memref<?x?xi8>, vector<[16]x[16]xi8>
```
This PR updates the verifier accordingly and adjusts all affected tests.
These tests are either removed (if no longer relevant) or updated to use
memrefs with appropriately matching rank.
A few additions:
- Lines with `{{`: these can show up when serializing non-MLIR info into
string attrs, e.g. `my.attr = {{proto}, {...}}`. String-escape the opening
`{{`; since the check lines are generated, this has no effect on
`{{.*}}` etc. in generated lines.
- File split line: normally these are skipped because of their indent
level, but when using `--starts_from_scope=0` to generate checks for the
`module {...} {` line, and since MLIR opt tools emit file split lines by
default, some `CHECK: // -----` lines were emitted.
- (edit removed this, fixed by
https://github.com/llvm/llvm-project/pull/134364) AttrAliases: I'm not
sure if I'm missing something for the attribute parser to work
correctly, but I was getting many `#[[?]]` for all dialect attrs. Only
use the attr aliasing if there's a match.
This PR follows #135253 and #134935 to fix the error reported by
https://github.com/llvm/llvm-project/pull/135253#issuecomment-2796077024.
This PR adds typedef declarations for `MlirLinalgContractionDimensions`
and `MlirLinalgConvolutionDimensions` in the C API to ensure
compatibility with pure C code.
I confirm that this fix resolves the reported error based on my testing.
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
Add additional cases of this canonicalization by using the 'source
of truth' function `isBroadcastableTo` to determine when it is possible to
broadcast directly to the shape resulting from the shape_cast.
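Roughly, the pattern can be sketched as follows (assumed variable names; a simplification, not the exact patch):
```cpp
// Sketch: fold broadcast + shape_cast into a single broadcast whenever the
// original source can be broadcast directly to the final shape.
auto resultType = cast<VectorType>(shapeCastOp.getResult().getType());
if (vector::isBroadcastableTo(broadcastOp.getSourceType(), resultType) !=
    vector::BroadcastableToResult::Success)
  return failure();
rewriter.replaceOpWithNewOp<vector::BroadcastOp>(shapeCastOp, resultType,
                                                 broadcastOp.getSource());
return success();
```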
---------
Signed-off-by: James Newling <james.newling@gmail.com>
Based on the ShapeCastConstantFolder, this pattern replaces
```mlir
%0 = ub.poison : vector<2x3xf32>
%1 = vector.shape_cast %0 : vector<2x3xf32> to vector<6xf32>
```
with
```mlir
%1 = ub.poison : vector<6xf32>
```
---------
Signed-off-by: James Newling <james.newling@gmail.com>
Ops that are already snake case (like [`ROCDL_wmma_*`
ops](66b0b0466b/mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td (L411)))
produce python "value-builders" that collide with the class names:
```python
class wmma_bf16_16x16x16_bf16(_ods_ir.OpView):
OPERATION_NAME = "rocdl.wmma.bf16.16x16x16.bf16"
...
def wmma_bf16_16x16x16_bf16(res, args, *, loc=None, ip=None) -> _ods_ir.Value:
return wmma_bf16_16x16x16_bf16(res=res, args=args, loc=loc, ip=ip).result
```
and thus cannot be emitted (because of recursive self-calls).
This PR fixes that by affixing `_` to the value builder names.
I would've preferred to just rename the ops but that would be a breaking
change 🤷.
These are problematic because the iterative application that locally
resolves the tensor.dim operation introduces an intermediate floor_div,
which loses the information about the exact division that was carried out
in the original IR, so the iterative algorithm can't converge towards the
simplest form.
Information loss is not acceptable for canonicalization.
Resolving the dimOp can instead be achieved through the
resolve-ranked-shaped-type-result-dims and
resolve-shaped-type-result-dims passes.
---------
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Add `inlinehint` attributes for CallOps in MLIR, making it possible to
mark a specific call as desirable to inline without having the attribute
on the FuncOp.
This PR is mainly about exposing the python bindings for
`linalg::isaConvolutionOpInterface` and `linalg::inferConvolutionDims`.
---------
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
This PR implements `verify-diagnostics=only-expected` which is a "best
effort" verification - i.e., `unexpected`s and `near-misses` will not be
considered failures. The purpose is to enable narrowly scoped checking
of verification remarks (just as we have for lit where only a subset of
lines get `CHECK`ed).
The LLVM dialect no longer has its own vector types. It uses
`mlir::VectorType` everywhere. Remove
`LLVM::getFixedVectorType/getScalableVectorType` and use
`VectorType::get` instead. This commit addresses a
[comment](https://github.com/llvm/llvm-project/pull/133286#discussion_r2022192500)
on the PR that deleted the LLVM vector types.
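For illustration, the replacement looks roughly like this (element type assumed):
```cpp
// Sketch: instead of LLVM::getFixedVectorType(elemTy, 4) and
// LLVM::getScalableVectorType(elemTy, 4), build the vector type directly.
VectorType fixedTy = VectorType::get({4}, elemTy);
VectorType scalableTy = VectorType::get({4}, elemTy, /*scalableDims=*/{true});
```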
Moving code to another function can lead to missed optimization
opportunities, because function passes operate on smaller chunks of
code, and they cannot figure out all details.
One example of missed optimization opportunities after code extraction
is information about pointer alignment. The instruction combine pass
adds information about pointer alignment to LLVM intrinsic memcpy calls
if it can deduce it from the code or if align metadata is added. If this
information is not present, then further optimization passes can
generate inefficient code.
If we add align metadata to extracted pointers, then the instruction
combine pass can add the align attribute to the LLVM intrinsic memcpy
call and unblock further optimization.
Scope of changes:
1. Analyze MLIR map operations. Add information about the alignment of
objects that are passed by reference to OpenMP GPU kernels.
2. Propagate alignment information to the helper functions outlined by
`CodeExtractor`.
This PR is mainly about exposing the python bindings for
`linalg::isaContractionOpInterface` and `linalg::inferContractionDims`.
---------
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
`failableParallelForEach` will non-deterministically early terminate
upon failure, leading to inconsistent and potentially missing
diagnostics.
This PR uses `parallelForEach` to ensure all operations are verified and
all diagnostics are handled, while tracking the failure state
separately.
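Conceptually, the change looks like this (a sketch with assumed surrounding code, not the literal diff):
```cpp
// Sketch: visit every op so that all diagnostics get emitted, and track the
// overall failure state in a separate flag instead of early-exiting.
std::atomic<bool> anyFailed(false);
parallelForEach(ctx, opsToVerify, [&](Operation *op) {
  if (failed(verify(op)))
    anyFailed = true;
});
return anyFailed ? failure() : success();
```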
Other potential fixes include:
- Making `failableParallelForEach` have deterministic early-exit
behavior (or adding an option for it).
  - I didn't want to change more than what was required (and potentially
incur perf hits for unrelated code), but if this is a better fix I'm
happy to submit a patch.
  - I think all diagnostics that can be detected from verification
failures should be reported, so I don't think this would be correct
behavior anyway.
- Adding an option for `failableParallelForEach` to still execute on
every element in the range while still returning `LogicalResult`.
The LLVM dialect no longer has its own vector types. It uses
`mlir::VectorType` everywhere. Remove `LLVM::getVectorElementType` and
use `cast<VectorType>(ty).getElementType()` instead. This commit
addresses a
[comment](https://github.com/llvm/llvm-project/pull/133286#discussion_r2022192500)
on the PR that deleted the LLVM vector types.
Also improve vector type constraints by specifying the
`mlir::VectorType` C++ class, so that explicit casts to `VectorType` can
be avoided in some places.
Replaces separate x86vector named intrinsic operations with direct calls
to LLVM intrinsic functions.
This rework reduces the number of named ops, leaving only high-level MLIR
equivalents of whole intrinsic classes, e.g., variants of AVX512 dot on
BF16 inputs. Dialect conversion applies LLVM intrinsic name mangling,
further simplifying the lowering logic.
The separate conversion step translating x86vector intrinsics into LLVM
IR is also eliminated. Instead, this step is now performed by the
existing llvm dialect infrastructure.
RFC:
https://discourse.llvm.org/t/rfc-simplify-x86-intrinsic-generation/85581
This patch enables multiple reductions to be used in a reduction clause
inside target regions for GPU offloading.
---------
Co-authored-by: Sergio Afonso <safonsof@amd.com>
Improve error messages when parsing an incorrect type.
Before:
```
invalid kind of type specified
```
After:
```
invalid kind of type specified: expected builtin.tensor, but found 'tensor<*xi32>'
```
This error message is produced when a certain operand/result type is
expected according to an op's TableGen definition, but a different type
is parsed. Type constraints (which may have nice error messages) are
checked after parsing a type. If an incorrect type is parsed, we never
get to the point of printing type constraint error messages. This may
discourage users from specifying C++ classes with type constraints.
(Explicitly specifying C++ classes is beneficial because the
auto-generated C++ code will have richer type information; explicit
casts are unnecessary, etc.) See #134981 for an example where specifying
additional type information with type constraints (e.g.,
`LLVM_AnyVector`) led to worse error messages.
Note: In order to generate a better error message, the parser must
retrieve a type's name from the C++ class. TableGen-generated type
classes always have a `name` field, but hand-written C++ type classes
may not. The `HasStaticName` template was copied from
`DialectImplementation.h` (`HasStaticDialectName`).
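A rough sketch of such a detection helper (this mirrors the idea, not necessarily the exact code copied from `DialectImplementation.h`):
```cpp
#include <type_traits>

// Sketch: detect at compile time whether a type class exposes a static `name`
// field, so the parser can mention the expected type by name when available.
template <typename T, typename = void>
struct HasStaticName : std::false_type {};
template <typename T>
struct HasStaticName<T, std::void_t<decltype(T::name)>> : std::true_type {};
```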
We can always fold the input of an extract_strided_metadata operation into
the input of a reinterpret_cast operation, because they point to the same
memory. Note that the reinterpret_cast does not use the layout of its
input memref, only its base memory pointer, which is the same as the base
pointer returned by the extract_strided_metadata operation and the base
pointer of the extract_strided_metadata memref input.
Operations like expand_shape, collapse_shape, and subview are lowered to
a pair of extract_strided_metadata and reinterpret_cast like this:
```mlir
%base_buffer, %offset, %sizes:2, %strides:2 = memref.extract_strided_metadata %input_memref
  : memref<ID1x...xIDNxBaseType> -> memref<f32>, index, index, index, index, index
%reinterpret_cast = memref.reinterpret_cast %base_buffer to offset: [%o1], sizes: [%d1,...,%dN], strides: [%s1,...,%sN]
  : memref<f32> to memref<OD1x...xODNxBaseType>
```
In many cases the input of the extract_strided_metadata can be
passed directly into the input of the reinterpret_cast operation, like
this (see how %base_buffer is replaced by %input_memref in the
reinterpret_cast above and the input type is updated):
```mlir
%base_buffer, %offset, %sizes:2, %strides:2 = memref.extract_strided_metadata %input_memref
  : memref<ID1x...xIDNxBaseType> -> memref<f32>, index, index, index, index, index
%reinterpret_cast = memref.reinterpret_cast %input_memref to offset: [%o1], sizes: [%d1,...,%dN], strides: [%s1,...,%sN]
  : memref<ID1x...xIDNxBaseType> to memref<OD1x...xODNxBaseType>
```
When dealing with static dimensions, the extract_strided_metadata becomes
dead code and we end up with only a reinterpret_cast:
```mlir
%reinterpret_cast = memref.reinterpret_cast %input_memref to offset: [%o1], sizes: [%d1,...,%dN], strides: [%s1,...,%sN]
  : memref<ID1x...xIDNxBaseType> to memref<OD1x...xODNxBaseType>
```
Note that reinterpret_cast only reads the base memory pointer from the
input memref (%input_memref above), which is equivalent to the
%base_buffer returned by the extract_strided_metadata operation. Hence
it is always legal to use the extract_strided_metadata input memref
directly in the reinterpret_cast. Since this is a pointer, the rewrite
remains legal even when the base pointer values are modified between the
operation pair.
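A hedged sketch of how such a fold could be expressed (assumed names, simplified from the actual pattern):
```cpp
// Sketch: if the cast's source is the base buffer produced by
// extract_strided_metadata, rewire the cast to the original memref; only the
// base pointer is read, so the layouts do not have to match.
auto metadataOp =
    castOp.getSource().getDefiningOp<memref::ExtractStridedMetadataOp>();
if (!metadataOp)
  return failure();
rewriter.modifyOpInPlace(castOp, [&] {
  castOp.getSourceMutable().assign(metadataOp.getSource());
});
return success();
```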
@matthias-springer
@joker-eph
@sahas3
@Hanumanth04
@dixinzhou
@rafaelubalmw
---------
Co-authored-by: Ivan Garcia <igarcia@vdi-ah2ddp-178.dhcp.mathworks.com>
This patch revisits op verifiers for `LoopWrapperInterface` operations
to improve consistency across operations and to properly cover some
previously misreported cases.
Checks that should be done for these kinds of operations are documented
in the interface description.
Updates the description to align with the specification. Also includes
some small cleanup to `sigmoid`, to avoid confusion.
Signed-off-by: Luke Hutton <luke.hutton@arm.com>
This PR improves `SGMapAttr` to enable workgroup-level programming, representing the first step in expanding the XeGPU dialect from subgroup to workgroup level, and renames the attribute to `LayoutAttr`.