22530 Commits

Author SHA1 Message Date
Sagar Kulkarni
357e3803bb
[mlir][vector] Prevent folding non memref-type gather into maskedload (#135371)
This patch fixes an issue in the FoldContiguousGather pattern which was
incorrectly folding vector.gather operations with contiguous indices
into vector.maskedload operations regardless of the base operand type.

While vector.gather operations can work on both tensor and memref types,
vector.maskedload operations are only valid for memref types. The
pattern was incorrectly lowering a tensor-based gather into a
masked-load, which is invalid.

This fix adds a type check to ensure the pattern only applies to
memref-based gather operations.
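
For illustration, a hedged sketch of the distinction (names and shapes are made up): the memref-based gather below may fold into a masked load, while the tensor-based one must not, since vector.maskedload only accepts memref bases.

```mlir
// Contiguous-index gather on a memref: folding into vector.maskedload is valid.
func.func @gather_memref(%base: memref<16xf32>, %idx: vector<16xi32>,
                         %mask: vector<16xi1>, %pass: vector<16xf32>) -> vector<16xf32> {
  %c0 = arith.constant 0 : index
  %0 = vector.gather %base[%c0] [%idx], %mask, %pass
      : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
  return %0 : vector<16xf32>
}

// The same gather on a tensor must stay a gather: vector.maskedload
// requires a memref base.
func.func @gather_tensor(%base: tensor<16xf32>, %idx: vector<16xi32>,
                         %mask: vector<16xi1>, %pass: vector<16xf32>) -> vector<16xf32> {
  %c0 = arith.constant 0 : index
  %0 = vector.gather %base[%c0] [%idx], %mask, %pass
      : tensor<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
  return %0 : vector<16xf32>
}
```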

Co-authored-by: Sagar Kulkarni <sagar@rain.ai>
2025-04-12 04:15:51 +03:00
Maksim Levental
54e70ac765 [mlir][SMT] remove custom forall/exists builder because of asan memory leak 2025-04-11 20:12:36 -04:00
Maksim Levental
de67293c09
[mlir][SMT] upstream SMT dialect (#131480)
This PR upstreams the `SMT` dialect from the CIRCT project. Here we only
check in the dialect/op/types/attributes and lit tests. Follow-up PRs
will add conversions in and out, etc.

Co-authored-by: Bea Healy <beahealy22@gmail.com>
Co-authored-by: Martin Erhart <maerhart@outlook.com>
Co-authored-by: Mike Urbach <mikeurbach@gmail.com>
Co-authored-by: Will Dietz <will.dietz@sifive.com>
Co-authored-by: fzi-hielscher <hielscher@fzi.de>
Co-authored-by: Fehr Mathieu <mathieu.fehr@gmail.com>
2025-04-11 17:10:09 -04:00
Tai Ly
d78b486414
[mlir][tosa] Add error_if checks for Mul Op (#135075)
This adds error_if validation checks for the Mul op.

Signed-off-by: Tai Ly <tai.ly@arm.com>
2025-04-11 13:14:46 -07:00
Andrzej Warzyński
fedd79bdcd
[mlir][vector] Tighten the semantics of vector.{load|store} (#135151)
This change refines the verifier for `vector.load` and `vector.store` to
disallow the use of vectors with higher rank than the source or
destination memref. For example, the following is now rejected:

```mlir
  %0 = vector.load %src[%c0] : memref<?xi8>, vector<16x16xi8>
  vector.store %vec, %dest[%c0] : memref<?xi8>, vector<16x16xi8>
```

This pattern was previously used in SME end-to-end tests and "happened"
to work by implicitly assuming row-major memory layout. However, there
is no guarantee that such an assumption will always hold, and we should
avoid relying on it unless it can be enforced deterministically.

Notably, production ArmSME lowering pipelines do not rely on this
behavior. Instead, the expected usage (illustrated here with scalable
vector syntax) would be:

```mlir
  %0 = vector.load %src[%c0, %c0] : memref<?x?xi8>, vector<[16]x[16]xi8>
```

This PR updates the verifier accordingly and adjusts all affected tests.
These tests are either removed (if no longer relevant) or updated to use
memrefs with appropriately matching rank.
2025-04-11 20:08:08 +01:00
Kevin Gleason
e911f90a40
[mlir] Add support for broader range of input files in generate-test-checks.py (#134327)
A few additions:

- Lines with `{{`: these can show up when serializing non-MLIR info into
string attrs, e.g. `my.attr = {{proto}, {...}}`. String-escape the opening
`{{`; since check lines are generated, this has no effect on `{{.*}}` etc.
in generated lines.
- File split lines: normally these are skipped because of their indent
level, but when using `--starts_from_scope=0` to generate checks for the
`module {...} {` line, and since MLIR opt tools emit file split lines by
default, some `CHECK: // -----` lines were emitted.
- (edit: removed this, fixed by
https://github.com/llvm/llvm-project/pull/134364) AttrAliases: I'm not
sure if I'm missing something for the attribute parser to work
correctly, but I was getting many `#[[?]]` for all dialect attrs. Only
use the attr aliasing if there's a match.
2025-04-11 13:00:47 -05:00
Bangtian Liu
76b85d3a27
[MLIR][CAPI] add C API typedef to fix downstream C API usage (#135380)
This PR follows #135253 and #134935 to fix the error reported in
https://github.com/llvm/llvm-project/pull/135253#issuecomment-2796077024.
It adds typedef declarations for `MlirLinalgContractionDimensions`
and `MlirLinalgConvolutionDimensions` in the C API to ensure
compatibility with pure C code.

I confirm that this fix resolves the reported error based on my testing.

Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
2025-04-11 11:16:58 -04:00
James Newling
409def2867
[mlir][vector] shape_cast(broadcast) -> broadcast canonicalization (#134939)
Add additional cases of this canonicalization by using the 'source
of truth' function `isBroadcastableTo` to determine when it is possible to
broadcast directly to the shape resulting from the shape_cast.
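
For example, a hedged sketch of one newly covered case (shapes chosen for illustration):

```mlir
// Before: a broadcast followed by a shape_cast of its result.
%b = vector.broadcast %f : f32 to vector<1x4xf32>
%s = vector.shape_cast %b : vector<1x4xf32> to vector<4xf32>

// After: since f32 is broadcastable to vector<4xf32> directly
// (per isBroadcastableTo), the pair canonicalizes to a single broadcast.
%s2 = vector.broadcast %f : f32 to vector<4xf32>
```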

---------

Signed-off-by: James Newling <james.newling@gmail.com>
2025-04-11 15:15:03 +01:00
James Newling
cd85f5dbdf
[mlir] canonicalizer: shape_cast(poison) -> poison (#133988)
Based on the ShapeCastConstantFolder, this pattern replaces

```mlir
%0 = ub.poison : vector<2x3xf32>
%1 = vector.shape_cast %0 : vector<2x3xf32> to vector<6xf32>
```

with

```mlir
%1 = ub.poison : vector<6xf32>
```

---------

Signed-off-by: James Newling <james.newling@gmail.com>
2025-04-11 15:13:03 +01:00
Maksim Levental
c12cb0ccbb
[mlir][python] fix value-builder generation for snake_case ops (#135302)
Ops that are already snake case (like [`ROCDL_wmma_*`
ops](66b0b0466b/mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td (L411)))
produce python "value-builders" that collide with the class names:

```python
class wmma_bf16_16x16x16_bf16(_ods_ir.OpView):
  OPERATION_NAME = "rocdl.wmma.bf16.16x16x16.bf16"
  ...

def wmma_bf16_16x16x16_bf16(res, args, *, loc=None, ip=None) -> _ods_ir.Value:
  return wmma_bf16_16x16x16_bf16(res=res, args=args, loc=loc, ip=ip).result
```

and thus cannot be emitted (because of recursive self-calls).

This PR fixes that by affixing `_` to the value builder names. 

I would've preferred to just rename the ops but that would be a breaking
change 🤷.
2025-04-11 08:55:38 -04:00
Vivek Khandelwal
e377a5d168
[MLIR][Tensor] Remove tensor.dim canonicalization patterns registered on tensor.expand_shape/tensor.collapse_shape (#134219)
These patterns are problematic because the iterative application that
locally resolves the tensor.dim operation introduces an intermediate
floor_div, which loses the information about the exact division that was
carried out in the original IR, so the iterative algorithm can't converge
towards the simplest form.
Information loss is not acceptable for canonicalization.
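
A hedged sketch of the kind of IR the removed pattern acted on (shapes,
reassociation, and the function wrapper are made up for illustration):

```mlir
func.func @dim_of_expand_shape(%t: tensor<?xf32>, %n: index) -> index {
  %c0 = arith.constant 0 : index
  %e = tensor.expand_shape %t [[0, 1]] output_shape [%n, 4]
      : tensor<?xf32> into tensor<?x4xf32>
  %d = tensor.dim %e, %c0 : tensor<?x4xf32>
  // The removed canonicalization rewrote %d locally to roughly
  //   (tensor.dim %t, %c0) floordiv 4
  // introducing the intermediate floor_div mentioned above.
  return %d : index
}
```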

Resolving the tensor.dim op can instead be achieved through the
resolve-ranked-shaped-type-result-dims and
resolve-shaped-type-result-dims passes.

---------

Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2025-04-11 06:57:34 -04:00
Jean-Didier PAILLEUX
aeb06c6152
[MLIR] Adding 'inline_hint' attribute on LLVM::CallOp (#134582)
Add `inlinehint` attributes for CallOps in MLIR so that a call site can
indicate that inlining is desirable without requiring the attribute on
the FuncOp.
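
A hedged sketch of the intended usage; the attribute spelling and its
placement are assumed from the PR title rather than verified:

```mlir
llvm.func @callee()

llvm.func @caller() {
  // Mark only this call site as desirable to inline; @callee itself
  // carries no inlining attribute.
  llvm.call @callee() {inline_hint} : () -> ()
  llvm.return
}
```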
2025-04-11 09:31:18 +02:00
Bangtian Liu
9466cbdf29
[mlir][CAPI][python] expose the python bindings for linalg::isaConvolutionOpInterface and linalg::inferConvolutionDims (#135253)
This PR is mainly about exposing the python bindings for
`linalg::isaConvolutionOpInterface` and `linalg::inferConvolutionDims`.

---------

Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
2025-04-10 20:22:15 -04:00
Luke Hutton
b39ab7a620
[mlir][tosa] Add error_if checks to clamp op verifier (#134224)
Specifically, it introduces checks for the following (see the sketch below):
- ERROR_IF(max_val < min_val)
- ERROR_IF(isNaN(min_val) || isNaN(max_val))
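
A hedged sketch of inputs these checks now reject (attribute names follow
the ERROR_IF description above; types and values are made up):

```mlir
// Rejected: max_val < min_val.
%0 = tosa.clamp %arg0 {min_val = 1.0 : f32, max_val = 0.0 : f32}
    : (tensor<4xf32>) -> tensor<4xf32>

// Rejected: NaN bound (0x7FC00000 is a NaN bit pattern for f32).
%1 = tosa.clamp %arg0 {min_val = 0x7FC00000 : f32, max_val = 1.0 : f32}
    : (tensor<4xf32>) -> tensor<4xf32>
```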

Signed-off-by: Luke Hutton <luke.hutton@arm.com>
2025-04-10 16:03:42 -07:00
Maksim Levental
1cec5fffd8
[mlir] implement -verify-diagnostics=only-expected (#135131)
This PR implements `verify-diagnostics=only-expected` which is a "best
effort" verification - i.e., `unexpected`s and `near-misses` will not be
considered failures. The purpose is to enable narrowly scoped checking
of verification remarks (just as we have for lit where only a subset of
lines get `CHECK`ed).
2025-04-10 18:50:00 -04:00
darkbuck
9188288581
[mlir][DataLayout] Keep consistent input/output order (#135185)
- Use 'MapVector' instead of 'DenseMap' to keep a consistent order when
importing/printing entries to prevent run-by-run differences.
2025-04-10 15:55:37 -04:00
Matthias Springer
53ae2bdceb
[mlir][NVVM] Remove commented out code (#135144)
This addresses a comment on #135051.
2025-04-10 21:51:32 +02:00
Guray Ozen
d7cb24e10d
[MLIR][NVVM] Run clang-tidy (#135006) 2025-04-10 21:12:14 +02:00
Tai Ly
ccdbd3b78d
[mlir][tosa] Rename int_div to intdiv (#135080)
This patch renames the TOSA operator int_div to intdiv to align with the
1.0 spec.
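
For example (a hedged sketch; operand names and types are made up):

```mlir
// Before the rename:
//   %0 = tosa.int_div %a, %b : (tensor<4xi32>, tensor<4xi32>) -> tensor<4xi32>
// After the rename:
%0 = tosa.intdiv %a, %b : (tensor<4xi32>, tensor<4xi32>) -> tensor<4xi32>
```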

Signed-off-by: Tai Ly <tai.ly@arm.com>
2025-04-10 11:54:34 -07:00
Alan Li
959b8aaeac
[MLIR][NFC] Expose computeProduct function. (#135192)
Make it non-static, as its functionality is quite generic.
2025-04-10 08:29:32 -07:00
Jerry-Ge
b17bd73e62
[mlir][tosa] Add more negative tests for rank0 tensors, negate, and sub (#135061)
Signed-off-by: Jerry Ge <jerry.ge@arm.com>
2025-04-10 08:13:36 -07:00
Jerry-Ge
2bbe8e825e
[mlir][tosa] Add more level_check tests for tensor_dim and tensor_size (#135062)
Signed-off-by: Jerry Ge <jerry.ge@arm.com>
2025-04-10 08:13:24 -07:00
Matthias Springer
85742f7642
[mlir][LLVM] Delete getFixedVectorType and getScalableVectorType (#135051)
The LLVM dialect no longer has its own vector types. It uses
`mlir::VectorType` everywhere. Remove
`LLVM::getFixedVectorType/getScalableVectorType` and use
`VectorType::get` instead. This commit addresses a
[comment](https://github.com/llvm/llvm-project/pull/133286#discussion_r2022192500)
on the PR that deleted the LLVM vector types.
2025-04-10 10:36:21 +02:00
Dominik Adamski
adfc577895
[OpenMP][CodeExtractor]Add align metadata to load instructions (#131131)
Moving code to another function can lead to missed optimization
opportunities, because function passes operate on smaller chunks of
code, and they cannot figure out all details.

One example of missed optimization opportunities after code extraction
is information about pointer alignment. The instruction combine pass
adds information about pointer alignment to LLVM intrinsic memcpy calls
if it can deduce it from the code or if align metadata is added. If this
information is not present, then further optimization passes can
generate inefficient code.

If we add align metadata to extracted pointers, then the instruction
combine pass can add the align attribute to the LLVM intrinsic memcpy
call and unblock further optimization.

Scope of changes:
1. Analyze MLIR map operations. Add information about the alignment of
objects that are passed by reference to OpenMP GPU kernels.
2. Propagate alignment information to the helper functions outlined by
`CodeExtractor`.
2025-04-10 09:45:30 +02:00
Bangtian Liu
c359f7625f
[mlir][CAPI][python] expose the python bindings for linalg::isaContractionOpInterface and linalg::inferContractionDims (#134935)
This PR is mainly about exposing the python bindings for
`linalg::isaContractionOpInterface` and `linalg::inferContractionDims`.

---------

Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
2025-04-09 20:01:38 -04:00
Maksim Levental
9b50167ed9
[mlir][python] add use_name_loc_as_prefix to value.get_name() (#135052)
Add `use_name_loc_as_prefix` to `value.get_name()`.
2025-04-09 19:28:59 -04:00
Nachi G
2f7e685e3d
[MLIR] Ensure deterministic parallel verification (#134963)
`failableParallelForEach` will non-deterministically early terminate
upon failure, leading to inconsistent and potentially missing
diagnostics.

This PR uses `parallelForEach` to ensure all operations are verified and
all diagnostics are handled, while tracking the failure state
separately.

Other potential fixes include:
- Making `failableParallelForEach` have deterministic early-exit
  behavior (or have an option for it)
  - I didn't want to change more than what was required (and potentially
    incur perf hits for unrelated code), but if this is a better fix I'm
    happy to submit a patch.
  - I think all diagnostics that can be detected from verification
    failures should be reported, so I don't even think this would be
    correct behavior anyway

- Adding an option for `failableParallelForEach` to still execute on
  every element of the range while still returning `LogicalResult`
2025-04-09 15:43:26 -07:00
Matthias Springer
a0d449016b
[mlir][LLVM] Delete getVectorElementType (#134981)
The LLVM dialect no longer has its own vector types. It uses
`mlir::VectorType` everywhere. Remove `LLVM::getVectorElementType` and
use `cast<VectorType>(ty).getElementType()` instead. This commit
addresses a
[comment](https://github.com/llvm/llvm-project/pull/133286#discussion_r2022192500)
on the PR that deleted the LLVM vector types.

Also improve vector type constraints by specifying the
`mlir::VectorType` C++ class, so that explicit casts to `VectorType` can
be avoided in some places.
2025-04-09 21:35:32 +02:00
Adam Siemieniuk
0c2a6f2d62
[mlir][x86vector] Simplify intrinsic generation (#133692)
Replaces separate x86vector named intrinsic operations with direct calls
to LLVM intrinsic functions.
    
This rework reduces the number of named ops, leaving only high-level MLIR
equivalents of whole intrinsic classes, e.g., variants of AVX512 dot on
BF16 inputs. Dialect conversion applies LLVM intrinsic name mangling,
further simplifying the lowering logic.
    
The separate conversion step translating x86vector intrinsics into LLVM
IR is also eliminated. Instead, this step is now performed by the
existing llvm dialect infrastructure.

RFC:
https://discourse.llvm.org/t/rfc-simplify-x86-intrinsic-generation/85581
2025-04-09 19:59:37 +02:00
Jan Leyonberg
1aed6ad906
[MLIR][OpenMP] Enable multiple variables for target teams reductions (#134903)
This patch enables multiple reductions to be used in a reduction clause
inside target regions for GPU offloading.

---------

Co-authored-by: Sergio Afonso <safonsof@amd.com>
2025-04-09 13:01:53 -04:00
Jerry-Ge
751c3f51eb
[mlir][tosa] Update TileOp infer shape (#134732)
Update TileOp's shape inference to use getConstShapeValues.

Signed-off-by: Tai Ly <tai.ly@arm.com>
Co-authored-by: Tai Ly <tai.ly@arm.com>
2025-04-09 09:29:29 -07:00
Matthias Springer
a00a61d59b
[mlir][IR] Improve error message when parsing incorrect type (#134984)
Improve error messages when parsing an incorrect type.

Before:
```
invalid kind of type specified
```

After:
```
invalid kind of type specified: expected builtin.tensor, but found 'tensor<*xi32>'
```

This error message is produced when a certain operand/result type is
expected according to an op's TableGen definition, but a different type
is parsed. Type constraints (which may have nice error messages) are
checked after parsing a type. If an incorrect type is parsed, we never
get to the point of printing type constraint error messages. This may
discourage users from specifying C++ classes with type constraints.
(Explicitly specifying C++ classes is beneficial because the
auto-generated C++ code will have richer type information; explicit
casts are unnecessary, etc.) See #134981 for an example where specifying
additional type information with type constraints (e.g.,
`LLVM_AnyVector`) led to worse error messages.

Note: In order to generate a better error message, the parser must
retrieve a type's name from the C++ class. TableGen-generated type
classes always have a `name` field, but hand-written C++ type classes
may not. The `HasStaticName` template was copied from
`DialectImplementation.h` (`HasStaticDialectName`).
2025-04-09 17:49:47 +02:00
ivangarcia44
5083e80c14
Folding extract_strided_metadata input into reinterpret_cast (#134845)
We can always fold the input of an extract_strided_metadata operation into
the input of a reinterpret_cast operation, because they point to the same
memory. Note that the reinterpret_cast does not use the layout of its
input memref, only its base memory pointer, which is the same as the base
pointer returned by the extract_strided_metadata operation and the base
pointer of the extract_strided_metadata memref input.

Operations like expand_shape, collapse_shape, and subview are lowered to
a pair of extract_strided_metadata and reinterpret_cast like this:

```mlir
%base_buffer, %offset, %sizes:2, %strides:2 =
    memref.extract_strided_metadata %input_memref :
    memref<ID1x...xIDNxBaseType> -> memref<f32>, index, index, index, index, index

%reinterpret_cast = memref.reinterpret_cast %base_buffer to offset: [%o1],
    sizes: [%d1,...,%dN], strides: [%s1,...,%sN] : memref<f32> to
    memref<OD1x...xODNxBaseType>
```

In many cases the input of the extract_strided_metadata operation can be
passed directly into the input of the reinterpret_cast operation like this
(note how, compared to the snippet above, %base_buffer is replaced by
%input_memref in the reinterpret_cast and the input type is updated):

```mlir
%base_buffer, %offset, %sizes:2, %strides:2 =
    memref.extract_strided_metadata %input_memref :
    memref<ID1x...xIDNxBaseType> -> memref<f32>, index, index, index, index, index
%reinterpret_cast = memref.reinterpret_cast %input_memref to offset: [%o1],
    sizes: [%d1,...,%dN], strides: [%s1,...,%sN] :
    memref<ID1x...xIDNxBaseType> to memref<OD1x...xODNxBaseType>
```

When dealing with static dimensions, the extract_strided_metadata becomes
dead code and we end up with only a reinterpret_cast:

```mlir
%reinterpret_cast = memref.reinterpret_cast %input_memref to offset: [%o1],
    sizes: [%d1,...,%dN], strides: [%s1,...,%sN] :
    memref<ID1x...xIDNxBaseType> to memref<OD1x...xODNxBaseType>
```

Note that reinterpret_cast only reads the base memory pointer from the
input memref (%input_memref above), which is equivalent to the
%base_buffer returned by the extract_strided_metadata operation. Hence it
is always legal to use the extract_strided_metadata input memref directly
in the reinterpret_cast. Note that since this is a pointer, the fold is
legal even when the base pointer values are modified between the
operation pair.

@matthias-springer 
@joker-eph 
@sahas3
@Hanumanth04
@dixinzhou
@rafaelubalmw

---------

Co-authored-by: Ivan Garcia <igarcia@vdi-ah2ddp-178.dhcp.mathworks.com>
2025-04-09 16:50:16 +02:00
Sergio Afonso
0de48de36e
[MLIR][OpenMP] Improve loop wrapper op verifiers (#134833)
This patch revisits op verifiers for `LoopWrapperInterface` operations
to improve consistency across operations and to properly cover some
previously misreported cases.

Checks that should be done for these kinds of operations are documented
in the interface description.
2025-04-09 12:36:07 +01:00
NimishMishra
53fa92dcad
[mlir][llvm][OpenMP] Hoist __atomic_load alloca (#132888)
The current implementation of `__atomic_compare_exchange` uses an alloca for
`__atomic_load`, leading to issues like
https://github.com/llvm/llvm-project/issues/120724. This PR hoists this
alloca to `AllocaIP`.


Fixes: https://github.com/llvm/llvm-project/issues/120724
2025-04-09 03:01:44 -07:00
Luke Hutton
20d1888cbe
[mlir][tosa] Update the description of rescale and variable ops (#134815)
Updates the description to align with the specification. Also includes
some small cleanup to `sigmoid`, to avoid confusion.

Signed-off-by: Luke Hutton <luke.hutton@arm.com>
2025-04-09 10:01:16 +01:00
Chao Chen
34d586fdd5
[MLIR][XeGPU] Extend SGMapAttr and Add ConvertLayoutOp (#132425)
This PR improves the SGMapAttr to enable workgroup-level programming, representing the first step in expanding the XeGPU dialect from the subgroup to the workgroup level, and renames it to LayoutAttr.
2025-04-08 19:46:05 -05:00
Georgios Pinitas
9c38b2e513
[mlir][tosa] Fold PadOp to tensor operations (#132700) 2025-04-09 00:01:54 +01:00
Jerry-Ge
189baedb71
[mlir][tosa] Add missing divider in tosa-infer-shapes.mlir (#134883)
Minor format fix.

Signed-off-by: Jerry Ge <jerry.ge@arm.com>
2025-04-08 21:08:36 +01:00
Matthias Springer
234d30e36b
[mlir][LLVM] Delete LLVMFixedVectorType and LLVMScalableVectorType (#133286)
Since #125690, the MLIR vector type supports `!llvm.ptr` as an element
type. The only remaining element type for `LLVMFixedVectorType` is now
`LLVMPPCFP128Type`.

This commit turns `LLVMPPCFP128Type` into a proper FP type (by
implementing `FloatTypeInterface`), so that the MLIR vector type accepts
it as an element type. This makes `LLVMFixedVectorType` obsolete.
`LLVMScalableVectorType` is also obsolete. This commit deletes
`LLVMFixedVectorType` and `LLVMScalableVectorType`.

Note for LLVM integration: Use `VectorType` instead of
`LLVMFixedVectorType` and `LLVMScalableVectorType`.
2025-04-08 20:28:24 +02:00
Matthias Springer
b7b3758e88
[mlir][IR] Add VectorTypeElementInterface with !llvm.ptr (#133455)
This commit extends the MLIR vector type to support pointer-like types
such as `!llvm.ptr` and `!ptr.ptr`, as indicated by the newly added
`VectorTypeElementInterface`. This makes the LLVM dialect closer to LLVM
IR. LLVM IR already supports pointers as vector element type.
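
For illustration, a minimal sketch of types the vector type now accepts
(function names are made up):

```mlir
// Pointer element types are now valid builtin vector element types.
func.func private @takes_ptr_vector(vector<4x!llvm.ptr>)
func.func private @takes_scalable_ptr_vector(vector<[4]x!llvm.ptr>)
```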

Only integers, floats, pointers and index are valid vector element types
for now. Additional vector element types may be added in the future
after further discussions. The interface is still evolving and may
eventually turn into one of the alternatives that were discussed on the
RFC.

This commit also disallows `!llvm.ptr` as an element type of
`!llvm.vec`. This type exists due to limitations of the MLIR vector
type.

RFC:
https://discourse.llvm.org/t/rfc-allow-pointers-as-element-type-of-vector/85360
2025-04-08 19:21:45 +02:00
tdanyluk
76d2e0881e
[mlir] fix references of attributes which are not defined earlier (#134364)
If an attribute is not defined earlier in the same file but is just
referenced via its dialect directly, the correct check is currently not
being emitted.

What it emits for #toy.shape<[1, 2, 3]>:
Earlier:
// CHECK: #[['?']]<[1, 2, 3]>
Now:
// CHECK: #toy.shape<[1, 2, 3]>
2025-04-08 17:34:20 +02:00
Christopher McGirr
ae3faea1f2
[MLIR][mlir-opt] move action debugger hook flag (#134842)
Currently if a developer uses the flag `--mlir-enable-debugger-hook` the
debugger hook is not actually enabled. It seems the DebugConfig and the
MainMLIROptConfig are not connected.

To fix this we can move the `enableDebuggerHook` CL Option to the
DebugConfigCLOptions struct so that it can get registered and enabled
along with the other debugger flags. AFAICS there are no other uses of
the flag so this should be safe.

This also adds a small LIT test to check that the hook is enabled by
checking the std::cerr output for the log message.
2025-04-08 16:54:11 +02:00
Michael Liao
4f77e50042 [MLIR][AMDGPU] Fix shared build. NFC 2025-04-08 10:46:15 -04:00
Christian Sigg
3a6b9b3a87 [mlir][bazel] Fix after dae0ef53a0b99c6c2b74143baee5896e8bc5c8e7
Remove unnecessary include.
2025-04-08 15:47:14 +02:00
Alan Li
dae0ef53a0
[MLIR][AMDGPU] Add a wrapper for global LDS load intrinsics in AMDGPU (#133498)
Define a new `amdgpu.global_load` op, which is a thin wrapper around the
ROCDL `global_load_lds` intrinsic, along with its lowering logic to
`rocdl.global.load.lds`.
2025-04-08 09:18:30 -04:00
TatWai Chong
728320f946
[mlir][tosa] Increase test coverage for profile-based validation (#134754)
Add more tests to increase test coverage.
2025-04-08 13:33:16 +01:00
Jerry-Ge
f0bdeb4b6a
[mlir][tosa] Cleanup ops.mlir (#134751)
* Add missing CHECK-LABEL
* Remove whitespace for consistency

Signed-off-by: Jerry Ge <jerry.ge@arm.com>
2025-04-08 09:34:40 +01:00
Jerry-Ge
f4328d0d3a
[mlir][tosa] Remove out_shape attribute from transpose_conv2d attributes (#134743)
out_shape is no longer an attribute

Signed-off-by: Jerry Ge <jerry.ge@arm.com>
2025-04-08 09:33:35 +01:00
Jerry-Ge
ccdc44f643
[mlir][tosa] Remove perms input for tosa.transpose tests (#134740)
Perms is now an attribute, not an input.
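
For example, a hedged sketch of the updated form (shapes are made up, and
the attribute is assumed to print as a dense i32 array):

```mlir
// Previously perms was a separate input operand; now it is an attribute.
%0 = tosa.transpose %arg0 {perms = array<i32: 1, 0>} : (tensor<2x3xf32>) -> tensor<3x2xf32>
```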

Signed-off-by: Jerry Ge <jerry.ge@arm.com>
2025-04-08 09:32:45 +01:00