[InstCombine] Canonicalize constant GEPs to i8 source element type #68882
Conversation
I looked through the test diffs, and it seems like the only substantial regressions are all related to the indexed compare fold, which should be made type-independent.
; CHECK-NEXT: [[T16:%.*]] = call i32 (ptr, ...) @printf(ptr noundef nonnull dereferenceable(1) @.str1, ptr nonnull [[T12]]) #[[ATTR0]]
; CHECK-NEXT: [[T84:%.*]] = icmp eq i32 [[INDVAR]], 0
; CHECK-NEXT: [[T84:%.*]] = icmp eq ptr [[T12]], [[ORIENTATIONS]]
Indexed compare fold regression.
Turns out this one is not the indexed compare fold, but the icmp %p, gep(%p) fold. Now we have two GEPs, though, and it no longer triggers.
(I still plan to fix this remaining regression, but I don't think it needs to block this PR.)
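For reference, a minimal sketch of the two shapes involved (all names and offsets here are hypothetical):

; The fold matches a compare of %p against a single GEP based on %p:
%g = getelementptr inbounds i32, ptr %p, i64 %i
%c = icmp eq ptr %g, %p                      ; folds to: icmp eq i64 %i, 0
; After canonicalization the constant part is split into a separate i8 GEP,
; so the compared pointer is no longer a single GEP of %p and the pattern
; match fails:
%g0 = getelementptr inbounds i8, ptr %p, i64 8
%g1 = getelementptr inbounds i32, ptr %g0, i64 %i
%c2 = icmp eq ptr %g1, %p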
; CHECK-NEXT: [[GEP2:%.*]] = getelementptr i64, ptr [[P]], i64 2
; CHECK-NEXT: [[S:%.*]] = select i1 [[C:%.*]], ptr [[GEP1]], ptr [[GEP2]]
; CHECK-NEXT: [[S_V:%.*]] = select i1 [[C:%.*]], i64 4, i64 16
; CHECK-NEXT: [[S:%.*]] = getelementptr i8, ptr [[P:%.*]], i64 [[S_V]]
Improvement.
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PHI]], i64 1
; CHECK-NEXT: [[TMP1:%.*]] = phi i64 [ 4, [[IF]] ], [ 16, [[ELSE]] ]
; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[P:%.*]], i64 [[TMP1]]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[TMP2]], i64 4
Improvement.
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr [6 x i32], ptr [[ARG:%.*]], i64 3, i64 [[ARG1:%.*]]
; CHECK-NEXT: ret ptr [[TMP1]]
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[ARG:%.*]], i64 72
; CHECK-NEXT: [[TMP2:%.*]] = getelementptr [6 x i32], ptr [[TMP1]], i64 0, i64 [[ARG1:%.*]]
Possible regression. We kind of want to move in this direction, though it happens by accident here.
…71663) The indexed compare fold converts comparisons of GEPs with the same (indirect) base into comparisons of offsets. Currently, it only supports GEPs with the same source element type. This change makes the transform operate on offsets instead, which removes the type dependence. To keep closer to the scope of the original implementation, this keeps the limitation that we should only have at most one variable index per GEP. This addresses the main regression from #68882. TBH I have some doubts that this is really a useful transform (at least for the case where there are extra pointer users, so we have to rematerialize pointers at some point). I can only assume it exists for a reason...
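As a hedged illustration of what operating on offsets enables (types and constants invented here):

; Same base %p, different source element types; working on offsets lets the
; pointer compare become an offset compare regardless of the GEP type:
%a = getelementptr inbounds i32, ptr %p, i64 %i   ; offset = 4 * %i
%b = getelementptr inbounds i8, ptr %p, i64 8     ; offset = 8
%c = icmp eq ptr %a, %b                           ; -> compare 4 * %i with 8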
(force-pushed from 8190be1 to 7b27c5b)
@@ -2282,6 +2282,15 @@ Instruction *InstCombinerImpl::visitGetElementPtrInst(GetElementPtrInst &GEP) {
if (MadeChange)
return &GEP;

// Canonicalize constant GEPs to i8 type. |
FWIW, this is a bit interesting downstream with non-8-bit addressing units :-)
Today, i8 would be quite rare as a GEP type for us, as it is smaller than the addressing unit size. But I figure that canonicalizing to some other type downstream could end up as a lot of work (or lots of missed optimizations).
Afaict accumulateConstantOffset returns an offset that depends on TypeStoreSize. So as long as this is used in a way where the address is still given by baseptr + DL.getTypeStoreSize(i8) * Offset, and not baseptr + 8 * Offset, then I guess things will be fine (i.e. not assuming that the offset is an 8-bit offset).
As far as I can tell, you could just as well have picked i1 instead of i8 (given that DL.getTypeStoreSize(i1) == DL.getTypeStoreSize(i8)). That would probably look confusing, but that is what happens for us when using i8 as the type, as we can't address individual octets.
(I also see this as a reminder for looking at the ptradd RFC to understand how that will impact us, so that we are prepared for that.)
GEP operates in terms of bytes, not bits. The size of i8 is required to be 1. GEP doesn't care what the mapping from size to size in bits looks like. So if you want to map i8 to 32 bits, that should be fine as far as GEP/ptradd are concerned (though it of course breaks all kinds of other assumptions).
Note that i8 GEPs are already generated by some important components like SCEVExpander, so it's pretty likely that your target should handle them already (unless you are currently patching all the places generating them of course).
Right. So things can be expected to just work (given getTypeStoreSize(i8)==1), even when the addressing unit isn't 8 bits.
Since
%gep = getelementptr i3, ptr %p, i16 1
%gep = getelementptr i8, ptr %p, i16 1
%gep = getelementptr i16, ptr %p, i16 1
are all equivalent (for my target), this patch just makes that more obvious by canonicalizing them into a single form.
So we just need to update some lit test checks to expect "getelementptr i8" instead of "getelementptr i16" downstream, and hopefully things will be fine.
Reading about the planned steps to move from getelementptr to ptradd, I would appreciate the opportunity to not add more hardcoded instances of i8 (i.e. isIntegerTy(8) or getInt8Ty below) in this effort. You don't need an 8-bit container; you want the canonical type that has getTypeStoreSize(Ty) == 1, and it would be much better if there were a minimal API that is just a slight abstraction over i8 but that makes this goal apparent. (I would personally use something with the terminology "addressable unit type", so "isAddressableUnitType", "getAddressableUnitType", etc.)
For context, we are also a downstream user in architectural contexts with memories that are not byte-addressable. This can be a main memory that is word-addressable and does not have byte or sub-word access (so the canonical unit type is naturally i16 or i24... although i8 would still work), but also additional memories that cannot hold integers at all, for example a separate memory (address space) for vectors (where the canonical unit type is a vector and i8 is not storable in that memory/address space). Whenever there are hardcoded instances of i8 in the code base, we typically have to review them in our tree (yes, SCEVExpander). But it makes a difference whether the code is assuming an 8-bit container, or just needs a type with getTypeStoreSize(Ty) == 1.
(force-pushed from 7b27c5b to 59e52f8)
@llvm/pr-subscribers-backend-risc-v @llvm/pr-subscribers-clang

Author: Nikita Popov (nikic)

Changes

This patch canonicalizes getelementptr instructions with constant indices to use the i8 source element type. This makes it easier for optimizations to recognize that two GEPs are identical, because they don't need to see past many different ways to express the same offset.

This is a first step towards https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699. This is limited to constant GEPs only for now, as they have a clear canonical form, while we're not yet sure how exactly to deal with variable indices.

The test llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll gives two representative examples of the kind of optimization improvement we expect from this change. In the first test SimplifyCFG can now realize that all switch branches are actually the same. In the second test it can convert it into simple arithmetic. These are representative of common enum optimization failures we see in Rust.

Patch is 775.99 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/68882.diff

174 Files Affected:
diff --git a/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c b/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
index 3922513e22469a..5422d993ff1575 100644
--- a/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
+++ b/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
@@ -25,13 +25,13 @@ void test1(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, unsi
// CHECK-NEXT: [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 0
// CHECK-NEXT: store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
// CHECK-NEXT: [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
// CHECK-NEXT: store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
// CHECK-NEXT: [[TMP5:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 2
-// CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 2
+// CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 32
// CHECK-NEXT: store <16 x i8> [[TMP5]], ptr [[TMP6]], align 16
// CHECK-NEXT: [[TMP7:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 3
-// CHECK-NEXT: [[TMP8:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 3
+// CHECK-NEXT: [[TMP8:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 48
// CHECK-NEXT: store <16 x i8> [[TMP7]], ptr [[TMP8]], align 16
// CHECK-NEXT: ret void
//
@@ -60,7 +60,7 @@ void test3(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, unsi
// CHECK-NEXT: [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 0
// CHECK-NEXT: store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
// CHECK-NEXT: [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
// CHECK-NEXT: store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
// CHECK-NEXT: ret void
//
@@ -1072,7 +1072,7 @@ void test76(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, uns
// CHECK-NEXT: [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 0
// CHECK-NEXT: store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
// CHECK-NEXT: [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
// CHECK-NEXT: store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
// CHECK-NEXT: ret void
//
diff --git a/clang/test/CodeGen/aarch64-ls64-inline-asm.c b/clang/test/CodeGen/aarch64-ls64-inline-asm.c
index ac2dbe1fa1b31a..744d6919b05ee4 100644
--- a/clang/test/CodeGen/aarch64-ls64-inline-asm.c
+++ b/clang/test/CodeGen/aarch64-ls64-inline-asm.c
@@ -16,8 +16,8 @@ void load(struct foo *output, void *addr)
// CHECK-LABEL: @store(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP1:%.*]] = load i512, ptr [[INPUT:%.*]], align 8
-// CHECK-NEXT: tail call void asm sideeffect "st64b $0,[$1]", "r,r,~{memory}"(i512 [[TMP1]], ptr [[ADDR:%.*]]) #[[ATTR1]], !srcloc !3
+// CHECK-NEXT: [[TMP0:%.*]] = load i512, ptr [[INPUT:%.*]], align 8
+// CHECK-NEXT: tail call void asm sideeffect "st64b $0,[$1]", "r,r,~{memory}"(i512 [[TMP0]], ptr [[ADDR:%.*]]) #[[ATTR1]], !srcloc !3
// CHECK-NEXT: ret void
//
void store(const struct foo *input, void *addr)
@@ -29,25 +29,25 @@ void store(const struct foo *input, void *addr)
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load i32, ptr [[IN:%.*]], align 4, !tbaa [[TBAA4:![0-9]+]]
// CHECK-NEXT: [[CONV:%.*]] = sext i32 [[TMP0]] to i64
-// CHECK-NEXT: [[ARRAYIDX1:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 1
+// CHECK-NEXT: [[ARRAYIDX1:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 4
// CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr [[ARRAYIDX1]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV2:%.*]] = sext i32 [[TMP1]] to i64
-// CHECK-NEXT: [[ARRAYIDX4:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 4
+// CHECK-NEXT: [[ARRAYIDX4:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 16
// CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[ARRAYIDX4]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV5:%.*]] = sext i32 [[TMP2]] to i64
-// CHECK-NEXT: [[ARRAYIDX7:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 16
+// CHECK-NEXT: [[ARRAYIDX7:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 64
// CHECK-NEXT: [[TMP3:%.*]] = load i32, ptr [[ARRAYIDX7]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV8:%.*]] = sext i32 [[TMP3]] to i64
-// CHECK-NEXT: [[ARRAYIDX10:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 25
+// CHECK-NEXT: [[ARRAYIDX10:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 100
// CHECK-NEXT: [[TMP4:%.*]] = load i32, ptr [[ARRAYIDX10]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV11:%.*]] = sext i32 [[TMP4]] to i64
-// CHECK-NEXT: [[ARRAYIDX13:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 36
+// CHECK-NEXT: [[ARRAYIDX13:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 144
// CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[ARRAYIDX13]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV14:%.*]] = sext i32 [[TMP5]] to i64
-// CHECK-NEXT: [[ARRAYIDX16:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 49
+// CHECK-NEXT: [[ARRAYIDX16:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 196
// CHECK-NEXT: [[TMP6:%.*]] = load i32, ptr [[ARRAYIDX16]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV17:%.*]] = sext i32 [[TMP6]] to i64
-// CHECK-NEXT: [[ARRAYIDX19:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 64
+// CHECK-NEXT: [[ARRAYIDX19:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 256
// CHECK-NEXT: [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX19]], align 4, !tbaa [[TBAA4]]
// CHECK-NEXT: [[CONV20:%.*]] = sext i32 [[TMP7]] to i64
// CHECK-NEXT: [[S_SROA_10_0_INSERT_EXT:%.*]] = zext i64 [[CONV20]] to i512
diff --git a/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c b/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
index 22e2e0c2ff102d..323afb64591249 100644
--- a/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
+++ b/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
@@ -30,21 +30,21 @@ DEFINE_STRUCT(bool)
// CHECK-128-LABEL: @read_int64(
// CHECK-128-NEXT: entry:
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
// CHECK-128-NEXT: [[TMP0:%.*]] = load <2 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
// CHECK-128-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v2i64(<vscale x 2 x i64> undef, <2 x i64> [[TMP0]], i64 0)
// CHECK-128-NEXT: ret <vscale x 2 x i64> [[CAST_SCALABLE]]
//
// CHECK-256-LABEL: @read_int64(
// CHECK-256-NEXT: entry:
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
// CHECK-256-NEXT: [[TMP0:%.*]] = load <4 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
// CHECK-256-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v4i64(<vscale x 2 x i64> undef, <4 x i64> [[TMP0]], i64 0)
// CHECK-256-NEXT: ret <vscale x 2 x i64> [[CAST_SCALABLE]]
//
// CHECK-512-LABEL: @read_int64(
// CHECK-512-NEXT: entry:
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
// CHECK-512-NEXT: [[TMP0:%.*]] = load <8 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
// CHECK-512-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v8i64(<vscale x 2 x i64> undef, <8 x i64> [[TMP0]], i64 0)
// CHECK-512-NEXT: ret <vscale x 2 x i64> [[CAST_SCALABLE]]
@@ -56,21 +56,21 @@ svint64_t read_int64(struct struct_int64 *s) {
// CHECK-128-LABEL: @write_int64(
// CHECK-128-NEXT: entry:
// CHECK-128-NEXT: [[CAST_FIXED:%.*]] = tail call <2 x i64> @llvm.vector.extract.v2i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
// CHECK-128-NEXT: store <2 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-128-NEXT: ret void
//
// CHECK-256-LABEL: @write_int64(
// CHECK-256-NEXT: entry:
// CHECK-256-NEXT: [[CAST_FIXED:%.*]] = tail call <4 x i64> @llvm.vector.extract.v4i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
// CHECK-256-NEXT: store <4 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-256-NEXT: ret void
//
// CHECK-512-LABEL: @write_int64(
// CHECK-512-NEXT: entry:
// CHECK-512-NEXT: [[CAST_FIXED:%.*]] = tail call <8 x i64> @llvm.vector.extract.v8i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
// CHECK-512-NEXT: store <8 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-512-NEXT: ret void
//
@@ -84,21 +84,21 @@ void write_int64(struct struct_int64 *s, svint64_t x) {
// CHECK-128-LABEL: @read_float64(
// CHECK-128-NEXT: entry:
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
// CHECK-128-NEXT: [[TMP0:%.*]] = load <2 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-128-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v2f64(<vscale x 2 x double> undef, <2 x double> [[TMP0]], i64 0)
// CHECK-128-NEXT: ret <vscale x 2 x double> [[CAST_SCALABLE]]
//
// CHECK-256-LABEL: @read_float64(
// CHECK-256-NEXT: entry:
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
// CHECK-256-NEXT: [[TMP0:%.*]] = load <4 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-256-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v4f64(<vscale x 2 x double> undef, <4 x double> [[TMP0]], i64 0)
// CHECK-256-NEXT: ret <vscale x 2 x double> [[CAST_SCALABLE]]
//
// CHECK-512-LABEL: @read_float64(
// CHECK-512-NEXT: entry:
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
// CHECK-512-NEXT: [[TMP0:%.*]] = load <8 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-512-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v8f64(<vscale x 2 x double> undef, <8 x double> [[TMP0]], i64 0)
// CHECK-512-NEXT: ret <vscale x 2 x double> [[CAST_SCALABLE]]
@@ -110,21 +110,21 @@ svfloat64_t read_float64(struct struct_float64 *s) {
// CHECK-128-LABEL: @write_float64(
// CHECK-128-NEXT: entry:
// CHECK-128-NEXT: [[CAST_FIXED:%.*]] = tail call <2 x double> @llvm.vector.extract.v2f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
// CHECK-128-NEXT: store <2 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-128-NEXT: ret void
//
// CHECK-256-LABEL: @write_float64(
// CHECK-256-NEXT: entry:
// CHECK-256-NEXT: [[CAST_FIXED:%.*]] = tail call <4 x double> @llvm.vector.extract.v4f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
// CHECK-256-NEXT: store <4 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-256-NEXT: ret void
//
// CHECK-512-LABEL: @write_float64(
// CHECK-512-NEXT: entry:
// CHECK-512-NEXT: [[CAST_FIXED:%.*]] = tail call <8 x double> @llvm.vector.extract.v8f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
// CHECK-512-NEXT: store <8 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-512-NEXT: ret void
//
@@ -138,21 +138,21 @@ void write_float64(struct struct_float64 *s, svfloat64_t x) {
// CHECK-128-LABEL: @read_bfloat16(
// CHECK-128-NEXT: entry:
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
// CHECK-128-NEXT: [[TMP0:%.*]] = load <8 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-128-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v8bf16(<vscale x 8 x bfloat> undef, <8 x bfloat> [[TMP0]], i64 0)
// CHECK-128-NEXT: ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
//
// CHECK-256-LABEL: @read_bfloat16(
// CHECK-256-NEXT: entry:
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
// CHECK-256-NEXT: [[TMP0:%.*]] = load <16 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-256-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v16bf16(<vscale x 8 x bfloat> undef, <16 x bfloat> [[TMP0]], i64 0)
// CHECK-256-NEXT: ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
//
// CHECK-512-LABEL: @read_bfloat16(
// CHECK-512-NEXT: entry:
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
// CHECK-512-NEXT: [[TMP0:%.*]] = load <32 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-512-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v32bf16(<vscale x 8 x bfloat> undef, <32 x bfloat> [[TMP0]], i64 0)
// CHECK-512-NEXT: ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
@@ -164,21 +164,21 @@ svbfloat16_t read_bfloat16(struct struct_bfloat16 *s) {
// CHECK-128-LABEL: @write_bfloat16(
// CHECK-128-NEXT: entry:
// CHECK-128-NEXT: [[CAST_FIXED:%.*]] = tail call <8 x bfloat> @llvm.vector.extract.v8bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
// CHECK-128-NEXT: store <8 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-128-NEXT: ret void
//
// CHECK-256-LABEL: @write_bfloat16(
// CHECK-256-NEXT: entry:
// CHECK-256-NEXT: [[CAST_FIXED:%.*]] = tail call <16 x bfloat> @llvm.vector.extract.v16bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
// CHECK-256-NEXT: store <16 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-256-NEXT: ret void
//
// CHECK-512-LABEL: @write_bfloat16(
// CHECK-512-NEXT: entry:
// CHECK-512-NEXT: [[CAST_FIXED:%.*]] = tail call <32 x bfloat> @llvm.vector.extract.v32bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
// CHECK-512-NEXT: store <32 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
// CHECK-512-NEXT: ret void
//
@@ -192,7 +192,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
// CHECK-128-LABEL: @read_bool(
// CHECK-128-NEXT: entry:
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 2
// CHECK-128-NEXT: [[TMP0:%.*]] = load <2 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
// CHECK-128-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v2i8(<vscale x 2 x i8> undef, <2 x i8> [[TMP0]], i64 0)
// CHECK-128-NEXT: [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -200,7 +200,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
//
// CHECK-256-LABEL: @read_bool(
// CHECK-256-NEXT: entry:
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 4
// CHECK-256-NEXT: [[TMP0:%.*]] = load <4 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
// CHECK-256-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v4i8(<vscale x 2 x i8> undef, <4 x i8> [[TMP0]], i64 0)
// CHECK-256-NEXT: [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -208,7 +208,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
//
// CHECK-512-LABEL: @read_bool(
// CHECK-512-NEXT: entry:
-// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 8
// CHECK-512-NEXT: [[TMP0:%.*]] = load <8 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
// CHECK-512-NEXT: [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v8i8(<vscale x 2 x i8> undef, <8 x i8> [[TMP0]], i64 0)
// CHECK-512-NEXT: [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -222,7 +222,7 @@ svbool_t read_bool(struct struct_bool *s) {
// CHECK-128-NEXT: entry:
// CHECK-128-NEXT: [[TMP0:%.*]] = bitcast <vscale x 16 x i1> [[X:%.*]] to <vscale x 2 x i8>
// CHECK-128-NEXT: [[CAST_FIXED:%.*]] = tail call <2 x i8> @llvm.vector.extract.v2i8.nxv2i8(<vscale x 2 x i8> [[TMP0]], i64 0)
-// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 2
// CHECK-128-NEXT: store <2 x i8> [[CAST_FIXED]], ptr [[Y]], align 2, !tbaa [[TBAA2]]
// CHECK-128-NEXT: ret void
//
@@ -230,7 +230,7 @@ svbool_t read_bool(struct struct_bool *s) {
// CHECK-256-NEXT: entry:
// CHECK-256-NEXT: [[TMP0:%.*]] = bitcast <vscale x 16 x i1> [[X:%.*]] to <vscale x 2 x i8>
// CHECK-256-NEXT: [[CAST_FIXED:%.*]] = tail call <4 x i8> @llvm.vector.extract.v4i8.nxv2i8(<vscale x 2 x i8> [[TMP0]], i64 0)
-// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT: [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 4
// CHECK-256-NEXT:...
[truncated]
(force-pushed from 59e52f8 to 4ce94ca)
I think this is a good direction to go, lgtm
LGTM, looks like a great first step! Will be interesting to see what kind of regressions this surfaces (if any)
@nikic Could you please have a look at dtcxzyw/llvm-opt-benchmark#17?
Alive2: https://alive2.llvm.org/ce/z/JfN5sB
I will post a patch later.
@dtcxzyw GitHub can't display the diff, and struggles to clone the repo. Can you share the diffs for just the mentioned files?
Another example:
I have posted the diff between optimized IRs. |
As mentioned in llvm#68882 and https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699 Gep arithmetic isn't consistent with different types. GVNSink didn't realize this and sank all geps as long as their operands can be wired via PHIs in a post-dominator.
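A minimal example of the inconsistency the commit message describes (hypothetical IR):

; Identical operands, different source element types, different addresses;
; sinking these into one GEP fed by PHIs would therefore miscompile:
%x = getelementptr i16, ptr %p, i64 %i   ; %p + 2 * %i
%y = getelementptr i32, ptr %p, i64 %i   ; %p + 4 * %i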
This patch canonicalizes constant expression GEPs to use i8 source element type, aka ptradd. This is the ConstantFolding equivalent of the InstCombine canonicalization introduced in llvm#68882. I believe all our optimizations working on constant expression GEPs (like GlobalOpt etc) have already been switched to work on offsets, so I don't expect any significant fallout from this change. This is part of: https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699
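For illustration, the same canonicalization applied to a constant expression GEP (made-up global):

@g = global [4 x i32] zeroinitializer
@p = global ptr getelementptr (i32, ptr @g, i64 2)
; becomes
@p = global ptr getelementptr (i8, ptr @g, i64 8)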
…into `gep T, (gep i8, base, C1 + C2 * sizeof(T)), Index` (#76177) This patch tries to canonicalize `gep T, (gep i8, base, C1), (Index + C2)` into `gep T, (gep i8, base, C1 + C2 * sizeof(T)), Index`. Alive2: https://alive2.llvm.org/ce/z/dxShKF Fixes regressions found in #68882.
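Instantiating that transform with T = i32, C1 = 4, C2 = 2 (values chosen for illustration):

%p1 = getelementptr i8, ptr %base, i64 4
%i1 = add i64 %index, 2
%g1 = getelementptr i32, ptr %p1, i64 %i1
; new constant offset: C1 + C2 * sizeof(i32) = 4 + 2 * 4 = 12
%p2 = getelementptr i8, ptr %base, i64 12
%g2 = getelementptr i32, ptr %p2, i64 %index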
Always generate GEP i8 / ptradd for struct offsets. This implements #98615, and goes a bit further to remove `struct_gep` entirely. Upstream LLVM is in the beginning stages of migrating to `ptradd` (https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699). LLVM 19 will canonicalize (llvm/llvm-project#68882) all constant-offset GEPs to i8, which has roughly the same effect as this change. Fixes #121719. Split out from #121577. r? `@nikic`
I've observed a significant regression in one of the AMDGPU benchmarks after applying this patch; the base address calculation within the unrolled loop seems to be the source. I've attached "before.log" and "after.log" files that detail the issue. The modified GEP format introduced by this patch doesn't align with the canonical form expected by the "separate-constant-offset-from-gep" pass, so the "straight-line strength reduction" (SLSR) pass cannot optimize the computation. While the intention behind this patch, replicating some "split-gep" pass functionality, is understood, the unintended impact on the SLSR pass is notable. Before I delve into potential solutions, I would greatly appreciate your insights and perspective on this matter.
As mentioned in #68882 and https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699 Gep arithmetic isn't consistent with different types. GVNSink didn't realize this and sank all geps as long as their operands can be wired via PHIs in a post-dominator. Fixes: #85333 Reapply: #88440 after fixing the non-determinism issues in #90995
I am debugging a Triton issue (triton-lang/triton#4060), where an {i32, i32, i32, i64} struct passed to vprintf is printing the wrong value for the i64. The issue here seems to be that Triton creates an llvm::Module with a default DataLayout. In the default layout, the i64 ABI alignment is 4 (DefaultAlignments in DataLayout.cpp). This causes the optimization in this PR to rewrite the GEP to the i64 to be at offset 12, but vprintf expects the i64 to be at offset 16.
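A sketch of the mismatch (%args and the target layout string are assumptions on my part):

; Default (empty) data layout: i64 ABI alignment is 4, so the i64 field of
; { i32, i32, i32, i64 } lands at byte offset 12:
%v = getelementptr inbounds i8, ptr %args, i64 12
; With a target layout specifying i64:64, the struct is padded and the field
; sits at offset 16, which is where vprintf reads it:
%w = getelementptr inbounds i8, ptr %args, i64 16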
@karthik-man LLVM always requires a correct data layout. Yes, that includes InstCombine.
What sort of correct data layout should be used if we are optimizing machine-independently? Like cross-compilation with
@htyu LLVM does not support this. Support for doing that was officially removed about ten years ago when the data layout became mandatory, but even prior to that, IR was already target-specific, e.g. due to target-specific ABI. I think some parts of MLIR may support this, but certainly nothing does at the LLVM IR level.
This patch canonicalizes getelementptr instructions with constant indices to use the i8 source element type. This makes it easier for optimizations to recognize that two GEPs are identical, because they don't need to see past many different ways to express the same offset.

This is a first step towards https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699. This is limited to constant GEPs only for now, as they have a clear canonical form, while we're not yet sure how exactly to deal with variable indices.

The test llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll gives two representative examples of the kind of optimization improvement we expect from this change. In the first test SimplifyCFG can now realize that all switch branches are actually the same. In the second test it can convert it into simple arithmetic. These are representative of common enum optimization failures we see in Rust.
Fixes #69841.
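To make the change concrete, a small before/after sketch (assuming a typical 64-bit data layout):

%a = getelementptr i32, ptr %p, i64 2                  ; -> getelementptr i8, ptr %p, i64 8
%b = getelementptr { i32, i64 }, ptr %p, i64 0, i32 1  ; -> getelementptr i8, ptr %p, i64 8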