NAKAMURA Takumi
bd91f50190
lib/CodeGen/TargetInfo.cpp: Add Win64 calling convention.
...
FIXME: This is incompatible with Microsoft's ABI on one point:
on mingw64-gcc, {i128} is expanded for args and returned in {rax, rdx}.
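For illustration, a minimal sketch of the divergent case (hypothetical function name, assuming __int128 as the i128 carrier):
// Sketch: how mingw64-gcc treats a 128-bit integer, per the note above;
// MSVC has no matching convention for this type.
__int128 twice(__int128 x) {  // expanded into two 64-bit args
  return x + x;               // result returned in rax:rdx
}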
llvm-svn: 123692
2011-01-17 22:56:31 +00:00
Bob Wilson
b9fa00e0c2
Remove special handling for opaque Neon vector types.
...
Clang does not wrap the vectors in structs anymore so this isn't needed.
llvm-svn: 123241
2011-01-11 16:53:49 +00:00
Bob Wilson
bd4520b535
Move DefaultABIInfo::classifyReturnType where it belongs. No functional change.
...
llvm-svn: 123195
2011-01-10 23:54:17 +00:00
Wesley Peck
36a1f68fec
1. Add some ABI information for the MicroBlaze.
...
2. Add attributes "interrupt_handler" and "save_volatiles" for the MicroBlaze target.
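Usage would look roughly like this (a sketch; assumes the attributes attach to function definitions, as on other embedded targets):
// Hypothetical MicroBlaze handlers using the new attributes.
void __attribute__((interrupt_handler)) timer_isr(void) {
  // prologue/epilogue save and restore machine state for interrupt return
}
void __attribute__((save_volatiles)) slow_path(void) {
  // normal call/return, but volatile registers are preserved
}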
llvm-svn: 122184
2010-12-19 19:57:51 +00:00
Benjamin Kramer
8c173cc364
Use a twine.
...
llvm-svn: 118892
2010-11-12 15:42:18 +00:00
Anders Carlsson
fd88a6160d
Rename getBaseClassOffset to getBaseClassOffsetInBits and introduce a getBaseClassOffset which returns the offset in CharUnits. Do the same thing for getVBaseClassOffset.
...
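After the rename, call sites choose their units explicitly; a sketch of the resulting API shape (assumes a Clang ASTRecordLayout and base-class decl from surrounding context):
#include "clang/AST/RecordLayout.h"
// Sketch: byte-based accessor alongside the renamed bit-based one.
clang::CharUnits baseOffset(const clang::ASTRecordLayout &Layout,
                            const clang::CXXRecordDecl *Base) {
  // Layout.getBaseClassOffsetInBits(Base) yields the same offset in bits;
  // the getVBaseClassOffset variants follow the same pattern.
  return Layout.getBaseClassOffset(Base);  // offset in CharUnits
}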
llvm-svn: 117881
2010-10-31 23:22:37 +00:00
Michael J. Spencer
f5a1fbcdf3
Fix Whitespace.
...
llvm-svn: 116798
2010-10-19 06:39:39 +00:00
Bill Wendling
9987c0ea42
We shouldn't keep track of MMX registers "needed" separately from the SSE
...
registers needed.
llvm-svn: 116772
2010-10-18 23:51:38 +00:00
Bill Wendling
5cd41c4b13
Reapply r116684 with fixes. The test cases needed to be updated.
...
llvm-svn: 116696
2010-10-18 03:41:31 +00:00
Bill Wendling
c7c9be661f
Temporarily revert r116684. It was causing failures with
...
Clang :: CodeGen/x86_32-arguments-darwin.c
Clang :: CodeGen/x86_32-arguments-linux.c
llvm-svn: 116687
2010-10-17 07:58:46 +00:00
Bill Wendling
812f4b123e
The "gcc.dg/compat/vector-1 -m32" test was broken after the MMX rewrite. The
...
function parameters weren't converted to use the correct type (x86_mmx). Add a
check, similar to the one in llvm-gcc, to see if we need the x86_mmx type for
that function parameter. If so, the parameter is coerced to that type.
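The affected parameters are 8-byte MMX-sized vectors; a sketch of the kind of signature involved (hypothetical names):
// Sketch: an 8-byte vector parameter that needs the x86_mmx IR type.
typedef short v4i16 __attribute__((vector_size(8)));
v4i16 mmx_add(v4i16 a, v4i16 b) {  // coerced to x86_mmx in the emitted IR
  return a + b;
}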
llvm-svn: 116684
2010-10-17 07:38:01 +00:00
Chris Lattner
a09e8efd1f
Per discussion with Sanjiv, remove the PIC16 target from mainline. When/if
...
it comes back, it will be largely a rewrite, so keeping the old codebase
in tree isn't helping anyone.
llvm-svn: 116191
2010-10-11 05:44:49 +00:00
Daniel Dunbar
19964dbe3b
IRgen/ABI/ARM: Return large vectors in memory.
...
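For example (a sketch, assuming "large" means wider than a 16-byte NEON register):
// Sketch: a 32-byte vector; under this change ARM returns it indirectly
// through a hidden pointer argument rather than in registers.
typedef int v8i32 __attribute__((vector_size(32)));
v8i32 forward(v8i32 x) { return x; }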
llvm-svn: 114619
2010-09-23 01:54:32 +00:00
Daniel Dunbar
b34b08098c
IRgen/ABI/ARM: Trust the backend to pass vectors correctly for the given ABI.
...
- Therefore, we can lower out the NEON wrapper structs and pass the vectors
directly. This makes a huge difference in the cleanliness of the IR after
optimization.
- I will trust, but verify, via future ABITest testing (for APCS-GNU, at
least).
llvm-svn: 114618
2010-09-23 01:54:28 +00:00
Daniel Dunbar
dd38fbc7fb
IRgen/ABI/x86-32: Realign indirect arguments when the ABI requires us to pass
...
them with a smaller alignment than the rest of codegen expects.
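A sketch of the situation: a type whose natural alignment exceeds what the ABI guarantees for its stack slot, so IRgen must copy it into a suitably aligned temporary (hypothetical names):
// Sketch: a 16-byte-aligned struct passed indirectly on x86-32, where the
// stack slot may only be 4-byte aligned; IRgen realigns via a local copy.
typedef float v4f __attribute__((vector_size(16)));
struct Aligned16 { v4f v; };
v4f take(struct Aligned16 a) { return a.v; }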
llvm-svn: 114115
2010-09-16 20:42:06 +00:00
Daniel Dunbar
7b7c2937ef
IRgen/ABI: Add support for realigning structures which are passed by indirect
...
reference.
llvm-svn: 114114
2010-09-16 20:42:02 +00:00
Daniel Dunbar
ed23de3348
IRgen/ABI/x86_32/Darwin: On Darwin, only structures with SSE vector types get passed
...
with a non-default stack-ABI alignment (of 16).
- This fixes the ABI convention, but breaks codegen since we now have
underaligned arguments. Marginal improvement overall though, and it will be
fixed in the next commit.
llvm-svn: 114113
2010-09-16 20:42:00 +00:00
Daniel Dunbar
8a6c91ff76
IRgen/x86_32/Linux: Linux seems to align all stack objects to 4 bytes, unlike
...
Darwin. Checked against the handiest Linux llvm-gcc I had around; someone on
Linux is welcome to investigate more.
llvm-svn: 114112
2010-09-16 20:41:56 +00:00
Chris Lattner
d426c8eae3
fix rdar://8360877 a really nasty miscompilation in Boost.Xpressive
...
caused by my ABI work. Passing:
struct outer {
  int x;
  struct epsilon_matcher {} e;
  int f;
};
as {i32,i32} isn't safe, because the offset of the second element
needs to be at 8 when it is interpreted as a memory value.
llvm-svn: 112686
2010-09-01 00:50:20 +00:00
Chris Lattner
be5eb17536
same refactoring as before, this time on the argument side.
...
llvm-svn: 112684
2010-09-01 00:24:35 +00:00
Chris Lattner
52b3c13149
refactor some code to cut down on redundancy, no functionality change.
...
llvm-svn: 112683
2010-09-01 00:20:33 +00:00
Chris Lattner
04dc957260
Add support for windows x86-64 varargs, patch by Cameron Esfahani!
...
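On Win64 every argument occupies a single 8-byte slot, which keeps va_arg simple; a usage sketch (hypothetical function):
#include <stdarg.h>
// Sketch: varargs under the Win64 convention; each variadic argument,
// including floating point, can be read back from an 8-byte slot.
int sum(int n, ...) {
  va_list ap;
  va_start(ap, n);
  int total = 0;
  for (int i = 0; i < n; ++i)
    total += va_arg(ap, int);
  va_end(ap);
  return total;
}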
llvm-svn: 112603
2010-08-31 16:44:54 +00:00
Chris Lattner
a48fbe8c53
Fix PR8029, an x86-32 ABI regression introduced in r112211
...
llvm-svn: 112537
2010-08-30 22:03:23 +00:00
Chris Lattner
d7e54804ee
improve comments.
...
llvm-svn: 112214
2010-08-26 20:08:43 +00:00
Chris Lattner
d774ae9ed1
fix 2xi16 to pass as i32 instead of <2 x i16>. The former passes in
...
memory (as required); the latter passes in an xmm register. This
fixes gcc.dg/compat/vector_1 on x86-32.
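The type in question is a 4-byte short vector, roughly:
// Sketch: a 2 x i16 vector; after this fix it is passed like a 4-byte
// integer (in memory on x86-32) rather than in an XMM register.
typedef short v2i16 __attribute__((vector_size(4)));
v2i16 passthrough(v2i16 v) { return v; }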
llvm-svn: 112211
2010-08-26 20:05:13 +00:00
Chris Lattner
69e683fb35
vectors of long and ulong are also classified as INTEGER in the x86-64 ABI;
...
this fixes rdar://8358475, a failure of the gcc.dg/compat/vector_1 ABI
test.
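That is, an 8-byte vector of long travels in a general-purpose register like any other INTEGER-class value; a sketch:
// Sketch: a 1 x long (8-byte) vector; classified INTEGER per this change,
// so it is passed and returned in integer registers, not SSE registers.
typedef long v1i64 __attribute__((vector_size(8)));
v1i64 identity(v1i64 v) { return v; }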
llvm-svn: 112205
2010-08-26 18:13:50 +00:00
Chris Lattner
46830f2fd6
1 x ulonglong needs to be classified as INTEGER, just like 1 x longlong,
...
this fixes a miscompilation on the included testcase, rdar://8359248
llvm-svn: 112201
2010-08-26 18:03:20 +00:00
Chris Lattner
51e1cc2fe2
tame an assertion, fixing rdar://8357396
...
llvm-svn: 112174
2010-08-26 06:28:35 +00:00
Chris Lattner
9f8b451876
Finally pass "two floats in a 64-bit unit" as a <2 x float> instead of
...
as a double in the x86-64 ABI. This allows us to generate much better
code for certain things, e.g.:
_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}
Used to compile into (look at the integer silliness!):
_f32: ## @f32
## BB#0: ## %entry
movd %xmm1, %rax
movd %eax, %xmm1
movd %xmm0, %rcx
movd %ecx, %xmm0
addss %xmm1, %xmm0
movd %xmm0, %edx
shrq $32, %rax
movd %eax, %xmm0
shrq $32, %rcx
movd %ecx, %xmm1
addss %xmm0, %xmm1
movd %xmm1, %eax
shlq $32, %rax
addq %rdx, %rax
movd %rax, %xmm0
ret
Now we get:
_f32: ## @f32
movdqa %xmm0, %xmm2
addss %xmm1, %xmm2
pshufd $16, %xmm2, %xmm2
pshufd $1, %xmm1, %xmm1
pshufd $1, %xmm0, %xmm0
addss %xmm1, %xmm0
pshufd $16, %xmm0, %xmm1
movdqa %xmm2, %xmm0
unpcklps %xmm1, %xmm0
ret
and compile stuff like:
extern float _Complex ccoshf( float _Complex ) ;
float _Complex ccosf(float _Complex z) {
  float _Complex iz;
  (__real__ iz) = -(__imag__ z);
  (__imag__ iz) = (__real__ z);
  return ccoshf(iz);
}
into:
_ccosf: ## @ccosf
## BB#0: ## %entry
pshufd $1, %xmm0, %xmm1
xorps LCPI4_0(%rip), %xmm1
unpcklps %xmm0, %xmm1
movaps %xmm1, %xmm0
jmp _ccoshf ## TAILCALL
instead of:
_ccosf: ## @ccosf
## BB#0: ## %entry
movd %xmm0, %rax
movq %rax, %rcx
shlq $32, %rcx
shrq $32, %rax
xorl $-2147483648, %eax ## imm = 0xFFFFFFFF80000000
addq %rcx, %rax
movd %rax, %xmm0
jmp _ccoshf ## TAILCALL
There is still "stuff to be done" here for the struct case,
but this resolves rdar://6379669 - [x86-64 ABI] Pass and return
_Complex float / double efficiently
llvm-svn: 112111
2010-08-25 23:39:14 +00:00
Michael J. Spencer
b2f376bdd0
Fix horrible white space errors.
...
llvm-svn: 112067
2010-08-25 18:17:27 +00:00
John McCall
a1dee5300b
Experiment with using first-class aggregates to represent member function
...
pointers. I find the resulting code to be substantially cleaner, and it
makes it very easy to use the same APIs for data member pointers (which I have
conscientiously avoided here), and it avoids a plethora of potential
inefficiencies due to excessive memory copying, but we'll have to see if it
actually works.
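The values in question are Itanium-style member function pointers, which are two words wide; a sketch of what is being represented (hypothetical names):
// Sketch: an Itanium-ABI member function pointer carries a function
// pointer (or vtable offset) plus a this-adjustment; under this change it
// moves through IR as a first-class two-element aggregate instead of
// through memory.
struct Widget {
  virtual void draw() {}
  void move(int dx) {}
};
void (Widget::*pmf)(int) = &Widget::move;  // two-word value
void invoke(Widget &w) { (w.*pmf)(1); }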
llvm-svn: 111776
2010-08-22 10:59:02 +00:00
Chris Lattner
8a2f3c778e
fix PR5179 and correctly fix PR5831 to not miscompile.
...
The X86-64 ABI code didn't handle the case when a struct
would get classified and turn up as "NoClass INTEGER" for
example. This is perfectly possible when the first slot
is all padding (e.g. due to empty base classes). In this
situation, the first 8-byte doesn't take a register at all,
only the second 8-byte does.
This patch fixes the problem by enhancing the x86-64 ABI code to allow
and handle this case, reverts the broken fix for PR5831, and enhances
the target-independent code to be able to handle an argument value in
registers being accessed at an offset from the memory value.
This is the last x86-64 calling convention related miscompile
that I'm aware of.
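A hypothetical shape for such a case (not the original testcase): the leading eightbyte is entirely padding, so only the second carries data:
// Hypothetical sketch: the empty member contributes NoClass for bytes 0-7,
// so the classification comes out "NoClass INTEGER"; only one register
// should be consumed, for the second eightbyte.
struct Empty { };
struct Padded {
  Empty e;                             // empty, contributes NoClass
  long x __attribute__((aligned(8)));  // lands in the second eightbyte
};
long second(Padded p) { return p.x; }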
llvm-svn: 109848
2010-07-30 04:02:24 +00:00
Chris Lattner
1f3a063f00
move the last hunk of getCoerceResult into the place
...
that needs it and remove getCoerceResult.
llvm-svn: 109807
2010-07-29 21:42:50 +00:00
Chris Lattner
60fbd7744f
now that direct and coerce are merged, getCoerceResult gets simpler.
...
llvm-svn: 109805
2010-07-29 21:29:53 +00:00
Chris Lattner
09794695ef
now that GetSSETypeAtOffset handles passing SSE class values as
...
float, the special case hack in getCoerceResult can go away.
llvm-svn: 109804
2010-07-29 21:22:50 +00:00
Chris Lattner
e556a71859
Implement the clang-side of detection for when to pass as
...
<2 x float> instead of double. This works but can't be turned
on until I teach codegen to pass <2 x float> as one XMM register
instead of two.
llvm-svn: 109790
2010-07-29 18:39:32 +00:00
Chris Lattner
50a357e962
Look at me, I can count!
...
llvm-svn: 109786
2010-07-29 18:19:50 +00:00
Chris Lattner
7f4b81af7a
fix rdar://8251384, another case where we could access beyond the
...
end of a struct. This improves the case when the struct being passed
contains 3 floats, either due to a struct or array of 3 things. Before
we'd generate this IR for the testcase:
define float @bar(double %X.coerce0, double %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %1* ; <%1*> [#uses=2]
%1 = getelementptr %1* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %1* %0, i32 0, i32 1 ; <double*> [#uses=1]
store double %X.coerce1, double* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}
which compiled (with optimization) to:
_bar: ## @bar
## BB#0: ## %entry
movd %xmm1, %rax
movd %eax, %xmm0
ret
Now we produce:
define float @bar(double %X.coerce0, float %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <float*> [#uses=1]
store float %X.coerce1, float* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}
and:
_bar: ## @bar
## BB#0: ## %entry
movaps %xmm1, %xmm0
ret
llvm-svn: 109776
2010-07-29 18:13:09 +00:00
Chris Lattner
c95a398947
start setting up infrastructure for passing multi-floats
...
as <2 x float> instead of as double. The backend isn't ready
yet, but the frontend infrastructure can go in first.
llvm-svn: 109768
2010-07-29 17:49:08 +00:00
Chris Lattner
1c56d9ab56
rename Get8ByteTypeAtOffset -> GetINTEGERTypeAtOffset to
...
make it clear that this function should only return a type
that the codegen will classify the same as an INTEGER type.
llvm-svn: 109763
2010-07-29 17:40:35 +00:00
Chris Lattner
3f76342cfc
handle a case where we could access off the end of a function
...
that Eli pointed out, rdar://8249586
llvm-svn: 109762
2010-07-29 17:34:39 +00:00
Chris Lattner
cd84084f02
fix PR7742 / rdar://8250764, a miscompilation of struct
...
return where the struct has a base but no fields. This
was because the x86-64 abi logic was checking the wrong
predicate in one place.
This was introduced in r91874, which was a fix for PR5831,
which lacked a CHECK line, so I verified and added it.
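A minimal sketch of the troublesome shape, mirroring the description above (hypothetical names):
// Sketch: a struct with a base but no fields of its own; returning it hit
// the wrong predicate in the x86-64 classification logic.
struct Base { int value; };
struct Derived : Base { };  // base, but no direct fields
Derived make() { Derived d; d.value = 42; return d; }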
llvm-svn: 109759
2010-07-29 17:04:54 +00:00
Chris Lattner
98076a25ce
This goes a little bit far, but optimizes cases like:
...
struct a {
  struct c {
    double x;
    int y;
  } x[1];
};
void foo(struct a A) {
}
into:
define void @foo(double %A.coerce0, i32 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %struct.c* ; <%struct.c*> [#uses=2]
%1 = getelementptr %struct.c* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %struct.c* %0, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %A.coerce1, i32* %2
instead of:
define void @foo(double %A.coerce0, i64 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %A.coerce1, i64* %2
I only do this now because I never want to look at this code again :)
llvm-svn: 109738
2010-07-29 07:43:55 +00:00
Chris Lattner
c8b7b53a1e
implement a todo: pass an eight-byte that consists of a
...
small integer + padding as that small integer. On code
like:
struct c { double x; int y; };
void bar(struct c C) { }
This means that we compile to:
define void @bar(double %C.coerce0, i32 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=2]
%0 = getelementptr %struct.c* %C, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %0
%1 = getelementptr %struct.c* %C, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %C.coerce1, i32* %1
instead of:
define void @bar(double %C.coerce0, i64 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=3]
%0 = bitcast %struct.c* %C to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %C.coerce1, i64* %2
which gives SRoA heartburn.
This implements rdar://5711709, a nice low number :)
llvm-svn: 109737
2010-07-29 07:30:00 +00:00
Chris Lattner
fe34c1d53e
Kill off the 'coerce' ABI passing form. Now 'direct' and 'extend' always
...
have a "coerce to" type which often matches the default lowering of Clang
type to LLVM IR type, but the coerce case can be handled by making them
not be the same.
This simplifies things and fixes issues where X86-64 abi lowering would
return coerce after making preferred types exactly match up. This caused
us to compile:
typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
  return X+X;
}
into this code at -O0:
define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
%retval = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%coerce = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X.coerce, <4 x float>* %coerce
%X = load <4 x float>* %coerce ; <<4 x float>> [#uses=1]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
store <4 x float> %add, <4 x float>* %retval
%0 = load <4 x float>* %retval ; <<4 x float>> [#uses=1]
ret <4 x float> %0
}
Now we get:
define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
ret <4 x float> %add
}
This implements rdar://8248065
llvm-svn: 109733
2010-07-29 06:26:06 +00:00
Chris Lattner
9fa15c3608
ignore structs that wrap vectors in IR; the abstraction shouldn't add a penalty.
...
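The example referenced is presumably a struct wrapping a 16-byte vector, along these lines (names chosen to match the IR below):
// Presumed source for the example; struct v4f32wrapper matches the IR.
typedef float float4 __attribute__((vector_size(16)));
struct v4f32wrapper { float4 v; };
struct v4f32wrapper wrap(struct v4f32wrapper w) { return w; }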
Before we'd compile the example into something like:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
%2 = load <2 x double>* %1, align 1 ; <<2 x double>> [#uses=1]
ret <2 x double> %2
Now we produce:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%0 = load <4 x float>* %coerce.dive2, align 1 ; <<4 x float>> [#uses=1]
ret <4 x float> %0
llvm-svn: 109732
2010-07-29 05:02:29 +00:00
Chris Lattner
4200fe4e50
move the 'pretty 16-byte vector' inferring code up to be shared
...
with return values, improving stuff that returns __m128 etc.
llvm-svn: 109731
2010-07-29 04:56:46 +00:00
Chris Lattner
ce1bd754d8
simplify code by eliminating a premature optimization.
...
llvm-svn: 109730
2010-07-29 04:51:12 +00:00
Chris Lattner
3a44c7e55d
now that we have CGT around, we can start using preferred types
...
for return values too. Instead of compiling something like:
struct foo {
  int *X;
  float *Y;
};
struct foo test(struct foo *P) { return *P; }
to:
%1 = type { i64, i64 }
define %1 @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = bitcast %struct.foo* %retval to %1* ; <%1*> [#uses=1]
%1 = load %1* %0, align 1 ; <%1> [#uses=1]
ret %1 %1
}
We now get the result more type safe, with:
define %struct.foo @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = load %struct.foo* %retval ; <%struct.foo> [#uses=1]
ret %struct.foo %0
}
That memcpy is completely terrible, but I don't know how to fix it.
llvm-svn: 109729
2010-07-29 04:46:19 +00:00
Chris Lattner
029c0f1681
sink preferred type stuff lower. It's possible that this might
...
improve codegen for vaarg or something, because its codepath is
getting preferred types now.
llvm-svn: 109728
2010-07-29 04:41:05 +00:00