630 Commits

Author SHA1 Message Date
Chris Lattner
d0496d0433 fix a crash on CodeGen/X86/vector-rem.ll
llvm-svn: 46422
2008-01-27 23:21:58 +00:00
Chris Lattner
888560d62c Implement some dag combines that allow doing fneg/fabs/fcopysign in integer
registers when the value is produced or consumed by a bitconvert.  This allows us to
avoid constant pool loads and use cheaper integer instructions when the
values come from or end up in integer regs anyway.  For example, we now 
compile CodeGen/X86/fp-in-intregs.ll to:

_test1:
	movl	$2147483648, %eax
	xorl	4(%esp), %eax
	ret
_test2:
	movl	$1065353216, %eax
	orl	4(%esp), %eax
	andl	$3212836864, %eax
	ret

Instead of:
_test1:
	movss	4(%esp), %xmm0
	xorps	LCPI2_0, %xmm0
	movd	%xmm0, %eax
	ret
_test2:
	movss	4(%esp), %xmm0
	andps	LCPI3_0, %xmm0
	movss	LCPI3_1, %xmm1
	andps	LCPI3_2, %xmm1
	orps	%xmm0, %xmm1
	movd	%xmm1, %eax
	ret

Bitconverts can happen due to various calling conventions that require
fp values to be passed in integer regs in some cases, e.g. when returning
a complex value.

llvm-svn: 46414
2008-01-27 17:42:27 +00:00
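
A minimal C sketch of the bit-level identities the fneg/fabs/fcopysign combine
above exploits, assuming IEEE-754 single precision (for instance, the 2147483648
in _test1 above is 0x80000000, the sign-bit mask):

#include <stdint.h>
#include <string.h>

/* Sketch only: manipulate a float's sign bit through its integer image. */
static uint32_t bits_of(float f)     { uint32_t b; memcpy(&b, &f, sizeof b); return b; }
static float    float_of(uint32_t b) { float f; memcpy(&f, &b, sizeof f); return f; }

float fneg_bits(float x) { return float_of(bits_of(x) ^ 0x80000000u); }  /* flip sign  */
float fabs_bits(float x) { return float_of(bits_of(x) & 0x7FFFFFFFu); }  /* clear sign */

/* copysign: magnitude of m, sign of s */
float copysign_bits(float m, float s) {
	return float_of((bits_of(m) & 0x7FFFFFFFu) | (bits_of(s) & 0x80000000u));
}
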
Chris Lattner
e30e33af4f Infer alignment of loads and increase their alignment when we can tell they are
from the stack.  This allows us to compile stack-align.ll to:

_test:
	movsd	LCPI1_0, %xmm0
	movapd	%xmm0, %xmm1
***	andpd	4(%esp), %xmm1
	andpd	_G, %xmm0
	addsd	%xmm1, %xmm0
	movl	20(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

instead of:

_test:
	movsd	LCPI1_0, %xmm0
**	movsd	4(%esp), %xmm1
**	andpd	%xmm0, %xmm1
	andpd	_G, %xmm0
	addsd	%xmm1, %xmm0
	movl	20(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

llvm-svn: 46401
2008-01-26 19:45:50 +00:00
Chris Lattner
31e9edce1c Fix some bugs in SimplifyNodeWithTwoResults where, in some cases, it would call
DeleteNode to delete a node that was not actually dead.  Instead, just add it to
the worklist.  Also, make sure to use the CombineTo methods, as it was doing
things that were unsafe: the top-level combine loop could touch dangling memory.

This fixes CodeGen/Generic/2008-01-25-dag-combine-mul.ll

llvm-svn: 46384
2008-01-26 01:09:19 +00:00
Chris Lattner
cb3cf546c3 reduce indentation
llvm-svn: 46377
2008-01-25 23:34:24 +00:00
Chris Lattner
2d7a830ff3 Add skeletal code to increase the alignment of loads and stores when
we can infer it.  This will eventually improve generated code, though it doesn't
do much right now because all fixed FI's have an alignment of 1.

llvm-svn: 46349
2008-01-25 07:20:16 +00:00
Chris Lattner
34ed27c46d clarify a comment, thanks Duncan.
llvm-svn: 46313
2008-01-24 17:10:01 +00:00
Chris Lattner
e97fa8cdf0 Fix this buggy transformation. Two observations:
1. we already know the value is dead, so don't bother replacing 
   it with undef.
2. The very case the comment describes actually makes the load
   live again, which triggers an assert in DeleteNode.  If we do the
   replacement and the node becomes live, just treat it as new.  This fixes
   a failure on X86/2008-01-16-InvalidDAGCombineXform.ll with
   some local changes in my tree.

llvm-svn: 46306
2008-01-24 07:57:06 +00:00
Chris Lattner
d66eac62fd The dag combiner fails to revisit nodes that it really should, and thus leaves
dead stuff around.  This gets fed into the isel pass and keeps certain foldings from
happening because nodes have extraneous uses floating around.  For example, if we turned
foo(bar(x)) -> baz(x), we sometimes left bar(x) around.

llvm-svn: 46305
2008-01-24 07:18:21 +00:00
Chris Lattner
0feb1b0f84 fold fp_round(fp_round(x)) -> fp_round(x).
llvm-svn: 46304
2008-01-24 06:45:35 +00:00
Chris Lattner
1ea55cf816 This commit changes:
1. Legalize now always promotes truncstore of i1 to i8. 
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
   X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when
   safe.

The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:

_foo:
	fldt	20(%esp)
	fldt	4(%esp)
	faddp	%st(1)
	movl	36(%esp), %eax
	fstps	(%eax)
	ret

instead of:

_foo:
	subl	$4, %esp
	fldt	24(%esp)
	fldt	8(%esp)
	faddp	%st(1)
	fstps	(%esp)
	movl	40(%esp), %eax
	movss	(%esp), %xmm0
	movss	%xmm0, (%eax)
	addl	$4, %esp
	ret

llvm-svn: 46140
2008-01-17 19:59:44 +00:00
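
The storetrunc-fp.ll testcase is roughly equivalent to the following C (an assumed
source form, not the .ll itself): the x87 add happens at long double precision and
the result is truncated as it is stored, which the new combine expresses as a single
truncating store (the fstps above) instead of a round trip through the stack.

void foo(long double a, long double b, float *p) {
	*p = a + b;
}
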
Chris Lattner
7eabed3521 code cleanups, no functionality change.
llvm-svn: 46126
2008-01-17 07:20:38 +00:00
Chris Lattner
72733e573b * Introduce a new SelectionDAG::getIntPtrConstant method
and switch various codegen pieces and the X86 backend over
  to using it.

* Add some comments to SelectionDAGNodes.h

* Introduce a second argument to FP_ROUND, which indicates
  whether the FP_ROUND changes the value of its input. If
  not it is safe to xform things like fp_extend(fp_round(x)) -> x.

llvm-svn: 46125
2008-01-17 07:00:52 +00:00
Evan Cheng
7be1528004 Fixes a nasty dag combiner bug that causes a bunch of tests to fail at -O0.
It's not safe to use the two value CombineTo variant to combine away a dead load.
e.g. 
v1, chain2 = load chain1, loc
v2, chain3 = load chain2, loc
v3         = add v2, c 
Now we replace use of v1 with undef, use of chain2 with chain1.
ReplaceAllUsesWith() will iterate through uses of the first load and update operands:
v1, chain2 = load chain1, loc
v2, chain3 = load chain1, loc
v3         = add v2, c 
Now the second load is the same as the first load, so SelectionDAG CSE will ensure
the use of the second load is replaced with the first load.
v1, chain2 = load chain1, loc
v3         = add v1, c
Then v1 is replaced with undef and bad things happen.

llvm-svn: 46099
2008-01-16 23:11:54 +00:00
Chris Lattner
2e50a6f90f Factor the ReachesChainWithoutSideEffects out of dag combiner into
a public SDOperand::reachesChainWithoutSideEffects method.  No 
functionality change.

llvm-svn: 46050
2008-01-16 05:49:24 +00:00
Chris Lattner
51b01bf8a5 Make load->store deletion a bit smarter. This allows us to compile this:
void test(long long *P) { *P ^= 1; }

into just:

_test:
	movl	4(%esp), %eax
	xorl	$1, (%eax)
	ret

instead of code like this:

_test:
	movl	4(%esp), %ecx
	xorl	$1, (%ecx)
	movl	4(%ecx), %edx
	movl	%edx, 4(%ecx)
	ret

llvm-svn: 45762
2008-01-08 23:08:06 +00:00
Chris Lattner
f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Chris Lattner
2de9b85297 make sure not to zap volatile stores, thanks a lot to Dale for noticing this!
llvm-svn: 45402
2007-12-29 07:15:45 +00:00
Chris Lattner
5919b48fe9 don't fold fp_round(fp_extend(load)) -> fp_round(extload)
llvm-svn: 45400
2007-12-29 06:55:23 +00:00
Chris Lattner
3f9c6a7260 Delete a store whose input is a load from the same pointer:
x = load p
  store x -> p

llvm-svn: 45398
2007-12-29 06:26:16 +00:00
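
At the C level the deleted store looks something like this (an illustration, not
the actual testcase): the store writes back exactly the value just loaded from the
same pointer, so it is redundant (volatile accesses excepted, per the fix above).

void write_back(int *p) {
	int x = *p;   /* x = load p   */
	*p = x;       /* store x -> p */
}
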
Chris Lattner
efd1cddb5a Tell TargetLoweringOpt whether it is running before
or after legalize.

llvm-svn: 45321
2007-12-22 20:56:36 +00:00
Evan Cheng
9f06e5e2df Don't leave newly created nodes around if it turns out they are not needed.
llvm-svn: 45186
2007-12-19 01:34:38 +00:00
Dale Johannesen
5eff4de9c8 Redo previous patch so optimization only done for i1.
Simpler and safer.

llvm-svn: 44663
2007-12-06 17:53:31 +00:00
Chris Lattner
eedaf92fcf third time around: instead of disabling this completely,
only disable it if we don't know it will be obviously profitable.
Still fixme, but less so. :)

llvm-svn: 44658
2007-12-06 07:47:55 +00:00
Chris Lattner
b5fdfb9612 Actually, disable this code for now. More analysis and improvements to
the X86 backend are needed before this should be enabled by default.

llvm-svn: 44657
2007-12-06 07:44:31 +00:00
Chris Lattner
7c709a5d08 implement a readme entry, compiling the code into:
_foo:
	movl	$12, %eax
	andl	4(%esp), %eax
	movl	_array(%eax), %eax
	ret

instead of:

_foo:
	movl	4(%esp), %eax
	shrl	$2, %eax
	andl	$3, %eax
	movl	_array(,%eax,4), %eax
	ret

As it turns out, this triggers all the time, in a wide variety of
situations; for example, I see diffs like this in various programs:

-       movl    8(%eax), %eax
-       shll    $2, %eax
-       andl    $1020, %eax
-       movl    (%esi,%eax), %eax
+       movzbl  8(%eax), %eax
+       movl    (%esi,%eax,4), %eax


-       shll    $2, %edx
-       andl    $1020, %edx
-       movl    (%edi,%edx), %edx
+       andl    $255, %edx
+       movl    (%edi,%edx,4), %edx

Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:

-       andl    $85, %ebx
-       addl    _bit_count(,%ebx,4), %ebp
+       shll    $2, %ebx
+       andl    $340, %ebx
+       addl    _bit_count(%ebx), %ebp

llvm-svn: 44656
2007-12-06 07:33:36 +00:00
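
The readme entry corresponds to source along these lines (a reconstruction,
assuming a global array of i32): the byte offset ((x >> 2) & 3) * 4 simplifies
to x & 12, which is why the new code can use a single andl $12 and index the
array directly.

extern int array[4];

int foo(unsigned x) {
	return array[(x >> 2) & 3];
}
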
Dale Johannesen
05bbbda78a Fix PR1842.
llvm-svn: 44649
2007-12-06 01:43:46 +00:00
Dan Gohman
9a69341725 Don't lower srem/urem X%C to X-X/C*C unless the division is actually
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.

llvm-svn: 44341
2007-11-26 23:46:11 +00:00
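
The lowering in question uses the identity X % C == X - (X / C) * C, which only
pays off when the division itself gets strength-reduced; a small C illustration:

/* Worthwhile only when x / 10 is optimized (e.g. into a multiply by a
   magic constant); otherwise it just wraps a plain division in an extra
   multiply and subtract, which is what the combiner now avoids. */
unsigned rem10(unsigned x) {
	unsigned q = x / 10;
	return x - q * 10;   /* == x % 10 */
}
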
Duncan Sands
e795efea5b Move MinAlign to MathExtras.h.
llvm-svn: 43944
2007-11-09 13:41:39 +00:00
Duncan Sands
e7a9ac929f Fix some load/store logic that would be wrong for
apints on big-endian machines if the bitwidth is
not a multiple of 8.  Introduce a new helper,
MVT::getStoreSizeInBits, and use it.

llvm-svn: 43934
2007-11-09 08:57:19 +00:00
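
A sketch of the rounding a getStoreSizeInBits-style helper performs (assuming it
simply rounds the bit width up to whole bytes): an i37 apint, for example, occupies
40 bits (5 bytes) in memory, and it is this in-memory size, not the raw bit width,
that the load/store logic needs to reason about.

unsigned store_size_in_bits(unsigned bit_width) {
	return ((bit_width + 7) / 8) * 8;
}
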
Evan Cheng
ece4c68b82 If both parts of smul_lohi, etc. are used, don't simplify. If only one part is used, try to simplify it.
llvm-svn: 43888
2007-11-08 09:25:29 +00:00
Evan Cheng
0747bc1df6 Typo.
llvm-svn: 43511
2007-10-30 20:11:21 +00:00
Dan Gohman
ae95d72a52 Fix a DAGCombiner abort on a bitcast from a scalar to a vector.
llvm-svn: 43470
2007-10-29 20:44:42 +00:00
Evan Cheng
e106e2f142 Enable the fold (sext (load x)) -> (sext (truncate (sextload x))) in more
cases.  Previously, the transformation was restricted to loads with a single use.
Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
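
A hedged C illustration of the case the loosened restriction now catches (assuming
a 64-bit target where long is wider than int): the loaded value has both a setcc
use and a sext use, so the load can become a sign-extending load and the comparison
is rewritten on the extended value and constant.

long foo(const int *p) {
	int x = *p;
	if (x == 7)        /* setcc use of the loaded value */
		return 0;
	return (long)x;    /* sext use of the loaded value  */
}
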
Duncan Sands
1826deda68 The guaranteed alignment of ptr+offset is only the minimum of
the offset and the alignment of ptr if these are both powers of
2.  While the ptr alignment is guaranteed to be a power of 2,
there is no reason to think that offset is.  For example, if
offset is 12 (the size of a long double on x86-32 linux) and
the alignment of ptr is 8, then the alignment of ptr+offset
will in general be 4, not 8.  Introduce a function MinAlign,
lifted from gcc, for computing the minimum guaranteed alignment.
I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/.
I also changed some places that weren't wrong (because both values
were a power of 2), as a defensive change against people copying
and pasting the code.
Hopefully someone who cares about alignment will review the rest
of LLVM and fix up the remaining places.  Since I'm on x86 I'm
not very motivated to do this myself...

llvm-svn: 43421
2007-10-28 12:59:45 +00:00
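
A small C sketch of a MinAlign-style computation (a sketch of the idea, not
necessarily the exact helper): the guaranteed alignment of ptr + offset is the
largest power of two dividing both quantities, i.e. the lowest set bit of their
bitwise OR.  For the example above, min_align(8, 12) == 4.

#include <stdint.h>

static uint64_t min_align(uint64_t a, uint64_t b) {
	uint64_t x = a | b;
	return x & (~x + 1);   /* isolate the lowest set bit */
}
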
Dale Johannesen
6802d0c96f Redo "last ppc long double fix" as Chris wants.
llvm-svn: 43189
2007-10-19 20:29:00 +00:00
Dale Johannesen
10432e5a67 More ppcf128 issues (maybe the last)?
llvm-svn: 43160
2007-10-19 00:59:18 +00:00
Dale Johannesen
e5facd51cb Disable attempts to constant fold PPC f128.
Remove the assumption that this will happen from
various places.

llvm-svn: 43053
2007-10-16 23:38:29 +00:00
Chris Lattner
3cfb56d489 One mundane change: Change ReplaceAllUsesOfValueWith to *optionally*
take a deleted nodes vector, instead of requiring it.

One more significant change:  Implement the start of a legalizer that
just works on types.  This legalizer is designed to run before the 
operation legalizer and ensure just that the input dag is transformed
into an output dag whose operand and result types are all legal, even
if the operations on those types are not.

This design/impl has the following advantages:

1. When finished, this will *significantly* reduce the amount of code in
   LegalizeDAG.cpp.  It will remove all the code related to promotion and
   expansion as well as splitting and scalarizing vectors.
2. The new code is very simple, idiomatic, and modular: unlike 
   LegalizeDAG.cpp, it has no 3000 line long functions. :)
3. The implementation is completely iterative instead of recursive, good
   for hacking on large dags without blowing out your stack.
4. The implementation updates nodes in place when possible instead of 
   deallocating and reallocating the entire graph that points to some 
   mutated node.
5. The code nicely separates out handling of operations with invalid 
   results from operations with invalid operands, making some cases
   simpler and easier to understand.
6. The new -debug-only=legalize-types option is very very handy :), 
   allowing you to easily understand what legalize types is doing.

This is not yet done.  Until the ifdef added to SelectionDAGISel.cpp is
enabled, this does nothing.  However, this code is sufficient to legalize
all of the code in 186.crafty, olden and freebench on an x86 machine.  The
biggest issues are:

1. Vectors aren't implemented at all yet
2. SoftFP is a mess, I need to talk to Evan about it.
3. No lowering to libcalls is implemented yet.
4. Various operations are missing etc.
5. There are FIXME's for stuff I hax0r'd out, like softfp.

Hey, at least it is a step in the right direction :).  If you'd like to help,
just enable the #ifdef in SelectionDAGISel.cpp and compile code with it.  If
this explodes it will tell you what needs to be implemented.  Help is 
certainly appreciated.

Once this goes in, we can do three things:

1. Add a new pass of dag combine between the "type legalizer" and "operation
   legalizer" passes.  This will let us catch some long-standing isel issues
   that we miss because operation legalization often obfuscates the dag with
   target-specific nodes.
2. We can rip out all of the type legalization code from LegalizeDAG.cpp,
   making it much smaller and simpler.  When that happens we can then 
   reimplement the core functionality left in it in a much more efficient and
   non-recursive way.
3. Once the whole legalizer is non-recursive, we can implement whole-function
   selectiondags maybe...

llvm-svn: 42981
2007-10-15 06:10:22 +00:00
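
A toy, self-contained C sketch of the iterative shape described in points 3 and 5
(hypothetical data structures, not LLVM's): "types" are just bit widths, anything
wider than 32 bits is illegal, and a worklist loop fixes illegal results and
illegal operands separately, revisiting a node after each in-place change instead
of recursing through the graph.

#include <stdio.h>

struct node { int result_bits, operand_bits; };

static int is_legal(int bits) { return bits <= 32; }

static void legalize_types(struct node **worklist, int n) {
	while (n > 0) {
		struct node *nd = worklist[--n];
		if (!is_legal(nd->result_bits)) {
			nd->result_bits = 32;   /* e.g. promote/expand the result */
			worklist[n++] = nd;     /* revisit this node              */
		} else if (!is_legal(nd->operand_bits)) {
			nd->operand_bits = 32;  /* then fix up illegal operands   */
			worklist[n++] = nd;
		}
	}
}

int main(void) {
	struct node a = { 64, 32 }, b = { 32, 40 };
	struct node *wl[2] = { &a, &b };
	legalize_types(wl, 2);
	printf("%d %d %d %d\n", a.result_bits, a.operand_bits,
	                        b.result_bits, b.operand_bits);  /* 32 32 32 32 */
	return 0;
}
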
Chris Lattner
f47e30627a Enhance the truncstore optimization code to handle shifted
values and propagate demanded bits through them in simple cases.

This allows this code:
void foo(char *P) {
   strcpy(P, "abc");
}
to compile to:

_foo:
        ldrb r3, [r1]
        ldrb r2, [r1, #+1]
        ldrb r12, [r1, #+2]!
        ldrb r1, [r1, #+1]
        strb r1, [r0, #+3]
        strb r2, [r0, #+1]
        strb r12, [r0, #+2]
        strb r3, [r0]
        bx lr

instead of:

_foo:
        ldrb r3, [r1, #+3]
        ldrb r2, [r1, #+2]
        orr r3, r2, r3, lsl #8
        ldrb r2, [r1, #+1]
        ldrb r1, [r1]
        orr r2, r1, r2, lsl #8
        orr r3, r2, r3, lsl #16
        strb r3, [r0]
        mov r2, r3, lsr #24
        strb r2, [r0, #+3]
        mov r2, r3, lsr #16
        strb r2, [r0, #+2]
        mov r3, r3, lsr #8
        strb r3, [r0, #+1]
        bx lr

testcase here: test/CodeGen/ARM/truncstore-dag-combine.ll

This also helps occasionally for X86 and other cases not involving 
unaligned load/stores.

llvm-svn: 42954
2007-10-13 06:58:48 +00:00
Chris Lattner
5e6fe054a2 Add a simple optimization to simplify the input to
truncate and truncstore instructions, based on the 
knowledge that they don't demand the top bits.

llvm-svn: 42952
2007-10-13 06:35:54 +00:00
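
A small C illustration of the idea (illustrative only, not the testcase): only the
low 8 bits of the value reach memory through the byte store, so an operation that
affects nothing but the discarded high bits demands nothing the store keeps and can
be simplified away.

void store_low_byte(unsigned char *p, unsigned x) {
	*p = x & 0xFFFF;   /* the & 0xFFFF is redundant for a 1-byte store */
}
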
Duncan Sands
56ab90d3ad Correct swapped arguments to getConstant.
llvm-svn: 42824
2007-10-10 09:54:50 +00:00
Dan Gohman
5c6d0c3b99 DAGCombiner support for UDIVREM/SDIVREM and UMUL_LOHI/SMUL_LOHI.
Check whether one of the two results is unneeded, to see if a simpler operator
could be used. Also check to see if each of the two computations could be
simplified if they were split into separate operators. Factor out the code
that calls visit() so that it can be used for this purpose.

llvm-svn: 42759
2007-10-08 17:57:15 +00:00
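
A hedged C illustration of the two-result case: when the quotient and remainder of
the same operands are both used, they can be merged into one UDIVREM-style node;
when only one result is actually used, a plain division or remainder, or a cheaper
expansion of it, suffices, which is what the unneeded-result check looks for.

void divmod(unsigned a, unsigned b, unsigned *q, unsigned *r) {
	*q = a / b;   /* both results of the same a, b ... */
	*r = a % b;   /* ... can come from a single divrem */
}
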
Evan Cheng
0de312dd7d Reapply 42677.
llvm-svn: 42692
2007-10-06 08:19:55 +00:00
Chris Lattner
82217bd155 revert evan's patch until the header is committed
llvm-svn: 42686
2007-10-06 06:08:17 +00:00
Evan Cheng
f4b5d491df Added DAG xforms. e.g.
(vextract (v4f32 s2v (f32 load $addr)), 0) -> (f32 load $addr) 
(vextract (v4i32 bc (v4f32 s2v (f32 load $addr))), 0) -> (i32 load $addr)
Remove x86 specific patterns.

llvm-svn: 42677
2007-10-06 02:46:29 +00:00
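
In intrinsics terms the first xform covers patterns like the following (an
illustration using SSE intrinsics, assuming an x86 target with SSE): building a
vector from a loaded scalar and immediately extracting lane 0 is just the scalar
load, so no vector instructions are needed.

#include <xmmintrin.h>

float load_via_vector(const float *p) {
	__m128 v = _mm_load_ss(p);   /* vector built from an f32 load */
	return _mm_cvtss_f32(v);     /* vextract of lane 0            */
}
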
Evan Cheng
e2e8f2d96b Fix a bogus splat xform:
shuffle <undef, undef, x, undef>, <undef, undef, undef, undef>, <2, 2, 2, 2>
!=
<undef, undef, x, undef>

llvm-svn: 42111
2007-09-18 21:54:37 +00:00
Dale Johannesen
af12b57405 Prevent crash on long double.
llvm-svn: 42103
2007-09-18 18:36:59 +00:00
Dale Johannesen
028084efe5 Revise previous patch per review comments.
Next round of x87 long double stuff.
Getting close now, basically works.

llvm-svn: 41875
2007-09-12 03:30:33 +00:00
Dale Johannesen
245dceb06d Add APInt interfaces to APFloat (allows direct
access to bits).  Use them in place of float and
double interfaces where appropriate.
First bits of x86 long double constants handling 
(untested, probably does not work).

llvm-svn: 41858
2007-09-11 18:32:33 +00:00