Nate Begeman
c89fdf1eb3
Handle urem by shifted powers of 2.
...
llvm-svn: 26001
2006-02-05 07:36:48 +00:00
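A minimal sketch (not from the commit; names and values are made up) of the identity the fold above relies on: an unsigned remainder by a power of two that has been shifted left is still a remainder by a power of two, so it reduces to a mask.
#include <cassert>
#include <cstdint>
int main() {
  uint32_t X = 1234567u;           // arbitrary dividend
  uint32_t C = 8u, N = 3u;         // C is a power of two, shifted left by N
  uint32_t Divisor = C << N;       // 64, still a power of two
  // urem by a power of two is equivalent to masking off the low bits
  assert(X % Divisor == (X & (Divisor - 1)));
  return 0;
}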
Nate Begeman
25d178bece
handle combining A / (B << N) into A >>u (log2(B)+N) when B is a power of 2
...
llvm-svn: 26000
2006-02-05 07:20:23 +00:00
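A minimal sketch (not from the commit; names and values are hypothetical) of the arithmetic behind the combine above, for unsigned division:
#include <cassert>
#include <cstdint>
int main() {
  uint32_t A = 987654321u;
  uint32_t B = 4u;                 // power of two, log2(B) == 2
  uint32_t N = 5u;
  // A / (B << N) == A >>u (log2(B) + N) when B is a power of two
  assert(A / (B << N) == A >> (2 + N));
  return 0;
}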
Nate Begeman
dc7bba9ffe
Add a framework for eliminating instructions that produce undemanded bits.
...
llvm-svn: 25945
2006-02-03 22:24:05 +00:00
Nate Begeman
22e251abf1
Add common code for reassociating ops in the dag combiner
...
llvm-svn: 25934
2006-02-03 06:46:56 +00:00
Chris Lattner
49beaf40fc
Turn any_extend nodes into zero_extend nodes when it allows us to remove an
...
'and' instruction. This allows us to compile stuff like this:
bool %X(int %X) {
%Y = add int %X, 14
%Z = setne int %Y, 12345
ret bool %Z
}
to this:
_X:
cmpl $12331, 4(%esp)
setne %al
movzbl %al, %eax
ret
instead of this:
_X:
cmpl $12331, 4(%esp)
setne %al
movzbl %al, %eax
andl $1, %eax
ret
This occurs quite a bit with the X86 backend. For example, 25 times in
lambda, 30 times in 177.mesa, 14 times in galgel, 70 times in fma3d,
25 times in vpr, several hundred times in gcc, ~45 times in crafty,
~60 times in parser, ~140 times in eon, 110 times in perlbmk, 55 on gap,
16 times on bzip2, 14 times on twolf, and 1-2 times in many other SPEC2K
programs.
llvm-svn: 25901
2006-02-02 07:17:31 +00:00
Chris Lattner
49ce35542f
add two dag combines:
...
(C1-X) == C2 --> X == C1-C2
(X+C1) == C2 --> X == C2-C1
This allows us to compile this:
bool %X(int %X) {
%Y = add int %X, 14
%Z = setne int %Y, 12345
ret bool %Z
}
into this:
_X:
cmpl $12331, 4(%esp)
setne %al
movzbl %al, %eax
andl $1, %eax
ret
not this:
_X:
movl $14, %eax
addl 4(%esp), %eax
cmpl $12345, %eax
setne %al
movzbl %al, %eax
andl $1, %eax
ret
Testcase here: Regression/CodeGen/X86/compare-add.ll
nukage of the 'and' coming up next.
llvm-svn: 25898
2006-02-02 06:36:13 +00:00
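A small check (not part of the commit; the constants are arbitrary) that the two rewrites above hold under unsigned wraparound, which is why an equality comparison can be folded this way:
#include <cassert>
#include <cstdint>
int main() {
  uint32_t C1 = 14u, C2 = 12345u;
  for (uint32_t X : {0u, 5u, 12331u, 0xFFFFFFF0u}) {
    // (X + C1) == C2  -->  X == C2 - C1   (mod 2^32)
    assert(((X + C1) == C2) == (X == C2 - C1));
    // (C1 - X) == C2  -->  X == C1 - C2   (mod 2^32)
    assert(((C1 - X) == C2) == (X == C1 - C2));
  }
  return 0;
}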
Nate Begeman
7e7f439f85
Fix some of the stuff in the PPC README file, and clean up legalization
...
of the SELECT_CC, BR_CC, and BRTWOWAY_CC nodes.
llvm-svn: 25875
2006-02-01 07:19:44 +00:00
Chris Lattner
f0b24d2dc0
Move MaskedValueIsZero from the DAGCombiner to the TargetLowering interface, making isMaskedValueZeroForTargetNode simpler, and usable from other parts of the compiler.
...
llvm-svn: 25803
2006-01-30 04:09:27 +00:00
Chris Lattner
3b40e64aa3
pass the address of MaskedValueIsZero into isMaskedValueZeroForTargetNode,
...
to permit recursion
llvm-svn: 25799
2006-01-30 03:49:37 +00:00
Chris Lattner
678da98835
eliminate uses of SelectionDAG::getBR2Way_CC
...
llvm-svn: 25767
2006-01-29 06:00:45 +00:00
Nate Begeman
af397cec0b
Add a missing case to the dag combiner.
...
llvm-svn: 25723
2006-01-28 01:06:30 +00:00
Chris Lattner
de02d7727f
Add explicit #includes of <iostream>
...
llvm-svn: 25515
2006-01-22 23:41:00 +00:00
Nate Begeman
569c439567
Get rid of code in the DAGCombiner that is duplicated in SelectionDAG.cpp
...
Now all constant folding in the code generator is in one place.
llvm-svn: 25426
2006-01-18 22:35:16 +00:00
Chris Lattner
5fee908be5
Fix a backwards conditional that caused an inf loop in some cases. This
...
fixes: test/Regression/CodeGen/Generic/2005-01-18-SetUO-InfLoop.ll
llvm-svn: 25419
2006-01-18 19:13:41 +00:00
Chris Lattner
fcdb420baf
Disable two transformations that contribute to bus errors on SparcV8.
...
llvm-svn: 25339
2006-01-15 18:58:59 +00:00
Chris Lattner
3470b5dee6
Add a simple missing fold to produce this:
...
subfic r3, r2, 33
instead of this:
subfic r2, r2, 32
addi r3, r2, 1
llvm-svn: 25255
2006-01-12 20:22:43 +00:00
Chris Lattner
b1ee616de9
Don't create rotate instructions in unsupported types, because we don't have
...
promote/expand code yet. This fixes the 177.mesa failure on PPC.
llvm-svn: 25250
2006-01-12 18:57:33 +00:00
Nate Begeman
1b8121b227
Add bswap, rotl, and rotr nodes
...
Add dag combiner code to recognize rotl, rotr
Add ppc code to match rotl
Targets should add rotl/rotr patterns if they have them
llvm-svn: 25222
2006-01-11 21:21:00 +00:00
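A minimal sketch (not from the commit; written in C++ for illustration) of the shift/or shape the combiner recognizes as a rotate, checked by rotating back:
#include <cassert>
#include <cstdint>
int main() {
  uint32_t X = 0xDEADBEEFu;
  for (unsigned C = 1; C < 32; ++C) {
    uint32_t L = (X << C) | (X >> (32 - C));   // shape recognized as rotl X, C
    uint32_t R = (L >> C) | (L << (32 - C));   // shape recognized as rotr L, C
    assert(R == X);                            // rotl then rotr round-trips
  }
  return 0;
}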
Evan Cheng
85c973cda9
Revert the previous check-in. Leave shl x, 1 alone for the target to deal with.
...
llvm-svn: 25121
2006-01-06 01:56:02 +00:00
Evan Cheng
b03f9b32d2
fold (shl x, 1) -> (add x, x)
...
llvm-svn: 25120
2006-01-06 01:06:31 +00:00
Jim Laskey
762e9ec06c
Added initial support for DEBUG_LABEL allowing debug specific labels to be
...
inserted in the code.
llvm-svn: 25104
2006-01-05 01:25:28 +00:00
Jim Laskey
0da76a676a
Add unique id to debug location for debug label use (work in progress.)
...
llvm-svn: 25096
2006-01-04 15:04:11 +00:00
Jim Laskey
bdba3e2a46
Remove redundant debug locations.
...
llvm-svn: 24995
2005-12-23 20:08:28 +00:00
Chris Lattner
26943b9691
Simplify store(bitconv(x)) to store(x). This allows us to compile this:
...
void bar(double Y, double *X) {
*X = Y;
}
to this:
bar:
save -96, %o6, %o6
st %i1, [%i2+4]
st %i0, [%i2]
restore %g0, %g0, %g0
retl
nop
instead of this:
bar:
save -104, %o6, %o6
st %i1, [%i6+-4]
st %i0, [%i6+-8]
ldd [%i6+-8], %f0
std %f0, [%i2]
restore %g0, %g0, %g0
retl
nop
on sparcv8.
llvm-svn: 24983
2005-12-23 05:48:07 +00:00
Chris Lattner
54560f6887
fold (conv (load x)) -> (load (conv*)x).
...
This allows us to compile this:
void foo(double);
void bar(double *X) { foo(*X); }
To this:
bar:
save -96, %o6, %o6
ld [%i0+4], %o1
ld [%i0], %o0
call foo
nop
restore %g0, %g0, %g0
retl
nop
instead of this:
bar:
save -104, %o6, %o6
ldd [%i0], %f0
std %f0, [%i6+-8]
ld [%i6+-4], %o1
ld [%i6+-8], %o0
call foo
nop
restore %g0, %g0, %g0
retl
nop
on SparcV8.
llvm-svn: 24982
2005-12-23 05:44:41 +00:00
Chris Lattner
efbbedbf4a
Fold bitconv(bitconv(x)) -> x. We now compile this:
...
void foo(double);
void bar(double X) { foo(X); }
to this:
bar:
save -96, %o6, %o6
or %g0, %i0, %o0
or %g0, %i1, %o1
call foo
nop
restore %g0, %g0, %g0
retl
nop
instead of this:
bar:
save -112, %o6, %o6
st %i1, [%i6+-4]
st %i0, [%i6+-8]
ldd [%i6+-8], %f0
std %f0, [%i6+-16]
ld [%i6+-12], %o1
ld [%i6+-16], %o0
call foo
nop
restore %g0, %g0, %g0
retl
nop
on V8.
llvm-svn: 24981
2005-12-23 05:37:50 +00:00
Chris Lattner
a187460552
constant fold bits_convert in getNode and in the dag combiner for fp<->int
...
conversions. This allows V8 to compile this:
void %test() {
call float %test2( float 1.000000e+00, float 2.000000e+00, double 3.000000e+00, double* null )
ret void
}
into:
test:
save -96, %o6, %o6
sethi 0, %o3
sethi 1049088, %o2
sethi 1048576, %o1
sethi 1040384, %o0
or %g0, %o3, %o4
call test2
nop
restore %g0, %g0, %g0
retl
nop
instead of:
test:
save -112, %o6, %o6
sethi 0, %o4
sethi 1049088, %l0
st %o4, [%i6+-12]
st %l0, [%i6+-16]
ld [%i6+-12], %o3
ld [%i6+-16], %o2
sethi 1048576, %o1
sethi 1040384, %o0
call test2
nop
restore %g0, %g0, %g0
retl
nop
llvm-svn: 24980
2005-12-23 05:30:37 +00:00
Evan Cheng
9cdc16c6d3
* Fix a GlobalAddress lowering bug.
...
* Teach DAG combiner about X86ISD::SETCC by adding a TargetLowering hook.
llvm-svn: 24921
2005-12-21 23:05:39 +00:00
Chris Lattner
83e4407379
Don't create SEXTLOAD/ZEXTLOAD instructions that the target doesn't support
...
if after legalize. This fixes IA64 failures.
llvm-svn: 24725
2005-12-15 19:02:38 +00:00
Chris Lattner
d39c60fcc8
When folding loads into ops, immediately replace uses of the op with the
...
load. This reduces the number of worklist iterations and avoids missing optimizations
depending on folding of things into sext_inreg nodes (which aren't supported by
all targets).
Tested by Regression/CodeGen/X86/extend.ll:test2
llvm-svn: 24712
2005-12-14 19:25:30 +00:00
Chris Lattner
7dac1083da
Fix the (zext (zextload)) case to trigger; similarly for sign extends.
...
Allow (zext (truncate)) to apply after legalize if the target supports
AND (which all do).
This compiles
short %foo() {
%tmp.0 = load ubyte* %X ; <ubyte> [#uses=1]
%tmp.3 = cast ubyte %tmp.0 to short ; <short> [#uses=1]
ret short %tmp.3
}
to:
_foo:
movzbl _X, %eax
ret
instead of:
_foo:
movzbl _X, %eax
movzbl %al, %eax
ret
thanks to Evan for pointing this out.
llvm-svn: 24709
2005-12-14 19:05:06 +00:00
Chris Lattner
f753d1a574
Fix a miscompilation in crafty due to a recent patch
...
llvm-svn: 24706
2005-12-14 07:58:38 +00:00
Evan Cheng
bce7c47306
Fold (zext (load x) to (zextload x).
...
llvm-svn: 24702
2005-12-14 02:19:23 +00:00
Chris Lattner
57c882edf8
Only transform (sext (truncate x)) -> (sextinreg x) if before legalize or
...
if the target supports the resultant sextinreg
llvm-svn: 24632
2005-12-07 18:02:05 +00:00
Chris Lattner
cbd3d01a43
Teach the dag combiner to turn a truncate/sign_extend pair into a sextinreg
...
when the types match up. This allows the X86 backend to compile:
sbyte %toggle_value(sbyte* %tmp.1) {
%tmp.2 = load sbyte* %tmp.1
ret sbyte %tmp.2
}
to this:
_toggle_value:
mov %EAX, DWORD PTR [%ESP + 4]
movsx %EAX, BYTE PTR [%EAX]
ret
instead of this:
_toggle_value:
mov %EAX, DWORD PTR [%ESP + 4]
movsx %EAX, BYTE PTR [%EAX]
movsx %EAX, %AL
ret
noticed in Shootout/objinst.
-Chris
llvm-svn: 24630
2005-12-07 07:11:03 +00:00
Jeff Cohen
cf1f782a2f
Fix operator precedence bug caught by VC++.
...
llvm-svn: 24318
2005-11-12 00:59:01 +00:00
Chris Lattner
bf4f233214
Switch the allnodes list from a vector of pointers to an ilist of nodes. This eliminates the vector, allows constant time removal of a node from a graph, and makes iteration over the all-nodes list stable when adding
...
nodes to the graph.
llvm-svn: 24263
2005-11-09 23:47:37 +00:00
Nate Begeman
ee065281e8
Fix a crash that Andrew noticed, and add a pair of braces to unconfuse
...
XCode's indenting.
llvm-svn: 24159
2005-11-02 18:42:59 +00:00
Chris Lattner
17df608719
Fix a source of undefined behavior when dealing with 64-bit types. This
...
may fix PR652. Thanks to Andrew for tracking down the problem.
llvm-svn: 24145
2005-11-02 01:47:04 +00:00
Chris Lattner
a70878d4fb
Codegen mul by negative power of two with a shift and negate.
...
This implements test/Regression/CodeGen/PowerPC/mul-neg-power-2.ll,
producing:
_foo:
slwi r2, r3, 1
subfic r3, r2, 63
blr
instead of:
_foo:
mulli r2, r3, -2
addi r3, r2, 63
blr
llvm-svn: 24106
2005-10-30 06:41:49 +00:00
Chris Lattner
4b6d583d7a
Fix DSE to not nuke dead stores unless the redundant store is the same
...
VT as the killing one. This fixes PR491.
llvm-svn: 24034
2005-10-27 07:10:34 +00:00
Chris Lattner
d8c5c066a1
Add a simple xform that is useful for bitfield operations.
...
llvm-svn: 24029
2005-10-27 05:06:38 +00:00
Chris Lattner
3b409a85eb
Clear a bit in this file that was causing a miscompilation of 178.galgel.
...
llvm-svn: 23980
2005-10-25 18:57:30 +00:00
Chris Lattner
9faa5b7a9a
BuildSDIV and BuildUDIV only work for i32/i64, but they don't check that
...
the input is that type; this caused a failure on gs on X86 last night.
Move the hard checks into Build[US]Div since that is where decisions like
this should be made.
llvm-svn: 23881
2005-10-22 18:50:15 +00:00
Chris Lattner
75ea5b10bf
add a case missing from the dag combiner that exposed the failure on
...
2005-10-21-longlonggtu.ll.
llvm-svn: 23875
2005-10-21 21:23:25 +00:00
Nate Begeman
8f62cd32ad
Fix a typo in the dag combiner, so that this can work on i64 targets
...
llvm-svn: 23856
2005-10-21 01:51:45 +00:00
Nate Begeman
4dd383120f
Invert the TargetLowering flag that controls divide by constant expansion.
...
Add a new flag to TargetLowering indicating if the target has really cheap
signed division by powers of two, and make ppc use it. This will probably go
away in the future.
Implement some more ISD::SDIV folds in the dag combiner
Remove now dead code in the x86 backend.
llvm-svn: 23853
2005-10-21 00:02:42 +00:00
Nate Begeman
7efe53d90b
Fix a couple bugs in the const div stuff where we'd generate MULHS/MULHU
...
for types that aren't legal, and fail a 'divisor is less than zero'
comparison, which would cause us to drop a subtract.
llvm-svn: 23846
2005-10-20 17:45:03 +00:00
Chris Lattner
a6efeb01f9
don't use llabs, which apparently VC++ doesn't have
...
llvm-svn: 23845
2005-10-20 17:01:00 +00:00
Nate Begeman
c6f067a8c4
Move the target constant divide optimization up into the dag combiner, so
...
that the nodes can be folded with other nodes and we don't duplicate
code in every backend. Alpha will probably want this too.
llvm-svn: 23835
2005-10-20 02:15:44 +00:00