13716 Commits

Jakob Stoklund Olesen
59a0d3243b Allow targets to inject passes before the virtual register rewriter.
Such passes can be used to tweak the register assignments in a
target-dependent way, for example to avoid write-after-write
dependencies.
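
For illustration only, a rough sketch of how a backend might use such a hook
(hypothetical target and pass names; assumes the hook is
TargetPassConfig::addPreRewrite() and the 2012-era header layout):

    // Minimal sketch: run a target pass on the allocated machine code just
    // before VirtRegRewriter replaces virtual registers with physical ones.
    #include "llvm/CodeGen/Passes.h"   // TargetPassConfig lived here in 2012
    using namespace llvm;

    extern char MyRegTweakPassID;      // hypothetical pass ID, registered elsewhere

    namespace {
    class MyTargetPassConfig : public TargetPassConfig {
    public:
      MyTargetPassConfig(TargetMachine *TM, PassManagerBase &PM)
        : TargetPassConfig(TM, PM) {}

      bool addPreRewrite() override {
        // MyRegTweakPass is a hypothetical pass that adjusts register
        // assignments, e.g. to break write-after-write dependencies.
        addPass(&MyRegTweakPassID);
        return true;
      }
    };
    } // end anonymous namespace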

llvm-svn: 159209
2012-06-26 17:09:29 +00:00
Chandler Carruth
9139f44d23 Update a bunch of stale comments that dated from when this followed the
very first (and worst) placement algorithm. These should now more
accurately reflect the reality of the pass.

llvm-svn: 159185
2012-06-26 05:16:37 +00:00
Andrew Trick
fb2ba3e1cb Enable the new LoopInfo algorithm by default.
The primary advantage is that loop optimizations will be applied in a
stable order. This helps debugging and unit test creation. It is also
a better overall implementation without pathologically bad performance
on deep functions.

On large functions (llvm-stress --size=200000 | opt -loops)
Before: 0.1263s
After:  0.0225s

On deep functions (after tweaking llvm-stress, thanks Nadav):
Before: 0.2281s
After:  0.0227s

See r158790 for more comments.

The loop tree is now consistently generated in forward order, but loop
passes are applied in reverse order over the program. If we have a
loop optimization that prefers forward order, that can easily be
achieved by adding a different type of LoopPassManager.
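
As an illustration of what the stable, forward order looks like to a client, a
minimal sketch that walks the loop forest (assumes in-tree LLVM headers and
that LoopInfo has already been computed for the function):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    // Dump the loop forest. With the new algorithm, top-level loops and each
    // sub-loop list come out in a deterministic forward order, so this output
    // is stable across runs and across IR serialization.
    static void dumpLoop(const Loop *L, unsigned Indent) {
      errs().indent(Indent) << "header " << L->getHeader()->getName()
                            << ", depth " << L->getLoopDepth() << "\n";
      for (const Loop *Sub : *L)   // sub-loops, in forest order
        dumpLoop(Sub, Indent + 2);
    }

    static void dumpLoopForest(LoopInfo &LI) {
      for (const Loop *L : LI)     // top-level loops, in forest order
        dumpLoop(L, 0);
    }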

llvm-svn: 159183
2012-06-26 04:11:38 +00:00
Evan Cheng
4c6f917d34 Make sure the type is not extended or untyped before creating a constant of that type. No test case. Found by inspection.
llvm-svn: 159179
2012-06-26 01:19:33 +00:00
Jakob Stoklund Olesen
a57fc12ec9 Enforce stricter liveness rules for PHIs.
Verify that all paths from the entry block to a virtual register read
pass through a def. Enable this check even when MRI->isSSA() is false.

Verify that the live range of a virtual register is live out of all
predecessor blocks, even for PHI-values.

This requires that PHIElimination sometimes inserts IMPLICIT_DEF
instructions in predecessor blocks.

llvm-svn: 159150
2012-06-25 18:18:27 +00:00
Jakob Stoklund Olesen
eb49566447 Run ProcessImplicitDefs on SSA form where it can be much simpler.
Implicitly defined virtual registers can simply have the <undef> bit set
on all uses, and copies can be turned into implicit defs recursively.

Physical registers are a bit trickier. We handle the common case where a
physreg def is used by a nearby instruction in the same basic block. For
more complicated cases, just leave the IMPLICIT_DEF instruction in.

llvm-svn: 159149
2012-06-25 18:12:18 +00:00
Jakob Stoklund Olesen
70ed924e18 Teach PHIElimination to handle <undef> operands.
When a PHI use is <undef>, don't emit a copy in the predecessor block,
but insert an IMPLICIT_DEF instruction instead. This ensures that
virtual register uses are always jointly dominated by defs, even if some
of them are IMPLICIT_DEF.
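
A rough sketch of the insertion itself, using the generic BuildMI API
(hypothetical helper, not the actual PHIElimination code; 2012-era header
paths assumed):

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    #include "llvm/Target/TargetInstrInfo.h"
    using namespace llvm;

    // For an <undef> PHI operand, give the incoming vreg a definition in the
    // predecessor with IMPLICIT_DEF instead of copying from an undefined value.
    static void insertImplicitDef(MachineBasicBlock &Pred, unsigned IncomingReg,
                                  const TargetInstrInfo &TII) {
      BuildMI(Pred, Pred.getFirstTerminator(), DebugLoc(),
              TII.get(TargetOpcode::IMPLICIT_DEF), IncomingReg);
    }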

llvm-svn: 159121
2012-06-25 03:36:12 +00:00
Jakob Stoklund Olesen
6b556f824d Handle <undef> operands in TwoAddressInstructionPass.
When the source register to a 2-addr instruction is undefined, there is
no need to attempt any transformations - simply replace the source
register with the destination register.

This also comes up when lowering IMPLICIT_DEF instructions - make sure
the <undef> flag is moved to the new partial register def operand:

  %vreg8<def> = INSERT_SUBREG %vreg9<undef>, %vreg0<kill>, sub_16bit
rewrite undef:
  %vreg8<def> = INSERT_SUBREG %vreg8<undef>, %vreg0<kill>, sub_16bit
convert to:
  %vreg8:sub_16bit<def,read-undef> = COPY %vreg0<kill>

llvm-svn: 159120
2012-06-25 03:27:12 +00:00
NAKAMURA Takumi
704de074b8 llvm/lib: [CMake] Add explicit dependency to intrinsics_gen.
llvm-svn: 159112
2012-06-24 13:32:01 +00:00
Pete Cooper
fe212e762f DAG legalisation can now handle illegal fma vector types by scalarisation
llvm-svn: 159092
2012-06-24 00:05:44 +00:00
Jakob Stoklund Olesen
502e4c6ac4 Teach LiveVariables to handle <undef> operands.
It's simple: Don't treat <undef> operands as uses, and don't assume a
virtual register has a defining instruction unless a real use has been
seen.
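
A tiny standalone sketch of the operand test this implies (not the actual
LiveVariables code; 2012-era header path assumed):

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    // An <undef> read is not a real use for liveness purposes: it neither
    // extends a live range nor implies that a defining instruction must exist.
    static bool isRealVirtRegUse(const MachineOperand &MO) {
      return MO.isReg() && MO.isUse() && !MO.isUndef() &&
             TargetRegisterInfo::isVirtualRegister(MO.getReg());
    }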

llvm-svn: 159061
2012-06-23 02:23:00 +00:00
Jakob Stoklund Olesen
a127fc780a Remove ProcessImplicitDefs.h which was unused.
The ProcessImplicitDefs class can be local to its implementation file.

llvm-svn: 159041
2012-06-22 22:27:36 +00:00
Jakob Stoklund Olesen
b033dede17 Also verify the def index for early clobbers.
llvm-svn: 159039
2012-06-22 22:23:58 +00:00
Jakob Stoklund Olesen
4fa84ba8b9 Delete a boring statistic.
llvm-svn: 159030
2012-06-22 20:40:15 +00:00
Jakob Stoklund Olesen
c61edda0ab Store live intervals in an IndexedMap.
It is both smaller and faster than DenseMap.
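
For context, a minimal sketch of the data-structure swap (assumes 2012-era
headers; VirtReg2IndexFunctor from TargetRegisterInfo.h turns a virtual
register number into a dense index):

    #include "llvm/ADT/IndexedMap.h"
    #include "llvm/CodeGen/LiveInterval.h"
    #include "llvm/Target/TargetRegisterInfo.h"   // VirtReg2IndexFunctor
    using namespace llvm;

    // Virtual register numbers form a dense range, so a vector indexed by vreg
    // avoids the hashing and per-bucket overhead of a DenseMap.
    typedef IndexedMap<LiveInterval*, VirtReg2IndexFunctor> VirtRegIntervalMap;

    static LiveInterval &getOrCreateInterval(VirtRegIntervalMap &Map,
                                             unsigned VirtReg) {
      Map.grow(VirtReg);                  // make sure this vreg has a slot
      LiveInterval *&LI = Map[VirtReg];
      if (!LI)
        LI = new LiveInterval(VirtReg, /*Weight=*/0.0f);
      return *LI;
    }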

llvm-svn: 159029
2012-06-22 20:37:52 +00:00
Hal Finkel
8db5547252 Revert r158679 - use case is unclear (and it increases the memory footprint).
Original commit message:
    Allow up to 64 functional units per processor itinerary.

    This patch changes the type used to hold the FU bitset from unsigned to uint64_t.
    This will be needed for some upcoming PowerPC itineraries.

llvm-svn: 159027
2012-06-22 20:27:13 +00:00
Jakob Stoklund Olesen
48828bb402 Fix a crash in --debug code.
Don't try to print out the live range of a physreg.

llvm-svn: 159021
2012-06-22 19:51:41 +00:00
Jakob Stoklund Olesen
48a1647c93 Don't depend on live ranges being present.
DBG_VALUE instructions could be referring to non-existing virtual
registers.

llvm-svn: 159020
2012-06-22 18:51:35 +00:00
Jakob Stoklund Olesen
8a833649e5 Simplify handleMove() a bit.
There is no need to check for physreg live ranges. They don't exist any
more.

llvm-svn: 159019
2012-06-22 18:38:57 +00:00
Jakob Stoklund Olesen
37e797fedc Stop computing physreg live ranges.
Everyone is using on-demand regunit ranges now.
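
A sketch of the on-demand query pattern that replaces the precomputed physreg
intervals (assumes LiveIntervals::getRegUnit() and 2012-era header names):

    #include "llvm/CodeGen/LiveIntervalAnalysis.h"   // LiveIntervals
    #include "llvm/CodeGen/SlotIndexes.h"
    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    // Liveness of a physical register is answered through its register units;
    // each unit's live range is computed lazily the first time it is needed.
    static bool isPhysRegLiveAt(unsigned PhysReg, SlotIndex Idx,
                                LiveIntervals &LIS,
                                const TargetRegisterInfo &TRI) {
      for (MCRegUnitIterator Units(PhysReg, &TRI); Units.isValid(); ++Units)
        if (LIS.getRegUnit(*Units).liveAt(Idx))
          return true;
      return false;
    }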

llvm-svn: 159018
2012-06-22 18:20:50 +00:00
Jakob Stoklund Olesen
bbad269a3e Remove some redundant LIS->hasInterval() checks.
These functions only operate on virtual registers now, and they all have
live ranges.

llvm-svn: 159015
2012-06-22 17:49:44 +00:00
Jakob Stoklund Olesen
7809578cfe Use MRI::isConstantPhysReg() to check remat feasibility.
Don't depend on LiveIntervals::hasInterval() to determine if a physreg
is reserved and constant.

llvm-svn: 159013
2012-06-22 17:31:01 +00:00
Jakob Stoklund Olesen
3244963ecc Use regunit liveness to guide LiveDebugVariables.
This should produce the same results as using physreg liveness directly.

llvm-svn: 159009
2012-06-22 17:15:32 +00:00
Jakob Stoklund Olesen
b1b3e4aa58 Remove LiveIntervals::trackingRegUnits().
With regunit liveness permanently enabled, this function would always
return true.

Also remove now obsolete code for checking physreg interference.

llvm-svn: 159006
2012-06-22 16:46:44 +00:00
Rafael Espindola
ea59166190 Remove another duplicated variable. We only need one to tell us if the linker
knows DWARF or not.

llvm-svn: 158993
2012-06-22 13:32:49 +00:00
Rafael Espindola
d7bdaf5795 Fix a FIXME: DwarfRequiresRelocationForSectionOffset is the same as
DwarfUsesRelocationsAcrossSections.

llvm-svn: 158992
2012-06-22 13:24:07 +00:00
Nick Lewycky
33da33676f Emit relocations for DW_AT_location entries on systems which need it. This is
a recommit of r127757. Fixes PR9493. Patch by Paul Robinson!

llvm-svn: 158957
2012-06-22 01:25:12 +00:00
Lang Hames
b8650f106a Rename -allow-excess-fp-precision flag to -fuse-fp-ops, and switch from a
boolean flag to an enum: { Fast, Standard, Strict } (default = Standard).

This option controls the creation by optimizations of fused FP ops that store
intermediate results in higher precision than IEEE allows (e.g. FMAs). The
behavior of this option is intended to match the behavior specified by a
soon-to-be-introduced frontend flag: '-ffuse-fp-ops'.

Fast mode - allows formation of fused FP ops whenever they're profitable.

Standard mode - allows fusion only for 'blessed' FP ops. At present the only
blessed op is the fmuladd intrinsic. In the future more blessed ops may be
added.

Strict mode - allows fusion only when it can be proven that the excess
precision won't affect the result.

Note: This option only controls formation of fused ops by the optimizers.  Fused
operations that are explicitly requested (e.g. FMA via the llvm.fma.* intrinsic)
will always be honored, regardless of the value of this option.

Internally TargetOptions::AllowExcessFPPrecision has been replaced by
TargetOptions::AllowFPOpFusion.
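
A small sketch of how a consumer of the option might branch on the three modes
(assumes the FPOpFusion enum and TargetOptions::AllowFPOpFusion described
above; ProvablyExact is a hypothetical stand-in for whatever proof Strict mode
requires):

    #include "llvm/Target/TargetOptions.h"
    using namespace llvm;

    // Decide whether an optimizer may form a fused op (e.g. FMA) from separate
    // nodes. Explicit llvm.fma.* / fmuladd requests are not gated by this.
    static bool mayFormFusedFPOp(const TargetOptions &Options,
                                 bool ProvablyExact) {
      switch (Options.AllowFPOpFusion) {
      case FPOpFusion::Fast:       // fuse whenever profitable
        return true;
      case FPOpFusion::Standard:   // only blessed ops (fmuladd), handled elsewhere
        return false;
      case FPOpFusion::Strict:     // only when the extra precision provably
        return ProvablyExact;      //   cannot change the result
      }
      return false;
    }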

llvm-svn: 158956
2012-06-22 01:09:09 +00:00
Jack Carter
c457f62033 The inline asm operand modifier 'n' is supposed
to be generic across architectures. It has the
following description in the GNU sources:

    Negate the immediate constant

Several architectures, such as x86, have local implementations
of operand modifier 'n' which go beyond the above description
slightly. This won't affect them.
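
For illustration (not part of the commit), a tiny GCC/Clang-style extended-asm
example in C++ exercising the generic modifiers; %c0 prints the bare immediate
and %n0 prints its negation:

    // Emits an assembly comment such as "# imm = 42, negated = -42".
    // Assumes a target whose assembler uses '#' comments (e.g. x86 AT&T).
    void emit_imm_comment() {
      asm volatile("# imm = %c0, negated = %n0" : : "i"(42));
    }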

Affected files:

    lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
        Added 'n' to the switch cases.

    test/CodeGen/Generic/asm-large-immediate.ll
        Generic compiled test (x86 for me)

    test/CodeGen/Mips/asm-large-immediate.ll
        Mips compiled version of the generic one

Contributor: Jack Carter
llvm-svn: 158939
2012-06-21 21:37:54 +00:00
Pete Cooper
5b61422d80 Fix potential crash if DAGCombine on stores sees a half type
llvm-svn: 158927
2012-06-21 18:00:39 +00:00
Jack Carter
b2fd5f66b4 The inline asm operand modifier 'c' is supposed
to be generic across architectures. It has the
following description in the GNU sources:

    Substitute immediate value without immediate syntax

Several architectures, such as x86, have local implementations
of operand modifier 'c' which go beyond the above description
slightly. To make use of the generic modifiers without overriding the
local implementation, one can call the base class method
AsmPrinter::PrintAsmOperand() from the "default" case of the switch
statement in the locally derived method. That way, if a modifier is
already handled locally, the generic version never gets called.
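
A condensed sketch of that pattern (hypothetical target class; the 2012-era
virtual signature with the AsmVariant parameter is assumed, and printOperand()
stands in for the target's usual operand printer):

    // Defer any modifier the target does not handle itself to the generic
    // AsmPrinter::PrintAsmOperand(), which now understands 'c' (and 'n').
    bool MyTargetAsmPrinter::PrintAsmOperand(const MachineInstr *MI,
                                             unsigned OpNum, unsigned AsmVariant,
                                             const char *ExtraCode,
                                             raw_ostream &O) {
      if (ExtraCode && ExtraCode[0]) {
        switch (ExtraCode[0]) {
        // Target-specific modifier cases would be listed here...
        default:
          // ...and everything else falls back to the base class.
          return AsmPrinter::PrintAsmOperand(MI, OpNum, AsmVariant, ExtraCode, O);
        }
      }
      printOperand(MI, OpNum, O);   // hypothetical plain-operand printer
      return false;                 // false means success
    }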

This change was needed because test/CodeGen/generic/asm-large-immediate.ll
failed on a native Mips board. The test assumed a generic
implementation was in place.

Affected files:

    lib/Target/Mips/MipsAsmPrinter.cpp:
        Changed the default case to call the base method.
    lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
        Added 'c' to the switch cases.
    test/CodeGen/Mips/asm-large-immediate.ll
        Mips compiled version of the generic one

Contributor: Jack Carter
llvm-svn: 158925
2012-06-21 17:14:46 +00:00
Evan Cheng
8c2ad81238 Emit a single _udivmodsi4 libcall instead of two separate _udivsi3 and
_umodsi3 libcalls if they have the same arguments. This optimization
was apparently broken if one of the nodes was replaced in place.
rdar://11714607
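
For reference, C++ source of the shape this optimization targets; on targets
without a hardware divider, both operations can again be lowered to one
combined libcall instead of separate divide and modulo calls:

    // a / b and a % b share the same operands, so the backend may emit a single
    // combined udivmod libcall rather than one call for '/' and one for '%'.
    unsigned divmod(unsigned a, unsigned b, unsigned *rem) {
      *rem = a % b;
      return a / b;
    }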

llvm-svn: 158900
2012-06-21 05:56:05 +00:00
Jakob Stoklund Olesen
58713de545 Update regunits in RegisterCoalescer::reMaterializeTrivialDef.
Old code would only update physreg live intervals.

llvm-svn: 158881
2012-06-21 00:09:15 +00:00
Jakob Stoklund Olesen
37a1338a16 Remove spurious typedefs.
llvm-svn: 158878
2012-06-20 23:54:18 +00:00
Jakob Stoklund Olesen
1911a0203d Remove the RenderMachineFunction HTML output pass.
I don't think anyone has been using this functionality for a while, and
it is getting in the way of refactoring now.

llvm-svn: 158876
2012-06-20 23:47:58 +00:00
Jakob Stoklund Olesen
51c63e64e3 Remove the -live-regunits command line option.
Register allocators depend on it being permanently enabled now.

llvm-svn: 158873
2012-06-20 23:31:34 +00:00
Jakob Stoklund Olesen
781e0b9fd7 Fix some more LiveInterval enumerations.
Deterministically enumerate the virtual registers instead.

llvm-svn: 158872
2012-06-20 23:23:59 +00:00
Jakob Stoklund Olesen
2d2dec96e0 Remove LiveIntervalUnions from RegAllocBase.
They are living in LiveRegMatrix now.

llvm-svn: 158868
2012-06-20 22:52:29 +00:00
Jakob Stoklund Olesen
96eebf0b14 Convert RAGreedy to LiveRegMatrix interference checking.
Stop depending on the LiveIntervalUnions in RegAllocBase, they are about
to be removed.

The changes are mostly replacing register alias iterators with regunit
iterators and querying LiveRegMatrix instead of RegAllocBase.

InterferenceCache is converted to work with per-regunit
LiveIntervalUnions, and it checks fixed regunit interference separately,
using the fixed live intervals provided by LiveIntervalAnalysis.

The local splitting helper calcGapWeights() also considers fixed
regunit interference, which is now kept on the side.
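
The new query pattern, in a rough sketch (assumes the LiveRegMatrix
checkInterference()/assign() interface; not the actual RAGreedy code):

    #include "llvm/CodeGen/LiveRegMatrix.h"
    using namespace llvm;

    // Try to give VirtReg the physical register PhysReg. The matrix checks
    // interference against other virtual registers, fixed regunit ranges, and
    // register masks; on success the assignment is recorded per regunit.
    static bool tryAssign(LiveInterval &VirtReg, unsigned PhysReg,
                          LiveRegMatrix &Matrix) {
      if (Matrix.checkInterference(VirtReg, PhysReg) != LiveRegMatrix::IK_Free)
        return false;
      Matrix.assign(VirtReg, PhysReg);
      return true;
    }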

llvm-svn: 158867
2012-06-20 22:52:26 +00:00
Jakob Stoklund Olesen
03b87d5aaa Convert RABasic to using LiveRegMatrix interference checking.
Stop using the LiveIntervalUnions provided by RegAllocBase, they will be
removed soon.

llvm-svn: 158866
2012-06-20 22:52:24 +00:00
Jakob Stoklund Olesen
effc6b2d18 Enable register unit liveness by default.
Soon we won't need to compute live intervals for physical registers.

llvm-svn: 158865
2012-06-20 22:52:22 +00:00
Jakob Stoklund Olesen
bfa664eaae Teach PBQPBuilder::build() about regunit interference.
Filter out physreg candidates with regunit interference.
Also compute regmask interference more efficiently.

llvm-svn: 158864
2012-06-20 22:32:05 +00:00
Jakob Stoklund Olesen
a1f43dcdb8 Avoid iterating with LiveIntervals::iterator.
That is a DenseMap iterator keyed by pointers, so the iteration order is
nondeterministic.

I would like to replace the DenseMap with an IndexedMap which doesn't
allow iteration.

llvm-svn: 158856
2012-06-20 21:25:05 +00:00
Pete Cooper
fe5b84b404 Add users of a MERGE_VALUE node to the worklist to be processed again when the node is removed. Sorry, no test case. Found it by inspection of the code.
llvm-svn: 158839
2012-06-20 19:35:43 +00:00
Jakob Stoklund Olesen
833308d785 Only update regunit live ranges that have been precomputed.
Regunit live ranges are computed on demand, so when mi-sched calls
handleMove, some regunits may not have live ranges yet.

That makes updating them easier: Just skip the non-existing ranges. They
will be computed correctly from the rescheduled machine code when they
are needed.

llvm-svn: 158831
2012-06-20 18:00:57 +00:00
Jakob Stoklund Olesen
d702e8fddf Delete dead code.
llvm-svn: 158827
2012-06-20 16:38:50 +00:00
Hal Finkel
8a31138521 Fix DAGCombine to deal with ext-conversion of pre/post_inc loads.
The test case for this will come with the PPC indexed preinc loads commit.

llvm-svn: 158822
2012-06-20 15:42:48 +00:00
Aaron Ballman
421a5ba06d Fixing a compiler warning in MSVC 10.
llvm-svn: 158820
2012-06-20 14:44:44 +00:00
Chandler Carruth
c60fbe6b58 Fix two rather subtle internal vs. external linkage issues.
I'll admit I'm not entirely satisfied with this change, but it seemed
the cleanest option. Other suggestions are quite welcome.

The issue is that the traits specializations have static methods which
return the typedef'ed PHI_iterator type. In both the IR and MI layers
this is typedef'ed to a custom iterator class defined in an anonymous
namespace, giving the types and the functions returning them internal
linkage. However, the traits specialization is defined in the
'llvm' namespace (where it has to be, since the specialized template
lives there) and is in turn used in the templated implementation of the
SSAUpdater. This led to the linkage conflict that Clang now warns about.

The simplest solution to me was just to define the PHI_iterator as
a nested class inside the trait specialization. That way it still
doesn't get scoped widely, it can't be accidentally reused somewhere,
etc. This is a little gross just because nested class definitions are
a little gross, but the alternatives seem more ad-hoc.
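
A self-contained toy illustration of the clash and the fix (made-up names, not
the LLVM sources):

    namespace {
    struct HiddenPHIIt { int Idx; };   // internal linkage (anonymous namespace)
    } // end anonymous namespace

    namespace demo {
    template <typename T> struct SSAUpdaterTraits;

    // Problematic shape: an externally visible specialization whose interface
    // depends on an internal-linkage type -- the mix Clang warns about when the
    // specialization is used from templated code in other translation units.
    template <> struct SSAUpdaterTraits<int> {
      typedef HiddenPHIIt PHI_iterator;
      static PHI_iterator PHI_begin() { return PHI_iterator(); }
    };

    // Fixed shape: nest the iterator inside the specialization, so it is neither
    // widely scoped nor tied to anything from the anonymous namespace.
    template <typename T> struct SSAUpdaterTraits2;
    template <> struct SSAUpdaterTraits2<long> {
      struct PHI_iterator { int Idx; };
      static PHI_iterator PHI_begin() { return PHI_iterator(); }
    };
    } // namespace demo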

llvm-svn: 158799
2012-06-20 08:39:30 +00:00
Andrew Trick
ff2ed7b687 A new algorithm for computing LoopInfo. Temporarily disabled.
-stable-loops enables a new algorithm for generating the Loop
forest. It differs from the original algorithm in a few respects:

- Not determined by use-list order.
- Initially guarantees RPO order of block and subloops.
- Linear in the number of CFG edges.
- Nonrecursive.

I didn't want to change the LoopInfo API yet, so the block lists are
still inclusive. This seems strange to me, and it means that building
LoopInfo is not strictly linear, but it may not be a problem in
practice. At least the block lists start out in RPO order now. In the
future we may add an attribute or wrapper analysis that allows other
passes to assume RPO order.

The primary motivation of this work was not to optimize LoopInfo, but
to allow reproducing performance issues by decomposing the compilation
stages. I'm often unable to do this with the current LoopInfo, because
the loop tree order determines Loop pass order. Serializing the IR
tends to invert the order, which reverses the optimization order. This
makes it nearly impossible to debug interdependent loop optimizations
such as LSR.

I also believe this will provide more stable performance results across time.

llvm-svn: 158790
2012-06-20 05:23:33 +00:00