188 Commits

Author SHA1 Message Date
Evan Cheng
678b86d6ce MachineInstr can change. Store indexes instead.
llvm-svn: 44612
2007-12-05 10:24:35 +00:00
Evan Cheng
06353b48b5 If a split live interval is spilled again, remove the kill marker on its last use.
llvm-svn: 44611
2007-12-05 09:51:10 +00:00
Evan Cheng
64b3baaaea Clobber more bugs.
llvm-svn: 44610
2007-12-05 09:05:34 +00:00
Evan Cheng
d7de56ac93 Fix kill info for split intervals.
llvm-svn: 44609
2007-12-05 08:16:32 +00:00
Evan Cheng
269dbd31d0 - Mark the last use of a split interval as a kill instead of letting the
spiller track it. This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
  low spill weight so it is not picked when further spilling is needed (this
  avoids pushing other intervals in the same BB to be spilled).

llvm-svn: 44601
2007-12-05 03:22:34 +00:00
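(An illustrative sketch of the commit above, in the same pseudo machine-code
notation as the other examples in this log; vr1100 and bb7 are made-up names:)

    bb7:
           = vr1100   (use)
           = vr1100   (use / kill) <= last use of the split interval,
                                      now marked kill by the splitter itself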
Evan Cheng
d1badb960e Discard split intervals made empty due to folding.
llvm-svn: 44565
2007-12-04 00:32:23 +00:00
Evan Cheng
196faa9dc5 Typo
llvm-svn: 44532
2007-12-03 10:00:00 +00:00
Evan Cheng
85ef9834a6 Update kill info for uses of split intervals.
llvm-svn: 44531
2007-12-03 09:58:48 +00:00
Evan Cheng
f45a1d623c Remove redundant foldMemoryOperand variants and other code clean up.
llvm-svn: 44517
2007-12-02 08:30:39 +00:00
Evan Cheng
388f6f51a0 Fix a bug where splitting caused some unnecessary spilling.
llvm-svn: 44482
2007-12-01 04:42:39 +00:00
Evan Cheng
69fda0a716 Allow some reloads to be folded in multi-use cases. Specifically, testl r, r -> cmpl [mem], 0.
llvm-svn: 44479
2007-12-01 02:07:52 +00:00
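(A sketch of the fold named above; the register r and the reload are
hypothetical:)

    r = reload [mem]
    testl r, r         <= r has no other use, so the pair becomes:

    cmpl [mem], 0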
Evan Cheng
b10dc27b20 Do not fold a reload into an instruction with multiple uses of the reloaded value; doing so issues one extra load.
llvm-svn: 44467
2007-11-30 21:23:43 +00:00
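(A hypothetical sketch of why folding is refused here: with a second use of r,
the reload must stay, so folding would add a memory access instead of removing
one:)

    r = reload [mem]
      = op1 r          <= folding gives op1 [mem] ...
      = op2 r          <= ... but r is still needed, so the reload remains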
Evan Cheng
d35b5acae4 Do not lose rematerialization info when spilling already split live intervals.
llvm-svn: 44443
2007-11-29 23:02:50 +00:00
Evan Cheng
8494ee175c Fix a major performance issue with splitting. If there is a def (not def/use)
in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.

bb1:
       = vr1204   (use / kill) <= new interval starts and ends here.
...
...
vr1204 =          (new def)   <= start a new interval here.
       = vr1204   (use)

llvm-svn: 44436
2007-11-29 10:12:14 +00:00
Evan Cheng
f85c063ec0 Replace the odd kill# hack with something less fragile.
llvm-svn: 44434
2007-11-29 09:49:23 +00:00
Evan Cheng
be255b0650 Fixed various live interval splitting bugs / compile time issues.
llvm-svn: 44428
2007-11-29 01:06:25 +00:00
Evan Cheng
c1648b6a0d Recover compile time regression.
llvm-svn: 44386
2007-11-28 01:28:46 +00:00
Evan Cheng
8e22379303 Live interval splitting:
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.

This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note, it does *not* decrease the number of spills,
since no copies are inserted; the split intervals are *connected* through
spills and reloads (or rematerialization). A newly created interval can be
spilled again; in that case, since it does not span multiple basic blocks, it is
spilled in the usual manner. However, it can reuse the same stack slot as the
previously split interval.

This is currently controlled by -split-intervals-at-bb.

llvm-svn: 44198
2007-11-17 00:40:40 +00:00
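(A made-up before / after sketch of the problem this targets; vr2001, bb3, and
the stack slot [ss] are hypothetical names:)

    before splitting, every use in bb3 reloads from the stack slot:

    bb3:
    r = reload [ss]
      = r
    r = reload [ss]
      = r

    after splitting, one new interval vr2001 covers all uses in bb3, so a
    single reload suffices:

    bb3:
    vr2001 = reload [ss]
           = vr2001
           = vr2001 (kill)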
Evan Cheng
2c1a50455c Fix a thinko in the post-allocation coalescer.
llvm-svn: 44166
2007-11-15 08:13:29 +00:00
Evan Cheng
7f02cfa599 Clean up the sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
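(A hypothetical example of the case interval splitting introduces, which an
external map tracks poorly:)

           = vr1024           (full-register use)
           = vr1024<subreg>   (sub-register use of the same virtual register)

With the sub-register index stored on the MachineOperand itself, each use
carries its own information.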
Evan Cheng
be51f28e2b Refactor some code.
llvm-svn: 44010
2007-11-12 06:35:08 +00:00
Evan Cheng
e742ee1dbe Simplify my (il)logic.
llvm-svn: 43819
2007-11-07 08:08:25 +00:00
Evan Cheng
dd71a5c37b When the allocator rewrites a spill register with a new virtual register, it
replaces other operands of the same register. Watch out for situations where
only some of the operands are sub-register uses.

llvm-svn: 43776
2007-11-06 21:12:10 +00:00
Evan Cheng
92d23e5204 Fix a bug where a def/use operand isn't being detected as a sub-register use.
llvm-svn: 43763
2007-11-06 08:50:44 +00:00
Evan Cheng
a8044084ac Fix PR1187.
llvm-svn: 43692
2007-11-05 00:59:10 +00:00
Evan Cheng
66298e226f There are times when the coalescer does not coalesce away a copy, but the copy
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86, mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.

The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
   legal) so the copy can be eliminated.

This eliminates 443 extra moves from 403.gcc.

llvm-svn: 43662
2007-11-03 07:20:12 +00:00
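(A hypothetical x86 sketch of technique 2 above; the virtual register numbers
are made up:)

    vr1024 = mov32to32_ vr1025   <= not coalesced: GR32_ vs GR32
           = vr1024 (use)

    if the allocator assigns both vr1024 and vr1025 to EAX, the copy becomes
    EAX = EAX and can be deleted.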
Evan Cheng
0dde6e5761 Apply Chris' suggestions.
llvm-svn: 43069
2007-10-17 06:53:44 +00:00
Evan Cheng
8b8c7c9927 Clean up code that calculates MBB live-ins.
llvm-svn: 43060
2007-10-17 02:10:22 +00:00
Evan Cheng
1410b8512c Didn't mean to leave this in. INSERT_SUBREG isn't being coalesced yet.
llvm-svn: 42916
2007-10-12 17:16:50 +00:00
Evan Cheng
aa2d6ef81d EXTRACT_SUBREG coalescing support. The coalescer now treats EXTRACT_SUBREG like
(almost) a register copy. However, it is always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are sub-
register uses, which adds subtle complications to load folding, spiller rewrite,
etc.

llvm-svn: 42899
2007-10-12 08:50:34 +00:00
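(A hypothetical sketch; the registers and sub-register index are made up:)

    vr1024 = EXTRACT_SUBREG vr1025, 1   <= treated (almost) like a copy
           = vr1024 (use)

    after coalescing into the super-register vr1025, the use becomes a
    sub-register use of vr1025 -- the complication noted above.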
Evan Cheng
21a58a72c5 The kill cycle of a live range is always the last use index + 1.
llvm-svn: 42742
2007-10-08 06:59:30 +00:00
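(Illustrative, with made-up instruction indexes:)

    index 18:    = vr1024 (last use)
    => the kill cycle of the range is 19, i.e. 18 + 1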
Dan Gohman
c731c97fac Use empty() member functions when that's what's being tested for instead
of comparing begin() and end().

llvm-svn: 42585
2007-10-03 19:26:29 +00:00
Dan Gohman
9da02f5ee2 Remove isReg, isImm, and isMBB, and change all their users to use
isRegister, isImmediate, and isMachineBasicBlock, which are equivalent
and more popular.

llvm-svn: 41958
2007-09-14 20:33:02 +00:00
Evan Cheng
d059eed1c1 Fix a memory leak.
llvm-svn: 41739
2007-09-06 01:07:24 +00:00
Evan Cheng
db53aef53e Use a pool allocator for all the VNInfos to improve memory access locality. This reduces coalescing time on siod on Mac OS X PPC by 35%. Also remove the back pointer from VNInfo to LiveInterval, plus other tweaks.
llvm-svn: 41729
2007-09-05 21:46:51 +00:00
Evan Cheng
32a0a995c6 Try to fold re-materialized load instructions into their uses.
llvm-svn: 41598
2007-08-30 05:53:02 +00:00
Evan Cheng
1ad4a6117b Change LiveRange so it keeps a pointer to the VNInfo rather than an index.
Change related modules so VNInfos are not copied. This decreases copy
coalescing time by 45% and overall compilation time by 10% on siod.

llvm-svn: 41579
2007-08-29 20:45:00 +00:00
Evan Cheng
70c2de7bf1 Fix some kill info update bugs; add hidden option -disable-rematerialization to turn off remat for debugging.
llvm-svn: 41118
2007-08-16 07:24:22 +00:00
Evan Cheng
33820da1da Re-implement trivial rematerialization. This allows def MIs whose live intervals are coalesced to be rematerialized.
llvm-svn: 41060
2007-08-13 23:45:17 +00:00
Evan Cheng
05cc486c7b Code to maintain kill information during register coalescing.
llvm-svn: 41016
2007-08-11 00:59:19 +00:00
Evan Cheng
d771b793fe Adding kill info to val#.
llvm-svn: 40925
2007-08-08 07:03:29 +00:00
Evan Cheng
a8c2f38617 - Each val# can have multiple kills.
- Fix some minor bugs related to special markers on val# def. ~0U means
  undefined, ~1U means dead val#.

llvm-svn: 40916
2007-08-08 03:00:28 +00:00
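(An illustrative encoding, with made-up numbers:)

    val#0: def = 12, kills = { 20, 34 }   <= one val# with multiple kills
    def == ~0U  => undefined val#
    def == ~1U  => dead val#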
Evan Cheng
0d0fee269a - LiveInterval value#'s now have 3 components: def instruction #,
kill instruction #, and source register number (iff the value# is defined by a
copy).
- Now the def instruction # is set for every value#, not just for copy-defined ones.
- Update some outdated code related to inactive live ranges.
- Kill info is not yet set. That's the next patch.

llvm-svn: 40913
2007-08-07 23:49:57 +00:00
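(The three components, with made-up numbers; the source register is recorded
only for copy-defined value#'s:)

    val#1: def instruction #  = 14
           kill instruction # = 22
           source register    = vr1024 (iff defined by a copy)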
Evan Cheng
df0c705d7d If a live-in is not used in the block, it's live through.
llvm-svn: 37764
2007-06-27 18:47:28 +00:00
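(Sketch; vr1024 and bb5 are hypothetical:)

    bb5:  vr1024 is live-in and has no use or def inside the block
          => vr1024 is live through all of bb5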
Evan Cheng
6cf1371456 Fix an obvious bug: the old code only worked for the entry block.
llvm-svn: 37743
2007-06-27 01:16:36 +00:00
Dan Gohman
9e82064924 Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.

llvm-svn: 37644
2007-06-19 01:48:05 +00:00
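(A hypothetical contrast between the two kinds of loads the hook must
distinguish:)

    vr1024 = load [constant-pool entry]   <= rematerializable: the memory
                                             never changes
    vr1025 = load [stack slot]            <= not: the slot may be rewritten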
Dan Gohman
4a4a8eb00e Add a target hook to allow loads from constant pools to be rematerialized, and an
implementation for x86.

llvm-svn: 37576
2007-06-14 20:50:44 +00:00
David Greene
02f6e9b621 Factor live variable analysis so it does not do register coalescing
simultaneously.  Move that pass to SimpleRegisterCoalescing.

This makes it easier to implement alternative register allocation and
coalescing strategies while maintaining reuse of the existing live
interval analysis.

llvm-svn: 37520
2007-06-08 17:18:56 +00:00
Evan Cheng
e1595b6859 Only worry about an intervening kill if there is more than one live range in the interval.
llvm-svn: 37052
2007-05-14 21:23:51 +00:00
Evan Cheng
c690cba7d9 Fix for PR1406:
v1 =
r2 = move v1
   = op r2<kill>
...
r2 = move v1
   = op r2<kill>

Clear the first r2 kill if v1 and r2 are joined.

llvm-svn: 37050
2007-05-14 21:10:05 +00:00