Chris Lattner bf3b57f221 optimize single MBB loops better. In particular, produce:
LBB1_57:        #bb207.i
        movl 72(%esp), %ecx
        movb (%ecx,%eax), %cl
        movl 80(%esp), %edx
        movb %cl, 1(%edx,%eax)
        incl %eax
        cmpl $143, %eax
        jne LBB1_57     #bb207.i
        jmp LBB1_64     #cond_next255.i

instead of:

LBB1_57:        #bb207.i
        movl 72(%esp), %ecx
        movb (%ecx,%eax), %cl
        movl 80(%esp), %edx
        movb %cl, 1(%edx,%eax)
        incl %eax
        cmpl $143, %eax
        je LBB1_64      #cond_next255.i
        jmp LBB1_57     #bb207.i

This eliminates a branch per iteration of the loop.  This hurt PPC
particularly, because the extra branch meant another dispatch group for each
iteration of the loop.
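A minimal sketch of the rewrite, using toy data structures rather than LLVM's
actual MachineBasicBlock/TargetInstrInfo interfaces (the struct and function
names below are illustrative only): when a single-block loop ends with a
conditional branch to the exit followed by an unconditional jump back to the
header, invert the condition and swap the two targets so the backward edge is
the conditional branch.

#include <string>
#include <utility>
#include <iostream>

// Toy model of a block's terminators: one conditional jump followed by
// one unconditional jump.
struct Terminator {
  std::string CondOpcode;   // e.g. "je" or "jne"
  std::string CondTarget;   // taken target of the conditional branch
  std::string UncondTarget; // target of the trailing unconditional jump
};

// Invert an x86 condition code (only the pair used in the example above;
// a real pass would handle the full set of condition codes).
static std::string invert(const std::string &Opcode) {
  if (Opcode == "je")  return "jne";
  if (Opcode == "jne") return "je";
  return Opcode; // unhandled codes left untouched in this sketch
}

// If a single-block loop ends with "jCC exit; jmp header", rewrite it to
// "j!CC header; jmp exit" so the common looping path takes one branch.
void rotateLoopBranch(Terminator &T, const std::string &Header) {
  if (T.UncondTarget != Header)
    return; // backward edge is not the unconditional jump; nothing to do
  T.CondOpcode = invert(T.CondOpcode);
  std::swap(T.CondTarget, T.UncondTarget);
}

int main() {
  // Mirrors the "before" code: je LBB1_64; jmp LBB1_57 (the loop header).
  Terminator T{"je", "LBB1_64", "LBB1_57"};
  rotateLoopBranch(T, "LBB1_57");
  // Prints "jne LBB1_57" then "jmp LBB1_64", matching the "after" code.
  std::cout << T.CondOpcode << " " << T.CondTarget << "\n"
            << "jmp " << T.UncondTarget << "\n";
}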

llvm-svn: 31530
2006-11-08 01:03:21 +00:00