llvm-project/llvm/utils/update_analyze_test_checks.py


#!/usr/bin/env python3

"""A script to generate FileCheck statements for 'opt' analysis tests.

This script is a utility to update LLVM opt analysis test cases with new
FileCheck patterns. It can either update all of the tests in the file or
a single test function.

Example usage:
$ update_analyze_test_checks.py --opt=../bin/opt test/foo.ll

Workflow:
1. Make a compiler patch that requires updating some number of FileCheck lines
   in regression test files.
2. Save the patch and revert it from your local work area.
3. Update the RUN-lines in the affected regression tests to look canonical.
   Example: "; RUN: opt < %s -passes='print<cost-model>' -disable-output 2>&1 | FileCheck %s"
4. Refresh the FileCheck lines for either the entire file or select functions by
   running this script.
5. Commit the fresh baseline of checks.
6. Apply your patch from step 1 and rebuild your local binaries.
7. Re-run this script on affected regression tests.
8. Check the diffs to ensure the script has done something reasonable.
9. Submit a patch including the regression test diffs for review.

A common pattern is to have the script insert complete checking of every
instruction. Then, edit it down to only check the relevant instructions.
The script is designed to make adding checks to a test case fast; it is *not*
designed to be authoritative about what constitutes a good test!
"""
from __future__ import print_function

import argparse
import os  # Used to advertise this file's name ("autogenerated_note").
import sys
import re

from UpdateTestChecks import common


def main():
    from argparse import RawTextHelpFormatter

    parser = argparse.ArgumentParser(
        description=__doc__, formatter_class=RawTextHelpFormatter
    )
    parser.add_argument(
        "--opt-binary",
        default="opt",
        help="The opt binary used to generate the test case",
    )
    parser.add_argument("--function", help="The function in the test file to update")
    parser.add_argument("tests", nargs="+")
    initial_args = common.parse_commandline_args(parser)

    script_name = os.path.basename(__file__)
    opt_basename = os.path.basename(initial_args.opt_binary)
    if opt_basename != "opt":
        common.error("Unexpected opt name: " + opt_basename)
        sys.exit(1)
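
    # Process each test file named on the command line.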
    for ti in common.itertests(
        initial_args.tests, parser, script_name="utils/" + script_name
    ):
        triple_in_ir = None
        for l in ti.input_lines:
            m = common.TRIPLE_IR_RE.match(l)
            if m:
                triple_in_ir = m.groups()[0]
                break
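
        # Parse each RUN line into a (FileCheck prefixes, opt arguments) pair.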
        prefix_list = []
        for l in ti.run_lines:
            if "|" not in l:
                common.warn("Skipping unparsable RUN line: " + l)
                continue

            (tool_cmd, filecheck_cmd) = tuple([cmd.strip() for cmd in l.split("|", 1)])
            common.verify_filecheck_prefixes(filecheck_cmd)

            if not tool_cmd.startswith(opt_basename + " "):
                common.warn("Skipping non-%s RUN line: %s" % (opt_basename, l))
                continue

            if not filecheck_cmd.startswith("FileCheck "):
                common.warn("Skipping non-FileChecked RUN line: " + l)
                continue

            tool_cmd_args = tool_cmd[len(opt_basename) :].strip()
            tool_cmd_args = tool_cmd_args.replace("< %s", "").replace("%s", "").strip()

            check_prefixes = common.get_check_prefixes(filecheck_cmd)

            # FIXME: We should use multiple check prefixes to common check lines. For
            # now, we just ignore all but the last.
            prefix_list.append((check_prefixes, tool_cmd_args))
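
        # Set up the generalizer and the builder that collect per-function
        # tool output for each set of FileCheck prefixes.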
        ginfo = common.make_analyze_generalizer(version=1)
        builder = common.FunctionTestBuilder(
            run_list=prefix_list,
            flags=type(
                "",
                (object,),
                {
                    "verbose": ti.args.verbose,
                    "filters": ti.args.filters,
                    "function_signature": False,
                    "check_attributes": False,
                    "replace_value_regex": [],
                },
            ),
            scrubber_args=[],
            path=ti.path,
            ginfo=ginfo,
        )
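
        # Run opt once per RUN line and feed its output to the builder.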
        for prefixes, opt_args in prefix_list:
            common.debug("Extracted opt cmd:", opt_basename, opt_args, file=sys.stderr)
            common.debug(
                "Extracted FileCheck prefixes:", str(prefixes), file=sys.stderr
            )

            raw_tool_outputs = common.invoke_tool(ti.args.opt_binary, opt_args, ti.path)

            if re.search(r"Printing analysis ", raw_tool_outputs) is not None:
                # Split analysis outputs by "Printing analysis " declarations.
                for raw_tool_output in re.split(
                    r"Printing analysis ", raw_tool_outputs
                ):
                    builder.process_run_line(
                        common.ANALYZE_FUNCTION_RE,
                        common.scrub_body,
                        raw_tool_output,
                        prefixes,
                    )
            elif (
                re.search(r"(LV|LDist): Checking a loop in ", raw_tool_outputs)
                is not None
            ):
                for raw_tool_output in re.split(
                    r"(LV|LDist): Checking a loop in ", raw_tool_outputs
                ):
                    builder.process_run_line(
                        common.LOOP_PASS_DEBUG_RE,
                        common.scrub_body,
                        raw_tool_output,
                        prefixes,
                    )
            else:
                common.warn("Don't know how to deal with this output")
                continue

            builder.processed_prefixes(prefixes)
        func_dict = builder.finish_and_get_func_dict()
        is_in_function = False
        is_in_function_start = False
        prefix_set = set([prefix for prefixes, _ in prefix_list for prefix in prefixes])
        common.debug("Rewriting FileCheck prefixes:", str(prefix_set), file=sys.stderr)
        output_lines = []
        generated_prefixes = []
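
        # Second pass: rewrite the test file, replacing stale CHECK lines with
        # freshly generated ones for each function.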
        for input_info in ti.iterlines(output_lines):
            input_line = input_info.line
            args = input_info.args
            if is_in_function_start:
                if input_line == "":
                    continue
                if input_line.lstrip().startswith(";"):
                    m = common.CHECK_RE.match(input_line)
                    if not m or m.group(1) not in prefix_set:
                        output_lines.append(input_line)
                    continue

                # Print out the various check lines here.
                generated_prefixes.extend(
                    common.add_analyze_checks(
                        output_lines,
                        ";",
                        prefix_list,
                        func_dict,
                        func_name,
                        ginfo,
                        is_filtered=builder.is_filtered(),
                    )
                )
                is_in_function_start = False

            if is_in_function:
                if common.should_add_line_to_output(input_line, prefix_set):
                    # This input line of the function body will go as-is into the output.
                    output_lines.append(input_line)
                else:
                    continue
                if input_line.strip() == "}":
                    is_in_function = False
                continue

            # If it's outside a function, it just gets copied to the output.
            output_lines.append(input_line)

            m = common.IR_FUNCTION_RE.match(input_line)
            if not m:
                continue
            func_name = m.group(1)
            if ti.args.function is not None and func_name != ti.args.function:
                # When filtering on a specific function, skip all others.
                continue
            is_in_function = is_in_function_start = True

        if ti.args.gen_unused_prefix_body:
            output_lines.extend(
                ti.get_checks_for_unused_prefixes(prefix_list, generated_prefixes)
            )
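
        # Write the updated test back to the original file path.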
common.debug("Writing %d lines to %s..." % (len(output_lines), ti.path))
with open(ti.path, "wb") as f:
f.writelines(["{}\n".format(l).encode("utf-8") for l in output_lines])
if __name__ == "__main__":
main()