Have you ever wished for an ATC check variant that shows you only the new findings, i.e. those introduced by the changes you just applied to existing “legacy” code?
This is especially useful for SAP customers that have (or plan to have) ATC checks configured to run during transport release. In a typical customer situation, a developer who performs a small bugfix on an existing, old Z-program does not want to correct all the quality problems that program has accumulated over the years. Likewise, the business user who asked for one little bugfix or change in a specific part of the program does not want to test all of its functions.
Wouldn’t it be nice to have a filter that lets us see only those findings that did not already exist in previous versions of the program?
I have heard that SAP is working on such a feature for the ATC. But it so happened that I stumbled over an enhancement spot where I could implement it myself relatively easily (there is one bigger problem for a specific case, though – see the end of this blog).
A suitable spot is an implicit enhancement option in class CL_SATC_CI_ADAPTER, at the end of method if_Satc_Ci_Adapter~analyze_Objects of the local class.
In that position, I inserted a call to a new class:
ENHANCEMENT 1 ZS_ATC_FILTER_FINDINGS. "active version
  zcl_s_atc_filter_findings=>filter( EXPORTING i_or_inspection = anonymous_inspection
                                     CHANGING  c_it_findings   = findings ).
ENDENHANCEMENT.
This is how the filter class is defined:
CLASS zcl_s_atc_filter_findings DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.
    CLASS-METHODS:
      filter
        IMPORTING i_or_inspection TYPE REF TO cl_ci_inspection
        CHANGING  c_it_findings   TYPE scit_rest.

  PROTECTED SECTION.

  PRIVATE SECTION.
    CLASS-METHODS:
      is_finding_new
        IMPORTING i_wa_f          TYPE scir_rest
        RETURNING VALUE(r_result) TYPE abap_bool
        RAISING   cx_satc_failure,
      init_comparison_data
        IMPORTING i_it_findings TYPE scit_rest
        RAISING   cx_satc_failure,
      get_consolidated_names
        IMPORTING i_it_findings TYPE scit_rest
        RETURNING VALUE(r)      TYPE if_satc_result_access_filter=>ty_object_names,
      filter_previous_findings
        CHANGING c_it_findings TYPE scit_rest
        RAISING  cx_satc_failure.

    CLASS-DATA s_comparison_findings TYPE scit_rest.
ENDCLASS.
In the filter( ) method, I check the check variant name: for variant Z_DELTA, I delegate to method filter_previous_findings( ), which does the actual work.
METHOD filter.
  CHECK c_it_findings IS NOT INITIAL.
  TRY.
      IF i_or_inspection->chkv->chkvinf-checkvname = 'Z_DELTA'.
        filter_previous_findings( CHANGING c_it_findings = c_it_findings ).
      ENDIF.
    CATCH cx_satc_failure INTO DATA(cx).
      DATA(exc_text) = cx->get_text( ).
      MESSAGE exc_text TYPE 'E'.
  ENDTRY.
ENDMETHOD.
Method filter_previous_findings() does what its name says:
METHOD filter_previous_findings.
  init_comparison_data( i_it_findings = c_it_findings ).
  LOOP AT c_it_findings ASSIGNING FIELD-SYMBOL(<f>).
    IF NOT is_finding_new( <f> ).
      DELETE c_it_findings USING KEY loop_key.
    ENDIF.
  ENDLOOP.
ENDMETHOD.
But how can we compare with the check results of a previous version of the program? It is relatively easy if we regularly run mass checks (over all customer coding) on the quality/test system and replicate these check results to the development system. In that case, we can access those findings with the ATC API classes.
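As a quick sanity check, you can look at the replicated central runs in the development system directly. The following is only a sketch; it uses the same table and column names that appear in the selection further down:

" Sketch: list the newest replicated central ATC runs, to verify
" that the replication from the quality/test system worked.
SELECT display_id, title, scheduled_on_ts
  FROM satc_ac_resulth
  WHERE is_central_run = 'X'
    AND is_complete    = 'X'
  ORDER BY scheduled_on_ts DESCENDING
  INTO TABLE @DATA(central_runs)
  UP TO 10 ROWS.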
Method init_comparison_data( ) is the key element: it selects the newest complete central check run from table SATC_AC_RESULTH, using a pattern for the title. You will have to adapt this pattern to your system name, or to whatever name you configured for the check run in your quality/test system.
METHOD init_comparison_data.
  DATA(object_names) = get_consolidated_names( i_it_findings ).
  CHECK object_names IS NOT INITIAL.
  DATA(or_factory) = NEW cl_satc_api_factory( ).
  DATA(or_filter) = or_factory->create_result_access_filter( i_object_names = object_names ).
  SELECT display_id FROM satc_ac_resulth
    WHERE is_central_run = 'X'
      AND is_complete    = 'X'
      AND title LIKE 'D1Q:%' " adapt this to the pattern of your mass test run name
    ORDER BY scheduled_on_ts DESCENDING
    INTO @DATA(display_id)
    UP TO 1 ROWS.
  ENDSELECT.
  CHECK sy-subrc = 0.
  or_factory->create_result_access( i_result_id = display_id )->get_findings(
    EXPORTING i_filter   = or_filter
    IMPORTING e_findings = s_comparison_findings ).
ENDMETHOD.
In the above method, we do not want to load all findings of the mass run (usually an enormously large number, depending on your system), so we prepare a filter from the objects in the current findings, using method get_consolidated_names( ).
METHOD get_consolidated_names.
  DATA wa LIKE LINE OF r.
  wa = VALUE #( sign = 'I' option = 'EQ' ).
  LOOP AT i_it_findings ASSIGNING FIELD-SYMBOL(<f>).
    wa-low = <f>-objname.
    APPEND wa TO r.
  ENDLOOP.
  SORT r.
  DELETE ADJACENT DUPLICATES FROM r.
ENDMETHOD.
And here is the method for the actual comparison:
METHOD is_finding_new.
  READ TABLE s_comparison_findings
       WITH KEY test     = i_wa_f-test " test class
                code     = i_wa_f-code
                objtype  = i_wa_f-objtype
                objname  = i_wa_f-objname
                " sub object (where the finding was actually detected)
                sobjtype = i_wa_f-sobjtype
                sobjname = i_wa_f-sobjname
                param1   = i_wa_f-param1
                " param2 seems to contain technical and sometimes
                " language-dependent info, so we ignore it
       TRANSPORTING NO FIELDS.
  r_result = xsdbool( sy-subrc <> 0 ).
ENDMETHOD.
That’s it!
Unfortunately, there is one loophole: if you use ATC as part of the transport release, and you either grant limited exemptions for ATC findings (which expire at a certain date) or allow “emergency transports” to bypass the checks in some way, then you accumulate “new dirt” in your test/quality system. You will never notice it, because the mechanism proposed here cannot distinguish the “new dirt” from the old, “accepted” dirt.
A simple solution for this is to keep the comparison run from your quality/test system fixed, and not replace it with newer runs.
However, this has a disadvantage if you ever want to switch on additional checks in your check variant: all findings of the new checks will then appear as “new”, even if they already existed in old coding.
To overcome this, we implemented a “dirt list” database table, where we store all unresolved findings that were transported to the quality/test system (for whatever reasons). If there is sufficient interest, I will explain this in another blog.
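To give a rough idea, is_finding_new( ) would then perform one additional lookup before reporting a finding as new. The table name ZATC_DIRT_LIST and its fields below are purely illustrative assumptions, not the actual implementation:

" Hypothetical sketch: ZATC_DIRT_LIST is an assumed custom table that
" stores the key fields of known, unresolved ("accepted dirt") findings.
SELECT SINGLE @abap_true
  FROM zatc_dirt_list
  WHERE test    = @i_wa_f-test
    AND code    = @i_wa_f-code
    AND objtype = @i_wa_f-objtype
    AND objname = @i_wa_f-objname
  INTO @DATA(is_known_dirt).
IF is_known_dirt = abap_true.
  r_result = abap_false. " known old dirt: do not report as new
  RETURN.
ENDIF.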