Channel: SCN : Blog List - ABAP Development

/NSE16H, /NTAANA, or Hana Studio, which way to go? Instant Database Analytics


Remember Transaction /NTAANA? Do you already use the new SE16H Transaction?

 

Transaction TAANA is very useful for analyzing the number of documents per year or per document type, e.g. for Data Archiving projects or carve-out analyses.

TAANA can compute the statistics in batch, so even for tables with millions of rows we can analyse the results comfortably.

I played around with this on HANA and, of course, there is no need for batch analytics: the results come back in dialog within seconds.

 

See also this note for some limitations: http://search.sap.com/notes?id=0001879808

 

taana.gif

Select the 'Create Ad Hoc Variant' button.

 

taana2.gif

 

Choose statistics fields such as organizational values (sales org, CO area) or order type.

 

taana3.gif

Non-HANA: choose background processing for large tables.

HANA: online!

 

taana4.gif

 

Analysis results: number of documents, etc.

 

In NetWeaver 7.40 we can do the same using SE16H (this also works on classic databases):

see note http://search.sap.com/notes?id=0001636416

 

The advantage here: we can also define JOINs!

 

se16h.jpg

Just select Group or Total and you will get the distinct/total values instead of the detail lines.
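The Group/Total output corresponds to a simple aggregate query. As a rough illustration (the table BKPF and field BLART are just an example, not taken from the screenshots), counting accounting documents per document type in Open SQL might look like this:

```abap
* Count documents per document type, similar to an SE16H "Group" run
TYPES: BEGIN OF ty_count,
         blart TYPE bkpf-blart,
         cnt   TYPE i,
       END OF ty_count.

DATA lt_counts TYPE STANDARD TABLE OF ty_count.

SELECT blart COUNT( * )
  FROM bkpf
  INTO TABLE lt_counts
  GROUP BY blart.
```

On HANA this aggregation is pushed down to the database and returns in dialog; on a classic database the same statement may run considerably longer for large tables.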

 

Join definition:

 

se16h_join.gif

 

se16h_result.gif

 

in real time

 

 

HANA Studio:

Get a quick impression by using Data Preview / Distinct values:

hana_studio.gif

 

Or use Analysis for more details (select the x/y axes):

hana_studio2.gif

 

In my version of the Web IDE there was no preview feature; I think this is coming in HANA SPS9:

hana_web_ide.gif


CL_SALV_TABLE=>FACTORY method: no negative symbols on ALV export


Fellow ABAP Developers,

 

I would like to raise a possible issue, or misunderstanding, that I have come across recently when using the CL_SALV_TABLE=>FACTORY method for generating ALV reports. It recently came to my attention that when exporting a report from ALV to a spreadsheet, the negative signs were missing in the output whenever the ALV originated from CL_SALV_TABLE=>FACTORY. An SCN/Google search revealed only two previous posts on the matter, referenced below. The posts suggested solutions from anonymous users which resolve the issue but fail to adequately explain the behavior. I want to raise this again for discussion, not only for clarity on the problem but also to reach other developers who use this method and are not aware of the potential issue.

 

PROBLEM

When using method CL_SALV_TABLE=>FACTORY to generate ALV reports, negative signs do not come across for domains that do not have the “Sign” attribute flagged. Inherently this means these domains are not meant to carry negative signs at all. However, even prior to CL_SALV_TABLE, using the corresponding data types was never an issue for ALV output when negation is done directly on the field. In fact, ALV always displays the correct value on screen (the negative sign is shown) both with the REUSE FMs and with CL_SALV_TABLE=>FACTORY. The problem appears when the fields are exported from ALV to a spreadsheet (and possibly via other methods, e.g. raw text): with an ALV generated by CL_SALV_TABLE=>FACTORY the negative signs are missing in the export. This did not seem to be an issue when using the REUSE FMs.

 

In my experience, the spreadsheet export function is used very frequently with ALV, which is why I see this as very problematic: one may view the data correctly on screen in ALV (negative sign shown), but the exported data is invalid because the sign is missing.

 

SOLUTION

As mentioned earlier, a simple internet search revealed two discussions on this:

 

1. http://scn.sap.com/thread/2069432

2. http://scn.sap.com/thread/1042384

 

Anonymous users suggested the following two solutions:

 

1. Do not use a data type whose domain does not have the sign attribute flagged.

2. Call method SET_SIGN( 'X' ) on the columns experiencing the issue, e.g. lo_column->set_sign( 'X' ).


The problem with solution 1 is that it may not always be clear which types to use for the ALV output, and it assumes the developer is even aware of this potential corruption of exported data.

 

The problem with solution 2 is that the column type CL_SALV_COLUMN should have the “SIGN” attribute set to true ( 'X' ) by default. This means it is being overridden somewhere, but I could find no documentation in the class tree that mentions this, nor any reasoning for it.
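A minimal sketch of workaround 2, assuming a report table lt_data and an affected column named AMOUNT (both names are illustrative, not from the original posts):

```abap
DATA: lo_salv   TYPE REF TO cl_salv_table,
      lo_column TYPE REF TO cl_salv_column.

TRY.
    cl_salv_table=>factory(
      IMPORTING r_salv_table = lo_salv
      CHANGING  t_table      = lt_data ).

*   Re-enable the sign for the affected column so the
*   spreadsheet export keeps negative values
    lo_column = lo_salv->get_columns( )->get_column( 'AMOUNT' ).
    lo_column->set_sign( 'X' ).

    lo_salv->display( ).
  CATCH cx_salv_error.
*   column not found / factory error - handle appropriately
ENDTRY.
```

This has to be repeated for every column whose domain lacks the sign attribute, which is exactly why the default behavior deserves discussion.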

 

POINTS OF DISCUSSION/REVIEW

  1. Bring this potential problem to the attention of other developers who may have used this method and have not checked exported ALV data that may contain negative values.
  • This misunderstanding could be easy for an ABAP developer to miss, especially when migrating from the REUSE FMs.
  • It may also be harder to catch because the spreadsheet export might not be checked, as the on-screen ALV output shows the correct sign.

  2. Why, in more depth, does this behavior occur?
  • Data is displayed correctly in the on-screen ALV but not in the export.
  • This did not appear to be an issue with the REUSE FMs.
  • Why and where does the SIGN attribute get set to FALSE for column type CL_SALV_COLUMN when using the FACTORY method?

  3. Is this behavior intended for CL_SALV_TABLE, so that the developer must explicitly set the SIGN attribute for affected columns?
  • Would it be better if the class default for attribute SIGN were left TRUE for CL_SALV_COLUMN?
    • What advantage is there in explicitly defining this, if the problem can be avoided altogether by leaving it TRUE?
      • Perhaps other issues arise when it is set to TRUE?

  4. Are there any other similar caveats that a developer migrating from the REUSE FMs to CL_SALV should be aware of?

SAP2SAP Runtime Remote Data Typing


Whenever structures or internal tables with many fields are involved in an SAP-to-SAP integration, I usually carry it out using remote runtime data typing, as I have shown in my two previous blogs:


 

This approach avoids data dictionary redundancy across systems and also saves a lot of time in data dictionary creation when the involved structures contain dozens or hundreds of fields. Recently, thanks to the work of Hans (and previously also thanks to my colleague Fabrizio), I realized that the solution was limited to simple structures; simple in the sense that it could not handle fields defined as:

 

  1. structure include
  2. nested structures
  3. nested internal tables

 

For example, a structure (covering all the above cases) defined as:

 

Untitled.png

 

Although a simple structure is the most common case, I decided to put my hands on the code once again, starting from Hans's work, to handle all these cases. At first glance I thought it would be really hard work, but I was wrong; in fact, once I had reviewed my old code, it turned out to be very easy. A few simple fixes to my zcl_dyn_remote_type_builder=>get_components method were enough to achieve the goal. The method is now able to handle the following internal data types:

 

  1. 'h' internal table (cl_abap_elemdescr=>typekind_table)
  2. 'u' structure (cl_abap_elemdescr=>typekind_struct1)
  3. 'v' deep structure (cl_abap_elemdescr=>typekind_struct2)

 

and recursively create the related ABAP components by means of the usual zcl_dyn_remote_type_builder=>create_struct_type and zcl_dyn_remote_type_builder=>create_table_type methods. It was necessary to call the DDIF_FIELDINFO_GET function module with the ALL_TYPES flag, and to handle the returned internal table LINES_DESCR for nested internal table typing. Finally, I used the LFIELDNAME field returned in DFIES_TAB to distinguish between fields added as a structure include (the .INCLUDE field in the picture) and fields of an included structure (the STRUC1 field in the picture). Below is a snippet of the new code:

 

* build elements
  call function 'DDIF_FIELDINFO_GET' destination i_rfcdest
    exporting
      tabname        = i_struct
      all_types      = 'X'
    importing
      x030l_wa       = ls_x030l
      lines_descr    = lt_lines
    tables
      dfies_tab      = lt_fields
    exceptions
      not_found      = 1
      internal_error = 2
      others         = 3.

  [...]

  loop at lt_fields into ls_dfies where not lfieldname cs '-'.

    [...]

*   build elements
    case ls_dfies-inttype.

*     build table types
      when cl_abap_elemdescr=>typekind_table.
        read table lt_lines into ls_line with key typename = ls_dfies-rollname.
        read table ls_line-fields into ls_lfield with key tabname = ls_line-typename.
        lo_table = zcl_dyn_remote_type_builder=>create_table_type(
                     i_rfcdest = i_rfcdest
                     i_struct  = ls_lfield-rollname ).
        ls_comp-type = lo_table.

*     build structure types
      when cl_abap_elemdescr=>typekind_struct1 or
           cl_abap_elemdescr=>typekind_struct2.
        lo_struct = zcl_dyn_remote_type_builder=>create_struct_type(
                      i_rfcdest = i_rfcdest
                      i_struct  = ls_dfies-rollname ).
        ls_comp-type = lo_struct.

*     build element types (also for fields in included structures)
      when others.
        lo_elem = zcl_dyn_remote_type_builder=>get_elemdescr(
                    i_inttype  = ls_dfies-inttype
                    i_intlen   = lv_intlen
                    i_decimals = lv_decimals ).
        ls_comp-type = lo_elem.

    endcase.

    append ls_comp to result.

    [...]

  endloop.
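Assuming get_components returns a standard RTTS component table (abap_component_tab — an assumption, since the class is custom), the dynamically typed data objects can then be created in the usual RTTS way:

```abap
DATA: lt_components TYPE abap_component_tab,
      lo_struct     TYPE REF TO cl_abap_structdescr,
      lo_table      TYPE REF TO cl_abap_tabledescr,
      lr_data       TYPE REF TO data.

FIELD-SYMBOLS <lt_remote> TYPE STANDARD TABLE.

* lt_components filled e.g. by
* zcl_dyn_remote_type_builder=>get_components( ... )
lo_struct = cl_abap_structdescr=>create( lt_components ).
lo_table  = cl_abap_tabledescr=>create( lo_struct ).

* create an internal table typed exactly like the remote structure
CREATE DATA lr_data TYPE HANDLE lo_table.
ASSIGN lr_data->* TO <lt_remote>.
```

The field symbol can then be passed to the RFC call that returns the remote data, without any DDIC object existing in the local system.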


My SAPLink nugget is available there.

Once you are tired of ALV Grid


Today is the day you may learn how to show table data!

 

No, not again! Not another REUSE_ALV_GRID or CL_GUI_ALV_GRID, nor CL_SALV_TABLE. Not a bit!

Imagine you need to provide users with a great layout they are familiar with.

Furthermore, you want to give users edit functionality.

 

Please feel free to use one of those techniques. But CL_SALV_TABLE falls out, since you need the edit function.

You shouldn't expect every user to be happy with the SAP standard interface. But almost all users are pretty familiar with MS Office.

 

Let's imagine we can create an Excel workbook with the data users want to see. It would be a great deal, because almost everyone in the modern world is familiar with its interface:

http://social.technet.microsoft.com/wiki/cfs-file.ashx/__key/communityserver-wikis-components-files/00-00-00-00-05/2158.ExcelResults.png

This nice-looking interface has additional strengths. At least one: it can manipulate data that is not shown on the screen.

Some data may be lost if someone does a copy-paste operation with a huge amount of data when working with the ALV Grid.

 

It's surely solved with MS Excel.

Pros and cons:

ALV Grid:

+ Already exists

+ Easy to use

+ Can be used in background jobs

+ Pretty (? Sure it is pretty if you're a SAP professional)

- Not transparent behavior for users and developers in case of editable Grid

- Not as flexible as Excel

- Takes time to get used to it

 

MS Excel

+ Pretty

+ Works with great amount of data

+ Users are familiar with it

+ Can have Charts, Logo and so on

- Needs some magic to implement it (it is discussed below)

- Needs MS Office to be installed onto user's PC (being honest: it's almost a standard to have MS Office on each work Win PC)

- Takes up some extra space on the user's screen (shown below)

 

Why don't we try to create this MS Excel integration in our report?

 

What we need: a template XLSX file. This will hold styles, charts and logos (if we really need them).

Perhaps you already know that an XLSX file is actually a zip archive with several XML files inside.

We need only two:

.\xl\sharedStrings.xml

Без имени-2.jpg

and

.\xl\worksheets\sheet1.xml

Без имени-3.jpg

 

A quick note: the first stores all unique texts of the workbook cells, and the second keeps the first worksheet.

All we need is to add new strings into the first one and to put a table into the second.

I've created a simple tool to add new texts to an XLSX file; its use looks like this:

Без имени-4.jpg

You may find it in my posts if you really need it.

And then I created an XSL transformation to fill the worksheet:
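Since an XLSX file is just a zip archive, such a tool can be sketched with the standard CL_ABAP_ZIP class. This is only an outline of the idea, not the actual tool; lv_xlsx and the new sharedStrings content are placeholders:

```abap
DATA: lo_zip     TYPE REF TO cl_abap_zip,
      lv_xlsx    TYPE xstring,   " the template XLSX binary
      lv_strings TYPE xstring.   " content of xl/sharedStrings.xml

CREATE OBJECT lo_zip.
lo_zip->load( lv_xlsx ).

* read the shared strings part from the archive
lo_zip->get( EXPORTING name    = 'xl/sharedStrings.xml'
             IMPORTING content = lv_strings ).

* ... append new <si><t>...</t></si> entries to lv_strings here ...

* replace the part inside the archive
lo_zip->delete( name = 'xl/sharedStrings.xml' ).
lo_zip->add( name    = 'xl/sharedStrings.xml'
             content = lv_strings ).

* zip everything back into the final XLSX binary
lv_xlsx = lo_zip->save( ).
```

Remember to also bump the count/uniqueCount attributes of the <sst> root element when adding entries, otherwise Excel may complain about the file.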

 

Please forgive me for placing it here:

<xsl:transform version="1.0"

   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"

   xmlns:sap="http://www.sap.com/sapxsl"

>

<xsl:strip-space elements="*"/>

<xsl:template match="/">

<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="x14ac"

xmlns:x14ac="http://schemas.microsoft.com/office/spreadsheetml/2009/9/ac">

   <dimension ref="A1:J2"/>

   <sheetViews>

     <sheetView tabSelected="1" workbookViewId="0"/>

   </sheetViews>

   <sheetFormatPr defaultRowHeight="14.4" x14ac:dyDescent="0.3"/>

   <cols>

     <col min="1" max="1" width="9" bestFit="1" customWidth="1"/>

     <col min="2" max="2" width="28" bestFit="1" customWidth="1"/>

     <col min="3" max="4" width="10.109375" bestFit="1" customWidth="1"/>

     <col min="5" max="5" width="20" bestFit="1" customWidth="1"/>

     <col min="6" max="6" width="7.21875" bestFit="1" customWidth="1"/>

     <col min="7" max="7" width="8.5546875" bestFit="1" customWidth="1"/>

     <col min="8" max="8" width="12.109375" bestFit="1" customWidth="1"/>

     <col min="9" max="9" width="8.6640625" bestFit="1" customWidth="1"/>

     <col min="10" max="10" width="9.21875" bestFit="1" customWidth="1"/>

   </cols>

   <sheetData>

     <row r="1" spans="1:10" ht="30.6" x14ac:dyDescent="0.3">

       <c r="A1" s="1" t="s">

         <v>0</v>

       </c>

       <c r="B1" s="1" t="s">

         <v>1</v>

       </c>

       <c r="C1" s="1" t="s">

         <v>2</v>

       </c>

       <c r="D1" s="1" t="s">

         <v>3</v>

       </c>

       <c r="E1" s="1" t="s">

           <v>9</v>

       </c>

       <c r="F1" s="1" t="s">

         <v>4</v>

       </c>

       <c r="G1" s="1" t="s">

         <v>5</v>

       </c>

       <c r="H1" s="1" t="s">

         <v>6</v>

       </c>

       <c r="I1" s="1" t="s">

         <v>7</v>

       </c>

       <c r="J1" s="1" t="s">

         <v>8</v>

       </c>

     </row>

     <xsl:for-each select="//ITEMS/*">

     <row spans="1:10" x14ac:dyDescent="0.3">

       <xsl:attribute name="r">

          <xsl:value-of select="INDX"/>

       </xsl:attribute>

       <c s="2" t="s"><v><xsl:value-of select="F1"/></v>

       </c>

       <c s="2" t="s"><v><xsl:value-of select="F2"/></v>

       </c>

       <c s="2" t="s"><v><xsl:value-of select="F3"/></v>

       </c>

       <c s="2" t="s"><v><xsl:value-of select="F4"/></v>

       </c>

       <c s="2" t="s"><v><xsl:value-of select="F10"/></v>

       </c>

       <c s="3"><v><xsl:value-of select="F5"/></v>

       </c>

       <c s="3"><v><xsl:value-of select="F6"/></v>

       </c>

       <c s="3"><v><xsl:value-of select="F7"/></v>

       </c>

       <c s="3"><v><xsl:value-of select="F8"/></v>

       </c>

       <c s="3"><v><xsl:value-of select="F9"/></v>

       </c>

     </row></xsl:for-each>

   </sheetData>

   <sheetProtection password="DE25" sheet="1" formatCells="0" formatColumns="0" formatRows="0" insertColumns="0" insertRows="0" insertHyperlinks="0" deleteColumns="0" deleteRows="0" sort="0" autoFilter="0" pivotTables="0"/>

   <pageMargins left="0.7" right="0.7" top="0.75" bottom="0.75" header="0.3" footer="0.3"/>

   <pageSetup paperSize="9" orientation="portrait" horizontalDpi="0" verticalDpi="0" r:id="rId1"/>

</worksheet>

</xsl:template>

</xsl:transform>

 

Here are three points to be discussed:

1. Text values go in <c t="s"> cells (note the value of attribute t). I pass the indexes from my tool here.

2. Numeric values go without attribute t on the 'c' tag.

3. Protection for the sheet is switched on.

 

The last point forbids any unexpected changes. In my example there is a difference between the <c s="2"> style and <c s="3">. The main purpose is to protect non-editable cells and let the rest be changeable.

 

Finally, we need to display the XLSX. Suppose we have:

1. A binary string (xstring) with the zip content of an XLSX file

2. A screen with a container

 

And here is my example to show the MS Excel file:

     DATA: container   TYPE REF TO cl_gui_container,
           doi_proxy   TYPE REF TO i_oi_document_proxy,
           l_control   TYPE REF TO i_oi_container_control,
           control     TYPE REF TO i_oi_ole_container_control,
           xlsx_string TYPE xstring. " holds the binary of the XLSX file

 

     CHECK container IS NOT BOUND.

 

     CREATE OBJECT container TYPE cl_gui_custom_container

       EXPORTING

         container_name = 'CONTAINER'.                       "#EC NOTEXT

 

     c_oi_container_control_creator=>get_container_control(

       IMPORTING

         control = l_control ).

 

     control ?= l_control.

 

     CALL METHOD control->init_control

       EXPORTING

         r3_application_name      = 'Demo'                   "#EC NOTEXT

         inplace_enabled          = abap_true

         inplace_scroll_documents = abap_true

         parent                   = container

         register_on_close_event  = abap_true

         register_on_custom_event = abap_true.

 

     CALL METHOD control->get_document_proxy

       EXPORTING

         document_type      = 'Excel.Sheet'                  "#EC NOTEXT

         register_container = abap_true

       IMPORTING

         document_proxy     = doi_proxy.

 

 

     DATA: lt_table TYPE enh_version_management_hex_tb,

           lv_size  TYPE i.

 

     CALL FUNCTION 'ENH_XSTRING_TO_TAB'

       EXPORTING

         im_xstring = xlsx_string

       IMPORTING

         ex_data    = lt_table

         ex_leng    = lv_size.

 

     doi_proxy->open_document_from_table(

       document_table = lt_table

       document_size  = lv_size

       open_inplace   = abap_true ).

 

And a final screenshot to encourage you:

Без имени-5.jpg

Hope this helps or inspires you!

Best practice for updating IT759 Compensation Process Records in SAP Enterprise Compensation Management


I was recently helping one of my clients regarding a reporting issue with SAP Enterprise Compensation Management and data not being displayed correctly on one of their custom reports. This report was a copy of standard report PECM_SUMMARIZE_CHNGS - Summarize Comp. Planning Changes with some customer specific enhancements and additions. As the title of the report gives away, the report is to summarize the changes to the compensation data for a particular compensation plan during a particular compensation review. This client is on SAP ECC 6.0 EHP5, and has been live with ECM compensation planning for about 2 years (Merit and Bonus).

 

I tried to understand what was going on and found out from the client that they had recently run a custom program to mass update existing IT759 records, and that was the start of the issue. None of the most recent changes were picked up by the report, and in many cases some of the associates who were changed in mass were not picked up by the report at all.

 

So I looked into the logic of the custom version of the summarize changes report and saw that it reads from a history table, T71ADM_EE_HIST, which captures the changes to IT759 records during the compensation planning process. I then noticed that none of the changes made via the custom program were captured in this table. So I debugged the program used to make the changes and found the following:

 

IF p_test EQ ' '.
  UPDATE pa0759 FROM ls_pa0759.
ENDIF.

 

The program was written to update the database table PA0759 directly! This is never recommended, and I immediately let the client know that we should change it. There are many issues with updating database tables directly, including the lack of data consistency checks and the fact that it does not update corresponding fields such as Changed On and Changed By unless you specifically write the logic.

 

My next thought was that we should update the program to use function module HR_INFOTYPE_OPERATION, as is the standard practice for updating infotypes. I was thinking there would be logic built by SAP that would then automatically update the history table.
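For reference, the standard call looks roughly like this. The work area contents, dates and operation are placeholders; in a real program the employee must also be locked first (e.g. via BAPI_EMPLOYEE_ENQUEUE) and unlocked afterwards:

```abap
DATA: ls_p0759  TYPE p0759,      " IT759 record to be changed
      ls_return TYPE bapireturn1.

* ls_p0759 filled with key (PERNR, SUBTY, BEGDA, ENDDA) and new values

CALL FUNCTION 'HR_INFOTYPE_OPERATION'
  EXPORTING
    infty         = '0759'
    number        = ls_p0759-pernr
    subtype       = ls_p0759-subty
    validityend   = ls_p0759-endda
    validitybegin = ls_p0759-begda
    record        = ls_p0759
    operation     = 'MOD'        " modify existing record
    tclas         = 'A'
  IMPORTING
    return        = ls_return.

IF ls_return IS NOT INITIAL.
* handle the error message in ls_return
ENDIF.
```

Unlike a direct UPDATE on PA0759, this route runs the infotype consistency checks and maintains the administrative fields.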

 

 

img2.png


After making the change and testing, unfortunately the history table remained unchanged.

Next, I decided to look into the SAP standard code to see how it updates the history table, which I confirmed is updated whenever you use any standard SAP route to create or update IT759. The standard routes for updating this table are the following:

  1. Create & Update via MSS when manager changes data
  2. Standard program PECM_CREATE_COMP_PRO - Create Compensation Process Records
  3. Standard program PECM_ADJUST_0759 - Adjust Compensation Process Records
  4. Update master data directly via PA30

I debugged the standard SAP code and found that SAP provides a method PROCESS_IT_CHANGE, implemented in class CL_HRECM00_EE_HIST, to modify the history table. Whenever there is any update to an IT759 record, this method compares the old and new records and then modifies the record in the history table.

img3.png

 

img4.png

 

After debugging and finding this, I called this method in the custom program, so that whenever we run the custom program to update IT759, it also updates the history table!

ABAP Modern Code Conventions


Introduction

Coming from a computer science background and having learnt ABAP on the job at an SAP consultancy, I always wondered why there are such restrictive naming conventions and why every customer has their own custom ones, each apparently trying to trump the other with even more detailed naming conventions for every little technical possibility... To me it seems to be an SAP speciality that has vanished everywhere else (or so it seems... hello COBOL legacy applications, hello Hungarian notation).


Having read the interesting blog post Nomen est omen a while ago, and having to tackle a big custom code base ourselves (with our own inevitable hardcore-detailed naming conventions and our own fair share of technical debt...), we discussed what to focus on in the future.


Goals: simple, short & concise code conventions that ease development and make the code easier to read.

After all, as a developer you spend about 70% of your time reading & analysing existing code. You want that code to be short and readable. And you don't want to constantly look up naming conventions, do you?

 

How much code do you fully understand from top to bottom? Can you analyse & memorize even 100 lines of unbroken, unknown code and estimate the implications of new changes? I can't. I feel great relief when that code is broken up into as many form routines (functions, methods, whatever) as possible: we dearly want Separation of Concerns.


Decisions

  • No global variables (except where required... hello, dynpro binding): local variables are the norm
  • No technical duplication of information already explicitly & neatly specified in the type system (we love statically typed languages and despise dynamic languages, don't we?). Instead focus on semantics, readability and meaningful names.
  • Keep it short: form routines, function modules and class methods are limited to 70 lines of code (LOC).
    From all we've heard this should automagically lead to better (not necessarily good) maintainability

 

Some Rules derived from these decisions:

  • Since every variable is local there is no need for "My prefix hereby tells you that I am ...*fanfare*... local!".
    So no more L_ prefix. If you see a variable like "flight_time", it is local. Spare prefixes for special occasions, to emphasize.
  • Global variables are a special case, use the G_ prefix to discriminate them as the despicable things they are.
    Your brain/eye instantly catches these special prefixes, they no longer disappear between all those other L_ ...
  • Class-attributes are like local variables, they are class-local, they are not global.
    As such they don't have any prefix either but you may choose to use "me->attribute" to clearly indicate attribute-access.
  • Use meaningful constants like co_open_in_external_window instead of 'E'.
    It does not matter whether these constants are defined globally or locally, just use a "CO_" prefix to specify them as constants.
    If you have an interface that only contains constants (an ABAP design pattern to create a nice container for constants), you may omit the "CO_" prefix altogether. And yes, you may argue about this constant prefix too
  • If you define types, it does not matter whether the type definition is local or global, just use "TY_" for types.
  • The most controversial: The variable shall have a meaningful name, no abbreviations and shall not contain technical information that is already specified in the type-system. Focus on the semantics!
    So no more "ls_sflight" and "lt_sflight" but "sflight_entry" and e.g. "sflight_tab_delete" (emphasize on "delete" to describe what entries are contained in the table and why).
    You may argue "So you traded lt_ for the _tab suffix and actually gained nothing, well done.".
    In some way or the other you have to declare a table as being a multitude of things, it does not matter if you use the typical s-plural suffix for "sflightS" or use a "_tab" suffix, the important thing is to focus on its meaning and reserve prefixes to emphasize on important things (hello ugly globals...).
    Besides, ABAP itself is already shouting in your face whether that variable is a table/structure:
    READ TABLE x, LOOP AT x, x-comp = 'foobar' etc.    you really don't need anything more 90% of the time...
  • Use mathematical notation instead of abbreviations that make the code harder to read: <= is way more intuitive than LE.
    Just use the normal math operators you already learnt in school. I often hear that NE and LE are perfectly reasonable and understandable, but this argument always comes from people with decades of experience. I think the simpler way (not the easier one) is always to be preferred; there is no point in distinguishing yourself from the less experienced by using voodoo'ish notation...

 

TL/DR: This leads to the following inevitable naming-convention matrix:

  Kind              Prefix   Comment
  Locals            (none)
  Globals           G_
  Field-Symbols     (none)   Really ugly: global field-symbols <G_XXX>
  Attributes        (none)   Use "me->attribute" to distinguish from locals if necessary
  Constants         CO_      No distinction local vs. global (omit prefix in constant interfaces)
  Types             TY_      No distinction local vs. global
  Form Using        I_       Concentrate: Using = Importing/Input, no need to distinguish
  Form Changing     C_
  FM Importing      I_       Try by reference
  FM Exporting      E_       Try by reference
  FM Changing       C_       Try to avoid
  FM Tables         T_       Avoid!
  Method Importing  I_       Try by reference
  Method Returning  R_
  Method Exporting  E_       Try to avoid
  Method Changing   C_       Try to avoid
  Select-Options    S_
  Parameters        P_

 

This table is hopefully concise enough; we actually printed it on one DIN A5 page, instead of the former two DIN A4 pages.

 

Before starting the eagerly awaited flame war, please consider that these conventions follow those described in the book Official ABAP Programming Guidelines by Horst Keller and Wolf Hagen Thümmel, SAP PRESS (p. 208ff). Even if the internal SAP conventions really hurt my eyes and every SAP development team seems to use its own even more cumbersome conventions: thank you, Horst Keller, for these guidelines.


No conclusion yet; we just started with these conventions and are still figuring out how to transform the existing code base in a pragmatic way...

 

Tools

A Code Inspector test based on CL_CI_TEST_SCAN to limit the allowed lines of code (will release that soon...)

We are using smartDevelop but still need to figure out a good transformation strategy.

 

Thank you for your attention, any comments appreciated.

Happy 6E65772079656172!


cl_demo_output=>new( )->write_html(
  cl_abap_codepage=>convert_from(
    CONV xstring(
      `3C68313E486170707920` &&
      `4E657720596561723C2F` &&
      `68313E3C62723E2E2E2E` &&
      `20616E64206C6F747320` &&
      `6F662066756E20776974` &&
      `68203C423E414241503C` &&
      `2F623E20696E203C623E` &&
      `323031353C2F623E21` ) ) )->display( ).

AngularJS Single-Page Application + {z}restapi and token authentication


Building a Single-Page Application (SPA) with AngularJS  and Bootstrap CSS to consume a REST API based on {z}restapi and token authentication

 

In this blog we are going to develop a Single-Page Application (SPA) with AngularJS and Bootstrap CSS to store contacts on the SAP WebAS ABAP, using a REST API based on {z}restapi and its token-based authentication mechanism. The Single-Page Application will let us use the four CRUD methods, Create, Read, Update and Delete, provided by the REST API.



Pre-requisites

 

In order to develop and test the application it is necessary to have:

 

On the Server side

 

  • A SAP WebAS ABAP with the {z}restapi installed ({z}restapi download and installation instructions on GitHub)

 

On the Client side

 

 

Note: you can deploy and run the Single-Page Application (SPA) as a BSP application on your SAP WebAS ABAP, or locally using the XAMPP Apache server. Here we are going to use XAMPP to consume the API from the outside (another domain) in order to explore Cross-Origin Resource Sharing (CORS).

 

 

Downloads

 

You can download below the .nugg files to import the ABAP objects into your system and the zip file with the Single-Page Application.

 

Nugg Files

 

 

Zip File

 

 

GitHub Repository

 

 

 

Dictionary objects

 

If you want to create the dictionary objects manually please find below the details of each object.

 

TABLE


  • ZTB_CONTACTS
    • MANDT         type MANDT
    • EMAIL           type CHAR30
    • FIRSTNAME type CHAR30
    • LASTNAME   type CHAR30
    • PHONE         type CHAR30


TABLE TYPE

 

  • ZTT_CONTACTS line type ZTB_CONTACTS

 

STRUCTURE

 

  • ZST_CONTACTS
    • .INCLUDE ZST_REST_RESPONSE_BASIC
      • SUCCESS  type STRING
      • MSG       type STRING 
      • CODE     type i
    • CONTACTS    type ZTT_CONTACTS

 

 

Implementing the Contacts REST API

 

Let's create the resource class ZCL_REST_RESOURCE_CONTACTS.


Go to SE24 and create the class ZCL_REST_RESOURCE_CONTACTS, final with public instantiation as shown in figure 01.

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_01.JPG

    Figure 01 - Class Properties

 

On the Interfaces tab, enter the four interfaces provided by {z}restapi:

 

  • ZIF_REST_RESOURCE_CREATE    REST API - Resource Create Method
  • ZIF_REST_RESOURCE_READ        REST API - Resource Read Method
  • ZIF_REST_RESOURCE_UPDATE    REST API - Resource Update Method
  • ZIF_REST_RESOURCE_DELETE     REST API - Resource Delete Method

 

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_02.JPG

   Figure 02 - Interfaces Tab

 

Now let's implement the inherited methods. Go to the Methods tab.

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_03.JPG

    Figure 03 - Methods Tab

 

First of all let's implement our GET_REQUEST_TYPE and GET_RESPONSE_TYPE methods.

 

The GET_REQUEST_TYPE method of all four interfaces will return the same request type, which means that all CRUD methods expect to receive the fields of our contacts table as parameters.

 

GET_REQUEST_TYPE method implementation

 

METHOD ZIF_REST_RESOURCE_CREATE~GET_REQUEST_TYPE.
  r_request_type = 'ZTB_CONTACTS'.
ENDMETHOD.


METHOD ZIF_REST_RESOURCE_READ~GET_REQUEST_TYPE.
  r_request_type = 'ZTB_CONTACTS'.
ENDMETHOD.


METHOD ZIF_REST_RESOURCE_UPDATE~GET_REQUEST_TYPE.
  r_request_type = 'ZTB_CONTACTS'.
ENDMETHOD.


METHOD ZIF_REST_RESOURCE_DELETE~GET_REQUEST_TYPE.
  r_request_type = 'ZTB_CONTACTS'.
ENDMETHOD.

 

We will see later (when testing the API) that when we call the methods passing parameters with the same names as the contacts table fields, the values are automatically moved into the importing structure I_REQUEST, which in our case has the type ZTB_CONTACTS.
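To make this parameter-to-structure mapping concrete, here is an illustrative JavaScript sketch (the real mapping happens in ABAP inside {z}restapi; this helper is hypothetical) of how request parameters whose names match the ZTB_CONTACTS fields end up collected into a single request structure:

```javascript
// Collect form parameters whose names match the ZTB_CONTACTS fields into
// one request object, mirroring how {z}restapi fills I_REQUEST.
function buildRequestStruct(queryString) {
  const fields = ['email', 'firstname', 'lastname', 'phone'];
  const params = new URLSearchParams(queryString);
  const request = {};
  for (const field of fields) {
    // Unknown parameters are ignored; missing ones stay as empty strings,
    // just like an initial ABAP structure component.
    request[field] = params.get(field) || '';
  }
  return request;
}

const req = buildRequestStruct('email=john%40doe.com&firstname=John&lastname=Doe');
// req.email === 'john@doe.com', req.phone === ''
```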

 

The only CRUD method that needs the GET_RESPONSE_TYPE method implemented is the READ method, where we will return the contact's data. The other methods can leave it unimplemented.


GET_RESPONSE_TYPE method implementation


METHOD ZIF_REST_RESOURCE_READ~GET_RESPONSE_TYPE.

   r_response_type = 'ZST_CONTACTS'.

ENDMETHOD.


We will see later that {z}restapi has a fallback mechanism that uses the structure ZST_REST_RESPONSE_BASIC when a custom response type is not defined.


Now let's implement the CREATE, READ, UPDATE and DELETE methods.


CREATE method implementation


METHOD zif_rest_resource_create~create.
  DATA: ls_contact  TYPE ztb_contacts,
        ls_response TYPE zst_rest_response_basic.

  ls_contact = i_request.

  IF NOT ls_contact IS INITIAL.
    INSERT ztb_contacts FROM ls_contact.
    IF sy-subrc = 0.
      ls_response-success = 'true'.
      ls_response-code    = 200.
      ls_response-msg     = 'Contact created successfully!'.
    ELSE.
      ls_response-success = 'false'.
      ls_response-code    = 409.
      ls_response-msg     = 'Contact already exists!'.
    ENDIF.
  ELSE.
    ls_response-success = 'false'.
    ls_response-code    = 403.
    ls_response-msg     = 'Contact has no information!'.
  ENDIF.

  e_response = ls_response.
ENDMETHOD.


 

READ method implementation


METHOD ZIF_REST_RESOURCE_READ~READ.
  DATA: ls_request  TYPE ztb_contacts,
        ls_response TYPE zst_contacts.

  ls_request = i_request.

  IF ls_request-email IS INITIAL.
    SELECT * FROM ztb_contacts
      INTO TABLE ls_response-contacts.
  ELSE.
    SELECT * FROM ztb_contacts
      INTO TABLE ls_response-contacts
      WHERE email = ls_request-email.
  ENDIF.

  e_response = ls_response.
ENDMETHOD.

 

UPDATE method implementation


METHOD zif_rest_resource_update~update.
  DATA: ls_contact  TYPE ztb_contacts,
        ls_response TYPE zst_rest_response_basic.

  ls_contact = i_request.

  UPDATE ztb_contacts FROM ls_contact.
  IF sy-subrc = 0.
    ls_response-success = 'true'.
    ls_response-code    = 200.
    ls_response-msg     = 'Contact updated successfully!'.
  ELSE.
    ls_response-success = 'false'.
    ls_response-code    = 409.
    ls_response-msg     = 'Contact not found!'.
  ENDIF.

  e_response = ls_response.
ENDMETHOD.


DELETE method implementation


METHOD ZIF_REST_RESOURCE_DELETE~DELETE.
  DATA: ls_request  TYPE ztb_contacts,
        ls_response TYPE zst_rest_response_basic.

  ls_request = i_request.

  IF NOT ls_request-email IS INITIAL.
    DELETE FROM ztb_contacts
      WHERE email = ls_request-email.
    IF sy-subrc = 0.
      ls_response-success = 'true'.
      ls_response-code    = 200.
      ls_response-msg     = 'Contact deleted successfully!'.
    ELSE.
      ls_response-success = 'false'.
      ls_response-code    = 409.
      ls_response-msg     = 'Contact not found!'.
    ENDIF.
  ENDIF.

  e_response = ls_response.
ENDMETHOD.

 

Let's test our Contacts REST Resource using the POSTMAN Google Chrome extension.

 

Assuming that {z}restapi is installed and its ICF service is created under /sap (see figure 4), the URL of our Contacts service is:

 

http://server:port/sap/zrestapi/myApp/contacts

 

where myApp is the name of our application (which is required by {z}restapi) and contacts is what identifies our contacts resource, i.e., everything that comes after "ZCL_REST_RESOURCE_". As you may have noticed, together they match the resource class name.
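The naming convention described above can be sketched with a small helper (a hypothetical illustration, not part of {z}restapi itself):

```javascript
// Derive the ABAP resource class name from the last URL segment,
// e.g. ".../myApp/contacts" -> "ZCL_REST_RESOURCE_CONTACTS".
function resourceClassFor(url) {
  const segments = url.split('/').filter(Boolean);
  const resource = segments[segments.length - 1];
  return 'ZCL_REST_RESOURCE_' + resource.toUpperCase();
}

const cls = resourceClassFor('http://server:port/sap/zrestapi/myApp/contacts');
// cls === 'ZCL_REST_RESOURCE_CONTACTS'
```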

 

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_04.JPG

    Figure 4 - {z}restapi ICF service

 

Here we are not going to cover all test cases, only the most basic positive ones.

 

Testing the CREATE method

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_05.JPG

     Figure 5 - testing the create method

 

 

Testing the READ method

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_06.JPG

     Figure 6 - testing the read method

 

 

Testing the UPDATE method

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_07.JPG

     Figure 7 - testing the update method

 

 

Testing the DELETE method

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_08.JPG

    Figure 8 - testing the delete method

 

 

Developing the Contacts AngularJS Web Application

 

Now that our API is ready let's start to develop our frontend application.

 

You can download the zip file with the complete web application from this link.

 

INDEX.HTML page

 

The index.html is a very basic HTML page where we reference the CSS and JavaScript files used by the application and do the basic setup of the AngularJS application. Below is an extract of the index.html source code.

 

 

<!DOCTYPE html>
<html lang="en">
<head>
  <title>SCN Blog 8 - AngularJS Contacts App with {z}restapi and token authentication</title>
  <meta charset="utf-8">
  <!-- Mobile Specific Metas -->
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
  <!-- Libs CSS -->
  <link href="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css" rel="stylesheet">
  <link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
  <!-- Custom CSS -->
  <link href="app.css" rel="stylesheet">
</head>
<body ng-app="myApp">
  <!-- Placeholder for the views -->
  <div class="container" ng-view=""></div>
  <!-- Start Js Files -->
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.6/angular.min.js" type="text/javascript"></script>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.6/angular-route.min.js" type="text/javascript"></script>
  <script src="hmac-sha1.js" type="text/javascript"></script>
  <script src="app.js" type="text/javascript"></script>
</body>
</html>

In the body tag we set the attribute ng-app="myApp", which defines our application, and add the <div class="container" ng-view=""></div>, which is the placeholder for the views of the Single-Page Application.

 

Our Single-Page Application will be composed of 2 views:

 

  • main.html
  • contact.html

 

 

 

MAIN.HTML (view)

 

The main view will have a form to add new contacts and a list to display all contacts.

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_09.JPG

    Figure 9 - main view

 

<!-- Overlay to display the loading indicator --><div id="overlay" ng-show="$parent.data.loading"><i id="ajax-loader" class="fa fa-3x fa-spinner fa-spin"></i></div><h3>Add Contact</h3><div class="row" ng-hide="data.showPaneAddContact">  <div class="col-xs-12">  <a ng-click="data.showPaneAddContact=!data.showPaneAddContact"><i class="fa fa-2x fa-plus-square"></i></a>  </div></div><div class="row" ng-show="data.showPaneAddContact">  <div class="col-xs-12">  <a ng-click="data.showPaneAddContact=!data.showPaneAddContact"><i class="fa fa-2x fa-minus-square"></i></a>  </div></div><form ng-show="data.showPaneAddContact" name="contactForm" novalidate class="css-form" role="form" ng-submit="addContact(contactForm)">  <h5 ng-show="$parent.data.message" class="text-center">{{$parent.data.message}}</h5>  <fieldset>  <div class="row">  <div class="form-group col-sm-12 col-sm-3">  <label for="email">Email address</label>  <input type="email" class="form-control" id="email" placeholder="Enter e-mail" ng-model="data.contact.email" ng-maxlength="30" required>  </div>  <div class="form-group col-sm-12 col-sm-3">  <label for="firstname">First Name</label>  <input type="text" class="form-control" id="firstname" placeholder="Enter first name" ng-model="data.contact.firstname" ng-maxlength="30" required>  </div>  <div class="form-group col-sm-12 col-sm-3">  <label for="lastname">Last Name</label>  <input type="text" class="form-control" id="lastname" placeholder="Enter last name" ng-model="data.contact.lastname" ng-maxlength="30" required>  </div>  <div class="form-group col-sm-12 col-md-3">  <label for="phone">Phone</label>  <input type="tel" class="form-control" id="phone" placeholder="Enter phone" ng-model="data.contact.phone" ng-pattern="/^[-+.() ,0-9]+$/" ng-maxlength="30" required>  </div>  </div>  <button type="submit" class="btn btn-primary">Add Contact</button>  <button type="button" class="btn btn-default" ng-click="resetForm(contactForm)">Reset</button>  <button type="button" 
class="btn btn-default" ng-click="data.showPaneAddContact=!data.showPaneAddContact">Hide</button>  </fieldset></form><h3>Contact List</h3><div class="list-group">  <a href="#/" class="list-group-item" ng-show="isEmpty()">No Contacts</a>    <a href="#/contact/{{contact.email}}" class="list-group-item repeated-item" ng-repeat="contact in data.contacts | orderBy:'+firstname'">        <p><span class="glyphicon glyphicon-user"></span> {{contact.firstname}} {{contact.lastname}}</p>    </a></div>



CONTACT.HTML (view)

 

The contact view will have a form to allow us to update or delete the contact and also call or send e-mail to the contact.

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_10.JPG

    Figure 10 - contact view

 

 

<!-- Overlay to display the loading indicator --><div id="overlay" ng-show="$parent.data.loading"><i id="ajax-loader" class="fa fa-3x fa-spinner fa-spin"></i></div><h3>Contact Details</h3><h5 ng-show="$parent.data.message" class="text-center sample-show-hide">{{$parent.data.message}}</h5><form name="contactForm" novalidate class="css-form" role="form">  <fieldset>  <div class="row">  <div class="form-group col-sm-12 col-md-3">  <label for="email">Email address</label>  <input type="email" class="form-control" id="email" ng-model="data.contact.email" readonly>  </div>  <div class="form-group col-sm-12 col-md-3">  <label for="firstname">First Name</label>  <input type="text" class="form-control" id="firstname" placeholder="Enter first name" ng-model="data.contact.firstname" required>  </div>  <div class="form-group col-sm-12 col-md-3">  <label for="lastname">Last Name</label>  <input type="text" class="form-control" id="lastname" placeholder="Enter last name" ng-model="data.contact.lastname" required>  </div>  <div class="form-group col-sm-12 col-md-3">  <label for="phone">Phone</label>  <input type="tel" class="form-control" id="phone" placeholder="Enter phone" ng-model="data.contact.phone" required>  </div>  </div>  <div class="row">  <div class="col-xs-12">  <a href="tel:{{data.contact.phone}}"><i class="fa fa-3x fa-phone-square green"></i></a>  <a href="mailto:{{data.contact.email}}"><i class="fa fa-3x fa-envelope-square green"></i></a>  </div>  </div>  <br/>  <div class="row">  <div class="col-xs-12">  <button type="submit" class="btn btn-primary" ng-click="updateContact(contactForm)">Update</button>  <button type="button" class="btn btn-danger" ng-click="deleteContact()">Delete</button>  <button type="button" class="btn btn-default" ng-click="back()">Back</button>  </div>  </div>  </fieldset></form>

 

 

APP.CSS

 

We need some custom styles to set the border color of our input boxes to red when a field is in the invalid state. We also need some CSS to style the overlay container that shows a spinner icon while an Ajax call is running (waiting for the server response).

 

 

.glyphicon-user {
  margin-top: 10px;
  margin-right: 5px;
}

.css-form input.ng-invalid.ng-touched {
  border-color: #FA787E;
}

#overlay {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  min-height: 100%;
  min-width: 100%;
  z-index: 10;
  text-align: center;
  background-color: rgba(0,0,0,0.5); /* dim the background */
}

#ajax-loader {
  margin-top: 25%;
}

.green {
  color: #5cb85c;
}

.green:hover {
  color: #449d44;
}

 

 

APP.JS

 

Although it is not good practice, I decided to keep all JavaScript code in one file just to keep things as simple as possible. This way I believe it is easier to understand how the pieces (modules, services, controllers and views) work together to form the application. The drawback is that having all the code in one file makes it a little big and seemingly challenging to understand. But believe me, it will not be that hard.


Before diving into the code let's take a look at the parts or pieces that form the application.

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_11.png

   Figure 11 - App Overview


All application code is encapsulated inside the myApp module. The myApp module has a config block where we define the routes and bind controllers to views. It also has two services, the Token Service and the Contacts Service, and two controllers, Main and Contact.

 


myApp Module


If you have ever tried to use AngularJS to build web applications on the SAP WebAS ABAP, you probably know that the functions $http.post and $http.put provided by the $http service do not behave like jQuery.ajax(). AngularJS transmits data using Content-Type: application/json, and the SAP WebAS ABAP is not able to deserialize it. So it is necessary to transform the HTTP request to transmit data using Content-Type: x-www-form-urlencoded. Thanks to Ezekiel Victor we don't need to write the code to do this. The solution is very well explained by Ezekiel in his blog.


Make AngularJS $http service behave like jQuery.ajax()

http://victorblog.com/2012/12/20/make-angularjs-http-service-behave-like-jquery-ajax/


 

angular.module('myApp', ['ngRoute'], function($httpProvider) {
    // Use x-www-form-urlencoded Content-Type
    $httpProvider.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded;charset=utf-8';
    $httpProvider.defaults.headers.put['Content-Type'] = 'application/x-www-form-urlencoded;charset=utf-8';

    /**
     * Converts an object to x-www-form-urlencoded serialization.
     * @param {Object} obj
     * @return {String}
     */
    var param = function(obj) {
        var query = '',
            name,
            value,
            fullSubName,
            subName,
            subValue,
            innerObj,
            i;
        for (name in obj) {
            if (obj.hasOwnProperty(name)) {
                value = obj[name];
                if (value instanceof Array) {
                    for (i = 0; i < value.length; i = i + 1) {
                        subValue = value[i];
                        fullSubName = name + '[' + i + ']';
                        innerObj = {};
                        innerObj[fullSubName] = subValue;
                        query += param(innerObj) + '&';
                    }
                } else if (value instanceof Object) {
                    for (subName in value) {
                        if (value.hasOwnProperty(subName)) {
                            subValue = value[subName];
                            fullSubName = name + '[' + subName + ']';
                            innerObj = {};
                            innerObj[fullSubName] = subValue;
                            query += param(innerObj) + '&';
                        }
                    }
                } else if (value !== undefined && value !== null) {
                    query += encodeURIComponent(name) + '=' + encodeURIComponent(value) + '&';
                }
            }
        }
        return query.length ? query.substr(0, query.length - 1) : query;
    };

    // Override $http service's default transformRequest
    $httpProvider.defaults.transformRequest = [
        function(data) {
            return angular.isObject(data) && String(data) !== '[object File]' ? param(data) : data;
        }
    ];
});
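As a quick sanity check of what the serializer produces, here is a minimal standalone re-implementation covering the flat-value and array cases (the sample values are made up; the original also handles nested objects):

```javascript
// Minimal standalone version of the param() serializer, to show the wire
// format sent with Content-Type: application/x-www-form-urlencoded.
function param(obj) {
  var query = '';
  Object.keys(obj).forEach(function(name) {
    var value = obj[name];
    if (value instanceof Array) {
      // Arrays are flattened using bracket notation: name[0], name[1], ...
      value.forEach(function(subValue, i) {
        query += encodeURIComponent(name + '[' + i + ']') + '=' + encodeURIComponent(subValue) + '&';
      });
    } else if (value !== undefined && value !== null) {
      query += encodeURIComponent(name) + '=' + encodeURIComponent(value) + '&';
    }
  });
  return query.length ? query.substr(0, query.length - 1) : query;
}

const query = param({ email: 'john@doe.com', tags: ['a', 'b'] });
// query === 'email=john%40doe.com&tags%5B0%5D=a&tags%5B1%5D=b'
```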

 

 

 

myApp Config - Defining app routes


The application has only two routes: "/", which points to the main view, and "/contact/:email/", which points to the contact view.


/**
* Defines myApp module configuration
*/
angular.module('myApp').config(
    /* Defines myApp routes and views */
    function($routeProvider) {
        $routeProvider.when('/', {
            controller: 'MainController',
            templateUrl: 'main.html'
        });
        $routeProvider.when('/contact/:email/', {
            controller: 'ContactController',
            templateUrl: 'contact.html'
        });
        $routeProvider.otherwise({
            redirectTo: '/'
        });
    }
);





Token Service


The Token Service is used by the Contacts Service to create an authentication token for each HTTP request sent to the server. It concatenates all parameter values, a timestamp and the private key separated by pipes "|" and creates a SHA1 hash of the result. It also returns auth_token_s2s, the string to sign, which lists the names (and order) of the parameters the server must use to recalculate the hash and verify the token. auth_token_uid contains the user id. Notice that the private key itself is never sent with the HTTP request; the server already knows it. The askForPkey method prompts the user for the key so the service can use it to create the authentication tokens.



/**
*  Creates the token service, responsible for generating the authentication tokens
*/
angular.module('myApp').service('TokenService', function($rootScope) {
    // Stores user's private key used to create the token in the getToken method.
    this.pkey = '';
    var self = this;

    this.askForPkey = function() {
        self.pkey = window.prompt("Please inform your private key", "12345");
    };

    this.setPkey = function(sPkey) {
        self.pkey = sPkey;
    };

    this.getToken = function(params) {
        var timestamp = Date.now();
        var auth_token_con = '';
        var auth_token_s2s = '';
        angular.forEach(params, function(value, key) {
            auth_token_con = auth_token_con + value + '|';
            auth_token_s2s = auth_token_s2s + key + '|';
        });
        if (self.pkey === '') {
            self.askForPkey();
        }
        auth_token_con = auth_token_con + timestamp + '|' + self.pkey;
        auth_token_s2s = auth_token_s2s + 'timestamp';
        var token = CryptoJS.SHA1(auth_token_con);
        var auth = {
            token: token.toString(),
            token_s2s: auth_token_s2s,
            token_uid: 'IU_TEST',
            timestamp: timestamp
        };
        return auth;
    };
});





Contacts Service


The Contacts Service is the heart of the application. All communication with the server is handled by this service, and it is also responsible for storing the contacts data displayed by the application views.


It basically implements the 4 CRUD methods that allow the application to Create, Read, Update and Delete contacts on the server side through the Contacts REST API. It makes use of the Token Service to create the authentication token used to sign every single HTTP request sent to the server.



/**
*  Creates the contacts service, responsible for storing and handling contact's data
*/
angular.module('myApp').service('ContactsService', function($http, $location, $rootScope, TokenService) {    this.contacts = [];    this.contact = {        email: '',        firstname: '',        lastname: '',        phone: ''    };    this.rootScope = $rootScope;    this.isFinished = true;    this.serverUrl = 'http://yourserver:port/sap/zrestapi/myApp/contacts';    var self = this;    this.loadingFinished = function() {        window.setTimeout(function() {            self.rootScope.$apply(function() {                self.rootScope.data.loading = false;            });        }, 500);    };    /**     *  Adds a new contact     */    this.addContact = function(contact) {        var auth = TokenService.getToken(contact);        var params = {            email: contact.email,            firstname: contact.firstname,            lastname: contact.lastname,            phone: contact.phone,            timestamp: auth.timestamp        };        self.rootScope.data.loading = true;        $http.post(self.serverUrl,            params,            {                timeout: 30000,                headers: {                    'Auth-Token': auth.token,                    'Auth-Token-S2S': auth.token_s2s,                    'Auth-Token-UID': auth.token_uid                }            }        ).success(function(oData) {            self.loadingFinished();            if (oData.success === 'true' || oData.success === true) {                self.contacts.push(contact);                     self.contact.email = '';                self.contact.firstname = '';                self.contact.lastname = '';                self.contact.phone = '';                if (angular.isString(oData.msg)) {                    self.rootScope.data.message = oData.msg;                }            }else{                if (angular.isString(oData.msg)) {                    self.rootScope.data.message = oData.msg;                }            }        }).error(function(data, status) {            
self.loadingFinished();            if (status === 0) {                self.rootScope.data.message = 'Communication error: timeout.';            }else if(data !== null && data !== undefined ) {                    if (data.msg !== undefined) {                               if (angular.isString(data.msg)) {                            self.rootScope.data.message = data.msg;                        }else{                            self.rootScope.data.message = 'Communication error: server returned code ' + status;                        }                    }else{                        self.rootScope.data.message = 'Communication error: server returned code ' + status;                    }            }else{                self.rootScope.data.message = 'Communication error: server returned code ' + status;            }        });    };    /**     *  Retrieves all contacts     */    this.getContacts = function() {        var auth = TokenService.getToken({});        self.rootScope.data.loading = true;        $http.get(self.serverUrl, {            timeout: 30000,            params: {                timestamp: auth.timestamp            },            headers: {                'Auth-Token': auth.token,                'Auth-Token-S2S': auth.token_s2s,                'Auth-Token-UID': auth.token_uid            }        }).success(function(oData) {            self.loadingFinished();            if (angular.isArray(oData.contacts)) {                         self.contacts.splice(0, self.contacts.length);                         oData.contacts.forEach(function (contact) {                    self.contacts.push(contact);                });            }        }).error(function(data, status) {            self.loadingFinished();            if (status === 0) {                self.rootScope.data.message = 'Communication error: timeout.';            }else if(data !== null && data !== undefined ) {                    if (data.msg !== undefined) {                               if 
(angular.isString(data.msg)) {                            self.rootScope.data.message = data.msg;                        }else{                            self.rootScope.data.message = 'Communication error: server returned code ' + status;                        }                    }else{                        self.rootScope.data.message = 'Communication error: server returned code ' + status;                    }            }else{                self.rootScope.data.message = 'Communication error: server returned code ' + status;            }        });    };    /**     *  Retrieves contact's data     *  @param {Object} obj     */    this.getContact = function(id) {        var email = id;        var auth = TokenService.getToken({email:id});        self.rootScope.data.loading = true;        $http.get(self.serverUrl, {            timeout: 30000,            params: {                email: email,                timestamp: auth.timestamp            },            headers: {                'Auth-Token': auth.token,                'Auth-Token-S2S': auth.token_s2s,                'Auth-Token-UID': auth.token_uid            }        }).success(function(oData) {            self.loadingFinished();            if (angular.isArray(oData.contacts)) {                oData.contacts.forEach(function (contact) {                                 self.contact.email = contact.email;                    self.contact.firstname = contact.firstname;                    self.contact.lastname = contact.lastname;                    self.contact.phone = contact.phone;                });            }        }).error(function(data, status) {            self.loadingFinished();            if (status === 0) {                self.rootScope.data.message = 'Communication error: timeout.';            }else if(data !== null && data !== undefined ) {                    if (data.msg !== undefined) {                               if (angular.isString(data.msg)) {                            
self.rootScope.data.message = data.msg;                        }else{                            self.rootScope.data.message = 'Communication error: server returned code ' + status;                        }                    }else{                        self.rootScope.data.message = 'Communication error: server returned code ' + status;                    }            }else{                self.rootScope.data.message = 'Communication error: server returned code ' + status;            }        });    };    /**     *  Updates selected contact     */    this.updateContact = function() {        var auth = TokenService.getToken(self.contact);        var params = {            email: self.contact.email,            firstname: self.contact.firstname,            lastname: self.contact.lastname,            phone: self.contact.phone,            timestamp: auth.timestamp        };        self.rootScope.data.loading = true;        $http.put(self.serverUrl,            params,            {                timeout: 30000,                headers: {                    'Auth-Token': auth.token,                    'Auth-Token-S2S': auth.token_s2s,                    'Auth-Token-UID': auth.token_uid                }            }        ).success(function(oData) {            self.loadingFinished();            if (oData.success === 'true' || oData.success === true) {                if (angular.isString(oData.msg)) {                    self.rootScope.data.message = oData.msg;                }            }else{                if (angular.isString(oData.msg)) {                    self.rootScope.data.message = oData.msg;                }            }        }).error(function(data, status) {            self.loadingFinished();            if (status === 0) {                self.rootScope.data.message = 'Communication error: timeout.';            }else if(data !== null && data !== undefined ) {                    if (data.msg !== undefined) {                               if 
(angular.isString(data.msg)) {                            self.rootScope.data.message = data.msg;                        }else{                            self.rootScope.data.message = 'Communication error: server returned code ' + status;                        }                    }else{                        self.rootScope.data.message = 'Communication error: server returned code ' + status;                    }            }else{                self.rootScope.data.message = 'Communication error: server returned code ' + status;            }        });    };    /**     *  Deletes selected contact     */    this.deleteContact = function() {        var auth = TokenService.getToken({email: self.contact.email});        self.rootScope.data.isFinished = false;        self.rootScope.data.loading = true;        $http.delete(self.serverUrl, {            timeout: 30000,            params: {                email: self.contact.email,                timestamp: auth.timestamp            },            headers: {                'Auth-Token': auth.token,                'Auth-Token-S2S': auth.token_s2s,                'Auth-Token-UID': auth.token_uid            }        }).success(function(oData) {            self.loadingFinished();            if (oData.success === 'true' || oData.success === true) {                if (angular.isString(oData.msg)) {                    self.rootScope.data.message = oData.msg;                                 self.contact.email = '';                    self.contact.firstname = '';                    self.contact.lastname = '';                    self.contact.phone = '';                    self.rootScope.data.isFinished = true;                }            }else{                if (angular.isString(oData.msg)) {                    self.rootScope.data.message = oData.msg;                }            }        }).error(function(data, status) {            self.loadingFinished();            if (status === 0) {                self.rootScope.data.message = 
'Communication error: timeout.';            }else if(data !== null && data !== undefined ) {                    if (data.msg !== undefined) {                               if (angular.isString(data.msg)) {                            self.rootScope.data.message = data.msg;                        }else{                            self.rootScope.data.message = 'Communication error: server returned code ' + status;                        }                    }else{                        self.rootScope.data.message = 'Communication error: server returned code ' + status;                    }            }else{                self.rootScope.data.message = 'Communication error: server returned code ' + status;            }        });    };    /**     * Reset selected contact in order to clear the form     */    this.resetSelectedContact = function(id) {        self.contact.email = '';        self.contact.firstname = '';        self.contact.lastname = '';        self.contact.phone = '';    };
});




Main Controller


The Main Controller makes use of the Contacts Service to read all contacts (to display them in the contacts list) and to add new contacts. It is important to highlight that $scope.data.contacts points to ContactsService.contacts, so whenever the Contacts Service updates its contacts data the view is automatically updated thanks to AngularJS data-binding.



/**
*  Defines the main controller (for the view main.html)
*/
angular.module('myApp').controller('MainController',
    function($scope, $rootScope, ContactsService) {

        var rootScope = $rootScope;

        $scope.data = {};

        if ($rootScope.data === undefined) {
            $rootScope.data = {};
            $rootScope.data.message = null;
            $rootScope.data.pkey = null;
        }

        $rootScope.data.loading = false;

        ContactsService.resetSelectedContact();

        $scope.data.contact = ContactsService.contact;
        $scope.data.contacts = ContactsService.contacts;
        $scope.data.showPaneAddContact = false;

        ContactsService.getContacts();

        /**
         *  Checks whether the contacts array is empty or not
         */
        $scope.isEmpty = function(){
            if($scope.data.contacts.length > 0){
                return false;
            }else{
                return true;
            }
        };

        /**
         *  Adds a new contact
         */
        $scope.addContact = function(form){
            if(form.$valid === true) {
                var contact = {
                    email: $scope.data.contact.email,
                    firstname: $scope.data.contact.firstname,
                    lastname: $scope.data.contact.lastname,
                    phone: $scope.data.contact.phone
                };
                ContactsService.addContact(contact);
                form.$setUntouched();
            }else{
                $rootScope.data.message = "All fields are required!";
            }
        };

        $scope.resetForm = function(form){
            ContactsService.resetSelectedContact();
            form.$setUntouched();
        };

        /**
         *  Clears displayed messages after 3 seconds
         */
        $scope.resetMessage = function() {
            window.setTimeout(function() {
                rootScope.$apply(function() {
                    rootScope.data.message = null;
                });
            }, 3000);
        };

        /**
         *  Watches changes of the variable "$rootScope.data.message" in order to clear
         *  displayed messages after 3 seconds
         */
        $rootScope.$watch(function(scope) { return scope.data.message },
            function(newValue, oldValue) {
                if (newValue !== oldValue && newValue !== "") {
                    $scope.resetMessage();
                }
            }
        );
    }
);




Contact Controller


The Contact Controller also makes use of the ContactsService to read the selected contact's data, and to update or delete the selected contact. Just like in the main controller, $scope.data.contact points to ContactsService.contact.


 

/**
*  Defines the contact controller (for the view contact.html)
*/
angular.module('myApp').controller('ContactController',
    function($scope, $routeParams, $location, $rootScope, ContactsService) {

        var rootScope = $rootScope;

        $rootScope.data = {};
        $rootScope.data.message = null;
        $rootScope.data.isFinished = true;

        $scope.data = {};
        $scope.data.isFinished = $rootScope.data.isFinished;
        $scope.data.contact = ContactsService.contact;

        ContactsService.getContact($routeParams.email);

        /**
         *  Executes the updateContact method of ContactsService to update the selected contact.
         *  No information needs to be passed to identify the selected contact because the service knows who it is.
         */
        $scope.updateContact = function(form){
            if(form.$valid === true) {
                ContactsService.updateContact();
            }else{
                $rootScope.data.message = "All fields are required!";
            }
        };

        /**
         *  Executes the deleteContact method of ContactsService to delete the selected contact.
         *  No information needs to be passed to identify the selected contact because the service knows who it is.
         */
        $scope.deleteContact = function(){
            ContactsService.deleteContact();
        };

        /**
         *  Navigates back to the main view
         */
        $scope.back = function(){
            ContactsService.resetSelectedContact();
            $location.url("/");
        };

        /**
         *  Clears displayed messages after 3 seconds
         */
        $scope.resetMessage = function() {
            window.setTimeout(function() {
                rootScope.$apply(function() {
                    rootScope.data.message = null;
                });
            }, 3000);
        };

        /**
         *  Watches changes of the variable "$rootScope.data.isFinished" in order to trigger
         *  the navigation back to the main view when a contact is deleted
         */
        $rootScope.$watch(function(scope) { return scope.data.isFinished },
            function(newValue, oldValue) {
                if (newValue === true && oldValue === false) {
                    $scope.back();
                }
            }
        );

        /**
         *  Watches changes of the variable "$rootScope.data.message" in order to clear
         *  displayed messages after 3 seconds
         */
        $rootScope.$watch(function(scope) { return scope.data.message },
            function(newValue, oldValue) {
                if (newValue !== oldValue && newValue !== "") {
                    $scope.resetMessage();
                }
            }
        );
    }
);



Testing the application locally with XAMPP



http://www.jianelli.com.br/scnblog8/figures/scnblog8_12.png

   Figure 12 - Starting with no contacts


http://www.jianelli.com.br/scnblog8/figures/scnblog8_13.JPG

   Figure 13 - Adding a new contact


http://www.jianelli.com.br/scnblog8/figures/scnblog8_14.JPG

   Figure 14 - Contact Added


http://www.jianelli.com.br/scnblog8/figures/scnblog8_15.JPG

http://www.jianelli.com.br/scnblog8/figures/scnblog8_16.JPG

   Figure 15 and 16- Updating the contact


 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_17.JPG

   Figure 17 - Contact updated!



http://www.jianelli.com.br/scnblog8/figures/scnblog8_18.JPG

   Figure 18 - Contact list updated


http://www.jianelli.com.br/scnblog8/figures/scnblog8_19.JPG

   Figure 19 - Contact list empty again after deleting the contact

 

http://www.jianelli.com.br/scnblog8/figures/scnblog8_20.JPG

   Figure 20 - No contacts


http://www.jianelli.com.br/scnblog8/figures/scnblog8_21.JPG

   Figure 21 - Authentication token sent in the request header



You can see a live demo at


SCN Blog 8 - AngularJS Contacts App with {z}restapi and token authentication


but this demo uses a PHP backend to simulate the SAP WebAS responses.


All code is available on GitHub


christianjianelli/scnblog8 · GitHub





Fix missing "Spreadsheet" option while right-clicking on ALV generated with REUSE_ALV_GRID_DISPLAY


Today I came across a situation where the "Spreadsheet" option was missing while right-clicking on an ALV generated using REUSE_ALV_GRID_DISPLAY.

(See below)

 

2015-01-05_110428.jpg

 

I went through a couple of discussions on this topic, but most of them addressed the missing "Export" option in the application bar (List > Export > Spreadsheet).

 

I observed that in my case, I could still reach the "Spreadsheet" option via the application bar.

2015-01-05_111256.jpg

 

The Internal table declaration is as below-

* TYPES-----------------------------------------------------------------

  types :   begin of ty_output,

              kbeln      type kbeln,

              begru      type begru,

              bstat      type bstat,

              end of ty_output.

**--Internal Table----------------------------------------------------

Data: li_output type standard table of ty_output.

 

..

..

Logic for fetching data

..

* Display the data in grid

    call function 'REUSE_ALV_GRID_DISPLAY'

      exporting

        i_callback_program = sy-repid

        is_layout          = rec_layout

        it_fieldcat        = li_fldcatlog[]

        i_default          = 'X'          

        i_save             = 'A'          

        is_variant         = g_v_variant  

      tables

        t_outtab           = li_output.

While debugging, I found that the internal table 'li_output' contained a deep component, 'BSTAT'.

2015-01-05_113753.jpg

The types declaration was updated as below-

 


* TYPES-----------------------------------------------------------------

  types :   begin of ty_output,

              kbeln      type kbeln,

              begru      type begru,

              bstat      type wrfbstat, "<<<<<-

              end of ty_output.

**--Internal Table----------------------------------------------------

Data: li_output type standard table of ty_output.

 

Post the code change, the 'Spreadsheet" Option appeared.

2015-01-05_114302.jpg

The documentation of REUSE_ALV_GRID_DISPLAY doesn't mention that deep structures are not supported, though.

 

 

Hope this will be helpful for others!

ABAP Dump Texts - Quick and Dirty


Matthew Billingham has written a helpful blog about ABAP dumps.

 

From my point of view a lot of helpful information is already contained in the short dump texts.

 

You might be interested to see these texts without having to raise the exception.

 

The following - quick and dirty - program can be used as a starting point.

 

Simply enter the dump id, e.g. COMPUTE_INT_ZERODIVIDE or TIME_OUT, in the selection screen and see the short dump text.

 

Kind of poor man's ST22.

 

No guarantee that it works in all cases. I tested a few. Feel free to report errors or to improve that little hack.


REPORT ...

PARAMETERS errid TYPE snapt-errid.


CLASS write_dump DEFINITION.

  PUBLIC SECTION.
    CLASS-METHODS main.
  PRIVATE SECTION.
    CLASS-METHODS write_section
      IMPORTING VALUE(errid) LIKE errid
                section      TYPE snapt-ttype.
ENDCLASS.

CLASS write_dump IMPLEMENTATION.
  METHOD main.
    WRITE / errid COLOR COL_HEADING.
    SKIP.
    WRITE / 'What happened?' COLOR COL_HEADING.
    write_section( errid = errid
                   section  = 'W' ).
    SKIP.
    WRITE / 'What can I do?' COLOR COL_HEADING.
    write_section( errid = errid
                   section  = 'T' ).
    SKIP.
    WRITE / 'Error analysis' COLOR COL_HEADING.
    write_section( errid = errid
                   section  = 'U' ).
    SKIP.
    WRITE / 'Hints for Error handling' COLOR COL_HEADING.
    write_section( errid = errid
                   section  = 'H' ).
    SKIP.
    WRITE / 'Internal notes' COLOR COL_HEADING.
    write_section( errid = errid
                   section  = 'I' ).
    SKIP.
  ENDMETHOD.
  METHOD write_section.
    DATA tline   TYPE snapt-tline.
    DATA sect    TYPE snapt-ttype.
    SELECT tline ttype
           FROM snapt INTO (tline,sect)
           WHERE langu = sy-langu AND
                 errid = errid AND
                 ttype = section
                 ORDER BY seqno.
      IF strlen( tline ) >= 8 AND
         tline(8) = '&INCLUDE'.
        REPLACE '&INCLUDE' IN tline WITH ``.
        CONDENSE tline.
        errid = tline.
        write_section( errid = errid
                       section = sect ).
      ELSE.
        WRITE / tline.
      ENDIF.
    ENDSELECT.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  write_dump=>main( ).

Project Object - The best Excel (xls, xlsx, csv) file reader around?


Dear community,

 

I believe we have all been there. We want to read an excel file with some data in a table, and each time we look around for an example (or for the last report where we needed this), and each time we are faced with the same tedious tasks, which include but are not limited to:

  • You need to create the file dialog for the user and limit it to the extensions that you are able to read, which makes it very restrictive
  • You need to create a char like structure because the excel data will be imported in external format.
  • You need to take care of the conversion of each field (unless if it's just text), and many times you make assumptions in this conversion that are not always true (like the decimal separator, the date format...)

 

Now, what if it were possible to have all of this done for you automatically?

 

Sounds like a dream? Well, now it's reality.

 

You can find it here:

Project Object - File Reader

 

Please check the Readme file for installation instructions.

 

Sounds interesting, how does it work?

 

Well, I'm glad you ask. This file reader uses the abap runtime type description functionalities extensively.

 

All you need to do is call the READ_FILE method of the class and provide it with the internal table where you want the data to be imported to. If you want to skip the dialog prompting for a file, you can use the importing parameter for the file path. There's also the classic optional "skip first line" parameter, if the user insists on uploading the file with the header line.
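A call could look like the sketch below. The class name and parameter names here are assumptions for illustration only (the BAdI names in this post suggest a Z_BD_FILE_READER naming scheme, but check the project's Readme and the class definition on GitHub for the real signature):

```abap
" Illustrative sketch only - class and parameter names are assumptions,
" not the project's actual signature.
DATA lt_data TYPE STANDARD TABLE OF zmy_upload_line.  " your typed target table

zcl_bd_file_reader=>read_file(
  EXPORTING
    iv_file_path       = 'C:\temp\data.xlsx'  " optional: skips the file dialog
    iv_skip_first_line = abap_true            " ignore the header line
  CHANGING
    ct_data            = lt_data ).           " filled in internal format
```

The point is that the target table drives everything: the reader derives the char-like intermediate table and the field conversions from lt_data's type via RTTS, so no manual conversion code is needed.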

 

The "reader" is split into the next steps:

  1. Create a char like table automatically from the provided internal table. All fields will be type char with a length of 50 characters. If for some reason this is not sufficient for you, I have put a BAdI in place for you to override this.
  2. The "reader" will get the extension from the file path provided and will call the BAdI implementation for the respective extension. At the moment there are implementations for extensions CSV, XLS and XLSX. Please keep in mind that to read XLSX files you'll need to install abap2xlsx. More info on that here abap2xlsx
  3. The "reader" will convert the char like table into the internal format following the pattern: It will call a BAdI to allow the developer to override the default conversion per data element. If this BAdI is not implemented, it will check for a conversion routine. If the data element does not have a conversion routine, it will run a standard conversion algorithm (implemented in a BAdI) per data type.

 

 

This does a lot, but I need more, how can I enhance it?

 

I created the following BAdIs that allow you to enhance this file reader to best fit your needs:

  • Z_BD_FILE_READER_COL_TYPE - This BAdI allows you to override the data type used in the char like structure. Basically, if you need more than 50 characters, use this.
  • Z_BD_FILE_READER_CONV - This BAdI allows you to override the default conversion algorithm, in case you need to convert some data in a special way. You need to use filter F_ROLLNAME by providing the data element of the column you wish to override.
  • Z_BD_FILE_READER_EXECUTE - This BAdI allows you to create a new extension reader. As I mentioned, I have already implemented a reader for CSV, XLS and XLSX files. If you need another extension, you can implement this BAdI yourself. Feel free to share it with me if you want to contribute
  • Z_BD_FILE_READER_TYPE_CONV - If you don't override the default conversion, and if there's no SAP standard conversion routine, I have implemented a few conversion routines for the most "popular" types. These are text, date and numbers. The date conversion routine accepts pretty much every format (dd/mm/yyyy, yyyy/mm/dd, yyyymmdd...) and the number routine gets the format (decimal and thousands separator) from the user options, so as long as it matches the format in the file, it should work. If you encounter some fancy data type you'll have to implement the conversion yourself. If you feel like contributing, feel free to share

 

 

Some credit where credit is due

 

I'd like to thank Aaron Pennington and Christian Skoda for their help and inspiration, and Christian additionally for being the installation manual tester

 

Of course, I'd like to thank as well the abap2xlsx team for sharing their xlsx reader.

 

 

Any comments or suggestions are appreciated as usual. If you have any questions, feel free to ask.

 

Best,

Bruno

Design - Dynamic Programming or Not...


Technical Design Time

 

So you are doing the technical design of a project.   OK - so the design and the coding have been done and you are supporting it.  (Me)   But pretend I helped design the "mess" I call dynamic programming.   Don't get me wrong, I love dynamic programming when used with some common sense.   But anyway...   Back to the story, YOU are doing a technical design for a project.

 

Requirement:

A screen is needed to display a table.   In the table there can be many records that relate back to one line.   You want to display the table in one line for editing, creating, and displaying.

 

Table

Line | Name of Field  | Description of Field | Value
-----|----------------|----------------------|------------------
1    | Crazy1         | Crazy field 1        | Yes
1    | Medication1    | Medication Needed    | Tranquilizer
1    | Time_Dangerous | Time when dangerous  | While Programming
2    | Crazy1         | Crazy field 1        | No
3    | Happy          | When are happy       | No computers
3    | Medication1    | Medication Needed    | None

 

Display / Change / Create - After clicked on line 1

Line | Crazy1 | Medication1  | Time_Dangerous
-----|--------|--------------|-------------------
1    | Yes    | Tranquilizer | While programming

 

Display/Change / Create - After clicked on line 2

Line | Crazy Field 1
-----|--------------
2    | No

 

OK, now let's make it a little harder - the fields for the lines are determined by different criteria.

Line | Job                 | Number of years | People interaction | Hours logged with people interaction | Color | Animal | Name of Field
-----|---------------------|-----------------|--------------------|--------------------------------------|-------|--------|---------------
1    | Programmer          | 3               |                    |                                      |       |        | Crazy1
2    | Programmer          |                 | No                 |                                      |       |        | Crazy1
3    |                     |                 |                    |                                      |       | Horse  | Crazy1
4    | Independent wealthy |                 |                    |                                      |       |        | Happy
5    |                     |                 | Yes                | 10                                   |       |        | Medication1
6    |                     |                 |                    |                                      | Red   |        | Time_dangerous

 

So it's an interesting requirement.    Let's pretend nothing in standard SAP will work for the requirement.   So what do you do?   What do you put in your technical spec?  How are you really going to program this beast?

 

Maybe the above isn't enough information.   So you get more and continue.

 

Dynamic programming is an option.   Probably a good one.   Use it with an ALV.   Ahhhhh... Now you see where I'm going with the dynamic programming.   But..   What about a Step-loop?   It's old, true.  And the display will look different - going down the page instead of across.  So which is the better choice?

 

Aha!  It's SAP ABAP we are talking about.   So the answer "It depends".   Yes!  Fist pump.  That is the answer.   I know it.

 

Questions to ask yourself

 

So step back - look at the mess and try to think about it.   Now step back again, look at it again, and decide which will be the easiest to code.   Step back - yes, look again: which is going to have the best performance?   Alright, one more time - the last and most important to all of us who support your code - step back, look again.   Which is going to be the easiest to support?  What skill levels are on staff?  What do they normally use?   What about the development team who will be developing the code?  What if it isn't you - can they do what you want?

 

So many questions and no good answer.

 

Decision Time:

 

So one more time - I'll move forward.

 

I decide to use dynamic programming and an ALV output.    I'm going to build a custom container to hold the code as I'm adding it via a screen exit.

Code Snippet

 

Here's some code:

 

code.JPG

 

I know not enough - but search on Dynamic programming.  You'll find a lot of examples.    This blog isn't meant to explain how to do it.  Also I know the requirements are not all detailed.  But hey we are developers, we know they aren't always well defined!   I like trying to read the back of the napkin for my functional requirements, it's always fun.
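Since the screenshot is hard to read, here is a hedged sketch of the general approach (not the project's actual code): build an ALV field catalog from whatever columns the selected line requires, let the standard class cl_alv_table_create generate a matching internal table at runtime, and assign it to a field symbol for filling and display.

```abap
" Sketch of a dynamically typed ALV table - field names are made up for the example
DATA: lt_fcat  TYPE lvc_t_fcat,
      ls_fcat  TYPE lvc_s_fcat,
      lr_table TYPE REF TO data.

FIELD-SYMBOLS <lt_dyn> TYPE STANDARD TABLE.

" Build one catalog entry per attribute the clicked line requires
CLEAR ls_fcat.
ls_fcat-fieldname = 'CRAZY1'.
ls_fcat-datatype  = 'CHAR'.
ls_fcat-intlen    = 30.
APPEND ls_fcat TO lt_fcat.
" ... append further columns as determined by the criteria table ...

" Generate an internal table whose line type matches the field catalog
cl_alv_table_create=>create_dynamic_table(
  EXPORTING it_fieldcatalog = lt_fcat
  IMPORTING ep_table        = lr_table ).
ASSIGN lr_table->* TO <lt_dyn>.

" <lt_dyn> can now be filled via ASSIGN COMPONENT and passed together with
" lt_fcat to cl_gui_alv_grid->set_table_for_first_display.
```

This is the core trick that makes the "columns per line" requirement workable in an ALV: the table shape is decided at runtime, not at design time.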

 

So now - answers to my questions.  Of course, my own answers.

Readability - Maintenance

 

Here's where it gets fun.   Say I'm a new programmer and have to maintain this.   Oh boy.   I wouldn't know where to start.   Or say I'm an "older" programmer (yes, I sure am) and I've never seen anything like this.  Either way, the maintenance cost will go up.

 

Performance


Here it gets a little harder....   If I had used a step loop, I would have been querying and adding data screen by screen.   However, a step loop really didn't meet the requirement.   As a side note, if they used page up and down a lot - it really wouldn't have been a good performance save.   Do think about things like that if you are doing web programming, RF programming, etc.  Screen by screen or all at once?  And it would have depended on if I pulled all the data up front before moving from screen to screen.

 

Easiest?

 

Not always the best way to go.   It depends on your background.  For me, no this wasn't the easiest.

 

Should Dynamic programming been a technical requirement?

 

So it sounds like the step loop should have won, and dynamic programming should not have been used.   Well, I would say the answers are interesting.  The one that would concern me would be the performance.

 

So did dynamic programming make sense?  YES!   Look at some different blogs.   I get to ride my soapbox a bit - it's always fun.   It should have been used for many different reasons.  It is more flexible.   Once you learn it, it will be easy to maintain.  For what I'm using it for, it meets the requirement much better.   Performance is better.   But one of the best reasons - it's fun!   No, not really the best reason.   The best reason is that it's going to expand the technical skills of anyone who has to maintain it.   Will it take longer for them?  Of course.  Will they complain?  Maybe.   But they will move forward a bit.

 

And yes, this is still a bit behind the times.  No web, no HANA... What can I say?  We aren't there yet.

 

So what do YOU think Dynamic or some other form of programming?  Would you have done something different?   Maybe even a newer technique I didn't think of.    Feel free to get on your soapbox.  I'd love to hear what you think.

Using UPL for doing just the needed rework


The Situation

In bigger and long-running projects you are faced more often than not with task to rework already existing coding. For example, the project used a specific version of a library and now it is time to upgrade this version to benefit from new features. Therefore the places in the coding where the library is used have to be found and adjusted.

 

In ABAP development, we see from time to time that some constructs are declared obsolete. While there is in most cases no need to rework the code that makes use of these constructs, as backward compatibility is done to the max in the ABAP stack (even to a point where it starts to get painful...), one might still want to get rid of such obsolete coding for different reasons, like

  • being state of the art
  • using it as a good "excuse" for doing additional refactoring work in the surroundings
  • just for feeling better not to rely on obsolete and possibly not supported features

 

But what to do if your project is a very long-running one and the amount of coding that would need a rework is really huge (say: above one million lines of code)? Wouldn't it be nice to first concentrate on that parts that are actually used daily in your productive system? If this is the case I probably have a little hint for you.

 

The Example

Let me give you an example that I had to deal with recently: We just upgraded one of our main ABAP systems from 7.01 to 7.40 SP7. A giant step forward, and it went ok, except for one little problem. It seemed that using an old construct that was marked as deprecated years ago led the programs using it to dump when run heavily in parallel (after some weeks it turned out that it was "just" a small bug in the ABAP kernel, but that is another story...).

As I searched for this construct in the code base of our 15-year-old application with over 10 million lines of code using a Code Inspector test, I found roughly 400 places. Adjusting all of these places seemed to be a little bit too much even for a boring and rainy Friday afternoon, so I thought about how to strip this set down to reduce my work. Always remember: a good developer is a lazy one.

 

Within a mailing thread with Boris Gebhardt concerning the Code Inspector tests for HANA readiness, he gave me the hint to use one of the many, but not so well-known, object collectors to reduce the set of development objects to scan. Good advice for my task, as one of these object collectors was just the one I was searching for! But it needed a little bit of support from another tool.

 

Introducing UPL

Recently, we activated the Usage and Procedure Logging (UPL) on our productive system to learn about the used and unused development objects. There is a nice document about Usage Procedure Logging (UPL) from Ashishkumar Banker that explains what UPL is and how it can be used. The link for an official how-to guide to UPL can also be found in Getting started with Usage and Procedure Logging (UPL) by Shuge Guo and Bjoern Panter. The official guide clearly explains the benefits of UPL compared to other monitoring tools like ST03, ABAP Coverage Analyzer and others. One of these benefits is that you get nearly no performance penalty from activating it, so you can safely do so even in productive systems to monitor the real stuff that is going on.

While UPL is well integrated into Solution Manager's Custom Code Lifecycle Management for managing obsolete and not used objects, there are also use cases when used on the monitored system in stand-alone mode. One of these use cases is to get a list of executable development objects that were actually used during the monitoring phase for further analysis.

 

Let's start the work!

If you have activated UPL according to the guides mentioned and let it do its magic for the planned amount of time (in our system it keeps a history of 31 days), you can start to harvest the results. Just start the report /SDF/SHOW_UPL and adjust the parameters to your needs. For example, I was interested in our own code, so I narrowed the results to custom packages ("Z*").

 

UPL_EXport_to_CI.jpg

Marked red in the screenshot you find the parameter that will return the found results in a format that can be used with Code Inspector and one of its object collectors.

After some seconds, you will get a file that contains all the used objects in the following format:

 

UPL_Export_File.jpg

As you can see it is simply a tab-separated list of the object type and the name, similar to the TADIR table.

Important to notice here: the export will always return the surrounding objects in the case of methods, FORM routines and function modules. The UPL records down to this level of granularity and could be more specific, but the current version of the object collector expects the list in exactly this format.

 

What does this mean? For the presented use case it will result in some "false positives", as the Code Inspector will scan a whole class and not just the few methods that were actually executed. This is not correct from an academic point of view, but for most use cases the reduction of needed work will probably be good enough. And it is perhaps another argument against monster classes with too many responsibilities. :-)

For my use case I tried to check how this would affect my results. Therefore I used my free SAP Lumira account at https://cloud.saplumira.com to do a bit of analysis of the executed objects. As you can see in the diagram below, the top 25 packages are used very similarly throughout a week, and this also holds true for the top 10 development objects.

LumiraFun2.jpg

When I analyzed the data set a bit further, I found out that in most cases nearly 80% of the sub-parts of the surrounding objects are actually used. So for me it was acceptable to look at the surrounding objects instead of at the fine level of methods and single function modules.

 

Ok, now we switch to the Code Inspector. With the list from UPL we can create an object set using the object collector.

SCI_Create_Object_Set.jpg

You will find the list of installed object collectors on the last tab of the main screen:

SCI_Open_OC_list.jpg

To actually see the list of object collectors, use the search help that is marked in the screen shot above and you will get a popup like this:

SCI_Choose_OC_File_Upload.jpg

As you can see, there are many interesting object collectors available, but I won't go into details for all of them now. The object collector that is needed is called "Objects from File Upload" and accepts a simple text file in the format mentioned above. When you select it, you will be prompted for the file and that's it: you have just created an object set containing the really used objects of your productive system!

 

From here on, it is the normal work with the Code Inspector to search for certain constructs or errors. Create a new inspection with a matching check variant and the just-created object set. When working on the result list of the inspection run, you will be sure to work on just the objects that were used. Nice!

 

There is of course one thing to remember: there is no guarantee that the set of objects that was executed during a small period of time is representative for a whole year. Some objects might be used just once a year on special dates. Keep this in mind when using UPL data on the managed system for narrowing down the search. Again, from an academic point of view, this way of doing things is not 100% correct. If you need more certainty, you should probably have a look into Custom Code Management in the SAP Solution Manager. For my task (and perhaps also for some of yours) it was ok to possibly miss out on one or two objects. So I traded speed and ease of execution for absolute correctness.

 

The Result

Using the described procedure I was able to cut down the amount of needed rework to a half. As I started there were over 400 objects on my list. After matching these objects with the list from UPL, only 200 objects were left to concentrate on. Not bad for about 10 minutes of work.

The rest of the original list can be patched on another day or even be ignored. This depends on the probability of being used again and what happens if the problem should occur.

 

Conclusion

This leads me to the end of my blog. I hope you enjoyed the ride and that you could get some inspiration on using SAP tools in cooperation to reduce the amount of tedious work. If you have any comments or questions, feel free to post them. I am looking forward to your feedback!

FM to get SD Document Flow


I'm writing this post in order to help others who have difficulties finding the correct SD document flow by accessing the standard tables directly. SAP provides us with a function module called SD_DOCUMENT_FLOW_GET; it's very easy to use and fast enough to look up all the documents in the flow. To give you an example I'll use an outbound delivery, but you can use any SD document such as sales orders, invoices, etc:

 

1. Let's go to transaction SE37 and in parameter IV_DOCNUM enter the Outbound delivery number:

foto1.png

 

2. We get the list of the SD document flow for the Outbound delivery, for our example there are seven documents:

 

foto2.png

 

3. Let's open the table ET_DOCFLOW to see all documents:

 

foto3.png

 

4. The fields of these table are important:

 

DOCNUM: SD document number.

FOCUS:     Points to the document from which the search began.

HLEVEL:    Hierarchy position on the document flow tree.

VBTYP_N: SD document category (type of document for DOCNUM field). In our example we have the following:

 

Category | Description
---------|---------------
C        | Order
J        | Delivery
R        | Goods movement
M        | Invoice
8        | Shipment

 

You can check the complete list of categories by looking at the values of the VBTYP domain in transaction SE11:

 

foto4.png

 

And that's all: we get the SD documents from the order up to the accounting document in FI. As a piece of advice, you should sort the table by the fields ERDAT and ERZET in descending order to see the documents by the date they were created.
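Putting that advice into a small sketch - note that the table and number types below are placeholders; the real ones should be copied from the interface of SD_DOCUMENT_FLOW_GET in transaction SE37:

```abap
" Types are placeholders - take the real ones from the FM interface in SE37.
DATA: lt_docflow TYPE tds_docflws,   " table of document flow entries
      lv_docnum  TYPE vbeln.         " any SD document number

lv_docnum = '0080000123'.            " e.g. an outbound delivery

CALL FUNCTION 'SD_DOCUMENT_FLOW_GET'
  EXPORTING
    iv_docnum  = lv_docnum
  IMPORTING
    et_docflow = lt_docflow.

" Newest documents first, as recommended above
SORT lt_docflow BY erdat DESCENDING erzet DESCENDING.
```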

 

I hope this helps you with your SD ABAP developments.

Does ABAP Really Require Longer Procedures?


Note: I did originally publish in the following post in my company's blog on software quality. Since it might be interesting for many ABAP developers, I re-publish it here (slightly adopted).

 

My colleague once summarized the benefits of short methods in his blog post: they are easier to comprehend, document the code better (through the method name), are easier to reuse, and simplify testing as well as debugging and profiling. As the examples were in Java, one may assume that this works for modern languages, but not in general, and maybe not for a language like ABAP. In this post I'll focus on whether short procedures (e.g., methods, function modules, or form routines) have the same relevance for ABAP, or whether longer procedures are legitimate or cannot be avoided.

 

As an upper limit for a sub-routine's length, the post suggested 40 lines of code, or the number of lines visible at once in the editor window. Sometimes even 40 lines are considered far too long. For example, Robert C. Martin stated in his well-known book Clean Code that even 20 lines are too long, and preferred micro methods with only up to five lines. Even in Java and C# development, keeping methods that short is rarely seen in practice. However, from our experience when analyzing and auditing Java or C# projects, we see that most Java or C# developers agree on the advantages of having methods fit on a screen page. Of course we've seen Java or C# code with a high amount of very long methods, but usually the developers are aware that this is not how it should be.

 

The situation is quite different when we analyze ABAP code. Of course, here too developers agree that methods and procedures in general should not become too long, but procedures with some hundred lines are more often considered acceptable and are sometimes even preferred over short procedures. Also, the official ABAP programming guidelines recommend a maximum of 150 statements, and SAP's Code Inspector will issue a warning only above this threshold with its default configuration for the procedural metrics check (see image).

 

sci_proceduarl_metrics_config.png

SAP Code Inspector default configuration for procedural metrics

 

 

Since every statement should be placed on a single line, but may span several lines, a method of 100 or 150 statements will often be a few hundred lines long. Taking comments and empty lines into account, this number increases even further.

 

So, does ABAP Require Longer Procedures?

 

Indeed, there are reasons why Java thresholds are not directly suitable for ABAP. Many constructs require far more lines of code if the code is reasonably formatted. For example, compare a method call with three parameters in Java, e.g.

 

     result = calculateResult(x, y, z);

 

with the call to a similar function module in ABAP:

 

     CALL FUNCTION 'Z_CALCULATE_RESULT'

       EXPORTING

         iv_x = lv_x

         iv_y = lv_y

         iv_z = lv_z

       IMPORTING

         ev_result = lv_result.


Thus you need seven lines of ABAP code for one line of Java code. The same applies to a simple loop over a number range. In Java these are two lines (not counting the loop body)

 

     for (int i = getStart(); i <= getEnd(); i++) {

        // loop content

     }

 

but ABAP requires five lines (again, without the loop content):

 

     DATA lv_i TYPE i.

     lv_i = get_min( ).

     WHILE lv_i <= get_max( ).

       " loop content

       lv_i = lv_i + 1.

     ENDWHILE.

 

The latter example also shows that variable declarations, which do not really increase complexity, require an additional line. From these examples, one could argue that an ABAP procedure will be around three to four times as long as a Java procedure with the same functionality.

 

 

Count Lines or Statements?

 

Especially when thinking about the many lines a simple function call requires in ABAP, developers often suggest counting statements instead of lines. Indeed, this would rate the function call of the first example the same as in Java, but ABAP still requires more statements in many cases, as the second example shows. Furthermore, consider a statement like the following SELECT:

 

     SELECT * FROM zcqseexpobj AS z INTO TABLE deleted_objects

       WHERE z~export_config_id = me->export_meta_data-export_config_id

             AND z~object = 'PROG'

             AND (

                     NOT EXISTS (

                             SELECT * FROM tadir AS t

                               WHERE t~pgmid = 'R3TR'

                                     AND t~object = z~object

                                     AND t~obj_name = z~obj_name

                                     AND t~delflag <> 'X'

                     )

                     OR NOT EXISTS (

                             SELECT * FROM reposrc AS r

                               WHERE r~progname = z~obj_name

                                     AND r~r3state = 'A'

                     )

               ).

 

This is a single statement, but it spans 17 lines, and in my view it is at least as complex as 17 separate lines; it is not just one »simple« statement. The example is from our ABAP code extractor, which we need in order to analyze code, and I really would not want more than two of these in a single method, so the 40-line limit would be perfect here. As you can see, statements can get very complex. That is why we strongly prefer counting lines of code over counting statements (it is always clear what a line is, but not always what counts as a statement). More complex procedure metrics are not helpful either; see e.g. Benjamin’s post on McCabe’s Cyclomatic Complexity.

 

 

Well, Let’s Count Lines—but What’s a Good Length Limit for ABAP then?

 

 

As pointed out, ABAP syntax often requires more lines than other programming languages, so 40 lines might be too strict a limit, I agree. But 100 lines or more should be the exception if you want to keep your ABAP code maintainable, even though 40 lines of Java usually express much more functionality than 100 lines of ABAP. Yes, in general I would accept Java methods comprising more functionality than ABAP ones. There are basically two reasons: first, the more lines are written, the more lines have to be read and understood, which influences comprehensibility on its own. The more crucial issue, however, is the scope of local variables.

 

 

ABAP: There are No Block-Local Variables

 

In Java, C#, and most languages that support statement blocks, the scope of a variable defined inside a block extends only from its declaration to the end of the block. For example, a variable defined inside a loop is no longer accessible after the loop block. This keeps the number of variables you have to consider at any position in the code low and avoids misuse of already defined variables.

 

In ABAP, however, a local variable is visible from the line of its declaration to the very end of the procedure. The number of variables in scope therefore only ever grows, even if a variable is needed only inside a specific IF, ELSE, WHILE, SELECT, or other block. One could reuse a variable later to store a semantically completely different value (maybe because someone is too lazy to add an extra declaration, or has defined a generic multi-purpose »helper« variable), which makes code very hard to understand and is extremely error-prone. But even if variables are not misused, the sheer number of variables to track has a decisive impact on how easy the code is to follow. That is why we recommend that ABAP procedures regularly stay clearly below 100 lines.
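To illustrate (a minimal sketch; the names are invented for this example):

```abap
IF lv_mode = 'A'.
  " Declared inside the IF block...
  DATA lv_helper TYPE i.
  lv_helper = 42.
ENDIF.

" ...but still visible and usable here, after the block has ended.
" In Java or C#, the equivalent line would not compile.
lv_helper = lv_helper + 1.
```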

 

 

Summary

 

The benefits of short procedures for comprehension, documentation, reuse, testing, and profiling/debugging are independent of the programming language used. ABAP indeed has a more verbose syntax, which justifies a higher threshold than 40 lines of code. Still, we prefer to count lines rather than statements, since a single statement can be as complex as a whole method. And since it is very important to keep the total number of local variables low in ABAP, as all of them are valid for the whole procedure, I recommend aiming for procedures clearly shorter than 100 lines for maintainable ABAP code.

 

When we do quality analysis or quality control of an ABAP project, we usually apply a maximum length of 80 lines (or 60 lines when not counting blank and comment lines) to rate a procedure as green. For the vast majority of code, a procedure length of 80 (60) lines is a reasonable limit. Of course, there will always be a few procedures for which it makes no sense to cut them into such short parts, but these should remain exceptions.


SAP User Tracking System - Part 4


Hi


This is another program in which I have used the CL_GUI_TIMER class to display the user login information at different locations.


In this program, a screen opens at the start with a map containing the location information of all users who have logged on to the system or are still logged in.


The page refreshes after a certain time lapse; this refresh interval is stored in the ZGEOKEY table (point 3).

 

The map is refreshed only when a user's information changes or a new user logs in to the system.


In the PBO event:


Create an object of the CL_GUI_HTML_VIEWER class, using the whole screen as the container.

Call the methods below to display the location information.


a. GET_TODATS_USER.


    

 

 

This method collects all users' information from the ZLOGIN table for the current date and stores it in the E_BNAME range table.

 

b. SET_MAP_REFRESH_TIME

 

     In this method, an object of the CL_GUI_TIMER class is created and the refresh time is fetched from the ZGEOKEY table based on the user ID (I_BNAME).

 

    

 

 

     A handler is then set to refresh the screen through the TIMER_FINISH_EVENT method, which handles the FINISHED event of CL_GUI_TIMER.

 


   c. TIMER_FINISH_EVENT

This method is triggered when the refresh interval of the CL_GUI_TIMER object elapses.

Inside this method, I call the GET_URL method again; only if URL information is found is the URL of the CL_GUI_HTML_VIEWER instance refreshed.
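The timer wiring might look roughly like this (a hedged sketch, not the original code; the variable names and the handler method signature are assumptions):

```abap
DATA go_timer TYPE REF TO cl_gui_timer.

CREATE OBJECT go_timer.

" Refresh interval in seconds, e.g. as read from ZGEOKEY for the current user
go_timer->interval = lv_refresh_time.

" React to the FINISHED event of the timer with our own handler method
SET HANDLER me->timer_finish_event FOR go_timer.

" Start the countdown; TIMER_FINISH_EVENT fires when the interval elapses
go_timer->run( ).
```

Inside TIMER_FINISH_EVENT one would call GET_URL, refresh the HTML viewer if needed, and call go_timer->run( ) again to re-arm the timer for the next cycle.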

 

  d.GET_URL

              

 

        This method gets the URL information used to display the Google Map with the users' login locations (point 5).


Output:

Initially only a single user is logged in, so the output displays the map below.

         


After some time, I logged in with the SAP* user. Since I was accessing the system from the same location, I forcibly changed the location information.

         


The map refreshes only when a user logs in.


Happy Learning

Praveer.       





SAP User Tracking System - Part 3


Hi


Now for the final part: displaying the outcome.


5. Program to Display users Logged Information.


     Now we display the information in the final output and check whether the created service is working.

  Now it’s show time.

    

       Create a program with a select-option (image below) and a blank screen.

    

     In the START-OF-SELECTION event:

 

     Create an object of CL_GUI_HTML_VIEWER.

 

      Call the methods below to generate the URL information based on the user IDs entered in the input field.


     a. GET_URL

              

         

In this method, I am passing the User ID, which is a range table.

Based on it, I query the ZLOGIN table and fetch the user login information.


Now, this is the method where I store the information in the ZMAPLOC table. Why?

A user may access the system from different places, or the same user ID may be used in different places by different users, so there can be multiple location records.



I created a number range object, ZURLID.





All the information (URL ID, city, user name, latitude, and longitude) is updated in the ZMAPLOC table, and the URL is created with the URL ID.
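Drawing the next URL ID from the number range object could be sketched like this (the interval number and variable names are assumptions for illustration):

```abap
DATA lv_urlid TYPE n LENGTH 6.

" Fetch the next number from number range object ZURLID, interval '01'
CALL FUNCTION 'NUMBER_GET_NEXT'
  EXPORTING
    nr_range_nr = '01'
    object      = 'ZURLID'
  IMPORTING
    number      = lv_urlid
  EXCEPTIONS
    OTHERS      = 1.
```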


If no information is found, an image from the MIME directory is displayed.



Call screen 9000.

 

In PBO of the Screen,

 

Call the SHOW_URL method of the CL_GUI_HTML_VIEWER object.
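In the PBO this boils down to a single call (a sketch; the variable names are assumed):

```abap
" Screen PBO: render the generated URL in the embedded browser control
go_html_viewer->show_url( url = lv_url ).
```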


The first time, it will ask for the user ID and password of the SAP login.

After providing the credentials:

              Final Output:


 

After selecting any of the markers, a small info window opens displaying the user and location information (as shown in the image above).

 

SAP User Tracking System - Part 4

 

Happy Learning..

Praveer.

SAP User Tracking System - Part 2


Hi


There are still steps to follow before the final output appears. We have to write code that sets up the Google Maps environment to display the map. Now I continue with the pending steps...


4. Created a Customized Service to Call the Google Map


     To display the Google Map, I created a custom service in transaction SICF to execute a URL from the SAP side. Below is the screen, where ZGOOGLE_MAP is the service name.

    

     


     The custom service should be located under


          SICF-->DEFAULT_HOST-->SAP-->BC-->


     It is an HTTP-based service; find more information below.

    

    


     Create a custom class and enter it in the Handler List tab; here I created the class ZCL_GOOGLE_MAP.

    

    


     Inside the class, I implemented the IF_HTTP_EXTENSION interface and wrote the code in the IF_HTTP_EXTENSION~HANDLE_REQUEST method.

    

    

    

    


     Find the method in the attachment; inside it, I call the methods below.

    

     a. GET_LOCATION

 

               IF_HTTP_EXTENSION~HANDLE_REQUEST is called when I pass the link as a URL. I have maintained one URL with an ID:

         

          http://iprv.com:8000/sap/bc/zgoogle_map?sap-client=000$$000216


          Via server->request->get_header_field( name = '~QUERY_STRING' ), the system returns the query string "sap-client=000$$000216",

         

          where 000 is the client and 000216 is the URL ID.

         

          I read the URL information and pass it to the GET_LOCATION method to get all the information into the IT_LAT_LONG internal table.

         


     b. GET_GEO_KEY

I already explained this method under point 3; it gets the geo key information based on the user ID.


    

     c. CREATE_JS_LOCATION_FOR_MAKER

               This is the final part of the whole process: through this method, I create an HTML page containing the JavaScript and Google Maps code that displays the map and the location information.


        


          Inside this method, I have written the code to display the locations; clicking any marker opens a small pop-up (info window) displaying the user ID and location name.

          All this information is returned in the E_JSCRPT parameter.

     

     d. CREATE_FINAL_HTML

          This is the final method; through it, I create the HTML page and embed the JavaScript in it.

         

         


         

          Finally, the HTML is passed in the code below:

     

     server->response->set_cdata( data = lv_html ).
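Putting the pieces together, the handler skeleton might look like this (a sketch based on the description above; the helper method calls are abbreviated to comments):

```abap
METHOD if_http_extension~handle_request.
  DATA: lv_query TYPE string,
        lv_html  TYPE string.

  " e.g. "sap-client=000$$000216" -> client 000, URL ID 000216
  lv_query = server->request->get_header_field( name = '~QUERY_STRING' ).

  " ...split lv_query at '$$', read the locations via GET_LOCATION,
  " build the JavaScript via CREATE_JS_LOCATION_FOR_MAKER,
  " and wrap it via CREATE_FINAL_HTML into lv_html...

  server->response->set_cdata( data = lv_html ).
ENDMETHOD.
```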


SAP User Tracking System - Part 3


Happy Learning..

Praveer.

SAP User Tracking System - Part 1


Hi


My question is very simple: can we track user login information? Yes, there is a way. I think everybody knows the answer, but let me explain.


One option is to activate one of the exits that trigger at login time.


That is what I did: I activated the SUSR0001 exit and wrote code to store the user's login information.


But here comes my point: from where is the user accessing, or has the user accessed, the system?



Yes, there it is: a marker on the map above. I did some analysis and enriched the login information during the storage process.


Here I used an existing web service to get the public IP address of the system in use and location-based information.


Below are the steps I followed to store the information and display the Google Map with the login locations.


  1. Activate an exit to store login information.
  2. Activate the Google service for SAP (SAP uses IE as its default browser).
  3. Activate the Google Maps API in Google.
  4. Create a customized service to call the Google Map.
  5. Program to display users' logged information.
  6. Program to see the current login information of users based on today's date.


  1. Activate an exit to store login information.

 

          As I have explained, we have to store the user login information ourselves, since SAP does not store
          all login information. You can check the current login information in the USR41 table;
          once you log off and log in again, the current login information is updated in the same table.
          So I store all login information in a custom table. Below is the table information.


 

In the above table, I store all the information I need.

The attachment (UPDATE_LOGIN) contains the method that updates the information in the ZLOGIN table.


Inside the Method, I have used


                    http://ip-api.com/json


web service to get the Public IP and Location Information in JSON format.

The method is in the GET_WEB_SERVICE_INFO attachment; in it, I pass the web service URL and receive the JSON-format information in the E_INFO parameter.
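The HTTP call inside GET_WEB_SERVICE_INFO could be sketched as follows (a hedged sketch, not the attached code; error handling is omitted):

```abap
DATA: lo_client TYPE REF TO if_http_client,
      lv_json   TYPE string.

" Open an HTTP connection to the public geolocation service
cl_http_client=>create_by_url(
  EXPORTING url    = 'http://ip-api.com/json'
  IMPORTING client = lo_client ).

lo_client->send( ).
lo_client->receive( ).

" JSON payload with the public IP, city, latitude, longitude, etc.
lv_json = lo_client->response->get_cdata( ).
lo_client->close( ).
```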




2. Activate the Google service for SAP (SAP uses IE as its default browser).

 

     I used the CL_GUI_HTML_VIEWER class to display the Google Map on an SAP screen.

     When I tried the first time, the map did not display. After some analysis, I found that SAP opens IE (Internet Explorer) as its default browser.

     I posted a thread in this forum and found the answer; it contains all the changes needed to activate the Google service in SAP.


     Note: all changes have to be made on the application server side.

 

     http://scn.sap.com/thread/3678742

    

 

3. Activate the Google Maps API in Google


     To use Google Maps in code, we require a Google service key. Please use the link below to activate the Google API key.

 

     https://developers.google.com/maps/documentation/javascript/tutorial


     Follow all the steps in the link above and generate the API key.

     I stored the API key information in a custom table and fetch it through a class method.

     Below is the table where I store the information.

    

    


     In this table, I store the Maps API key, the map type, and the map refresh time.


    


     The information is stored per user ID.

     Below are the methods I use to set and get the information in the ZGEOKEY table.

      

     a. SET_GEO_KEY

 

              

    

     b. GET_GEO_KEY

              

              


     Please find the SET_GEO_KEY and GET_GEO_KEY method code in the attachment.


SAP User Tracking System - Part 2


Happy Learning..

Praveer.



HR_INFOTYPE_OPERATION : Synchronous Employee Data Update


Introduction:


We often come across the requirement to update employee master data by synchronously updating or modifying multiple infotypes in one transaction.


The intent of this document is to outline an approach to updating multiple infotypes using the standard function module HR_INFOTYPE_OPERATION.


Overview:


To explain, we'll take the following illustrative example:


The business needs new records to be created automatically in infotype 0033 whenever records are changed or created for Organizational Assignment (infotype 0001).


Thus, while infotype 0001 is being updated, new records for three subtypes of infotype 0033 should be created in the background.


Trouble Area:


For this type of requirement, we would normally leverage Dynamic Actions, i.e. SPRO->Personnel Management->Personnel Administration->Customizing Procedures->Dynamic Actions. But there are many challenges with dynamic actions, especially when we need to create, update, or delete records with certain validations, or substitute values where integration with a legacy system is involved.


On the ABAP side, we face many challenges when using HR_INFOTYPE_OPERATION. The primary one is the employee locking issue, along with inconsistent transaction commits, as the function module internally calls PA40 by means of BDC (Batch Data Communication).


Adding an info group would be another approach, but it involves manual intervention, so most users discard it.


Implementation:


Step 1: As a first step, we create or change employee master data for Organizational Assignment, i.e. infotype 0001, through transaction PA40.


Step 2: The "Save" action above triggers the update task. We implement the standard BAdI HRPAD00INFTY, method IN_UPDATE, which is called during the update task. As required, we lock the employee so that the other infotype data can be created synchronously in the background.


Afterwards, we call our custom function module ZHR_FM_UPDATE, in which all the required business logic is encapsulated. All import and export parameters can be adjusted as needed.


Step 3: In the function module ZHR_FM_UPDATE, we create a new LUW for the background task, with a unique identifier built from infotype + date + time.

 

Step 4: As we need to trigger HR_INFOTYPE_OPERATION in a new background task, we must have an RFC-enabled function module. But HR_INFOTYPE_OPERATION is not RFC-enabled, so we need to create a wrapper for it.


Thus we create a new wrapper function module, ZHR_FM_UPDATE_LUW, containing the flow logic below to create the infotype data for 0033, i.e. the Statistics records.


A nice side effect is reusability: any custom logic that is needed before the records are updated can be implemented here.
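Inside the wrapper, the core call might look like this (a sketch; the parameter variables are assumptions, and the enqueue/dequeue handling around it is omitted):

```abap
" Create a new infotype 0033 record for the employee
CALL FUNCTION 'HR_INFOTYPE_OPERATION'
  EXPORTING
    infty         = '0033'
    number        = iv_pernr      " personnel number
    subtype       = iv_subty
    validityend   = iv_endda
    validitybegin = iv_begda
    record        = is_p0033      " filled P0033 structure
    operation     = 'INS'         " insert a new record
  IMPORTING
    return        = ls_return.    " BAPIRETURN1 with the result
```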

 

As needed, we can create multiple LUWs (tasks) to update different infotypes one after another.


Step 5: Last but not least, the employee needs to be unlocked through BAPI_EMPLOYEE_DEQUEUE and HR_PSBUFFER_INITIALIZE. Afterwards, execution returns to updating the infotype 0001 (Organizational Assignment) records. Thus, in one transaction, multiple records are created for the given employee.

 

I hope this document is helpful for synchronous employee data updates.


