
Transaction code SE16N using conditions like AND, OR and NOT


During testing and data verification there are cases where we need to filter table entries using AND, OR and NOT conditions on the selection screen. As functional consultants, most of the time we export the data to Excel and apply filters there to see the desired output, which is time consuming. The same task can be achieved directly in SE16N with multiple selection screens.

 

Let us say we have a table with the following contents, and we want to filter and display only the highlighted data below.

 

[Screenshot: sample table contents with the rows to be filtered highlighted]

 

  1. Go to transaction code SE16N.
  2. Right-click to open the context menu and select "Technical View On". This enables a new "More" button on the application toolbar, as shown in step 3.

   [Screenshot: SE16N context menu with "Technical View On"]

    3. Click "More" on the application toolbar.

[Screenshot: the "More" button on the application toolbar]

 

    4. Fill in the selection criteria and press Next to add your next filter.

          In this example we are trying to display data

where ( NAME1 = 'Ashok'  and LANDX = 'INDIA' )
   or ( NAME1 = 'Thomas' and LANDX = 'USA' )
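For reference, the same filter expressed as an Open SQL SELECT (a minimal sketch; the table name ZCUSTOMERS is an assumption, since the demo table above is not named):

" hypothetical table ZCUSTOMERS with fields NAME1 and LANDX
SELECT * FROM zcustomers
  INTO TABLE @DATA(lt_result)
  WHERE ( name1 = 'Ashok'  AND landx = 'INDIA' )
     OR ( name1 = 'Thomas' AND landx = 'USA' ).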

 

Note: In the screen captures below you will notice that the selection screen number increases with each OR condition.

[Screenshot: selection screen 1 with the first AND condition]

 

OR

 

[Screenshot: selection screen 2 with the second AND condition]

 

    5. Press Execute. You will now see the data filtered according to the conditions from step 4.

 

Note: You can always review the selection screen filters by clicking the "More" button.

[Screenshot: filtered result list]

 

I hope you enjoyed this blog.


GUI_DOWNLOAD with Field Names with more than 10 characters.


Hi All,

 

I have seen many posts about downloading an internal table to the PC, and many replies suggesting different ways. But I noticed those posts are still marked Not Answered. Some complained that they are able to download with field names, but that the field names get cut off at 10 characters.

 

For all these cases, I found a suitable way to download with proper field names. Some might have tried this method already; others may be seeing it for the first time. I thought of sharing it anyway.

 

Here I will use 2 internal tables:

1. The final internal table with the data to be downloaded.

2. A table with the field names of the final internal table.

 

 

Fetching data and getting field names.

[Screenshot: fetching data and building the field name table]

 

 

Downloading the Field names internal table.

 

[Screenshot: first GUI_DOWNLOAD call with the field names]

 

 

After calling the GUI_DOWNLOAD function module for the field names, call GUI_DOWNLOAD again and pass the final internal table with the data.

 

Downloading the Final Internal table

 

[Screenshot: second GUI_DOWNLOAD call with the data]

 

 

Check the exporting parameters passed while calling the function module both times; they differ between the two calls.
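Since the calls themselves are only visible in the screenshots, here is a minimal sketch of the two calls. The file name and the header column names are assumptions; the point is that the first call writes a single header line (with field names of any length), and the second call passes APPEND = 'X' so the data lands below it.

DATA: lt_header TYPE STANDARD TABLE OF string,
      lt_data   TYPE STANDARD TABLE OF sflight,
      lv_tab    TYPE c LENGTH 1 VALUE cl_abap_char_utilities=>horizontal_tab.

" one line containing all column headers, separated by tabs
APPEND |CARRIER_IDENTIFICATION{ lv_tab }CONNECTION_NUMBER{ lv_tab }FLIGHT_DATE|
  TO lt_header.

" 1st call: write only the header line
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename = 'C:\temp\flights.xls'
    filetype = 'ASC'
  TABLES
    data_tab = lt_header
  EXCEPTIONS
    OTHERS   = 1.

" 2nd call: APPEND = 'X' adds the data rows below the header line
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename              = 'C:\temp\flights.xls'
    filetype              = 'ASC'
    append                = 'X'
    write_field_separator = 'X'
  TABLES
    data_tab              = lt_data
  EXCEPTIONS
    OTHERS                = 1.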

 

Result:

[Screenshot: downloaded file with full-length field names]

The best debugging tool - your brain


Introduction

Usually when I blog on SCN I write about some specific development problem and the solution I found for it. In contrast, this blog is about a more abstract topic, namely how to debug code efficiently. While it is quite easy to debug SAP code (the Business Suite is open source after all, at least the applications written in ABAP), debugging a certain problem efficiently is sometimes quite complex. As a result I've seen even seasoned developers get lost in the debugger, pondering over an issue for hours or days without coming close to a solution. In my opinion there are different reasons for this. One, however, is that some special approaches or practices are necessary to find the root cause of complex bugs through debugging.

In this blog I try to describe the approaches that in my experience are successful. However, I'd also be interested in which approaches you use and what your experiences are, so I'm looking forward to some interesting comments.

 

Setting the scene

First I'd like to define what I would classify as complex bugs. In my opinion there are basically two categories of bugs: the simple ones and the complex ones. Simple bugs are all the bugs that you are able to find and fix within a single debugger run, or even by simply looking at the code snippet. For example, copy-and-paste errors or missing checks of boundary conditions fall into this category. By simply executing the code once in the debugger, every developer is usually able to immediately spot and correct these bugs.

The complex ones are the ones that occur in the interaction of complex frameworks or APIs. In the SAP context these frameworks or APIs are usually very sparsely documented (if documentation is available at all). Furthermore, in most cases the actual behaviour of the system is influenced not only by the program code but also by several customizing tables. In this context identifying the root cause of a bug can become quite complex. Everyone who has ever tried to debug, say, transaction BP and the underlying function modules (which I believe were the inspiration for the Geek & Poke comic below), or even better a contract replication from ERP to CRM, knows what I'm talking about. The approaches I will be discussing in the remainder of this blog are the ones I use to debug in those complex scenarios.

http://geekandpoke.typepad.com/.a/6a00d8341d3df553ef016767875265970b-800wi

Know your tools

As said in the introduction, I want to focus on the general approach to debugging in this blog. Nevertheless, an important prerequisite for successful debugging is knowing the available tools. In order to get to know the tools you need to do two things. First, it's important to keep up to date with new features. In the context of ABAP development, SCN is a great resource for doing so. For example, Olga Dolinskaja wrote several excellent blogs regarding new features in the ABAP debugger (cf. New ABAP Debugger – Tips and Tricks, News in ABAP Debugger Breakpoints and Watchpoints, Statement Debugging or News in ABAP External Debugging – Request-based Debugging of HTTP and RFC requests). Also Stephen Pfeiffer's blog on ABAP Debugger Scripting: Basics or Jerry Wang's blog Six kinds of debugging tips to find the source code where the message is raised are great resources to learn more about the different features of the tools. Besides the debugger, tools like checkpoint groups (Checkgroups - ABAP Development - SCN Wiki) or the ABAP Test Cockpit (Getting Started with the ABAP Test Cockpit for Developers by Christopher Kaestner) can be very useful for identifying the root cause of problems. However, reading about new features and tools is not enough. In my opinion it is important to once in a while take some time to play with the new features you discovered. Only if you have tried a feature in a toy scenario and understood what it is able to do and what not will you be able to use it to track down a complex bug in a productive scenario.
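As a small illustration of the checkpoint groups mentioned above, here is a sketch (the group name zdebug_pricing and the variable lv_price are assumptions; the group itself is created and activated in transaction SAAB):

" these statements stay inactive until the checkpoint group is activated in SAAB
BREAK-POINT ID zdebug_pricing.                         " conditional breakpoint
LOG-POINT ID zdebug_pricing FIELDS sy-uname lv_price.  " writes to the SAAB log
ASSERT ID zdebug_pricing CONDITION lv_price >= 0.      " checked only when active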

Besides the development tools there are other important tools you should be able to use. Recently I adopted the habit of asking colleagues who want to know whether I have an idea what the cause of a certain bug could be if they have already performed a search on SCN and in the SAP support portal. In a lot of cases the answer is no. However, in my opinion searching for hints on SCN and in the SAP support portal should be the first step whenever you encounter a complex bug. Although SAP software is highly customizable and probably no two installations are the same, those searches usually result in valuable information. Even if you don't find the complete solution, you will at least learn in which areas the cause of the bug might lie. And last, but not least, an internet search usually turns up some interesting links as well.

 

Thinking about the problem...

The starting point for each debugging session is usually an error ticket. Most likely this ticket was created by a tester or a user who encountered an unexpected behaviour. Alternatively the unexpected behaviour could have been encountered by the developer during developer testing (be it automated or manual). In the first case the next step is normally to reproduce the error in the QA system. Once a developer is able to reproduce the error it is usually quite easy to identify the code that causes an error message or an exception (using the tools described in the previous chapter). If no error message or exception, but rather an unexpected result, is produced, identifying the starting point for debugging can already become quite challenging.

In both cases I recently adopted the habit of not starting up the debugger immediately. Instead I start by reasoning about the problem. In general I start this process off by asking myself the following questions:

  • What business process triggers the error?
    The first question for me is always which business process triggers a certain error. Without a detailed understanding of the business process and the context in which it causes the error, identifying the root cause might be impossible.
  • What does the error message tell me?

In the case of a dump this is pretty easy. The details of the dump clearly show what happened and where it happened. However, in the case of an error message the first step should always be to check whether a long text with detailed explanations is available. Most error messages don't have a detailed description available, but if one is available it is usually quite helpful.

Even error messages without detailed descriptions can be very helpful. For example, error messages following the pattern "...<some key value> not available." or "...<some key value> is not valid." usually point to missing customizing. In contrast, a message like "The standard address of a business partner cannot be deleted" points to a problem in the process flow. Once one gets used to reading error messages according to these kinds of patterns, they are quite useful for narrowing down the root cause of an error.

  • Which system causes the error?

Even if it seems to be a trivial question, it is in my opinion quite an important one. Basically all software systems in use today are connected to other software systems. So in order to identify the root cause of an error it is important to understand which system (or which process in which system) is responsible for triggering the error. While this might be easy to answer in most cases, there are a lot of cases where answering this question is far from trivial. For example, consider an SAP Fiori application that is built using OData services from different back-end systems.

  • In which layer does the error occur?

Once the system causing an error is identified, it is important to understand in which layer of the software the error occurs. Usually each layer has different responsibilities (e.g. providing the UI, performing validation checks or accessing the database). For example, in an SAP CRM application the error could occur in the BSP component building the UI, the BOL layer, the GenIL layer or the underlying APIs. Understanding on which layer an error occurs helps to take shortcuts while debugging. If the error occurs in the database access layer, it's probably a good idea not to perform detailed debugging on the UI layer.

 

Usually I try to get good initial answers to these questions. In my opinion it is important to come up with sensible assumptions as answers. If the first answers obtained by reasoning about the error are not correct, the iterative process described below will help to identify and correct them.

 

...and the code

The next step I take is looking at the code without using the debugger. After answering the questions mentioned in the previous section I usually have a first idea in which part of the software the error occurs. By navigating through the source code I try to come up with a first assumption of what the program code is supposed to do and which execution path leads to the error. This way I get a first idea of what I would expect to see in the debugger, and I also test the assumptions I have come up with so far.

Note that trying to understand the code might not be a sensible approach in all cases. Especially when dealing with very generic code it is usually far easier to understand what happens using the debugger. Nevertheless, my experience is that first trying to understand the code without the debugger allows me to debug much more efficiently afterwards.

 

Debugging as an experiment

After all the thinking it is time to get to work and start up the debugger. I try to think about debugging as performing an experiment. After understanding the scenario and context in which the error occurs (by thinking about the problem) and getting a first assumption of what the cause of the error might be (by thinking about the code), I use the debugger to test my assumptions. So basically I use the cycle depicted below to structure my debugging sessions.

[Figure: the debugging cycle: design an experiment, debug, refine the assumptions]

First I try to think of an "experiment" to test my assumptions about the problem. Usually this is simply performing the business process that causes the error. Especially if an error occurs in a complex business process, it might be better to find a way to test the assumptions without performing the whole complex process. The next step is to execute the "experiment" in order to test the assumptions. This basically is the normal debugging everyone is used to. If the root cause of the problem is identified during debugging, the cycle ends here. If not, the final step of the cycle is to refine the assumptions based on the insights gained during debugging. On the basis of the new assumptions we can redesign the experiment and start the cycle over again. In this step it is important to move forward in small increments. If you change too many parameters between two debugging sessions, it might be very difficult to identify the cause of a different system behaviour. For example, consider a situation where an error occurs during the address formatting for a business partner. In order to identify the root cause of the problem it might be sensible to first test the code for the address formatting with a BP of type person, and after that with a BP of type organization with the same address. This makes it possible to check whether the BP type is part of the formatting problem or not.

 

<F5> vs. <F6> vs. <F7>

During the debug step of the cycle presented above, the important question at each debugging step is whether to hit <F5>, <F6> or <F7> (step into, step over and step out, respectively). Using <F5> it is easy to end up deep down in some code totally unrelated to the problem at hand. On the other hand, using <F6> at the wrong position might result in not seeing the part of the source code causing the problem.

In order to decide whether to step into a particular function or method or to step over it, I use a simple heuristic that has proven very useful for me:

  • The more individual (custom, specific) a function or method is, the more likely I am to use <F5>.
  • The more widely used a function or method is, the more likely I am to use <F6>.

Using this heuristic basically leads to the following results:

  1. I will almost always inspect custom code using <F5>. The only exception is when I'm sure the function or method is not the cause of the problem.
  2. I will only debug SAP standard code if I wasn't able to identify the root cause of a problem in the custom code.
  3. I will basically never debug widely used standard function modules and methods, and instead focus on new ones (e.g. those delivered recently with a new EHP).

As an example, consider an error in some SEPA (https://en.wikipedia.org/wiki/Single_Euro_Payments_Area) related functionality. When debugging this error I would first focus on the custom code around SEPA. If this doesn't lead to the root cause of the error, I would also start debugging SEPA-related standard functions and methods. The reason is that this code has only recently been developed (compared to the general BP function modules). If I encounter function modules like BAPI_BUPA_ADDRESS_GETDETAIL or GUID_CREATE in the process, I always step over them using <F6>. These function modules are so common that it is highly unlikely they are the root cause of the problem.

Nevertheless, in rare cases it might turn out that everything points to a function module or method like BAPI_BUPA_ADDRESS_GETDETAIL as the root cause of an error. In this case I would always check the SAP support portal first before debugging these function modules or methods. As they have been widely used for quite some time, it is highly unlikely that I'm the first one encountering the given problem. Only if everything else fails would I start debugging those function modules or methods as a last resort.

 

The right mind set

For all the techniques described before it is important to be in the right mind set. I don't know how often I've heard sentences like "How stupid are these guys at SAP?" or "Have you seen this crappy piece of code in XYZ?". I must admit I might have used sentences like these once or twice myself. However, I think this is the wrong mind set. The developers at SAP are neither stupid nor mean. Therefore, whenever I see something strange I try to think about what might have been the reason to build a particular piece of code a certain way. What was the business requirement they tried to solve with the code? This usually has the nice effect that with each debugging session I learn something new about some particular area of the system. This will help me identify the root cause of new issues more quickly in the future.

 

And probably the most important technique of all is the ability to take a step back. It has happened to me numerous times that I was working on a problem (be it a bug or trying to implement a new feature) for a while without any progress. For whatever reason I had to stop what I was doing (e.g. because the night guard walked in and asked me to finally leave the building). After coming back to the problem the next day I quickly found the solution. It then always seemed like I had been blind to the solution the day before. So whenever I get stuck working on a problem, I force myself to step back, do something else, and revisit the problem afresh a few hours later.

 

What do you think?

Finally, I'd like to hear from you what your approaches to debugging are. Do you use similar practices? Which ones do you find useful in identifying the root cause of complex errors?

 

Christian

ABAP keyword syntax diagram


As a Fiori developer, I am now reading this famous JavaScript book.

[Screenshot: book cover]

In this book, the following kind of graph is used to explain the JavaScript grammar in a very clear way.

[Screenshot: railroad syntax diagram from the book]

And today I found that the ABAP help documentation also contains similar syntax graphs to illustrate the grammar of each keyword.

 

Just open any ABAP report, select a keyword and press F1, and you will find "ABAP Syntax Diagrams".

[Screenshot: F1 help with the "ABAP Syntax Diagrams" entry]


Double-click on it and choose a keyword like "APPEND" in the right area:

[Screenshot: keyword selection, e.g. APPEND]

Then the syntax diagram is opened. Click the small "+" icon to drill down.

[Screenshot: syntax diagram with the "+" drill-down]

Click the "?" icon to get the meaning of each legend used in the graph.

[Screenshot: legend explanation]

I hope this small tip can help ABAP newbies to fall in love with ABAP.

Step by Step to generate ABAP code automatically using Code Composer


Today I was going through the SAP help for BRFplus and came across an introduction to the ABAP Code Composer.

 

I would like to share with you a very simple example to demonstrate its logic.

[Screenshot: ABAP Code Composer help page]

How do you find the above help document quickly? Just google the keywords "ABAP CODE COMPOSER" and click the first hit.

[Screenshot: Google search result]

And here below are the steps to generate ABAP code containing a singleton pattern using the ABAP Code Composer.

 

1. Create a new program with type "INCLUDE":

[Screenshot: creating the INCLUDE program]

Paste the following source code into the include and activate it:

 

*---------------------------------------------------------------------*
*       CLASS $I_PARAM-class$ DEFINITION
*---------------------------------------------------------------------*
*       Instance pattern: SINGLETON
*---------------------------------------------------------------------*
CLASS $I_PARAM-class$ DEFINITION
@if I_PARAM-GLOBAL @notinitial
\ PUBLIC
@end
\ FINAL CREATE PRIVATE.

  PUBLIC SECTION.
    INTERFACES:
      $I_PARAM-interface$.
    CLASS-METHODS:
      s_get_instance
        RETURNING
          value(r_ref_instance) TYPE REF TO $I_PARAM-interface$
@if I_PARAM-exception @notinitial
        RAISING
          $I_PARAM-exception$
@end
\.

  PRIVATE SECTION.
    CLASS-DATA:
      s_ref_singleton TYPE REF TO $I_PARAM-interface$.
    CLASS-METHODS:
      s_create_instance
        RETURNING
          value(r_ref_instance) TYPE REF TO $I_PARAM-class$
@if I_PARAM-exception @notinitial
        RAISING
          $I_PARAM-exception$
@end
\.
ENDCLASS.                    "$I_PARAM-class$ DEFINITION

*---------------------------------------------------------------------*
*       CLASS $I_PARAM-class$ IMPLEMENTATION
*---------------------------------------------------------------------*
*       Instance pattern: SINGLETON
*---------------------------------------------------------------------*
CLASS $I_PARAM-class$ IMPLEMENTATION.
************************************************************************
*       METHOD S_CREATE_INSTANCE
*----------------------------------------------------------------------*
*       Constructs an instance of $I_PARAM-class$
*......................................................................*
  METHOD s_create_instance.
*    RETURNING
*      value(r_ref_instance) TYPE REF TO $I_PARAM-class$
@if I_PARAM-exception @notinitial
*    RAISING
*      $I_PARAM-exception$
@end
************************************************************************
@if I_PARAM-exception @notinitial
    DATA:
      l_ref_instance TYPE REF TO $I_PARAM-class$.
************************************************************************
    CREATE OBJECT l_ref_instance.
@slot object_construction
*   Construction of the object which can lead to $I_PARAM-exception$
@end
    r_ref_instance = l_ref_instance.
@else
    CREATE OBJECT r_ref_instance.
@end
  ENDMETHOD.                    "s_create_instance
************************************************************************
*       METHOD S_GET_INSTANCE
*----------------------------------------------------------------------*
*       Keeps track of instances of own class -> only one
*......................................................................*
  METHOD s_get_instance.
*    RETURNING
*      value(r_ref_instance) TYPE REF TO $I_PARAM-interface$
@if I_PARAM-exception @notinitial
*    RAISING
*      $I_PARAM-exception$
@end
************************************************************************
    IF s_ref_singleton IS NOT BOUND.
      s_ref_singleton = s_create_instance( ).
    ENDIF.
    r_ref_instance = s_ref_singleton.
  ENDMETHOD.                    "s_get_instance
ENDCLASS.                    "$I_PARAM-class$ IMPLEMENTATION

The strings wrapped in a pair of $ signs, for example "$I_PARAM-class$", act as importing parameters of the Code Composer. During code generation you must tell the Code Composer the actual class name to use in the generated code by passing it to this parameter.

 

This activated include will act as the code generation template. We now have the following importing parameters:

 

  • $I_PARAM-class$
  • $I_PARAM-global$
  • $I_PARAM-interface$
  • $I_PARAM-exception$

 

2. Create another driver program which calls the Code Composer API to generate the code with the help of the template include created in step 1. The complete source code of this program can be found in the attachment.

[Screenshot: driver program]

I just use cl_demo_output=>display_data( lt_tab_code ) to simply print out the generated source code.

 

In the output we see that all of the placeholders ( $XXXX$ ) in the template have been replaced with the hard-coded values we specified in the driver program.

[Screenshot: generated source code]


Although the Google results show that the Code Composer API is marked as being for SAP internal use only and thus may not be used in application code, I think we can still leverage it to design tools which improve our daily work efficiency.

[Screenshot: note about internal use only]

Step by Step Type Group understanding and explanation in Detail.


Introduction: This document explains the creation of a type group and its use in ABAP programs.

 

What is a type group: There are several type groups available in SAP, for example 'ABAP' and 'SLIS'. To use them in a program we use the keyword 'TYPE-POOLS'. A type group allows us to define non-predefined types; the collection of all such non-predefined types is known as a type pool or type group.

 

In simple terms, if we want to use some custom types in various programs, we need not define them separately in each program; we can simply create a type group in the ABAP Dictionary and use it in our programs.

 

Steps to create:

 

Go to transaction SE11, select the radio button 'Type Group' and click the 'Create' button.


[Screenshot: SE11 with the 'Type Group' radio button]

 

Note: The maximum length of a type group name is 5 characters.

 

Provide a meaningful description in the short text

 

[Screenshot: short description]


and click the save button.

 

Then the next screen appears, where we can write our source code as highlighted in the screenshot below. As an example, I have created two constants. One thing that needs to be taken care of while declaring structures, constants, etc. in a type group is that every object must start with '<name of type group>_'. In this example the declared constants start with 'ZTYPE_'. The system gives a syntax error if this naming convention is not followed.

 

[Screenshot: type group source code with the two constants]

 

Now save and activate the type group.

 

Now we can use the type group created above in our SAP programs. Please find below the screenshot for the same; a textual sketch follows it.


[Screenshot: report using the type group]
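Since the screenshots may be hard to read, here is a textual sketch of both parts, using the hypothetical type group ZTYPE from above (the constants are invented for illustration):

* Source of type group ZTYPE, maintained in SE11; the first statement must
* be TYPE-POOL and every declared name must start with ZTYPE_.
TYPE-POOL ztype.

CONSTANTS: ztype_c_success TYPE c VALUE 'S',
           ztype_c_error   TYPE c VALUE 'E'.

* Usage in any ABAP program:
REPORT zdemo_type_group.

TYPE-POOLS: ztype.

WRITE: / ztype_c_success,
       / ztype_c_error.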


                      The output will be the following:

 

[Screenshot: program output]

 

Summary: In this way we can create a type group and use it in ABAP programs.

 

 

 

Thanks and enjoy coding,

Pavan Golesar

SAP ABAP, SAP Netweaver Gateway, SAP FIORI.

Advanced navigation to source code from the message long text


Hi again.

 

In the previous post I described the basic concepts of programming with SE91 messages:

 

How to use messages properly in the code. Common rules.

 

If you are used to OO programming, your logic probably relies on class-based exceptions.

 

In most cases I would choose the IF_T100_MESSAGE variant to explain the reason for the error (Rule #4).

 

Meanwhile, sometimes you have foreign code that you are not supposed to modify, and this code raises an exception.

 

Now we are talking about the case where you want to output the message immediately. To keep it abstract, let's just use the cx_root example.

 

If you go the easiest way:

try.
    do_something( ).
  catch cx_root into data(lo_cx).
    message lo_cx->get_text( ) type 'I'.
endtry.

you will get the popup:

[Screenshot: information popup]

 

but unfortunately the F1 button won't work here. The debugger is on you, my friend.

 

But let's just imagine that we press F1 and get documentation like this:

 

[Screenshot: message long text with the "Navigate to source" link]

and when we click the "Navigate to source" link, we go directly to the source code where the exception was raised:

[Screenshot: source code position of the exception]

 

Pretty cool, isn't it?! =)

 

Let's see how many actions we need to implement this. As said before, I wanted to reuse the standard SAP UI without creating my own screen.


1. We need 3 SET/GET parameters.


Go to SE80.


Edit object (Shift+F5) -> Enhanced options -> SET/GET parameter ID -> type zcw_nav_prog -> Create (F5).


Repeat these steps for the zcw_nav_incl and zcw_nav_line parameters.

 

2. Go to SE38 and create a very simple program:

 

program zcw_navigate_to_source.

parameters:
  p_prog type syrepid memory id zcw_nav_prog,
  p_incl type syrepid memory id zcw_nav_incl,
  p_line type num10 memory id zcw_nav_line.

start-of-selection.
  /iwfnd/cl_sutil_moni=>get_instance( )->show_source(
    iv_program    = p_prog             " source program
    iv_include    = p_incl             " source include
    iv_line       = conv #( p_line )   " source line
    iv_new_window = ''                 " new window
  ).

I really hope you have this component. If not, you can find something similar in the where-used list of the 'RS_ACCESS_TOOL' function module.

 

3. Create ZCW_NAV_SRC transaction in SE93.


Choose a report transaction and assign the ZCW_NAVIGATE_TO_SOURCE report to it.

 

4. We need a real SE91 message.


Just create a message with the text &1&2&3&4. Remove the self-explanatory flag and go to the long text.


Put the cursor where you wish to place a link ->Insert menu -> Link


Choose "Link to transaction and skip first screen" as the document class, and use the transaction from step 3.

"Name in Document" is the actual text that you see on the screen, like "Navigate to source".


5. Now we're ready to code.

try.
    do_something( ).
  catch cx_root into data(lo_cx).
    " get the source code position
    lo_cx->get_source_position(
      importing
        program_name = data(lv_prog)   " ABAP program: current main program
        include_name = data(lv_incl)
        source_line  = data(lv_line) ).

    " it's not possible to store an integer as a parameter value
    data(lv_line_c) = conv num10( lv_line ).

    " export the parameter values
    set parameter id 'ZCW_NAV_PROG' field lv_prog.
    set parameter id 'ZCW_NAV_INCL' field lv_incl.
    set parameter id 'ZCW_NAV_LINE' field lv_line_c.

    types:
      begin of message_ts,
        msgv1 type bal_s_msg-msgv1,
        msgv2 type bal_s_msg-msgv2,
        msgv3 type bal_s_msg-msgv3,
        msgv4 type bal_s_msg-msgv4,
      end of message_ts.

    " parse our string into the message format
    data(ls_message) = conv message_ts( lo_cx->get_text( ) ).

    " output; don't forget we always use a static message definition.
    " Put here the message created in step 4.
    message id 'ZCW_COMMON' type 'I' number 124
      with ls_message-msgv1
           ls_message-msgv2
           ls_message-msgv3
           ls_message-msgv4.
endtry.

That's it! What I actually did was put this handling logic into a minimalistic method ZCL_MSG=>CX( lo_cx ), which I now actively use in my code.

 

I hope you enjoyed it.

 

Petr.

 



Clarification on Secondary Indexes limitations on database tables


A couple of frequently asked questions in the SCN forum:

1. How many secondary indexes can be created on a database table in SAP?

2. How many fields can be included in a secondary index (SAP)?

 

Seeing many threads on the above two questions in the SCN forum marked as 'Answered' (correctly) with different answers, I decided to test the limitations of secondary indexes myself. The different answers include 9, 10 (1 primary and 9 secondary), 15, 16 (15 secondary, 1 primary), and 'no such limit'.

 

So, to check, I created secondary indexes on table SFLIGHT.

 

1. How many Secondary Indexes can be created on a database table in SAP?

Ans. I created 18 secondary indexes, and the system did not object at 9, 10, 15 or even 16.

 

[Screenshot: 18 secondary indexes on SFLIGHT]

 

So I believe there is no such limit on the number of secondary indexes that can be created on a database table in SAP. However, it is not at all recommended to create more than 5 secondary indexes on a database table.

 

2. How many fields can a secondary index contain?

 

To test this, I created a secondary index for the EKKO table and assigned all of the table's fields (134) to the index. The system then raised an error message saying that a maximum of 16 fields can be assigned.

 

[Screenshot: error message 'maximum of 16 fields can be assigned']

 

So a secondary index can contain a maximum of 16 fields of a database table. However, it is recommended that a secondary index should not exceed 4 fields.

 

 

> These are the points to be remembered before creating an Index.


a. Create secondary indexes for tables that you mainly read. Every time we update a database table, its indexes are updated as well. Let's say there is a database table in which hundreds of entries are created (or updated) in a single day: avoid indexes in such cases.

b. We should take care that an index doesn't have more than 4 fields and that the number of indexes doesn't exceed 5 for a database table. Otherwise the optimizer might choose the wrong index for a particular selection.

c. Place the most selective fields at the beginning of an Index.

d. Avoid creating an index on a field that is not always filled, i.e. if its value is initial (null) for most entries in the table.

 

> These are the points to be remembered while coding ABAP programs for effective use of indexes, i.e. to avoid full table scans (a small sketch follows the list).

a. In the SELECT statement, always put the condition fields in the same order as in the index. The sequence is very important here.

b. If possible, try to use positive conditions such as EQ and LIKE instead of NOT and IN which are negative conditions.

c. The optimizer might stop using the index if you use an OR condition. Try to use the IN operator instead.

d. The IS NULL operator can cause a problem for the Index as some of the database systems do not store null values in the Index structure.
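As a small sketch of point (a), assume a hypothetical secondary index on EKKO consisting of the fields BUKRS and BSTYP, in that order. A selection that can use it would look like this:

SELECT ebeln, bukrs, bstyp
  FROM ekko
  WHERE bukrs = '1000'    " 1st index field
    AND bstyp = 'F'       " 2nd index field, same order as in the index
  INTO TABLE @DATA(lt_ekko).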

 

 

Thanks and Regards,

Vijay Krishna G


Real life examples & use cases for ABAP 740 and CDS views


It has been a long time since I posted my previous blog, which drew more attention than I expected, and I was wondering what could come next. Luckily I am working on an S/4HANA project and have the opportunity to try the new syntax options, so I will compile some real-life examples and test samples of the new syntax options and CDS views and try to explain why and how they are useful and why we should try to use them.

 

First of all, it is possible to get information about all of them in the ABAP keyword documentation under the release-specific changes, as shown below. There are many changes; in my blog I will only briefly mention the ones I had the opportunity to use.

 

Some of the examples were only created to see what can be done, so they may not fully fit a business case.

 

Release-specific changes branch in the keyword documentation

[Screenshot: release-specific changes in the keyword documentation]

 

1. Inline declarations

Field symbol with inline declaration

[Screenshot: field symbol with inline declaration]


Data declaration

[Screenshot: inline data declaration]

I have been coding in ABAP for a long time, and I can say it is really nice to avoid the necessity of going to the top of the block just to define something; using inline declarations is really practical and time saving.
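Since the examples above are screenshots, here is a small textual sketch (the table lt_flights and its fields are assumptions):

" field symbol declared inline at its first use
LOOP AT lt_flights ASSIGNING FIELD-SYMBOL(<fs_flight>).
  <fs_flight>-price = <fs_flight>-price * '1.1'.
ENDLOOP.

" variable declared inline; its type is derived from the expression
DATA(lv_count) = lines( lt_flights ).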

 

2. Constructor expressions (thanks to Ahmet Yasin Aydın)


Constructor expression new

[Screenshot: constructor expression NEW]

Value operator

[Screenshot: VALUE operator]


 

It is again time saving, improves readability, and helps us keep source code shorter; needless to say, there are countless different usage options.
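A textual sketch of both operators (the class zcl_po_reader and the types are assumptions):

" NEW: create an instance and declare the reference variable inline
DATA(lo_reader) = NEW zcl_po_reader( ).

" VALUE: build an internal table without explicit APPEND statements
TYPES tt_names TYPE STANDARD TABLE OF string WITH EMPTY KEY.
DATA(lt_names) = VALUE tt_names( ( `Ashok` ) ( `Thomas` ) ).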


3. Rule changes for joins

[Screenshot: keyword documentation on the new join rules (1/2)]

[Screenshot: keyword documentation on the new join rules (2/2)]


The above statements are taken directly from the ABAP keyword documentation, and they allow us to build more complex join statements: we can now use a restriction from another left-side table in the ON condition, and we can use fields from left-side tables in the WHERE condition, which is quite revolutionary. It is now possible to build one big SELECT statement, which means some reports can be built using a single SELECT, also with the help of other changes (literals, CASE and more) that can be seen in the keyword documentation. I verified this and coded some reports in both styles and compared the results: it is simpler to code and faster on HANA. Here comes the example.

 

Also, the restriction to use only equality comparisons in the ON condition has been removed for outer joins.


I excluded the field list from the select below, since it was really a big one.

Joins (some of the tables might better be connected using an inner join; this was just created to test left outer joins):

 

[Screenshot: SELECT with multiple left outer joins]

 

The where condition also contains fields from left-side tables:

[Screenshot: WHERE condition using fields from left-side tables]
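Since the full statement is only visible in the screenshots, here is a reduced sketch of what the relaxed rules allow (tables and values are assumptions; release 7.40 or higher assumed):

SELECT ekko~ebeln, ekpo~ebelp, ekpo~matnr
  FROM ekko
  LEFT OUTER JOIN ekpo
    ON  ekpo~ebeln = ekko~ebeln
    AND ekpo~menge > 0               " non-equality comparison in the ON condition
  WHERE ekko~bukrs = '1000'          " restriction on a left-side table
  INTO TABLE @DATA(lt_items).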

 

4. Source code that can only be edited in Eclipse (ADT)

 

This one is not a syntax option, but it is something we need to know, so I wanted to add it to my list. Eclipse has been available for a long time, but so far we were able to edit every development object either in Eclipse or in SAP GUI (correct me if I am wrong). That has changed: there are now CDS views and AMDPs (ABAP Managed Database Procedures) that can only be edited in Eclipse. So if for any reason you need to develop these objects, you also need to have Eclipse on your PC, and it may be wise to start coding in Eclipse if you have not done so yet.

 

Message if we try to edit an AMDP in the GUI:

[Screenshot: GUI message for an AMDP]

 

Eclipse edit display:

 

[Screenshot: AMDP opened in Eclipse]

 

5. CDS Views

 

After HANA we had different tools like analytical and calculation views, plus external views and database procedure proxies to read them directly from ABAP. But there are practical difficulties in using them if most development tasks in a project are handled by ABAP programmers: learning SQLScript, granting project team members authorizations on the database level, and having a separate kind of transport management, which can easily cause problems. CDS views can be a good alternative; they are at least managed in the application layer and have the same transport procedure as older ABAP development objects.


We keep searching for use cases for CDS views in our project. So far we have created master data views and tried to create some reusable CDS views which can be used by several reports, as can be seen below.


We also tried to convert some old logic (select data from different tables and merge them inside loops) into CDS views. Their performance is better, although we could not test with really big data yet. I also need to mention that it is at the same level of performance as the big Open SQL select explained in point 3.

 

Example view on material data; it can be used in SELECT statements in ABAP or viewed in SE16.

[Screenshot: CDS view on material data]
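A reduced sketch of such a view in CDS DDL (the names are assumptions; as mentioned in point 4, this source can only be edited in Eclipse/ADT):

@AbapCatalog.sqlViewName: 'ZVMATBASIC'
define view Z_Material_Basic
  as select from mara
    inner join makt
      on makt.matnr = mara.matnr   // language filter omitted in this sketch
{
  key mara.matnr,
      mara.mtart,
      makt.maktx
}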

 

6. CDS Union all example

This helped us to simplify a case: there are different price tables with different structures, and several reports need to read price data. We designed one big structure with the necessary fields from the different tables, and now we can use this one view instead of 5 different tables wherever we need to read price data. I am only showing the first two tables here, but 3 more tables are added with union all, and all of them can be read at once now.

 

[Screenshot: CDS view with union all]

Result for the view

 

[Screenshot: result of the union view]
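A reduced sketch of the union pattern in CDS DDL (the price tables and fields are invented; the real view combines five tables the same way):

@AbapCatalog.sqlViewName: 'ZVALLPRICES'
define view Z_All_Prices
  as select from zprice_a    // hypothetical price table 1
{
  key matnr,
      price
}
union all
  select from zprice_b      // hypothetical price table 2
{
  key matnr,
      price
}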

 

There are many other changes to explore; for some of them it may take a long time to find a proper case to apply them. I would be happy to hear whether you have also used some of the new syntax options and how they made your life easier.

The Add-on Assembly Kit 5.00 is available


"If you develop industry-, company-, or country-specific enhancements to SAP solutions, the SAP Add-On Assembly Kit can help you plan and deliver those enhancements as software add-ons. The SAP Add-On Assembly Kit guarantees quality software development by using a standardized process flow from planning to delivering the add-on. The delivery tools work smoothly with SAP’s maintenance strategy, helping you integrate new developments into your existing environment and providing maintenance throughout the enhancement’s life cycle. The SAP Add-On Assembly Kit and its comprehensive documentation help ensure high-quality product development from the planning phase. The add-on tools also help you efficiently install, update, and maintain the enhancement."

 

see help.sap.com/aak

 

In other words, if you want to stop delivering your ABAP software via transports, you can request the AAK. You will get it via a separate contract.

 

Your ABAP software can even be uninstalled conveniently via the SAP tools SAINT/SPAM.

BSP application which adds external document (URL) to the purchase requisition



This blog explains how to create a URL attachment on a purchase requisition using a BSP application.

Prerequisites

  • Basic knowledge on BSP applications, OOABAP and HTML.

Creating URLs manually from SAP

We can create URL attachments manually by going to transaction ME51N and clicking the "Services for Object" button -> Create -> Create External Document (URL).

[Screenshot: Services for Object menu in ME51N]



You will get a popup; enter the title and address and click the green tick shown in the screenshot below.

[Screenshot: popup for title and address]



Now the URL is saved in SAP.

[Screenshot: URL attachment saved]




Step by step procedure to create URL attachments using BSP application.

Step 1: Create a BSP application using transaction SE80: choose BSP Application from the drop-down and give the application a name.


Step 2: Right-click on the BSP application and create the controller.

[Screenshot: creating the controller]



Step 3: Create the controller class in the controller, as shown in the screenshot below.

[Screenshot: controller class]

 



Step 4: Place the cursor on the DO_REQUEST method and click the redefine button.

[Screenshot: redefining DO_REQUEST]

 



Step 5: We are going to implement our logic within the DO_REQUEST method.


Here I am giving an overview of the implementation. For the complete code, find the attached "ABAP code document" in this blog.

Follow the steps below for the URL attachment; a consolidated code sketch follows step vii.


  i. Get all the form field values using the method "get_form_field". For example:

              CALL METHOD request->get_form_field
                EXPORTING
                  name  = c_url
                RECEIVING
                  value = lv_url.

 

ii. Create the reference for the view (UI). For example:

       r_obj_view = create_view( view_name = 'zcreateexturl_view.htm' ).

 

iii. To specify that the purchasing document is a purchase requisition, use the business object type BUS2105 and pass it as a parameter to the function module below.

 

iv. Use the standard function module "SO_FOLDER_ROOT_ID_GET" to get the root folder ID (all URL attachments are stored under this folder).

 

v. Pass the title and URL as parameters to the function module "SO_OBJECT_INSERT". This function module will create the object ID.


vi. To make the necessary changes to the database, create the binary relation and commit it. Now the title and URL will be attached to the purchase requisition.

 

vii. Send the response back to the UI using the method set_attribute, whether it was created successfully or failed.

For example:

             CALL METHOD r_obj_view->set_attribute
               EXPORTING
                 name  = c_message
                 value = lv_message.
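To tie steps iii to vi together, here is a consolidated sketch. The variables lv_title, lv_url and lv_banfn are assumptions coming from the form fields read in step i, and the '&KEY&' content line follows the common SO_OBJECT_INSERT pattern:

DATA: ls_folder_id TYPE soodk,
      ls_obj_data  TYPE sood1,
      ls_object_id TYPE soodk,
      ls_objcont   TYPE soli,
      lt_objcont   TYPE STANDARD TABLE OF soli,
      ls_obj_a     TYPE borident,
      ls_obj_b     TYPE borident.

* root folder for attachments (step iv)
CALL FUNCTION 'SO_FOLDER_ROOT_ID_GET'
  EXPORTING
    region    = 'B'
  IMPORTING
    folder_id = ls_folder_id.

* URL object with the title entered on the UI (step v)
ls_obj_data-objla  = sy-langu.
ls_obj_data-objdes = lv_title.
CONCATENATE '&KEY&' lv_url INTO ls_objcont-line.
APPEND ls_objcont TO lt_objcont.

CALL FUNCTION 'SO_OBJECT_INSERT'
  EXPORTING
    folder_id        = ls_folder_id
    object_type      = 'URL'
    object_hd_change = ls_obj_data
  IMPORTING
    object_id        = ls_object_id
  TABLES
    objcont          = lt_objcont.

* link the URL object to the purchase requisition and commit (steps iii + vi)
ls_obj_a-objtype = 'BUS2105'.                      " purchase requisition
ls_obj_a-objkey  = lv_banfn.                       " requisition number
ls_obj_b-objtype = 'MESSAGE'.
CONCATENATE ls_folder_id ls_object_id INTO ls_obj_b-objkey.

CALL FUNCTION 'BINARY_RELATION_CREATE_COMMIT'
  EXPORTING
    obj_rolea    = ls_obj_a
    obj_roleb    = ls_obj_b
    relationtype = 'URL'.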

 

 

Step 6: Right-click on the BSP application, create the page and choose the radio button "View", as shown in the screenshot below.

[Screenshot: creating the view page]

 

Click on the Layout tab and design the UI using HTML code.

Please find the HTML code in the attached "BSP code document".


Step 7: Declare the page attributes as shown in the screenshot below. The values for these variables are sent from the controller.

[Screenshot: page attributes]



Step 8: Right-click on the BSP application and test it. You will be navigated to the browser.

***Note: Before you test it, make sure both the CONTROLLER and the VIEW are activated.***

 

Step 9: Enter the purchase requisition number and hit Enter.

[Screenshot: entering the purchase requisition number]




Step 10: The item details of the given purchase requisition number and two input fields will be displayed. Enter the title and URL and click the submit button.

[Screenshot: item details with title and URL input fields]

 

 

Step 11: If you get the status message, your URL has been attached to the purchase requisition successfully.

[Screenshot: status message]



Step 12: Check whether the URL was added to the purchase requisition (ME51N) or not.

[Screenshot: attachment list in ME51N]

Hungarian beginner's course - a polemic scripture against Hungarian Notation


I love the atmosphere of constructive debate: lively and resolutely argued, but never personal. In this mood, the following blog post was composed.

 

There is no way around it when auditing, maintaining or understanding someone else's code: "lt_" stalks you! Everywhere! In SAP's code. In customers' ABAP code and in their official developer guidelines. In SAP PRESS books from Rheinwerk Verlag, even though they also published SAP's official development guidelines (which tell us that this is bad coding). WTH was their editorial office hired for? I also think it's a bad idea, and I'm not alone with this opinion. I'm gonna tell you right now why, in the following blog post.

 


Hungarian Notation, as it is called in most cases, was invented at Microsoft, which for most developers on the planet is reason enough for its existence. That's the tale.

 

The truth is: the founder of Hungarian Notation was Charles Simonyi, a Hungarian developer at Microsoft (that's why it's called "Hungarian Notation") who wrote an article about it. But its epidemic spread, misunderstood by masses of developers around the planet, was not his intention!

 

Following my main rule ("I don't like metaphors, I prefer to speak in pictures"), I'll illustrate the problems with an example:

 

Using three indicators to identify a data type

 

Let's take a common data object name, lt_ekko. What does its name tell us? It tells us that it's a local table whose line type equals the well-known structure EKKO. To make a long story short: it tells us masses of redundant information.

 

1. The local/global indicator

 

For an ambitious software developer, global data objects don't exist. We should not work with them; that's what SAP has told us for years, and they were right and still are. But why do their own employees permanently break this rule?

 

Developers working with other programming languages cannot believe that ABAPers still work with methods from before OO was invented. In a well-encapsulated environment, global data has no reason to exist, because it fundamentally conflicts with the object-oriented software development paradigm.

 

And, this question may be allowed: what is the definition of global? All data types defined in a program or class are local by definition, because outside this development object they do not exist. The only data types which exist globally are defined in the Data Dictionary. The same goes for classes and interfaces (which are just data types with a higher grade of complexity): global classes and interfaces are defined in SE24/SE80 and not inside an ABAP program. A class defined in an ABAP program is a local class by definition.

 

In conclusion, all so-called global data types are also local by definition (program-wide local, to be exact). This doesn't touch the rule that we should not use them, but for this blog post it's important that an ABAP program cannot define truly global data types, so the prefix "g" can never be used correctly. This results in the question: if everything is local by definition, why the hell do we need a prefix for it?

 

And, pals: don't tell me a static class attribute is the same as a so-called global program variable just because it's valid in the whole class and accessible system-wide! An attribute is called an attribute because it has a context (the class's context!), and this is quite different from what a variable is. Besides, the accessibility of such an attribute depends on its visibility configuration. A private attribute is not accessible system-wide.

 

2. The data type dimension indicator

 

The next question is why I should use an indicator describing the dimension of a data type. A table is just a data type, the same as a structure or a field. In most cases I simply don't know what dimension a data type I work with has, e.g. while working with references and reference variables (which we should do most of the time). And what is, from a developer's view, the difference between a CLEAR on a field and the same command on an internal table? It does exactly the same: the CLEAR command clears whatever stands to the right of it. It's that simple. So what information does the "t" in lt_ekko give me in this context?

 

What about nested tables? In

 

TYPES:
  begin of ls_main,
    materials type standard table of mara with default key,
    ....
  end of ls_main,
  lt_main type standard table of ls_main with default key.

 

the table materials should be named lt_materials, no? Why not? Why does such "important" information, namely that this is a table, suddenly become worthless just because it's a component? That this is a table is only important in relation to the access context. Which means: for a statement like

 

ASSIGN COMPONENT 'materials' OF STRUCTURE ls_main ....

 

materials is a component, no more and no less.

 

I'm not kidding: I have really read developer guidelines which strictly order that a field symbol has to have the prefix "fs_", which is really dumb, because a field symbol already has its own syntax element definition "<...>"! Is this the way a professional developer should work?

 

The next example is a guideline which says that I must not use "lv_" for local variables, but "li_" for local integers, "ln_" for local numerics, "lc_" for local characters (which conflicts with local constants) and so on. A developer needs a list of "magic prefixes" on his desk to bear all these dozens of prefixes in mind!

 

But this causes a problem: what if you have to change the data type definition during the development or maintenance process? You really have to rename the variable through the complete call hierarchy throughout the system, which means you may have to touch development objects only for the renaming process. And you have to test all these objects after changing the code! What a mess! You need some hobbies if you have time to fill, but not this kind of evil work.

 

It's a well-known rule: the more development objects you have to change, the more likely it is that you'll run into objects which are locked by other developers.

 

A public example: the change of data type definitions from 32 to 64 bit in Windows. All the developers who used Hungarian Notation ended up with data type names referring to a definition which has nothing to do with the actual type!

 

What about casting? I could find more questions like these, but that'll be it for now, because it's enough for you to get the key statement.

 

3. The structure's description

 

This is another piece of surplus information, because the structure's or the basic data type's definition is just a double click (in SAP GUI) or a mouse-over (Eclipse) away from the developer's cursor.

 

Now that we know which redundant, surplus information we get, let's have a look at what kind of important information we won't get from lt_ekko:

 

What kind of data will we find in lt_ekko? EKKO contains different kinds of documents: purchase order headers, contract headers, and so on. And on deeper inspection, there are a few different kinds of purchase orders. Standard PO? Cross-company? What a cross-company purchase order exactly is depends on the individual definition of the customer's business process, so its identification is not easy!

 

To get to know what kinds of documents are selected into table lt_ekko, we have to retrace the data selection and the post-selection processing, which is much more complex than a double click. For this reason, this is the most important information we should place in the table's name!

 

If you select customers, what do you select in detail? Ship-to partners? Payers? Or the companies who will get the bill? Whatever you do, lt_kna1 won't tell me that! ship_to_partners will!

Conclusion:

 

To get rid of all the surplus information and replace it with relevant information, we should not name the table lt_ekko but cc_po_hdrs, to demonstrate: these are multiple (hdrs = plural = table, if you really want to encode that) cross-company purchase order headers. A loop could look like this:

 

LOOP AT cc_po_hdrs     "<--- plural = table

INTO DATA(cc_po_hdr).  "<--- singular = record of

   ....

ENDLOOP.

 

No surplus information, all relevant information included. Basta!

 

I am not alone

 

You may ask why this nameless, silly German developer is telling you how to do your job. I am not alone, as the following quotes prove:

 


  • Bjarne Stroustrup, the creator of C++, wrote in his C++ Style and Technique FAQ:

“No I don’t recommend ‘Hungarian’. I regard ‘Hungarian’ (embedding an abbreviated version of a type in a variable name) a technique that can be useful in untyped languages, but is completely unsuitable for a language that supports generic programming and object-oriented programming”


  • Robert Martin, one of the founders of the Agile software development movement, wrote in “Clean Code: A Handbook of Agile Software Craftsmanship”:

"...nowadays HN and other forms of type encoding are simply impediments. They make it harder to change the name or type of a variable, function, member or class. They make it harder to read the code. And they create the possibility that the encoding system will mislead the reader.”

  • Linus Torvalds wrote in the Linux kernel coding style documentation:

"Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged—the compiler knows the types anyway and can check those, and it only confuses the programmer.”

"Brain damaged", to repeat it. Is this the way we want to talk about work we should be proud of?


Conclusion


Of course I know that masses of developers will disagree, only because they have always worked like this (they learned it from others, years or decades ago) and they don't want to change. Hey, we're software developers! We are the ones who permanently have to question the things we do. Yesterday we did procedural software development, today our whole world is object oriented, tomorrow we're gonna work with unbelievable masses of "Big Data", resulting in completely new work paradigms we don't even know yet. And those guys are too lazy to question their way of naming data types? Are you kidding? We are well-paid IT professionals, permanently ahead in the latest technologies, working on the best ERP system ever (sic!), and SAP itself shows all of us that it can throw away the paradigms of 20 years to define new ones (to highlight the changes with S/4HANA, which I never would have thought possible). Let's learn from our colleagues who also develop applications with a high grade of complexity. Let's learn from the guys who invented the paradigms we work with. Let's forget the rules of yesterday...


Appendix


I've been asked lately whether I dislike prefixes in general. The answer is: no. Indeed, there are prefixes that do make sense:

  • Importing parameters are read-only, so they may have the prefix "i_".
  • Exporting parameters have to be initialized, because their value is undefined if they are not filled with a valid value. So we should give them the prefix "e_".
  • Changing parameters transport their value bidirectionally, so they should be marked with "c_".
  • Returning parameters are returned by value, so we should mark them with the prefix "r_".

 

This is a naming rule I'd follow and support if requested, because these prefixes transport relevant, non-redundant information (in terms of the things which are not obvious) that influences the way we handle these parameters.
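A sketch of a method signature following this rule (the method name and tt_po_headers are hypothetical):

methods calculate_totals
  importing i_company_code type bukrs
  exporting e_messages     type bapiret2_t
  changing  c_po_headers   type tt_po_headers. " hypothetical table type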


Request for comments


Does your opinion differ? Am I wrong, completely or in some details? Would you like to back me up? Feel free to leave a comment...

 

Disclaimer: English isn't my mother tongue. Although I do my very best, some things may be unclear, mistakable or ambiguous by accident. In that case, I am open to improving my English through your suggestions.

Reasons for so many ABAP Clones


Note: I originally published the following post on my company's blog on software quality. Since it might be interesting for many ABAP developers, I re-publish it here (slightly adapted).

 

From the code audits and quality control of ABAP projects we do in our company, we observe again and again that ABAP code tends to contain a relatively high rate of duplication within the custom code. The data of our benchmark confirms this impression: of the ten projects with the highest rate of duplicated code, six are written in ABAP (but only 16% of all projects in the benchmark are ABAP projects). In this post I will discuss the reasons for this tendency towards clones in ABAP.

 

What is Cloning and Why is it Important?

 

Code clones are duplicated fragments (of a certain minimal length) in your source code. A high amount of duplicated code is considered to clearly increase maintenance efforts in the long term. Furthermore, clones bear a high risk of introducing bugs, e.g. if a change should affect all copies but was missed in one instance. For more background information see e.g. the post of my colleague or »Do Code Clones Matter?«, a scientific study on that topic.

 

The following figure shows a typical example of an ABAP clone:

 

[Figure: a typical ABAP clone pair]

 

The code is fully identical except for the name of the variable over which it iterates. As mentioned before, we see many such clones in many ABAP projects (frequently the cloned part is much longer; some hundred lines are no surprise).
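Since the figure may not render here, a minimal constructed stand-in for such a clone pair (all names are invented):

LOOP AT lt_orders INTO ls_order.
  lv_total = lv_total + ls_order-amount.
ENDLOOP.

" ... elsewhere, the identical logic again; only the variable name differs:
LOOP AT lt_orders INTO ls_order_copy.
  lv_total = lv_total + ls_order_copy-amount.
ENDLOOP.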

 

So, What Might be the Reasons for the High Tendency Towards Code Cloning in ABAP?

 

First, it is not a lack of language features for re-using code: the most important mechanism is the ability to structure code in re-usable procedures. There exist form routines, function modules and methods—but it seems the barrier to consistently using these concepts is higher than in other languages. Why? I see three main causes:

 

  • Poor IDE support
  • Constraints in the development process
  • Dependency fear

 

Besides these constructive reasons, there is also a lack of analysis tools to detect duplicated code. The SAP standard tools are not able to detect clones within custom code, so a third-party tool is required for clone detection. However, in this post I will focus on the aforementioned constructive reasons and discuss them.

 

Poor IDE support

 

In every language, the fastest way to implement a function, which only differs in a tiny detail from an already existing function, is to copy the source code and modify it. To avoid the duplication, these are common practices:

 

  • Extract the common code to a separate procedure where it can be used from the old and the new functionality
  • Add a parameter to a procedure's signature to make it more generic
  • Rename a procedure (to reflect the adapted common function)
  • Move a procedure (method, function module) to a different development object (class, function group) for common functionality
  • Introduce a base class and move common members there

 

Most IDEs for other languages provide support for these refactorings, e.g. method calls are updated automatically if a method was moved. The ABAP Workbench SE80 (which many developers still use) provides hardly any of the refactoring support required to resolve duplicates. Even with ADT, only refactorings that are local to one development object are supported yet. This makes restructuring the code more difficult: it is more time-consuming and the risk of introducing errors is increased. The last issue is especially relevant since not even syntax errors in non-edited objects might be detected; these errors first unveil at runtime or during the next transport to another SAP system. All this makes duplicating ABAP code more »productive« during the initial development—but it will hinder maintenance as in any other programming language.
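To illustrate the kind of restructuring meant here, a minimal sketch of a manual »extract method« refactoring in ABAP (class, method and parameter names are invented for this example):

CLASS lcl_text_util DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS count_long_names
      IMPORTING it_names        TYPE string_table
                iv_min_length   TYPE i
      RETURNING VALUE(rv_count) TYPE i.
ENDCLASS.

CLASS lcl_text_util IMPLEMENTATION.
  METHOD count_long_names.
    " the formerly duplicated loop lives in exactly one place now
    LOOP AT it_names INTO DATA(lv_name).
      IF strlen( lv_name ) >= iv_min_length.
        rv_count = rv_count + 1.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.

" each former clone site shrinks to a single call (lt_names assumed):
DATA(lv_hits) = lcl_text_util=>count_long_names( it_names      = lt_names
                                                 iv_min_length = 10 ).

A future change to the logic then has exactly one place to go, instead of one per copy.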

 

Constraints in the Development Process

 

The shortcomings of the ABAP IDEs are obvious reasons for duplicated code. More surprising, but with even more impact, are constraints in the development process. When we discuss duplicated ABAP code with developers, it is often justified by restrictions of the development scope: assume program Z_OLD was copied to Z_NEW instead of extracting common functionality and re-using it from both programs. Sometimes the development team copied the program because they were not allowed to alter Z_OLD, since the change request is bound to specific development objects or packages. The reason for such restrictions is an organization structure where the business departments »own« the respective programs and every department fears that changes initiated by others could influence their specific functionality.

 

A similar situation arises when changing existing code is avoided to save manual test effort in the business departments. Especially if the change request for Z_NEW was issued by a different department, the owners of Z_OLD may refuse to test it. (Maybe they wouldn't if tests were automated—having only manual tests is not the best idea.)

 

Dependency Fear

 

Not specific to ABAP, but more widespread here, is the fear of introducing dependencies between different functionalities, especially if these are only loosely related. Often only the benefit of independent code/programs is seen, since a modification of the code is always local to one instance and would not influence other parts. It is hard to say why this fear is more common in the ABAP world; one reason is the aforementioned organization of the development process. Another reason may be the lack of continuous integration where the whole code base is automatically built. The lack of automated testing might be the major reason: whereas substantial test suites of automated unit tests are the rule in Java or C# projects, ABAP Unit tests are not that widespread.

 

No matter what the reason for this fear of dependencies is, there is an assumption that future changes of one copy should not affect the other copies. But in many cases the opposite is true! Cloning makes the code independent, but not the functionality—it will still be a similar thing. Thus it is an apparent independence only. Yes, there might be cases where a future change should only affect one of many copies. But very often a change should be applied at all occurrences of the related functionality. Consider bug fixes, for example: in general, these must be done in all copies. We've observed the same change in two copies under two different change requests (where the second change was done some time later). This almost doubles the maintenance effort without any need.

 

Can we Avoid Cloning in ABAP?

 

Yes, I'm sure cloning can be avoided as in any other programming language. Despite the fact that many ABAP projects show a strong tendency towards cloning, we've also seen counter-examples with only few clones. It is possible to have a code base of many hundreds of thousands of lines of ABAP code and keep the clone coverage low. From the reasons for intensive ABAP cloning discussed above we can derive these recommendations to avoid it:

 

 

  • Dismiss copy-and-paste programming and encourage your developers to avoid duplication and restructure existing code instead. Accept that this is a bit more time-consuming in the beginning.
  • Make intensive use of common code and utilities, which are intended to be used by several programs. This code should be clustered in separate packages.
  • The development team should be the owner of the code, not the business departments—at least not for common functionalities. The developers should be free to restructure code if it is worthwhile for technical reasons. Keeping the code base maintainable is a software engineering task which can hardly be addressed by the business department.
  • Make use of test automation, e.g. using ABAP Unit, and execute all of these tests at least once a day (see the sketch below). Many regression errors can be detected this way.
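To make the last point concrete, a minimal ABAP Unit sketch, re-using the invented lcl_text_util from the refactoring example above. Once such a class exists, every issue from the productive system can be added as one more test method, and the whole suite runs with a single click:

CLASS ltc_text_util DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS count_long_names FOR TESTING.
ENDCLASS.

CLASS ltc_text_util IMPLEMENTATION.
  METHOD count_long_names.
    " only `Alexandra` has at least five characters
    cl_abap_unit_assert=>assert_equals(
      act = lcl_text_util=>count_long_names(
              it_names      = VALUE string_table( ( `Alexandra` ) ( `Bo` ) )
              iv_min_length = 5 )
      exp = 1 ).
  ENDMETHOD.
ENDCLASS.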

 

If this is given, ABAP code, too, can be mostly free of redundancies. Of course, you should additionally introduce an appropriate quality assurance to keep your code base clean, either by code reviews or by static analysis. More about how to deal with clones can be found in part 2 of Benjamin's posts on cloning.

Deadlock Holiday


To whom it may concern ...

 

For any write access to a line of a database table the database sets a physical exclusive write lock on that line. This lock prevents any other write access to the line until it is released by a database commit or database rollback.

 

How can we see that in ABAP?

 

Rather simple, write a program:

 

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

Run it in one internal session. Open another internal session and run another program in parallel:

 

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

MESSAGE 'Done' TYPE 'I'.

 

The program in session 2 finishes only when the first program has finished.

 

This is as expected. The second program tries to write to the same line as the first program and therefore is locked.

 

You must be aware that such locks do not only occur for Open SQL statements but for all write accesses to database tables. Clearly, all writing Native SQL statements are other candidates. But other ABAP statements access database tables, too. Recently, I stumbled over EXPORT TO DATABASE.

 

Program in internal session 1:

 

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

Program in internal session 2:

 

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

MESSAGE 'Done' TYPE 'I'.

 

The program in session 1 locks the parallel execution of the program in session 2 because the same lines in the INDX-type database table are accessed. This can lead to deadlock situations where you might not have expected them.

 

To prevent such long lasting locks or even deadlock situations, the write locks must be released as fast as possible. This means there must be database commits or database rollbacks as soon as possible. In classical ABAP programming a lot of implicit database commits occur, e.g. each call of a dynpro screen leads to a rollout of the work process and a database commit. If there is only a short time between write access and database commit, you don't notice such locks in daily life. But if you have long running programs (as I have simulated above with the DO loop) without a database commit shortly after a write access, you can easily run into unwanted locking situations. In my recent case, I experienced deadlock situations during parallelized module tests with ABAP Unit: no screens -> no implicit database commits.

 

Therefore, as a rule:  If there is the danger of parallel write accesses to one and the same line of a database table, avoid long running processes after a write access without having a database commit in between.

 

In the examples above, you could prevent the deadlock e.g. as follows:

 

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.


CALL FUNCTION 'DB_COMMIT'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

or

 

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.


CALL FUNCTION 'DB_COMMIT'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

By calling the function module DB_COMMIT in the programs of session 1, an explicit database commit is triggered. The programs in session 2 are not locked any more during the long running remainders of the programs in session 1.

 

It is not a rule to place such calls behind each write access. Of course, a good transaction model should prevent deadlocks in application programs anyway. But if you experience deadlocks in special situations, e.g. in helper programs that are not governed by a clean transaction model, such explicit database commits can be helpful.

 

If deadlocks occur during automated testing only, you can also consider the usage of lock objects during test runs. A test that involves a write access can use the SAP enqueue/dequeue mechanism to lock and release table lines and to react appropriately if a line is already locked.
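A minimal sketch of that idea, using the generic lock function modules ENQUEUE_E_TABLE/DEQUEUE_E_TABLE (the key value is made up here): instead of blocking on the database lock, the test serializes itself via the SAP lock and can react if the line is already taken.

DATA lv_varkey TYPE c LENGTH 120.
lv_varkey = |{ sy-mandt }XXX|.  " client plus the key fields of the line

CALL FUNCTION 'ENQUEUE_E_TABLE'
  EXPORTING
    tabname        = 'SCARR'
    varkey         = lv_varkey
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  " line is in use by a parallel test: skip or retry instead of deadlocking
  RETURN.
ENDIF.

" ... write access and database commit as shown above ...

CALL FUNCTION 'DEQUEUE_E_TABLE'
  EXPORTING
    tabname = 'SCARR'
    varkey  = lv_varkey.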

Thoughts on Material Data Migration


Part I: Back Story, a developer's suffering


In most cases, a company has to go through just one material data migration project; in some other cases, as the company grows, there might be the one or other project to integrate another company's material data. I have a customer which is a fast growing company. I can't recall a year without a migration project. In fact, during the last years there were three or more migration projects per year, and there is a queue of migrations waiting to be processed.

 

 

Due to privacy reasons and because SCN is not a pillory, the customer's name won't (and shouldn't) be mentioned here. It's just an example of problems which can appear likewise in many projects at many customers.


Before I joined their SAP Competence Center (as an external, freelancing developer), they worked with single-use reports to migrate the new companies' data. In the past, they tried to use LSMW, but since several external developers had failed at migrating material master data with LSMW, I was not allowed to use it! In these single-use reports it was hard-coded how fields are to be filled depending on their material type and its display-only/mandatory customizing, as well as which standard values are to be used by default if a field is undefined or empty in the source system. Hard-coded inserts of MVERs, additional MARCs/MARDs, MLGNs/MLGTs, etc. Some flags appeared from nowhere and there was no way to separate the generally usable coding from the project specific code (with the result that the whole program was project specific, so they had to code another one from scratch for each project). This coding was called "pragmatic".


I had to obey - knowing that I would take great risks if I tried other ways. So I did as I was told and used - under protest - hard-coded single-use reports. As we were pressed for time, no discussion arose about it. And - I must admit - my last material data migration project lay 15 years back. For the sake of peace and quiet, I did as I was advised.

 

And guess what: this project was a mess - for my nerves and my health. Instead of being proud of my work, I hated my coding. After I had made all requested changes, it was impossible to tell by whom they were required. Of course, at go-live, all data had been migrated in time and correctly (hey, that's what I am paid for!), but you don't want to know how much money they had to pay. I won't quote what I said to them after passing the project (it wasn't very friendly, but honest), but I said that I wouldn't do another migration project in a similar way - I wanted to go my own way.

 

Because the next migration project was already announced, I knew I had to find a solution for this, and the most important items were easy to identify:

 

  • Separation

between frontend and backend features; the single-use programs used in the past were designed to be started by the developer and no one else. I wanted to have an application which can be started by everyone after a short briefing. And I don't want to test the whole migration stuff just because the frontend changes (S/4HANA is just around the corner, even for this customer!)


  • Exception handling

Of course, I want to work with Exception Classes....


  • Documentation

I hate undocumented development objects, and even though most of SAP's are not documented, I prefer to do it (if the customer does not want to pay for the documentation, I even do it in my spare time). So each class, each interface, each component, each program, table and data element has to be accompanied by a documentation. The expectation was high: for an experienced ABAP OO developer, a single workday of eight hours has to be enough to take over the full maintenance of the application.


  • Testing

Mostly it works like this: try a few different cases (just one in most cases) and if they don't get a dump, the app is working fine by definition. I love test classes and I want to have a minimum of test effort. A test class is written once, and a well defined bunch of test cases (growing and growing, because each issue from the productive system has to be simulated as well) can be processed with a single click. This has the effect that no working feature can be destroyed by developer failures.


  • Separation of concerns

It would have to have a reusable and a project specific part. In each project, there are some essentials which have to be developed only once to be used in every migration project. On the other hand, there is always project-specific code which cannot be handled in the reusable part. On closer inspection, a third layer appears between these two layers, which bundles similar projects. We'll get deeper into that later. In particular the "you need multiple MARAs when you want to create multiple MARCs/MARDs/MLGNs/…"-thing (more info about it below) I wanted to code only once!


  • Field status determination

As the FM MATERIAL_MAINTAIN_DARK does, I want to read the customizing to determine the input/output/mandatory attributes - not just to send a simple error message and abort (like the FM does), but to have the chance to fix the problem automatically. It turned out that the customer was wrong: reading the customizing was much faster and easier to implement than collecting the filling rules from all functional consultants! In addition to this, I want to determine the views I have to create from the customizing.


  • Protocol

Each callback "why does field X in material no. Y have value Z?" has to be answered by a protocol which can be inspected by the functional consultants, so there is no need to bother the developer. To get this, all FM messages and all data manipulations have to be accompanied by a protocol entry.

The problem was to sell this solution to my customer. So I needed two things: a good, advertising-effective name and a calculation showing that my solution is cheaper than the single-use programs used in the past. For the name, I had to exaggerate a bit and chose "Material Data Migration Framework" - you can call a box of cigarettes a 'smoking framework' and every CIO will buy it! - and changed its abbreviation from MDMF to MAMF to make it speakable like a word.

The calculation was simple: I just made a bet, stating that I would cut my bill if the costs were higher than those of that last project. To make a long story short: the costs were much lower (and the project much faster, as well!), and since most of the coding is reusable, the costs in the following projects will be FAR lower. They never had such a smooth migration project.

 

Part II - Elementaries


Explanations:

  • In the text below, I use $ as a variable for the customer's namespace, in most cases Z or Y, in some cases something like '/…./'.
  • The migration tables' dependencies, explained first, will be called the "object hierarchy", which must not be mixed up with the "class hierarchy", which will be explained later.
  • I won't post any coding here - because the customer paid for this coding, so they own it.


At first, we need a package to collect all related development objects: $MAMF.


For material master data migration, we won't stop using FM MATERIAL_MAINTAIN_DARK, which works in logical transactions, as I mentioned before. More details are explained in its documentation. The most important fact is that the migration tables' records are related to others of the same material master data set (material number). One example: to post a material master data set with multiple MARC records, with multiple MARD records each, there have to be multiple MARA records (in the single-use programs this problem was solved by inserting the multiple entries directly).


This was the deciding factor for developing object-oriented. I realized that I would have to interpret each record of each migration table of FM MATERIAL_MAINTAIN_DARK as an object, because an object has a constructor and a destructor. This means that a MARD record can check at construction whether or not there is a MARC record related to the same plant. If not, it fires the MARC record's constructor to generate one, and this constructor checks if there is a MARA record using the same transaction number TRANC. This results in an object hierarchy.


So I need a class inheritance hierarchy, which differs - as mentioned above - from the object hierarchy: a basic class $CL_MAMF_MMD, the same for all material master data migration projects, and a subclass $CL_MAMF_MMD_xxxx for each migration project, dealing with the project specific steps (xxxx is a migration project ID).


Looking ahead, it can already be predicted that we're gonna get some other basic classes, i.e. $CL_MAMF_PIR… for purchasing inforecords, $CL_MAMF_BOM, etc., which results in a "higher level (root) class" $CL_MAMF for all migration projects. But for now, this is irrelevant.

We need this hierarchy for all migration table types: one for MARA_UEB, one for MARC_UEB, another one for MARD_UEB, etc. For LTX1_UEB, we're gonna do something special: a special class for each long text with name = text ID: BEST, GRUN, PRUE and IVER. For the sales text (text ID 0002), we take the text object MVKE for better identification of the class and (because there already is a MVKE_UEB table) change it to MVKT. All these classes inherit (as $CL_MAMF_MMD does) from $CL_MAMF, which means they are on the same level as $CL_MAMF_MMD. To repeat it: the object hierarchy must not be mixed up with the class hierarchy!

The root and the basic classes' instance generation is to be set to "abstract", the project specific classes will be set to private, and they are always final, to avoid project-to-project dependencies.


$CL_MAMF                root class

$CL_MAMF_MMD_xxxx       basic class

$CL_MAMF_MMD_xxxx_nnnn  project specific class


     xxxx = table / long text (MARA, MARC, ..., BEST, ....)    (not applicable for migration process controlling class)

     nnnn = migration ID


Conclusion


For each migration project, we just have to create a new subclass of each of the basic classes (except for the data we don't want to migrate - we won't need a BEST_xxxx class in a migration project which is not supposed to migrate purchasing order texts).

The controlling class (...MMD) has to have a class constructor to get some customizing tables (particularly the field status of the MM01 screen fields). This class will also have a method post, which posts the whole material data set.


All classes have a protected constructor, because we have to adopt a modified Singleton design pattern (a so-called Multiton) to administrate the instances in a static internal table MY_INSTANCES, containing all key columns of the related migration table and a column for the objects related to these key columns.


The next step I did not implement, but it seems to be a good idea for the future: the following methods could be bundled in an interface $IF_MAMF_LT, implemented by all basic classes and inherited by all project specific classes.


Because ABAP does not support overloading, we have to abstract the importing parameters, which has to be explained: we store a data reference in this class, written by a set method and read by a get method. So we can be sure that every data object can be stored. We can't use typed parameters in the interface for that, because each migration table has its own structure.


A free method provides a destruction service, including the automatic destruction of all subordinate objects.


A factory method builds the class name by concatenating the class name and the migration ID to return an instance of a basic class's subtype.


An instance creator method get_instance checks the existence of the superior object - if this check fails, the constructor of its class will be called - and calls the constructor of its own class to return a unique instance.
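A minimal sketch of how such a multiton administration could look for the MARC layer, with $ replaced by Z. The signatures and the instance bookkeeping are my assumption of the idea, not the original coding; the real classes use a protected constructor, which is omitted here to keep the sketch short and runnable:

CLASS zcl_mamf_mmd_marc DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS get_instance
      IMPORTING iv_migr_id         TYPE char4
                iv_matnr           TYPE matnr
                iv_werks           TYPE werks_d
      RETURNING VALUE(ro_instance) TYPE REF TO zcl_mamf_mmd_marc.
  PRIVATE SECTION.
    TYPES: BEGIN OF ty_instance,
             matnr    TYPE matnr,
             werks    TYPE werks_d,
             instance TYPE REF TO zcl_mamf_mmd_marc,
           END OF ty_instance.
    CLASS-DATA my_instances TYPE SORTED TABLE OF ty_instance
                            WITH UNIQUE KEY matnr werks.
ENDCLASS.

CLASS zcl_mamf_mmd_marc IMPLEMENTATION.
  METHOD get_instance.
    " 1) ensure the superior MARA object exists, e.g.
    "    zcl_mamf_mmd_mara=>get_instance( ... ) - omitted in this sketch
    " 2) hand out the existing instance for this key, if there is one
    READ TABLE my_instances INTO DATA(ls_inst)
         WITH TABLE KEY matnr = iv_matnr werks = iv_werks.
    IF sy-subrc = 0.
      ro_instance = ls_inst-instance.
      RETURN.
    ENDIF.
    " 3) otherwise create the project specific subclass by its built name
    DATA(lv_classname) = |ZCL_MAMF_MMD_MARC_{ iv_migr_id }|.
    CREATE OBJECT ro_instance TYPE (lv_classname).
    INSERT VALUE #( matnr    = iv_matnr
                    werks    = iv_werks
                    instance = ro_instance ) INTO TABLE my_instances.
  ENDMETHOD.
ENDCLASS.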


The result of this concept is that the dependencies between the migration tables only have to be coded once (in the basic classes) but are used in each migration project. No developer of a migration project has to care about this stuff; he just creates the objects he needs, and the objects themselves will take care of the technical dependencies that MATERIAL_MAINTAIN_DARK needs.


And, as explained earlier, we don't want to hard-code field contents in the migration program, so we have to read the field status from the customizing. MATERIAL_MAINTAIN_DARK does that, too, but only to fire an error message and abort. This has two consequences: on the one hand, we can copy the coding instead of re-inventing it, and on the other hand, we can avoid the abortion.


The method get_field_status returns an indicator for obligatory and output-only fields, and in combination with the field's content we can find filled output-only fields (which have to be cleared) and empty obligatory fields. For these fields, we need a get_(fieldname) method which returns a default value - implemented in the basic class for all projects or project-specifically in the final class. These methods (of which there will be hundreds) shall be created automatically, and in most cases they will be empty (meaning: take the data from the source). The same goes for set methods to manipulate the saving process for each field. An example of a manipulated field content is MARA-BISMT, which contains the material number of the former system. My customer has multiple old material numbers, because (for example) company A has been migrated to plant 1000, company B to plant 2000. For this reason, they defined a table to store the BISMT per plant. The easiest way to do that in MAMF is to implement it in the method $CL_MAMF_MMD_MARA_nnnn->set_bismt( ), which stores the relation between the former and the current material number in that table for each migration project (meaning: for each plant).
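A hedged sketch of how those hundreds of generated get methods could be dispatched without hard-coding each field, using a dynamic method call. The method name fill_empty_mandatory_fields, the attribute mt_field_status (field name plus status read from customizing), the work area cs_mara and the returning parameter rv_value are my inventions for this illustration:

METHOD fill_empty_mandatory_fields.
  LOOP AT mt_field_status INTO DATA(ls_status) WHERE status = 'M'.
    ASSIGN COMPONENT ls_status-fieldname OF STRUCTURE cs_mara
           TO FIELD-SYMBOL(<lv_field>).
    IF sy-subrc <> 0.
      CONTINUE.
    ENDIF.
    IF <lv_field> IS NOT INITIAL.
      CONTINUE.  " already supplied by the source system
    ENDIF.
    DATA(lv_method) = |GET_{ ls_status-fieldname }|.
    TRY.
        " dispatches to GET_MTART, GET_MEINS, ... - generated, mostly empty
        CALL METHOD me->(lv_method) RECEIVING rv_value = <lv_field>.
      CATCH cx_sy_dyn_call_error.
        " no default implemented for this field - protocol entry instead
    ENDTRY.
  ENDLOOP.
ENDMETHOD.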


Part III - The Migration Cockpit


I've always been of the opinion that using an app has to be fun for the user, not just a way of performing his work duties. So the user's view on each migration project is very important: the Migration Cockpit application, which will be a report with the transaction code $MAMF that follows the current design rules of SAP: besides the selection screen, the report itself won't have any coding, only method calls. The coding will be placed in local classes, lcl_cockpit for the main coding and lcl_cockpit_evthdl for the event handler, because I prefer to handle the report PAI by raising events; i.e. when the user strikes F4, an event is raised and the value help is implemented in the event handler class.

The selection screen is split into three areas:


  1. The header line with migration ID and it's description
  2. A bunch of tabstrips, one for each migration object. By now, we only need a tab for material master data, but we want to have the chance for getting more to have a single point of migration works for all material data.
  3. A docking container, displaying a short briefing, what to do to migrate the migration object from the foreground / active tab.


To define a migration project, we need a customizing table with the columns migration_id, description (I don't want to maintain this text in multiple languages, because it will be the new company's name, so no language field is needed) and a flag for each migration object, with a generated maintenance screen. The cockpit will read this table to show this data and to disable the tabs for all migration objects we don't want to migrate. A button in the cockpit's toolbar will open a popup for table maintenance.

The cockpit will have three modes:


  1. Post online,
  2. Post in background job, which has to be started immediately (after a single click) and
  3. Post in a job, we plan to run later.

 

In both background modes, we need an additional step for sending an SAP express mail to inform the user that the migration run has finished. All run modes can be processed as a test run or a productive run. And we have to put some protocol related buttons on the screen.

 

Now we come to a special feature: buffering migration data! In the messy migration project I talked about earlier, we had to migrate about 70,000 materials, loaded from an Excel file and enriched with additional data directly from the source system via RFC. This takes hours, and a simple network problem can disconnect the migrating person's SAPGUI from the application server, causing an interrupted migration. To make it possible to upload the migration data from the client to the application server without forgoing background processing, and to speed up the migration run, we have to buffer the data on the application server. To avoid creating application server files from the source system's data, we will save all data in a cluster table, INDX in this case. Advantage: we can store ready-to-migrate SAP tables. And the flexible storage in cluster tables allows us to save not only the data from SAP tables, but the Excel file as well, and the selection screen may show who has buffered when. And maintaining a cluster table is much easier than managing files on the application server.
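The buffering idea in a minimal sketch, using the same EXPORT/IMPORT TO/FROM DATABASE statements already shown in the "Deadlock Holiday" post above; the cluster area mg and the ID value are made up for this example:

DATA lt_mara_ueb TYPE STANDARD TABLE OF mara_ueb.
DATA lt_marc_ueb TYPE STANDARD TABLE OF marc_ueb.
" ... fill the tables from the Excel file / via RFC ...

" buffering step: persist the ready-to-migrate tables on the server
EXPORT mara_ueb = lt_mara_ueb
       marc_ueb = lt_marc_ueb
       TO DATABASE indx(mg) ID 'MAMF_0001'.

" migration step (online or in a background job): read the buffer back
IMPORT mara_ueb = lt_mara_ueb
       marc_ueb = lt_marc_ueb
       FROM DATABASE indx(mg) ID 'MAMF_0001'.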

 

The class hierarchy may look like this:

$MAMF_BUF           Basic class

$MAMF_BUF_RFC       RFC specific class

$MAMF_BUF_RFC_nnnn  project specific class for a migration via RFC

$MAMF_BUF_XLS       XLS-based specific class

$MAMF_BUF_XLS_nnnn  project specific class for a migration, based on XLS-files

 

So, the migration process will have two steps: buffering the source system's data and migrating the buffered data. For multiple test runs, you buffer once for all test runs, which is a nice time saver. And now we can see the third layer between the migration project and the migration object: the migration process, because all RFC-based data collections are similar to each other, just as all Excel-only based migration projects are similar to each other, and so on. This differentiation only works for the buffering process; after that, we have a standard situation for all migration projects: the data can be found in INDX, sorted into the SAP structures MARA, MARC, etc., so we don't need this third layer in the migration classes described earlier.

 

Of course, the brief instruction in the docking container has to be fed by a translatable SAPscript text, and it takes only a handful of steps to implement it. Besides that, the cockpit will have an extensive documentation explaining each step in detail.

 

Part IV - Saving Protocols and Look Ahead

 

A migration protocol should particularly support two ways of analysis: on the one hand, we have to analyze what errors occurred during the migration run in order to fix these problems. On the other hand, some functional consultants may ask the developer "Why does field x of material no. y have value z?", and one may ask why the developer has to figure that out. To avoid overloading the developer with questions like this, we should write all data manipulations to the protocol, so that each difference between the source data and the migration data we send to MATERIAL_MAINTAIN_DARK can be read in the protocol. All undocumented differences between this and the posted material data were made by the SAP system.

 

First of all: the application log is not appropriate for that, because it cannot be filtered properly. I tried it this way and it was a mess. So we'll define a transparent table in the data dictionary to store the protocol in. Each insert has to be committed immediately, because a rollback caused by a program abortion (the worst case scenario) would send all protocol entries up into Nirvana. This table $MAMF_LOG_MMD needs to have the following columns: migration ID, number of the migration run (we're gonna need a few test runs, I'm afraid), test run/productive indicator, material number, message text, person in charge. By filtering a SALV based list, the functional consultant himself can retrace the "migration story" for each material number of each migration run, and he can do that years after the migration, if he wants to. And he is able to filter the list for the messages which are relevant just for him. If a field content, e.g. from MBEW, causes any trouble, the name of the FI consultant has to be placed in this column.
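One way to get each protocol entry committed immediately without committing the migration LUW itself is a separate service database connection. A hedged sketch: the connection name, the iv_* parameters and the column names of the table (with $ replaced by Z) are my assumptions based on the description above:

DATA ls_log TYPE zmamf_log_mmd.
ls_log = VALUE #( migr_id   = iv_migr_id
                  run_no    = iv_run_no
                  testrun   = iv_testrun
                  matnr     = iv_matnr
                  message   = iv_message
                  in_charge = iv_in_charge ).

" write and commit on an own service connection, so that a later rollback
" of the migration LUW cannot take the protocol entries with it
INSERT zmamf_log_mmd CONNECTION ('R/3*MAMF_LOG') FROM ls_log.
COMMIT CONNECTION ('R/3*MAMF_LOG').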

 

The Migration Cockpit needs a button on the material master data tab which reads the table and filters the list for the last run (which is the most relevant in most cases), but as said before, the consultant is able to manipulate these filter rules to meet his individual requirements.

 

What's next? There are more material data to be migrated, so - as mentioned before - there will be more basic classes besides $CL_MAMF_MMD, i.e. $CL_MAMF_PIR for purchasing inforecord migration, $CL_MAMF_BOM for bills of material and $CL_MAMF_STK for the stock migration. Although the migration processes will be quite different, we have the chance to migrate all material data with one migration cockpit. For this reason, we need a root class $CL_MAMF to make the migration framework extensible (the magic word is "dynamic binding") without changing the root class's coding.

 

In conclusion, we have an application that separates the UI layer from the business logic and the reusable from the individual coding, is easy to use even for non-developers, and is extensible. With this application, I have a lot of fun and no frustration in my migration projects, and I learned much about OO concepts and design patterns (even if they are not all described here, I did, of course, use them). The customer is thrilled how easily and fast we can migrate material data (which is important, because without material master data there are no orders, no purchases, no stocks, etc.).

 

Discussing the efforts

 

Yes, the question is allowed why I put so much effort into such a seemingly simple thing as the migration of material data. Well, it isn't as simple as it seems, and the quality of this data is underrated. I often saw duplicates, missing weights, etc. And - we shouldn't forget this important fact - this was really fun software development, kind of a playground, because I had the chance to work alone: I defined the requirements, wrote the concept, developed the data model, wrote the code and tested it, and after all that I could say: this is completely MY application. No one else laid a hand on my application, and I never had to hurry, because the raw concept in my head was finished before the migration project started. And in each following migration project, I was a little bit proud, because now we have a standard process for this and we're able to do a migration within 2-3 days without being in a hurry.

 

I hope you had fun reading this, and perhaps you learned a bit. If you have any questions, suggestions for improvements, comments or anything else, feel free to leave a comment.


Disclaimer: English ain't my mother tongue - although I do my very best, some things may be unclear, mistakable or ambiguous by accident. In this case, I am open to improve my English by getting suggestions.


Type less - SE80 Editor Code Templates


The "new" editor has a few nice features that make it a bit less painful to use. I use them to avoid unnecessary typing of repetitively used code blocks and to generate local classes (test, exception and so on).

 

You can find a collection of my templates in a GitHub repository. Over time I'll add further ones, so if you are interested, you may watch that repository. If you like to contribute, please feel free to send a pull request!

 

 

Here's the Repository: https://github.com/zs40x/se80_templates


See also: Sharpen your ABAP Editor for Test-Driven-Development (TDD) - Part II

ABAP News for Release 7.50 - What is ABAP 7.50?


Today was RTC of SAP NetWeaver 7.5 with AS ABAP 7.50. While there are already some big pictures around, let me provide you with some small ones.

 

As with ABAP 7.40, let's start with the question "What is ABAP 7.50?" and extend the figure that answered this question for 7.40.


[Figure: ABAP releases and kernels up to ABAP 7.50, extending the 7.40 picture]

The figure shows ABAP Language and ABAP Runtime Environment as seen by sy-saprl, so to say.

 

The good news is, we are back in calmer waters again. While the way to ABAP 7.40 was not too linear and involved development in enhancement packages (EHPs, as 7.02 and 7.31) and backports from NGAP, development from ABAP 7.40 on took place in support packages. The support packages 7.40, SP05 and 7.40, SP08 were delivered with new Kernels 7.41 and 7.42. New Kernels meant new functionality. Good for you if you waited for exciting new things. Maybe not so good if you see "support packages" as what they are: with support packages most people expect bug fixes but no new functionality. And that's why 7.40, SP08 was the last one bundled with a new Kernel. All further SPs for 7.40 stay on Kernel 742 and are real support packages again.

 

Of course, the ongoing development of ABAP did not stop with that. You might have heard rumors of 7.60 and Co. already. A new release line was opened for SAP's internal cloud development immediately, starting with ABAP 7.60 based on Kernel 7.43. This line has short release cycles, where each release is connected to its own Kernel and delivers new functionality. These releases are used - and thereby tested - by SAP-internal development teams.

 

For all environments other than AS ABAP for Cloud Development, the now shipping release ABAP 7.50 was created as a copy of ABAP 7.62 based on Kernel 7.45. For these environments, e.g. SAP S/4HANA or SAP NetWeaver 7.5 standalone, ABAP 7.50 is simply the direct successor of ABAP 7.40 and provides the ABAP Language and Runtime Environment for AS ABAP for NetWeaver 7.5. See the big pictures for where ABAP 7.50 will be available.

 

In an upcoming series of blogs I will present to you the most important ABAP news for ABAP 7.50. And there are quite some of them ...

Dustbins at SAP TECHED 2015 in Las Vegas (1)



Apart from the keynote last night, today was the first day of TechEd 2015 in Las Vegas. I thought I would write down as much of what I could remember while it was still fresh in my mind. I am sure I have forgotten a lot already, but here goes:-

 

Workflow in Outlook

 

There is an add-on to Microsoft Outlook called “Microsoft Gateway server” where you can connect to OData services exposed from the SAP back end and have them appear in Outlook 2010 or 2013 as, for example, tasks; you can also see contact details and appointments.

 

For workflow items in particular there is a generic service you activate in transaction SICF. Thereafter you have to configure this service to say what particular type of work items you want to be visible from outside the SAP system.

 

Outlook “pulls” the data from SAP either by the user pressing a button or via some sort of scheduled job. This means the data is never really up to date, and if a work item is sitting in four people's Outlook inboxes and one of them approves it, the other three items do not instantly vanish, as they would inside the SAP system.

 

SAP GUI

 

SAP GUI version 730 goes out of support in October 2015. The 740 GUI will be supported until January 2018. About a year before that, the next version will come out, the number of which has not been decided yet.

 

SAP GUI for Java is very strange; I don’t know why anyone would want to use that. Of course I am biased as I could not live without the graphical screen painter for all the DYNPRO screens no-one uses any more.

 

The new version of Screen Personas can work with SAP GUI for Windows as well as with SAP GUI for HTML. However, since the Personas editor is only in the HTML GUI, you have to create your new screens using that tool, and then SAP GUI for Windows can read them (if you have the Personas add-on in your ABAP system).

 

It was stressed that if you create over-complex scripts there is a performance hit, as the DYNPRO screens have to run in the background, plus presumably the time lag for the round trip from the web browser to the back end system. I don’t know if running Personas using SAP GUI for Windows will be any faster.

 

Netweaver 7.5

 

This came out today – the 20th of October 2015 – though of course it would, to coincide with the conference. The actual EHP which you would need to install on your ERP system in order to get the new version of ABAP is not out yet, however, and no-one knows when it will be available.

 

The so-called “Push Channels”, which I am giving a speech on tomorrow, are now released for productive use in version 7.5. They worked in 7.4, but you were not supposed to use them, as that would have been naughty. Someone told me the underlying technology has changed somewhat radically as well. This is all needed for the so-called “internet of things”, where a sensor detects something of interest and then “pushes” the information to the SAP system, without the SAP system having to constantly poll for new information.

 

There is a new data type INT8 for storing really big numbers. This must be what they mean by “big data” – I had been wondering.

 

Karl Kessler gave a demonstration where he coded a CDS view in the ABAP system (using the CDS DDL) which linked sales order headers to items and customers. One line of code at the top said something like “OData: publish”, which means a service was generated which exposed the data to the outside world.

 

He then tested this and the result was like a SE16 of VBAK where you could click on the customer number and then see the relevant line in KNA1.

 

Moreover, he then opened up the SAP Web IDE (I got a bit of a mixed message - speakers were saying ABAP in Eclipse was the way to go, it’s great, and then when they coded anything they used the Web IDE - and it was still called “River” on one slide) and then generated a UI5 application from a template. Whilst configuring the application he chose the CDS view and then picked some fields to display.

 

The resulting application not only had the data but also automatic navigation to the customer data, as defined in the view. We were told SAP is working on transactional applications as well as report type things like this.

 

The BOPF got mentioned - I was really hoping this had not become obsolete already! Mind you, its name had already changed to BOPFSADL on the slide; I have been wondering if SADL is a new framework for business objects like monsters and sales orders. Maybe it’s like the Incredible Hulk and BOPF turns into SADL when it gets angry.

 

There are a lot of improved tools for checking your custom code to see if it will work in the S/4HANA on-premise edition. In the cloud edition you can’t have Z code anyway (pretty much), so the problem is not so relevant. Mind you, I don’t think any customer has gone onto S/4HANA in the cloud yet; they all chose the on-premise version.

 

S/4 HANA in General

 

First and foremost the material number increases in length from 18 characters to 40. This will of course be backward compatible so nothing existing will break (they say).

 

In the same way that “simple finance” got rid of all the tables like COEP and BSIK and all their friends, leaving just BKPF and a new table called ACDOCA, the same treatment is being given to tables like MARD and MBEW by “simple logistics”. The slide started off with about 20 such tables, and then they all ran off leaving just two – I think one for master data and one for transactional data (MSEG was one of the two). I can’t imagine how that is going to work.

 

It looks like all the functionality in the supply chain management “new dimension” product is being migrated back into the core – things like the APO and Demand Planning and the like. My guess is eventually (could take 20 years) all the “new dimension” products will die in the same way as SEM with everything going back to the core ERP system.

 

I give my speech tomorrow, so I am sure I will be pelted with rotten eggs and tomatoes. At least I am not the last speaker before the heavy drinking, I mean "networking" session.

 

Cheersy Cheers

 

Paul

 

 

 

 

 

 

ABAP News for Release 7.50 - IS INSTANCE OF


This one is a tribute to the community, to SCN, to you. One of you, Volker Wegert, blogged his ABAP Wishlist - IS INSTANCE OF some years ago and reminded us again and again. Others backed him, and I myself participated a little bit by forwarding the wish to the kernel developers. And constant dripping wears the stone: ABAP 7.50 comes with a new relational expression IS INSTANCE OF, even literally.

 

If you wanted to find out whether a reference variable of a given static type can point to an object before ABAP 7.50, you had to TRY a casting operation that might look like something as follows:

 


    DATA(typedescr) = cl_abap_typedescr=>describe_by_data( param ).

    DATA:
      elemdescr   TYPE REF TO cl_abap_elemdescr,
      structdescr TYPE REF TO cl_abap_structdescr,
      tabledescr  TYPE REF TO cl_abap_tabledescr.
    TRY.
        elemdescr ?= typedescr.
        ...
      CATCH cx_sy_move_cast_error.
        TRY.
            structdescr ?= typedescr.
            ...
          CATCH cx_sy_move_cast_error.
            TRY.

                tabledescr ?= typedescr.
                ...
              CATCH cx_sy_move_cast_error.
                ...
            ENDTRY.
        ENDTRY.
    ENDTRY.


In this example we try to find the resulting type of an RTTI-operation.

 

With ABAP 7.50 you can do the same as follows:

 

    DATA(typedescr) = cl_abap_typedescr=>describe_by_data( param ).
   

    IF typedescr IS INSTANCE OF cl_abap_elemdescr.
      DATA(elemdescr) = CAST cl_abap_elemdescr( typedescr ).
      ...
    ELSEIF typedescr IS INSTANCE OF cl_abap_structdescr.
      DATA(structdescr) = CAST cl_abap_structdescr( typedescr ).
      ...
    ELSEIF typedescr IS INSTANCE OF cl_abap_tabledescr.
      DATA(tabledescr) = CAST cl_abap_tabledescr( typedescr ).
      ...
    ELSE.
      ...
    ENDIF.

The new predicate expression IS INSTANCE OF checks whether the dynamic type of the LHS operand is more specific than or equal to the RHS type. In fact it checks whether the operand can be down-casted to that type. In the above example, such a casting takes place after IF, but it's your decision if you need it. If you need it, there is even a shorter way to write it - a new variant of the CASE-WHEN construct:

 


    DATA(typedescr) = cl_abap_typedescr=>describe_by_data( param ).

    CASE TYPE OF typedescr.
      WHEN TYPE cl_abap_elemdescr INTO DATA(elemdescr).
        ...
      WHEN TYPE cl_abap_structdescr INTO DATA(structdescr).
        ...
      WHEN TYPE cl_abap_tabledescr INTO DATA(tabledescr).
        ...
      WHEN OTHERS.
        ...
    ENDCASE.

 

The new TYPE OF and TYPE additions to CASE and WHEN allow you to write IS INSTANCE OF as a case control structure. The optional INTO addition does the casting for you - I think that's rather cool.

 

B.t.w., the new IS INSTANCE OF and CASE TYPE OF even work for initial reference variables. Then they check whether an up cast is possible. This can be helpful for checking the static types of formal parameters or field symbols that are typed generically. Therefore IS INSTANCE OF is not only an instance-of but might also be labeled a type inspection operator.
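A small illustration of my own for the initial-reference case:

DATA elemdescr TYPE REF TO cl_abap_elemdescr.  " deliberately left initial

IF elemdescr IS INSTANCE OF cl_abap_typedescr.
  " reached although the reference is initial: an up cast from the static
  " type cl_abap_elemdescr to cl_abap_typedescr would be possible
  cl_demo_output=>display( `up cast possible` ).
ENDIF.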

 

For more information see:

 

ABAP News for Release 7.50 - CDS Table Functions Implemented by AMDP


I just started blogging about important ABAP news for ABAP 7.50 and - whoosh - I am asked for CDS news. OK then, a blog about the new CDS table functions (but hey, I also have real work to do).

 

ABAP CDS is the ABAP-specific implementation of SAP's general Core Data Services (CDS) concept. ABAP CDS is open, meaning that you can use it on all database platforms supported by SAP. And yes, CDS views with parameters, introduced with ABAP 7.40, SP08, are supported by all databases with ABAP 7.50.

 

While openness has its merits, developers working only on the HANA platform might miss some code-push-down capabilities in ABAP CDS. One of these missing capabilities was the usage of database functions in data models built with CDS. Up to now, only CDS views were available. With ABAP 7.50, ABAP CDS also supports CDS table functions as CDS entities. Two problems had to be solved:

 

  • how to make table functions that are implemented natively on the database callable in CDS
  • how to manage the life cycle of native table functions to be constantly available to a data model built on the application server

 

Two questions, one answer: ABAP Managed Database Procedures (AMDP), introduced with ABAP 7.40, SP05. AMDP is a class-based framework for managing and calling stored procedures as AMDP procedures in AS ABAP. For the time being, AMDP is supported by the HANA platform only. Before ABAP 7.50, AMDP knew only database procedures without a return value. With ABAP 7.50, AMDP also supports database functions with a tabular return value, and the main purpose of these AMDP functions is the implementation of CDS table functions. They cannot be called as functional methods in ABAP, while AMDP procedures can be called as ABAP methods.

 

In order to create a CDS table function, you have two things to do:

 

  • define it in a CDS DDL source code,
  • implement it in an AMDP method with a  return value.

 

Both steps are possible in ADT (Eclipse) only.

 

The definition in CDS DDL is straightforward, e.g.:


@ClientDependent: true
define table function DEMO_CDS_GET_SCARR_SPFLI_INPCL
  with parameters @Environment.systemField: #CLIENT
                  clnt:abap.clnt,
                  carrid:s_carr_id
  returns { client:s_mandt; 
            carrname:s_carrname;
            connid:s_conn_id;
            cityfrom:s_from_cit;
            cityto:s_to_city; }
  implemented by method
    CL_DEMO_AMDP_FUNCTIONS_INPCL=>GET_SCARR_SPFLI_FOR_CDS;

 

A CDS table function has input parameters and returns a tabular result set that is structured as defined behind returns. You see that the annotation @ClientDependent can be used to switch on an automatic client handling for Open SQL. You also see a new parameter annotation @Environment.systemField, also available for views, that is handled by Open SQL by implicitly passing the value of sy-mandt to that parameter. Such a CDS table function is a fully fledged CDS entity in the ABAP CDS world and can be used like a CDS view: it is a global structured data type in the ABAP Dictionary and it can be used as a data source in Open SQL's SELECT and in CDS views. Behind implemented by method you see the AMDP class and method the function has to be implemented in.

 

After activating the CDS table function you can go on to implement the functional AMDP method in an AMDP class, that is, a class with the marker interface IF_AMDP_MARKER_HDB. An AMDP method for a CDS table function must be a static functional method of an AMDP class and is declared as follows:


CLASS-METHODS get_scarr_spfli_for_cds

              FOR TABLE FUNCTION demo_cds_get_scarr_spfli_inpcl.

The declaration is linked directly to the CDS table function. The parameter interface is implicitly derived from the table function's definition! The implementation looks like you might expect it:


  METHOD get_scarr_spfli_for_cds
        BY DATABASE FUNCTION FOR HDB
        LANGUAGE SQLSCRIPT
        OPTIONS READ-ONLY
        USING scarr spfli.
    RETURN SELECT sc.mandt as client,
                  sc.carrname, sp.connid, sp.cityfrom, sp.cityto
                  FROM scarr AS sc
                    INNER JOIN spfli AS sp ON sc.mandt = sp.mandt AND
                                              sc.carrid = sp.carrid
                    WHERE sp.mandt = :clnt AND
                          sp.carrid = :carrid
                    ORDER BY sc.mandt, sc.carrname, sp.connid;
  ENDMETHOD.

Nothing really new except BY DATABASE FUNCTION and the fact that READ-ONLY is a must. The implementation is done in native SQLScript for a HANA database function. And native means you have to take care of the client yourself. Automatic client handling is done on the Open SQL side only. Of course, a real CDS table function would do more HANA specific things (e.g. wrapping a HANA function) than the simple join shown in this simple example! A join you can also code in Open SQL or in a CDS view.

 

Speaking about Open SQL, last but not least, the usage of our CDS table function as data source of SELECT in an ABAP program:


  SELECT *
        FROM demo_cds_get_scarr_spfli_inpcl( carrid = @carrid )
        INTO TABLE @DATA(result)
        ##db_feature_mode[amdp_table_function].

Not different from an access to a CDS view with parameters. But you must switch off a syntax warning with a pragma to show that you are sure about what you are doing, namely coding for HANA only.

 

Note that we don't need to pass the client explicitly. This is because the corresponding parameter was annotated for implicit passing of the respective system field. Since the CDS table function was annotated as client dependent, the result set of Open SQL's SELECT does not contain a client column - as is the case for CDS views. Furthermore, all lines of the result set that do not belong to the current client are implicitly removed. That's why the return list of a client dependent table function must have a client column. For the sake of performance, the native implementation should deliver only lines of the current client. But since it is native, it has to take care of that itself. Confusing? That's what happens when open and native meet. In ABAP, normally the DBI does this handling for you. But this is not possible here.

 

For more information see

 
