
Getting the Brownfield clean, but not green - Part I


Initial situation

Imagine a legacy application which has grown over the past few years, maintained by several generations of developers. While the first generation of developers had a clear vision of what the application should do and what the design should be, this vision was lost somewhere along the way. The application cannot be understood easily, and specific documentation would be needed to understand implementation details. As developers often run out of time, this documentation usually does not exist at all.

As a new developer in an existing application, you often face this situation. If the application’s structure is not clear, you will have a hard time figuring out what the core concepts are and how to make enhancements to the existing code.

A historically grown application which tracks movements of objects through storage bins could consist of the following three entities:

  • Movable object
  • Storage bin
  • Movement activity

The main task of this example application is to keep track of the current storage bin of the moveable objects and their movement activities in the past. So here is the simplified class diagram:

 

Sample Legacy Application.png

 

This class diagram looks quite handy at first. In the next chapters I’m going to show what went wrong and what makes future enhancements unnecessarily complicated.

 

Tell, don’t ask

ZCL_MOVEMENT_ACTIVITY expects a movable object as well as a destination to work properly. This data is passed as an ID of the movable object via the constructor (which eventually leads to a flat structure Z_MOVEABLE_OBJECT) and via the method +SET_DESTINATION(IS_DESTINATION : Z_STORAGE_BIN). The intention is probably to provide some data sources in order to allow ZCL_MOVEMENT_ACTIVITY to make some assumptions about the data it gets.

This is not a good practice.

Example: If there was an object status which indicates whether the object is blocked for any movements, the activity class would need to read the status from the structure Z_MOVEABLE_OBJECT. Also, it would need to decide, based on this status, whether the object is blocked for movements or not.

If there was another class which needed to list all blocked moveable objects, it would need to perform this interpretation of the status field again, and you would probably need to copy the status interpretation logic into some other method. This also violates the “Don’t repeat yourself” principle.

This way, you make assumptions about foreign objects. Instead of telling, you ask for data and interpret the object’s data somewhere outside of the object.
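
To make the smell more tangible, here is a minimal, hypothetical sketch of what such “asking” code tends to look like. The static call ZCL_MOVEABLE_OBJECT=>LOAD_DATA( ) is mentioned later in this post; the parameter names, the returning structure and the status value 'B' are assumptions for illustration only.

  * Somewhere inside ZCL_MOVEMENT_ACTIVITY (or any other caller):
  * the caller reads the foreign structure and interprets the status itself.
    DATA ls_object TYPE z_moveable_object.

    ls_object = zcl_moveable_object=>load_data( iv_object_id = mv_object_id ).

    IF ls_object-status = 'B'.   " 'B' = blocked: this interpretation gets copied around
      " reject the movement ...
    ENDIF.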

 

Don’t repeat yourself

As mentioned before, don’t repeat yourself. This also applies to the logic which needs to evaluate the structure Z_STORAGE_BIN. Again, you are tempted to make assumptions about another object. Whenever you need the same assumption and interpretation again, the fastest way is to copy the code, which is a clear smell.

 

Example: Whether the storage bin is completely reserved by other movements is something that needs to be derived from the data passed in Z_STORAGE_BIN, either directly or indirectly.

If there is no central interpretation logic for this, the logic will need to be implemented wherever it is needed – you are starting to repeat yourself.

 

Single responsibility principle

Take a look at the three classes again. Is there anything they have in common?

You might recognize the SAVE( ) method at the end of each method list.

So what are the classes intended to do?

  • ZCL_MOVEABLE_OBJECT: carry moveable object data, read it from the DB and store it somewhere
  • ZCL_MOVEMENT_ACTIVITY: perform movements and store these movements somewhere
  • ZCL_STORAGE_BIN: carry storage bin data and store it somewhere

 

Found the smell?

Every time you need to use an “and” in your class’s description, there could be a smell. In fact, each of these three classes has more than one responsibility: besides their core competency, they obviously also care about the persistence of their data; in most cases this is where update modules or Open SQL statements are called directly.

 

Inappropriate Intimacy

Take a look at ZCL_MOVEMENT_ACTIVITY. What would need to be done if the class were interested in the current storage bin of the moveable object passed to its constructor via its ID? In case this information was included in the structure Z_MOVEABLE_OBJECT, which can be loaded using the method LOAD_DATA of ZCL_MOVEABLE_OBJECT, we would be fine.

But more often, these data structures are derived directly from database tables, and often the current bin assignment would have its own database table.

 

This means that, due to the lack of appropriate access functionality in the current class diagram, the developer would most likely read the data directly from the database table. The developer would make assumptions about the implementation of the method +SET_CURRENT_STORAGE_BIN(IS_STORAGE_BIN : Z_STORAGE_BIN) from outside of the core class ZCL_MOVEABLE_OBJECT. This is a clear sign of inappropriate intimacy: ZCL_MOVEMENT_ACTIVITY would depend on implementation details of ZCL_MOVEABLE_OBJECT.

This smell regularly leads to another one, a violation of “Don’t repeat yourself”, since the retrieval logic is often just copied wherever it is needed.

 

Testability

At some point you may decide to unit test ZCL_MOVEMENT_ACTIVITY. You figure out that it expects an object ID in the constructor, which likely leads to a call to ZCL_MOVEABLE_OBJECT=>LOAD_DATA(…) in order to retrieve the data of the moveable object.

If ZCL_MOVEABLE_OBJECT=>LOAD_DATA(…) is not called, there needs to be a direct database SELECT for the moveable object, which is even worse.

In any case, just to pass a moveable object to the test, you need to prepare the database in the SETUP routine and clean it up in the TEARDOWN routine. This is hard work compared to the benefit you expect from a unit test.
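
Just to illustrate the effort, here is a rough sketch of what such a database-dependent fixture typically looks like. The table name ZMOVEABLE_OBJ, its fields and the constructor parameter name are invented for illustration; they are not taken from the real application.

  CLASS ltc_movement_activity DEFINITION FOR TESTING
    DURATION SHORT RISK LEVEL DANGEROUS.   " database access forces a higher risk level
    PRIVATE SECTION.
      METHODS setup.
      METHODS teardown.
      METHODS set_destination_works FOR TESTING.
  ENDCLASS.

  CLASS ltc_movement_activity IMPLEMENTATION.
    METHOD setup.
      " the object under test only works if a suitable record exists in the database
      DATA ls_object TYPE zmoveable_obj.          " invented table
      ls_object-object_id = 'UNIT_TEST_OBJ'.
      ls_object-status    = 'R'.
      INSERT zmoveable_obj FROM ls_object.
    ENDMETHOD.

    METHOD teardown.
      DELETE FROM zmoveable_obj WHERE object_id = 'UNIT_TEST_OBJ'.
    ENDMETHOD.

    METHOD set_destination_works.
      DATA lo_activity TYPE REF TO zcl_movement_activity.
      CREATE OBJECT lo_activity
        EXPORTING
          iv_moveable_object = 'UNIT_TEST_OBJ'.   " assumed constructor parameter
      " ... actual assertions would follow here
    ENDMETHOD.
  ENDCLASS.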

Uli once told me that a good architecture is quite often an architecture which can be tested easily, and I think he is right.

 

Conclusion

You have seen three classes and at least five different smells. The list of smells is not complete; there could be more, but that would go beyond the scope of this blog post. In real-life applications you easily have several hundred classes. Every smell decreases the reusability of your application’s components, makes the application more and more inflexible and in the end very hard to maintain. This is where bugs arise quite easily.

So what should we do? Throwing the application away is not an option. There will be no green field where we can start from scratch. Our options are refactoring and introducing unit tests which also serve as documentation for the behavior of specific functionalities to some extent. We need to get the brownfield clean, but not green. But this would be the subject of my next blog post.


Getting the Brownfield clean, but not green - Part II


Initial situation

In my previous blog post, I introduced an architecture which could be the backbone of a historically grown application. Certain design issues, called smells, were introduced which could keep a new developer on the project from the main task of adding extra functionality to the existing application. Smells that contradict an evolvable architecture need to be eliminated. This process of improving the existing code before adding new functionality is called refactoring, which is the subject of today’s blog post.

A good definition for an entity in this blog can be found in Domain Driven Design by Eric Evans. In short, an entity can be identified by its key and is an independent object that usually has a real-life representation, as is the case for movable objects or storage bins and even activities, as they are business documents.

A historically grown application which tracks movements of objects through storage bins could consist of the following three entities:

  • Movable object
  • Storage bin
  • Movement activity

The main task of this example application is to keep track of the current storage bin of the moveable objects and their movement activities in the past. So here is the simplified class diagram:

Sample Legacy Application.png

Tell, don’t ask

ZCL_MOVEMENT_ACTIVITY expects a movable object as well as a destination to work properly. This data is passed as a flat structure via the constructor and via the method +SET_DESTINATION(IS_DESTINATION : Z_STORAGE_BIN). The intention is probably to provide some data sources in order to allow ZCL_MOVEMENT_ACTIVITY to make some assumptions about the data it gets.

This is not a good practice.

Providing structures as method input is a common anti-pattern in ABAP development. The habit certainly comes from programming with function modules, where your only opportunity to hand over complex data structures was to provide flat or nested structures or even internal tables.

In ABAP OO, you don’t need to do this. Think of structures differently: objects are better structures, since they can hold basically the same data as structures, with the difference that they add behavior to it. This behavior could restrict access, which means that certain fields of the structure may not be accessible from outside. It could also be validation which is processed before a certain field in the structure is populated.

The appropriate class in the example would be ZCL_STORAGE_BIN. As you can see, ZCL_STORAGE_BIN defines the attribute MS_STORAGE_BIN. In addition you can later add certain instance methods like IS_BLOCKED( ) or CHECK_DEST4MOVEABLE_OBJ(IO_MOVEABLE_OBJCT: ZCL_MOVEABLE_OBJECT) in order to support quantity checks or checks whether the storage bin may serve as a destination for an object’s movement. The method’s signature wouldn’t even need to change.

So instead of interpreting the data you received in your method, you now tell the passed object to return the data you need for processing. Since the only data accessible to you is the data exposed by the object’s methods (besides public attributes, which you should never ever use!), data encapsulation is far better than with simple structures, which cannot carry any logic at all. A well-written class is a pleasure for the developers who need to work with it. However, a badly written class may still expose too much information to its callers, but that is another smell.

Solution: The best approach would be to completely replace the signature of +SET_DESTINATION(IS_DESTINATION : Z_STORAGE_BIN)

with

+SET_DESTINATION(IO_DESTINATION: ZCL_STORAGE_BIN)

Inside the method, call appropriate Get-Methods of the passed object, or the common Is-Methods like IS_BLOCKED( ).
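
A minimal sketch of how the body of the refactored method could look; the instance attribute MO_DESTINATION and the error handling are assumptions, only SET_DESTINATION, ZCL_STORAGE_BIN and IS_BLOCKED( ) are taken from the text above.

  METHOD set_destination.
    " io_destination TYPE REF TO zcl_storage_bin
    " Tell, don't ask: the storage bin answers the question itself
    IF io_destination->is_blocked( ) = abap_true.
      " reject the destination, e.g. raise an exception here
      RETURN.
    ENDIF.
    mo_destination = io_destination.   " assumed instance attribute
  ENDMETHOD.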

If this is not possible, introduce a new method which is to be used in future developments. Mark the old method as OBSOLETE in the comments, in the method’s description, or both. Please do not forget to also include a hint where the new method is located, since one of the most annoying things for a developer is an obviously deprecated method which lacks the link to its newer version.

If the structure Z_STORAGE_BIN contains the storage bin key, you could consider another solution. The old method +SET_DESTINATION(IS_DESTINATION : Z_STORAGE_BIN) will remain as it is. But internally, you call a factory or repository to retrieve the storage bin’s object representation by its key attributes.

Storage Bin Repository.PNG

The class diagram currently gives you no hint about instance creation control patterns. You could consider implementing the repository as a singleton, but this would usually prevent IoC containers from creating the repository for you, as the constructor is usually made protected in this pattern. IoC containers may bring you huge advantages when it comes to unit tests, but this topic deserves its own section.

 

Don’t repeat yourself

Any logic that naturally belongs to an entity should be implemented close to the specific class which implements this entity.

This also applies to the status check on the movable object, that is, whether it is blocked for movement activities or not. Don’t make this decision outside of the class ZCL_MOVEABLE_OBJECT, based on a STATUS field in Z_MOVEABLE_OBJECT which may be retrieved via IV_MOVEABLE_OBJECT in the constructor of ZCL_MOVEMENT_ACTIVITY.

Even if the status attribute only contains two possible values, “B” for “Blocked” and “R” for “Ready”, don’t do that. Playing with fire usually leads to burned fingers.

Instead, try to get an instance of type ZCL_MOVEABLE_OBJECT as early as possible. This could happen via a method parameter or, again, a repository lookup. Implement the interpretation of the status where it belongs (in ZCL_MOVEABLE_OBJECT).

To come back to the topic of this section: Interpreting the object’s status outside of the object’s main class usually leads to other code fragments where this interpretation is done again as the main class lacks appropriate methods. This way, you are starting to repeat yourself.
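
A sketch of how the central interpretation could look inside ZCL_MOVEABLE_OBJECT, assuming the flat structure is kept as a private attribute MS_OBJECT (the attribute and parameter names are assumptions; the status value 'B' is the one mentioned above).

  METHOD is_blocked.
    " RETURNING VALUE(rv_blocked) TYPE abap_bool
    " The only place in the application that knows what the STATUS field means.
    rv_blocked = boolc( ms_object-status = 'B' ).
  ENDMETHOD.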

 

Single responsibility principle

Take a look at the three classes again. Is there anything they have in common?

You might recognize the SAVE( ) method at the end of each method list.

So what are the classes intended to do?

  • ZCL_MOVEABLE_OBJECT: carry moveable object data, read it from the DB and store it somewhere
  • ZCL_MOVEMENT_ACTIVITY: perform movements and store these movements somewhere
  • ZCL_STORAGE_BIN: carry storage bin data and store it somewhere

Found the smell?

 

Entity objects should not take care of how they are stored, for example in the database.

This is not because we like to switch from one database to another in the near future (despite the fact that ABAP offers you very good Open SQL support).

Instead, coupling entities like ZCL_MOVEABLE_OBJECT to the way they are stored usually leads to another issue: caching requirements will likely also be implemented in ZCL_MOVEABLE_OBJECT.

This is where ZCL_MOVEABLE_OBJECT starts to grow and to serve more than just one purpose.

It is better to leave this task to repositories, as mentioned before. Repositories control how entity objects are created or stored in the database. Hence, our SAVE implementations should move to their appropriate repositories.

Storage Bin Repository with SAVE-method.PNG
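
A rough sketch of the repository outlined in the diagram above; apart from SAVE( ) and the class names, the method and parameter names are assumptions.

  CLASS zcl_storage_bin_repository DEFINITION.
    PUBLIC SECTION.
      METHODS get_by_key
        IMPORTING iv_bin_id             TYPE char10   " assumed key type
        RETURNING VALUE(ro_storage_bin) TYPE REF TO zcl_storage_bin.
      METHODS save
        IMPORTING io_storage_bin TYPE REF TO zcl_storage_bin.
  ENDCLASS.

  CLASS zcl_storage_bin_repository IMPLEMENTATION.
    METHOD get_by_key.
      " read from the database (or an instance-level cache)
      " and build the entity object from the result
    ENDMETHOD.

    METHOD save.
      " the persistence logic that used to live in ZCL_STORAGE_BIN->SAVE( ),
      " e.g. calling an update function module or executing Open SQL
    ENDMETHOD.
  ENDCLASS.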

Testability

Repositories manage the access to entities. Usually, they are seldom referenced by entities, since the classes that implement these entities should expect other entity instances wherever possible. Sometimes you cannot get an instance directly, for example if the method’s signature needs to stay the same and expects only the ID of the referenced entity from its callers.

Take a look at +SET_DESTINATION(IS_DESTINATION : Z_STORAGE_BIN) of ZCL_MOVEMENT_ACTIVITY again. In case the key of the storage bin is already included in the structure Z_STORAGE_BIN and you have a repository for this entity, ask the repository for an instance inside the method.

However, the repository needs to be registered in the activity instance beforehand. This could happen either via constructor injection or via a RESOLVE_DEPENDENCIES( ) method which is called in the constructor.

This method may acquire the instance of the storage bin repository by either calling the constructor, a factory or a static method of ZCL_STORAGE_BIN_REPOSITORY which implements the singleton pattern. If you share only one instance of ZCL_STORAGE_BIN_REPOSITORY in your application you may also utilize caching of objects on instance level.

In order to unit test ZCL_MOVEMENT_ACTIVITY you will need sample storage bins as well as sample movable objects. Creating these storage bins and moveable objects will likely force you to set up database table entries for each test, which is quite a big effort.

Instead, it is usually a far better idea to describe repositories by specific interfaces.

Storage Bin Repository with interface.PNG

This way you can access the repository via its interface, without caring about implementation details in your client classes. The implementation behind this interface can easily be mocked, either by using a mocking framework or manually.
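
A minimal sketch of such an interface and a hand-written mock for the unit test; everything except ZCL_STORAGE_BIN is an assumed name.

  INTERFACE zif_storage_bin_repository.
    METHODS get_by_key
      IMPORTING iv_bin_id             TYPE char10
      RETURNING VALUE(ro_storage_bin) TYPE REF TO zcl_storage_bin.
  ENDINTERFACE.

  * hand-written test double, local to the test include
  CLASS ltd_storage_bin_repo_mock DEFINITION FOR TESTING.
    PUBLIC SECTION.
      INTERFACES zif_storage_bin_repository.
      DATA mo_bin_to_return TYPE REF TO zcl_storage_bin.
  ENDCLASS.

  CLASS ltd_storage_bin_repo_mock IMPLEMENTATION.
    METHOD zif_storage_bin_repository~get_by_key.
      " always return the prepared test instance, regardless of the key
      ro_storage_bin = mo_bin_to_return.
    ENDMETHOD.
  ENDCLASS.

In the productive code, ZCL_MOVEMENT_ACTIVITY would then only hold a reference of type ZIF_STORAGE_BIN_REPOSITORY, so the test can inject the mock, for example via the constructor.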

On the other side, IoC containers may serve as service locators in a RESOLVE_DEPENDENCIES( ) method and replace the productive implementation of a repository with a mocked repository in a unit test. The mocked repository might, for example, always return the same instance of a storage bin, no matter what is really stored in the database.

Warning: using IoC containers as pure service locators is quite dangerous, as you would start to call the container all over the place. Instead it is a better idea to declare every dependency via dependency injection (constructor- or setter-based) in the entity. This dependency will be satisfied by the entity’s repository, which in turn declares its own dependencies, for example other repositories. The goal of IoC containers is to create the backend objects like repositories and satisfy their dependencies automatically for you. In a clean design there is no need to create entities with the help of IoC containers, because there are already repositories for this task, but this would certainly be a topic for a future blog post.

 

Result

In the end, if all client calls can be adjusted, the resulting UML diagram could look like this. Of course such a result would be the optimal and most optimistic scenario, but in most cases you can come close to this result by utilizing RESOLVE_DEPENDENCIES( ) methods as described in the previous sections, without changing the old API at all. This solution wouldn’t be clean, but it would be much better than the current design and certainly more robust.

The goal is not to change the design of a working application without a specific purpose. But when enhancements have to be made, it is often better to first improve the design where you see obstacles to implementing the desired functionality.

Afterwards, the enhancement can be implemented faster and more robustly, with less testing effort and fewer side effects. New developers understand the task of each class more easily, at the cost of a steeper learning curve at the beginning in order to get the whole picture of entity classes, repositories and cross-cutting concerns like IoC containers or mocking frameworks.

Eventually, unit testing of certain components becomes easier. These unit tests contribute to software quality as well as, to some extent, to documentation, since a unit test exercises an object the way it is intended to be used (or not to be used).

Sample Legacy Application after Refactoring.png

Is SAP NW EhP 3 really Non-Disruptive?


Everyone talks about non-disruptive evolution of software systems - but what about the ABAP application server? Is non-disruptiveness really the most important development guideline? In this blog entry I look at the facts.

 

The ABAP Application Server contains the technological basis of SAP Business Suite. It has a set of technical tools in software component SAP_BASIS, among them

  • UI technologies like ABAP Dynpro Controls, BSP and ABAP WebDynpro
  • SOA frameworks and a local integration engine
  • frameworks for business rules like BRFplus
  • SAP Business Workflow
  • frameworks for output management technology and document archiving and integration

The software component SAP_ABA is more useful for rapid development of business applications because it is a treasure chest of reuse tools. If you are curious, I recommend one of my previous blogs.

 

In fact, I was already looking forward to AS ABAP 7.03, which is also shipped as part of SAP NetWeaver 7.31. The components SAP_BASIS and SAP_ABA 7.31 contain many useful tools, and they are part of SAP ERP 6.06 and CRM 7.2, which is necessary for SAP Business Suite on HANA.

 

For me, the implementation of SAP NW 7.31 turned out to be painful, and without help from SAP it could have become a nightmare. The reason for this was that SAP deleted many development objects, which caused syntax errors in custom code. The good news: SAP was fair, especially if you think of Note 109533 - Use of SAP function modules.

 

Evolution of AS ABAP and Stability of Software Components

 

Was it just bad luck? Let’s look at the facts. Do you know how many development objects of SAP NetWeaver have been deleted recently? If you don’t, you should have a look at the number of development objects (TADIR entries) of the software components SAP_BASIS and SAP_ABA in the following picture:

01.JPG

The SAP_BASIS component got bigger, while the SAP_ABA component even shrank. So I counted TADIR objects again and learned that from 7.0 to 7.31 more than 11,000 development objects of SAP_ABA have been deleted! 2,328 deletions came from the deletion of a whole framework: see Note 838772. This is a drastic incompatibility, but I don’t consider it harmful: ABAP developers should only use SAP frameworks that they know. No reliable ABAP developer should reuse recklessly and randomly. If you hired one of those developers, then you had bad luck, and the future of your custom development is in danger.
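
For readers who want to reproduce such a count themselves, here is a simple sketch, assuming the standard link between TADIR (repository objects, field DEVCLASS) and TDEVC (packages, field DLVUNIT = software component). Comparing releases of course requires running the count in both systems.

  * Count the repository objects of one software component in the current system
    DATA lv_count TYPE i.

    SELECT COUNT( * )
      FROM tadir AS t
      INNER JOIN tdevc AS d ON t~devclass = d~devclass
      INTO lv_count
      WHERE d~dlvunit = 'SAP_ABA'.

    WRITE: / 'TADIR objects in SAP_ABA:', lv_count.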

 

Implementation of SAP NetWeaver Ehp 3 Considered Painful

 

As I already mentioned, I had a bad experience with SAP_BASIS and SAP_ABA 7.31, but your experience may have been different. So let’s talk about facts and count deletions of development objects in terms of the transparent table TADIR. Winston Churchill once remarked that the only statistics you can trust are those you falsified yourself. You may judge whether this holds for me; I think I was quite fair:

  • I don’t count the elements of the above-mentioned deleted framework, because I consider it a deployment error.
  • Furthermore, I don’t consider deletions which are downports to a deeper software component.
  • And last but not least, I count only development objects that were shipped with NW 7.0; I don’t count deletions of objects shipped with NW EhP 1 or EhP 2.

Nevertheless, the number of deletions is dramatic: from NW 7.0 to NW 7.31, 3.25% of SAP_BASIS was deleted and 7.98% of SAP_ABA. You can see this in the following picture:

02.JPG 

This is quite a lot, especially if you compare it with other software components of SAP Business Suite and the Industry Solutions, which are displayed in the following diagram.

03.JPG

 

What can we learn from this picture? SAP Business Suite is quite stable and has only between 0.2% and 1.4% deletions. Please compare this with the deletions in SAP NetWeaver: we have 5.8% and 8% deletions in SAP_BASIS and SAP_ABA, and please remember that I was fair and didn’t count a huge framework which was deployed by accident.

 

You may ask why I am being that fair and didn’t count a huge set of deletions. SAP NetWeaver has had a hard time in the last years:

  • SAP NetWeaver had to find its place between onPremise and onDemand
  • SAP shut down the development line 7.10, 7.11, (7.20), 7.30 and 7.31 and did downports to 7.01 and 7.02, and now 7.31 is a consolidation release.

Under these circumstances I am quite impressed that SAP managed to keep SAP NetWeaver quite stable. In the meantime, the software components of SAP Business Suite and the Industry Solutions got bigger and bigger, as you can see in the following picture:

04.JPG

 

I wasn’t surprised by this, because SAP shipped many Enterprise Services as an SOA layer, which usually causes an inflation of data types.

 

The Consequences of Incompatible Changes

 

Every deletion is an incompatible change (the opposite is not true; there may be many more incompatibilities in SAP NetWeaver), and every incompatible change may cause trouble for customers and partners. The consequences are severe:

  • reduced stability - incompatible changes can come with SPs and not only EhPs
  • maintenance costs - you have to correct the error, test and ship the corrections
  • implementations of SPs will get more costly with many side-effects like security risks

 

This is really strange: SAP does a great job of making SAP Business Suite on HANA a non-disruptive change, while SAP developers of SAP NetWeaver and the reuse layer seem to perform a “spring-cleaning” which thwarts these efforts.

 

I don’t think this is consistent: SAP Business Suite as well as solutions from the SAP ecosystem build a software pyramid. If the technological basis, SAP NetWeaver, is not stable, then the stability of the whole solution becomes questionable.

05.JPG

So if the priority of SAP Business Suite development is to avoid any disruption, the same should hold for SAP NetWeaver development.

 

What should you do right now?

 

My first advice: don't panic. If you have a bigger custom development project, I suggest you set up a sandbox before doing a software refresh and test your applications using the Code Inspector. Don't try to avoid a NetWeaver EhP implementation; this won't help you, because many nasty deletions have been shipped in SPs, and I recommend a high SP level for many reasons anyway (think of security, for example).

 

Let’s Talk about Disruptions

 

SAP Business Suite on HANA is released; please read Thorsten Franz's great blog about it. The SAP ecosystem can only follow this strategy if SAP NetWeaver development stops making incompatible changes; otherwise the whole HANA strategy of SAP Business Suite will become questionable because of disruptiveness.

 

On the other hand, the code of SAP NetWeaver is very old, and I expect its evolution will come to a point where incompatible changes are necessary, because it has to face challenges like SAP HANA, mobile and cloud.

 

SAP has a lot of experience with stability and governance processes: think of BAPIs, Enterprise Services and so on. ABAP has a package concept and package interfaces, and the SAP Code Inspector can check the usage of obsolete elements. So the technical infrastructure for a governance process is there; SAP only has to establish it.

Sharpen your ABAP Editor for TDD - Part III.


Sharpen your ABAP Editor for Test-Driven Development (TDD) - Part III.

How to optimize your Performance by using the new ABAP Editor

 

http://www.sxc.hu/pic/m/g/ge/gerard79/1327908_the_maze_3.jpg

 

[image from http://www.sxc.hu/pic/m/g/ge/gerard79/1327908_the_maze_3.jpg  - copyright www.digital-delight.ch]

Navigation is quite easy... really, is it easy?

 

In my previous blogs I described what is necessary to sharpen your ABAP Editor for successful TDD. In this blog I will show you:

  • the split-view option
  • creating your own local bookmarks
  • search and replace of code

 

Navigation Options in the new ABAP Editor

 

Every developer has his or her own process for developing software. Over time I have collected some interesting tips. I have tried to implement some methods and techniques in my daily work as an agile developer. Some work for me and some work for others.

 

And here are the topics:

  • the split-view option
  • creating your own local bookmarks
  • search and replace of code

 

The Split-View Option

 

The split view is only available if your code has more lines than your display can show...

 

 

If you don't see the split-view button, then put some blank lines into your code:

 

 

 

and then it is possible to use the split-view option. Click on the top-right button and drag it to the middle of your displayed code.

Like in the following pics:

 

 

 

When you have finished the operation, your ABAP Editor has two areas:

Often I arrange my productive code in the top panel and my local test classes in the bottom panel.

This gives me the opportunity to scroll or jump with the bookmark shortcuts in one area while the other area stays frozen.

 

 

 

Creating local Bookmarks

 

When I create a structure during TDD, my own bookmarks help me to navigate really fast in my code.

Creating local classes requires you to create a definition and an implementation part.

You can imagine that when your code base grows, scrolling can become very annoying.
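
For orientation, this is the kind of skeleton the bookmarks described below refer to: the definition and implementation part of a local production class and of a local ABAP Unit test class (the class and method names are just placeholders).

  *----- production code: definition part (bookmark 0) ------------------
  CLASS lcl_player DEFINITION.
    PUBLIC SECTION.
      METHODS score RETURNING VALUE(rv_points) TYPE i.
  ENDCLASS.

  *----- production code: implementation part (bookmark 1) --------------
  CLASS lcl_player IMPLEMENTATION.
    METHOD score.
      rv_points = 0.
    ENDMETHOD.
  ENDCLASS.

  *----- test code: definition part (bookmark 8) ------------------------
  CLASS ltc_player DEFINITION FOR TESTING
    DURATION SHORT RISK LEVEL HARMLESS.
    PRIVATE SECTION.
      METHODS score_is_initially_zero FOR TESTING.
  ENDCLASS.

  *----- test code: implementation part (bookmark 9) --------------------
  CLASS ltc_player IMPLEMENTATION.
    METHOD score_is_initially_zero.
      DATA lo_player TYPE REF TO lcl_player.
      CREATE OBJECT lo_player.
      cl_abap_unit_assert=>assert_equals( act = lo_player->score( ) exp = 0 ).
    ENDMETHOD.
  ENDCLASS.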

 

To create your own bookmarks, go with the mouse to the left column pane, as you can see here.

Please note that I create my bookmark in the definition part of my production code:

 

 

Then choose a bookmark ID you like:

 

 

After selecting a bookmark ID you get a blue flag with the number 0 on the left-hand side:

 

 

Again, create the next bookmark (here with bookmark ID no. 1), this time in the implementation part of your local class.

 

 

OK, that's great! Let us create the bookmarks in our local ABAP Unit test classes.

I will shorten it a little here. Create bookmark no. 8 for the definition part of your test class.

And bookmark no. 9 for the implementation part of your test class:

 

 

Now fast navigation is really easy. Press the shortcuts CTRL + 0 or CTRL + 1 for the production code and CTRL + 8 or CTRL + 9 for your local test classes, and you jump immediately to the place you want!

 

Search and Replace of Code

 

In TDD small iterations are recommended. After the unit test (red) and a little coding to pass the test (green) you come to the refactoring part of TDD (blue). For the refactoring part I realized that it is important to master the search and replace mechanism of your editor.

 

Navigate to the place where you want to replace some code.

Click with the mouse on the code part (for example a variable) and also press the CTRL key. After this the variable is selected (and also copied internally). Now right-click and the context menu will be shown. Here choose the Replace item:

 

 

The following Replace dialog is displayed. You see that your variable is automatically filled into the "Find what" field. Check that the checkbox "Match whole word" is marked. Now fill in your new variable name in the "Replace with" field.

Here I would suggest that you press the Replace button repeatedly until all variables are replaced. By the way, you can also choose the Replace All button, but be aware: if you have another method with the same variable name, all existing places will be changed automatically.

 

 

Another option is to use Find Incremental (shortcut CTRL + I) to search for names.

 

 

Imagine you have a lot of variables named PLAYER_ plus something else, and you wish to look only at the code where PLAYER as one whole word is meant. So press CTRL + I and start typing PLAYER; the first matching place is found. Now press CTRL + I again and you will be navigated to the next match.

 

 

 


  My experience is..

 

I realized that the combination of all these navigation tips makes me really fast.

I realized that I use them unconsciously, and this enables me to concentrate on the relevant part of my daily programming tasks.

Try out the tips and see if they speed up your work.

 

 

And your experience?

 

What is your experience with using the above Tips?

What was good and what could be implemented better?

Do you have something different that works for you?

 

  Further Information

 

Blogs in this Series:

    

Bring Data to Life – Integrating D3.js in SAP via RESTful Web Service


Originally Published at:

 

 

 

Displaying data using graphics like bar charts, stacked charts, pyramids, and maps is more appealing as well as fun to work with. Recently I came across the great library D3.js, which is based on JavaScript. D3.js is very powerful and provides many different types of graphical options which we can use.

 

 

 

Introduction

After doing quite a lot of research I came across a tutorial on creating bar charts. The example works excellently as long as the data is embedded or you fetch the data and pass it to your HTML page. As a first thought, I started looking into exposing SAP data using web services, but I haven’t been able to make that work yet. So I selected the second best option, which I knew would work: using a RESTful web service to build the entire page, along with the data, as the response of the service.

 

Intro to D3.js

D3 supports HTML5, which is a much more powerful flavor of HTML. D3 has many APIs which can be used to create fancy graphics using HTML5 tags, CSS and JavaScript. To be able to use and view the output generated using D3.js, your browser must support HTML5 (Chrome, Mozilla, Opera, and IE 8 onwards). I will use the SVG component of HTML5. D3 definitely supports traditional HTML, but it is much easier with HTML5. To start with, you need to include the D3.js library in your HTML page. After that you need to bind some data in JavaScript. You can assign the data directly to an array and start using the APIs. I am putting this data into JSON format first, as I am planning to expose the data as JSON from SAP. I then parse the JSON to build up the required arrays of data.

 

Example to create Vertical Bar Charts

This code creates rect elements, calculating their height and width, adds the values on top of the bars, draws a line underneath the chart as a base, and adds labels below the line to complete the chart.

 

var obj = eval("(" + document.getElementById("json-data").innerHTML + ")");
var jdata = obj.cdata;
var data = new Array();
var datal = new Array();

for (i = 0; i < jdata.length; i++) {
  data[i] = jdata[i].VALUE;
  datal[i] = jdata[i].LABEL;
}
console.log(datal);

var w = 30,
    h = 400;
var max_data = Math.max.apply(Math, data);
max_data = max_data + 1;

var x = d3.scale.linear()
    .domain([0, 1])
    .range([0, w]);

var y = d3.scale.linear()
    //.domain([0, 25])        // min and max of the data set
    .domain([0, max_data])    // min and max of the data set
    .rangeRound([0, h]);

var chart = d3.select("body").append("svg")
    .attr("class", "chart")
    .attr("width", w * data.length - 1)
    .attr("height", function() { return h + 40; });

chart.selectAll("rect")
    .data(data)
  .enter().append("rect")
    .attr("x", function(d, i) { return x(i) - .5; })
    .attr("y", function(d) { return h - y(d) - .5; })
    .attr("width", w)
    .attr("height", function(d) { return y(d); });

chart.append("line")
    .attr("x1", 0)
    .attr("x2", w * data.length)
    .attr("y1", h - .5)
    .attr("y2", h - .5)
    .style("stroke", "#000");

// values on top
chart.selectAll("text")
    .data(data)
  .enter().append("text")
    .attr("x", function(d, i) { return x(i) + w / 2 + 5; })
    .attr("y", function(d) { return h - y(d) - 10; })
    .attr("dx", -3)             // padding-right
    .attr("dy", ".35em")        // vertical-align: middle
    .attr("text-anchor", "end") // text-align: right
    .style("fill", "blue")
    .text(String);

// labels at bottom
chart.selectAll("text1")
    .data(datal)
  .enter().append("text")
    .attr("x", function(d, i) { return x(i) + w / 2 + 10; })
    .attr("y", function() { return h + 20; })
    .attr("dx", -3)             // padding-right
    .attr("dy", ".35em")        // vertical-align: middle
    .attr("text-anchor", "end") // text-align: right
    .text(String);

 

The logic would create an output like this. If you don’t see this output, you might NOT be using an HTML5-compatible browser. This is not an image but an IFRAME running a test page to generate the bar chart.

 

I have used the example code for this tutorial, which is available at Bar Chart - 1 and Bar Chart - 2.

 

Introduction to RESTful WS

A RESTful WS is a service implemented over HTTP using REST principles. In a RESTful WS, you pass arguments in the URI itself, like http://something.com/PARAM1. You extract this parameter or set of parameters (PARAM1/SUBparam1/Text1) and perform the desired operation: GET, POST, PUT or DELETE. With GET, you read the data and send back a response. With POST, you post data into the system where the RESTful WS is implemented. Read more on RESTful WS and Real Web Service with REST and ICF.

 

Step by step guide on Creating RESTful web service

1. Create the RESTful WS in SAP: You can create the service in transaction SICF. Create a new service underneath the node default_host/sap/bc. You may want to create a new node whenever you are creating a new RESTful WS, as you can create different services underneath that node. Don't use the node default_host/sap/bc/srt/, as it would require SOAMANAGER configuration, which we are not going to have for our service.

Place the cursor on the desired node and select New Sub-Element. Enter the name of the service in the next popup. In the subsequent screen, enter the description.

WS_initial_popup.jpg

 

2. Enter the logon data: Select the Logon Data tab and enter the required user credentials.

WS_User_Credentials.png

 

3. Enter the request handler: In the Handler tab, assign a class which will be used to handle the HTTP requests coming from the web. This class needs to implement the interface IF_HTTP_EXTENSION in order to gain access to the request and provide the required method. Press F1 to learn more about the handler class.

Class_F1_Help.png

 

Add the interface IF_HTTP_EXTENSION to your handler class, implement the method, and activate the class.

Handler_Class.png

 

Add the class in the Handler tab of the service definition. You can create as many handler classes as you want and assign them in the Handler tab. All the classes listed here will be processed in sequence.

WS_Handler_Class.png

 

Locate your service in the service tree and activate it.

 

Web Service Testing

Once the class is active, put an external breakpoint in the method implementation. Locate your service and select Test Service from the context menu. The system will stop at your breakpoint.

WS_Initial_Test.png

You can note down the URI or URL and call the service directly. Make sure you remove the client from the URL. For now, we add this code to our handler class to interpret the request and send our response back. It sends an HTML response with the text Hello ... from Restful WS. At a high level, the code does this:

  • Extract the Parameters from URI
  • Do the logic
  • Build the response HTML

 

  DATA: lv_path  TYPE string,
        lv_cdata TYPE string,
        lv_param TYPE string.
  DATA: lt_request TYPE STANDARD TABLE OF string.

* get the request attributes
  lv_path = server->request->get_header_field( name = '~path_info' ).
  SHIFT lv_path LEFT BY 1 PLACES.
  SPLIT lv_path AT '/' INTO TABLE lt_request.
  READ TABLE lt_request INTO lv_param INDEX 1.

* build the response
  CONCATENATE
    '<html>'
    '<head>'
    '<title>Success</title>'
    '</head>'
    '<body>'
    `<h1>Hello ` lv_param ` from Restful WS</h1>`
    '</body>'
    '</html>'
    INTO lv_cdata.

* Send the response back
  server->response->set_cdata( data = lv_cdata ).

 

 

Putting it all together

Let's use the RESTful web service created in the previous step. You need to implement the logic below to prepare the entire HTML, with the data and the D3.js JavaScript code, and send it back as the response.

 

Parse the URL

Get the value from the URL. You can use the method GET_HEADER_FIELD of the REQUEST attribute of the SERVER object. For the demo, I pass an integer as the URI parameter and use it to determine the number of bars to create.

* get the request attributes
  lv_path = server->request->get_header_field( name = '~path_info' ).
  SHIFT lv_path LEFT BY 1 PLACES.
  SPLIT lv_path AT '/' INTO TABLE lt_request.
  READ TABLE lt_request INTO lv_param INDEX 1.

 

HTML template

First of all you need to load the HTML template. Creating the HTML from scratch in ABAP would need a lot of concatenation. Instead you upload the file in SMW0 and read its content to make up the full output HTML. You place a placeholder in the template file, which you later replace with your JSON data. The template file has everything: CSS, the JavaScript to load D3.js, and the API calls to D3.js. So if any change other than the data is required in the HTML output, it needs to be done in the template. You can definitely create this from scratch or put in more placeholders to make it more dynamic.

 

Load the template d3_bar_chart using transaction code SMW0. Use the option "HTML templates for WebRFC application". Use FM WWW_GET_SCRIPT_AND_HTML to get the HTML template content.

 

Exposing Data as JSON

For demo purposes, I just create some random data, but you can definitely prepare actual data and convert that to JSON. To build the JSON, I have used the utility json4abap. Download the class and include it as a local class in the HTTP request handler. Prepare your data and convert it to JSON.

 

Build Final HTML

Replace the placeholder in the HTML template with the JSON data. You need to convert the JSON string to a table compatible with the HTML table. After that, FIND the placeholder and insert the JSON data in its place. Generate the HTML string and send it back as the response.

 

Code Snippet

Method IF_HTTP_EXTENSION~HANDLE_REQUEST of the class ZCL_TEST_D3_DEMO_HANDLER

METHOD if_http_extension~handle_request.

  DATA: lv_path  TYPE string,
        lv_cdata TYPE string,
        lv_param TYPE string.
  DATA: lt_request TYPE STANDARD TABLE OF string.
  DATA: lv_times TYPE i.

* get the request attributes
  lv_path = server->request->get_header_field( name = '~path_info' ).
  SHIFT lv_path LEFT BY 1 PLACES.
  SPLIT lv_path AT '/' INTO TABLE lt_request.
  READ TABLE lt_request INTO lv_param INDEX 1.

* convert to number
  TRY.
      lv_times = lv_param.
      IF lv_times GE 20.    " avoid misuse
        lv_times = 20.
      ENDIF.
    CATCH cx_root.
      lv_times = 5.
  ENDTRY.

* Get HTML data
  lv_cdata = me->prepare_html( lv_times ).

* Send the response back
  server->response->set_cdata( data = lv_cdata ).

ENDMETHOD.

 

Method PREPARE_HTML of the class ZCL_TEST_D3_DEMO_HANDLER

 

METHOD prepare_html.

  TYPE-POOLS: swww.

  TYPES: BEGIN OF ty_data,
           label TYPE char5,
           value TYPE i,
         END OF ty_data.
  DATA: ls_data TYPE ty_data,
        lt_data TYPE TABLE OF ty_data.

  DATA: lr_json TYPE REF TO json4abap,
        l_json  TYPE string.
  DATA: lt_json TYPE soli_tab.

  DATA: template   TYPE swww_t_template_name,
        html_table TYPE TABLE OF w3html.

  DATA: ls_result     TYPE match_result.
  DATA: lt_html_final TYPE TABLE OF w3html.
  DATA: lv_to_index   TYPE i.
  DATA: lv_total      TYPE i.
  DATA: ls_html LIKE LINE OF lt_html_final.

* Some dummy data. This can be real data based on the parameters
* added in the URL
  CALL FUNCTION 'RANDOM_INITIALIZE'.
  DO 3 TIMES.
    CALL FUNCTION 'RANDOM_I4'
      EXPORTING
        rnd_min = 0
        rnd_max = 20.
  ENDDO.

  DO iv_times TIMES.
    ls_data-label = sy-index + 70.
    CONDENSE ls_data-label.
    CALL FUNCTION 'RANDOM_I4'
      EXPORTING
        rnd_min   = 0
        rnd_max   = 20
      IMPORTING
        rnd_value = ls_data-value.
    APPEND ls_data TO lt_data.
  ENDDO.

* Create JSON
  CREATE OBJECT lr_json.
  l_json = lr_json->json( abapdata = lt_data
                          name     = 'cdata' ).

* convert string to 255
  lt_json = cl_bcs_convert=>string_to_soli( l_json ).

* Get template data from SMW0
  template = 'ZDEMO_D3_BAR_CHART'.

  CALL FUNCTION 'WWW_GET_SCRIPT_AND_HTML'
    EXPORTING
      obj_name         = template
    TABLES
      html             = html_table
    EXCEPTIONS
      object_not_found = 1.

* Merge data into output table
  FIND FIRST OCCURRENCE OF '&json_data_holder&'
    IN TABLE html_table
    RESULTS ls_result.

  lv_to_index = ls_result-line - 1.
  lv_total = lines( html_table ).

  APPEND LINES OF html_table FROM 1 TO lv_to_index TO lt_html_final.
  APPEND LINES OF lt_json TO lt_html_final.
  ls_result-line = ls_result-line + 1.
  APPEND LINES OF html_table FROM ls_result-line TO lv_total TO lt_html_final.

  LOOP AT lt_html_final INTO ls_html.
    CONCATENATE rv_html_string ls_html-line
      INTO rv_html_string.
  ENDLOOP.

ENDMETHOD.

 

Output

When you execute the URL with any integer value, you will get output like this. Doesn't it look great?

Testing_1.png

 

You can play around with the number in the URI to generate a different number of vertical bars.

Check Printing - MICR


Recently we had a requirement to print MICR characters on checks and had to do a little bit of research to achieve this. Key points and learnings are noted below.

 

SAP provides two good and comprehensive resources on this topic:

 

http://help.sap.com/erp2005_ehp_04/helpdata/en/b7/2326ceac7e11d299750000e83dd9fc/content.htm

 

SAP Note 94233

 

Important points are noted below

 

1. First and foremost, the printer should be equipped with the necessary hardware.

 

2. As specified in the SAP help, we need to use the correct MICR font type and size. For printing special characters, equivalent characters need to be used. For example, Transit is replaced by D in the Smart Form.

 

3. SAP provides a standard text SAPSCRIPT-MICRTEST to test whether MICR characters are printed properly.

 

4. The printer needs to be defined in SAP with access method G or F, and the device type needs to correspond to the printer being used.

 

We can only verify that the MICR characters are printed properly by printing on the actual printer itself; the print preview will not show the characters.

 

Hopefully this will be useful for people who are starting afresh on MICR Printing.

How to disable component display in Material Tab of Subcontracting PO type in Tcode ME22N


Hi Expert,

I am a new ABAPer. Currently I have an MM issue in which the component display in the Material tab of the subcontracting PO type shows the BOM as EDITABLE; I want to make it non-editable.

How can I make it non-editable? Please help me.

Thanks in Advance.

Essential Basis for SAP (ABAP, BW, Functional) Consultants Part-III


This blog is the third part following the previous two blogs (I suggest reading those blogs first to be in sync):

Essential Basis for SAP (ABAP, BW, Functional) Consultants Part-I

Essential Basis for SAP (ABAP, BW, Functional) Consultants Part-II

 

Here are some of the remaining topics:

------------------------------------------------------------------------------------------------------------------------

Let's take a fresh look at the Developer Key and Object Key: if we know how to check the installation number of any SAP system, then we can verify that the installation number remains the same across that landscape (for example ECC-Sandbox, ECC-Dev, ECC-Quality, ECC-Test, ECC-Production etc.). This means that if you have a developer key for your sandbox system, the same key can be used in your development system.

 

How to check the installation number?

11.DeveloperKeyI.png11.DeveloperKeyII.png

 

Now, what information is required to generate a Developer Key?

1. SAP User Name 2. Installation Number.

In all systems where these two pieces of information are the same, the developer key will be the same!

11.DeveloperKeyIII.png11.DeveloperKeyIV.png

 

For the object key? Notice the 5 marked pieces of information; again, the installation number is the part which may vary for an object. So the next time you get prompted for an object key, before sending a request to the Basis team, you can check your mail archives and find out whether this information was the same or different. If it was the same, then you have saved your time.

------------------------------------------------------------------------------------------------------------------------

The next topic is RFC connections:

12.RFC01.png

RFC connections in an SAP system are used to connect with another SAP system or a non-SAP system. The above screenshot shows the different types of RFC connections.

Type I - Internal connections are used when one program calls another program in the same system.
Type 3 - Used when a connection to another SAP system is required (ABAP-to-ABAP connection); this connection requires information like host/IP, instance number, client, user ID and password. What to do if this connection is not working? Go to SM59 and double-click the connection:

12.RFC03.png

From the above screenshot, one can find out whether the target SAP system is up or not by clicking on Connection Test; with Authorization Test we can find out whether the user ID and password maintained in the connection are fine or not.

 

There are two more connection types which are used quite often: HTTP and TCP/IP connections. You need a URL and a user ID/password in order to use an HTTP connection, which means Java-based systems where a login with user ID/password is possible can be connected... easy peasy!

 

I am more interested in discussing the TCP/IP connection, which does not restrict you with respect to the system type, be it another SAP system, any third-party or legacy system or whatever, as long as it talks TCP/IP... and... and... its development team has developed some kind of interface, after consulting with SAP, which accepts a user ID/password and other related information about the SAP system and which has the ability to register a program at the gateway of said SAP system.

 

Meaning? One has to go to that third-party system and find the interface which has the ability to register a program on the SAP system's gateway.

What's special about this registered program? Well, this guy knows how to talk to its mother system (the legacy, non-SAP system), and by registering at your SAP gateway you have now given it permission to stand at the main gate of the city (the SAP system); whatever you convey to it, it will pass on to its mother system (the non-SAP system) and it will also bring back the response. These program names may be case-sensitive too, so watch out.

12.RFC05.png

These registered programs can always be checked from Basis transaction SMGW -> Goto -> Logged On Clients

 

If for some reason this connection stops working and it is not showing in the gateway list, what people generally do is ask the Basis team to register this program again on the gateway. The sincere Basis guy (who only has access to his SAP servers) will register the program from the same SAP system with the given program ID on its own gateway by following the process mentioned in SAP Note 63930 (Gateway registration of RFC server program).

Now the program ID will start showing in the gateway list again and, guess what, the connection test will start working again... BUT here is the problem:
this new guy (the registered program ID), standing at the main gate of the city and listening to your messages, has no idea where else to go in the desert (outside of the SAP system)... He is not an outsider. Whatever you say to him, he will nod his head like a dummy, but nothing will actually happen in the other system. Well, you know what I mean.

 

RFCDES is the table which stores most of the information related to RFC connections.

 

In transaction SM59 you will notice there are some connections which cannot be edited... well, open the non-editable connection by double-clicking it in SM59, and in the top-left command box type TOGL and press Enter. This does not mean you should change those values.

 

------------------------------------------------------------------------------------------------------------------------

Let's conclude this blog with the last topic: printers and the never-ending correction of format issues.

13.Printer.png

This always happens when a form does not get printed as expected. In order to reach the solution, a clear idea of the flow is necessary. Here goes the story...

 

When a print is triggered inside the SAP system, the print data is collected and sent to the spool server (which is nothing but an SAP system with dialog and spool work processes). A dialog work process of the spool server forwards the print/spool data to the spool database for temporary storage: a spool request has been generated!

The spool work process generates an output request from the existing spool request; a device type is used to format/convert the spool request into a data stream that the printer understands. The spool work process forwards the output request to the operating system spooler. The main task of the OS spooler is to manage the wait queue and transfer the data to the physical printer.

 

Once the great philosopher Aristotle said: if thou art not getting the desired format of thy printout, then thou shouldst be looking at the coding beneath the form where the format is inscribed; otherwise it is up to the "device type".

 

We cannot discuss the print format coded in the 'Z' forms here, so the next best thing is the device type.

 

When a printer is defined by a Basis person in an SAP system, they assign an appropriate device type to that printer. The selection of the device type depends on the model of the printer, the language in which the printout is required, or whether it is a PDF document or barcode etc. In most cases one can find a specific device type for a specific printer model. One should not confuse the device type with the printer driver. The printer driver is installed at the operating system level (say you've got a printer at home; then you install the printer driver in your Windows/Mac system).

 

The device type provides the information the SAP system needs to control the output device correctly. A device type has the following attributes:

 

  • Character set: By now we all know that, with the help of the device type, a spool work process converts a spool request into an output request, which means that from one spool request you can generate different types of output requests by selecting different SAP printers. The character set is responsible for converting the characters used internally by the SAP spool system into the corresponding characters of the output request.
  • Printer driver: In the device type one can specify different printer drivers for printing SAPscript documents and ABAP lists.
  • Print controls: These control font size, bold, italic etc. in a way the printer can understand.
  • Formats: Formats specify the formats supported by the SAP system. The system differentiates between SAPscript formats (DINA4 and LETTER) and ABAP list formats (X_65_132 = 65 rows/132 columns). There is also a special format for additional print options (format POSS).
  • Page format: A page format is the interface between a format and SAPscript. It specifies the paper dimensions with which SAPscript can calculate the row and column lengths.
  • Actions: Actions are output device-specific commands that are required for the implementation of a format.

 

Let's conclude the printer topic here.

 

Okay... by now this blog has become considerably dense. I read it again in order to cut it short, but trimming would take away the protein. This leaves me a bad editor; with this confession I am signing off... see ya...

 

Your comments/suggestions/corrections are most welcome. You can ask your queries here.


Excel with Colour - Background Emailing - History


Hi, I am Joffy, working as an SAP Technical Consultant.

It is always a challenge when clients come up with Excel-related requirements for reports from SAP.

Most of this information is already known, but I thought of writing my first blog to share my experience.

 

Earlier in my career, in 2005, we did an interfacing project for a prestigious Indian client which taught me a lot about background and foreground mode execution. That client was using a treasury application, so they needed purchase order, sales order, letter of credit and invoice information from SAP in their dot-net-based treasury application, where all their financial accounting activities were handled. We started with the function modules WS_DOWNLOAD / WS_UPLOAD for pushing the file to that third-party application and finally realized that this would not work in background mode, as these modules download/upload to/from the presentation server, i.e. our own client PC. We then used the OPEN DATASET option and successfully completed the project by putting the files in an application server location, from where the third-party solution fetched the required file via FTP and pulled back the return log file.
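
A minimal sketch of that application server approach; the file path and the internal table lt_output are placeholders, and error handling is omitted:

  * Write the interface file to the application server; a background job can
  * produce it there and the third-party system can fetch it via FTP.
    DATA: lv_file TYPE string VALUE '/interface/out/po_data.txt',
          lv_line TYPE string.

    OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT lt_output INTO lv_line.   " lt_output: internal table with the file lines
      TRANSFER lv_line TO lv_file.
    ENDLOOP.
    CLOSE DATASET lv_file.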

 

Then, for a US client, we had the requirement of pushing data to a bank's FTP location periodically. That helped me understand that the FTP option is possible in background mode, as explained in my own wiki post.

 

http://wiki.sdn.sap.com/wiki/display/ABAP/FTP+file+transfer+in+Background

 

Recently our client needed a report to be broadcast as email in multi-coloured Excel format periodically. There can be many easy solutions if we use some other SAP approach... but with core ABAP... So I was again in search of various options, discussed with friends, searched the net and finally thought it was not possible to produce coloured Excel in the background, other than with the foreground OLE technique. Then I came across the wiki link below, based on XML, which changed my whole understanding.

 

http://wiki.sdn.sap.com/wiki/display/Snippets/Formatted+Excel+as+Email+Attachment
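The core idea of that XML-based approach (this is only a rough illustration, not the code from the wiki; the style names and cell values are invented) is that the Excel content is built as SpreadsheetML text, where colours are just style definitions, so no OLE and no frontend are needed:

data lv_excel_xml type string.

concatenate
  '<?xml version="1.0"?>'
  '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"'
  ' xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">'
  '<Styles>'
  '  <Style ss:ID="sHead">'
  '    <Interior ss:Color="#FFFF00" ss:Pattern="Solid"/>'
  '    <Font ss:Bold="1"/>'
  '  </Style>'
  '</Styles>'
  '<Worksheet ss:Name="Report">'
  '  <Table>'
  '    <Row>'
  '      <Cell ss:StyleID="sHead"><Data ss:Type="String">Customer</Data></Cell>'
  '      <Cell ss:StyleID="sHead"><Data ss:Type="String">Amount</Data></Cell>'
  '    </Row>'
  '  </Table>'
  '</Worksheet>'
  '</Workbook>'
  into lv_excel_xml.

The resulting string can then be converted to binary (for example with cl_bcs_convert=>string_to_solix) and attached as an .xls document to a CL_BCS mail, which also works in a background job.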

 

I applied it to my requirement, which was a hierarchical ALV output, and finally I could run it in background mode too.

My client is happy, and so am I.

Sharing my Excel output too.

output.jpg

 

So what I have learned is that most of the information is already out there.

We just have to find the right solution and use it. And the learning curve never ends for a technical consultant.

Report as HTML attachment - why one of the image is disappearing?

$
0
0

Hi, I am Joffy, working as an SAP Development Consultant.

This blog is about an issue that we faced recently, which many others have faced before, but whose solution is not clearly documented anywhere. One cute bug.

 

We were developing a new "report as HTML attachment" emailing requirement, where we get the report as a list object by submitting the report and exporting it to memory,

Capture.PNG

and later converting it to HTML to generate the attachment content.

html.jpg

But during user acceptance testing, a missing-image error was noticed, as shown below.

Capture.PNG

 

What happened to that image? After checking the source of the HTML page, we got a clue about the missing icon image.

 

img.PNG

So the possible solutions are:

1) Download the icon image and include that file path in the HTML generation code.

2) Filter the <IMG> tags out of the generated HTML.

The code we used for removing the image tag came from an attached code.txt file; a rough sketch of the idea follows.
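A minimal sketch of option 2, assuming the generated HTML is available in a string variable:

data lv_html type string.

* lv_html contains the generated HTML of the list output.
* Remove all <IMG ...> tags so that no unreachable image references remain.
replace all occurrences of regex '<IMG[^>]*>' in lv_html with '' ignoring case.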

 

We used the second solution, as SAP has adopted the same approach in some of its HTML-based mail-sending function modules.

This may be a small issue, but it has a lot of technical aspects involved.

Finally the issue was resolved and our utility is live in PRD.

How to find transport Request….

$
0
0

I have found that most people create transport requests for different objects and then forget the request numbers. If they haven't maintained a proper description for their requests, this can become a headache. I hope this will be helpful for my fellow ABAPers. Functional consultants can use the third method.

There are three main methods through which we can find a transport request. I will show all of them with screenshots.

 

   1.  In most ABAP transactions, such as SE11, SE80, SMARTFORMS, SFP, CMOD, SE51, etc., we can find the transport request for a developed object through this method.

 

    Go to SE11 and enter the table name.

1.png

    Go to the Goto menu and select Object Directory Entry.

2.png

    Click Lock Overview.

3.png

     Double-click the Task/Request number, highlighted in the image.

4.png

    And you are done. This is your transport request. Now select it, release it in SE10, and transport it to the next system in the landscape via STMS.

55.png

     You can use the same method in SE80, as shown in the figure.

5.png

     We can use it for SMARTFORMS as well.

6.png

 

 

    2.   The second method is mostly used for code, but we can use it in other places as well, where method 1 does not give a result.

    Go to SE80, open your program, and go to Utilities. Select Versions and then go to Version Management.

7.png

      Double click this number.

8.png

     This is your Transport request.

9.png

 

 

    3. The third method is the most powerful one; through it you can find all transport requests. Basically, there is a database table where you enter the object type and its name, and you will find its request number. This method can be used by functional consultants as well.

 

        In our example we will find the transport request for workflows.

       Go to SE16. Open table E071.

 

10.png

    Select Object type and press F4. It will show you the list of all object types. Search for the desired one; in our case it is PDTS.

11.png

     Give the object name. In our case, customized workflows start with 999, so I enter 999* and execute.

12.png

    This is your transport request list with your object names and their types.

13.png
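If you prefer code over SE16, the same lookup can be done with a small report; the object type and name pattern below are just the example values used above:

report z_find_transport_request.

data: begin of ls_e071,
        trkorr   type e071-trkorr,
        object   type e071-object,
        obj_name type e071-obj_name,
      end of ls_e071,
      lt_e071 like standard table of ls_e071.

select trkorr object obj_name
  from e071
  into corresponding fields of table lt_e071
  where object   = 'PDTS'
    and obj_name like '999%'.

loop at lt_e071 into ls_e071.
  write: / ls_e071-trkorr, ls_e071-object, ls_e071-obj_name.
endloop.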

Starting a report out of a feature (SAP-HCM)

$
0
0

I want to start a report out of a feature and I don't know if it is possible and, if so, how to do it. Can anybody help? Regards, Stefan

Manual Version Generation for objects

$
0
0

As we all know, in SAP the version management of an object is tracked against the transport request in which it is saved.

We can retrieve the changes of a particular version, which gets generated automatically when a transport request is released.

But apart from the currently active version, no such version exists until the object's transport request has been released.

version.png

But when we are working on new developments or enhancements to existing logic, where program code is fine-tuned often, it is better to generate a version manually before a large program change, so that we can retrieve the previous version if the new code changes have to be rolled back.

 

Generate Version.png

This manual version generation saves time and makes tracking easier compared to manual back-up options.

It is a good way to prevent issues related to accidental overwriting or deletion.

Error Handling of Outbound Proxy Calls

$
0
0

Some time ago I wrote a weblog series about error handling in SOA scenarios, especially when calling server proxies in inbound scenarios. I discussed features of NW EhP 1 resp. 2 (Forward Error Handling – A short look at SAP Business Suite Ehp 4). Michal Krawczyk did the same in his SCN blog: PI/XI: Forward Error Handling (FEH) for asynchronous proxy calls with the use of Error and Conflict Handler (ECH) part 2. In this blog instalment I want to discuss the outbound scenario: in ABAP you can perform SOAP calls to Web Services outside the SAP system. Every ABAP server can do this by calling client proxies. You don't need special middleware like SAP PI, although it makes sense if you consider administration and governance.

 

What are the use cases of client proxy calls?

  • Passing data to an external system that persists those data.
  • Retrieving results of a complex calculation performed in an external system.
  • Implementation of asynchronous web services (server proxies) that give the result back using an outbound proxy call. In fact this is the recommendation for how an SAP Enterprise Service for update/change services should work.

 

Please note that the latter scenarios can occur in online and batch processes, in every scenario (A2A, B2B and so on), and even in the case that an external system wants to read SAP data asynchronously, which is sometimes done if an external system reads data from many data sources in an AJAX-like way.

This blog discusses only one topic: what should you do if a service call fails? This shouldn't happen often if system administrators do a good job, but it is possible in the following cases:

  • The Web Service runtime of an ABAP system is misconfigured.
  • The logical port of the client proxy is misconfigured. In this case the instantiation of the client proxy will fail.
  • The external system isn’t reachable.
  • The external system returns a SOAP fault message.

 

When looking deeper at this list we should distinguish the synchronous and asynchronous case:

  • The error handling for a synchronous client proxy call is just the same as for any other function module call: we have to implement error handling ourselves (a minimal sketch of such a call follows this list).
  • In the asynchronous client proxy call we should expect that the called system has a forward error handling mechanism (see Forward Error Handling – Part 1: Outline for an explanation). By the way: the case that an external system isn't reachable is easy to handle in an asynchronous scenario, because the ABAP server has a local integration engine which can be switched on if you connect it to a PI (you can also switch it on and use it without an external PI). In this case the outgoing message will be persisted in a queue after the end of the LUW (COMMIT WORK) and resent afterwards using the transactions of the local integration engine (transaction SXMB_MONI).
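To make the synchronous case concrete, here is a minimal sketch of a client proxy call with error handling; the proxy class, the method and the logical port name are invented for illustration:

data: lo_proxy type ref to zco_hypothetical_service,   " generated client proxy class (invented name)
      lx_fault type ref to cx_ai_system_fault,
      lv_msg   type string.

try.
    " The logical port name is an assumption; logical ports are maintained in SOAMANAGER.
    create object lo_proxy
      exporting
        logical_port_name = 'Z_LP_EXAMPLE'.

    " Call the service operation generated into the proxy (invented name).
    lo_proxy->do_something( ).

  catch cx_ai_system_fault into lx_fault.
    " Technical fault: misconfigured port, system not reachable, SOAP fault, ...
    lv_msg = lx_fault->get_text( ).
    message lv_msg type 'I'.
  catch cx_ai_application_fault.
    " Application fault raised by the service provider.
    message 'Service returned an application fault' type 'I'.
endtry.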


A frequent error will probably be caused by a misconfiguration of the logical port of the client proxy.

 

A bad advice

Transaction SPROXY has very good inline documentation that explains Web Service implementation, and I really appreciate the depth of the explanations. Unfortunately, it contains one error that I believe SAP will correct soon. Let's have a look at the inline documentation that explains how to call a client proxy:

clientproxy.jpg

Let’s have a deeper look at the last sentences:

 

 

Important Note: Consumer Proxy in Update Task

If the application data is persisted in an update task, then the XI outbound call also needs to be made in the update. Otherwise, if the update is canceled, the qRFC queue may become blocked due to the unscheduled update task. Since the queues are interlinked, this can have a significant effect on the whole of outbound processing.

 

This recommendation is obviously wrong and can't work for synchronous client proxy calls that use the result of the proxy call for their own calculations. One reason is very simple: the update module is performed after the COMMIT WORK and has no access to the local memory of the application. But even in the rare cases in which it could be possible, you shouldn't do it, and I will explain why in the next section.

 

Avoid complex code in update modules!

The first lesson every ABAP programmer learns is to keep an update module simple – it should be so robust that it is extremely unlikely ever to fail. We should keep to this rule for many reasons – let me mention only a few:

  • A robust application should ensure that after a COMMIT WORK every update is performed correctly.
  • Crashed update modules are hard to analyze and to debug in transaction SM13. The user of an SAP transaction usually can’t do this.
  • The error context is hard to analyze because it is not easy to see what a user who is using a SAP transaction wanted to do and what he was working on.
  • The best applications give the user information about wrong input and other error situations so that the user can react, which is not possible if the error occurs after the end of the LUW.

 

These are only a few reasons and I could write more about it: there are forbidden commands in update modules, you have to take care of the order of execution (V1, V2 and V3), the user context and so on, but I don't have to, because the lesson is simple: please avoid complex update modules!

The best update modules are short and simple and fit on one screen. They don't call many modularization units, and you can check within a short time that there are no forbidden commands in the code and that no harmful enhancements will ruin the execution.

 

A call of a client proxy is a very complex operation that shouldn't be performed in an update task. Adding more business logic to an update module makes the situation even worse. That's why I don't consider this a feasible programming model.

 

What should you do?

As I already mentioned above, the case of synchronous calls is comparable to an RFC call, and this case is well understood, so I don't discuss it here.

 

At first let’s discuss the recommendation of SAP mentioned above for enterprise services:

 

 

If the application data is persisted in an update task, then the XI outbound call also needs to be made in the update. Otherwise, if the update is canceled, the qRFC queue may become blocked due to the unscheduled update task. Since the queues are interlinked, this can have a significant effect on the whole of outbound processing.

 

 

First of all, it is possible to persist the application data of an enterprise service in an update task and call the client proxy in the same LUW directly (i.e. outside an update module), and this is what you should do. If the client proxy is called asynchronously, the scenario is very robust, and errors will occur only in the case of misconfigured logical ports. If you want to prevent misconfiguration you should work with UDDI and service groups resp. PI, and such misconfiguration will become unlikely. Even if there are network problems or an external system is down, all outgoing messages will be queued and can be resent later using the transactions of the local integration engine.

 

But how should your application react if an error occurs (think of a misconfigured logical port) so that the client proxy can't be instantiated and the outgoing message can't be queued in the local integration engine? In asynchronous scenarios I recommend persisting the application data and sending the response resp. confirmation later as a retry, with one of the following strategies:

  • Use a Web Service error handling framework like AIF or ECH. It is not a standard scenario in ECH, but it is nevertheless possible.
  • If the proxy call fails you can add the call to an event queue to perform a retry.

 

If you use an event queue (there are many of them: qRFC, bgRFC, BOR events for example) for a retry mechanism, you should ensure the following:

  • The administration reports of the event queue make it possible to analyze even hundreds of errors and give the administrator a hint what to do (which logical port has to be reconfigured for example).
  • The error messages should be standardized and meaningful.
  • The administration reports allow performing mass retries and should allow a developer to debug the execution of an event.

This strategy can be feasible for sending unidirectional messages (information messages and notifications), too, but you can also try to treat them like synchronous calls, depending on the application context.

 

Summary

Let me summarize the most important recommendations of this blog:

  • Try to avoid complex logic in update modules.
  • Try to avoid complex logic in update modules even if SAP’s online documentation recommends it.
  • Make your SOA application stable and robust using an integration engine. If misconfiguration is the most frequent problem (think of changes of external systems) then an enterprise service bus can prevent a chaos of P2P connections.
  • When developing Enterprise Services get familiar with error handling tools like AIF or ECH.

How to trigger a BADI When SAVE Data button is clicked in EPM Excel sheet

$
0
0
I am working on a product development in the SAP BPC/EPM module which interacts with SAP BPC/EPM, BI and ABAP. I need to collect data from the EPM Excel sheet and do some calculations to derive new dimension values.

There is an option of triggering a BAdI (UJ_CUSTOM_LOGIC) from BPC Script Logic via a process chain and Data Manager package, as mentioned in the attached file (How to Pass parameters to custom logic BADI using START_BADI - Custom Logic BADI).

But we have a requirement to trigger the BAdI after clicking the EPM Save Data option on the EPM tab, without a process chain and Data Manager package.

Is there any option, how-to guide, or any other help file? Please share your ideas/inputs.
Regards,
Ramatli.

Don't Tell Me What We CAN'T Do In SAP!

$
0
0

computer-code.jpg

I think now is a good time to bring up my biggest workplace pet peeve. It is when people tell me about what we can't do in SAP. When I say SAP, I'm talking about the SAP ERP product and the capabilities of the NetWeaver platform. Even though my examples are focused on this platform, I have noticed similar attitudes about other technologies as well. People telling me what we can't do have been around me my entire career, and I have noticed that some people find it easier to blame the technology instead of admitting their own shortfalls.

 

Prove It

 

The first project I was involved in, at my first job out of college, was a large implementation project where I had a non-developer role mainly focused on testing and working with business users. We had a custom interface built to connect to another system, and it was having major issues. I was not familiar with ABAP, but I noticed that the issue could be resolved with a semaphore (as used in C++), so I made the suggestion, only to be told that "there is no semaphore in ABAP". Which is technically true, except what I was suggesting is part of the SAP locking system and is just not called a semaphore. The same person told me that there "is no boolean in ABAP", which made me wonder how anything worked at all. That was my first experience of "experts" telling me what we can't do in SAP. I think that was the most extreme case, and now that I am an ABAP developer it is a lot easier for me to separate the bull from reality, because I can prove that we can do it.

 

Solve It

 

I think that a lot of my experience with this issue comes as a result of people getting used to doing things a certain way and thinking that way is the only way. Soon, the shortcomings of doing things that particular way become shortcomings of the technology. As someone who works in IT, it seems like the only thing that is truly constant is change. However, people will continue to fight change. Even taking advantage of modern ABAP programming can be controversial to some.

 

Have you ever had similar experiences where someone has said that something can't be done in SAP or even some other technology, when you knew it could? How have you handled these situations?

Three different ways to serialize and deserialize complex ABAP data

$
0
0

If you've ever written an RFC-enabled function that transfers a structure, then you've probably seen a code inspector error like this:

 

RFC structure enhancement - syntax check warning.png

 

"The type "BKPF" or the type of one of its subcomponents can be enhanced. An enhancement can cause offset shifts when the RFC parameters are transferred."

 

So, what is that all about? Well, what it is saying is that the structure, BKPF in this case, can change. Your system can get upgraded, and one of the fields can get bigger or smaller. Or new fields can be added to the structure. And when that happens, if you forget to make the same change in the calling system, it may be unable to handle these "offset shifts". And this could result in all sorts of unexpected behaviour. Like truncated fields, or values that end up in the wrong field. So, ideally we would like some way to make things more robust than that.

 

One way we can make things more robust is to 'serialize' data to a single string. So, for instance, we could replace the structure in our function interface with a single variable of type STRING. Let's write a test ABAP program to demonstrate this idea:

 

report  z_xml_demo.

types: begin of ty_start,
         mychar(10) type c,
         mynum(5)   type n,
         myint      type i,
       end of ty_start.

types: begin of ty_fin,
         mychar     type string,
         mynum      type string,
         myint      type string,
       end of ty_fin.

data ls_start type ty_start.
data ls_fin   type ty_fin.
data lv_xml   type string.

ls_start-mychar = 'A1B2C3D4E5'.
ls_start-mynum  = 987654.
ls_start-myint  = 1234567890.

call transformation demo_asxml_copy source root = ls_start
                                    result xml lv_xml.

write / 'XML looks like this:'.
write / lv_xml.

call transformation demo_asxml_copy source xml lv_xml
                                    result root = ls_fin.

write / 'Final structure:'.
write / ls_fin-mychar.
write / ls_fin-mynum.
write / ls_fin-myint.

 

Note, the above code depends on the XML transformation DEMO_ASXML_COPY. If it isn't on your system, you can create a transformation that is the same as DEMO_ASXML_COPY:

 

<?sap.transform simple?>
<tt:transform xmlns:tt="http://www.sap.com/transformation-templates">
  <tt:root name="ROOT"/>
  <tt:template>
    <node>
      <tt:copy ref="ROOT"/>
    </node>
  </tt:template>
</tt:transform>

Note that this is a "Simple SAP transform" type of transformation.

 

So, what does the XML produced by our example program look like? Well, it looks like this on my system:

 

<?xml version="1.0" encoding="iso-8859-1"?>#<node><MYCHAR>A1B2C3D4E5</MYCHAR><MYNUM>87654</MYNUM><MYINT>1234567890</MYINT></node>

 

In theory you could do something similar with your own hypothetical RFC function. If, say, you had an RFC function with an exporting structure, and that structure had three fields, or it had a complex nested data structure, you could replace it with just a single string.

 

And how would things be more robust? Well, if you look again at the example ABAP code, you'll see that all of the fields in the final structure are of type STRING. This isn't accidental; doing so means the code can accommodate changes in field length. Imagine, say, that we made the fields in the final structure exactly the same as they were in the starting structure. So, say the "mynum" field also was a numeric field and had a length of 5. What would happen then if we 'upgraded' our design and decided that a length of 5 was too short and made the length in the starting structure 10, but we forgot to do the same in the final structure? Well, things wouldn't work, as '1234567890', which would have come from our starting structure, won't fit when we try to squeeze it into a field in the final structure that only has a length of 5 digits! In fact, if you test this out, you'll get a dump stating "Value loss during allocation". Making all the final fields of type STRING means that they will always expand to accept the values given to them, and makes the solution more robust.

 

Another way in which serialization makes things more robust is that it accommodates changes when new fields are added to the starting structure. So, even if we were to add a new field, "mynew", to the starting structure:

 

types: begin of ty_start,
         mychar(10) type c,
         mynum(5)   type n,
         myint      type i,
         mynew(15)  type c,
       end of ty_start.

 

Things would still work.

 

And if we were to remove a field from the final structure, for instance removing the "mynum" field:

types: begin of ty_fin,
         mychar     type string,
         mynum      type string,
       end of ty_fin.

 

And if we messed with the sequence of fields? You guessed it, "things would still work!"

 

There is another benefit to serialization and that is that the calling system needn't be an ABAP system - XML is a well known standard for data transfer between all kinds of systems.

 

Some folks (myself included) prefer JSON to XML, and there is even a page out there which calls it the "Fat-Free Alternative to XML". Like XML, JSON has the advantage of being text-based and position-independent. In ABAP the standard classes CL_TREX_JSON_SERIALIZER and CL_TREX_JSON_DESERIALIZER can be used for conversion between ABAP data types and JSON. You may not find CL_TREX_JSON_DESERIALIZER on older systems though - it can be found on a NW 7.30 system, but I can't see it on my 7.0 system.

 

So, here is the earlier example written to use JSON that makes use of the above classes:

 

report  z_json_demo.

types: begin of ty_start,
         mychar(10) type c,
         mynum(5)   type n,
         myint      type i,
       end of ty_start.

types: begin of ty_fin,
         mychar     type string,
         mynum      type string,
         myint      type string,
       end of ty_fin.

data ls_start type ty_start.
data ls_fin   type ty_fin.
data lv_json  type string.

data lr_json_serializer   type ref to cl_trex_json_serializer.
data lr_json_deserializer type ref to cl_trex_json_deserializer.

ls_start-mychar = 'A1B2C3D4E5'.
ls_start-mynum  = 987654.
ls_start-myint  = 1234567890.

create object lr_json_serializer
  exporting
    data = ls_start.
lr_json_serializer->serialize( ).
lv_json = lr_json_serializer->get_data( ).

write / 'JSON looks like this:'.
write / lv_json.

create object lr_json_deserializer.

lr_json_deserializer->deserialize(
  exporting
    json = lv_json
  importing
    abap = ls_fin ).


write / 'Final structure:'.
write / ls_fin-mychar.
write / ls_fin-mynum.
write / ls_fin-myint.

The JSON output looks like this:

 

{mychar: "A1B2C3D4E5", mynum: "87654", myint: "1234567890 "}

 

The JSON is beautiful, don't you agree?

 

Finally, what do you do if you need to serialize binary data? The other day, we had the requirement to send binary attachment data (in the form of XSTRING fields) between two systems via an RFC call. And we could have had more than one attachment, so we were working with an internal table of XSTRING fields. In this case, because the data was binary an alternative approach to using XML or JSON had to be used.

 

The ABAP command:

 

export lt_complex_table_containing_xstrings to data buffer lv_xstring.

 

was used for our serialization, and similarly deserialization was achieved with the command IMPORT ... FROM DATA BUFFER. Note that the serialized field in this case is of type XSTRING and not STRING. Also, the documentation for this command does state that "the undefined content of alignment gaps in structures can result in different data clusters with structures that otherwise have the same content", so this method of serialization is not as tolerant of changes in field position. However, it worked flawlessly for our data in an internal table that could have had one or fifty rows.
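A minimal sketch of that round trip (the table and field names are made up for illustration):

types: begin of ty_attachment,
         filename type string,
         content  type xstring,
       end of ty_attachment.

data: lt_attachments type standard table of ty_attachment,
      lt_copy        type standard table of ty_attachment,
      lv_buffer      type xstring.

* Serialize the complete internal table, including the XSTRING fields.
export attachments = lt_attachments to data buffer lv_buffer.

* Pass lv_buffer through the RFC interface; on the receiving side:
import attachments = lt_copy from data buffer lv_buffer.
if sy-subrc <> 0.
  write / 'Data cluster could not be imported'.
endif.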

 

So there you have it: three different ways to serialize and deserialize complex ABAP data: XML, JSON and EXPORT TO DATA BUFFER.

 

My challenge to you is to describe all the other methods that you know about in the comments below so that we've got a comprehensive list in one place!

Get Data From Two Tables in a report.

$
0
0

Here is a very simple example of getting data from two tables in a report.

 

*&---------------------------------------------------------------------*
*& Report  ZFIRAPJPSLEDGER2                                            *
*&---------------------------------------------------------------------*

report zfirapjpsledger2 line-size 99 line-count 39(4)
                        no standard page heading.

* Tables used in the program.
tables: bsid, kna1.

***********************************************************************
* Selection options
***********************************************************************

data: begin of itab occurs 0,
        kunnr like bsid-kunnr,
        dmbtr like bsid-dmbtr,
        name1 like kna1-name1,
        ort01 like kna1-ort01,
      end of itab.

* Select-option for program selection.
select-options: custmer for bsid-kunnr.

initialization.
  custmer-sign   = 'I'.
  custmer-option = 'BT'.
  custmer-low    = '1'.
  custmer-high   = '5000'.
  append custmer.

* Start-of-selection event.
start-of-selection.
  select bi~kunnr bi~dmbtr ka~name1 ka~ort01
    into corresponding fields of table itab
    from bsid as bi left outer join kna1 as ka
      on bi~kunnr = ka~kunnr
    where bi~kunnr in custmer.

  loop at itab.
    write: / itab-kunnr under 'Customer', 14 sy-vline,
             itab-dmbtr under 'Closing Amount', 34 sy-vline,
             itab-name1 under 'Name', 64 sy-vline,
             itab-ort01 under 'City'.
  endloop.

end-of-selection.

* Top-of-page event.
top-of-page.
  write: /20 'Customer closing amount details' color 2.
  uline.
  write: /  'Customer', 14 sy-vline,
         15 'Closing Amount', 34 sy-vline,
         35 'Name', 64 sy-vline,
         65 'City', sy-uline.
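On newer releases (ABAP 7.40 and above), the same join can be written more compactly with the new Open SQL syntax and inline declarations; this is only a rough sketch, not tested against the exact report above:

select bsid~kunnr, bsid~dmbtr, kna1~name1, kna1~ort01
  from bsid
  left outer join kna1 on kna1~kunnr = bsid~kunnr
  where bsid~kunnr in @custmer
  into table @data(lt_result).

loop at lt_result into data(ls_result).
  write: / ls_result-kunnr, ls_result-dmbtr, ls_result-name1, ls_result-ort01.
endloop.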

[Web Dynpro ABAP] Dynamic Display with Query String

$
0
0

This is a blog post showing how we can make use of the Query String, aka URL parameters, in Web Dynpro ABAP. The Query String is useful for Web Dynpro applications that need to be displayed dynamically based on the given URL, e.g. a URL link to a Web Dynpro in email notifications.

 

In the example below, you will see how the Query String can be used in Web Dynpro ABAP. Before deciding to use the Query String, security should be the main concern: if users know how the Query String works, there is a chance that they will access information they are not authorized to see. Therefore, an authorization check should be in place at the start of the Web Dynpro.

 

Example Scenario

A Web Dynpro that displays Flight Details dynamically based on Query String given.

 

Steps How To

First, in the Web Dynpro component, create a View that can display SPFLI as a form and SFLIGHT as a table.

Blog001 - Pic01.jpg

Next, create a Window and add the above View. Go to Inbound Plugs and double-click Default. An event handler HANDLEDEFAULT will be created. Create CARRID and CONNID as parameters, which will be filled from the Query String. Use the parameters' values to retrieve the data, which is then bound to the context (a rough sketch of such a handler follows the screenshot below).

Blog001 - Pic02.jpg
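Roughly, the event handler could look like the sketch below; the context node names SPFLI and SFLIGHT are assumptions based on the view described above:

method handledefault.

  data: ls_spfli   type spfli,
        lt_sflight type standard table of sflight,
        lo_node    type ref to if_wd_context_node,
        lv_carrid  type spfli-carrid,
        lv_connid  type spfli-connid.

  " CARRID / CONNID arrive as importing parameters of the inbound plug,
  " filled from the query string of the URL.
  lv_carrid = carrid.
  lv_connid = connid.

  select single * from spfli into ls_spfli
    where carrid = lv_carrid and connid = lv_connid.

  select * from sflight into table lt_sflight
    where carrid = lv_carrid and connid = lv_connid.

  " Bind the data to the context nodes used by the view (node names are assumptions).
  lo_node = wd_context->get_child_node( name = 'SPFLI' ).
  lo_node->bind_structure( new_item = ls_spfli ).

  lo_node = wd_context->get_child_node( name = 'SFLIGHT' ).
  lo_node->bind_table( new_items = lt_sflight ).

endmethod.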

Next, create a Web Dynpro Application and execute it. You will notice that no data is populated, as there is no Query String.

Blog001 - Pic05.jpg

Now try with a Query String e.g. “?carrid=AA&connid=0017”.

Blog001 - Pic03.jpg

Voila, and now try with another Query String e.g. “?carrid=SQ&connid=0158”.

Blog001 - Pic04.jpg

Let’s look at the backend. As you can see, CARRID and CONNID are populated based on the Query String.

Blog001 - Pic06.jpg

 

I hope this blog post gives you an idea of how to use the Query String in Web Dynpro ABAP. Do feel free to share with us how you use the Query String in different scenarios.

 

In the next blog post, I will show you all on how to create Password Reset Web Dynpro with Email Authentication using the same Query String concept. Cheers!

SAP Continues Disrupting Itself

$
0
0

As you might know, I'm an enterprise architect working for an ISV that is a special expertise partner of SAP in insurance. For our business, reliability and dependability are as important as innovation. We believe that HANA is a cornerstone of our IT strategy, but there are other important topics, too: we need to improve the automation of business processes, we connect the IT systems of stakeholders in the area of health care, we need better support for telemedicine and of course mobile solutions. To master these challenges we need the SAP NetWeaver platform and to benefit from its technical evolution.

 

In one of my last blogs I discussed some problems with the latest developments of the SAP NetWeaver platform: SAP delivered lots of deletions, as you can read here. In fact it was even worse, because some deletions have been ported back to earlier releases. So even if you decide not to install an Enhancement Package, some of these problems will be shipped to you in SPs, which you will install to get bug and security issues fixed.

Recently we have had a lot of trouble with disruptive changes. Fortunately SAP fixed many of these problems, but nevertheless not all issues are solved. It was my hope that SAP would deliver all corrections so that we could avoid any further delays in the future, but this didn't happen. In fact there is a solution, but a unit at SAP doesn't want to ship it in a regular release. So let me summarize:

  • Those changes are completely unnecessary.
  • SAP could solve these issues immediately.
  • I have been talking with SAP managers about it for months, and many of the most urgent problems have been solved, but one severe issue remains unsolved.

“Are they kidding? Go Disrupt Yourselves!”

  This was the first reaction of my colleague Thorsten Franz when I told him of the decision of this development unit of SAP: we should implement the functionality of the deleted function modules in our own namespace. Then we would have to adjust the code in hundreds of locations.

 

Maybe some developers at SAP don't know how an ISV works: we don't just copy some function modules into the Z-namespace, make adjustments, transport them to Q-systems and then into production. As an ISV we create software components the same way SAP does. We have software releases which are shipped to our customers, who install, integrate and test them. We develop two releases a year, which is necessary for legal compliance reasons. If we can't implement software updates within a short period of time we will have a delay in development – moreover, we have limited development resources and could invest them in a better way, say developing solutions for our customers, spending more effort on quality assurance, or innovation.

 

“I wish more companies had an ambitious technology strategy and a culture of innovation like yours.” This is what many people at SAP are telling me – and yes, in the past software updates have been predictable, but now they aren't any longer. This is really a weird story: we are working closely together with SAP to obtain the best results. We are working in various Customer Engagement Initiatives and in the SAP Mentor initiative. SAP always emphasizes that innovation without disruption is their highest priority, and I'm sure that the developers of SAP Business Suite share this paradigm, as do people from server technology development and most people in NetWeaver development – unfortunately there are some development units who spoil our joint efforts. Yes, we want to implement HANA and benefit from enhancement packages, and many people at SAP work hard so that we are successful. And there are others who spoil all these joint efforts.

 

Does Everyone at SAP Understand the Nature of Disruption?

Well, I don’t think so and that’s why I have to explain it so that everyone will understand it:

  • The root cause for any disruption (major or minor) is a technical disruption.
  • When you delete an IDoc segment just to “clean up” then the whole integration scenario of an enterprise architecture can collapse because exchange of master data is crucial for core processes of our business.
  • Compared to this, the change of a UI is a minor disruption, but still severe: customers have to be trained to use the new functions. This can be managed, but needs to be planned. Changing the UI technology is more disruptive, but customers accept it as long as the impact of the disruption is predictable and manageable and provides business value.
  • If a developer at SAP deletes a function module, then a huge solution of a partner resp. customer no longer works and needs to be corrected. Correction isn't always easy, because sometimes we don't know how to correct it. But even if we know, it takes time, and this can become critical: most IT infrastructures are heterogeneous and there are centrally governed change processes. They have releases, and for changes there are only a few time slots, because many organizational units are involved in the change process. If you can't perform these changes in a defined time slot, the whole change request is in danger.

  This is why enterprise architects and IT managers fear disruptions especially when they affect core processes. Sometimes disruption is necessary but it has to be manageable and provide business value.  

 

The fear of Disruption spoils the Success of SAP’s Innovation Strategy

This is a simple truth everyone at SAP knows and any business analyst will confirm. Every executive and most developers at SAP already understand this simple truth. But unfortunately there are some developers who don't seem to care about it. It seems to me that they are only thinking in terms of function modules and are happy if they can raise the proportion of object-oriented code by, say, 0.1 ‰ and delete "legacy code". And yes, since I was a developer for a long time, I appreciate the intention to produce clean code. But these are incompatible changes, and they will produce disruptions.

 

SAP has to establish Standardized and Consistent Development Standards to make Software Updates Predictable

I am worried. Those disruptions have already caused delays, and since not everything is resolved, they will cause further trouble. But it is worse: the consequences of software updates used to be predictable and could be mastered using best practices, but now my predictions and best practices fail, and this is what worries me most.

 

If the development units of SAP NetWeaver at TIP Core continue like this, then we won't be able to follow SAP's innovation strategy, because we would have to spend our time on completely unnecessary work.

SAP's TIP Core has to decide what is more important to them: performing "spring cleaning" in SAP NetWeaver that provides customers absolutely no business value and causes only pain, or providing business value, say by optimizing NetWeaver for HANA. At the moment SAP's strategy is not consistent:

  • SAP can’t encourage customers to spend efforts to follow their innovation strategy on the one hand and create obstacles on the other hand.
  • SAP can’t spend lots of effort to work as enabler for innovation on the one hand while other development units prevent customers from implementing innovations.
  • SAP can’t do both: “spring cleaning” and thorough HANA optimizations of the infrastructure.
  • SAP can't announce that non-disruptiveness is most important and at the same time allow SAP's development units to have different objectives.

 

My message is simple: those unnecessary changes have to stop immediately. Otherwise they will harm the SAP ecosystem and, as a consequence, SAP itself.
