Channel: SCN : Blog List - ABAP Development

ABAP News for Release 7.50 - Fundamentals


This one will be short but fundamental.

Only Unicode Systems in Release 7.50

 

ABAP 7.50 is Unicode only! An AS ABAP can only run as a Unicode system with the system code page UTF-16. All ABAP programs must be Unicode programs in which the Unicode syntax checks are effective. You must always set the respective program attribute.

 

Finally a programming guideline that is gone for good.

 

All the weird non-Unicode stuff is deleted from the  documentation.

 

Nothing more to say from my side.

 

New Built-in Data Type INT8

 

The existing built-in types b, s, i in the ABAP language and the respective built-in types INT1, INT2, INT4 in the ABAP Dictionary got a big brother (or is it a sister?):

 

int8 in ABAP language and INT8 in the ABAP Dictionary.

 

They specify 8-byte integers with a value range of -9223372036854775808 to +9223372036854775807.

 

Not much to say about that either. Simply go and use it if you need big integers. The rules are mostly the same as for 4-byte integers, of course with adjusted values for the alignment requirement (the address must be divisible by 8), the predefined output length on screens (20), and a new calculation type int8 for arithmetic expressions.

 

Just a little example to remind you that it is always good to use an integer calculation type if you calculate integers, especially for big integers:


DATA arg TYPE int8 VALUE 2.

cl_demo_output=>new(
  )->write( |**  : { arg ** 62 }|
  )->write( |ipow: { ipow( base = arg exp = 62 ) }|
  )->display( ).

While ** calculates with the floating point type f and produces a wrong result, ipow calculates with int8 and produces the correct result.

 

PS: Cool way of "writing" isn't it? But only for demo purposes ...


ABAP News for Release 7.50 - Test Seams and Test Injections


Writing ABAP Unit tests can be somewhat cumbersome if the code to be tested is not suited to automatic tests. All the hubbub about mock frameworks or test-driven development isn't worth a cent if you have to deal with code that never came into touch with the concept of separation of concerns. Imagine you have code to maintain that depends on database contents or calls UI screens, and your boss wants you to increase the test coverage of the department - a real-life scenario? Yes, at least in my life. If you cannot redesign and rewrite the whole application, as a workaround you make the code test-dependent. This is regarded as bad style, but it helps.

 

As a simplistic example take a method that gets data from a UI screen but should be tested by a module test. Normally there is no UI available during the test. Setup and teardown methods also do not help as they might do for selecting data from a database by providing test data. A workaround before ABAP 7.50 was a free-style test flag, e.g. as follows:

 

CLASS cls DEFINITION.
  PUBLIC SECTION.
    METHODS get_input
      RETURNING
        VALUE(input) TYPE string.
  PRIVATE SECTION.
    DATA test_flag TYPE abap_bool.
ENDCLASS.

CLASS cls IMPLEMENTATION.
  METHOD get_input.
    IF test_flag IS INITIAL.
      cl_demo_input=>request( CHANGING field = input ).
    ELSE.
      input = 'xxx'.
    ENDIF.
  ENDMETHOD.
ENDCLASS.

 

The test method of a test class that is a friend of the class to be tested can influence the method by setting the test flag.

 

CLASS tst DEFINITION FOR TESTING
          RISK LEVEL HARMLESS
          DURATION SHORT
          FINAL.
  PRIVATE SECTION.
    METHODS test_input FOR TESTING.
ENDCLASS.

CLASS tst IMPLEMENTATION.
  METHOD test_input.
    DATA(oref) = NEW cls( ).
    oref->test_flag = abap_true.
    DATA(input) = oref->get_input( ).
    cl_abap_unit_assert=>assert_equals(
    EXPORTING
      exp = 'xxx'
      act = input ).
  ENDMETHOD.
ENDCLASS.

 

Bad style and not governed by any conventions. To overcome this, ABAP 7.50 introduces the concept of test seams and test injections:

 

CLASS cls DEFINITION.
  PUBLIC SECTION.
    METHODS get_input
      RETURNING
        VALUE(input) TYPE string.
ENDCLASS.

CLASS cls IMPLEMENTATION.
  METHOD get_input.
    TEST-SEAM fake_input.
      cl_demo_input=>request( CHANGING field = input ).
    END-TEST-SEAM.
  ENDMETHOD.
ENDCLASS.

 

With TEST-SEAM - END-TEST-SEAM, a part of the code is defined as a test seam that can be replaced by test-friendly code during testing. No self-defined attribute is necessary and the test class no longer has to be a friend of the class to be tested (as long as only public methods are tested). The test method now might look as follows:

 

CLASS tst DEFINITION FOR TESTING
          RISK LEVEL HARMLESS
          DURATION SHORT
          FINAL.
  PRIVATE SECTION.
    METHODS test_input FOR TESTING.
ENDCLASS.


CLASS tst IMPLEMENTATION.
  METHOD test_input.
    TEST-INJECTION fake_input.
      input = 'xxx'.
    END-TEST-INJECTION.
    DATA(input) = NEW cls( )->get_input( ).
    cl_abap_unit_assert=>assert_equals(
    EXPORTING
      exp = 'xxx'
      act = input ).
  ENDMETHOD.
ENDCLASS.

 

With TEST-INJECTION - END-TEST-INJECTION, a test injection is defined that replaces the test seam of the same name during test execution. A test injection can be empty; it then simply removes the respective test seam during testing. Test injections can be defined in the test includes of global classes and function groups.
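For instance, an empty injection can blank out the UI call entirely during the test run. A minimal sketch, reusing the classes from above:

CLASS tst IMPLEMENTATION.
  METHOD test_input.
    " An empty injection removes the code inside the seam during testing
    TEST-INJECTION fake_input.
    END-TEST-INJECTION.
    DATA(input) = NEW cls( )->get_input( ).
    " input stays initial because the seam was blanked out
    cl_abap_unit_assert=>assert_initial( act = input ).
  ENDMETHOD.
ENDCLASS.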


For more information, more use cases, and more examples see Test Seams.

Dustbins in Las Vegas - Part Two



 

You are probably familiar with crime shows on TV like "CSI: New York". If I were to try and describe TECHED 2015 in a similar fashion it would be "CDS: Las Vegas". The so-called "CDS View" was front of stage the entire time, with UI5 standing behind it, waving over its shoulder, and HANA standing in the background trying to get some attention. This blog is just a list of the notes I made from various sessions, hopefully in a logical order, but possibly not.

 

I know everything changes; everything changes; now I know what I like about You

 

We will start with the minor things about CDS views and then work our way up. The CDS view arrived on the scene with ABAP 7.4, which the vast majority of SAP customers clearly are not on yet, based on a show of hands at various speeches - and the SAP speakers are always shocked, as they have had this for years and so are amazed when real people in the real world are not using such technology. I think my company won't upgrade till ECC 6.0 goes out of support in 2025.

 

Anyway, a really big thing was made about the fact that in the names you give to CDS views you can use "Camel Case". What is that, you may ask? A Camel has been murdered and you are a famous detective trying to work out who committed the crime. Or, possibly, it is naming things in the style of SalesOrderItem as opposed to SALES_ORDER_ITEM. Languages like Java have always used names like that and I think SAP were getting jealous. I am all for making code easier to read but I think you are between a rock and a hard place here.
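To illustrate (the view name here is hypothetical, not one from the sessions), a CDS view defined in a DDL source might look like this:

@AbapCatalog.sqlViewName: 'ZSOITEM'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view Z_SalesOrderItem as select from vbap {
  key vbeln as SalesOrder,
  key posnr as SalesOrderItem,
      matnr as Material
}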

 

In 7.5 the generation (creation) of CDS views in ABAP in Eclipse is a lot better e.g. the lines at the top which you previously had to type in manually are generated for you.

 

Since CDS views are supposed to be the core of everything, it is now possible to call an AMDP from inside a CDS view.

 

When playing in the Web IDE and trying to view the representation of a CDS view, you can get a mockup of the Fiori Launchpad with a tile showing your new application. What use this is I have not a clue.

 

Now, since a CDS view (which is a model in the MVC sense) is the basis for generating a Fiori application, you can put a whole bunch of "annotations" inside it - which seem a lot like view-related data to me, but I am assured they are not - such as whether the UI should allow the user to search on the data. If you put an annotation like @Search in the CDS view then, yes, they can, but you also have to specify for each field whether it can be searched, and the "fuzziness" factor for the search.
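From my notes, the search annotations looked roughly like this (annotation names reproduced from memory, field names hypothetical, so treat this as a sketch):

@Search.searchable: true
define view Z_ProductSearch as select from mara {
  @Search.defaultSearchElement: true
  @Search.fuzzinessThreshold: 0.8
  key matnr as Material
}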

 

In fact what we were shown looked a lot like how you set up the ALV back in 1999 – annotations for which position a field should appear in the UI, can it be totalled and other UI specific things.

 

One other annotation which reflects the new world is you can say how important a field is – that way if the report appears on a phone the less important fields get hidden due to the lack of room.

 

I am a Model, you Know What I Mean

 

James Wood was sitting next to me and asked me – “Should all this information go in the model?”

 

It did seem odd. I would have thought a model just knows about the data and business logic; the purpose of a view is to decide how to display it to a user, which is why you can have various views of the same model. From a technical perspective all the annotation data goes into a separate file, on the grounds that some consumers outside the SAP system don't care about it, i.e. ones that will not display the data.

 

Some years back, it took me a while to get to grips with the "model view controller" concept, where each one of the three components does a separate job. Just to confuse things, SAP has decided that the data model is going to be called a view, i.e. a CDS view. This is because of the historical SE11 data dictionary definition of a database view, which the CDS view evolved out of. Yesterday someone told me that they went to a talk where someone from SAP was talking about "System Landscape Transformation" and said "This is not to be confused with System Landscape Transformation". A redundant warning clearly, who would make such a mistake? It is like the joint product from SAP and Microsoft called "Duet" which is a totally different product from the old one, also called "Duet".

 

Anyway, so the data model is called a view in SAP terms. OK, we can live with that. What is puzzling from some perspectives is all the "annotations" you can put in the CDS view giving instructions to the UI, i.e. the sort of thing you would normally expect the view to take responsibility for. In addition you have a list of commands (actions) you can add to the CDS view which will correspond to the buttons that appear on the screen, sort of like the icons at the top of the screen when you run an ALV report, e.g. export data to Excel or whatever. In a minute we will see this is sort of a controller-like function, in the definition named "view", which is in fact a model.

 

Now, SAP want to have a common programming framework for both transactional and analytic applications and the CDS view plays a core role in this. Traditionally a database view was what it said on the tin – a view of the data, so you can have a look at that data and analyse it. Now we can make a CDS view into a transactional view by writing an annotation at the top to say it is a “#BUSINESS_OBJECT” and another one saying it is write enabled.
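The annotations we were shown went something like the following (annotation names reproduced from memory of the session, so they are approximate - check the documentation before relying on them):

@ObjectModel.modelCategory: #BUSINESS_OBJECT
@ObjectModel.transactionalProcessingEnabled: true
define view Z_SalesOrderTx as select from vbak {
  key vbeln as SalesOrder
}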

 

This generates a BOPF definition, the sort of thing you would normally set up using transaction BOB. Then you can in fact use the BOB transaction to add business logic to the generated BOPF object to perform data validations, fill in derived fields, and - as alluded to earlier - code the logic needed for the actions (commands the user can do during the transaction). The actions themselves are listed in the CDS view and then get automatically generated in the BOPF entity.

 

I then got utterly confused, as it turns out you need another CDS view, this time called a consumption view, which looked the same as the first one to me, but had lines in it saying you could create / read / update / delete instances of the object. The two CDS views and the BOPF entity all fit together somehow.

 

The other day a German gentleman was arguing with me in relation to the BOPF chapter in my book, about the fact that when I was defining a model class I had it fill in the texts for things like sales organisation, material name and so forth. His position was that sort of thing had no place in the model; it was purely a UI function. So I was fascinated as to what SAP’s position on this was. In turns out that in the CDS view you add an annotation to say where to get the text name from. That makes sense to me as you are in effect coding a join between say MARA and MAKT and saying you want both MATNR and MAKTX in your data model.

 

However just to be contrary, at the Fiori Café yesterday I was arguing with a German guy from SAP, arguing the opposite position I usually take, saying that the texts had no place in the model and should live in the view. He managed to convince me otherwise, which is not surprising as that was my real opinion in the first place.

 

Next comes the big change in 7.5 – in the transactions we are used to you either back out or save the new or changed data. You cannot usually save the record to the database in a draft state with 10% of the fields filled out. However in the new SAP world where you are most likely on a mobile device where the connection to the back end drops in and out like a yo-yo, we now have the concept of a draft document. I fill in a few fields of the sales order, it gets saved to the database as I go along (I think) as a draft, and then my connection drops out, an hour later I can get back online on another device, I fill in the rest of the fields and press save, and the draft gets converted into an actual bona fide database record. In the BOPF generated classes you have methods like CREATE_DRAFT, COPY_DRAFT_TO_ACTUAL and LOCK. I am not yet sure how much of the logic is generated for you, I imagine most of it and then you can add anything extra you might need.

 

You may have heard of "Project Objectify" on SCN, created due to the failure of SAP to create a set of business objects representing sales orders, deliveries, purchase orders etc. To be more exact, SAP has tried with SWO1 definitions, BAPIs and the like, but you don't have a class like CL_DELIVERY with life-cycle methods and methods like GOODS_ISSUE.

 

The claim from SAP is that in S/4 HANA they will deliver precisely that. For each business object there will be a CDS view linking the header with the items, and having actions like goods issue. That sounds too good to be true, and has been promised before; we shall see.

 

Like a Dream, a life, a reason everything ABAP must change

 

This morning who should be in the lift with me but Karl Kessler from SAP who writes the “under development” column for SAP Insider magazine. I then went to his two hour talk on the future of ABAP.

 

The first point that came up was that although ABAP 7.5 was released on Tuesday in time for TECHED, it wasn't really - you still cannot download it from the service marketplace. It is aimed at BW initially, and then round about the first quarter of 2016 we will have EHP8, which will deliver ABAP 7.5 to ERP systems. You also need Kernel 7.45 for ABAP 7.5, and that Kernel is not released yet either.

 

Karl asked the audience how many were using 7.4 already, and based on the show of hands it was virtually nobody, so maybe it is a trifle premature to talk about 7.5, but it is as interesting to me as it gets.

 

As we know, virtually all HANA tables are column-based, but when you look at the definition in SE11 you can see there is a radio button to make the table row-based. I wondered about this; apparently it is more efficient to have some customising tables row-based.

 

Next came a list of new features in ABAP 7.5, the SAP examples all look like <X> = (A – B) with no semantic meaning whatsoever, which makes it difficult to get your head around what the new feature does, but after a while I think I get it.

 

For example you can programmatically generate an internal table by looping over another internal table and performing calculations on the fields in the source table to generate fields in the target table.

 

We now have a dynamic "move corresponding" feature where the source and target structures are not known until runtime. Also you can use the new CASE TYPE OF statement to branch based on the type of a variable, e.g. if it is a string do this, if an integer do that. I presume that is aimed at dynamic data objects such as field symbols.
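The type-based branching can be sketched like this for object references (my own minimal example, not from the slides; zcl_my_subclass is a hypothetical class):

DATA oref TYPE REF TO object.
oref = NEW zcl_my_subclass( ).

CASE TYPE OF oref.
  WHEN TYPE zcl_my_subclass INTO DATA(sub).
    " sub is already cast to zcl_my_subclass here, no CAST needed
    sub->do_something( ).
  WHEN OTHERS.
    " fall-back for all other dynamic types
ENDCASE.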

 

When it comes to class-based exceptions, they can now carry a T100 error message when they are raised. That message does nothing by itself (I think); it is like in a function module where you say RAISE EXCEPTION SUCH_AND_SUCH WITH MESSAGE etc., and the message is only output if the exception is not handled, to prevent a short dump. This also enables a "where-used" search for the error message, so you can hunt down the source of the problem. I thought the idea was that a class-based exception could contain a load of information about where it was raised and in what circumstances, but anyway, that is the new feature. I like exceptions which inherit from CX_NO_CHECK, so in the unlikely event of one of my exceptions not getting caught somewhere higher up the food chain, an error message would be better than a dump.
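The 7.5 syntax goes roughly like this (exception class, message class and variable are hypothetical; as far as I know the exception class needs to include IF_T100_DYN_MSG for this short form):

" Raise a class-based exception carrying T100 message E042 of
" hypothetical message class ZORDERS, with one placeholder filled
RAISE EXCEPTION TYPE zcx_order_error
  MESSAGE e042(zorders) WITH lv_order_number.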

 

I knew there was a new class for ALV reports, a successor to CL_SALV_TABLE, which is optimised for HANA. The class is called something like CL_SALV_IDA and once again you can have a report in just one line of code. This time there is a method called CREATE_FOR_CDS_VIEW which does what you might expect. The important thing here is that instead of the whole amount of data requested being in an internal table in memory on the server, all that is in memory on the server is the data that is on the screen. All the grouping, sorting and totalling is done at the HANA database level, and when you press the page-down button more data is retrieved.
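If I noted it correctly, the one-liner goes something like this (class, parameter and view names as I wrote them down, so verify before use):

cl_salv_gui_table_ida=>create_for_cds_view(
    iv_cds_view_name = 'Z_SALESORDERITEM'   " hypothetical CDS view
  )->fullscreen( )->display( ).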

 

In database tables we have foreign key relations, but sometimes it is difficult to spot relationships between tables - sometimes it takes the developer a while to work out the relationship between the tables to do with production orders - AFKO / AFPO / RESB etc. - or the link between fixed assets and the purchase orders used to buy them. CDS views are designed to make such links more obvious - they are supposed to read as "close to conceptual thinking" as possible. The code reads something like "sales_order.customer.address", which represents (to my mind) the link between VBAP, KNA1 and ADRC.

 

So a CDS view is supposed to be a layer above the database tables, giving meaningful names to field names like AUFNR and RGEKZ. In the S/4 HANA system the SAP developers had created 6400 CDS views as at 22/07/2015. The idea is that ABAP programs should only do SELECTs on the views, as this will make them more readable, more like a domain-specific language. As a side effect it is then possible to mess about with the underlying tables, if such tables are never directly read. The example given was getting rid of an ITEM_COUNT field in a header table, redundant in a HANA system. The actual example I can think of is the field in EKKO which says the highest item number in EKPO.

 

I think that in 7.4 a CDS view could have input parameters, but you could only call such a view from within ABAP if you were on a HANA database. This is no longer the case in 7.5: CDS views with parameters are fully supported on all databases. This is quite important; you want to be able to pass selection criteria into such a database view.
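Consuming such a parameterised view from Open SQL looks roughly like this (view and parameter names hypothetical):

" Pass a selection parameter into the CDS view at SELECT time
SELECT * FROM zdemo_orders_since( p_from_date = @lv_date )
  INTO TABLE @DATA(orders).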

 

To be clear about this - note that CDS views work on any database, not just HANA so if you have an Oracle database like my company you will be fine. SAP are clearly hedging their bets here, they want all their customers to move to HANA but realise this will not happen overnight.

 

New York Port Authority Check

 

The next interesting thing inside a CDS view is the authority check. This is called “data control language” (DCL) and reads something like:-

 

DEFINE ROLE MAD_SCIENTIST
    GRANT SELECT
    WHERE ( EVIL_LABORATORY )
    ASPECT PFCG_AUTH( 'EVIL_LABORATORY', *, '03' )

 

What this means is if you run a report against a CDS view and you don’t have authority in your user master for a certain evil laboratory, then records for laboratories you are not authorised to view will not show up. SAP security people should not be worried, the existing authority objects and roles are unchanged, it is just that ABAP developers no longer need to explicitly code AUTHORITY-CHECKS into reports reading from CDS views. The bad news is that the failure does not show up in SU53, though I imagine that will be sorted out in time.

 

In 7.4 the CDS view had not yet assumed its role as the be-all and end-all of everything in the SAP universe, and so there was a clear separation between an ABAP program calling a CDS view (database independent) and an ABAP Managed Database Procedure (HANA only). Now a CDS view can itself call an AMDP, though this will of course invalidate its database independence if the programmer decides to do such a thing.

 

There are three parts to doing this, first you need to code a definition in the CDS view itself, which goes something like as follows (I don’t have the exact syntax, this is from memory):-

 

DEFINE TABLE FUNCTION VILLAGERS_TO_BE_KILLED
  WITH PARAMETERS
    VILLAGE_NAME : abap.string
  RETURNS
  {
    LIST_OF_VILLAGERS : abap.something
  }
  IMPLEMENTED BY METHOD TO_BE_KILLED OF CLASS VILLAGERS

 

Then in your ABAP code you do the following:-

 

CLASS VILLAGERS DEFINITION.
  PUBLIC SECTION.
    INTERFACES IF_HDMP_MARKER.
    CLASS-METHODS VILLAGERS_TO_BE_KILLED FOR TABLE FUNCTION VILLAGERS_TO_BE_KILLED.

 

This has to be a static method. You probably have to say what CDS view you are talking about as well, as I said I cannot remember the exact syntax.

 

The last part is writing the implementation of the method in SQLScript. The code has to start with a list of all ABAP tables you will be reading; the reason given for this is that it helps if "anything changes", as the ABAP system has a bit of a blind spot where code written in other languages lives.

 

Then you call the CDS view by doing a SELECT statement in your ABAP program on the view. You will get a syntax warning telling you this is a statement that will only work on a HANA database, this warning can be suppressed with a Pragma.
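From my notes, the call and the pragma go roughly like this (function and variable names hypothetical; the pragma name is as I noted it, so verify):

" Read the CDS table function; the pragma suppresses the
" HANA-only syntax warning
SELECT * FROM villagers_to_be_killed( village_name = @lv_village )
  INTO TABLE @DATA(villagers)
  ##DB_FEATURE_MODE[AMDP_TABLE_FUNCTION].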

 

Apparently in 7.4 debugging an AMDP was done using a separate tool from the ABAP workbench (which is ABAP in Eclipse in this case). In 7.5 this has been unified: you can only create/view the SQLScript of AMDPs in ABAP in Eclipse anyway, and now you can put in a soft breakpoint, run the transaction, and the debugger will stop inside the database whilst the AMDP is being executed. As I may have said previously, debugging inside the database as opposed to the application server is quite spooky.

 

I thought it was quite funny that Karl Kessler said that writing SQLScript code was no fun at all; copying an existing sample and changing it was far easier. I found that as well in my experiments, the syntax is demanding to say the least; I thought ABAP was bad enough - sometimes demanding spaces between the bracket and the variable, sometimes forbidding spaces - but I did not know when I was well off. It also keeps changing, so if you copy something off the internet or a blog, then it most likely will not compile. In the last demo of the day - also two hours and all about CDS views - the SAP developer from Walldorf wrote some SQLScript and kept getting syntax errors, and he could not figure out for the life of him what was wrong, even with his colleague (and the audience) making suggestions. In the end he copied some - seemingly identical - code from another working application, and everything was fine.

 

The next funny thing - which I guessed at in my book but am pleased to find turns out to be true - is that inside SAP there is a contest/race between the team that develops CDS views and the team that improves Open SQL access in ABAP. So if a new feature gets added to CDS views then the SQL team will go all out to replicate that feature in Open SQL, and vice versa. The guideline from SAP (up till now) had been that you use Open SQL first and only use a CDS view if Open SQL could not cut it. Now CDS views have been elevated to first-class citizens, that may no longer apply.

 

Anyway, the idea is that with each release of ABAP, Open SQL gets closer to the worldwide SQL-92 standard. In release 7.5, for example, you have new options like UNION and UNION ALL, and you can have a dynamic ON condition in your ABAP SELECT statement.
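A minimal sketch of the new UNION syntax (my own example against the familiar flight demo table, not from the talk):

" All connections that either start or end in Frankfurt
SELECT carrid, connid FROM spfli WHERE cityfrom = 'FRANKFURT'
UNION
SELECT carrid, connid FROM spfli WHERE cityto = 'FRANKFURT'
INTO TABLE @DATA(connections).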

 

On the SCN assorted people have for a long time been bitching about ABAP in Eclipse dropping you back into the SAP GUI for certain elements you wanted to view or change or create. It seems SAP have taken note and it was claimed this will now happen a lot less (they did not say never). You also get the “ABAPDOC” in Eclipse which is like JAVADOC and is for all the ABAP developers who love documenting their code for external people to read. That happens all the time. For example you can write comments next to the parameters in your function modules and you can generate a lovely HTML document. I would note that in the SAP GUI for a long time you have been able to add a long text into parameters of function modules and methods, but very few people either inside or outside of SAP actually did this.

 

Custom Code Management

 

This was a talk about how to use the solution manager tools to identify what custom code is being used, get rid of objects that have not been used for years, and then gauge the quality of the custom code that remains. I think it is common knowledge that 65%+ of custom code never gets executed, as us developers add new things all day long and nothing ever gets deleted.

 

In version 7.1 of the Solution Manager you have the custom code monitoring cockpit with a nice pretty "city model" where types of custom code are shown as skyscrapers of varying height depending on the number of custom objects in a category. You can use your mouse to twirl the diagram around if you want.

 

The idea is that first of all you use transaction CCLM to get every single custom object in your development system, and then use "usage and procedure logging" (UPL) and the SQL Monitor (SQLM) to see what actually gets executed in the production system over a protracted period. You then set a filter to say (for example) that anything over three years old that has not been used for two years is flagged as a potential candidate for deletion.

 

The speaker noted that UPL informed one customer that a certain Z method was getting executed a billion times an hour (actually a billion, not me exaggerating for once) and obviously nothing needs to get called that often so clearly there was a problem in the code that needed to be sorted. That is the sort of unexpected benefit you get when doing a really detailed analysis of what goes on in your live system.

 

Once you have flagged the vast bulk of your code as never being used then you can start to use the other tools like the ABAP Test Cockpit to do a whole bunch of static checks on the code quality of the portion that remains.

 

Next year (Q2 2016) the next version of the Solution Manager comes out (it can run on a HANA database, but does not have to); it will have the "Quality Control Cockpit", which is supposed to help with the second half of this process, i.e. improving the custom code that actually gets used.

 

There is also something called the "simplification database", which I think is going to be some sort of standalone tool to check your custom code for things that will break and/or could be optimised when running in an S/4 HANA system.

 

Fiori in my Inbox

 

This is the workflow inbox appearing in a Fiori app on your mobile phone or tablet. It’s quite clearly still being enhanced, it can do most of the things you would expect from the standard SAP transaction, and there are user exits to fill the gaps. You can call up attachments for example, and even jump into the work item in SAP GUI for HTML mode.

 

There is no offline capability at the moment, but that is on the roadmap.

 

This Netweaver Business Client is Guilty

 

This kept jumping between NWBC 5.0 which has been out for a while and NWBC 6.0 which is not out yet. In my humble opinion NWBC 5.0 was a lot better than 4.0 because it shares data with the actual SAP GUI and so knows what my SAP systems are for example. I agree though with the developer on SCN who said it was unusable because you get a third of the icons and menu options you are used to in the SAP GUI at the top of the screen. You can get to the others via a drop down menu but it is painful. That clearly has not changed in version 6.

 

Anyway version 6 is coming out at some point this year, which probably means early next year, or late next year. You will need a 7.5 system, so the question is academic anyway, since it will probably be about three years till any meaningful number of companies are using 7.5. If you have 7.5 you can also use the Fiori ESS/MSS in NWBC 5.0, but that would not make any sense.

 

Version 6 opens up with the good old Fiori launch pad with those nice white tiles. As an aside most of the “user menu” type transaction screens in the SAP GUI in my company (ECC 6.0 EHP5) like the plant managers dashboard also have a screen full of tiles you press on to go to various transactions, but ours looks more like a windows phone - we call the underlying transaction ZMETRO.

 

The point of NWBC is to have the various different UI technologies be able to open within it, starting with the SAP GUI (apparently controls like in the "Enjoy" transactions in the GUI work a lot better) right through to UI5.

I noticed on one the slides Web Dynpro was missing from the list of possible UI technologies – may it rest in peace - though the Floorplan manager was there.

 

We were shown the good old MM03 transaction in NWBC 6.0 and the material number field was acting like the Google search field. You also get side panels (so-called CHIPs) for a vast array of SAP transactions, though if you add Screen Personas to the mix they vanish until you write a whole bunch of JavaScript code to get them back.

 

Adding Fiori apps to the list of possible transactions is of course possible, but you need to know the "semantic object" when adding the application, and that is like finding a needle in a haystack. You have to go to the SAP Fiori Apps Library web page and then go on a wild goose chase.

 

UI5

 

Guess what! You need ABAP 7.5 for this as well! Actually you don’t, it works fine on my 7.02 system, but of course all the fancy new things I was seeing need the latest 7.5 version.

 

At least it was explained why we should use the Web IDE rather than ABAP in Eclipse. Firstly, to get the Web IDE you have to sign up for a HANA Cloud Platform account, something SAP want you to do very much indeed. The next reason stated was that you get code completion for JavaScript and XML in the Web IDE and you "don't in Eclipse". That sounds odd to me; Eclipse seemed to be doing code completion when I was coding in ABAP, and naturally in Java. Another reason given was the built-in testing framework - a unit testing framework and some acronym which I presume lets you look at the finished application. That was not as easy as you might think when I tried the same in Eclipse.

 

You also get all the templates in the Web IDE. The idea being put forward by SAP here is that using templates is like moving from WRITE statements to the ALV i.e. built in buttons for common tasks like exporting to Excel, and these templates are also supposed to be like the Floorplan manager to give a unified look and feel. Then you can mess about with the generated XML view to change anything you want.

 

The use of such “smart controls” is supposed to reduce the amount of code by 90%.

 

Actually, in my experience what a lot of developers like about UI5 is the very fact that you have to write all the code yourself. But SAP - like IT companies since the nineteen-fifties - are still pushing the "not one line of code needs to be written" approach. I don't know how they can say that with a straight face in front of 10,000 developers, all of whom would be out of a job if no-one needed to write code anymore. You need 7.5 for those templates anyway, so this is years away. The generated application code takes its information from the "annotations" in the CDS view, as mentioned earlier. We were shown how a "key user" can press a button and then do a GuiXT type of thing where you can move fields around, rename them, hide and add fields, and then let that be the default view for the whole company. The company behind GuiXT sponsored the Demo Jam last night (which was wonderful). I am wondering why they are still in business now Personas is free.

 

On one of the demonstrations I noticed that the controller was in JavaScript and the view was in XML (naturally the model is in ABAP inside the SAP system). That is the way I was taught to do UI5 by Graham Robinson, it makes a lot of sense to me, as that is a clear separation of the concerns of the MVC. Then in another live demonstration one of the presenters said “you will notice there is no view or controller”. Really? That was just after auto-generating the application.

 

We were shown the “UI5 Inspector” which runs in the Chrome browser, some sort of free add-on, for looking at the code behind the controls on the screen, debugging them, and changing them on the fly, e.g. changing the icon, making the field wider or what have you.

 

There is some sort of new release cycle for UI5 where the first year is all about innovation (what I call a beta release) and then you get two years of stability. I think they are talking about the back end here as the JavaScript part seems to change every two weeks. So we have SAP_UI_740 where support ends Q1 2017, SAP_UI_750 where support ends in 2018, SAP_UI_751 which starts in 2016, and so forth.

 

The only OSS Notes that count are the ones that come in large amounts

 

This was my last session, it might seem like a dry topic, but everyone was very emotional about this subject. You may have had to implement a specific note in the past and SNOTE does the code changes for you, but you have to do the DDIC changes yourself, and add the text elements for selection screens, and assorted other tasks. For some notes this is so complicated you are told not to bother, wait till you implement the next support stack – the new ABAP editor in SWO1 fell into this category.

 

One of the banes of my life is that in SAP the accounting entry for goods receipt posting for purchase orders with multiple account assignment happens at invoice time, whereas for a purchase order with only one cost centre it happens at GR time. This breaks one of the four laws of accounting, and as I was an accountant to start off with I have always been horrified about this. So when we found a dormant business function in EHP5 to solve this problem we switched it on as fast as fast can be. Sadly the code was all over the place, must have been written by someone on the first day of the job, mixing up WERKS and BUKRS in the code as in SELECT FROM X WHERE BUKRS = P_WERKS. Thus we got the error message “Company Code 1234 does not exist” where 1234 was in fact the plant.

 

There was an OSS Note to fix this, so I applied it, and at the bottom it said “this causes problems, fixed in note 123”. So I implemented the next note, which in turn pointed me to another note, which in turn pointed me to another note and so forth. At the end of this chain the final note basically said “this is too difficult, upgrade to EHP 6”. Lovely. That would have been wonderful information at the start of the first note, but now I will have to explain every week for the next ten years (which is when we will upgrade) why this task on my list is still “pending”.

 

Anyway, that is by the by – the problem generally is that SNOTE cannot currently do everything; the developer has to do some things manually. In the new world SNOTE has been beefed up so that it can do everything a support package can do, even though you are implementing a single note. This is done by creating a transport request, which of course happened anyway during the current process, so I don’t see why this is so different, but it is a great improvement nonetheless.

 

Also we were reminded about the automated note search – transaction ANTS_PANTS or whatever it is called, where you can do a trace on the transaction that causes the problem and you get a gigantic list of possible notes related to every module/table called and then have to guess which one fixes your problem. That is still painful but a gigantic step up from doing the search on the service marketplace. The problem with the search was that if your sales order transaction was broken then searching for “sales order” would not find the fix but searching for “RV45ZZ87” would, as that is the INCLUDE that was broken. So you had to know the technical name of what was broken, which of course no-one does. ANTS_PANTS is a big step up from that.

 

The End is Nigh

 

This blog ended up a bit longer than I thought, but I needed to write everything down before I forgot it – the contents are therefore in somewhat of a random order, but at least it was not a series of photographs with no text (as in “look at this lovely place I am in”) or a series of names of people who I met (I did meet a lot of great people, but why would you care?). I am currently in the Double Helix Wine Bar typing up my notes and trying to explain to the bartender what SAP is. A conference with ten thousand people, and this hotel is so big none of the staff noticed. The Industrial Fastener conference was next door to ours, which probably made more of an impact; everyone knows what a fastener is.

 

I gave a speech myself – about Push Channels – but that will be a subject for a blog of its own. So that is that…..

 

Cheersy Cheers

 

Paul 


Skeleton/Architecture of a Technical Design plays crucial role


     Architecture of a technical design plays a crucial role in the development of any project in the SDLC. For a long time now, organisations have been maturing their in-house software teams rather than relying on external client support, and almost every company has its own well-built IT support team, not restricted to network-related issues but also handling the daily ERP-related business issues.

 

     In order to satisfy

  • emerging business needs,
  • compatibility and adaptability issues with upgrades, and
  • the need to keep up with benchmark technology,

     organisations have to engage consulting companies for roll-out and phased development activities. The critical phase arrives with the coordination at the time of handover from the project team to the support team. The support team has to adopt the new development objects while continuing the existing support it provides in its day-to-day work.

 

To overcome ambiguity and ensure a smoother transition, the following points are of crucial importance.

 

  • Follow a uniform and precise architecture for all developments.
  • Integrate the support team in all crucial phases of the development.
  • Separate business logic from database fetch and GUI handling.
  • Ensure proper provisions for ease of modification and enhancement for future needs.
  • Adopt a dynamic code build where possible.
  • Provide (commented) code for general extensions that usually arrive in the future, say, the addition of a button on the GUI/report display.
  • Avoid obsolete and outdated platforms.
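The separation point above can be sketched in ABAP roughly as follows; this is only an illustrative skeleton (all names invented, nothing from a real development), showing database fetch, business logic and display kept in separate methods so each can be changed without touching the others:

CLASS lcl_report DEFINITION.
  PUBLIC SECTION.
    METHODS: fetch_data,     " database access only
             process_data,   " business logic only
             display_data.   " GUI/ALV handling only
  PRIVATE SECTION.
    DATA mt_flights TYPE STANDARD TABLE OF sflight WITH EMPTY KEY.
ENDCLASS.

CLASS lcl_report IMPLEMENTATION.
  METHOD fetch_data.
    SELECT * FROM sflight INTO TABLE @mt_flights.
  ENDMETHOD.
  METHOD process_data.
    " business rules work on mt_flights, unaware of selection or display
  ENDMETHOD.
  METHOD display_data.
    TRY.
        cl_salv_table=>factory( IMPORTING r_salv_table = DATA(lo_alv)
                                CHANGING  t_table      = mt_flights ).
        lo_alv->display( ).
      CATCH cx_salv_msg.
    ENDTRY.
  ENDMETHOD.
ENDCLASS.

A future extension, say an extra button on the report display, then only touches display_data.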

 

Examples for reporting using ALV follow soon...

Dynamic POWL Search Help – OVS


Recently I had a requirement to provide a search help for a POWL selection screen field. The twist was that the search help result should depend on the value of another POWL selection screen field.

 

SAP provides the interface IF_POWL_OVS, which can be used to provide an OVS search help for a POWL.

 

First, create an OVS event handler class by implementing the interface IF_POWL_OVS. As a prerequisite, set this class as the OVS handler for the required field. This is done in the GET_SEL_CRITERIA method of the POWL feeder class: pass the class name to OVS_HANDLER_NAME.

The interface has four separate methods: HANDLE_PHASE0, HANDLE_PHASE1, HANDLE_PHASE2 and HANDLE_PHASE3. The implementation is similar to other OVS implementations.

HANDLE_PHASE0 sets the configuration, such as the title, selection mode, row count etc. Here the SET_CONFIGURATION method of I_OVS_CALLBACK is called.

scn_blog2.png

 

HANDLE_PHASE1 defines the selection fields and default values of the search help. Here the SET_INPUT_STRUCTURE method of I_OVS_CALLBACK is called. This is optional, and only required if a selection dialog is needed to restrict the search result (search help type C – complex dialog).

 

scn_blog3.png

HANDLE_PHASE2 builds the search help result. Here the SET_OUTPUT_TABLE method of I_OVS_CALLBACK is called with the result set.

 

scn_blog4.png

HANDLE_PHASE3 exports the result value back to the entry field by setting the value of the context attribute.

scn_blog5.png

 

Here I want to emphasise the HANDLE_PHASE2 method, since it provides the dynamic result. PHASE2 is used to build the search result set. The POWL interface provides an additional importing parameter, IT_RELATED_FIELDS, which contains the POWL selection screen fields. This importing table can be used to get the selection criteria of the POWL.

 

IT_RELATED_FIELDS contains a field called M_ID, which is nothing but the selection screen field name we assigned when defining the GET_SEL_CRITERIA method. For each selection screen field, the importing table contains properties such as the type (parameter/select-option), checkbox, dropdown, read-only etc. The values selected on the screen are available in the field M_VALUE (for a parameter) and MT_RANGE_TABLE (for a select-option).

 

My requirement was to provide a business partner value help for the voyage selected on the POWL screen. Hence I had to read which voyage had been selected. To get the voyage, I read IT_RELATED_FIELDS with M_ID = S_003 (the screen field name of the voyage field).

scn_blog6.png

From the screenshot you can see that I read the voyage by passing M_ID = S_003 (voyage), which is nothing but the screen name I defined for the voyage in the GET_SEL_CRITERIA method. ET_R_VOYAGE is of type RSELOPTION.
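In case the screenshots do not render, here is a rough sketch of that PHASE2 logic. Note the callback parameter name, the partner table and the field names in the SELECT are assumptions for illustration only, not taken from the actual implementation:

METHOD if_powl_ovs~handle_phase2.
  DATA et_r_voyage TYPE rseloption.

  " Read the voyage select-option entered on the POWL selection screen
  READ TABLE it_related_fields ASSIGNING FIELD-SYMBOL(<ls_field>)
       WITH KEY m_id = 'S_003'.
  IF sy-subrc = 0.
    et_r_voyage = <ls_field>-mt_range_table.
  ENDIF.

  " Select the business partners for the chosen voyage(s) - table name invented
  SELECT partner, partner_name FROM ztab_voyage_partner
         WHERE voyage IN @et_r_voyage
         INTO TABLE @DATA(lt_result).

  " Hand the result set to the framework for the search result list
  ovs_callback_object->set_output_table( output = lt_result ).
ENDMETHOD.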

 

scn_blog7.png

Then I used the voyage information to get the partners related to it, and passed the final result set as the output for the search result list display.

scn_blog8.png

Runtime values of IT_RELATED_FIELDS:

scn_blog9.png

 

Result:

 

ABAP News for Release 7.50 - Host and Other Expressions in Open SQL


After a long time of stagnation the development of Open SQL awoke from its deep slumber and took some major steps in ABAP 7.40 in order to comprise as many features as possible from SQL92 and to offer about the same functionality as the SELECT statement of the DDL of ABAP CDS. In order to do so, a new foundation for Open SQL was laid by introducing a new SQL parser into the ABAP runtime environment. One consequence of this is the fact that Open SQL plays a somewhat different role in ABAP than before. While before 7.40 Open SQL was regarded more as a part of the ABAP language itself, meanwhile the SQL character has become more and more pronounced. One of the major indications for this is the new role of host variables. Before 7.40, you used ABAP variables in Open SQL statements as is done in all other ABAP statements. In fact, this prevented further development quite effectively. Open SQL statements are executed on the database after being transformed to native SQL. In order to push down more sophisticated things than simple comparisons with ABAP variables in WHERE conditions - say SQL expressions in many operand positions - the Open SQL parser must be able to distinguish clearly between operands that are evaluated by the database and ABAP variables whose contents have to be passed to the database. In order to fulfill this task, ABAP variables in Open SQL statements meanwhile fully play the role of host variables, as ABAP variables always did in static native SQL (EXEC SQL). You can and should prefix ABAP host variables in Open SQL with @. In fact, you can use the new SQL features introduced to Open SQL starting with Release 7.40 only if you do so. Other fundamental changes that were introduced to Open SQL in order to make it fit for the future were comma-separated lists and placing the INTO addition of a SELECT statement behind the authentic SQL clauses.

 

As a first benefit of these measures, fundamental new features were already rolled out to Open SQL with ABAP 7.40, comprising SQL expressions in various operand positions and the possibility of inline declarations. With ABAP 7.50 this development is continued and this blog introduces some of the new features (more to come).

 

Host Expressions

 

In almost all positions where you could place host variables, including the operand positions of SQL expressions from 7.40 on and the work areas of modifying SQL statements, you can now use host expressions with the syntax

 

... @( abap_expression ) ...


A host expression abap_expression can be any ABAP expression, that is, a constructor expression, a table expression, an arithmetic expression, a string expression, a bit expression, a built-in function, a functional method, or a method chaining inside parentheses ( ) prefixed with @. The host expressions of an Open SQL statement are evaluated from left to right and their results are passed to the database as is done for the contents of host variables. In fact you can see host expressions as shortcuts for assigning ABAP expressions to ABAP helper variables and using those as host variables. The following example shows a table expression that reads a value from an internal table carriers on the right-hand side of a WHERE condition.


SELECT carrid, connid, cityfrom, cityto
       FROM spfli
       WHERE carrid =
         @( VALUE spfli-carrid( carriers[ KEY name
                                          carrname = name ]-carrid
                                          OPTIONAL ) )
       INTO TABLE @DATA(result).


I personally like the following:


DATA(rnd) = cl_abap_random_int=>create(
               seed = CONV i( sy-uzeit ) min = 1 max = 100 ).

INSERT demo_expressions FROM TABLE @(
   VALUE #(
    FOR i = 0 UNTIL i > 9
      ( id = i
        num1 = rnd->get_next( )
        num2 = rnd->get_next( ) ) ) ).


An internal table is constructed and filled with random numbers inside an INSERT statement. A cool feature for the ABAP documentation's demo programs ...


For more information see Host Expressions

 

SQL Expressions

 

With ABAP 7.50, the usage of SQL expressions  was extended as follows:

 

  • Besides using them in the SELECT list, you can use them as left-hand sides of comparisons with WHERE, HAVING, ON, and CASE and as operands of a CAST expression. Note that this includes host variables and host expressions as operands of SQL expressions.

  • The following SQL functions can be used in SQL expressions now: ROUND, CONCAT, LPAD, LENGTH, REPLACE, RIGHT, RTRIM, SUBSTRING. The COALESCE function can have up to 255 arguments now.

As an example of an arithmetic expression on the left-hand side of a WHERE condition, see:

 

SELECT carrid, connid, fldate, seatsmax, seatsocc,
       seatsmax - seatsocc AS seatsfree
       FROM sflight
       WHERE seatsmax - seatsocc > @( meth( ) )
       INTO TABLE @DATA(result).

 

As an example of the string functions, see the following concatenation of columns into one column with CONCAT:

 

SELECT CONCAT( CONCAT( carrid,
                       LPAD( carrname,21,' ' ) ),
               LPAD( url,40,' ' ) ) AS line
       FROM scarr
       INTO TABLE @DATA(result).

 

This concatenation is not possible with the operator && that is available since ABAP 7.40.
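To give a taste of some of the other new functions, here is a similar, untested sketch of my own (not from the release notes) applying ROUND to the DISTANCE column and RTRIM to a character column of SPFLI:

SELECT carrid, connid,
       ROUND( distance, 0 ) AS dist,
       RTRIM( countryfr, ' ' ) AS ctry
       FROM spfli
       INTO TABLE @DATA(result).

As with CONCAT above, these functions are evaluated on the database, not in the ABAP layer.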

 

For more information see SQL Expressions.

 

Path Expressions

 

Path expressions are something you know from CDS already (duh!). If a CDS view exposes an association, the same or another view can access it using a path expression.

 

For example, the following CDS view uses path expressions in its SELECT list:

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_USE_ASC'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_use_assocs
  with parameters p_carrid:s_carrid
  as select from demo_cds_assoc_scarr as scarr
{ scarr.carrname,
  scarr._spfli.connid,
  scarr._spfli._sflight.fldate,
  scarr._spfli._sairport.name }
where scarr.carrid = :p_carrid

 

The names of the associations are prefixed by an underscore _ and the associations are defined in the following views:

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_ASC_CAR'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_assoc_scarr
  as select from scarr
            association to demo_cds_assoc_spfli as _spfli
              on scarr.carrid = _spfli.carrid
     { _spfli,
       carrid,
       carrname }

@AbapCatalog.sqlViewName: 'DEMO_CDS_ASC_SPF'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_assoc_spfli
  as select from spfli
            association to sflight as _sflight
              on spfli.carrid = _sflight.carrid and
                 spfli.connid = _sflight.connid
            association [1..1] to sairport as _sairport
              on spfli.airpfrom = _sairport.id
     { _sflight,
       _sairport,
       carrid,
       connid,
       airpfrom }

 

With ABAP 7.50 Open SQL's SELECT can also use such path expressions in its SELECT list or FROM clause when accessing CDS views. The following Open SQL statement does the same as the first CDS view above:

 

SELECT scarr~carrname,
       \_spfli-connid AS connid,
       \_spfli\_sflight-fldate AS fldate,
       \_spfli\_sairport-name AS name
       FROM demo_cds_assoc_scarr AS scarr
       WHERE scarr~carrid = @carrid
       ORDER BY carrname, connid, fldate
       INTO TABLE @DATA(result).

 

Looks not too different, eh? Only the dots have to be replaced by backslashes \ (and because of this, the path expressions look like those for meshes). When compiling such an Open SQL statement, the path expressions are converted to joins on the database. Check it out with ST05.

 

For more information see Path Expressions.

 

More News

 

That's not all about Open SQL in ABAP 7.50. In an upcoming blog I will show you an enhancement to SELECT that became possible because the INTO clause can and should be placed at its end ...

~ To Print Different Pages in Smartforms using Command ~


Hey There


After hearing lots of questions about “how can we print data on the next page based on a certain condition?”


So here is the answer -> use the Command node in Smartforms.


Now the question is: how can we use the Command node to achieve this requirement?


Below are the step-by-step details which will help you use the Command functionality correctly.



Example: this example shows you how to print the data of 3 different internal tables on 3 different pages.

 

Step 1. Create the form with 2 pages.

                         Img1.png


Step 2. Create a main window inside both pages (each with its own dimensions):

                                                                   Page1 – Main Window

      Img2.png

     

                                                                  Page2 – Main Window

    Img3.png

 

 

Step 3. Inside the main window of the first page create 3 tables and 2 commands:

                           The three tables are for the 3 different internal tables which we want to show on three different pages. Also set the attributes of the commands.

                                           

                                                            Img4.png

 

1st table loop: loop over the first internal table, which needs to be displayed on the first page.

                                                       Img5.png

2nd table loop: loop over the second internal table, which needs to be displayed on the second page.

                                                      Img6.png

3rd table loop: loop over the third internal table, which needs to be displayed on the third page.

                                                     Img7.png


 

Command #1: in the attributes tab, tick the checkbox and set PAGE2 as the next page, as shown below:

                                      Img8.png

Command #2: in the attributes tab, tick the checkbox and set PAGE2 as the next page, as shown below:

                                                   Img9.png


Try the Command node in your Smartform to print different details on different pages.


From the main window of the first page we can control the data printed on the following pages.


Note: one thing to take care of is that the main window of each page should be of the same width.


  Keep Learning 


Stay Awesome

   Romit Raina


ABAP News for Release 7.50 - CORRESPONDING, again ...


In ABAP, as a rule, the name is not always the game (see an entertaining recent discussion about that).

 

But as you all know there is a prominent exception to that rule: All the syntax forms involving CORRESPONDING for assigning structure components that (by chance) have the same name.

 

  • Before ABAP 7.40, these were mainly MOVE-CORRESPONDING for the components of structures, the CORRESPONDING addition to Open SQL's SELECT, and some obsolete calculation statements.

  • With ABAP 7.40 MOVE-CORRESPONDING was enabled to handle structured internal tables and a new constructor operator CORRESPONDING was introduced that allows an explicit mapping of structure components with different names.

 

What was still missing?  A dynamic mapping capability! And this was introduced with ABAP 7.50.

 

The new system class CL_ABAP_CORRESPONDING allows you to assign components of structures or internal tables with dynamically specified mapping rules.

 

The mapping rules are created in a mapping table that is passed to a mapping object, e.g. as follows:

 

DATA(mapper) =
  cl_abap_corresponding=>create(
    source      = struct1
    destination = struct2
    mapping     = VALUE cl_abap_corresponding=>mapping_table(
     ( level   = 0
       kind    = cl_abap_corresponding=>mapping_component
       srcname = '...'
       dstname = '...' )
     ( level   = 0
       kind    = cl_abap_corresponding=>mapping_component
       srcname = '...'
       dstname = '...' )
     ( level   = 0
       kind    = cl_abap_corresponding=>mapping_component
       srcname = '...'
       dstname = '...' ) ) ).

 

This is a simple example, where all structure components are on top level (0) and where all components are to be mapped (kind = cl_abap_corresponding=>mapping_component). More complicated forms involve nested structures and exclusions. With srcname and dstname the component names can be specified dynamically. The table setup is similar to the mapping clause of the CORRESPONDING operator.

 

After creating the mapping object, all you have to do is to execute the assignment as follows:

 

mapper->execute( EXPORTING source      = struct1
                 CHANGING  destination = struct2 ).

 

You can do that again and again for all structures or internal tables that have the same types as those used for creating the mapping object.
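To make the mapping concrete, here is a small self-contained sketch; the structures and component names are invented for the example, and it maps ID to KEY and TEXT to NAME dynamically:

TYPES: BEGIN OF src_type,
         id   TYPE i,
         text TYPE string,
       END OF src_type,
       BEGIN OF dst_type,
         key  TYPE i,
         name TYPE string,
       END OF dst_type.

DATA(struct1) = VALUE src_type( id = 1 text = `Hello` ).
DATA struct2 TYPE dst_type.

DATA(mapper) = cl_abap_corresponding=>create(
  source      = struct1
  destination = struct2
  mapping     = VALUE cl_abap_corresponding=>mapping_table(
    ( level = 0 kind = cl_abap_corresponding=>mapping_component
      srcname = 'ID'   dstname = 'KEY' )
    ( level = 0 kind = cl_abap_corresponding=>mapping_component
      srcname = 'TEXT' dstname = 'NAME' ) ) ).

mapper->execute( EXPORTING source      = struct1
                 CHANGING  destination = struct2 ).
" struct2-key should now be 1 and struct2-name `Hello`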

 

Not much more to say about that. For details and more examples see CL_ABAP_CORRESPONDING - System Class.

 

 

Outlook

 

Up to now, only the basic form of the CORRESPONDING operator is mirrored in CL_ABAP_CORRESPONDING. But a variant for using a lookup table is already in the queue.


The Delivery Barrier


Introduction

 

In the last years, ABAP development has become a lot easier. The individual developer’s perspective has improved considerably through advanced tooling (ADT!), better online documentation and community support. When I started ABAP development around 2001, you still needed a dedicated machine with quite a price tag only to run the development system of our landscape at a decent speed. Today, that’s easily accomplished using a virtual machine, even on cheap off-the-shelf hardware (which is obviously not recommended for a production system, but hey – what do you think the average Hudson CI server in a small development shop runs on?). With pre-packaged demo systems, a complete ABAP development environment is in reach for most people – I just installed a system within a few hours (including downloading 15 GB of installation files, setting up the VirtualBox server and the underlying OS, while doing other stuff alongside). If you don’t want to run the system on your own hardware, you can get a CAL account and run the systems in the cloud as well.

From an organizational point of view, things have become easier (that is: cheaper) as well. For the most part, you no longer need to invest in large-scale hardware (see above), and there are plans available (see below) that will provide you with up to 25 developer licenses, a registered namespace and access to pretty much everything you need for very reasonable prices.

A lot has been done to lower the entry barrier, to get individual developers and possibly small startup companies to embrace the ABAP ecosystem more easily, and I am very grateful for all the hard work that must have gone on behind the scenes to make this a reality. If you know ABAP and have some great ideas for additional products that complement some existing solution, just build your product, test it thoroughly, write whatever documentation you deem necessary, sell it to your customers and then deliver it.

“Just deliver it” – unfortunately, that is the one point that is still easier said than done. This has not been a problem so far, when ABAP development was only available to medium-to-large-scale enterprises anyway. For a small company comprising only a few enthusiastic employees, ABAP development has now become rather easy and cheap, while the delivery process has yet to undergo that transformation. There are a couple of options available that I would like to discuss in this article. I’m very well aware that this has become a rather lengthy article, but I hope it’s worth the read – given the discussions of the last weeks, it most certainly needed writing.


The Obvious: “No. Nonononono.”


Okay, that’s not a serious alternative. You could, if you were so inclined, manually copy your solution over to each and every customer system. Don’t. Even. Think. About. It.


The Automated Obvious: “No.”


There’s a great tool available to get development objects out of the system and back into the system: SAPlink. Don’t get me wrong, it really is a great tool, if used by a knowledgeable mind for the right purposes. Software delivery is definitely not one of those.

SAPlink has a few advantages, which do look compelling, from a distance:

  • It’s free, which is always good, right?
  • It’s comparatively easy to learn – no complex concepts, just export to XML and reimport. Anyone can learn that fast.
  • It’s extensible – if it doesn’t support what you need, just add the missing pieces yourself.

However, when using it as a delivery tool, there are some serious issues.

  • It is most definitely not supported by SAP. Some of the import/export implementations available might not even use supported APIs to extract and insert development objects – probably because there are none. While this might not be a huge deal for in-house use of SAPlink, the situation changes when you’re an external solution provider. You’ll be providing components for mission-critical enterprise systems – better make sure you’re covered when it comes to maintenance issues.
  • There are virtually no integrity checks during either the export or the import process. A lot of things can go wrong when packaging software, and SAPlink is simply not designed to handle any of these issues.
  • There is no support for object deletion. This happens frequently in product maintenance – you no longer need a class, so you delete it. SAPlink might be able to deliver the class, but it can’t deliver the deletion.
  • There’s no dependency resolution during import. Imagine a class that uses a structure that contains a data element that in turn refers to an interface. You might need to import some of these objects in the right order because it largely depends on the object type import/export plugin whether you can import inactive objects that refer to other objects that don’t yet exist. Sometimes you can’t, and then you have to keep tabs on the dependencies manually. Not cool.
  • Speaking of the plugins, the support for certain objects heavily relies on the plugins working correctly. Since the plugins come from a handful of loosely connected volunteers, they will naturally vary in quality, so YMMV.
  • One of the most important points might be that on the target side, SAPlink actually does little more than automate the object creation. You can create a class using SAPlink just like you could manually – and vice versa, you can’t (legally, and most of the time technically as well) create a class that you couldn’t create manually as well. That means that you have to develop and deliver your solution in a namespace that is fully writeable by your prospective customer. Either you use Y or Z and risk conflicts with existing customer objects (besides demonstrating that you probably shouldn’t be delivering a product just yet), or you give the customer full access (production key) to a registered namespace, essentially rendering its protective character useless. Welcome to maintenance hell – oh and don’t forget that some of the object type plugins don’t (yet) support objects in registered namespaces.

Compared to the Copy&Paste approach, SAPlink certainly is a huge step – but for professional software delivery, a huge step in an entirely wrong direction.


The Slightly Less Obvious: “Not A Good Idea.”


Anyone who knows a bit about the ABAP environment will know what’s coming next: Transports. From my personal experience, this is probably the most popular way to deliver add-ons (it certainly is within the Healthcare sector) – we even get add-ons from the consulting departments that are as close to SAP as can be via transports. And why not – this method does have a number of advantages:

  • It is supported by SAP. Perhaps not exactly for the use case of delivering software to customers, but it is somehow covered by the standard maintenance.
  • Not only is it officially supported, but it is also widely tested, and it is guaranteed to be implemented in every system landscape you might want to deliver software to – simply because everyone needs it.
  • There is support for all relevant object types – obviously, since you need that capability within a customer system landscape anyway.
  • It does a really great job of handling stuff like import-level ordering, inactive import and mass activation and import post-processing. There is a huge amount of complexity involved that is cleverly hidden underneath a tool that every ABAP developer uses without even thinking about it. I’d highly recommend the course ADM325 to every ABAP developer to get a deeper knowledge about the inner workings of the system.

This looks like the perfect solution, but: the CTS/TMS that you know from your development or customer landscape was designed for transports within the system landscape. It was not designed for software delivery between different landscapes, although it is frequently used for this. Because of this off-label use, there are some nasty issues that are just waiting to bite you:

  • Transport identifiers are generated automatically using the system ID (SID) of the exporting system (<sid>K9<nnnnn>). Not considering some special values, there is no telling what SIDs you might encounter in a customer system landscape. If the SID of your delivery system exists in a target landscape, transport identifiers will collide sooner or later, with very unpleasant effects. That means trouble, and there is no easy way to solve this.
  • Only an extremely limited consistency checking is performed before exporting a transport. Basically, if it’s a valid and mostly consistent object, you can export it. That includes Z-Objects, Test stubs, Modifications and a few other things that might easily slip into delivery transport unnoticed. You can implement checks for this using CTS extension points, but you have to be aware of the danger and prepare for it.
  • There is no support for modification adjustments, no SPAU/SPDD upgrade assistance. Your customer can modify your delivered objects (provided that you supply the modification key of the namespace, which you should), but then what? With the next delivery of that object, the customer has to backup his modifications (manually), have your transport overwrite it and re-implement the modifications (again, manually).
  • There is no integrated dependency management. The CTS/TMS is supposed to be used in a system landscape with a homogeneous arrangement of software versions, so whatever you can safely export from the development box, you can probably import safely into the other boxes, right? If the transport originates from a system where HR happens to be installed, and you used some HR data elements or function modules just because they seemed convenient, you can export the transport easily. If the customer doesn’t have HR installed, you won’t land on the moon today – and you have no way of ensuring this beforehand; you’ll only notice during the import. The same pattern applies if you want to supply multiple add-ons that rely on each other – you can’t ensure that your customer will import them in the right order and only in matching versions.
  • Speaking of versions – there is no version management to speak of; you’ll have to store the version number and patch level manually if you want to, and build your own display function. Not a big deal, but cumbersome nonetheless.
  • The import order of individual transports is not governed in any way. This not only affects dependencies (as discussed above), it also allows for mishaps like partial downgrades, import of patches in the wrong order and numerous other issues that will keep your support staff unnecessarily busy. Even worse, unintended downgrading of database tables might lead to data loss.
  • One rather subtle problem lies hidden in the area of object deletion and skipped transports. With CTS/TMS transports, it’s easily possible to export a deletion record for an object that will cause the object to be deleted in the target system as well. Let’s assume you export that deletion record with version 5. The customer decides (consciously or by accident) to skip version 5 and upgrade directly from 4 to 6. In that case, the deletion record is not imported and the object stays in the system. In most cases, that won’t be a problem, but if you think of class hierarchies, interface implementations and other heavily interconnected objects, you might end up with leftovers of the unpleasant sort. This isn’t easy to solve, either: It’s not trivially possible to add a deletion record to the transport of version 6 because the TADIR entry of the object was deleted when exporting version 5, and you can’t add the deleted object to the transport of version 6 without creating the TADIR entry first. It’s possible, but not trivial – BTDT.
  • There’s a procedural trapdoor that might lead to unexpected results as well. Since you’re essentially using the normal change management software logistics of the customer system landscape, your software upgrades might inadvertently be imported by the TMS along with regular in-landscape transports (which, technically, they are). If that happens at the wrong time – especially when importing into the production system – bad things might occur. Avoid if possible.
  • As a last hidden obstacle: There’s no support for a clean upgrade path between releases. Your software will inevitably use some components of the NetWeaver Basis, ECC, CRM or any other SAP product – I’ll simply call this “the foundation” from now on. For different releases of the foundation your product relies upon, you will frequently have to deliver slightly different product versions. This means that during a major upgrade, objects may have to be deleted while others might have to be added or changed. You have to figure out a way to support this manually – there’s nothing in the CTS/TMS that will help you with that.

As you can see, while this obvious solution will be workable with a number of limitations for a wide variety of scenarios, it is far from ideal, it places a huge burden on the people managing the export and import process, and it will totally collapse in certain situations (SID collisions). One would think that there just has to be a better way.


Pricey Professional Product Preparation


Fortunately, there is a better way – after all, SAP manages to deliver a wide range of software based on the NetWeaver ABAP platform, and it all has to be packaged somehow. The software to do so (or at least a software that is capable of doing so) is available; it’s known as the Add-On Assembly Kit (AAK for short; the software component is AOFTOOLS, probably for Add-On Factory Tools).

The online documentation of the AAK is freely available at http://help.sap.com/aak, so I won’t try to replicate all that’s written there. In a nutshell, the AAK provides two tools that assist with defining, checking and maintaining the contents of a deliverable software package (Software Delivery Composer) and turning that package into an installable file (Software Delivery Assembler). While the whole process uses TMS tools internally, the entire process is much more sophisticated and specially designed to support the “delivery to anonymous customer” scenario.

Note that although the AAK is an installable unit, it’s not a product that you can buy a license for. You sign a separate contract with the SAP Integration and Certification Center to have your solution certified, and as part of the process you get the AAK. The details are specified here and here as well as in note 929661.

The advantages of this approach, to name only a few, are:

  • Supported by SAP, used by SAP. What better reference could you wish for?
  • There’s extensive documentation available – online reference, a 130-page PDF, even SAP Tutor introductory videos when I last had the chance to use it. In addition, you’ll be assigned a personal contact, and at least for the people I’ve had the pleasure to work with, their competence and professionalism leave nothing to be desired.
  • The import tools, namely transactions SAINT and SPAM, are known to most basis admins. In contrast to the common TMS transport operations, they are designed for imports of large software packages and deal with all kinds of issues.
  • If you ever wondered where the strange software component names come from – with the AAK, you get your shot at creating your own software component. The name of the software component is based on a registered namespace and therefore guaranteed to be unique; the delivery object lists contain the software component identifier and are therefore unique as well. The final EPS delivery files contain the system ID and installation number, which in this combination are unique as well (at least unique enough for all intents and purposes). Collisions with customer system IDs are thus avoided.
  • There are extensive consistency checks during the packaging and export process that can even be extended by customer checks. For instance, as an i.s.h.med developer, you may want to stop some generated function groups from being delivered because they need to be generated on the customer system. Writing a custom check for this is rather straightforward.
  • As the import uses the well-known SAINT/SPAM route, you’ll get full SPDD/SPAU modification support, including modification adjustment transports that can be used to automatically adjust the modifications on QA and production systems.
  • There’s an integrated dependency management system that allows you to specify which software components have to be present or may not be present in which versions. These dependencies are checked early during the import process – if a dependency is not met, the entire import won’t happen.
  • The AAK provides support for various upgrade or cross-grade scenarios, including release upgrades. You can build special Add-On Exchange Upgrade packages that allow you to cleanly remove objects that are no longer required in the new release and import whatever new objects are needed. This is fully integrated into the upgrade process itself.
  • Hardly worth mentioning, but of course there is full support for deleting objects. With the rather recently released version 5.0, there’s even support for deletable add-ons.
  • Compared with transports, a lot of additional checks take place during the installation process, including version checks (e. g. a downgrade protection) and collision checks with other application components.
  • Since an entirely different set of tools is used for import, AAK Add-Ons can’t be mixed up with regular transports.
  • Aside from regular versions (“Releases”), the AAK provides support for patches. The patch import process enforces the correct order of patches, thus ensuring a consistent software state on the customer system.
  • Your software will be listed in the component version list via System > Status, which is something every software developer should aspire to.
  • Finally, the certification process will get you a check by SAP and a certification logo, as well as a listing in the partner directory.

Considering the many advantages, one would have to think that this just has to be the solution to all problems. So why isn’t everyone using it? Obviously, there have to be some drawbacks. Let’s see…

  • It’s largely unknown to the general public, or at least that’s my observation. I hope I’m helping to change that, bit by bit.
  • The certification isn’t exactly cheap. One of the articles mentioned above has the figures: 15k€ for the first year and 10k€ for every subsequent year.
  • An extensive system landscape is required or at least strongly recommended for the packaging and testing process. The usual rule of thumb is three systems per foundation release supported – depending on your requirements, this number might change in either direction, but it’s a good estimate for starters.
  • The delivery process looks huge at first. It can be cut down once you get to know the system better and once the scenario is well-defined – there are many special cases that might not apply in your scenario, but you’ll have to think about each of them and consciously decide whether to handle it.

That last point is actually not necessarily a drawback: The AAK forces you to think about a lot of things that could go wrong and devise ways to prevent them. It imposes a certain level of quality – you can’t just deliver anything, at least not without consciously suppressing the warning telling you that you’re about to make a mistake.


The Steep Incline


After going through the delivery processes in detail, what’s the point?

Starting SAP development is nowadays relatively easy and cheap – if I read the SAP PartnerEdge Program for Application Development information correctly, you get a full-fledged package for 25 named developers for 2.5k€ per year. This will enable you to develop software and deliver it using transports, which – as we have seen – is not an optimal solution.

If you want to upgrade to the professional solution, you’ll easily end up paying ten or twenty times as much. You have to consider the basic certification fee (15k€ for the first year) for one thing, but you will also have to provide and maintain the system landscape. This might be an ideal opportunity for the cloudizationalized world – just use some pre-configured instances available from the SAP Cloud Appliance Library, link them up in a transport domain and there you are, right? Right – just that each system will easily set you back 1k€ or more in CAL and cloud provider fees per month. So for certification and three cloud-based systems during the first year, you’ll probably end up with a total bill in the area of 30-50k€. Add to this the effort required to set up the delivery process if nobody is familiar with the AAK and the basis administration tasks (which is probably the case), and the point becomes clear:

Getting the software to the customer using the tool that is ideally suited for the job is so expensive in time and money that many (most?) small companies, and even some of the largest, resort to a sub-optimal solution. One would hope this changes: now that the development environment is available for a fraction of what it cost a few years ago, the same should apply to the delivery solution.

Abstraction class to generate MSWORD with SAP using OLE


Hello,

 

I wrote an abstraction class to generate complex MS Word documents from SAP. The class is delivered "ready to use" and covers, I hope, a wide range of use cases.

 

Why this class?

OLE syntax is not easy, and finding help is a pain. I learned everything I needed to know and, so that I never have to do it again, I wrote this class.

 

Now, to write text into Word, I simply use the method "write_text", or "write_table" to write a complete table.

 

What the class can currently do:

 

  • Create a document from scratch or from a template (dotx)
  • Write text with or without style (font style or paragraph style)
  • Use options bold, italic, underline; choose font, font size, font color
  • Line break, page break, section, continuous section
  • Write tables, with or without a table style, and the option to define the format per cell (bold, background color...)
  • Footnotes
  • Write a simple header/footer
  • Choose landscape/portrait orientation
  • Add images
  • Add a canvas
  • Insert a table of contents
  • Insert custom fields
  • Change the title of the Word window
  • Save the document and close Word
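To give an idea of the intended usage, a call sequence might look roughly like the sketch below. This is illustrative only: the method names write_text and write_table come from the description above, while the object name, the other method names and all parameter names are my assumptions – check the downloaded class for the real signatures.

```abap
" Hypothetical usage sketch - parameter and method names are assumptions,
" lt_flights stands for any internal table you want to render.
DATA(lo_word) = NEW zcl_word( ).                    " starts Word via OLE
lo_word->write_text( iv_text = 'Monthly report' ).  " write a piece of text
lo_word->write_table( it_data = lt_flights ).       " write a complete table
lo_word->save( iv_path = 'C:\temp\report.docx' ).   " save document, close Word
```

The point of the abstraction is exactly this: the caller never touches OLE verbs like GET PROPERTY or CALL METHOD OF directly.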

 

OLE is an old technology and seems to be a little slow for big documents (in particular if you have a lot of tables). Beyond generating documents, this class can help anyone who struggles with OLE syntax: it contains almost all the solutions.

 

In the download, you will find a test program that contains the class CL_WORD and a sample of how to use it. You will also find some images and one template. These files are used by the test program but are not necessary for the class itself.

 

A French presentation can be found here: Faire communiquer SAP et MS WORD grâce à OLE - Quelquepart

 

And there is a direct download link (remember that you will need SAPLINK to install) : http://quelquepart.biz/telechargements&file=L2RhdGEvbWVkaWFzL1pDTF9XT1JEX09MRS56aXAqOGQwZmVh&source=SCN-OLE

 

Feel free to comment here


My other blog posts:

LISTCUBE replacement : a new way to display data

ZAL11 : a replacement for AL11

ZTOAD - Open SQL editor

Some Best Practices for ABAP Core Data Services


In case you don’t know about ABAP CDS, let me briefly introduce the basic facts. CDS allows you to build database views with many features of SQL-92, and it is available for HANA and anyDB. You can use it for

  • data models for operational analytics
  • fast queries – identification of business objects or building packages for parallelization
  • fast data access for OData services, where joins are only used if necessary
  • “redefining” DDIC information in SELECTs, which can help you overcome restrictions in many frameworks like BRFplus
  • one major cornerstone for building next-generation applications in NW 7.5 and S/4HANA, as was/will be explained at this year’s SAP TechEd.
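To give a flavor of what view building looks like, here is a minimal DDL sketch over the demo table SFLIGHT (the view name and SQL view name are made up for illustration):

```
@AbapCatalog.sqlViewName: 'ZDEMOFLIGHT'
define view Zdemo_Flight as select from sflight
{
  key carrid,   // airline
  key connid,   // connection
  key fldate,   // flight date
      price,
      currency
}
```

Views like this can then be consumed in Open SQL SELECTs or stacked on top of each other.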

You can read more about it in the following blogs:

 

This is only a short blog entry, but it reflects experiences we had with ABAP CDS in releases NW 7.40 SP8 and SP11 during the development of two complex CDS models. These hints may sound easy, but if I had known them earlier, many things would have been easier.

 

Get the latest ADT tools

You need ABAP in Eclipse to create and edit DDL sources. Please don’t work with older versions of the ADT tools; until summer they allowed some dangerous manipulations that can get you into big trouble.

 

Learn SQL-92 and apply SQL best practices

CDS is about view building, so improve your SQL skills before starting with CDS. Otherwise you will most likely make severe errors; without SQL knowledge you may well get bad runtime results. But if you are a skilled SQL developer, you will get amazing results.

 

Start slowly

With CDS you can build complex data models. But complexity is no value in itself – simplicity rules. So start slowly and explore all the features. The complexity will arise soon enough once you are building views on views, which is the usual programming model.

 

Learn about restrictions

Sometimes restrictions are hidden in the documentation: http://help.sap.com/abapdocu_740/en/abenselect_cds_para_abexa.htm and the ABAP Keyword Documentation. So please read it carefully. By the way: I don’t like the way SAP communicates these restrictions, but that is the topic of another blog entry.

 

Use the ABAP package concept

CDS models tend to get complex and complicated within a short time. I recommend implementing CDS views with different purposes (operational reporting, OData, ...) in different ABAP packages. One reason is to control reuse. In my experience, many ABAP developers don’t understand the concept of reuse and apply it whenever it is possible, not only when it is necessary.

 

Study OSS Notes

CDS is a new technology and there are problems like the one here: http://service.sap.com/sap/support/notes/2023690. So I recommend studying the OSS notes.

 

Look at the transport logs

As I mentioned before, CDS is a new technology, and we had some surprises with the transport behavior. So look at the transport logs.

 

Visit this year’s SAP TechEd

CDS is a cornerstone of new business applications – so I recommend visiting ABAP lectures like “DEV106 - The ABAP Programming Model in SAP S/4HANA” (see https://scn.sap.com/community/abap/hana/blog/2015/09/08/abap-at-sap-teched-2015 for example).

Quo Vadis ABAP? I am worried.


ABAP is getting more and more database-specific features – and this makes me worry. Please don’t get me wrong – I have no problem with SAP’s technological strategy, but with SAP’s communication and documentation. And I’m convinced that this will lead to severe problems unless SAP decides to deliver better documentation.

 

SAP said it loud and clear: not every feature of the ABAP language is supported on every database platform – typical examples are AMDP and CDS with parameters. With parameterized CDS you can do amazing things: operational reporting, fast queries – it is a very powerful tool for code pushdown to the database.

 

Before reading the next section, I would like you to answer the following questions about NW 7.40:

  • Do you know which DBMS of the PAM support parameterized CDS?
  • Suppose you have a complex SAP landscape with different DBMS: how do you find out whether parameterized CDS is supported in your landscape?
  • Suppose you are an SAP partner and your solution uses parameterized CDS – do you know on which DBMS your solution runs?

 

The answer is difficult, since in NW 7.40 parameterized CDS is not supported on every DBMS of the PAM, and SAP doesn’t provide any official information about which database platforms are supported. The restriction is documented in the ABAP Help with two sentences: if you use a parameterized CDS view on a DBMS that doesn’t support this feature, you get an exception. There is also a class that tells you at runtime whether the system database supports the feature.
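As far as I know, the class in question is CL_ABAP_DBFEATURES, available as of NW 7.40 SP8; a runtime guard might look like the following sketch (please verify the class and constant names in your release):

```abap
" Check at runtime whether the current database supports CDS views with
" parameters before selecting from one.
IF cl_abap_dbfeatures=>use_features(
     requested_features = VALUE #(
       ( cl_abap_dbfeatures=>views_with_parameters ) ) ).
  " safe to SELECT from the parameterized CDS view
ELSE.
  " fall back to a database-agnostic implementation
ENDIF.
```

Note that this only helps at runtime – which is exactly the problem discussed below.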

 

Where’s the Problem?

The answer is simple: how can we make technological decisions when SAP doesn’t give precise information about which features of ABAP (and of course of AS ABAP, too) work on which database? How can we be sure that our software runs in a system landscape with different DBMS? This is common in larger development landscapes and is the general case for SAP partners.


We have the following possibilities:

  1. We can develop a solution that supports anyDB. This can be difficult and expensive, since we have to develop two variants, say one with parameterized CDS and one without. Dual development has its costs, and if I have the choice I would like to avoid it. In the end I think I have to avoid it for economic reasons.
  2. We don’t know or don’t care about the restrictions. But then the solution crashes when it runs on a DBMS that doesn’t support the feature. In the worst case this could lead to a production downtime, which is of course no option. Unfortunately parameterized CDS has some unique features, and when they are at the core of your application, a change could be problematic and expensive.
  3. We know about a restriction and change the DBMS. Of course this is expensive, too.
  4. We decide not to use the new techniques at all. This is annoying, since we are paying for the platform; without using new features the platform loses value.

 

This is the dilemma. Without knowing the facts, we can get into trouble if our system landscapes run on multiple DBMS. Without this information, SAP partners can neither answer the question whether their solution runs on a specific DBMS nor make reasonable decisions.

 

What do we need?

The answer is simple: whenever there is a restriction, SAP customers and partners should be informed about it. Here we need a list of supported databases. As I explained above, it does not suffice that the information is available only at runtime. I need it in a release note, as an eye-catcher no one can miss by chance.

 

Why I am worried?

At the moment there are only a few restrictions, but as I explained above, the times when AS ABAP was database agnostic and the whole PAM was supported are over. Of course it was always possible to create database-specific solutions using Native SQL/ADBC, but the Code Inspector / ABAP Test Cockpit contains checks that let us ensure at development time that our solutions are platform independent. I know of no such automated checks for parameterized CDS so far.

 

I think it is very likely that SAP will continue to develop ABAP features that are no longer database agnostic. As I said above, I have no problem with this, but I need explicit information about restrictions. Everyone with complex system landscapes needs this information, especially partners. If SAP continues adding database-proprietary features to the NetWeaver platform without precise information about supported DBMS, we will end up developing solutions where no one can tell on which DBMS they will run.

Hi Git, I'm ABAPer how do you do?


Have you gotten into Git yet? It is one of the new things to know when working with teams on SAPUI5 projects. But first...

 

 

git (1).png

This comic comes from XKCD.com by Randall Munroe and is re-used under Creative Commons.

 

Flashback

Earlier today I was looking for a draft blog that I had started with a simple title as a memory jog, to come back and flesh out later. Then I found this blog, which I had written up over the course of several plane trips' worth of waiting in lounges last year. Given that I presented a similar introductory Git course for ABAPers this year (2015) at SAP TechEd, I thought I should dust this off and polish it up for your enjoyment and learning pleasure.

The Call for Papers

When there was a call for community sessions before #SAPtd, I jumped at the opportunity to present on two topics that I felt were going to be under-represented at the event.

  1. Developer Communication Skills
  2. Git

So I proposed two abstracts to the community talk selection committee, and they told me that while they were very excited about both topics, they could only take one, and they would take the first. Well, I was excited but, to be honest, also a little disappointed. As much as I wanted to talk on communication skills for developers, I really wanted to introduce ABAPers to Git as a method of source code control, as I know it will become very important as we move toward the HCP and OpenUI5.

As we got closer to the conference, I was advised that one of the other speakers who had been selected had to withdraw, and I was asked to prepare my second talk.

This presentation was only delivered at #SAPtd in Las Vegas, so all the people going to Berlin (2014) will miss out. For this reason, and because there should be a bit of an intro to Git here on SCN, I thought I would distill my off-the-cuff talk into a short blog for the benefit of the ABAPers in the universe.

Welcome to Source Control

So source control is not a new subject to ABAPers. We are used to putting our code into transport requests, which then enables the code to be delivered across the landscape and into production in an orderly manner.

Let's for a moment think about what happens when you put a code artifact into a transport request. It's not that hard. You lock the object so that you and only you can work on the object at one time. Until you release your transport, everyone else is prevented from doing anything to that object.

This is all well and good, but what happens in the following scenario?

  1. You are working on a new feature request.
  2. A bug is discovered in production that relates directly to your code object.

Well, resolving the bug will obviously trump the new feature, but you are halfway through your change and it will need to be parked while the emergency is dealt with.

The challenge is that your change request might have a whole bunch of other objects that are not ready to go to prod, let alone ready for regression testing.

So mostly you do something like the following:

  1. Copy all of your changes to notepad* and save the file for safekeeping.
  2. Hack the lock on the transport request and remove your file from your transport request
  3. Copy the code from the prior transport (or back from production if they are different)
  4. Open a new transport request.
  5. Make the hotfix.
  6. Release the new transport.
  7. Add your object back into the old request and merge your changes in from the file you saved.
  8. When you are ready you release the transport.

While this is functional, it is not the best. Also note that only one person can work on each file at a time.

Enter Git

So what's so great about Git? Well, for starters, the file types involved in an OpenUI5 project don't have to be edited in SE80. In fact, you can set up your system to work on your project locally on your machine, and there are many great blogs here on SCN to help you with that. This means you can work on your project on the move rather than having to be connected to the ABAP app server the way you need to be in the ABAP world.

 

Back to the start.

So Git is a distributed source code control system where pretty much everything happens locally; you only need a network connection when you are pushing to the remote server.

You don't even need to have a remote system to use Git. You can install Git on your Linux, Mac or Windows system and use it to version whatever files you like without ever sharing your code. Personally, I have done this on a project even though I wasn't working with other team members. This enabled me to version code and roll back changes if they didn't work.
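That local-only versioning workflow can be sketched in a few commands (the directory and file names are made up for the demo):

```shell
set -e
repo=$(mktemp -d)                          # throwaway directory for the demo
cd "$repo"
git init -q
git config user.email "dev@example.com"    # identity just for this repo
git config user.name  "Demo"

echo "version 1" > znextbigthing.js
git add znextbigthing.js
git commit -qm "first working version"

echo "experiment that did not work out" >> znextbigthing.js
# discard the uncommitted change and roll back to the last committed state -
# all of this happens locally, no remote server involved
git checkout -- znextbigthing.js
```

The checkout in the last step restores the file from the last commit; nothing ever left your machine.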

For most scenarios though, you will need a remote system that all members of your team can access. For this, GitHub is the answer to your needs. Is it the only answer? No, there are others but it is a great place to start.

So surf on over to GitHub and signup for an account.

(insert image of GitHub initial screen)

There are several options to consume GitHub content: native apps, the command line and the GitHub site itself. I will focus on the command line in this blog because even though the native GUI applications simplify everything so nicely, sometimes the power the command line affords is the only way to get out of the muddle. So I like to build up the muscle memory in my fingers so that when it hits the fan and my colleague's Windows commit has clobbered my commit, I can recover without getting into a "tizzy".

First the init

Let's assume that we will start locally, because you are on a long-haul flight (with power) and have had a great new idea you want to code or write about. Let's call it NextBigThing.

The first thing you are going to do is to initialise a folder to be tracked under source control.

As I said we will start with the command line and look at other options later:

05:58:42 ~/squarecloud$ mkdir NextBigThing
05:58:59 ~/squarecloud$ cd NextBigThing/
05:59:04 ~/squarecloud/NextBigThing$ ll
total 8
drwxrwxr-x  2 nigeljames nigeljames 4096 Nov  4 17:58 ./
drwxrwxr-x 19 nigeljames nigeljames 4096 Nov  4 17:58 ../
05:59:05 ~/squarecloud/NextBigThing$ git init
Initialized empty Git repository in /home/nigeljames/squarecloud/NextBigThing/.git/
05:59:11 (master) ~/squarecloud/NextBigThing$ ll
total 12
drwxrwxr-x  3 nigeljames nigeljames 4096 Nov  4 17:59 ./
drwxrwxr-x 19 nigeljames nigeljames 4096 Nov  4 17:58 ../
drwxrwxr-x  7 nigeljames nigeljames 4096 Nov  4 17:59 .git/
05:59:13 (master) 

So here we have created a new directory for our exciting new project and initialised it so that Git can track its contents.

Next we will start editing our documents and/or code. I will start by creating a ProjectOverview.md to get my ideas down.

After editing that document for a while I need to see what is going on. I check the status back on the command line.

~/squarecloud/NextBigThing$ subl ProjectOverview.md
05:59:28 (master) ~/squarecloud/NextBigThing$ git status
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        ProjectOverview.md

nothing added to commit but untracked files present (use "git add" to track)
06:00:23 {master} ~/squarecloud/NextBigThing$

 

So let's see what is going on:

You can see that the command line prompt has changed from being round brackets and green to red curly brackets. This is my visual cue that my repository is not up to date.

 

So looking at the directory listing I can see my new file and by checking git's status I can see that I have one untracked file.

Git is nice and tells us what to do most of the time. It is telling me to add the file to be tracked. So let's do that:

 

06:00:23 {master} ~/squarecloud/NextBigThing$ git add ProjectOverview.md 
06:00:44 {master} ~/squarecloud/NextBigThing$ git status
On branch master

Initial commit

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

        new file:   ProjectOverview.md

06:00:47 {master} ~/squarecloud/NextBigThing$

It is now telling me that the tracked file needs to be committed. You can think of committing as saving a file: it is a snapshot of our file at that point in time.

So let's go ahead and commit the file:

06:00:47 {master} ~/squarecloud/NextBigThing$ git commit -m "Inital Idea"
[master (root-commit) cb62f3d] Inital Idea
 1 file changed, 5 insertions(+)
 create mode 100644 ProjectOverview.md
06:01:21 (master) ~/squarecloud/NextBigThing$

Did you notice that the brackets have now changed back to round?

This work-add-commit loop is what you will do most with git.

Share and share a like

We now need to share this idea with our stakeholders and team so we are going to create a repository on GitHub and then push our work there for all to see.

So we log onto GitHub and find the big green 'Add Repository' button. Personally, I love green for positive actions.

create-git-repo.png

 

We have to fill out the Repository name which has to be unique under your account. We enter a description and decide if we are making this public or private. I want the world to know about my new idea so of course we are making this public.

There are a couple of other options:

  1. Initialize with a README
  2. Add .gitignore
  3. Add a licence

As we have content for our repository already we are going to leave these blank but if you were creating a repo from scratch on the GitHub site it would be pretty handy to choose these options.

A README.md is created if you select option one. This is a handy place to tell the world about your repo as it is displayed by default on the GitHub site.

The .gitignore file tells git which files to ignore when committing your code. This is handy if your IDE creates project files or there are secure files that are not appropriate to be shared publicly.
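For example, a minimal .gitignore for a UI5-style project might contain patterns like these (the entries are illustrative; adjust them to your IDE and build setup):

```
# IDE project files
.project
.settings/
# build output and dependencies
dist/
node_modules/
# local secrets - never commit credentials
*.key
```

Each line is a glob pattern; anything matching it is simply invisible to git add and git status.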

Lastly, the licence is a handy feature for open-source projects, which need a licence to be considered open source.

So, with all that out of the way, we press another Big Green Button and create our project.

 

repo-created.png

 

GitHub now presents us with options for how to clone or push to our repo.

Since we have content already we are going to push our repo.

06:01:21 (master) ~/squarecloud/NextBigThing$ git remote add origin git@njames.github.com:njames/NextBigThing.git
06:04:58 (master) ~/squarecloud/NextBigThing$ git push -u origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects:  50% (1/2)   
Compressing objects: 100% (2/2)   
Compressing objects: 100% (2/2), done.
Writing objects:  33% (1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To git@njames.github.com:njames/NextBigThing.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
06:05:22 (master) ~/squarecloud/NextBigThing$     

Now if we refresh the GitHub repo page we can see all the commits made to it.

 

repo-after-push.png

 

The rest of the world can now clone our repo and find out about the NextBigThing.

 

Clone me baby, one more time

If we look at the repository properties, we can see a URL that we can use to clone the repo. There are several protocols we can use: HTTP or (and this is my preference) SSH. I prefer SSH because once I have added my public SSH key to GitHub, that is how I am identified, and it is seamless.

The Windows client uses HTTP, and you need to add your username and password.

So staying with the command line, this is how we clone:

git clone git@github.com:njames/NextBigThing.git 

Summary

So in this blog, we have learned that git is a great distributed source control system.

We have learned how to:

  1. init
  2. add
  3. commit
  4. push
  5. clone

There is a lot more to get into with git but you can get started and then learn as you go.

I hope you have found this a useful introduction and if this topic is of interest I will expand some of these topics.

You can also refer to my slides from my recent session at TechEd Las Vegas 2015.

Outer Join of Internal Tables (ABAP Release 7.40)


Here I am trying to explain how to do an outer join of two internal tables using the new internal table functions available in ABAP release 7.40. The same method can be extended to add more tables to the join.

 

Sample program is below. I hope the inline comments are clear. If you need any explanation, please add a comment below and I will try to answer.

 

report outer_joins.

class lcl_outer_join definition.

   public section.

     methods:

*Main method

       perform.

   private section.

*Sample tables to hold initial values

     types:begin   of   struc1,

             f1 type numc1,

             f2 type numc1,

           end     of   struc1,

           begin   of   struc2,

             f1 type numc1,

             f3 type numc1,

           end     of   struc2,

*Table structure to hold output. Common key between the tables is field F1

           begin   of   struc3,

             f1 type numc1,

             f2 type numc1,

             f3 type numc1,

           end     of   struc3,

           tab1    type standard table of struc1 with non-unique key f1,

           tab2    type standard table of struc2 with non-unique key f1,

           tab3    type standard table of struc3 with non-unique key f1,

           ty_numc type numc1.

     data: table1 type tab1,

           strc2  type struc2,

           table2 type tab2.

     methods:

       build_tables,

       outer_join,

       line_value importing value(key) type numc1 returning value(result) type numc1.

endclass.


class lcl_outer_join implementation.

   method perform.

*Build input tables

     build_tables( ).

*Perform outer join

     outer_join( ).

   endmethod.

   method build_tables.

*Populate initial values

*Reference: ABAP News for 7.40, SP08 - Start Value for Constructor Expressions

     table1 = value tab1( ( f1 = '1' f2 = '2' )

                          ( f1 = '2' f2 = '8' )

                          ( f1 = '1' f2 = '9' )

                          ( f1 = '3' f2 = '4' ) ).

 

     table2 = value tab2( ( f1 = '1' f3 = '5' )

                          ( f1 = '3' f3 = '6' ) ).

   endmethod.

 

   method line_value.

*Store the last accessed structure, to reduce table access

     if strc2-f1 ne key.

       try.

*Read the line from 2nd table, with respect to the key

*Reference: ABAP News for Release 7.40 - Table Expressions

           strc2 = table2[ f1 = key ].

         catch cx_sy_itab_line_not_found.

*If corresponding line was not found, then avoid dump, populate blank

           clear strc2.

           strc2-f1 = key.

       endtry.

     endif.

*Pass the required field from 2nd table as result

     result = strc2-f3.

   endmethod.


   method outer_join.

*Perform join and display output

*Reference: ABAP News for 7.40, SP08 - FOR Expressions

     cl_demo_output=>display_data( value tab3( for data1 in table1

                                             ( f1 = data1-f1

                                               f2 = data1-f2

*Field F3, is populated by calling the method, passing the common key

                                               f3 = line_value( data1-f1 ) ) ) ).


*If you are sure that table2 will always have an entry corresponding to F1 field,

*then there is no need to create a method to read the value.

*Meaning: Since my table1 has a row with F1 = 2, but table2 doesn't,

*the statement below, will result in a dump with exception

*cx_sy_itab_line_not_found. But, if my table2 also

*had an entry corresponding to F1 = 2, then, this single statement

*is enough to perform the join.

    cl_demo_output=>display_data( value tab3( for data1 in table1

                                             ( f1 = data1-f1

                                               f2 = data1-f2

                                               f3 = table2[ f1 = data1-f1 ]-f3 ) ) ).

   endmethod.

endclass.

 

start-of-selection.

   data:   joins   type ref to lcl_outer_join.

*Initialize and perform join

   create object joins.

   joins->perform( ).

ABAP News for Release 7.50 - SELECT UNION


I promised to tell you why the INTO clause should be placed behind all the other clauses in an Open SQL SELECT statement. One reason is that Open SQL also wanted to support the SQL syntax addition UNION. A UNION addition can be placed between SELECT statements in order to create the union of the result sets. ABAP CDS offered its UNION from the beginning (7.40, SP05). If you wanted to use it in Open SQL, you had to wrap it in a CDS view. What hindered Open SQL? Well, the position of the INTO clause before the WHERE, GROUP BY and ORDER BY clauses. These clauses can be part of any SELECT statement participating in unions and there must be only one INTO clause at the end. Therefore, with 7.40, SP08, as a first step, the INTO clause was given a new position.

 

Now, with ABAP 7.50, we can bring in the harvest. Let me show you an example. The task is to get the names of all the ABAP source texts of a package. These might be needed for searching in the sources or for dumping them into a file. All programs can be found in the database table TRDIR. You know that the source code of some ABAP program types, like class pools and function pools, is distributed over include programs. In order to select the correct technical names of the include programs, it is not a bad idea to construct a ranges table that does the search for you based on some known features.

 

Before ABAP 7.50, the construction of such a ranges table might have looked as follows:

 

DATA prog_range TYPE RANGE OF trdir-name.


SELECT 'I' AS sign, 'EQ' AS option, obj_name AS low, ' ' AS high

        FROM tadir

        WHERE pgmid = 'R3TR' AND object = 'PROG' AND devclass = @devclass

        INTO TABLE @prog_range.

 

SELECT 'I' AS sign, 'CP' AS option, obj_name && '*' AS obj_name, ' ' AS high

        FROM tadir

        WHERE pgmid = 'R3TR' AND object = 'CLAS' AND devclass = @devclass

        APPENDING TABLE @prog_range.

 

SELECT 'I' AS sign, 'CP' AS option, 'SAPL' && obj_name AS obj_name, ' ' AS high

        FROM tadir

        WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass

        APPENDING TABLE @prog_range.

 

SELECT 'I' AS sign, 'CP' AS option, 'L' && obj_name && '*' AS obj_name, ' ' AS high

        FROM tadir

        WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass

        APPENDING TABLE @prog_range.

 

Four individual SELECT statements are used to fill one internal table prog_range with the help of the APPENDING addition. Note the usage of string expressions in the SELECT lists.

 

With ABAP 7.50 you can pack the four SELECT statements into one (this can be called code push down):

 

DATA prog_range TYPE RANGE OF trdir-name.

 

SELECT 'I' AS sign, 'EQ' AS option, obj_name AS low, ' ' AS high

       FROM tadir

       WHERE pgmid = 'R3TR' AND object = 'PROG' AND devclass = @devclass

UNION

SELECT 'I' AS sign, 'CP' AS option, obj_name && '*' AS obj_name, ' ' AS high

       FROM tadir

       WHERE pgmid = 'R3TR' AND object = 'CLAS' AND devclass = @devclass

UNION

SELECT 'I' AS sign, 'CP' AS option, 'SAPL' && obj_name AS obj_name, ' ' AS high

       FROM tadir

       WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass

UNION

SELECT 'I' AS sign, 'CP' AS option, 'L' && obj_name && '*' AS obj_name, ' ' AS high

       FROM tadir

       WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass

       INTO TABLE @prog_range.

 

The result is the same as above and can be used to get the program names, e.g. as follows:

 

SELECT name

       FROM trdir

       WHERE name IN @prog_range

       ORDER BY name

       INTO TABLE @DATA(programs).

 

(The example is not bullet-proof, but it is just an example and you might extend it...)

 

As shown here, with UNION you can unite the result sets from SELECT statements for one database table, but it is also possible to combine the result sets of different database tables, if the numbers of columns and the column types match.

 

For more information and examples see SELECT - UNION.


ABAP News for Release 7.50 - Converting Messages into Exceptions


Messages are basically short texts stored in database table T100. What makes them special is the ABAP statement MESSAGE. This statement sends a message with a short text from T100 and adds a message type (S, I, W, E, A, X). The system behavior after sending a message is extremely context-dependent, and I'm not really confident that the documentation covers all the possible situations.

 

Historically, messages were invented for the PAI-handling of classical dynpros. There they can be used to conduct an error dialog. As a rule, messages should be restricted to that usage. But there is an important exception. Messages are also closely connected to exception handling:

 

  • In exception classes that implement IF_T100_MESSAGE, messages from T100 can serve as exception texts. Then they are part of the semantic properties of an exception class, besides the class name and its super classes.

 

  • For non-class-based exceptions, messages can play the role of a poor man's exception text concept.

    • By raising a classical exception with MESSAGE RAISING instead of RAISE, you add the message text and type to the exception. After handling such a classical exception with the EXCEPTIONS addition of the CALL statement, you find the information in the well-known system fields sy-msg... .

    • You can catch messages sent with MESSAGE naming the predefined classical exception error_message behind EXCEPTIONS of the CALL statement.

 

What's missing?

 

Nowadays you work with class-based exceptions in your application programs. But from time to time you have to call legacy procedures that throw classical exceptions that are bound to messages. If you cannot handle the reason for the exception in place, you want to pass it on to your caller in the form of a class-based exception. The problem is how to find an appropriate exception class and how to convert the message-based exception text of the original exception into an exception text of the exception class.

 

Since the exception texts of an exception class are part of its semantics, you would need your own exception class, or at least an exception text, for each message that might occur. Then you can raise the class-based exception e.g. as follows:

 

meth( EXCEPTIONS exception = 4 ).

IF sy-subrc = 4.

  RAISE EXCEPTION TYPE cx_demo_t100

    EXPORTING

      textid = cx_demo_t100=>demo

      text1  = CONV #( sy-msgv1 )

      text2  = CONV #( sy-msgv2 )

      text3  = CONV #( sy-msgv3 )

      text4  = CONV #( sy-msgv4 ).

ENDIF.

 

Here, meth is a method that raises a classical exception exception with MESSAGE RAISING, and cx_demo_t100 implements IF_T100_MESSAGE and denotes a message that fits the message passed by the classical exception. If there is no appropriate exception class at hand that is able to cover all the messages that might be sent by a called procedure, shrewd developers also proceed as follows:

 

meth( EXCEPTIONS exception = 4 ).

IF sy-subrc = 4.

  RAISE EXCEPTION TYPE cx_demo_t100

    EXPORTING

      textid = VALUE scx_t100key( msgid = sy-msgid

                                  msgno = sy-msgno

                                  attr1 = 'TEXT1'

                                  attr2 = 'TEXT2'

                                  attr3 = 'TEXT3'

                                  attr4 = 'TEXT4' )

      text1  = CONV #( sy-msgv1 )

      text2  = CONV #( sy-msgv2 )

      text3  = CONV #( sy-msgv3 )

      text4  = CONV #( sy-msgv4 ).

ENDIF.

 

This exploits the fact that you can pass any structure of type scx_t100key to the constructor of an exception class. By doing so, you define a message from T100 not statically as a message text but when raising the exception. Only the attributes for the replacement texts have to be there. Knowing that, you can create a kind of generic exception class for messages. But this is not recommended for exception classes implementing IF_T100_MESSAGE. For such an exception class, the exception text should not be dynamic and you should pass only constants of that class to the parameter textid. Furthermore, the above coding is quite cumbersome. And further-furthermore, there's no way to pass the message type.

 

Solution with ABAP 7.50

 

Since the above scenario is a valid use case, a solution is provided with ABAP 7.50: a new interface IF_T100_DYN_MSG that includes IF_T100_MESSAGE. It adds an attribute msgty for the message type, and it also adds predefined attributes msgv1 to msgv4 for the replacement texts (placeholders) of a message.

 

If an exception class cx_demo_dyn_t100 implements IF_T100_DYN_MSG, you can profit from a new MESSAGE addition to the RAISE EXCEPTION statement:

 

meth( EXCEPTIONS exception = 4 ).

IF sy-subrc = 4.

  RAISE EXCEPTION TYPE cx_demo_dyn_t100

    MESSAGE ID   sy-msgid

            TYPE   sy-msgty

            NUMBER sy-msgno

            WITH   sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.

ENDIF.

 

This does basically the same as the example above, but now in a well-educated way. You can pass the full signature of a message to an exception class including the message type and the runtime environment does the rest for you. Also, you don't have to care about the names of the attributes for the placeholders any more. When handling the exception you have access to the message, e.g. as follows:

 

CATCH cx_demo_dyn_t100 INTO DATA(oref).

  cl_demo_output=>display(

    |Caught exception:\n\n| &&

    |"{ oref->get_text( ) }" of type { oref->msgty }| ).

 

You get back the message text and, that's new, also the message type. From now on, this is the recommended way of converting classical messages to exceptions.

 

For more information and more examples see:

 

 

The MESSAGE addition is also available for THROW in conditional expressions, of course.
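As a hedged sketch of what that looks like in a conditional expression (the class name is taken from the example above; the message ID, number, and placeholder values below are made up for illustration):

```abap
DATA(text) = COND string(
  WHEN input IS NOT INITIAL
  THEN input
  ELSE THROW cx_demo_dyn_t100(
         MESSAGE ID     'SDEMO'
                 TYPE   'E'
                 NUMBER '001'
                 WITH   'illustrative placeholder' ) ).
```

If the WHEN condition fails, the expression raises the exception with the full message signature instead of producing a value.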

Unit testing mockup loader for ABAP


Hi Community !

 

I'd like to share a tool for unit testing me and my team have developed for our internal usage recently.

 

The tool was created to simplify data preparation and loading for SAP ABAP unit tests. In one of our projects we had to prepare a lot of table data for unit tests. For example, a set of content from the BKPF, BSEG, BSET tables (an FI document). The output to be validated is also often a table or a complex structure.

 

Data loader

 

Hard-coding all of that data was not an option - too much code, difficult to maintain, and terrible readability. So we decided to write a tool that would get the data from TAB-delimited .txt files, which, in turn, would be prepared in Excel in a convenient way. Certain objectives were set:

 

  • all the test data should be combined together in one file (zip)
  • ... and uploaded to SAP - test data should be a part of the dev package (W3MI binary object would fit)
  • loading routine should identify the file structure (fields) automatically and verify its compatibility with a target container (structure or table)
  • it should also be able to safely skip fields, missing in .txt file, if required (non strict mode) e.g. when processing structures (like FI document) with too many fields, most of which are irrelevant to a specific test.

 

Test class code would look like this:

...

call method o_ml->load_data " Load test data (structure) from mockup

  exporting i_obj      = 'TEST1/bkpf'

  importing e_container = ls_bkpf.


call method o_ml->load_data " Load test data (table) from mockup

  exporting i_obj      = 'TEST1/bseg'

            i_strict    = abap_false

  importing e_container = lt_bseg.


...

call method o_test_object->some_processing " Call to the code being tested

  exporting i_bkpf  = ls_bkpf

            it_bseg  = lt_bseg

  importing e_result = l_result.


assert_equals(...).

...


The first part of the code takes the TAB-delimited text file bseg.txt in the TEST1 directory of the ZIP file uploaded as a binary object via transaction SMW0...


BUKRS BELNR GJAHR BUZEI BSCHL KOART ...

1000  10    2015  1    40    S    ...

1000  10    2015  2    50    S    ...


... and puts it (applying the proper conversion exits, ALPHA etc.) into an internal table with the BSEG line type.


Store/Retrieve


Later another objective was identified: some code is quite difficult to test when it has a SELECT in the middle. Of course, good code design would isolate DB operations from the business logic, but that is not always possible. So we needed a way to substitute the SELECTs in the code with a simple call that takes the prepared test data instead, whenever a test environment is identified. We came up with a solution we called Store. (BTW, it might nicely complement the newly announced TEST-SEAM feature.)
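For illustration, here is a rough sketch of how the store could pair with the TEST-SEAM feature (the method name, seam name, and field names are made up; the zcl_mockup_loader call mirrors the retrieve calls used elsewhere in this post):

```abap
" Production code: the DB access is wrapped in a test seam
METHOD read_document_header.
  TEST-SEAM select_bkpf.
    SELECT SINGLE * FROM bkpf
      INTO @me->fi_doc_header
      WHERE belnr = @i_belnr.
  END-TEST-SEAM.
ENDMETHOD.

" Test include: inject the mockup loader instead of the SELECT
TEST-INJECTION select_bkpf.
  call method zcl_mockup_loader=>retrieve
    exporting i_name   = 'BKPF'
    importing e_data   = me->fi_doc_header
    exceptions others  = 4.
END-TEST-INJECTION.
```

With the seam in place, the production code no longer needs an explicit test-environment indicator around the SELECT.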


Test class would prepare/load some data and then "store" it:


...

call method o_ml->store " Store some data with 'BKPF' label

  exporting i_name = 'BKPF'

            i_data = ls_bkpf. " One line structure

...


... And then "real" code is able to extract it instead of selecting from DB:


...

if some_test_env_indicator = abap_false. " Production environment

  " Do DB selects here

else.                                    " Test environment

  call method zcl_mockup_loader=>retrieve

    exporting i_name  = 'BKPF'

    importing e_data  = me->fi_doc_header

    exceptions others = 4.

endif.


if sy-subrc is not initial.

  " Data not selected -> do error handling

endif.

...


In the case of multiple test cases it can also be convenient to load a number of table records and then filter them based on some key field available in the working code. This option is also possible:


Test class:


...call method o_ml->store " Store some data with 'BKPF' label

  exporting i_name  = 'BKPF'

            i_tabkey = 'BELNR'  " Key field for the stored table

            i_data  = lt_bkpf. " Table with MANY different documents

...


"Real" code:


...

if some_test_env_indicator = abap_false. " Production environment

  " Do DB selects here

else.                                    " Test environment

  call method zcl_mockup_loader=>retrieve

    exporting i_name  = 'BKPF'

              i_sift  = l_document_number " Filter key from real local variable

    importing e_data  = me->fi_doc_header  " Still a flat structure here

    exceptions others = 4.

endif.


if sy-subrc is not initial.

  " Data not selected -> error handling

endif.

...


As the final result, we can perform completely dynamic unit tests in our projects, covering most of the code, including DB-select-related code, without actually accessing the database. Of course, it is not only the mockup loader which ensures that. It requires accurate design of the project code, separating DB selection from processing code. But the mockup loader and the "store" functionality make it more convenient.


illustration.jpg

Links and contributors


The tool is the result of the work of my team, including:

 

The code is freely available at our project page on github - sbcgua/mockup_loader · GitHub

 

I hope you find it useful

 

Alexander Tsybulsky

ABAP News for Release 7.50 - Annotations in ABAP CDS


ABAP CDS in TechEd Keynote

 

Bjoern Goerke has shown ABAP CDS in his keynote for TechEd Barcelona, yes, ABAP!

 

Interestingly, he did not talk much about all the DDL language elements of ABAP CDS. In fact, he used quite a simple CDS view:

 

cds1.gif

 

The SELECT statement of the view wraps the access to database table zbg_marsdata. Some of the modeling capabilities of CDS shine through with the association _MarsSite that joins Z_MarsRoverMissions with Z_MarsSites. But this was of marginal importance for the presentation.

 

What he did talk about were annotations!

 

cds2.gif

 

The DDL source code consists mainly of annotations and not of SQL! What's that all about?

 

With annotations, you can semantically enrich your data model. And as you see, this is abundantly done above. Let's have a look behind the curtain.

 

What are Annotations?

 

From the compiler's point of view, annotations are simply something that can be written at given positions and have to follow a prescribed syntax. As shown in the example, you can write something like

 

@v_annot4:{ annot0, annot1:'abc', annot2:123 }

 

into the DDL source of a CDS view. The source is syntactically correct and can be activated. Of course, such an annotation has no meaning as long as nobody evaluates it. During activation of a DDL source, its annotations are saved in system tables and there are system classes available to evaluate them.

 

Some annotations are evaluated directly during activation and by the ABAP runtime environment.

 

 

Annotations before ABAP 7.50

 

Before ABAP 7.50, only a handful annotations played a role. Those were the annotations that are evaluated during activation and by the ABAP runtime environment. We call those annotations ABAP annotations. They are documented as part of the ABAP CDS documentation, as e.g. the ABAP view annotations. An important example is @ClientDependent, that defines the client handling when Open SQL is used to access a CDS entity. Other examples are the EndUserText annotations that denote translatable texts.

 

Annotations with ABAP 7.50

 

The usage of annotations is not restricted to the ABAP Dictionary's own needs and the ABAP runtime environment (e.g. Open SQL). As said above, you can enter any annotations you want, as long as you stay within the syntax rules. Of course, there must be someone who evaluates them. And that's what software components of SAP do with ABAP 7.50! SAP software components such as OData, UI, and Analytics prescribe sets of annotations that can be used to achieve a defined behavior, and they provide frameworks that evaluate these component annotations and act accordingly. In other words, it's no longer the ABAP runtime environment alone that evaluates DDL source code! Accordingly, the documentation of these annotations is not part of the ABAP CDS reference documentation (so don't send your error messages there ...) but is delivered by the respective software components. There is a landing page where all SAP annotations are listed and where you find links to the detailed component documentation.

 

As you see in the screenshot of Bjoern's session above, he uses lots of component annotations such as @Search... and @UI.... While the syntax coloring and code completion of ADT recognize them, the ABAP runtime environment (e.g. Open SQL) does not care at all. You have to work with the respective frameworks in order to see the effects! Of course, Bjoern did exactly that.

 

Here is an example of documentation that describes what you have to do in order to expose a CDS view as an OData service:

 

Exposing CDS View as OData Service

 

Have fun!

 

 



Generate DOCX file in ABAP


Last month, I published my abstraction class to manage the Word OLE link. It can generate complete (and complex) Word documents, but it is a little slow for big tables.

 

OLE is an old technology... DOCX is an XML-based format... Enough to change my mind about Word file generation. Exit OLE, welcome XML.

 

I updated my abstraction class to generate DOCX files from ABAP directly, without any OLE usage. I know there are currently some projects that aim to do this (abap2docx for example), but I think these projects are too complex to use, or not yet usable in real life.

 

With my class, it has never been easier to generate DOCX. You never have to use or understand XML.

 

Here is the code of the "hello world" program.

docx_generation-sap-abap-hello-world[1].jpg

The class is simple, but can manage complex documents!
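Since the screenshot above carries the actual "hello world" code, here is only a rough sketch of the idea - note that the method names below are assumptions for illustration, not the real API; check the test program shipped with the class for the actual calls:

```abap
" Hypothetical usage sketch of CL_WORD (method names are guesses)
DATA(lo_word) = NEW cl_word( ).
lo_word->write_text( 'Hello world' ).    " plain text, default style
lo_word->generate( ).                    " build the DOCX XML parts
lo_word->download( 'hello_world.docx' ). " save to the frontend
```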

 

Here is the feature list :

  • Empty document creation or with use of template (docx, dotx, docm, dotm)
  • Write text with or without style (character style and/or paragraph style)
  • Option to manually apply bold, underline, italic, strikethrough, subscript, superscript, small caps, font name & size, font color, highlight color, letter spacing
  • Management of alignment, indent, spacing before/after paragraph
  • Break line, page, section, continuous section
  • Write table with or without style (and option to define cell format : bold, color...)
  • Write Header / footer
  • Choose portrait/landscape, manage page border
  • Add images
  • Add canvas
  • Insert table of contents (TOC)
  • Add and manage document properties
  • Create and insert custom fields
  • Style creation (character/paragraph)
  • Manage files in SAP Web Repository for template/image (SAPWR, access with transaction SMW0)

 

In the download, you will find a test program that contains the class CL_WORD and a sample of how to use it. You will also find some images and one template. These files are used by the test program but are not necessary for the class itself.

 

French presentation can be found here : SAP : Générer un document Word DOCX en ABAP

 

And there is a direct download link (remember that you will need SAPLINK to install) : http://quelquepart.biz/telechargements&file=L2RhdGEvbWVkaWFzL1pDTF9XT1JEX0RPQ1guemlwKjU1NGMxNQ&source=SCN-DOCX

 

Feel free to comment here


My other blog posts:

LISTCUBE replacement : a new way to display data

ZAL11 : a replacement for AL11

ZTOAD - Open SQL editor

Abstraction class to generate MSWORD with SAP using OLE

ABAP News for Release 7.50 - INSERT FROM Subquery and GTTs


This is about two (no, even three) new things in Open SQL in ABAP 7.50.

 

INSERT FROM Subquery

 

Before ABAP 7.50, you can use subqueries in the WHERE condition of SELECT, UPDATE, or DELETE of Open SQL.

 

With ABAP 7.50, there is a new position for a subquery in Open SQL: behind FROM of the statement INSERT. This can help you avoid unnecessary round trips of aggregated data between the database and the application server. I simply show you the example from the documentation:

 

 

Before 7.50: INSERT FROM TABLE

 

SELECT

  FROM scarr AS s

       INNER JOIN spfli AS p ON s~carrid = p~carrid

  FIELDS s~mandt,

         s~carrname,

         p~distid,

         SUM( p~distance ) AS sum_distance

  GROUP BY s~mandt, s~carrname, p~distid

  INTO TABLE @DATA(temp).


INSERT demo_sumdist_agg FROM TABLE @temp.

 

The task is to aggregate some data from two tables and insert the result into another table. Before ABAP 7.50, you aggregated into an internal table temp and inserted that table into the target table. For that, the aggregated data was transported from the database to the application server and back again.

 

With 7.50: INSERT FROM SELECT

 

INSERT demo_sumdist_agg FROM

  ( SELECT

      FROM scarr AS s

        INNER JOIN spfli AS p ON s~carrid = p~carrid

      FIELDS s~carrname,

             p~distid,

             SUM( p~distance ) AS sum_distance

      GROUP BY s~mandt, s~carrname, p~distid ).

 

Isn't that simple? The same SELECT statement as above, but placed directly as a subquery in parentheses behind the addition FROM of INSERT. Only one SQL statement. No transport of the aggregated data from the database to the application server and back again. Not much more to say here, right? For more information see INSERT dbtab - subquery.



Intermission: SELECT FROM FIELDS

 

Did you notice the new FIELDS addition in SELECT? The FIELDS addition allows you to place the FROM clause directly behind SELECT and in front of all other clauses. To do so, the SELECT list has to be placed behind FIELDS. Why that new sequence? A FROM clause in front of all other clauses supports tools like code completion and syntax coloring in all clauses. Compare to SELECT in ABAP CDS. Same there, but with curly brackets instead of FIELDS.


 

GTTs

 

Now imagine that you need the aggregated data of table demo_sumdist_agg only within one DB transaction (DB LUW). A normal database table is too heavyweight for such temporary data, because a database's administration of a normal table supports persistent data. Therefore, databases support the concept of Global Temporary Tables (GTTs). A GTT is a database table with a lightweight administration that is bound to one DB transaction: a GTT is always empty at the start of a DB transaction and must be empty at the end of each DB transaction.

 

Database systems support GTTs natively.

 

With ABAP 7.50, you can also define GTTs in the ABAP Dictionary by setting the Table Category to Global Temporary Table.

 

gtt.gif

 

Then, the underlying database behaves accordingly. When accessing a GTT defined in the ABAP Dictionary with Open SQL, the following additional rules apply:

 

  • If you modify the contents of a GTT with an Open SQL statement, you must delete them explicitly before the end of the current database LUW, either with the Open SQL statement DELETE FROM dbtab without a WHERE condition or with an explicit database commit or database rollback (in which case the database interface deletes the contents).
  • If the contents of a GTT filled with Open SQL are not deleted explicitly before an implicit database commit, the runtime error COMMIT_GTT_ERROR occurs, independently of the actual content (even if the table is empty).

 

Why that special behavior?

 

  • Comprehensibility: A database system deletes the contents of a GTT at the end of the database LUW. A developer might be surprised to find a table empty after an implicit database commit (a simple WAIT statement suffices to trigger one). Therefore, Open SQL forces explicit deletion.

  • Platform dependency: It cannot be guaranteed that every database platform deletes the data in a GTT before an implicit database commit.

 

For our above example, this means: if demo_sumdist_agg is a GTT (it isn't in 7.50 SP01, but I changed it into a GTT for SP02 while writing this blog), you must include

 

DELETE FROM demo_sumdist_agg.

 

before any implicit database commit, e.g. before calling a dynpro screen. Otherwise the above runtime error occurs (and I had to adjust my examples that access the table).
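Putting it together, a safe usage pattern for the GTT within one DB LUW might look like this (a sketch assembled from the statements above; the WAIT merely stands in for anything that triggers an implicit commit):

```abap
" Fill the GTT directly on the database
INSERT demo_sumdist_agg FROM
  ( SELECT
      FROM scarr AS s
        INNER JOIN spfli AS p ON s~carrid = p~carrid
      FIELDS s~carrname,
             p~distid,
             SUM( p~distance ) AS sum_distance
      GROUP BY s~mandt, s~carrname, p~distid ).

" Work with the temporary data
SELECT * FROM demo_sumdist_agg
  ORDER BY carrname
  INTO TABLE @DATA(result).

" Explicit cleanup before any implicit database commit
DELETE FROM demo_sumdist_agg.

" Now an implicit commit is harmless - no COMMIT_GTT_ERROR
WAIT UP TO 1 SECONDS.
```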

 

For more information and examples see Global Temporary Tables.
