Ariba Master data extraction program

Introduction: Master data extraction program

This blog talks about how to use the master data extraction program to send data from ECC to the Ariba system. It covers the following topic areas:

  • Master Data extraction program
  • Sequence for Initial load consideration
  • Customization of the master program
  • Incremental load consideration
  • Lessons Learnt
  • Test cases for Master data integration

Master Data extraction program:

The standard program /ARBA/MASTER_DATA_EXPORT usually needs customization, and you will waste time hunting for implicit enhancement points, which are too few and not in the right places. So I did the easier thing: copied the entire program as a Z program and made the changes there. When you run the program you see radio buttons such as Procure to Order, Procure to Pay, and Sourcing and Contracts.

What needs to run depends on which modules you implement in Ariba. A lot of the content available for download is common to the Sourcing, P2P, and P2O radio buttons. However, the program still needs to run twice, as each run updates a specific module in Ariba. Based on what you select, you get different options under ‘Choose Master data to export’.

To understand this better: you first extract company code from P2O, and then extract company code again from Sourcing. Even though it is the identical company code master data, it needs to go to different modules in Ariba, such as Sourcing and Contract Management, so you need to run the extractors twice. Generally, the procurement scenario in Ariba consists of:

P2P: Contracts, Orders, and Invoicing

P2O: Contracts only

You can also choose to send data as a one-time initial load or as an incremental load. So you would need to run:

  1. Sourcing and contracts
  2. Procure to Pay/ Procure to Order (One of these depending on modules implemented in Ariba)

There are also connectivity options:

  • Direct connectivity: You can use this to send the data to the Ariba system directly
  • Integration Toolkit: This writes the file to a shared drive, and the integration layer (PI/PO) should pick it up and send it to Ariba.

For more, read: https://blogs.sap.com/tag/sap-ariba-master-data-extraction/


Sequence for Initial load consideration:

It is advisable to follow a specific sequence to ensure the correct data is uploaded to the Ariba system in the right order. Load Sourcing and Contracts first, as it is an upstream module in Ariba while the others are downstream.


Customization of the master data extraction program

There may be a need for some customization to clean or correct the data which is sent to Ariba. Some of the scenarios are described below.

To change a column name, or add an additional column and populate data:

You may need to change a column name to make it the primary key. For example, instead of the user ID, which is the default primary key for the user extraction, the client wanted to make the email ID the primary key. So I had to rename the email column to UNIQUENAME (Ariba understands the UniqueName column as the primary key) in the user extraction file and assign the email address as its value. In /ARBA/FIELD_MAP, the field BNAME (User ID) is currently mapped to the Ariba field UNIQUENAME. I now want the email ID field to populate UniqueName instead.

Let’s have a look at how we do it.


  1. In table /ARBA/FIELD_MAP (via SM30) I added an extra SAP column, UNIQUENAME1, and mapped it to the Ariba column name UniqueName, so the column in the CSV will be called UniqueName. The field UNIQUENAME1 was then assigned the email ID.


I also changed the label for BNAME to UserId, as only one field can be mapped to UniqueName, which becomes the key in Ariba.
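After both changes, the relevant /ARBA/FIELD_MAP entries look roughly like this (an illustrative layout, not the actual table structure):

SAP field        Ariba column in CSV
BNAME            UserId
UNIQUENAME1      UniqueName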


  2. Add an append structure to the standard structure, as seen below:

[Screenshot: append structure added to the standard extraction structure]

  3. To assign a value to this column, we need code that reads the user's email address.

To populate this field I used the BADI /ARBA/MASTER_DATA. This BADI has all the user extraction methods, as seen below.

[Screenshot: methods of BADI /ARBA/MASTER_DATA]

The method /ARBA/IF_EXP_MASTER_DATA~MODIFY_USER_DATA is called each time a user record is extracted, so I populate my field UNIQUENAME1 with the email address there. Remember, the parameter is a structure, so the method is called for each entry one by one.
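A minimal sketch of what such an implementation can look like, assuming the changing parameter is a structure named CS_USER carrying BNAME and the appended field UNIQUENAME1, and that the email is read from USR21/ADR6 (parameter and variable names are illustrative, not the actual signature):

METHOD /arba/if_exp_master_data~modify_user_data.
  DATA: lv_persnumber TYPE usr21-persnumber,
        lv_addrnumber TYPE usr21-addrnumber,
        lv_smtp_addr  TYPE adr6-smtp_addr.

* Read the address keys for the current user
  SELECT SINGLE persnumber addrnumber FROM usr21
    INTO (lv_persnumber, lv_addrnumber)
    WHERE bname = cs_user-bname.

  IF sy-subrc = 0.
*   Read the email address from the central address table
    SELECT SINGLE smtp_addr FROM adr6
      INTO lv_smtp_addr
      WHERE addrnumber = lv_addrnumber
        AND persnumber = lv_persnumber.
    IF sy-subrc = 0.
*     Populate the appended field so the CSV column UniqueName carries the email
      cs_user-uniquename1 = lv_smtp_addr.
    ENDIF.
  ENDIF.
ENDMETHOD.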


However, most of the methods, like /ARBA/IF_EXP_MASTER_DATA~PUBLISH_USER_DATA, have their exporting/changing parameter as a table. So you can loop over the records and delete any where the email ID is blank; since this is our primary key, leaving it blank would create issues in Ariba.
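A minimal sketch, assuming the table parameter is named CT_USER and carries the appended field UNIQUENAME1:

METHOD /arba/if_exp_master_data~publish_user_data.
* Drop records without an email address: UniqueName is the primary key
* in Ariba, and blank keys would create issues during the import
  DELETE ct_user WHERE uniquename1 IS INITIAL.
ENDMETHOD.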

Now, when the Ariba program runs and the file is extracted as a CSV, it will have an additional column, UniqueName, containing the user's email ID.
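Illustratively, the extracted user file then starts something like this (columns and values are made up; the actual layout depends on your field map):

UniqueName,UserId,Name
jdoe@example.com,JDOE,John Doe
asmith@example.com,ASMITH,Anna Smith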


Incremental load consideration:

Note that not all data can be set up as an incremental load. In the P2O/P2P scenario, the incremental load consists of the master data below.

[Screenshot: master data available for incremental load in P2P/P2O]

In Sourcing and Contracts, the master data below can be scheduled to run as incremental loads:

[Screenshot: master data available for incremental load in Sourcing and Contracts]

If the data you wish to send to Ariba is not included in the incremental list above (such as product category, purchasing group, company code, or purchasing organization), you need to either update it manually or send an initial load each time it changes.

How would the system know what changed since the last run, and how does it pick the delta load?

The program handles the incremental load based on the table /ARBA/INCR_DTTIM. Every time an incremental load runs, the table is updated with the date and time of that load. The next time, the system automatically filters the data based on the last-updated timestamp from the table for users, suppliers, or whichever master data you run incrementally. After each run the system updates the date information again, and the next incremental load references it to determine the delta since the last run.
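Conceptually, the delta selection works like the sketch below. The field names on /ARBA/INCR_DTTIM, the OBJECT key, and the staging table are illustrative assumptions, not the actual program logic; check the table in SE11 for the real structure:

DATA: lv_last_date TYPE d,
      lv_last_time TYPE t,
      lt_delta     TYPE STANDARD TABLE OF zuser_staging.   "hypothetical table

* Read the timestamp of the last incremental run (field names assumed)
SELECT SINGLE erdat ertim
  FROM /arba/incr_dttim
  INTO (lv_last_date, lv_last_time)
  WHERE object = 'USER'.

* Pick up only records changed since the last run (hypothetical change-date field)
SELECT * FROM zuser_staging
  INTO TABLE lt_delta
  WHERE changed_on >= lv_last_date.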


To ensure the data is sent to the right destination, the table /ARBA/AUTH_PARAM should be populated. Make sure you have maintained the necessary parameters there so the ECC system points to the correct Ariba realm.
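As a quick sanity check before a load, you can verify that the table is not empty (this only checks that entries exist; the actual parameter fields should be inspected in SE16):

* Verify that connection parameters have been maintained at all
SELECT COUNT(*) FROM /arba/auth_param.
IF sy-dbcnt = 0.
  MESSAGE 'No entries maintained in /ARBA/AUTH_PARAM' TYPE 'I'.
ENDIF.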


Lessons Learnt: Master data extraction program

  1. Get confirmation of the mandatory fields in Ariba, which may not be required in ECC; always check that the extracted file from ECC was successfully updated in Ariba with the same number of records.
  2. Data can fail in integration, for example email IDs containing special characters, which must be handled with custom logic such as creating email aliases.
  3. If the number of error records exceeds a threshold (around 100), the entire file fails to load in Ariba. The data needs to be corrected before you send the file again, as Ariba stops reading the whole file once the failure threshold is reached.
  4. If the extracted data load is enormous, it may cause your SAP screen to freeze, so it is better to run it as a batch job.
  5. You can upload data directly to the connected Ariba system or store it in a specific location on your local system using the ‘Integration Toolkit’. For testing, I used the ‘Integration Toolkit’.
  6. You can also automate the transfer using the Integration Toolkit, where PI/PO picks the file up and sends it to Ariba. If so, send the file to another location for testing.
  7. Not all data is available for incremental load, so if you create new product categories, you must manually update these in Ariba. Check which delta loads are available, as seen in the last column of my table above.
  8. Before running the batch job, check user authorization for local directory access, folder creation, and file storage permissions.
  9. If the jobs are run for incremental loads, ensure the date/time stamp is correctly updated by the system in the table /ARBA/INCR_DTTIM. This is how the system knows which records to pick up for delta extraction.
  10. When you apply filters to a load, such as the user load, you may use the methods below:
    1. /ARBA/IF_EXP_MASTER_DATA~PUBLISH_USER_DATA
    2. /ARBA/IF_EXP_MASTER_DATA~MODIFY_USER_DATA
    3. /ARBA/IF_EXP_MASTER_DATA~PUBLISH_USER_GROUP
  11. Supplier data extraction methods are separate for initial and delta loads:
    1. /ARBA/IF_EXP_MASTER_DATA~PUBLISH_VENDOR
    2. /ARBA/IF_EXP_MASTER_DATA~PUBLISH_SUPPLIER_INCREMENT

Master program: modifying the user delete file and renaming it

For the incremental user load, SAP does not provide any BADI method to modify the deletion files, such as the UserDelete and UserGroupDelete files.

These files, the User Delete file and the UserGroup Delete file, are sent to Ariba to deactivate users who have left the organisation. However, there is no BADI method for formatting them, so I came up with the workaround below: overwrite the extracted user deletion and group deletion files with the required data.

Note that in the deletion file only one column is populated: the primary key, which tells the Ariba system which user to deactivate. Since I had changed the primary key to email, it was not populated in the deletion files being sent to Ariba, so the system would not know which user to deactivate. So I did the following.

Rename the file using the code below, where gv_fname is the path of the file the program generates. I am overwriting the extracted file with the additional information I need to pass to the Ariba system: in my master data program I modify the gt_user_del table to add the email, then write the file back under the same name used by Ariba, so my file replaces the original file created by the program.

*** Modify the delete file entry: populate the email as the key.
*** (This runs inside a loop over gt_user_del; ls_smtp was filled by an
*** email lookup for the current user, and lv_index is the loop index.)
IF sy-subrc EQ 0.                      "email lookup succeeded
  ls_user_del-uniquename1 = ls_smtp-e_mail.
  MODIFY gt_user_del FROM ls_user_del INDEX lv_index.
ENDIF.

***** Overwrite the file under the name Ariba expects
CONCATENATE gv_fname 'GroupConsolidated_Delete.csv' INTO lv_file_name.

CALL FUNCTION '/ARBA/DATA_CONVERT_WRITE_FILE'
  EXPORTING
    i_filename        = lv_file_name
    i_fileformat      = 'CSV'
    i_field_seperator = ','
    i_tabname         = '/ARBA/USER'
    i_encoding        = 'UTF-8'
    i_solution        = 'AR'
  TABLES
    i_tab_sender      = gt_user_del
  EXCEPTIONS
    open_failed       = 1
    close_failed      = 2
    write_failed      = 4
    conversion_failed = 5
    OTHERS            = 6.
IF sy-subrc <> 0.                                           "#EC NEEDED
* no special handling; the pseudo comment suppresses the empty-branch check
ENDIF.

Test cases for Master data integration should include:

  1. Maintain a checklist to ensure all master data gets saved in Ariba.
  2. In addition to validating each master data object, compare record counts; for example, all 200 users sent from SAP should be loaded in Ariba.
  3. Create documents like contracts and POs to ensure the user master data is correctly updated and that you can create the documents successfully with multiple users.
  4. If there are failures loading the data into Ariba, verify that the file format being sent to Ariba is read correctly.
  5. Check that the system reads the incremental load correctly, using a small set of data for suppliers and users.
  6. If there are issues, check that the incremental files have the right format, and whether the changes you applied to the initial load files are also needed for the incremental push.
  7. Ensure blocked suppliers are updated correctly in the Ariba system and no longer show up.
  8. Ensure users who have left the organisation are removed from the Ariba system.
  9. Check that integration to and from ECC and SRM works correctly, as applicable.
  10. For currency conversion, ensure the same rate applies as shown in the ECC system.
  11. For UoM, ensure the Ariba values translate to the ECC ANSI conversion units.
  12. Ensure there is a documented process for all manual activities, like updating delta loads manually if not set up as a batch job, e.g. for new product categories, cost centers, and WBS elements.
  13. If you use SSO in your organization, ensure it works for users when they log in. Email notifications are sent from Ariba; ensure SSO is enabled for the users and that the notifications go to the correct recipients.
  14. Ensure the approval rules work correctly for creation as well as changes.

Thanks for reading. I hope this document is helpful to those looking for guidance on using the master data extraction program.

For more information and services: https://peritossolutions.com/services/sap-consulting-services
