Showing the Changelog for Comments in a Query

Motivation

Comments are very often required in planning applications. SAP offers two options to store comments:  

  • Using a document store  
  • Using an attribute as a key figure  

A short introduction to each option can be found, for example, in the following blogs: 

https://www.nextlytics.com/blog/integrated-comment-functions-in-sap-analytics-cloud-and-bw4/hana 

https://evosight.com/comments-in-bw-ip-using-characteristic-as-key-figure 

We often use the attribute-as-key-figure technique for the following reasons:  

  • Comments are directly visible in the query  
  • Reports can be built on comments   
  • With a document store, comments are only visible as Excel-like cell comments, and you have to click on each cell to read the comment  
  • It is the older and more stable solution  
  • The limit of 250 characters is in most cases not critical  

Recently, we had the requirement to show the history of comments in a report. As there is no out-of-the-box solution for this, we want to share our approach with you.  

Activating the audit trail in a composite provider makes it possible to show the history of changes for key figures without any programming. 

Unfortunately, this does not work for comments: the audit trail shows no request number for comments, so no history of comments is displayed. 

Nevertheless, the history of comments is stored in the changelog table. This is the starting point of our solution. We will show you two options to provide this information from the changelog within a query:  

  • via HANA calculation view  
  • via ABAP CDS view  

Both options end with an identical result. Which one you use depends on your skills and/or on restrictions on using HANA artefacts.  

Of course, reading data from the changelog table in a report is not officially supported by SAP, since SAP may change the name or structure of these tables – but in our experience it works quite reliably.   

Logging changes via the BAdI BADI_RSPLS_LOGGING_ON_SAVE would be one more option, but as it requires more development effort, we do not investigate it here. For more details see: 

https://help.sap.com/docs/ABAP_PLATFORM_NEW/12af290e80264faab3b1e6ac283950a2/4cae4e3fdd6e3b9ee10000000a42189b.html?locale=en-US&version=202310.000 

Why do audit trails not work? 

First, we want to make sure we did not miss an option and find out why audit trails are not available for comments.  

When you activate the audit trail in a composite provider, the system simply makes the request information of the data available in the report. As soon as you activate planning data, the request information gets lost, and the audit trail shows no information. Technically, the data is moved from table 1 to table 2 of the aDSO (e.g., for aDSO ### the tables are /BIC/A###1 and /BIC/A###2).   

(For the tables of an aDSO and the different aDSO types, see e.g. https://learning.sap.com/learning-journeys/upgrading-your-sap-bw-skills-to-sap-bw-4hana/working-with-the-datastore-object-advanced-_b1fbbc9a-e6ad-44bf-bfce-d5b2defa3ccb) 

But only for an aDSO of type data mart is reporting based on a join of the inbound table (table 1) and table 2. For all other aDSO types, reporting is always based solely on table 2 with the activated data!  

As the data mart type requires all attributes to be key fields, attributes as key figures are not possible on this type of aDSO. That is why audit trails are not available for comments.  

But if you activate the changelog for the standard DataStore object, the history, including the history of comments, is available in table 3 of the aDSO (/BIC/A###3)! 

For more details see https://community.sap.com/t5/financial-management-blogs-by-sap/bpc-embedded-model-auditing-feature-in-adso-and-infocube/ba-p/13357497 

Demo scenario 

In our demo solution we have key figures and comments in the same aDSO. The recommendation would be to split them: store the key figures in an aDSO of type data mart with audit information, and the comments in a separate aDSO.  

Our aDSO is a “Standard Data Store Object” with change log and planning enabled. 

The aDSO contains the following fields and one attribute as a key figure: 

When you go to the changelog table, you see the historic comments. User, date, and time of the change are hidden in the request number (REQTSN). 

Solution via HANA Calculation View 

In this solution we use a HANA calculation view that can be consumed in a composite provider.  

First, we create a calculation view to make the changelog table information available, and we also join table RSPMREQUEST to get the user and time of the request in a handy way.  

In addition, we filter on storage CL (changelog) in the request table (RSPMREQUEST) to get only one line per request number:
 

Details of the left outer join: 
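To illustrate the logic, the following Python sketch (using the hdbcli driver) runs roughly the same join and filter directly in SQL. The schema name, connection details, and the RSPMREQUEST field names (REQUEST_TSN, USERNAME, STORAGE) are assumptions based on common BW/4HANA naming – please verify them in your system: 

# Minimal sketch only: reproduces the calculation view's join and filter in SQL.
# Schema "SAPHANADB", the connection details, and the RSPMREQUEST field names
# REQUEST_TSN / USERNAME / STORAGE are assumptions -- verify them in your system.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015, user="USER", password="***")
cur = conn.cursor()
cur.execute("""
    SELECT cl.*, rq."USERNAME", rq."REQUEST_TSN"
      FROM "SAPHANADB"."/BIC/AZUM_AD023" AS cl     -- changelog table (table 3) of the demo aDSO
      LEFT OUTER JOIN (
            SELECT "REQUEST_TSN", "USERNAME"
              FROM "SAPHANADB"."RSPMREQUEST"
             WHERE "STORAGE" = 'CL'                -- filter before the join: one line per request
           ) AS rq
        ON cl."REQTSN" = rq."REQUEST_TSN"
""")
for row in cur.fetchmany(10):
    print(row)
cur.close()
conn.close()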

Now we create a composite provider with the original aDSO and our calculation view with the changelog. 

We map the fields of the changelog to the same fields as the original aDSO, plus the additional changelog information. Attention: you now get key figures and comments several times in the report, as we join a current reporting view with the historic view. But this helps to check the changelog against the current values. 

A query on top of this composite provider shows the history of the changes. The total of all changes for the key figures is identical to the current value in the InfoProvider (last line). 

This query can now be adapted to your needs, e.g., excluding record mode X if you want to see only the new comment and avoid too many lines. 

Using ABAP CDS Views

In case you do not have the possibility to use HANA calculation views in your system, you can achieve the same result with ABAP CDS views! 

For general information on how to use a CDS view in BW, see the following SAP Notes and blog: 

https://me.sap.com/notes/2198480 

https://me.sap.com/notes/2673267 

https://community.sap.com/t5/technology-blogs-by-members/sap-s-4hana-amp-sap-bw-data-integration-via-odp-abap-cds-views/ba-p/13224473 

We will use the same tables (changelog table /bic/azum_ad023 and the request information table RSPMREQUEST) and join them within the ABAP CDS view. The same filter is applied. 

Make sure to add the following annotations to allow the Open ODS view to consume the data: 

@Analytics.dataCategory: #CUBE 

@Analytics.dataExtraction.enabled: true 

To use the CDS view, we first need to create a DataSource based on the CDS view and an Open ODS view based on this DataSource, which is then consumed in a composite provider: 

DataSource: 

And the Open ODS view: 

Now we use the Open ODS view together with our original aDSO in a composite provider and map all fields again: 

The data flow now has a few more layers than the solution with a calculation view, but we do not need a HANA user! 

 

In the end, we can create a query with an outcome identical to the solution based on the HANA calculation view! 

Conclusion

We have shown you two ways to make the history of comments available in a query. Both are quite simple and provide the requested information. Which one you use is up to you. 

Data Products Setup

I’ll start with Data Products setup. If you’re new to the concept, this recent video is a great starting point, but here’s a short summary. A data product is a well-described, easily discoverable, and consumable collection of data sets.

Creating a Data Product in Datasphere

Note that in this article I create Data Products in the Data Sharing Cockpit in Datasphere. This functionality is expected to move into the Data Product Studio, but that had not taken place at the time of writing.

Before creating a Data Product in Datasphere, I need to set up a Data Provider profile, collecting descriptive metadata like contact and address details, industry, and regional coverage, and, importantly, defining the Data Product Visibility. Enabling Formations allows me to share the Data Product with systems across my BDC Formation – Databricks, in this case.

With the Data Provider set up, I can go ahead and create a Data Product. As with the Data Provider, I’ll need to add metadata about the product and define its artifacts – the datasets it contains. Only datasets from a space of SAP HANA Data Lake Files type can be selected. Since this Data Product is visible across the Formation, it is available free of charge.

For this demo, the artifact is a local table containing ten years of Ice Cream sales data. Since this is a File type space, importing a CSV file directly to create a local table isn’t an option (see documentation).

I used a Replication Flow to perform an initial load from a BW aDSO table into a local table.

Once the Data Product is created and listed, it becomes available in the Catalog & Marketplace, from where it can be shared with Databricks by selecting the appropriate connection details.

Jump into Databricks

To use the shared object in Databricks, I need to mount it to the Catalog – either by creating a new Catalog or using an existing one.

Databricks appends a version number to the end of the schema – ‘:v1’ – to maintain versioning in case of any future changes to the Data Product.

Once the share is mounted, the schema is created automatically, and the Sales actual data table becomes available within it. From there, I can access the shared table directly in a Notebook.
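In a notebook this is a one-liner. The catalog, schema, and table names below are illustrative (the spark session and display() helper are predefined in Databricks notebooks):

# Sketch: read the mounted share in a notebook (names are illustrative).
# The schema carries the ':v1' version suffix, so it needs backticks.
df = spark.read.table("`bdc_share`.`ice_cream_sales:v1`.`sales_actuals`")
display(df.limit(10))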

Creating a Data Product in Databricks

To create a Data Product in Databricks, I first need to create a Share – which I can either do via the Delta Sharing settings in the Catalog:

Or directly from the table that is going to become part of the Share: 

Since a single Share can contain multiple tables, I have the option to either add the table to an existing Share, or create a new one:
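The same steps can also be done with Delta Sharing SQL from a notebook. A minimal sketch with illustrative share and table names:

# Sketch: SQL equivalent of the UI steps above (share and table names are
# illustrative). CREATE SHARE / ALTER SHARE are standard Delta Sharing SQL.
spark.sql("CREATE SHARE IF NOT EXISTS sales_forecast_share")
spark.sql("ALTER SHARE sales_forecast_share ADD TABLE main.forecast.sales_forecast")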

To publish the Share as a Data Product, I run a Python script where I define the target table for the forecast and describe the Share in CSN notation, setting the Primary Keys. Primary Keys are required for installing Data Products in Datasphere.

Jump back into Datasphere

Once the Databricks Data Product is available in Datasphere, I install it into a Space configured as a HANA Database space – since my intention is to build a view on top of the table and use it for planning in SAC.

There are two installation options: as a Remote table for live data access, or as a Replication Flow, in which case the data is physically copied into the object store in Datasphere.

Since I want live access, I install it as a Remote Table:

and build a Graphical view of type Fact on top:

Forecast calculation

With my Data Products set up and the Sales actuals data available in Databricks, I create a Notebook to calculate the Sales Forecast. 

The approach combines Sales and Weather data to train a Linear Regression model. I import the Weather data* (https://zenodo.org/records/4770937) from an external server directly into Databricks, select the relevant features from the weather dataset, and combine them with the Sales actuals data: 

* Klein Tank, A.M.G. and Coauthors, 2002: Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment. Int. J. of Climatol., 22, 1441–1453. Data and metadata available at http://www.ecad.eu

Using the “sklearn” library, I build and train a Linear regression model:

Once trained, the model predicts the Sales forecast for Rome in June 2026 based on the weather forecast, and I save the results to my Catalog table:
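The core of the notebook boils down to a few lines. The following sketch is illustrative only – the table names, feature columns (mean temperature, precipitation), and the June 2026 input values are assumptions, not the exact demo code:

import pandas as pd
from sklearn.linear_model import LinearRegression

# Combined sales actuals and weather features, prepared earlier in the notebook
# (table and column names are illustrative)
data = spark.read.table("main.demo.sales_weather").toPandas()
X = data[["mean_temp", "precipitation"]]
y = data["sales_qty"]

# Build and train the Linear Regression model
model = LinearRegression().fit(X, y)

# Predict June 2026 sales for Rome from the weather forecast (assumed values)
june_2026 = pd.DataFrame({"mean_temp": [27.5], "precipitation": [12.0]})
forecast = model.predict(june_2026)

# Save the result to the Catalog table that will be shared back to Datasphere
result = pd.DataFrame({"city": ["Rome"], "month": ["2026-06"], "sales_qty": forecast})
spark.createDataFrame(result).write.mode("append").saveAsTable("main.demo.sales_forecast")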

Seamless planning data model

The Seamless Planning concept is built around physically storing planning data and public dimensions directly in Datasphere, keeping them alongside the actuals data.

Since the QRC4 2025 SAC release, it has also been possible to use live versions and bring reference data into planning models without replication.

In this scenario, I build a seamless planning model on top of the Graphical view I created over the Remote table. This lets me use the forecast generated in Databricks as a reference for the final SAC Forecast version.

 

The model setup follows these steps:

Create a new model:

Start with data:

Select Datasphere as the data storage:

From there, I define the model structure and can review the data in the preview.

For a deeper dive into Seamless Planning, I recommend this biX blog.

Process Flow automation

Multi-action triggers Datasphere task chain

The final step is automating the entire forecast generation by using SAC Multi-actions and a Task Chain in Datasphere – so that my user can trigger the calculation with a single button click from an SAC Story. 


Triggering Task Chains from Multi-actions is a recently released feature. This blog post walks through how to set it up. 

For details on how to trigger a Databricks Notebook from Datasphere, I recommend referring to this blog.
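One common way to start a Databricks Notebook from outside the workspace is the Jobs REST API. As an illustration only (workspace host, token, and job ID are placeholders, and your setup may use a different mechanism):

# Sketch: start a Databricks notebook job via the Jobs 2.1 REST API.
import requests

HOST = "https://<workspace-host>"   # placeholder
TOKEN = "<personal-access-token>"   # placeholder
JOB_ID = 123                        # placeholder: the job wrapping the notebook

resp = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": JOB_ID},
)
resp.raise_for_status()
print("Started run:", resp.json()["run_id"])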

With everything in place, I create a Story, add my Seamless planning Model, and attach the Multi-action:

Running the Multi-action triggers the Task Chain, which in turn triggers the Databricks Notebook.

I can monitor the execution details in Datasphere:

and in Databricks:

Once the calculation completes, the updated forecast appears in the Story:

The end-to-end calculation took 2 minutes 45 seconds in total. The Task Chain in Datasphere is triggered almost instantly by the Multi-action; the Databricks Notebook execution itself took 1 minute 29 seconds, with the remaining time (roughly 1 minute 16 seconds) spent on Serverless Cluster startup.   

 

From here, I can copy the calculated forecast into a new private version:

adjust the numbers as needed, and publish it as a new public version to Datasphere:

Conclusion

With SAP Business Data Cloud, it is possible to build a forecasting workflow that feels seamless to the end user — even though it spans multiple systems under the hood.

Companies using BW as the main Data Warehouse and Databricks for ML calculations or Data Science tasks can benefit from using the platform, as the data no longer needs to be physically copied out of BW.

What this scenario demonstrates is that once wrapped as a Data Product, BW sales data can be shared with Databricks via the Delta Share protocol. Databricks, in turn, can then create its own Data Products on top of the calculation results and share them back with Datasphere as a Remote Table.

A Seamless Planning model in SAC sits on top of that Remote Table, giving planners live access to the generated forecast. A single Multi-action in an SAC Story ties it all together, triggering a Datasphere Task Chain that kicks off the Databricks Notebook — completing the full cycle in under three minutes.

As SAP Business Data Cloud continues to mature, scenarios like this one are becoming achievable – leaving the complexity in the architecture and not in the workflow.

Contact

Ilya Kirzner
Consultant
biX Consulting