Synchronize BW authorizations with SAP Analytics Cloud

January 2026

Introduction

Data from various sources is often used in reporting. This means that some of the data may be stored in SAC, while other data is accessed in BW via a live connection. 

If this data is restricted by an authorization-relevant characteristic, user authorizations must be synchronized across both systems. With many users and/or characteristics with many attributes, this quickly becomes impossible to do reliably by hand. 

In this blog, we describe one approach to solve this challenge in a clever and efficient way. 

Solution Approach 

  • The authorizations are originally maintained in BW in a table, e.g., a planning-enabled aDSO. 
  • If the BW usernames differ from the SAC usernames, the mapping between them must also be maintained in BW and made available in the query. 
  • In BW, the restriction is implemented using an exit variable that reads the values from this table. 
  • The authorizations are made available in SAC through a query and then transferred to the authorization columns of the characteristic using a script. 
  • The transfer script only needs to be started manually when required; because the transfer itself is automated, manual transfer errors can no longer occur. 

 

The individual points are described in detail below using a simple example. 

Authorization Maintenance in BW and Mapping BW Users to SAC Users 

In our example, we assume that authorizations for company codes are maintained individually. We have solved this using a planning-enabled aDSO with the key fields “BW User” and “Company” as well as the planning-relevant attribute “Flag Valid.” We use the flag to revoke authorizations, since deleting rows is not straightforward. 

To enable mapping to a differing SAC username, we simply added the attribute “SAC User” to the characteristic “BW User.” This attribute must then be maintained once for each user. 

Figure 1: Example of user authorization maintenance

 

If authorization is required for hierarchy nodes or time-dependent maintenance, the scenario must be expanded accordingly, but this does not change the basic logic.  

Deriving BW Authorizations from the Table 

To set up a BW authorization based on a table, see, for example, this SAP Learning course: 

https://learning.sap.com/courses/implementing-authorizations-in-sap-bw-4hana/using-variables-in-authorizations_b506a634-82b9-4985-97e3-b7ce4ec49648 

First, an exit variable must be created that provides the characteristic values for which the user is authorized. Note that during the authorization check, the coding is executed in processing step 0 (I_STEP = 0). The coding for our example can be seen in the next screenshot. 

Figure 2: Coding of the exit variable for BW authorization 

Now all that remains is to create a BW authorization object and, using the exit variable, assign the authorization to the corresponding characteristic for the desired InfoProviders (see the next figure). 

Figure 3: BW Authorization object for restriction using the authorization variable ZCOMPANY4AUTH 

This means that only the company codes for which the user is authorized are displayed in the report. 

An example report where the authorization for the test user is not applied: 

Figure 4: Example BW report without restriction 

And the same report for a user to whom the authorization applies: 

Figure 5: Same report for a user with restriction 

In BW, the user is restricted to two company codes. 

Synchronization with the SAP Analytics Cloud 

In SAC, the company code is created as a public dimension in accordance with BW: 

Figure 6: Public, authorization-relevant SAC dimension

 

In addition, data access authorization is assigned. For the following example, we will consider only write authorization, but the same approach also applies to read authorization or a combination of both. 

To provide the authorization information in SAC, we need a query that displays all permitted accesses. The authorization-relevant characteristic (in this case, ZCOMPANY) and all users who have access to it must be listed; ultimately, the SAC username is what matters. 

In our example, this appears as follows in SAC: 

Figure 7: BW report in SAC with all maintained authorizations

 

This drilldown is necessary because the script reads the assignment of company code and user (via the attribute SAC User) from the result set in order to transfer the authorizations to the SAC dimension. 

The actual synchronization takes place in the script, which is triggered by a simple button. The button calls the method "f_setAuthoriation", which is defined as follows: 

Figure 8: Definition of function to transfer the authorizations 

Within this method, several variables are first defined: 

// Get result set from table data source
var tableResultSet = p_Table.getDataSource().getResultSet();

// Reference planning model
var planningModel = PlanningModel_1;

// Get all members of the given dimension
var planningModelMembers = planningModel.getMembers(p_Dimension);

// Array for members to be updated
var members = ArrayUtils.create(Type.PlanningModelMember);

// Array to store BW member-user combinations (DIM$USER)
var dimensionMembersBW = ArrayUtils.create(Type.string);

// Temporary variables
var dimensionMember = "";
var dimensionUser = "";
var dimensionKey = "";

// Array to store unique dimension members from BW
var dimensionArrayBW = ArrayUtils.create(Type.string);

  

After that, two loops are executed. The first loop iterates through the result set and creates an artificial key combining the company code and the user. In addition, an array is populated with the individual company codes. 

// Extract dimension members and users from BW result set
for (var j = 0; j < tableResultSet.length; j++) {
    // Get dimension member ID
    dimensionMember = tableResultSet[j][p_Dimension].id;
    // Get the SAC user mapped to the BW user
    dimensionUser = tableResultSet[j][p_User].properties["ZBW_USER.ZSAC_USER.DISPLAY_KEY"];
    // Build the DIM$USER key if a mapped user exists
    if (dimensionUser !== "#") {
        dimensionKey = dimensionMember + "$" + dimensionUser;
        dimensionMembersBW.push(dimensionKey);
    }
    // Store unique dimension members
    if (dimensionArrayBW.includes(dimensionMember) === false) {
        dimensionArrayBW.push(dimensionMember);
    }
}

 

The array "dimensionMembersBW" now contains the combined keys from BW that should receive write permission. 

The second loop runs through the planning model members of the company code dimension in SAC. Here, it must be checked whether the company code from BW matches the one in SAC. In addition, existing teams should not be changed: in this variant, all users are removed and replaced with those from BW, while all teams remain unchanged. 

// Loop through planning model members
for (var n = 0; n < planningModelMembers.length; n++) {
    // Process only members found in the BW result set
    if (dimensionArrayBW.includes(planningModelMembers[n].id) === true) {
        // Remove existing user writers but keep team writers:
        // collect the team ids first, then rebuild the writers list
        var teamIds = ArrayUtils.create(Type.string);
        for (var i = 0; i < planningModelMembers[n].writers.length; i++) {
            if (planningModelMembers[n].writers[i].type === UserType.Team) {
                teamIds.push(planningModelMembers[n].writers[i].id);
            }
        }
        while (planningModelMembers[n].writers.length > 0) {
            planningModelMembers[n].writers.pop();
        }
        for (var t = 0; t < teamIds.length; t++) {
            planningModelMembers[n].writers.push({
                id: teamIds[t],
                type: UserType.Team
            });
        }
        // Add users as writers based on the BW mapping
        for (var k = 0; k < dimensionMembersBW.length; k++) {
            if (planningModelMembers[n].id === dimensionMembersBW[k].split("$")[0]) {
                planningModelMembers[n].writers.push({
                    id: dimensionMembersBW[k].split("$")[1],
                    type: UserType.User
                });
            }
        }
        // Collect updated member
        members.push(planningModelMembers[n]);
    }
}

  

The adjusted company codes are collected in "members." To apply the changed authorizations, all that remains is to update the planning model. 

// Update dimension members with new authorizations
planningModel.updateMembers(p_Dimension, members);

 

After the function has run, the company code dimension looks as follows: 

Figure 9: SAC dimension after transfer of the BW authorizations 

Both users are authorized for the company codes that correspond to the values returned by the query.  

This makes it possible to synchronize authorizations across both systems; however, there are a few pitfalls. If the BW usernames do not correspond one-to-one with the SAC usernames, the SAC username must be added as an attribute and maintained accordingly. If this attribute is maintained incorrectly in BW, the incorrect user will still be transferred. This results in an error in the dimension itself, with the following error message: 

Figure 10: Error message when a transferred SAC user does not exist in SAC 

SAC suggests removing the incorrect users. The correct users must then be manually maintained.  

The script must always be executed when changes to permissions need to be transferred. In the version presented, it is up to the user to decide when the changes should be made. 

Runtime 

Since this function is located in a script behind a button, the SAC page must not be closed until the script has finished executing! 

The runtime depends on the number of values for the authorization-relevant characteristic. With more than 1,000 entries, which can occur, for example, with an authorization on cost centers, runtimes of several minutes must be expected; the runtime increases linearly with the number of entries. 

Since the result set in the script reflects the current selection, the amount of data can be reduced with a filter. This allows the administrator to run the synchronization specifically for the combinations that have just been changed, with correspondingly short runtimes. 

Figure 11: Example of the user list restricted to a single user

Conclusion

The effort involved in authorization synchronization is relatively low. With little coding, the authorization information can be transferred from BW to SAC. If users have the same ID in both systems, the attribute on the BW object is not even required. However, there is currently no good way to check whether the transferred users and company codes actually exist in SAC. If a company code exists in BW but not in SAC, the corresponding authorization is simply not taken into account. It must therefore be ensured that the master data of the authorization-relevant characteristic is identical in both systems (ideally by loading it from BW) and that the users also exist in SAC. 

Data Products Setup

I'll start with the Data Products setup. If you're new to the concept, this recent video is a great starting point, but here's a short summary: a data product is a well-described, easily discoverable, and consumable collection of data sets.

Creating a Data Product in Datasphere

Note that in this article I create Data Products in the Data Sharing Cockpit in Datasphere. This functionality is expected to move into the Data Product Studio, but that had not taken place at the time of writing.

Before creating a Data Product in Datasphere, I need to set up a Data Provider profile, which collects descriptive metadata such as contact and address details, industry, and regional coverage, and, importantly, defines the Data Product Visibility. Enabling Formations allows me to share the Data Product with systems across my BDC Formation – Databricks, in this case.

With the Data Provider set up, I can go ahead and create a Data Product. As with the Data Provider, I need to add metadata about the product and define its artifacts – the datasets it contains. Only datasets from a space of type SAP HANA Data Lake Files can be selected. Since this Data Product is visible across the Formation, it is available free of charge.

For this demo, the artifact is a local table containing ten years of Ice Cream sales data. Since this is a File type space, importing a CSV file directly to create a local table isn’t an option (see documentation).

I used a Replication Flow to perform an initial load from a BW aDSO table into a local table.

Once the Data Product is created and listed, it becomes available in the Catalog & Marketplace, from where it can be shared with Databricks by selecting the appropriate connection details.

Jump into Databricks

To use the shared object in Databricks, I need to mount it to the Catalog – either by creating a new Catalog or using an existing one.

Databricks appends a version number to the schema name – ‘:v1’ – to maintain versioning in case of future changes to the Data Product.

Once the share is mounted, the schema is created automatically, and the Sales actual data table becomes available within it. From there, I can access the shared table directly in a Notebook.
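Access from a notebook might look like the following minimal sketch; the catalog, schema, and table names are illustrative assumptions, and spark is the session predefined in Databricks notebooks. Note the ‘:v1’ version suffix on the schema name.

# A minimal sketch of reading the mounted share in a Databricks notebook.
# Catalog, schema, and table names are assumptions for illustration; the
# schema name carries the ":v1" version suffix added by Databricks.
sales_df = spark.table("bdc_catalog.`sales_share:v1`.sales_actuals")
display(sales_df)

# The same table can also be queried with SQL:
spark.sql(
    "SELECT * FROM bdc_catalog.`sales_share:v1`.sales_actuals LIMIT 10"
).show()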

Creating a Data Product in Databricks

To create a Data Product in Databricks, I first need to create a Share – which I can either do via the Delta Sharing settings in the Catalog:

Or directly from the table that is going to become part of the Share:

Since a single Share can contain multiple tables, I have the option to either add the table to an existing Share, or create a new one:
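For reference, the same steps can also be performed with Databricks SQL from a notebook; the share and table names below are illustrative assumptions, not the ones used in this demo.

# Sketch of the SQL equivalent of the UI steps above; names are assumptions.
spark.sql("CREATE SHARE IF NOT EXISTS sales_forecast_share")
spark.sql(
    "ALTER SHARE sales_forecast_share "
    "ADD TABLE bdc_catalog.forecast.sales_forecast"
)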

To publish the Share as a Data Product, I run a Python script in which I define the target table for the forecast and describe the Share in CSN notation, setting the primary keys. Primary keys are required for installing Data Products in Datasphere.
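A CSN description of the shared table might look like the following sketch. The entity and element names are assumptions for illustration, and the actual publishing call depends on the BDC environment, so it is only indicated as a comment.

import json

# Illustrative CSN (Core Schema Notation) description of the shared table.
# "key": True marks the primary keys that Datasphere requires for
# installing the Data Product; all names here are assumptions.
csn = {
    "definitions": {
        "sales_forecast": {
            "kind": "entity",
            "elements": {
                "CITY": {"type": "cds.String", "length": 40, "key": True},
                "MONTH": {"type": "cds.String", "length": 7, "key": True},
                "FORECAST_QUANTITY": {"type": "cds.Decimal"},
            },
        }
    }
}
print(json.dumps(csn, indent=2))
# The CSN document is then passed to the step that publishes the Share as
# a Data Product (the exact call depends on the BDC setup and is omitted).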

Jump back into Datasphere

Once the Databricks Data Product is available in Datasphere, I install it into a Space configured as a HANA Database space – since my intention is to build a view on top of the table and use it for planning in SAC.

There are two installation options: as a Remote table for live data access, or as a Replication Flow, in which case the data is physically copied into the object store in Datasphere.

Since I want live access, I install it as a Remote Table:

and build a Graphical view of type Fact on top:

Forecast calculation

With my Data Products set up and the Sales actual data available in Databricks, I create a Notebook to calculate the Sales Forecast.

The approach combines Sales and Weather data to train a Linear Regression model. I import the Weather data* (https://zenodo.org/records/4770937) from an external server directly into Databricks, select the relevant features from the weather dataset, and combine them with the Sales actual data.

* Klein Tank, A.M.G. and Coauthors, 2002: Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment. Int. J. of Climatol., 22, 1441–1453. Data and metadata available at http://www.ecad.eu

Using the “sklearn” library, I build and train a Linear regression model:
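A minimal sketch of the training step, following the column names assumed in the preparation sketch above:

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Features and target; the column names are assumptions.
X = training_data[["temp_mean", "precipitation"]]
y = training_data["sales_quantity"]

# Hold out part of the data to get a rough quality measure.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))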

Once trained, the model predicts the Sales forecast for Rome in June 2026 based on the weather forecast, and I save the results to my Catalog table:
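The prediction and save steps might look like this sketch; the weather forecast values and the target table name are illustrative assumptions:

import pandas as pd

# Predict June 2026 sales for Rome from the (assumed) weather forecast.
june_weather = pd.DataFrame({"temp_mean": [26.0], "precipitation": [15.0]})
prediction = model.predict(june_weather)

# Persist the result to the Catalog table that feeds the Share.
forecast = pd.DataFrame(
    {"CITY": ["Rome"], "MONTH": ["2026-06"], "FORECAST_QUANTITY": prediction}
)
spark.createDataFrame(forecast).write.mode("overwrite").saveAsTable(
    "bdc_catalog.forecast.sales_forecast"
)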

Seamless planning data model

The Seamless Planning concept is built around physically storing planning data and public dimensions directly in Datasphere, keeping them alongside the actual data.

Since the QRC4 2025 SAC release, it has also been possible to use live versions and bring reference data into planning models without replication.

In this scenario, I build a seamless planning model on top of the Graphical view I created over the Remote table. This lets me use the forecast generated in Databricks as a reference for the final SAC Forecast version.

 

The model setup follows these steps:

Create a new model:

Start with data:

Select Datasphere as the data storage:

From there, I define the model structure and can review the data in the preview.

For a deeper dive into Seamless Planning, I recommend this biX blog.

Process Flow automation

Multi-action triggers Datasphere task chain

The final step is automating the entire forecast generation using SAC Multi-actions and a Task Chain in Datasphere, so that my user can trigger the calculation with a single button click from an SAC Story.


Triggering Task Chains from Multi-actions is a recently released feature. This blog post walks through how to set it up.

For details on how to trigger a Databricks Notebook from Datasphere, I recommend referring to this blog.

With everything in place, I create a Story, add my Seamless planning Model, and attach the Multi-action:

Running the Multi-action triggers the Task Chain, which in turn triggers the Databricks Notebook.

I can monitor the execution details in Datasphere:

and in Databricks:

Once the calculation completes, the updated forecast appears in the Story:

The end-to-end calculation took 2 minutes 45 seconds in total. The Task Chain in Datasphere is triggered almost instantly by the Multi-action, the Databricks Notebook execution itself took 1 minute 29 seconds, with the remaining time spent on Serverless Cluster startup.   

 

From here, I can copy the calculated forecast into a new private version:

adjust the numbers as needed, and publish it as a new public version to Datasphere:

Conclusion

With SAP Business Data Cloud, it is possible to build a forecasting workflow that feels seamless to the end user — even though it spans multiple systems under the hood.

Companies using BW as the main Data Warehouse and Databricks for ML calculations or Data Science tasks can benefit from using the platform, as the data no longer needs to be physically copied out of BW.

What this scenario demonstrates is that once wrapped as a Data Product, BW sales data can be shared with Databricks via the Delta Share protocol. Databricks, in turn, can then create its own Data Products on top of the calculation results and share them back with Datasphere as a Remote Table.

A Seamless Planning model in SAC sits on top of that Remote Table, giving planners live access to the generated forecast. A single Multi-action in an SAC Story ties it all together, triggering a Datasphere Task Chain that kicks off the Databricks Notebook — completing the full cycle in under three minutes.

As SAP Business Data Cloud continues to mature, scenarios like this one are becoming achievable – leaving the complexity in the architecture and not in the workflow.

Contact

Ilya Kirzner
Consultant
biX Consulting