Hands-on with the new model in SAP Analytics Cloud

In March, the Q2 2021 release added several new features to SAC. Among them is a new model type that makes data modelling more flexible and effective: modellers can now also use it to create key figure models.

Until now, it was not possible to map key figure models in SAP Analytics Cloud. If the data source in a source system was available as a key figure model, it first had to be mapped to an account model with additional effort before the data could be evaluated or planned. That is finally history: SAC now supports both in one model.

This new data model is referred to in this article as a hybrid model: hybrid because it can be set up as a key figure model, an account model, or both. This model will apparently replace the old data model (referred to here as the Classic Account Model, see Figure 1) in the future and will thus become the focus of new developments and functions.

 

The new data model in SAC can be created in two ways:

  • Start with a blank model and select New Model in the dialog shown in Figure 1 below, or
  • for an existing model (Classic Account Model), select Migrate to New Model Type under Edit (see Figure 2).

Figure 1: Create New Model -> Blank Model -> New Model

Figure 2: Workspace Model Structure -> Edit -> Migrate (marked red)

This article gives an insight into the hybrid model, how to work with it, and what to consider when migrating existing models (Classic Account Models).

 

First impressions

The hybrid model finally makes it possible to create key figure models and thus build completely new scenarios. It is becoming more important to think about how models should be structured in SAC, because good modelling can reduce data volumes and improve handling. Furthermore, existing data models from other systems that previously could not be connected to SAC (except with an additional mapping to an account model) can now be connected directly.

Further differences

New number types

Another innovation that comes with the hybrid model is new number types, e.g. integers. In the future, it will finally be possible to plan whole bicycles without resorting to the decimal-place workaround and its risk of calculations being rounded in the background.

No automatic creation of master data

With the Classic Account Model, new master data is automatically created when loading. This is not (yet) possible with the hybrid model. When loading, it must therefore be ensured that all master data is maintained beforehand.

Currency conversion

The possibilities for currency conversion during planning have also been extended. Now you can, for example, enter values in transaction currency and have them translated directly into company code and group currency during entry before transferring the data! This facilitates consistent entry.

Calculated and restricted key figures

In addition, calculated and restricted key figures can now be created without having to create them via the Account dimension.

 

Creating a data model from a dataset only works for account models

These new features encourage new developments to be built on the hybrid model. Unfortunately, a hybrid model cannot be generated from a dataset; it must always be created manually. It makes no difference whether the data source is an account model or a key figure model: SAC always generates an account model, which can then be migrated manually into a hybrid model (see the next chapter).

Migration

The most striking thing about a migration is that the familiar Data Wrangling Editor is not involved. This means that SAC does not yet offer any functions for making adjustments while the models are transferred. A mapping function would have fitted in well there, for example to fill several key figures from a single one (depending on the account dimension). After triggering Migrate to New Model Type, the old model is deleted and SAC creates a hybrid model in its place. If you are not satisfied with the migration, there is no undo! It is therefore very important to prepare a few steps beforehand.

A model that is in use should never be migrated. This is not possible anyway, because stories cannot be migrated along with it. It is better to create a duplicate of the model to be migrated. This duplicate has no connections to stories and can therefore be migrated. The stories then either have to be rebuilt manually, or you can try duplicating the stories as well and swapping in the hybrid model as their new data source.

It is also important to ensure that the currency conversion is switched off, otherwise no migration can take place. After the migration, the currency conversion can be switched on again. Import jobs are also not migrated. Thus, all migrated models should be checked for this.

Another important point: when existing account models are migrated, the unit properties are lost. In the old Classic Account Model, the unit & currency can be set via the Account dimension, as shown in Figure 3.

 

Figure 3: Classic Account Model: Dimension Account

A story on this model also shows the units & currencies from the Account dimension accordingly with the symbols (see Figure 4).

 

Figure 4: Story on a Classic Account Model

Figure 5 shows that the currency symbols are missing after migration. This is because the hybrid model does not support mixed key figures. The automatically generated key figure has a field where the unit & currency can also be set, but one must decide whether this key figure holds quantities or amounts. In the example shown, the account model to be migrated contains both quantity units and currencies; after migration, the key figures are all interpreted either as quantities or as amounts. Alternatively, a unitless key figure can be created. This setting can be seen in the figure below. When creating reports, care must be taken to ensure that users interpret their key figures correctly.

 

Figure 5: Story on a hybrid model (after migration)

This is still the biggest problem with the hybrid model. To complete the migration, the key figures must be split into several key figures (amount, quantity) (see the following thought experiment).

Thought experiment

To enable a complete migration, additional manual effort is required. One idea would be to duplicate the Classic Account Model to be migrated as many times as there are different units in the original model. Each duplicated Classic Account Model would then have to be trimmed down to a single unit type, so that in our case there is one model containing only quantity units and one containing only currencies. These could then be migrated individually. After the migration, the corresponding unit must be assigned to the automatically created key figure.

The two models could then be added to a story and linked to each other. This function is called data blending, although it comes with limitations. Further information on data blending: https://blogs.sap.com/2019/11/05/sap-analytics-cloud-blending-information-part-1/.

Alternatively, a Data Action could be used to copy the values of one model into a new key figure of the other.

This would allow the data to remain jointly evaluable.

 

Conclusion

The hybrid model already fulfils the long-awaited wish to be able to map key figure models. This brings flexibility in data modelling as well as potential performance advantages. In addition, data sources that previously could not be connected to SAC because of their key figure structure can now be connected. New data models will mostly be developed on the hybrid model because it offers more flexibility.

However, there is still room for improvement when migrating from a Classic Account Model to a hybrid model. The loss of units is a challenge that does not exactly encourage migrating existing Classic Account Models. It remains to be seen whether SAP will address this issue.

It is also exciting to see whether SAP will migrate the standard content or create new content.

 

More information can be found via this link (https://saphanajourney.com/sap-analytics-cloud/resources/sap-analytics-cloud-new-model/) or in the SAC help. The remaining findings are based on trial & error (as of 28.04.2021).

Contact Person

Julius Nordhues

Consultant

Data Products Setup

I’ll start with Data Products setup. If you’re new to the concept, this recent video is a great starting point, but here’s a short summary. A data product is a well-described, easily discoverable, and consumable collection of data sets.

Creating a Data Product in Datasphere

Note that in this article I create Data Products in the Data Sharing Cockpit in Datasphere. This functionality is expected to move into the Data Product Studio, but that had not taken place at the time of writing.

Before creating a Data Product in Datasphere, I need to set up a Data Provider profile, which collects descriptive metadata like contact and address details, industry, and regional coverage, and, importantly, defines the Data Product Visibility. Enabling Formations allows me to share the Data Product with systems across my BDC Formation – Databricks, in this case.

With the Data Provider set up, I can go ahead and create a Data Product. As with the Data Provider, I’ll need to add metadata about the product and define its artifacts – the datasets it contains. Only datasets from a space of SAP HANA Data Lake Files type can be selected. Since this Data Product is visible across the Formation, it is available free of charge.

For this demo, the artifact is a local table containing ten years of Ice Cream sales data. Since this is a File type space, importing a CSV file directly to create a local table isn’t an option (see documentation).

I used a Replication Flow to perform an initial load from a BW aDSO table into a local table.

Once the Data Product is created and listed, it becomes available in the Catalog & Marketplace, from where it can be shared with Databricks by selecting the appropriate connection details.

Jump into Databricks

To use the shared object in Databricks, I need to mount it to the Catalog – either by creating a new Catalog or using an existing one.

Databricks appends a version number to the end of the schema – ‘:v1’ – to maintain versioning in case of any future changes to the Data Product.

Once the share is mounted, the schema is created automatically, and the Sales actual data table becomes available within it. From there, I can access the shared table directly in a Notebook.
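
For illustration, reading the shared table from a Python Notebook could look like the following sketch; the catalog, schema, and table names are placeholders, and the schema carries the ':v1' suffix that Databricks adds.

```python
# Minimal sketch: read the mounted Delta Share table from Unity Catalog.
# 'spark' and 'display' are the globals available in a Databricks Notebook.
# Catalog, schema, and table names are illustrative placeholders; the schema
# includes the ':v1' version suffix mentioned above, so it needs backticks.
sales_df = spark.read.table("bdc_share.`sales_actuals:v1`.sales_actual_data")
display(sales_df.limit(10))
```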

Creating a Data Product in Databricks

To create a Data Product in Databricks, I first need to create a Share – which I can either do via the Delta Sharing settings in the Catalog:

Or directly from the table that is going to become part of the Share:

Since a single Share can contain multiple tables, I have the option to either add the table to an existing Share, or create a new one:

To publish the Share as a Data Product, I run a Python script where I define the target table for the forecast and describe the Share in CSN notation, setting the Primary Keys. Primary Keys are required for installing Data Products in Datasphere.
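
As a rough sketch, the CSN description of the Share could look like the following Python dictionary. The entity name, element names, and types are assumptions for this example, and the actual publish call from the script is omitted here; the "key": True flags mark the primary keys that Datasphere requires.

```python
import json

# Hypothetical CSN (Core Schema Notation) description of the forecast table.
# Entity and element names are illustrative; "key": True marks the primary
# keys that Datasphere needs when installing the Data Product.
forecast_csn = {
    "definitions": {
        "SalesForecast": {
            "kind": "entity",
            "elements": {
                "City":        {"type": "cds.String", "length": 40, "key": True},
                "CalDay":      {"type": "cds.Date", "key": True},
                "ForecastQty": {"type": "cds.Decimal", "precision": 17, "scale": 2},
            },
        }
    }
}

print(json.dumps(forecast_csn, indent=2))
```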

Jump back into Datasphere

Once the Databricks Data Product is available in Datasphere, I install it into a Space configured as a HANA Database space – since my intention is to build a view on top of the table and use it for planning in SAC.

There are two installation options: as a Remote table for live data access, or as a Replication Flow, in which case the data is physically copied into the object store in Datasphere.

Since I want live access, I install it as a Remote Table:

and build a Graphical view of type Fact on top:

Forecast calculation

With my Data Products set up and the Sales actual data available in Databricks, I create a Notebook to calculate the Sales Forecast.

The approach combines Sales and Weather data to train a Linear Regression model. I import the Weather data* (https://zenodo.org/records/4770937) from an external server directly into Databricks, select the relevant features from the weather dataset, and combine them with the Sales actual data (a simplified sketch follows the data citation below):

* Klein Tank, A.M.G. and Coauthors, 2002. Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment. Int. J. of Climatol., 22, 1441-1453. Data and metadata available at http://www.ecad.eu
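
A simplified sketch of this preparation step is shown below; the weather file location, column names, and join keys are assumptions rather than the exact notebook code.

```python
import pandas as pd

# Sales actuals from the shared Delta table (names as in the earlier sketch).
sales = spark.read.table("bdc_share.`sales_actuals:v1`.sales_actual_data").toPandas()
sales["date"] = pd.to_datetime(sales["date"])

# Weather data downloaded from the external source; the file path and the
# column names (date, mean_temp, precipitation, city) are assumptions.
weather = pd.read_csv("/dbfs/tmp/weather_rome.csv", parse_dates=["date"])

# Keep only the features relevant for the regression and join them with the
# sales actuals on city and date.
features = weather[["city", "date", "mean_temp", "precipitation"]]
training_data = sales.merge(features, on=["city", "date"], how="inner")
```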

Using the “sklearn” library, I build and train a Linear regression model:
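
A minimal sketch of the training step, assuming the combined training_data frame from above and a sales quantity column named sales_qty:

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Features: daily weather; target: daily sales quantity (column name assumed).
X = training_data[["mean_temp", "precipitation"]]
y = training_data["sales_qty"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)
print("R^2 on hold-out data:", model.score(X_test, y_test))
```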

Once trained, the model predicts the Sales forecast for Rome in June 2026 based on the weather forecast, and I save the results to my Catalog table:
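
The prediction and write-back could then look roughly like this; the assumed weather forecast values, feature names, and target catalog table are illustrative.

```python
import pandas as pd

# Assumed daily weather forecast for Rome, June 2026 (dummy values).
forecast_weather = pd.DataFrame({
    "date": pd.date_range("2026-06-01", "2026-06-30", freq="D"),
    "mean_temp": 26.0,
    "precipitation": 0.5,
})

# Predict daily sales with the trained model and label the city.
forecast_weather["forecast_qty"] = model.predict(
    forecast_weather[["mean_temp", "precipitation"]]
)
forecast_weather["city"] = "Rome"

# Persist the result to the Unity Catalog table that is shared back to
# Datasphere (catalog/schema/table names are illustrative).
spark.createDataFrame(forecast_weather).write.mode("overwrite") \
    .saveAsTable("forecast_catalog.sales.sales_forecast")
```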

Seamless planning data model

The Seamless Planning concept is built around physically storing planning data and public dimensions directly in Datasphere, keeping them alongside the actual data.

Since the QRC4 2025 SAC release, it has also been possible to use live versions and bring reference data into planning models without replication.

In this scenario, I build a seamless planning model on top of the Graphical view I created over the Remote table. This lets me use the forecast generated in Databricks as a reference for the final SAC Forecast version.

 

The model setup follows these steps:

Create a new model:

Start with data:

Select Datasphere as the data storage:

From there, I define the model structure and can review the data in the preview.

For a deeper dive into Seamless Planning, I recommend this biX blog.

Process Flow automation

Multi-action triggers Datasphere task chain

The final step is automating the entire forecast generation by using SAC Multi-actions and a Task-Chain in Datasphere – so that my user can trigger the calculation with a single button click from an SAC Story.


Triggering Task Chains from Multi-actions is a recent release. This blog post walks through how to set it up.

For details on how to trigger a Databricks Notebook from Datasphere, I recommend referring to this blog.

With everything in place, I create a Story, add my Seamless planning Model, and attach the Multi-action:

Running the Multi-action triggers the Task Chain, which in turn triggers the Databricks Notebook.

I can monitor the execution details in Datasphere:

and in Databricks:

Once the calculation completes, the updated forecast appears in the Story:

The end-to-end calculation took 2 minutes 45 seconds in total. The Task Chain in Datasphere is triggered almost instantly by the Multi-action, the Databricks Notebook execution itself took 1 minute 29 seconds, with the remaining time spent on Serverless Cluster startup.   

 

From here, I can copy the calculated forecast into a new private version:

adjust the numbers as needed, and publish it as a new public version to Datasphere:

Conclusion

With SAP Business Data Cloud, it is possible to build a forecasting workflow that feels seamless to the end user — even though it spans multiple systems under the hood.

Companies using BW as the main Data Warehouse and Databricks for ML calculations or Data Science tasks can benefit from using the platform, as the data no longer needs to be physically copied out of BW.

What this scenario demonstrates is that once wrapped as a Data Product, BW sales data can be shared with Databricks via the Delta Share protocol. Databricks, in turn, can then create its own Data Products on top of the calculation results and share them back with Datasphere as a Remote Table.

A Seamless Planning model in SAC sits on top of that Remote Table, giving planners live access to the generated forecast. A single Multi-action in an SAC Story ties it all together, triggering a Datasphere Task Chain that kicks off the Databricks Notebook — completing the full cycle in under three minutes.

As SAP Business Data Cloud continues to mature, scenarios like this one are becoming achievable – leaving the complexity in the architecture and not in the workflow.

Contact

Ilya Kirzner
Consultant
biX Consulting