Seamless Planning with SAP SAC and SAP Datasphere Live Connections - Initial tests  

December 2025

Introduction 

Live Connection with Seamless Planning was presented at TechEd in Berlin and has recently become generally available with the SAC Q4/2025 release.  

In this blog, we share our experience from the first tests of this new functionality.

We look at the following aspects:

  • Live Connection in Seamless Planning – What is it? 
  • What possibilities does the new functionality offer? 
  • How do SAC and Datasphere permissions interact with Live Connections? 
  • Is a Self-Reference to the plan data possible?

Live Connection in Seamless Planning – What is it? 

This blog provides a good overview of Seamless Planning: 

Understanding Seamless Planning  

Live Connection in Seamless Planning allows you to link additional tables/views to your plan figures without copying them into the planning data model. This functionality is described very well in the following blog post by Maximilian Gander: 

Unlocking the Next Chapter of Seamless Planning in SAP Business Data Cloud with Live Versions  

There is also an excellent training course/demo on this topic, which was presented at TechEd 2025. The link to this training course and other highlights can be found in this blog: 

TechEd 2025 in Berlin – Recap on planning in SAP Business Data Cloud  

Setting up a Live Connection was straightforward. The following points, which follow directly from the concept, should be noted: 

  • It is only possible to integrate fact views. 
  • A live version must be linked to an empty (new) version. If data already exists for a version in the model, that version cannot be linked to live data.  
  • Only one version can be marked as “actual.” Therefore, it is not possible to copy part of the actual data into the model and link another part live, or to obtain the actual data from two live connections. To do this, you must use versions that are not marked as “actual” or first consolidate all the actual data in a single view. 
  • All dimensions of the live data MUST be linked to a dimension of the data model. As long as the dimension of the live data is not linked, the model cannot be saved. Superfluous dimensions must be hidden in advance in the view within Datasphere. However, it is not a problem if the live view does not provide all dimensions. 
  • Only ONE version can be assigned to a live view at a time. Therefore, it is not possible to assign the “Version” dimension. The view must always be filtered to one version. 
  • If master data that appears in the live view is missing from the SAC dimensions, this is flagged in the SAC model. The missing master data can be added at the touch of a button (see screenshots). 

Figure 1: Error message in case of missing master data 

Figure 2: Popup to add missing master data

Figure 3: Version with origin in case of live versions

What possibilities does the new functionality offer?

For anyone familiar with the classic BW world, this functionality corresponds to a (limited) CompositeProvider. For the first time in SAC, there is an option to combine data into a model without having to copy it into the SAC model! This means that you are no longer forced to pack everything into one model and copy data from one model to another for integrated planning.

Figure 4: Possibilities with Live Connection in Seamless Planning

How do SAC and Datasphere permissions interact with Live Connections?

With Seamless Planning, the data is stored in Datasphere, but everything is managed by SAC.
How does this work with regard to permissions?

  • The user needs authorization in SAC to access the planning.
  • If the user does not have authorization in Datasphere (either no user at all or no access to the corresponding space), they can execute the SAC planning but will not see any data that is integrated via the live model. (see Fig. 5)
  • If a dimension in the SAC model is marked as authorization-relevant, this restricts the data in the SAC for both the planning data and all live models. (see Fig. 6)
  • Please note that the restrictions in SAC do not apply to the data in Datasphere! Users who can access the planning data in Datasphere directly have this access without the SAC restrictions! Separate restrictions (DAC) must be implemented in Datasphere for this purpose and kept in sync!
  • If there are restrictions in Datasphere via “data access control” for the user, these also apply if the data is integrated into a planning model via a Live Connection. (see Fig. 6)

 

Figure 5: SAC users without Datasphere user or space authorization: Planning is possible, but live data is missing!

Figure 6: DAC authorization in Datasphere and restriction in SAC: both apply to live data; plan data only uses the SAC restriction.

Is a Self-Reference to the plan data possible?

The plan data in Seamless Planning is stored in Datasphere, where it is available for integration into other data models. One might think of consuming this data via a view, implementing further processing there, and then linking it back to one's own data model via a Live Connection (a self-reference).

Unfortunately, this did not work for me. I was able to define a live connection, and the data was visible in the data preview as expected. However, as soon as I tried to save the model, the save was aborted with an SQL error.

What does work, however, is integrating the data into another model and then copying the calculated data into my original data model via a cross-model copy data action.

This means that it is possible in principle to use calculations in planning that are not possible in SAC data actions but can be implemented via a Datasphere View or HANA SQL View.

Figure 7: Self-referencing virtual data model possible via second SAC model

Conclusion

The new live connection feature in SAC's Seamless Planning opens up many possibilities in SAC planning that were previously impossible or difficult to achieve:

  • Live integration of actual data without data copying
  • Live integration of other plan data without data copying
  • Use of SQL functions in the Datasphere

This enhancement makes the Seamless Planning model the preferred planning environment in SAC, with further functions to follow.

Addendum: A move to a BDC environment should also work, as a Datasphere tenant can be transferred to BDC via “rewiring.” This means that existing Seamless Planning solutions should have no problem migrating to BDC, as long as the SAC and Datasphere tenants are migrated at the same time.

Data Products Setup

I’ll start with Data Products setup. If you’re new to the concept, this recent video is a great starting point, but here’s a short summary. A data product is a well-described, easily discoverable, and consumable collection of data sets.

Creating a Data Product in Datasphere

Note that in this article I create Data Products in the Data Sharing Cockpit in Datasphere. This functionality is expected to move into the Data Product Studio, but that had not taken place at the time of writing.

Before creating a Data Product in Datasphere, I need to set up a Data Provider profile, collecting descriptive metadata like contact and address details, industry, and regional coverage, and, importantly, defining the Data Product Visibility. Enabling Formations allows me to share the Data Product with systems across my BDC Formation – Databricks, in this case.

With the Data Provider set up, I can go ahead and create a Data Product. As with the Data Provider, I’ll need to add metadata about the product and define its artifacts – the datasets it contains. Only datasets from a space of type SAP HANA Data Lake Files can be selected. Since this Data Product is visible across the Formation, it is available free of charge.

For this demo, the artifact is a local table containing ten years of Ice Cream sales data. Since this is a File type space, importing a CSV file directly to create a local table isn’t an option (see documentation).

I used a Replication Flow to perform an initial load from a BW aDSO table into a local table.

Once the Data Product is created and listed, it becomes available in the Catalog & Marketplace, from where it can be shared with Databricks by selecting the appropriate connection details.

Jump into Databricks

To use the shared object in Databricks, I need to mount it to the Catalog – either by creating a new Catalog or using an existing one.

Databricks appends a version number to the end of the schema – ‘:v1’ – to maintain versioning in case of any future changes to the Data Product.

Once the share is mounted, the schema is created automatically, and the Sales actual data table becomes available within it. From there, I can access the shared table directly in a Notebook.
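
For reference, the notebook access pattern looks roughly like this – a minimal sketch with illustrative catalog, schema, and table names (note the backtick quoting needed because of the ‘:v1’ suffix):

# Sketch: mounting the share and reading the shared table in a Databricks
# notebook. All names are illustrative placeholders.

# Mount the Delta Share into Unity Catalog (SQL equivalent of the UI step):
spark.sql(
    "CREATE CATALOG IF NOT EXISTS bdc_share_catalog "
    "USING SHARE `bdc_provider`.`icecream_sales_share`"
)

# The schema name carries the appended ':v1' version suffix, so it needs
# backtick quoting in the three-part table name:
sales_df = spark.table("bdc_share_catalog.`icecream_sales:v1`.sales_actuals")

sales_df.printSchema()
sales_df.show(10)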

Creating a Data Product in Databricks

To create a Data Product in Databricks, I first need to create a Share – which I can either do via the Delta Sharing settings in the Catalog:

Or directly from the table that is going to become part of the Share:

Since a single Share can contain multiple tables, I have the option to either add the table to an existing Share, or create a new one:
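
These steps can also be scripted instead of clicking through the UI – a minimal sketch from a notebook cell, with illustrative Share, table, and recipient names:

# Sketch: creating a Share and adding the forecast table via SQL instead
# of the UI. Share, recipient, and table names are illustrative.

spark.sql(
    "CREATE SHARE IF NOT EXISTS forecast_share "
    "COMMENT 'Sales forecast results for Datasphere'"
)

# A Share can contain multiple tables; this adds one of them:
spark.sql("ALTER SHARE forecast_share ADD TABLE main.forecasts.sales_forecast")

# Grant the recipient (the Datasphere side of the Formation) read access:
spark.sql("GRANT SELECT ON SHARE forecast_share TO RECIPIENT datasphere_recipient")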

To publish the Share as a Data Product, I run a Python script where I define the target table for the forecast and describe the Share in CSN notation, setting the Primary Keys. Primary Keys are required for installing Data Products in Datasphere.
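
As a rough illustration, the CSN part of such a script might look like this – a sketch with assumed entity and column names; the ‘key’ flags mark the Primary Keys that Datasphere requires:

import json

# Sketch: CSN (Core Schema Notation) description of the shared forecast
# table. Entity and element names are assumptions; 'key': True marks the
# Primary Keys that Datasphere requires for installing the Data Product.
csn = {
    "definitions": {
        "SalesForecast": {
            "kind": "entity",
            "elements": {
                "CALMONTH": {"type": "cds.String", "length": 6, "key": True},
                "CITY": {"type": "cds.String", "length": 40, "key": True},
                "FORECAST_QTY": {"type": "cds.Decimal", "precision": 17, "scale": 3},
            },
        }
    }
}

print(json.dumps(csn, indent=2))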

Jump back into Datasphere

Once the Databricks Data Product is available in Datasphere, I install it into a Space configured as a HANA Database space – since my intention is to build a view on top of the table and use it for planning in SAC.

There are two installation options: as a Remote table for live data access, or as a Replication Flow, in which case the data is physically copied into the object store in Datasphere.

Since I want live access, I install it as a Remote Table:

and build a Graphical view of type Fact on top:

Forecast calculation

With my Data Products set up and the Sales actual data available in Databricks, I create a Notebook to calculate the Sales Forecast.

The approach combines Sales and Weather data to train a Linear Regression model. I import the Weather data* (https://zenodo.org/records/4770937) from an external server directly into Databricks, select the relevant features from the weather dataset, and combine them with the Sales actual data:

* Klein Tank, A.M.G., and Coauthors, 2002: Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment. Int. J. Climatol., 22, 1441–1453. Data and metadata available at http://www.ecad.eu

Using the “sklearn” library, I build and train a Linear Regression model:

Once trained, the model predicts the Sales forecast for Rome in June 2026 based on the weather forecast, and I save the results to my Catalog table:
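
In outline, the notebook logic looks roughly like the following – a minimal sketch with assumed table and column names (‘spark’ is the session object that Databricks notebooks provide):

import pandas as pd
from sklearn.linear_model import LinearRegression

# Sketch of the notebook logic; table and column names are assumptions.
# 'training_pdf' is the prepared training set: monthly sales actuals
# joined with the selected weather features for the same periods.
training_pdf = spark.table("main.forecasts.sales_weather_training").toPandas()

features = ["MEAN_TEMP", "PRECIPITATION"]  # selected weather features
X = training_pdf[features]
y = training_pdf["SALES_QTY"]

model = LinearRegression().fit(X, y)

# Predict June 2026 for Rome from the weather forecast (assumed values):
june_2026 = pd.DataFrame([{"MEAN_TEMP": 26.0, "PRECIPITATION": 18.0}])
forecast_qty = float(model.predict(june_2026)[0])

# Append the result to the Catalog table that is part of the Share:
result = spark.createDataFrame(
    [("202606", "Rome", forecast_qty)],
    ["CALMONTH", "CITY", "FORECAST_QTY"],
)
result.write.mode("append").saveAsTable("main.forecasts.sales_forecast")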

Seamless planning data model

The Seamless Planning concept is built around physically storing planning data and public dimensions directly in Datasphere, keeping them alongside the actual data.

Since the QRC4 2025 SAC release, it has also been possible to use live versions and bring reference data into planning models without replication.

In this scenario, I build a seamless planning model on top of the Graphical view I created over the Remote table. This lets me use the forecast generated in Databricks as a reference for the final SAC Forecast version.

 

The model setup follows these steps:

Create a new model:

Start with data:

Select Datasphere as the data storage:

From there, I define the model structure and can review the data in the preview.

For a deeper dive into Seamless Planning, I recommend this biX blog.

Process Flow automation

Multi-action triggers Datasphere task chain

The final step is automating the entire forecast generation using SAC Multi-actions and a Task Chain in Datasphere – so that the user can trigger the calculation with a single button click from an SAC Story.


Triggering Task Chains from Multi-actions is a recently released capability. This blog post walks through how to set it up.

For details on how to trigger a Databricks Notebook from Datasphere, I recommend referring to this blog.
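
For context: a Notebook wrapped in a Databricks job can also be started programmatically via the Databricks Jobs API – a minimal sketch of such a ‘run-now’ call, with placeholder host, token, and job ID:

import requests

# Sketch: starting a Databricks job run via the Jobs API 2.1 'run-now'
# endpoint. Host, token, and job ID are placeholders; an external
# orchestrator (such as a Datasphere Task Chain) can issue this kind of
# call to kick off the notebook job.
DATABRICKS_HOST = "https://<workspace-host>"  # placeholder
API_TOKEN = "<personal-access-token>"         # placeholder
JOB_ID = 123456789                            # placeholder job wrapping the notebook

response = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"job_id": JOB_ID},
    timeout=30,
)
response.raise_for_status()
print("Started run:", response.json()["run_id"])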

With everything in place, I create a Story, add my Seamless planning Model, and attach the Multi-action:

Running the Multi-action triggers the Task Chain, which in turn triggers the Databricks Notebook.

I can monitor the execution details in Datasphere:

and in Databricks:

Once the calculation completes, the updated forecast appears in the Story:

The end-to-end calculation took 2 minutes 45 seconds in total. The Task Chain in Datasphere is triggered almost instantly by the Multi-action; the Databricks Notebook execution itself took 1 minute 29 seconds, with the remaining time spent on Serverless cluster startup.

 

From here, I can copy the calculated forecast into a new private version:

adjust the numbers as needed, and publish it as a new public version to Datasphere:

Conclusion

With SAP Business Data Cloud, it is possible to build a forecasting workflow that feels seamless to the end user — even though it spans multiple systems under the hood.

Companies using BW as the main Data Warehouse and Databricks for ML calculations or Data Science tasks can benefit from using the platform, as the data no longer needs to be physically copied out of BW.

What this scenario demonstrates is that once wrapped as a Data Product, BW sales data can be shared with Databricks via the Delta Share protocol. Databricks, in turn, can then create its own Data Products on top of the calculation results and share them back with Datasphere as a Remote Table.

A Seamless Planning model in SAC sits on top of that Remote Table, giving planners live access to the generated forecast. A single Multi-action in an SAC Story ties it all together, triggering a Datasphere Task Chain that kicks off the Databricks Notebook — completing the full cycle in under three minutes.

As SAP Business Data Cloud continues to mature, scenarios like this one are becoming achievable – leaving the complexity in the architecture and not in the workflow.

Contact

Ilya Kirzner
Consultant
biX Consulting