How Non Programming Integration Solutions Undermine SAP Projects
Executive Summary
- SAP is continually fighting against the perception that its applications are challenging to integrate.
- We explain why this is.
Shaun Snapp and Denis Myagkov co-authored this article.
Video Introduction: How Non Programming Integration Solutions Undermine SAP Projects
Text Introduction (Skip if You Watched the Video)
SAP’s current integration products use outdated non-programming approaches. For organizations, this translates to poor integration performance, scaling limitations, restrictions on pipeline optimization, and the inability to reuse data. Is the solution to increasing performance to use a custom-made solution with a 100% programming approach? The article ends with a demo of a REST API to an SAP system to show how a fully-coded solution can enhance integration performance. You will learn about the history of SAP integration and its reliance on non-programming integration applications, and then we will discuss a new SAP integration approach.
Our References for This Article
If you want to see our references for this article and other related Brightwork articles, see this link.
Notice of Lack of Financial Bias: We have no financial ties to SAP or any other entity mentioned in this article.
History of SAP’s Integration Solutions
SAP has historically been the most challenging vendor to integrate with, yet it has a long history of marketing its integration prowess. One of the authors, Shaun Snapp, can recall being shocked when he had to integrate with SAP through a hierarchical document format (dating back to mainframes) called an IDoc.
Faking the Complexity of Standard Integration
For decades, the primary strategy of both SAP and SAP consulting companies has been to instill fear into customers regarding how difficult it is to perform integration, while at the same time underemphasizing the costs of creating customizations in SAP and the costs and inefficiencies of using SAP integration products.
These positions are designed to direct customers away from purchasing non-SAP applications and toward moving customizations from existing “legacy” systems into SAP.
Recently, SAP has once again tried to recast/reboot its integration image. This has led SAP to make many proposals about the App Center (an Apple App Store type center, giving the illusion that SAP is an open ecosystem) and the SAP Cloud Platform.
The following video covers some of these claims regarding the SAP Cloud Platform and integration.
SAP removed this video from YouTube.
SAP is not an expert in integration and has some of the least efficient integration tools that we have ever reviewed.
SAP has a lengthy history of making SAP integration seem much more capable than it is, even presenting itself as a leader in integration when no companies outside of SAP’s customer base use any of SAP’s integration tools.
This typically leads to customers receiving cost and schedule surprises, as integrating applications to SAP is much more difficult than initially presented.
Two of the primary reasons why SAP projects are over budget are unexpected customizations and integrations.
What follows is an abbreviated list of the history of SAP integration offerings.
SAP’s Integration Offerings
The First Period: Before 2003
SAP performs integration via RFC using a C/C++ library equivalent to the modern libsapjco3.so for Unix or sapjco3.dll for Windows. Today these libraries are delivered as part of the SAP JCo (Java Connector) functionality.
The Second Period: After 2003 but Before 2005
SAP releases its Exchange Infrastructure, SAP XI, as a tool to build integration solutions to the SAP ERP system without explicit coding. This product was released together with SAP NetWeaver 2004 as a killer feature. Technically, SAP XI was a Java wrapper on top of what became the modern SAP JCo.
The Third Period: 2005, SAP Renames XI to PI (Process Integration)
This change was made due to a modification of the licensing policy, so that clients paid not for traffic but for the number of instances. SAP also extended the number of use cases in its marketing materials. In the same year, SAP acquired Lighthammer, which had an integration solution aimed at the manufacturing domain. This solution was later renamed SAP MII and was delivered with SAP PCo (Plant Connectivity), an additional service designed to support data-exchange interfaces like OPC. SAP MII was also built as a wrapper on top of the C/C++ libraries.
The Fourth Period: 2011 SAP PI was renamed to SAP PO (Process Orchestration).
This was more marketing rebranding than a technology update. This rename did not change SAP PO’s prospects globally, with PO declining in popularity since 2011.
The Fifth Period: 2016, SAP Presents Its Cloud Platform Integration & Hybris Data Hub
The SAP Cloud Platform (renamed from SAP HCP) is designed to perform integration with a remote SAP system. The Hybris Data Hub aims to integrate SAP’s Hybris e-commerce application with a small number of functions in SAP ERP, such as material master data, stock values, and prices.
The Reality of SAP’s Integration Products
Every one of SAP’s integration products is worse than just using a good data manipulation language to create an integration harness. Let us consider what this list of SAP integration products tells us.
If we follow the evolution of SAP’s integration solutions, SAP attempted to move from a programming paradigm to a non-programming one, in which writing explicit code was replaced with customization transactions.
However, all of the technologies listed in the previous section are based on the same C/C++ library, wrapped in a Java library, which is in turn wrapped by one of the listed integration tools (much like an onion).
The Price Paid by Customers for SAP’s Non-Programming Integration Approach
SAP has historically benefited from a large pool of specialized workers that can configure systems without knowing how to code. This is the standard SAP “functional consultant” as opposed to a “technical consultant.”
Meanwhile, vendors ranging from Oracle to Axapta to Baan frequently struggled with a shortage of well-qualified IT resources to build their ecosystems. SAP quickly attracted non-technical resources and converted them into consultants.
In doing this, SAP developed an army of semi-IT-literate resources who were ready to set up all business logic in SPRO without writing a single line of code, with most of these resources working for SAP consulting partners or as independent consultants rather than for SAP itself. This reduced the learning curve for configuring SAP and opened the area to more resources.
How SAP Does The Same Thing in Integration
With integration solutions, the picture is quite similar.
SAP gives people who cannot distinguish between the TCP and UDP protocols the ability to build integration solutions without writing even a single line of code, and without even understanding the protocols involved.
In the mid-2000s, this was a reasonable trade-off. At that point, modern web technology was just dawning, and there was little to integrate into most SAP systems.
However, by the end of the 2000s, we already had an extensive offering of different technologies and concepts, and that offering is still growing exponentially.
In recent years, we have seen the growth of solutions that integrate with SAP ERP to extend its planning, reporting, or other capabilities. We have also seen the massively increased popularity of mobile and web-based solutions that are also meant to be integrated with SAP.
All of these tools have their own requirements for integration, security, user management, and performance.
Getting to the Heart of the Problem with XI/PI/PO
The biggest problem with SAP’s current integration products is that they provide an inappropriate model of integration. The features that made SAP XI/PI/PO accessible to SAP consultants are now doing them a disservice. SAP XI was not designed to work with most contemporary technologies and protocols.
SAP, as a vendor, has also been unable to extend and update its solution to fit modern requirements.
Understanding the Server, the Weakest Link in the Chain
Every server can handle only a limited number of external connections simultaneously.
This restriction is due to factors like the following:
- Network bandwidth
- Parameters of the Unix kernel and constraints of the specific server software.
For instance, the default number of parallel connections to Apache Tomcat is 100. The default number for MySQL is 151. The default number for HANA is…well, nobody knows for sure, but it is unlikely to be higher than MySQL’s.
No matter what type of server, it can be viewed as operating like a big supermarket. Imagine a supermarket with 100 checkout stations, where all incoming requests are its customers. There is a principle called Little’s Law that can help us estimate how long service will take.
Let us get into the math of server requests.
- Number of Business Users: Let us assume that we have 2,000 business users using web applications connected to some server.
- Average Number of Requests per Server per Second: Each application makes an average of 10 requests to the server per second, which the server can handle in parallel.
- Server Requests: Let us also assume that the server processes every request in 5 seconds.
- Wait Times: In this situation, half of our users will wait at least twice as long as they would with a lower number of users.
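Under these stated assumptions, the scenario above can be sanity-checked with Little’s Law (L = λ × W, where L is the number of requests in the system, λ the arrival rate, and W the time each request spends in the system). A minimal sketch using the figures from the list:

```python
# Little's Law: requests in the system = arrival_rate * time_in_system.
users = 2000
requests_per_user_per_sec = 10
arrival_rate = users * requests_per_user_per_sec   # 20,000 requests/second
service_time = 5.0                                 # seconds per request

in_flight = arrival_rate * service_time            # requests in the system
capacity = 100                                     # parallel slots (e.g., Tomcat's default)

print(in_flight)             # 100000.0 concurrent requests
print(in_flight / capacity)  # 1000.0x over the server's parallel capacity
```

With these numbers, the server is oversubscribed by three orders of magnitude, which is why wait times balloon as more users are added.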
Contention at the Server with SAP Fiori
We face the same issue with Fiori, where the growing number of connections dramatically reduces overall application performance. Here we have to understand that everything that reaches the screen is sent by the Fiori Frontend Server in response to multiple requests (JS code, the layout structure, dashboard data, user permissions, some transactional data, images, and so on).
Any system will work only as fast as its slowest or bottleneck resource, or even slower than that. This is one reason for Fiori’s constant performance problems.
We measured Fiori’s performance in the article Why is SAP Fiori So Slow?
SAP promised unbeatable performance with HANA. However, if Fiori is the bottleneck due to its integration design, HANA’s speed will not matter. Systems with Fiori consistently perform poorly.
Can One Address the Issue by Simply Adding More Servers?
In practice, we have a solution that works worse than it did in the 2000s due to the growing number of connections in modern applications.
However, wait one second. What about Google, Facebook, Netflix, and PayPal? They can handle thousands of requests per second and respond as fast as if the servers were in the next room.
Why can’t SAP do the same for its clients?
To improve our integration performance, we have to be able to do the following:
- Scale the Landscape Horizontally: Reduce the number of requests to a single server by adding new servers.
- Optimize the Pipeline: Increase the number of requests that a single server can handle by optimizing code and memory usage.
- Reuse Data: Add client-side and server-side caches so we do not re-serve requests that were already served a minute ago.
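To illustrate the third point, here is a minimal server-side cache sketch (the `TTLCache` class and `backend_call` function are illustrative names, not part of any SAP API): identical requests arriving within the TTL window are answered from memory instead of hitting the backend again.

```python
import time

class TTLCache:
    """Answer repeated identical requests from memory for `ttl` seconds."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.time()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]          # served from cache: no backend call
        value = compute()          # expensive backend request
        self._store[key] = (now + self.ttl, value)
        return value

calls = 0
def backend_call():
    global calls
    calls += 1                     # count how often the backend is hit
    return {"stock": 42}

cache = TTLCache(ttl=60)
for _ in range(2000):              # 2,000 users requesting the same data
    cache.get_or_compute("stock/MAT-1", backend_call)
print(calls)  # 1 -- the backend was hit once, not 2,000 times
```

The same idea applies at any layer: once the integration is plain code, a cache like this can be dropped in front of any expensive call.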
Second, we have to acknowledge that all of those companies (Google, Facebook, etc.) are using their own fully tailored solutions that were designed and adapted to handle the targeted number of requests from the beginning.
How Does SAP Address This Issue?
What did SAP do so differently from other modern IT-Giants?
Quite a few things, it turns out.
Let us review each of them.
Difference #1: Scaling Limitations
SAP integration solutions do not scale horizontally. Even if one decides to add more PI/PO servers behind a reverse proxy, it will not help, because the number of requests to the SAP ERP backend will remain the same. The next problem is how to maintain identical logic across several PI/PO instances; this has no real solution.
Difference #2: Pipeline Optimization
There is no way to optimize the pipeline. The non-programming paradigm was obtained only in exchange for a loss of performance. When a connection is made to SAP PI/PO, the server creates a request to the database to obtain the non-programming integration rules. Then the server transforms them into code, compiles them, allocates them in memory, and executes them. All of these steps between receiving a request and executing it are the overhead of the non-programming paradigm. It is possible to write a custom PI/PO component in Java, but this yields only minor performance improvements and offers no test or debug tools. Unfortunately, the majority of PI/PO consultants are unable to write production-ready code.
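The overhead described above can be sketched with a toy example. The JSON `RULE` stands in for mapping rules that PI/PO fetches and interprets per request; the `precompiled` function represents the same mapping fixed in code at startup (all names here are illustrative, not SAP APIs):

```python
import json

# A mapping rule stored as data, re-parsed on every request (the
# non-programming path), versus the same mapping resolved once at startup.
RULE = json.dumps({"rename": {"MATNR": "material", "LGORT": "location"}})

def interpreted(payload):
    rule = json.loads(RULE)                      # per-request rule lookup
    return {rule["rename"][k]: v for k, v in payload.items()}

RENAME = json.loads(RULE)["rename"]              # resolved once, at startup
def precompiled(payload):
    return {RENAME[k]: v for k, v in payload.items()}

payload = {"MATNR": "MAT-1", "LGORT": "0001"}
assert interpreted(payload) == precompiled(payload) == {
    "material": "MAT-1", "location": "0001"}
```

The results are identical; the difference is that `interpreted` pays the parse-and-lookup cost on every call, which is roughly what PI/PO’s rule pipeline does at far greater scale.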
Difference #3: Reuse of Data
PI/PO is unable to reuse data (at least on the server side). If 2,000 users open Fiori apps, the SAP PI behind them will make 2,000 separate requests to the database for the non-programming integration rules, requests that a programmed integration would not make at all. Hence, the non-programming overhead becomes 2,000 times more palpable, and the time in queue for every request becomes much longer.
How SAP Became Trapped in the Non-Programming Paradigm
Today, SAP is trapped in a non-programming paradigm of its own making, adopted in part to fit the consulting model of having consultants configure applications rather than code.
This has left SAP with no viable way to improve integration performance. Yet it insists on the supremacy of the non-programming model, even though specific business cases cannot be solved generically.
Let us review a particular example.
How Google, Facebook, and Netflix Compare to SAP’s Dated Integration Approach
Companies like Google, Facebook, and Netflix are also struggling with their infrastructure’s performance, but let us review how they are solving this issue of integration.
- Google: When a video becomes viral on YouTube, Google copies this video to hundreds or thousands of servers worldwide. This is to make this data available to users as fast as possible.
- Netflix: When Netflix launches a new season of a hit series, it spins up thousands of new servers to stream the content. However, running 10-12 additional application servers to improve overall performance during a December run-up is not feasible in the SAP ecosystem.
- Visa: For Visa, it is entirely routine to handle 4,000 transactions per second. This is much higher than any single server could handle. When you are paying for your coffee at Starbucks, Visa is somehow able to check your account balance, verify your PIN, and send a confirmation to the terminal. How could that possibly work with SAP FI + SAP PI? This requirement would break SAP FI + PI.
None of those achievements was the result of magic or supernatural abilities.
All of these companies invest enormous effort and money to design and build their own solutions, with no SPRO-style configuration client to simplify things for consultants. These solutions are purpose-built for one company.
Big Data itself was only a by-product of Google’s work, and it reshaped the IT industry after Google published its file system paper in 2003. Google solved the problem of handling large amounts of heterogeneous data with a large number of cheap commodity servers.
Fifteen years later, all SAP could propose to the market as its Big Data solution was SAP Vora, which works on top of Hadoop, which in turn works on HDFS, a file system similar to the one described by Google. SAP is trying to insert itself into Hadoop and other related technologies that do not need SAP.
Getting Back to Basics for Integration
Does this all mean that to get the high performance, we need to get back to programming?
Yes, but with a caveat.
We have to acknowledge that programming has evolved over the last 15 years, and these changes are for the better. For example, 15 years ago, it was necessary to spend several days writing and deploying a simple enterprise-ready server. Today, any recent computer science graduate can do that in one to two hours at most, using a server in the cloud.
In my practice, I (Denis Myagkov) use a custom-made solution that I named the Integration Framework. Like the SAP integration solutions, the Integration Framework is built on SAP JCo and the C/C++ libraries. However, I chose a 100% programming approach in which all of the integration logic is compiled to bytecode, producing zero rule-interpretation overhead.
Moreover, I can work with any data cache, scale my servers horizontally, or do whatever I want.
Below is code that provides a REST API to a real-life SAP system from an Amazon AWS t2.micro instance.
This is everything needed to open a single API to an SAP BAPI. With our SAP Integration Ninja, it takes only a couple of hours or days to build an integration component that easily outperforms any SAP PI/PO implementation by 100 to 150 times while running on a $7 Amazon AWS server.
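The original listing is not reproduced here, but a minimal sketch of the idea, a thin HTTP layer in front of an RFC/BAPI call, might look as follows. This is not the Integration Framework’s actual code; `call_bapi` is a stub where a real implementation would invoke SAP JCo or a similar RFC connector.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def call_bapi(material):
    # Stub: a real implementation would make an RFC call here
    # (e.g., to a BAPI that reads material master details).
    return {"material": material, "description": "demo"}

class BapiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route GET /material/<id> to the (stubbed) BAPI call.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "material":
            body = json.dumps(call_bapi(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet; remove to restore request logging

def make_server(port=8080):
    return HTTPServer(("0.0.0.0", port), BapiHandler)

# To serve: make_server().serve_forever()
```

Because all routing and serialization here is plain code, such a server can be cached, profiled, and scaled horizontally like any other HTTP service, which is precisely the point of the fully-coded approach.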
View #1: Introduction to the Ninja
To understand the history of SAP integration, see the following interactive graphic.
View #2: A Brief History of SAP’s Integration Offering
Tracking the history of SAP integration is critical to understanding why SAP integration tends to be so problematic and why the Ninja has such an advantage over any SAP integration approach or technology.