HANA Virtual Data Models

Recently I had a chance to work with HANA VDM – Analytics for ERP 1.0 in a sidecar scenario. These built-in views are very valuable for real-time operational reporting. For organizations that have already invested in HANA, implementing these views means a quick ROI for business users. Another big advantage is the ability to keep using the existing ECC system as-is while using HANA as a secondary database, before moving to an integrated scenario.

Here are the high-level steps I took to implement these views:

Technical System Landscape

  • HANA SPS08
  • SLT 2.0 (Dedicated Server)


  • Verify the HANA appliance is functioning and HANA Lifecycle Manager is working (we hit an issue that required upgrading Lifecycle Manager)
  • Install SLT 2.0 on a Windows server
  • Set up the proper roles in ECC and HANA
  • Configure replication between ECC and HANA (one-to-one mapping)


  • Replicate 559 tables from ECC
    • If not all 559 tables replicate, the view activation process will be skipped during the HANA Analytics package deployment
    • SAP Note 1781992 – Tables for SAP HANA Analytics for ERP 1.0
    • SAP Note 1782065 – Tables for SAP HANA Analytics for SAP Business Suite
  • Configure schema mapping
  • Download and deploy the content package in HANA
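Because a single missing table silently skips view activation, it is worth confirming that everything on the required list has actually replicated before deploying the content package. A minimal sketch of that check; the table names below are placeholders, and in practice the expected list would come from the SAP Notes above and the replicated list from the HANA system:

```python
# Sketch: verify that all tables required by HANA Analytics for ERP
# have replicated before deploying the content package.
# The three names below are placeholders, not the real 559-table list.

def missing_tables(expected, replicated):
    """Return the expected tables that have not replicated yet."""
    return sorted(set(expected) - set(replicated))

expected = {"BKPF", "BSEG", "CSKS"}   # placeholder subset of the required tables
replicated = {"BKPF", "CSKS"}         # e.g. queried from the HANA schema

gap = missing_tables(expected, replicated)
if gap:
    print(f"{len(gap)} table(s) missing; view activation will be skipped: {gap}")
```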


Install the following add-ons on HANA (http://help.sap.com/hba):

  • SAP HANA Live Extensions Assistant
  • SAP HANA Live View Browser
  • Analytical Authorization Assistant (AAA) for HANA Live VDM


  • SAP HANA Live Extensions Assistant

With the SAP HANA Live Extension Assistant, you can extend the query views and reuse views delivered by SAP. The tool is an Eclipse plug-in for SAP HANA Studio.


  • SAP HANA Live View Browser

SAP HANA Live Browser is an SAPUI5-based web application that lets you browse, search, and tag HANA Live content views and consume them in SAP BusinessObjects Lumira or Analysis Office to analyze the data.


  • Analytical Authorization Assistant (AAA) for HANA Live VDM

Here are some key points I’d like to investigate further:

  • Create Analytical Privileges in the regular way. This gives us flexibility in naming the APs and in creating custom restrictions.
    • This is very important considering there are almost 1,000 prebuilt models/queries for HANA Live.
  • For real-time operational reporting we don’t need to replicate ECC security. However, it should be granular enough and flexible for security provisioning (for example, securing PM KPIs from Finance, etc.).
  • Once we create all the APs, we create roles, assign the APs to the roles, and assign the roles to users.
  • This add-on comes with a metadata tab in which we can define an authorization object (similar to what is done for vendor security in EDW).
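With close to 1,000 views to secure, hand-crafting each analytic privilege is impractical, so the provisioning above lends itself to scripting. A hedged sketch that only generates the SQL text (in newer HANA revisions analytic privileges can be expressed via CREATE STRUCTURED PRIVILEGE); the AP naming convention, view path, and cost-center values here are my own illustration, not SAP's:

```python
# Sketch: generate CREATE STRUCTURED PRIVILEGE statements for a batch
# of HANA Live views. The AP_ naming convention and the restriction
# values are illustrative assumptions, not an SAP standard.

def build_privilege_sql(view, attribute, values):
    """SQL to restrict a view to the given attribute values."""
    in_list = ", ".join(f"'{v}'" for v in values)
    ap_name = f"AP_{view.split('/')[-1].upper()}"
    return (f"CREATE STRUCTURED PRIVILEGE {ap_name} "
            f"FOR SELECT ON \"{view}\" "
            f"WHERE \"{attribute}\" IN ({in_list})")

sql = build_privilege_sql(
    "sap.hba.ecc/CostCenterPlanActualCostQuery", "CostCenter", ["1000", "2000"])
print(sql)
```

The generated statements can then be reviewed and grouped into roles, matching the AP-to-role-to-user flow described above.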



We selected a couple of queries, such as “CostCenterPlanActualCostQuery” and “LocationAndPlannedGroupCostQuery”. We were able to reconcile them back to the ECC system via Lumira and Analysis Office.

Lessons learned

I found that the Cost Center Hierarchy is missing from “CostCenterPlanActualCostQuery”. There are a couple of hierarchy queries available among the 1,018 calculation views, but none of them works as a hierarchy. I noticed that the “ProfitCenterQuery” uses the same set of tables required for the Cost Center Hierarchy; however, here are some challenges:

  • “Profit Center Group” in ProfitCenterNode is hardcoded (0106), so this query is not reusable for cost reporting.
  • The ProfitCenter queries don’t have any hierarchy. For some attributes the hierarchy property is set to “true”, but it doesn’t function as a hierarchy.
  • Any change to these reusable queries impacts the higher-level queries.
  • Any change to these queries could be overwritten by upgrades/patches.

Considering these challenges, I decided to build a calculation view for the Cost Center Hierarchy, which worked as I expected. This view can be joined with CostCenterPlanActualQuery to provide true hierarchy functionality for cost centers.
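To illustrate the idea behind that calculation view, here is a minimal sketch of flattening a parent-child cost-center hierarchy (as stored in ECC group tables) into root-to-leaf paths that a query view can join on. The group names and cost centers below are invented sample data:

```python
# Sketch: flatten a parent-child cost-center hierarchy into root-to-leaf
# paths, the shape a calculation view would expose as level columns.
# The parent -> children mapping below is invented sample data.

def flatten(children, root, path=()):
    """Yield (leaf, path-from-root) pairs for every leaf under root."""
    path = path + (root,)
    kids = children.get(root, [])
    if not kids:
        yield root, path
        return
    for kid in kids:
        yield from flatten(children, kid, path)

hierarchy = {
    "CC_ALL": ["CC_OPS", "CC_FIN"],   # top cost-center group
    "CC_OPS": ["1000", "1100"],       # operations cost centers
    "CC_FIN": ["2000"],               # finance cost center
}

for leaf, path in flatten(hierarchy, "CC_ALL"):
    print(leaf, "->", "/".join(path))
```

Joining such level columns onto the plan/actual query by cost center is what gives drill-down behavior the delivered queries lack.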

How big data and new technologies such as SAP HANA change the oil and gas industry – Part 1

In a previous blog, “Big Data: a mysterious giant IT buzzword”, I referred to Gartner’s definition of big data, which covers the 5 V’s, with a focus on some specific characteristics of upstream data:

  • Volume – Seismic data acquisition
  • Velocity – Real-time streaming data from well-heads, drilling equipment and sensors
  • Variety
    • Structured: standards and data models such as PPDM, SEG-Y, WITSML, PRODML, RESQML, etc.
    • Unstructured: images, log curves, well logs, maps, audio, video, etc.
    • Semi-structured: processed data such as analyses, interpretations, daily drilling reports, etc.
  • Veracity (Data Management practice to provide accurate and good quality data)
    • Pre-processing to identify data anomalies
    • Run integrated asset models
    • Combination of seismic, drilling and production data
  • Value
    • Faster decisions and enhanced production
    • Reduced costs, such as Non-Productive Time (NPT)
    • Reduced risks in the areas of health, safety and environment
    • Forecasting and planning using predictive analytics

The oil and gas industry generates significant data volumes through exploring for, developing and producing hydrocarbons. The industry conducts advanced geophysical modeling and simulation, and 2D, 3D and 4D seismic surveys generate significant data during the exploration phases. Thanks to new technologies, we are able to gather, integrate and interpret data received from thousands of data-collecting sensors, tracking activity in real time or near real time (NRT). This means the structured, semi-structured and unstructured datasets grow daily.

The oil and gas industry has started to recognize the importance of getting faster access to accurate data in order to make decisions more quickly. So far, most analysis has been done the way it historically was: within individual technical disciplines and relatively small geographical study areas. Now we see huge potential in using in-memory technologies such as SAP HANA and big data to learn much more from the data. We need access to the appropriate technology, tools and expertise to integrate and synthesize diverse data sources into a more manageable format and derive insight from these datasets. With big data analytics solutions, we can manage and control the volume and complexity of the data and break the barriers of geography and discipline to see the big picture. Currently only a handful of companies, such as Chevron and Shell, have adopted big data, but the future looks promising, and we expect big demand for big data, in-memory technology and analytics solutions. It will happen eventually!

What is the primary application of BI in oil and gas companies?

Business intelligence (BI) is a broad category of applications and technologies for gathering, storing, analyzing, and providing access to data to help enterprise users make better business decisions. These days, to run a competitive business, one needs to manage supply and demand very effectively. Some of the primary goals have been to shorten the time required to create reports and analyses, improve the accuracy of information, and create a single reporting repository. BI performance-management solutions help organizations identify and interpret business information to gain a better view of, and control over, the key drivers of high performance across Upstream, Midstream and Downstream.

BI solutions address three different levels of requirements:

  • Strategic – such as optimizing locations and sizes, and partnering with distributors and customers
  • Tactical – production, transportation and inventory decisions
  • Operational – daily production, source planning, inbound/outbound planning, production-to-supply level planning

As enterprise/solution architects, we design BI solutions to help oil and gas companies consolidate operations, monitor current progress, forecast the future effectively, and cut costs. Oil and gas products are commodities that compete on price, which makes the industry cost-conscious and highly dependent on commodity prices such as crude oil.

The most common application areas for BI have been providing daily, monthly, quarterly and yearly financial reports and supporting business operations, with a special focus on ERP. For example, BI can provide significant value in areas such as distribution network configuration, distribution strategy, trade-offs in logistics activities, inventory management and cash-flow management (the supply chain domain). Some supply chain KPIs that might be considered for a dashboard/scorecard are inventory turns, manufacturing metrics, percentage of units sold in a specific period, percentage of total stock, etc. Some KPIs related to oil and gas in general that might be of interest are meters drilled per day, drilling costs, quarterly exploration expenditure, strategic zones held under exploration license, percentage of market share of exploration expenditure, etc.
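As a concrete instance of one of these KPIs, inventory turns is simply cost of goods sold divided by average inventory over the period. A minimal sketch with invented figures:

```python
# Sketch: compute the inventory-turns KPI (COGS / average inventory).
# The dollar figures are invented sample data, not from any real company.

def inventory_turns(cogs, opening_inventory, closing_inventory):
    """How many times inventory is sold and replaced over the period."""
    average_inventory = (opening_inventory + closing_inventory) / 2
    return cogs / average_inventory

# e.g. $12M cost of goods sold against an average $2M of stock
turns = inventory_turns(12_000_000, 1_500_000, 2_500_000)
print(f"Inventory turns: {turns:.1f}")  # 6.0
```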

How to secure information views in SAP HANA



As more and more organizations implement SAP HANA native, S/4HANA or sidecar solutions, the need to understand how to provide access to, and secure, information views has emerged. The intent of this article is to provide the reader with a few technical details relevant to securing SAP HANA information views.

Before we describe how to secure an information view, let’s quickly define the various information views that are available within SAP HANA.

Attribute Views

Attribute views are created to serve as reusable views. Developers create attribute views to represent items such as customers, products, dates, salespersons and cost centers. Once activated, they can be joined to one or more analytic view data foundations. Within an attribute view, we can also create hierarchies.

Analytic Views

Analytic Views are created to serve as the SAP HANA Cube. When designing the analytic view, developers will design a data foundation using…


Big Data: A mysterious giant IT buzzword

In the world of technology there are a hundred definitions of “Big Data”; it is hard to settle on a single one when no standard definition exists. Like many other terms in technology, Big Data has evolved and matured, and so has its definition. Depending on whom we ask and in what industry or business field, we will get different definitions. Timo Elliott summarized some of the more popular ones in “7 Definitions of Big Data You Should Know About”.

You may be familiar with the three “V’s”, or the classic 3V model. However, this original definition does not fully describe the benefits of Big Data. Recently, it has been suggested to add two more V’s to the list: Value, and Verification or Veracity, which result from data management practices. As a BI expert who has been involved in Big Data, my approach is to give my clients a practical definition by emphasizing the main characteristics of the data and the purpose of Big Data in each specific area. I like Gartner’s definition, which is not too long. Gartner defines the Volume, Velocity and Variety characteristics of information assets not as three separate parts but as one part of the Big Data definition.

Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making. (Gartner’s definition of big data)

The second part of the definition addresses the challenges we face in making the best of infrastructure and technology capabilities. These types of solutions are usually expensive, and clients expect a cost-effective, appropriate solution to their requirements. In my opinion, this covers the other V, which relates to how we implement data management practices in the Big Data architecture framework and its lifecycle model.

The third part covers the most important element and ultimate goal: Value. Business value lies in the insight into data and in acting upon that insight to make better decisions. To have the right vision, it’s important to understand, identify and formulate business problems and objectives, knowing that practical Big Data solutions are feasible but not easy. So when I define Big Data for my clients, I use Gartner’s definition and explain the journey we need to take together to achieve their goals.

In any Big Data project, I start with the Big Data Architecture Framework (BDAF), which consists of data models, the data lifecycle, infrastructure, analytic tools, applications, management operations and security. One of the key components is high-performance computing and storage. Since Big Data technologies are evolving and more options are emerging, I focus on SAP HANA capabilities, which enable us to design practical and more cost-effective solutions. HANA may be only one part of the overall framework, but it is the most essential part. The beauty of SAP HANA is that it is not just a powerful database but a development platform providing real-time capabilities for both analytics and transactional systems. It enables us to move beyond traditional data warehousing and the significant time spent on data extraction and loading. In addition, we can take advantage of hybrid processing to design more advanced models. Another big advantage of HANA is the capability to integrate it with SAP and non-SAP tools.

So, why am I so excited about it? Looking around, I see tons of opportunities and brilliant ideas that could get off the ground with some funding. So far, HANA has been more successful in large enterprises with big budgets and larger IT staff. However, I’m also interested in encouraging medium-sized enterprises to see HANA’s potential to solve their problems. Most businesses don’t spend their budget to develop a solution; they are eager to pay to solve a particular problem. Our challenge as SAP consultants is to help businesses see that potential and how HANA will address their challenges. The good news is that SAP provides test environments and development licenses for promising startups.

Got your attention? Well, just to give you a glimpse, take a look at some of the success stories; there are many other cases if we look around. For instance, many applications these days capture geolocation data (trucking companies, transportation, etc.), which means capturing a reading every 10 seconds or so from every route section, every piece of equipment, every location. This can add up to petabytes of data! It is an excellent way to gain insight into the data, derive intelligence from it, and circulate that intelligence back into scheduling and movement processes. Another example is companies needing to mine social media for information about their products and connect that intelligence back to their back-end processes to increase customer engagement and satisfaction.
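A quick back-of-envelope calculation shows how such telemetry adds up. Every figure here (fleet size, sensors per vehicle, record size) is an assumption chosen for illustration, not a measured number:

```python
# Sketch: estimate telemetry volume for a hypothetical fleet.
# All inputs are illustrative assumptions, not measured figures.

vehicles = 10_000
sensors_per_vehicle = 50
record_bytes = 200
readings_per_day = 86_400 // 10   # one reading every 10 seconds

bytes_per_day = vehicles * sensors_per_vehicle * readings_per_day * record_bytes
bytes_per_year = bytes_per_day * 365

print(f"{bytes_per_day / 1e9:.0f} GB/day, {bytes_per_year / 1e12:.0f} TB/year")
# with these assumptions: 864 GB/day, ~315 TB/year,
# i.e. on the order of a petabyte within a few years
```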

So, do you have a Big Data challenge? With some funding, we can provide a cost-effective and practical solution that adds value to your business.

How technology changes us: Canada in 10 years

The theme of the Ideaca blogging network for the month of August is a very interesting subject. Technology certainly changes the way we do things daily, and significantly, not only in Canada but globally. There could be some Canada-specific cases, such as green technology to combat climate change, but most technology changes affect us globally, especially in more advanced countries.

The first thing that comes to my mind is that technology will enable us to convert a zettaflood (10 to the 21st bytes, or a thousand exabytes) of data into the meaningful information on which we are so dependent. Like it or not, business intelligence is already an increasingly important part of our lives. The challenge will be how to deal with the explosion of data coming from all types of gadgets and smart technologies, because value-based intelligent information helps us get things done faster, better and easier. The rapid adoption of cloud, mobile, real-time applications and social technologies, together with exponential data growth, makes staying current a big challenge.

How will technology solve this challenge? Some of the biggest improvements have been in networking: we will be able to move more data, faster, from many sources and applications to where it is needed. We won’t have restrictions in terms of capacity, scalability or processing speed. Organizations will leverage the three “V’s” (volume, variety and velocity) of data to augment its value for decision making. Powerful in-memory technology such as SAP HANA enables us to design complex predictive and preventative models for all types of data, structured and unstructured, such as audio and video files. The next generation of data-visualization and intelligent-reporting tools will empower users to slice and dice information any way they demand. We will be able to tell stories with data by connecting millions of data points into a bigger picture. Big data will change our world and blow our minds with the opportunities it provides. It will make our world smaller, and we will all be connected.

I believe that in the next 10 years, another significant change will be human-machine interaction. It seems that human interaction, communication and relationships will become more efficient, faster and stronger through smart technologies. We will also gain a better understanding of machine behavior, and machines will gain a better understanding of ours. Ideally, humans and machines will work alongside each other, rather than machines replacing humans. Although there are ongoing developments and opportunities to replace humans with machines, all the potential dangers and associated risks need to be considered.

Personally, I’m very excited to see how technology will enable us to access information easily, increase our potential and creativity, improve our lifestyle, promise longevity, and improve communication and social networking. On the other hand, I believe we need to keep things in balance with respect to human identity and our social behavior. For example, neuroscientists are concerned that modern technology is keeping us from using our brains to their full potential. Evidence suggests that loneliness and depression are increasing and that people are less happy in modern society. It has been observed that the newer generation, equipped with all kinds of smart technology, is less effective in communication skills and human interaction.

The bottom line is that we use technology to change the world to suit us better. The important thing is to control it so it doesn’t erode human intelligence and social interaction. For instance, it would be great to get a relaxing massage after a long day from our smart robot, which has already taken care of the house chores. However, nothing will replace a nice face-to-face talk with our favorite person, or giving someone a warm, friendly hug. I don’t think we can replace human connection with human-robot connection.