SharePoint, PowerPivot, Power View and Multidimensional Analysis Services

This article was written to help me sort out SharePoint, PowerPivot, Power View and Analysis Services. Read on to discover the questions that arose as I embraced this technology set and the answers I’ve uncovered.

So what are the options for working with Power View within SharePoint with MDX as the source cube? Let me frame this a bit: I am referring to SharePoint 2013 with SQL Server 2012 SP1 or SQL Server 2014. It appears that PowerPivot becomes the data source for Power View reports; that is, a PowerPivot workbook saved within a Document Library. So it seems feasible that a PowerPivot report built against an MDX cube and then stored in a Document Library should work. The first test, then, is the Document Library, PowerPivot, and MDX cube combination. If this works the same way as it does with a tabular cube, then we are one step closer to a complete solution.


Network Load Balancing and Reporting Services

While recently doing a BPC install in a Network Load Balanced (NLB) environment, I ran into an error when testing Reporting Services on the second application server. When clicking on the Report Manager URL I would receive the error:

The report server installation is not initialized

It turns out I needed to go back to the first server, run Reporting Services Configuration Manager, select Scale-out Deployment, and then join the second server.
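One way to confirm the join took effect is to query the Keys table in the ReportServer catalog database. This is an internal, undocumented table, so treat the following as a hedged sketch; each joined instance should appear as a row with its machine name:

-- Sketch: list the report server instances joined to the scale-out
-- deployment. dbo.Keys is internal and undocumented, so column names
-- may vary between SSRS versions.
USE ReportServer
SELECT MachineName, InstanceName
FROM dbo.Keys
WHERE MachineName IS NOT NULL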


Clearing All Rows From All Tables

These SQL statements, built on the undocumented sp_MSForEachTable procedure, will delete all the rows from all tables. The script:

  1. Disables referential integrity
  2. DELETEs or TRUNCATEs each table
  3. Re-enables referential integrity
  4. Reseeds each table's identity column

-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO

-- DELETE tables that are referenced by a foreign key (TRUNCATE fails on
-- them even with constraints disabled); TRUNCATE everything else
EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasForeignRef'') = 1
    DELETE FROM ?
ELSE
    TRUNCATE TABLE ?
'
GO

-- enable referential integrity again
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO

-- This will reseed each table (skip this EXEC if you don't want all your
-- identity seeds reset)
EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1
    DBCC CHECKIDENT (''?'', RESEED, 0)
'
GO

Thanks to Mauro Cardarelli’s post
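To verify the purge, here is a quick sanity check against the catalog views (sys.partitions row counts are approximate for heaps, but any non-zero count flags a table that still holds data):

-- List any user tables that still contain rows after the purge.
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id
    AND p.index_id IN (0, 1)  -- 0 = heap, 1 = clustered index
GROUP BY t.name
HAVING SUM(p.rows) > 0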

BPC Processor Preferences

Based on this SQL Server Central post: a processor with more cores is better for Data Warehouse / Decision Support Systems (smells like OLAP), while at this time an Intel processor is the better choice for a transactional (OLTP) system.

SAP Supported SQL and Windows Server Versions for BPC

Windows Server

BPC 7.5 SP04 or later:

Windows Server 2008 R2 Standard or Enterprise Edition, with or without Hyper-V
Windows Server 2008 Standard or Enterprise Edition, with or without Hyper-V
Windows Server 2008 Data Center, with or without Hyper-V
Windows Server 2003 Standard or Enterprise Edition SP2
Windows Server 2003 Standard (for the SQL database or OLAP server components only)
Windows Server 2003 R2 Standard or Enterprise Edition SP2

SQL Server

BPC 7.0 and 7.5 support:

SQL 2008: SP2 with Cumulative Update 1 for SP2.
SQL 2008 R2: Cumulative Update 5.
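When validating an instance against these lists, SERVERPROPERTY reports the build, service pack level, and edition:

-- Check the version, service pack level, and edition of the instance.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition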

Loading BPC Data the Old Fashioned Way; Using SQL

Thanks to Gert Andries van den Berg on the SAP BPC forums for making sense of all this. In a nutshell: by writing records to either the WB or FAC2 table, the cube is updated. There is no need to build an SSIS package with a Dumpload task; this can be done with SQL. Writing to the FAC2 table seems to be the destination of choice, though I've yet to determine whether running an optimization is required before the data shows up in reports.
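A minimal sketch of such an insert, assuming a hypothetical application named Finance (BPC fact tables follow the pattern tblFAC2<ApplicationName>, with one column per dimension plus SIGNEDDATA, so the actual column list depends on your application's dimensions):

-- Hypothetical example: write one record directly into the FAC2 table.
-- Table name and dimension columns are assumptions for an application
-- named Finance; substitute your application's dimensions and members.
INSERT INTO dbo.tblFAC2Finance
    ([ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [TIME], [SIGNEDDATA])
VALUES
    ('REVENUE', 'ACTUAL', 'INPUT', 'US01', '2011.JAN', 125000.00)

Because FAC2 backs a MOLAP partition, the new rows will not show up in the cube until that partition is processed; as described below, a Lite or Incremental Optimization takes care of that.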

From Gert Andries van den Berg:

As per the tuning doc:

WB – real-time data input (ROLAP partition)
This is the most current data sent to the system. Data from BPC for Excel data sends and Investigator browser data sends is placed in real-time storage.

FAC2 – short-term storage and Data Manager imports (MOLAP partition)
This is data that is not real-time, but is also not yet in long-term storage. When you load data via Data Manager (automatic data load from external data sources), it goes to short-term storage so that the load does not affect system performance. Only the cube partition associated with this table is processed, so the system is not taken offline.

Fact – long-term history (MOLAP partition)
This is the main data storage. All data eventually resides in long-term storage. Data that is not accessed very often remains in long-term storage so that the system maintains performance.

This structure allows SAP BPC to maintain the same performance over time even when data volumes grow substantially. Periodically clearing real-time data greatly optimizes the performance of the system, and an “Optimization” process is required to do so (this can be scheduled automatically based on parameters like a number-of-records threshold).

Lite Optimization:

Clears Real-time data storage (WRITEBACK) and moves it to short-term data storage (FAC2). This option doesn’t take the system offline, and can be scheduled during normal business activity.

Incremental Optimization:

Clears both real-time and short-term data storage (WB and FAC2) and moves both to long-term data storage (FACT).

This option does not take the system offline, but it should be run during off-peak periods of activity.

Full Process Optimization:

Clears both real-time and short-term data storage and processes the dimensions.

This option takes the system offline and takes longer to run than the incremental optimization.

It is best scheduled for down-time periods – for example, after a month-end close.
The Compress Database option is available to rationalize the fact tables: “Compress” sums multiple entries for the same CurrentView into one entry so that data storage space is minimized. Compressed databases also process more quickly.

More info on this topic from Sorin Radulescu:

First you have to be aware of the structure of BPC cubes. Each cube has three partitions:
1. FACT – MOLAP
2. FAC2 – MOLAP
3. WB – ROLAP

When you insert records into the WB table, you see the impact in the cube in real time, because WB backs a ROLAP partition. If you insert records into either of the MOLAP partitions without processing the partition, those records will not be visible in the cube. That is the essential difference between the MOLAP and ROLAP partitions here.

Lite Optimize exists simply to keep the number of records in the WB table under control. In SSAS, once a ROLAP partition holds more than 100,000 records, retrieving data from the cube becomes very slow while users are simultaneously inserting into the WB table. So Lite Optimize is usually scheduled every 15 minutes with a 20,000-record threshold: every 15 minutes the DTSX package checks whether WB holds more than 20,000 records; if so, it runs the process, otherwise it does nothing. A sketch of that check appears below.
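In T-SQL, that threshold check could look like the following (the table name assumes a hypothetical application named Finance; BPC write-back tables follow the pattern tblFactWB<ApplicationName>):

-- Hypothetical threshold check mirroring what the scheduled package does.
IF (SELECT COUNT(*) FROM dbo.tblFactWBFinance) > 20000
    PRINT 'WB is over the threshold: run Lite Optimize'
ELSE
    PRINT 'Nothing to do'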

The Lite Optimize process performs the following steps:
1. Copy records from WB to FAC2, marking the copied records in WB.
2. Create a temporary partition and process it for just the records moved from the WB table.
3. When the temporary partition has finished processing, the system does the following in one transaction:
– merge the FAC2 partition with the temp partition
– delete the marked records from WB

Exposing Reporting Services Reports to the Internet

By using the ASP.NET ReportViewer control on a web page exposed to the internet, it is possible to display Reporting Services reports. The trick is turning on Anonymous Access on the ReportServer virtual directory, creating the anonymous user as a Reporting Services user, and granting that user permission to view the reports you want to share.

In the ReportViewer control, configure the ReportViewer Tasks as follows:

  1. Choose Report
  2. Report Server Url: the path to the ReportServer virtual directory
  3. Report Path: the path to the report, beginning with ‘/’

On the web server hosting Reporting Services, configure the ReportServer virtual directory as follows:

  1. Turn on Anonymous Access
  2. Remember the Anonymous Access user

In Report Manager do the following:

  1. Configure the Anonymous Access user as a Reporting Services user
  2. Grant that user access to the necessary folders and/or reports

Deploy your web page with the ReportViewer control to your internet server and you are good to go.

Find Which Table(s) Contain a Column, and the INFORMATION_SCHEMA Namespace

To find which table(s) contain the PersonID column try this:

SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%PersonID%'

In the AdventureWorks DB this returns the list of tables containing a PersonID column. And of course you can search for any column, not just PersonID.

Another interesting tidbit is that any ANSI-compliant DBMS provides the INFORMATION_SCHEMA namespace. From this namespace you have access to the metadata on any DB object. You can look up information on stored procedures and functions using INFORMATION_SCHEMA.ROUTINES, or use any of the many views within this namespace: COLUMNS, ROUTINES, CHECK_CONSTRAINTS, PARAMETERS, TABLES, etc.
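For instance, a similar search over stored procedures and functions (note that ROUTINE_DEFINITION holds only the first 4,000 characters of a routine's definition, so matches deep inside very long routines may be missed):

-- Find routines whose definition mentions PersonID.
SELECT ROUTINE_NAME, ROUTINE_TYPE
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%PersonID%'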

Enjoy