One Version of the Truth is Not Attainable

I recently met with a financial executive of a large distribution company, and he expressed his frustration.  The company had invested in a data warehouse and deployed a leading BI solution, yet they still had issues with “one version of the truth” (OVOT).  They spent significant time trying to validate which numbers were correct.

With a slight smile I said, “One Version of the Truth is Not Attainable.”  He looked at me in shock, because many highly paid consultants had told him he just needed to invest in a data warehouse and master data management (MDM) and then put a BI tool on top.  I explained that for most organizations, especially those growing through acquisitions, one version of the truth will not be a reality.  However, there are definite steps and considerations that can get you closer to “One Version of the Truth”.

1. Leadership needs to come from the business executives. Guiding your organization to realize the benefits of OVOT requires strict discipline. The executive sponsor needs to plan a strategy for implementing the solution, looking at both short-term and long-term goals.
a. Putting the right resources in place. Including knowledgeable subject matter experts on the core team may require pulling them away from their regular jobs and having others backfill, but the benefits will be tremendous.
b. Providing appropriate funding. By setting short- and long-term goals, you can prove incremental ROI and justify future expenditures.
c. Setting priorities. The leadership team needs to set the priorities and communicate them to the entire organization.  This should include establishing incremental steps to measure ROI along the way.  Oftentimes, other organizational priorities will conflict with the OVOT initiative, so executives must set the priorities.
d. Identifying and communicating the risks of bad data. The business is the real owner and creator of data; IT protects it.  Executives need to communicate that.  To be successful, the business must take ownership and understand the risks of bad or incomplete data.  When this message comes from the top, people listen.

2. Master Data Management (MDM) is not an IT project.  The business must be involved in MDM, that is, in cleaning and managing the data.  We recommend assigning someone with in-depth knowledge of the business to the team, as they can understand and articulate the impact of poor data quality on the business.
3. Strict change management. Once the data warehouse and MDM foundation have been established, strict change management rules must be put in place and enforced.  Various subsystems and business units should not operate in silos where their changes can impact other areas of the business.
4. Silos of data in an acquisition. The reality is that if you are in acquisition mode, you will always have data sources that are not fully migrated to the corporate structures and systems.  You must plan both short-term and long-term solutions to deliver reporting and analysis that include data that has not yet come on board. I refer to this as the MacGyver approach, because you have to make it work with what you have on hand.
5. Eliminate “duct tape and super glue” reporting. Many talented BI report writers have been instructed to just make the reports work, so despite challenges with the data, they write queries and code into the reports to make them run.  I call this the “duct tape and super glue” method.  Report writers need to work with the data team and communicate the data challenges so issues can be corrected at the source. This problem is magnified as the organization rolls out self-service to end users.  Duct tape and super glue will not withstand the pounding of end users.

If you would like to learn more about how to navigate data and realize an increased ROI on your Business Intelligence investment, contact Lodestar Solutions about our new Executive Coaching for Business Analytics at 813-254-2040 or email us at

Don’t Drive Blind – Incorporate Planning Data with TM1

Every company is wrapped up in its own data.  How do we capture our data?  How do we analyze our data?  How do we report on our data?  It seems like the ultimate goal in many companies is to report on their data and get through the monthly meetings with a pretty presentation. Read on to learn about Cognos TM1 and how to incorporate planning data.

We’ve all heard the analogy that when you have a reporting package without incorporating a forecasting/planning package, it’s like driving with your eyes focused on the rearview mirror and not on the road ahead.

Cognos TM1 is a great product for incorporating your planning with your reporting, bringing your data together so you can plan the road ahead.  But there is still a piece that many companies are missing in tying this data together.  While you might be lucky enough to be the only car on the road when commuting to work, no company is the only car on the road in its industry.

Get your side-view mirrors adjusted and start leveraging Cognos TM1’s strength by extending your model to incorporate competitor and industry data as well.  Granted, it’s not always easy to get the data if a competitor is not publicly traded, but there are many ways to get general industry data with a bit of due diligence.

Since internal resources may be very lean, this may be a task you take on at quarter end, something Accounting can help Finance with between close periods.  When you begin seeing the data side by side, you may be amazed at what you find.

Star Schema or Snowflake Schema? To Be or Not to Be

To star or to snowflake, that is the question…Whether star schema or snowflake schema models perform better is an age-old debate among database developers.

Both star schema and snowflake schema are relational models made up of fact and dimension tables. The difference is that a snowflake dimension is made up of several highly normalized tables that remove redundant attributes, whereas a star schema dimension is a single denormalized contextual table.  Both have a place in data warehousing, depending on the objective.  However, in most cases when the end objective is killer analytical query performance, the data mart/database should be modeled using star schema techniques.

When an end user poses a question against a normalized database, the system must search through numerous tables and records before it can return the answer, which murders performance. In a star schema model, however, denormalized contextual dimensions surround a single fact table. This allows the system to point to the dimensions that are contextually necessary and to ignore any table whose context is not referenced in the question.
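As a minimal sketch of this idea (the table and column names here are hypothetical, and SQLite stands in for a real warehouse database), a star schema puts denormalized dimensions around one fact table, and a query touches only the dimensions it actually references:

```python
import sqlite3

# In-memory database with a tiny hypothetical star schema:
# one fact table surrounded by denormalized dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    product_name TEXT,
    category TEXT            -- denormalized: category lives on the dimension
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    calendar_date TEXT,
    fiscal_quarter TEXT
);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL
);
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Manual", "Books")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(10, "2013-01-15", "Q1"), (11, "2013-04-02", "Q2")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 11, 50.0), (2, 10, 25.0)])

# The query joins only the dimension it needs (product); dim_date
# is ignored entirely because its context is not referenced.
rows = cur.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # → [('Books', 25.0), ('Hardware', 150.0)]
```

In a snowflake model, category would live in its own normalized table, and the same question would require an extra join before the answer could come back.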

Since Framework Manager does not store any data in memory and is only a pass-through metadata application, it is especially important that the data source be optimized for maximum query performance. Implementing Framework Manager on a poorly designed database will only produce poor results, agitating your end users with subpar query speed and accuracy.

As stated above, there are a few instances when it makes sense to snowflake (outrigger) a dimension in a database designed for analytical querying. For instance, if a dimension has a large number of contextually related attributes (roughly 20 or more), then it makes sense to build an outrigger table.
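A hedged sketch of the outrigger case (all names are illustrative, again using SQLite): a wide customer dimension hands off a related cluster of demographic attributes to its own table, keyed back to the dimension:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# When a dimension carries a large cluster of contextually related
# attributes, move that cluster to an outrigger table keyed back
# to the dimension, rather than fully snowflaking everything.
cur.executescript("""
CREATE TABLE outrigger_demographics (
    demographics_key INTEGER PRIMARY KEY,
    age_band TEXT,
    income_band TEXT,
    household_size INTEGER
);
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_name TEXT,
    city TEXT,
    demographics_key INTEGER REFERENCES outrigger_demographics(demographics_key)
);
""")

cur.execute("INSERT INTO outrigger_demographics VALUES (1, '30-39', '50-75k', 3)")
cur.execute("INSERT INTO dim_customer VALUES (1, 'Jane Doe', 'Tampa', 1)")

# Queries that need demographics pay one extra join; all other
# queries against dim_customer never touch the outrigger.
row = cur.execute("""
    SELECT c.customer_name, d.age_band
    FROM dim_customer c
    JOIN outrigger_demographics d USING (demographics_key)
""").fetchone()
print(row)  # → ('Jane Doe', '30-39')
```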

Unlike Hamlet, you do not have a choice whether your BI system lives a miserable existence because of poor query performance or dies a tragic death. The silver lining is a well-designed data warehouse using star schema dimensional modeling.

Database Documentation Made Easy With APEX SQL

Having to create documentation is like having to go to the dentist. You know you have to go and you dread it every six months. Database documentation is similar because everyone dreads it and whenever it is mentioned you can find hundreds of reasons to do other things until your teeth start to hurt.

Well, I have a great dentist for you and your SQL database. The product is called APEX SQL. First, before I give you a review and show you how to use it, there are a few points I have to make. The product only works well if you have a normalized database with proper indexes, primary keys/foreign keys, and good object naming conventions. If your database does not conform to a good database standard, then documentation is going to hurt, because the nice dentist won’t work on your teeth until you start brushing and flossing on a regular basis. Don’t worry, I will tell you how to make your database conform to a standard good enough to get you in the chair.

You might be thinking there aren’t any real standards out there, especially for SQL. You are right, so here are a few rules that I use when creating a database.

  1. Third normal form (3NF) – I try to adhere to this rule to manage the size of my database and to ensure I store each piece of data only once.
  2. Avoid protected naming – SQL Server uses prefixes to identify its own objects versus objects created by a user. For example, you should never prefix your stored procedures with sp_, because all of the system stored procedures have the same prefix. The impact is that whenever you call your stored procedure, SQL Server will look through its own objects first and then through the user objects. That might not seem like a big deal, but if your database becomes sizable, it will be. There are enough letters and characters available…use your own.
  3. Primary and foreign keys – In a relational database, primary and foreign keys should be assigned. They tell the database how tables are related to one another and are the core component of creating an ERD (entity relationship diagram).
  4. Indexes – In a relational database, there should be an index on any table that stores large amounts of data. There are differing opinions about the blessing and curse of indexes. Done correctly, they are a blessing because they speed up result delivery, though they may increase the size of your database. The curse is that if you index on columns that don’t reduce the need for a full table scan, you have simply made the database larger without speeding up result delivery.
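Rules 3 and 4 can be sketched in a few lines (table and index names are illustrative, and SQLite stands in for SQL Server here): declare the keys so the database knows how tables relate, index the join column, and let the foreign key reject bad data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this opt-in
cur = conn.cursor()

cur.executescript("""
CREATE TABLE Customer (
    CustomerId INTEGER PRIMARY KEY,      -- rule 3: primary key
    CustomerName TEXT NOT NULL
);
CREATE TABLE CustomerOrder (
    OrderId INTEGER PRIMARY KEY,
    CustomerId INTEGER NOT NULL
        REFERENCES Customer(CustomerId), -- rule 3: foreign key
    OrderDate TEXT
);
-- rule 4: index the column used to join/filter a large table
CREATE INDEX IX_CustomerOrder_CustomerId ON CustomerOrder(CustomerId);
""")

cur.execute("INSERT INTO Customer VALUES (1, 'Acme')")
cur.execute("INSERT INTO CustomerOrder VALUES (100, 1, '2013-06-01')")

# The foreign key now rejects orphaned rows (no customer 99 exists).
try:
    cur.execute("INSERT INTO CustomerOrder VALUES (101, 99, '2013-06-02')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # → True
```

Declared keys like these are also exactly what a documentation tool reads to build its ERD, which is why a database without them produces such thin output.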

Now that we have some basic standards in place, we can talk about how APEX SQL will help with documentation. It is a server-side installation, which means it has to be installed on the same server where the SQL databases live. Once you have it installed, you can use the wizard, and it will read every object in the database. If you followed my basic rules, it will create fantastic documentation for you in less time than it would take you to rinse out your mouth at the dentist! After it is done, you can use the Windows scheduler to generate your documentation on a routine basis. How nice is that?! Now, whenever you change your database, your documentation is updated automatically.

The best part of APEX SQL is that you can try it before you buy it. Take it for a test run and see how easy it will make your life. Remember that you must have the basics covered in your design or it won’t work as well.

Everyone will be impressed that your database documentation is always up to date. Get ready for everyone to compliment you on your nice smile!