Data Modeling Master Class

Gain the skill of creating precise diagrams of the terms essential to the success of your initiatives.

The Master Class is the complete data modeling course, covering practical techniques for producing conceptual, logical, and physical relational, dimensional, and NoSQL data models. After learning the styles and steps for capturing and modeling requirements, you will apply a best-practices approach to building and validating data models through the Data Model Scorecard®. You will know not just how to build a data model, but how to build a data model well. Several case studies and many exercises reinforce the material and will enable you to apply these techniques to your current projects.

Virtual or In-person?

The Data Modeling Master Class delivers over 20 hours of instruction and exercises to make you competent in data modeling, in either a virtual or an in-person format. The in-person class runs three days, with an optional fourth-day workshop; the virtual class runs three full days or five extended half days. The virtual class contains additional activities to keep it interactive and engaging.

 

Public or dedicated?

Register for one of the upcoming classes, or bring Steve in (virtually or in person) to teach a dedicated class to your group. If three people from the same company register for a public class, a fourth attends free. See the upcoming classes, or complete the contact form to receive a quote for a dedicated class.

 

Course materials

Each registration includes over $400 worth of books: digital copies of four of Steve’s titles (The Rosedata Stone, Data Modeling for MongoDB, Data Model Scorecard, and Data Modeling Made Simple) plus a digital or print copy of the Data Modeling Master Class Training Manual.

 

Feedback

I am really enjoying your class. I actually love this format; for some reason I am able to laser-focus here in my house versus in a large classroom setting. I am brand new to data modeling, so this is EXACTLY the training I have been needing. – Wendy D.

 

The Data Modeling Master Class provides a great foundation in data modeling by using real-world examples, in-class exercises and quizzes, and analogies to make the complex simple – all of which was done online. Thank you, Steve! – Britton T.

 

The virtual Zoom Data Modeling Master Class was phenomenal. The pace was perfect, even with the varying levels of data modeling experience among the participants. Steve made it fun and engaging, and seeing him on video showed the passion he has for data modeling. – Dana K.

 

The Data Modeling Master Class exceeded my expectations. The class was loaded with useful data modeling techniques, strategies, references, and fun. Steve Hoberman’s passion for data is contagious. From this training, I’ve learned not only the how but, more importantly, the why. The holistic view of data definitely helps me see data from the enterprise level. – Crystal C.

 

I believe your personal experience, enthusiasm for the subject, and the examples used helped demonstrate how relevant data modeling is to business, applications, and data analytics. I want to thank you for the excellent practical exercises in the sessions you ran for us over the five days last week, not to mention the ‘survivor quiz’ questions :). I especially appreciate the way you encourage questions and interaction, making it feel safe to participate in open discussions. You are very responsive and offer good suggestions when answering questions. I strongly recommend this class to anyone who wants to be successful in data modeling or anything to do with data management. – Movin D.

 

I found the data modelling course very enlightening. Your training has made data modelling a lot less daunting, especially from the perspective of eliciting information, by providing a framework of what questions to ask when. The training has also grounded the whole data modelling journey for me by showing me that the conceptual and logical models serve distinct functions and that it is not only the physical model that matters. – Leon S.

 

I am really enjoying the class. I know you cannot always see or hear us, but you have me laughing every day at your jokes – especially today when you were playing actor Steve and switching between CEO Steve and Data Modeler Steve. – Jennifer R.

 

This was a fantastic class. I thought that learning remotely would be challenging, but I honestly felt like I was sitting in a classroom with you! Your enthusiasm, energy, and knowledge make for a great learning environment. – Sue K.

 

I really enjoyed your class and got a lot out of it. I was a little hesitant about how a three-day virtual class would work, especially with so much interactive material, and I was extremely impressed by the organization and by how much interaction occurred during the class. In some ways, it was more interactive than a physical class, as the students could ‘talk’ via instant message throughout and ask questions of the instructor while the question was still top of mind, without interrupting the flow of the training. – Jim B.

 

Top 5 Objectives

  1. Determine how and when to use each data modeling component.
  2. Apply techniques to elicit data requirements as a prerequisite to building a data model.
  3. Build relational and dimensional conceptual, logical, and physical data models.
  4. Assess the quality of a data model across ten categories: Correctness, Completeness, Model Scheme, Structure, Abstraction, Standards, Readability, Definitions, Consistency, and Data.
  5. Incorporate supportability and extensibility features into the data model.

 

Prerequisite(s)

This course assumes no prior data modeling knowledge and therefore has no prerequisites. It is designed for anyone with one or more of these terms in their job title: “data”, “analyst”, “architect”, “developer”, “database”, and “modeler”.

 

Topics

Part 1: Modeling Basics

Assuming no prior knowledge of data modeling, we introduce our first case study, which illustrates four important gaps filled by data models. Next, we explain data modeling concepts and terminology and provide a set of questions you can ask to build a data model quickly and precisely. We also explore each component of a data model and practice reading business rules. We will complete several exercises, including one on creating a data model from an existing set of data; a brief sketch after the question list shows several of these components in working form. You will be able to answer the following questions by the end of this section:

  1. What is a data model and what characteristic makes the data model an essential wayfinding tool?
  2. How does the 80/20 rule apply to data modeling?
  3. What three critical skills must the data modeler possess?
  4. What six questions must be asked to translate ambiguity into precision?
  5. Why is precision so important?
  6. What three situations can ruin a data model’s credibility?
  7. Why are there at least 144 ways to model any situation?
  8. What do a data model and a camera have in common?
  9. What are the most important questions to ask when reviewing a data model?
  10. What are entities, attributes, and relationships?
  11. Why subtype, and how do exclusive and non-exclusive subtypes differ?
  12. How do different modeling notations represent subtypes?
  13. What are candidate, primary, natural, alternate, and foreign keys?
  14. What are the perceived and actual benefits of surrogate keys?
  15. What are cardinality and referential integrity, and how do they improve data quality?
  16. How do you “read” a data model?
  17. What are the different ways to model hierarchies and networks?
  18. What is recursion, and why is it such an emotional topic?
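
To ground a few of these components before the exercises, here is a minimal sketch (an illustration of ours, not the course’s case study) using Python’s built-in sqlite3 module and an invented Customer/Order example: entities become tables, attributes become columns, a primary key identifies each instance, and a foreign key with referential integrity enforces the one-to-many relationship.

    import sqlite3

    # In-memory database; all table and column names below are invented.
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

    # Entity -> table, attribute -> column, primary key -> PRIMARY KEY.
    conn.execute("""
        CREATE TABLE Customer (
            customer_id   INTEGER PRIMARY KEY,   -- surrogate key
            customer_name TEXT NOT NULL
        )""")

    # The foreign key captures the rule "a Customer places many Orders";
    # NOT NULL makes the relationship mandatory on the Order side.
    conn.execute("""
        CREATE TABLE CustomerOrder (
            order_id    INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES Customer(customer_id),
            order_date  TEXT NOT NULL
        )""")

    conn.execute("INSERT INTO Customer VALUES (1, 'Acme Ltd')")
    conn.execute("INSERT INTO CustomerOrder VALUES (10, 1, '2024-01-15')")

    # Referential integrity in action: an order for a nonexistent customer is rejected.
    try:
        conn.execute("INSERT INTO CustomerOrder VALUES (11, 99, '2024-01-16')")
    except sqlite3.IntegrityError as e:
        print("Rejected:", e)   # FOREIGN KEY constraint failed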

 

Part 2: Overview of the Data Model Scorecard®

The Scorecard is a set of ten categories for validating a data model. We will explore best practices from the perspectives of both the modeler and the reviewer, and you will be provided with a template to use on your current projects. Each of the following ten categories heavily impacts the usefulness and longevity of the model:

  1. Ensuring the model captures the requirements
  2. Validating model scope
  3. Understanding conceptual, logical, and physical data models
  4. Following acceptable modeling principles
  5. Determining the optimal use of generic concepts
  6. Applying consistent naming standards
  7. Arranging the model for maximum understanding
  8. Writing clear, complete, and correct definitions
  9. Fitting the model within an enterprise architecture
  10. Comparing the metadata with the data

 

Part 3: Ensuring the model captures the requirements

There is no one way to elicit requirements; rather, success requires knowing when to use particular elicitation techniques, such as interviewing and prototyping. We will focus on techniques to ensure the data model meets the business requirements. You will be able to answer the following questions by the end of this section:

  1. What is the Requirements Lifecycle?
  2. Why do we “elicit” instead of “gather” requirements?
  3. When should you use closed questions vs. open questions during an interview?
  4. How do you perform data archeology during artifact analysis?
  5. What are two creative prototyping techniques for the non-techie?
  6. How can you validate that a data model captures the requirements without showing the data model?

 

Part 4: Validating model scope

We will focus on techniques for validating that the scope of the requirements matches the scope of the model. If the scope of the model is greater than the requirements, we have a situation known as “scope creep.” If the model scope is less than the requirements, we will be leaving information out of the resulting application. You will be able to answer the following questions by the end of this section:

  1. How do you define “metadata” in practical terms?
  2. What techniques can you use to avoid scope creep?
  3. When is observation (job shadowing) an effective way to capture requirements?
  4. What are the different techniques for initiating an interview?
  5. What are the three job shadow variations?
  6. How can prototyping assist with defining model scope?

 

Part 5: Understanding conceptual, logical, and physical data models

The conceptual data model captures a business need within a well-defined scope, the logical data model captures the business solution, and the physical data model captures the technical solution. Relational, dimensional, and NoSQL techniques will be described at each of these three levels. We will also practice building several data models; a brief sketch after the question list contrasts a small star schema with a document structure. You will be able to answer the following questions by the end of this section:

  1. How do relational and dimensional models differ?
  2. What are the ten different types of data models?
  3. What are the five strategic conceptual modeling questions?
  4. Why are conceptual and logical data models so important?
  5. What are the Concept and Question Templates?
  6. What are four different ways of communicating the conceptual?
  7. What are six conceptual data modeling challenges?
  8. What are the five steps to building a conceptual data model?
  9. What is the difference between grain, base, and atomic on a dimensional model?
  10. What are the three different paths for navigation on a dimensional data model?
  11. What are the differences between transaction, snapshot and accumulating facts?
  12. What are the three different variations of conformed dimensions?
  13. What are junk, degenerate, and behavioral dimensions?
  14. What are outriggers, measureless meters, and bridge tables?
  15. What are some dimensional modeling do’s and don’ts?
  16. How can you leverage the grain matrix to capture a precise and program-level view of business questions?
  17. What is the difference between a star schema and a snowflake?
  18. What is normalization and how do you apply the Normalization Hike?
  19. What is the Attributes Template?
  20. Where should denormalization be performed on your models?
  21. What are the five denormalization techniques?
  22. What is the difference between aggregation and summarization?
  23. What are the three ways of resolving subtyping on the physical data model?
  24. What are views, indexing, and partitioning and how can they be leveraged to improve performance?
  25. What are the four different types of Slowly Changing Dimensions?
  26. What is the lure of NoSQL?
  27. What are the four characteristics in which NoSQL differs from RDBMS?
  28. What are Document, Column, Key-value, and Graph databases?
  29. What are the advantages and disadvantages of going “schema-less”?
  30. What is the difference between ACID and BASE?
  31. What is MongoDB and is there a difference between a physical and implementation data model?

 

Part 6: Following acceptable modeling principles

We will cover Consistency, Integrity, and Core modeling principles; a brief sketch after the question list illustrates two of them. You will be able to answer the following questions by the end of this section:

  1. What tools exist to automate checking model structure?
  2. What are circular relationships and why are they evil?
  3. Why are good default formats really bad?
  4. What are the most common structural violations on a data model?
  5. Why should you avoid redundant indexes?
  6. Why shouldn’t an alternate key be null?
  7. How do you catch definition inconsistencies?
  8. What is a partial key relationship?
  9. Why must a subtype have the same primary key as its supertype?
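
Two of these principles are easy to show in code. The sketch below is ours, with an invented Employee/Manager example in Python’s sqlite3: it pairs UNIQUE with NOT NULL so the alternate key cannot be null, and it gives the subtype the same primary key as its supertype.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")

    # An alternate key is still a key: UNIQUE alone permits NULLs in SQLite
    # (and most databases), so pair it with NOT NULL.
    conn.execute("""
        CREATE TABLE Employee (
            employee_id     INTEGER PRIMARY KEY,     -- primary key
            national_id_num TEXT NOT NULL UNIQUE     -- alternate key
        )""")

    # A subtype shares its supertype's primary key: the same column is both
    # primary key and foreign key, so every Manager *is* an Employee.
    conn.execute("""
        CREATE TABLE Manager (
            employee_id INTEGER PRIMARY KEY
                        REFERENCES Employee(employee_id),
            bonus_level INTEGER
        )""")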

 

Part 7: Determining the optimal use of generic concepts

Abstraction is a technique for redefining business terms as more generic concepts, such as Party and Event. This module will explain abstraction and cover where it is most useful; a brief sketch after the question list shows the Party pattern as tables. You will be able to answer the following questions by the end of this section:

  1. What is abstraction and at what point in the modeling process should it be applied?
  2. What three questions (known as the “Abstraction Safety Guide”) must be asked prior to abstracting?
  3. What is the high cost of having flexible structures?
  4. How does abstraction compare to normalization?
  5. What are the three levels of data model patterns?
  6. Why are roles so important to analytics?
  7. What are metadata entities?
  8. Why does context play a role in distinguishing event-independent from event-dependent roles?
  9. What are industry data models and where do you find them?
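
The sketch below (ours, with invented names, again using Python’s sqlite3) shows the widely used Party pattern: one generic entity plus a role entity, so a new business term becomes a row rather than a schema change.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")

    conn.executescript("""
        -- Instead of separate Customer, Supplier, and Employee entities,
        -- the generic Party concept stores the person or organization once...
        CREATE TABLE Party (
            party_id   INTEGER PRIMARY KEY,
            party_name TEXT NOT NULL
        );

        -- ...and PartyRole records how that party relates to us, so a new
        -- role ("Auditor", say) is a row, not a schema change. The cost of
        -- this flexibility: the business terms vanish from the diagram.
        CREATE TABLE PartyRole (
            party_id  INTEGER REFERENCES Party(party_id),
            role_name TEXT NOT NULL,
            PRIMARY KEY (party_id, role_name)
        );
    """)

    conn.execute("INSERT INTO Party VALUES (1, 'Acme Ltd')")
    conn.execute("INSERT INTO PartyRole VALUES (1, 'Customer')")
    conn.execute("INSERT INTO PartyRole VALUES (1, 'Supplier')")  # same party, new role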

 

Part 8: Applying consistent naming standards

Consistent naming standards bring your organization one step closer to a successful enterprise architecture. We will focus on techniques for applying naming standards; a brief sketch after the question list shows how such a standard might be checked programmatically. You will be able to answer the following questions by the end of this section:

  1. What are naming structure, term, and style, and how do they apply to entities, attributes, and relationships?
  2. What are the three most important parts of a naming standards document?
  3. What is a Reference Guide?
  4. Why is an “enforcer” required for standards compliance?
  5. What is the ISO 11179 standard and how can it help my organization?
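
As an illustration of structure, term, and style working together, here is a hypothetical checker of ours in Python; the title-case style and the classword list are invented stand-ins for whatever your own naming standards document specifies.

    import re

    # A hypothetical naming standard (not the course's): attribute names are
    # title-cased terms and must end in an approved classword.
    APPROVED_CLASSWORDS = {"Name", "Code", "Date", "Amount", "Quantity", "Identifier"}

    def check_attribute_name(name: str) -> list[str]:
        """Return the standard violations found in one attribute name."""
        violations = []
        terms = name.split()
        if not all(re.fullmatch(r"[A-Z][a-z]+", term) for term in terms):
            violations.append("style: each term should be a single title-cased word")
        if not terms or terms[-1] not in APPROVED_CLASSWORDS:
            violations.append("structure: the name must end in an approved classword")
        return violations

    print(check_attribute_name("Customer Last Name"))   # []
    print(check_attribute_name("CUST_LNAME"))           # both violations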

 

Part 9: Arranging the model for maximum understanding

A data model is a communication tool, and if the model is difficult to read, it can hamper communication. We will focus on techniques for arranging the entities, attributes, and relationships to maximize readability. You will be able to answer the following questions by the end of this section:

  1. How can our modeling tools make readability an easy category to ace?
  2. Why is keeping relationship lines as short as possible better than minimizing crossing lines?
  3. Why should we not alphabetize attribute names?
  4. Why should we avoid UPPERCASE?
  5. Why should we organize attributes in a transaction entity by classword, and attributes in a reference entity by chronology?

 

Part 10: Writing clear, complete, and correct definitions

Although definitions may not appear on the data model diagram itself, they are integral to data model precision. We will focus on techniques for writing usable definitions. You will be able to answer the following questions by the end of this section:

  1. How do you play Definition Bingo?
  2. Why are definitions so much more important now than they were in the past?
  3. What are best practices for writing a good definition?
  4. How do you validate a definition?
  5. How do you reconcile competing definitions?
  6. What is the Consensus Diamond and how can being aware of Context, State, Time, and Motive improve the quality of our definitions?
  7. What are some workarounds when you cannot get common agreement on a definition (e.g. the Batman technique)?

 

Part 11: Fitting the model within an enterprise architecture

A data modeler is responsible not only to the project for capturing the application requirements, but also to the organization for ensuring all terms and relationships are consistent within the larger framework of the enterprise data model. We will focus on techniques for ensuring the data model fits within the “big picture”; a brief sketch after the question list shows the Data Vault’s hub, link, and satellite constructs. You will be able to answer the following questions by the end of this section:

  1. What is the Data Vault and how do you build a Data Vault using hubs, links, and satellites?
  2. What is an enterprise data model and why have one?
  3. What are the secrets to achieving a successful enterprise data model?
  4. Why is enterprise mapping more important than enterprise modeling?
  5. What three program initiatives benefit most from an enterprise data model?
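
To make the first question concrete, here is a minimal sketch of ours (invented Customer/Order business keys, Python’s sqlite3) of the three Data Vault constructs: hubs for business keys, links for relationships, and satellites for history-tracked descriptive attributes.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")

    conn.executescript("""
        -- A hub holds just the business key plus load metadata.
        CREATE TABLE HubCustomer (
            customer_hkey TEXT PRIMARY KEY,   -- hash of the business key
            customer_bkey TEXT NOT NULL,      -- the business key itself
            load_date     TEXT NOT NULL,
            record_source TEXT NOT NULL
        );
        CREATE TABLE HubOrder (
            order_hkey    TEXT PRIMARY KEY,
            order_bkey    TEXT NOT NULL,
            load_date     TEXT NOT NULL,
            record_source TEXT NOT NULL
        );

        -- A link records a relationship between hubs, nothing more.
        CREATE TABLE LinkCustomerOrder (
            link_hkey     TEXT PRIMARY KEY,
            customer_hkey TEXT REFERENCES HubCustomer(customer_hkey),
            order_hkey    TEXT REFERENCES HubOrder(order_hkey),
            load_date     TEXT NOT NULL
        );

        -- A satellite hangs descriptive, history-tracked attributes off a hub.
        CREATE TABLE SatCustomer (
            customer_hkey TEXT REFERENCES HubCustomer(customer_hkey),
            load_date     TEXT NOT NULL,
            customer_name TEXT,
            PRIMARY KEY (customer_hkey, load_date)
        );
    """)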

 

Part 12: Comparing the metadata with the data

A logical or physical data model should not be considered complete until at least some analysis has been done on the data that will be loaded into the resulting structures. We will focus on techniques for confirming that the attributes and their rules match reality. Does the attribute Customer Last Name really contain the customer’s last name, for example? A brief sketch after the question list shows one such comparison. You will be able to answer the following questions by the end of this section:

  1. How can domains help improve data quality?
  2. What are the three main types of domains?
  3. How can I capture lineage using the Family Tree?
  4. Why is the Family Tree an important reality check?
  5. How can the Data Quality Validation Template help us with catching data surprises early?
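
As a tiny illustration of comparing metadata with data, here is a sketch of ours in plain Python; the documented domain and the profiled values are invented.

    # The documented domain and the profiled values below are invented.
    documented_domain = {"A", "B", "C"}          # value-list domain from the model
    actual_values = ["A", "B", "B", "X", None]   # found by profiling the real data

    surprises = {v for v in actual_values if v is not None and v not in documented_domain}
    null_count = sum(1 for v in actual_values if v is None)

    print("Values outside the documented domain:", surprises)           # {'X'}
    print("NULLs found in a supposedly mandatory column:", null_count)  # 1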

Register

Dec 14-18: Virtual Data Modeling Master Class, US time (save $605)


Mar 29-31: Virtual Data Modeling Master Class, European time

 


Ask a question or request a quote for Steve to teach the Master Class to your group, virtually or in-person


    Organizations certified to teach the Master Class

    ITGAIN teaches this class in German

    Modelware Systems teaches this class in South Africa

    About Steve

    Steve Hoberman has trained more than 10,000 people in data modeling since 1992. Steve is known for his entertaining and interactive teaching style (watch out for flying candy!), and organizations around the globe have brought Steve in to teach his Data Modeling Master Class, which is recognized as the most comprehensive data modeling course in the industry. Steve is the author of nine books on data modeling, including the bestsellers The Rosedata Stone and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.