Gain the skills to create precise diagrams of the terms essential to the success of your initiatives.
The Master Class is the complete data modeling course, covering practical techniques for producing conceptual, logical, and physical data models in relational, dimensional, and NoSQL styles. After learning the styles and steps for capturing and modeling requirements, you will apply a best practices approach to building and validating data models through the Data Model Scorecard®. You will know not just how to build a data model, but how to build a data model well. Several case studies and many exercises reinforce the material and will enable you to apply these techniques to your current projects.
Restructured for virtual learning
The Data Modeling Master Class covers over 20 hours of instruction and exercises to make you competent in data modeling. The virtual class runs as three full-day sessions or five half-day sessions, chock full of exercises and activities to keep the class interactive and engaging.
Public or dedicated?
Register for one of the upcoming classes or bring Steve in (virtually) to teach a dedicated class to your group. If three people register for a public class from the same company, the fourth person is free. See upcoming classes, or complete the contact form to receive a quote for a dedicated class.
Over $400 of books are included with each registration, including digital copies of four of Steve’s books: The Rosedata Stone, Data Modeling for MongoDB, Data Model Scorecard, and Data Modeling Made Simple, and a digital or print copy of the Data Modeling Master Class Training Manual.
I am really enjoying your class. I actually love this format, for some reason I am able to laser focus here in my house versus in a large classroom setting. I am brand new to data modeling, so this is EXACTLY the training I have been needing. Wendy D.
The Data Modeling Master Class provides a great foundation of data modeling by using real-world examples, in-class exercises and quizzes, and analogies to make the complex simple – all of which was done online. Thank you, Steve! Britton T.
The virtual Zoom Data Modeling Master Class was phenomenal. The pace was perfect even with the varying levels of data modeling experience among the participants in the class. Steve made it fun and engaging, and seeing him on video showed the passion he has for data modeling. Dana K.
The Data Modeling Master Class surpassed my expectations. The class was loaded with useful data modeling techniques, strategies, references, and fun. Steve Hoberman’s passion for data is contagious. From this training, I’ve learned not only the how but, more importantly, the why. The holistic view of data definitely helps me see data from the enterprise level. Crystal C.
I believe your personal experience, enthusiasm for the subject, and the examples used helped demonstrate how relevant data modeling is to Business, Applications, and Data Analytics. I want to thank you for the excellent practical exercises in the sessions you ran for us over the five days last week, not to mention the ‘survivor quiz’ questions :). I especially appreciate the way you encourage questions and interaction, making it feel safe to participate in open discussions. You are very responsive and offer good suggestions in answering questions. I strongly recommend this class to anyone who wants to be successful in Data Modeling or anything to do with Data Management. Movin D.
I found the data modelling course very enlightening. Your training has made data modelling a lot less daunting, especially from the perspective of eliciting information by providing a framework of what questions to ask when. The training has also grounded the whole Data Modeling journey for me by showing me that the conceptual and logical models serve a distinct function and that it is not only the physical model that matters. Leon S.
I am really enjoying the class. I know you cannot always see or hear us, but you have me laughing every day at your jokes – especially today when you were playing actor Steve and switching between CEO Steve and Data Modeler Steve. Jennifer R.
This was a fantastic class. I thought that learning remotely would be challenging, but I honestly felt like I was sitting in a classroom with you! Your enthusiasm, energy and knowledge make for a great learning environment. Sue K.
I really enjoyed your class, and got a lot out of it. I was a little hesitant about how a 3-day virtual class would work, especially with so much interactive material. I was extremely impressed by the organization and how much interaction occurred during the class. In some ways, it was more interactive than a physical class, as the students could ‘talk’ via instant message throughout the class and ask questions of the instructor while the question was still top of mind – without interrupting the flow of the training. Jim B.
I realised how well you explained complex ideas with simple words. So many good anchors to remember the fundamentals – took so many notes! This is probably because dimensional modelling is the area I have the most experience in, but your class helped to really cement some fundamentals. Like “factless fact counts relationships” – brilliant. Simply brilliant! Dovilė K.
Top 5 Objectives
- Determine how and when to use each data modeling component.
- Apply techniques to elicit data requirements as a prerequisite to building a data model.
- Build relational and dimensional conceptual, logical, and physical data models.
- Assess the quality of a data model across ten categories: Correctness, Completeness, Model Scheme, Structure, Abstraction, Standards, Readability, Definitions, Consistency, and Data.
- Incorporate supportability and extensibility features into the data model.
This course assumes no prior data modeling knowledge and, therefore, has no prerequisites. This course is designed for anyone with one or more of these terms in their job title: “data”, “analyst”, “architect”, “developer”, “database”, and “modeler”.
This Course Contains Six Modules
Module 1: Establishing a foundation in data modeling
Assuming no prior knowledge of data modeling, we introduce our first case study, which illustrates four important gaps filled by data models. Next, we will explain data modeling concepts and terminology and provide you with a set of questions you can ask to quickly and precisely build a data model. Many exercises will ensure you are competent in leveraging the data model components of entities, attributes, relationships, keys, subtyping, hierarchies, and networks. We will also explore best practices from the perspectives of both the modeler and reviewer, and you will be provided with a Data Model Scorecard template to use on your current projects. You will be able to answer the following questions by the end of this module:
- What is a data model, and what characteristic makes the data model an essential wayfinding tool?
- How does the 80/20 rule apply to data modeling?
- What six questions must be asked to translate ambiguity into precision?
- What three situations can ruin a data model’s credibility?
- Why are there 144 ways to model any situation?
- What do a data model and a camera have in common?
- What are the most important questions to ask when reviewing a data model?
- What are entities, attributes, and relationships?
- Why subtype and how do exclusive and non-exclusive subtypes differ?
- What are candidate, primary, natural, surrogate, alternate, and foreign keys?
- How do cardinality and referential integrity improve data quality?
- How do you “read” a data model?
- What are the different ways to model hierarchies and networks?
- What is recursion, and how do you balance its promise of flexibility with its cost of obscurity?
- What are the ten categories of the Data Model Scorecard that determine the quality of the data model and, therefore, the success of the initiative?
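To make a few of these components concrete, here is a minimal sketch (illustrative only, not course material; the Customer/Order names and fields are assumptions) of entities, attributes, primary/alternate/foreign keys, and a one-to-many relationship, with a simple referential-integrity check:

```python
from dataclasses import dataclass

# Entity: Customer. Attributes: customer_id (surrogate primary key),
# tax_id (natural/alternate key), and name.
@dataclass(frozen=True)
class Customer:
    customer_id: int   # primary key (surrogate)
    tax_id: str        # alternate key (natural)
    name: str

# Entity: Order. order_id is the primary key; customer_id is a foreign
# key back to Customer, expressing a one-to-many relationship:
# one Customer places zero or many Orders.
@dataclass(frozen=True)
class Order:
    order_id: int
    customer_id: int   # foreign key -> Customer.customer_id

def check_referential_integrity(customers, orders):
    """Return Orders whose customer_id references no existing Customer."""
    ids = {c.customer_id for c in customers}
    return [o for o in orders if o.customer_id not in ids]

customers = [Customer(1, "TAX-001", "Acme"), Customer(2, "TAX-002", "Blue Co")]
orders = [Order(10, 1), Order(11, 1), Order(12, 3)]  # order 12 is an orphan

print(check_referential_integrity(customers, orders))
# [Order(order_id=12, customer_id=3)]
```

In a database these rules would be enforced declaratively by key and foreign-key constraints; the point here is only to show what each component means.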
Module 2: Ensuring the model captures the requirements and reflects an accurate scope
There is no one way to elicit requirements – rather, it requires knowing when to use certain elicitation techniques such as interviewing and prototyping. We will focus on techniques to ensure the data model meets the business requirements and that the scope of the requirements matches the scope of the model. You will be able to answer the following questions by the end of this section:
- Why do we “elicit” instead of “gather” requirements?
- What are the different techniques for initiating an interview?
- When should you use closed questions versus open questions?
- How do you perform data archeology during artifact analysis?
- How can prototyping assist with defining model scope?
- What are two creative prototyping techniques for the non-techie?
- How can you validate that a data model captures the requirements without showing the data model?
- When is observation (job shadowing) an effective way to capture requirements?
Module 3: Building conceptual, logical, and physical data models
The conceptual data model captures the common business vocabulary, the logical data model captures the business requirements, and the physical data model captures the technical requirements. At each level, we explore relational, dimensional, and NoSQL modeling approaches. We will build several data models and you will be able to answer the following questions by the end of this module:
- How do relational and dimensional models differ?
- What are the five steps to building a conceptual data model?
- What are the six strategic conceptual modeling questions?
- Why should we call the conceptual the “business terms model”?
- What are six conceptual data modeling challenges?
- What is the difference between grain and atomic on a dimensional model?
- When should we drill up, drill down, and drill across?
- What are the differences between transaction, snapshot, and accumulating facts?
- What are the three different variations of conformed dimensions?
- What are junk, degenerate, and behavioral dimensions?
- What are outriggers, measureless meters, and bridge tables?
- What are the dimensional modeling do’s and don’ts?
- How can you leverage the measure matrix to capture a precise and program-level view of business questions?
- What is the difference between a star schema and a snowflake?
- What is normalization and how do you apply the Normalization Hike?
- What are the five denormalization techniques?
- What is the difference between aggregation and summarization?
- What are the three ways of resolving subtyping on the physical data model?
- How can views, indexing, and partitioning be leveraged to improve performance?
- What is the Data Vault?
- What are the four different types of Slowly Changing Dimensions?
- How does NoSQL differ from an RDBMS?
- When should we use Document, Column, Key-value, and Graph databases?
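As a rough illustration of the dimensional side of this module (a sketch under assumed table and column names, not the course’s models), the fragment below builds a tiny star schema — one fact table at daily-sale grain plus a Date dimension — and “drills up” from day to month:

```python
from collections import defaultdict

# Dimension table: Date (date_key -> descriptive attributes).
dim_date = {
    20240101: {"month": "2024-01"},
    20240102: {"month": "2024-01"},
    20240201: {"month": "2024-02"},
}

# Fact table at daily-sale grain: each row is one atomic measurement,
# carrying a foreign key into the Date dimension plus a measure.
fact_sales = [
    {"date_key": 20240101, "amount": 100.0},
    {"date_key": 20240102, "amount": 50.0},
    {"date_key": 20240201, "amount": 75.0},
]

def sales_by_month(facts, dates):
    """Drill up from day to month: join to the dimension, then sum."""
    totals = defaultdict(float)
    for row in facts:
        totals[dates[row["date_key"]]["month"]] += row["amount"]
    return dict(totals)

print(sales_by_month(fact_sales, dim_date))
# {'2024-01': 150.0, '2024-02': 75.0}
```

Keeping the facts at atomic grain is what makes this drill-up (and any other aggregation) possible without remodeling.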
Module 4: Following acceptable modeling principles
We will cover Consistency, Integrity, and Core modeling principles. You will be able to answer the following questions by the end of this module:
- What are the most common structural violations on a data model?
- Why are circular relationships pure evil?
- Why are good default attribute formats really bad?
- Why should you avoid redundant indexes?
- Why shouldn’t an alternate key be null?
- What is a partial key relationship?
- Why must a subtype have the same primary key as its supertype?
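One of these principles — that an alternate key must not be null — can be shown with a small check (illustrative data, not course material): if nulls were allowed, two distinct rows could both carry an “unknown” key value, and the key would no longer identify a single row.

```python
def valid_alternate_key(rows, key):
    """An alternate key must be non-null AND unique across all rows."""
    values = [r[key] for r in rows]
    if any(v is None for v in values):
        return False  # a null alternate key cannot identify a row
    return len(values) == len(set(values))  # duplicates also disqualify it

customers = [
    {"customer_id": 1, "tax_id": "TAX-001"},
    {"customer_id": 2, "tax_id": None},   # null alternate key
    {"customer_id": 3, "tax_id": None},   # a second null -> ambiguous lookup
]
print(valid_alternate_key(customers, "tax_id"))  # False
```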
Module 5: Determining the optimal use of generic concepts
Abstraction is a technique for redefining business terms into more generic concepts such as Party and Event. This module will explain abstraction and cover where it is most useful. You will be able to answer the following questions by the end of this module:
- What is abstraction and at what point in the modeling process should it be applied?
- What three questions (the “Abstraction Safety Guide”) must be asked before abstracting?
- What are the pros and cons of abstraction?
- How does abstraction compare to normalization?
- What are the three levels of data model patterns?
- Why are roles so important to analytics?
- What are metadata entities?
- Why does context play a role in distinguishing event-independent from event-dependent roles?
- How can we leverage industry data models?
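As a hedged sketch of what abstraction looks like in practice (the Party concept follows the module’s example; the code itself is illustrative, not the course’s pattern): specific business terms such as Customer and Employee collapse into a generic Party, with the original term preserved as a role.

```python
from dataclasses import dataclass

# Before abstraction: Customer and Employee are separate entities.
# After abstraction: one generic Party entity, with the business term
# ("Customer", "Employee", ...) carried as a role. New roles need no
# new entities -- flexibility traded against readability.
@dataclass(frozen=True)
class Party:
    party_id: int
    name: str

@dataclass(frozen=True)
class PartyRole:
    party_id: int   # foreign key -> Party
    role: str       # "Customer", "Employee", "Supplier", ...

parties = [Party(1, "Jane Smith")]
roles = [PartyRole(1, "Customer"), PartyRole(1, "Employee")]

# The same party can play many roles without any schema change.
print(sorted(r.role for r in roles if r.party_id == 1))
# ['Customer', 'Employee']
```

The trade-off this module examines is exactly the one visible here: the model gains flexibility but loses the explicit business vocabulary, which is why an Abstraction Safety Guide is needed before applying it.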
Module 6: Ensuring the data model’s future use
This module covers naming standards, readability, definitions, consistency, and data profiling. Consistent entity and attribute naming enables a successful enterprise architecture. Readability increases the communication power of the data model. A well-written definition supports the precision of the data model. Consistency ensures that the data model fits within a “big picture” enterprise perspective. Data profiling validates the data to be loaded into the resulting data structures. These five topics will ensure the data model is supportable and provide a valuable communication tool for many projects to come. You will be able to answer the following questions by the end of this module:
- How do you apply naming structure, term, and style to entities, attributes, and relationship labels?
- What are the three most important parts of a naming standards document?
- How can the ISO 11179 standard help my organization?
- What are the best ways to arrange entities and relationships?
- How should we sequence attributes to make them easy to find?
- How do you play Definition Bingo?
- What are the best practices for writing a great definition?
- How do you reconcile competing definitions?
- What is the Consensus Diamond, and how can being aware of Context, State, Time, and Motive improve the quality of our definitions?
- What are the secrets to achieving a successful enterprise data model?
- What three program initiatives benefit most from an enterprise data model?
- How can domains help improve data quality?
- Why is the Family Tree an important reality check?
- How can the Data Quality Validation Template help us catch data surprises early?
Sept 27-Oct 1: Virtual Data Modeling Master Class US time (save $505)
Nov 22-24: Virtual Data Modeling Master Class European time
Ask a question or request a quote for Steve to teach the Master Class to your group, virtually or in-person
Organizations certified to teach the Master Class
ITGAIN teaches this class in German
Modelware Systems teaches this class in South Africa
Steve Hoberman has trained more than 10,000 people in data modeling since 1992. Steve is known for his entertaining and interactive teaching style (watch out for flying candy!), and organizations around the globe have brought Steve in to teach his Data Modeling Master Class, which is recognized as the most comprehensive data modeling course in the industry. Steve is the author of nine books on data modeling, including the bestsellers The Rosedata Stone and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.