Why Canonicalization Should Be a Core Component of Your SQL Server Modernization (Part 1)


In my previous blog “The New DBA Role – Time to Get Your aaS in Order,” I discussed quite a few of the “XaaS” services, old and new, that a DBA must embrace. A sidebar topic related to that Services conversation is the Canonical Model. Canonicalization becomes the north star to which all new work is deployed and against which it is managed; how you implement and maintain it will depend on the maturity curve and skill set of your team.

The Canonical Model, Defined

A canonical model is a design pattern used to communicate between different data formats: a data model that is a superset of all the others (“canonical”), with a translator module or layer through which all existing modules exchange data [1]. It’s a form of enterprise application integration that reduces the number of data translations, streamlines maintenance and cost, standardizes on agreed data definitions for integrating business systems, and drives consistency by providing common data naming, definitions, and values within a generalized data framework.
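The reduction in data translations is the heart of the pattern: with N systems integrated point-to-point you need up to N×(N−1) translators, while a canonical model needs only two per system (one into and one out of the canonical format). Here is a minimal Python sketch of the idea; the system names and field mappings are hypothetical illustrations, not any real product's schema:

```python
# Minimal sketch of a canonical data model acting as a translation hub.
# Each system only knows how to map its own records to/from the
# canonical format -- no system-to-system translators are needed.

TO_CANONICAL = {
    "crm": lambda r: {"customer_id": r["CustID"], "name": r["FullName"]},
    "erp": lambda r: {"customer_id": r["cust_no"], "name": r["cust_name"]},
}

FROM_CANONICAL = {
    "crm": lambda c: {"CustID": c["customer_id"], "FullName": c["name"]},
    "erp": lambda c: {"cust_no": c["customer_id"], "cust_name": c["name"]},
}

def translate(record: dict, source: str, target: str) -> dict:
    """Route a record between any two systems via the canonical model."""
    canonical = TO_CANONICAL[source](record)
    return FROM_CANONICAL[target](canonical)

crm_row = {"CustID": 42, "FullName": "Ada Lovelace"}
print(translate(crm_row, "crm", "erp"))
# {'cust_no': 42, 'cust_name': 'Ada Lovelace'}
```

Notice the payoff: onboarding a new system means writing two small mappers, not one translator for every existing system.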

SQL Server Modernization

I’ve always been a data fanatic and forever hold a special fondness for SQL Server. As of late, many of my clients have asked me: “How do we embark on a new era of data management for the SQL Server stack?”

Canonicalization, in fact, is very much applicable to the design work of a SQL Server modernization effort. Its simplified approach allows for vertical integration and solutioning of an entire SQL Server ecosystem. The stack is where the “Services” run—starting with bare metal, all the way to the application, with seven integrated layers up the stack.

The 7 Layers of Integration Up the Stack

The foundation of any solid design of the stack starts with the infrastructure. Dell Technologies is best positioned to drive consistency up and down the stack, and it’s supplemented by the company’s infrastructure and services subject matter experts, who work with you to make the best decisions concerning compute, storage, and backup.

Let’s take a look at the vertical integration one layer at a time. From tin to application, we have:

  1. Infrastructure from Dell Technologies
  2. Virtualization (optional)
  3. Software defined – everything
  4. An operating system
  5. Container control plane
  6. Container orchestration plane
  7. Application
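One way to reason about this vertical integration is that the higher up the stack your management sits, the more layers beneath it are handled for you. A minimal Python sketch of that idea, using the layer names from the list above (the helper function is purely illustrative):

```python
# The seven layers of the stack, from tin (bottom) to application (top).
STACK = [
    "Infrastructure",
    "Virtualization (optional)",
    "Software-defined everything",
    "Operating system",
    "Container control plane",
    "Container orchestration plane",
    "Application",
]

def automated_below(insertion_point: str) -> list:
    """Layers your team no longer manages by hand once the
    control plane is inserted at the given layer."""
    return STACK[:STACK.index(insertion_point)]

# Inserting the control plane near the top automates most of the stack.
print(automated_below("Container orchestration plane"))
```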

There are so many dimensions to choose from as we work up this layer cake of hardware, software-defined everything, and, of course, applications. Think: Dell, VMware, RedHat, Microsoft. With software evolving at an ever-increasing rate and eating the world, there is additional complexity. It’s critical you understand how all the pieces of the puzzle work and which pieces work well together, giving consideration to the integration points you may already have in your ecosystem.

Determining the Most Reliable and Fully Supported Solution

With all this complexity, which architecture do you choose to be properly solutioned? How many XaaS offerings would you like to automate? I hope your answer is: all of them! At what point would you like the control plane, or control planes? Think of a control plane as the place your teams manage from, deploy to, and hook your DevOps tooling into. To put it a different way, would you like your teams innovating or maintaining?

As your control plane insertion point moves up towards the application, the automation below it increases, as does the complexity. One example here is Azure Resource Manager (ARM). There are ways to connect the infrastructure in your on-premises data centers to this control plane, driving consistent management. We also want all the role-based access control (RBAC) in place, especially for the data stores we are managing. One example, which we will talk about in Part 2, is Azure Arc.
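To make the RBAC point concrete, the essence of role-based access control is that permissions attach to roles, roles are assigned to principals at a scope, and every action is checked against that assignment. A minimal Python sketch follows; the role names, principals, and server names are hypothetical illustrations, not Azure's built-in roles:

```python
# Minimal sketch of role-based access control (RBAC) over data stores.
# Permissions attach to roles; roles are assigned per (principal, scope).

ROLE_PERMISSIONS = {
    "db_reader": {"read"},
    "db_writer": {"read", "write"},
    "db_admin":  {"read", "write", "manage"},
}

ASSIGNMENTS = {
    ("alice", "sql-prod-01"): {"db_admin"},
    ("bob",   "sql-prod-01"): {"db_reader"},
}

def is_allowed(principal: str, scope: str, action: str) -> bool:
    """Check whether a principal may perform an action on a scope."""
    roles = ASSIGNMENTS.get((principal, scope), set())
    return any(action in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_allowed("bob", "sql-prod-01", "write"))    # False
print(is_allowed("alice", "sql-prod-01", "manage"))  # True
```

The same check-at-the-scope model is what a control plane such as ARM enforces for you across every resource it manages, which is why we want it in place before the data stores move under management.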

This is the main reason for this blog: understanding the choices and the tradeoff of cost versus complexity, or automated complexity. Many products deliver this automation out of the box. “Pay no attention to the man behind the curtain!”

One of my good friends at Dell Technologies, Stephen McMaster, an Engineering Technologist, describes these considerations as the Plinko Ball: a choose-your-own-adventure type of scenario. This analogy is spot on!

With all these dimensions to choose from, we must distill down to the most efficient approach. I like to understand both the current IT tool set and the maturity journey of the organization before making the proper recommendation for a solid solution set and fully supported stack.

Dell Technologies Is Here to Help You Succeed

Is “keeping the lights on” preventing your team from innovating?

Dell Technologies Services can complement your team! As your company’s trusted advisor, my team members share deep expertise in Microsoft products and services, and we’re best positioned to help you build your stack from tin to application. Why wait? Contact a Dell Technologies Services Expert today to get started.

Stay tuned for Part 2 of this blog series where we’ll dive further into the detail and operational considerations of the 7 layers of the fully supported stack.

[1] Source: Wikipedia

About the Author: Robert F. Sonders

Robert F. Sonders has been with Dell Technologies for over 10 years. During his tenure he has been laser-focused on Microsoft workloads, both on-premises and hybrid, specifically the entire SQL Server ecosystem, from both an administrative and a business intelligence perspective. Along this journey, Robert has led SQL Server delivery project teams as a technical consultant architect and as a pre-sales Solution Principal.

Throughout his 25+ years of IT experience, Robert has been passionately engaged in implementations that have directly contributed to the advanced performance and productivity of diverse organizations in the healthcare, financial, and e-commerce verticals. Robert always begins by understanding the unique challenges of the business, with a focus on first learning the client’s perspective (listening first), and then aligns the subsequent requirements to formulate a solution that uniquely fits the client’s needs. He prides himself on staying bleeding edge with his respective “wheelhouse” technologies.

His current role aligns with core Dell Technologies workloads as a “Principal System Engineer – Data-Centric Workloads for Microsoft.” His major focus is on all things SQL Server and the Azure hybrid ecosystem, complementing all account teams and systems engineering teams in their pre-sales efforts. As an MCT (Microsoft Certified Trainer), the opportunity to present innovative Microsoft technologies allows Robert to insightfully help clients overcome any roadblocks. These engagements and conversations also allow Robert to provide a client feedback loop back to Dell internal engineering and even to Microsoft product engineering teams.

When he is not consumed with reading and learning, he is traveling, handling AKC champion Pembroke Welsh Corgis, and vigorously enjoying the outdoors that surround his desert home. He resides in Scottsdale, AZ with his wife and Corgis.