Disaster Recovery Is Not a Replacement for Backup & Recovery

It surprises me how many people treat “disaster recovery” and “backup & recovery” as interchangeable terms. But backups are not disaster recovery, and disaster recovery is not a backup strategy. Confusing the two creates a false sense of security that often becomes visible the moment something goes wrong. The goal of this post is to offer clarity on what separates these concepts, so you can design a strategy that actually protects your business, not just your data.

What Disaster Recovery Really Is

Disaster recovery (DR) is the capability to restore business operations after a major outage, as quickly as possible. It’s about continuity, not just data. DR covers the entire stack: infrastructure, networking, applications, services, dependencies, and yes, data. It typically involves a functional version of these components in a remote location. A well-defined DR plan articulates how quickly systems must be recovered (RTO), how much data loss is acceptable (RPO), and the sequence in which components must come back online. DR is not just infrastructure, but also orchestration. It’s people, processes, automation, communication plans, and testing. You can have perfect backups and still have a failed disaster recovery effort if none of this is in place. In SQL Server, we commonly design Availability Groups that replicate data from site A to site B, keeping the data set in DR as close to real time as possible in preparation for an unplanned DR cutover.
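As one small, concrete piece of that orchestration, you can sanity-check AG replication health on a replica before (and after) a cutover. A minimal sketch using the standard AG DMVs:

```sql
-- Quick check: is every AG database on this replica synchronized and healthy?
SELECT
    DB_NAME(drs.database_id)        AS DatabaseName,
    drs.synchronization_state_desc  AS SyncState,   -- e.g. SYNCHRONIZED / SYNCHRONIZING
    drs.synchronization_health_desc AS SyncHealth   -- e.g. HEALTHY / NOT_HEALTHY
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;
```

Anything other than SYNCHRONIZED/HEALTHY on a synchronous replica deserves investigation before you trust that site for a cutover.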

What Backups Are Actually For

Backups serve a very different purpose. They protect your data, not your operational continuity. A backup is a point-in-time copy that allows you to restore a database. They help you recover from corruption, accidental deletion, ransomware (maybe), and internal mistakes. Backups answer questions like “Can we get the data back?” and “To which point in time?”. But they do not answer “Can our business continue running?”. Backups are essential, but they are a safety net, not a continuity plan. Without DR, backups simply give you something to restore while you remain offline. Backups are typically most valuable when someone runs that DELETE statement without a WHERE clause (happens more than you’d like to believe), or when corruption is discovered in a database.
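The “to which point in time” question is exactly what the log backup chain answers. A minimal sketch of a point-in-time restore, with hypothetical database name, file paths, and timestamp:

```sql
-- Restore the last full backup, leaving the database ready for more restores
RESTORE DATABASE Sales
    FROM DISK = N'X:\Backups\Sales_Full.bak'
    WITH NORECOVERY;

-- Roll the log forward, stopping just before the accidental DELETE ran
RESTORE LOG Sales
    FROM DISK = N'X:\Backups\Sales_Log.trn'
    WITH STOPAT = N'2025-11-30T14:59:00', RECOVERY;
```

Notice what this does not do: it gets your data back, but it says nothing about how long the business was down while you found the backup files and ran the restore. That gap is what DR planning covers.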

The Key Differences (and Why They Matter)

The easiest way to understand the difference is this: backups rewind; disaster recovery moves forward. Backups help you return to a previous state, while DR helps you resume operations after a major disruption. You can restore data without ever achieving uptime, and you can fail over to a secondary site without having clean, restorable data. DR focuses on speed and continuity; backups focus on retention and recovery points. Backups alone cannot solve for an entire data center going offline, and DR alone cannot protect you from corruption or data-loss scenarios. Mature organizations design both, because one without the other is a half-built strategy.

Summary

Disaster recovery and backup & recovery are complementary, not interchangeable. DR protects the business; backups protect the data. Both are required if you want real resilience. If you’ve only tested restores, you don’t have a DR plan. If you’ve only tested failover, you don’t have a data-protection strategy. The organizations that survive outages with minimal impact are the ones that treat DR and backups as two distinct disciplines working together. Make the distinction clear, design intentionally, and your business will be far better prepared when the unexpected happens.

The Hidden Cost of Being the Smartest Person in the Room

Why Being “The Expert” Can Kill Your Growth, Influence, and Career

In tech, especially in the database world, we celebrate expertise. We respect the person who knows every wait type, every DMV, every undocumented trace flag. But there’s a danger hidden in becoming too comfortable being “the smartest person in the room”.

I’ve seen it in others. I’ve caught it in myself, at times.

And it’s one of the fastest ways to stall your career, damage your influence, and slowly transform into what every team dreads:

The grumpy DBA in the corner who wonders why no one listens anymore.

Here’s why you should avoid that trap, and how to do it.

When You’re the Smartest Person in the Room, You Stop Growing

Growth comes from friction:

  • Someone challenging your assumptions
  • Someone showing you a tool you’ve never used
  • Someone explaining a pattern you haven’t seen
  • Someone exposing blind spots you didn’t know existed

If you’re always the one teaching, correcting, proving a point, or worse, trying to prove someone wrong, you’re not learning.

And in a field changing as fast as data engineering, SQL Server, and cloud platforms, the moment you stop learning is the moment your value starts dropping.

Comfort feels good. But comfort kills careers.

When You’re Always Right, People Stop Listening

This part hurts, but it’s true: If you always have the answer, eventually people stop asking questions.

Not because you’re wrong. But because you make them feel:

  • Inferior
  • Judged
  • Uncomfortable
  • Shut down
  • Interrupted

Think about the smartest engineer you’ve ever known who constantly corrects people mid-sentence.

  • Do you enjoy brainstorming with them?
  • Do you feel heard?
  • Do you want to collaborate?

Competence builds credibility. Humility builds influence.

If you want people to listen to you, collaborate with you, and trust you, you need to make space for others to shine.

Being the Only Expert Makes You “The Bottleneck”, Not the Hero

Many DBAs learn this lesson the hard way. You’re proud no one else can troubleshoot broken replication, or tune a problematic query, or rebuild a broken AG. You feel irreplaceable.

Then you wake up one day and realize: If you’re irreplaceable, you’re also not promotable.

You’ve boxed yourself into a corner, and the business sees you as a single-function asset rather than a scalable leader.

Worse, you’ve trained the entire team to need you instead of to learn from you. That’s how smart experts become bitter, exhausted, and stuck.

The “Grumpy DBA” Is Usually Someone Who Never Stretched Beyond Their Expertise

The stereotype exists for a reason.

  • The person muttering in the back of the meeting…
  • The one ranting about how no one understands indexing…
  • The one convinced that “management doesn’t listen”…
  • The one who refuses to learn anything cloud-related…
  • The one whose career hasn’t moved in 10+ years…

That person is often genuinely brilliant.

  • But brilliance without humility becomes isolation.
  • Isolation becomes frustration.
  • Frustration becomes bitterness.

And bitterness is career poison.

So What Should You Do Instead?

Here’s how to avoid becoming the smartest stuck person in the room:

1. Put yourself in rooms where you’re not the expert

Go to user groups, conferences, new communities, architecture sessions, or cross-functional teams where you feel outclassed.

That discomfort is a sign of growth.

2. Ask more questions than you answer

Curiosity builds connection.

Questions create collaboration.

But always being the one with the answers can shut it down.

Your goal isn’t to prove you’re smart. It’s to help others rise with you.

3. Build successors, not dependencies

If your team can operate without you, that’s leadership — not replacement.

4. Learn outside your lane

Cloud. Platform Engineering. Python. Security frameworks. Data governance. Observability.

These skills multiply your value.

Final Thoughts

Being smart isn’t the problem.

Believing and acting as if you are the smartest is.

  • When you’re the only expert, you limit your growth.
  • When you’re always right, you lose influence.
  • When you isolate yourself by expertise, you become stuck.

And when you’re stuck long enough, you become the stereotypical “grumpy DBA” wondering why your career never moved.

  • Put yourself in bigger rooms.
  • Be curious.
  • Grow with others.

And never let your expertise become the ceiling over your career.

Why Bad Data Type Choices Kill Performance

Choosing a data type seems simple

If you want to store text, you have a few choices: VARCHAR, NVARCHAR, CHAR, NCHAR.

If you’re storing dates, you pick DATE, DATETIME or DATETIME2 depending on precision.

These seem obvious, yet I still see people storing dates in CHAR(8), routinely.

So what actually happens when you get your data types wrong?


1. Implicit Conversions (aka CONVERT_IMPLICIT)

This one often goes unnoticed for a long time. SQL Server politely hides the mistake from you. How kind!

Take this example:

‘Brendan’ stored as VARCHAR is not the same thing as ‘Brendan’ stored as NVARCHAR.

  • In VARCHAR, the letter B is stored in one byte (8 bits).
  • In NVARCHAR, the same B consumes two bytes (16 bits).

SQL Server must convert one side of the comparison so the types match.

And since NVARCHAR has higher data type precedence (down-converting Unicode to 8-bit could lose characters), SQL Server won’t convert it to VARCHAR.

So guess which side gets converted?

Correct: the VARCHAR column gets hit every time.

That becomes a very big deal when you’re using C# or .NET and dealing with predominantly ASCII data.

Most developers think: “We only store ASCII, let’s use VARCHAR everywhere to save space.”

But by default, .NET sends all string parameters to SQL Server as NVARCHAR, even if every character is ASCII.

So SQL Server sees:

  • VARCHAR in the column
  • NVARCHAR in the query parameter

…and is forced to convert the entire column to NVARCHAR before it can evaluate the predicate.

Yes, SQL Server must evaluate and convert each candidate row, which usually means every row in the table.
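Here is a minimal repro sketch of that scenario (table, index, and parameter names are hypothetical):

```sql
-- Hypothetical table: VARCHAR column with a supporting index
CREATE TABLE dbo.Users (
    UserId   INT IDENTITY(1,1) PRIMARY KEY,
    Username VARCHAR(50) NOT NULL
);
CREATE INDEX IX_Users_Username ON dbo.Users (Username);

-- What .NET sends by default: an NVARCHAR parameter
DECLARE @Username NVARCHAR(50) = N'Brendan';
SELECT UserId
FROM dbo.Users
WHERE Username = @Username;        -- plan shows CONVERT_IMPLICIT on the column

-- Typed to match the column, the conversion disappears
DECLARE @UsernameFixed VARCHAR(50) = 'Brendan';
SELECT UserId
FROM dbo.Users
WHERE Username = @UsernameFixed;   -- clean index seek
```

Depending on your collation, the optimizer can sometimes partially rescue the first query with a range seek, but you shouldn’t count on it; matching the types avoids the question entirely.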

A real-world story

I once worked on an application years ago with this exact issue on its most frequently executed query: the login query.

  • Username column → VARCHAR
  • Application parameter → NVARCHAR

This ran fine for years, until the business grew. More users. More logins. More load.

Then one day, the entire application ground to a halt.

  • CPU pegged at 100%
  • Login queries taking 20–30 seconds
  • The app essentially dead in the water
  • And this wasn’t a “small” SQL Server, either.

What happened after fixing the implicit conversions?

CPU dropped under 20%, login queries fell to milliseconds, and the app was running faster than ever.

Implicit conversions are silent, deadly, and everywhere.

2. Queries Become Non-SARGable

Once SQL Server is forced to convert the column, your beautiful index becomes useless.

Example: WHERE Username = @Username — but types don’t match

If SQL Server converts the column, not the parameter, the predicate becomes non-SARGable.

That means:

  • No index seeks
  • Full scans
  • High I/O
  • Long runtimes

All because a data type didn’t match.
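The general rule: if a conversion or function has to happen, make it wrap the parameter, never the column. A sketch (table and column names hypothetical, as before):

```sql
DECLARE @Username NVARCHAR(50) = N'Brendan';

-- Non-SARGable: the conversion wraps the COLUMN, touching every row
SELECT UserId
FROM dbo.Users
WHERE CONVERT(NVARCHAR(50), Username) = @Username;   -- forces a scan

-- SARGable: the conversion wraps the PARAMETER instead
SELECT UserId
FROM dbo.Users
WHERE Username = CONVERT(VARCHAR(50), @Username);    -- index seek possible
```

Down-converting the parameter like this is only safe when the incoming data is genuinely ASCII; the cleaner fix is declaring the parameter as VarChar on the application side so no conversion is needed at all.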

3. Oversized Data Types Waste Space and Break Memory Grants

Now let’s talk about another common mistake: choosing data types that are way larger than needed.

Example: Defining a column as NVARCHAR(2000) when NVARCHAR(50) would have been plenty.

You might think: “It only stores what’s actually used, right? What’s the harm?”

The harm is hidden in the query optimizer.

When SQL Server generates a query plan, it must estimate how much memory is needed before execution. It must do this so the memory is already available once it starts reading the data from disk. It does this by predicting the average length of variable-length columns.

If you declare a column as NVARCHAR(2000), SQL Server may assume a large average row size based on the declared max length, often around 50% of it. In this example, that would be an average length of ~1,000 characters, even if most rows contain no more than 25–50 characters.

That leads to:

  • Huge memory grants
  • Excessive RAM consumption
  • Buffer pool flushes
  • Lower buffer cache hit ratio
  • More reads going to slow disk instead of memory

A single oversized data type can push SQL Server into thinking it needs 2 GB of RAM for a query that actually needs 500 MB.

And all the data it flushes from memory?

Well, other queries likely still need it, but it’s now on disk.
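You can often catch this in the act: when granted memory dwarfs used memory, oversized variable-length columns are a usual suspect. A quick look at active grants (these are the standard DMV column names):

```sql
-- Who is asking for (and actually using) big memory grants right now?
SELECT
    session_id,
    requested_memory_kb,
    granted_memory_kb,
    ideal_memory_kb,
    used_memory_kb
FROM sys.dm_exec_query_memory_grants
ORDER BY granted_memory_kb DESC;
```

A query that is granted 2 GB but uses 500 MB is a strong hint to go look at the declared lengths of the columns it touches.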

In Summary

Bad data types don’t always hurt you immediately.

They hurt you when:

  • Your workload grows
  • Your data grows
  • Your concurrency grows
  • Your business grows

All queries run fast in Dev, when you only have 1,000 rows in a test table. But when those tables grow in production, that’s when the pain begins.

Three things to remember:

  1. Implicit conversions silently destroy performance and CPU.
  2. Non-SARGable queries eliminate index usage.
  3. Oversized data types inflate memory, I/O, and query cost.

Get your data types right early, and avoid serious downtime later.


If you like what you’ve read, please subscribe so you don’t miss my latest posts…

My Data & Leadership Monthly Roundup — November 2025

Opening Thoughts

I had a great time attending the PASS Data Community Summit for the first time last month. It was incredible meeting so many of you face-to-face. I’m looking forward to attending more meetups and conferences in 2026.

My New Posts From Last Month

Why Email Is Outdated: Embrace Modern Workplace Tools

Stop Wasting Money on SQL Server Cores — Reduce Licensing Costs the Smart Way

Top Content I Read Last Month

Query Exercise Answer: Generating Big TempDB Spills — Brent Ozar

Local AI Models for SQL Server – A Complete Guide — Pinal Dave

“Rebuilding Indexes and Updating Statistics”: The DBA’s Panic Button — Amy Abel

Tools or Scripts Worth Checking Out

Brent Ozar’s First Responder Kit

Ola Hallengren’s SQL Server Maintenance Solution

dbatools (PowerShell Module for SQL Server Automation)

Tip for New DBAs

Successful backups do not translate to successful restores. If you’re not testing your backups, you don’t have backups.

Thanks for following along! If this was helpful, please share it with others who might benefit.


Backups on Secondary Replicas in SQL Server 2025: What’s New, What’s Better, and What Still Worries Me

Back in 2022, I wrote a post called SQL Server Backups on Secondary Replicas: Best Practice or Bad Idea? At the time, the limitations were clear: backups on secondaries were restricted, operationally risky, and often misunderstood.

Three years later, SQL Server 2025 has expanded what you can do on a secondary replica. Some of these changes are genuinely great. But the question I keep getting is:

Does SQL Server 2025 finally make backups on secondary replicas a best practice?

Let’s break it down.

What’s New in SQL Server 2025

SQL Server 2025 introduces long-requested enhancements to backup capabilities on secondary replicas. Specifically:

  • Regular Full backups
  • Differential backups
  • Transaction Log backups (as before)
  • ZSTD backup compression (new in 2025)

This is a major improvement over previous versions, where secondaries were limited to copy-only full backups and transaction log backups.
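For illustration, a regular full backup on a secondary with the new compression algorithm might look like this (a sketch based on the announced SQL Server 2025 syntax; verify the exact option names against your build’s documentation):

```sql
-- A regular (non-copy-only) full backup taken on a secondary replica,
-- using the new ZSTD compression algorithm
BACKUP DATABASE Sales
    TO DISK = N'X:\Backups\Sales_Full.bak'
    WITH COMPRESSION (ALGORITHM = ZSTD), CHECKSUM, STATS = 10;
```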

Microsoft’s goal is clear: Offload more backup workload to secondaries and reduce overhead on primaries.

But as always with Availability Groups, the story is more complicated.

The Big Tradeoff: Performance vs. Reliability

  • Yes, your backups run faster on a secondary replica.
  • Yes, you avoid CPU, I/O, and memory pressure on the primary.

But here’s the hidden risk: When you back up from a secondary, you are now depending on replication health for your backup chain to be valid.

And I have seen far too many production environments where:

  • A database is silently removed from the AG
  • Secondary lag spikes for minutes or hours
  • A replica goes Not Synchronizing during the backup window
  • Backups succeed… but the data on the secondary was behind

Even in synchronous-commit mode, replication is not perfect. SQL Server can still silently introduce synchronization delay to avoid slowing down transactions on the primary.

Backups must be predictable, not eventually consistent.

If my business continuity depends on a process, I don’t want that process introducing unnecessary points of failure.
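At minimum, anything that backs up from a secondary should first verify how far behind that secondary is. A minimal sketch against the standard AG DMVs, run on the secondary:

```sql
-- How far behind is this secondary right now?
SELECT
    DB_NAME(drs.database_id) AS DatabaseName,
    drs.log_send_queue_size  AS LogSendQueueKB,  -- log not yet received from primary
    drs.redo_queue_size      AS RedoQueueKB,     -- log received but not yet redone
    drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;
```

Large or growing queue sizes during your backup window mean the “current” data on this replica isn’t current at all.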

Restore Chains in SQL Server 2025 (The Part Nobody Talks About)

With SQL Server 2025 supporting full + diff + log backups on secondaries, DBAs naturally ask:

Does a diff backup taken on a secondary depend on a full backup taken on a primary?

Yes. SQL Server maintains a unified backup chain across replicas.

But that doesn’t mean restore operations become simpler. Your restore sequence might look like:

  • Full backup (secondary)
  • Differential backup (primary)
  • Log backups (mixed across replicas)

SQL Server can handle that. But, your recovery playbook may not.

AG backups across nodes increase restore complexity.

So Why Would Anyone Back Up on a Secondary?

The two real benefits are:

  • Performance
  • Licensing optimization

But both of these come with tradeoffs, primarily in backup reliability. In high-availability architectures, I believe every synchronous replica should be sized to handle 100% of the production workload at any moment. If the primary cannot absorb the backup load, that points to a capacity-planning gap.

If your system is truly hammered during backup windows and you can’t add CPU, storage throughput, or improve compression, then running backups on a secondary can help. But you still need to weigh the performance benefits against:

  • Replication risk
  • Operational complexity
  • Restore-path complications

Have the New Secondary Backup Types in SQL Server 2025 Changed My Opinion?

Short Answer: No.

I still prefer:

  • Reliability > cleverness
  • Simplicity > complexity
  • Consistency > theoretical best performance

My advice today is the same as in 2022:

You can back up on secondary replicas. But you probably shouldn’t, unless you fully understand the risk and complexity, and are prepared to accept the potential consequences.

My Recommended Backup Approach (Even in 2025)

I prefer to run Ola Hallengren’s Maintenance Solution on all replicas. Since the backup jobs are AG-aware, they automatically take each backup on whichever replica is currently primary, and they back up non-AG databases as normal.

After Failover

Backup jobs continue running without change.

The new primary becomes the active backup source automatically.

No process changes required.

Most importantly:

Every database, in AG or not, is always backed up by the same job.

This prevents the terrifying scenario: “A database fell out of the AG and no one noticed… so it wasn’t backed up.”

SQL Server 2025 gives you more backup options…

But, it does not remove your responsibility to choose wisely.

Now, I know what you’re going to say: “But we’ve been running backups on our secondary replicas without any issues.” And I’m sure you, as well as many other DBAs, have done this successfully, at least up till now. But that doesn’t change the fact that it still introduces another point of failure into the backup process. Smaller shops, accidental DBAs, or even your own team after you move on may not be prepared to manage that complexity.

We should all be architecting solutions that will easily outlast our tenure, not ones that only work because the current DBA understands the edge cases.



Stop Wasting Money on SQL Server Cores — Reduce Licensing Costs the Smart Way

I recently published a post called Hard Work Isn’t Enough. Add Value or Get Stuck. If you haven’t read it yet, I suggest you go check it out first.

Minimizing CPU core counts is a perfect example of how to add value, and is arguably one of the easiest ways to do so.

I run this exercise in my environments about every six months, typically right before true-up time and again at mid-year, just to make sure we haven’t drifted too far.

How expensive is SQL Server licensing?

Very expensive.

If you’re running Enterprise Edition, you’re going to pay around $7,561.50 per core. Core licenses are sold in 2-core packs, and there’s a 4-core minimum per server. If you need more cores, you must add them in 2-core increments, as you cannot split a license pack across multiple servers.

Note: Pricing varies depending on licensing program (EA, CSP, Select Plus).

That puts the minimum cost of an Enterprise SQL Server VM at:

$7,561.50 × 4 cores = $30,246 (and that’s before Software Assurance)

Want to run that VM in an HA or DR configuration? SA is required if you want to run your HA/DR replicas as free passive nodes – up to one passive HA replica, one passive DR replica, and one passive Azure replica.

Note: If a replica performs readable queries or takes backups, it no longer counts as passive and must be fully licensed. This is one reason I don’t recommend running backups on your secondary replicas — but I digress. If you want to know the other reason, read: SQL Server Backups on Secondary Replicas: Best Practice or Bad Idea?

How much is Software Assurance?

Software Assurance typically runs around 25% of the licensing cost per year.

So if you paid $30,246 to license a 4-core SQL Server, expect to pay:

$7,561.50 per year for SA.

Yes — annually.

So, how can I reduce this cost?

The 4-core example was just to illustrate the math. In the real world, it’s not uncommon to find SQL Servers running with 16, 32, 64, or more cores. You can see now how expensive these could get.
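A good first step is confirming what each instance actually sees, since that is what you’re paying for:

```sql
-- How many logical cores does this instance see (and schedule on)?
SELECT cpu_count, scheduler_count, hyperthread_ratio
FROM sys.dm_os_sys_info;
```

If cpu_count is 64 and your peak CPU never breaks 30%, that’s a licensing conversation waiting to happen.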

But how do you decide if you actually need all of those cores?

Look at CPU trends over the past 33+ days.

You want at least 33 days because many workloads have monthly processing cycles, and shorter windows may give you a false sense of stability.

You’re looking for peak CPU usage, not averages.

Averages are misleading — because when that monthly job kicks off and you don’t have enough cores, things can come crashing to a halt fast.
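Your monitoring tool is the right place for the full 33-day view, but for a quick spot check, SQL Server keeps roughly the last four hours of CPU history in a ring buffer. A common sketch (the XML paths follow the standard scheduler-monitor record layout):

```sql
-- Approximate CPU utilization, one row per minute, for the last ~256 minutes
SELECT TOP (256)
    DATEADD(ms, -1 * (inf.ms_ticks - rb.[timestamp]), GETDATE()) AS EventTime,
    x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SqlCpuPct,
    x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS SystemIdlePct
FROM sys.dm_os_ring_buffers AS rb
CROSS JOIN sys.dm_os_sys_info AS inf
CROSS APPLY (SELECT CONVERT(xml, rb.record)) AS x(rec)
WHERE rb.ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
  AND rb.record LIKE N'%<SystemHealth>%'
ORDER BY rb.[timestamp] DESC;
```

It’s a spot check, not a trend: for the monthly-cycle peaks that actually matter here, you still need your monitoring history.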

What I typically find the first time I run this analysis:

• Most SQL Servers peak at 40–50% CPU
• I once saw a SQL VM with 64 cores using only 30% at peak

But don’t go straight to the floor.

If you have 8 cores and peak at 45%:

• Don’t drop directly to 4
• Drop to 6
• Monitor performance
• Adjust again if needed

Keep in mind that reducing vCPUs on any VM always requires a reboot, so schedule the change during a maintenance window.

This approach avoids risk while saving real money.

Ideal CPU targets

• Short bursts: 80–90%
• Sustained peaks: 60–70%
• Off-hours: 20–30%

If you’re consistently below these thresholds, you’re likely over-allocated.

What next?

With all the money you just saved, go invest in a great database monitoring solution.

It will pay for itself after just a single right-sizing exercise.



Why Email is Outdated: Embrace Modern Workplace Tools

For decades, email has been the backbone of corporate communication. It was simple, reliable, and revolutionary in its time. But has its time passed? Today’s workplace has countless tools at its disposal: instant messaging, text messaging, ticketing systems, and collaborative platforms. And this is where I see the most friction: people still choose email as their default go-to, even when the work clearly belongs in a structured tool like ServiceNow, Jira, or Azure DevOps. The way we collaborate, share information, and make decisions has changed dramatically, but email hasn’t kept up. Instead, it has become a burden, an outdated system trying to solve problems it was never designed for. It’s time to acknowledge the truth: email should no longer be our default, and in many cases, it should disappear entirely from our workflows.

The most obvious problem with email is the sheer amount of noise it generates. In a single day, many of us receive more messages than we could reasonably respond to in a week. Irrelevant CCs, chaotic reply-all chains, vague subject lines, and unorganized information make it nearly impossible to keep track of what matters. I find myself archiving 90% of the emails that come in if I’m not directly addressed in the body. So, you have about half a second to grab my attention when you email me. Because if I don’t treat email this way, I’ll spend my entire day in my inbox instead of doing real work. Important messages get buried, and tasks fall through the cracks. Email wasn’t designed to help us prioritize, provide context, or behave like a ticket queue. It delivers everything, urgent or trivial, in the same flat FIFO list. Modern tools organize communication by project, team, priority, due dates, and/or context. Email forces all communication into one chaotic pile.

Email also slows work down. It is inherently inefficient for the fast-paced, collaborative environments teams operate in today. We still waste time hunting for the latest version of a file buried in an attachment or waiting for back-and-forth responses that take hours or days to resolve something that could’ve taken minutes in a shared document or workspace. Attachments, especially for internal use, should not be sent over email. Put your documents on SharePoint or another shared platform, bookmark the location, and never go digging through your inbox for that file again. Email introduces delays, fragments conversations, and blurs responsibility. Tools like Slack, Teams, shared documents, and project management systems allow people to collaborate instantly and transparently with a clear, auditable record of changes, directly tied to the work. Nothing is hidden. Nothing gets lost. Email simply wasn’t built for dynamic, multi-person workflows, yet we keep trying to force it into that role.

Finally, email encourages bad work habits and undermines team culture. People use email to avoid direct conversations, accountability, or clear ownership. Managers issue instructions over email instead of placing tasks into proper systems where responsibilities and deadlines are visible. Team members forward threads, hide behind long paragraphs, or cause confusion with incomplete responses. And when an email has ten people on copy, it might as well have no one on copy, as everyone assumes someone else will respond. On top of that, email follows us everywhere: on our phones, at night, on weekends. It blurs boundaries and fuels burnout. Modern communication tools offer better visibility and better control, allowing for healthier expectations around availability and response times.

Email isn’t evil, it’s just outdated. It solved a 1990s problem and has overstayed its welcome in a world that has moved on. It had a great run, but today’s workplace needs communication tools designed for clarity, speed, transparency, and focus. If we want to work smarter and healthier, we need to reduce our dependence on email and use systems built around how people actually collaborate. It’s time to let email retire gracefully and build a more efficient digital workplace without it.

And of course, we all know the line: “Well, I sent an email!”. Given everything we know, is it any surprise that email never got a response? If you’re not putting something into a ticket or project management tool, it’s unlikely the recipient is going to take meaningful action on that cold email. Email doesn’t hold people accountable, proper tools do.

So What Tool Makes Sense, and When?

Here is my general guidance on which type of tool to use for different types of work.

Task & Work Tracking Systems

Examples: ServiceNow, Jira, Azure DevOps

Use when: a task needs ownership, accountability, deadlines, or an audit trail.

Not for: discussions, status checks, or FYIs.

Rule: If work must be done, it belongs here, not in email.

And to be absolutely clear: once a ticket is open in one of these tools, all follow-up dialogue and updates should go in the ticket’s work notes, not in an external email. Email creates fragmentation, hides information from the next shift or teammate, and breaks the system of record. Keep all communication tied to the ticket so it’s trackable, visible, and actionable.

Instant Messaging / Chat Platforms

Examples: Slack, Microsoft Teams

Use when: you need quick answers, clarifications, or real-time conversation.

Not for: assigning tasks or storing decisions.

Rule: If you need it fast, chat. If you need it tracked, ticket it.

File Sharing & Collaborative Document Platforms

Examples: SharePoint, OneDrive, Google Docs

Use when: multiple people need to read, edit, or reference living information.

Rule: If more than one person needs access, share it, don’t attach it.

Knowledge & Documentation Platforms

Examples: Confluence, SharePoint Sites, ServiceNow

Use when: information will be reused, referenced, or taught.

Not for: temporary notes or one-time explanations.

Rule: If someone may ask about it again, document it once.

Live Conversations

Examples: Teams calls, Zoom, phone

Use when: alignment is unclear, the topic is sensitive, or a thread is getting long.

Not for: tasks that need tracking or formal decisions.

Rule: If your message becomes a debate, call instead.

Email (rare, narrow use case)

Use when: you need a formal announcement or are communicating externally.

Not for: tasks, decisions, status, assignments, or real collaboration.

Rule: Email informs, but should not drive work.

Adopt the right tools now, or continue questioning why your productivity never improves.



Hard Work Isn’t Enough. Add Value or Get Stuck.

The Most Important Thing You Can Do in Your Job: Add Value

Companies hire people for many reasons, but it’s no secret that the end goal is always to create more value than they cost.

If you think about it like an investment, the logic becomes obvious. No one buys a $100 stock hoping to sell it for $90 later, and then plans to repeat that approach again and again in hopes of becoming wealthy. The same goes for your employer. They’re investing in you. They expect that the time, salary, and benefits they put in will produce an even greater return for the business.

This isn’t cynical; it’s the foundation of trust between you and the company. When you understand it, you unlock a mindset that separates good employees from truly impactful ones.

Your real job isn’t just to do work, it’s to make sure your work matters.

Development and Engineering Is Still a Business Function

I see this all the time among engineers. Someone wants to do something great, only to be told “no.”

  • Rewrite a service
  • Adopt a cutting-edge framework
  • Completely refactor a system

That “no” usually comes down to one of two things.

  1. Overkill for the business case. The idea may be technically impressive but doesn’t justify the cost or risk for the problem at hand.
  2. Poor articulation of value. The idea might be valuable, but it wasn’t explained in a way that ties to measurable business outcomes.

In both cases, the idea fails not because it’s wrong, but because its value wasn’t visible.

Adding Value Isn’t Just Doing Your Job

It’s easy to think, “I’m doing my work; that’s value.” But value means more than output. It means impact.

Ask yourself:

  • Does this make money, save money, or reduce risk?
  • Does it improve customer experience, speed, quality, or reliability?
  • Can I demonstrate that improvement in terms my business leaders care about?

When you start connecting technical work to measurable business benefit, you become more than an engineer. You become a business partner.

Learn to Sell Your Ideas

You don’t have to be a salesperson, but you do have to sell your ideas.

If you can’t explain how your work benefits the company in language decision-makers understand, it doesn’t matter how brilliant the solution is.

Use metrics like cost savings, uptime, customer satisfaction, and time-to-market. Translate tech into business. “This reduces manual steps by 30%” is more powerful than “This automates a process.”

Every great idea still needs a business case.

Of course, even the best ideas and efforts lose impact if we don’t see them through.

Accountability: Protecting the Value You Create

Adding value isn’t just about generating ideas or effort. It’s also about owning outcomes.

All too often, people hand a task off to another team half-finished, with no follow-up or confirmation that the recipient understands what's expected. In some cases, they never even tell the other team there's a task waiting for them. That handoff moment is where value is most often lost.

When you delegate without clarity or follow-through, you're not transferring responsibility; you're abandoning it. And when outcomes fall through the cracks, it doesn't just cost time or money; it costs trust.

Accountability means staying connected to the result, even when it’s no longer in your hands. If you want to add value, don’t just do your part. Make sure the whole job gets done right.

However, don’t confuse this with micromanaging. That can be just as harmful to the process.

That’s how you move from being a contributor to being someone others rely on. Reliability, in any organization, is a form of compound value.

Accountability isn’t just about finishing tasks. It’s about thinking beyond them.

Short-Term Savings vs. Long-Term Value

Adding value doesn’t mean chasing short-term efficiency.

Yes, you could stop patching or upgrading servers and save time today. But over time, that decision erodes trust, security, and reputation. The hidden costs of neglect will far outweigh the visible savings.

The best professionals know how to balance short-term gains with long-term sustainability. That’s where true value lives.

Measure, Communicate, Repeat

The formula is simple.

  1. Know what matters to your business. (Revenue, uptime, customer trust, compliance, etc.)
  2. Do things that move those metrics.
  3. Tell the story. Don't assume people see the value; show it to them.

If you can consistently tie your work back to the company’s success, you’ll never have to wonder if you’re doing enough.

In the End: Value Is the Language of Impact

No matter your title — engineer, analyst, or manager — your real job is to increase the return on your company’s investment in you.

When you think in terms of value, you stop working in the business and start working for the business. That’s the difference between being an employee and being an asset.

Do these things well. Think with vision, communicate clearly, and follow through. You’ll move up faster than you expect. Fall short in any of them, and it might be time to look in the mirror.


If you like what you’ve read, please subscribe so you don’t miss my latest posts…

If You’re Not Automating Yourself Out of a Job, Someone Else Will

How embracing automation can secure your tech career, not threaten it.

Many DBAs and Platform Engineers still hesitate when the topic of automation comes up. Some find it daunting. Others quietly fear it, as if automating their work might make them replaceable.

But the truth is simple: if you’re not automating yourself out of a job, someone else eventually will. And if that’s the case, wouldn’t you rather be the one driving the automation rather than the one being replaced by it?

The reality is that the people who build automation rarely lose their jobs; they just redefine them.

The Fear of Automation Is Understandable, but Misplaced

For decades, manual work defined what it meant to be a DBA or Platform Engineer. Backups, patching, deployments, service restarts—those were our bread and butter.

But those same tasks are now the easiest to automate, and that is a good thing.

Automation doesn’t erase expertise; it amplifies it. Every repetitive task you automate buys you back time to focus on what actually requires your judgment: architecture, optimization, and strategy. Automation doesn’t make you less valuable. It makes your expertise scalable.

The Shift from Doing to Designing

The modern engineer’s job is no longer about “keeping the lights on.” It’s about designing systems that keep themselves on. Automation changes your focus from doing to designing:

  • How do we enforce database standards across hundreds of servers?
  • How do we detect configuration drift automatically?
  • How can we deliver new environments faster without compromising security?

If you're the one architecting those solutions, you're not eliminating your role; you're future-proofing it.
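
The drift question above can be sketched in a few lines: keep a baseline of your standards and compare each server's reported settings against it. This is a minimal illustration only; the setting names, values, and server names are hypothetical, and in practice you'd pull the real values from your instances with a tool like dbatools rather than hard-coding them.

```python
# Minimal sketch of configuration-drift detection against a baseline.
# Setting names and server names below are hypothetical examples.

BASELINE = {
    "max_degree_of_parallelism": 8,
    "cost_threshold_for_parallelism": 50,
    "backup_compression_default": 1,
}

def find_drift(server_settings: dict) -> dict:
    """Return the settings that differ from the baseline standard."""
    return {
        key: {"expected": expected, "actual": server_settings.get(key)}
        for key, expected in BASELINE.items()
        if server_settings.get(key) != expected
    }

# Example fleet: one server has drifted from the standard.
fleet = {
    "sql-prod-01": {"max_degree_of_parallelism": 8,
                    "cost_threshold_for_parallelism": 50,
                    "backup_compression_default": 1},
    "sql-prod-02": {"max_degree_of_parallelism": 0,  # drifted
                    "cost_threshold_for_parallelism": 50,
                    "backup_compression_default": 1},
}

for name, settings in fleet.items():
    drift = find_drift(settings)
    if drift:
        print(f"{name}: drift detected -> {drift}")
```

Once a check like this runs on a schedule, drift stops being something you discover during an incident and becomes something your system reports to you.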

The Human Element that Automation Can’t Replace

Automation can execute consistently and at scale, but it can't reason. It doesn't know why a maintenance window matters to a financial system. That is where human experience still leads.

Automation replaces repetition, not reasoning. The best engineers teach systems how to operate, while maintaining ownership of why they operate the way they do.

Automation Is the Gateway to AI

There’s a lot of talk about AI taking over. But the truth is you cannot adopt AI responsibly without first building automation. And as I’ve said before, you can’t scale automation successfully until you’ve first mastered your standards.

AI thrives on structure, consistency, and feedback loops. If your environment is chaotic and manual, AI will only amplify that chaos.

Automation lays the foundation for AI success by ensuring:

  • Reliable data pipelines
  • Repeatable deployment patterns
  • Auditable, policy-driven workflows

In short: you can't just throw AI in and hope it will do your automating for you.

Lead the Change, Don’t Resist It

The best leaders and engineers don't just adapt to automation; they drive it. They show their teams that automation isn't about working less; it's about working smarter. It's how you multiply your impact. Not by doing more yourself, but by enabling your systems and your peers to do more without you.

Automation doesn't make you obsolete; it makes you irreplaceable.

Final Thoughts

Automation isn't a threat; it's just another evolution.

The engineers who embrace it early become architects of efficiency, reliability, and scale.

Those who avoid it end up maintaining what others have already improved.

So ask yourself: do you want to lead the next wave in tech, or take a back seat to those who do?

The next time you hesitate to automate a task, remember: you're not replacing your job; you're rewriting it into something better. And if you don't, someone else will.

In a future post, I'll walk through some of the most effective tools for automating common DBA tasks, from PowerShell and dbatools to Terraform and Azure Automation, and show how to get started building your own foundation.



Leadership Isn’t About Doing More — It’s About Creating More Leaders

When I first stepped into leadership, it wasn’t because I wanted authority or a title. It was because I realized something simple but powerful:

I was good at what I did – but I was only one person.

If I could coach, mentor, and elevate others to think and operate the same way, my impact would multiply far beyond what I could accomplish alone.

That realization changed everything for me.

The Trap of the “Hero Leader”

Too often, I see leaders step in to solve every tough challenge their team faces. They’re the go-to problem-solvers: smart, reliable, efficient. But here’s the truth: that’s not leadership. That’s being an individual contributor with a title.

When a leader becomes the bottleneck for solutions, they're not leading; they're limiting. They might keep the team afloat in the short term, but they rob people of the growth that comes from wrestling with hard problems, making mistakes, and finding their own way forward.

Real leadership means letting go. It’s about resisting the urge to jump in and fix things, even when you know exactly how to do it faster or better. It’s about trading immediate control for long-term capability.

From Solving Problems to Building Problem Solvers

Leadership isn’t about making sure all the problems get solved. It’s about making sure your team learns how to solve them.

  • Will they make mistakes? Absolutely.
  • Will it be painful at times? Of course.

But imagine this: a few months from now, you have a team full of people who can solve complex issues, make confident decisions, and lead others, without you needing to step in.

That’s when you stop being a manager of tasks and become a builder of capability.

That’s when you move from success to lasting significance.

Multiplying Your Impact

The greatest leaders don’t succeed because they do more. They succeed because they create more people who can do.

If you spend your time mentoring and developing others to think critically, take ownership, and stay calm under pressure, your influence compounds. You're no longer responsible for a single output; you're responsible for an ecosystem of performance.

That’s real leadership.

Your Success Is Measured Through Others

As a leader, your success is no longer about what you deliver. It’s about what your team delivers, consistently, sustainably, and independently.

Success doesn’t come from doing. It comes from teaching, guiding, and trusting.

If your team succeeds when you’re not in the room, that’s the true test of leadership.

Final Thought

A great leader doesn’t say, “I solved the problem.”

A great leader says, “My team solved it — and they didn’t even need me this time.”

That’s when you know you’ve done your job.

