Whenever I ask an AI for help with a nontrivial database task, I'm always extra careful. From experience I know that they are surprisingly bad at this, even though SQL databases are a very mature technology, and many times I have been quite underwhelmed by their performance. I wrote about one such task in 2025, in The cost (savings) of AUTO_SHRINK in Azure SQL, and in 2024 in AIs vs the SQL Server Fill Factor. But that was a long time ago, and we all know that yesterday's AI models are nothing compared to today's!
Maybe the latest SOTA (State Of The Art) models, Opus 4.6 and GPT Codex 5.3, just released (February 2026), are finally useful as a reliable, trustworthy AI-DBA? Just a couple of days ago I had a nontrivial SQL issue that I wasn't sure how to handle, and asked some AIs for help. Read on to find out how it went!
In my day job at the fintech MAJORITY we need
to keep track of our users' balances, i.e. how much money each account contains. To
do this we have a table, AccountBalance, with columns
AccountId, TransactionId,
AvailableBalance, BalanceTime and a couple more. To
capture the full history of how a user's balance has changed, this table is a
Temporal Table. That means that it has a history table
attached to it, and whenever
the base table is updated (e.g. the balance of an Account is updated), a copy
of the old row is inserted into the "history table" first. This way you can
query out the balance for any specific point in time.
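To make that concrete, here is a sketch of what a point-in-time query against such a temporal table looks like, using the table and column names above (the timestamp and AccountId are of course placeholders):

```sql
-- Read an account's balance as it was at a specific moment.
-- FOR SYSTEM_TIME AS OF transparently combines the base table
-- and its history table into one result.
SELECT AccountId, AvailableBalance, BalanceTime
FROM dbo.AccountBalance
FOR SYSTEM_TIME AS OF '2025-06-01T12:00:00'
WHERE AccountId = 42;
```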
Now the problem: we needed to rename a column in the balance table. To do that, you have to first turn off the temporal feature, make the change in both the main table and the history table, and then turn it on again:
```sql
BEGIN TRAN
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = OFF);
EXEC sp_rename
    'dbo.AccountBalance.TransactionId',
    'ExternalTransactionId', 'COLUMN';
EXEC sp_rename
    'dbo.AccountBalanceHistory.TransactionId',
    'ExternalTransactionId', 'COLUMN';
ALTER TABLE dbo.AccountBalance
SET (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.AccountBalanceHistory)
);
COMMIT TRAN
```
While we are not nearly as big as any of the big banks, and have only had our own ledger running for less than a year, we still have a decent number of users. We are not TB-sized, but the history table still holds a substantial number of rows. On top of that, while we mainly see activity during American daytime, there are at least a few queries running against this table every minute of the day.
The question is: can I run this script safely in our production database without any downtime? What do you think?
I do think I'm decent at handling relational DBs (see e.g. the Tantan talks at https://www.viblo.se/talks/), but I'm no DBA, and I don't do DB tasks every day. So let's do what everyone does nowadays: ask an AI for help!
Below is the prompt I used. It is of course not perfect, but it does represent how I usually prompt AIs about these kinds of issues. Sure, it could be better to specify how big the table is, or to give information about the table's schema, indexes, constraints, or any number of other things that might affect the outcome. However, if I already knew exactly what was important, I wouldn't need to ask the AI for help in the first place!
I need to run the following against our production DB (Azure SQL, the table is quite big). I will run it during low traffic hours. There will only be a few queries per minute to the table, each one only a few ms in duration. Is it safe to run the query without impacting other traffic? If not, how can I minimize the downtime?

```sql
BEGIN TRAN
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = OFF);
EXEC Sp_Rename 'dbo.AccountBalance.TransactionId', 'ExternalTransactionId', 'COLUMN';
EXEC Sp_Rename 'dbo.AccountBalanceHistory.TransactionId', 'ExternalTransactionId', 'COLUMN';
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountBalanceHistory));
COMMIT TRAN
```
Before we dive into their answers, let's specify what a good answer would be.
It should at least highlight that the query needs to lock the table with
schema modification lock, which might be difficult to get if there are many
queries. It should also highlight that enabling system versioning can be very
slow if it's a big table, since it will validate that the
ValidFrom/ValidTo columns are consistent on all
rows. To mitigate this, it should suggest running the last statement with the
DATA_CONSISTENCY_CHECK setting set to OFF to speed it
up. (This is safe in this case since we only modify the names of
columns, not the contents of the rows themselves).
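Concretely, that mitigation only changes the last statement, something like this sketch:

```sql
-- Re-enable versioning without the full-table consistency scan.
-- Safe here because only column names changed, not row contents.
ALTER TABLE dbo.AccountBalance
SET (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.AccountBalanceHistory,
     DATA_CONSISTENCY_CHECK = OFF)
);
```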
It would also be fine to ask for more information, for example how big the history table is before deciding on if the check should be turned off or not.
What it shouldn't do is suggest removing the transaction, which is very dangerous, especially without disabling the consistency check!
I tested with all the SOTA frontier models, plus DeepSeek, which I happened to get randomly on Arena (formerly LMArena).
Unfortunately, only one agent gave a good answer. Let's start with that one. Somewhat surprisingly to me, it was Gemini that delivered the best answer. It's actually a great answer: it has all the info needed to make a good decision.
The script you provided is logically correct for renaming columns in a system-versioned table. However, regarding your question about safety and minimizing downtime on a "quite big" table, there are two critical risks you need to address: locking blocking chains and data consistency checks.
Is it safe?
Yes, but it will cause a brief "stop-the-world" moment for that table.
- The Lock: `sp_rename` and `ALTER TABLE` require a Schema Modification Lock (Sch-M).
- The Impact: While this transaction is open, ALL access to `AccountBalance` (SELECT, INSERT, UPDATE) will be completely blocked.
- The Risk: If a long-running query is executing when you run this script, your script will wait behind it. While your script waits, it blocks all new incoming queries, potentially causing a massive pile-up (blocking chain) even before your script actually runs.
How to minimize downtime
To ensure the downtime is truly just "a few ms," you should modify your script to include a lock timeout and disable the default consistency check.
1. Add `DATA_CONSISTENCY_CHECK = OFF`

By default, when you turn `SYSTEM_VERSIONING = ON`, SQL Server performs a full scan of the table to ensure the temporal data is consistent. On a large table, this could take minutes or hours, holding the exclusive lock the entire time. Since you are only renaming a column inside a transaction and not modifying data, you can safely skip this check.

2. Add `SET LOCK_TIMEOUT`

To prevent your migration from inadvertently causing a pile-up if it gets stuck behind a stray query, set a timeout. If it can't get the exclusive lock within 5 seconds, it's safer for it to fail than to queue up 500 web requests behind it.
Recommended Script
Here is the optimized version of your script:
```sql
-- Safety: Stop trying if we can't get the lock within 5 seconds
-- This prevents the migration from blocking valid traffic while it waits for a straggler query
SET LOCK_TIMEOUT 5000;

BEGIN TRAN

-- 1. Turn off versioning (Exclusive Lock acquired here)
-- This separates the current table from the history table
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = OFF);

-- 2. Rename the column in the main table
-- This is a metadata-only operation (very fast)
EXEC sp_rename 'dbo.AccountBalance.TransactionId', 'ExternalTransactionId', 'COLUMN';

-- 3. Rename the column in the history table
-- Required to maintain schema parity
EXEC sp_rename 'dbo.AccountBalanceHistory.TransactionId', 'ExternalTransactionId', 'COLUMN';

-- 4. Turn versioning back on WITHOUT a full data scan
-- CRITICAL: DATA_CONSISTENCY_CHECK = OFF prevents a full table scan
ALTER TABLE dbo.AccountBalance
SET (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.AccountBalanceHistory,
     DATA_CONSISTENCY_CHECK = OFF));

COMMIT TRAN
```

Summary of Execution Plan
- Check for active transactions: Before running, run `sp_whoisactive` or check `sys.dm_tran_locks` to ensure no long-running reports are touching `dbo.AccountBalance`.
- Run the script: With `DATA_CONSISTENCY_CHECK = OFF`, the execution time will be dominated purely by metadata updates (milliseconds), regardless of table size.
- Verify: Check that versioning is active and the column names are updated.
The other ones had various versions of bad or incomplete responses. Let's look at what will most likely be everyone's favorite coding model, Claude Opus 4.6. It lists a number of alternatives that I can pick from to improve the query. Its number one pick is "Option 1: Remove the Explicit Transaction (Preferred)". As I noted above, this is not good advice. If the transaction is removed, the query can get stuck on enabling the temporal table for a potentially very long time. And if you cancel it, you are left without historic data.
Analysis of Your Schema Change on Azure SQL
This query carries real risk of blocking other traffic, even during low-traffic hours. Let me break down why and how to mitigate it.
What Happens Under the Hood
1. `SYSTEM_VERSIONING = OFF` — Acquires a Schema Modification Lock (Sch-M)

This requires an exclusive Sch-M lock on `dbo.AccountBalance`. A Sch-M lock is incompatible with every other lock type, including the Sch-S (Schema Stability) locks that even simple `SELECT` queries take.

2. `sp_rename` — Also requires Sch-M locks

Each rename acquires a Sch-M lock on the respective table. You're renaming on both the main and history tables.

3. `SYSTEM_VERSIONING = ON` — Another Sch-M lock

Re-enabling versioning also requires schema modification locks and performs a consistency check between the two tables.

4. All inside a single explicit transaction

This is the critical problem. The Sch-M lock acquired by the first statement is held for the entire duration of the transaction — across all four statements. This maximizes the lock window.
The Danger: Lock Queuing & Convoy Effect
Even with "a few queries per minute, each only a few ms":
- Your `ALTER` needs a Sch-M lock, so it waits for any in-flight queries to finish (they hold Sch-S locks).
- While your `ALTER` is waiting, it queues a Sch-M request.
- All new queries queue behind the pending Sch-M lock — they can't get Sch-S while Sch-M is waiting.
- This creates a convoy effect: even one long-running query ahead of you blocks you, and you block everyone behind you.

⚠️ The wrapping `BEGIN TRAN`/`COMMIT TRAN` makes this significantly worse because it holds Sch-M across all four operations instead of acquiring and releasing it for each independently.
Recommendations to Minimize Impact
Option 1: Remove the Explicit Transaction (Preferred)
Run each statement independently and rely on short, individual lock windows:
```sql
-- Step 1: Turn off system versioning (brief Sch-M lock, then released)
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = OFF);
GO

-- Step 2: Rename column on main table (brief Sch-M lock)
EXEC sp_rename 'dbo.AccountBalance.TransactionId', 'ExternalTransactionId', 'COLUMN';
GO

-- Step 3: Rename column on history table (brief Sch-M lock)
EXEC sp_rename 'dbo.AccountBalanceHistory.TransactionId', 'ExternalTransactionId', 'COLUMN';
GO

-- Step 4: Re-enable system versioning (brief Sch-M lock)
ALTER TABLE dbo.AccountBalance
SET (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.AccountBalanceHistory));
GO
```

Why this is better: Each statement acquires and releases its Sch-M lock independently (milliseconds each), rather than holding one across all four operations.
Trade-off: Between steps 1 and 4, the table is not system-versioned. This means:
- No temporal history is recorded for changes in that window
- The history table is a standalone table briefly (no FK-like enforcement)
- If step 3 or 4 fails, you need to manually recover
This is usually acceptable for a rename that takes < 1 second total.
Option 2: Use Lock Timeout to Avoid Convoy
If you want to keep the transaction for atomicity, set a short lock timeout so you fail fast instead of causing a convoy:
```sql
SET LOCK_TIMEOUT 5000; -- Fail after 5 seconds instead of waiting indefinitely

BEGIN TRAN
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = OFF);
EXEC sp_rename 'dbo.AccountBalance.TransactionId', 'ExternalTransactionId', 'COLUMN';
EXEC sp_rename 'dbo.AccountBalanceHistory.TransactionId', 'ExternalTransactionId', 'COLUMN';
ALTER TABLE dbo.AccountBalance
SET (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.AccountBalanceHistory));
COMMIT TRAN
```

If it times out, retry — no harm done.
Additional Safeguards
- Kill long-running queries on the table before executing (`sp_who2`, `sys.dm_exec_requests`)
- Check for open transactions first: `DBCC OPENTRAN`
- Avoid running during scheduled jobs that might touch `AccountBalance`
TL;DR
| Aspect | Your Original Script | Recommended (Option 1) |
|---|---|---|
| Lock duration | Entire transaction | Milliseconds per statement |
| Convoy risk | High | Minimal |
| Atomicity | Full rollback | Manual recovery if mid-way failure |
| Practical risk | Low-traffic ≠ no-traffic | Safe even with concurrent queries |

Go with Option 1 unless you have a strong need for atomicity. For a column rename, the risk of a partial failure is far lower than the risk of a lock convoy in production.
To save space and not make this blog post too long, I've skipped the other responses, but they were similar, except for OpenAI GPT 5.3 Codex. It didn't suggest removing the transaction, and it did mention the consistency check. However, it did so only as a side note that I think most readers would miss in the wall of text it produced.
| Agent | Rating | Comment |
|---|---|---|
| Claude Opus 4.6 | Bad | |
| Claude Opus 4.5 | Bad | Same as Opus 4.6. |
| Gemini 3 Pro | Great! | |
| GPT 5.3 Codex | Neutral | Not good, but also not completely dangerous. |
| GPT 5.2 Codex | Bad | Same type of answer as Claude Opus. |
| DeepSeek 3.2 | Bad | Same type of answer as Claude Opus. |
So are the top AI models SQL experts yet? No! When two of the top three frontier models give dangerous advice for my non-exotic scenario, we are not there yet.
If you do use AI for database related development, be very aware that their knowledge is limited, and their advice cannot be followed blindly!
So how did it go with our production table? Well, first I ran the query as is,
and it didn't complete, blocking the table until I canceled it manually after
half a minute. Not good, but also not a massive issue. I then handled the
problem the old-fashioned way: a colleague and I sat down and discussed the
issue, remembered that DATA_CONSISTENCY_CHECK exists, and a
quick Google search later the query completed in under a second.
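For completeness, the version that finally worked was essentially the original script with the consistency check disabled; this is a sketch, not necessarily the exact migration we ran:

```sql
BEGIN TRAN
ALTER TABLE dbo.AccountBalance SET (SYSTEM_VERSIONING = OFF);

EXEC sp_rename 'dbo.AccountBalance.TransactionId', 'ExternalTransactionId', 'COLUMN';
EXEC sp_rename 'dbo.AccountBalanceHistory.TransactionId', 'ExternalTransactionId', 'COLUMN';

-- The crucial difference: skip the full history scan when re-enabling versioning
ALTER TABLE dbo.AccountBalance
SET (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.AccountBalanceHistory,
     DATA_CONSISTENCY_CHECK = OFF)
);
COMMIT TRAN
```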