Estimated reading time: 13 minutes
Ever had a production app break because an Azure Key Vault secret expired? It’s painful—and usually happens at the worst time. The good news is you can catch it early. With Event Grid publishing Key Vault events and Log Analytics giving you KQL visibility, you can track expirations proactively and trigger alerts (or even automation) before anything goes down.
This comprehensive guide shows you how to proactively monitor Azure Key Vault secret expiration using Kusto Query Language (KQL) in Azure Log Analytics.
By the end of this tutorial, you’ll be able to:
- Connect Azure Key Vault to Log Analytics Workspace
- Write KQL queries to find expiring and expired secrets
- Handle the Azure Monitor 500-column schema limitation
- Parse AdditionalFields for advanced secret monitoring
- Automate expiration alerts and notifications
Table of Contents
- Prerequisites
- Connecting Azure Key Vault with Azure Log Analytics
- How to Verify That Key Vault Logs Are Ingesting into Log Analytics
- Understanding KQL Fundamentals for Azure Key Vault Monitoring
- Understanding the Log Analytics 500-Column Limitation
- How To Find Expiring or Expired Azure Key Vault Secrets
- Frequently Asked Questions
Prerequisites
- Azure subscription with Contributor access
- Existing Azure Key Vault with active secrets
- Log Analytics workspace (or permission to create one)
- Basic familiarity with Azure Portal
- Optional: Understanding of KQL basics (we’ll teach you)
Connecting Azure Key Vault with Azure Log Analytics
Azure Key Vault secret monitoring requires diagnostic settings that route logs to a Log Analytics workspace. Follow these steps to establish the connection.
- Sign in to the Azure Portal with admin credentials
- In the search bar at the top, type your Key Vault name or search for “Key Vaults”
- Select your Key Vault from the results
- In your Key Vault’s left navigation menu, scroll to the Monitoring section and click Diagnostic settings

Configure Log Categories
Azure Key Vault generates several log categories. For secret expiration monitoring, you need specific ones.
- In Diagnostic settings, click Add diagnostic setting

- Under Category groups, select the following:
- Audit: Captures all data plane operations (secret access, creation, deletion)
- AllLogs: Includes Event Grid notifications for secret lifecycle events
- Under Destination details, select Send to Log Analytics workspace, and choose your workspace.
- Click Save.
Allow some time for the data to appear in Log Analytics; initial ingestion can take 15 to 30 minutes.

How to Verify That Key Vault Logs Are Ingesting into Log Analytics
Before writing complex queries, confirm that your Azure Key Vault logs are successfully ingesting into the Log Analytics workspace.
- Open the Azure Portal and search for your Key Vault
- From the Key Vault menu, click Logs (under the Monitoring section)
- Adjust the time range – Click the time picker in the top-right and select “Last 24 hours.”
- Run this verification query:

- Paste the following KQL into the editor and click the blue Run button.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
If you see recent operations, you’re good. If you don’t, widen the time range and generate fresh activity (for example, read a secret), because ingestion can take a few minutes.
What this query does
- AzureDiagnostics: This is your starting table. Imagine a giant master spreadsheet that collects logs from almost every Azure service: firewalls, SQL databases, Key Vaults, and more.
- The pipe |: The pipe character is the glue of KQL. It takes the data from the left (the big table) and pushes it to the right (the next command), usually for filtering.
- where ResourceProvider == "MICROSOFT.KEYVAULT": This command filters the data rows. It looks at the column named ResourceProvider and keeps only the rows that match "MICROSOFT.KEYVAULT".
- In plain English: “I don’t care about databases or firewalls right now. Throw away everything except the logs that came specifically from my Key Vaults.”

Think of this query as a funnel. You start with everything and narrow it down to just the specific resource you care about.
To see all ResourceProviders sending logs to your workspace:
AzureDiagnostics | summarize by ResourceProvider
You can learn more about Resource Providers in the Azure Resource Providers and Types documentation.
Understanding KQL Fundamentals for Azure Key Vault Monitoring
Before diving into secret expiration queries, let’s build a solid foundation in KQL (Kusto Query Language). This section covers the essential operators and concepts you’ll use throughout this guide.
Why learn KQL? It’s the query language for Azure Monitor, Azure Sentinel, Application Insights, and many other Microsoft services. Master it once, use it everywhere.
How KQL Queries Work: The Pipeline Pattern
Think of KQL as a series of pipes. Each operator takes data from the left, transforms it, and passes the results to the right. You start with the big table, filter the results down with where, and then shape the output with project. We will cover all of these details in this post.
TableName
| where <filter>
| extend <new column>
| project <select columns>
| take 10
Understanding TimeGenerated In KQL
TimeGenerated is the timestamp you use to filter “when did this happen?” Start every troubleshooting query by limiting the time window first, then drill down.
Why filter by time first?
- Queries run faster when you scan less data
- Troubleshooting is always time-bound (“what happened in the last hour?”)
- Azure Log Analytics charges by data scanned (time filters reduce costs)
Filtering on TimeGenerated is the normal way to limit the noise:
// Last 24 hours
| where TimeGenerated >= ago(24h)
// Last 2 days
| where TimeGenerated >= ago(2d)
// Specific date range
| where TimeGenerated between (datetime(2026-02-01) .. datetime(2026-02-08))
// Today only
| where TimeGenerated >= startofday(now())
Best Practice: Always start queries with a time filter unless you specifically need historical analysis.
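Putting that best practice together with the earlier Key Vault filter, here is a minimal sketch that applies the time window before any other condition (the 24-hour window and take 10 are example choices, not requirements):

```kusto
AzureDiagnostics
| where TimeGenerated >= ago(24h)               // time filter first: less data scanned
| where ResourceProvider == "MICROSOFT.KEYVAULT" // then narrow to Key Vault logs
| take 10                                        // preview a handful of rows
```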
Converting Unix Time To a Readable DateTime
Azure Key Vault stores secret expiration dates as Unix epoch time (also called Unix timestamps). This is a single number representing seconds since January 1, 1970 00:00:00 UTC.
Example: The number 1769804488 represents the datetime 2026-01-30 20:21:28 UTC.
Why Unix time? It’s compact, unambiguous (no timezone confusion), and easily sortable. But it’s unreadable to humans.
unixtime_seconds_todatetime() is a built-in KQL function that converts a Unix timestamp in seconds into a normal, human-readable datetime. This function expects one input:
- Input: the column name that has the Unix datetime.
- Output: a KQL datetime value you can display, filter, and sort
print
OriginalEpoch = 1769804488,
ConvertedTime = unixtime_seconds_todatetime(1769804488)
Output:
Take a look at the results: one value shows the datetime in epoch format, and the other shows the readable datetime.
| OriginalEpoch | ConvertedTime |
|---|---|
| 1769804488 | 2026-01-30 20:21:28 |

The extend Operator: Adding Calculated Columns
extend is how you add new columns to your results without losing the existing ones. Think of it as “calculate something new and attach it to every row”.
This is super common in KQL because logs often store values in formats that aren’t easy to read (strings, Unix time, JSON fields). With extend, you can create a cleaner, human-friendly version while keeping the raw data for troubleshooting.
| extend <new_column_name> = <calculation>
print OriginalEpoch = 1769804488
| extend NewConvertedTime_Column = unixtime_seconds_todatetime(OriginalEpoch)

The Key Vault logs store an expiry timestamp as Unix seconds. That’s not very human-friendly, so we convert it:
Note: This example may fail in your environment, as eventGridEventProperties_data_EXP_d might not exist in your schema; that part is covered later. The most important thing here is to understand the concept and why extend is used.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where TimeGenerated > ago(20d)
| where OperationName == "SecretExpiredEventGridNotification"
| extend ExpiryUtc = unixtime_seconds_todatetime(eventGridEventProperties_data_EXP_d)

The project Operator: Cleaning Up Output
project is how you clean up your output. Azure diagnostic tables are wide, and most columns aren’t useful. With project, you choose only the columns you want to see. That makes your results easier to read, easier to share, and faster to scan.
| project <column1>, <column2>, <column3>
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| project TimeGenerated, ResourceId, OperationName, ResultType
Notice how only the columns we specified in project are returned in the output.

Renaming columns with project:
project also lets you rename a column. For example, instead of outputting TimeGenerated as is, you can project it as “RecordTime”. Use this pattern to rename any column:
| project
NewName = OriginalColumnName, ...
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| project
RecordTime = TimeGenerated,
KeyVaultName = ResourceId,
Operation = OperationName,
Status = ResultType
Limitation: project requires exact column names; wildcards like Operation* are not supported.
The project-keep Operator: Wildcards and Flexibility
project-keep is like project, but with one key advantage: it supports wildcards. It is great when you want to keep a known set of columns while also keeping a family of related columns.
This matters a lot in Azure logs because you’ll sometimes see columns that share a common prefix, for example, Key Vault Event Grid fields might appear as:
- eventGridEventProperties_data_EXP
- eventGridEventProperties_data_EXP_d
- (and other variations)
With project, you must list each column explicitly. With project-keep, you can keep them all using a wildcard:
| project-keep <column1>, <column_prefix>*
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where TimeGenerated > ago(20d)
| where OperationName == "SecretExpiredEventGridNotification"
| project-keep ResourceId, OperationName, eventGridEventProperties_data_EXP*
This query keeps:
- ResourceId and OperationName (explicitly), and
- Any column whose name starts with eventGridEventProperties_data_EXP…
Important: project-keep is for keeping columns by name, not for calculations. If you want to add something like unixtime_seconds_todatetime(…), compute it first with extend.
Understanding the Log Analytics 500-Column Limitation
Log Analytics limits each table to 500 columns; any additional columns are grouped under the AdditionalFields column. This is one of the reasons you need to understand your own AzureDiagnostics table first.
Why does this matter?
- Small environments (few services) → Event Grid properties appear as direct columns
- Large environments (many services) → Event Grid properties move to AdditionalFields.
To see the number of columns in the AzureDiagnostics schema, run the following query.
AzureDiagnostics
| getschema
| summarize ColumnCount = count()

Run the following KQL and see the output.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
Because this schema has more than 500 columns, the AdditionalFields column appears, and all the overflow columns are stored inside it.

In a smaller environment with fewer than 500 columns, running the same query produces a different output. This is the challenge: you need to know where the columns live in order to query them correctly.
Take a look at the smaller Log Analytics workspace below; it has no AdditionalFields column because the schema is under 500 columns.
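One way to check which case applies to your workspace is to ask getschema whether the Event Grid expiry columns exist as direct columns (a sketch; the prefix shown is the one used throughout this guide):

```kusto
// If this returns no rows, the expiry values likely live inside AdditionalFields
AzureDiagnostics
| getschema
| where ColumnName startswith "eventGridEventProperties_data_EXP"
| project ColumnName, ColumnType
```

If the query returns one or more rows, use the direct-column queries in the next section; otherwise, use the AdditionalFields approach covered afterwards.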

How To Find Expiring or Expired Azure Key Vault Secrets
As shown in the previous section, the table schema plays a vital role in how you write the query. With that solid understanding, we can now write the queries that find expired or expiring Azure Key Vault secrets. We will cover both cases.
Azure Key Vault logs secret events in a column named OperationName. This column records vault and management operations, secret operations, key operations, and certificate operations.
Azure Key Vault OperationName Values Explained
Every Azure Key Vault operation is logged with an OperationName that describes what happened. For secret expiration monitoring, focus on these values:
| OperationName | Meaning | When It Fires |
|---|---|---|
| SecretNearExpiryEventGridNotification | Secret is approaching expiration | 30 days before expiry (default) |
| SecretExpiredEventGridNotification | Secret has expired | Exactly at the expiration datetime |
| SecretSet | Secret was created or updated | Each time a secret is created or updated |
| SecretGet | Secret was read/accessed | Every time an app retrieves the secret |
| SecretList | Secrets were listed | When viewing secrets in the Portal or API |
| SecretDelete | Secret was deleted | When a secret is removed from Key Vault |
To see the values you might find in your OperationName column, read the Operation Name table.
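To see which of these operations actually occur in your own environment, a simple count per OperationName works well (a sketch; adjust the time window as needed):

```kusto
AzureDiagnostics
| where TimeGenerated > ago(30d)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| summarize EventCount = count() by OperationName   // one row per operation type
| order by EventCount desc                          // most frequent first
```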
While OperationName holds the secret’s status, it does not hold the actual expiration date. We need to look at another column that carries the datetime value.
- eventGridEventProperties_data_EXP: This is a Unix epoch time format representing the secret’s expiration date in seconds since January 1, 1970 (UTC).
Depending on the schema, the eventGridEventProperties_data_EXP column may also be named eventGridEventProperties_data_EXP_d. To match whichever name is present, use the wildcard eventGridEventProperties_data_EXP*. But remember: wildcards require project-keep, not project.
Finding Expiring Secrets When Your Table Schema Is Under 500 Columns
Let’s give it a try and run the following. This query works if your schema has fewer than 500 columns.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where TimeGenerated > ago(20d)
| where OperationName == "SecretExpiredEventGridNotification"
| project-keep ResourceId, OperationName, eventGridEventProperties_data_EXP*

To convert the expiry into a human-readable datetime, run it as follows:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where TimeGenerated > ago(20d)
| where OperationName == "SecretExpiredEventGridNotification"
| extend HumanTime = unixtime_seconds_todatetime(eventGridEventProperties_data_EXP_d)
| project-keep ResourceId, OperationName, HumanTime

Running this query transforms the messy raw data into a clean table, immediately highlighting which secrets need your attention before they cause an outage.
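The same pattern works for secrets that are approaching expiration rather than already expired. This sketch (again assuming the eventGridEventProperties_data_EXP_d column exists in your schema) also computes how many days remain using datetime_diff():

```kusto
AzureDiagnostics
| where TimeGenerated > ago(20d)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName == "SecretNearExpiryEventGridNotification"   // approaching, not expired
| extend ExpiryUtc = unixtime_seconds_todatetime(eventGridEventProperties_data_EXP_d)
| extend DaysRemaining = datetime_diff('day', ExpiryUtc, now())     // positive = still valid
| project-keep ResourceId, OperationName, ExpiryUtc, DaysRemaining
```

DaysRemaining gives you a simple number to sort on or alert against, for example to prioritize the secrets closest to expiry.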
Finding Expiring Secrets When Your Table Schema Is Above 500 Columns
If your Log Analytics workspace has more than 500 columns in the AzureDiagnostics table, Azure stores overflow columns in the AdditionalFields column as a JSON string. This section teaches you how to parse that JSON and extract secret expiration data.
Understanding AdditionalFields
AdditionalFields is a text column containing JSON-formatted key-value pairs. It looks like this:
{
"eventGridEventProperties_data_EXP": "1769804488",
"eventGridEventProperties_subject_s": "/subscriptions/.../secrets/my-secret",
"other_property": "value"
}
To query this data, we must:
- Parse the JSON text into a KQL dynamic object
- Extract specific properties
- Convert data types (string to number, number to datetime)
Reading What’s Inside AdditionalFields Using the parse_json() Function
parse_json() converts a JSON string into a KQL dynamic object that you can navigate using dot notation.
| extend ParsedFields = parse_json(AdditionalFields)
In plain English: parse_json() turns the messy “bag of extra columns” into something KQL can navigate
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where TimeGenerated >= ago(30d)
| where OperationName == "SecretExpiredEventGridNotification"
| extend ParsedFields = parse_json(AdditionalFields) // <-- Read Here
After this step, ParsedFields is a dynamic object where you can access nested properties:
- ParsedFields.eventGridEventProperties_data_EXP
- ParsedFields.eventGridEventProperties_subject_s
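Beyond the expiry timestamp, the same dot notation can pull out other useful fields, such as the subject that identifies the secret. A sketch (the _subject_s property name follows the example JSON shown above and may differ in your logs):

```kusto
AzureDiagnostics
| where TimeGenerated > ago(30d)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName == "SecretExpiredEventGridNotification"
| extend ParsedFields = parse_json(AdditionalFields)
// tostring() casts the dynamic property to a plain string column
| extend SecretSubject = tostring(ParsedFields.eventGridEventProperties_subject_s)
| project TimeGenerated, SecretSubject
```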
The tolong() Function: Converting Dynamic Values to Numbers
Even after parsing, ParsedFields.eventGridEventProperties_data_EXP is often treated as a string (or a dynamic value). But unixtime_seconds_todatetime() expects a number representing Unix seconds.
tolong() safely converts dynamic or string values to integers.
Why use tolong():
- Handles type mismatches gracefully
- Returns null if conversion fails (prevents query crashes)
- Required before passing to unixtime_seconds_todatetime()
| extend UnixTime = tolong(ParsedFields.eventGridEventProperties_data_EXP)
| extend ReadableTime = unixtime_seconds_todatetime(UnixTime)
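To try the whole conversion chain without touching real data, you can test it with print and a hard-coded JSON string (a self-contained sketch using the epoch value from earlier):

```kusto
print RawJson = '{"eventGridEventProperties_data_EXP": "1769804488"}'
| extend Parsed = parse_json(RawJson)                               // text -> dynamic object
| extend UnixTime = tolong(Parsed.eventGridEventProperties_data_EXP) // dynamic -> integer
| extend ReadableTime = unixtime_seconds_todatetime(UnixTime)        // integer -> datetime
```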
Complete Query for Schemas Over 500 Columns
This query finds expired secrets when Event Grid properties are in AdditionalFields.
AdditionalFields (JSON text)
↓
parse_json()
↓
ParsedFields (dynamic object)
↓
tolong()
↓
UnixExpiry (integer)
↓
unixtime_seconds_todatetime()
↓
ExpiryDateUtc (datetime)
AzureDiagnostics
| where TimeGenerated > ago(30d)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName == "SecretExpiredEventGridNotification"
// Step 1: Parse the text string into a usable object
| extend ParsedFields = parse_json(AdditionalFields)
// Step 2: Extract, Cast to Long (Number), and Convert to Date
| project
OperationName,
ResourceId,
HumanReadable = unixtime_seconds_todatetime(tolong(ParsedFields.eventGridEventProperties_data_EXP)),
OriginalFormat = ParsedFields.eventGridEventProperties_data_EXP
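Once the query works, a summarized version is often more convenient for an Azure Monitor alert rule, for example counting expired-secret notifications per vault (a sketch; the window and any alert threshold are up to you):

```kusto
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName == "SecretExpiredEventGridNotification"
| extend ParsedFields = parse_json(AdditionalFields)
| extend ExpiryUtc = unixtime_seconds_todatetime(tolong(ParsedFields.eventGridEventProperties_data_EXP))
// One row per vault: how many expiry events, and the latest expiry seen
| summarize ExpiredSecretEvents = count(), LatestExpiry = max(ExpiryUtc) by ResourceId
```

Pointing a log search alert rule at this query (for example, alerting when ExpiredSecretEvents > 0) turns the monitoring into automated notifications.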
Frequently Asked Questions
How do you monitor Azure Key Vault secret expiration?
Enable diagnostic settings on your Azure Key Vault to send logs to Log Analytics. Then use KQL queries to monitor SecretNearExpiryEventGridNotification and SecretExpiredEventGridNotification events. These events are published by Azure Event Grid when secrets approach or reach expiration.
Why does the 500-column limit matter?
Azure Monitor limits each table to 500 columns. When diagnostic logs from multiple services exceed this limit, overflow columns are stored in a JSON field called AdditionalFields. You must use parse_json() to access data in this field.
How much warning do you get before a secret expires?
By default, Azure Key Vault publishes SecretNearExpiryEventGridNotification 30 days before expiration. This gives you a full month to rotate secrets before they expire. You can adjust this threshold in Event Grid subscription settings.
Why are no expiration events showing up?
- Secrets have no expiration date: only secrets with explicit expiration dates trigger notifications.
- Time range is too narrow: notifications only fire at specific points (30 days before expiry, and at expiry).
- Log ingestion delay: wait 15 to 30 minutes after enabling diagnostics.
Conclusion
Having reached this point, you can now read and analyze your own AzureDiagnostics table and query it with confidence, because you know how to find your data wherever the schema puts it.
Looking for more? How about understanding the deep technical details of RepAdmin.