Locked Logins Query

The following query uses the status bits in master..syslogins to identify locked logins:

select name from master..syslogins where status&2=2
go



The various status bits in syslogins correspond to the values below. To find logins with a given flag, substitute the 2 in the query above with the relevant value. This also works for 1536, which is a combination of bits, because the status & N = N test matches every bit in N:

2 – Account is locked.
4 – Password has expired. This is set on the user’s first login attempt after expiration.
8 – Indicates that the value of exempt inactive lock is set to TRUE. It is not applicable for login profile rows.
16 – OMNI:autoconnect mode is enabled.
32 – May use SAP ASE internal authentication mechanism.
64 – May use LDAP external authentication.
128 – May use PAM external authentication.
256 – May use Kerberos external authentication.
512 – Indicates a login profile.
1536 – Indicates a default login profile.
2048 – Indicates an authentication mechanism specified in a login profile.
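As an illustrative sketch only (not part of any build), the same bit test can be expressed outside the database, for example in Python, using the flag values from the list above:

```python
# Sketch: decode a syslogins status value into its flag descriptions.
# Values are taken from the list above; 1536 spans two bits, but the
# status & N = N test still matches it correctly.
STATUS_FLAGS = {
    2: "Account is locked",
    4: "Password has expired",
    8: "Exempt inactive lock is TRUE",
    16: "OMNI:autoconnect mode is enabled",
    32: "May use SAP ASE internal authentication",
    64: "May use LDAP external authentication",
    128: "May use PAM external authentication",
    256: "May use Kerberos external authentication",
    512: "Login profile",
    1536: "Default login profile",
    2048: "Authentication mechanism specified in a login profile",
}

def decode_status(status):
    """Return the description of every flag fully set in the bitmask."""
    return [desc for bit, desc in STATUS_FLAGS.items() if status & bit == bit]
```

Note that decoding 1536 also reports the 512 flag, which is accurate: a default login profile row is still a login profile row.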

Keep-alive and chrome refresh scripts

Keep-alive is a VBScript file as follows:

Set WshShell = WScript.CreateObject("WScript.Shell")
a = 0
Do While a < 30000
    a = a + 1
    WScript.Sleep 595000 ' wait just under 10 minutes
    WshShell.SendKeys "{F15}" ' send a harmless key press to keep the session alive
Loop

Chrome refresh is a PowerShell script launched from a bat file:

Bat file:

cd H:
psa chrome_loop.ps1

PowerShell script:

while(1) { # Loop forever
    Sleep -Seconds 899 # Wait ~15 minutes
    $wshell = New-Object -ComObject wscript.shell
    if($wshell.AppActivate('Chrome')) { # Switch to Chrome
        Sleep 1 # Wait for Chrome to activate
        $wshell.SendKeys('{F5}') # Send F5 (Refresh)
    } else { break } # Chrome not open, exit the loop
}

In-Memory OLTP Guide

Table Of Contents
  1. Document Notation
  2. SQL Server In-Memory OLTP Overview
  3. In Memory Objects
  4. In Memory Planning
  5. In Memory Identification  
  6. Monitoring In Memory 
  7. Using Resource Pools With In Memory
  8. In Memory Disk Space Usage
  9. In Memory High Availability
  10. In Memory Object Maintenance
  11. Backup Restore And Requirements For RAM
  12. Hash Indexes And Hash Buckets On In Memory Tables
  13. Configuration and Setup – Guidance For Stable Environments
  14. Administrating In Memory
  15. In Memory Troubleshooting

Document Notation

Abbreviations:

GUI = Graphical User Interface

AD = Active Directory

HA = High Availability

DR = Disaster Recovery

SSMS = SQL Server Management Studio

OOM = Out of Memory

Terminology:

In Memory = In-Memory OLTP

In Memory tables = Memory-optimized tables

In Memory table types = Memory-optimized table types

Primary Replica = A SQL instance on a node hosting an availability group where the databases are in read / write mode. This replica sends log data to the secondary replicas.

Secondary Replica = A SQL instance on a node hosting an availability group where the databases are in read only or unavailable mode. This replica receives log data from the primary replica.

SQL Server In-Memory OLTP Overview

SQL Server In-Memory OLTP is a technology embedded within SQL Server since SQL Server 2014. It consists of an engine referred to as 'Hekaton' that runs alongside the traditional disk based query processing engine. Traditional SQL Server operations that occur in the data cache could also be described as "in memory"; however, In-Memory OLTP refers to operations performed by the Hekaton engine on specific memory optimized objects such as in memory tables. This combination can significantly improve the performance of transaction processing, data ingestion and data load, and transient data scenarios.

Note: whilst In-Memory OLTP is largely compatible with traditional SQL databases there are some limitations on feature support and interoperability. These are listed in the appendix.

In Memory Objects

There are 3 types of in memory object. The following describes each.

Memory-optimized tables – Used to store data in a similar way to traditional tables. There are two sub types:

* Durable – Data is stored permanently in memory and on disk and is available after a server restart

* Non durable – Data is transient, stored only in memory, and not available after a server restart

Memory-optimized table types – Used to store transient data as an alternative to traditional temporary tables or table variables.

Natively compiled T-SQL modules – Three kinds of T-SQL module can be natively compiled: stored procedures, triggers, and scalar user-defined functions. Natively compiled modules are stored as .dll files on the host file system.

In Memory Planning

Use of in memory technology and its suitability for individual applications is out of scope for this guide. The following section is concerned with ensuring that when an application does use in memory, the environment is adequately sized.

RAM AND DISK REQUIREMENTS

RAM use by durable and non durable objects should be estimated as accurately as possible for each database using in memory.

Example

Database Name    Non Durable (GB)    Durable (GB)
IN_MEMDB1        1                   13
IN_MEMDB2        0                   5
IN_MEMDB3        4                   7

Minimum RAM required = (Total Non Durable + Total Durable) + 30% of (Total Non Durable + Total Durable) + Non in memory SQL requirements + 4GB for OS

e.g. 5GB (total non durable) + 25GB (total durable) + 9GB (30% overhead) + 16GB (non in memory SQL) + 4GB (OS) = 59GB minimum RAM in the server.

Minimum RAM required should be treated as a rough guideline unless the application estimates are very accurate. It is highly recommended to over spec RAM for in memory applications, as out of memory conditions are best resolved by having more RAM available. On physical machines especially, there is a significant time delay in adding RAM. In the example, where 59GB was calculated from best guesses, it may be prudent to go for 64GB if unsure.

Note: if the server co-hosts other applications, "4GB for OS" should be increased to accommodate their requirements.
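The RAM formula above can be sketched as a quick calculation; the function name and the 4GB OS default below are illustrative only:

```python
def min_ram_gb(non_durable_gb, durable_gb, other_sql_gb, os_gb=4):
    """Minimum RAM estimate per the formula above: (ND + D), plus 30% of
    (ND + D), plus non in memory SQL requirements, plus RAM for the OS."""
    in_mem = non_durable_gb + durable_gb
    return in_mem + 0.30 * in_mem + other_sql_gb + os_gb

# Worked example from the tables above:
# ND = 1 + 0 + 4 = 5GB, D = 13 + 5 + 7 = 25GB, other SQL = 16GB, OS = 4GB
# gives roughly 59GB, matching the text.
```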

Minimum Disk Space Required

By default the E: drive will hold the in memory "on disk" related files. RAM requirements need to be known at this stage.

Example

Type                                                  Size in GB
Total Durable RAM GB * 4                              100
Total size GB of on disk datafiles (mdf/ndf) on E:    300
Min 25% overhead                                      100

Minimum Disk Space Required for E: = (Total Durable RAM GB * 4) + Total size GB of on disk datafiles + Min 25% overhead

e.g. 100GB + 300GB + 100GB = 500GB
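The disk formula can likewise be sketched as a quick calculation (the names and the 25% default are illustrative, matching the formula above):

```python
def min_disk_gb(durable_ram_gb, datafile_gb, overhead=0.25):
    """Minimum E: drive estimate: (durable RAM x 4) for the in memory
    on disk files, plus mdf/ndf datafile size, plus at least 25% overhead."""
    base = durable_ram_gb * 4 + datafile_gb
    return base * (1 + overhead)

# Worked example from the table above: 25GB durable RAM -> 100GB,
# plus 300GB of datafiles, plus 25% overhead = 500GB.
```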

Note: As soon as an in memory object of any kind is created in a new database the in memory filegroup on disk gets pre-populated with checkpoint files. On <=8GB RAM instances this is approximately 135MB of disk space, on >8GB RAM it is approximately 2GB. 

DR / UAT / DEV SERVER REQUIREMENTS

In order to transfer a production database to another environment the target server should have enough RAM and Disk Space to accommodate it.

* The DR server must have identical RAM and disk space to production.

* UAT and DEV servers should have enough RAM and disk space to accommodate all or a subset of the production databases. This decision is application specific. If the databases contain durable objects then both disk space and RAM should be considered on the target; if only non durable objects are contained then only RAM needs to be considered.

* Conversely, if a database needs to be transferred from non prod to prod, the target should have at least the same disk and RAM specifications as the source.

* Any RAM/disk upgrades to production and DR should also be reflected in non prod where applicable.

In Memory Identification  

In Memory is seamlessly integrated into SQL server and can be administered using SSMS and TSQL commands. There are also numerous DMVs containing in memory object related information. It is useful however to be aware of additional components and differences inside and outside of SQL. 

IN MEMORY DATABASE FILEGROUP

To use any in memory objects the user database requires a memory-optimized filegroup; this can be added to an existing database or specified at initial creation.

An example create database statement is below. Note the FILEGROUP ... CONTAINS MEMORY_OPTIMIZED_DATA section.

CREATE DATABASE [inmemDB]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'Data', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 102400KB ),
 FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = 500MB)
 LOG ON
( NAME = N'Log', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.ldf' , SIZE = 102400KB , MAXSIZE = 2048GB , FILEGROWTH = 102400KB )

This special filegroup is logically associated with one or more "containers". A container is a folder/directory on the host file system; the example above has one container, located at E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1.

Containers for filegroups should be located in subdirectories off the E: drive at this location

E:\MSSQL13.<instancename>\MSSQL\DATA

e.g. for a default instance 

E:\MSSQL13.MSSQLSERVER\MSSQL\DATA

Inside the container folder(s) are subdirectories containing in memory object related files such as .dll files and checkpoint files (discussed later).

Key points are

* In memory objects are stored on the filesystem and use disk space

* They require the same access rights as .mdf/.ldf files

* They should not be deleted/modified outside of SQL 

* Durable in memory objects can use up a lot of disk space and the MAXSIZE parameter must be carefully considered

* Only one in memory filegroup is allowed per user database although it may have multiple containers

SERVER PROPERTY ISXTPSUPPORTED

To verify a SQL instance can support the in memory feature run the following t-sql command.

select SERVERPROPERTY('IsXTPSupported')

This should return 1 for the DB Engineered versions of SQL as they are Enterprise Edition. If it returns 0 then in memory cannot be used.

SP_BLITZINMEMORYOLTP

In memory objects can be difficult to identify quickly. In SSMS in memory tables and natively compiled stored procedures appear the same as non in memory equivalents. This can hide the presence of in memory objects and cause confusion. To get a snapshot of all in memory objects run dba.dbo.sp_BlitzInMemoryOLTP. This provides detailed information including 

* What databases contain in memory objects

* Current RAM and on disk space usage of in memory objects

* Names of in memory objects and their types (table / module / table type)

* Locations of filegroups, containers and details of checkpoint file status

* Hash index information

With the output from sp_BlitzInMemoryOLTP it is possible to see what is “in memory” and take action appropriate to the technology.

The main contributors to sp_BlitzInMemoryOLTP are Ned Otter, Konstantin Taranov and Aleksey Nagorskiy, who are leading experts on in memory for SQL Server. The procedure should be installed as part of the additional build configuration detailed in the section Configuration And Setup.

Monitoring In Memory 

In memory objects use RAM and often some file system space. How much depends on the type of object and whether its data is durable or non-durable.

The following describes the resource usage of each type.

Durable table
RAM usage: Yes, amount depends on application requirements.
Disk usage: Yes, this varies but an approximate guide is to allow 4x the RAM usage.
Notes: If an in memory durable table requires 1GB of RAM it will need approximately 4GB of disk space. By default the on disk components are stored under subdirectories of E:\MSSQL13.<instancename>\MSSQL\DATA, e.g. for a default instance E:\MSSQL13.MSSQLSERVER\MSSQL\DATA.

Non-durable table
RAM usage: Yes, amount depends on application requirements.
Disk usage: Yes.
Notes: Data is available across all sessions until the instance is restarted. By default the on disk components are stored under subdirectories of E:\MSSQL13.<instancename>\MSSQL\DATA. The amount required varies; sizes can go up to 5GB, see section Overview for details.

Table type
RAM usage: Yes, amount depends on application requirements.
Disk usage: Yes.
Notes: Data is specific to the session/spid that created it, within the batch only. Like traditional table variables it is removed when the batch completes, so RAM usage is highly transient. By default the on disk components are stored under subdirectories of E:\MSSQL13.<instancename>\MSSQL\DATA. The amount required varies; sizes can go up to 5GB, see section Overview for details.

Natively compiled module
RAM usage: Yes, a very small amount, < 100KB.
Disk usage: Yes, a very small amount for the .dll and associated files, < 1024KB.
Notes: By default the on disk components are stored under subdirectories of E:\MSSQL13.<instancename>\MSSQL\DATA\xtp, e.g. for a default instance E:\MSSQL13.MSSQLSERVER\MSSQL\DATA\xtp.

The key point to note from the above is that durable tables need the most management: both RAM and potentially large amounts of disk space are required. They will also grow until data is explicitly deleted by the application, in the same way an on disk table would. Non durable tables also need disk space.

VIEWING IN MEMORY OBJECT USAGE VIA SSMS

SQL Server Management Studio (SSMS) version 16 and above contains a high level report detailing durable and non-durable memory-optimized tables. It is accessible via

Right Click on database name – Reports – Memory Usage By Memory Optimized Objects

This report does NOT include active Memory-optimized table types RAM usage or give details of on disk space usage but can be useful in getting a quick overview of how much RAM is in use by a single database. 

VIEWING IN MEMORY OBJECTS VIA DMVs

There are numerous DMVs available to monitor in memory objects. Many of these are referenced in the stored procedure sp_BlitzInMemoryOLTP described here. Examination of this stored procedure can help provide the basis of custom scripts and ad hoc queries. Other useful queries are detailed in section Administrating In Memory. An example query that details RAM usage for currently active Memory-optimized table types is below.

SELECT type, name, memory_node_id, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'

Using Resource Pools With In Memory

Resource pools are logical containers used to define a specific percentage of RAM available to a database's in memory objects. Only databases bound to the resource pool may use this space. This prevents in memory objects from consuming all the available memory, stops other processes from consuming space reserved for in memory objects, and allows monitoring to alert when a specific database bound to a pool is getting close to its allocated memory.

Note: The actual size of available space inside a resource pool can change due to the fluctuations in availability of total free RAM within the instance. This occurs less on instances with larger amounts of RAM. 

An instance may contain multiple resource pools, but a database can only be bound to one at a time. The bank design standards require that each user database containing in memory objects goes in its own pool. This allows finer grained control and monitoring.

Resource Pools And In Memory Table Allocation

Creating resource pools, adding databases to resource pools and other administrative tasks are documented in section Resource Pool Viewing, Creation And Modification.

Inside a resource pool only a certain percentage can be used by Memory-optimized tables; this percentage depends on how much RAM is available to the instance. The figure is shown in the committed_target_kb column of sys.dm_os_sys_info. The following table shows the percentages based on the RAM available.

committed_target_kb (GB)    Percentage allocated for in memory tables
<= 8GB                      70%
<= 16GB                     75%
<= 32GB                     80%
<= 96GB                     85%
> 96GB                      90%

Source:

https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/bind-a-database-with-memory-optimized-tables-to-a-resource-pool?view=sql-server-ver15

For example, if a 10GB resource pool is allocated on an instance with <= 8GB available, only 7GB will be available for Memory-optimized tables. Therefore when creating resource pools the size should be allocated based on this percentage.

NOTE: Non in memory tables and other objects are not affected by RAM restrictions in resource pools.
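The percentage table above can be turned into a quick sizing calculation. The function below is an illustrative sketch, working in GB for readability (committed_target_kb itself is reported in KB):

```python
def in_memory_table_allocation_gb(pool_gb, committed_target_gb):
    """Portion of a resource pool usable by memory-optimized tables,
    based on the committed_target_kb percentage table above."""
    if committed_target_gb <= 8:
        pct = 0.70
    elif committed_target_gb <= 16:
        pct = 0.75
    elif committed_target_gb <= 32:
        pct = 0.80
    elif committed_target_gb <= 96:
        pct = 0.85
    else:
        pct = 0.90
    return pool_gb * pct

# Example from the text: a 10GB pool on an instance with <= 8GB committed
# target leaves only 7GB for memory-optimized tables.
```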

Binding Databases To Resource Pools

After a resource pool is created, the database containing in memory objects needs to be added to it. This is referred to as binding. The binding only becomes effective when the database is taken offline and brought back online. An alternative technique is to bind the database and perform an AlwaysOn failover and failback. This is described in section Binding A Database To A Resource Pool.

In Memory Disk Space Usage

As mentioned previously, in memory tables with durable data require disk space. As an approximate guide they require 4 times the amount of disk space as the data stored in RAM, e.g. 8GB of RAM usage needs 32GB of disk space. In certain circumstances in memory durable tables can consume considerably more than expected, and if there is insufficient disk space the databases can go into a suspect state. Knowledge of how durable objects use space, correct planning, and suitable maintenance are required to maintain stability.

Transactions And The In Memory Checkpoint Process

All in memory transactions on durable data go through the normal database transaction log and use up space within it until they are flushed to disk by the in memory checkpoint process. The in memory checkpoint process is a second checkpoint process run by the Hekaton engine on its own threads. It is designed with high performance in mind and only triggers automatically once in memory transactions have produced 1.5GB* of log since the last checkpoint. This can cause slower recovery times for in memory databases, as potentially there are more transactions to replay.

The checkpoint process has a secondary role: merging the data/delta files on disk. This separate process combines data/delta files into a smaller number of files to reduce disk space and speed up query performance. The merge happens asynchronously, and the disk space used by data/delta files can be observed to grow and shrink between bursts of transactions as it catches up. Sustained transactions can cause disk space usage to grow considerably, and the amount of disk space required should be based on the size of the data/delta files before they are merged.

This guide is primarily concerned with stability, however it is worth noting that in memory durable tables performing transactions require additional IOPS capacity to allow for the file merge process overhead. Microsoft recommends 3x the IOPS capacity required for disk based tables.

Important !!! Databases containing in memory objects should have a log size of at least 5GB

*12GB when large checkpoint mode is enabled see section Large Checkpoint Mode

Controlling Disk Space Use

The amount of disk space used by in memory objects is governed by how much data is stored and the transactional characteristics of the application. This is complicated by the relationship between the size of the table in RAM and the size needed on disk. It is primarily the job of the application development team to ascertain how much space is required and to run the application on hardware with enough resources. The application team will also have to ensure that in memory table sizes are controlled through suitable data purging and that application monitoring is in place. Whilst filegroups can be capped at a certain size to prevent filling the disk, once that cap is reached the database will stop working and go into a suspect state. It can often be resumed using techniques detailed in section In Memory Troubleshooting, however stability will be affected. Thorough testing, correct planning and ongoing monitoring is the best way to control disk space.

In Memory High Availability

Transactional replication, SQL AlwaysOn and log shipping can be used with in memory objects. Database mirroring does NOT work with in memory objects.

Whilst memory-optimized tables are supported, only the data from tables with durable data is replicated. If a failover occurs, memory-optimized tables defined as schema only need to be manually repopulated on the new primary node. Secondary replicas defined as read only can only be used to provide read only access to durable memory-optimized tables.

In Memory Object Maintenance

Whilst in memory objects appear very similar to their on disk equivalents, many of the traditional scheduled maintenance jobs are not required or are implemented in a different way.

Maintenance for traditional disk based tables consists of three main activities

  1. Consistency checks using DBCC commands
  2. Reindexing and reorganisation of indexes
  3. Statistics updates

The following describes the maintenance required for in memory objects and how it compares with traditional activities.

Consistency checks
In memory equivalent: Perform a full backup or restore.
Required if: Memory-optimized tables with durable data exist.
Notes: A checksum is calculated for every block written to data/delta files; those checksums are recalculated any time the block is read, such as during a full backup. If the backup completes successfully it indicates that there is no corruption. Backup failures should be investigated thoroughly. Corruption can also be detected when restoring a backup file.

Reindexing and reorganisation of indexes
In memory equivalent: None.
Required if: N/A.
Notes: In memory indexes cannot be reindexed or reorganised. A table with an inefficient hash index can be identified and altered to optimise it, or the index replaced with a more suitable one, as discussed in section Hash Indexes And Hash Buckets On In Memory Tables. This is not in the scope of scheduled maintenance.

Statistics updates
In memory equivalent: Statistics updates with recompile.
Required if: Memory-optimized tables exist.
Notes: Databases at compatibility level 130 or higher with in memory tables can have the Auto Update Statistics option set to on. Additionally the standard update statistics job should be run. This has been modified to also recompile any natively compiled T-SQL modules found, as they will not automatically recompile as traditional stored procedures do. See for further details.

Use Of DBCC Commands

DBCC commands with REPAIR options do NOT work on in memory tables. If a table is corrupted it must be recreated and repopulated or the data restored from a last known good backup.

DBCC CHECKTABLE is not compatible with in memory tables

An attempt to run it will result in an error message similar to below

Object ID 1957582012 (object 'badhashInMem'): The operation is not supported with memory optimized tables. This object has been skipped and will not be processed.

Backup Restore And Requirements For RAM

The standard T-SQL BACKUP and RESTORE commands are fully compatible with databases containing in memory objects.

The following MUST be noted for restore operations. 

Unlike disk-based tables, memory-optimized tables with durable data must be loaded into memory before the database is available for user access. This requirement adds a new step in the database recovery. If the server does not have enough available memory, database recovery fails and the database is marked as suspect. Therefore when restoring databases containing memory-optimized tables with durable data to different environments the target must have at least as much available RAM as the source.

This restriction only applies for databases containing memory-optimized tables with durable data. Other in memory object types and non durable memory-optimized tables are NOT affected.

Hash Indexes And Hash Buckets On In Memory Tables

There is no maintenance required for indexes on in memory tables; however, knowledge of one type of index, the HASH index, is useful to indicate potential performance problems and in extreme cases prevent out of memory (OOM) issues. Use of HASH indexes and their design is primarily of interest to the developer, but as memory allocation is involved and memory is finite, DBAs should be aware of how they work. Note: HASH indexes can exist on memory-optimized tables with durable and non durable data and on memory-optimized table types.

HASH buckets are specified when a HASH index is created. This type of index is extremely efficient for queries using '='. Specifying the correct number of buckets needs knowledge of the data uniqueness and the number of rows in the table. An example create statement is below.

CREATE TABLE [dbo].[badhashInMem]
(
       [PK] [int] IDENTITY(1,1) NOT NULL,
       [col1] [varchar](1000) COLLATE Latin1_General_CI_AS NOT NULL,
 PRIMARY KEY NONCLUSTERED HASH
(
       [PK]
) WITH ( BUCKET_COUNT = 16)
) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )

HASH buckets can be thought of as pigeon hole storage. If you had 26 pigeon holes marked A-Z you could sort mail by the first letter of the surname. Mr Cooper would go in the hole marked 'C', Bob Brown would go in the hole marked 'B', and so on. If you had 1000 mail items all for people with a surname beginning with 'B', the pigeon hole 'B' would contain 1000 mail items, and although you could locate pigeon hole 'B' quickly you would then spend a lot of time going through its contents. This is analogous to hash index row chaining.

At the other extreme, if you had only 26 mail items addressed to surnames spanning exactly A-Z, you would have only one mail item in each pigeon hole. This would be the perfect distribution for a hash index with a bucket count of 26.

It is rare that there will always be an optimal key value to bucket ratio. The guidance below from Books Online suggests that within certain boundaries over allocation of buckets (empty buckets) or row chaining is acceptable.

Empty buckets:

* 33% of total buckets is a good target value, but a larger percentage (even 90%) is usually fine.

* When the bucket count equals the number of distinct key values, approximately 33% of the buckets are empty.

* A value below 10% is too low.

Chains within buckets:

* An average chain length of 1 is ideal in case there are no duplicate index key values. Chain lengths up to 10 are usually acceptable.

* If the average chain length is greater than 10, and the empty bucket percent is greater than 10%, the data has so many duplicates that a hash index might not be the most appropriate type.
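The ~33% empty bucket figure can be sanity-checked with a quick simulation, assuming keys hash roughly uniformly (an assumption; the real hash function is internal to SQL Server):

```python
import random

def empty_bucket_fraction(n_keys, n_buckets, seed=0):
    """Hash n_keys distinct keys uniformly at random into n_buckets and
    return the fraction of buckets that end up empty."""
    rng = random.Random(seed)
    occupied = {rng.randrange(n_buckets) for _ in range(n_keys)}
    return 1 - len(occupied) / n_buckets

# With bucket count equal to the number of distinct keys, roughly a third
# of the buckets stay empty (the theoretical figure is 1/e, about 36.8%),
# which matches the Books Online guidance above.
```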

Database Administration stability perspective

Key point: The number of buckets does not restrict the size of the index or how many records it can contain. You can have a bucket count of 1024 and 1 million records, or a million buckets and 1024 records. Bucket numbers are there for optimisation and performance.

From a stability perspective ONLY, each bucket takes up 8 bytes of RAM. In an extreme situation, if buckets are hugely over allocated, this could cause an out of memory issue even if there were no rows in the underlying table. An under allocation of buckets may affect performance but would not take up as much memory.
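Since each bucket takes 8 bytes, the RAM used by the bucket array alone is easy to estimate. Note that SQL Server rounds the requested BUCKET_COUNT up to the next power of two, so the actual allocation can be nearly double the request; the helper below is an illustrative sketch:

```python
def hash_bucket_array_bytes(requested_buckets):
    """RAM used by a hash index's bucket array: 8 bytes per bucket, with
    the bucket count rounded up to the next power of two (as SQL Server
    does for BUCKET_COUNT)."""
    actual = 1
    while actual < requested_buckets:
        actual *= 2
    return actual * 8

# Over-allocation example: requesting a billion buckets rounds up to
# 2**30 buckets, i.e. 8GB of RAM before a single row is inserted.
```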

To view HASH index state the following query can be run

-- Run in the database containing in mem tables with hash indexes
SELECT t.name AS [TableName], i.name AS [IndexName],
       FLOOR((CAST(h.empty_bucket_count AS FLOAT) / h.total_bucket_count) * 100) AS [free_bucket_pct],
       IIF(FLOOR((CAST(h.empty_bucket_count AS FLOAT) / h.total_bucket_count) * 100) < 33,
           'Free buckets % is low! Increase buckets to prevent bucket sharing', 'OK') AS [Free buckets status],
       h.avg_chain_length,
       IIF(h.avg_chain_length > 10 AND FLOOR((CAST(h.empty_bucket_count AS FLOAT) / h.total_bucket_count) * 100) > 10,
           'avg_chain_length has many collisions! Index key columns are highly duplicated, not suitable for HASH index', 'OK') AS [avg_chain_length status]
FROM sys.dm_db_xtp_hash_index_stats h
INNER JOIN sys.indexes i ON h.object_id = i.object_id AND h.index_id = i.index_id
INNER JOIN sys.tables t ON h.object_id = t.object_id

An example query to alter the number of buckets on an index is below.

ALTER TABLE dbo.inmemory_tab1
ALTER INDEX inmemory_tab1_primaryKey
REBUILD WITH (BUCKET_COUNT = 123030)
GO

Configuration and Setup – Guidance For Stable Environments

It is crucial that there is sufficient RAM available to allow in memory to function. The following table provides guidance based on the percentage of RAM used by in memory objects.

Percentage of RAM required for in memory objects, and the action to take:

* Between 0 and 40% – Proceed with setting up the instance.

* Between 40 and 70% – Discuss with the application team the likely growth of the in memory objects when live. If you are confident that data levels will not increase beyond 70%, proceed. If required, arrange a meeting with database engineering and the application team before proceeding.

* Greater than 70% – Arrange a meeting with the application team and database engineering to discuss further. Do not proceed until signed off by stakeholders.

Administrating In Memory

In Memory Filegroup And Containers

The following section provides examples of how to create and modify an in memory filegroup and its containers. Note: once a filegroup is added it cannot be removed without dropping the database.

Create A New Database With An In Memory Filegroup And One Container

CREATE DATABASE [inmemDB]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'Data', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 102400KB ),
 FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT -- memory optimized filegroup
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED) -- container
 LOG ON
( NAME = N'Log', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.ldf' , SIZE = 102400KB , MAXSIZE = 2048GB , FILEGROWTH = 102400KB )

Note: The container line does not have a FILEGROWTH parameter as it is a directory. The MAXSIZE parameter can be set to a fixed figure, e.g. 10GB; in this example it is UNLIMITED.

Create A New Database With An In Memory Filegroup And Two Containers

CREATE DATABASE [inmemDB]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'Data', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 102400KB ),
 FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT -- memory optimized filegroup
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED), -- container 1
( NAME = N'InMemDB_inmem2', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem2' , MAXSIZE = UNLIMITED) -- container 2
 LOG ON
( NAME = N'Log', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.ldf' , SIZE = 102400KB , MAXSIZE = 2048GB , FILEGROWTH = 102400KB )

Add An In Memory Filegroup With One Container To An Existing Database

ALTER DATABASE [inmemDB] ADD FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA -- add filegroup
GO
ALTER DATABASE [inmemDB] ADD FILE
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED)
TO FILEGROUP [inmemDB_inmem] -- add container to filegroup
GO

Add Another Container To An Existing Database

Useful if an emergency container needs to be added on another volume because of space shortages on the primary volume. Note: if the database is in an AG, the path specified must exist on all the replicas.

ALTER DATABASE [inmemDB] ADD FILE
( NAME = N'InMemDB_inmem2', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem2' , MAXSIZE = UNLIMITED)
TO FILEGROUP [inmemDB_inmem] -- add container to filegroup
GO

Remove A Container From An In Memory Filegroup

Note: Once in memory objects have been added to a database the containers are initialised. From that point there must always be at least one container; the last one can never be removed. Any other container can only be removed if there is enough space in the remaining containers to absorb its data.

ALTER DATABASE [inmemDB] REMOVE FILE InMemDB_inmem2

Change The Maxsize Parameter On An Existing Container

Useful if a fixed size has been set and more space is required.

ALTER DATABASE [inmemDB] MODIFY FILE

( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED) --was 500MB

View How Much File Space Is In Use By Memory Optimized Filegroups

There are several methods to do this; suggestions are below.

Run sp_BlitzInMemoryOLTP:

This procedure has several parameters; two of the main options are below.

.EXAMPLE

    EXEC sp_BlitzInMemoryOLTP;

    — Get all In-memory information

.EXAMPLE

    EXEC sp_BlitzInMemoryOLTP @dbName = N’inmemDB’;

    — Get In-memory information for database with name inmemDB

This displays a lot of in memory information, including the space in use by containers.

Run a SQL Query:

The example queries below are for a database called inmemDB, substitute this name for the relevant database.

--this query shows the in memory filegroups

SELECT * FROM inmemDB.SYS.FILEGROUPS

-- this query gives the size of the files - for normal files - and the directory size for the in memory folder/container

-- note: it is listed as type FILESTREAM, which it is not, but it uses a similar method of storing data

SELECT (SIZE*8)/1024 [mb used],* FROM inmemDB.SYS.DATABASE_FILES

Look on the file system:

Use Explorer / DOS / PowerShell to see how much space is used by the containers.

Resource Pool Viewing , Creation And Modification

The following section provides instructions and examples for creating and modifying resource pools and for adding and removing databases to and from them.

Creating A Resource Pool

An example command to create and bind a resource pool is below.

CREATE RESOURCE POOL inmemDB_IM_RP WITH ( MIN_MEMORY_PERCENT = 6  ,MAX_MEMORY_PERCENT = 6  );

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

sp_xtp_bind_db_resource_pool [inmemDB],[inmemDB_IM_RP]

GO

Resource pools are sized as percentages of the memory available to the instance. Additionally, as mentioned in the section Resource Pools And In Memory Table Allocation, an overhead is required to ensure there is enough space for in memory tables.

Once you know how much memory to allocate, load this script into SSMS and alter these two parameters:

--enter GB figure of in memory requirements

declare @memoryNeeded NUMERIC(8,2) = 0.5

--enter Database Name

declare @db_name sysname = 'inmemDB'

Change @memoryNeeded to the required value in GB and @db_name to the name of the database with the in memory objects.

Run the script and go to the messages tab. This contains the create resource pool statement with the required percentages.

Important !!! Create the resource pool identically on all the AlwaysOn replicas to ensure there is adequate space if the database fails over.
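For reference, the conversion the sizing script performs can be sketched as below. This is a minimal illustration only, not the managed script itself; the 1.3 overhead multiplier is an assumed figure, so substitute the overhead described in Resource Pools And In Memory Table Allocation.

```sql
-- Sketch: convert a GB requirement into a resource pool percentage.
-- The 1.3 overhead multiplier is an illustrative assumption.
DECLARE @memoryNeeded NUMERIC(8,2) = 0.5; -- GB of in-memory data
DECLARE @targetKB BIGINT;

-- Total memory available to the instance, in KB
SELECT @targetKB = committed_target_kb FROM sys.dm_os_sys_info;

-- Required percentage, rounded up
SELECT CEILING((@memoryNeeded * 1.3 * 1024 * 1024 * 100) / @targetKB) AS [Suggested_Percent];
```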

Modifying A Resource Pool

An example command to modify a resource pool is below.

ALTER RESOURCE POOL inmemDB_IM_RP WITH ( MIN_MEMORY_PERCENT = 14 ,MAX_MEMORY_PERCENT = 14 );

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

If in memory objects require more space, the resource pool containing the database needs to be resized. As with creating a resource pool, sizing is specified as a percentage. This requires converting the size in GB to a percentage and, as mentioned in the section Resource Pools And In Memory Table Allocation, an overhead is required to ensure there is enough space for in memory tables.

Once you know how much memory to allocate, load this script into SSMS and alter these two parameters:

--enter GB figure of in memory requirements

declare @memoryNeeded NUMERIC(8,2) = 1.5

--enter Database Name

declare @db_name sysname = 'inmemDB'

Change @memoryNeeded to the required value in GB and @db_name to the name of the database with the in memory objects.

Run the script and go to the messages tab. This contains the alter resource pool statement with the required percentages.

Important !!! Resize the pool identically on all the AlwaysOn replicas to ensure there is adequate space if the database fails over.

Note: The database does NOT need to be rebound following a resize of the resource pool.

Viewing Resource Pool Usage

Pool size and usage can be viewed via T-SQL, a custom report from SSMS, or Windows Performance Monitor.

Note: In memory space usage will NOT show up until a database is fully bound to a resource pool. Similarly, free space in a resource pool is not reflected until a database is unbound AND set offline and online.

Note: If resource governor is disabled, pool max sizes are IGNORED and all pools will have full access to all the available target memory.

Check that resource governor is enabled by running this query

select * from sys.resource_governor_configuration

A value of 0 indicates it is DISABLED; to enable it, issue this command:

ALTER RESOURCE GOVERNOR RECONFIGURE;

Via TSQL

The following T-SQL script shows pool information and whether or not there is a database bound to the pool

SELECT p.name as 'Pool Name',

CASE WHEN d.name IS NULL THEN 'No Database Bound !' ELSE d.name END as Database_Name,

p.used_memory_kb/1024 as 'Space Used by In Memory Objects MB',

(p.target_memory_kb)/1024 as 'Pool_Size_MB',

(p.target_memory_kb-p.used_memory_kb)/1024 as 'Pool_Space_Available_MB',

CAST ( (CONVERT(DECIMAL,p.used_memory_kb)/(CONVERT(DECIMAL,p.target_memory_kb))*100) as decimal(18,2)) as 'Percent_Used'

FROM sys.databases d

right outer join sys.dm_resource_governor_resource_pools p

on p.pool_id = d.resource_pool_id

where p.name not in ('internal','default')

VIA Performance Monitor

Two main performance monitor counters are used:

SQLServer:Resource Pool Stats(<Resource Pool Name>)\Used memory (KB)

SQLServer:Resource Pool Stats(<Resource Pool Name>)\Target memory (KB)

These can be added to a data collector set to monitor pool usage over time.
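The same counters can also be read from T-SQL via the standard performance counter DMV, for example:

```sql
-- Read the resource pool counters from the DMV instead of Performance Monitor
SELECT object_name, counter_name, instance_name, cntr_value/1024 AS [Value MB]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Resource Pool Stats%'
AND counter_name IN ('Used memory (KB)', 'Target memory (KB)')
AND instance_name NOT IN ('internal','default')
```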

Binding A Database To A Resource Pool

A database needs to be bound to a resource pool; an example of how to do this is below.

sp_xtp_bind_db_resource_pool [OOM_DB],[OOM_DB_IM_RP]

Output:

A binding has been created. Take database 'OOM_DB' offline and then bring it back online to begin using resource pool 'OOM_DB_IM_RP'

An example statement to complete the binding is below

ALTER DATABASE OOM_DB SET OFFLINE

ALTER DATABASE OOM_DB SET ONLINE

GO

Note: If the database is participating in an AlwaysOn Availability group, this message is generated when an attempt is made to "SET OFFLINE":

Msg 1468, Level 16, State 1, Line 21

The operation cannot be performed on database "OOM_DB" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.

Msg 5069, Level 16, State 1, Line 21

ALTER DATABASE statement failed.

In this case, fail over and fail back the availability group containing the database to complete the binding.

Important !!! Remember to bind the database to the same resource pool on all the AlwaysOn replicas or you will have databases not bound to pools on failover.

Unbinding A Database From A Resource Pool

An example of how to unbind a database from a resource pool is below.

sp_xtp_unbind_db_resource_pool [OOM_DB]

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

Although the database is now unbound the resource pool will still show resources in use. To update this take the database OFFLINE and ONLINE.

An example statement to reset the resource pool usage is below

ALTER DATABASE OOM_DB SET OFFLINE

ALTER DATABASE OOM_DB SET ONLINE

GO

Note: If the database is participating in an AlwaysOn Availability group, this message is generated when an attempt is made to "SET OFFLINE":

Msg 1468, Level 16, State 1, Line 21

The operation cannot be performed on database "OOM_DB" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.

Msg 5069, Level 16, State 1, Line 21

ALTER DATABASE statement failed.

In this case, fail over and fail back the availability group containing the database to complete the update to the resource pool usage.

Important !!! Remember to unbind the database from the same resource pool on all the AlwaysOn replicas or there will be databases bound to incorrect pools on failover.

Removing A Resource Pool

Removing a resource pool returns its memory reservation to the default pool.

Before removing a resource pool the databases bound to it must be unbound, see above.

An example of how to remove a resource pool is below.

USE [master]

GO

DROP RESOURCE POOL [OOM_DB_IM_RP]

GO

USE [master]

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

Important !!! Remember to remove unwanted resource pools from all the AlwaysOn replicas so all resources are available on all replicas.

In Memory Databases

Identify in memory databases, bound pools and space usage

The query below can be used to identify which in memory databases exist, which pool they are bound to, and details of the size and used space. It also provides remarks on correct pool name assignment.

SELECT d.name [in_memory_database],

CASE WHEN p.name IS NULL THEN 'Not in Resource Pool !' ELSE p.name END [Resource Pool Name],

p.used_memory_kb/1024 [Pool Used MB],p.target_memory_kb/1024 [Pool Size MB],

pc.cntr_value/1024 [Database IM Size MB],

CASE WHEN p.name <> RTRIM(pc.instance_name)+'_IM_RP' THEN 'Database bound to incorrect pool name' ELSE

       CASE WHEN p.name IS NULL THEN 'Database NOT bound to any resource pool' ELSE 'OK' END

END [Notes]

from sys.databases d

inner join sys.dm_os_performance_counters pc

on d.name = pc.instance_name

left outer join sys.dm_resource_governor_resource_pools p

on d.resource_pool_id = p.pool_id

where pc.counter_name = 'XTP Memory Used (KB)'

and pc.instance_name <> '_Total' and pc.cntr_value > 0

View Total Server RAM and Pool Use

The following query can be used to determine how much space is allocated to resource pools for in memory objects and available RAM for other SQL processes.

select si.total_space_for_sql/1024 [Total_Instance_RAM_MB] ,rp.total_space_for_pools/1024 [Pool_Reservation_MB],

(si.total_space_for_sql-rp.total_space_for_pools)/1024 [Space_Left_For_SQL_MB],

CAST ( (CONVERT(DECIMAL,rp.total_space_for_pools)/(CONVERT(DECIMAL,si.total_space_for_sql))*100) as decimal(18,2)) as 'Percent_Used_By_Resource_Pools'

from

(

select sum(target_memory_kb) [total_space_for_pools],1 [link] from sys.dm_resource_governor_resource_pools where name not in ('internal','default')

) as rp

inner join

(

select committed_target_kb [total_space_for_sql],1 [link] from sys.dm_os_sys_info

) as si

on rp.link = si.link

In Memory Troubleshooting

Overview

From a stability perspective, in memory troubleshooting is concerned with two types of issue:

– In memory databases running out of RAM

– In memory databases running out of disk space

RAM

Memory-optimized tables with durable data require special attention because they persist data. After an instance restart, an attempt is made to reload all their data back into RAM as the database starts up. Therefore, if the database stopped before the restart because RAM was full, it will fill up again and not come online unless more memory is made available, either by adding physical RAM or by stopping other processes to free memory.

Memory-optimized tables with non durable data and memory-optimized table types (similar to traditional temporary tables) do release RAM on restart.

Resource pools allow some degree of control as they can be resized without requiring an instance outage; however, this only provides temporary remediation because pools are limited by the RAM available to the instance. Frequent upward resizing of resource pools should trigger a discussion with the application team to either reduce the data size of the in memory objects or plan to upgrade the server.
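To see how much RAM the in-memory engine is currently consuming, the XTP memory clerks can be queried; a quick check is sketched below.

```sql
-- RAM currently consumed by in-memory OLTP, per memory clerk
SELECT type, name, pages_kb/1024 AS [Used MB]
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%XTP%'
ORDER BY pages_kb DESC
```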

Disk

Memory-optimized tables, both durable and non durable, and memory-optimized table types all use disk space. Non durable tables and table types theoretically should not, but in testing, databases containing only table types allocated around 2GB each. If the in memory database contains only non durable tables or table types, it is recommended to set the filegroup containers to 20GB; similar to resource pools, this allows early warning via the monitoring and allows resizing to remediate. For databases containing durable data it is recommended to set the filegroup containers to unlimited as they can grow rapidly. Monitoring is configured to alert when available disk space is getting low. In memory containers can be added on other drives in an emergency; however, long term the server will need additional space allocated, or the application team need to reduce the size of the in memory data.

Error 41822 – There is insufficient disk space to generate checkpoint files

Full message:

There is insufficient disk space to generate checkpoint files and as a result the database is in delete-only mode. In this mode, only deletes and drops are allowed.

There are two reasons for this error.

  1. Filegroup container maxsize value has been reached
  2. The volume(s) the filegroup containers reside on are out of disk space

Remediation for reason 1

Option 1 - delete some data

If the database is still available, data can be deleted; the application team must do this.

After the records have been deleted, the DBA should issue the following commands to take the database out of delete-only mode. Swap the database name "inmemDB" for the name of the database with the problem.

CHECKPOINT

backup log inmemDB to disk = 'nul:'

Option 2 - Expand the filegroup or add another container.

To attempt a fix, filegroup and container information is required. If the database is in a suspect / unavailable state, this information can be obtained as follows.

--if the database is suspect the filegroup name and the current container names and paths will be unavailable

--container names and paths are available via this query

select db_name(database_id) [DB Name],name,

physical_name,type_desc from master.sys.master_files where type_desc = 'FILESTREAM'

Note the name and physical_name values returned. If the max_size value is -1 this indicates unlimited growth; in testing the max_size figure is not an accurate value.

To obtain the filegroup name an example has been provided below

--the filegroup name can be identified by the following methods

--restore the filelist from an offline backup file

restore filelistonly from disk = 'c:\temp\inmemDB.dmp'

Alternatively, go to a replica if available and run this in the database:

select * from sys.filegroups

If the filegroup maxsize has been reached, set the maxsize to a larger value or to unlimited. The following example sets the container to unlimited.

ALTER DATABASE [inmemDB] MODIFY FILE

( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED)

If the volume has run out of space, another container can be created on a volume with space; note this volume must exist on the other replicas if the database is in an availability group. The following example adds a container with maxsize set to unlimited.

ALTER DATABASE [inmemDB] ADD FILE

( NAME = N'InMemDB_inmem2', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem2' , MAXSIZE = UNLIMITED)

TO FILEGROUP [inmemDB_inmem] -- add container to filegroup

GO

The next step depends on the database mode.

If the database is in suspect mode

After remediation, if the databases are suspect and NOT in an availability group, set the databases OFFLINE and ONLINE. Example below.

--if the database is in suspect mode attempt to bring online with these commands

ALTER DATABASE inmemDB SET OFFLINE;

ALTER DATABASE inmemDB SET ONLINE;

If the databases are in an availability group perform an availability group failover.

If the database is in delete-only mode

Issue the following commands to take the database out of delete-only mode. Swap the database name "inmemDB" for the name of the database with the problem.

CHECKPOINT

backup log inmemDB to disk = 'nul:'

Error 41805 – There is insufficient memory in the resource pool

Full message:

There is insufficient memory in the resource pool '%ls' to run this operation on memory-optimized tables. See 'http://go.microsoft.com/fwlink/?LinkID=614951' for more information.

The SQL errorlog may also contain messages similar to this

Disallowing page allocations for database 'inmemDB' due to insufficient memory in the resource pool 'inmemDB_IM_RP'. See 'http://go.microsoft.com/fwlink/?LinkId=510837' for more information.

This error occurs when the in memory databases bound to a resource pool use all the allocated RAM. It can also occur when the resource governor is disabled.

Remediation

Check that resource governor is enabled by running this query

select * from sys.resource_governor_configuration

A value of 0 indicates it is DISABLED; to enable it, issue this command:

ALTER RESOURCE GOVERNOR RECONFIGURE;

Identify the size of the pool containing the database; see the section Identify in memory databases, bound pools and space usage for details.

Modify the resource pool; see the section Modifying A Resource Pool for details.

After remediation, if the databases are suspect and NOT in an availability group, set the databases OFFLINE and ONLINE. Example below.

--if the database is in suspect mode attempt to bring online with these commands

ALTER DATABASE inmemDB SET OFFLINE;

ALTER DATABASE inmemDB SET ONLINE;

If the databases are in an availability group perform an availability group failover.

Checkpoint Files Filling Up Disk Space Log Not Shrinking

On disk checkpoint files will grow as in memory objects in RAM grow. This is expected behavior. In certain rare circumstances checkpoint files will grow and the database transaction log will not truncate even when no additional in memory data is added. This is due to the requirement for transaction log backups to occur to allow the in memory checkpoint process.

Remediation

Take a full backup of the database. Backup the transaction log. Manually issue a CHECKPOINT command.
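As a sketch, the remediation sequence looks like this (the database name and backup paths are illustrative, not site standards):

```sql
-- Full backup, then log backup, then a manual checkpoint so the
-- in-memory checkpoint process can release checkpoint files and log space
BACKUP DATABASE inmemDB TO DISK = 'G:\Sqlbackup\inmemDB_full.bak'
BACKUP LOG inmemDB TO DISK = 'G:\Sqlbackup\inmemDB_log.trn'
CHECKPOINT
```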

This condition is described fully in this link.

https://techcommunity.microsoft.com/t5/datacat/checkpoint-process-for-memory-optimized-tables-in-sql-2016-and/ba-p/305289

Re Sync SQL Server Logins with Users

--First, make sure that this is the problem. This lists the orphaned users:

EXEC sp_change_users_login 'Report'

--If you already have a login id and password for this user, fix it by doing:

EXEC sp_change_users_login 'Auto_Fix', 'user'

--If you want to create a new login id and password for this user, fix it by doing:

EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'

To fix all orphaned users run:

USE  -- change db name for which you want to fix orphan users issue
GO

declare @name varchar(150)
DECLARE cur CURSOR FOR
select name from master..syslogins
Open cur
FETCH NEXT FROM cur into @name
WHILE @@FETCH_STATUS = 0
BEGIN

EXEC sp_change_users_login 'AUTO_FIX', @name

FETCH NEXT FROM cur into @name

END

CLOSE cur
DEALLOCATE cur

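Note that sp_change_users_login is deprecated on newer SQL Server versions; the documented replacement maps a user back to its login with ALTER USER, for example (names here are placeholders):

```sql
-- Modern replacement for sp_change_users_login 'Update_One'
ALTER USER [SomeUser] WITH LOGIN = [SomeUser];
```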

Sybase – Assign Read to all objects in database

extract script (extract.sql):
select 'grant select on ' + name + ' to READ-ROLE_RO' from sysobjects where type in ( 'U', 'V')
go

Command to create set_perm.sql script:
isql -Udbaxx_operxx -SSYBSERV11 -P* -DDB1 -iextract.sql -oset_perm.sql

--Group creation script READ-ROLE_RO.sql which adds the group to relevant users
sp_addgroup READ-ROLE_RO
sp_changegroup READ-ROLE_RO, bichdanxx
sp_changegroup READ-ROLE_RO, rattparxx
sp_changegroup READ-ROLE_RO, ramasatxx
sp_changegroup READ-ROLE_RO, ramedilxx
sp_changegroup READ-ROLE_RO, singchadxx
sp_changegroup READ-ROLE_RO, rathharxx

isql -Udbaxx_operxx -SSYBSERV11 -P -iREAD-ROLE_RO.sql -DDB1

Command to execute set_perm script to set read permissions for all tables and views to that group:
isql -Udbaxx_operxx -SSYBSERV11 -P* -iset_perm.sql

Extract User Info in Database

Script to extract user permissions info from a database prior to migration

--Extract db users
-- before running this script, you should fix orphaned users first
SELECT name
,type_desc
,'IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = ''' + name + ''')
Exec sp_grantdbaccess '''+suser_sname(sid)+''', '''+name +'''' [Command to Add DB User],
sid
from sys.database_principals
where principal_id>4 and type in('S', 'U' , 'G')
AND suser_sname(sid) IS NOT NULL -- this is just a check in case there are orphaned users

-- Extract user roles in database
SELECT db_name() [DatabaseName]
,name [RoleName]
,type_desc [RoleType]
,'IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = ''' + name + '''
and type = ''R'')
Exec sp_addRole ''' + name +'''' [Command To Create Role]
from sys.database_principals where type in('R','A')
and name<>'public' and is_fixed_role<>1

--Extract database role membership
select user_name(DRM.member_principal_id) [DatabaseUser]
,user_name(DRM.role_principal_id) [DatabaseRole]
,DP.type_desc as [UserType]
,'Exec sp_addrolemember '''+ user_name(DRM.role_principal_id)+ ''',' + '''' + user_name(DRM.member_principal_id)+'''' [Command To Add Role Members]
from sys.database_role_members DRM
inner join sys.database_principals DP on DRM.member_principal_id=DP.principal_id
where DRM.member_principal_id>1
Order by DatabaseUser

-- Extract Individual Object Permissions
select state_desc + ' ' + permission_name + ' ON [' + SCHEMA_NAME(SO.schema_id) + '].[' + OBJECT_NAME(DP.major_id)
+'] TO [' + USER_NAME(DP.grantee_principal_id) + ']' [Command to add Special Permissions]
from sys.database_permissions DP
INNER JOIN sys.database_principals DPS
ON DP.grantee_principal_id=DPS.principal_id
Inner Join sys.objects SO ON SO.object_id=DP.major_id
where DPS.name not in ('public','Guest')

Rollback time to complete

-- Query to find how long a rollback will take to complete
SELECT session_id as SPID, command, start_time, getdate() as CurrentTime,
percent_complete, (estimated_completion_time/1000)/60 as minutes_remaining,
dateadd(second,(estimated_completion_time/1000), getdate()) as estimated_completion_time
FROM sys.dm_exec_requests where percent_complete > 0
--and session_id=158
go

Mirroring Tips and Tricks

How to bring up a mirror database after a failure of the principal (Prod)

If you have a witness server then this should happen automatically, but if there is no witness server then you need to bring up the mirror manually with the following commands:

ALTER DATABASE dbname SET PARTNER OFF
RESTORE DATABASE dbname WITH RECOVERY

If for some reason you are running with Safety Off (High Performance) then some transactions may be lost, so you have to acknowledge that when bringing up the database:

ALTER DATABASE dbname SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

Failing Over

  • This is as straightforward as clicking properties on the principal mirrored database, then clicking Mirroring and then clicking Failover.
  • One thing to look out for is databases running in High Performance mode; these can't be failed over, so you will need to convert them to high safety (select high safety then click OK to save) before you fail them over. Later they can be turned back to high performance.
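The same mode switch and failover can also be done in T-SQL if preferred (run on the principal; dbname is a placeholder):

```sql
-- Convert a high-performance mirror to high safety (synchronous)
ALTER DATABASE dbname SET PARTNER SAFETY FULL;
-- Manual failover (only allowed in high safety mode)
ALTER DATABASE dbname SET PARTNER FAILOVER;
-- Optionally return to high performance afterwards
ALTER DATABASE dbname SET PARTNER SAFETY OFF;
```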

Set up Mirror Endpoints

Set up Endpoints for Database Mirroring

Create Certificate and end-points (On Principal Instance):

use master;
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@55wordxx';
GO

CREATE CERTIFICATE principal_cert WITH SUBJECT = 'Principal Server Certificate for database mirroring';
GO

CREATE ENDPOINT Endpoint_Mirroring STATE = STARTED
AS TCP (LISTENER_PORT = 51200, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING ( AUTHENTICATION = CERTIFICATE principal_cert, ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL);
GO

BACKUP CERTIFICATE principal_cert TO FILE = 'G:\Sqlbackup\Mirroring_certxx_P.cer'
GO

Create Certificate and end-points (On Mirror Instance):

USE master
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@55wordxx';
GO

CREATE CERTIFICATE Mirroring_cert WITH SUBJECT = 'Mirror Server Certificate for database mirroring';
GO

CREATE ENDPOINT Endpoint_Mirroring STATE = STARTED
AS TCP (LISTENER_PORT = 51200, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING ( AUTHENTICATION = CERTIFICATE Mirroring_cert, ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL );
GO

BACKUP CERTIFICATE Mirroring_cert
TO FILE = 'G:\Sqlbackup\Mirroring_cert_M.cer'
GO

Create User and Associate Certificate (On Principal Instance):

At this stage we need to exchange certificates between the instances. Manually copy the principal instance certificate to the mirror server and the mirror instance certificate to the principal server.

USE MASTER
GO

CREATE LOGIN ##MirrorProxy## WITH PASSWORD = 'P@55wordxx';
GO

CREATE USER ##MirrorProxy## FOR LOGIN ##MirrorProxy##;
GO

CREATE CERTIFICATE Mirroring_cert
AUTHORIZATION ##MirrorProxy##
FROM FILE = 'G:\Sqlbackup\Mirroring_cert_M.cer';
GO

GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [##MirrorProxy##];
GO

Create User and Associate Certificate ( On Mirror Instance):
USE MASTER
GO
CREATE LOGIN ##MirrorProxy## WITH PASSWORD = 'P@55wordxx';
go

CREATE USER ##MirrorProxy## FOR LOGIN ##MirrorProxy##;
go

CREATE CERTIFICATE principal_cert
AUTHORIZATION ##MirrorProxy##
FROM FILE = 'G:\Sqlbackup\Mirroring_cert_P.cer';
Go

GRANT CONNECT ON Endpoint::Endpoint_Mirroring TO [##MirrorProxy##];
go

Also refer to https://tecadmin.net/setup-database-mirroring-sql-server/

Steps to Set up Mirroring

Steps to set up mirroring

Prerequisite: Make sure endpoints are set up

CREATE ENDPOINT endpoint_mirroring
STATE = STARTED
AS TCP ( LISTENER_PORT = 51200 )
FOR DATABASE_MIRRORING (ROLE=PARTNER);
GO
Run this on both the primary and the mirror.

1) Alter the log backup job, DBM.LogBackup, to ignore the database temporarily; do this by editing the job and adding the dbname to @Databases e.g. = 'ALL_DATABASES, -ETWeb11SRBank'

2) Make sure that the database is in full recovery mode and take a full backup and a log backup.

3) Copy the full and log backups over to the mirror server, e.g.
FF2SQLETMIR01 (\\Ff2sqletmir01\mir)

4) Restore the backup (full and log) with norecovery using the following sql; this will set them as mirrored databases:
RESTORE DATABASE [DBSYSSPK]
FROM DISK = 'G:\Sqlbackup\DBSYSSPK_db_20180804_174400_1_of_1.bak'
WITH NORECOVERY
go
RESTORE LOG [DBSYSSPK]
FROM DISK = 'G:\Sqlbackup\DBSYSSPK_tlog_20180804_180101.bak'
WITH NORECOVERY
go
ALTER DATABASE [DBSYSSPK]
SET PARTNER = 'TCP://JKTINDBP0001.id.db.com:51200'
go

5) On the server where you are mirroring from run the following sql:
ALTER DATABASE [DBSYSSPK]
SET PARTNER = 'TCP://JKTINDBB0001.ID.DB.COM:51200'
go

6) The final step is to edit job DBM.LogBackup on FF1SQLETW01 and take out the database you added in step (1) e.g. ", -ETWeb11SRBank".

Mirroring Issues

Mirroring issues

If Mirroring stops working between 2 endpoints then there are a few things you can try before you rebuild the mirroring.

One quick thing to try is to stop and start the mirroring endpoint:

select * from sys.endpoints

--To stop an endpoint:

ALTER ENDPOINT dbMirror STATE = STOPPED

--To start an endpoint:

ALTER ENDPOINT dbMirror STATE = STARTED

If all that fails then you will need to rebuild mirroring: make a note of the mirroring connections, remove mirroring, take a database dump and tran log dump, copy them over, load them up and set up mirroring again. For further details refer to "Steps to set up mirroring".

Mirroring Check

--SQL to check status of mirroring:

SELECT
DB_NAME(database_id) As DatabaseName,
CASE WHEN mirroring_guid IS NOT NULL THEN 'Mirroring is On' ELSE 'No mirror configured' END AS IsMirrorOn,
CASE WHEN mirroring_safety_level=1 THEN 'High Performance' WHEN mirroring_safety_level=2 THEN 'High Safety' ELSE NULL END AS MirrorSafety,
mirroring_state_desc,
mirroring_role_desc,
mirroring_partner_name AS MirrorServer,
CASE WHEN mirroring_witness_name IS NOT NULL THEN mirroring_witness_name ELSE 'NULL' END AS [WitnessServer],
mirroring_witness_state_desc
FROM sys.database_mirroring
where mirroring_guid IS NOT NULL
GO

Migration Login and User steps

To copy over the Password for a login do the folling:

  • On the source instance run:
    SELECT convert(varbinary(256),password) as password FROM master..syslogins
    WHERE name = ;
  • On the destination instance run:
    CREATE LOGIN WITH PASSWORD = HASHED;

To remove users you may need to remove old schemas first, and before that you need to check that no objects belong to them:

  • First just try to drop the user normally
  • You may well get an error saying the user owns objects or schemas in the database
  • To check what schema the user owns run:
    SELECT name FROM sys.schemas WHERE principal_id = USER_ID('');
  • Next check which objects are attached to this schema with:
    SELECT *
    FROM sys.objects
    WHERE schema_id = SCHEMA_ID('');
  • You will now be able to drop the schema and then the user from the database
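The schema step can be handled by transferring ownership before dropping the user; a minimal sketch (the schema and user names are placeholders):

```sql
-- Transfer the schema to dbo so the user no longer owns it, then drop the user
ALTER AUTHORIZATION ON SCHEMA::[SomeSchema] TO [dbo];
DROP USER [SomeUser];
```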

To realign sysusers with their logins run:

 EXEC sp_change_users_login 'update_one', 'WFMUser', 'WFMUser';